diff --git a/.gitattributes b/.gitattributes index 5abe174604971d0960afe0afd98734150c2acb5f..add301842b3368657f174f6e06b890d02d2f6fb0 100644 --- a/.gitattributes +++ b/.gitattributes @@ -230,3 +230,4 @@ data_all_eng_slimpj/shuffled/split/split_finalac/part-04.finalac filter=lfs diff data_all_eng_slimpj/shuffled/split/split_finalac/part-08.finalac filter=lfs diff=lfs merge=lfs -text data_all_eng_slimpj/shuffled/split/split_finalaa/part-10.finalaa filter=lfs diff=lfs merge=lfs -text data_all_eng_slimpj/shuffled/split/split_finalaa/part-04.finalaa filter=lfs diff=lfs merge=lfs -text +data_all_eng_slimpj/shuffled/split/split_finalad/part-14.finalad filter=lfs diff=lfs merge=lfs -text diff --git a/data_all_eng_slimpj/shuffled/split/split_finalad/part-14.finalad b/data_all_eng_slimpj/shuffled/split/split_finalad/part-14.finalad new file mode 100644 index 0000000000000000000000000000000000000000..af66f09a49aec741398750629b61331c2832957f --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split/split_finalad/part-14.finalad @@ -0,0 +1,3 @@ +version https://git-lfs.github.com/spec/v1 +oid sha256:a7dca82833b63f255322affe6d0b3034b5778bc85a18effeb932c3fc5adafc2a +size 12576683824 diff --git a/data_all_eng_slimpj/shuffled/split2/finalzqhm b/data_all_eng_slimpj/shuffled/split2/finalzqhm new file mode 100644 index 0000000000000000000000000000000000000000..36e03e6ea11e3ab814e73684d15a0a5c8cde7955 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzqhm @@ -0,0 +1,5 @@ +{"text":"\\section{Background}\n\nAn $m$-by-$n$ matrix $A$ is \\textit{totally positive} (TP) or \\textit{totally nonnegative} (TN) if all of its minors (determinants of square submatrices) are positive or nonnegative, respectively. Na\\\"ively, a matrix has exponentially many minors that must be checked to verify it is TP or TN. However, due to Fekete's Theorem, only the \\textit{contiguous} minors, that is, minors whose row and column index sets are consecutive, must be checked to verify total positivity. Even better, only the \\textit{initial minors} (contiguous minors whose row or column set includes 1) need to be checked \\cite{totallynonnegative}. The number of initial minors in an $m$-by-$n$ matrix is precisely $mn$, and calculating determinants may be done in $O(n^{2.373})$ time, so this means there is a polynomial time algorithm for checking total positivity \\cite{detcomplexitynew}. In Section~\\ref{dodgson}, we demonstrate a simple cubic-time algorithm to check total positivity based on Dodgson condensation.\n\nA \\textit{partial matrix} is one in which some entries are unspecified and free to be chosen. A \\textit{completion} of a partial matrix is a choice of values for the unspecified entries, resulting in a conventional matrix of the same size. The \\textit{TP completion problem} asks for which partial matrices there exists a completion that is TP \\cite{totallynonnegative}. A partial matrix is \\textit{partial TP} if all of its minors that consist only of specified entries are positive. Clearly a necessary condition for there to be a TP completion of a partial matrix is that the partial matrix is partial TP. This is not sufficient in general. In Section~\\ref{main}, we prove that determining if a partial matrix is partial TP lies in the complexity class co-NP-complete, in stark contrast to the case for conventional matrices. 
We actually prove a more general result for strongly and weakly sign regular matrices that includes checking partial TP and TN as special cases.\n\nFinally, in Section~\\ref{algs}, we provide an algorithm for checking partial TP which runs in exponential time in the number of unspecified entries and polynomial time in the size of the partial matrix. This generalizes to any matrix property which is inherited by all submatrices.\n\nOur principal observation about the complexity of checking partial TP is clearly relevant to the solution of TP completion problems.\n\n\\section{TP in Cubic Time}\n\\label{dodgson}\n\nDodgson condensation is a method for computing the determinant of a matrix by repeatedly applying the Desnanot-Jacobi-Sylvester identity\n\\[ \\det(M)=\\frac{\\det(M_1^1)\\det(M_n^n)-\\det(M_1^n)\\det(M_n^1)}{\\det(M_{1,n}^{1,n})}, \\]\nin which $M$ is an $n$-by-$n$ matrix and $M_{a_1,a_2,...}^{b_1,b_2,...}$ denotes the submatrix obtained by deleting rows $a_1,a_2,...$ and columns $b_1,b_2,...$. Dodgson condensation calculates the determinants of ever-larger contiguous minors until the determinant of the whole matrix is found, as follows:\n\\begin{enumerate}\n \\item Set $A\\gets M$.\n \\item Let matrix $B$ be $(n-1)$-by-$(n-1)$ with entries $b_{i,j}=a_{i,j}a_{i+1,j+1}-a_{i,j+1}a_{i+1,j}$. Evidently, the entries of $B$ are the contiguous $2$-by-$2$ minors of $M$.\n \\item Let matrix $C$ be $(n-2)$-by-$(n-2)$ with entries $c_{i,j}=(b_{i,j}b_{i+1,j+1}-b_{i,j+1}b_{i+1,j})\/a_{i+1,j+1}$. In light of the identity mentioned above, the entries of $C$ are the contiguous $3$-by-$3$ minors of $M$.\n \\item Set $A\\gets B$ and $B\\gets C$, and repeat step 3 to obtain a new matrix $C$ of dimension one less than before. This matrix consists of the contiguous minors of $M$ one size greater than before. Do this until the resulting matrix is $1$-by-$1$. Its entry is the determinant of the original matrix $M$.\n\\end{enumerate}\n\nIf ever an interior entry of the previous matrix is zero, step 2 is ill-defined. However, note that we compute every contiguous minor of the matrix at some point in the algorithm, so if any entry of any of the matrices is nonpositive, we can immediately conclude the original matrix was not TP. If instead the above algorithm terminates with all entries encountered positive, then all contiguous minors of the original matrix were positive, and it is TP. It is clear that we may apply the same algorithm on non-square matrices, stopping when we obtain a matrix with either only one row or only one column.\n\nAnalysis of the complexity of this algorithm is straightforward. To get the contiguous $(k+1)$-by-$(k+1)$ minors from the contiguous $k$-by-$k$ minors, this algorithm performs two multiplications, one subtraction, and one division per $(k+1)$-by-$(k+1)$ minor. If the matrix is $m$-by-$n$ with $m\\le n$, there are $(m-k)(n-k)$ such minors, and the largest minors are $m$-by-$m$ (so the largest value of $k$ is $m-1$). Therefore the asymptotic runtime is given by\n\\[ \\sum_{k=1}^{m-1}(m-k)(n-k)c, \\]\nwith $c$ the cost of performing the arithmetic operations. This sum is $(3m^2n-m^3-3mn+m)\\cdot c\/6$, which is $O(m^2n)$. For square matrices this is cubic time.\n\nNote there is no general analogue to Fekete's Theorem for total nonnegativity, so the cubic time algorithm presented here cannot be straightforwardly adapted to get a cubic time algorithm for determining TN. 
There are non-TN matrices where all contiguous minors have nonnegative determinant, such as the one below:\n\\[ \\begin{bmatrix}\n1 & 0 & 1 \\\\\n1 & 0 & 0\n\\end{bmatrix}. \\]\n\n\\section{Partial TP is Co-NP-complete}\n\\label{main}\n\nA matrix is called \\textit{weakly sign regular} (WSR) with signature $(e_i)$, $e_i\\in\\{-1,1\\}$ for all $i$, if all of its $k$-by-$k$ minors $A$ satisfy $e_kA \\ge 0$ for all $k$. If the minors satisfy $e_kA > 0$ for all $k$, the matrix is \\textit{strictly sign regular} (SSR). Partial matrices are partial WSR if their $k$-by-$k$ specified minors satisfy $e_kA \\ge 0$ for all $k$ and similarly for partial SSR. Note that TP (TN) is the special case of SSR (WSR) with $e_i=1$ for all $i$.\n\nA \\textit{biclique} is an induced subgraph of a bipartite graph that is a complete bipartite graph; a biclique is \\textit{balanced} if it has the same number of vertices in both parts. The \\textsc{balanced biclique} problem asks if there is a balanced biclique in a given bipartite graph with at least $2k$ for given integer $k$. The \\textsc{balanced biclique} problem is shown to be NP-complete in \\cite{npcomplete} by a transformation from the \\textsc{clique} problem.\n\nWe reduce the \\textsc{balanced biclique} problem to instances of the complement of \\textsc{partial $(e_i)$-SSR} and \\textsc{partial $(e_i)$-WSR}, the problems of checking if a given partial matrix is partial SSR\/WSR with a fixed signature $(e_i)$. This shows that \\textsc{partial $(e_i)$-SSR} and \\textsc{partial $(e_i)$-WSR} are co-NP-hard. It is easy to show they are also co-NP, so they are co-NP-complete. As special cases, both \\textsc{partial TN} and \\textsc{partial TP} are also co-NP-complete.\n\nFirst, we recall a known fact that may be found in \\cite{ando} and elsewhere. It may be proven inductively by ``bordering'' \\cite{totallynonnegative}.\n\n\\begin{lemma}\n\\label{lem}\nFor any positive integers $m$ and $n$ and signature $(e_i)$, there exists an $m$-by-$n$ SSR (WSR) matrix with signature $(e_i)$.\n\\end{lemma}\n\nWe then have our main theorem:\n\n\\begin{theorem}\nBoth \\textsc{partial $(e_i)$-SSR} and \\textsc{partial $(e_i)$-WSR} are co-NP-complete for any signature $(e_i)$.\n\\begin{proof}\n\\textbf{Co-NP-hard:} Given a bipartite graph $G$ with parts $U=\\{u_1,u_2,\\dots,u_m\\}$ and $V=\\{v_1,v_2,\\dots,v_n\\}$ and an integer $k$, we construct a particular $m$-by-$n$ partial matrix $M$. First, let $X$ be an $m$-by-$n$ SSR matrix with signature $(f_i)=(e_1,e_2,\\dots, e_{k-1},-e_k,-e_{k+1},\\dots)$. If $\\{v_i,u_j\\}$ is an edge of $G$, let $m_{ij}$ be specified and equal to $x_{ij}$; otherwise let $m_{ij}$ be unspecified.\n\nNow $G$ contains a balanced biclique of size $k$ or greater if and only if $M$ has a $k$-by-$k$ specified minor $A$. This minor, if it exists, satisfies $f_kA>0$. This means $M$ is \\textit{not} partial SSR (or WSR) with signature $(e_i)$, since it has a $k$-by-$k$ minor with $-e_kA\\not>0$ ($-e_kA\\not\\ge0$). 
Thus, \\textsc{partial $(e_i)$-SSR (WSR)} is co-NP-hard.\n\n\\textbf{Co-NP:} Given a partial matrix $M$ which is not partial SSR (WSR) with a given signature $(e_i)$, one can provide a witness $k$-by-$k$ minor $A$ that can be computed in polynomial time which does not satisfy $e_kA>0$ ($e_kA\\ge0$), so \\textsc{partial $(e_i)$-SSR (WSR)} is co-NP.\n\nWe conclude \\textsc{partial $(e_i)$-SSR (WSR)} is co-NP-complete.\n\\end{proof}\n\\end{theorem}\n\nOf interest is a special case of this theorem:\n\n\\begin{cor}\nBoth \\textsc{partial TP} and \\textsc{partial TN} are co-NP-complete.\n\\begin{proof}\nThe \\textsc{partial TP} problem is precisely the \\textsc{partial $(1,1,\\dots)$-SSR} problem, and the \\textsc{partial TN} problem is precisely the \\textsc{partial $(1,1,\\dots)$-WSR} problem.\n\\end{proof}\n\\end{cor}\n\n\\section{Algorithms for Checking Partial TP}\n\\label{algs}\n\nConsider a partial matrix with one unspecified entry at the $(i,j)$ position. Any submatrix with only specified entries that includes row $i$ cannot include column $j$ due to this unspecified entry, and similarly if it includes column $j$ then it cannot include row $i$. This means to check if this partial matrix is partial TP, it is sufficient to check if the two conventional matrices formed by removing row $i$ or column $j$ are TP. These matrices are denoted $M_i$ and $M^j$, following the notation in Section~\\ref{dodgson}.\n\nThis generalizes to the case of general partial matrices:\n\n\\begin{theorem}\nIf a partial matrix $M$ has an unspecified entry at the $(i,j)$ position, then it is partial TP if and only if the two partial matrices $M_i$ and $M^j$ are partial TP.\n\\begin{proof}\nAny fully specified square submatrix of $M$ must lie entirely within $M_i$ or $M^j$ (or both), since it cannot include the $(i,j)$ entry. Hence if both of these partial matrices are partial TP, any specified square submatrix of $M$ has positive determinant, so $M$ is partial TP. The converse is trivial.\n\\end{proof}\n\\end{theorem}\n\nThis immediately gives a recursive algorithm to check partial TP:\n\\begin{enumerate}\n \\item If the partial matrix $M$ has no unspecified entries, then check if it is TP using the cubic time algorithm presented previously.\n \\item Otherwise, say the $(i,j)$ entry is unspecified. Recursively check $M_i$ or $M^j$ for partial TP; the original matrix is partial TP if and only if both submatrices are partial TP.\n\\end{enumerate}\n\nIt is clear this algorithm runs in time $O(2^xn^3)$, where $x$ is the number of unspecified entries in the partial matrix, since each unspecified entry generates at most two subproblems with at most $x-1$ unspecified entries, for a total of at most $2^x$ subproblems. Consequently, if $x\\le c\\log n$ for a fixed $c$, this algorithm runs in polynomial time, with the degree depending on $c$. It is also worth noting that the number of specified minors of an $m$-by-$n$ partial matrix is at most $\\sum_{k=1}^{\\min(m,n)}\\binom{m}{k}\\binom{n}{k}=\\binom{m+n}{n}-1\\le 2^{m+n}$, so this algorithm is only faster than brute-force checking every specified minor in the case that there are relatively few unspecified entries.\n\nIt is also clear that this generalizes to the case of checking partial SSR or partial WSR for any sign signature, including partial TN, or indeed to any matrix property that is inherited by all submatrices. 
If one has an algorithm to check a matrix for such a property $P$ that runs in time $O(f(n))$, then the appropriate modification of the above algorithm will check partial matrices for ``partial $P$'' in time $O(2^xf(n))$.\n\nFurthermore, one can connect this result to the bipartite graph construction of the previous section. A \\textit{maximal biclique} is a bliclique that is not a subgraph of any larger bliclique. Then we have the following more general theorem:\n\n\\begin{theorem}\nLet $G$ be the bipartite graph corresponding to a partial matrix $M$. Then $M$ is partial TP iff each of the submatrices corresponding to the maximal blicliques of $G$ are TP.\n\n\\begin{proof}\nEach fully specified square submatrix of $M$ corresponds to a balanced biclique in $G$. Each balanced biclique is either maximal or a subgraph of a larger biclique, so if all of the maximal bicliques correspond to TP submatrices of $M$, then every specified minor is positive, by virtue of being a minor of a TP submatrix. The converse is trivial.\n\\end{proof}\n\\end{theorem}\n\nTherefore any algorithm to enumerate maximal bicliques of a bipartite graph translates into an algorithm for efficiently checking partial TP, simply by checking for TP in each maximal biclique (and it will also inherit a cubic factor from this check).\n\nThis is a generalization of the algorithm presented earlier in this section because that algorithm may be obtained from the observation that, if vertices $a_i$ and $b_j$ in bipartite graph $G$ do not have an edge between them, the set of maximal bicliques of $G$ is the union of the sets of maximal bicliques in $G-a_i$ and $G-b_j$.\n\n\\bibliographystyle{alpha}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Nuclear Bulge\\ X-ray sources}\n\nThe \\citet{wang02} and \\citet{muno03} \\textit{Chandra}\\ surveys of the Nuclear Bulge\\\nrevealed a large population of weak, hard point-like X-ray sources\\ that could\naccount for up to 10\\% of the previously observed ``diffuse'' X-ray\nemission from the Nuclear Bulge. Numerous studies have tried to characterise\nthese newly discovered X-ray sources\\ of the Nuclear Bulge\\ surveys based on their X-ray\nproperties. \\citet{pfah02} suggest that a large fraction may be\nwind-accreting neutron stars with high mass\ncompanions. \\citet{muno03,muno06} and \\citet{ruit06} propose that they\ncould be white dwarfs accreting from main sequence counterparts\n(cataclysmic variables, polars and intermediate\npolars). \\citet{will03} and \\citet{liu06} believe them to be neutron\nstars with low mass companions and \\citet{wu07} have speculated that\nthey could be isolated neutron stars and black holes accreting from\nthe interstellar medium. However, the weak nature of the sources means\nthat positive identification of the majority of the sources is\nimpossible based solely on the X-ray data. Identification of the\nstellar counterparts to these X-ray sources\\ we will allow the differentiation\nbetween these possibilities.\n\nAs shown in \\citet{band05}, to identify candidate counterparts to\nX-ray sources\\ in the Nuclear Bulge, very high resolutions imaging is required\nto avoid issues of confusion due to the high stellar density of the\nregion. In addition to that, the extremely high, variable extinction\ntowards the Nuclear Bulge requires these observations to be performed\nin the near-infrared\\ or longer wavelengths. 
Previously, only the DENIS and\n2MASS\\ surveys had observed the Nuclear Bulge\\ entirely in the near-infrared, however as\nwill be shown, their depth and resolution is insufficient for the\npurposes of identifying the counterparts to the majority of the Nuclear Bulge\\\nX-ray sources.\n\n\\section{The UKIDSS-GPS}\n\nThe (United Kingdom Infrared Deep Sky Survey - Galactic Plane Survey)\nUKIDSS-GPS is one of the five near infrared Public Legacy Surveys that\nare being undertaken by the UKIDSS consortium, using the Wide Field\nCamera on the United Kingdom Infrared Telescope\n\\citep{luca07}. Surveying the Northern Galactic Plane and the Nuclear Bulge\\ in\n$J$, $H$ and $K$ bands, integration times of 80s, 80s and 40s\nrespectively achieve median 5-$\\sigma$ depths of $J=19.77$, $H=19.00$\nand $K=18.05$ (Vega system) for the most recent Data Release 2\n\\citep[DR2; ][]{warr07}. The resolution of the UKIDSS-GPS observations\nare 0.2\\arcsec, an order of magnitude better than the 2MASS\\ survey,\nwith a similar positional accuracy of $\\sim 0.1\\arcsec$.\n\n\\begin{figure*}\n \\begin{center}\n \\includegraphics[width=0.99\\textwidth]{gosling_a1_fig1.eps}\n \\caption[]{Near-infrared images of the Nuclear Bulge. {\\it Left}: $\\sim\n 2\\deg \\times 2\\deg$ false 3-colour image of the Nuclear Bulge\\ in the near-infrared\\\n (blue for $J$-band observations, green for $H$-band and red for\n $K$-band) containing $\\sim 4,500,000$ point sources. The white box\n is the approximate location of the images displayed on the {\\it\n right}. {\\it Top-right}: old 2MASS\\ 3-colour image of the region\n indicated. {\\it Bottom-right}: the equivalent image from the new\n UKIDSS-GPS survey demonstrating the vast improvement in both\n resolution and depth of the images. The white circle is the\n 1\\arcsec\\ radius error circle of one of the \\textit{Chandra}\\ Nuclear Bulge\\ X-ray sources\\\n demonstrating the scale of the images. }\n \\label{f:nbimage}\n \\end{center}\n\\end{figure*}\n\nIn the Nuclear Bulge, the crowding of this extremely dense stellar region\ncoupled with the high level of intervening extinction reduces the\ncompletion limit of the survey to $J=18.5$, $H=16.0$ and $K=14.5$ with\ndetection limits of $J=20.5$, $H=19.5$ and $K=18.0$. Despite this, the\n$\\sim 2\\deg \\times 2\\deg$ region shown in Figure \\ref{f:nbimage}\ncontains $\\sim 4,500,000$ point sources. As such it is the first\nsurvey covering the entire Nuclear Bulge\\ to have the resolution and depth to\nallow the identification of candidate counterparts to the majority of\nthe Nuclear Bulge\\ X-ray sources.\n\n\\section{X-ray source\\ counterparts}\n\nWe used the TOPCAT program to identify candidate counterparts to the\nX-ray sources. We used a larger $0.3\\arcsec$ positional error for the\nUKIDSS-GPS Nuclear Bulge\\ source positions, than the stated accuracy of\n$0.1\\arcsec$, to account for the confusion in the Nuclear Bulge caused\nby the very high stellar density. The ``out-of-the-box'' \\textit{Chandra}\\\npositional error is $1\\arcsec$, though this can be considerably larger\n(up to $9\\arcsec$) in some cases due to image distortion in the outer\nregions of the \\textit{Chandra}\\ array and\/or to the very low S\/N of the\nfaintest sources detected. 
For the purpose of this study, we discarded\nall X-ray sources\\ for which the X-ray positional errors were greater than\n$3\\arcsec$ radius; for error circles greater than this size, the\nnumber of candidate counterparts is sufficiently high (5-30 stars) to\nrender meaningless any attempt at astrometric matching between the\nnear-infrared\\ and X-ray catalogs. We used the $0.3\\arcsec$ positional error\nfor the UKIDSS-GPS source positions, along with the derived positional\nerrors from the \\citet{muno03} and \\citet{muno06} X-ray source\\ catalogues to\nset a maximum separation of $3.3\\arcsec$ between and X-ray source\\ and a star\nthat can be considered a match.\n\n\\begin{table*}\n \\begin{center}\n \\caption[Number of X-ray source counterparts]{The numbers of\n candidate counterparts to the \\citet{muno06} catalogue of X-ray\n sources from the \\citet{wang02} \\textit{Chandra}\\ observation of the\n Nuclear Bulge. The number of counterparts has been broken down in\n to X-ray sources\\ with positional errors of $0\\arcsec - 1\\arcsec$,\n $1\\arcsec - 2\\arcsec$, $2\\arcsec - 3\\arcsec$ and the total numbers\n on the bottom row. }\n \\vspace{5mm}\n \\begin{tabular}{c|rrrr|r}\n \\hline\n \\hline\n X-ray error size & \\multicolumn{5}{c}{Number of candidate counterparts} \\\\\n (arcsec) & 1 & 2 & 3 & $\\geq 4$ & Total \\\\\n \\hline\n $0\\arcsec - 1\\arcsec$ & 523 & 110 & 49 & 27 & 709 \\\\\n $1\\arcsec - 2\\arcsec$ & 600 & 153 & 58 & 43 & 854 \\\\\n $2\\arcsec - 3\\arcsec$ & 245 & 78 & 44 & 33 & 400 \\\\\n \\hline\n $0\\arcsec - 3\\arcsec$ & 1368 & 341 & 151 & 103 & 1963 \\\\\n \\hline\n \\hline\n \\end{tabular}\n \\label{t:ncounterparts}\n \\end{center}\n\\end{table*}\n\nOf the 4256 X-ray sources\\ in the \\citet{muno03, muno06} catalogues, 3963\nX-ray sources\\ have positional errors of less than $3\\arcsec$. From these, we\nfound 3076 candidate counterparts to 1963 of the X-ray sources. There are 1368\nX-ray sources\\ with only one candidate counterpart, 341 with two candidate\ncounterparts, 151 with three and 103 with 4 or more candidate\ncounterparts. Table \\ref{t:ncounterparts} gives a breakdown of the\nnumbers of counterparts for X-ray sources\\ with error circles of $0\\arcsec -\n1\\arcsec$, $1\\arcsec - 2\\arcsec$ and $2\\arcsec - 3\\arcsec$. As can be\nseen, for X-ray sources\\ with positional errors of $0\\arcsec - 2\\arcsec$, the\nmajority of these have only one candidate counterpart, whereas\nthose with $1\\arcsec - 2\\arcsec$ errors have a slightly larger\nproportion with 2 candidate counterparts. As might be expected, it is\nthe sources with the larger error circles, $2\\arcsec - 3\\arcsec$,\nwhich have a much larger proportion with multiple candidate\ncounterparts.\n\nFigure \\ref{f:xrscolmag} shows colour magnitude diagrams (CMDs) of the\ncandidate counterparts compared to a representative sample of Nuclear\nBulge stars from the UKIDSS-GPS DR2\\ catalogue. From the diagrams it can be seen\nthat the X-ray sources\\ follow the general trend of the overall stellar\ndistribution but with some differences to their specific\ndistribution. For instance, the CMDs containing $J$-band data reveal a\nhigh proportion of the candidate counterparts to be in the local,\nmain-sequence arm of the CMDs. The majority of the remaining candidate\ncounterparts in all three CMDs are consistent with red giants at a\nGalactic Centre distance with variable levels of extinction (see\nposter contribution by Gosling et al. 
for details of the Nuclear Bulge\\\nextinction).\n\n\\begin{figure*}\n \\begin{center}\n \\includegraphics[width=0.3\\textwidth]{gosling_a1_fig2.eps}\n \\includegraphics[width=0.3\\textwidth]{gosling_a1_fig3.eps}\n \\includegraphics[width=0.3\\textwidth]{gosling_a1_fig4.eps}\n \\caption[]{Colour magnitude diagrams (CMDs) of the candidate\n counterparts to the \\textit{Chandra}\\ X-ray sources\\ ({\\it red} points) overlaid\n on a representative sample of the Nuclear Bulge stellar\n population ({\\it black} points). From left to right the diagrams\n are $H\\ vs\\ J\\!-\\!H$, $K\\ vs\\ J\\!-\\!K$ and $K\\ vs\\ H\\!-\\!K$\n respectively. There is an increase in the number of candidate\n counterparts in the $K\\ vs\\ H\\!-\\!K$ CMD as the $H$ and $K$-band\n observations are less affected by the extinction towards the\n Nuclear Bulge.}\n \\label{f:xrscolmag}\n \\end{center}\n\\end{figure*}\n\nFigure \\ref{f:xrscolmag} also demonstrates the increase in candidate\ncounterparts that occurs when sources with photometry in the $H$ and\n$K$-band is compared to those for which there is photometry in all 3\nor $J$ and $H$-bands. There is a large amount of extinction towards\nthe Nuclear Bulge\\ (see additional poster contribution of Gosling et al. for\nfurther details), This extinction has a greater effect on shorter\nwavelengths and so is most likely to obscure a source in the $J$ band.\nThis is also apparent in the increase in sources being predominately\nthose with large values of $H\\!-\\!K$ indicating high levels of\nextinction.\n\n\\section{The next stages}\n\nIdentifying the candidate counterparts to the Nuclear Bulge\\ X-ray sources\\ is only the\nfirst step in understanding this population. To better understand\nthem, it will be necessary to identify the true companion to each of\nthe X-ray sources\\ where possible. In order to do this, we intend to search\nfor accretion signatures in the candidate counterparts. We are\nundertaking a narrow-band Brackett-$\\gamma$ ({\\rm Br-}\\ensuremath{\\gamma}, a known accretion\nsignature) imaging survey of the Nuclear Bulge\\ to identify sources with {\\rm Br-}\\ensuremath{\\gamma}\\\nexcess, and will also follow-up the candidate counterparts with near-infrared\\\nmulti-object spectroscopy using the FLAMINGOS-2 instrument soon to be\ncommissioned on Gemini South. We will also compare our catalogue of\ncandidate counterparts to the \\textit{Spitzer}\\ catalogue of point sources of\n\\citet{rami07} to increase our wavelength coverage of these sources\nand to identify those with spectra indicating that they are likely to\nbe in accreting or X-ray emitting systems.\n\n\\section{Conclusions}\n\nWe have presented an initial matching of the source positions of the\n\\textit{Chandra}\\ Nuclear Bulge\\ surveys of \\citet{wang02} and \\citet{muno03} to the new\nUKIDSS-GPS near-infrared\\ survey of the Nuclear Bulge. In doing so, we have identified\ncandidate counterparts to $\\sim50\\%$ of the X-ray sources\\ in this extremely\ncrowded and heavily extincted region. We show that these candidate\ncounterparts are consistent with the overall stellar distribution of\nobservations towards the Nuclear Bulge\\ although the relative proportions of the\npopulations are different.\n\nCandidate counterparts that are observed in the $J$-band have a higher\ntendency to be foreground sources and those observed only in the $H$\nand $K$-bands are more likely to be Nuclear Bulge\\ sources. 
This is most likely\nto be an effect of the high levels of extremely spatially variable\nextinction (see additional poster contribution by Gosling et al. for\nfurther details) towards the Nuclear Bulge.\n\nFurther observations including narrow-band {\\rm Br-}\\ensuremath{\\gamma}\\ imaging and near-infrared\\\nspectroscopy using the FLAMINGOS-2 instrument as well as comparison to\nother datasets such as the \\textit{Spitzer}\\ observations of the Nuclear Bulge\\ will be\nused to better identify the true companions to these X-ray sources\\ and so\ngain an understanding as to the nature of the Nuclear Bulge\\ X-ray source\\ population.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{INTRO}\n\nIn the present work we consider a generalization of general relativity (GR) \\cite{Einstein:1916vd} with cosmological constant in the Einstein-Cartan (EC) formalism \\cite{Utiyama:1956sy,Kibble:1961ba,Sciama:1964wt,Zanelli:2005sa}, \\emph{i.e.}, where the fundamental variables are the vierbein and the spin-connection instead the metric tensor and the affine-connection as in the Palatini formalism \\cite{Bergmann:1980wt,Ferraris1982}. In this generalization, the usual Einstein-Hilbert action \\cite{DeSabbata:1986sv} is supplemented by a quadratic curvature term and a quadratic torsion term. \n\nSpecifically, we study vacuum static and spherically symmetric solutions of this model by considering the case of vanishing torsion. Because we are considering the EC formalism, the action provides two field equations, one for the vierbein and another for the spin-connection. First, we show that a de Sitter spacetime is an exact vacuum solution of these equations. This is a non-trivial result since the system of equations is over-determined for vanishing torsion. After that, we obtain a perturbative solution around the Schwarzschild-de Sitter solution by neglecting the spin-connection equation, which is equivalent to impose vanishing torsion at the action level. It is important to be clear that, to obtain such solution, the curvature squared term is treated as perturbation around the usual Einstein term with cosmological constant. Hence, it the the generalized Einstein equation which is perturbed instead a perturbed solution around a fixed background. \n\nStatic and spherically symmetric solutions in alternative gravity models is a recurrent subject of investigation \\cite{Stelle:1977ry,Seifert:2007fr,Sebastiani:2010kv,Lu:2015psa}. In particular, K.~Stelle investigated this subject in a quite general higher derivative scenario in the metric formalism \\cite{Stelle:1977ry}. Hence, Stelle's work encompasses the contributions of quadratic curvature terms. Nevertheless, we call attention to the main differences between \\cite{Stelle:1977ry} and the present paper: First, our work is performed in the EC formalism instead of the metric formalism. Second, as already explained, our perturbed solution is a deformation around the Schwarzschild-de Sitter spacetime obtained by considering the quadratic curvature term as a perturbation while Stelle's result is a perturbed solution around the Minkowski background obtained by imposing a perturbation on the solution.\n\nIt is worth mentioning that, although our main motivation is the action originated as an emergent gravity in a quantum gauge theory scenario \\cite{Sobreiro:2011hb,Sobreiro:2012dp,Sobreiro:2010qf,Sobreiro:2016fks}, our analysis and discussions are general enough to encode any gravity theory as above described. 
Nevertheless, we will refer to the model \\cite{Sobreiro:2011hb,Sobreiro:2012dp,Sobreiro:2010qf,Sobreiro:2016fks} whenever we find it elucidative. Perhaps, the most important general motivation would be the recent detection of gravitational waves by the \\textit{Laser Interferometer Gravitational-wave Observatory} (LIGO) collaboration \\cite{Abbott:2016blz}. Such discover brings a new era on black hole physics as well as the possibility to test alternative solutions derived of modified theories of gravity that should expand the horizons of the well-known Einstein's gravitational theory.\n\nThis work is organized as follows: In Sec.~\\ref{odesys} we define the model and the respective field equations and specify them for static and spherically symmetric variables. The exact solution is discussed in Sect.~\\ref{exacsol}. In Sec.~\\ref{pertsol} we provide the perturbative solution. In Sect.~\\ref{therm}, a the thermodynamical aspects of the solutions are briefly discussed. Finally, our final considerations are displayed in Sect.~\\ref{rmks}.\n\n\\section{Action and field equations}\\label{odesys}\n\nLet us consider the following gravity action,\n\\begin{equation}\\label{ym-map-grav-obs}\n S_{\\mathrm{grav}}=\\frac{1}{16\\pi G}\\int \\bigg(\\frac{3}{2\\Lambda^2}R^{\\mathfrak{a}}_{~\\mathfrak{b}}\\star R_{\\mathfrak{a}}^{~\\mathfrak{b}} + T^\\mathfrak{a}\\star T_\\mathfrak{a}-\\frac{\\epsilon}{2}\\varepsilon_\\mathfrak{abcd} R^\\mathfrak{ab} e^\\mathfrak{c}e^\\mathfrak{d} + \\frac{\\tilde{\\Lambda}^2}{12}\\varepsilon_\\mathfrak{abcd}e^\\mathfrak{a}e^\\mathfrak{b}e^\\mathfrak{c}e^\\mathfrak{d}\\bigg)~.\n\\end{equation}\nwhere $R^\\mathfrak{a}_{~\\mathfrak{b}} = d \\omega^\\mathfrak{a}_{~\\mathfrak{b}}+\\omega^{\\mathfrak{a}}_{~\\mathfrak{c}} \\omega^\\mathfrak{c}_{~\\mathfrak{b}}$ is the curvature 2-form, $T^\\mathfrak{a}=d e^\\mathfrak{a}+\\omega^\\mathfrak{a}_{~\\mathfrak{b}}e^\\mathfrak{b}$ is the torsion 2-form, $\\omega^\\mathfrak{a}_{~\\mathfrak{b}}$ is the spin connection 1-form, and $e^\\mathfrak{a}$ is the vierbein 1-form. Moreover, $\\star$ stands for the Hodge dual operator, $G$ is the Newton constant, $\\widetilde{\\Lambda}$ is the cosmological constant, and $\\Lambda$ is a mass parameter. The corresponding vacuum field equations are easily obtained. For the vierbein, we get\n\\begin{equation} \\label{eq:eq_mov1}\n\\frac{3}{2\\Lambda^2}R^{\\mathfrak{bc}} \\star (R_{\\mathfrak{bc}}e_{\\mathfrak{a}})+ T^{\\mathfrak{b}}\\star\\left(T_{\\mathfrak{b}}e_{\\mathfrak{a}}\\right)+\\mathrm{D}\\star T_{\\mathfrak{a}}- \\varepsilon_{\\mathfrak{abcd}}\\left( R^{\\mathfrak{bc}}e^{\\mathfrak{d}} - \\frac{\\tilde{\\Lambda}^{2}}{3}e^{\\mathfrak{b}}e^{\\mathfrak{c}}e^{\\mathfrak{d}}\\right) = 0~,\n\\end{equation}\nwhile for the spin connection, we obtain,\n\\begin{equation} \\label{eq:eq_mov2}\n\\frac{3}{\\Lambda^2}D\\star R_{\\mathfrak{a}\\mathfrak{b}} \n+e_{\\mathfrak{b}} \\star T_{\\mathfrak{a}}\n-e_{\\mathfrak{a}} \\star T_{\\mathfrak{b}} -\\varepsilon_{\\mathfrak{abcd}} T^{\\mathfrak{c}} e^{\\mathfrak{d}} = 0~.\n\\end{equation}\nEqs.~\\eqref{eq:eq_mov1} and Eq.~\\eqref{eq:eq_mov2} are coupled nonlinear differential equations and thus, highly difficult to solve without any special insight. For this reason, we proceed with the simplest case where torsion is set to vanish. 
Hence, Eqs.~\\eqref{eq:eq_mov1} and Eq.~\\eqref{eq:eq_mov2} reduce to\\footnote{It is worth mentioning that the action \\eqref{ym-map-grav-obs} describes an emergent gravity associated to an $SO(5)$ Yang-Mills theory \\cite{Sobreiro:2011hb}. However, the field equations \\eqref{eq:eq_mov1mod} and \\eqref{eq:eq_mov2mod} are general enough to describe any gravity theory with a Riemann squared curvature in the action. Whenever is relevant, we will comment about the model developed in \\cite{Sobreiro:2011hb} and the results here found.}\n\\begin{eqnarray}\n\\frac{3}{2\\Lambda^2}R^{\\mathfrak{bc}} \\star (R_{\\mathfrak{bc}}e_{\\mathfrak{a}}) - \\varepsilon_{\\mathfrak{abcd}}\\left(R^{\\mathfrak{bc}} e^{\\mathfrak{d}} - \\frac{\\tilde{\\Lambda}^{2}}{3}e^{\\mathfrak{b}}e^{\\mathfrak{c}}e^{\\mathfrak{d}}\\right) &=& 0~, \\label{eq:eq_mov1mod}\\\\\n\\frac{3}{\\Lambda^2}D\\star R_{\\mathfrak{a}\\mathfrak{b}} &=& 0~.\\label{eq:eq_mov2mod}\n\\end{eqnarray}\n\nOur aim is to solve equations \\eqref{eq:eq_mov1mod} and \\eqref{eq:eq_mov2mod} for static and spherically symmetric conditions. To do so, we will consider two different situations:\n\\begin{itemize}\n\\item First, we consider both equations \\eqref{eq:eq_mov1mod} and \\eqref{eq:eq_mov2mod} and find an exact solution, corresponding to a strong influence of the quadratic curvature term. The result we find is the usual de Sitter solution with an effective cosmological constant given by a mix between $\\Lambda$ and $\\widetilde{\\Lambda}$.\n\n\\item Second, by taking $\\Lambda$ as a huge quantity\\footnote{This situation is consistent with the results of \\cite{Assimos:2013eua,Sobreiro:2016fks} where 1- and 2-loop explicit computations predict a huge value for $\\Lambda$ in the effective gravity model constructed in \\cite{Sobreiro:2011hb}.} when compared to the quadratic curvature term, we treat this term as a perturbation. The perturbed solution is a deformation of the usual Schwarzschild-de Sitter solution. To obtain such solution, the equation \\eqref{eq:eq_mov2mod} is neglect because it is a pure perturbation in $\\Lambda^{-2}$. 
This situation is equivalent to set $T=0$ at the action \\eqref{ym-map-grav-obs} before the computation of the field equations.\n\\end{itemize}\n\nEqs.~\\eqref{eq:eq_mov1mod} and \\eqref{eq:eq_mov2mod}, in Schwarzschild coordinates,\n\\begin{equation}\\label{spheric-symm}\ne^0=e^{\\alpha(r)}\\mathrm{d}t~~~,~~~e^1=e^{\\beta(r)}\\mathrm{d}r ~~~,~~~e^2=r\\mathrm{d}\\theta~~~,~~~e^3=r \\sin\\theta \\mathrm{d}\\phi~,\n\\end{equation}\ncan be recasted as\n\\begin{equation}\\label{edo-t}\n\\sigma\\left[2\\left(\\frac{e^{-2\\beta}\\partial_r\\beta}{r}\\right)^2+\\left(\\frac{1-e^{-2\\beta}}{r^2}\\right)^2\\right]+2\\left(\\frac{e^{-2\\beta}\\partial_r\\beta}{r}\\right)+\\frac{1-e^{-2\\beta}}{r^2}+3\\lambda =0~,\n\\end{equation}\n\\begin{equation}\\label{edo-r}\n\\sigma\\left[2\\left(\\frac{e^{-2\\beta}\\partial_r\\alpha}{r}\\right)^2+\\left(\\frac{1-e^{-2\\beta}}{r^2}\\right)^2\\right]-2\\left(\\frac{e^{-2\\beta}\\partial_r\\alpha}{r}\\right)+\\frac{1-e^{-2\\beta}}{r^2}+3\\lambda =0~,\n\\end{equation}\n\\begin{eqnarray}\\label{edo-theta-phi}\n& &\\sigma\\left\\{\\left[e^{-(\\alpha+\\beta)}\\partial_r\\left(e^{-\\beta}\\partial_r e^{\\alpha}\\right)\\right]^2+\\left(\\frac{e^{-2\\beta}\\partial_r\\alpha}{r}\\right)^2+\\left(\\frac{e^{-2\\beta}\\partial_r\\beta}{r}\\right)^2\\right\\}+\\nonumber\\\\\n&& -e^{-(\\alpha+\\beta)}\\partial_r\\left(e^{-\\beta}\\partial_r e^{\\alpha}\\right)-\\frac{e^{-2\\beta}\\partial_r\\alpha}{r}+\\frac{e^{-2\\beta}\\partial_r\\beta}{r}+3\\lambda=0~,\n\\end{eqnarray}\n\\begin{equation}\\label{edo-scalarcurv}\n\\partial_r \\mathcal{R}=0~,\n\\end{equation}\nfor $\\mathfrak{a}=0~$, $\\mathfrak{a}=1~$ and $\\mathfrak{a}=2$, respectively. In Eq.~\\eqref{edo-scalarcurv}, $\\mathcal{R}$ stands for the scalar curvature. We notice that differential equations obtained for $\\mathfrak{a}=2$ and $\\mathfrak{a}=3$ are identical. The constants in Eqs.~\\eqref{edo-t}, \\eqref{edo-r} and \\eqref{edo-theta-phi} are $\\sigma\\equiv -3\/(2\\Lambda^2)$ and $\\lambda\\equiv -\\tilde{\\Lambda}^2\/3$.\nThe combination of Eqs.~\\eqref{edo-t} and \\eqref{edo-r} leads to the following constraint\n\\begin{equation}\n\\partial_r\\left[\\alpha(r)+\\beta(r)\\right] \\left\\{\\partial_r\\left[\\beta(r)-\\alpha(r)\\right] +\\frac{r}{\\sigma}e^{2\\beta(r)}\\right\\}=0~,\n\\end{equation}\nwhich allows two possibilities:\n\\begin{itemize}\n\\item $\\partial_r\\left(\\alpha(r)+\\beta(r)\\right)=0 \\Rightarrow \\alpha(r)+\\beta(r)=f(t)$ - Thus, we have freedom to re-scale the time coordinate with simply $f(t)=0$.\n\\item $\\partial_r\\beta(r)+\\partial_r\\alpha(r) +\\sigma^{-1}r\\exp(2\\beta(r))=0$ - This condition is unusual and quite complicated since it is a coupled non-linear differential equation for $\\alpha(r)$ and $\\beta$.\n\\end{itemize}\nWe will stick to the first possibility, which is the usual relation in General Relativity and impose $\\alpha(r)=-\\beta(r)$.\n\n\n\\section{Exact solution}\\label{exacsol}\n\nIn order to solve analytically the system of equations \\eqref{edo-t}-\\eqref{edo-scalarcurv}, we start by subtracting Eq.~\\eqref{edo-r} from Eq.~\\eqref{edo-theta-phi}, obtaining\\footnote{The partial derivatives were changed to ordinary ones, since $\\beta$ is only $r$-dependent.},\n\\begin{equation}\\label{nde-tminustheta}\n\\left(\\frac{\\ddot{h}}{2}+\\frac{1-h}{r^2}\\right)\\left[\\sigma\\left(\\frac{\\ddot{h}}{2}-\\frac{1-h}{r^2}\\right)-1\\right]=0~,\n\\end{equation}\nwhere, the condition $\\alpha+\\beta=0$ was employed. 
Moreover, we have defined $e^{-2\\beta(r)}\\equiv h$ and $\\dot{h}\\equiv dh\/dr$. Eq.~\\eqref{nde-tminustheta} can be decomposed in two independent differential equations where only one must be zero. First, we have\n\\begin{equation}\\label{exact-edo01}\nr^2\\ddot{h}+2h-2\\left(1+\\frac{r^2}{\\sigma}\\right)=0~,\n\\end{equation}\nwhose solution is\n\\begin{equation}\nh(r)=1+\\frac{r^2}{2\\sigma}+\\sqrt{r}\\left[c_1\\cos\\left(\\frac{\\sqrt{7}}{2}\\ln r\\right)+c_2\\sin\\left(\\frac{\\sqrt{7}}{2}\\ln r\\right)\\right]~,\n\\end{equation}\nwhere $c_1$ and $c_2$ are integration constants. It turns out that this solution does not satisfy the whole differential equation system \\eqref{edo-t}-\\eqref{edo-scalarcurv}.\n\nThe second possibility is\n\\begin{equation}\\label{exact-edo02}\nr^2\\ddot{h}-2h+2=0~,\n\\end{equation}\nwhose solution is given by\n\\begin{equation}\nh(r)=1+c_3r^2+\\frac{c_4}{r}~,\\label{hhhh}\n\\end{equation}\nwhere $c_3$ and $c_4$ are integration constants. It is a straightforward computation to show that we must have $c_4=0$ and $c_3\\neq0$ in order to the solution \\eqref{hhhh} satisfy the system \\eqref{edo-t}-\\eqref{edo-scalarcurv}. Moreover, there are two possible values for the constant $c_3$, namely $\\Upsilon_p$ and $\\Upsilon_m$,\n\\begin{eqnarray}\\label{Ypm}\n\\Upsilon_p &=& \\frac{\\Lambda^2}{3} \\left[1 + \\sqrt{1-2\\frac{\\tilde{\\Lambda}^2}{\\Lambda^2}}\\right]~,\\nonumber\\\\\n\\Upsilon_m &=& \\frac{\\Lambda^2}{3} \\left[1 - \\sqrt{1-2\\frac{\\tilde{\\Lambda}^2}{\\Lambda^2}}\\right]~.\n\\end{eqnarray}\nHence\n\\begin{eqnarray}\\label{exact-sol-pm}\ne^{-2\\beta_p} &=& 1-\\Upsilon_p r^2~,\\nonumber\\\\\ne^{-2\\beta_m} &=& 1-\\Upsilon_m r^2~.\n\\end{eqnarray}\nThe solutions \\eqref{exact-sol-pm} satisfy the system of differential equations \\eqref{edo-t}-\\eqref{edo-scalarcurv}, simultaneously. This is an important and necessary verification since this system of equations is over-determined. From \\eqref{Ypm}, it is clear that $\\tilde{\\Lambda}^2$ cannot exceed $\\Lambda^2\/2$, otherwise, the solution founded is inconsistent. Moreover, if $2\\tilde{\\Lambda}^2=\\Lambda^2$, only one solution is allowed $\\Upsilon_m=\\Upsilon_p$.\n\nNow, if we assume that $\\Lambda^2$ has a large value and $\\tilde{\\Lambda}^2$ has a small value\\footnote{This is consistent, for instance, with the explicit values $\\Lambda^2\\approx 7.665\\times 10^{31}\\textrm{TeV}^2\\gg\\tilde{\\Lambda}^2\\approx 1.000\\times 10^{-92}\\textrm{TeV}^2$ found in \\cite{Assimos:2013eua,Sobreiro:2016fks}.}, we can expand \\eqref{Ypm} to find\n\\begin{eqnarray}\\label{Yappr}\n\\Upsilon_s &\\approx &\\frac{1}{3}\\tilde{\\Lambda}^2~,\\nonumber\\\\\n\\Upsilon_b &\\approx &\\frac{2}{3}\\Lambda^2~.\n\\end{eqnarray}\nThe first case, $\\Upsilon_s$ has a narrow value if we take $\\widetilde{\\Lambda}$ as its observational value. On the other hand, the second case stems for a de Sitter-like space with a very small radius, since $\\Lambda^2$ is very big. Hence, we have a weak curvature regime for $\\Upsilon_s$ and a strong one for $\\Upsilon_b$.\n\nObviously, all usual properties of de Sitter spacetime are maintained\\footnote{An alternaltive and simple way to find the solution \\eqref{exact-sol-pm} is to directly deal with equation \\eqref{eq:eq_mov1} in form notation and try an \\emph{ansatz} solution of the form\n\\begin{equation}\nR^{\\mathfrak{ab}}=\\zeta e^{\\mathfrak{a}} e^{\\mathfrak{b}}\\;,\\label{ap1}\n\\end{equation}\nwhere $\\zeta$ is a constant mass parameter. 
The solution \\eqref{ap1} is a natural choice since we have the usual cosmological constant term in \\eqref{eq:eq_mov1}. The direct substitution of \\eqref{ap1} in \\eqref{eq:eq_mov1} leads to the characteristic equation for $\\zeta$,\n\\begin{equation}\n\\frac{3}{2\\Lambda^2}\\zeta^2-\\zeta+\\frac{\\tilde{\\Lambda}^2}{3}=0\\;,\\label{ap2}\n\\end{equation}\nproviding $\\zeta=\\Upsilon_{p,m}$. Hence, solution \\eqref{ap1} is an alternative covariant form of the curvature associated with solution \\eqref{exact-sol-pm}.}.\n\n\\section{Perturbative solution}\\label{pertsol}\n\nFrom this point, we consider the quadratic curvature to be a small perturbation in Eq.~\\eqref{edo-t}. For that, we multiply Eq.~\\eqref{edo-t} by $\\lambda$, then\n\\begin{equation}\\label{edo-t-pert}\n\\eta\\left[2\\left(\\frac{e^{-2\\beta}\\partial_r\\beta}{r}\\right)^2+\\left(\\frac{1-e^{-2\\beta}}{r^2}\\right)^2\\right]+\\lambda\\left[2\\left(\\frac{e^{-2\\beta}\\partial_r\\beta}{r}\\right)+\\frac{1-e^{-2\\beta}}{r^2}+3\\lambda\\right] =0~.\n\\end{equation}\nIn this form, Eq.~\\eqref{edo-t-pert} can be solved analytically through the employment of perturbation theory if $\\eta\\equiv\\sigma\\lambda\\equiv\\tilde{\\Lambda}^2\/2\\Lambda^2$ is a very small dimensionless parameter. Hence, the quadratic term can be treated as a perturbation around the term proportional to $\\lambda$. It is evident that the term proportional to $\\lambda$ is the usual Einstein equation with cosmological constant $\\widetilde{\\Lambda}$. For simplicity, let $u(r)=1-e^{-2\\beta}$ and take all derivatives as ordinary ones. So, we rewrite Eq.~\\eqref{edo-t-pert} as\n\\begin{equation}\\label{edo-t-u}\n\\eta\\left[\\frac{1}{2}{\\dot{u}}^2+\\left(\\frac{u}{r^2}\\right)^2\\right] + \\lambda\\left(r\\dot{u} + u + 3\\lambda r^2\\right)=0~.\n\\end{equation}\nA perturbative solution of Eq.~\\eqref{edo-t-u} has the general form\n\\begin{equation}\\label{SPert-geral}\nu(r)=u_0(r) + \\eta u_1(r) + \\eta^2 u_2(r) + \\eta^3 u_3(r) + \\cdots~.\n\\end{equation}\nSubstituting \\eqref{SPert-geral} in the Eq.~\\eqref{edo-t-u}, and split order by order in $\\eta$, we find an infinite set of descent coupled equations\n\\begin{eqnarray}\\label{hierarq-pert-u}\n\\frac{d}{dr}(r u_0)+3\\lambda r^2 &=& 0~,\\nonumber\\\\\n\\frac{d}{dr}(r u_1) + \\frac{1}{2\\lambda}\\left(\\frac{du_0}{dr}\\right)^2 + \\frac{u_0^2}{\\lambda r^2} &=& 0~,\\nonumber\\\\\n\\frac{d}{dr}(r u_2)+\\frac{1}{\\lambda}\\left(\\frac{du_0}{dr} \\frac{du_1}{dr}\\right) + \\frac{2u_0 u_1}{\\lambda r^2} &=& 0~,\\nonumber\\\\\n\\frac{d}{dr}(r u_3) + \\frac{1}{2\\lambda}\\left(\\frac{du_1}{dr}\\right)^2 + \\frac{u_1^2}{\\lambda r^2} +\\frac{1}{\\lambda}\\left(\\frac{du_0}{dr}\\frac{du_2}{dr}\\right) + \\frac{2u_0 u_2}{\\lambda r^2} &=& 0~,\\nonumber\\\\\n\\frac{d}{dr}(r u_4) + \\frac{1}{\\lambda}\\left(\\frac{du_0}{dr} \\frac{du_3}{dr}\\right) + \\frac{2u_0 u_3}{\\lambda r^2} + \\frac{1}{\\lambda}\\left(\\frac{du_1}{dr} \\frac{du_2}{dr}\\right) + \\frac{2u_1 u_2}{\\lambda r^2} &=& 0~,\\nonumber\\\\\n&\\vdots&\\;.\n\\end{eqnarray}\nWith such hierarchy of equations \\eqref{hierarq-pert-u}, we can solve, iteratively, all the equations above. Starting with the zeroth order, we obtain\n\\begin{equation}\nu_0 = \\frac{\\tilde{\\Lambda}^2}{3} r^2 + \\frac{2GM}{r}~,\n\\end{equation}\nwhich is the usual Schwarzschild-de Sitter solution \\cite{Gibbons:1977mu,Cardoso:2003sw,Faraoni:2015ula}, obviously, since this is the situation for $\\eta=0$. 
Hence, the integration constant at the $1\/r$ term is obtained from the Newtonian limit. Solving iteratively the rest of the equations \\eqref{hierarq-pert-u}, we find the following solution (at fourth order), \n\\begin{eqnarray}\\label{pert-sol-4th}\ne^{-2\\beta}&\\approx & 1-\\frac{2GM}{r}-\\frac{\\tilde{\\Lambda}^2}{3}r^2-\\eta\\left(\\frac{\\mathcal{C}_{12}}{r}+\\mathcal{C}_{11}r^2+\\frac{\\mathcal{C}_{13}}{r^4}\\right)- \\eta^2\\left(\\frac{\\mathcal{C}_{22}}{r}+\\mathcal{C}_{21}r^2+\\frac{\\mathcal{C}_{23}}{r^4}+\\frac{\\mathcal{C}_{24}}{r^7}\\right)\\nonumber\\\\\n&-&\\eta^3\\left(\\frac{\\mathcal{C}_{32}}{r}+\\mathcal{C}_{31}r^2+\\frac{\\mathcal{C}_{32}}{r^4}+\\frac{\\mathcal{C}_{34}}{r^7}+\\frac{\\mathcal{C}_{35}}{r^{10}}\\right) - \\eta^4\\left(\\frac{\\mathcal{C}_{42}}{r}+\\mathcal{C}_{41}r^2+\\frac{\\mathcal{C}_{43}}{r^4}+\\frac{\\mathcal{C}_{44}}{r^7}+\\frac{\\mathcal{C}_{45}}{r^{10}}+\\frac{\\mathcal{C}_{46}}{r^{13}}\\right)+\\ldots\\nonumber\\\\\n\\end{eqnarray}\nwhere the constants $C_{k\\ell}$ can be arranged as\\footnote{The solution \\eqref{pert-sol-4th} can be generalized to all orders in a very concise form given by\n\\begin{equation}\\label{pert-sol-gen}\ne^{-2\\beta}=1-\\sum_{k=0}^{\\infty}\\eta^k\\sum_{\\ell=1}^{k+2}\n\\mathcal{C}_{k\\ell}r^{5-3\\ell}~~,\n\\end{equation}\nwhere the general constants $\\mathcal{C}_{k\\ell}$ depend on the previous order constants. In particular, the $\\mathcal{C}_{k2}$, with $k=0,1,2,\\dots$, stand for the actual integration constants.}\n\\begin{equation}\n\\mathcal{C}_{k\\ell}\\equiv\\left(\\begin{array}{cccccc}\\label{CIs}\n\\frac{\\tilde{\\Lambda}^2}{3} & 2GM & & & & \\\\\n\\frac{\\tilde{\\Lambda}^2}{3} & \\mathcal{C}_{12} & \\frac{6G^2M^2}{\\tilde{\\Lambda}^2} & & & \\\\\n2\\frac{\\tilde{\\Lambda}^2}{3} & \\mathcal{C}_{22} & \\frac{6GM}{\\tilde{\\Lambda}^2}\\Omega_1 & -\\frac{36G^3M^3}{\\tilde{\\Lambda}^4} & & \\\\\n5\\frac{\\tilde{\\Lambda}^2}{3} & \\mathcal{C}_{32} & \\frac{9}{2\\tilde{\\Lambda}^2}\\Omega_2 & \\frac{54G^2M^2}{\\tilde{\\Lambda}^4}\\Omega_4 & \\frac{312G^4M^4}{\\tilde{\\Lambda}^6} & \\\\\n14\\frac{\\tilde{\\Lambda}^2}{3} & C_{42} & -\\frac{3}{\\tilde{\\Lambda}^2}\\Omega_3 & \\frac{3GM}{\\tilde{\\Lambda}^2}\\Omega_5 & \\frac{54G^2M^2}{\\tilde{\\Lambda}^4}\\Omega_6 & -\\frac{3564G^5M^5}{\\tilde{\\Lambda}^8}\n\\end{array}\\right)\n\\end{equation}\nwhere the index $k$ (line) and $\\ell$ (column) run along the discrete intervals $[0,\\infty]$ and $[1,k+2]$, respectively, and\n\\begin{eqnarray}\n\\Omega_1 &=& \\mathcal{C}_{12}-2GM~,\\nonumber\\\\\n\\Omega_2 &=& \\left[\\mathcal{C}_{12}^2 + 4GM\\left(6GM-2\\mathcal{C}_{12}-\\mathcal{C}_{22}\\right)\\right]~,\\nonumber\\\\\n\\Omega_3 &=&\\left[\\mathcal{C}_{12}\\left(\\mathcal{C}_{12}+ \\mathcal{C}_{22}-12GM\\right)- 2GM\\left(2\\mathcal{C}_{22}+ \\mathcal{C}_{32}-20GM\\right)\\right]~,\\nonumber\\\\\n\\Omega_4 &=& \\left(8GM-3\\mathcal{C}_{12}\\right)~,\\nonumber\\\\\n\\Omega_5 &=& \\left[3\\mathcal{C}_{12}^2-2GM\\left( 12\\mathcal{C}_{12}+3\\mathcal{C}_{22}-24GM\\right)\\right]~,\\nonumber\\\\\n\\Omega_6 &=& 2\\left(\\mathcal{C}_{12}-3GM\\right)~.\n\\end{eqnarray}\n\nThe differential equation Eq.~\\eqref{edo-theta-phi} can be solved for $\\alpha$ once we have $\\beta$. The result is consistent with the condition $\\alpha=-\\beta$. Moreover, as discussed before, Eq.~\\eqref{edo-scalarcurv} can be neglected. 
Simple consistency checks are: the limits $\\eta\\rightarrow 0$ and $\\tilde{\\Lambda}^2=0$, providing a pure Schwarzschild solution; the limits $M=0$ and $\\eta\\rightarrow 0$ result in a de Sitter spacetime solution; and, as already seen, the limit $\\eta\\rightarrow 0$ provides the Schwarzschild-de Sitter solution.\n\nIt is interesting to take the limit $r\\gg 2GM$ in the solution \\eqref{pert-sol-4th}. The result is immediate,\n\\begin{equation}\\label{pert-ds-far}\ne^{-2\\beta}\\approx\\left(1-\\tilde{\\Upsilon} r^2\\right)~,\n\\end{equation}\nwhere\n\\begin{equation}\\label{Y-far}\n\\tilde{\\Upsilon}\\approx \\frac{\\tilde{\\Lambda}^2}{3}+\\eta\\left(\\frac{\\tilde{\\Lambda}^2}{3}\\right)+\\eta^2\\left(2\\frac{\\tilde{\\Lambda}^2}{3}\\right)+\\eta^3\\left(5\\frac{\\tilde{\\Lambda}^2}{3}\\right)+\\eta^4\\left(14\\frac{\\tilde{\\Lambda}^2}{3}\\right)+\\dots~,\n\\end{equation}\nwhich is a perturbatively asymptotically de Sitter spacetime, as expected. Remarkably, the relation \\eqref{Y-far} can be written as\n\\begin{equation}\\label{Y-notrunc}\n\\widetilde{\\Upsilon}=\\left(\\sum_{w=0}^\\infty \\eta^w a_w\\right)\\frac{\\tilde{\\Lambda}^2}{3}~,\n\\end{equation}\nwhere\n\\begin{equation}\na_w=\\frac{(2w)!}{(w+1)!w!}\n\\end{equation}\nare the Catalan numbers\\footnote{Named after the discovery of the sequence of natural numbers by the Belgian mathematician Eug\\`ene C. Catalan $(1814-1894)$, which made several contributions to combinatorial mathematics \\cite{Weisstein}.}. Moreover, by setting $\\eta=0$ in \\eqref{Y-far} we find $\\tilde{\\Upsilon}=\\Upsilon_s$. Hence, we have an asymptotic relation between the perturbative solution and the exact one. Further, by expanding the constant $\\Upsilon_p$ in \\eqref{Ypm} for small $\\widetilde{\\Lambda}\/\\Lambda$, we get the same expression \\eqref{Y-notrunc}. Hence, we have consistency between the exact and perturbative solution for small $\\widetilde{\\Lambda}\/\\Lambda$.\n\nFor the next sections, for the sake of simplicity, we attain ourselves to the first order correction in $\\eta$. Thus, the solution \\eqref{pert-sol-4th} is reduced to \n\\begin{equation}\\label{pert-sol-1st} \ne^{-2\\beta} \\approx 1-\\frac{\\mathcal{C}_{02}}{r}-\\mathcal{C}_{01}r^2-\\eta\\left(\\frac{\\mathcal{C}_{12}}{r}+\\mathcal{C}_{11}r^2+\\frac{\\mathcal{C}_{13}}{r^4}\\right)~.\n\\end{equation}\nwhere the explicit form of the constants are displayed in \\eqref{CIs}.\n\n\\subsection{Horizons}\\label{ehor1}\n\nEq.~\\eqref{pert-sol-1st} can be used to determine the horizons by solving $e^{-2\\beta}=0$\n\\cite{Cardoso:2003sw,Faraoni:2015ula,Bhattacharya:2010vr,Wald:1984rg}. Thus, we must solve the following perturbed algebraic equation\n\\begin{equation}\\label{eq-hor-1st}\nr^3\\left(r-\\mathcal{C}_{02}-\\mathcal{C}_{01}r^3\\right)\n-\\eta\\left(\\mathcal{C}_{11}r^6+\\mathcal{C}_{12}r^3 + \\mathcal{C}_{13}\\right)=0~,\n\\end{equation}\nwhere the constants $\\mathcal{C}$'s are listed in \\eqref{CIs}. The solution can be taken as a perturbative one of the form\n\\begin{equation}\\label{r-hor-1st}\nr\\approx r_0+\\eta r_1~.\n\\end{equation}\nSubstituting Eq.~\\eqref{r-hor-1st} in Eq.~\\eqref{eq-hor-1st} we obtain a system of two algebraic equations. 
At zeroth order, already with the substitution of constants $\\mathcal{C}$s, we have\n\\begin{equation}\\label{cubeq-0}\nr_0^3-\\frac{3}{\\tilde{\\Lambda}^2}r_0+\\frac{6GM}{\\tilde{\\Lambda}^2}=0~,\n\\end{equation}\nand, at first order,\n\\begin{equation}\\label{cubeq-1}\nr_0^3\\left(1+3\\mathcal{C}_{01}r_0^2\\right)r_1-\\mathcal{C}_{13}r_0^3-\\mathcal{C}_{11}r_0^6 - \\mathcal{C}_{13}=0~,\n\\end{equation}\nwhich we left with the constants $\\mathcal{C}$'s for the sake of simplicity. The polynomial discriminant of the Eq.~\\eqref{cubeq-0} is easily computed\n\\begin{equation}\n\\Delta=\\frac{108}{\\tilde{\\Lambda}^6}\\left(1-9G^2M^2\\tilde{\\Lambda}^2\\right)~,\n\\end{equation}\nwhich is important to determine the nature of the roots of Eq.~\\eqref{cubeq-0}. If, and only if $\\Delta>0$, Eq.~\\eqref{cubeq-0} has three real roots. Such condition implies that $3GM\\tilde{\\Lambda}<1$, since $\\tilde{\\Lambda}$, $G$ and $M$ are positive quantities. By applying the trigonometric method to find all the roots of Eq.~\\eqref{cubeq-0}, it is found only two different positive roots, namely,\n\\begin{eqnarray}\\label{Hroots-0}\nr_{01} &=& \\frac{1}{\\tilde{\\Lambda}}\\left(c_\\xi+\\sqrt{3}s_\\xi\\right)~,\\nonumber\\\\\nr_{02} &=& \\frac{1}{\\tilde{\\Lambda}}\\left(c_\\xi-\\sqrt{3}s_\\xi\\right)~,\n\\end{eqnarray}\nwhere $c_\\xi\\equiv\\cos\\xi$, $s_\\xi\\equiv\\sin\\xi$ and $\\xi=1\/3\\arccos\\left(3GM\\tilde{\\Lambda}\\right)$. The third root is $r_{03}=-\\left(r_{01}+r_{02}\\right)$, which is essentially negative and it is not physical. Since $0<3GM\\tilde{\\Lambda}<1\\;\\Rightarrow\\;0<\\arccos\\left(3GM\\tilde{\\Lambda}\\right)<\\pi\/2$ we have $r_{01}>r_{02}>0$. Accordingly, $r_{01}$ stands for the cosmological horizon and $r_{02}$ stands for the mass distribution event horizon.\n\nNow, the substitution of Eq.~\\eqref{Hroots-0} in Eq.~\\eqref{cubeq-1} provides\n\\begin{equation}\\label{Hroots-1}\nr_{1\\ell}=\\frac{1}{3\\left(1-\\frac{\\tilde{\\Lambda}^2}{3} r_{0\\ell}^2\\right)} \\left(-\\frac{6G^2M^2}{\\tilde{\\Lambda}^2}\\frac{1}{r_{0\\ell}^3}+\\mathcal{C}_{12}+\\frac{\\tilde{\\Lambda}^2}{3} r_{0\\ell}^3\\right)~,\n\\end{equation}\nwith $\\ell=1$ or $\\ell=2$. Hence, the form of the horizons at first order are\\footnote{We point out that the horizons depend on the integration constant $C_{12}$, which may depend explicitly on the mass $M$. The qualitative\/quantitative behaviour of such horizons and the respective thermodynamical quantities is directly bounded by $C_{12}$. In the next steps we are assuming that $C_{12}$ has a linear dependence on $M$, so all graphs in this paper are plotted under this assumption. This is a natural hypothesis since $C_{12}$ appear as a factor of the pure Schwarzschild term correction $\\sim 1\/r$. Nevertheless, it may be ajusted by employing more sophisticated boundary conditions. 
A complete analysis of this fine tuning is left for future investigation \\cite{Silveira:2017xxx}.}\n\\begin{eqnarray}\\label{HZs}\nr_b &=& \\frac{1}{\\tilde{\\Lambda}}\\left\\{\\left(c_\\xi -\\sqrt{3}s_\\xi\\right)+\\eta \\frac{\\sec(3\\xi)}{6}\\left[-\\frac{18G^2M^2\\tilde{\\Lambda}^2}{\\left(c_\\xi-\\sqrt{3}s_\\xi\\right)^2}+3\\mathcal{C}_{12} \\left(c_\\xi-\\sqrt{3}s_\\xi\\right) + \\left(c_\\xi-\\sqrt{3}s_\\xi\\right)^4\\right]\\right\\} ~,\\nonumber\\\\\nr_c &=& \\frac{1}{\\tilde{\\Lambda}}\\left\\{\\left(c_\\xi + \\sqrt{3}s_\\xi\\right)+ \\eta \\frac{\\sec(3\\xi)}{6}\\left[-\\frac{18G^2M^2\\tilde{\\Lambda}^2}{\\left(c_\\xi+\\sqrt{3}s_\\xi\\right)^2}+3\\mathcal{C}_{12} \\left(c_\\xi+\\sqrt{3}s_\\xi\\right) + \\left(c_\\xi+\\sqrt{3}s_\\xi\\right)^4\\right]\\right\\}~,\\nonumber\\\\\n\\end{eqnarray}\nwhere $r_b$ is the event horizon and $r_c$ is the cosmological one. It is obvious that the limit $\\eta\\rightarrow 0$ recovers the two horizons for a Schwarzschild-de Sitter spacetime. The behavior of $r_b$ and $r_c$ are qualitatively displayed\\footnote{The $\\eta$ parameter, according to \\cite{Assimos:2013eua,Sobreiro:2016fks} is very small, providing no actual difference in the plots. For this reason we overestimate $\\eta$ for the sake of comparison. The same observation holds for all the plots in this work.} in Figure~\\ref{graphrb} and Figure~\\ref{graph-rc}, respectively. We observe that the increasing of $M$ implies on the increasing of $r_b$, as expected. On the other hand, $r_c$ decreases as $M$ increases.\n\\begin{figure}[htb]\n\t\\centering\n\t\t\\includegraphics[width=0.6\\textwidth]{rb.png}\n\t\t\\caption{Event horizon related to the mass spherical distribution. $r_b(x(M))$ is in units of $\\tilde{\\Lambda}^{-1}$ and $3GM\\tilde{\\Lambda}\\equiv x $. The dashed curve represents the event horizon behavior of a standard black hole and the thick curve represents the perturbative horizon. For the thick curve we adopted $\\eta=10^{-1}$.}\n\\label{graphrb}\n\\end{figure}\n\\begin{figure}[htb]\n \\centering\n \\includegraphics[width=0.6\\textwidth]{rc.png}\n\t\\caption{Cosmological horizon. $r_c(x(M))$ is in units of $\\tilde{\\Lambda}^{-1}$, $3GM\\tilde{\\Lambda}\\equiv x$ and $\\eta=10^{-1}$. The dashed curve represents the cosmological horizon obtained from the Schwarzschild-de Sitter geometry. The thick curve is the cosmological horizon with the correction from the quadratic term contribution.}\n\\label{graph-rc}\n\\end{figure}\n\n\n\\subsection{Singularities}\n\nIt is a straightforward calculation to find the possible singularities in the perturbative solution \\eqref{pert-sol-1st} by computing the Kretschmann invariant, the scalar curvature and the Ricci tensor squared, namely,\n\\begin{eqnarray}\\label{K-invar}\n\\mathcal{R}^{\\alpha\\beta\\gamma\\delta} \\mathcal{R}_{\\alpha\\beta\\gamma\\delta}&=&\n\\frac{48G^2M^2}{r^6}+\\frac{8\\tilde{\\Lambda}^4}{3}+\n\\eta\\left[\\left(4G^2M^2+\\frac{24G^3M^3}{\\tilde{\\Lambda}^2}\\right)\\frac{12}{r^6}+\\frac{1440G^3M^3}{\\tilde{\\Lambda}^2r^9}+ \\frac{16\\tilde{\\Lambda}^2}{r^6}\\right]~,\\nonumber\\\\\n\\mathcal{R}&=&4\\tilde{\\Lambda}^2+\\eta\\left(4\\tilde{\\Lambda}^2 + \\frac{36G^2M^2}{\\tilde{\\Lambda}^2r^6}\\right)\\;,\\nonumber\\\\\n\\mathcal{R}^{\\alpha\\beta}\\mathcal{R}_{\\alpha\\beta}&=& 4\\tilde{\\Lambda}^4+\\eta\\left(\\frac{72G^2M^2}{r^6}+8\\tilde{\\Lambda}^4\\right)\\;.\n\\end{eqnarray}\nIt is then clear that a physical singularity exists at $r=0$, as expected. 
Interestingly, the perturbative contributions to $\\mathcal{R}$ and $\\mathcal{R}^{\\alpha\\beta}\\mathcal{R}_{\\alpha\\beta}$ are singular, while their zeroth order terms are not,\n\\begin{eqnarray}\\label{scalar-ricci-invar}\n\\lim_{r\\rightarrow 0}\\mathcal{R}&\\rightarrow &\\eta\\infty\\nonumber\\\\\n\\lim_{r\\rightarrow 0}\\mathcal{R}^{\\alpha\\beta}\\mathcal{R}_{\\alpha\\beta}&\\rightarrow &\\eta\\infty ~.\n\\end{eqnarray}\nThe physical singularity expressed by \\eqref{K-invar} and \\eqref{scalar-ricci-invar} is in agreement with the standard results obtained in Einsteinian gravity.\n\n\\newpage\n\n\\section{Some thermodynamical aspects}\\label{therm}\n\nIn order to deepen our analysis of the solutions found so far, we now express the main thermodynamical quantities related to the horizons. To this end, we need to compute the surface gravities defined through\\footnote{Since the Killing field is just $r$-dependent, we can use these simplified expressions rather than the general formula $\\zeta^\\mathfrak{a}\\nabla_\\mathfrak{a}\\zeta^\\mathfrak{b}= \\kappa\\zeta^\\mathfrak{b}$, where $\\zeta^\\mathfrak{a}$ is the Killing vector field normal to the horizon.} \\cite{Cardoso:2003sw}\n\\begin{eqnarray}\n\\kappa_{b}&=&-\\frac{1}{2}\\frac{d}{dr}u(r)\\Bigr|_{r=r_b}~,\\label{surf-grav-bh}\\\\\n\\kappa_{c}&=&\\frac{1}{2}\\frac{d}{dr}u(r)\\Bigr|_{r=r_c}~,\\label{surf-grav-cosm}\n\\end{eqnarray}\nwhere $u(r)\\equiv e^{-2\\beta}$. The surface gravity is directly related to the Hawking temperature,\n\\begin{equation}\\label{T-hor}\nT=\\frac{\\kappa}{2\\pi}~.\n\\end{equation}\nThe second law of the thermodynamics of black holes states that the area of the surface of the event horizon cannot decrease under any physical process \\cite{Wald:1984rg}. The corresponding entropy is calculated by\n\\begin{equation}\\label{S-hor}\nS=\\frac{A}{4G}\\equiv\\frac{\\pi r^2}{G}~,\n\\end{equation}\nwhere $r$ is the horizon radius and $A$ is the area of this surface. In this section, the aim is to compare our results with those from the standard literature on black holes and cosmological horizons \\cite{Gibbons:1977mu,Wald:1984rg}.\n\nThe exact solution yields results for the entropy similar to those of the de Sitter spacetime, with a simple shift from $\\tilde{\\Lambda}^2$ to $\\Upsilon$. In the region $r\\gg 2GM$, it is remarkable that the perturbative and exact entropies are connected up to corrections in $\\eta$. Of course, in the limit $\\eta\\rightarrow 0$, both entropies are matched.\n\n\\subsection{Surface gravities and entropies of the perturbative solutions}\\label{Thermo-pert}\n\nWe calculate the surface gravity related to each horizon and analyze its behavior as the mass $M$ changes. In the perturbative case we have two horizons, $r_b$ and $r_c$, which are explicitly written in \\eqref{HZs}. Let us start the analysis with the event horizon. From Eq.~\\eqref{surf-grav-bh} we find\n\\begin{equation}\\label{surf-grav-bh-a}\n\\kappa_b\\approx -\\frac{1}{3}\\tilde{\\Lambda}^2 r_{02} + \\frac{2GM}{r_{02}^2} + \\eta\\left[-2GM\\frac{r_{12}}{r_{02}^3} -\\frac{1}{3}\\tilde{\\Lambda}^2r_{12} + \\frac{GM}{r_{02}^2}-\\frac{1}{3}\\tilde{\\Lambda}^2 r_{02} - \\frac{12G^2M^2}{\\tilde{\\Lambda}^2}\\frac{1}{r_{02}^5} \\right]~.\n\\end{equation}\nThe behavior of $\\kappa_b$ is qualitatively displayed in Figure~\\ref{graphkb}. We observe that increasing $M$ decreases $\\kappa_b$; a simple numerical sketch of this dependence is given below. 
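The sketch is purely illustrative and is not the computation used for the figures: it adopts units with $G=\\tilde{\\Lambda}=1$ (so $x=3GM\\tilde{\\Lambda}=3M$), takes $\\mathcal{C}_{12}$ proportional to $M$ with a unit coefficient (the linear dependence assumed above), and evaluates $r_{02}$, $r_{12}$ and $\\kappa_b$ from Eqs.~\\eqref{Hroots-0}, \\eqref{Hroots-1} and \\eqref{surf-grav-bh-a}.
\\begin{verbatim}
import numpy as np

# Illustrative sketch only; units with G = tilde_Lambda = 1, so x = 3*M.
def horizons_zeroth(x):
    xi = np.arccos(x) / 3.0
    r01 = np.cos(xi) + np.sqrt(3.0) * np.sin(xi)   # cosmological horizon
    r02 = np.cos(xi) - np.sqrt(3.0) * np.sin(xi)   # event horizon
    return r01, r02

def first_order_shift(r0, M, C12):
    # Eq. (Hroots-1) with G = tilde_Lambda = 1
    return (-6.0 * M**2 / r0**3 + C12 + r0**3 / 3.0) / (3.0 * (1.0 - r0**2 / 3.0))

def kappa_b(x, eta=1.0e-3):
    M = x / 3.0
    C12 = M                        # assumed linear dependence on M
    _, r02 = horizons_zeroth(x)
    r12 = first_order_shift(r02, M, C12)
    k0 = -r02 / 3.0 + 2.0 * M / r02**2
    k1 = (-2.0 * M * r12 / r02**3 - r12 / 3.0 + M / r02**2
          - r02 / 3.0 - 12.0 * M**2 / r02**5)
    return k0 + eta * k1

for x in (0.2, 0.5, 0.8):
    print(x, kappa_b(x))           # kappa_b decreases as x (i.e. M) grows
\\end{verbatim}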
Accordingly, from Eq.~\\eqref{T-hor}, it is clear that the temperature $T_b=\\kappa_b\/2\\pi$ has the same behavior as $\\kappa_b$.\n\\begin{figure}[htb]\n\t\\centering\n\t\t\\includegraphics[width=0.6\\textwidth]{kb.png}\n\t\t\\caption{Surface gravity at the event horizon of the spherical mass distribution $M$. The dashed and thick curves stand for the standard surface gravity of a black hole and the perturbative surface gravity, respectively. For the thick curve we adopted $\\eta=10^{-3}$.}\n\\label{graphkb}\n\\end{figure}\n\nFor the cosmological horizon, Eq.~\\eqref{surf-grav-cosm} provides\n\\begin{equation}\\label{surf-grav-c}\n\\kappa_c\\approx \\frac{1}{3}\\tilde{\\Lambda}^2 r_{01} - \\frac{2GM}{r_{01}^2} + \\eta\\left[2GM\\frac{r_{11}}{r_{01}^3} +\\frac{1}{3}\\tilde{\\Lambda}^2r_{11} - \\frac{GM}{r_{01}^2}+\\frac{1}{3}\\tilde{\\Lambda}^2 r_{01} + \\frac{12G^2M^2}{\\tilde{\\Lambda}^2}\\frac{1}{r_{01}^5} \\right]~.\n\\end{equation}\nThe plot of the cosmological horizon surface gravity, as a function of $M$, is displayed in Figure~\\ref{graph-kc}. The behavior of the surface gravity is essentially the same in both cases. According to Eq.~\\eqref{T-hor}, the respective temperature $T_c=\\kappa_c\/2\\pi$ has the same qualitative behavior as $\\kappa_c$.\n\n\\begin{figure}[htb]\n\t\\centering\n\t\t\\includegraphics[width=0.6\\textwidth]{kc.png}\n\t\\caption{Surface gravity of the cosmological horizon. $r_c(x(M))$ is in units of $\\tilde{\\Lambda}^{-1}$, $3GM\\tilde{\\Lambda}\\equiv x$ and $\\eta=10^{-2}$. The dashed curve represents the surface gravity of the cosmological horizon obtained from the Schwarzschild-de Sitter geometry. The thick curve is the cosmological horizon surface gravity with the correction from the quadratic term contribution.}\n\\label{graph-kc}\n\\end{figure}\n\n\\newpage\n\nThe entropies can be computed from Eq.~\\eqref{S-hor}, providing\\footnote{We will not display the full expressions for the entropies \\eqref{S-rb} and \\eqref{S-rc} due to their length.}\n\n\\begin{eqnarray}\nS_{b}&=&\\frac{\\pi r_b^2}{G}\\approx \\frac{\\pi r_{02}^2}{G}\\left[1+2\\eta\\frac{r_{12}}{r_{02}}\\right]~,\\label{S-rb}\\\\\nS_c&=&\\frac{\\pi r_c^2}{G}\\approx \\frac{\\pi r_{01}^2}{G}\\left[1+2\\eta\\frac{r_{11}}{r_{01}}\\right]~,\\label{S-rc}\n\\end{eqnarray}\nwhere $r_{11}$ and $r_{12}$ are the corrections to the horizons due to the quadratic curvature term. It is clear from the plots that $S_b$ increases with the mass while $S_c$ decreases with the mass. We remark that these entropy behaviors are in agreement with \\cite{Gibbons:1977mu,Frolov:1981mz}, with, of course, a small deviation due to the perturbative approximation. The plots with the qualitative behavior of such entropies are displayed in Figure~\\ref{graph-sb} and Figure~\\ref{graph-sc}.\n\n\\begin{figure}[htb]\n\t\\centering\n\t\t\\includegraphics[width=0.6\\textwidth]{sb.png}\n\t\t\\caption{Entropy of the event horizon of the spherical mass distribution. $S_b(x(M))$ is in units of $\\tilde{\\Lambda}^{-1}$, $3GM\\tilde{\\Lambda}\\equiv x$ and $\\eta=10^{-2}$. The dashed curve represents the entropy of the standard black hole horizon obtained from the Schwarzschild-de Sitter geometry. The thick curve is the entropy with the correction from the quadratic term contribution.}\n\t\t\\label{graph-sb}\n\\end{figure}\n\\begin{figure}[htb]\n\t\\centering\n\t\t\\includegraphics[width=0.6\\textwidth]{sc.png}\n\t\t\\caption{Entropy of the cosmological horizon. 
$S_c(x(M))$ is in units of $\\tilde{\\Lambda}^{-1}$, $3GM\\tilde{\\Lambda}\\equiv x$ and $\\eta=10^{-1}$. The dashed curve represents the entropy of the standard cosmological horizon obtained from the Schwarzschild-de Sitter geometry. The thick curve is the entropy with the correction from the quadratic term contribution.} \\label{graph-sc}\n\\end{figure}\n\n\\section{Conclusions}\\label{rmks}\n\nIn this work we found static spherically symmetric solutions to a generalized gravity action in the EC formalism for vanishing-torsion situations. The model, described by the action \\eqref{ym-map-grav-obs}, is composed of the usual EH term, the cosmological constant term, a curvature squared term, and a quadratic torsion term. Our results are summarized here:\n\\begin{itemize}\n\\item The coupled system of differential equations \\eqref{edo-t}-\\eqref{edo-scalarcurv} is exactly solved. The result is a de Sitter spacetime described by \\eqref{exact-sol-pm} with two possible effective cosmological constants described in \\eqref{Ypm}. Both cases are compositions of the parameters $\\tilde{\\Lambda}$ and $\\Lambda$. This result is important since the system \\eqref{edo-t}-\\eqref{edo-scalarcurv} is over-determined.\n\n\\item For the case where $\\tilde{\\Lambda}\\ll\\Lambda$, the effective cosmological constants are simplified (at first order) to Eq.~\\eqref{Yappr}. Hence, two regimes are possible, one with a strong curvature and another with a weak curvature.\n\n\\item By treating the quadratic curvature term as a perturbation with respect to the rest of the terms, a perturbative solution is found. For that, Eq.~\\eqref{edo-scalarcurv} can be neglected (this situation corresponds to the case where torsion is set to zero at the action level). Hence, the smaller system \\eqref{edo-t}-\\eqref{edo-theta-phi} is not over-determined. The solution \\eqref{pert-sol-4th} is a deformation of the usual Schwarzschild-de Sitter solution.\n\n\\item The appropriate limits of the perturbative solution are all consistent. In particular, in the limit $r\\rightarrow\\infty$, the asymptotic spacetime is a de Sitter spacetime. The same spacetime is obtained from the exact solution for $\\tilde{\\Lambda}\\ll\\Lambda$. See \\eqref{pert-ds-far}-\\eqref{Y-notrunc}.\n\n\\item The usual singularity $r=0$ is present in the perturbative solution and no other singularities appear.\n\n\\item The horizons are obtained as perturbed deformations of the usual event and cosmological horizons of the Schwarzschild-de Sitter spacetime, namely \\eqref{HZs}.\n\n\\item Moreover, the surface gravities and entropies associated with the horizons are also computed. The results are perturbations around the usual surface gravities and entropies of the Schwarzschild-de Sitter spacetime. See \\eqref{surf-grav-bh-a}-\\eqref{S-rc}.\n\n\\item The qualitative behavior of the horizons, surface gravities and entropies as functions of the mass $M$ of the matter distribution is also discussed. In all cases, the larger the mass, the better the agreement between the perturbative solution and the Schwarzschild-de Sitter one. The deviations, \\emph{i.e.}, the perturbative corrections, become stronger as the mass decreases.\n\\end{itemize}\n\nAt this point it is convenient to compare our perturbative solution \\eqref{pert-sol-4th} with the one encountered by K.~S.~Stelle in \\cite{Stelle:1977ry}. 
In that work, a general theory of gravity in the second order formalism with higher derivative terms (quadratic terms in the Riemman tensor, Ricci tensor and scalar curvature) is considered. A perturbative static and spherically symmetric solution is found. At first sight one could argue that our result should be just a particular case of Stelle`s result for a suitable choice of parameters. However, our result is strongly different. First of all, the result in \\cite{Stelle:1977ry} is a perturbation around Minkowski spacetime. Their field equations are not perturbed equations as ours. As a consequence, their perturbed functions are different from ours. In fact, our perturbations consist on corrections of the Schwarzschild-de Sitter solution due to the quadratic curvature term while the solution in \\cite{Stelle:1977ry} is a perturbation around the Minkowski spacetime, independently of the magnitude of the higher order derivative terms. Hence, to our knowledge, our results are genuine novel solutions. Moreover, it is known that the field equations in the second order formalism are significantly different from the equations obtained in the first order formalism \\cite{Exirifard:2007da,Borunda:2008kf,Capozziello:2010ih}, except for the special case of GR.\n\nThe study of static spherically symmetric solutions within action \\eqref{ym-map-grav-obs} inside the matter distribution is currently under investigation for a perfect fluid distribution \\cite{Silveira:2017xxx}. Moreover, solutions for spherically symmetric matter distributions in the presence of electromagnetic fields and rotation will be analyzed as well.\n\nFinally, let us remark that the generalization of our results by considering non-vanishing torsion will also be studied. In this case, the system \\eqref{eq:eq_mov1}-\\eqref{eq:eq_mov2} have more degrees of freedom, a property that opens possibility of new solutions than the ones here found.\n\n\\section*{Acknowledgments}\n\nThe Conselho Nacional de Desenvolvimento Cient\\'ifico e Tecnol\\'ogico (CNPq-Brazil), The Coordena\\c c\\~ao de Aperfei\\c coamento de Pessoal de N\\'ivel Superior (CAPES), the Pr\\'o-Reitoria de Pesquisa, P\\'os-Gradua\\c c\\~ao e Inova\\c c\\~ao (PROPPI-UFF) and Centro Brasileiro de Pesquisas F\\'isicas are acknowledge for financial support.\n\n\n\n\n\\bibliographystyle{unsrt}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{sec:1}\nIn a series of recent experiments\n\\cite{rai04,rai02,cze01,yuk98,rai02a,bon03} significant\ndifferences had been found between the low-energy cross sections\nof the $d+d\\rightarrow t+p$ reaction when the target nuclei are\nembedded in a metallic or an insulator environment. Apparently,\nthe source of this effect is the conduction electron distribution\nin the metallic lattice and its screening effects on the Coulomb\nbarrier around the incident and target nuclei \\cite{ass87,ass87a}.\nHints for similar behavior were found also in the electron capture\nrate of Be$^{7}$ \\cite{nir07,wan06}. 
While in some of these\nexperiments temperature dependence was also claimed\n\\cite{rai04,rai02}, no such dependence was found in others\n\\cite{nir07,sev07}.\n\nThis phenomenon can be briefly introduced as follows: The\ntransparency, $T(E_{k})$, of a potential barrier to an incident\nparticle having energy $ E_{k}$ is given by \\cite{fer49},\n\\begin{equation}\\label{eq1}\n\\ T(E_{k})=\\exp \\left\\{ -2\\int_{r_{N}}^{b}\\sqrt{\\frac{2M}{\\hbar ^{2}}%\n\\left[ E_{p}(r)-E_{k}\\right] \\,}dr\\right\\}\n\\end{equation}\nwhere $M$ is the reduced mass of the two particles, $r_{N}$ - the\nnuclear radius, $b$ - the classical turning point (CTP) and\n$E_{p}(r)=ze\\,V(r)$ is the potential energy of the incident\nparticle in the potential, $V(r)$, generated by the target\nnucleus. $Ze$ and $ze$ are charges of the target and incident\nnuclei, respectively.\n\nIf a pure Coulomb potential is substituted for the interaction potential, $%\nV(r)=V_{C}(r)=Ze\\,\/\\,r,$ Eq. (1) reproduces the Sommerfeld factor,\n\n\\begin{eqnarray}\\label{eq2}\nT_{C}(E_{k}) &=& \\exp \\left\\{\n-2\\int_{r_{N}}^{b}\\sqrt{\\frac{2M}{\\hbar\n^{2}}\\left[ \\frac{Zze^{2}}{r}-E_{k}\\right] \\,}dr\\right\\} \\\\\n &=& \\exp \\left\\{ -2\\pi \\eta\n\\right\\} \\nonumber\n\\end{eqnarray}\n\\begin{equation}\\label{eq3}\n\\eta (E_{k})=\\,\\frac{Zz\\,e^{2}}{\\hbar v_{k}}\n\\end{equation}\nIn (\\ref{eq3}) $v_{k}=\\sqrt{2E_{k}\/M}$ is the velocity of the\nincident particle. If the target nucleus is embedded in a solid\ntarget, the potential generated by nearby electrons, in addition\nto the bare Coulomb potential, has to be included as well,\n\\begin{equation}\\label{eq4}\nV(r)=V_{n}(r)+V_{e}(r)=\\frac{Ze}{r}+V_{e}(r)\n\\end{equation}\n\nFor low energy reactions the electronic part of the potential,\n$V_{e}(r)$, has a small but important contribution to the total\npotential. This contribution is amplified by the exponential\nfactor of the transparency.\n\nIn general, the electronic potential generated in a metallic\nlattice by the conduction electrons is a slowly varying function.\nEven for low energy particles the CTP of the projectile is so\nclose to the target nucleus that one can fairly assume that along\nthe integration path in (\\ref{eq1}) the electronic potential\npractically equals its value at the target nucleus, $V_{e}(r)\\cong\nV_{e}(0)$. When this approximation is inserted back into\n(\\ref{eq1}) it gets the form,\n\\begin{equation}\\label{eq5}\nT(E_{k})\\cong \\exp \\left\\{ -2\\int_{r_{N}}^{b}\\sqrt{\\frac{2M}{\\hbar\n^{2}}\\left[ \\frac{Zz\\,e^{2}}{r}+zeV_{e}(0)-E_{k}\\right]\n\\,}dr\\right\\}\n\\end{equation}\n\nThis is equivalent to a reaction in a pure Coulomb potential with\nthe kinetic energy, $E_{k}$, replaced by $E_{k}+U_{e}$, where\n\\begin{equation}\\label{eq6}\nU_{e}=ze\\,|V_{e}(0)| ~~~~~~~~(V_{e}(0)<0)\n\\end{equation}\n\nThe authors of Ref. \\cite{rai04} tried to explain the experimental\nresults by means of the Debye statistical model, which neglects\nthe electrons degeneracy. 
A difficulty of their treatment is also\nthe point that under the experimental conditions the Debye radius\nis significantly smaller than the atomic radius, therefore the\nbasic assumptions of the model are not satisfied.\n\nThe aim of the present paper is to compute the electrons spatial\ndistribution in insulator and metallic environments for the\n$d+d\\rightarrow t+p$ reaction $(Z=z=1)$, using a three-dimensional\nThomas-Fermi (TF) model, which accounts for the electrons'\ndegeneracy, therefore presumably better describing the experimental\nconditions in a solid host of local high\ndensity conditions (\"strongly-coupled plasma\").\nUsing the TF model we try\nto infer the change in the transparency between metallic and\ninsulating environments. To illustrate our method, we shall focus\non the case of deuterium embedded in a copper lattice for which\nexperimental results are available \\cite{rai04}.\n\n\\section{The model}\\label{sec:2}\n\\subsection{Basic data}\\label{subsec:2.1}\nIn the experiment, Ref. \\cite{rai04}, the deuterium atoms consist\nonly a small part of the target - about 11 copper atoms for each\ndeuterium atom \\cite{rai04} - one can, therefore, safely assume\nthat the presence of the deuterium atoms does not significantly\nmodify the copper lattice properties.\n\nOur first step is to find the volume available for the deuterium\natoms. This is carried out by means of the QEOS method\n\\cite{mor88}, which is frequently used to find the\nequation-of-state of various materials, and was found to give\naccurate results. For our purposes its main advantage is that QEOS\ncan provide the volume per atom separately for each component in a\nmixture of materials. Assuming a deuterium\/copper solid material\nwith 9\\% deuterium and 91\\% copper at 8.93 $g\/cm^{3}$ (solid\ncopper specific gravity), QEOS predicts that the volumes of the\ndeuterium (copper) atoms in the target are $\nVol_{D\\,}\\,(Vol_{Cu})=1.79\\,(11.7)^{.}10^{-24}\\;\\,cm^{3}\/atom$,\n\\textit{i.e.} , the average volume available for a copper atom is\nby a factor of $\\sim $ 6.5 larger than that of a deuterium atom.\nIf these atomic volumes are assumed to have the shape of a\nspherical enclosure, called in the following the \\textit{Ion\nSphere }(IS)\\textit{, }then the corresponding IS radii are $\nR_{i,D}\\,(R_{i,Cu})=0.717\\,(1.36)\\,10^{-8}\\,cm.$\n\nThe atomic structure of copper is $[Ar]\\,3d^{10}\\,4s$. In a pure\ncopper metal lattice there are at most 11 electrons per atom in\nthe conduction band. Their average density is\n$n_{e}=11\\,\/\\,11.7^{.}10^{-24}=9.4^{.}10^{23}\n\\,electrons\\,\/\\,cm^{3}$. If the small amount of deuterium atoms\ndoes not significantly change this distribution, then the same\ndensity prevails within the deuterium IS as well, generating\n$n_{e}Vol_{D\\,}=9.4^{.}10^{23} \\times 1.79^{.}10^{-24}=1.7$ extra\nelectrons inside the deuterium IS. Thus, together with its own\nelectron, the deuterium IS contains, on the average, $ N_{e}=2.7$\nelectrons. Changing the Cu\/D target density by $\\pm 10\\%$ changes\nthis quantity only by $\\pm $5\\%. 
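The arithmetic behind these numbers is elementary and can be checked directly; the snippet below (ours, for illustration only) simply repeats it, taking the QEOS volumes quoted above at face value.
\\begin{verbatim}
# Back-of-the-envelope check of the electron count in the deuterium ion sphere.
vol_D  = 1.79e-24          # cm^3 per deuterium atom (QEOS)
vol_Cu = 11.7e-24          # cm^3 per copper atom (QEOS)
n_cond = 11.0 / vol_Cu     # conduction electrons per cm^3 (11 per Cu atom)
extra  = n_cond * vol_D    # conduction electrons inside the deuterium IS
N_e    = 1.0 + extra       # plus the deuterium atom's own electron
print(f"{n_cond:.2e}", round(extra, 2), round(N_e, 2))   # ~9.40e+23, 1.68, 2.68
\\end{verbatim}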
In our computations we have used\n$N_{e}$, the number of electrons in the deuterium IS, as an\nadjustable parameter.\n\n\\subsection{The modelling of the electronic potential in insulating and\nmetallic environments}\\label{subsec:2.2} The electronic potential\nof a free deuterium atom in high density insulating environment\nwas calculated by the computer program RELDIR, which solves the\nrelativistic Dirac equation of a deuterium (or any other) atom in\na finite radius Ion Sphere with any number of free electrons\nwithin the IS. The results of this computation are the bound and\nfree electrons wavefunctions, their spatial density, and the\ncorresponding potentials.\n\nAs a first step, the computation was carried out for a solid\ndensity deuterium lattice, which is known to be an insulator,\n(0.202$\\,g\/cm^{3},$ 6.08$^{.}10^{22}\\,atoms\\,\/\\,cm^{3}$,\n$R_{i}=1.58^{.}10^{-8}cm$). From the result of RELDIR for the\nelectronic potential one obtains $U_{e}=20\\,eV$, in accordance\nwith the experimental results of Ref. \\cite{rai04} in an\ninsulating environment. This result was used as the basic\nreference for comparison.\n\nThe evaluation of the electronic potential near a deuterium atom\nembedded in a metallic lattice was carried out by means of the\nThomas-Fermi (TF) model in conjunction with the Born-Oppenheimer\n(BO) approximation. Owing to the high density environment, a\nFermi-Dirac statistics for the electrons, as used by the TF model,\nprovides a better approximation for the electrons spatial and\nenergy distributions inside the copper lattice than the Debye\nmodel \\cite{rai04,rai02} which is more appropriate for\nweakly-coupled low-density matter. In this context, the physical\nmeaning of the BO approximation is that nearby electrons rapidly\nadjust their local distribution to any change in the nuclei\npositions, so that the electron distribution depends only on the\ninstantaneous distance between the two nuclei, but not on their\nrelative motion.\n\nThe TF model for the electrons consists of a set of three\nequations. The first one is the Poisson equation,\n\\begin{equation}\\label{eq7}\n\\overrightarrow{\\mathbf{\\nabla }}V_{e}(\\mathbf{r})=4\\pi\ne\\,n_{e}(\\varepsilon_{F};\\mathbf{r})\n\\end{equation}\nwhose solution is the electronic potential, $V_{e}(\\mathbf{r})$,\nwhen the electrons spatial density,\n$n_{e}(\\varepsilon_{F};\\mathbf{r}),$ is known. In (\\ref{eq7})\n$\\varepsilon_{F}$ is the Fermi energy. The second equation is the\nFermi-Dirac distribution of the electrons inside the IS,\n\\begin{eqnarray}\\label{eq8}\nn_{e}(\\varepsilon_{F};\\mathbf{r})=\\frac{1}{2\\pi ^{2}}\\,\\left(\n\\frac{2m_{e}\\,kT}{\\hbar ^{2}}\\right) ^{3\/2}\\, \\nonumber \\\\ \\times\nF_{1\/2}\\left( \\frac{\\varepsilon\n_{F}+eV(\\mathbf{r)}}{kT};\\left\\vert\n\\frac{eV(\\mathbf{r)}}{kT}\\right\\vert \\right)\n\\end{eqnarray}\nwhich provides the electron density as function of the total\npotential $V( \\mathbf{r})=V_{n}(\\mathbf{r})+V_{e}(\\mathbf{r}).$ In\n(\\ref{eq8}), $kT$ \\ is the temperature of the target material\n($=300\\,K)$ in energy units, and,\n\\begin{equation}\\label{eq9}\nF_{1\/2}\\left( x;\\beta \\right) =\\int_{\\beta }^{\\infty }\\frac{\ny^{1\/2}\\,dy}{1+\\exp \\left\\{ y-x\\right\\} }\n\\end{equation}\nis the \\textit{incomplete Fermi-Dirac integral } \\cite{sal98}. The\nboundary condition applied to Eqs. (\\ref{eq7}) and (\\ref{eq8}) is\nGauss' theorem, $\\oint_{IS\\,surface}\\mathbf{E}\n^{.}\\mathbf{dS}=4\\pi e\\,N_{e}$. 
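For reference, the incomplete Fermi-Dirac integral \\eqref{eq9} can be evaluated by direct numerical quadrature; the short sketch below (not the code used for the results reported here) illustrates this and recovers the expected degenerate-limit scaling $F_{1\/2}\\left(x;0\\right)\\simeq\\frac{2}{3}x^{3\/2}$ at large $x$.
\\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def incomplete_fd_half(x, beta):
    # Direct quadrature of Eq. (9); illustrative only.
    integrand = lambda y: np.sqrt(y) / (1.0 + np.exp(min(y - x, 700.0)))
    value, _ = quad(integrand, beta, np.inf, limit=200)
    return value

print(incomplete_fd_half(50.0, 0.0))    # close to (2/3)*50**1.5 ~ 235.7
print(incomplete_fd_half(50.0, 10.0))   # the lower limit beta removes part of the integral
\\end{verbatim}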
Finally, the Fermi energy,\n$\\varepsilon_{F}$, is computed from the condition that the total\ncharge inside the IS equals $ N_{e}$,\n\\begin{equation}\\label{eq10}\nN_{e}=\\int_{IS\\;\\mbox{volume}}n_{e}(\\varepsilon_{F};\\mathbf{r}\n)\\,d^{3}r\n\\end{equation}\n\nSimultaneous solution of Eqs. (\\ref{eq8})-(\\ref{eq10}) yields the\nelectronic potential, $ V_{e}(\\mathbf{r})$, the electron density,\n$n_{e}(\\varepsilon_{F};\\mathbf{r} ) $, and the Fermi energy,\n$\\varepsilon_{F}$. When these solutions are inserted back into\n(\\ref{eq1}) and (\\ref{eq4}), one finds the CTP, $b(N_{e},E_{k})$,\nand the transparency, $T(N_{e},E_{k})$ as function of the number\nof electrons inside the IS and the incident particle's kinetic\nenergy.\n\n\\subsection{Computational details}\\label{subsec:2.3}\nThe solution of (\\ref{eq7}) can be rewritten as \\cite{sal94},\n\\begin{eqnarray}\\label{eq11}\nV_{e}(r,\\theta ,\\varphi ) = -e\\int \\int\n\\int_{IS\\;\\mbox{volume}}\\frac{ n_{e}(r^{\\prime },\\theta ^{\\prime\n})}{\\left\\vert \\mathbf{r-r}^{\\prime }\\right\\vert }d^{3}r \\\\\n = -e\\int_{r_{N}}^{R_{i}}r^{\\prime 2}\\,dr^{\\prime\n}\\,\\int_{\\theta =0}^{\\pi }\\sin \\theta ^{\\prime }\\,d\\theta ^{\\prime\n}\\, \\nonumber \\\\ \\times \\int_{\\varphi =0}^{2\\pi }d\\varphi ^{\\prime\n}\\,\\frac{n_{e}(r^{\\prime },\\theta ^{\\prime })}{ \\left\\vert\n\\mathbf{r-r}^{\\prime }\\right\\vert } \\nonumber\n\\end{eqnarray}\nObviously, the electron distribution has cylindrical symmetry\naround the line connecting the two nuclei, therefore\n$n_{e}(r^{\\prime },\\theta ^{\\prime })$ is independent of $\\varphi\n^{\\prime }.$ Moreover, the charge distribution has also a\nreflection symmetry around the plane perpendicular to this line at\nhalfway between the nuclei. Using these symmetry properties, the\nintegration over $\\varphi ^{\\prime }$ can be carried out\nanalytically, thereby reducing the triple integral in (\\ref{eq11})\nto a double one, see details in Ref. \\cite{sal94}. Finally,\ntransforming the coordinate system center onto the target nucleus,\nEq. 
(\\ref{eq11}) gets the form,\n\\small{\n\\begin{eqnarray}\\label{eq12}\nV_{e}(R,\\mu ) = -4e\\int_{r_{N}}^{R_{i}}R^{\\prime 2}\\, dR^{\\prime\n}\\, \\int_{\\mu = \\mu _{\\min }}^{1}d\\mu ^{\\prime }\\,n_{e}(R^{\\prime\n},\\mu ^{\\prime })\\, \\nonumber \\\\\n\\times \\left[\\frac{1}{\\sqrt{A_{-}+B}}K\\left( \\sqrt{\\frac{\n2B}{A_{-}+B}}\\right) + \\frac{1}{\\sqrt{A_{+}+B}}K\\left(\n\\sqrt{\\frac{2B}{A_{+}+B }}\\right) \\right] \\nonumber\\\\\n\\end{eqnarray}\n}\n\\begin{eqnarray}\\label{eq13}\n A_{\\pm } = \\left[ \\left( b\/2+R\\cos \\theta \\right) \\pm\n\\left( b\/2+R^{\\prime }\\cos \\theta ^{\\prime }\\right) \\right] ^{2}\n\\nonumber \\\\\n + ~(R\\sin \\theta )^{2}+\\left( R^{\\prime\n}\\sin \\theta ^{\\prime }\\right) ^{2}\n\\end{eqnarray}\n\\begin{equation}\\label{eq14}\nB=2R\\,R^{\\prime }\\,\\sin \\theta \\,\\sin \\theta ^{\\prime }\n\\end{equation}\n\nIn (\\ref{eq12}) $R$ is the radial distance of the field point from\nthe target nucleus, $\\mu =\\cos \\theta $, $K(x)$ is the Jacobi\nelliptic integral \\cite{abr72}, and $\\mu _{\\min }=\\max \\left(\n-b\\,\/\\,2R^{\\prime },\\,-1\\right) .$ The reduction of the triple\nintegral in (\\ref{eq11}) to the shape of (\\ref{eq12}) reduced the\ncomputational resources and improved the accuracy of the numerical\nprocedure.\n\nThe numerical process was greatly complicated by the fact that the\nCTP, $b$, not only depends on the electron density,\n$n_{e}(\\varepsilon_{F};\\mathbf{r} ) $, but also determines it. It\nwas, therefore, necessary to carry out a doubly iterative method\nfor the computations. The internal iterations calculated the\nelectron density and potential through equations\n(\\ref{eq12})-(\\ref{eq14}) and (\\ref{eq8})-(\\ref{eq10}). When these\nhave been found, a second iterative procedure was applied to solve\n$b$ from the condition,\n\\begin{equation}\\label{eq15}\nE_{p}(b)=E_{k}\n\\end{equation}\n\nThis double iterative method was continued until convergence was\nachieved for all the parameters.\n\n\\section{Results}\\label{sec:3}\n\\subsection{The electron density}\\label{sec:3.1}\nFigure~\\ref{fig:1} shows the density of the electrons around the\nincident and the target nuclei, when the center-of-mass energy of\nthe two nuclei is 4000~eV. The figure clearly shows the\npolarization of the electrons near the nuclei. We recall that the\nTF model predicts that the electron density diverges as $r^{-3\/2}$\nnear the nucleus \\cite{sal98}.\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=\\columnwidth]{Fig_1}\n\\caption{The distribution of the electrons around the two nuclei.}\n\\label{fig:1}\n\\end{figure}\n\nThe TF model predicts an accumulation of the electrons along the\nline connecting the nuclei, with a saddle point at halfway between\nthem, a fact which enhances the screening effect.\n\n\\subsection{The electronic potential}\\label{sec:3.2}\nFigure~\\ref{fig:2} shows the spatial distribution of the\nelectronic potential in the same region as in Fig.~\\ref{fig:1}. In\nthe interesting part of the field, \\textit{i.e.} in the space\naround the two nuclei, the potential has very slow variation,\nwhich justifies the approximation $V_{e}(r)\\cong V_{e}(0)$, to an\naccuracy of a few percents.\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=\\columnwidth] {Fig_2}\n\\caption{The distribution of the electronic potential, $V_{e}(r)$,\naround the two nuclei.} \\label{fig:2}\n\\end{figure}\n\nThe value of the potential on the nuclei, $U_{e}=|eV_{e}(0)|$, is\ndisplayed on Fig.~\\ref{fig:3}. 
This is, in fact, the quantity that\nis measured experimentally. The figure shows that $U_{e}$ fits\nexcellently a linear behavior as function of $N_{e}$, with a small\ncontribution from the incident energy,\n\\begin{figure}[h]\n\\centering\n \\includegraphics[width=3in] {Fig_3}\n\\caption{The value of the electronic potential, $V_{e}(r)$, on the\ntarget nucleus.}\\label{fig:3}\n\\end{figure}\n\\begin{equation}\\label{eq16}\n3D:U_{e} = |eV_{e}(0)| = 27.2\\,N_{e}+80.3 +\n3.1^{.}10^{-4}E_{k}\\,~~eV\n\\end{equation}\n\nThe last term is always less than 4\\% in the range of interest. In\norder to compare the importance of a 3D modelling, we have\ndeveloped also a one-dimensional TF program that solves the same\nproblem in a radially symmetrical environment. The results of this\nprogram is also plotted on Fig.~\\ref{fig:3}. The TF 1D code\nyielded,\n\\begin{equation}\\label{eq17}\n1D:U_{e}=|eV_{e}(0)|=27.2\\,N_{e}\\,+33.6 ~eV\n\\end{equation}\n\nThe difference between these two results originates from the fact\nthat the electrons accumulation between the nuclei in a 3D model\nis larger than in the 1D case. This difference clearly indicates\nthe importance of a three-dimensional treatment of the problem.\n\nFor $N_{e}=3$, $E_{k}=4000\\,eV$ \\ Eq. (16) gives $U_{e}=163\\,eV$.\nThis has to be compared to the experimental result, $U_{e}=470\\pm\n50\\,eV$, which is by a factor of $2.9\\pm 0.3$ larger than the\nresult of the TF 3D model. It has also to be compared to\n$U_{e}=20\\,\\,eV$ as computed from RELDIR for the insulator case,\nsee above.\n\n\\subsection{The Fermi energy}\\label{sec:3.3}\nThe Fermi energy, $\\varepsilon_{F}$, strongly depends on the\nnumber of electrons in the IS, $N_{e}$, and is, of course,\nindependent on the incoming particle's energy. The behavior of\n$\\varepsilon_{F}$ vs. $N_{e}$ is illustrated in Fig.~\\ref{fig:4}.\nWithin the range of our computations $\\varepsilon_{F}$ turned out\nto be a linear function of $N_{e}$,\n\\begin{figure}[b]\n\\centering\n \\includegraphics[width=3in] {Fig_4}\n\\caption{Fermi energy as function of the number of electrons\ninside the ion sphere, for 3D and 1D Thomas-Fermi\nsimulations.}\\label{fig:4}\n\\end{figure}\n\\begin{equation}\\label{eq18}\n3D:\\quad \\varepsilon_{F}=-43.7+38.8\\,N_{e}\\qquad (eV)\n\\end{equation}\nwith a hint for a slight quadratic curvature. For the 1D case we\nhave found a significantly higher result,\n\\begin{equation}\\label{eq19}\n1D:\\quad \\varepsilon_{F}=-15.9+35.3\\,N_{e}\\qquad (eV)\n\\end{equation}\n\nIt should be emphasized, that the Fermi energy is positive (except\nfor the case of $N_{e}=1,$ namely, the case of an isolated\ndeuterium atom), and is much higher than the target temperature\n($\\varepsilon_{F}\\gg kT=300\\,K=0.025\\,eV)$, for all the cases.\nThis means that at the temperatures of the experiments (room\ntemperature and below) there is no reason for measurable\ntemperature variations. In fact, we have repeated our computations\nwith $T=3000\\,K$ , but this order of magnitude change in the\ntemperature modified the results by less than 0.5\\% - as expected.\n\n\\subsection{The classical turning point}\\label{sec:3.4}\nThe CTP fits, with excellent accuracy, a function of the form,\n\\begin{equation}\\label{eq20}\nb\\,\/\\,a_{0}=\\frac{e^{2}\\,\/\\,a_{0}}{E_{k}+U_{e}}\n\\end{equation}\nwhere $e^{2}\/\\,a_{0}=27.211\\,eV$, ($a_{0}$ is the Bohr radius).\nThis has exactly the form of \\ the Coulomb CTP, with the\nprojectile's kinetic energy modified according to Eq. (5). 
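These fitted forms are easy to evaluate; the minimal snippet below (illustrative only; energies in eV, lengths in units of the Bohr radius $a_0$) reproduces the value $U_{e}=163\\,eV$ quoted above from Eq.~\\eqref{eq16}, the corresponding 1D estimate from Eq.~\\eqref{eq17}, and the screened turning point from Eq.~\\eqref{eq20}.
\\begin{verbatim}
def U_e_3d(N_e, E_k):
    return 27.2 * N_e + 80.3 + 3.1e-4 * E_k      # Eq. (16), eV

def U_e_1d(N_e):
    return 27.2 * N_e + 33.6                     # Eq. (17), eV

def ctp_in_bohr(E_k, U_e):
    return 27.211 / (E_k + U_e)                  # Eq. (20), b / a_0

Ue = U_e_3d(3, 4000.0)
print(round(Ue, 1), round(U_e_1d(3), 1))         # 163.1 (3D) vs 115.2 (1D)
print(ctp_in_bohr(4000.0, Ue))                   # ~6.5e-3 Bohr radii
\\end{verbatim}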
Fitting\nthe computational results to the form of (20) provides,\n\\begin{equation}\\label{eq21}\nU_{e}=27.7\\,N_{e}+76.4+5.76^{.}10^{-4}\\,E_{k}\\quad eV\n\\end{equation}\n\nThe agreement of this $U_{e}$ with the values obtained from the\nelectronic potential, Eqs. (\\ref{eq16}), can be regarded as very\ngood. The difference between the two results stems from the fact\nthat $|eV_{e}(\\mathbf{r})|$ is not exactly constant along the line\nconnecting the two nuclei. And again, this result is in\ndiscrepancy with the experimental value, but is much higher than\nthe insulator case.\n\n\\subsection{The transparency}\\label{sec:3.5}\nAt low impact energies the screened astrophysical factor, $S(E)$,\nis enhanced relative to the unscreened one by\n $T(E)\/T_{C}(E)$ \\cite{rai04}, see Eqs. (\\ref{eq1}) and\n(\\ref{eq2}). Fig.~\\ref{fig:5} shows our results for this factor as\nfunction of the center of mass energy for the $d+d\\rightarrow t+p$\nreaction, when the target deuterium is embedded in a copper\n\\begin{figure}[b]\n\\centering\n \\includegraphics[width=3in] {Fig_5}\n\\caption{Comparison between the results of the 3D TF model\nfor the astrophysical factor, S(E), of the reaction $d+d \\rightarrow\nt+p$, when the target nuclei are embedded in a copper substrate,\nand the experimental results. The experimental points are taken from\n\\cite{rai04}. The dotted line presented the theoretically extrapolated\n\"bare nucleon\" S(E) factor}.\\label{fig:5}\n\\end{figure}\nlattice. The experimental results of Ref. \\cite{rai04} are also\ndisplayed in Fig.~\\ref{fig:5}. The dashed curve represents the\nbare $S(E)$ factor \\cite{rai04}. While we note that the value\n$N_{e}$ = 3 is a reasonable estimate for the conducting electrons in\nthe deuterium IS embedded in Cu, even for $N_{e}$ = 8 the 3-dimensional\nTF model underestimates the experimental low-energy\nincrease of the astrophysical factor.\nThe low $U_{e}$ predicted by the model relative to the experiment is another\nmanifestation of the same fact.\nIn fact, one needs 11 conduction electrons inside the deuterium IS\nto get agreement with the experimental results.\n\nIn the range of the energies used in our computations $U_{e}\\leq\n0.1E$. Denoting $\\xi =U_{e}\\,\/\\,E,$ $\\xi $ can be assumed to be a\nsmall quantity. Using first order expansion, an analytical\nestimate can be developed for the change in the value of\n$T(E)\\,\/\\,T_{C}(E)$ caused by a change $b\\rightarrow b\\,\/\\,(1+\\xi\n)$ in the CTP, see Eq. (\\ref{eq20}). The ratio of the screened to\nCoulomb transparencies becomes,\n\n\\begin{equation}\\label{eq22}\nT(E)\\,\/\\,T_{C}= exp\\left\\{ \\delta G\\right\\}\n\\end{equation}\nwhere,\n\\begin{equation}\\label{eq23}\n\\delta G=G-G_{C}=2e^{2}\\sqrt{\\frac{2M}{\\hbar ^{2}}}\\frac{\\xi\n}{\\sqrt{ E}}\\left( \\frac{\\pi }{4}-\\sqrt{\\frac{\\xi }{2}}\\right)\n\\end{equation}\nFor $E=4000\\,eV,$ $N_{e}=3$ and $U_{e}=163$ $eV$ this formula\npredicts,\n\\begin{equation}\\label{eq24}\nT(E)\\,\/\\,T_{C}= exp\\left\\{ \\delta G\\right\\} =1.45\n\\end{equation}\nin contrast to the experimental result of $T(E)\\,\/\\,T_{C}\\approx\n2.0\\pm 0.2\\,.$\n\n\\section{Discussion}\\label{sec:4}\nIn this paper we present computational results from a\nthree-dimensional TF model about the modification of low-energy\ncross-section for the $d+d\\rightarrow t+p$ reaction, when the\ntarget nucleus is embedded in a copper lattice. 
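The analytical estimate \\eqref{eq22}-\\eqref{eq23} is also simple to check numerically. In the sketch below (ours, for illustration) the mass $M$ is taken to be the deuteron rest mass, an assumption on our part that reproduces the value quoted in Eq.~\\eqref{eq24}; all energies are in eV.
\\begin{verbatim}
import math

ALPHA = 1.0 / 137.036        # fine-structure constant
M_C2  = 1.8756e9             # deuteron rest energy in eV (assumed for M)

def enhancement(E, U_e):
    # Eqs. (22)-(23): T(E)/T_C = exp(dG)
    xi = U_e / E
    dG = (2.0 * ALPHA * math.sqrt(2.0 * M_C2 / E) * xi
          * (math.pi / 4.0 - math.sqrt(xi / 2.0)))
    return math.exp(dG)

print(enhancement(4000.0, 163.0))    # ~1.45, cf. Eq. (24)
\\end{verbatim}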
Our computational\nresult of $U_{e}=163\\,eV$ is lower by a factor of $\\sim $3 from\nthe experimental result of $U_{e}=470\\pm 50\\,eV,$ but still is\nsubstantially higher than the cross-section in an insulator (solid\ndeuterium lattice), $U_{e}=20\\,\\,eV.$\n\nIn order to see whether such a difference holds true for other\ntarget lattices as well, in Fig.~\\ref{fig:6} we plotted the\nresults of all the experiments in metal lattices published in Ref.\n\\cite{rai04} in comparison to ours. The 3D TF results in the\nfigure may be shifted $\\pm 5\\%$ up or down, due to differences in\nthe local IS volume of the deuterium in the various lattices.\nFigure~\\ref{fig:6} exhibits a consistent difference between the\ncomputational and the experimental results. As the Thomas-Fermi is\na highly successful model, well fitted to high density matter,\nthis insistent difference is of some surprise.\n\\begin{figure}[b]\n\\centering\n\\includegraphics[width=3in] {Fig_6}\n\\caption{The experimental values of $U_{e}$, drawn from the results of\n \\cite{rai04}, compared to the predictions of the 3D TF model.}\n \\label{fig:6}\n\\end{figure}\n\nObviously, the central reason for the disagreement is the\nelectronic potential on the target nucleus, see Eq. (\\ref{eq6}).\nIn our opinion, the semiclassical nature of the TF model cannot be\nthe reason for this difference, because the quantum mechanical\nbehavior of the conduction electrons seems to be unimportant in\nthe present problem.\n\nOn the other hand, the TF model assumption of a perfect spherical\nsymmetry, and the target nucleus location at the center of a\nwell-defined ion sphere, may oversimplify the real situation. In\nthe experiment, the deuterium nuclei are implanted into the metal\nlattice by bombardment of $10\\,keV$ deuterons into the metal foil\n\\cite{rai02}. It is well known that this technique does not\nnecessarily deposit the stopped deuterium into a spherically\nsymmetrical environment.\n\nAnother contribution to the discrepancy may be the local\nconduction electron density fluctuations around the target\nnucleus. As the transparency depends exponentially on the local\nelectron potential, a small number of nuclei, with an\ninstantaneous large electron screening, has greater influence on\nthe cross-section than the other nuclei with the average\nelectronic screening.\n\nFinally, it is possible that QEOS underestimates the deuterium IS\nvolume under the specific experimental conditions. Larger IS\nvolume, with more conduction electrons, may have better agreement\nwith the experiment.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nWaves injected into plasma can be amplified, extracting energy from the plasma. \nSimilarly, internal modes within the plasma can grow by extracting energy from the plasma.\nThere are multiple and distinct ways in which this energy can be accessed, and the open question is how much energy can possibly be accessed. \nThe ground state of the system can be defined as the state of least energy accessible respecting any constraints. \nThe free energy or the accessible energy can then be defined as the difference between the initial state energy and the ground state energy.\n\nIn his classic work,\\cite{Gardner1963} Gardner calculated the free energy when plasma could be rearranged while preserving the six-dimensional phase space densities or volumes. 
\nIf the distribution is divided into separate phase space volumes, the volume preservation constraint means that it is not possible to rearrange the system to any lower-energy state than the one in which the lowest-energy regions of phase space are occupied by the most-populated available phase space elements. \nEach volume carries with it its initial number of particles, so putting that volume in a six-dimensional location of lower energy preserves that volume phase space density as well. \nThis is known as Gardner restacking, representing a particular phase space rearrangement.\nThe fully restacked phase space is the ground state subject to phase space conservation; the releasable energy is known as the Gardner free energy. \n\n\nThis problem was approached using variational techniques by Dodin and Fisch.\\cite{Dodin2005}\nDodin and Fisch also calculated the free energy, including the additional constraint that the total current be preserved. \nThe free energy respecting current conservation, of course, would be less than the Gardner free energy.\nRecently, Helander calculated the free energy, conserving like Gardner the six-dimensional phase space densities, but conserving as well other quantities, particularly those quantities that affect the motion of individual particles in phase space.\\cite{Helander2017ii, Helander2020} \nFor example, in a magnetized system, quantities like the first or second adiabatic invariants $\\mu$ and $J$ might be conserved. \nThe free energy constrained by conservation of the adiabatic invariants of the motion, of course, would similarly be less than the Gardner free energy.\n\nHowever, even under Hamiltonian dynamics, rearrangements of plasma can appear not to conserve phase space volume, when the distribution functions are viewed with finite granularity. \nViewed at finite granularity, waves can diffuse particles.\\cite{Kennel1966} \nImportantly, these diffusive wave-particle interactions can release particle energy to the waves.\\cite{Fisch1992}\nThe energy accessible via diffusive operations in phase space is less than that accessible via restacking. \nThe property of free energy constrained by phase space diffusion was first posed by Fisch and Rax.\\cite{Fisch1993}\n\n\nThe operation of diffusing particles between volumes in phase space, rather than interchanging these volumes, has actually had applicability to a rather wide range of problems.\nMathematical treatments of rearrangement by diffusion describe theories of income inequality,\\cite{Dalton1920} altruism,\\cite{Thon2004} and physical chemistry.\\cite{Horn1964, Zylka1985}\nThe maximal extractable energy under diffusive phase space rearrangements in plasma was addressed by Hay, Schiff, and Fisch. \\cite{Hay2015, Hay2017}\n\nHelander's recent calculation of the plasma free energy obeying phase space conservation, with the motion of individual particles constrained by adiabatic invariants, now points to a natural generalization: the free energy under diffusion in phase space, but with the motion of individual particles similarly constrained by adiabatic invariants.\nThis paper will discuss that generalization. \nLest this be thought to be an academic exercise, we note that it is realized, for instance, in the quasilinear theory of plasma waves and instabilities with frequencies below the cyclotron frequency. \nThe available energies subject to both diffusion and adiabatic motion constraints should be more limited than the available energy subject only to either constraint. 
\n\nThe paper is organized as follows: \nSection~\\ref{sec:availableEnergies} describes these different available energies. \nSection~\\ref{sec:discreteModel} introduces a simple, discrete model which illustrates the different ways in which these energies can be extracted. \nSection~\\ref{sec:inhomogeneousField} shows how that simple model can be modified to describe a more concrete plasma system. \nFinally, Section~\\ref{sec:discussion} discusses the context and implications of these ideas. \n\n\\section{The Four Classes of Available Energy} \\label{sec:availableEnergies}\n\nLet $\\Delta W_\\text{G}$ denote the accessible energy in Gardner's restacking problem, in which phase space volume conservation is the only restriction. Let $\\Delta W_{\\text{G}|\\mu}$ denote Helander's available energy for the version of this problem in which one or more conservation law constraints are included. Let $\\Delta W_\\text{D}$ denote the maximum extractable energy in the variant of Gardner's problem in which phase space elements must be diffusively averaged rather than being exchanged (``restacked\"). Finally, let $\\Delta W_{\\text{D}|\\mu}$ denote the maximum extractable energy for the diffusive problem with conservation laws. \n\nHere, a ``diffusive exchange\" refers to an operation in which the populations $f_a$ and $f_b$ of two equal-volume regions of phase space are mixed so that both populations are $(f_a + f_b) \/ 2$ after the exchange. \nThere is no requirement that these two elements be adjacent in phase space; in fact, microscopically local flows can give rise to apparently non-local diffusive processes.\\cite{Fisch1993} \nThis kind of exchange can describe any process that tends to equalize the populations of different regions of phase space, regardless of the details of the microscopic dynamics. \nThis equalization could, for instance, be the result of quasilinear diffusion but could equally well result from more complicated transport processes in phase space involving finite-Kubo-number effects, L\\'evy flights, etc. \n\nThis leaves us with four distinct available energies. \nThe condition for the plasma to be in a ground state is the same in the restacking and diffusion problems: for any pair of equal-volume phase space elements with different populations, the higher-population element must occupy a region of phase space with no more energy than the lower-population element. If there is a conserved quantity $\\mu$ (or, in general, a vector of conserved quantities $\\ensuremath{\\boldsymbol\\mu}$), then the condition is identical except that it is only necessary to consider pairs of phase space elements with the same values of $\\mu$ (or $\\ensuremath{\\boldsymbol\\mu}$). \n\nAlthough Gardner restacking, even with constraints, leads to a unique ground state, that is not the case with diffusive exchange.\nIt is possible through diffusive exchange to reach more than one possible ground state from the same initial configuration. \nThese ground states will often have different energies. \nNote that $\\Delta W_\\text{D}$ and $\\Delta W_{\\text{D}|\\mu}$ are defined as the maximum possible extractable energy, \nwhich is the difference between the initial energy and the energy of the lowest-energy accessible ground state. \n\nGiven the same initial phase space configuration, diffusive exchange and restacking will typically not lead to the same ground state. 
The extractable energy will differ too.\nIt is clear that the available energies without additional conservation-law restrictions will always be at least as large as their restricted counterparts. That is, \n\\begin{gather}\n\\Delta W_{\\text{G}|\\mu} \\leq \\Delta W_\\text{G} \\\\\n\\Delta W_{\\text{D}|\\mu} \\leq \\Delta W_\\text{D} \\, . \n\\end{gather}\nMoreover, phase space restacking can always access at least as much energy as diffusive exchanges can (strictly more, if they are nonzero). To see this, note that both kinds of exchange move the system toward the same ground state condition, where the most highly populated phase space volumes are at the lowest energies, but that diffusive exchanges reduce the difference between the more- and less-highly populated phase space volumes. This reasoning is equally applicable with and without conservation laws, so it follows that \n\\begin{gather}\n\\Delta W_\\text{D} \\leq \\Delta W_\\text{G} \\\\\n\\Delta W_{\\text{D}|\\mu} \\leq \\Delta W_{\\text{G}|\\mu} \\, , \n\\end{gather}\nwith equality holding only when both sides vanish. \n\n\\section{Simple Discrete Model} \\label{sec:discreteModel}\n\nIn order to get an intuitive sense for the four available energies -- that is, the restacking energy with and without a conservation law and the diffusive-exchange energy with and without a conservation law -- it is helpful to construct a simple model that shows concretely how the different energies can play out. \n\n\\begin{figure}\n\t\\includegraphics[width=\\linewidth]{gridTikzFigure.pdf}\n\t\\caption{Schematic of a simple discrete system, with varying energy and some additional coordinate $\\mu$. } \n\t\\label{fig:grid}\n\\end{figure}\n\nTo that end, consider a collection of discrete states, indexed by their energies and by some coordinate $\\mu$, as shown schematically in Figure~\\ref{fig:grid}. Associate the states in the $n$th column with energy $\\epsilon_n$, with $\\epsilon_i \\leq \\epsilon_j$ $\\forall i < j$. One could construct a grid of this kind with any number of columns and rows. Physically, previous work presents the discrete version of these problems in two ways.\\cite{Fisch1993, Hay2015, Hay2017} First, it can model an intrinsically discrete physical system, like transitions between atomic energy levels stimulated by lasers. Second, it can model a system with continuous phase space (e.g., a plasma) being mixed with some finite granularity. \n\nIn this scenario, it is straightforward to understand the four available energies described in Section~\\ref{sec:availableEnergies}. If $\\mu$ is conserved, different rows are not allowed to interact, and the total available energy (either $\\Delta W_{\\text{G}|\\mu}$ or $\\Delta W_{\\text{D}|\\mu}$) is the sum of the available energies of the individual rows, considered independently. If $\\mu$ is not conserved, then either the diffusive or the Gardner energy minimization problem must consider the system as a whole. \n\nWhen constructing examples of this kind, it quickly becomes apparent that conservation laws affect the outcome of diffusive and Gardner relaxation in qualitatively different ways. Consider the following phase-space configuration, with two energy levels (with the left-hand column corresponding to a lower energy) and two different values of $\\mu$:\n\\begin{gather*}\n\\begin{array}{|c|c|}\n\\hline\n0 & 1 \\\\ \\hline\n0 & 0 \\\\ \\hline\n\\end{array} \\; .\n\\end{gather*}\nSuppose the separation between the two energy levels is $\\varepsilon$. 
Then $\\Delta W_{\\text{G}|\\mu} = \\Delta W_\\text{G} = \\varepsilon$ and $\\Delta W_{\\text{D}|\\mu} = \\varepsilon \/ 2$. If $\\mu$ is not conserved, the lowest diffusively accessible energy can be reached by the sequence \n\\begin{gather}\n\\begin{array}{|c|c|}\n\\hline\n0 & 1 \\\\ \\hline\n0 & 0 \\\\ \\hline\n\\end{array} \\rightarrow \n\\begin{array}{|c|c|}\n\\hline\n1\/2 & 1\/2 \\\\ \\hline\n0 & 0 \\\\ \\hline\n\\end{array} \\rightarrow \n\\begin{array}{|c|c|}\n\\hline\n1\/2 & 1\/4 \\\\ \\hline\n1\/4 & 0 \\\\ \\hline\n\\end{array}\n\\end{gather}\nso that $\\Delta W_\\text{D} = (3\/4) \\varepsilon$. At this point, we note that there is a qualitative difference between discrete and continuous systems. In the example just given, the ground state contradicts a theorem by Gardner that the distribution function can only depend on energy alone. This contradiction is a consequence of the fact that energetically neutral exchanges are possible between the two low-energy boxes in the diagram. It disappears if the distribution function is required to be a smooth function of a continuous energy variable. \n\nThis example worked because, in the absence of any $\\mu$ constraint, diffusive processes can take advantage of the additional phase space to transfer material (or quanta, etc., depending on how this model is interpreted physically) more efficiently from high-energy to low-energy states. One might imagine that this is a special property of phase space that is initially unoccupied. However, similar behavior appears when the initial population configurations for each $\\mu$ are the same. Consider, for example, the following $2 \\times 2$ system: \n\\begin{align*}\n\\begin{array}{|c|c|}\n\\hline \n0 & 1 \\\\ \\hline\n0 & 1 \\\\ \\hline\n\\end{array} \\; .\n\\end{align*}\nSuppose, as before, that the rows correspond to different values of $\\mu$ and that the columns correspond to energy states that are separated by some energy $\\varepsilon$. Three of the four measures of available energy are immediately clear: $\\Delta W_{\\text{G}|\\mu} = \\Delta W_\\text{G} = 2 \\varepsilon$ and $\\Delta W_{\\text{D}|\\mu} = \\varepsilon$. \n\nHay, Schiff, and Fisch showed\\cite{Hay2015} that there are only a finite number of candidates for the optimal sequence of diffusive steps to relax a system to equilibrium. After enumerating all possible candidates, it is possible to show that the lowest possible energy state accessible through diffusive steps can be reached by the sequence \n\\begin{align*}\n&\\begin{array}{|c|c|}\n\\hline \n0 & 1 \\\\ \\hline\n0 & 1 \\\\ \\hline\n\\end{array} \\rightarrow\n\\begin{array}{|c|c|}\n\\hline \n0 & 1 \\\\ \\hline\n1\/2 & 1\/2 \\\\ \\hline\n\\end{array} \\rightarrow \n\\begin{array}{|c|c|}\n\\hline \n0 & 3\/4 \\\\ \\hline\n3\/4 & 1\/2 \\\\ \\hline\n\\end{array} \\\\\n&\\hspace{120 pt} \\rightarrow \n\\begin{array}{|c|c|}\n\\hline \n1\/4 & 3\/4 \\\\ \\hline\n3\/4 & 1\/4 \\\\ \\hline\n\\end{array} \\rightarrow \n\\begin{array}{|c|c|}\n\\hline \n1\/2 & 1\/2 \\\\ \\hline\n3\/4 & 1\/4 \\\\ \\hline\n\\end{array} \\; , \n\\end{align*}\nso $\\Delta W_\\text{D} = (5\/4) \\varepsilon$. 
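The energy released by this sequence can be checked mechanically; the sketch below (illustrative only, with $\\varepsilon = 1$) applies the four averaging steps listed above and prints the extracted energy.
\\begin{verbatim}
# Verify the diffusive sequence above for the 2x2 example (eps = 1).
# Cells are (row, column); column 1 is the higher-energy state.
energy = {(0, 0): 0.0, (0, 1): 1.0, (1, 0): 0.0, (1, 1): 1.0}
f      = {(0, 0): 0.0, (0, 1): 1.0, (1, 0): 0.0, (1, 1): 1.0}

def mix(a, b):
    f[a] = f[b] = 0.5 * (f[a] + f[b])

E0 = sum(f[c] * energy[c] for c in f)
steps = [((1, 0), (1, 1)), ((0, 1), (1, 0)), ((0, 0), (1, 1)), ((0, 0), (0, 1))]
for a, b in steps:
    mix(a, b)
E1 = sum(f[c] * energy[c] for c in f)
print(E0 - E1)    # 1.25, i.e. (5/4) eps
\\end{verbatim}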
However, it is nontrivial to write down a strong condition relating $\\Delta W_{\\text{G}|\\mu}$, $\\Delta W_\\text{G}$, $\\Delta W_{\\text{D}|\\mu}$, and $\\Delta W_\\text{D}$ that captures this intuition. \n\nOne might expect, after looking at a number of examples involving small systems, that $\\Delta W_{\\text{D}|\\mu} \/ \\Delta W_\\text{D}$ would always be less than or equal to $\\Delta W_{\\text{G}|\\mu} \/ \\Delta W_\\text{G}$ (in other words, that restricting the size of accessible phase space would always have a fractionally more severe impact on the energy accessible via diffusive exchange). In fact, this inequality holds for any system with two energy levels (a $2 \\times N$ grid of states). \n\nTo see this, first note that in a $2 \\times N$ system, $\\Delta W_{\\text{D}|\\mu} = (1\/2) \\Delta W_{\\text{G}|\\mu}$. If only two cells can be exchanged, then it is either favorable to exchange them (in which case the Gardner exchange moves the difference between the cells' populations from one to the other and the diffusive exchange accomplishes half that) or it is not (in which case neither does anything). \n\nNow consider the same system without any $\\mu$ constraints. It is possible to divide the initial population values $\\{f_{ij}\\}$ into the $N$ largest and $N$ smallest values -- or, more precisely, into equally large sets $A$ and $B$ such that for any $a \\in A$ and $b \\in B$, $f_a \\geq f_b$. These sets can then be subdivided into the elements starting in the lower-energy state ($A_0$ and $B_0$) and those starting in the higher-energy state ($A_1$ and $B_1$). $A_1$ and $B_0$ will have the same number of elements, so every element of $A_1$ can be paired with one unique element of $B_0$. If each of these pairs are exchanged, then all members of $A$ will be in the lower-energy state, and the system will be in a ground state, having released energy $\\Delta W_\\text{G} = ||A_1|| \\varepsilon$. \n\nIf those same pairs of cells are diffusively averaged rather than being exchanged, the released energy will be $(1\/2) \\Delta W_\\text{G}$. This will often not be the optimal diffusive strategy, but it demonstrates that $\\Delta W_\\text{D} \\geq (1\/2) \\Delta W_\\text{G}$, which is enough to show (for the $2 \\times N$ case) that $\\Delta W_{\\text{D}|\\mu} \/ \\Delta W_\\text{D} \\leq \\Delta W_{\\text{G}|\\mu} \/ \\Delta W_\\text{G}$. \n\nHowever, this inequality does not always hold for systems with more than two allowed energy levels. Proving for a particular case that $\\Delta W_{\\text{D}|\\mu} \/ \\Delta W_\\text{D} > \\Delta W_{\\text{G}|\\mu} \/ \\Delta W_\\text{G}$ is often somewhat involved; given the other three available energies, it requires establishing an \\textit{upper} bound for $\\Delta W_\\text{D}$. Consider the following configuration, with three energy levels and two allowed values for $\\mu$:\n\\begin{align*}\n\\begin{array}{|c|c|c|}\n\\hline\nX+1 & X & 0 \\\\ \\hline\n0 & 2 & 1 \\\\ \\hline\n\\end{array} \\; .\n\\end{align*}\nSuppose the three columns correspond to energies $0$, $\\varepsilon$, and $2 \\varepsilon$, and consider the limit where $X \\rightarrow \\infty$. \n\nWhen $\\mu$ conservation is enforced, the upper row is in its ground state, does not contribute to $\\Delta W_{\\text{G}|\\mu}$ or $\\Delta W_{\\text{D}|\\mu}$. The contribution from the lower row gives $\\Delta W_{\\text{G}|\\mu} = 3 \\varepsilon$. 
Applying the criteria for extremal sequences from Hay, Schiff, and Fisch\\cite{Hay2015} for a three-level system, and checking all possible candidates, it is possible to show that the optimal sequence respecting $\\mu$ conservation is \n\\begin{align*}\n\\begin{array}{|c|c|c|}\n\\hline\nX+1 & X & 0 \\\\ \\hline\n0 & 2 & 1 \\\\ \\hline\n\\end{array} \\rightarrow \n\\begin{array}{|c|c|c|}\n\\hline\nX+1 & X & 0 \\\\ \\hline\n1\/2 & 2 & 1\/2 \\\\ \\hline\n\\end{array} \\rightarrow\\begin{array}{|c|c|c|}\n\\hline\nX+1 & X & 0 \\\\ \\hline\n5\/4 & 5\/4 & 1\/2 \\\\ \\hline\n\\end{array} \\; ,\n\\end{align*}\nso that $\\Delta W_{\\text{D}|\\mu} = (7\/4) \\varepsilon$. \n\nIn the absence of $\\mu$ conservation, the Gardner available energy is $\\Delta W_\\text{G} = (X + 1) \\varepsilon$. In the limit where $X$ is very large, the only thing that will determine $\\Delta W_\\text{D}$ will be the fraction of $X$ that can be moved from the second column to the first. There is no strategy that can move more than half of the content of one cell to another using diffusive exchanges; in other words, $\\lim_{X \\rightarrow \\infty} \\Delta W_\\text{D} = (X\/2) \\varepsilon$. Then, for this example, $\\Delta W_{\\text{D}|\\mu} \/ \\Delta W_\\text{D} > \\Delta W_{\\text{G}|\\mu} \/ \\Delta W_\\text{G}$. \n\n\\section{Example: Inhomogeneous Magnetic Field} \\label{sec:inhomogeneousField}\n\nIn many scenarios, the kind of simple phase-space grid considered in the previous section may need to be modified. However, much of the intuition remains the same. \nOne case that was discussed by Helander\\cite{Helander2017ii} for the continuous restacking problem was a plasma in an inhomogeneous magnetic field, with conservation of the first adiabatic invariant $\\mu = m v_\\perp^2 \/ 2 B$. \nTo illustrate the four free energies, consider for simplicity the case where the physical volume is divided into two halves: one in which $B = B_0 = \\text{const}$; and another in which $B = B_1 = \\text{const}$, with $B_1 > B_0$. \nSuppose the field is straight and its direction does not vary, and suppose the energy in the direction parallel to the field can be ignored. \nFor simplicity, we consider a discrete version of this system. The discrete version can be constructed by averaging the plasma distribution function over finite regions of phase space, and then restricting the Gardner and diffusive exchange operations to act on these macroscopic regions. \n\nThere are two major ways in which this scenario differs from those discussed in Section~\\ref{sec:discreteModel}. First, for any given $\\mu$, the volume of accessible phase space is now proportional to $B$. This can be shown by calculating the appropriate Jacobian determinant. Intuitively, it can be understood in terms of the geometry of phase space when $\\mu$ is conserved. For a given $B$ and a given $\\mu$ (or a given small range in $\\mu$), the allowed region in phase space traces out a circle (or thin ring) in the $v_\\perp = v_x \\times v_y$ plane. If $\\mu$ is held fixed and $B$ is changed from $B_0$ to $B_1$, then the region transforms to become a ring with a larger radius, with a correspondingly larger phase-space volume. The discrete system is composed of a series of boxes with equal phase-space volumes. Therefore, for any given $\\mu$, there will be a larger number of phase-space boxes on the higher-field, higher-energy side of the system than on the lower-field, lower-energy side. 
\n\nIn addition, there is now a coupling between the invariant $\\mu$ and the energy $\\varepsilon$. \nFor any given value of $B$, $\\varepsilon \\propto \\mu$. \nMoreover, if a particle has energy $\\varepsilon_0$ on the lower-field side, and if it is then moved to the higher-field side without changing $\\mu$, it must then have energy $\\varepsilon_1 = \\mu (B_1 \/ B_0) \\varepsilon_0$. When $\\mu$ is conserved this is a two-energy-level system, but for different values of $\\mu$ the difference between the two energy levels will be different. \nAs a result, for this scenario, analogous to Figure~\\ref{fig:grid}, Figure~\\ref{fig:newGrid} represents the magnetic field constraint. \n\n\\begin{figure}\n\t\\includegraphics[width=\\linewidth]{newGridTikzFigure.pdf}\n\t\\caption{Schematic of the discrete restacking problem for a spatial region divided between an area with constant field $B_0$ and an area with constant field $B_1 = 2 B_0$. Dashed lines separate different values of $\\mu$. For each value of $\\mu$, the single box occupies the phase space on the low-field side and the pair of boxes occupy the phase space on the high-field side.} \n\t\\label{fig:newGrid}\n\\end{figure}\n\nThis picture can be used to illustrate the behavior discussed by Helander. \nFor example, if $\\mu$ is conserved, then any distribution that is a function of $\\mu$ alone is a ground state, and such states will have spatial densities that are proportional to $B$. \nThis is in some sense counterintuitive, since for any given $\\mu$ a higher field means a higher energy. \nHowever, the intuition can be restored with reference to Figure~\\ref{fig:newGrid}. \nIf $f$ is a function of $\\mu$ alone, then each of the boxes in a given row must have the same population. \nIt is then clear that distributions of this kind will give up no energy not only under $\\mu$-conserving exchanges, but also under $\\mu$-conserving diffusive exchanges. \n\nMoreover, it is clear why the total spatial density variations in such a state must be proportional to $B$: there are proportionally more boxes on the higher-field side than on the lower-field side for each choice of \n$\\mu$, and therefore there must be a proportionally higher total population (in other words, the sum of the population over all $\\mu$ for a given side of the system will be proportional to $B$). \nHelander also showed that $f = f(\\mu)$ ground states will have temperatures proportional to $B(\\mathbf{x})$. \nThis can be understood in terms of Figure~\\ref{fig:newGrid} in a similar way. \nSuppose $f = f(\\mu)$. Then if the distribution function has some structure at an energy $\\varepsilon$ on the low-field side, it must have the same structure at energy $(B_1\/B_0) \\varepsilon$ on the high-field side, since for a given $\\mu$ the low-field and high-field regions of phase space have energies proportional to their local values of $B$. \nGiven that the density of volumes is higher in the high field region proportionately to $(B_1\/B_0)$, it follows that the pressure in high field regions is then $(B_1\/B_0)^2$ the pressure of the low field region. \n\nThis is, of course, only one particular ground state, which occurs when all accessible states have equal occupation for each $\\mu$. \nIt is also possible to release free energy when the initial states are populated differently in such a way that there is population inversion, $(\\partial f \/ \\partial \\epsilon)_\\mu \\ge 0$. 
\nIn that case, for example with the higher B states populated equally but not the lower B state, representable by $(0,1,1)$, then the restacking solutions give one $\\varepsilon$ free energy, whereas the diffusive solution allows only $(3\/4) \\varepsilon$ free energy (where $\\varepsilon$ is the energy gap between the states). More generally, the difference between the diffusive and restacking solutions is similar to what was discussed in Section~\\ref{sec:discreteModel}. However, the details do turn out somewhat differently. For instance, recall the first example from Section~\\ref{sec:discreteModel}:\n\\begin{gather*}\n\\begin{array}{|c|c|}\n\\hline\n0 & 1 \\\\ \\hline\n0 & 0 \\\\ \\hline\n\\end{array} \\; .\n\\end{gather*}\nIt is possible to construct a six-cell analog of this, with one populated cell on the high-field side and two possible choices of $\\mu$, with a structure along the lines of the first two rows of Figure~\\ref{fig:newGrid}. \nIt might look something like this: \n\\begin{align*}\n\\begin{array}{|c|}\n\\hline 0 \\\\ \\hline\n\\end{array}\n\\begin{array}{|c|}\n\\hline 1 \\\\ \n\\hline 0 \\\\ \\hline\n\\end{array}& \\\\\n&\\begin{array}{|c|}\n\\hline 0 \\\\ \\hline\n\\end{array}\n\\begin{array}{c}\n~\n\\end{array}\n\\begin{array}{|c|}\n\\hline 0 \\\\\n\\hline 0 \\\\ \\hline\n\\end{array}\n\\end{align*}\nor this: \n\\begin{align*}\n\\begin{array}{|c|}\n\\hline 0 \\\\ \\hline\n\\end{array}\n\\begin{array}{|c|}\n\\hline 0 \\\\ \n\\hline 0 \\\\ \\hline\n\\end{array}& \\\\\n&\\begin{array}{|c|}\n\\hline 0 \\\\ \\hline\n\\end{array}\n\\begin{array}{c}\n~\n\\end{array}\n\\begin{array}{|c|}\n\\hline 1 \\\\\n\\hline 0 \\\\ \\hline\n\\end{array} \\; .\n\\end{align*}\nBut now the solutions will be different. For instance, it now makes a difference to all four available energies whether the populated cell was for the larger or smaller value of $\\mu$, since these now have different energies and different gaps between their energies and the low-field states at the same $\\mu$. \n\nNow consider an example in which, for the second-lowest choice of $\\mu$, the high-field states (marked red in Fig.~\\ref{fig:newGrid_marked}) are occupied with populations normalized to 1 each, and the rest of phase space is unoccupied. \nThis is analogous to the second example in Section~\\ref{sec:discreteModel}. In this example, if each discrete box has a width $\\Delta \\mu$ in $\\mu$ space, the bright green box has energy $(1\/2) B_0 \\Delta \\mu$, the yellow box has energy $(3\/2) B_0 \\Delta \\mu$, and each red box has energy $3 B_0 \\Delta \\mu$. \n\nIf $\\mu$ conservation is enforced, then the populations in the red boxes can only exchange with the box marked yellow in Fig.~\\ref{fig:newGrid_marked}. The resulting diffusive free energy is $\\Delta W_{\\text{D}|\\mu} = (9\/8) B_0 \\Delta \\mu$. In the absence of a $\\mu$ constraint, computing $\\Delta W_\\text{D}$ involves the red, yellow, green, and teal boxes. It is straightforward to find a bound on $\\Delta W_\\text{D}$ by considering only exchanges between the red, yellow, and bright green boxes: $\\Delta W_\\text{D} \\geq (21\/8) B_0 \\Delta \\mu$. \nThus, the release of the constraint on the $\\mu$ invariance makes substantially more energy available. \nThis case provides some practical insight into $\\alpha$-channeling; this will be discussed in Section~\\ref{sec:discussion}. \n\nThe system in this section has included only the component of the kinetic energy from motion perpendicular to the field. 
The qualitative behavior would be much the same if the parallel component were included. The phase space structure for any given $v_{||}$ would be identical to the one shown in Fig.~\\ref{fig:newGrid}, but with an offset in energy depending on the value of $v_{||}$. However, there would be significant differences: the energy of a state would no longer be a function of $\\mu$ and $B$ alone, so for any given $\\mu$ a range of energies would be accessible on both the low- and high-field sides. \n\n\\begin{figure}\n\t\\includegraphics[width=\\linewidth]{newGridTikzFigure_marked.pdf}\n\t\\caption{This version of Fig.~\\ref{fig:newGrid} has cells marked with different colors in order to make the last example in Section~\\ref{sec:inhomogeneousField} clearer.} \n\t\\label{fig:newGrid_marked}\n\\end{figure}\n\n\\section{Summary and Discussion} \\label{sec:discussion}\n\nThe key point of this paper is primarily one of classification. There are in fact four free energies that are of interest in plasma: the unconstrained restacking energy ($\\Delta W_\\text{G}$), \nthe restacking energy with conservation laws ($\\Delta W_{\\text{G}|\\mu}$), the diffusive-exchange energy ($\\Delta W_\\text{D}$), and the diffusive-exchange energy with conservation laws ($\\Delta W_{\\text{D}|\\mu}$). \nThe first three of these had been discussed previously in the literature. \nThe last is first described here. \n\nThis classification is primarily of academic interest, at least insofar as any detailed calculation of the energy is concerned. \nIn practice, because accessing the full range of excitations is impractical, the full free energy will not be made available in any of of these categories.\nThus, for example, the precise Gardner restacking energy is not important as a quantity so much as it is important as a concept. \nAs a concept, it exposes the fundamental physics of free energy under phase space conservation. \nAs a quantity, it does gives a useful measure of how much energy is {\\it in principle} extractable, thus constructing a measure of success in extracting energy, or a measure of concern if the energy were lost to instabilities.\nSimilar arguments can be made for the worth of classifying the other free energies.\n\nHowever, it is worth pointing out that, in recognizing the fourth free energy here, namely the diffusive-exchange energy with conservation laws ($\\Delta W_{\\text{D}|\\mu}$), certain subtleties required quite careful attention in applying constraints to diffusive processes, as highlighted by the examples given.\nFor example, in the case of applying the $\\mu$-conservation constraint in an inhomogeneous magnetic field $B$, the density of states needed to be recognized as proportional to $B$. \nThis led to a very different, and somewhat non-intuitive, calculation for the free energy under diffusion but respecting $\\mu$-conservation. \n\nIn fact, apart from its academic interest, this example may have an important application in $\\alpha$-channeling. 
\nIn the case of $\\alpha$-channeling, $\\alpha$-particles are diffused from the hot tokamak center by waves, which in doing so extract the $\\alpha$-particle energy.\\cite{Fisch1992}\nHowever, generally one wave does not have all the necessary wave characteristics to accomplish this on its own.\nIt turns out that two waves, even if each is without necessary wave characteristics, together might be able to accomplish efficient energy extraction.\\cite{Fisch1995}\nIn this regard, waves, such as the ion Bernstein wave, that break $\\mu$ invariance\\cite{Fisch1995ii} were proposed in combination with low-frequency waves that respect $\\mu$ invariance to extract a substantial fraction of the $\\alpha$-particle energy.\\cite{Herrmann1997}\n\nThe strategy that achieved the most energy extraction (of about 61\\%) featured diffusion by the ion Bernstein wave confined to an outer region, in keeping with a realistic spatial deployment of this wave.\nIn other regions, the low frequency waves dominated the diffusion. \nThis strategy, while giving quite favorable results, was by no means necessarily optimal, since the space of wave parameters to search is extremely large.\n\nNote that waves that do not break the $\\mu$ invariant can diffuse $\\alpha$-particles from the tokamak center, where they are born energetic, to the low-field side of the tokamak. \nThus, in conserving $\\mu$, the $\\alpha$-particles advantageously lose perpendicular energy. \nHowever, as discussed in Section IV, the density of states at any given $\\mu$ is higher in the tokamak center where the magnetic field is larger than on the periphery where the magnetic field is smaller. This statement pertains even though the phase space on tokamaks is six-dimensional (since it includes the parallel velocity) rather than the four-dimensional phase space considered for simplicity in Section~\\ref{sec:inhomogeneousField}. In either case, the diffusion to lower magnetic fields, when respecting the $\\mu$ invariance, whether in four or six dimensions, will be to states less dense, and therefore the energy extraction will be less efficient than when diffusion occurs to states equally dense. \nIn the last example considered in the previous section, where the magnetic fields on either side of the system differed by a factor of two and the system began with a population of particles on the high-field side, the extractable energy under diffusion and respecting $\\mu$ invariance was less than half of the extractable energy when $\\mu$ invariance was not respected. \n\nMuch of the previous work on two-wave $\\alpha$-channeling has rested on the intuition that a high-frequency, non-$\\mu$-conserving wave could help to extract the energy from $\\alpha$-particles while a low-frequency, $\\mu$-conserving wave could accomplish spatial diffusion. The work presented here suggests a second motivation for combining such waves: that breaking $\\mu$ conservation may also allow diffusive processes to more efficiently extract energy. \nIn other words, there emerges the interesting suggestion that a greater amount of energy may become available by arranging for breaking the $\\mu$ invariance even in regions where the low-frequency waves dominate the diffusion. \nSince the parameter space of wave possibilities is quite large, such an intuition may prove valuable in optimizing the $\\alpha$-channeling effect.\nAt present, this is a speculation, not a proved result. 
\nA rigorous proof would require a much more specialized analysis of the initial conditions and phase-space geometry associated with $\\alpha$-channeling, which goes beyond the intended scope of this paper. \nIn addition, of course, it would be important to show that such a scheme can work for specific wave implementations.\n\nThe above example shows how consideration of the new, fourth free energy identified here leads to useful, possibly practical, physical intuitions regarding extractable energy under wave diffusion by different waves. \nHowever, in the end, the major interest of this classification remains an academic understanding for the fundamental ways in which energy may be released in different plasma systems. \n\n\n\n\n\\begin{acknowledgements}\nThis work was supported by US DOE grants DE-AC02-09CH11466 and DE-SC0016072. \nOne of the authors (PH) would like to acknowledge the splendid hospitality of the Princeton Plasma Physics Laboratory. \n\\end{acknowledgements}\n\n\\section*{Data Availability Statement}\n\nData sharing is not applicable to this article as no new data were created or analyzed in this study.\n\n\\bibliographystyle{apsrev4-1} \n\t","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzatle b/data_all_eng_slimpj/shuffled/split2/finalzzatle new file mode 100644 index 0000000000000000000000000000000000000000..f54c5dad599e623534f9f2c232fca305189539b3 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzatle @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\nQuantum computing is highly promising for simulating\nchallenging molecules and materials \\cite{bauer2020quantum,mcardle2020quantum}. It is most likely beneficial for systems with the presence of strong correlations where perturbative techniques fail\\cite{cao2019quantum,motta2021emerging}. Unfortunately, current quantum hardware that is called noisy intermediate-scale quantum (NISQ) is limited due to noise and decoherence\\cite{de2021materials}. The variational quantum eigensolver (VQE) has been proposed as a low-depth quantum algorithm \\cite{peruzzo2014variational,mcclean2016theory,kandala2017hardware} to take advantage of NISQ computers. It is a hybrid quantum-classical method that must be carried out on quantum and classical computers. On the quantum computer, quantum states depending on a set of variational parameters are prepared, and the expectation value of the Hamiltonian is then measured. Since most operations on a quantum computer are unitary, unitary coupled-cluster (UCC) wavefunction \\cite{bartlett1989alternative,taube2006new,romero2018strategies,anand2022quantum} has been proposed as a low-circuit-depth state-preparation ansatz for VQE. Next, the set of variational variables is optimized on classical computers, and the loop is repeated until converged. \n\nHowever, the VQE applicability is limited by the dimensionality of many-body systems associated with the number of variational variables and the circuit depths. Recently, many methodologies have been developed to reduce the resources required in VQE. Grimsley and co-workers proposed the adaptive-VQE (ADAPT-VQE) ansatz\\cite{grimsley2019adaptive} that is progressively built by subsequently incorporating into it the operators that contribute most to minimizing the VQE energy towards the ground-state energy. Later on, several groups have extended ADAPT-VQE to make the algorithm more efficient\\cite{tang2021qubit,zhang2021mutual}. 
While ADAPT-VQE was shown to outperform standard ansatz, its iterative nature makes calculations more costly. Freericks and co-workers, on the other hand, have devised the factorized UCC ansatz for VQE\\cite{fac-UCC,xu2022decomposition} that employs a Taylor expansion in the small amplitudes, trading off circuit depth for additional measurements. Strong correlations were considered by performing the expansion about a small set of UCC factors that are treated exactly. \n\nAlternatively, one can reduce the dimensionality of the many-body Hamiltonian used in VQE by partitioning the whole system into smaller active spaces that can be handled by quantum computing. Karol and co-workers have employed the downfolding framework based on the double UCC (DUCC) to construct effective active-space Hamiltonians \\cite{bauman2019downfolding,kowalski2021dimensionality} that integrate out high-energy Fermionic degrees of freedom while being capable of reproducing exact energy of quantum systems. To this end, one needs to define subsets of excitations either entirely within the subsets considered or involving some external orbitals. The approach can capture the effect of the whole orbital space in small-size active spaces\\cite{metcalf2020resource,chladek2021variational}. Inspired by the divide-and-conquer technique in classical quantum chemistry, Nakagawa and co-workers proposed a method called deep VQE\\cite{fujii2022deep,mizuta2021deep}. In the first step of this method, the whole system is divided into much smaller subsystems, each of which is solved independently using VQE. In the next step, the ground states of subsystems are used as a basis with reduced degrees of freedom to construct an effective Hamiltonian considering the inter-subsystem interactions. The resulting effective Hamiltonian is finally solved using VQE. \n\nQuantum embedding frameworks, such as density matrix embedding theory (DMET)\\cite{mineh2022solving,tilly2021reduced,ralli2022scalable}, dynamical mean-field theory (DMFT)\\cite{keen2020quantum,bauer2016hybrid}, and density functional embedding theory\\cite{ralli2022scalable,gujarati2022quantum}, have been employed to make quantum computation feasible for real molecules and materials. Recently, Rossmannek et al. have demonstrated the performance of the VQE embedding into classical mean-field methods, including Hartree-Fock (HF) and density functional theory (DFT)\\cite{rossmannek2021quantum}. Those authors restricted the quantum computation to a critical subset of molecular orbitals, whereas the remaining electrons provide the embedding potential computed using classical mean-field theories. The proposed embedding schemes obtained significant energy corrections to the HF and DFT reference for several simple molecules in their strongly correlated regime and larger systems of the oxirane size. However, most of those calculations were limited to a minimal basis. It may be interesting to explore the performance of the active-space VQE approach on larger basis sets.\n\nIn the present work, we propose an active-space VQE approximation where VQE is naturally embedded in a correlated mean-field reference. To this end, we start from the UCC ansatz and divide the total excitation operator into internal and external contributions accordingly to the active-inactive partitioning of the orbital space. 
For the inactive space, instead of HF and DFT, we employ our recently-developed correlated mean-field theory called one-body second-order M{\\o}ller-Plesset perturbation theory (OBMP2)\\cite{OBMP2-JCP2013,tran2021improving}. Unlike standard MP2, OBMP2 is self-consistent, meaning that it can bypass challenges caused by the non-iterative nature of standard MP2. For the active space, we employ VQE to solve the effective Hamiltonian composing of the bare Hamiltonian and a potential caused by the internal-external interaction. Details of the procedure are given in Section~\\ref{sec:theo}. We demonstrate the performance of our approach by considering different systems with singlet and doublet ground states in Section~\\ref{sec:result}. We examine the accuracy in predicting both energy and density matrix (by evaluating dipole moment). We show that the active-space VQE with the correlated reference outperforms the standard active-space VQE. \n\n\\section{Theory}{\\label{sec:theo}}\n\\subsection{Variational quantum eigensolver: UCC ansatz}\n\nVQE relies on the variational principle, which states that the ground-state energy $E_0$ is always less than or equal to the expectation value of Hamiltonian $\\hat{H}$ calculated with the trial wavefunction $\\left|\\psi\\right>$\n\\begin{align}\n E_0 \\leq \\frac{\\left<\\psi\\right| \\hat{H} \\left|\\psi\\right>}{\\left<\\psi\\right|\\left|\\psi\\right>}\n\\end{align}\nwith the molecular Hamiltonian as\n\\begin{align}\n \\hat{H} = \\hat{h} + \\hat{v} = \\sum_{pq}h^{p}_{q} \\hat{a}_{p}^{q} + \\tfrac{1}{2}\\sum_{pqrs}g^{p r}_{q s}\\hat{a}_{p r}^{q s}\\label{eq:h1}\n\\end{align}\nwhere $\\left\\{p, q, r, \\ldots \\right\\}$ indices refer to general ($all$) sin orbitals. The objective of the VQE is to minimize the expectation value of the Hamiltonian with respect to $\\left|\\psi\\right>$. \n\nTo implement this optimization problem on the quantum computer, one has to start by defining a wavefunction ansatz that can be expressed as a series of quantum gates. To this end, we express $\\left|\\psi\\right>$ as the application of a parametrized unitary operator $U(\\boldsymbol \\theta)$ to an initial state $\\left|\\boldsymbol 0 \\right>$ for $N$ qubits, with $\\boldsymbol \\theta$ representing a set of parameters varying values in $\\left(-\\pi, \\pi\\right]$. Given that trial wavefunctions, $\\left|\\psi\\right>$, are necessarily normalized, we can now write the VQE optimization problem as follows.\n\\begin{align}\n E_{\\text{VQE}} = \\min_{\\boldsymbol \\theta} \\left<\\boldsymbol 0\\right| U^\\dagger(\\boldsymbol \\theta) \\hat{H} U(\\boldsymbol \\theta) \\left|\\boldsymbol 0\\right> \\label{eq:E-vqe}\n\\end{align}\n\nThe unitary coupled cluster (UCC) ansatz is perhaps the most widely-used ansatz for VQE and given as.\n\\begin{align}\n \\left|\\psi_\\text{UCC} \\right> = e^{\\hat{A}} \\left|\\boldsymbol 0\\right>,\n\\end{align}\nwhere $\\left|\\boldsymbol 0 \\right>$ is the HF reference and $\\hat{A}$ is an anti-Hermitian combination of particle-hole excitation and de-excitation:\n\\begin{align}\n \\hat{A} &= \\hat{T} - \\hat{T}^\\dagger \\label{eq:A_op}\\\\\n \\hat{T} &= \\sum_{i}^{occ}\\sum_{a}^{vir} T_{i}^{a} \\hat{a}_a^i + \\sum_{ij}^{occ}\\sum_{ab}^{vir} T_{ij}^{ab} \\hat{a}_{ab}^{ij} + ... \\label{eq:T_exc} \n\\end{align}\nwhere $\\left\\{i, j, k, \\ldots \\right\\}$ indices refer to occupied ($occ$) spin orbitals and\n$\\left\\{a, b, c, \\ldots \\right\\}$ indices refer to virtual ($vir$) spin orbitals. 
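To make the minimization in Eq.~\\ref{eq:E-vqe} concrete, the following toy sketch carries out a classical simulation of the variational loop for a two-state Hamiltonian, preparing the state as $e^{\\hat{A}(\\theta)} \\left|\\boldsymbol 0\\right>$ with an anti-Hermitian generator. It is purely illustrative: the matrices, names, and classical optimizer are our own choices, not the UCCSD\/Qiskit workflow used later in this work. \n
\\begin{verbatim}\n
# Toy VQE loop (illustrative): minimize <0| U(theta)^T H U(theta) |0> with U = exp(A),\n
# where A is a real antisymmetric matrix playing the role of T - T^dagger.\n
import numpy as np\n
from scipy.linalg import expm\n
from scipy.optimize import minimize\n
\n
H = np.array([[0.0, -0.5],\n
              [-0.5, 1.0]])                  # toy two-state Hamiltonian\n
ref = np.array([1.0, 0.0])                   # reference state |0>\n
\n
def energy(theta):\n
    A = theta[0] * np.array([[0.0, -1.0],\n
                             [1.0,  0.0]])   # anti-Hermitian generator\n
    psi = expm(A) @ ref                      # |psi> = e^A |0>, norm preserved\n
    return psi @ H @ psi\n
\n
result = minimize(energy, x0=[0.0])\n
print(result.fun, np.linalg.eigvalsh(H)[0])  # variational vs exact ground-state energy\n
\\end{verbatim}\n
For this two-dimensional example the ansatz spans all real unit vectors, so the optimized energy coincides with the exact ground-state eigenvalue; for a UCC ansatz truncated at low excitation rank this is of course no longer guaranteed. \n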
\nThe amplitudes $T_{i}^{a}$ and $T_{ij}^{ab}$ are parameterized into rotation angles $\\boldsymbol\\theta$ that are the variationally optimized. Because the computational cost scales exponentially with the system size, the excitation operator is usually truncated at single and double excitations, resulting in UCC singles and doubles (UCCSD).\n\nVQE employs HF wavefunction as a reference, and orbitals are fixed during calculation. However, it is well-known that HF orbitals are not optimal for correlated methods. Recently, several works have proposed the orbital-optimized VQE (OO-VQE) method, in which orbitals are optimized by making the energy stationary with respect to orbital rotation parameters\\cite{sokolov2020-ooVQE,mizukami2020-ooVQE,yalouz2021-SA-ooVQE,ratini2022-WAHTOR}. This approach requires the orbital gradient of VQE energy, demanding additional computational costs. \n\n\\subsection{Correlated mean-field theory: OBMP2}{\\label{sec:obmp2}}\nLet us recap the OBMP2 theory, whose formulation details are presented in Refs.~\\citenum{tran2021improving} and ~\\citenum{OBMP2-JCP2013}. The OBMP2 approach was derived through the canonical transformation \\cite{CT-JCP2006,CT-JCP2007,CT-ACP2007,CT-JCP2009,CT-JCP2010,CT-IRPC2010}, in which an effective Hamiltonian that includes dynamic correlation effects is achieved by a similarity transformation of the molecular Hamiltonian $\\hat{H}$ using a unitary operator $e^{\\hat{A}}$ :\n\\begin{align}\n\\hat{\\bar{H}} = e^{\\hat{A}^\\dagger} \\hat{H} e^{\\hat{A}},\n\\label{Hamiltonian:ct}\n\\end{align}\nwith the anti-Hermitian excited operator $\\hat{A}$ defined as in Eq~\\ref{eq:A_op}. In OBMP2, the cluster operator $\\hat{A}$ is modeled such that including only double excitation. \n\\begin{align}\n \\hat{A} = \\hat{A}_\\text{D} = \\tfrac{1}{2} \\sum_{ij}^{occ} \\sum_{ab}^{vir} T_{ij}^{ab}(\\hat{a}_{ab}^{ij} - \\hat{a}_{ij}^{ab}) \\,, \\label{eq:op1}\n\\end{align}\nwith the MP2 amplitude \n\\begin{align}\n T_{i j}^{a b} = \\frac{g_{i j}^{a b} } { \\epsilon_{i} + \\epsilon_{j} - \\epsilon_{a} - \\epsilon_{b} } \\,, \\label{eq:amp}\n\\end{align}\nwhere $\\epsilon_{i}$ is the orbital energy of the spin-orbital $i$. Using the Baker\u2013Campbell\u2013Hausdorff transformation, the OBMP2 Hamiltonian is defined as\n\\begin{align}\n \\hat{H}_\\text{OBMP2} = \\hat{H}_\\text{HF} + \\left[\\hat{H},\\hat{A}_\\text{D}\\right]_1 + \\tfrac{1}{2}\\left[\\left[\\hat{F},\\hat{A}_\\text{D}\\right],\\hat{A}_\\text{D}\\right]_1.\n \\label{eq:h2}\n\\end{align}\nwith\n\\begin{align}\n \\hat{H}_\\text{HF} &= \\hat{F} + C = \\hat{h} + \\hat{v}_{\\text{HF}} +C \\label{eq:h1hf}\n\\end{align}\nWhere $\\hat{h}$ is the one-electron Hamiltonian defined in Eq~\\ref{eq:h1}. The HF potential $\\hat{v}^{\\text{HF}}$ and the constant $C$ is given as:\n\\begin{align}\n \\hat{v}_{\\text{HF}} &= \\sum_{pq}^{all}\\sum_{i}^{occ}\\left(g^{p i}_{q i} - g^{p i}_{i q} \\right) \\label{eq:vhf}\\\\\n C &= \\,\\, \\sum_{ij}^{occ} \\left(g^{ij}_{ji} - g_{ij}^{ij} \\right) \\,. \n\\end{align}\nIn Eq.\\ref{eq:h2}, commutators with the subscription 1, $[\\ldots]_1$, involve one-body operators and constants that are reduced from many-body operators using the cumulant approximation\\cite{cumulant-JCP1997,cumulant-PRA1998,cumulant-CPL1998,cumulant-JCP1999}. 
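The ingredients entering Eq.~\\ref{eq:amp} (canonical orbital energies and MO-basis two-electron integrals) are available from standard packages. As a point of orientation only, the following PySCF sketch builds the conventional closed-shell MP2 amplitudes for LiH; the geometry, variable names, and spatial-orbital (chemists') index ordering are our own choices, and the assembly of $\\hat{V}_{\\text{OBMP2}}$ itself is not shown. \n
\\begin{verbatim}\n
# Illustrative: canonical MP2 amplitudes T_ij^ab = (ia|jb) \/ (e_i + e_j - e_a - e_b)\n
# for LiH, built from a PySCF RHF calculation in spatial orbitals.\n
import numpy as np\n
from pyscf import gto, scf, mp, ao2mo\n
\n
mol = gto.M(atom='Li 0 0 0; H 0 0 1.6', basis='sto-6g')\n
mf = scf.RHF(mol).run()\n
\n
nocc = mol.nelectron \/\/ 2\n
norb = mf.mo_coeff.shape[1]\n
eri = ao2mo.restore(1, ao2mo.kernel(mol, mf.mo_coeff), norb)   # (pq|rs), full array\n
o, v = slice(0, nocc), slice(nocc, norb)\n
g = eri[o, v, o, v]                                            # (ia|jb)\n
eo, ev = mf.mo_energy[:nocc], mf.mo_energy[nocc:]\n
denom = (eo[:, None, None, None] + eo[None, None, :, None]\n
         - ev[None, :, None, None] - ev[None, None, None, :])\n
t2 = g \/ denom                                  # amplitudes, index order (i, a, j, b)\n
\n
e_corr = 2 * np.einsum('iajb,iajb', t2, g) - np.einsum('iajb,ibja', t2, g)\n
print(e_corr, mp.MP2(mf).kernel()[0])           # the two numbers should agree\n
\\end{verbatim}\n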
Doing some derivation, we eventually arrive at the OBMP2 Hamiltonian as follows\n\\begin{align}\n \\hat{H}_\\text{OBMP2} = & \\,\\, \\hat{H}_\\text{HF} + \\hat{V}_\\text{OBMP2} \\label{eq:h4}\n\\end{align}\nwhere $\\hat{v}_\\text{OBMP2}$ is a correlated potential composing of one-body operators. The working expression is given as\n\\begin{align}\n\\hat{V}_{\\text{OBMP2}} = & \\overline{T}_{i j}^{a b} \\left[ f_{a}^{i} \\,\\hat{\\Omega}\\left( \\hat{a}_{j}^{b} \\right) \n + g_{a b}^{i p} \\,\\hat{\\Omega} \\left( \\hat{a}_{j}^{p} \\right) - g^{a q}_{i j} \\,\\hat{\\Omega} \\left( \\hat{a}^{b}_{q} \\right) \\right] \\nonumber \\\\ &- 2 \\overline{T}_{i j}^{a b}g^{i j}_{a b} \n + \\,f_{a}^{i}\\overline{T}_{i j}^{a b}\\overline{T}_{j k}^{b c} \\,\\hat{\\Omega} \\left(\\hat{a}_{c}^{k} \\right) \\nonumber \\\\ \n &+ f_{c}^{a}T_{i j}^{a b}\\overline{T}_{i l}^{c b} \\,\\hat{\\Omega} \\left(\\hat{a}^{l}_{j} \\right) + f_{c}^{a}T_{i j}^{a b}\\overline{T}_{k j}^{c b} \\,\\hat{\\Omega} \\left(\\hat{a}^{k}_{i} \\right) \\nonumber \\\\ \n &- f^{k}_{i}T_{i j}^{a b}\\overline{T}_{k l}^{a b} \\,\\hat{\\Omega} \\left(\\hat{a}_{l}^{j} \\right)\n - f^{p}_{i}T_{i j}^{a b}\\overline{T}_{k j}^{a b} \\,\\hat{\\Omega} \\left(\\hat{a}^{p}_{k} \\right) \\nonumber \\\\ \n & + f^{k}_{i} T_{i j}^{a b}\\overline{T}_{k j}^{a d} \\,\\hat{\\Omega}\\left(\\hat{a}_{b}^{d} \\right) + f_{k}^{i}T_{i j}^{a b}\\overline{T}_{k j}^{c b} \\,\\hat{\\Omega} \\left(\\hat{a}_{a}^{c} \\right) \\nonumber \\\\ \n &- f_{c}^{a}T_{i j}^{a b}\\overline{T}_{i j}^{c d} \\,\\hat{\\Omega} \\left(\\hat{a}^{b}_{d} \\right) \\,\n - f_{p}^{a}T_{i j}^{a b}\\overline{T}_{i j}^{c b} \\,\\hat{\\Omega} \\left(\\hat{a}^{p}_{c} \\right) \\nonumber \\\\\n & - 2f_{a}^{c}{T}_{i j}^{a b}\\overline{T}_{i j}^{c b} + 2f_{i}^{k}{T}_{i j}^{a b}\\overline{T}_{k j}^{a b}. \\label{eq:vobmp2} \n\\end{align}\nwith $\\overline{T}_{ij}^{ab} = {T}_{ij}^{ab} - {T}_{ji}^{ab}$, the symmetrization operator $\\hat{\\Omega} \\left( \\hat{a}^{p}_{q} \\right) = \\hat{a}^{p}_{q} + \\hat{a}^{q}_{p}$, and the Fock matrix \n\\begin{align}\n f_p^q = h_p^q + \\sum_{i}^{occ}\\left(g^{p i}_{q i} - g^{p i}_{i q} \\right).\n\\end{align}\nNote that, for convenience, we have used Einstein's convention in Eq.~\\ref{eq:vobmp2} to present the summations over repeated indices. We rewrite $\\hat{H}_\\text{OBMP2}$ (Eqs. \\ref{eq:h2} and \\ref{eq:h4}) in a similar form to Eq. \\ref{eq:h1hf} for $\\hat{H}_\\text{HF}$ as follows:\n\\begin{align}\n \\hat{H}_\\text{OBMP2} = & \\hat{\\bar{F}} + \\bar{C} \\label{eq:h5}\n\\end{align}\nwith $\\hat{\\bar{F}} = \\bar{f}^{p}_{q} \\hat{a}_{p}^{q}$.\n$\\bar{f}^{p}_{q}$ is so-called {\\it correlated} Fock matrix and written as\n\\begin{align}\n\\bar{f}^{p}_{q} &= f^{p}_{q} + v^{p}_{q}. \\label{eq:corr-fock\n\\end{align}\n$v^{p}_{q}$ is the matrix representation of the one-body operator $\\hat{V}_{\\text{OBMP2}}$, serving as the correlation potential altering the uncorrelated HF picture. We update the MO coefficients and energies by diagonalizing the matrix $\\bar{f}^{p}_{q}$, leading to orbital relaxation in the presence of dynamic correlation effects. The OBMP2 method is implemented within a local version of PySCF\\cite{pyscf-2018}. \n\\subsection{VQE with OBMP2 reference}\n\nWe can see that both UCC and OBMP2 are formulated using a unitary exponential operator $e^{\\hat{A}}$ (Eqs~\\ref{eq:A_op} and \\ref{eq:op1}), implying that one can combine these two naturally. 
Partitioning the whole orbital space into active and inactive spaces, one can decompose excitation operators into internal and external contributions as:\n\\begin{align}\n \\hat{A} = \\hat{A}_{\\text{int}} + \\hat{A}_{\\text{ext}}.\n\\end{align}\nHere, the internal (int) defines excitations within active space, and the external (ext) is the remaining excitations involving at least one inactive space orbital. \n\nThe total energy can be written as:\n\\begin{align}\n E &= \\left< \\boldsymbol 0 \\right| e^{\\hat{A}^\\dagger} \\hat{H} e^{\\hat{A}} \\left| \\boldsymbol 0 \\right> \\\\\n &= \\left< \\boldsymbol 0 \\right| e^{\\hat{A}_{\\text{int}}^\\dagger} e^{\\hat{A}_{\\text{ext}}^\\dagger} \\hat{H} e^{\\hat{A}_{\\text{ext}}} e^{\\hat{A}_{\\text{int}}} \\left| \\boldsymbol 0 \\right>\n\\end{align}\nIn the standard active space approximation, the external contribution is zero. Here, we treat it using OBMP2 described in the subsection~\\ref{sec:obmp2}. We arrive then at \n\\begin{align}\n E &= \\left< \\boldsymbol 0 \\right| e^{\\hat{A}_{\\text{int}}^\\dagger} \\hat{H}^{\\text{act}}_{\\text{eff}} e^{\\hat{A}_{\\text{int}}} \\left| \\boldsymbol 0 \\right> + E_{\\text{OBMP2}}^{\\text{ext}} \\label{eq:ene-part}\n\\end{align}\nwhere $\\hat{H}^{\\text{act}}_{\\text{eff}}$ is the effective Hamiltonian within the active space including the bare Hamiltonian $\\hat{H}^{\\text{act}}$ and an effective one-body potential $v^{\\text{act}}_{\\text{eff}}$ from the remaining electrons outside this space: \n\\begin{align}\n \\hat{H}^{\\text{act}}_{\\text{eff}} &= \n \\hat{H}^{\\text{act}} + \\hat{v}^{\\text{act}}_{\\text{eff}},\\\\\n \\hat{v}^{\\text{act}}_{\\text{eff}} &= \\hat{v}^{\\text{act}}_{\\text{HF}} + \\hat{V}^{\\text{act}}_{\\text{OBMP2}}.\n\\end{align}\nIn the present implementation, we have not included $\\hat{V}_{\\text{OBMP2}}^{\\text{act}}$ in the active-space Hamiltonian. Instead, the active-inactive interaction is buried in the external contribution determined by: \n\\begin{align}\n E_{\\text{OBMP2}}^{\\text{ext}} = E_{\\text{OBMP2}}^{\\text{tot}} - E_{\\text{OBMP2}}^{\\text{act}} \n\\end{align} \nwith $E_{\\text{OBMP2}}^{\\text{tot}}$ the total OBMP2 energy of the whole system and $E_{\\text{OBMP2}}^{\\text{act}}$ the OBMP2 energy of the active space. \n\nCalculations start by running OBMP2 for the whole system and selecting active space using OBMP2 orbitals. VQE is then used for the active space with the effective Hamiltonian. We use UCC with doubles (UCCD) as the ansatz in the present work. The total double-excitation amplitude for the whole system $T_2$ is the summation of internal and external contributions: \n\\begin{align}\n T_2 = T_2^{\\text{int}}+T_2^{\\text{ext}}. \\label{eq:total-amp} \n\\end{align}\nThe density matrix needed for properties is then evaluated using the total amplitude. It is essential to remind that, unlike standard MP2, the OBMP2 amplitude is relaxed through a self-consistency, bypassing issues caused by the non-self-consistent nature of standard MP2. The classical calculation is carried out using PySCF\\cite{pyscf-2018}, and the quantum part is done using the Qiskit package\\cite{Qiskit}.\n\n\\section{Results and discussion}{\\label{sec:result}}\n\\subsection{Full-space VQE with OBMP2 orbitals}\n\n\\begin{figure}[t!]\n \\includegraphics[width=8cm,]{H4chain-sto.png}\n \\caption{Potential energy curves of H$_4$ chain in STO-6G. 
VQE was performed for the full orbital space.}\n \\label{fig:h4c-sto}\n\\end{figure}\n\n\\begin{figure}[t!]\n \\includegraphics[width=8cm,]{LiH-sto.png}\n \\caption{Potential energy curves of LiH in STO-6G. VQE was performed for the full orbital space.}\n \\label{fig:lih-sto}\n\\end{figure}\n\nSeveral authors have shown that orbital relaxation is important to reduce VQE errors \\cite{sokolov2020-ooVQE,mizukami2020-ooVQE,yalouz2021-SA-ooVQE,ratini2022-WAHTOR}. In these studies, the energy of VQE is minimized concerning both cluster amplitudes and orbitals, resulting in a self-consistency that demands higher computational costs than standard VQE. It is thus interesting to examine whether correlated orbital reference preoptimized using a lower-level method can improve the accuracy of ``single-shot'' VQE. Here, we performed VQE only once on OBMP2 orbitals that are relaxed in the presence of dynamical correlation at the MP2 level. In fact, since MP2 is the first-order amplitude truncation of coupled-cluster doubles (CCD), one can consider OBMP2 is an approximation to orbital-optimized CCD (OO-CCD) \\cite{mizukami2020-ooVQE,sokolov2020-ooVQE}.\n\nIn Figure~\\ref{fig:h4c-sto}, we plot the potential energy curves of the H$_4$ chain in the STO-6G basis. VQE with the UCCD solver for the entire orbital space was performed using HF and OBMP2 orbitals. While both cases are almost identical and close to the FCI reference when $R < 1.5$\\r{A}, errors become more prominent as stretching the distance. Noticeably, VQE with the OBMP2 orbitals is better than that with the HF orbitals, and its error is twice smaller than the HF counterpart. We consider another example, LiH in the STO-6G basis. As shown in Figure~\\ref{fig:lih-sto}, VQE again benefits from correlated orbital reference. In general, OBMP2 orbital reference can help to reduce VQE errors without any additional costs. Using OBMP2 orbitals as a basis may be a good approximation to OO-CCD. \n\n\\subsection{Active-space VQE with restricted OBMP2}\n\n\\begin{figure}[t!]\n \\includegraphics[width=8cm,]{H4chain-ccpvdz.png}\n \\caption{Potential energy curves of H$_4$ chain in cc-pVDZ. VQE was performed in the active space of four orbitals.}\n \\label{fig:h4c-dz}\n\\end{figure}\n\n\\begin{figure}[h!]\n \\includegraphics[width=8cm,]{H4square-ccpvdz.png}\n \\caption{Potential energy curves of H$_4$ square in cc-pVDZ. VQE was performed in the active space of four orbitals .}\n \\label{fig:h4s-dz}\n\\end{figure}\n\nThis subsection considers the active-space approximation for molecules with the ground state singlet. The restricted OBMP2 is used as the reference for VQE. \n\n\\begin{figure*}[t]\n \\includegraphics[width=14cm,]{LiH-ccpvdz.png}\n \\caption{Potential energy curves of LiH in cc-pVDZ with different active spaces for VQE.}\n \\label{fig:lih-dz}\n\\end{figure*}\n\nFigure~\\ref{fig:h4c-dz} represents potential energy curves of H$_4$ chain in cc-pVDZ. VQE was performed within an active space of four orbitals. For comparison, we also plot HF and OBMP2 curves. Due to the lack of dynamic correlation outside the active space, VQE is far from the FCI reference. While OBMP2 is better than VQE around equilibrium, its errors are significantly larger than VQE errors at long distances. Also, the VQE curve is much more parallel to FCI than OBMP2, resulting from a proper description of static correlation in VQE. 
Capturing dynamical correlation outside the active space, VQE-in-OBMP2 dramatically outperforms VQE and yields errors relative to FCI much smaller than those of VQE. Interestingly, the non-parallelity error (NPE), defined as the difference between the minimum and maximum errors, is smaller for VQE-in-OBMP2 than VQE. \n\nThe next system we consider is the H$_4$ ring as depicted in the inset of Figure~\\ref{fig:h4s-dz}. We analyze the change of energy when the four atoms move along the circumference with radius $R=1.8$\\r{A} by varying the angle $\\theta$. The square geometry corresponds to $\\theta = 90^{\\circ}$. We perform VQE calculation within the active space of four valence orbitals. For comparison, we also present the CCSD result. As we can see in Figure ~\\ref{fig:h4s-dz}, while CCSD agrees very well with the FCI reference for $\\theta$ far from $90^{\\circ}$, it fails to predict the convexity of the energy around $90^{\\circ}$. In contrast, VQE with UCC ansatz can properly describe the energy profile, which is consistent with the work of Sokolov {\\it et al.} \\cite{sokolov2020-ooVQE}. Harsha {\\it et al.} showed that the variational UCC is superior to standard CC, particularly when strong electron correlation is involved\\cite{harsha2018difference}. Due to the lack of dynamical correlation outside the active space, there is a large up-shift of the VQE curve with a maximum error of 40 mHa. When the correlation outside the active space is taken into account, VQE-in-OBMP2 can significantly improve upon standard VQE with a maximum error four times smaller than VQE. \n\nWe now examine the systematic improvement with respect to the size of the active space. To this end, we consider LiH in cc-pVDZ and gradually enlarge the active space by adding $\\sigma$-type orbitals. Here, the core orbital Li $1s$ is not included in the active space and treated at the (correlated) mean-field level. All results are summarized in Figure~\\ref{fig:lih-dz}. We can see a systematic improvement when the size of the active space increases. However, the largest active space of seven orbitals is still insufficient for standard VQE to achieve satisfying accuracy. While retaining the systematic improvement with respect to the size of active space, VQE-in-OBMP2 can dramatically reduce errors. Its NPEs are also smaller than those of standard VQE. With the largest active space considered here, VQE-in-OBMP2 can approach closer to the FCI reference with a maximum error of 6 mHa. \n\n\\subsection{Active-space VQE with unrestricted OBMP2}\n\n\\begin{figure*}[t!]\n \\includegraphics[width=8cm,valign=t]{CH_pes_dz.png}\n \\includegraphics[width=8cm,valign=t]{CH_dipole_dz.png}\n \\caption{Left: potential energy curves of CH with the ground state doublet. Right: The change of CH dipole moment. The basis set is cc-pVDZ. VQE was performed in the active space of five orbitals.}\n \\label{fig:ch-pes-dip}\n\\end{figure*}\n\nThis subsection considers two systems with the ground state doublet (e.g., having one unpaired electron): CH and H$_2$O$^+$. The unrestricted HF (UHF) and OBMP2 (UOBMP2) are used as the reference for VQE. In addition to potential energy curves, we also calculate dipole moments that are a direct measure of the density matrix. The cc-pVDZ basis set is used for all calculations. \n\nFigure~\\ref{fig:ch-pes-dip} represents the potential energy curves and dipole moments of CH from different methods and their errors relative to the CCSD reference. 
VQE is performed in the active space of five orbitals, composed of C $2s2p$ and H $1s$. Although unrestricted HF (UHF) can describe the dissociation adequately, a large NPE is observed due to a bump at the unrestricted point 1.5\\r{A}. Standard VQE performed on UHF can reduce errors and parallel the curve to the FCI reference. When VQE is performed with the UOBMP2 reference, the errors in energy dramatically decrease. We plot the change of dipole moments by stretching the C--H bond in the right panel of Figure~\\ref{fig:ch-pes-dip}. All the methods yield curves that behave similarly to the CCSD reference. VQE-in-OBMP2 predicts dipole moment closer to the CCSD reference than standard VQE for $R < 1.5 \\r{A}$, indicating the importance of dynamic correlation in the accurate prediction of density-related properties. However, its errors are still significant in the stretched regime, demanding the enlargement of the active space to get more accurate dipole moments. \n\\begin{figure*}[t!]\n \\includegraphics[width=8cm,valign=t]{H2O_pes_dz.png}\n \\includegraphics[width=8cm,valign=t]{H2O_dipole_dz.png}\n \\caption{Left: potential energy curves of H$_2$O$^+$ with the ground state doublet. Right: The change of H$_2$O$^+$ dipole moment. The basis set is cc-pVDZ. VQE was performed in the active space of six orbitals.}\n \\label{fig:h2o-pes-dip}\n\\end{figure*}\n\nLet us now consider H$_2$O$^+$ for which VQE is performed in the active space of six orbitals composed of O $2s2p$ and H $1s$. Its potential energy curves and dipole moments evaluated using different methods are depicted in Figure~\\ref{fig:h2o-pes-dip}. As for the potential energy curves, although VQE with six orbitals in the active space does not significantly improve UHF, it makes the curve more parallel to the CCSD reference, in particular, at the unrestricted point ($R \\approx 1.3 \\r{A}$). When combined with UOBMP2, VQE yields the curve much closer to the reference and further reduces NPE. As for dipole moments, the VQE-in-OBMP2 curve is overall closest to the CCSD reference. It agrees very well with CCSD for short distances ($R < 1.5\\r{A}$). However, it deviates from the reference at long distances, requiring a larger active space for VQE. \n\n\\section{Conclusion}\nWe have proposed an active space approximation in which VQE is naturally embedded in a correlated mean-field reference derived from the UCC ansatz. By partitioning the whole orbital space into active and inactive spaces, we divided the total excitation operator in the UCC ansatz into internal and external contributions. The inactive space is treated using OBMP2, a correlated mean-field theory recently developed by us. The effective Hamiltonian for the active space is derived as the sum of the bare Hamiltonian in the active space and a potential describing the internal-external interaction. Considering different systems with singlet and doublet ground states in the minimal and larger basis sets, we demonstrated the accuracy of our approach in predicting energies and dipole moments. We show that the VQE with the OBMP2 reference significantly improves upon the standard active-space VQE with the uncorrelated HF reference. \n\nOur proposed approach is generally applicable to different types of UCC ansatz, such as generalized UCC\\cite{lee2018generalized}, paired UCC\\cite{lee2018generalized,stein2014seniority}, and pair-natural orbital-UCC\\cite{kottmann2021reducing}. We believe it is useful to study real chemistry and materials on quantum computers. 
Further work is planned to develop more sophisticated schemes of active-space selection to treat systems with large active spaces. For example, one can split the active space into smaller subspaces and treat them independently using VQE as has been done in quantum embedding methods \\cite{welborn2016bootstrap,wouters2016practical,seet-jctc2016,seet-jpcl2017}. In the current work, orbitals are only optimized in OBMP2, and VQE is performed as a \"single-shot\" calculation. We are now implementing a fully self-consistent scheme, in which the total amplitude (Eq.~\\ref{eq:total-amp}) is used to construct correlated Fock (Eq.~\\ref{eq:corr-fock}). Molecular orbitals and orbital energies are then relaxed in the presence of both UCC and MP2 correlations. We hope to report these results in near-future publications.\n\n\\section*{Acknowledgments}\nThis work is supported by the Vietnam Academy of Science and Technology (VAST) under the grant number CSCL14.01\/22-23. \n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nIf a random walk is started in a known state and run for several steps, one may examine the probabilities that it is now in each possible state. A likelihood order is a partial order on the state space, so that if one state is larger than another, then the random walk is always more likely to be in the former state than the latter, after any number of steps. The main result of this paper is that for any Coxeter system $(W,S)$, the weak Bruhat order is a likelihood order for the simple random walk on $W$ generated by $S \\cup \\{1_W\\}$. \n\n\\begin{Theorem}\n\\label{the:main}\nFor any Coxeter system $(W,S)$, consider the simple random walk on $W$, starting at the identity, and at each step multiplying on the right by an element of $S$ or by the identity, each with probability $\\frac{1}{|S|+1}$. Then for any $n$, and any two states $w$ and $w'$, if $w \\leq_B w'$ then the probability that the random walk is at $w$ after $n$ steps is at least the probability that it is at $w'$. \n\\end{Theorem}\n\nIn type A, this result describes a partial order for the (appropriately lazy) adjacent transposition walk on the symmetric group. For analysis of the mixing time of this walk, see Section 4 of \\cite{Compgroups}. The adjacent transposition walk is a special case of the interchange process \\cite{Interchange}.\n\nLikelihood orders can describe the most and least likely states of a random walk. In particular, if a random walk has uniform stationary distribution, then the separation distance from the stationary distribution depends only on the probability of being at the least likely state. Thus, knowing which state is the least likely, together with a lower bound on the probability of being at that state, produces an upper bound on the separation distance mixing time. Upper bounds on separation distance give upper bounds on total variation distance, so upper bounds on total variation mixing times follow. \n\nLower bounds on mixing times are obtained by analysing a set of unlikely states. Knowledge of which states are the least likely via a likelihood order can inform the choice of such a set. For instance, see Section 4 of \\cite{Thumb}.\n\nSome results regarding likelihood orders for various random walks on the symmetric group $S_n$ are given in \\cite{MeganLikelihood} and \\cite{MeganInvolutions}. These likelihood orders are shown to hold after enough steps (for example, after $O(n^2)$ steps). 
Likelihood orders have also been considered by Diaconis and Isaacs in \\cite{DiaconisIsaacs}, where they prove that for any symmetric random walk on a group, after any even number of steps the most likely state is the initial one. They also give likelihood orders for several random walks on the cycle.\n\n\\section{Preliminaries}\n\nIn this section, some necessary background results are recalled. Section \\ref{sec:cayley} defines Cayley graphs and a useful family of their symmetries, and Section \\ref{sec:distances} discusses the sets of vertices in the Cayley graph which are closer to one end of a given edge than to the other. Section \\ref{sec:bruhat} defines the weak Bruhat order.\n\nA Coxeter system $(W,S)$ is a group $W$ together with a presentation of a certain form.\n\n\\begin{Definition}\nA \\emph{Coxeter presentation} is a presentation of the form $$\\pres{s_1,s_2,\\dots,s_n}{\\{s_i^2\\}_{i=1}^n, \\{(s_is_j)^{m_{ij}}\\}_{i \\neq j}}$$ where each $m_{ij}, i \\neq j$ is either a positive integer at least two, or $\\infty$, indicating the lack of that relation. \n\\end{Definition}\n\nA good example of a Coxeter group is the symmetric group $S_n$, which has the Coxeter presentation $$S_n = \\left\\langle s_1, s_2, \\dots, s_{n-1}\\left|\\begin{array}{lcl}\ns_i^2 & \\text{for each} & 1 \\leq i \\leq n-1 \\\\\n(s_is_{i+1})^3 & \\text{for each} & 1 \\leq i \\leq n-2 \\\\\ns_is_js_i^{-1}s_j^{-1} & \\text{if} & |i-j| > 1 \\\\\n\\end{array}\\right.\\right\\rangle.$$ \n\nIn this presentation, the generator $s_i$ represents the transposition $(i \\; i+1)$. This presentation has $m_{ij} = 3$ when $|i-j|=1$ and $m_{ij} = 2$ for $|i-j|>1$.\n\n\\subsection{Cayley graphs}\n\\label{sec:cayley}\n\nGiven a group $W$ and a generating set $S$, the Cayley graph $\\Gamma(W,S)$ is defined as follows\n\n\\begin{Definition}\nThe graph $\\Gamma(W,S)$ has a vertex for each element of $W$, and for each $w \\in W$ and each $s \\in S$, there is an edge from $w$ to $ws$. It will often be convenient to label the edge $(w,ws)$ by the generator $s$. \n\\end{Definition} \n\nIn the present setting, groups will always be generated by elements of order two, so Cayley graphs will be undirected.\n\nIt will be necessary to have the following results regarding certain symmetries of Cayley graphs.\n\n\\begin{Definition}\nConsider a Cayley graph $\\Gamma(W,S)$. For any $x \\in W$, let $L_x$ be the left multiplication map on $\\Gamma(W,S)$ which takes $w$ to $xw$, for each $w \\in W$.\n\\end{Definition}\n\n\\begin{Proposition}\nFor any Cayley graph $\\Gamma(W,S)$ and any $x \\in W$, the map $L_x$ is an automorphism of $\\Gamma(W,S)$. Further, $L_x$ preserves the edge labels of $\\Gamma(W,S)$.\n\\end{Proposition}\n\\begin{proof}\nThe map $L_{x^{-1}}$ is the inverse of $L_x$, so $L_w$ is a bijection. To check that $L_x$ preserves edges of $\\Gamma(W,S)$, observe that for each edge $(w,ws)$ of $\\Gamma(W,S)$, the image under $L_x$, $(xw,xws)$, is also an edge of $\\Gamma(W,S)$, and that these two edges have the same label.\n\\end{proof}\n\nRandom walks on the group $W$ can be understood via the Cayley graph. In particular, if a random walk is defined by at each step multiplying by an element of $S$, then consider the set of paths in $\\Gamma(W,S)$ of length $n$ which start at the identity. The probability $P^n(w)$ that the walk is at $w$ after $n$ steps is equal to the proportion of these paths which end at $w$. 
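This path-counting description is easy to check directly on a small example. The sketch below (ours, purely illustrative) builds the transition matrix of the walk on $S_3$ generated by the two adjacent transpositions together with the identity, each chosen with probability $1\/3$, and prints the distribution after a few steps; at every step the identity is weakly the most likely element and the reversal weakly the least likely, consistent with Theorem~\\ref{the:main}. \n
\\begin{verbatim}\n
# Illustrative check: probabilities P^n(w) for the walk on S_3 generated by the two\n
# adjacent transpositions and the identity, each chosen with probability 1\/3.\n
from itertools import permutations\n
import numpy as np\n
\n
states = list(permutations(range(3)))            # elements of S_3 in one-line notation\n
index = {p: i for i, p in enumerate(states)}\n
\n
def times_s(p, k):                               # right multiplication by s_k = (k, k+1)\n
    q = list(p)\n
    q[k], q[k + 1] = q[k + 1], q[k]\n
    return tuple(q)\n
\n
P = np.zeros((6, 6))\n
for p in states:\n
    P[index[p], index[p]] += 1 \/ 3               # identity step\n
    for k in (0, 1):\n
        P[index[p], index[times_s(p, k)]] += 1 \/ 3\n
\n
dist = np.zeros(6)\n
dist[index[(0, 1, 2)]] = 1.0                     # start at the identity\n
for n in range(1, 6):\n
    dist = dist @ P\n
    print(n, {p: round(dist[index[p]], 4) for p in states})\n
\\end{verbatim}\n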
Lazy walks can be considered by including $1_W$ in $S$.\n\n\\subsection{Distances in $\\Gamma(W,S)$}\n\\label{sec:distances}\n\nIt will be important to understand relative distances in the Cayley graphs of Coxeter groups. Consider the Cayley graph with the usual graph metric --- that is, each edge has length $1$, and the distance $d(w,x)$ between two vertices $w$ and $x$ is the number of edges in the shortest path connecting them. As usual in the theory of Coxeter groups, $l(w)$ will denote the distance from the identity $d(1_W,w)$. Equivalently, $l(w)$ is the fewest number of generators which can be multiplied to produce $w$.\n\nFor this section, fix $w \\in W$ and $s \\in S$, with $l(w) < l(ws)$. That $l(w) < l(ws)$ is not used in this section, but is consistent with how these results will be used in Section \\ref{sec:main}.\n\n\\begin{Definition}\n\\label{def:colours}\nLet $\\Gamma(W,S)$ be the Cayley graph of a Coxeter system $(W,S)$. For the fixed adjacent vertices $w$ and $ws$ of $\\Gamma(W,S)$, colour each vertex of $\\Gamma(W,S)$ white if it is closer to $w$ than to $ws$ and black if it is closer to $ws$ than to $w$. \n\\end{Definition}\n\n\\begin{Proposition}\nEach vertex of $\\Gamma(W,S)$ is coloured white or black, but not both.\n\\end{Proposition}\n\\begin{proof}\nEach vertex of $\\Gamma(W,S)$ has at most one colour, because it cannot be both closer to $w$ than to $ws$ and the reverse. To show that each vertex is coloured, it is sufficient to show that no vertex can be equidistant from $w$ and $ws$.\n\nThe Coxeter relations of $(W,S)$ each have even length, so $\\Gamma(W,S)$ is a bipartite graph, and hence the distances from any vertex to $w$ and $ws$ have opposite parities. Thus each vertex is coloured, completing the proof.\n\\end{proof}\n\n\\begin{Definition}\nContinuing from Definition \\ref{def:colours}, colour grey each edge which connects a white vertex to a black vertex. \n\\end{Definition}\n\n\\begin{Lemma}\n\\label{lem:greydist}\nIf $(x,xt)$ is a grey edge, with $x$ white and $xt$ black, then $d(x,w) = d(xt,ws)$. (The generator $t$ may be equal to $s$, but need not be.)\n\\end{Lemma}\n\\begin{proof}\nThe two vertices $w$ and $ws$ are adjacent, as are the vertices $x$ and $xt$. The vertex $x$ is white, and $xt$ is black. Thus, the following relations between distances hold \n\\begin{align*}\nd(x,w) + 1 &= d(x,ws) \\\\ \nd(xt,ws) + 1 &= d(xt,w) \\\\\nd(x,ws) &= d(xt,ws) \\pm 1\\\\\nd(xt,w) &= d(x,w) \\pm 1\\\\\n\\end{align*} \nAdding these four equations, each of the two $\\pm$ signs must be a plus. Thus $d(x,w) = d(xt,ws)$, as required.\n\\end{proof}\n\n\\begin{Lemma}\n\\label{lem:greypath}\nUnder the conditions of Lemma \\ref{lem:greydist}, $w^{-1}x = sw^{-1}xt$.\n\\end{Lemma}\n\\begin{proof}\nLet $\\boldsymbol{\\omega}$ be a reduced word for $w^{-1}x$. From Lemma \\ref{lem:greydist}, $l(sw^{-1}xt) = l(w^{-1}x)$. Thus, $s\\boldsymbol{\\omega} t$ is a word of length two greater than the minimum length of any equivalent word. By the deletion condition (Section 1.7 of \\cite{Humphreys}), there is a reduced word for $sw^{-1}xt$ which can be obtained by deleting two letters from $s\\boldsymbol{\\omega} t$. However, Lemma \\ref{lem:greydist} also implies that the words $s\\boldsymbol{\\omega}$ and $\\boldsymbol{\\omega} t$ are reduced, so the two letters removed from the word $s\\boldsymbol{\\omega} t$ must be the initial $s$ and the final $t$. 
Therefore $s\\boldsymbol{\\omega} t$ and $\\boldsymbol{\\omega}$ are equivalent words, so $w^{-1}x = sw^{-1}xt$.\n\\end{proof}\n\n\\begin{Proposition}\n\\label{prop:greyflip}\nThe map $L_{wsw^{-1}}$ interchanges the endpoints of any grey edge.\n\\end{Proposition}\n\\begin{proof}\nLet $(x,xt)$ be a grey edge, with $x$ white and $xt$ black. The image $L_{wsw^{-1}}(x)$ is\n\\begin{align*}\nL_{wsw^{-1}}(x) &= wsw^{-1}x \\\\\n& = wssw^{-1}xt \\text{ (by Lemma \\ref{lem:greypath})} \\\\\n& = xt \\\\\n\\end{align*}\nThe map $L_{wsw^{-1}}$ is an involution, so $L_{wsw^{-1}}(xt) = x$.\n\\end{proof}\n\n\\begin{Corollary}\nAny vertex of $\\Gamma(W,S)$ is adjacent to at most one grey edge.\n\\end{Corollary}\n\\begin{proof}\nThe function $L_{wsw^{-1}}$ is well defined. If any vertex were adjacent to more than one grey edge, then Proposition \\ref{prop:greyflip} implies that $L_{wsw^{-1}}$ is multivalued, a contradiction.\n\\end{proof}\n\nThe results in this section are not new --- they are standard facts about the geometry of $\\Gamma(W,S)$, proven again here to draw out the key pieces. The set of grey edges is commonly referred to as a wall, and could be defined as the set of edges preserved by the reflection $L_{wsw^{-1}}$.\n\n\\subsection{The weak Bruhat order}\n\\label{sec:bruhat}\n\nLet $(W,S)$ be a Coxeter system. The (right) weak Bruhat order on $(W,S)$ is defined as follows (Chapter 3 of \\cite{Bjorner}).\n\n\\begin{Definition}\nLet $w$ and $w'$ be elements of $W$. Then $w \\leq_B w'$ if there is a reduced word for $w$ which can be multiplied on the right by elements of $S$ to produce a reduced word for $w'$.\t\n\\end{Definition}\n\nAn equivalent formulation in terms of the Cayley graph of $(W,S)$ is\n\n\\begin{Definition}\nLet $w$ and $w'$ be elements of $W$. Then $w \\leq_B w'$ if there is a minimal length path in $\\Gamma(W,S)$ from the identity element $1_W$ to $w'$ which passes through $w$.\t\n\\end{Definition}\n\nThese two definitions are equivalent because edges in the Cayley graph $\\Gamma(W,S)$ correspond to multiplication on the right by an element of $S$.\n\n\\section{Proof of main result}\n\\label{sec:main}\n\nThe main result of this paper is that the weak Bruhat order arises as a likelihood order for random walks on $(W,S)$.\n\n\\begin{repTheorem}{the:main}\nFor any Coxeter system $(W,S)$, consider the simple random walk on $W$, starting at the identity, and at each step multiplying on the right by an element of $S$ or by the identity, each with probability $\\frac{1}{|S|+1}$. Then for any $n$, and any two states $w$ and $w'$, if $w \\leq_B w'$ then the probability that the random walk is at $w$ after $n$ steps is at least the probability that it is at $w'$. \n\\end{repTheorem}\n\nTo prove Theorem \\ref{the:main}, it suffices to consider $w$ and $w'$ which are adjacent --- that is, when $w' = ws$ for some $s \\in S$, with $l(w) < l(ws)$. If the result is true for adjacent vertices, then the general case follows by induction. Thus, the theorem reduces to the following proposition.\n\n\\begin{Proposition}\n\\label{prop:likelihood}\nLet $w \\in W$ and $s \\in S$, with $l(ws) > l(w)$. Then for any $n$, $P^n(ws) < P^n(w)$.\n\\end{Proposition}\n\\begin{proof}\nLet $\\cP$ be the set of paths of length $t$ from the identity $1_W$ to $w$, with each step being either the identity or an element of $S$. Let $\\cP'$ be the set of paths of length $t$ from $1_W$ to $ws$. To prove this proposition, it suffices to construct an injection from $\\cP'$ to $\\cP$. 
Consider an element $\\alpha$ of $\\cP'$, and write it as $$\\alpha = (1_W=a_0,a_1,a_2,\\dots,a_n=ws).$$\n\nThe path $\\alpha$ starts at a point closer to $w$ than to $ws$, because $l(ws) > l(w)$, and $\\alpha$ ends at $ws$, which is closer to $ws$ than to $w$. Thus at some point, $\\alpha$ crosses from a vertex closer to $w$ to a vertex closer to $ws$ --- that is, this path crosses a grey edge. Given $\\alpha$, let $i$ be the last time at which $\\alpha$ either just crossed a grey edge or just stayed in place on an endpoint of a grey edge. Using $\\oplus$ to denote concatenation, define the sequence of vertices \n\\begin{align*}\nf(\\alpha) &= (a_j)_{j=0}^{i-1} \\oplus (L_{wsw^{-1}}(a_j))_{j=i}^{n} \\\\\n &= (1_W=a_0,a_1,\\dots,a_{i-1},L_{wsw^{-1}}(a_i),L_{wsw^{-1}}(a_{i+1}),\\dots,L_{wsw^{-1}}(a_n)) = w.\\\\\n\\end{align*}\n\nThat is, the sequence $f(\\alpha)$ is identical to $\\alpha$ up until time $i-1$, and from time $i$ onwards, it is reflected by $L_{wsw^{-1}}$. \n\n\\begin{Proposition}\nThe sequence $f(\\alpha)$ is a path. That is, each two consecutive entries are either adjacent in the graph $\\Gamma(W,S)$, or equal.\n\\end{Proposition}\n\\begin{proof}\nIt must be checked that each two consecutive entries in $f(\\alpha)$ are either adjacent in the graph $\\Gamma(W,S)$, or equal. This is immediate for each consecutive pair except for $a_{i-1}$ and $L_{wsw^{-1}}(a_i)$.\n\nBy the definition of $i$, the vertex $a_i$ is a black vertex adjacent to exactly one grey edge, and $a_{i-1}$ is one of the two endpoints of that edge. From Proposition \\ref{prop:greyflip}, $L_{wsw^{-1}}(a_i)$ is either equal to $a_{i-1}$ or connected to $a_{i-1}$ by this grey edge.\n\\end{proof}\n\nThe map $L_{wsw^{-1}}$ interchanges $ws$ and $w$, so $f$ is a function from $\\cP'$ to $\\cP$. All that remains is to show that $f$ is an injection. The function $f$ is an involution, because it applies $L_{wsw^{-1}}$ to the part of the path from time $i$ onwards, the map $L_{wsw^{-1}}$ is an involution, and moving from $\\alpha$ to $f(\\alpha)$ does not change the definition of $i$ (but rather interchanges the two cases in the definition of $i$).\n\nThus $f$ is an involution from $\\cP'$ to $\\cP$, so there are at least as many paths of length $n$ from $1_W$ to $w$ as from $1_W$ to $ws$, completing the proof of Proposition \\ref{prop:likelihood}.\n\\end{proof}\n\n\\begin{Corollary}\nFor any Coxeter system $(W,S)$ and the corresponding random walk described by Theorem \\ref{the:main}, the most likely element after $n$ steps is the identity. If $W$ is finite, then the least likely element is the longest element. Here, most and least likely are not necessarily strict.\n\\end{Corollary}\n\n\\begin{Example}\nConsider the random walk on $S_4$ generated by the adjacent transpositions $(1 \\; 2)$, $(2 \\; 3)$, and $(3 \\; 4)$, as well as the identity. For any $n$, the most likely element after $n$ steps is the identity and the least likely is the reversal $(1 \\; 4)(2 \\; 3)$. Theorem \\ref{the:main} does not address the relative likelihoods of the transpositions $s = (1 \\; 2)$ and $t = (2 \\; 3)$, but both are more likely than $(1 \\; 3) = sts = tst$.\n\\end{Example}\n\n\\begin{Example}\nTake the simple random walk on the square grid $\\Z \\times \\Z$ generated by $(\\pm 1,0)$, $(0,\\pm 1)$ and $(0,0)$. 
This is a relabelling of the Cayley graph of the Coxeter group $$D_\\infty \\times D_\\infty = \\pres{s,t,u,v}{s^2,t^2,u^2,v^2,susu,svsv,tutu,tvtv}.$$ For any $n$, the most likely element after $n$ steps is the identity $(0,0)$, and the next most likely are the four adjacent vertices (which are equally likely, by symmetry). The vertex $(1,3)$ is always more likely than $(2,3)$, but Theorem \\ref{the:main} doesn't address the relative likelihoods of $(1,3)$ and $(2,2)$.\n\\end{Example}\n\n\\begin{Example}\nFinally, consider the simple random walk on a $d$--regular tree, with laziness $\\frac{1}{d+1}$. This is the Cayley graph of the free product of several copies of the group $\\Z \/ 2\\Z$. For any $n$, the most likely vertex after $n$ steps is the initial vertex, the next most likely are the adjacent vertices, then the vertices at distance two, and so on.\n\\end{Example}\n\nTheorem \\ref{the:main} may be strengthened to not require that all generators have equal probability.\n\n\\begin{Theorem}\nFor any Coxeter system $(W,S)$, consider a random walk on $W$, starting at the identity, and at each step multiplying on the right by an element $s \\in S$ or by the identity, with probabilities $p_s$ or $p_{\\text{id}}$. As long as each $p_s$ is less than $p_{\\text{id}}$, the conclusion of Theorem \\ref{the:main} holds.\n\\end{Theorem}\n\\begin{proof}\nThe only part of the proof that must be changed is the proof of Proposition \\ref{prop:likelihood}, comparing the probabilities of the states $ws$ and $w$, for $w \\in W$ and $s \\in S$ with $l(ws) > l(w)$. With this choice of $s$, divide the probability of multiplying by $1_W$ into two parts, of probabilities $p_s$ and $p_{\\text{id}} - p_s$. Where the proof of Proposition \\ref{prop:likelihood} pairs up the events of multiplying by $s$ or by $1_W$, use only the first of these parts, which is an event of equal probability to that of multiplication by $s$.\n\\end{proof}\n\n\\bibliographystyle{hplain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{I}\nHydrologists have studied air-water flow in soils, mainly using the so-called Richards approximation. At least two\nhypotheses are physically required for this model to be applicable: the water pressure in the saturated region must\nbe larger than the atmospheric pressure and all the unsaturated regions must have a boundary connected to the\nsurface. However, in many situations, these hypotheses are not satisfied and a more general two-phase flow model\nmust be considered. This work explores the limit of this general model as the viscosity of the air tends to zero,\nwhich is one of the hypotheses required in the Richards model. To that purpose we prove the existence of a weak\nsolution of the two-phase flow problem and prove estimates which are uniform in the air viscosity. In this paper, we\nassume that the air and water phases are incompressible and immiscible. The geometric domain is supposed to be\nhorizontal, homogeneous and isotropic. 
Our starting point is the following two-phase flow model, which one can\ndeduce from Darcy's law\n$$(\\cal{TP})\\left\\{\\begin{array}{ll}\n&u_t -div(k_w(u)\\nabla(p)) = s_w\\nonumber\\\\\n&(1-u)_t - div(\\Frac{1}{\\mu} k_a(u)\\nabla(p + p_c(u))) = s_a, \\nonumber\n\\end{array}\\right.$$\nwhere $u$ and $p$ are respectively the saturation and the pressure of the water phase, $k_w$ and $k_a$ are\nrespectively the relative permeabilities of the water and the air phase, $\\mu$ is the ratio between the viscosity of\nthe air phase and that of the water phase, $p_c$ is the capillary pressure, $s_w$ is an internal source term for the\nwater phase and $s_a$ is an internal source term for the air phase; these source terms are used to represent\nexchanges with the outside. We suppose in particular that the physical functions $k_w$, $k_a$ and $p_c$ only depend\non the saturation $u$ of the water phase, and that $ k_w(1) = k_a(0) = 1$. The aim of this paper is the study of the\nlimit of the two-phase flow problem as $\\mu \\downarrow 0$.\\\\\n$\\\\ $\nThe classical Richards model as formulated by the\nengineers is given by\n$$({\\cal{R}})\\left\\{\\begin{array}{ll}\n&u_t -div(k_w(u)\\nabla p) = s_w\\nonumber\\\\\n&u=p_c^{-1}(p_{atm}-p). \\nonumber\n\\end{array}\\right.$$\nwhere the properties of capillary pressure $p_c=p_c(u)$ are describes in hypothesis $(H_8)$ below. For the existence\nand uniqueness of the solution of Richards model together with suitable initial and boundary conditions as well as\nqualitative properties of the solution and methods for numerical approximations we refer to \\cite{AL}, \\cite{HW},\n\\cite{P}, \\cite{RPK}. In this article, we will show that the singular limit as $\\mu\\downarrow 0$ of the two phase\nflow problem $(\\cal{TP})$ has the form\n$$({\\cal{FBP}})\\left\\{\\begin{array}{ll}\n&u_t -div(k_w(u)\\nabla p) = s_w\\nonumber\\\\\n&u=1\\mbox{ or }\\nabla(p+p_c(u))=0\\mbox{ a.e. in }\n\\Omega\\times(0,T). \\nonumber\n\\end{array}\\right.$$\nWe remark that a solution of $({\\cal{R}})$ with $u > 0$ satisfies $({\\cal{FBP}})$. \\\\\n$\\\\ $ This paper is organized as follows. In Section 2 we present a complete mathematical formulation of the\nproblem, and state the main mathematical results, which include a precise formulation of the singular limit problem.\nWe give a sequence of regularized problems in Section 3, and prove the existence of a classical solution. In Section\n4 we present a priori estimates, which are uniform in an extra regularization parameter $\\delta$ and in the air\nviscosity $\\mu$. In Section 5, we let $\\delta\\downarrow 0$ and prove that the solution converges to a solution of\nthe two phase flow problem. 
We study its limiting behavior as the air viscosity $\\mu$ tends to zero in Section 6.\nFinally in Section 7 we propose a finite volume algorithm in a one\ndimensional context and present a variety of numerical solutions.\\\\\n\n\\section{Mathematical formulation and main results}\\label{I}\nWe consider the two-phase flow problem\n$$\n\\vgl=0\\vglb=0 \\mbox{$(S^{\\mu})$} \\left\\{ \\eqalijna{ &u_t =div\\bigg(k_w(u)\\nabla p\\bigg)+f^{\\mu}(c)\\overline s-f^{\\mu}(u)\\underline s,\n~~~~~&\\mbox{ in }Q_T,\\cr &(1-u)_t = div\\bigg(\\Frac{1}{\\mu}k_a(u)\\nabla(p+p_c(u))\\bigg)\\cr &~~~~~~~~~~~~~~+\n(1-f^{\\mu}(c))\\overline s-(1-f^{\\mu}(u))\\underline s,~~~~&\\mbox{ in }Q_T,\\cr &\\Int_{\\Omega} p(x,t)dx=0,&\\mbox{ for }t \\in (0,T),\\cr &\\nabla\np.n=0, &\\mbox{ on }\\partial \\Omega \\times (0,T),\\cr &\\nabla (p+p_c(u)).n=0, &\\mbox{ on }\\partial \\Omega \\times\n(0,T),\\cr &u(x,0)=u_0(x),&\\mbox{ for }x \\in \\Omega, } \\right. \\eqno\\eqalijnb{\\latexeqno{n1}\\cr\\nonumber\\cr\n \\latexeqno{n2} \\cr\n \\latexeqno{n4} \\cr\\latexeqno{n3} \\cr \\latexeqno{n5} \\cr \\latexeqno{CImu} \\cr\n }\n$$\nwhere $T$ is a positive constant, $Q_T:=\\Omega \\times (0,T)$ and where we suppose that\n$$\\begin{array}{lll}\n&(H_1) &\\Omega \\mbox{ is a smooth bounded domain of ${\\mathrm {I\\mkern-5.5mu R\\mkern1mu}}^N$ where the space dimension $N$ is arbitrary},\\\\\n&(H_2) &u_m\\in(0,1),\\\\\n&(H_3) &c\\in L^\\infty({\\Omega}\\times(0,T)) \\mbox{ and }u_m\\leq c \\leq 1,\\\\\n&(H_4) &u_0\\in L^\\infty({\\Omega}) \\mbox{ and }u_m\\leq u_0 \\leq 1,\\\\\n&(H_5) &\\overline s\\in L^2(\\Omega),~~\\overline s\\geq 0,~~\\underline s\\in L^2(\\Omega),~~\\underline s\\geq 0\\mbox{ and }\\Int_{\\Omega}(\\overline s(x)-\\underline s(x))dx=0,\\\\\n&(H_6) &k_w\\in C^2([0,1]),~~k_w'\\geq 0,~~k_w(0)=0,~~k_w(1)=1\\mbox{ and }k_w(u_m)>0,\\\\\n&(H_7) &k_a\\in C^2([0,1]),~~k_a'\\leq 0,~~k_a(1)=0,~~k_a(0)=1\\mbox{ and }k_a(s)>0\\mbox{ for all }s\\in[0,1),\\\\\n&(H_8) &p_c\\in C^0([0,1])\\cup C^3([0,1)),~~p_c'<0\\mbox{ and }\\sup_{s\\in[0,1)}(-k_a(s)p_c'(s))<+\\infty,\\\\\n&(H_9) &\\mu\\in(0,1].\n\\end{array}$$\nIn this model, $u$ and $p$ are respectively the saturation and the pressure of the water phase, $k_w$ and $k_a$ are\nrespectively the mobilities of the water phase and the mobility of the non-water phase and $p_c$ is the capillary\npressure. We assume in particular that the permeability functions $k_w$, $k_a$ and the capillary pressure $p_c$ only\ndepend on the saturation $u$ of the water phase. Here, we suppose that the flow of the water phase in the reservoir\nis driven by an injection term $f^\\mu(c)\\overline s$ and an extraction term $f^\\mu(u) \\underline s$ where $\\overline s$ and $\\underline s$ are given\nspace dependent functions, $c$ is the saturation of the injected fluid; if $c=1$, only water will be injected, if\n$c=0$, only air will be injected, whereas a mixture of water and air will be injected if $0 < c < 1$. The function\n$f^\\mu$ is the fractional flow of the water phase, namely\n\\begin{equation}\\label{deffmu}\nf^{\\mu}(s)=\\Frac{k_w(s)}{M^\\mu(s)}, \\mbox{ with\n}M^\\mu(s)=k_w(s)+\\Frac{1}{\\mu}k_a(s).\n\\end{equation}\nIn particular, we remark that\n\\begin{equation}\\label{fmucroissante}\nf^{\\mu}(s) \\mbox{ is non decreasing. 
}\n\\end{equation}\nNext we introduce a set of notations, which will be useful in the sequel.\n\\begin{equation}\\label{defg}\ng(s)=-\\Int_0^sk_a(\\tau)p_c'(\\tau)d\\tau,\n\\end{equation}\n\\begin{equation}\\label{defzeta}\n\\zeta(s)=\\Int_0^s\\sqrt{k_a(\\tau)}p_c'(\\tau)d\\tau,\n\\end{equation}\n\\begin{equation}\\label{Q}\n{\\cal{Q}}^\\mu(s)=\\Int_0^s f^{\\mu}(\\tau)p_c'(\\tau)d\\tau,\n\\end{equation}\nand\n\\begin{equation}\\label{R}\n{\\cal{R}}^\\mu(s)=\\Int_0^s\\Frac{k_a(\\tau)}{k_a(\\tau)+\\muk_w(\\tau)}p_c'(\\tau)d\\tau,\n\\end{equation}\nfor all $s\\in[0,1]$. This implies in particular that\n\\begin{equation}\\label{P+Q} {\\cal{R}}^\\mu(s)+{\\cal{Q}}^\\mu(s)=p_c(s)-p_c(0), \\mbox{ for all }s\\in[0,1].\n\\end{equation}\n\\begin{definition} The pair $(u^{\\mu},p^{\\mu})$ is a weak solution of Problem $(S^\\mu)$ if\n$$\\begin{array}{lll}\n&u^{\\mu}\\in L^\\infty({\\Omega}\\times(0,T)),~~~~&\\mbox{with~}0\\leq u^{\\mu}\\leq 1 \\mbox{ in }Q_T,\\\\\n&&\\\\\n&p^{\\mu}\\in L^2(0,T;H^1({\\Omega})),~~~~&\\Int_\\Op^{\\mu}(x,t)dx=0 \\mbox{~for~almost~every~} t \\in (0,T),\\\\\n&&\\\\\n&g(u^{\\mu})\\in L^2(0,T;H^1({\\Omega})),&\\\\\n\\end{array}$$\nwith\n\\begin{eqnarray}\n&\\Int_0^T\\displaystyle \\Int _{\\Omega}u^{\\mu} \\varphi_t dxdt=&\n\\Int_0^T\\displaystyle \\Int _{\\Omega}k_w(u^{\\mu})\\nablap^{\\mu}.\\nabla \\varphi dx dt\n-\\Int_0^T\\displaystyle \\Int _{\\Omega}\\bigg(f^{\\mu}(c)\\overline s-f^{\\mu}(u^{\\mu})\\underline s \\bigg) \\varphi dxdt\\nonumber\\\\\n&&-\\displaystyle \\Int _{\\Omega} u_0(x)\\varphi(x,0)dx,\\label{defsol1}\n\\end{eqnarray}and\n\\begin{eqnarray}\n&&\\Int_0^T\\displaystyle \\Int _{\\Omega}\\bigg(1-u^{\\mu}\\bigg)\\varphi_tdxdt=\\Int_0^T\\displaystyle \\Int _{\\Omega}\\Frac{1}{\\mu}k_a(u^{\\mu})\\bigg(\\nablap^{\\mu}+\\nabla p_c(u^{\\mu})\\bigg).\\nabla \\varphi dx dt\\nonumber\\\\\n&&-\\Int_0^T\\displaystyle \\Int _{\\Omega}\\bigg((1-f^{\\mu}(c))\\overline s-(1-f^{\\mu}(u^{\\mu}))\\underline s\\bigg)\n\\varphi dx dt-\\displaystyle \\Int _{\\Omega}\n\\bigg(1-u_0(x)\\bigg)\\varphi(x,0)dx,~~~~\\label{defsol2}\n\\end{eqnarray}\nfor all $\\varphi$ in ${\\cal{C}}:=\\{w\\in W_2^{2,1}(Q_T), w(.,T)=0 \\mbox{ in }\\Omega \\}.$\n\\end{definition}\n$\\\\ $ Our first result, which we prove in Section \\ref{Existencesolution}, is the following\n\\begin{theorem}\\label{thexistence}\nSuppose that the hypotheses $(H_1)-(H_9)$ are satisfied, then there exists a weak solution $(u^{\\mu},p^{\\mu})$ of Problem\n$(S^\\mu)$.\n\\end{theorem}\nNext we define the discontinuous function $\\chi$ by $$\\chi(s):=\\left\\{\\begin{array}{ll}0&\\mbox{ if }s\\in[0,1)\\\\\n1&\\mbox{ if }s=1,\\end{array}\\right.$$ as well as the graph\n$$H(s):=\\left\\{\\begin{array}{lll}&0&\\mbox{ if }s\\in[0,1)\\\\ &[0,1]&\\mbox{ if }s=1.\\end{array}\\right.$$\nThe main goal of this paper is to prove the following convergence result,\n\\begin{theorem}\\label{thlim}\nSuppose that the hypotheses $(H_1)-(H_9)$ are satisfied, then there\nexists a subsequence $((u^{\\mu_n},p^{\\mu_n}))_{n\\in N}$ of\nweak solutions of Problem $(S^{\\mu_n})$ and functions $u$, $p$, $\\hat f$ such that\n$$\\begin{array}{ll}\n&u\\in L^\\infty(Q_T),~~~0\\leq u\\leq 1\\mbox { in }Q_T,\\\\\n&\\\\\n&\\hat f\\in L^\\infty(Q_T),~~~0\\leq \\hat f\\leq 1\\mbox { in }Q_T,\\\\\n&\\\\\n&p\\in L^2(0,T;H^1({\\Omega})),\\\\\n&\\\\\n&k_a(u)\\nablap_c(u)\\in L^2({\\Omega}\\times(0,T)),\\\\\n\\end{array}$$\nand\n$$\\begin{array}{ll}\n&(u^{\\mu_n})_{n\\in N}\\mbox{ tends to }u\\mbox{ strongly in }L^2(Q_T),\\\\\n&(p^{\\mu_n})_{n\\in N}\\mbox{ tends to 
}p\\mbox{ weakly in }L^2(0,T;H^1({\\Omega})),\\\\\n\\end{array}$$\nas $\\mu_n$ tends to zero and\n\\begin{eqnarray}\n&\\Int_0^T\\displaystyle \\Int _{\\Omega} u \\varphi_tdxdt =&\\Int_0^T\\displaystyle \\Int _{\\Omega}k_w(u)\\nabla p.\\nabla \\varphi dxdt-\\Int_0^T\\displaystyle \\Int _{\\Omega}\\bigg(\\chi(c)\\overline s-\\hat f\\underline s \\bigg) \\varphi dxdt\\nonumber\\\\\n&&-\\displaystyle \\Int _{\\Omega} u_0(x)\\varphi(x,0)dx,\\label{sollim1}\n\\end{eqnarray}\nfor all $\\varphi \\in{\\cal{C}}$, where $\\hat f(x,t)\\in H(u(x,t))$\nfor $(x,t)\\in Q_T$. Moreover we also have that\n\\begin{equation}\\label{lim1}\n\\Int_0^T\\displaystyle \\Int _{\\Omega} \\bigg[k_a(u)\\bigg]^2\\bigg[\\nabla p+\\nabla p_c(u)\\bigg]^2dxdt=0\n\\end{equation}\nand\n\\begin{equation}\\label{lim2}\n\\Int_{\\Omega} p(x,t) dx=0, \\mbox{ for almost every }t\\in(0,T).\n\\end{equation}\n\\end{theorem}\nFormally, $u$ satisfies the following limit problem\n$$\n\\left\\{\n\\begin{array}{ll}\nu_t =div\\bigg(k_w(u)\\nabla p \\bigg)+ \\chi(c)\\bar s - \\hat\nf\\underline{s},\n~~&\\mbox{ in }Q_T, \\\\\n\\nabla u.n=0, &\\mbox{ on }\\partial \\Omega \\times (0,T),\\\\\nu(x,0)=u_0(x),~~&\\mbox{ for }x \\in \\Omega.\n\\end {array}\n\\right.\n$$\nMore precisely the following corollary holds\n\\begin{corollary}\\label{thcoro} Suppose that $u<1$ in ${\\cal O}=\\cup_{t\\in[\\tau,T]}\\Omega_t$, where $\\tau >0$\nand $\\Omega_t$, for $t\\in[\\tau,T]$, are smooth subdomains of ${\\Omega}$\nand ${\\cal O}$ is a smooth domain of $\\Omega\\times[\\tau,T]$ and that $u=1$ in\n$Q_T\\setminus\\overline{\\cal{O}}$ then\n$$p(x,t)=-p_c(u(x,t))+constant(t), \\mbox{ for all }(x,t)\\in {\\cal O}$$ and $u$ satisfies\n$$\n\\left\\{\n\\begin{array}{ll}\nu_t =-div\\bigg(k_w(u)\\nabla p_c(u)\\bigg)+ \\chi(c)\\bar s, ~~&\\mbox{ in } {\\cal O}, \\\\\n\\Frac{\\partial u}{\\partial n}=0,&\\mbox{ on }\\partial {\\cal O}\\cap\n\\bigg(\\partial\\Omega\\times(0,T)\\bigg),\\\\\nu=1, &\\mbox{ elsewhere on }\\partial {\\cal O},\\\\\nu(x,0)=u_0(x),&\\mbox{ for }x \\in \\Omega.\n\\end {array}\n\\right.\n$$\n\\end{corollary}\n$\\\\ $ Finally we remark that another form of the limit problem involves a parabolic equation, which is close to the\nstandard Richards equation. Indeed if we set $\\phi(s):=p_c(0)-p_c(s)$ and denote by $\\beta$ the inverse function of\n$\\phi$, the function $v:=\\phi(u)$ is a weak solution of the problem\n$$\n\\left\\{\n\\begin{array}{ll}\n\\beta(v)_t =div\\bigg(k_w(\\beta(v))\\nabla v \\bigg)+ \\chi(c)\\bar s -\n\\hat f\\underline{s},\n~~&\\mbox{ in }Q_T, \\\\\n\\nabla \\beta(v).n=0, &\\mbox{ on }\\partial \\Omega \\times (0,T),\\\\\n\\beta(v)(x,0)=u_0(x),~~&\\mbox{ for }x \\in \\Omega,\n\\end {array}\n\\right.\n$$\nwith $\\hat f\\in H(\\beta(v))$.\n\\setcounter{equation}{0}\n\\section{Existence of a solution of an approximate problem $(S^{\\mu}_{\\delta})$ of Problem\n$(S^{\\mu})$}\\label{Existencesolution}\nLet $\\delta$ be an arbitrary positive constant. 
In order to prove the\nexistence of a solution of Problem $(S^{\\mu})$ we introduce a\nsequence of regularized problems $(S^{\\mu}_{\\delta})$, namely\n$$\n\\vgl=0\\vglb=0 \\mbox{$(S^{\\mu}_{\\delta})$} \\left\\{ \\eqalijna{ &u_t~~ =\ndiv\\bigg(k_w(u)\\nabla p\\bigg)+f^{\\mu}(c_\\d)\\overline s_\\delta\\cr\n&~~~~~~~~-f^{\\mu}(u)\\bigg(\\underline s_\\delta+\\displaystyle {\\Int \\kern -0.961em -}_{\\Omega}(\\overline s_\\delta-\\underline s_\\delta )dx\\bigg),\n~~&\\mbox{ in }\\Omega \\times (0,T),\\cr &(1-u)_t\n=div\\bigg(\\Frac{1}{\\mu}k_a(u)\\nabla(p+p_c(u))\\bigg)+\\bigg(1-f^{\\mu}(c_\\d)\\bigg)\\overline s_\\delta~~\\cr\n&~~~~~~~~~~~~~~-\\bigg(1-f^{\\mu}(u)\\bigg)\\bigg(\\underline s_\\delta+\\displaystyle {\\Int \\kern -0.961em -}_{\\Omega}(\\overline s_\\delta-\\underline s_\\delta\n)dx\\bigg), &\\mbox{ in }\\Omega \\times (0,T),\\cr &\\Int_{\\Omega}\np(x,t)dx=0,&\\mbox{ for }t\\in(0,T),\\cr\n &\\nabla p.n=0, &\\mbox{ on }\\partial \\Omega\n\\times (0,T),\\cr &\\nabla (p +p_c(u)).n=0, &\\mbox{ on }\\partial\n\\Omega \\times (0,T),\\cr &u(x,0)=u_{0}^\\delta(x),& \\mbox{ for }x \\in\n\\Omega,\\cr } \\right. \\eqno\\eqalijnb{\\nonumber\\cr\\latexeqno{1d}\\cr\n\\nonumber\\cr\n \\latexeqno{2d} \\cr\n \\latexeqno{3d} \\cr\n \\latexeqno{4d} \\cr \\latexeqno{5d} \\cr\\latexeqno{CId} \\cr\n }\n$$\nwhere $u_{0}^\\delta$, $c_\\delta$, $\\overline s_\\delta$ and $\\underline s_\\delta$ are smooth functions such that $u_{0}^\\delta$ tends to $u_0$ in\n$L^2(\\Omega)$ and $c_\\delta$, $\\overline s_\\delta$ and $\\underline s_\\delta$ tend respectively to $c$, $\\overline s$ and $\\underline s$ in $L^2(Q_T)$, as\n$\\delta\\downarrow 0$. In particular we suppose that there exists a positive constant $C$ such that\n\\begin{equation}\\label{sdeltaborne}\n\\underline s_\\delta\\geq 0,~~\\overline s_\\delta\\geq 0 \\mbox{ and }\\Int_{\\Omega} \\underline s_\\delta^2+\\Int_{\\Omega} \\overline s_\\delta^2 \\leq C.\n\\end{equation}\nMoreover we suppose that $u_{0}^\\delta$, $c_\\delta$ satisfy\n\\begin{eqnarray}\n&0From the standard theory of parabolic equations, we have that\n\\begin{equation}\\label{n8f}\n|\\hat V|^{2+\\alpha,\\frac{2+\\alpha}{2}}_{Q_T}\\leq\nD_3\\bigg(|W|^{1+\\alpha,\\frac{1+\\alpha}{2}}_{Q_T}+|\\nabla W|^{1+\\alpha,\\frac{1+\\alpha}{2}}_{Q_T}\\bigg) + D_4.\\label{n8ebis}\\\\\n\\end{equation}\nMoreover defining by $\\cal L$ the parabolic operator arising in\n$(Q^2_W)$, namely\n$${\\cal {L}}(\\hat V)(x,t):=\\hat V_t-\\Delta\\psi^\\mu_\\varepsilon(\\hat V)\n-M^\\mu(\\hat V)\\nablaf^{\\mu}(\\hat V).\\nabla W\n-[f^{\\mu}(c_\\delta)-f^{\\mu}(\\hat V)]\\overline s_\\delta,$$ we remark that\n(\\ref{fmucroissante}), the property (\\ref{Cdeltaborne}) of\n$c_\\delta$ and the fact that, by (\\ref{sdeltaborne}), $\\overline s_\\delta$ is positive imply that\n\\begin{equation}\\label{calO}\n{\\cal {L}}(u_m)\\leq 0\\mbox{ and }{\\cal {L}}(1-\\delta)\\geq 0.\n\\end{equation}\nSetting $T:=T^2\\circ T^1$, the inequalities (\\ref{calO}) ensure that $T$ maps the convex set ${\\cal{K}}$ into\nitself. Moreover we deduce from (\\ref{n8f}) that $T({\\cal{K}})$ is relatively compact in ${\\cal{K}}$.\n\\\\Next, we check that $T$ is continuous. Suppose that a sequence $(V_m)_{m\\inN}$ converges to a limit $V\\in\n{\\cal{K}}$ in $C^{1+\\alpha,\\frac{1+\\alpha}{2}}(\\overline Q_T)$, as $m\\rightarrow\\infty$. 
Since $(V_m)_{m\\inN}$ is bounded\nin $C^{1+\\alpha,\\frac{1+\\alpha}{2}}(\\overline Q_T)$, it follows from (\\ref{n8e}) that the sequence\n$(W_m:=T^1(V_m))_{m\\inN}$, where $W_m$ is the solution of $(Q^1_{V_m})$, is bounded in\n$C^{1+\\alpha,\\frac{1+\\alpha}{2}}(\\overline Q_T)$, so that as $m\\rightarrow\\infty$, $W_m$ converges to the unique solution\n$W$ of Problem $(Q^1_V)$ in $C^{1+\\beta,\\frac{1+\\beta}{2}}(\\overline Q_T)$ for all $\\beta\\in(0,\\alpha)$. Moreover $W\\in\nC^{1+\\alpha,\\frac{1+\\alpha}{2}}(\\overline Q_T)$. Further it also follows from (\\ref{n8e}) that $(\\nabla\nW_m)_{m\\inN}$ is bounded in $C^{1+\\alpha,\\frac{1+\\alpha}{2}}(\\overline Q_T)$, so that the solution $\\hat V_m=T^2(W_m)$ of\nProblem $(Q^2_{W_m})$ is bounded in $C^{2+\\alpha,\\frac{2+\\alpha}{2}}(\\overline Q_T)$. Since $\\hat V_m=T^2(W_m)=T(V_m)$,\n$(T(V_m))_{m\\inN}$ converges to the unique solution $\\hat V$ of Problem $(Q^2_{W})$ in\n$C^{2+\\beta,\\frac{2+\\beta}{2}}(\\overline Q_T)$ for all $\\beta\\in(0,\\alpha)$, as $m\\rightarrow\\infty$, so that $\\hat\nV=T^2(W)=T^2\\circ T^1(V)$. Therefore we have just proved that $(T^2\\circ T^1(V_m))_{m\\inN}$ converges to $T^2\\circ\nT^1(V)$ in $C^{2+\\beta,\\frac{2+\\beta}{2}}(\\overline Q_T)$ for all $\\beta\\in(0,\\alpha)$, as $m\\rightarrow\\infty$, which\nensures the continuity of the map $T$. It follows from the Schauder fixed point theorem that there exists a solution\n$(u^{\\mu}_\\delta,{\\cal{P}}^\\mu_\\delta)$ of $(\\tilde{S}^{\\mu}_{\\delta})$ such that\n$$u^{\\mu}_{\\delta}\\in K\\cap C^{2+\\alpha,\\frac{2+\\alpha}{2}}(\\overline Q_T) \\mbox{ and ${\\cal{P}}^\\mu_{\\delta}$, $\\nabla {\\cal{P}}^\\mu_{\\delta}$, $\\in C^{1+\\alpha,\\frac{1+\\alpha}{2}}(Q_T)$,\n$\\Delta {\\cal{P}}^\\mu_{\\delta} \\in C^{\\alpha,\\frac{\\alpha}{2}}(Q_T)$.}$$ $\\\\\n$ This concludes the proof of Lemma \\ref{leexistencetildeeps}.\nMoreover we deduce from Lemma \\ref{leexistencetildeeps} the\nexistence of a solution of $(S^{\\mu}_{\\delta})$, namely\n\n\\begin{corollary}\\label{leexistenceeps}\nAssume the hypotheses $(H_1)-(H_9)$ then there exists $(u^{\\mu}_{\\delta},p^{\\mu}_{\\delta})$ solution of $(S^{\\mu}_{\\delta})$ such that\n$u^{\\mu}_{\\delta}\\in C^{2+\\alpha,\\frac{2+\\alpha}{2}}(\\overline Q_T)$,\n\\begin{equation}\\label{borneu}\nu_m\\leq u^{\\mu}_{\\delta}(x,t)\\leq 1-\\delta\n\\end{equation}\nand $p^{\\mu}_{\\delta}$, $\\nabla p^{\\mu}_{\\delta}$ $\\in C^{1+\\alpha,\\frac{1+\\alpha}{2}}(Q_T)$, $\\Delta p^{\\mu}_{\\delta} \\in\nC^{\\alpha,\\frac{\\alpha}{2}}(Q_T)$.\n\\end{corollary}\n\n\\setcounter{equation}{0}\n\\section{A priori Estimates}\nIn view of (\\ref{fmucroissante}) and (\\ref{borneu}) we deduce the following bounds\n\\begin{eqnarray}\n&&0=f^{\\mu}(0)\\leq f^{\\mu}(u^{\\mu}_{\\delta}(x,t))\\leq 1=f^{\\mu}(1),\\label{Eb1}\\\\\n&&0=f^{\\mu}(0)\\leq f^{\\mu}(c_{\\delta}(x,t))\\leq 1=f^{\\mu}(1),\\label{Eb1bis}\\\\\n&&00$. 
Moreover we have by Poincar\\'e-Wirtinger inequality\nthat\n$$\\Int_{Q_T}(p^{\\mu}_{\\delta}+{\\cal{R}}^\\mu(u^{\\mu}_{\\delta}))^2\\leq C_1\\bigg[\\Int_{Q_T}|\\nabla(p^{\\mu}_{\\delta}+{\\cal{R}}^\\mu(u^{\\mu}_{\\delta}))|^2+\n\\bigg(\\Int_{Q_T}p^{\\mu}_{\\delta}+{\\cal{R}}^\\mu(u^{\\mu}_{\\delta})\\bigg)^2 \\bigg].$$ Using\n(\\ref{3d}) and (\\ref{Eb5}), it follows that\n$$\\Int_{Q_T}(p^{\\mu}_{\\delta}+{\\cal{R}}^\\mu(u^{\\mu}_{\\delta}))^2\\leq C_1\\Int_{Q_T}|\\nabla(p^{\\mu}_{\\delta}+{\\cal{R}}^\\mu(u^{\\mu}_{\\delta}))|^2+ C_2,$$\nwhich we substitute into (\\ref{E1}) with $h=\\Frac{k_w(u_m)}{2C_1}$ to deduce, also in view of (\\ref{sdeltaborne})\nand (\\ref{Eb3}), that\n\\begin{equation}\\label{E2}\n\\Int_{Q_T}|\\nabla(p^{\\mu}_{\\delta}+{\\cal{R}}^\\mu(u^{\\mu}_{\\delta}))|^2\\leq C_3\\mbox{ and\n}\\Int_{Q_T}|p^{\\mu}_{\\delta}+{\\cal{R}}^\\mu(u^{\\mu}_{\\delta})|^2\\leq C_3.\n\\end{equation}\nFurthermore multiplying (\\ref{1d}) by $p^{\\mu}_{\\delta}$ and (\\ref{2d}) by $p^{\\mu}_{\\delta}+p_c(u^{\\mu}_{\\delta})$, adding up both\nresults and integrating on $Q_T$ we obtain\n\\begin{eqnarray}\n&-\\Int_{Q_T}(u^{\\mu}_{\\delta})_tp_c(u^{\\mu}_{\\delta})+\\Int_{Q_T}\nk_w(u^{\\mu}_{\\delta})|\\nablap^{\\mu}_{\\delta}|^2+\\Frac{1}{\\mu}k_a(u^{\\mu}_{\\delta})\\big|\\nablap^{\\mu}_{\\delta}+\\nabla\np_c(u^{\\mu}_{\\delta})\\big|^2=I,~~~~\\label{E3}\n\\end{eqnarray}\nwhere\n$$\\begin{array}{ll}\nI:=&\\Int_{Q_T}\\bigg[f^{\\mu}(c_\\delta)\\overline s_\\delta-f^{\\mu}(u^{\\mu}_{\\delta})\\bigg(\\underline s_\\delta +\\displaystyle {\\Int \\kern -0.961em -}_{\\Omega}(\\overline s_\\delta-\\underline s_\\delta )\\bigg)\\bigg]p^{\\mu}_{\\delta} dxdt\\\\\n&+\\Int_{Q_T}\\bigg[(1-f^{\\mu}(c_\\delta))\\overline s_\\delta-\\bigg(1-f^{\\mu}(u^{\\mu}_{\\delta})\\bigg)\\bigg(\\underline s_\\delta+\\displaystyle {\\Int \\kern -0.961em -}_{\\Omega}(\\overline s_\\delta-\\underline s_\\delta )\n\\bigg)\\bigg](p^{\\mu}_{\\delta}+p_c(u^{\\mu}_{\\delta}))dxdt.\n\\end{array}\n$$\nWe check below that first term on the left-hand-side of (\\ref{E3}) and $I$ are bounded. 
Denoting by ${\\cal{P}}_c$ a\nprimitive of $p_c$ we have that\n$$\\Int_{Q_T}p_c(u^{\\mu}_{\\delta})(u^{\\mu}_{\\delta})_t=\\Int_{\\Omega}\\Int_0^T\\Frac{\\partial}{\\partial\nt}\\big[{\\cal{P}}_c(u^{\\mu}_{\\delta})\\big].$$ Since ${\\cal{P}}_c$ is continuous and $u^{\\mu}_{\\delta}$ is bounded this gives\n\\begin{equation}\\label{E4}\n\\bigg|\\Int_{Q_T}p_c(u^{\\mu}_{\\delta})(u^{\\mu}_{\\delta})_tdxdt\\bigg|\\leq C_4.\n\\end{equation}\nMoreover we have using (\\ref{3d}) and (\\ref{P+Q}) that\n\\begin{eqnarray}\n&I=&\\Int_{Q_T}\\bigg(p^{\\mu}_{\\delta}+{\\cal{R}}^\\mu(u^{\\mu}_{\\delta})\\bigg)(\\overline s_\\delta-\\underline s_\\delta)\ndxdt\\\\\n&&-\\Int_{Q_T}{\\cal{R}}^\\mu(u^{\\mu}_{\\delta})\\bigg[f^{\\mu}(c_\\delta)\\overline s_\\delta-f^{\\mu}(u^{\\mu}_{\\delta})\\underline s_\\delta+\\bigg(1-f^{\\mu}(u^{\\mu}_\\delta)\\bigg)\\bigg( \\displaystyle {\\Int \\kern -0.961em -}_{\\Omega}(\\overline s_\\delta-\\underline s_\\delta) \\bigg)\\bigg] dxdt\\nonumber\\\\\n&&+\\Int_{Q_T}\\bigg[(1-f^{\\mu}(c_\\delta))\\overline s_\\delta-\\bigg(1-f^{\\mu}(u^{\\mu}_{\\delta})\\bigg)\\bigg(\\underline s_\\delta\n+\\displaystyle {\\Int \\kern -0.961em -}_{\\Omega}(\\overline s_\\delta-\\underline s_\\delta) \\bigg)\n\\bigg]\\bigg[{\\cal{Q}}^\\mu(u^{\\mu}_{\\delta})+p_c(0)\\bigg] dxdt.\\nonumber\n\\end{eqnarray}\nIn view of $(H_5)$, (\\ref{sdeltaborne}), (\\ref{Eb1}),\n(\\ref{Eb1bis}), (\\ref{Eb5}) and (\\ref{Eb6}) we obtain\n$$\nI\\leq C_5\\Int_{Q_T}|p^{\\mu}_{\\delta}+{\\cal{R}}^\\mu(u^{\\mu}_{\\delta})|^2+ C_6.\n$$\nThis together with (\\ref{E2}) yields $I\\leq C_5C_3+C_6$. Substituting this into (\\ref{E3}) and also using (\\ref{E4})\nwe obtain that\n\\begin{equation}\\label{E5}\n\\Int_{Q_T} k_w(u^{\\mu}_{\\delta})|\\nablap^{\\mu}_{\\delta}|^2+\\Frac{1}{\\mu}k_a(u^{\\mu}_{\\delta})\\big|\\nablap^{\\mu}_{\\delta}+\\nabla\np_c(u^{\\mu}_{\\delta})\\big|^2dxdt\\leq C_7,\n\\end{equation}\nwhich implies (\\ref{Est1}).\nIn view of (\\ref{Eb2}), we also deduce from (\\ref{E5}) the estimate (\\ref{Est1.1}). \\\\\nNext we prove (\\ref{Est1.2}). 
By the definition (\\ref{defg}) of $g$, we obtain from (\\ref{7d}) that\n\\begin{equation}\\label{E5b}\n-div\\bigg(M^\\mu(u^{\\mu}_{\\delta})\\nablap^{\\mu}_{\\delta}\\bigg)\n+\\Frac{1}{\\mu}\\Delta g(u^{\\mu}_{\\delta})=\\overline s_\\delta-\\underline s_\\delta-\\displaystyle {\\Int \\kern -0.961em -}_{\\Omega}(\\overline s_\\delta-\\underline s_\\delta)dx.\n\\end{equation}\nMultiplying (\\ref{E5b}) by $f^{\\mu}(u^{\\mu}_{\\delta})$ and subtracting the result from $(\\ref{1d})$ we deduce that\n\\begin{equation}\\label{E5t}\n(u^{\\mu}_{\\delta})_t=\\Frac{1}{\\mu}f^{\\mu}(u^{\\mu}_{\\delta})\\Delta\ng(u^{\\mu}_{\\delta})+div\\bigg(k_w(u^{\\mu}_{\\delta})\\nabla p^{\\mu}_{\\delta}\\bigg)\n-f^{\\mu}(u^{\\mu}_{\\delta})div\\bigg(M^\\mu(u^{\\mu}_{\\delta})\\nabla(p^{\\mu})\\bigg)+ \\overline s_\\delta\\big[f^{\\mu}(c_\\delta)-f^{\\mu}(u^{\\mu}_{\\delta})\\big].\n\\end{equation}\nMoreover using the definition (\\ref{deffmu}) of $f^{\\mu}$ and $M^\\mu$ we note that\n$$\n\\begin{array}{lll}\n&div\\bigg(M^\\mu(u^{\\mu}_{\\delta})f^{\\mu}(u^{\\mu}_{\\delta})\\nabla p^{\\mu}_{\\delta}\\bigg)&=\ndiv\\bigg(k_w(u^{\\mu}_{\\delta})\\nabla p^{\\mu}_{\\delta}\\bigg)\\\\\n&&=M^\\mu(u^{\\mu}_{\\delta})\\nabla(f^{\\mu}(u^{\\mu}_{\\delta})).\\nablap^{\\mu}_{\\delta}\n+f^{\\mu}(u^{\\mu}_{\\delta})div\\bigg(M^\\mu(u^{\\mu}_{\\delta})\\nabla(p^{\\mu})\\bigg),\n\\end{array}\n$$\nwhich we substitute into (\\ref{E5t}) to obtain\n\\begin{eqnarray}\n(u^{\\mu}_{\\delta})_t-\\Frac{1}{\\mu}f^{\\mu}(u^{\\mu}_{\\delta})\\Delta\ng(u^{\\mu}_{\\delta})- M^\\mu(u^{\\mu}_{\\delta})\\nabla(f^{\\mu}(u^{\\mu}_{\\delta})).\\nablap^{\\mu}_{\\delta}\n=\\overline s_\\delta\\big[f^{\\mu}(c_\\delta)-f^{\\mu}(u^{\\mu}_{\\delta})\\big].\\label{E6}\n\\end{eqnarray}\nWe set\n\\begin{equation}\\label{defD}\nD^\\mu(a):=p_c(a)f^{\\mu}(a)-{\\cal{Q}}^\\mu(a),\n\\end{equation}\nfor all $a\\in[0,1]$, so that by the definition (\\ref{Q}) of ${\\cal{Q}}^\\mu$ we have $\\nabla\nD^\\mu(u^{\\mu}_{\\delta})=p_c(u^{\\mu}_{\\delta})\\nabla(f^{\\mu}(u^{\\mu}_{\\delta}))$. 
Substituting this into (\\ref{E6}), which we have multiplied\nby $p_c(u^{\\mu}_{\\delta})$, we deduce that\n\\begin{equation}\\label{E7}\np_c(u^{\\mu}_{\\delta})(u^{\\mu}_{\\delta})_t -\\Frac{1}{\\mu}f^{\\mu}(u^{\\mu}_{\\delta})p_c(u^{\\mu}_{\\delta})\\Delta\ng(u^{\\mu}_{\\delta})-M^\\mu(u^{\\mu}_{\\delta})\\nabla\nD^\\mu(u^{\\mu}_{\\delta}).\\nablap^{\\mu}_{\\delta}=p_c(u^{\\mu}_{\\delta})\\overline s_\\delta\\big[f^{\\mu}(c_\\delta)-f^{\\mu}(u^{\\mu}_{\\delta})\\big].\n\\end{equation}\nMultiplying (\\ref{E5b}) by $D^\\mu(u^{\\mu}_{\\delta})$, adding the result\nto (\\ref{E7}) and also using the fact that\n$$div\\bigg(M^\\mu(u^{\\mu}_{\\delta})D^\\mu(u^{\\mu}_{\\delta})\\nablap^{\\mu}_{\\delta}\\bigg)=\nM^\\mu(u^{\\mu}_{\\delta})\\nabla D^\\mu(u^{\\mu}_{\\delta}).\\nablap^{\\mu}_{\\delta}\n+D^\\mu(u^{\\mu}_{\\delta})div\\bigg(M^\\mu(u^{\\mu}_{\\delta})\\nablap^{\\mu}_{\\delta}\\bigg),$$\nwe deduce that\n\\begin{eqnarray}\n&p_c(u^{\\mu}_{\\delta})(u^{\\mu}_{\\delta})_t-\\Frac{1}{\\mu}\\bigg(f^{\\mu}(u^{\\mu}_{\\delta})p_c(u^{\\mu}_{\\delta})-D^\\mu(u^{\\mu}_{\\delta})\\bigg)\\Delta\ng(u^{\\mu}_{\\delta})\n-div\\bigg(M^\\mu(u^{\\mu}_{\\delta})D^\\mu(u^{\\mu}_{\\delta})\\nablap^{\\mu}_{\\delta}\\bigg)\n\\nonumber\\\\\n&=p_c(u^{\\mu}_{\\delta})\\overline s_\\delta\\bigg[f^{\\mu}(c_\\delta)-f^{\\mu}(u^{\\mu}_{\\delta})\\bigg]+\nD^\\mu(u^{\\mu}_{\\delta})\\bigg(\\overline s_\\delta-\\underline s_\\delta-\\displaystyle {\\Int \\kern -0.961em -}_{\\Omega}(\\overline s_\\delta-\\underline s_\\delta)\\bigg). \\label{E8}\\end{eqnarray} Integrating (\\ref{E8})\non $Q_T$ and using the fact that the definition (\\ref{defD}) of $D^\\mu$ implies\n$$p_c(u^{\\mu}_{\\delta})f^{\\mu}(u^{\\mu}_{\\delta})-D^\\mu(u^{\\mu}_{\\delta})={\\cal{Q}}^\\mu(u^{\\mu}_{\\delta}),$$\nwe obtain\n\\begin{eqnarray}\n\\Int_{Q_T}p_c(u^{\\mu}_{\\delta})(u^{\\mu}_{\\delta})_tdxdt\n-\\Frac{1}{\\mu}\\Int_{Q_T}{\\cal{Q}}^\\mu(u^{\\mu}_{\\delta})\\Delta\ng(u^{\\mu}_{\\delta})dxdt=J,\\label{E9}\n\\end{eqnarray}\nwhere\n$$\n\\begin{array}{ll}\nJ:=&\\Int_{Q_T}p_c(u^{\\mu}_{\\delta})\\overline s_\\delta\\big[f^{\\mu}(c_\\delta)-f^{\\mu}(u^{\\mu}_{\\delta})\\big]dxdt\\\\\n&+\\Int_{Q_T}\\bigg(p_c(u^{\\mu}_{\\delta})f^{\\mu}(u^{\\mu}_{\\delta})-{\\cal{Q}}^\\mu(u^{\\mu}_{\\delta})\\bigg)\n\\bigg(\\overline s_\\delta-\\underline s_\\delta-\\displaystyle {\\Int \\kern -0.961em -}_{\\Omega}(\\overline s_\\delta-\\underline s_\\delta)\\bigg)dxdt.\n\\end{array}\n$$\nIt follows from (\\ref{Eb1}), (\\ref{Eb1bis}), (\\ref{Eb4}), (\\ref{Eb6}) and\n(\\ref{sdeltaborne}) that $|J|\\leq C_8$. Substituting this\ninto (\\ref{E9}) and also using (\\ref{E4}) we obtain that\n\\begin{equation}\\label{E11}\n0\\leq -\\Frac{1}{\\mu}\\Int_{Q_T}\\nabla{\\cal{Q}}^\\mu(u^{\\mu}_{\\delta}).\\nabla(\ng(u^{\\mu}_{\\delta}))dxdt\\leq C_9.\n\\end{equation}\nFurthermore we remark that\n$$\n\\Frac{1}{\\mu}f^{\\mu}(u^{\\mu}_{\\delta})\\geq \\Frac{k_w(u_m)}{2},\n$$\nwhich together with (\\ref{E11}) and the fact that $\\nabla {\\cal{Q}}^\\mu(u^{\\mu}_{\\delta})=f^{\\mu}(u^{\\mu}_{\\delta})\\nablap_c(u^{\\mu}_{\\delta})$\nyields\n\\begin{equation}\\label{E12}\n0\\leq -\\Int_{Q_T}\\nablap_c(u^{\\mu}_{\\delta})\\nabla(g(u^{\\mu}_{\\delta}))dxdt\\leq C_{10}.\n\\end{equation}\nBy the definition (\\ref{defzeta}) of $\\zeta$, we have $-\\nablap_c(u^{\\mu}_{\\delta})\\nabla\ng(u^{\\mu}_{\\delta})=|\\nabla\\zeta(u^{\\mu}_{\\delta})|^2$. This together with (\\ref{E12}) implies (\\ref{Est1.2}) and\n(\\ref{Est1.2bis}), which in view of (\\ref{Eb2bis}) gives (\\ref{Est1.2ter}). 
This completes the proof of Lemma\n\\ref{leest1}. $\\\\ $$\\\\ $ In what follows we give estimates of differences of space translates of $p^{\\mu}_{\\delta}$ and\n$g(u^{\\mu}_{\\delta})$. We set for $r\\in{\\mathrm {I\\mkern-5.5mu R\\mkern1mu}}^+$ sufficiently small:\n$$\\Omega_r=\\{x\\in\\Omega,~~B(x,2r)\\subset\\Omega\\}.$$\n\\begin{lemma}\\label{leest2} Let $(u^{\\mu}_{\\delta},p^{\\mu}_{\\delta})$ be a solution of Problem $(S^\\mu_\\delta)$;\nthere exists a positive constant $C$ such that\n\\begin{equation}\\label{Est2}\n\\Int_0^T\\Int_{{\\Omega}_r}\\bigg|p^{\\mu}_{\\delta}(x+\\xi,t)-p^{\\mu}_{\\delta}(x,t)\\bigg|^2(x,t)dxdt\\leq\nC\\xi^2\n\\end{equation}\nand\n\\begin{equation}\\label{Est2.2}\n\\Int_0^T\\Int_{{\\Omega}_r}\\bigg|g(u^{\\mu}_{\\delta})(x+\\xi,t)-g(u^{\\mu}_{\\delta})(x,t)\\bigg|^2dxdt\\leq\nC\\xi^2,\n\\end{equation}\nwhere $\\xi\\in{\\mathrm {I\\mkern-5.5mu R\\mkern1mu}}^N$ and $|\\xi|\\leq 2r.$\n\\end{lemma}\n{\\underline{Proof}}: The inequalities (\\ref{Est2}) and (\\ref{Est2.2}) follow from (\\ref{Est1.1}) and (\\ref{Est1.2ter})\nrespectively. $\\\\ $$\\\\ $ Next we estimate differences of time translates of $g(u^{\\mu}_{\\delta})$.\n\\begin{lemma}\\label{leest3} Let $(u^{\\mu}_{\\delta},p^{\\mu}_{\\delta})$ be a solution of Problem $(S^\\mu_\\delta)$ then\nthere exists a positive constant $C$ such that\n\\begin{equation}\\label{Est3}\n\\Int_0^{T-\\tau}\\Int_{{\\Omega}}\\big[g(u^{\\mu}_{\\delta})(x,t+\\tau)-g(u^{\\mu}_{\\delta})(x,t)\\big]^2dxdt\\leq\nC\\tau,\n\\end{equation}\nfor all $\\tau\\in(0,T)$.\n\\end{lemma}\n{\\underline{Proof}}: We set\n$$A(t):=\\Int_{\\Omega}[g(u^{\\mu}_{\\delta})(x,t+\\tau)-g(u^{\\mu}_{\\delta})(x,t)]^2dx.$$\nSince $g$ is a non decreasing Lipschitz continuous function with the Lipschitz constant $C_g$ we have that\n\\begin{eqnarray}\nA(t)&\\leq& C_g\\Int_{\\Omega}[g(u^{\\mu}_{\\delta}(x,t+\\tau))-g(u^{\\mu}_{\\delta}(x,t))][u^{\\mu}_{\\delta}(x,t+\\tau)-u^{\\mu}_{\\delta}(x,t)]dx\\nonumber\\\\\n&\\leq & C_g\\Int_{\\Omega}[g(u^{\\mu}_{\\delta}(x,t+\\tau))-g(u^{\\mu}_{\\delta}(x,t))]\\left[\\Int_t^{t+\\tau}(u^{\\mu}_\\delta)_t(x,\\theta)d\\theta\\right]dx\\nonumber\\\\\n&\\leq& C_g\\Int_{\\Omega}\\Int_t^{t+\\tau}\\bigg[g(u^{\\mu}_{\\delta}(x,t+\\tau))-g(u^{\\mu}_{\\delta}(x,t))\\bigg]\\nonumber\\\\\n&~~~&~\\bigg[div(k_w(u^{\\mu}_{\\delta})\\nablap^{\\mu}_{\\delta})+f^{\\mu}(c_\\delta)\\overline s_\\delta-f^{\\mu}(u^{\\mu}_\\delta)\n\\bigg(\\underline s_\\delta+\\displaystyle {\\Int \\kern -0.961em -}_{\\Omega}(\\overline s_\\delta(y)-\\underline s_\\delta(y))dy\\bigg)\\bigg](x,\\theta) d\\theta dx,\\nonumber\n\\end{eqnarray}\nwhere we have used (\\ref{1d}). 
Integrating by parts this gives\n\\begin{eqnarray}\n&A(t)\\leq\nC_g\\bigg\\{\\Int_t^{t+\\tau}\\Int_{\\Omega}\\bigg|k_w(u^{\\mu}_{\\delta})(x,\\theta)\\nablap^{\\mu}_{\\delta}(x,\\theta)\n\\nabla g(u^{\\mu}_{\\delta})(x,t+\\tau)\\bigg|dxd\\theta \\nonumber\\\\\n&+\\Int_t^{t+\\tau}\\Int_{\\Omega}\\bigg|k_w(u^{\\mu}_{\\delta})(x,\\theta)\\nablap^{\\mu}_{\\delta}(x,\\theta)\\nabla\ng(u^{\\mu}_{\\delta})(x,t)\\bigg|\ndxd\\theta \\nonumber\\\\\n&+\\bigg|\\Int_{\\Omega}\\bigg[g(u^{\\mu}_{\\delta})(x,t+\\tau)-g(u^{\\mu}_{\\delta})(x,t)\\bigg]K(x,t,\\tau)dx\n\\bigg|\\bigg\\},\\label{Est20}\n\\end{eqnarray}\nwhere\n\\begin{equation}\\label{defK}\nK(x,t,\\tau):=\\Int_t^{t+\\tau}\\bigg(f^{\\mu}(c_\\delta(x,\\theta))\\overline s_\\delta(x)-\nf^{\\mu}(u_\\delta(x,\\theta))\\bigg[\\underline s_\\delta(x)+\\displaystyle {\\Int \\kern -0.961em -}_{\\Omega}(\\overline s_\\delta(y)-\\underline s_\\delta(y))dy\\bigg]\\bigg)d\\theta.\n\\end{equation}\nNext we estimate the right hand side of (\\ref{Est20}). Using (\\ref{Eb2}) we have that\n\\begin{eqnarray}\n&&\\Int_t^{t+\\tau}\\Int_{\\Omega}\\bigg|k_w(u^{\\mu}_{\\delta})(x,\\theta)\\nablap^{\\mu}_{\\delta}(x,\\theta)\n\\nabla g(u^{\\mu}_{\\delta})(x,t+\\tau)\\bigg|dxd\\theta\\nonumber\\\\\n&&\\leq\n\\Frac{1}{2}\\bigg(\\Int_t^{t+\\tau}\\Int_{\\Omega}|\\nablap^{\\mu}_{\\delta}(x,\\theta)|^2dxd\\theta+\n\\Int_t^{t+\\tau}\\Int_{\\Omega}|\\nabla\ng(u^{\\mu}_{\\delta})(x,t+\\tau)|^2dxd\\theta\\bigg)\\nonumber\\\\\n&&\\leq\n\\Frac{1}{2}\\bigg(\\Int_t^{t+\\tau}\\Int_{\\Omega}|\\nablap^{\\mu}_{\\delta}(x,\\theta)|^2dxd\\theta+\n\\tau\\Int_{\\Omega}|\\nabla g(u^{\\mu}_{\\delta})(x,t+\\tau)|^2dx\\bigg).\n\\label{Est21}\n\\end{eqnarray}\nSimilarly we have that\n\\begin{eqnarray}\n&&\\Int_t^{t+\\tau}\\Int_{\\Omega}\\bigg|k_w(u^{\\mu}_{\\delta})(x,\\theta)\\nablap^{\\mu}_{\\delta}(x,\\theta)\\nabla\ng(u^{\\mu}_{\\delta})(x,t)\\bigg|dxd\\theta\\nonumber\\\\\n&&\\leq \\Frac{1}{2}\\bigg(\\Int_t^{t+\\tau}\\Int_{\\Omega}|\\nablap^{\\mu}_{\\delta}(x,\\theta)|^2dxd\\theta+ \\tau\\Int_{\\Omega}|\\nabla\ng(u^{\\mu}_{\\delta})(x,t)|^2dx\\bigg). 
\\label{Est22}\n\\end{eqnarray}\nMoreover using (\\ref{Eb1}) and (\\ref{Eb1bis}) we obtain from the definition (\\ref{defK}) of $K$ that\n$$|K(x,t,\\tau)|\\leq\n\\Int_t^{t+\\tau}\\bigg[|\\overline s_\\delta|+|\\underline s_\\delta|+\\displaystyle {\\Int \\kern -0.961em -}_{\\Omega}|\\overline s_\\delta-\\underline s_\\delta|dx\\bigg]d\\theta\\leq\n\\bigg[|\\overline s_\\delta|+|\\underline s_\\delta|+\\displaystyle {\\Int \\kern -0.961em -}_{\\Omega}|\\overline s_\\delta-\\underline s_\\delta|dx\\bigg]\\tau.$$ This together with $(\\ref{sdeltaborne})$ and the fact\nthat the function $g(u^{\\mu}_{\\delta})$ is bounded uniformly on $\\mu$ and $\\delta$ yields\n\\begin{equation}\\label{Est22.2}\n\\bigg|\\Int_{\\Omega}\\bigg[g(u^{\\mu}_{\\delta})(x,t+\\tau)-g(u^{\\mu}_{\\delta})(x,t)\\bigg]K(x,t,\\tau)\\bigg|dx\\leq \\tilde C\\tau.\n\\end{equation}\nSubstituting (\\ref{Est21}), (\\ref{Est22}) and (\\ref{Est22.2}) into (\\ref{Est20}) we deduce that\n\\begin{eqnarray}\nA(t)&\\leq\n&C_g\\bigg(\\Int_t^{t+\\tau}\\Int_{\\Omega}|\\nablap^{\\mu}_{\\delta}(x,\\theta)|^2dxd\\theta+\n\\Frac{\\tau}{2}\\Int_{\\Omega}|\\nabla\ng(u^{\\mu}_{\\delta})(x,t+\\tau)|^2dx\\nonumber\\\\\n&&+ \\Frac{\\tau}{2}\\Int_{\\Omega}|\\nabla g(u^{\\mu}_{\\delta})(x,t)|^2dx+\\tilde C\\tau\\bigg), \\nonumber\n\\end{eqnarray}\nwhich we integrate on $[0,T-\\tau]$ to obtain\n\\begin{eqnarray}\n\\Int_0^{T-\\tau}A(t)dt&\\leq &C_g \\bigg(\n\\Int_0^{T-\\tau}\\Int_t^{t+\\tau}\\Int_{\\Omega}|\\nablap^{\\mu}_{\\delta}(x,\\theta)|^2dxd\\theta\ndt+\n\\tau \\Int_0^{T}\\Int_{\\Omega}|\\nabla g(u^{\\mu}_{\\delta})|^2dxdt+\\tilde C\\tau T\\bigg)\\nonumber\\\\\n&\\leq &C_g \\bigg( \\tau\\Int_0^{T}\\Int_{\\Omega}|\\nablap^{\\mu}_{\\delta}(x,\\theta)|^2dxd\\theta + \\tau \\Int_0^{T}\\Int_{\\Omega}|\\nabla\ng(u^{\\mu}_{\\delta})|^2dxdt+\\tilde C\\tau T\\bigg).\\nonumber\n\\end{eqnarray}\nIn view of (\\ref{Est1.1}) and (\\ref{Est1.2ter}) we deduce (\\ref{Est3}), which completes the proof of Lemma\n\\ref{leest3}.\n\\setcounter{equation}{0}\n\\section{Convergence as $\\delta\\downarrow 0$.}\\label{deltavers0}\nLetting $\\delta$ tend to 0, we deduce from the estimates given in Lemmas \\ref{leest1} and \\ref{leest2} the existence of\na weak solution of Problem $(S^\\mu)$. More precisely, we have the following result,\n\\begin{lemma}\\label{ledelta=0}\nThere exists a weak solution $(u^{\\mu},p^{\\mu})$ of Problem $(S^\\mu)$, which satisfies\n\\begin{equation}\\label{Est1delta=0} \\Int_0^T\\displaystyle \\Int _{\\Omega} \\bigg[k_a(u^{\\mu})\\bigg]^2\\bigg[\\nablap^{\\mu}+\\nabla\np_c(u^{\\mu})\\bigg]^2dxdt\\leq C\\mu,\n\\end{equation}\n\\begin{equation}\\label{Est1.1delta=0}\n\\Int_0^T\\displaystyle \\Int _{\\Omega} |\\nablap^{\\mu}|^2dxdt\\leq C,\n\\end{equation}\n\\begin{equation}\\label{Est1.2delta=0}\n\\Int_0^T\\displaystyle \\Int _{\\Omega} |\\nabla g(u^{\\mu})|^2dxdt\\leq C,\n\\end{equation}\n\\begin{equation}\\label{Est2.2delta=0}\n\\Int_0^T\\Int_{{\\Omega}_r}\\big[g(u^{\\mu})(x+\\xi,t)-g(u^{\\mu})(x,t)\\big]^2dxdt\\leq\nC\\xi^2,\n\\end{equation}\nwhere $\\xi\\in{\\mathrm {I\\mkern-5.5mu R\\mkern1mu}}^N$ and $|\\xi|\\leq 2r$. 
Moreover the following estimate of differences of time translates holds\n\\begin{equation}\\label{Est3delta=0}\n\\Int_0^{T-\\tau}\\Int_{{\\Omega}}\\big[g(u^{\\mu})(x,t+\\tau)-g(u^{\\mu})(x,t)\\big]^2dxdt\\leq\nC\\tau,\n\\end{equation}\nfor all $\\tau\\in(0,T)$.\n\\end{lemma}\n{\\underline{Proof}}: We deduce from (\\ref{Est1.1}), (\\ref{Est2.2}) and (\\ref{Est3}) that there exist functions $\\hat g^\\mu$ and\n$p^\\mu$ and a subsequence $((u^{\\mu}_{\\delta_n},p^{\\mu}_{\\delta_n}))_{n\\in N}$ of weak solutions of Problem $(S^{\\mu}_{{\\delta_n}})$ such\nthat\n\\begin{eqnarray}\n&&(g(u^{\\mu}_{\\delta_n}))_{n\\in N}\\mbox{ tends to }\\hat g^\\mu\\mbox{ strongly in }L^2(Q_T), ~~~~~~\\label{borneudelta=0.a}\n\\\\\n&&(p^{\\mu}_{\\delta_n})_{n\\in N}\\mbox{ tends to }p^\\mu\\mbox{ weakly in }L^2(0,T;H^1({\\Omega})),\\nonumber\n\\end{eqnarray}\nas $\\delta_n$ tends to zero. Thus for a subsequence, which we denote again by $\\delta_n$, we have that\n\\begin{equation}\\label{borneudelta=0.b}\n(g(u^{\\mu}_{\\delta_n}))_{n\\in N}\\mbox{ tends to }\\hat g^\\mu\\mbox{ for almost }(x,t)\\in Q_T.\n\\end{equation}\nUsing the fact that g is bijective we deduce that\n\\begin{equation}\\label{borneudelta=0.c} (u^{\\mu}_{\\delta_n})_{n\\in N} \\mbox{ tends to $u^\\mu:=g^{-1}(\\hat g^\\mu)$ strongly in $L^2(Q_T)$ and almost everywhere in }Q_T,\n\\end{equation}\nas $\\delta_n$ tends to zero. Moreover we have in view of (\\ref{Est1.2ter}) and (\\ref{borneudelta=0.a}) that $\\nabla\ng(u^{\\mu}_{\\delta_n})$ tends to $\\nabla g(u^{\\mu})$ weakly in $L^2(Q_T)$ as $\\delta_n\\downarrow 0$, so that by the\ndefinition (\\ref{defg}) of $g$\n\\begin{equation}\\label{cvfaible1}\nk_a(u^{\\mu}_{\\delta_n})\\nabla p_c(u^{\\mu}_{\\delta_n}) \\mbox{ tends to $k_a(u^{\\mu})\\nabla p_c(u^{\\mu})$ weakly in $L^2(Q_T)$ as\n$\\delta_n\\downarrow 0$.}\n\\end{equation}\nLetting $\\delta_n$ tend to 0 in (\\ref{borneu}) we deduce that\n\\begin{equation}\\label{borneudelta=0.1}\nu_m\\leq u^{\\mu}(x,t)\\leq 1. \\end{equation} Moreover we deduce from\n(\\ref{3d}) that\n\\begin{equation}\\label{delta=0.2}\n\\Int_\\Op^{\\mu}(x,t)dx=0, \\mbox{ for almost every } t \\in (0,T).\n\\end{equation} Multiplying (\\ref{1d}) by $\\varphi\\in{\\cal{C}}$,\nintegrating by parts and letting $\\delta_n$ tend to 0 we obtain\n\\begin{eqnarray}\n&\\Int_0^T\\displaystyle \\Int _{\\Omega} u^\\mu \\varphi_tdxdt =&\\Int_0^T\\displaystyle \\Int _{\\Omega}k_w(u^\\mu)\\nabla p^\\mu.\\nabla \\varphi dxdt-\\Int_0^T\\displaystyle \\Int _{\\Omega}\\bigg( f^\\mu(c)\\overline s- f^\\mu(u^\\mu)\\underline s \\bigg) \\varphi dxdt\\nonumber\\\\\n&&-\\displaystyle \\Int _{\\Omega} u_0(x)\\varphi(x,0)dx,\\label{delta=0.3}\n\\end{eqnarray}\nwhere we have used that $u_{0}^\\delta$ tends to $u_0$ in $L^2(\\Omega)$ and that $c_\\delta$, $\\overline s_\\delta$ and $\\underline s_\\delta$ tend\nrespectively to $c$, $\\overline s$ and $\\underline s$ in $L^2(Q_T)$ as $\\delta\\downarrow 0$. 
Similarly, multiplying (\\ref{2d}) by\n$\\varphi\\in{\\cal{C}}$, integrating by parts and letting $\\delta_n$ tend to 0 we deduce that\n\\begin{eqnarray}\n&&\\Int_0^T\\displaystyle \\Int _{\\Omega}\\bigg(1-u^{\\mu}\\bigg)\\varphi_tdxdt=\\Frac{1}{\\mu}\\Int_0^T\\displaystyle \\Int _{\\Omega}\\bigg(k_a(u^{\\mu})\\nablap^{\\mu}+\\nabla g(u^{\\mu})\\bigg).\\nabla \\varphi dx dt\\nonumber\\\\\n&&-\\Int_0^T\\displaystyle \\Int _{\\Omega}\\bigg((1-f^{\\mu}(c))\\overline s-(1-f^{\\mu}(u^{\\mu}))\\underline s\\bigg) \\varphi dx dt-\\displaystyle \\Int _{\\Omega}\n\\bigg(1-u_0(x)\\bigg)\\varphi(x,0)dx,~~\\label{delta=0.4}\n\\end{eqnarray}\nwhich since $\\nabla g(u^{\\mu})=k_a(u^{\\mu})\\nabla p_c(u^{\\mu})$ coincides with (\\ref{defsol2}). Next we prove\n(\\ref{Est1delta=0}). We first check that\n\\begin{equation}\\label{cvfaible2}\nk_a(u^{\\mu}_{\\delta_n})\\nabla p^{\\mu}_{\\delta_n}\\mbox{ tends to }k_a(u^{\\mu})\\nabla p^{\\mu} \\mbox{ weakly in }L^2(Q_T),\n\\end{equation}\nas $\\delta_n$ tends to 0. Let $\\varphi\\in L^2(Q_T)$, we have that\n\\begin{eqnarray}\n\\bigg|\\Int_{Q_T} \\bigg(k_a(u^{\\mu}_{\\delta_n})\\nabla p^{\\mu}_{\\delta_n}-k_a(u^{\\mu})\\nabla p^{\\mu}\\bigg)\\varphi ~dxdt\\bigg| \\leq\n|I^1_{\\delta_n}|+|I^2_{\\delta_n}|,\\label{delta=0.4bis}\n\\end{eqnarray}\nwhere\n$$\nI^1_{\\delta_n}:= \\Int_{Q_T} \\bigg(k_a(u^{\\mu}_{\\delta_n})-k_a(u^{\\mu})\\bigg) \\nabla p^{\\mu}_{\\delta_n}\\varphi~dxdt\n$$and\n$$I^2_{\\delta_n}=\\Int_{Q_T}\nk_a(u^{\\mu})\\varphi\\bigg(\\nabla p^{\\mu}_{\\delta_n} -\\nabla p^{\\mu}\\bigg)~dxdt.\n$$\nUsing the fact that $\\nabla p^{\\mu}_{\\delta_n}$ converges to $\\nabla p^{\\mu}$ weakly in $L^2(Q_T)$ as $\\delta_n\\downarrow\n0$, we deduce, since $k_a(u^{\\mu})\\varphi\\in L^2(Q_T)$, that\n\\begin{equation}\\label{delta=0.4ter}\n\\mbox{ $|I^2_{\\delta_n}|$ tends to $0$ as $\\delta_n\\downarrow 0$.}\n\\end{equation}\nMoreover we have by (\\ref{Est1.1}) that\n\\begin{eqnarray}\n|I^1_{\\delta_n}|&&\\leq \\bigg(\\Int_{Q_T} \\bigg|k_a(u^{\\mu}_{\\delta_n})-k_a(u^{\\mu})\\bigg|^2\\varphi^2dxdt\\bigg)^{1\/2}\n\\bigg(\\Int_{Q_T}|\\nabla p^{\\mu}_{\\delta_n}|^2dxdt\\bigg)^{1\/2} \\nonumber\\\\&&\\leq C\\bigg(\\Int_{Q_T}\n\\bigg|k_a(u^{\\mu}_{\\delta_n})-k_a(u^{\\mu})\\bigg|^2\\varphi^2dxdt\\bigg)^{1\/2}.\\nonumber\n\\end{eqnarray}\nSince $\\bigg|k_a(u^{\\mu}_{\\delta_n})-k_a(u^{\\mu})\\bigg|^2\\varphi^2\\leq 4\\varphi^2$ and since $k_a(u^{\\mu}_{\\delta_n})$ tends\nto $k_a(u^{\\mu})$ almost everywhere, we deduce from the Dominated Convergence Theorem that $I^1_{\\delta_n}$ tends to\n$0$ as $\\delta_n\\downarrow 0$. This with (\\ref{delta=0.4ter}) implies (\\ref{cvfaible2}), which with\n(\\ref{cvfaible1}) gives that\n\\begin{equation}\\label{cvfaible3}\nk_a(u^{\\mu}_{\\delta_n})\\bigg[\\nabla p^{\\mu}_{\\delta_n}+\\nabla p_c(u^{\\mu}_{\\delta_n})\\bigg]\\mbox{ tends to\n}k_a(u^{\\mu})\\bigg[\\nabla p^{\\mu}+\\nabla p_c(u^{\\mu})\\bigg] \\mbox{ weakly in }L^2(Q_T).\n\\end{equation}\nThe functional $v\\mapsto \\Int_{Q_T}v^2dxdt$ is convex and lower semi continuous from $L^2(Q_T)$ to $\\overline R$\ntherefore it is also weakly l.s.c. 
(see \\cite{B} Corollary III.8) and thus we deduce from (\\ref{Eb2bis}),\n(\\ref{Est1}) and (\\ref{cvfaible3}) that\n$$\n\\begin{array}{lll}\n\\Int_{Q_T}\\bigg[k_a(u^{\\mu})\\bigg]^2\\bigg[\\nabla p^{\\mu} +\\nabla p_c(u^{\\mu})\\bigg]^2dxdt\n&\\leq& \\liminf_{\\delta_n\\downarrow 0\n}\\Int_{Q_T}\\bigg[k_a(u^{\\mu}_{\\delta_n})\\bigg]^2\\bigg[\\nabla p^{\\mu}_{\\delta_n} +\\nabla\np_c(u^{\\mu}_{\\delta_n})\\bigg]^2dxdt\\\\\n&\\leq& \\liminf_{\\delta_n\\downarrow 0}\\Int_{Q_T}k_a(u^{\\mu}_{\\delta_n})\\bigg[\\nabla p^{\\mu}_{\\delta_n} +\\nabla\np_c(u^{\\mu}_{\\delta_n})\\bigg]^2dxdt \\\\\n&\\leq &C\\mu,\n\\end{array}\n$$ which coincides with (\\ref{Est1delta=0}). Finally, we deduce\nrespectively from (\\ref{Est1.1}), (\\ref{Est1.2ter}), (\\ref{Est2.2}) and (\\ref{Est3}) the estimates\n(\\ref{Est1.1delta=0}), (\\ref{Est1.2delta=0}), (\\ref{Est2.2delta=0}) and (\\ref{Est3delta=0}). This concludes the\nproof of Lemma \\ref{ledelta=0}.\n\n\\setcounter{equation}{0}\n\\section{Convergence as $\\mu\\downarrow 0$.}\\label{muvers0}\nThe goal of this section is to prove Theorem \\ref{thlim}. We first deduce from the estimates (\\ref{Est1.1delta=0}),\n(\\ref{Est2.2delta=0}) and (\\ref{Est3delta=0}) that there exists a couple of functions $(u,p)$ and a subsequence\n$((u^{\\mu_n},p^{\\mu_n}))_{n\\in N}$ such that $$\\begin{array}{ll}\n&(u^{\\mu_n})_{n\\in N}\\mbox{ tends to }u\\mbox{ strongly in }L^2(Q_T)\\mbox{ and almost everywhere in }Q_T ,\\\\\n&(p^{\\mu_n})_{n\\in N}\\mbox{ tends to }p\\mbox{ weakly in }L^2(0,T;H^1({\\Omega})),\\\\\n\\end{array}$$\nas $\\mu_n$ tends to zero. Moreover since\n$$0\\leq f^{\\mu_n}(u^{\\mu_n})\\leq 1,$$\nthere exists a function $\\hat f\\in L^2(Q_T)$ with $0\\leq \\hat f\\leq 1$ and a subsequence\n$(f^{\\mu_{n_m}}(u^{\\mu_{n_m}}))_{n_m\\in N}$ of $(f^{\\mu_{n}}(u^{\\mu_{n}}))_{n\\in N}$ such that\n$(f^{\\mu_{n_m}}(u^{\\mu_{n_m}}))_{n_m\\in N}$ tends to $\\hat f$ weakly in $L^2(Q_T)$ as $\\mu_{n_m}$ tends to zero.\nMoreover we deduce respectively from (\\ref{borneudelta=0.1}) and (\\ref{delta=0.2}) that $0\\leq u\\leq 1$ and that\n$$\\Int_\\Omega p(x,t)dx=0,\\mbox{ for almost every }t\\in(0,T),$$\nwhich gives (\\ref{lim2}). As it is done in Section \\ref{muvers0} in the proof of (\\ref{Est1delta=0}), one can first\ncheck that\n$$\nk_a(u^{\\mu_{n_m}})(\\nabla p^{\\mu_{n_m}} +\\nabla p_c(u^{\\mu_{n_m}}))\\mbox{ tends to }k_a(u)(\\nabla p +\\nabla\np_c(u))\\mbox{ weakly in }L^2(Q_T),\n$$\nas $\\mu_{n_m}\\downarrow 0$ and then deduce from (\\ref{Est1delta=0}) the estimate (\\ref{lim1}). Furthermore letting $\\mu_{n_m}$ tends to zero into\n(\\ref{delta=0.3}) we obtain, since $\\lim_{\\mu_{n_m}\\downarrow 0}f^{\\mu_{n_m}}(s)=\\chi(s)$ for all $s\\in[0,1]$, that\n$$\n\\begin{array}{ll}\n\\Int_0^T\\displaystyle \\Int _{\\Omega} u \\varphi_tdxdt =&\\Int_0^T\\displaystyle \\Int _{\\Omega}k_w(u)\\nabla p.\\nabla \\varphi dxdt-\\Int_0^T\\displaystyle \\Int _{\\Omega}\\bigg(\\chi(c)\\overline s-\\hat f\\underline s \\bigg) \\varphi dxdt\\nonumber\\\\\n&-\\displaystyle \\Int _{\\Omega} u_0(x)\\varphi(x,0)dx,\n\\end{array}\n$$\nwhich coincides with (\\ref{sollim1}) and concludes the proof of Theorem \\ref{thlim}.\n\n\\setcounter{equation}{0}\n\\section{Numerical simulations}\n\\subsection{The saturation equation and the numerical algorithm}\nIn this section we present numerical simulations in one space dimension. To that purpose we apply the finite volume\nmethod, which we present below. 
To begin with, we rewrite the equations (\\ref{n1}) and (\\ref{n2}) in the case that\n$\\Omega=(0,1)$; this gives for $(x,t)\\in(0,1)\\times(0,T)$\n\\begin{eqnarray}\n&&u^\\mu_t =\\partial_x\\bigg(k_w(u^\\mu)\\partial_x\np^\\mu\\bigg)+f^{\\mu}(c)\\overline s-f^{\\mu}(u^\\mu)\\underline s, ~~\\label{D1} \\\\\n&&(1-u^\\mu)_t =\n\\partial_x\\bigg(\\Frac{1}{\\mu}k_a(u^\\mu)\\partial_x(p^\\mu+p_c(u^\\mu))\\bigg)+ (1-f^{\\mu}(c))\\overline s-(1-f^{\\mu}(u^\\mu))\\underline s.~~\\label{D2}\n\\end{eqnarray}\nAdding up both equations and using the boundary conditions (\\ref{n3}) and (\\ref{n5}) we obtain\n\\begin{equation}\\label{D3}\n\\partial_x p^\\mu=-\\Frac{k_a(u^\\mu)}{k_a(u^\\mu)+\\muk_w(u^\\mu)}\\partial_x(p_c(u^\\mu)).\n\\end{equation}\nSubstituting (\\ref{D3}) into (\\ref{D1}) yields\n\\begin{equation}\\label{D4}\nu^\\mu_t\n=-\\partial_x\\bigg[f^{\\mu}(u^\\mu)\\Frac{k_a(u^\\mu)}{\\mu}\\partial_x(p_c(u^\\mu))\\bigg]+f^{\\mu}(c)\\overline s-f^{\\mu}(u^\\mu)\\underline s.\n\\end{equation}\nMoreover we deduce from (\\ref{D3}) and the definition (\\ref{R}) of ${\\cal{R}}^\\mu$ that $\\partial_x\np^\\mu=-\\partial_x({\\cal{R}}^\\mu(u^\\mu))$, so that in view of (\\ref{n4}) we have\n\\begin{equation}\\label{D5}\np^\\mu(x,t)=-{\\cal{R}}^\\mu(u^\\mu)(x,t)+\\Int_0^1{\\cal{R}}^\\mu(u^\\mu)(y,t)dy.\n\\end{equation}\nIn the sequel, we compare numerically the solution $u^\\mu$ of (\\ref{D4}) with the solution $u$ of the limit equation\nin the case that $u<1$, namely\n\\begin{equation}\\label{D5.b}\nu_t =-\\partial_x\\bigg(k_w(u)\\partial_xp_c(u)\\bigg)+ \\chi(c)\\bar s.\n\\end{equation}\nWe discretize the time evolution equation (\\ref{D4}) together with the initial condition and the homogeneous Neumann\nboundary condition. The time explicit finite volume scheme is defined by the following equations in which ${\\cal{K}}>0$ and\n${\\cal{J}}>0$ denote respectively the time and the space step. 
\\\\\n(i) The discrete initial condition is given for $i\\in\\{0,...,[1\/{\\cal{J}}]\\}$ by\n\\begin{equation}\\label{Condinitialdiscre}\n[U^\\mu]_i^{0}=u^\\mu(i{\\cal{J}},0).\n\\end{equation}\n(ii) For $i\\in\\{0,...,[1\/{\\cal{J}}]\\}$ and for $n\\in\\{0,...,[T\/{\\cal{K}}]\\}$ the discrete equation is given by\n\\begin{eqnarray}\n\\Frac{1}{{\\cal{K}}}\\bigg([U^\\mu]_i^{n+1}-[U^\\mu]_i^{n}\\bigg)&=&\n[F^\\mu]_{i+1}^n-[F^\\mu]_i^n\n+f^{\\mu}(C_i^n)\\overline{S}_i^n-f^{\\mu}([U^\\mu]_i^n)\\underline{S}_i^n,\\label{D6}\n\\end{eqnarray}\nwhere\n$$[F^\\mu]_i^n=-\\Frac{1}{{\\cal{J}}}\\bigg(p_c([U^\\mu]_{i+1}^n)-p_c([U^\\mu]_{i}^n\\bigg)\n\\Frac{k_w([U^\\mu]_{i+1}^n)k_a([U^\\mu]_{i}^n)}{\\muk_w([U^\\mu]_{i+1}^n)+k_a([U^\\mu]_{i}^n)}.$$\n\n\\noindent(iii) For $n\\in\\{0,...,[T\/{\\cal{K}}]\\}$ the discrete Neumann condition is defined by\n\\begin{equation}\\label{Condborddiscre}\n[F^\\mu]_0^{n}=0 \\mbox{ and } [F^\\mu]_{[1\/{\\cal{J}}]}^{n}=0.\n\\end{equation}\nThe numerical scheme (\\ref{Condinitialdiscre})-(\\ref{Condborddiscre}) allows to build an approximate solution,\n$u_{{\\cal{J}},{\\cal{K}}}:[0,1]\\times[0,T]\\rightarrow {\\mathrm {I\\mkern-5.5mu R\\mkern1mu}}$ for all $i\\in\\{0,...,[1\/{\\cal{J}}]\\}$ and all $n\\in\\{0,...,[T\/{\\cal{K}}]\\}$, which is\ngiven by\n\\begin{equation}\\label{schema}\nu_{{\\cal{J}},{\\cal{K}}}(x,t)=u_i^n,\\mbox{ for all }x\\in(i{\\cal{J}},(i+1){\\cal{J}}]\\mbox{ and\nfor all }t\\in (n{\\cal{K}},(n+1){\\cal{K}}].\n\\end{equation}\nIn order to also compute the pressures, we propose the following\ndiscrete equation corresponding to (\\ref{D5})\n\\begin{equation}\\label{D7}\n[P^\\mu]_i^{n}=-{\\cal{R}}([U^\\mu]_i^{n})+{\\cal{J}}\\Sigma_{j=1}^{[1\/{\\cal{J}}]}{\\cal{R}}([U^\\mu]_j^{n}).\n\\end{equation}\nFinally, setting $p^\\mu_g(x,t)=p^\\mu(x,t)+p_c(u^\\mu)(x,t)$ we\ndeduce that\n\\begin{equation}\\label{D8}\n([P_g^\\mu])_i^{n}=-{\\cal{R}}([U^\\mu]_i^{n}+p_c([U^\\mu]_i^{n})\n+{\\cal{J}}\\Sigma_{j=1}^{[1\/{\\cal{J}}]}{\\cal{R}}([U^\\mu]_j^{n}),\n\\end{equation}\nfor all $i\\in\\{0,...,[1\/{\\cal{J}}]\\}$ and all $n\\in\\{0,...,[T\/{\\cal{K}}]\\}$.\nSimilarly we propose a finite volume scheme corresponding to the\nequation (\\ref{D5.b}), namely\n\\begin{eqnarray}\n\\Frac{1}{{\\cal{K}}}\\bigg(U_i^{n+1}-U_i^{n}\\bigg)= F_{i+1}^n-F_i^n\n+\\chi(C_i^n)\\overline{S}_i^n\\label{D6},\n\\end{eqnarray}\nwhere\n$$F_i^n=-\\Frac{1}{{\\cal{J}}}\\bigg(p_c(U_{i+1}^n)-p_c(U_{i}^n\\bigg)\nk_w(U_{i+1}^n),$$ for all\n$(i,n)\\in\\{0,...,[1\/{\\cal{J}}]\\}\\times\\{0,...,[T\/{\\cal{K}}]\\}$.\n\n\\subsection{Numerical tests}\nFor the numerical computation we take $\\mu=10^{-8}$, $p_c(z)=0,1\\sqrt{1-z}$, $k_a(z)=(1-z)^2$, $k_w(z)=\\sqrt z$ and\n$\\overline s(z)=\\delta_0(z)$, $\\underline s(z)=\\delta_1(z)$, where $\\delta_a$ is the Dirac function at the point $a$. Furthermore\n$u^\\mu$ is\ngiven by the line with crosses, $p^\\mu_g$ is given by the lines with diam and the limit function $u$ corresponds to the continuous line.\\\\\n$\\\\ $ \\underline{\\bf{First test case}:} The case that $c=0,7$ and\n$u_0=1$ on $[0,1]$. We obtain at $t=0,01$ the following pictures\n\\begin{center}\n\\includegraphics[width=0.5\\hsize]{test1articlet_0_01}\\\\\n\\it{Figure 1 : t=0,01}\n\\end{center}\nWe note that, for $\\mu$ small, the functions $u$ and $u^\\mu$ are very close. Here we only start with water and\ninject a mixture of water and air. The air immediately invades the whole domain. 
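A minimal Python sketch of the explicit scheme (i)-(iii) for this first test case is given below; it is an illustration of the algorithm only and is not the code used to produce the figures. The number of cells, the time step, the lumping of the point sources $\\overline s=\\delta_0$ and $\\underline s=\\delta_1$ into the first and last cells, the small regularisation in the mobility quotient and the division of the flux difference by the space step (so that the update is consistent with (\\ref{D4})) are our own choices.
\\begin{verbatim}
import numpy as np

# explicit scheme for the water saturation, first test case:
# mu = 1e-8, p_c(z) = 0.1*sqrt(1-z), k_a(z) = (1-z)^2, k_w(z) = sqrt(z),
# c = 0.7, u_0 = 1 on [0,1]; injection at x = 0, extraction at x = 1
mu  = 1e-8
kw  = lambda z: np.sqrt(z)
ka  = lambda z: (1.0 - z) ** 2
pc  = lambda z: 0.1 * np.sqrt(1.0 - z)
fmu = lambda z: kw(z) / (kw(z) + ka(z) / mu)

J, K, T, c = 1.0 / 100, 1.0e-6, 0.01, 0.7
u = np.ones(100)                       # cell averages of the saturation

for n in range(int(T / K)):
    # flux at the interior faces, following the expression for [F^mu]_i^n;
    # the small epsilon guards the 0/0 case k_w = k_a = 0
    dpc = pc(u[1:]) - pc(u[:-1])
    mob = kw(u[1:]) * ka(u[:-1]) / (mu * kw(u[1:]) + ka(u[:-1]) + 1e-30)
    F = np.zeros(101)                  # F = 0 at both boundaries (Neumann)
    F[1:-1] = -(dpc / J) * mob
    u += K * (F[1:] - F[:-1]) / J      # flux difference divided by the mesh size
    u[0]  += K * fmu(c) / J            # injection term  f^mu(c) * delta_0
    u[-1] -= K * fmu(u[-1]) / J        # extraction term f^mu(u) * delta_1
    u = np.clip(u, 0.0, 1.0)

print(float(u.min()), float(u.max()))
\\end{verbatim}
The printed minimum and maximum give a quick check that the computed saturation stays in $[0,1]$ and has decreased from its initial value.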
Figure 1 illustrates the result\nwhich we proved in this paper, namely that $u^\\mu$ tends to the solution $u$ of the limit equation (\\ref{D5.b}) as\n$\\mu$ tends to 0 and moreover that the pressure\n$p^\\mu_g$ is constant. This is indeed the case since $u<1$.\\\\\n$\\\\ $\n\\underline{\\bf{Second test case}:} The case that $c=0.7$ and\n$u_0(x)=\\left\\{\\begin{array}{ll} 0.1\\mbox{ on }[0,1\/3]\\\\0.7\\mbox{\non }(1\/3,1]\\end{array}\\right.$. We obtain the following pictures\nfor $t=0.01$ and for $t=0.1$, respectively\n\\begin{center}\n\\includegraphics[scale=0.4]{test2articlet_0_01}\\\\\n\\it{Figure 2: t=0.01}\n\\end{center}\n$\\\\ $\n\\begin{center}\n\\includegraphics[scale=0.4]{test2articlet_0_1}\\\\\n\\it{Figure 3: t=0.1}\n\\end{center}\nThe injection of a mixture of water and air $(c=0.7)$ takes place in a region of low water saturation. We first\nremark that both functions $u^\\mu$ and $u$ evolve very slowly. Here again we have that $u(x,t)<1$ for all\n$(x,t)\\in(0,1)\\times(0,T)$ and we remark that the graphs of the two functions $u^\\mu$ and $u$ nearly\ncoincide.\\\\\n$\\\\ $ \\underline{\\bf{Third test case}:} The case that $c=1$ and\n$u_0(x)=\\left\\{\\begin{array}{ll} 0.1\\mbox{ on }[0,1\/3]\\\\0.7\\mbox{\non }(1\/3,1]\\end{array}\\right.$. We obtain the following pictures\nfor $t=0.01$ and for $t=0.1$, respectively\n\\begin{center}\n\\includegraphics[scale=0.4]{test2articlet_0_01_c_1}\\\\\n\\it{Figure 4: t=0.01}\n\\end{center}\n$\\\\ $\n\\begin{center}\n\\includegraphics[scale=0.4]{test2articlet_0_1_c_1}\\\\\n\\it{Figure 5: t=0.1}\n\\end{center}\nHere only water is injected; note that the saturation $u^{\\mu}$ evolves rather fast.\n\n\\newpage \\noindent\n{\\Large\\bf Acknowledgement} \\\\\n\\vspace{0.05 cm} \\\\\nWe would like to thank Professor Rapha\\`ele Herbin from the University of Provence for her interest in this work.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec-intro}\n\nThe rotational period of an asteroid is a physical property that is important in a wide range of planetary science and space exploration contexts. A rotational period measurement is essential to the characterization of the rotational state~\\citep[e.g.,][]{prav02}, which informs our understanding of an asteroid's interior and morphology~\\citep[e.g.,][]{Sche15AIV}, dynamical evolution through the Yarkovsky and YORP effects~\\citep[e.g.,][]{Vokr15AIV,gree20}, and the formation and evolution of binaries, triples, and pairs~\\citep[e.g.,][]{Marg15AIV,Wals15AIV}. Spin rate distributions place bounds on the dynamical and collisional evolutions of the main belt of asteroids~\\citep[e.g.,][]{bott15AIV}, and therefore the characteristics of the near-Earth asteroid population, which governs the history of impact cratering in the inner solar system and affects planetary defense efforts. Spin periods also provide useful initial conditions when modeling the shape~\\citep[e.g.,][]{Ostr02,Benn15AIV,Dure15AIV} and thermophysical properties~\\citep[e.g.,][]{Delb15AIV} of asteroids.\nIn these contexts, the availability of a small number of candidate spin periods is extremely valuable. One can test the model with each trial period and promptly identify the correct period. 
This fact motivated in part our inclusion of secondary spin period solutions in our results, in addition to our primary, best-fit period solutions.\n\n\n\nSeveral approaches have been used to measure the rotational periods of asteroids with high precision, including Earth-based~\\citep[e.g.,][]{prav02} or space-based~\\citep[e.g.,][]{hora18} photometric observations or a combination of the two~\\citep[e.g.,][]{dure18}, as well as Earth-based radar observations~\\citep[e.g.,][]{ostr06,naid15dp}. Optical lightcurve photometry based on wideband measurements of the sunlight reflected by the asteroid is the most common approach.\nHere we use infrared lightcurves to determine asteroid spin periods.\n\n\n\nDuring its six-month primary mission, the WISE spacecraft~\\citep{wrig10} conducted a whole-sky infrared survey at four infrared bands (W1--4) centered at 3.4, 4.6, 12, and 22 $\\mu$m. All four detectors were simultaneously exposed, producing up to four independent photometric measurements. The high-quality, multi-band IR observations of $\\sim$100,000 asteroids have been used to estimate asteroid diameters and albedos~\\citep[e.g.,][]{main15AIV}. Improved algorithms applied to a curated set of thousands of asteroids yielded refined estimates as well as estimates for asteroids not previously analyzed~\\citep{myhr22}.\n\n\n\n\nBoth the cadence and length of an asteroid lightcurve observing campaign determine the parameters that can be reliably recovered from the observations. Durations on the order of days\/months\/years, such as survey data from the Palomar Transient Factory (PTF) \\citep{wasz15}, provide an opportunity to sample the object at different phase angles and to recover parameters of the phase function \\citep{muin10}. Densely sampled observations that span at least a complete rotational cycle provide the best opportunity to determine the spin period. WISE observations of asteroids present a challenge for lightcurve analyses because they take place over short intervals with sparse cadence, typically yielding only $\\sim$16 observations over a $\\sim$36-hour interval~\\citep{wrig10}. Nevertheless, most asteroids experience a few rotations in 36 hr, such that WISE data can in principle be used to estimate the spin periods of thousands of asteroids.\n\n\\citet{dure15} combined sparse photometry, including WISE data, to derive the spin period of\none asteroid. \\citet{hanu15} combined WISE thermal infrared data and other data to obtain shape models and spin periods of\nsix asteroids. Their work has since been expanded to derive 1451 spin periods for asteroids observed by WISE \\citep{dure18}. Here, we use the well-curated data set of ~\\citet{myhr22} from the fully cryogenic phase of the WISE mission to estimate spin periods for hundreds of asteroids.\n\n\n\n\n\n\n\n\n\nThe Lightcurve Database (LCDB) \\citep{lcdb} is a compilation of most known lightcurve measurements from various sources. 
Each lightcurve is assigned a quality code\nbetween 0 (incorrect) and 3 (best) by the database curators to convey the confidence level in the uniqueness and accuracy of the rotational period estimate.\nWe used the LCDB to evaluate the reliability of our solutions and to train a machine learning reliability classifier.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Methods}\n\\label{sec-methods}\n\n\n\\subsection{Overview}\n\\label{sec-overview} \n\n\n\n\n\n\nOur methods closely follow those of \\citet{wasz15}, who used sparse photometry from the Palomar Transient Factory (PTF) to determine $\\sim$9,000 reliable asteroid spin periods. Their method is conceptually straightforward. For each trial period, one fits a Fourier series model \\citep[][Equation 1]{harr89} to the observed flux values and computes the sum of squares of the flux residuals. The Fourier series is truncated after the second harmonic, a simplification that rests on the assumption that the object is approximately ellipsoidal in shape. It is also consistent with the fact that the second harmonic dominates asteroid lightcurves with amplitudes greater than 0.4 magnitudes \\citep[][]{harr14}. Although this model is insufficient to capture the full details of the lightcurve, it is adequate to recover the spin period in most instances, as can be verified by comparing the solutions to high-quality (quality code 3- or above) solutions published in the lightcurve database (LCDB) \\citep{lcdb}. A similar method was also used by \\citet{chan17} to determine 2780 reliable asteroid rotation periods from the PTF.\n\n\n\n\n\n\n\\citet{wasz15}'s method is directly applicable to WISE photometry, which typically contains\nat least $\\sim$16 observations of each asteroid\nwith a nominal 1.59 hr cadence \nover a $\\sim$36 hr period. \nAlthough this observational\nmode prevents the determination of spin periods for fast ($P<$ 3.17 hr) and\nslow ($P>$ 72 hr) rotators, \nmost asteroids have spin periods that are amenable to characterization with this technique.\nBased on reliable LCDB statistics (quality code 3- or higher) and the range of diameters (0.28--72.2 km) in the \\citet{myhr22} sample, we estimate that fewer than 17\\% of asteroids in our sample have a spin period smaller than 3.17 hr.\n\n\\citet{wasz15} were able to fit for a photometric phase function because their observations were obtained over a wide range of phase angles. In contrast, WISE observations typically span a narrow range of phase angles and we did not attempt to evaluate the phase function. Phase angles remained nearly constant during the short observation intervals, and phase angle effects were absorbed by the zeroth-order coefficient of the Fourier series, i.e., mean magnitude.\n\nWe compared our method to results obtained with the more traditional Lomb-Scargle periodogram \\citep{press92} (Sections~\\ref{methods-ls} and \\ref{res-ls}).\n\n\\subsection{Data Set}\n\\label{sec-data} \n\n\n\\subsubsection{Initial Data Set}\n\\label{sec-data-gold4}\n\nWe used the carefully curated data set of \\citet{myhr22}, who eliminated measurements with artifacts, low signal-to-noise ratio (S\/N), poor photometric quality, saturation, questionable PSF fits, background confusion, problematic near-conjunction conditions, or large discrepancies between ephemeris predictions and reported position. Their data set provides high-quality flux measurements in all four infrared bands for 4420 asteroids. 
For a small fraction (6\\%, 265 asteroids),\nWISE observations were obtained in distinct (almost always two) epoch clusters separated by more than 30 days.\nWe determined independent solutions for each cluster of observations because they were obtained at different phase angles.\nAlthough our analysis is done on individual clusters, we may refer to the clusters as asteroids for ease of presentation.\n\n\n\\subsubsection{Assignment of Flux Uncertainties}\n\\label{sec-sigma}\n\nBoth \\citet{hanu15} and \\citet{myhr18empirical} have shown that uncertainties reported by the WISE pipeline underestimate actual flux uncertainties. \\citet{hanu15} used $\\sim$400 pairs of asteroid detections observed in quick succession ($\\sim$11 s) to quantify actual flux uncertainties in W3 and W4. They found that uncertainties reported in the WISE database underestimate actual uncertainties by factors 1.4 and 1.3 in W3 and W4, respectively. \\citet{myhr18empirical} expanded this analysis to bands W1--W4 and included a much larger number of pairs (7834, 11202, 125318, and 59049 in W1--W4, respectively). Here, we further expanded the number of pairs and fit Gaussians to the distributions of Z values $(Z=(f_1 - f_2)\/\\sqrt{\\sigma_1^2+\\sigma_2^2})$ \\citep[][Equation 3]{myhr18empirical} after removing $\\sim$1\\% of pairs with $|Z|>5$. The elimination is required because the tails of the distributions are non-Gaussian even though the cores of the distributions are well approximated by Gaussians. We list the correction factors in all four bands for completeness (Table \\ref{tab-factors}). We assigned uncertainties to the flux measurements by multiplying the uncertainties reported in the WISE database with the relevant correction factor.\n\n\\begin{table}[h]\n \\begin{center}\n \\begin{tabular}{lrrr}\n %\n\n Band & Number of pairs & Correction factor & Median $\\sigma$ (corrected) \\\\\n \\hline\n W1 & 29936 & 1.224 & 0.195 \\\\\n W2 & 34462 & 1.120 & 0.132 \\\\\n W3 & 170232 & 1.479 & 0.035 \\\\\n W4 & 120225 & 1.218 & 0.058 \\\\\n \\end{tabular}\n \\caption{Correction factors that are required to convert flux uncertainties reported in the WISE pipeline to actual uncertainties. The second column shows the number of remaining pairs after elimination of $\\sim$1\\% of pairs with uncharacteristically large flux differences.\n %\n %\n %\n %\n The last column shows the median flux uncertainties in magnitude units after correction.\n }\n \\label{tab-factors}\n\\end{center} \n \\end{table}\n\n\n\\subsubsection{Band Selection}\n\\label{sec-band-selection} \n\n\nWe focus on observations in W4 for two reasons. First, thermal emission from asteroids is stronger, S\/N is higher, and the number of observations in the curated data set is higher in W3 and W4 than in W1 and W2. \nSecond, the observation cadence of the original and curated WISE data \nis generally non-uniform, with time intervals as small as 11~s, a most common time interval of 1.59 hr, a frequent time interval of 2 $\\times$ 1.59 hr = 3.17 hr, and occasional time intervals at higher multiples of 1.59 hr.\nW4 observations provide the best observational cadence, whereas W3 observations have a\nhigher fraction\nof asteroids with a longer cadence (Figure \\ref{fig-w3-cadence}), which increases the susceptibility to aliasing difficulties (Section {\\ref{sec-alias}}). 
\n\n\n\n\n\n\n\\begin{figure}[hbt]\n\\begin{center}\n \\begin{tabular}{cc}\n \\includegraphics[width=3in]{plots\/WISE_Cadance_W3.pdf} & \\includegraphics[width=3in]{plots\/WISE_Cadance_W4.pdf}\n \\end{tabular}\n %\n \\caption{Histograms of W3 (left) and W4 (right) sampling intervals in the \\citet{myhr22} data set.}\n %\n\\label{fig-w3-cadence}\n\\end{center}\n\\end{figure}\n\n\nWe are not concerned with thermal lags or differences in lightcurve amplitudes compared to optical lightcurves, as they are inconsequential to the determination of spin periods. In addition, \\citet{dure18} demonstrated that thermal lightcurves in W3 and W4 are qualitatively consistent with optical lightcurves. \n\nWhile multiband lightcurve fits could in principle be envisioned, their utility in the context of spin period determinations with WISE data is limited because the exposures in the four bands are simultaneous, such that they sample roughly the same rotational phase. The determination of phase lags among observations in the four WISE bands is potentially informative but beyond the scope of the current work.\n\n\n\n\n\n\n\n\n\n\n\n\\subsubsection{Data Selection Filters}\n\\label{sec-data-prefilters}\n\nWe applied several preprocessing filters in order to identify lightcurves most suitable for rotational period determination. Each observation cluster must pass all of the following filters to qualify for analysis, otherwise it is discarded.\nFirst, we focus on\nlightcurves that have at least 12 data points in W4.\nSecond, we eliminated\nlightcurves where the peak-to-peak variation in flux magnitude W4$_{{\\rm red}} < 0.3$ mag,\nwhere we used magnitudes reduced to the values that would be observed at an asteroid-sun distance $r_{\\rm as}$ = 1 au and asteroid-observer distance $r_{\\rm ao}$ = 1 au.\nThis filter eliminates low-amplitude lightcurves, which may be ambiguous with respect to spin period determination \\citep{harr14}.\nThis filter effectively eliminates very slow rotators, which are not considered in our analysis anyway (Section \\ref{sec-fit}).\nThird, we eliminated any\nlightcurve that does not have an adequate observation cadence. Specifically, we required at least one time interval between consecutive observations to fall in the range 1.55--1.58 h, which corresponds to the WISE spacecraft orbital period \\citep{wrig10}. Observation clusters that have a longer (3.10--3.16 h)\nminimum sampling interval are more susceptible to severe aliasing (Section \\ref{sec-alias}) and are discarded.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n \n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nThe filtered data set contains a total of 3061 asteroids, with 3225 observation epochs and 57,532 W4 photometric measurements. 
In this data set, 164 (5.3\\%) out of the 3061 asteroids contain two observation clusters, and none have more than 2 observation clusters.\n\nOn average, an observation cluster contains 16 data points and spans 36 hr.\n\n\n\n\n\n\n\n\n\\begin{table}[h]\n \\begin{center}\n \\begin{tabular}{lr}\n W4 flux observations & 57,532 \\\\ %\n %\n %\n Median flux uncertainty & 0.067 \\\\ %\n Median number of data points per cluster & 16 \\\\\n Median observation span of clusters & 1.522 days\\\\\n\n \\end{tabular}\n %\n \\caption{Characteristics of filtered data set}\n \\label{tab-preproc-stats}\n\\end{center} \n \\end{table}\n\n\n\n\n\n\n\n\n\\subsection{Fitting Procedure}\n\\label{sec-fit}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nAn effective method for identifying periodicities in sparse data is a ``period scan'', where a model lightcurve is fitted to the data for a range of trial spin periods and a measure of the misfit (i.e., fit dispersion) is evaluated at each trial period \\citep{harr12}. The misfit metric is the usual sum of squares of residuals:\n\\begin{equation}\n\\chi^2 = \\sum_{i=1}^{N} \\frac{(O_i - C_i)^2}{\\sigma_i^2},\n\\end{equation}\nwhere $O_i$ is the $i$-th observation, $C_i$ is the $i$-th computed (modeled) value, $\\sigma_i$ is the uncertainty associated with the $i$-th observation, and the index $i$ ranges from 1 to $N$, the total number of observations. The observations are the reduced flux magnitudes (Section \\ref{sec-data-prefilters}), the uncertainties are the magnitude uncertainties from the WISE database multiplied by the relevant correction factors (Section \\ref{sec-sigma}), and the computed values are obtained by fitting a second-order Fourier model to the data similar to \\citet{wasz15}'s formulation. Specifically, \n\\begin{equation}\n C_i = A_{0,0}\n + A_{1,1} \\sin{\\left(\\frac{2\\pi}{P} t_i\\right)}\n + A_{2,1} \\cos{\\left(\\frac{2\\pi}{P} t_i\\right)}\n + A_{1,2} \\sin{\\left(\\frac{4\\pi}{P} t_i\\right)}\n + A_{2,2} \\cos{\\left(\\frac{4\\pi}{P} t_i\\right)},\n \\label{eq-fourier}\n\\end{equation}\nwhere $t_i$ is the light-time corrected epoch of the $i$-th observation, $P$ is the trial spin period, and the $A_{i,j}$ are the five adjustable Fourier coefficients.\n\n\n\nWe considered a range of evenly spaced rotational frequencies between $f$ = 0 and 7.57 rotations per day, where $f_{\\rm max}$~=~7.57 rot\/day ($P \\simeq 3.17$ h) represents the fastest rotation that can be\nNyquist sampled with the WISE observational cadence (Section \\ref{sec-alias}).\nHowever, we showed that it is possible to recover the periods of fast rotators by taking advantage of the mirroring properties of aliased signals.\nWe did not attempt to recover rotational frequencies that exceed 11 rotations per day, which correspond approximately to the rotational frequency at which centrifugal acceleration at the equator exceeds the acceleration due to self-gravity for typical asteroid densities, the so-called spin barrier at $P\\simeq 2.2$ hr \\citep{prav02}. The overwhelming majority of asteroids detected by WISE and included in our data set are large and experience fewer than 11 rotations per day.\n\n\n\nThe least-squares minimizer is an ordinary linear least-squares solver that follows \\citet[][tar.gz file]{wasz15}, except that we do not fit for phase curve parameters.\nWe also computed the reduced chi-squared metric $\\chi^2_\\nu = \\chi^2 \/ (N-5)$, where the number of degrees of freedom $\\nu = N-5$ represents the number of data points minus the number of free model parameters. 
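\nFor illustration, the period scan can be implemented as a weighted linear least-squares solve at each trial frequency. The following minimal sketch (our illustration rather than the actual pipeline code; the arrays {\\tt t\\_days}, {\\tt mag}, and {\\tt sigma} are placeholders, and the cosmic-error iteration and post-fit filters described below are omitted) evaluates the model of Equation (\\ref{eq-fourier}) and records the $\\chi^2$ misfit on a grid of trial frequencies:\n\\begin{verbatim}\nimport numpy as np\n\ndef period_scan(t_days, mag, sigma, f_max=7.57, oversample=10):\n    # Trial frequencies (rot\/day), oversampled relative to the nominal\n    # 1\/T resolution; f = 0 is skipped because the design matrix is\n    # singular there.\n    span = t_days.max() - t_days.min()\n    n_freq = max(int(oversample * span * f_max), 2)\n    freqs = np.linspace(f_max \/ n_freq, f_max, n_freq)\n    w = 1.0 \/ sigma\n    chi2 = np.empty(n_freq)\n    for k, f in enumerate(freqs):\n        phase = 2.0 * np.pi * f * t_days\n        # Design matrix of the second-order Fourier model\n        X = np.column_stack([np.ones_like(t_days),\n                             np.sin(phase), np.cos(phase),\n                             np.sin(2.0 * phase), np.cos(2.0 * phase)])\n        coeffs = np.linalg.lstsq(X * w[:, None], mag * w, rcond=None)[0]\n        chi2[k] = np.sum(((mag - X @ coeffs) * w) ** 2)\n    return 24.0 \/ freqs, chi2   # trial periods (hr) and misfit values\n\\end{verbatim}\n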
%\n\nThe discrete Fourier transform of a time series with duration $T$ yields a frequency resolution of $1\/T$. In this work, we chose to increase the frequency resolution by an oversampling factor of 10 in order to better resolve peaks in the %\nperiod scan. The number of trial frequencies was therefore set to $10 \\times T \\times f_{\\rm max}$.\n\n\nWe used an iterative procedure similar to \\citet{wasz15}\nwhere an increasingly large ``cosmic error'' is added to the observation uncertainties.\nThe cosmic error is initialized as 0.002 mag in the first iteration and is multiplied by 1.5 in each subsequent iteration. %\nThe purpose of the cosmic error is to inflate the measurement uncertainties in order to reflect the model's inability to accurately represent asteroid lightcurves with a Fourier series truncated at the second harmonic. The cosmic error does not\naffect the periodicities identified in the lightcurve, but it does affect the confidence intervals assigned to the period estimates. \n\n\nWe followed \\citet{wasz15} and \\citet{harr14} in preferring double-peaked folded lightcurves. \nTo identify the number of peaks in the lightcurve, we generated a synthetic folded lightcurve with the fitted Fourier coefficients and candidate period, sampled it with 10,000 points, and analyzed the samples with Matlab's built-in function {\\tt find\\_peaks}\\footnote{https:\/\/www.mathworks.com\/help\/signal\/ref\/findpeaks.html}.\nFor each double-peaked solution, we computed the heights of each peak relative to the lightcurve's global minimum.\n\n\n\n\n\n\n\n\n\n\nThere are two possible paths to convergence. At the end of each iteration, the solution with the lowest $\\chi^2_\\nu$ is selected if and only if it satisfies three conditions: (1) the folded lightcurve is double-peaked;\n(2) the height of the highest peak is at least twice that of the lowest peak; and (3) $\\chi^2_\\nu < 3$. If conditions (1) or (2) are not satisfied, the solution at half-frequency is considered and is adopted if it satisfies the same three conditions. 
Otherwise, the cosmic error is increased\nand the next iteration begins.\nIf the cosmic error reaches 0.1 mag, the fit is deemed unsuccessful.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\subsection{Aliasing}\n\\label{sec-alias}\n\n\n\n\n\n\n\n\nThe cadence of observations determines the sampling intervals between consecutive photometric measurements.\nThe Nyquist sampling criterion requires that at least two samples of a periodic signal be obtained per cycle in order to identify the periodicity unambiguously.\nApart from the fortuitous double detections obtained $\\sim$11 s apart (Section \\ref{sec-sigma}), the smallest sampling interval in WISE data is approximately 1.59 hr, which is dictated by WISE's $\\sim$15 daily orbital revolutions \\citep{wrig10}.\nAsteroids sampled with the 1.59 hr cadence and rotation periods larger than 3.17 hr ($f <$7.57 rot\/day) are usually Nyquist sampled, i.e., suffer no aliasing.\n\nAssuming uniform sampling, the signatures of asteroids with rotation periods between 1.59 hr and 3.17 hr appear aliased in the period scan in a predictable manner, specifically:\n\\begin{equation}\n %\n f_{\\rm alias} = 2 f_{\\rm Nyq} - f_{\\rm spin},\n \\label{eq-nyquist}\n\\end{equation}\nwhere $f_{\\rm Nyq}$ = 7.57 rot\/day is the critical Nyquist frequency or folding frequency, $f_{\\rm spin}$ is the underlying true rotational frequency, and $f_{\\rm alias}$ is the aliased frequency.\nFor example, an asteroid rotating at the spin barrier of 2.2 h (10.9 rot\/day) exhibits signatures at 2.2 h (10.9 rot\/day) and 6.97 h (3.4 rot\/day). \nAbsent additional information, a period scan may yield inconclusive results with respect to these two solutions. \nHowever, folding about the 7.57 rot\/day axis remains limited because only about 17\\% of asteroids in our sample experience rotation rates that exceed 7.57 rot\/day (Section \\ref{sec-overview}).\nThe overwhelming majority of asteroids detected by WISE and included in our data set are large and experience fewer than 11 rotations per day.\n\n\nIt is frequent\nfor the interval between consecutive W4 measurements to be 3.17 hr instead of the nominal cadence of 1.59 hr, e.g., when poor-quality flux measurements are eliminated. As a result, we also expected and observed\n(Section~\\ref{sec-results})\nfolding of the periodogram about $f_{\\rm Nyq'}$ = 3.78 rot\/day.\nThe folded frequency happens to be correct in a substantial fraction of cases, which we used to our advantage as it is calculable and therefore recoverable with no loss of precision.\n\n\n\n\n\n\n\n\n\nWe found that the aliasing behavior is complicated by non-uniform sampling in the WISE data. \nAdditional aliasing considerations are described in Appendix~\\ref{app-aliasing}.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\subsection{S\/N Calculations}\n\\label{sec-snr}\nA period scan may return multiple peaks with low misfit values. S\/N metrics are useful in determining whether a peak is likely to represent a genuine rotational signature as opposed to a noise artifact. We adopted two S\/N metrics. 
One metric follows \\citet{wasz15} and quantifies the height of the peak with respect to the median misfit in terms of an estimation to the standard deviations of the misfit variations:\n\\begin{equation}\n S\/N_{\\rm W} = \\frac{|\\chi^2_{\\rm min} - \\chi^2_{\\rm median}|}{(\\chi^2_{84\\%} - \\chi^2_{16\\%})\/2}, \n %\n \\label{eq-snrw}\n\\end{equation}\nwhere the denominator includes percentiles of the misfit distribution corresponding to $\\pm$1 standard deviations from the median.\nThe second metric follows \\citet{harr12} and associates the misfit outside of minima, which we approximated by $\\chi^2_{95\\%}$, to the quadratic sum of the amplitude of lightcurve variation ($a$) and the noise in the data ($n$). It also associates the minimum misfit to the square of the single-point data scatter after removal of the signal ($n^2$), and assigns an overall noise level to the solution equal to $n\/\\sqrt{\\nu} = n\/\\sqrt{(N-5)}$. We have\n\\begin{equation}\n (a^2 + n^2) = \\chi^2_{95\\%} \/ N,\n\\end{equation}\n \\begin{equation}\n n^2 = \\chi^2_{\\rm min} \/ N,\n\\end{equation}\n \\begin{equation}\n {\\rm signal} = a = \\sqrt{(\\chi^2_{95\\%} - \\chi^2_{\\rm min}) \/ N}\n \\end{equation}\n \\begin{equation}\n {\\rm noise} = n' = \\sqrt{\\chi^2_{\\rm min} \/ N} \/ \\sqrt{(N - 5)}\n \\end{equation}\n \\begin{equation}\n S\/N_{\\rm H} = a\/n' = \\sqrt{\\frac{\\chi^2_{95\\%} - \\chi^2_{\\rm min}}{\\chi^2_{\\rm min} }} \\sqrt{(N - 5)}\n \\label{eq-snrh}\n \\end{equation}\n\n\\subsection{Assignment of Spin Period Uncertainties}\n\\label{sec-uncertainties}\n\nOnce the best-fit peak was identified, we assigned a 1$\\sigma$ uncertainty to the fitted period by computing the periods corresponding to a constant chi-square boundary \\citep[][Section 15.6]{press92}. Specifically, we computed the periods at which\n$ \\chi^2 = \\chi^2_{\\rm min} + \\Delta\\chi^2(68.3\\%, \\nu=5) = \\chi^2_{\\rm min} + 5.86$, where $\\chi^2_{\\rm min}$ is the minimum misfit.\n\n\n\n\\subsection{Post-processing Filters}\n\\label{sec-post-fit-filters}\nBecause the duration and cadence of WISE observations are not optimal for the unambiguous determination of spin periods, it was important to\nremove solutions that are\nlikely unreliable.\nWe applied the following filters:\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n %\n %\n \n %\n %\n %\n %\n \n %\n \n %\n %\n \n %\n \n %\n\n %\n\n %\n\n\n \n\n\n\n\n\n\n\n\n\n\n\n\n\n(1) Reject slow rotators. \nWe required observations over at least 180 degrees of rotational phase and rejected any solution with a best-fit period that is two or more times longer than the data span.\n\n\n\n\n(2) Reject anomalously high amplitude lightcurves.\nWe eliminated solutions where the peak-to-peak amplitude of the fitted solution was three or more times larger than the peak-to-peak amplitude of the observations.\nThe peak-to-peak amplitude of the lightcurve was determined numerically while evaluating the lightcurve with the best-fit Fourier coefficients. \n\n\n\n %\n(3) Reject low S\/N solutions. \n Low S\/N solutions are likely spurious and were eliminated. In practice, we found that the S\/N formulation of \\citet{harr12} was more effective than that of \\citet{wasz15}, perhaps due in part to the relatively small number of (noisy) observations. \n %\n Solutions were rejected when $(S\/N)_{\\rm H} < 5$.\n %\n %\n %\n \n %\n The solutions that passed all of the above filters are reported below. 
\n %\n %\n We have evidence that most of these solutions are accurate\n %\n (Section~\\ref{sec-results}),\n but we also expect a fraction of aliased or incorrect solutions in this set.\n\n\n %\n %\n\n\n\n\n\n\\subsection{Machine Learning Reliability Classifier} \n\\label{sec-ml}\n\n\n\n\n\n\n\\citet{wasz15} pioneered the usage of a machine learning (ML) classifier to improve the reliability of asteroid lightcurve fits.\nThey applied a random forest (RF) algorithm, which is \na supervised machine learning algorithm that utilizes an ensemble of weak decision tree predictors to increase prediction power. \nThe hypothesis underpinning a machine learning classifier is that certain appropriately chosen features associated with a lightcurve solution jointly carry non-trivial information regarding the reliability of the solution. Given a labeled training set that includes both the values of the features and a reliability indicator, an ML algorithm can be trained to detect relations within the feature space and predict a reliability indicator for lightcurve solutions that do not have a reliably known period (i.e., solutions that are not in the training set).\nA Random Forest classifier makes predictions via a majority voting process by its ensemble of decision trees. For each sample, the classifier generates a probability derived from the voting process, then makes a binary prediction (i.e., reliable or unreliable) on the basis of a user-defined probability threshold. \n\\citet{wasz15}'s classifier was trained with about 1000 lightcurves with known reference periods and improved the overall\nsuccess rate \nfrom $\\sim$66\\% to $\\sim$80\\% for 19,000 lightcurves. %\n\n\n\n\n\n\n\nBecause our work also involved the analysis of thousands of sparse lightcurve, we initially applied an RF algorithm in an attempt to identify the most reliable lightcurve solutions. The RF classifier was able to provide a modest improvement to the success rate of our primary solutions (from 55\\% to 70\\%), but it also marked correct solutions as incorrect. Because the recovery of spin periods among our three solutions was so high (88\\%) and the performance of the RF classifier was limited, we ultimately decided against providing a potentially flawed reliability indicator.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\subsection{Lomb-Scargle Periodogram}\n\\label{methods-ls}\n\n\n\n\nThe Lomb-Scargle (LS) periodogram \\citep{Lomb76,Scar82} is a standard algorithm that enables the analysis of periodicities in unevenly sampled time series, \nsuch as asteroid lightcurves. \nWe explored the performance of the LS periodogram\nas an alternative to our default pipeline. Our LS pipeline follows the implementation in standard libraries and does not include iterative adjustment of uncertainties, requirement for double-peaked solutions, and post-fit filters implemented in our default pipeline.\n\n\n\n\nWe used the generalized LS algorithm of \\citet{zech09} as implemented in the astropy package\\footnote{https:\/\/docs.astropy.org\/en\/stable\/timeseries\/lombscargle.html}. 
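\nIn outline, the corresponding call takes the following form (a schematic sketch; the epoch, magnitude, and uncertainty arrays shown are placeholders):\n\\begin{verbatim}\nimport numpy as np\nfrom astropy.timeseries import LombScargle\n\nt = np.array([0.00, 0.066, 0.132, 0.198, 0.264, 0.330])  # days (placeholder)\nmag = np.array([10.2, 10.5, 10.1, 10.4, 10.2, 10.5])     # W4 mag (placeholder)\nsigma = np.full(6, 0.06)                                  # mag (placeholder)\n\nls = LombScargle(t, mag, sigma, nterms=1, fit_mean=True)  # nterms=2 for 2nd order\nfrequency, power = ls.autopower(maximum_frequency=7.57)   # rot\/day\nbest_frequency = frequency[np.argmax(power)]\nfap = ls.false_alarm_probability(power.max())             # first order only\n\\end{verbatim}\n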
This implementation takes observational uncertainties into account and enables the estimation of a floating mean.\nWe deployed both first-order and second-order LS periodograms.\n\nIn the first-order LS implementation, we followed \\citet{McNeill19}\nand set the estimated rotational period at twice the best-fit LS period to conform to the double-peak nature of asteroid lightcurves.\nWe admitted only solutions with a false-alarm probability (FAP) less than 10\\%. %\n\nThe second-order LS implementation is similar to the periodogram calculation used in our default pipeline but does not include any post-fit filter. \n\n\n\n\n\n\\section{Results}\n\\label{sec-results}\n\n\\subsection{Default pipeline}\n\nWe present the period solutions that successfully converged in the Fourier fitting algorithm and passed the post-fit filters (Section~\\ref{sec-post-fit-filters}). \nThe period solutions of 2008 ($\\sim$62\\%) out of the initial 3225 lightcurves fulfilled both inclusion criteria.\n\n\n\n\n\n\n\n\n\n\n\n\n\nTo test the reliability of our results, we compared our spin period estimates to high-quality (quality code 3- or higher) rotational periods published in the LCDB.\nWe quantified the fraction of solutions that were within 5\\% of the LCDB solution, which are deemed to be accurate solutions.\nAt the time of writing, there were 752 solutions (representing 702 unique asteroids) among our 2008\nsolutions with a suitable LCDB estimate.\nWe refer to this set of solutions as the `LCDB reference group'.\n\n\n\n\n\n %\n\n\nIn the LCDB reference group, the fitted period was found to be accurate (within 5\\%) in\n55\\% of the cases. Notably, the relative errors of the fitted periods exhibit a bimodal distribution (Figure \\ref{fig-accuracy-histogram}).\nThe two\nmodes bifurcate at a fractional error of approximately 5\\%,\nwhich provides a posteriori justification for selecting a 5\\% threshold for accuracy.\n\n\n\\begin{figure}[hbt]\n \\begin{center}\n \\includegraphics[width=4in]{plots\/period_relative_error.pdf}\n %\n \\caption{Histogram of fractional accuracy in spin period obtained by comparing our period solutions to the corresponding LCDB reference periods. The left and right clusters correspond to the accurate and inaccurate period estimates, respectively. The red line denotes the 5\\% accuracy threshold.}\n %\n\\label{fig-accuracy-histogram}\n\\end{center}\n\\end{figure}\n\nOur spin period solutions are listed in Table~\\ref{tab-results}. Figure~\\ref{fig-suc-mode-1} illustrates an example of a favorable situation with a short (1.59 hr) cadence and relative long (6 days) duration, which yields a solution with high S\/N and no aliasing. We found that it is possible to successfully recover the spin period even when the observation span is comparable to the spin period (Figure~\\ref{fig-suc-mode-3}). Our analysis also identified correct spin periods in situations where the minimum $\\chi^2$ value is not markedly different from other competing solutions (Figure~\\ref{fig-suc-mode-2}). When multiple periodogram peaks have comparable $\\chi^2$ values, the potential for an incorrect solution exists.\n\n\n\n\n\n\\begin{figure}[p]\n \\begin{center}\n \\includegraphics[width=7in]{plots\/mode_s_1_296.pdf}\n \\caption{\n %\n Spin period solution for asteroid 296 obtained from 35 W4 observations spanning 6 days. 
\n The period solution at 4.543 $\\pm$ 0.014 hr is in good agreement with the LCDB value of 4.5385 hr.\n The data exhibit short (1.59 h) sampling intervals over a long observation span, and the periodogram is free of aliases.\n }\n\\label{fig-suc-mode-1}\n\\end{center}\n\\end{figure}\n\n\\begin{figure}[p]\n \\begin{center}\n \\includegraphics[width=7in]{plots\/mode_s_3_2715.pdf}\n \\caption{\n %\n %\n %\n %\n Spin period solution for asteroid 2715 obtained from 16 W4 observations spanning 1.5 days.\n The period solution at 33.19 $\\pm$ 2.19 hr is in good agreement with the LCDB value of 33.62 hr.\n The correct solution was identified despite a data observation span that is only slightly longer than the spin period.\n }\n\\label{fig-suc-mode-3}\n\\end{center}\n\\end{figure}\n\n\\begin{figure}[p]\n \\begin{center}\n \\includegraphics[width=7in]{plots\/mode_s_2_2812.pdf}\n \\caption{\n %\n Spin period solution for asteroid 2812 obtained from 14 W4 observations spanning 1.2 days.\n The period solution at 7.74 $\\pm$ 0.30 hr is in good agreement with the LCDB value of 7.7 hr.\n The correct solution was identified despite a relatively low S\/N and the presence of competing solutions with similar but larger $\\chi^2$ values.\n }\n\\label{fig-suc-mode-2}\n\\end{center}\n\\end{figure}\n\n\\begin{figure}[p]\n \\begin{center}\n \\includegraphics[width=3in]{plots\/pp_x_plot.pdf}\n \\caption{\n Best-fit rotational frequency (this work) vs.\\ LCDB\n rotational frequency for the LCDB reference group. The\n dashed grey line is at f = 3.783 rot\/day, which is the\n expected folding frequency for a 3.17 hr cadence and the\n predominant aliasing mode. Most of the best-fit\n solutions (55\\%) are accurate (green dots). The\n solutions mirrored about 3.78 rot\/day (dark blue dots)\n are also accurate in 27\\% of the cases, and the\n solutions mirrored about 7.57 rot\/day (light blue dots)\n are accurate in 6\\% of the cases. The inaccurate\n solutions (red dots) represent 12\\% of the cases.\n The combination of best-fit and\n mirrored solutions yields an aggregate success rate of\n %\n 88\\%.\n}\n\\label{fig-pp-x-plot}\n\\end{center}\n\\end{figure}\n\n\n\n\n\n\n\n\nWe validated our\nresults by plotting our best-fit results against LCDB values for the LCDB reference group (Figure \\ref{fig-pp-x-plot}).\nIn frequency space, which reveals folding behavior, the structure of the solutions is\nstriking. Most of the inaccurate solutions are in fact aliases of the\ncorrect frequencies folded about the f = 3.783 rot\/day axis or, less frequently, the f~=~7.57 rot\/day axis.\nFigures~\\ref{fig-fail-mode-1} and \\ref{fig-fail-mode-2} illustrate two examples.\n\\begin{figure}[p]\n \\begin{center}\n \\includegraphics[width=7in]{plots\/mode_f_1_4715.pdf}\n \\caption{\n %\n %\n %\n %\n Spin period solution for asteroid 4715 obtained from 23 W4 observations spanning 3.6 days.\n The best-fit, primary period solution at 4.962 $\\pm$ 0.01 hr (4.84 rot\/day) is an alias (mirrored around f = 3.78 rot\/day) of the presumed correct LCDB value of 8.8129 hr (2.72 rot\/day). 
The secondary period solution at 8.7930 hr is accurate.\n }\n \\label{fig-fail-mode-1}\n\\end{center}\n\\end{figure}\n\\begin{figure}[p]\n \\begin{center}\n \\includegraphics[width=7in]{plots\/mode_f_2_6485.pdf}\n \\caption{\n %\n %\n %\n %\n Spin period solution for asteroid 6485 obtained from 22 W4 observations spanning 5.2 days.\n The best-fit, primary period solution at 3.31 $\\pm$ 0.035 hr (7.25 rot\/day) is an alias (mirrored around f = 3.78 rot\/day) of the presumed correct LCDB value of 75.56 hr (0.318 rot\/day). The secondary period solution at 75.4453 hr is accurate.\n}\n\\label{fig-fail-mode-2}\n\\end{center}\n\\end{figure}\n\\begin{figure}[hbt]\n \\begin{center}\n \\includegraphics[width=7in]{plots\/mode_f_3_10721.pdf}\n \\caption{\n %\n %\n %\n %\n Spin period solution for asteroid 10721 obtained from 18 W4 observations spanning 1.8 days.\n The period solution at 3.2721 $\\pm$ 0.031 hr (7.33 rot\/day) is an alias (mirrored around f = 7.57 rot\/day) of the presumed correct LCDB value of 3.0675 hr (7.82 rot\/day). The tertiary period solution at 3.0514 hr is accurate.\n}\n\\label{fig-fail-mode-3}\n\\end{center}\n\\end{figure}\n\nFor this reason, we list both the best-fit frequency and its mirror values in Table~\\ref{tab-results}. One of these solutions is correct (within 5\\%) in\n88\\% (659\/752)\nof the cases in the LCDB reference group. We posit that the accuracy rate is similar for asteroids that are not in the LCDB reference group.\nThe best-fit, secondary (mirrored about 3.78 rot\/day), and tertiary (mirrored about 7.57 rot\/day) solutions are accurate in 55\\%, 27\\%, and 6\\% of the cases, respectively.\n\nInaccurate solutions that are not mirrors of correct values are visible outside of the X pattern on Figure~\\ref{fig-pp-x-plot}.\n\nA number of solutions deemed to be inaccurate cluster near the accurate solutions along the blue diagonal, especially at low frequencies. This behavior indicates that our 5\\% criterion is a conservative metric of accuracy, and that additional solutions are in fact close to the correct value.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n \n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\begin{deluxetable}{rrrrrrrrrrrrrrr}\n %\n \\tablecaption{Spin period solutions for 1929 asteroids. 
\\label{tab-results}}\n\n\\tablehead{\n %\n\\colhead{Object} & \\colhead{$D$} & \\colhead{span} & \\colhead{idx} & \\colhead{$A$} & \\colhead{$P$} & \\colhead{$\\sigma_P$} & \\colhead{$f$} & \\colhead{$f_{\\xleftrightarrow{1}}$} & \\colhead{$P_{\\xleftrightarrow{1}}$} & \\colhead{$f_{\\xleftrightarrow{2}}$} & \\colhead{$P_{\\xleftrightarrow{2}}$} & \\colhead{$P_{\\rm LCDB}$} & \\colhead{$U$} & \\colhead{flag} \\\\ \n\\colhead{ } & \\colhead{(km)} & \\colhead{(hr)} & \\colhead{ } & \\colhead{(mag)} & \\colhead{(hr)} & \\colhead{(hr)} & \\colhead{(rot\/d)} & \\colhead{(rot\/d)} & \\colhead{(hr)} & \\colhead{(rot\/d)} & \\colhead{(hr)} & \\colhead{(hr)} & \\colhead{} & \\colhead{} \n }\n\n\n\\startdata\n131 & 30.6 & 109.5 & 1 & 0.358 & 5.1919 & 0.019 & 4.6226 & 2.9436 & 8.1532 & 10.5774 & 2.2690 & 5.1812 & 3 & 1 \\\\\n155 & 44.6 & 93.7 & 1 & 0.234 & 5.2919 & 0.016 & 4.5352 & 3.0310 & 7.9183 & 10.6648 & 2.2504 & 7.9597 & 3 & 2 \\\\\n170 & 35.4 & 181.0 & 1 & 0.360 & 13.0199 & 0.250 & 1.8433 & 5.7229 & 4.1937 & 9.4433 & 2.5415 & 13.1200 & 3 & 1 \\\\\n180 & 24.8 & 96.9 & 1 & 0.493 & 23.6223 & 0.477 & 1.0160 & 6.5502 & 3.6640 & 8.6160 & 2.7855 & 23.8660 & 3 & 1 \\\\\n183 & 30.7 & 90.5 & 1 & 0.395 & 11.7553 & 0.095 & 2.0416 & 5.5246 & 4.3442 & 9.6416 & 2.4892 & 11.7700 & 3 & 1 \\\\\n183 & 30.8 & 106.4 & 2 & 0.431 & 11.8183 & 0.115 & 2.0307 & 5.5354 & 4.3357 & 9.6308 & 2.4920 & 11.7700 & 3 & 1 \\\\\n239 & 42.9 & 90.5 & 1 & 0.392 & 18.4676 & 0.263 & 1.2996 & 6.2666 & 3.8298 & 8.8996 & 2.6968 & 18.4707 & 3 & 1 \\\\\n244 & 10.9 & 20.6 & 1 & 1.151 & 3.2773 & 0.083 & 7.3230 & 0.2432 & 98.6986 & 7.8770 & 3.0469 & 129.5100 & 3- & 0 \\\\\n251 & 33.8 & 100.0 & 1 & 0.486 & 20.0040 & 0.301 & 1.1998 & 6.3664 & 3.7698 & 8.7998 & 2.7273 & 20.2160 & 3 & 1 \\\\\n254 & 12.4 & 39.7 & 1 & 0.765 & 6.9642 & 0.188 & 3.4462 & 4.1200 & 5.8252 & 11.0462 & 2.1727 & 5.8949 & 3 & 2 \\\\\n... & ... & ... &...& ... & ... & ... & ... & ... & ... & ... 
& ...\\\\\n223431 & 5.8 & 119.1 & 1 & 0.214 & 30.5368 & 0.975 & 0.7859 & 6.7803 & 3.5397 & 8.3859 & 2.8619 & & & \\\\\n230118 & 0.9 & 581.1 & 1 & 0.635 & 29.9524 & 0.113 & 0.8013 & 6.7649 & 3.5477 & 8.4013 & 2.8567 & & & \\\\\n232382 & 1.0 & 535.0 & 2 & 0.192 & 67.7217 & 2.077 & 0.3544 & 7.2118 & 3.3279 & 7.9544 & 3.0172 & & & \\\\\n249958 & 5.3 & 134.9 & 1 & 0.231 & 6.8153 & 0.031 & 3.5215 & 4.0447 & 5.9337 & 11.1215 & 2.1580 & & & \\\\\n256155 & 3.1 & 134.9 & 1 & 0.549 & 15.6910 & 0.117 & 1.5295 & 6.0367 & 3.9757 & 9.1295 & 2.6288 & & & \\\\\n307840 & 6.8 & 335.1 & 1 & 0.367 & 16.0317 & 0.150 & 1.4970 & 6.0692 & 3.9544 & 9.0970 & 2.6382 & & & \\\\\n318081 & 5.9 & 42.9 & 1 & 0.410 & 3.6023 & 0.029 & 6.6623 & 0.9039 & 26.5523 & 8.5377 & 2.8111 & & & \\\\\n366774 & 0.9 & 58.8 & 1 & 0.327 & 3.4972 & 0.037 & 6.8625 & 0.7037 & 34.1075 & 8.3375 & 2.8786 & & & \\\\\n386720 & 1.0 & 141.3 & 1 & 0.806 & 4.6034 & 0.015 & 5.2136 & 2.3526 & 10.2013 & 9.9864 & 2.4033 & & & \\\\\n\\enddata \n\\tablecomments{For each object, we show the diameter $D$ (km), observation span (hr), index of the cluster of observations analyzed, the best-fit period $P$ (hr), the peak-to-peak magnitude variation of the fitted lightcurve $A$\n %\n for the primary (best-fit) period solution, the primary period uncertainty $\\sigma_P$ (hr), the best-fit rotational frequency $f$ (rot\/d), the first alternate rotational frequency\n %\n $f_{\\xleftrightarrow{1}}$ (rot\/d) found by folding the best-fit frequency about 3.78 rot\/day, \n the first alternate period\n %\n $P_{\\xleftrightarrow{1}}$ (hr), the second alternate rotational frequency\n %\n $f_{\\xleftrightarrow{2}}$ (rot\/d) found by folding the best-fit frequency about 7.56 rot\/day, \n the second alternate period\n %\n $P_{\\xleftrightarrow{2}}$\n (hr), the LCDB reference period $P_{\\rm LCDB}$ (hr), if known, the corresponding quality code $U$, and a flag indicating agreement for objects in the LCDB Reference Group.\nThe flag is 1 if the best-fit period matches $P_{\\rm LCDB}$ within 5\\%, 2 if the first mirror period matches, 3 if the second mirror period matches, and 0 if none of the three periods match $P_{\\rm LCDB}$. \n(This table is available in its entirety in \\href{https:\/\/ucla.box.com\/s\/49z4fr6pdaalfssg7qhngb30vrwjl9f0}{machine-readable} and \\href{https:\/\/ucla.box.com\/s\/5muceha0gyj7ynsnysfwjt4uxgln7c8t}{CSV} forms in the online journal. A portion is shown here for guidance regarding its form and content.)}\n\\end{deluxetable}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\subsection{Lomb-Scargle Pipeline}\n\\label{res-ls}\n\n\n\nAs in our default pipeline, we generated three candidate period solutions for each fitted lightcurve (primary: LS solution, secondary: mirror across 3.78 rot\/day, tertiary: mirror across 7.57 rot\/day). 
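\nThe mirrored candidates follow directly from Equation (\\ref{eq-nyquist}); a minimal sketch (our illustration, using the nominal folding frequencies quoted in Section \\ref{sec-alias}):\n\\begin{verbatim}\ndef mirrored_frequencies(f_best, f_fold1=3.783, f_fold2=7.57):\n    # Fold the best-fit frequency (rot\/day) about the two candidate\n    # folding frequencies: f_alias = 2 * f_fold - f_spin.\n    return abs(2.0 * f_fold1 - f_best), abs(2.0 * f_fold2 - f_best)\n\\end{verbatim}\n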
We compared the accuracy of the LS pipeline to our default pipeline by computing the number of solutions that are within 5\\% of the high-quality (3\/3-) LCDB solutions (Table \\ref{tab-LS_results}).\n\n\n\n\\begin{table}[h]\n \\begin{center}\n \\begin{tabular}{lrrr}\n %\n\n Accuracy flag & LS first order & LS second order & Default pipeline \\\\\n \\hline\n 0 (inaccurate) & 91 (12\\%) & 518 (52\\%) & 93 (12\\%) \\\\\n 1 (primary) & 320 (43\\%) & 316 (32\\%) & 414 (55\\%) \\\\\n 2 (secondary) & 286 (39\\%) & 105 (11\\%) & 203 (27\\%) \\\\\n 3 (tertiary) & 33 (5\\%) & 56 (6\\%) & 42 (6\\%) \\\\\n Aggregate accuracy & 88\\% & 48\\% & 88\\% \\\\\n Number of reference solutions & 730 & 995 & 752 \\\\\n \n \n \\end{tabular}\n \\caption{ Number of accurate solutions with the Lomb-Scargle and default pipelines.\n }\n \\label{tab-LS_results}\n\\end{center} \n \\end{table}\n\n\nThe first-order LS solutions have an aggregate accuracy comparable to the default pipeline solutions. However, the primary LS solutions were accurate only in 43\\% of the cases, compared to 55\\% in the default pipeline. \nThe second-order LS solutions have a lower accuracy rate than the default pipeline solutions, both for primary solutions and in aggregate. The lower performance of the LS algorithm demonstrates the importance of the\niterative algorithm and post-fit filters in our default, \\citet{wasz15}-inspired pipeline.\n\n\n\n\n\n\n\n\n\n\n\n\n\\FloatBarrier\n\\section{Discussion}\n\\label{sec-discussion}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n \n\n\n\n\\citet{prav02} reviewed the rotation periods of asteroids as a function of diameter and found that the spin distribution\nis Maxwellian for large asteroids ($>$40 km) and strongly non-Maxwellian for smaller asteroids, with an excess of both slowly rotating and rapidly rotating asteroids.\nThe WISE data set can potentially inform these studies because it can yield estimates of both diameter and spin period (Figure~\\ref{fig-period_diam_accurate_fit_period}).\n\n\\begin{figure}[hbt]\n \\begin{center}\n \\includegraphics[width=4.5in]{plots\/fig10.pdf}\n %\n \\caption{Period vs.\\ diameter diagram for the 659 light curves in the LCDB reference group where one of our spin period solutions matched the LCDB to 5\\%. Only the spin period solutions that match the LCDB value are plotted, with color-coding indicating our primary, secondary, or tertiary solutions. The red dashed line at $P=$ 3.17 hr corresponds to the upper range of trial frequencies explored in the fitting process. The grey dashed line at $P=$ 2.2 hr illustrates the spin barrier. Diameters are from \\citet{myhr22}.}\n\\label{fig-period_diam_accurate_fit_period}\n\\end{center}\n\\end{figure}\n\n\n\n\n\n\nWe can evaluate the accuracy of our method as function of asteroid diameter and LCDB period (Figure \\ref{fig-period_diam_lcdb_ref_period}).\nDiameter does not have an apparent influence on the accuracy of our method.\nHowever, spin period does affect our ability to recover a correct solution with the WISE data. We found that the spin periods of both slowly rotating ($P > \\sim$100 hr) and rapidly rotating ($P < \\sim$2.5 hr) asteroids are generally not recoverable.\n\n\\begin{figure}[hbt]\n \\begin{center}\n %\n %\n \\includegraphics[width=4.5in]{plots\/fig11.pdf}\n %\n \\caption{LCDB Period vs.\\ diameter \\citep{myhr22} for the 752 lightcurves in the LCDB Reference Group. 
Our spin period solutions are color-coded according to four possible outcomes: primary solution is accurate, secondary solution is accurate, tertiary solution is accurate, or none are accurate. The red dashed line at $P=$ 3.17 hr corresponds to the upper range of trial frequencies explored in the fitting process. The grey dashed line at $P=$ 2.2 hr illustrates the spin barrier. Diameters are from \\citet{myhr22}.}\n\\label{fig-period_diam_lcdb_ref_period}\n\\end{center}\n\\end{figure}\n\n\n\n\n\nThe distribution of lightcurve amplitudes in our data set is instructive (Figure~\\ref{fig-amp-histogram}). Our distribution underestimates the proportion of low-amplitude lightcurves, because our sample selection enforced a pre-fit filter, which required a magnitude variation of at least 0.3 mag in the observed fluxes. Likewise, one of our post-fit filters eliminated some high-amplitude lightcurve solutions, but it did so only when the data themselves did not exhibit a large magnitude variation.\nWe found that 50\\%, 8.8\\%, 2.2\\%, 1\\%, and 0.25\\% of solutions have amplitudes larger than 0.5 mag, 1 mag, 1.5 mag, 2 mag, and 2.5 mag, respectively.\n\\begin{figure}[hbt]\n \\begin{center}\n \\includegraphics[width=4.5in]{plots\/amp_histogram.pdf}\n \\caption{Histogram of peak-to-peak lightcurve amplitudes.}\n\\label{fig-amp-histogram}\n\\end{center}\n\\end{figure}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Conclusions}\n\\label{sec-conclusions}\nWe devised a procedure similar to that of \\citet{wasz15} that enables the determination of thousands of asteroid spin periods from WISE data. Despite WISE's suboptimal observation and cadence for asteroid spin measurements,\none of our\nsolutions is accurate 88\\% of the time when compared to a high-quality control group of 752 spin periods. We obtained primary, secondary, and tertiary spin period estimates for 2008 observation clusters representing 1929 unique asteroids. Among those, 1205 asteroids do not currently have a high-quality spin period estimate. Our\nprimary, secondary, or tertiary \nsolution for\nover a thousand asteroids is expected to be accurate at the 5\\% level or better and can greatly facilitate shape or thermal modeling work.\n\n\n\\acknowledgments\n\nWe thank Alan Harris for useful discussions.\n\n\nAL thanks Breann Sitarski for her helpful feedback on this work and the related research presentations, as well as her mentorship throughout his undergraduate studies.\n\nAL was funded in part by the Joe and Andrea Straus Endowment for Undergrad Opportunity\nand the Donald Carlisle Undergrad Research Endowed Fund.\nEW was funded in part by the Nathan P. Myhrvold Graduate Fellowship.\n\nThis publication makes use of data products from the Wide-field Infrared Survey Explorer, which is a joint project of the University of California, Los Angeles, and the Jet Propulsion Laboratory\/California Institute of Technology, funded by the National Aeronautics and Space Administration.\n\n\\software{\nNumPy \\citep{numpy},\nSciPy \\citep{scipy},\npandas \\citep{pandas},\nMatplotlib \\citep{mpl}\n}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\nIt is widely believed that very few galaxies exist today that have not been formed or shaped in some way by an interaction with another galaxy. \\citet{Toomre_77} identified 11 galaxies that exhibit characteristics of on-going mergers, and arranged them in a chronological order, illustrating how spiral galaxies can merge to produce ellipticals. 
It is now the view of many astronomers that this process, whilst not the only mechanism to create these systems, plays a vital role in the production of elliptical galaxies. The 11 systems within the `Toomre' sequence, alongside other examples of on-going mergers, have been studied in great detail over a range of wavelengths to try and characterise exactly how these systems evolve. When studying these merging galaxies, X-ray observations are of particular importance, as they are able to probe the dusty nucleus of the system, which can be obscured at other wavelengths, allowing the nature of the point source population to be established. Also imaging of the soft X-ray emission permits the diffuse hot gas to be mapped out. This gives important information about the hot gaseous component associated with the strong starburst, and allows galactic-winds outflowing from the system to be observed, enabling constraints to be placed on the energetics of these outflows.\n\nX-ray observations were initially carried out with the {\\em Einstein Observatory}, providing limited spatial resolution. This instrument was followed by \\emph{ROSAT}, which greatly improved the sensitivity of the observations, allowing the X-ray properties of these merging galaxies to be probed. A study of a sample of interacting systems was carried out by \\citet{Read_98} (from here on RP98), where the X-ray luminosity and properties of the diffuse gas were investigated with \\emph{ROSAT}, alongside the point source population. In this study it was found that the normalised X-ray luminosity of these systems, broadly speaking, followed the normalised \\ensuremath{L_{\\mathrm{FIR}}}\\ luminosity, peaking at the time of coalescence. It was also found that the young merger-remnant systems within the sample exhibited low X-ray luminosities, with no indication from the later stage systems in their sample that these systems would increase in X-ray luminosity. However, it was noted that the $\\sim$1 Gyr system, NGC 7252, still hosts a large amount of molecular gas, in the form of tails and loops, and given more time to evolve, the X-ray properties could resemble those of a mature elliptical galaxy at a greater dynamical time. The main limitation of this study, due to the \\emph{ROSAT}\\ observations, was the inability to disentangle the point sources from the diffuse gas, particularly at larger distances. \n\nWith the next generation of X-ray observatories this issue was addressed with an increase in spatial resolution. This improvement, provided by \\emph{Chandra}, has allowed the investigation of both the diffuse gas and the point source population of galaxies to be carried out in greater detail. The ability to disentangle these two components is vital when investigating interacting and merging galaxies, as studies have shown that both of these components have different evolutionary timescales \\citep{Read_01}. It is therefore important to be able to study both of these separately, to fully understand how they evolve.\n\nFrom observing a selection of interacting galaxy systems, at different stages of evolution, the processes involved in the merging of galaxies pairs can be characterised. In the following sections we will describe the sample we have selected, and investigate the behaviour of the X-ray emission as the galaxies evolve from two spiral galaxies, through to relaxed merger-remnants. Section \\ref{sec:sample} outlines the selection criteria we have used and gives a brief description of each system. 
In section \\ref{sec_data_red} we present the results from new \\emph{Chandra}\\ observations of Mkn 266 and Arp 222. Correlations and evolution of X-ray emission across the whole sample is presented in section \\ref{sec_evol}, and discussed in section \\ref{sec:dis}. Conclusions are given in section \\ref{sec:con}.\n\n\\section{The Sample}\n\\label {sec:sample}\n\nTo gain a better understanding of galaxy evolution it is important to compare similar systems that are undergoing the same transformation. By compiling a sample of these galaxies, the evolution of the systems' X-ray properties can be investigated. In this paper we have a sample of nine interacting and post-merger systems, carefully selected to ensure that they are representative of galaxies undergoing a major merger. The nine systems were selected using the following criteria;\n\n\\begin{enumerate}\n\\item \nAll systems have been observed with {\\em Chandra}.\n\\item \nSystems comprise of, or, originate from, two similar mass, gas rich, spiral galaxies.\n\\item\nMulti-wavelength information is available for each system.\n\\item\nThe absorbing column is low, maximising the sensitivity to soft X-ray emission.\n\\item\nA wide chronological sequence is covered; from detached pairs to merger-remnants.\n\\end{enumerate}\n\nOnce the nine systems to study had been selected, the issue of merger age had to be addressed. This is one of the main problems when working with a chronological study such as this, and a number of different methods are required to solve this problem. Firstly the point of nuclear coalescence was assigned to be at time 0. From this, the time taken until nuclear coalescence for each pre-merger system can then be estimated. These timescales were derived with a combination of N-body simulations, such as \\citet{Mihos_96}, and dynamical age estimates, where the length and faintness of tidal tails, as well as nuclei separation, were used \\citep{Toomre_72}. For post-merger systems, assigning an age estimate was done by making the assumption that the last widespread episode of star formation within the system took place at the time of nuclear coalescence. Stellar population synthesis models were then used to calculate these timescales, therefore giving a good merger age estimate \\citep{Bruzual_93}. In the following sub-sections a brief description of the nine merger examples, and their X-ray properties, are presented in chronological order.\n\n\\subsection{Arp 270}\n\nThe earliest example of an interacting system in our sample is Arp 270 (also NGC 3395\/3396). This comprises two spiral galaxies of comparable mass \\citep{Hern_01}, separated by 12\\,kpc at a distance of 28 Mpc (assuming \\ensuremath{H_{\\mathrm{0}}}\\ = 75\\,km s$^{-1}$\nMpc$^{-1}$, and accounting for Virgocentric in-fall). These galaxies are connected by an optical bridge which is thought to have formed during the systems first perigalactic passage which occurred approximately 5$\\times$10$^8$ years ago.\n \nA 20 ks \\emph{Chandra}\\ observation of the system was made in 2001 and is discussed in detail in \\citet{Brassington_05}, the contours of adaptively smoothed 0.3$-$8.0\\,keV X-ray contours overlaid on an optical image are shown in Figure \\ref{fig:arp270_optic_con}. From this observation 16 point sources are detected, 7 of which are classified as {\\em Ultraluminous X-ray Sources} (ULX's), with \\ensuremath{L_{\\mathrm{X}}}\\ $\\ge 1\\times$10$^{39}$erg s$^{-1}$ \\citep{Soria_05b}. 
The diffuse gas emits at a global temperature of $\sim$0.5 keV and shows no evidence of hot gaseous outflows as are seen in later stage systems (RP98). The galaxy pair, although in a very early stage of interaction, already shows increased levels of \ensuremath{L_{\mathrm{FIR}}}\ compared to quiescent galaxies, indicating that there is enhanced star formation taking place. The numerical simulations of \citet{Mihos_96} suggest that a system such as Arp 270 will coalesce in $\sim$650 Myr.\n\n\begin{figure}\n \includegraphics[width=\linewidth]{images\/arp270_opt_xcon.ps}\n \hspace{0.1cm}\n \caption{The early merger system Arp 270. Contours of adaptively smoothed 0.3$-$8.0\,keV X-ray data from {\em Chandra} ACIS-S overlaid on an optical image from the Palomar 5m telescope. }\n \label{fig:arp270_optic_con}\n\end{figure}\n\n\subsection{The Mice}\n\nThe Mice (also Arp 242, NGC 4676A\/B) is another early stage merger system, lying second in the evolutionary sequence proposed by \citet{Toomre_72}. At a distance of 88 Mpc, the two detached spiral galaxies are again connected by a tidal bridge and, in addition to this feature, exhibit large tidal tails, a consequence of the galaxies interacting as they have passed each other. \citet{Read_03} reports on a 30.5\,ks \emph{Chandra}\ observation of the system; the adaptively smoothed 0.2$-$10.0\,keV X-ray contours overlaid on an optical image are shown in Figure \ref{fig:mice_opt_con}. \n\nFive ULXs are detected in association with the galaxies; these sources have been found to be coincident with regions of ongoing star formation, both within the nuclei of the galaxies and also in the tidal tails. The structure of the diffuse X-ray gas in both nuclei suggests that this system is at a more advanced stage of evolution than Arp 270. This is indicated by the morphology of the gas, a soft, thermal plasma, which extends out along the minor axis of both galaxies, suggesting that these features are starburst-driven winds. As in Arp 270, the Mice emits an enhanced level of \ensuremath{L_{\mathrm{FIR}}}, again indicating that this system has enhanced star formation taking place; this emission is particularly high in the nuclei of the galaxies. \n\n\begin{figure}\n \includegraphics[width=\linewidth]{images\/mice_opt_xcon.ps}\n \hspace{0.1cm}\n \caption{The Mice, contours of adaptively smoothed 0.2$-$10.0\,keV X-ray data from {\em Chandra} ACIS-S overlaid on an optical image from WFPC2 on board the {\em HST}. }\n \label{fig:mice_opt_con}\n\end{figure}\n\n\subsection{The Antennae}\n\nThe Antennae, NGC 4038\/4039, or Arp 244, is probably the most famous example of a galaxy pair undergoing a major merger, and, lying at a distance of 19 Mpc, is also the nearest. A deep integrated \emph{Chandra}\ observation of 411\,ks has been made of this system, enabling the nature of both the point source population and the diffuse gas to be investigated \citep{Fabbiano_04, Zezas_04}. Figure \ref{fig:ant_opt_con} shows the full band (0.3$-$6.0 keV) X-ray contours overlaid on an optical image. \n\nDue to the depth of the observation, sources were detected down to a luminosity of 2$-$5 $\times$10$^{37}$erg s$^{-1}$. This resulted in the detection of 120 point sources, 12 of which have been confirmed as ULXs. 
The integrated observation comprises 7 separate pointings, enabling the variability of the point sources to be investigated.\n\nOut of the 12 ULXs, 4 emit below 1 $\times$10$^{39}$ erg s$^{-1}$ (the lower luminosity threshold for a source to be classified as a ULX) in at least one observation. Further, one of the sources was observed in only one pointing, indicating that it is likely to be a transient, providing further evidence that ULXs are a heterogeneous class, comprising contributions from X-ray binary systems, transient sources and also, possibly, intermediate mass black holes (IMBHs). \n\nThe quality of the data has also led to detailed mapping of the diffuse gas, investigating both the temperature and metallicity variation of the ISM. 21 separate regions of the diffuse gas have been analysed; spectral fitting of these regions reveals a variation in temperature of the diffuse gas from 0.2 to 0.9 keV, and metallicities vary from regions of sub-solar abundances to areas of emission displaying super-solar abundances, notably in both the nuclear regions and two hotspots in the northern loop of the disc (R1 and R2 reported in \citet{Fabbiano_03b}). \n\nThe morphology of the gas reveals that there are large-scale diffuse features: two large faint X-ray loops extending to the South of the system and a low-surface-brightness halo in the region surrounding the stellar discs extending out to $\sim$18\,kpc from the nucleus of NGC 4039. The two loops have temperatures ranging from 0.29 to 0.34 keV and the low surface brightness halo has a nominal temperature of 0.23 keV. This cooler, larger-scale emission may be the aftermath of a superwind, possibly from the first encounter of the two systems, which took place $\sim$2$-$5 $\times$10$^8$ years ago. The star formation rates within the two nuclei are 2.1 \ensuremath{\Msol~\pyr}\ and 1.7 \ensuremath{\Msol~\pyr}, and the rate in the region where the discs overlap has been found to be 5.0 \ensuremath{\Msol~\pyr}\ \citep{Mihos_93}.\n\n\n\begin{figure}\n \includegraphics[width=\linewidth]{images\/ant_opt_xcon.ps}\n \hspace{0.1cm}\n \caption{The Antennae, contours of adaptively smoothed 0.3$-$6.0\,keV X-ray data from {\em Chandra} ACIS-S overlaid on an optical image from the UK Schmidt telescope. }\n \label{fig:ant_opt_con}\n\end{figure}\n\n\subsection{Mkn 266}\n\nThe next system within this sample, Markarian 266 (also NGC 5256), is a double nucleus system with a large gaseous envelope. Here, a summary of the galaxy properties is presented, with the full analysis and results of a recent \emph{Chandra}\ observation detailed in section \ref{sec:mkn266}. \n\nThis system lies at a distance of 115 Mpc and has an X-ray luminosity of 7.32 $\times$10$^{41}$erg s$^{-1}$. The two nuclei, originating from the progenitors, are thought to comprise a LINER with a powerful starburst component, to the north of the system, and a Seyfert type 2 to the south \citep{Mazzarella_88}. In addition to these sources, an area of enhanced emission has been detected between these nuclei; it is thought that this is caused by the collision of the two discs. Prior to a 20\,ks \emph{Chandra}\ observation of Mkn 266, made in 2001, this feature had not been seen in X-rays, but, due to {\em Chandra's} superior spatial resolution, it is now possible to distinguish this feature. 
This can be seen in Figure \ref{fig:mkn_266_opt_con}, where the adaptively smoothed, full band X-ray contours (0.3$-$8.0\,keV) are shown overlaid on an image from WFPC2, on board the {\em Hubble Space Telescope (HST)}.\n\n\begin{figure}\n \includegraphics[width=\linewidth]{images\/mkn266_opt_xcon.ps}\n \hspace{0.1cm}\n \caption{Mkn 266, contours of adaptively smoothed 0.3$-$8.0\,keV X-ray data from {\em Chandra} ACIS-S overlaid on an optical image from WFPC2 on board the {\em HST}. }\n \label{fig:mkn_266_opt_con}\n\end{figure}\n\nIn addition to the central emission from the colliding galaxies, a large X-ray gas cloud can be seen to the North of the system. From a \emph{ROSAT}\ observation it was proposed that this feature is a superwind, driven by the effect of the starburst's supernovae and stellar winds from the centre of the system \citep{Wang_97}. \citet{Kollatschny_98} argue that an X-ray `jet', arising from a centrally powered superwind, is unlikely due to both the non-radial geometry of the emission and the brightness of the feature's X-ray emission. They suggest that the most plausible mechanism to explain the `jet' is excitation by hot post-shock gas, although they do not conclude where the energy to power this feature would arise from. From the \emph{Chandra}\ observation we speculate that the northern emission observed could also arise from star formation taking place in a tidal arm that has been stripped from the galaxy during an earlier interaction. The exact nature of this emission is discussed in detail in section \ref{sec:mkn266_northern}.\n\n\subsection{NGC 3256}\n\nThe most X-ray luminous system in our sample, NGC 3256, is a powerful ultraluminous infrared galaxy (ULIRG), lying at a distance of 56 Mpc. This merger system, like Mkn 266, has one common gaseous envelope containing the nuclei from its two parent galaxies. \citet{Lira_02} report on a 28\,ks \emph{Chandra}\ observation made in 2000. Contours of adaptively smoothed 0.2$-$8.0\,keV X-ray emission overlaid on an optical image of the system are shown in Figure \ref{fig:ngc3256_opt_con}. \n\nThe total X-ray luminosity of NGC 3256 is 7.87$\times$10$^{41}$ \ensuremath{\erg~\ps}\ with $\sim$70\% of the X-ray luminosity arising from the diffuse emission. Fourteen discrete X-ray sources were detected in this system, all of which have been classified as ULXs. Due to the high source detection threshold of this observation (\ensuremath{L_{\mathrm{X}}}=1.4$\times$10$^{39}$erg s$^{-1}$), fainter point sources were not detected. Both galaxy nuclei are clearly detected in X-rays. The Northern nucleus is a site of intense star formation, and UV spectra \citep{Lipari_00} show strong absorption lines, implying the presence of massive young stars. The Southern nucleus is heavily obscured and appears to be less active than the Northern nucleus. It has been suggested that it hosts an AGN, although there is no clear evidence of this provided in the X-ray data. The soft diffuse emission of the system can be described by two thermal components with a harder tail. The thermal plasma components exhibit temperatures of 0.6 keV and 0.9 keV, and the hard component is thought to arise from a contribution from the lower luminosity X-ray point source population. 
\n\nA kinematic study of the system \citep{English_03} suggests that NGC 3256 is currently experiencing the starburst that just precedes the final core collision and, given the close proximity of the two nuclei, it is likely that this system has undergone more than one perigalactic approach. Assuming that the two tidal tails formed during the last close encounter, the time that has elapsed since then can be estimated. This characteristic timescale was calculated to be 500 Myr, and it is thought that coalescence will take place in $\sim$200 Myr.\n\n\begin{figure}\n \includegraphics[width=\linewidth]{images\/ngc3256_opt_xcon.ps}\n \hspace{0.1cm}\n \caption{NGC 3256, contours of adaptively smoothed 0.3$-$8.0\,keV X-ray data from {\em Chandra} ACIS-S overlaid on an optical image from WFPC2 on board the {\em HST}. }\n \label{fig:ngc3256_opt_con}\n\end{figure}\n\n\subsection{Arp 220}\n\nThe double-nucleus system Arp 220 (IC 4553\/4, UGC 09913) is a prototypical ULIRG, lying at a distance of 76 Mpc. The two nuclei have a separation of less than 0.5 kpc and are just at the point of coalescence. This complicated, dusty system has the highest value of \ensuremath{L_{\mathrm{FIR}}}\ in our sample, which arises from the large reservoir of hot gas within the system. The mechanism by which this gas is heated is still under debate. Both the presence of a heavily shrouded AGN and intense nuclear starbursts have been suggested, along with a combination of both these energy sources contributing to the heating mechanism. A 60 ks \emph{Chandra}\ observation of the system was made in 2000, and from these data both the nuclear and the extended emission have been analysed \citep{Clements_02,McDowell_03}. Figure \ref{fig:arp220_opt_con} shows the 0.2$-$1.0 keV adaptively smoothed X-ray emission, overlaid on an optical image.\n\nThe system hosts 4 ULXs, including the nuclei; no other point sources are detected. Three distinct regimes of large diffuse gas structures are observed: the circumnuclear region, a plume region and, further out, diffuse lobe regions. The circumnuclear region has both compact, hard nuclear emission that resides within the central 1 kpc of the system and softer, more extended emission. It has been suggested that the hard component of this region arises from a significant point source population, emitting at a luminosity lower than $\sim 5 \times 10 ^{39}$ \ensuremath{\erg~\ps}, the sensitivity threshold for this observation, although this could also be an AGN, albeit one emitting at a low luminosity.\n\nThe plume regions extend to the northwest and southeast of the system with a projected tip-to-tip length of $\sim$10 kpc; these features are clearly seen in Figure \ref{fig:arp220_opt_con}. The spectrum of the plumes includes both hot (1$-$5 keV) and cooler (0.25 keV) thermal contributions. It is likely that these regions are associated with a superwind extending from the nuclear region as a consequence of the vigorous star formation that is taking place at the centre of the system. Beyond the plumes, two large, low surface brightness lobe regions have been observed extending 10$-$15 kpc on either side of the nuclear region. These regions have been found to be cooler than the plumes (0.2$-$1.0 keV), although, due to the low number of counts, higher temperature gas residing in these regions cannot be ruled out. 
\n\nIt was noted by \citet{Heckman_96} that the plumes and lobes are ``misaligned'' by 25\ensuremath{^\circ}$-$30\ensuremath{^\circ}; they suggest that this could be due to a change in orientation of the system as the encounter has progressed. \citet{McDowell_03} propose that this misalignment is actually a consequence of the lobes being produced not by the superwinds in the system, but by the merger itself. Tentative evidence to support this scenario is presented in an INTEGRAL observation by \citet{Colina_04}.\n\n\begin{figure}\n \includegraphics[width=\linewidth]{images\/arp220_opt_xcon.ps}\n \hspace{0.1cm}\n \caption{Arp 220, contours of adaptively smoothed 0.2$-$1.0\,keV X-ray data from {\em Chandra} ACIS-S overlaid on an optical image from the Palomar 5m telescope. }\n \label{fig:arp220_opt_con}\n\end{figure}\n\n\subsection{NGC 7252}\n\nNGC 7252, the first example of a merger-remnant galaxy in our sample, is a prototypical merger remnant at a distance of 63 Mpc. The central part of the system, a single relaxed body, displays an r$^{1\/4}$ optical surface brightness profile typical of elliptical galaxies \citep{Schweizer_82}. In addition to this relaxed nucleus, the galaxy exhibits complex loops and ripples, notably two long tidal tails, indicative of the merger history of this system. From both {\em UBVI} images taken with the WFPC2 instrument on the {\em HST} \citep{Miller_97} and N-body simulations \citep{Hibbard_95}, it has been estimated that nuclear coalescence took place $\sim$1 Gyr ago.\n\nA 28 ks \emph{Chandra}\ observation, along with an \emph{XMM-Newton}\ observation, is reported in \citet{Nolan_04}. Figure \ref{fig:ngc_7252_opt_con} shows the 0.3$-$7.0 keV adaptively smoothed \emph{Chandra}\ X-ray contours overlaid on an optical image. A total of nine ULXs are detected within the optical confines of this system; a further five sources are also detected, but at a lower significance ($<3\sigma$). The hot, diffuse gas of the system has been found to be fairly symmetrical and has an X-ray luminosity of 2.42$\times$10$^{40}$ \ensuremath{\erg~\ps}. This is low when compared to luminosities from typical elliptical galaxies (\ensuremath{L_{\mathrm{X}}}\ $\sim$10$^{41-42}$ \ensuremath{\erg~\ps}). The low luminosity of this system is possibly due to the young age of the galaxy and, over much longer timescales, X-ray halo regeneration may increase the mass and hence luminosity of the gas within this system to levels seen in typical elliptical galaxies. From spectral modelling of the \emph{XMM-Newton}\ data, the X-ray emission from the nuclear region is found to have temperature components at 0.72 keV and 0.36 keV. There is also a harder contribution that can be fitted with a power law, and it is expected that this is due to a lower luminosity point source population. \n\nDuring the nuclear coalescence of NGC 7252 the star formation rate at the centre of the galaxy would have been massively enhanced. Now, as the reserve of gas becomes depleted, the star formation rate has fallen to one third of the value it would have been at its peak \citep{Mihos_93}, although its \ensuremath{L_{\mathrm{FIR}}}\ is still enhanced when compared to normal quiescent galaxies.\n\n\begin{figure}\n \includegraphics[width=\linewidth]{images\/ngc7252_opt_xcon.ps}\n \hspace{0.1cm}\n \caption{NGC 7252, contours of adaptively smoothed 0.3$-$7.0\,keV X-ray data from {\em Chandra} ACIS-S overlaid on an optical image from the LCO 2.5m telescope. 
}\n \\label{fig:ngc_7252_opt_con}\n\\end{figure}\n\n\\subsection{Arp 222}\n\nArp 222, or NGC 7727, is a post merger galaxy at a slightly more advanced stage of evolution than NGC 7252 \\citep{Georgakakis_00}. A CO observation \\citep{Crabtree_94} indicates that these systems are very similar in morphology and both host young cluster populations, however, Arp 222 seems to have a much smaller molecular gas content than NGC 7252. From a K-band observation \\citep{Rothberg_04} it has been found that the optical surface brightness profile of Arp 222 follows the de Vaucouleurs r$^{1\/4}$ law, indicating that violent relaxation has taken place since the merger. From analysing the discrete structure of the galaxy, plumes extending from the nucleus were identified, these features indicate that the system has still not fully relaxed into a mature elliptical galaxy.\n\nA 19 ks \\emph{Chandra}\\ observation of the post merger-remnant was carried out in 2001, and analysis and results from this observation are given in section \\ref{sec:arp222}. Figure \\ref{fig:arp222_opt_con} shows the 0.3$-$8.0 keV adaptively smoothed X-ray emission overlaid on an optical image. From this observation 15 point sources were detected, two of which are classified as ULXs. The X-ray luminosity of the diffuse gas in this system is 6.52 $\\times$10$^{39}$\\ensuremath{\\erg~\\ps}, much lower than the X-ray luminosity emitted from NGC 7252. This is likely to be due to the smaller gas content of Arp 222, which, as can be seen in Figure \\ref{fig:arp222_opt_con}, does not extend to the optical confines of the galaxy. From spectral modelling the global temperature of the diffuse gas in this system has been found to be 0.60 keV.\nThe \\ensuremath{L_{\\mathrm{FIR}}}\\ value of the system is much lower than that of the previous systems and is similar to values seen in elliptical galaxies. \n\n\\begin{figure}\n \\includegraphics[width=\\linewidth]{images\/arp222_opt_xcon.ps}\n \\hspace{0.1cm}\n \\caption{Arp 222 Contours of adaptively smoothed 0.3$-$8.0\\,keV X-ray data from {\\em Chandra} ACIS-S overlaid on an optical image from the UK Schmidt telescope. }\n \\label{fig:arp222_opt_con}\n\\end{figure}\n\n\\subsection{NGC 1700}\n\nThe final system in our sample, NGC 1700, is a protoelliptical galaxy with a best age estimate of $\\sim$3 Gyr \\citep{Brown_00}. This galaxy possesses a kinematically distinct core and boxy isophotes at larger radii, it also contains two symmetrical tidal tails, a good indicator that this system formed through the merger of two, comparable mass, spiral galaxies. It has been suggested that it is the presence of these tidal features that causes the 'boxiness' of the galaxy \\citep{Brown_00}. A 42 ks \\emph{Chandra}\\ observation was made of the system in 2000, and the results of this are reported in \\citet{Statler_02}. Figure \\ref{fig:ngc1700_opt_con} shows the 0.3$-$0.8 keV adaptively smoothed X-ray emission overlaid on an optical image of NGC 1700.\n\nFrom this observation, 36 point sources are detected, 6 of which are classified as ULXs \\citep{Diehl_06}. The diffuse X-ray gas is well modelled by a single temperature thermal plasma at a temperature of 0.43 keV and has an X-ray luminosity of 1.47$\\times 10^{41}$ \\ensuremath{\\erg~\\ps}, similar to that of a mature elliptical galaxies. The change in morphology of the diffuse X-ray gas from the central elliptical region to the outer boxy region, can clearly be seen in Figure \\ref{fig:ngc1700_opt_con}. 
\citet{Statler_02} suggest that the flattening of the isophotes is a consequence of an elliptical-spiral interaction, not a spiral-spiral merger as suggested by \citet{Brown_00}. They argue that an interaction involving a preexisting elliptical with a hot ISM would lead to the channelling of the already hot gas into the system's common potential well. This gas, at sufficiently low densities, would then settle into a rotationally flattened cooling disc, as is observed. Currently, neither of these two scenarios, nor the suggestion that the system could have formed from a 3-body interaction \citep{Statler_96}, can be ruled out. \n\n\begin{figure}\n \includegraphics[width=\linewidth]{images\/ngc1700_opt_xcon.ps}\n \hspace{0.1cm}\n \caption{NGC 1700, contours of adaptively smoothed 0.3$-$0.8\,keV X-ray data from {\em Chandra} ACIS-S overlaid on an optical image from the UK Schmidt telescope. }\n \label{fig:ngc1700_opt_con}\n\end{figure}\n\n\section{{\em Chandra} Observations and Data Analysis of Mkn 266 and Arp 222}\n\label{sec_data_red}\n\nObservations of Mkn 266 and Arp 222 were carried out with the ACIS-S camera on board the {\em Chandra} X-ray Observatory. Mkn 266 was observed on 2nd November 2001 with a total observation time of 19.7\,ks, and Arp 222 was observed on 18th December 2001 with a total observation time of 19.0\,ks. The initial data processing to correct for the motion of the\nspacecraft and apply instrument calibration was carried out with the\nStandard Data Processing (SDP) at the {\em Chandra} X-ray Center\n(CXC). The data products were then analysed using the CXC CIAO\nsoftware suite (v3.2)\footnote{http:\/\/asc.harvard.edu\/ciao} and\nHEASOFT (v5.3.1). The data were reprocessed, screened for bad pixels, and time filtered to remove periods of high background (when\ncounts deviated by more than 5$\sigma$ above the mean). This\nresulted in a corrected exposure time of 18.5\,ks for Mkn 266 and 18.8\,ks for Arp 222. The full data analysis and results from these two observations are detailed in the following subsections.\n\n\subsection{Mkn 266}\n\label{sec:mkn266}\n\subsubsection{Overall X-ray Structure}\n\label{mkn:sec_struc}\n\nA 0.3$-$8.0\,keV (from here on referred to as `full band') {\em Chandra}\nimage was created from the cleaned events file and adaptively smoothed\nusing the CIAO task {\em csmooth}, which uses a smoothing kernel to\npreserve an approximately constant signal-to-noise ratio across the\nimage, constrained here to be between 2.6 and 4. In Figure\n\ref{fig:mkn_rgb} both the optical image from the {\em HST} with the full band X-ray contours overlaid (left) and the `true colour' image of the galaxy system (right) are shown. The `true colour' image was created by combining three separate smoothed images in three energy bands: 0.3$-$0.8 keV, 0.8$-$2.0 keV and 2.0$-$8.0 keV, using the same smoothing scale for each image. These energy bands correspond to red, green and blue, respectively. From these images it can be seen that the system comprises two separate regions of X-ray emission. To the South of the images the central emission from the interacting galaxies can be seen. This comprises the two nuclei from the progenitor galaxies, a LINER to the North and a Seyfert 2 to the South, contained within a common gaseous envelope, and a region of enhanced emission between the two nuclei. This region is coincident with a radio source reported in \citet{Mazzarella_88}. 
It is likely that this enhanced emission is due to the interaction of the two galaxy discs. To the North of the images diffuse gas is detected and shows some correlation with emission seen in the {\\em HST} image. The nature of this emission is discussed in detail in section \\ref{sec:mkn266_northern}.\n\nFrom the `true colour' image it can be seen that the north-east nucleus emits in all three energy bands, whilst the south-west nuclear emission is not as hard. There is also some enhanced X-ray emission in the central region between the two nuclei. The X-ray emission surrounding these nuclei appears to be soft and diffuse with some suggestion of a super-bubble to the south-east of the system.\n\n\\begin{figure*}\n \\begin{minipage}{0.5\\linewidth}\n \\vspace{0.7cm}\n \\includegraphics[width=\\linewidth]{images\/mkn266_main.ps}\n\n\n \\end{minipage}\\vspace{0.05\\linewidth}\n \\begin{minipage}{0.38\\linewidth}\n\n \\hspace{0.1\\linewidth}\n \\includegraphics[width=\\linewidth]{images\/mkn_rgb.ps}\n\n \\end{minipage}\n \\caption{Left, contours of adaptively smoothed 0.3$-$8.0\\,keV X-ray data from {\\em Chandra} ACIS-S overlaid on an optical image from WFPC2 on board the {\\em HST}. The black box indicates the plate boundary of the {\\em HST} image. Right shows the `true colour' image of Mkn 266. Red corresponds to 0.3$-$0.8 keV, green to 0.8$-$2.0 keV and blue to 2.0$-$8.0 keV.}\n \\label{fig:mkn_rgb}\n\\end{figure*}\n\n\\subsubsection{Point Source Spatial and Spectral Analysis}\n\\label{mkn:sec_source_ps}\n\nDiscrete X-ray sources were detected using the CIAO tool {\\em\nwavdetect}. This was run on the full band\nimage, over the 2, 4, 8, 16 pixel wavelet scales (where pixel\nwidth is 0.1\\ensuremath{^{\\prime\\prime}}), with a significance threshold of\n$2.5\\times10^{-5}$, which corresponds to one spurious source over a\n200 $\\times$ 200 pixel grid, the size of our image. Only 4 sources were\ndetected in this range; the two nuclei and two regions in the diffuse northern feature. Both the regions in the northern emission contained less than 30 counts and had large detection regions of r$\\ge$3.5\\arcsec\\ and so were not defined as point sources. Instead, a spectrum was extracted from a region file containing all the northern emission. These two detections, along with the extraction and background extraction regions for the 2 nuclei, are shown in Figure \\ref{fig:mkn_reg}.\n\nThe two detected nuclear sources were extracted using the source\nregion files created by {\\em wavdetect}. The size of each region was\nselected to ensure that as many source photons as possible were\ndetected whilst minimising contamination from nearby sources and\nbackground. The background files were defined as a source free annulus surrounding and concentric with each source region\nfile, to account for the variation of diffuse emission, and to\nminimise effects related to the spatial variation of the CCD response. \n\nThe source spectra for the two nuclei were created using the CIAO tool {\\em psextract} and fitted in XSPEC (v11.3.1). Due to the low number of counts in this observation the Cash statistic \\citep{Cash_79} was used in preference to \\ensuremath{\\chi^2}\\ when modelling the data. Both sources were well fitted with an absorbed thermal model plus an absorbed power law, the absorption component was fixed at the value out of our Galaxy (1.68 $\\times 10 ^{20} $ atom cm$^{-2}$). 
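\n\nThe preference for the Cash statistic over \ensuremath{\chi^2}\ in the low count regime can be illustrated with a short sketch. The Python snippet below is purely illustrative and is not the fitting code used here (the fits themselves were performed in XSPEC); it evaluates the XSPEC-style C-statistic alongside Pearson's \ensuremath{\chi^2}\ for a hypothetical low count spectrum, with both the counts and the model predictions invented for the example.\n\begin{verbatim}\nimport numpy as np\n\ndef cstat(observed, model):\n    # XSPEC-style C-statistic (Cash 1979) for Poisson-distributed counts;\n    # 'model' holds the predicted counts per bin and must be positive.\n    observed = np.asarray(observed, dtype=float)\n    model = np.asarray(model, dtype=float)\n    with np.errstate(divide='ignore', invalid='ignore'):\n        term = np.where(observed > 0,\n                        observed * np.log(observed / model), 0.0)\n    return 2.0 * np.sum(model - observed + term)\n\ndef chi2(observed, model):\n    # Pearson chi-squared, which assumes approximately Gaussian errors;\n    # this assumption breaks down in bins with only a few counts.\n    observed = np.asarray(observed, dtype=float)\n    model = np.asarray(model, dtype=float)\n    return np.sum((observed - model) ** 2 / model)\n\n# Toy low-count spectrum (hypothetical values, not the Mkn 266 data)\ncounts = np.array([0, 2, 1, 5, 3, 0, 1, 4])\npredicted = np.array([0.8, 1.5, 2.0, 4.2, 3.1, 0.9, 1.2, 3.8])\n\nprint('C-stat:', cstat(counts, predicted))\nprint('chi^2 :', chi2(counts, predicted))\n\end{verbatim}\nThe C-statistic is derived from the Poisson likelihood itself, so it remains well behaved for bins containing only a handful of counts, which is why it is adopted for the spectra presented in this section.\n\n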
The data were restricted to 0.3$-$6.0 keV, as energies below this have calibration uncertainties, and the spectra presented here do not have significant source flux above 6.0 keV. The parameters from these best fit models can be seen in Table \\ref{tab:mkn266_ps}; where columns 2 and 3 give the right ascension and declination (J2000), column 4 the count rate, column 5 the source significance, column 6 the Galactic value of \\ensuremath{N_{\\mathrm{H}}}, column 7 $kT$, column 8 metallicity, column 9 power law photon index ($\\Gamma$) and columns 10 and 11 give the observed and intrinsic (i.e. corrected for absorption) luminosities. \n\n\\begin{figure}\n \\hspace{0.1\\linewidth}\n \\includegraphics[width=0.8\\linewidth]{images\/mkn_reg.ps}\n \\hspace{0.1cm}\n \\caption{Adaptively smoothed 0.3$-$8.0\\,keV X-ray data from {\\em Chandra} ACIS-S of Mkn 266, with {\\em wavdetect} point source denoted with an ``X''. Extraction and background region files for source A and B are also shown. }\n \\label{fig:mkn_reg}\n\\end{figure}\n\n\\begin{table*}\n\\centering\n\\caption[]{Sources detected in the 0.3$-$6.0 keV band within the Mkn 266 with a summary of point source spectral fits. Errors on the spectral\nfits parameters are given as 1\\( \\sigma \\) for 1 interesting parameter\nfrom XSPEC. An F denotes that the value has been frozen.\n\\label{tab:mkn266_ps}}\n\\begin{tabular}{c@{}c@{}c@{}c@{}c@{}c@{}cccc@{}c@{}}\n\\noalign{\\smallskip}\n\\hline\nSource & RA & Dec. & Count Rate & Sig. & \\multicolumn{4}{c}{Spectral Fit } & \\multicolumn{2}{c}{Luminosity (0.3$-$6.0 keV) } \\\\\n & & & & & \\ensuremath{N_{\\mathrm{H}}} & kT & Z & $\\Gamma$ \n & \\multicolumn{2}{c}{(10\\(^{41} \\) erg s\\(^{-1} \\)) } \\\\ \n & & &($\\times 10^{-2}$ count s$^{-1}$) & ($\\sigma$) & ($\\times 10 ^{20} $ atom cm$^{-2}$) & (keV) & (\\ensuremath{\\mathrm{Z}_{\\odot}}) & & Observed & Intrinsic \\\\ \n \\\\ \\hline\n \n \n\nA & 13:38:17.84 & +48:16:41.2 & 1.90$\\pm$0.11 & 41.8 & 1.68 F & 0.80$^{+0.07}_{-0.11}$ & 0.59$^{+1.50}_{-0.24}$ & 0.02$^{+0.25}_{-0.21}$ & 3.41 $\\pm 0.17$ & 3.47 $\\pm 0.17$ \\\\\nB & 13:38:17.36 & +48:16:32.5 & 0.90$\\pm$0.08 & 18.7 & 1.68 F & 0.88$^{+0.08}_{-0.10}$ & 0.30 F & 1.13$^{+0.49}_{-0.64}$ & 0.98 $\\pm 0.07$ & 1.04 $\\pm 0.07$ \\\\ \n\n\\noalign{\\smallskip}\n\\hline\n\\end{tabular}\n\\end{table*}\n\nFrom this table it can be seen that source A, the LINER, exhibits a remarkably flat power law of 0.02$^{+0.25}_{-0.21}$. It is likely that part of the contribution to this component arises from unresolved point sources but the very flat slope of this fit suggests that some of the hard component is either heavily absorbed or dominated by reflection \\citep{Levenson_04,Matt_00}. By including an absorber for the power law component we can allow for this extra absorption, however, due to the low number of counts, when including this component the fit becomes unconstrained.\n\nIt is likely that the cause of this heavy obscuration is due to the strong starburst in this region. An observation by \\citet{Sanders_86} found that Mkn 266 is rich in CO, indicating that this system contains a massive, warm reservoir of molecular gas to fuel this starburst, and the dynamic conditions that concentrate this material also serve to further obscure the LINER. The MEKAL component of the fit to this source arises from this starburst, which contributes 5.7$\\times$10$^{40}$\\ensuremath{\\erg~\\ps}\\ to the total intrinsic luminosity of source A. 
\n\nSource B, the southern nucleus, emits at a temperature of 0.88 keV and has a power law slope of 1.13. This thermal component is likely to arise from the diffuse emission surrounding the nucleus, and the value of $\Gamma$ is consistent with the interpretation of this nucleus being a Seyfert 2 \citep{Cappi_05}. In previous X-ray surveys it has been found that up to $\sim$75\% of Seyfert 2 objects are heavily obscured with \ensuremath{N_{\mathrm{H}}}\ $\ge$10$^{23}$atom cm$^{-2}$ \citep{Risaliti_99}. With the data we have from this observation the two component model describes the spectrum well, and, due to the limited number of counts, a more complex model with an additional absorption component would over-fit the data. In addition to this, \citet{Cappi_05} have found from a recent survey of nearby Seyfert galaxies that these sources possess the entire range of \ensuremath{N_{\mathrm{H}}}\ from 10$^{20}$atom cm$^{-2}$ to 10$^{24}$atom cm$^{-2}$, fairly continuously, indicating that, although the southern nucleus could be heavily obscured, it is also possible that the two component model described here is a good indication of the properties of this source. \n\n\subsubsection{Diffuse Emission Spatial and Spectral Analysis}\n\label{mkn:sec_source_dif}\n\nFrom the `true colour' image of Mkn 266 it is clear that the galaxy contains significant amounts of diffuse gas, in both the northern feature and also surrounding the central galaxy. To investigate the nature of this diffuse emission, spectra were extracted from these two separate regions using the CIAO tool {\em acisspec}, and fitted in XSPEC. Once again, due to the low number of counts, the Cash statistic was used when modelling the data. \n\nThe diffuse gas contained within the galaxy is not well described by a single temperature fit, and is better modelled with two temperature components. It is likely that the hotter gas contributing to this model arises from the enhanced emission seen between the two nuclei. To investigate this, the extraction region was divided into two separate parts: a smaller region to probe the impact area, where it is likely that the two discs have collided, and a second, larger region covering the rest of the galaxy, which excludes the inner impact region and the northern diffuse region. These, along with the extraction region for the northern diffuse emission and the associated background regions, are shown in Figure \ref{fig:mkn_diff}. The two separate regions, diffuse 1 and diffuse 2, are both well described with single component MEKAL fits, exhibiting the same temperatures that were fitted in the two temperature model. The northern region is also well described with a single component MEKAL fit. The parameters from the best fit models for these regions, along with their X-ray luminosities, are shown in Table \ref{tab:mkn_diff}.\n\n\begin{figure}\n \hspace{0.1\linewidth}\n \includegraphics[width=0.8\linewidth]{images\/mkn_diff.ps}\n \hspace{0.1cm}\n \caption{Adaptively smoothed 0.3$-$8.0\,keV X-ray data from {\em Chandra} ACIS-S of Mkn 266, with extraction and background regions for the diffuse gas shown. The background region files are selected to be source-free annuli, surrounding and concentric with each extraction region file.}\n \label{fig:mkn_diff}\n\end{figure}\n\n\begin{table*}\n\centering\n\caption[]{Summary of the spectral fits of the diffuse gas regions in Mkn 266. 
Errors on the spectral\nfits parameters are given as 1\\( \\sigma \\) for 1 interesting parameter\nfrom XSPEC. An F denotes that the value has been frozen.\n\\label{tab:mkn_diff}}\n\\begin{tabular}{cccccc}\n\\noalign{\\smallskip}\n\\hline\nDiffuse Region & \\ensuremath{N_{\\mathrm{H}}} \t\t\t\t\t& kT \t& Z \t\t& \\multicolumn{2}{c}{Luminosity (0.3$-$6.0 keV)} \\\\\n \t\t& ($\\times 10 ^{20} $ atom cm$^{-2}$) \t& (keV) & (\\ensuremath{\\mathrm{Z}_{\\odot}})\t& \\multicolumn{2}{c}{(10$^{40}$ erg s$^{-1}$)} \\\\\n\t\t&\t\t\t\t\t&\t&\t\t& Observed & Intrinsic \\\\ \n \\hline\n \n \n\nDiffuse 1 \t& 1.68 F \t\t\t\t& 0.52$^{+0.06}_{-0.06}$ & 0.23$^{+0.05}_{-0.03}$ \t& 9.88 $\\pm 0.50$ & 10.92 $\\pm 0.50$ \\\\\nDiffuse 2\t& 1.68 F \t\t\t\t& 1.07$^{+0.06}_{-0.09}$ & 0.22$^{+0.15}_{-0.08}$ \t& 8.10\t$\\pm 0.49$ & 9.83 $\\pm 0.53$ \\\\\nDiffuse 3 \t& 1.68 F \t\t\t\t& 0.30$^{+0.02}_{-0.03}$ & 0.30F\t \t\t& 6.10 $\\pm 0.49$ & 6.90 $\\pm 0.55$ \\\\\n\n\n\\noalign{\\smallskip}\n\\hline\n\\end{tabular}\n\\end{table*}\n\nFrom this table it can be seen that the diffuse gas within the galaxy system, diffuse 1, exhibits a temperature of 0.52 keV, a fairly typical temperature for gas in interacting galaxies (RP98). The impact region, diffuse 2, has a higher temperature of 1.07 keV. This is, within errors, the same temperature attained from the higher temperature contribution from the two component MEKAL fit of the combined spectrum for diffuse 1 and diffuse 2, indicating that the higher temperature from this fit was a consequence of the gas arising from this impact area. \n\nThe northern emission appears spectrally distinct from the surrounding diffuse gas and has been found to emit at a cooler temperature of 0.30 keV. With this \\emph{Chandra}\\ data it has also been found that this region has a much lower luminosity than the one derived in \\citet{Kollatschny_98}, who report on a \\emph{ROSAT}\\ HRI observation of the galaxy and find the X-ray luminosity of this region to be 3.1$\\times$10$^{41}$ \\ensuremath{\\erg~\\ps}, as opposed to 6.9$\\times$10$^{40}$ \\ensuremath{\\erg~\\ps}, as reported in this work. However, in \\citet{Kollatschny_98} this X-ray luminosity is derived from assuming that the relative number of HRI counts is directly proportional to their share of the integrated X-ray flux, with the total luminosity of the system being derived from the \\emph{ROSAT}\\ PSPC observation of the system and consequently has large uncertainties associated with it.\n\n\\subsubsection{Nature of the Northern Emission}\n\\label{sec:mkn266_northern}\n\nThe origin of the diffuse emission to the north of the system is still the subject of debate. There have been a number of suggestions as to how this feature has been formed. From HRI and PSPC observations with \\emph{ROSAT}, \\citet{Wang_97} suggest that it is an outflow, driven by the mechanical energy of the supernovae and stellar winds in the starburst. A subsequent paper \\citep{Kollatschny_98}, investigating both the HRI observation and optical B,V,R-images of Mkn 266, argued that the scenario proposed by \\citet{Wang_97} is unlikely, due to both the high luminosity of the `jet' and also its non-radial geometry. 
They instead suggest that the mechanism that gives rise to the `jet' is excitation by hot post-shock gas, although they do not conclude where the energy to power this feature would arise from.\n\nA tridimensional spectrophotometric study by \\citet{Ishigaki_00} investigated the H{$\\alpha$}, [O~\\textsc{iii}] and [S~\\textsc{ii}] emission-lines within Mkn 266. Figure \\ref{fig:mkn_266_halpha} shows the H{$\\alpha$}\\ contours from this observation, overlaid on the full band, adaptively smoothed, \\emph{Chandra}\\ X-ray emission. As can be seen, the enhanced regions of H{$\\alpha$}\\ in the northern region are coincident with the two X-ray regions detected with {\\em wavdetect} (see Figure \\ref{fig:mkn_reg}). Some caution should be exercised when interpreting this plot due to the uncertainty in the astrometry, which we conservatively estimate to be 2\\ensuremath{^{\\prime\\prime}}\\ (1.1 kpc). Even so, the correlation between the X-ray and H{$\\alpha$}\\ is striking, suggesting that this region to the north of the system is a site of star formation, possibly a tidal arm that has been stripped from the southern progenitor during the merger process. This interpretation is strengthened by the fact that both the {\\em HST} (Figure \\ref{fig:mkn_rgb}) and H{$\\alpha$}\\ emission seem to connect with the dust lanes around the southern nucleus \\citep{Ishigaki_00}.\n\n\\begin{figure}\n \\includegraphics[width=0.95\\linewidth]{images\/mkn266_halpha.ps}\n \\hspace{0.1cm}\n \\caption{Adaptively smoothed 0.3$-$8.0\\,keV X-ray data from {\\em Chandra} ACIS-S of Mkn 266. H{$\\alpha$}\\ contours from \\citet{Ishigaki_00} are overlaid. }\n \\label{fig:mkn_266_halpha}\n\\end{figure}\n\nAs shown in the previous section, the luminosity arising from the diffuse gas within the northern feature is 6.9$\\times$10$^{40}$ \\ensuremath{\\erg~\\ps}. Given that our spectral fit to the data indicates that this emission arises from a thermal component, and both the {\\em HST} and H{$\\alpha$}\\ emission indicate that there is star formation within this region, the origin of these X-rays could come from supernovae. Making some assumptions about the geometry of the emitting region of this northern feature the thermal energy of the gas can be derived. First the volume, $V$, of the diffuse northern emission has been assumed to be an ellipsoid, with symmetry about the longer axis. The fitted emission measure is equal to $\\eta n_\\mathrm{e}^{2}V$\nand can be used to infer the mean particle number density $n_\\mathrm{e}$, with\nthe filling factor, $\\eta$, assumed to be 1. This factor represents the fraction of volume filled by the emitting gas. Although we have assumed this to be 1 in our calculations, there is evidence from hydrodynamical simulations to suggest this value could be $\\le$2 per cent \\citep{Strickland_00b}. The mean electron density is then used to derive the total gas mass $M_\\mathrm{gas}$, which then leads to the thermal energy $E_\\mathrm{th}$ of the hot gas. From these assumptions $E_\\mathrm{th}$ has been calculated to be between 6.72$\\times 10^{54}$ \\ensuremath{\\mbox{erg}}\\ (for $\\eta$=0.02) and 3.36$\\times 10^{56}$ \\ensuremath{\\mbox{erg}}\\ (for $\\eta$=1).\n\nBy deriving the supernova rate, \\ensuremath{r_{\\mathrm{SN}}}, for this region, the energy arising from supernovae can be calculated. 
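\n\nAs a simple illustration of this energy budget, the short Python sketch below multiplies the supernova rate by the star formation timescale (less the time taken for a massive star to evolve into a supernova) and by the canonical energy release per supernova, and compares the result with the thermal energy of the hot gas estimated above. The rate of 0.05 SN yr$^{-1}$ and the 20$-$500 Myr age range are those derived in the paragraphs that follow; this is an order-of-magnitude check, not an additional analysis.\n\begin{verbatim}\n# Back-of-the-envelope supernova energy budget for the northern feature.\n# Inputs are the values adopted in the text; purely illustrative.\n\nr_sn = 0.05            # supernova rate, SN per yr (from the 20 cm continuum)\nages = (20e6, 500e6)   # star formation age range in yr (Davies et al. models)\nt_evolve = 1e7         # yr for a massive star to evolve into a supernova\ne_per_sn = 1e51        # erg released per supernova\n\nfor t in ages:\n    n_sn = r_sn * (t - t_evolve)   # number of supernovae to date\n    e_total = n_sn * e_per_sn      # total energy injected, erg\n    print(f'age {t/1e6:5.0f} Myr: {n_sn:.1e} SNe, {e_total:.1e} erg')\n\n# This gives ~5e5 to ~2.5e7 SNe and ~5e56 to ~2.5e58 erg, above the\n# ~7e54 - 3e56 erg thermal energy estimated for the hot gas above.\n\end{verbatim}\nEven for the youngest assumed star formation age, the energy supplied by supernovae exceeds the thermal energy inferred for the gas, which supports the conclusion drawn below that star formation alone could power the observed emission.\n\n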
\citet{Davies_00} calculate \ensuremath{r_{\mathrm{SN}}}\ for the LINER in Mkn 266, using the relation derived in \citet{Condon_90} \n\begin{equation}\nL_{\mathrm{NT}}[\mathrm{WHz^{-1}}]\sim1.3\times 10^{23}(\nu[\mathrm{GHz}])^{-0.8}r_{\mathrm{SN}}[\mathrm{yr^{-1}}],\n\label{eq:sn}\n\end{equation}\nand the 20 cm radio continuum \citep{Mazzarella_88}, where $L_{\mathrm{NT}}$ is the calculated power and $\nu$ is the frequency of the observation. From this they calculate \ensuremath{r_{\mathrm{SN}}}\ to be 0.45 SN yr$^{-1}$. \n\nFor the northern diffuse emission, \citet{Mazzarella_88} find a power of 8.9$\times 10^{21}$ WHz$^{-1}$; using this value and the above equation we find \ensuremath{r_{\mathrm{SN}}}\ = 0.05 SN yr$^{-1}$ for this region. \citet{Mattila_01} find that it is more appropriate to use \ensuremath{L_{\mathrm{FIR}}}\ to calculate \ensuremath{r_{\mathrm{SN}}}\ in starburst galaxies but, as we only have \ensuremath{L_{\mathrm{FIR}}}\ for the whole system, this would greatly overestimate the supernova rate; consequently, we will use the rate of 0.05 SN yr$^{-1}$ we have derived from the 20 cm continuum data.\n\nIn \citet{Davies_00}, models of the star formation timescales have been derived for the LINER. Although these models give a fairly poorly constrained age (20$-$500 Myr), by making the assumption that this is a proxy for the timescales of star formation taking place in the northern diffuse emission, the number of supernovae formed in this time can be estimated. Assuming also that a massive star takes $\sim 1\times$10$^{7}$ yr to evolve into a supernova, we calculate the number of SNe formed in this time to be $5\times10^5 - 2.5\times10^7$. If each supernova releases $\sim1\times 10^{51}$ \ensuremath{\mbox{erg}}, the total thermal energy available will be between $\sim5\times 10^{56}$ \ensuremath{\mbox{erg}}\ and $\sim2.5\times 10^{58}$ \ensuremath{\mbox{erg}}, thus demonstrating that the observed luminosity in the northern diffuse emission could arise from the contribution of star formation alone. \n\nThe reason that both \citet{Wang_97} and \citet{Kollatschny_98} prefer an outflow scenario to that of tidal stripping to explain this northern feature is the presence of optical emission in a highly excited state within this feature. These line ratios, coupled with the electron temperature calculated for this region, cannot be explained by thermal collisional ionisation, and are likely to arise through either photoionisation or shock excitation. \citet{Wang_97} and \citet{Kollatschny_98} both suggest that these shocks arise from a `jet', outflowing from the central galaxy. However, \citet{Ishigaki_00} propose that the northern emission is photoionised by radiation from the Seyfert nucleus and that, therefore, excitation by shocks is not required to explain the high excitation emission lines that are observed. \n\nThe present study adds further evidence to this debate. With the higher resolution provided by \emph{Chandra}, the structure of the northern emission can now be resolved. From this it can be seen that the northern feature is curved, and the X-ray emission appears to be `clumpy', with the two most intense areas of X-ray emission coincident with the H{$\alpha$}\ emission (see Figure \ref{fig:mkn_266_halpha}). These are not features which indicate a superwind. 
Furthermore, the morphology of this region traces that of the optical emission, which strongly suggests that these two features are connected. Consequently, we propose that the most likely source of this X-ray emission is star formation taking place within a tidal arm that has been stripped out from one of the progenitors during the merger of the two galaxies.\n\n\subsubsection{The South East Extension}\n\nTo the south east of the central galaxy region there is some suggestion of X-ray extension, which appears to be coincident with filaments seen in the {\em HST} image, just beginning to break out of the galactic disc (Figure \ref{fig:mkn_rgb}). Given the low count statistics in this observation we cannot extract spectra for this region alone, but, from the morphology of the X-ray emission, coupled with the H{$\alpha$}\ emission, there is some indication that this region is a site of star formation just on the point of break-out from the gaseous envelope of the central system. This scenario is preferred to the one suggested in the case of the northern emission as there is additional evidence of an outflow feature from the starburst region around the LINER \citep{Ishigaki_00}. This outflow could be beginning to sweep the dust out of the galaxy, indicating that this system is in the stage just prior to the outbreak of large-scale galactic winds throughout the system. Of course, alternatively, this feature could be a small-scale version of the northern emission, particularly given its coincidence with optical emission seen with the {\em HST}. However, without spectral information to investigate the nature of this object, neither scenario can be ruled out.\n\n\subsection{Arp 222}\n\label{sec:arp222}\n\subsubsection{Overall X-ray Structure}\n\nA full band adaptively smoothed \emph{Chandra}\ image of Arp 222 was produced using the same tools and techniques as described in section \ref{mkn:sec_struc}. In Figure \ref{fig:222_gal} both the optical image from the UK Schmidt telescope, with the smoothed full band X-ray contours overlaid (left), and the `true colour' image (right) of Arp 222 are shown. The `true colour' image was produced using the same methods as described in section \ref{mkn:sec_struc}.\n\n\begin{figure*}\n \begin{minipage}{0.58\linewidth}\n \vspace{0.7cm}\n \includegraphics[width=\linewidth]{images\/arp222_main.ps}\n\n\n \end{minipage}\vspace{0.03\linewidth}\n \begin{minipage}{0.385\linewidth}\n\n \includegraphics[width=\linewidth]{images\/arp222_rgb.ps}\n\n \end{minipage}\n \caption{Left, contours of adaptively smoothed 0.3$-$8.0\,keV X-ray data from {\em Chandra} ACIS-S overlaid on an optical image from the UK Schmidt telescope. Right shows the `true colour' image of Arp 222. Red corresponds to 0.3$-$0.8 keV, green to 0.8$-$2.0 keV and blue to 2.0$-$8.0 keV.\n \label{fig:222_gal}}\n\end{figure*}\n\nFrom these images it can be seen that there is some diffuse gas at the centre of the galaxy, but this emission does not extend out to the optical confines of the system. In addition to the diffuse gas, a number of point sources are found throughout the galaxy. From the `true colour' image, the white appearance of the nucleus indicates that it emits in all three energy bands, whilst the other point sources can be seen to be softer.\n\n\subsubsection{Spatial and Spectral Analysis}\n\nX-ray point sources were searched for using the CIAO tool {\em wavdetect}. 
This was run on the full band image, over the 2, 4, 8, 16 pixel wavelet scales (where pixel width is 0.5\\ensuremath{^{\\prime\\prime}}), with a significance threshold of 2.8$\\times$10$^{-6}$, which corresponds to one spurious source over a 600 $\\times$ 600 pixel grid, the size of our image. 24 sources were detected in this full band range, these were then limited to those that lie within the $D_{25}$ ellipse of the galaxy, reducing the number to 15 detected sources. These regions, along with the $D_{25}$ ellipse are shown in Figure \\ref{fig:arp222_ps}. From using the \\emph{Chandra}\\ Deep Field South number counts \\citep{Giacconi_01} we estimate 2 to 3 of these sources to be background objects. \n\n\\begin{figure}\n\\hspace{0.1\\linewidth}\n \\includegraphics[width=0.8\\linewidth]{images\/arp222_reg.ps}\n \\hspace{0.1cm}\n \\caption{Adaptively smoothed 0.3$-$8.0\\,keV X-ray data from {\\em Chandra} ACIS-S of Arp 222, with point sources detected by {\\em wavdetect} indicated. The D$_{25}$ ellipse is also shown.}\n \\label{fig:arp222_ps}\n\\end{figure}\n\nSpectra for the point sources were extracted using the CIAO tool {\\em psextract} with region and background region files selected in the same way as described in section \\ref{mkn:sec_source_dif}. Of the 15 detected sources, only one, source 5, emitted sufficient counts to be modelled individually, the other 14 sources were fitted simultaneously. The spectrum of the diffuse gas was extracted with the CIAO tool {\\em acisspec}, from a region file centred on source 5, with a radius of 45\\ensuremath{^{\\prime\\prime}}. The background region file was defined to be a source free annulus surrounding and concentric with the region file. Once again, due to the low number of counts, the Cash statistic was used in preference to \\ensuremath{\\chi^2}. Both the combined sources and source 5 were well described by single component, power law models. And, from this combined fit model, the luminosities for each individual source were derived. The diffuse gas was well fitted by a single component MEKAL model. These spectral fits are summarised in Table \\ref{tab:arp222}, where column 1 gives the source identification number, columns 2 and 3 give the right ascension and declination (J2000), column 4 the Galactic value of \\ensuremath{N_{\\mathrm{H}}}, column 5 $kT$, column 6 metallicity, column 7 $\\Gamma$ and columns 8 and 9 give the observed and intrinsic luminosities. \n\n\\begin{table*}\n\\centering\n\\caption[]{A summary of the best fit parameters for the diffuse gas and point sources detected in the 0.3$-$6.0 keV band within Arp 222. Errors on the spectral fits parameters are given as 1\\( \\sigma \\) for 1 interesting parameter\nfrom XSPEC. An F denotes that the value has been frozen.\n\\label{tab:arp222}}\n\\begin{tabular}{cccc@{}cccc@{}c@{}}\n\\noalign{\\smallskip}\n\\hline\nSource & RA & Dec. 
& \\multicolumn{4}{c}{Spectral Fit } & \\multicolumn{2}{c}{Luminosity (0.3$-$6.0 keV) } \\\\\n & & & \\ensuremath{N_{\\mathrm{H}}} & kT & Z &$\\Gamma$ \n & \\multicolumn{2}{c}{(10\\(^{39} \\) erg s\\(^{-1} \\)) } \\\\ \n & & & ($\\times 10 ^{20} $ atom cm$^{-2}$) & (keV) & (\\ensuremath{\\mathrm{Z}_{\\odot}}) & & Observed & Intrinsic \\\\ \n \\\\ \\hline\n \n \n\n1 & 23:40:00.61 & -12:17:10.2 & 2.75 F & - & - & 1.72F & 0.99 $\\pm 0.17$ & 1.07 $\\pm 0.17$ \\\\\n2 & 23:39:57.01 & -12:16:33.0 & 2.75 F & - & - & 1.72F & 0.31 $\\pm 0.16$ & 0.32 $\\pm 0.16$ \\\\ \n3 & 23:39:55.95 & -12:16:47.3 & 2.75 F & - & - & 1.72F & 0.19 $\\pm 0.07$ & 0.21 $\\pm 0.08$ \\\\\n4 & 23:39:54.37 & -12:16:59.1 & 2.75 F & - & - & 1.72F & 0.45 $\\pm 0.11$ & 0.48 $\\pm 0.12$ \\\\\n5 & 23:39:53.67 & -12:17:31.6 & 2.75 F & - & - & 2.15$^{+0.37}_{-0.36}$ & 1.74 $\\pm 0.21$ & 1.97 $\\pm 0.24$ \\\\\n6 & 23:39:52.84 & -12:17:36.8 & 2.75 F & - & - & 1.72F & 0.37 $\\pm 0.11$ & 0.40 $\\pm 0.12$ \\\\\n7 & 23:39:52.97 & -12:18:01.5 & 2.75 F & - & - & 1.72F & 0.20 $\\pm 0.09$ & 0.21 $\\pm 0.09$ \\\\\n8 & 23:39:52.67 & -12:17:21.1 & 2.75 F & - & - & 1.72F & 0.43 $\\pm 0.11$ & 0.46 $\\pm 0.12$ \\\\\n9 & 23:39:51.67 & -12:18:19.6 & 2.75 F & - & - & 1.72F & 0.52 $\\pm 0.14$ & 0.54 $\\pm 0.15$ \\\\\n10 & 23:39:51.31 & -12:18:30.0 & 2.75 F & - & - & 1.72F & 0.35 $\\pm 0.11$ & 0.37 $\\pm 0.11$ \\\\\n11 & 23:39:51.15 & -12:17:48.8 & 2.75 F & - & - & 1.72F & 0.38 $\\pm 0.11$ & 0.42 $\\pm 0.13$ \\\\\n12 & 23:39:50.57 & -12:16:53.7 & 2.75 F & - & - & 1.72F & 0.28 $\\pm 0.09$ & 0.30 $\\pm 0.10$ \\\\\n13 & 23:39:51.18 & -12:15:57.6 & 2.75 F & - & - & 1.72F & 0.30 $\\pm 0.12$ & 0.31 $\\pm 0.13$ \\\\\n14 & 23:39:49.36 & -12:18:24.4 & 2.75 F & - & - & 1.72F & 0.38 $\\pm 0.11$ & 0.40 $\\pm 0.12$ \\\\\n15 & 23:39:45.63 & -12:17:41.2 & 2.75 F & - & - & 1.72F & 0.21 $\\pm 0.12$ & 0.55 $\\pm 0.13$ \\\\\n\\\\\nDiffuse & 23:39:53.67 & -12:17:31.6 & 2.75 F & 0.60$^{+0.07}_{-0.06}$ & 0.08$^{+0.06}_{-0.02}$ & - & 5.48 $\\pm$ 0.28 & 6.52 $\\pm$ 0.33 \\\\\n\n\\noalign{\\smallskip}\n\\hline\n\\end{tabular}\n\\end{table*}\n\nFrom this table it can be seen that the best-fit spectral index for the combined sources is $\\Gamma$=1.72, a typical value for XRBs in, what can be described as, a low\/hard state \\citep{Soria_03}. For the central source, source 5, the best fit model has shown that the spectral state is much softer, with $\\Gamma$=2.15. This value is consistent with it also being an XRB, but this time in a high\/soft state \\citep{Colbert_04} This source, along with source 1, has been found to be a ULX. Given the quiescent nature of the galaxy, and that the time since the last episode of widespread star formation has been shown to be $\\sim$1.2 Gyr ago \\citep{Georgakakis_00}, this indicates that these sources are likely to arise from low mass X-ray binaries (LMXRBs), not HMXRBs as has been seen in younger merger systems within this sample.\n\nThe amount of diffuse gas within the system is smaller than in mature elliptical galaxies, and, as mentioned previously, does not extend out to the optical confines of the system. The temperature of the gas, 0.6 keV, has been found to be comparable to that of other interacting systems, but the diffuse gas exhibits a lower X-ray luminosity (6.52$\\times 10^{39}$ \\ensuremath{\\erg~\\ps}) than the other systems within this sample. The low gas content of Arp 222 is further seen from CO observations of the galaxy \\citep{Crabtree_94}. 
From this \\emph{Chandra}\\ observation it can be seen that this system is X-ray faint and does not currently resemble a mature elliptical galaxy, although may at a greater dynamical age.\n\n\\section{The Evolution of Merging Galaxies}\n\\label{sec_evol}\n\n\\subsection{Multi-wavelength Evolution Sequence}\n\nTo gain a greater understanding of the merger process of galaxy pairs, in addition to the X-ray luminosity of each system, B-band, K-band and FIR luminosities have been obtained to study how each of these luminosity diagnostics evolve along the chronological sequence. In the case of systems where the X-ray luminosity has been obtained from the literature, the stated band has been converted into the 0.3$-$6.0 band used in this paper by assuming canonical models of the two X-ray components. For the diffuse gas, a MEKAL model with a gas temperature of 0.5 keV has been assumed, and for the point source population a power law model with a photon index of 1.5 has been adopted.\n\nThese luminosities are given in Table \\ref{tab:merger:props} where; column 1 gives the system name, column 2 the distance to the object, column 3 the merger age, with nuclear coalescence being defined as 0 Myr, column 4 the total (0.3$-$6.0 keV) intrinsic X-ray luminosity from the \\emph{Chandra}\\ observation, column 5 gives the percentage of luminosity arising from the diffuse gas (\\%{\\ensuremath{L_{\\mathrm{diff}}}}), column 6 \\ensuremath{L_{\\mathrm{FIR}}}, column 7 \\ensuremath{L_{\\mathrm{B}}}\\ and column 8 \\ensuremath{L_{\\mathrm{K}}}.\n\n\\begin{table*}\n\n\\centering\n\n\\caption[]{The fundamental properties of all the systems within this sample. Columns are explained in the text.\n\\label{tab:merger:props}}\n\\begin{tabular}{cccccccc}\n\\noalign{\\smallskip}\n\\hline\n\nGalaxy \t\t& Distance\t& Merger Age\t& \\ensuremath{L_{\\mathrm{X}}}\t\t\t& \\%L$_{diff}$\t& Log \\ensuremath{L_{\\mathrm{FIR}}}\t\t\t& Log \\ensuremath{L_{\\mathrm{B}}} \t& Log \\ensuremath{L_{\\mathrm{K}}} \\\\\nSystem \t\t& \t\t&\t\t& (0.3$-$6.0 keV)\t&\t\t& \t\t\t\t&\t\t&\t \\\\\n \t\t& (Mpc)\t\t& (Myr)\t\t& ($\\times10^{40}$\\ensuremath{\\erg~\\ps}) & \t\t& (\\ensuremath{\\erg~\\ps}) \t\t& (\\ensuremath{\\erg~\\ps})\t& (\\ensuremath{\\erg~\\ps}) \\\\\n \n\\noalign{\\smallskip}\n\\hline\n\nArp 270\t\t& 28\t\t& -650 \t\t& 2.95\t\t\t&\t28\t&43.63\t\t\t\t&43.72\t\t&43.37\t \\\\\nThe Mice\t& 88\t\t& -500 \t\t& 6.01\t\t\t&\t31\t&44.12\t\t\t\t&44.07\t\t&44.04\t \\\\\nThe Antennae\t& 19\t\t& -400 \t\t& 6.97\t\t\t&\t54\t&43.95\t\t\t\t&43.98\t\t&44.64\t \\\\\nMkn 266\t\t& 115\t\t& -300 \t\t& 73.18\t\t\t& \t41\t&44.75\t\t\t\t&44.30\t\t&44.38\t \\\\\nNGC 3256\t& 56\t\t& -200 \t\t& 87.70\t\t\t&\t80\t&45.19\t\t\t\t&44.42\t\t&45.22\t \\\\\nArp 220\t\t& 76\t\t& 0 \t\t& 23.70\t\t\t&\t44\t&45.50\t\t\t\t&43.97\t\t&44.78\t \\\\\nNGC 7252\t& 63\t\t& 1000 \t\t& 6.17\t\t\t&\t31\t&44.00\t\t\t\t&44.40\t\t&44.61\t \\\\ \nArp 222\t\t& 23\t\t& 1200 \t\t& 1.46\t\t\t&\t45\t&$\\geq$42.37\t\t&43.92\t\t&44.84\t \\\\\nNGC 1700\t& 54\t\t& 3000 \t\t& 17.58\t\t\t&\t83\t&42.91\t\t\t\t&44.32\t\t&45.15\t \\\\\n\n\n\\noalign{\\smallskip}\n\\hline\n\\end{tabular}\n\\end{table*}\n\nThe FIR luminosities are calculated using the\nexpression \\citep{Devereux_89}\n\\begin{equation}\n\\label{eq:lfir}\n\\ensuremath{L_{\\mathrm{FIR}}} =3.65 \\times 10^5[2.58S_{60 \\mu m}+S_{100\\mu m}]D^2\\ensuremath{\\mathrm{L}_{\\odot}},\n\\label{equ:lfir}\n\\end{equation}\nwith {\\em IRAS} 60- and 100-$\\mu$m fluxes taken from the {\\em IRAS}\nPoint Source Catalogue \\citep{Moshir_90}. 
The optical (B) luminosities\nwere calculated as in \\citet{Tully_88}\n\\begin{equation}\n\\mathrm{log} \\ensuremath{L_{\\mathrm{B}}}\\, (\\ensuremath{\\mathrm{L}_{\\odot}}) =12.192-0.4B_\\mathrm{T}+2\\mathrm{log}D,\n\\end{equation}\nwhere $B_\\mathrm{T}$ is the blue apparent magnitude and $D$ is the\ndistance in Mpc. Values of blue apparent magnitude were taken from\n\\citet{Dev_91} (the value for The Mice was taken from NGC 2000.0\n\\citep{Dreyer_88}). The values of \\ensuremath{L_{\\mathrm{K}}}\\ are derived using the relation given in \\citet{Seigar_05}\n\\begin{equation}\n\\mathrm{log} \\ensuremath{L_{\\mathrm{K}}}=11.364-0.4K_\\mathrm{T}+\\mathrm{log}(1+Z)+2\\mathrm{log}D,\n\\end{equation}\nwhere $K_\\mathrm{T}$ is the K-band apparent magnitude, $Z$ is the galaxy redshift and $D$ is the distance in Mpc. Apparent K-band magnitudes were taken from the 2MASS survey.\n\nTo calculate the value of \\%{\\ensuremath{L_{\\mathrm{diff}}}} for each system, the contribution to the luminosity of the diffuse gas that arises from unresolved low luminosity point sources had to be estimated and removed. This was done in a number of ways; in the case of systems for which we have access to the reduced data (The Mice, Mkn 266 and Arp 222), an additional power law component with a photon index of 1.5 was included in the spectral fit of the diffuse emission. It was then assumed that the luminosity of this component arises from unresolved point sources within the system. Additionally, for Mkn 266, it was assumed that the MEKAL component of the point source fits arises from the diffuse gas surrounding the LINER and Seyfert. In the cases of Arp 270 and NGC 7252, estimates of the contribution of the unresolved point sources to the total diffuse luminosity has already been made in \\citet{Brassington_05} and \\citet{Nolan_04}, respectively.\n\nFor the systems we have taken from the literature, the `Universal Luminosity Function' (ULF), derived by \\citet{Grimm_03}, was used to predict the flux from unresolved low luminosity sources. The predicted XRB luminosity function is\n\\begin{equation}\nN(>L)=5.4\\mathrm{SFR}(L_{38}^{-0.61}-210^{-0.61}),\n\\label{equ:sfr}\n\\end{equation}\nwhere $L_{38}$ = $L\/10^{38}$ erg s$^{-1}$ and the factor 210 arises\nfrom the upper luminosity cut-off of 2.1$\\times 10^{40}$ erg\ns$^{-1}$. \nSFR was estimated from the population of brighter point sources detected above the completeness limit from the \\emph{Chandra}\\ observations. \nIn the case of The Antennae, point sources were detected down to a luminosity of 5$\\times10^{37}$erg s$^{-1}$ and therefore no correction was required for this system.\n\nAs mentioned previously, one of the main difficulties in compiling an evolutionary sequence such as this, is assigning an age to each of these systems. The way in which these estimates were made is described in section \\ref{sec:sample}, but another important point that was not discussed is that when constructing an evolutionary sample, the absolute timescale of the merger process must be considered. From N-body simulations \\citep{Mihos_96} it is clear that from the first initial strong interaction between equal-mass mergers, through to their nuclear coalescence, takes $\\sim$700 Myr. What is not so well defined is the amount of time that elapses between coalescence and the relaxation of the galaxy into a system resembling a mature elliptical galaxy. 
Within the sample presented here, this issue has been addressed by selecting a greater dynamical range than has previously been studied. By doing this it is hoped that the transition between young merger remnants to relaxed mature ellipticals can be observed, and therefore a more complete picture of the merger process can be obtained. \n\nAlthough these merger systems are being compared to establish a single evolutionary sequence, it should be remembered that nine individual systems, not nine examples of one merger at different stages of its evolution, are being looked at. Consequently, although the trends that have been identified here should be a good indicator of how X-ray emission evolves during the merger process, due to the careful selection criteria outlined in section \\ref{sec:sample}, it is likely that other merger pair systems will exhibit different X-ray properties. These variations are likely to arise due to the unique interaction parameters associated with each merger system, as well as the variation of gas content and mass of individual galaxies within each system.\n\nWith the multi-wavelength luminosities that have been collected for each system (Table \\ref{tab:merger:props}), the evolution of the galaxy properties along the merger process has been investigated. The activity levels, a proxy for star formation normalised by galaxy mass, is indicated by {\\ensuremath{L_{\\mathrm{FIR}}}}\/{\\ensuremath{L_{\\mathrm{K}}}}. To investigate the variation of {\\ensuremath{L_{\\mathrm{X}}}}, scaled by galaxy mass for each system, both the B-band and the K-band luminosities, {\\ensuremath{L_{\\mathrm{X}}}}\/{\\ensuremath{L_{\\mathrm{B}}}} and {\\ensuremath{L_{\\mathrm{X}}}}\/{\\ensuremath{L_{\\mathrm{K}}}} have been used. Both of these have been used as normalisation values as \\ensuremath{L_{\\mathrm{B}}}\\ can become greatly enhanced during periods of star formation due to the presence of young stars, indicating that \\ensuremath{L_{\\mathrm{K}}}\\ is likely to provide a more reliable tracer of galaxy mass. Therefore, by plotting \\ensuremath{L_{\\mathrm{X}}}\\ against both of these values, how great this effect is can be observed. \n\nThese ratios are shown in Figure \\ref{fig:evol}, where, not only the luminosity ratios, but also the percentage of luminosity arising from diffuse gas (\\%{\\ensuremath{L_{\\mathrm{diff}}}}), as a function of merger age have been plotted. {\\ensuremath{L_{\\mathrm{FIR}}}}\/{\\ensuremath{L_{\\mathrm{K}}}} (solid line), {\\ensuremath{L_{\\mathrm{X}}}}\/{\\ensuremath{L_{\\mathrm{B}}}} (dot-dash line) and {\\ensuremath{L_{\\mathrm{X}}}}\/{\\ensuremath{L_{\\mathrm{K}}}} (dashed line) have been normalised to the typical spiral galaxy, NGC 2403. Whilst \\%{\\ensuremath{L_{\\mathrm{diff}}}} (dotted line) is plotted on the right-hand y-axis of the plot, as an absolute value, where \\%{\\ensuremath{L_{\\mathrm{diff}}}} for NGC 2403 is 12\\%. The horizontal lines to the right of the plot indicate {\\ensuremath{L_{\\mathrm{FIR}}}}\/{\\ensuremath{L_{\\mathrm{K}}}}, {\\ensuremath{L_{\\mathrm{X}}}}\/{\\ensuremath{L_{\\mathrm{B}}}}, {\\ensuremath{L_{\\mathrm{X}}}}\/{\\ensuremath{L_{\\mathrm{K}}}} and \\%{\\ensuremath{L_{\\mathrm{diff}}}} for NGC 2434, a typical elliptical galaxy \\citep{Diehl_06}. \n\n\\begin{figure*}\n \\includegraphics[width=\\linewidth]{images\/evolution.ps}\n \\caption {The evolution of X-ray luminosity in merging galaxies. 
Shown are {\\ensuremath{L_{\\mathrm{FIR}}}}\/{\\ensuremath{L_{\\mathrm{K}}}} (solid line), {\\ensuremath{L_{\\mathrm{X}}}}\/{\\ensuremath{L_{\\mathrm{B}}}} (dot-dash line), {\\ensuremath{L_{\\mathrm{X}}}}\/{\\ensuremath{L_{\\mathrm{K}}}} (dashed line) and \\%{\\ensuremath{L_{\\mathrm{diff}}}}, plotted as a function of merger age, where 0 age is defined to be the time of nuclear coalescence. All luminosity ratios ({\\ensuremath{L_{\\mathrm{FIR}}}}\/{\\ensuremath{L_{\\mathrm{K}}}}, {\\ensuremath{L_{\\mathrm{X}}}}\/{\\ensuremath{L_{\\mathrm{B}}}} and {\\ensuremath{L_{\\mathrm{X}}}}\/{\\ensuremath{L_{\\mathrm{K}}}}) are normalised to the spiral galaxy NGC 2403. \\%{\\ensuremath{L_{\\mathrm{diff}}}} is plotted on a linear scale, shown on the right-hand y-axis of the plot and is an absolute value. \\%{\\ensuremath{L_{\\mathrm{diff}}}} for NGC 2403 is 12\\%. The horizontal lines to the right of the plot indicate {\\ensuremath{L_{\\mathrm{FIR}}}}\/{\\ensuremath{L_{\\mathrm{K}}}}, {\\ensuremath{L_{\\mathrm{X}}}}\/{\\ensuremath{L_{\\mathrm{B}}}}, {\\ensuremath{L_{\\mathrm{X}}}}\/{\\ensuremath{L_{\\mathrm{K}}}} and \\%{\\ensuremath{L_{\\mathrm{diff}}}} for NGC 2434, a typical elliptical galaxy.}\n \\label{fig:evol}\n\\end{figure*}\n\nThe system NGC 2403 was selected to represent a typical spiral galaxy. This system was chosen due to its close proximity (D=3.2 Mpc), the fact that it does not have a high level of star-forming activity and that it does not host an AGN \\citep{Schlegel_03}. NGC 2434, was selected to represent a typical elliptical galaxy. At a distance of 22 Mpc, this system is close enough to enable the point source population to be disentangled from the hot gas, allowing this diffuse emission to be modelled \\citep{Diehl_06}, this systems also exhibits the quiescent levels of star formation normally associated with such galaxies. Table \\ref{tab:typical} shows the properties for both of these systems, with columns as in Table \\ref{tab:merger:props}.\n\n\\begin{table*}\n\n\\centering\n\n\\caption[]{Properties of the typical spiral (NGC 2403) and the typical elliptical (NGC 2434) used in Figure \\ref{fig:evol}. Columns as in Table \\ref{tab:merger:props}.\n\\label{tab:typical}}\n\\begin{tabular}{ccccccc}\n\\noalign{\\smallskip}\n\\hline\n\nGalaxy \t\t& Distance\t& \\ensuremath{L_{\\mathrm{X}}}\t\t\t& \\%L$_{diff}$\t& Log \\ensuremath{L_{\\mathrm{FIR}}}\t& Log \\ensuremath{L_{\\mathrm{B}}} \t& Log \\ensuremath{L_{\\mathrm{K}}} \\\\\nSystem \t\t& \t\t&(0.3$-$6.0)\t\t\t&\t\t& \t\t&\t\t&\t \\\\\n \t\t& Mpc\t\t&($\\times10^{40}$\\ensuremath{\\erg~\\ps})& \t\t& (\\ensuremath{\\erg~\\ps}) \t\t& (\\ensuremath{\\erg~\\ps})\t& (\\ensuremath{\\erg~\\ps}) \\\\\n \n\\noalign{\\smallskip}\n\\hline\n\nNGC 2403\t& 3\t\t& 0.29\t\t\t&\t12\t& 42.21\t\t& 43.22\t\t&43.49\t \\\\\nNGC 2434\t& 22\t\t& 4.82\t\t\t&\t76\t& 41.96\t\t& 43.54\t\t&44.49\t \\\\\n\n\n\\noalign{\\smallskip}\n\\hline\n\\end{tabular}\n\\end{table*}\n\nAnother factor that must be considered when plotting these luminosity ratios is the AGN that is hosted by Mkn 266. This is the only system within the sample that has a confirmed AGN, and as such, it is expected to be significantly more luminous than the other systems. Therefore, the values of {\\ensuremath{L_{\\mathrm{X}}}}\/{\\ensuremath{L_{\\mathrm{B}}}}, {\\ensuremath{L_{\\mathrm{X}}}}\/{\\ensuremath{L_{\\mathrm{K}}}} and \\%{\\ensuremath{L_{\\mathrm{diff}}}} have been calculated for Mkn 266, both including and excluding the contribution from the Seyfert. 
As shown in Table \\ref{tab:mkn266_ps}, this nucleus is not a powerful AGN, instead it is close to the Seyfert-LINER borderline, and therefore, the total value of \\ensuremath{L_{\\mathrm{X}}}\\ only reduces by 14\\% if it is excluded, with the value of \\%{\\ensuremath{L_{\\mathrm{diff}}}} rising to 48\\%. When the reduced luminosity is used to derive the luminosity ratios for Mkn 266, the change in Figure \\ref{fig:evol} is very small, and the overall trends exhibited by these ratios are preserved.\n\n\\subsection{Arp 220: the Normalisation of the Universal Luminosity Function}\n\\label{sec:ulf}\n\n\\citet{Grimm_03} considered a number of SFR indicators for each galaxy within their sample. This was done, as there is considerable scatter in the SFR estimates obtained from different indicators. To calculate an `adopted' SFR for each galaxy, indicators that deviated significantly from the other values were disregarded, and the remaining indicators were averaged to give a final value used in subsequent calculations. \n\nIn the present work, the SFR was estimated from the brighter point sources, detected from the \\emph{Chandra}\\ observations. To further investigate the normalisation of the ULF, SFR indicators, derived from \\ensuremath{L_{\\mathrm{FIR}}}, using the expression \\citep{Rosa_02}\n\\begin{equation}\n\\mathrm{SFR}_\\mathrm{FIR}=4.5 \\times 10^{-44}L_\\mathrm{FIR}(\\mathrm{erg~s}^{-1}),\n\\end{equation}\nhave also been used to estimate the luminosity arising from the unresolved point sources. This ULF correction, using the SFR indicator derived from \\ensuremath{L_{\\mathrm{FIR}}}, has been calculated for the most active system within our sample, Arp 220. From this correction, a luminosity of 5.05$\\times$10$^{41}$erg s$^{-1}$, has been derived for the low luminosity (\\ensuremath{L_{\\mathrm{X}}}\\ $\\le 5\\times$10$^{39}$erg s$^{-1}$) point sources. This value is almost two times greater than the {\\em total} observed luminosity of Arp 220 (see Table \\ref{tab:merger:props}), indicating that using \\ensuremath{L_{\\mathrm{FIR}}}\\ as a SFR in the ULF must greatly overestimate the point source population for this system. To ensure that the value calculated in equation \\ref{equ:lfir} was consistent with other observations, additional \\ensuremath{L_{\\mathrm{FIR}}}\\ values were obtained (Log \\ensuremath{L_{\\mathrm{FIR}}}=45.86, \\citet{David_92} and Log \\ensuremath{L_{\\mathrm{FIR}}}=45.54, \\citet{Liu_95}). Averaging these values gives Log \\ensuremath{L_{\\mathrm{FIR}}}\\ = 45.73 for Arp 220, similar to the value given in Table \\ref{tab:merger:props}. \n\nTo investigate this further, we compare the observed population of brighter sources with that predicted using the ULF. From the \\emph{Chandra}\\ observation, 4 sources with \\ensuremath{L_{\\mathrm{X}}} $\\ge 5\\times$10$^{39}$erg s$^{-1}$ are detected in Arp 220. Using the SFR derived from \\ensuremath{L_{\\mathrm{FIR}}}\\ in equation \\ref{equ:sfr}, the bright point source population is estimated to be ten times larger, with 41 discrete sources predicted. This demonstrates that, in the case of Arp 220, when using \\ensuremath{L_{\\mathrm{FIR}}}\\ as the SFR indicator, the ULF greatly overestimates the whole point source population. 
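\n\nTo make the size of this discrepancy explicit, the following sketch simply chains the two relations used above, the \\ensuremath{L_{\\mathrm{FIR}}}-based SFR and the ULF of equation \\ref{equ:sfr}, evaluated with Log \\ensuremath{L_{\\mathrm{FIR}}}=45.50 for Arp 220 from Table \\ref{tab:merger:props}. It is a back-of-the-envelope check rather than a re-analysis of the data, but it reproduces, to rounding, the $\\sim$41 bright sources quoted above.\n\\begin{verbatim}\nl_fir = 10**45.50        # erg per s, Arp 220, from the merger property table\nsfr = 4.5e-44 * l_fir    # SFR_FIR relation, roughly 140 Msun per yr\n\ndef n_above(l_erg_s, sfr):\n    # ULF of equation (equ:sfr): N(>L) = 5.4*SFR*(L_38**-0.61 - 210**-0.61)\n    l38 = l_erg_s * 1e-38\n    return 5.4 * sfr * (l38**(-0.61) - 210**(-0.61))\n\nprint(round(n_above(5e39, sfr)))  # about 41 sources above 5e39 erg per s\n\\end{verbatim}\n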
{\\ensuremath{L_{\\ensuremath{\\mathrm{H}{\\alpha}}}}} values for this galaxy were also used to calculate the SFR of the system \\citep{Young_96,Colina_04} but, due to the very dusty nature of this object, are subject to large extinction effects, leading to greatly reduced SFRs with values as little as 0.55, which, given the merger status of this system, is unlikely to be a true measure of the SFR.\n\nTo investigate the reliability of using the \\ensuremath{L_{\\mathrm{FIR}}}\\ SFR indicator to normalise the ULF, the detected point source populations (with \\ensuremath{L_{\\mathrm{X}}}\\ $\\le 2.1\\times10^{40}$ erg s$^{-1}$) from the \\emph{Chandra}\\ observations were used to derive SFRs from equation \\ref{equ:sfr} for a number of galaxies in our sample; these values are plotted against the \\ensuremath{L_{\\mathrm{FIR}}}\\ derived SFR values in Figure \\ref{fig:sfr}. Only seven of the nine systems have been included in this figure, as the ULF is only a measure of the HMXRB population, and, it is likely that the stellar populations from the two older merger-remnants are dominated by LMXRBs.\n\\begin{figure}[h]\n \\hspace{0.8cm}\n \\includegraphics[width=0.8\\linewidth]{images\/sfr_ratio.ps}\n \\caption{A plot of the SFR indicated by \\ensuremath{L_{\\mathrm{FIR}}}\\ and the SFR derived from equation \\ref{equ:sfr}, with the solid line indicating unity. Arp 222 and NGC 1700 have not been included in this plot as their point source populations are likely to be dominated by LMXRBs.}\n \\label{fig:sfr}\n\\end{figure}\nFrom Figure \\ref{fig:sfr}, it can be seen that at high levels of the\n\\ensuremath{L_{\\mathrm{FIR}}}\\ SFR indicator, the SFR derived from equation \\ref{equ:sfr} is\nmuch smaller, indicating that, in very active star-forming systems,\nthe prediction of the point source population, when using the \\ensuremath{L_{\\mathrm{FIR}}}\\\nSFR value, does not represent the observed X-ray population well. \n\nThis relationship between \\ensuremath{L_{\\mathrm{X}}}\\ and SFR was also investigated in\n\\citet{Gilfanov_04b}, where active systems in the HDF-N were used to\nprobe the ULF at high values of SFR. From this work it was found that,\nfor higher levels of star formation, the observed X-ray luminosities\nfrom these galaxies do follow the \\ensuremath{L_{\\mathrm{X}}}-SFR relation. However, the\ncalculated SFR for all of these systems were derived from 1.4 GHz, {\\ensuremath{L_\\nu}} SFR\nindicators only, not \\ensuremath{L_{\\mathrm{FIR}}}\\ SFR indicators. This suggests that the discrepancy\nbetween \\ensuremath{L_{\\mathrm{FIR}}}\\ SFR, and the observed values derived from the\n\\emph{Chandra}\\ observations, are a consequence of the normalisation\nindicator chosen, and do not arise from non-linearity of the ULF at high values of SFR.\n\nThere are a number of reasons why the SFRs inferred from the X-ray observations and \\ensuremath{L_{\\mathrm{FIR}}}\\ values might differ so greatly for systems in the most active phases. It could simply be that, for dusty, active systems such as these, \\ensuremath{L_{\\mathrm{FIR}}}\\ is a poor tracer of SFR, becoming greatly enhanced due to the large amounts of gas residing in the galaxy. This would indicate that it is the \\ensuremath{L_{\\mathrm{FIR}}}\\ SFR indicator, not the SFR from the ULF, that is incorrect. 
However, this explanation is unlikely, as studies have shown that, whilst {\\ensuremath{L_{\\ensuremath{\\mathrm{H}{\\alpha}}}}} and {\\ensuremath{L_{\\mathrm{UV}}}} are susceptible to extinction caused by the presence of dust in star-forming regions, \\ensuremath{L_{\\mathrm{FIR}}}\\ does not vary with gas content, and scales well with both extinction corrected {\\ensuremath{L_{\\mathrm{UV}}}} and radio continuum emission in dusty starbursts \\citep{Buat_96,Buat_92}.\n\nAlternatively, the discrepancy between the predicted and observed values could be a consequence of the age of the stellar population. If the starburst has only recently taken place, it is plausible that, whilst \\ensuremath{L_{\\mathrm{FIR}}}\\ has had enough time to become massively enhanced, the X-ray point source population has not yet evolved into HMXRBs. The consequence of this would be that the X-ray luminosity arising from the point source population is lower than the SFR, indicated by \\ensuremath{L_{\\mathrm{FIR}}}, would predict. \\citet{Wilson_05} recently reported on an optical study of Arp 220 which has detected two populations of young massive star clusters. The age of the youngest of these populations has been found to be 5$-$10 Myr, indicating that enough time has elapsed since the last starburst for HMXRBs to form ($\\sim1-$10 Myr). It is therefore unlikely that a lag between \\ensuremath{L_{\\mathrm{FIR}}}\\ and \\ensuremath{L_{\\mathrm{X}}}\\ is the reason for the difference in the SFR values shown in Figure \\ref{fig:sfr}.\n\nAnother reason that the SFR derived from equation \\ref{equ:sfr} is lower than expected, when compared to the \\ensuremath{L_{\\mathrm{FIR}}}\\ SFR, could simply be that the X-ray observations are not detecting all the point sources in very active galaxies due to obscuration, and, those that are being detected have greatly reduced luminosities. To investigate this, the optical obscuration in Arp 220 \\citep{Shioya_01} has been converted into an absorbing column density, giving \\ensuremath{N_{\\mathrm{H}}} = 2.2$\\times 10 ^{22} $ atom cm$^{-2}$, a value which is actually somewhat smaller than the modelled value given in \\citet{Clements_02} of 3$\\times 10 ^{22} $ atom cm$^{-2}$. This demonstrates that the stated luminosities in \\citet{Clements_02} are reliable values, and all the point sources with \\ensuremath{L_{\\mathrm{X}}} $\\ge 5\\times$10$^{39}$erg s$^{-1}$ (the stated lower luminosity point source detection threshold) should have been detected in this system. \n\nA final explanation for the difference between the predicted and observed SFR indicators for systems close to the point of nuclear coalescence, could arise from a physical change in the birth of stellar systems. These merging galaxies provide extreme examples of violent star formation, and it is therefore possible that the most recent starburst has resulted in the birth of a stellar population with a different Initial Mass Function (IMF). \n\nThe IMF, the distribution of masses with which stars are formed, has long been a contentious issue, with a variety of different models used to explain the observed stellar populations \\citep{Larson_86,Larson_98}. It was initially proposed that the IMF was a universal function with a power law form, regardless of formation time or local environment \\citep{Salpeter_55}. But, it has also been argued that the IMF varies, and was more top-heavy at earlier times \\citep{Rieke_80}. 
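\n\nTo give a rough sense of what a flatter, more top-heavy IMF implies for the massive-star content, the sketch below compares the number fraction of stars above 8 \\ensuremath{\\mathrm{M}_{\\odot}}\\ (indicative HMXRB progenitors) for a Salpeter-like slope and for an arbitrarily flatter one. The mass limits and the flatter slope are illustrative choices for this example only, not a claim about the actual IMF in these systems.\n\\begin{verbatim}\ndef frac_massive(alpha, m_lo=0.1, m_hi=100.0, m_cut=8.0):\n    # number fraction of stars above m_cut for dN\/dM ~ M**(-alpha)\n    p = 1.0 - alpha\n    return (m_cut**p - m_hi**p) \/ (m_lo**p - m_hi**p)\n\nprint(frac_massive(2.35))  # Salpeter-like slope: ~0.3 per cent above 8 Msun\nprint(frac_massive(2.00))  # flatter slope: ~1 per cent, a factor ~4 higher\n\\end{verbatim}\n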
\n\nFrom the most recent observations it is thought that the Salpeter IMF,\nwith a lower mass flattening between 0.5 \\ensuremath{\\mathrm{M}_{\\odot}}\\ and 1.0 \\ensuremath{\\mathrm{M}_{\\odot}}, can be\nconsidered a reasonable approximation for large regions of starburst\ngalaxies \\citep{Elmegreen_05}, with some suggestion of small\nvariations with environment, such that denser and more massive star\nclusters produce more massive stars relative to intermediate mass\nstars \\citep{Elmegreen_04,Shadmehri_04}. This more top-heavy IMF has\nthe effect of generating greater numbers of supernova remnants, black\nholes, and HMXRBs, as well as higher amounts of \\ensuremath{L_{\\mathrm{FIR}}}\\ per unit mass\nof stars formed, therefore leading to an increase in both \\ensuremath{L_{\\mathrm{X}}}\\ and\n\\ensuremath{L_{\\mathrm{FIR}}}. However, the\nlevel of enhancement of \\ensuremath{L_{\\mathrm{FIR}}}, relative to the increase in HMXRBs formed, is\ndependent on the shape of the IMF. \n\nIn the galaxy systems close to the point of nuclear coalescence\npresented in this sample,\nthe X-ray binary population is dominated by HMXRBs, meaning that \\ensuremath{L_{\\mathrm{X}}}\\\nwill scale with the number of massive stars produced, whereas \\ensuremath{L_{\\mathrm{FIR}}}\\\nwill scale\nwith the main sequence luminosity of those stars. As the IMF flattens,\n\\ensuremath{L_{\\mathrm{FIR}}}\\ will rise more steeply, a consequence of the strong dependence of\nluminosity on stellar mass, $L \\propto M^{3.5}$. Therefore, when\nusing \\ensuremath{L_{\\mathrm{FIR}}}\\ values to derive the SFR for these systems, this value will be\noverestimated.\n\n\n\n\nIf this interpretation is correct, it could explain why there appears to be a deficit of X-ray binaries in the \\emph{Chandra}\\ observations of the systems close to the point of coalescence. In actual fact, what is being seen here is an overestimated SFR, derived from the massively enhanced \\ensuremath{L_{\\mathrm{FIR}}}, as can be seen in Figure \\ref{fig:sfr}.\n\n\n\n\n\n\\section{Discussion: X-ray Evolution}\n\\label{sec:dis}\n\nThe first thing to note from Figure \\ref{fig:evol} is that Arp 270, even though it is an early-stage system, is already exhibiting enhanced star formation activity, as measured by \\ensuremath{L_{\\mathrm{FIR}}}\/\\ensuremath{L_{\\mathrm{K}}}\\ (see \\citet{Read_01}), compared to the typical spiral system, NGC 2403. This activity increases up to the point of nuclear coalescence, after which there is a steady drop in \\ensuremath{L_{\\mathrm{FIR}}}\/\\ensuremath{L_{\\mathrm{K}}}\\ until the merger-remnant systems, $\\sim$1 Gyr after coalescence. From this point the activity value levels off to that of a typical elliptical system. \n\nThe evolution of the X-ray luminosity is different to that of \\ensuremath{L_{\\mathrm{FIR}}}\/\\ensuremath{L_{\\mathrm{K}}}. There is initially a rise, as seen with the star formation activity, but this peaks $\\sim$300 Myr before coalescence takes place, whilst \\ensuremath{L_{\\mathrm{FIR}}}\/\\ensuremath{L_{\\mathrm{K}}}\\ is still increasing. From this peak it drops until the young merger-remnants, as is the case for \\ensuremath{L_{\\mathrm{FIR}}}\/\\ensuremath{L_{\\mathrm{K}}}. 
But, instead of levelling off, the X-ray luminosity once again begins to rise, with the total X-ray luminosity of the 3 Gyr system beginning to resemble that of a mature elliptical galaxy.\n\nBoth \\ensuremath{L_{\\mathrm{X}}}\/\\ensuremath{L_{\\mathrm{B}}}\\ and \\ensuremath{L_{\\mathrm{X}}}\/\\ensuremath{L_{\\mathrm{K}}}\\ broadly exhibit the same variations. One notable exception is the value of \\ensuremath{L_{\\mathrm{X}}}\/\\ensuremath{L_{\\mathrm{B}}}\\ for Mkn 266. This relatively small value is due the massive enhancement of \\ensuremath{L_{\\mathrm{B}}}\\ from this system's AGN \\citep{Risaliti_04}, which \\ensuremath{L_{\\mathrm{K}}}\\ is not susceptible to, therefore indicating that \\ensuremath{L_{\\mathrm{K}}}\\ is a more reliable tracer of galaxy mass than \\ensuremath{L_{\\mathrm{B}}}, and will exhibit less scatter.\n\nThe evolution of \\%{\\ensuremath{L_{\\mathrm{diff}}}} also shows an increased value for Arp 270, when compared to the typical spiral system. This value then steadily rises, up to a point $\\sim$200 Myr prior to nuclear coalescence. This trend line then drops until the young merger-remnant systems, once again rising, with the 3 Gyr system exhibiting a similar value of \\%{\\ensuremath{L_{\\mathrm{diff}}}} to that of the typical elliptical, NGC 2434.\n\n\\subsection{Previous X-ray Studies}\n\\label{sec:RP98}\n\nIn RP98 it was found that the X-ray luminosity of the sample generally followed the same patterns as the star formation activity, with the peak of \\ensuremath{L_{\\mathrm{X}}}\/\\ensuremath{L_{\\mathrm{B}}}\\ being coincident with nuclear coalescence. Also in the RP98 study, the sample of post-merger systems was limited, with a baseline extending out to only 1.5 Gyr after nuclear coalescence. Consequently, no rise in the X-ray luminosity of post-merger systems was observed, because, as shown in the present study, there is a strong suggestion that systems require a much greater relaxation time before they begin to exhibit properties seen in mature elliptical galaxies. This idea is further strengthened by a study of post-merger ellipticals \\citep{Osul_01}, where a long term trend ($\\sim$10 Gyr) for \\ensuremath{L_{\\mathrm{X}}}\/\\ensuremath{L_{\\mathrm{B}}}\\ to increase with dynamical age was found.\n\nThe reason that the peak in \\ensuremath{L_{\\mathrm{X}}}\\ has been found at two different merger ages in these studies could be due to the different observatories that were used. \\emph{Chandra}\\ has greater spatial resolution than \\emph{ROSAT}, and is able to disentangle background galaxies from the target object with much greater accuracy. It is therefore possible that these objects were included in the previous \\emph{ROSAT}\\ work as diffuse features, leading to a different peak value of \\ensuremath{L_{\\mathrm{X}}}. However, both samples include the famous merger system, The Antennae, and the system at coalescence, Arp 220, and the X-ray luminosities from both of these systems are comparable between \\emph{ROSAT}\\ and \\emph{Chandra}. Indicating that this discrepancy does not arise from a difference between these two observatories. \n\nAnother difference between RP98 and the current work, that could account for the differing merger ages of peak \\ensuremath{L_{\\mathrm{X}}}, is the selection of systems within each study. In our sample we include two systems between The Antennae and Arp 220; Mkn 266 and NGC 3256, in RP98 there is only one, NGC 520. 
This system is highlighted in RP98 as being X-ray faint, not exhibiting the X-ray properties that one would expect, given its large \\ensuremath{L_{\\mathrm{FIR}}}\/\\ensuremath{L_{\\mathrm{K}}}\\ ratio. They suggest that this galaxy does not appear to be on the same evolutionary path as the rest of their sample and, possibly, will not evolve into an elliptical galaxy.\n\nA recent study of NGC 520 with \\emph{Chandra}\\ \\citep{Read_05}, has indicated that this system comprises one gas-rich and one gas-poor galaxy, not two gas rich spirals as has been selected within this sample. This lack of gas in the second galaxy has resulted in a `half merger' being induced in this system, and hence appears underluminous in X-rays. Therefore, it is likely that the peak X-ray luminosity was missed in the RP98 sample due to the galaxy selection. In the present study, use of the selection criteria outlined in section \\ref{sec:sample} has meant that only systems that are gas rich have been included, ensuring that comparable systems in the merger sequence have been selected. Consequently, the systems presented in this sample are more likely to be representative of the typical merger evolution.\n\n\\subsection{The X-ray Luminosity Peak and the Impact of Galactic Winds}\n\\label{sec:galWs}\n\nFrom the X-ray luminosities of the systems within this sample it has been seen, for the first time, that both \\ensuremath{L_{\\mathrm{X}}}\/\\ensuremath{L_{\\mathrm{K}}}\\ and \\ensuremath{L_{\\mathrm{X}}}\/\\ensuremath{L_{\\mathrm{B}}}\\ peak $\\sim$300 Myr before nuclear coalescence takes place. This result, given that the normalised star formation rate is still increasing at this time, is initially surprising, as one would expect the X-ray luminosity to increase with increasing \\ensuremath{L_{\\mathrm{FIR}}}\/\\ensuremath{L_{\\mathrm{K}}}. However, from looking at the morphology of the hot diffuse gas, it is clear that the systems emitting very high levels of \\ensuremath{L_{\\mathrm{FIR}}}\/\\ensuremath{L_{\\mathrm{K}}}\\ are also experiencing outflows from starburst-driven winds. We propose that this relative reduction in X-ray luminosity, for these very active systems, is a consequence of these large scale diffuse outflows.\n\nStarburst-driven winds are responsible for the transport of gas and energy out of star-forming galaxies. The energetics of these galactic winds were investigated by \\citet{Strickland_00b}, where hydrodynamical simulations were compared to the observations of the galactic wind of M82. From this work it was shown that the majority of the thermal and kinetic energy of galactic winds is in a hot volume-filling component of the gas, which is very difficult to probe due to its low emissivity, a consequence of the low density of this component.\n\nWithin our sample, the system that represents the peak X-ray luminosity is Mkn 266. In section \\ref{sec:mkn266} the results from the \\emph{Chandra}\\ observation of this system are presented and tentative evidence suggesting that this system is just about to experience galaxy wide diffuse outflows is found. 
If this interpretation is correct, it seems probable that the lower values of \\ensuremath{L_{\\mathrm{X}}}\/\\ensuremath{L_{\\mathrm{B}}}\\ and \\ensuremath{L_{\\mathrm{X}}}\/\\ensuremath{L_{\\mathrm{K}}}, for NGC 3256 and Arp 220, the most active systems in this sample, are a consequence of their extensive galactic winds, which have allowed the density, and hence the emissivity, of the hot gas to drop.\n\n\\subsection{Halo Regeneration in Low \\ensuremath{L_{\\mathrm{X}}}\\ Systems}\n\\label{sec:halo}\n\nAnother interesting result from this survey is the increase in \\ensuremath{L_{\\mathrm{X}}}\\ in the older merger-remnants. In previous studies (e.g. RP98) post-merger systems have been found to be X-ray faint when compared to elliptical galaxies. In our study the merger sequence has been extended to include a 3 Gyr system, and, by doing so, it has been shown that these underluminous systems appear to increase in \\ensuremath{L_{\\mathrm{X}}}\\ as they age. Given that these merger-remnants have been shown to be quiescent, this increase in \\ensuremath{L_{\\mathrm{X}}}\\ is not due to any starburst activity within the system. Coupling this with the increase in \\%{\\ensuremath{L_{\\mathrm{diff}}}} indicates that diffuse X-ray gas is being produced, leading to the creation of X-ray haloes, as observed in mature elliptical galaxies. \\citet{Osul_01} investigated the relationship between {\\ensuremath{L_{\\mathrm{X}}}}\/{\\ensuremath{L_{\\mathrm{B}}}} and spectroscopic age in post-merger ellipticals and found that there was a long-term trend ($\\sim$10 Gyr) for {\\ensuremath{L_{\\mathrm{X}}}} to increase with time. The mechanism proposed to explain the regeneration of hot gas haloes in these galaxies is one in which a transition from an outflowing wind to a hydrostatic halo phase is driven by a declining SNIa rate. \\citet{Osul_01} argue that a scenario in which gas, driven out during the starburst, infalls onto the\nexisting halo is not the dominant mechanism in generating X-ray haloes, as this mechanism would only take $\\sim$1$-$2 Gyr and would therefore not produce the long-term trend they observe. The time baseline from the sample studied in the present paper is not sufficient to allow us to discriminate between these two possibilities.\n\n\\subsection{The Behaviour of the X-ray Point Source Population}\n\\label{sec:compare}\n\nFrom the \\emph{Chandra}\\ observations in this study, the behaviour of the\npoint source population during the merger process has been\ncharacterised for the first time. 
In Figure \\ref{fig:lx_lps} the total\nluminosity, as well as its two components, the luminosity arising from\nthe point source population (\\ensuremath{L_{\\mathrm{src}}}) and the diffuse gas contribution\n(\\ensuremath{L_{\\mathrm{diff}}}), have been normalised by \\ensuremath{L_{\\mathrm{K}}}\\ and plotted against star\nformation activity (\\ensuremath{L_{\\mathrm{FIR}}}\/\\ensuremath{L_{\\mathrm{K}}}).\n\n\n\n\\begin{figure}\n \\includegraphics[width=\\linewidth]{images\/point_lfir.ps}\n \\caption{Plot indicating how \\ensuremath{L_{\\mathrm{X}}}\/\\ensuremath{L_{\\mathrm{K}}}\\ and the two components that contribute to \\ensuremath{L_{\\mathrm{X}}}\/\\ensuremath{L_{\\mathrm{K}}}\\ (the point source contribution (\\ensuremath{L_{\\mathrm{src}}}) and the diffuse gas contribution (\\ensuremath{L_{\\mathrm{diff}}})) scale with \\ensuremath{L_{\\mathrm{FIR}}}\/\\ensuremath{L_{\\mathrm{K}}}.}\n \\label{fig:lx_lps}\n\\end{figure}\n\nFrom this figure it can be seen that the total \\ensuremath{L_{\\mathrm{X}}}\/\\ensuremath{L_{\\mathrm{K}}}, whilst\nshowing a general trend of increasing with \\ensuremath{L_{\\mathrm{FIR}}}\/\\ensuremath{L_{\\mathrm{K}}}, exhibits a large\namount of scatter, peaking at the merger system Mkn 266, and then dropping at high values of \\ensuremath{L_{\\mathrm{FIR}}}\/\\ensuremath{L_{\\mathrm{K}}}. Decomposing\n\\ensuremath{L_{\\mathrm{X}}}\/\\ensuremath{L_{\\mathrm{K}}}\\ into separate source and diffuse contributions to\n\\ensuremath{L_{\\mathrm{X}}}, it can be seen that the scatter arises\nprimarily from the diffuse component, whilst the drop in\n\\ensuremath{L_{\\mathrm{X}}}\/\\ensuremath{L_{\\mathrm{K}}}\\ at high \\ensuremath{L_{\\mathrm{FIR}}}\/\\ensuremath{L_{\\mathrm{K}}}, is a consequence of both \\ensuremath{L_{\\mathrm{diff}}}\/\\ensuremath{L_{\\mathrm{K}}}\\ and\n\\ensuremath{L_{\\mathrm{src}}}\/\\ensuremath{L_{\\mathrm{K}}}\\ falling steeply. Using the Kendall Rank coefficient, it was found\nthat the point source component, \\ensuremath{L_{\\mathrm{src}}}\/\\ensuremath{L_{\\mathrm{K}}}, shows a correlation of\n2.5$\\sigma$ with \\ensuremath{L_{\\mathrm{FIR}}}\/\\ensuremath{L_{\\mathrm{K}}}. Fitting a single power law to these data gives\na logarithmic index of 0.93$\\pm$0.28.\n\nHowever, from visual inspection of Figure \\ref{fig:lx_lps}, a single\npower law trend does not represent the behaviour adequately.\nAt high values of \\ensuremath{L_{\\mathrm{FIR}}}\/\\ensuremath{L_{\\mathrm{K}}}, the very active\nsystems in this sample actually exhibit declining values of\n\\ensuremath{L_{\\mathrm{X}}}\/\\ensuremath{L_{\\mathrm{K}}}\\ as \\ensuremath{L_{\\mathrm{FIR}}}\/\\ensuremath{L_{\\mathrm{K}}}\\ increases. In section\n\\ref{sec:ulf}, we proposed the possibility that these systems close to\nnuclear coalescence have a value of \\ensuremath{L_{\\mathrm{FIR}}}\\ enhanced relative to that in\nless extreme environments, due to a change in the IMF. 
Under this\nhypothesis, the corresponding points in Figure \\ref{fig:lx_lps} will have\nbeen shifted strongly to the right, which could account for the\nnegative slope \nseen in all three curves at high \\ensuremath{L_{\\mathrm{FIR}}}\/\\ensuremath{L_{\\mathrm{K}}}.\nThe idea that the change in behaviour relates to \\ensuremath{L_{\\mathrm{FIR}}}\/\\ensuremath{L_{\\mathrm{K}}}, rather than\n\\ensuremath{L_{\\mathrm{X}}}\/\\ensuremath{L_{\\mathrm{K}}}, is attractive since, given the very different\nmechanisms responsible for generating the source and diffuse\ncomponents of \\ensuremath{L_{\\mathrm{X}}}\\ (X-ray binaries and supernova-heated gas\nrespectively), it is hard to envisage any process which could\nsimultaneously change the behaviour of both.\n\n\n\\section{Conclusions}\n\\label{sec:con}\n\nFrom a sample of nine interacting and merging galaxies, the evolution of X-ray emission, ranging from detached spiral pairs through to merger-remnant systems, has been investigated. As part of this survey, results from the analysis of two \\emph{Chandra}\\ observations, Mkn 266 and Arp 222, have also been presented. Here we summarise the results from these \\emph{Chandra}\\ observations and then draw the main conclusions from the survey:\n\n\\subsection{Mkn 266 and Arp 222}\n\n\\begin{itemize}\n\n\\item{The luminous merger system, Mkn 266, has been shown to contain two nuclei, with X-ray luminosities of 3.47$\\times$10$^{41}$\\ensuremath{\\erg~\\ps}\\ and 1.04$\\times$10$^{41}$\\ensuremath{\\erg~\\ps}. In addition, an area of enhanced X-ray emission has been detected between them. This is coincident with a radio source and is likely a consequence of the collision between the two discs from the progenitors. A region of diffuse emission is detected to the north of the system. It is probable that this arises from a spiral arm that has been stripped out of the system during the merger. To the south east of the nucleus, a region of extended emission has been detected, indicating that this system could be on the verge of large-scale galactic winds breaking out. This system has the highest \\ensuremath{L_{\\mathrm{X}}}\/\\ensuremath{L_{\\mathrm{K}}}\\ ratio of any of the galaxies within our sample.} \n\n\\item{ 15 discrete X-ray sources have been detected in Arp 222, two of which are classified as ULXs. This merger-remnant system has been shown to be X-ray faint when compared to both other systems within this sample and the typical elliptical galaxy NGC 2434. The diffuse gas of Arp 222 has been modelled with a temperature of 0.6 keV and, even though optical and CO observations are consistent with those of elliptical galaxies, the X-ray luminosity of Arp 222 does not resemble that of a mature elliptical.}\n\n\\end{itemize}\n\n\\subsection{The X-ray Evolution of Merging Galaxies}\n\n\\begin{itemize}\n\n\\item{The most striking result from this work is the time at which \\ensuremath{L_{\\mathrm{X}}}\/\\ensuremath{L_{\\mathrm{B}}}\\ and \\ensuremath{L_{\\mathrm{X}}}\/\\ensuremath{L_{\\mathrm{K}}}\\ peak. It was previously believed that this was coincident with nuclear coalescence, but here we find the peak $\\sim$300 Myr before this coalescence takes place. 
We suggest that subsequent drop in X-ray emission is a consequence of large-scale diffuse outflows breaking out of the galactic discs, reducing the hot gas density and allowing the escape of energy in kinetic form.}\n\n\\item{This study has also demonstrated that, in the systems close to the point of coalescence, \\ensuremath{L_{\\mathrm{FIR}}}\\ is massively enhanced when compared to the X-ray binary luminosity of these systems. We suggest here that the high level of \\ensuremath{L_{\\mathrm{FIR}}}\\ result from a change in the IMF in these exceptional starbursts. With the production of more massive stars compared to intermediate mass stars in these galaxies leading to larger values of \\ensuremath{L_{\\mathrm{FIR}}}\\ per unit mass of stars formed. }\n\n\\item{At a time $\\sim$1 Gyr after coalescence, the merger-remnants in our sample are X-ray faint when compared to typical X-ray luminosities of mature elliptical galaxies. However, we see evidence that these systems will start to resemble typical elliptical galaxies at a greater dynamical age, given the properties of the 3 Gyr system within our sample. This supports the idea that halo regeneration will take place within low {\\ensuremath{L_{\\mathrm{X}}}} merger-remnants. We caution that, with only one older, more relaxed, system within our sample, our conclusions on this point are necessarily tentative. To fully understand how young merger-remnants evolve into typical elliptical systems, the period in which this transformation takes place needs to be studied in greater detail.}\n\n\\end{itemize}\n\n\n\\section{Acknowledgements}\n\nWe thank the \\emph{Chandra}\\ X-ray Center (CXC) Data Systems and Science\nData Systems teams for developing the software used for the reduction (SDP) and analysis (CIAO). We would also like to thank the anonymous referee for helpful comments\nwhich improved this paper, and Steve Diehl for providing us with the X-ray information for NGC 1700 and NGC 2403. \n\nThis publication has made use of data products from the Two Micron All Sky Survey, which is a collaboration between The University of Massachusetts and the Infrared Processing and Analysis Center (JPL\/ Caltech). Funding is provided by the National Aeronautics and Space Administration and the National Science Foundation.\n\nNJB acknowledges the support of a PPARC studentship.\n\n\n\n\n\\label{lastpage}\n\n\\bibliographystyle{mn2e}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzbbjn b/data_all_eng_slimpj/shuffled/split2/finalzzbbjn new file mode 100644 index 0000000000000000000000000000000000000000..d0f7ca92db3a58f22d479ccd08422024714fee28 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzbbjn @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nModelling dynamical systems is a common problem in science and engineer, and there are many applications related to such a modelling, e.g. robot navigation, natural language processing, speech recognition, etc.~\\cite{ICML2013_hamilton13}. Many approaches have been devoted to modelling systems, such as Hidden Markov Models~(HMM) for uncontrolled dynamical systems and Partially Observable Markov Decision Processes~(POMDPs) for controlled systems. However, such latent-state approaches usually suffer from local minima and require some assumptions~\\cite{Singh04predictivestate}. Predictive State Representations~(PSRs) offer an effective approach for modelling partially dynamical systems~\\cite{Littman01predictiverepresentations}. 
Unlike the latent-state approaches, PSRs use a vector of predictions about future events, called tests, to represent the state. The tests can be executed on the system and are fully observable. Compared to the latent-state approaches, PSRs have shown many advantages, such as the possibility of obtaining a globally optimal model, greater expressive power and less required prior domain knowledge~\\cite{LiuAAMAS15}.\n\nThere are two main problems in PSRs. One is the learning of the PSR model; the other is the application of the learned model, including prediction and planning~\\cite{Rosencrantz04learninglow,Liu14inaccuratePSR}. The state-of-the-art technique for addressing the learning problem is the spectral approach~\\cite{Boots-2011b}. Spectral methods treat the learning problem as the task of computing a singular value decomposition~(SVD) over a submatrix $H_s$ of a special type of matrix called the Hankel matrix~\\cite{Hsu_aspectral}. Under the strong assumptions that the rows $H$ and columns $T$ of $H_s$ are sufficient and that the entries of the matrix can be estimated accurately, it has been proven that the spectral approach for learning PSRs is statistically consistent and that the learned parameters converge to the true parameters~\\cite{Boots-2011b}. However, the sufficiency assumption usually requires a very large number of rows or columns of the $H_s$ matrix, which is rarely feasible in practice~\\cite{kulesza2015spectral}. At the same time, the computational complexity of the SVD operation on $H_s$ is $O(|T|^2|H|)$~($|T|$ and $|H|$ are the numbers of columns and rows in the matrix respectively), so for large sets of $T$ and $H$ such an operation is also prohibitively expensive. Moreover, to obtain the model parameters in the spectral approach, one must estimate and manipulate two observable matrices $P_{T,H}$, $P_{H}$, and $|A|\\times|O|$ observable matrices $P_{T,ao,H}$~\\cite{Liu16IJCAI}. Although Denis {\\em et al.}~\\cite{DBLP:conf\/icml\/DenisGH14} showed that the concentration of the empirical $P_{T,H}$ around its mean does not depend strongly on its dimension, which gives some hope of alleviating the statistical problem when using large sets of $T$ and $H$, as the accuracy of the learned model is directly connected to this concentration, manipulating these matrices is still too expensive in large systems.\n\nThus, in practice, taking the computational constraints into account, only a finite set of columns of the Hankel matrix can be retained before the spectral methods are applied. Since different sets of columns usually lead to learned models of different accuracy, in this paper we first introduce a concept of model entropy and show the high relevance between this entropy and the accuracy of the learned model; we then propose an approach to the column selection problem, which we call basis selection in spectral learning of PSRs, that uses the model entropy as its guidance. We finally show the effectiveness of the proposed approach by conducting experiments on {\\em PocMan}, a benchmark domain in the literature.\n\nWe organize the remainder of this paper as follows. We briefly review the background and define notation in Section 2. We propose the approach for basis selection in Section 3. We provide comparative results in Section 4. 
Finally we conclude the paper in Section 5.\n\n\n\\section{Preliminaries}\n\nPredictive state representations~(PSRs) represent state by using a vector of predictions of fully observable quantities~(tests) conditioned on past events~(histories), denoted $b(\\cdot)$. For discrete systems with finite set of observations $O=\\{o^1,o^2,\\cdots,o^{|O|}\\}$ and actions $A=\\{a^1,a^2,\\cdots,a^{|A|}\\}$, at time $\\tau$, a \\emph{test} is a sequence of action-observation pairs that starts from time $\\tau + 1$. Similarly, a \\emph{history} at $\\tau$ is a sequence of action-observation pairs that starts from the beginning of time and ends at time $\\tau$, which is used to describe the full sequence of past events. The prediction of a length-$m$ test $t$ at history $h$ is defined as $p(t|h)=p(ht)\/p(h)=\\prod^m_{i=1}Pr(o_i|ha_1o_1\\cdots a_i)$~\\cite{Singh04predictivestate}.\n\nThe underlying dynamical system can be described by a special bi-infinite matrix, called the Hankel matrix~\\cite{Balle:2014ML}, where the rows and columns correspond to all the possible tests $T$ and histories $H$ respectively, the entries of the matrix are defined as $P_{t,h}=p(ht)$ for any $t \\in T$ and $h \\in H$, where $ht$ is the concatenation of $h$ and $t$~\\cite{Boots-2011b}. The rank of the Hankel matrix is called the linear dimension of the system. When the rank is finite, we assume it is $k$, in PSRs, the state of the system at history $h$ can be represented as a prediction vector of $k$ tests conditioned at $h$. The $k$ tests used as the state representation is called the minimal {\\em core tests} that the predictions of these tests contain sufficient information to calculate the predictions for all tests, and is a \\emph{sufficient statistic}~\\cite{Singh04predictivestate,LiuAAMAS15}. For linear dynamical systems, the minimal core tests can be the set of tests that corresponds to the $k$ linearly independent columns of the Hankel matrix~~\\cite{LiuAAMAS15}.\n\nA PSR of rank $k$ can be parameterized by a reference condition state vector $b_* = b(\\epsilon)\\in \\mathbb{R}^k$, an update matrix $B_{ao} \\in \\mathbb{R}^{k \\times k}$ for each $a \\in A$ and $o \\in O$, and a normalization vector $b_\\infty \\in \\mathbb{R}^k$, where $\\epsilon$ is the empty history and $b_\\infty^T B_{ao} = 1^T$~\\cite{Hsu_aspectral,Boots-2011b}. 
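\n\nSince in practice the predictions above are estimated from sampled data, a minimal sketch of how the Hankel entries $P_{t,h}=p(ht)$ might be approximated from a batch of trajectories is given below. This is not the implementation used in this paper: it assumes the data were gathered by a uniform random policy over $|A|$ actions, so that the empirical frequency of a prefix can be reweighted by the probability of the policy emitting its action part, and it represents tests and histories as tuples of action-observation pairs.\n\\begin{verbatim}\nfrom collections import Counter\n\ndef empirical_hankel(trajs, tests, histories, num_actions):\n    # trajs: trajectories, each a tuple of (action, observation) pairs,\n    # gathered under a uniform random policy (an assumption of this sketch).\n    # Returns estimates of P[t, h] = p(ht), reweighting each prefix count by\n    # the probability that the policy produced its action sequence.\n    counts = Counter()\n    for traj in trajs:\n        for h in histories:\n            for t in tests:\n                seq = tuple(h) + tuple(t)\n                if traj[:len(seq)] == seq:\n                    counts[(t, h)] += 1\n    n = float(len(trajs))\n    return {th: c \/ (n * (1.0 \/ num_actions) ** (len(th[0]) + len(th[1])))\n            for th, c in counts.items()}\n\\end{verbatim}\n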
In the spectral approach, these parameters can be defined in terms of the matrices $P_\\mathcal{H}$, $P_{\\mathcal{T,H}}$, $P_{\\mathcal{T},ao,\\mathcal{H}}$ and an additional matrix $U \\in \\mathbb{R}^{T \\times d}$ as shown in Eq.~\\ref{equ:spectral}, where $\\mathcal{T}$ and $\\mathcal{H}$ are the set of all possible tests and histories respectively, $U$ is the left singular vectors of the matrix $P_{\\mathcal{T,H}}$, $^T$ is the transpose and $\\dag$ is the pseudo-inverse of the matrix~\\cite{Boots-2011b}.\n\\begin{equation}\n\\begin{split}\n& b_* = U^TP_{\\mathcal{H}}1_k, \\\\\n& b_{\\infty} = (P^T_{\\mathcal{T,H}})^{\\dag}P_\\mathcal{H}, \\\\\n& B_{ao} = U^TP_{\\mathcal{T},ao,\\mathcal{H}}(U^TP_{\\mathcal{T,H}})^\\dag.\n\\end{split}\n\\label{equ:spectral}\n\\end{equation}\n\nUsing these parameters, after taking action $a$ and seeing observation $o$ at history $h$, the PSR state at next time step $b(hao)$ is updated from $b(h)$ as follows~\\cite{Boots-2011b}:\n\n\\begin{equation}\nb(hao) = \\frac{B_{ao}b(h)}{b_\\infty^TB_{ao}b(h)}.\n\\end{equation}\n\nAlso, the probability of observing the sequence $a_1o_1a_2o_2 \\cdots a_no_n$ in the next $n$ time steps can be predicted by~\\cite{kulesza2015spectral}:\n\n\\begin{equation}\nPr[o_{1:t}||a_{1:t}] = b_{\\infty}^TB_{a_no_n} \\cdots B_{a_2o_2}B_{a_1o_1}b_*.\n\\end{equation}\n\nUnder the assumption that the columns and rows of the submatrix $H_s$ of the Hankel matrix are sufficient, when more and more data are included, the law of large numbers guarantees that the estimates $\\hat P_\\mathcal{H}$, $\\hat P_\\mathcal{T,H}$, and $\\hat P_{\\mathcal{T},ao,\\mathcal{H}}$ converge to the true matrices $P_\\mathcal{H}$, $P_\\mathcal{T,H}$, and $P_{\\mathcal{T},ao,\\mathcal{H}}$, the estimates $\\hat b_*$, $\\hat b_{\\infty}$, and $\\hat B_{ao}$ converge to the true parameters $b_*$, $b_{\\infty}$, and $B_{ao}$ for each $a \\in A$ and $o \\in O$, that the learning is consistent~\\cite{Boots-2011b}.\n\n\\section{Basis Selection via Model Entropy}\nAs the assumption that the columns and rows of the submatrix $H_s$ are sufficient is really strong~(usually means a very large set of columns and histories of the matrix), which almost fails in reality. At the same time, due to the computation and statistics constraints, we can only manipulate a limited finite set of tests\/histories. As different sets of columns~(tests) or rows~(histories) usually cause different model accuracy, which requires us to select the finite set of tests and rows~(histories) of the matrix for applying the spectral approach. In practice, as all possible training data is used to generate the histories, how to select the tests, i.e., the basis selection, is the crucial problem. In this section, we first introduce the concept of model entropy, then show the high relevance between the entropy and the model accuracy, finally we propose a simple search method for selecting the bases.\n\n\\noindent{\\bf Model Entropy}. Given a set of tests $X=\\{x_1, x_2, \\cdots, x_i\\}$, if it includes the set of core tests and the number of possible $p(X|\\cdot)$ is finite, Proposition~\\ref{pro:mdp} holds~\\cite{Liu16IJCAI}.\n\\begin{pro}\nThe Markov decision process~(MDP) model built by using $p(X|\\cdot)$ as state representation and the action-observation pair $\\langle ao \\rangle$ as action is deterministic.\n\\label{pro:mdp}\n\\end{pro}\nIn practice, it is difficult for $X$ to include the set of core tests. 
At any time step, the prediction vector $p(X|\\cdot)$ may actually correspond to several PSR states. In such cases, the transition from $p(X|h)$ to $p(X|hao)$ usually becomes stochastic and the less information is included in $X$, the more stochastic the transition will usually be. Inspired by the concept of Shannon entropy that measures information uncertainty~\\cite{shannon48}, to quantify the stochasticity, a concept of model entropy for each set of tests $X$ is defined in Eq.~\\ref{equ:entropy}~\\cite{Liu16IJCAI}.\n\\begin{equation}\n\\begin{small}\nE(X) = -\\sum_{a \\in \\mathcal{A}_{PP}} \\frac{1}{r(T^a)} \\sum_{i=1}^{r(T^a)} \\sum_{j=1}^{c(T^a)}T(s_i,a,s_j)\\log T(s_i,a,s_j),\n\\end{small}\n\\label{equ:entropy}\n\\end{equation}\nwhere $T$ is the state-transition function of the MDP using $p(X|\\cdot)$ as state representation and $\\langle ao \\rangle$ as action, $\\mathcal{A}_{PP}=A \\times O$ the set of action-observation pairs in the original system, $r(T^a)$ and $c(T^a)$ the number of rows and columns of the state-transition matrix $T^a$ respectively.\n\n\\noindent{\\bf Relevance between Model Entropy and Basis Selection}. When selecting the set of tests for applying spectral methods, the tests containing more information should be selected as the more information included in the set of tests, the higher accuracy the learned PSR model will usually be. According to Eq.~\\ref{equ:entropy}, the less information is included in the set of tests $X$, the more stochastic the state transition will be, and the higher the model entropy is. So for the basis selection problem, the set of tests with lowest model entropy should be selected.\n\n\\noindent{\\bf Basis Selection via Model Entropy}. Using the model entropy as the guidance, we propose a simple local search algorithm for searching the set of tests used for spectral learning of PSRs, which is shown in Algorithm~\\ref{Algo:selecting}. Starting with a default $T$ of the desired size, we iteratively sample a set of new tests and consider using it to replace the tests in $T$. If the replacement is a reduction in terms of the entropy value, then we keep it. After a fixed number of rounds, we stop and return the current $T$. In the algorithm, the entropy $E(T)$ for each candidate set of tests $T$ is calculated as follows~(We name this procedure as $EntropyLearn(D,T)$): We first generate an original randomly action-observation sequences $D$ as the training data for calculating the entropy. Then the data is translated into the form of $\\langle$action-observation$\\rangle$-$p(T|\\cdot)$ sequences. For example, a sequence $d=\\langle a_1o_1a_2o_2 \\cdots a_ko_k \\rangle$ is converted into $d'=\\langle a_1o_1 \\rangle p(T|a_1o_1)\\langle a_2o_2\\rangle p(T|a_1o_1a_2o_2) \\cdots \\langle a_ko_k\\rangle p(T|a_1o_1a_2o_2 \\cdots a_ko_k)$, where $p(T|\\cdot)$ can be estimated using the training data. Due to the sampling error, it is unlikely that any of these estimated $p(\\hat{T}|\\cdot)$ will be exactly the same, even if the true underlying $p(T|\\cdot)$ are identical. Statistical tests or linearly independent techniques can be used to estimate the number of distinct underlying $p(T|\\cdot)$ and cluster the estimated $p(\\hat{T}|\\cdot)$ corresponding to the same true prediction vector into one group(state)~\\cite{TalvitieS11,Liu14inaccuratePSR}. Subsequently, we compute the state-transition functions in the transformed data and build the MDP model. 
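\n\nAs a minimal illustration of how the entropy of Eq.~\\ref{equ:entropy} can be evaluated once the clustering step has produced discrete states, the sketch below computes it from per-$\\langle ao \\rangle$ transition count matrices. It is a sketch of the quantity being computed, under the assumption that the counts have already been accumulated between clustered states, not the implementation used in the experiments.\n\\begin{verbatim}\nimport numpy as np\n\ndef model_entropy(transition_counts):\n    # transition_counts: dict mapping each action-observation pair <ao>\n    # to a 2-D array of raw transition counts between clustered states\n    total = 0.0\n    for counts in transition_counts.values():\n        counts = np.asarray(counts, dtype=float)\n        rows = counts.sum(axis=1, keepdims=True)\n        probs = np.divide(counts, rows, out=np.zeros_like(counts),\n                          where=rows > 0)            # T(s_i, a, s_j)\n        logs = np.log(probs, out=np.zeros_like(probs), where=probs > 0)\n        # each <ao> contributes -(1\/r) * sum_ij T log T, as in the equation\n        total -= (probs * logs).sum() \/ counts.shape[0]\n    return total\n\\end{verbatim}\n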
Finally, the entropy $E(T)$ of a set of tests $T$ can be calculated according to Eq.~\\ref{equ:entropy}.\n\n\\begin{algorithm}[tb]\n \\caption{Search for sets of $k$ tests $T$ that approximately minimize $E(T)$ }\n \\label{Algo:selecting}\n \n\\begin{algorithmic}[1]\n \\STATE {\\bfseries Input:} dataset $D$, initial $T$ of size $k$, number of size $n < k$, distributions $p_T$ over candidate tests, number of rounds $r$, entropy value $EV$, an entropy threshold $E_{th}>0$, number of iteration $iterNum$.\n \\STATE {\\bfseries Initialize:} $EV := EntropyLearn(D,T)$.\n \\FOR{$i=1$ {\\bfseries to} $r$}\n \\FOR{$j=1$ {\\bfseries to} $iterNum$}\n \\STATE {Sample a set of $n$ tests $T_s$ $\\not \\in T \\sim p_T$}\n \\STATE {$T^{'}$ $\\leftarrow$ a set of $n$ test in $T$}\n \\STATE {$EV_T := EntropyLearn(D,T \\setminus T^{'} \\cup T_s)$}\n \\IF{$EV - EV_T > E_{th}$}\n \\STATE $EV := EV_T$\n \\STATE $T := T \\setminus T^{'} \\cup T_s$\n \\STATE {\\bfseries break}\n \\ENDIF\n \\ENDFOR\n \\ENDFOR\n \\STATE {\\bfseries Output:} $T$\n\\end{algorithmic}\n\\end{algorithm}\n\n\n\\section{Experimental Results}\nWe evaluated the proposed technique on {\\em PocMan}~\\cite{Silver10monte-carloplanning,JMLR:hamilton14}, a partially observable version of the video game {\\em PacMan}. The environment is extremely large that has 4 actions, 1024 observations and an extremely large number of states~(up to $10^{56}$).\n\n\\subsection{Experimental Setting}\n\\noindent{\\bf Evaluated Methods}.\nWe compared our approach~(Entropy-based) with the bound-based approach~\\cite{kulesza2015spectral}, which is the most related technique with our method that both the approaches address the same basis selection problem. The two approaches both first select a set of tests, then spectral method is applied by using these tests. Also, iterative approaches are used in both approaches for selecting the sets of tests. The main difference between our approach and the work of~\\cite{kulesza2015spectral} is the guidance used for selecting the set of tests. While our approach uses the model entropy as the guidance, the key idea in~\\cite{kulesza2015spectral} is that as in the limiting-case, the singular values of the learned transformed predictive state representations~(TPSR) parameters are bounded, then under an assumption that the smaller the singular values of the learned parameters are, the more accuracy the learned PSR model will be, they just select the tests that can cause smaller singular values of the learned PSR parameters. However, no any formal guarantees are provided for such an approach.\n\nA uniformly random generated sequence with length $200,000$ was used as the training sequence for both approaches. And to calculate the entropy of a candidate set of tests $T_c$, a randomly generated 5,000-length action-observation sequence $d$ was used. For each test $t \\in T_c$, $p(t|\\cdot)$ was estimated by executing $act(t)$ 100 times, where $act(t)$ is the action sequence in test $t$. The entropy thresholds for our approaches are 0.06, 0.04, and 0.02 for different number of tests as the entropy value usually decreases with the increase of the number of tests~(For number of tests 100 and 150, the threshold used is 0.06; for number of tests 200 and 250, the threshold is 0.04; and for number of tests 300, the threshold is 0.02). The number of rounds for both approaches is 10. In each round, we iteratively sample a set of new tests of size 20 and use it to replace 20 tests in $T$, the iterative number in each round is also 10. 
\\iffalse if the replacement is a reduction in terms of the entropy value or the bound in 10 iterations~(the reduction no less than the threshold for our approach), then we keep it and go to next round, otherwise, we also go to next round.\\fi After a fixed number of rounds, we stop and return the current $T$.\n\n\\noindent{\\bf Performance Measurements.}\nWe evaluated the learned models in terms of prediction accuracy, which is measured by the difference between the true predictions and the predictions given by the learned model over a test data sequence~(for {\\em PocMan}, we cannot obtain the true predictions, so Monte-Carlo rollout predictions were used~\\cite{JMLR:hamilton14}).\n\nTwo error functions were used in the measurement. One is the average one-step prediction error per time step on the test sequence as shown in Eq.~\\ref{eqn:f1}:\n\n\\begin{equation}\n\\frac{1}{L} \\sum_{t=1}^L |p(o_{t+1}|h_t,a_{t+1})-\\hat{p}(o_{t+1}|h_t,a_{t+1})|.\n\\label{eqn:f1}\n\\end{equation}\n$p(\\cdot)$ is the probability calculated from the true POMDP model or the Monte-Carlo rollout prediction and $\\hat{p}(\\cdot)$ the probability obtained from the learned model. $|\\cdot|$ refers to the absolute value. $L$ is the length of the test sequence used in the experiments. For the average four-step prediction error, the same equation was used with the four-step-ahead prediction $\\hat{p}(o_{t+1}o_{t+2}o_{t+3}o_{t+4}|h_ta_{t+1}a_{t+2}a_{t+3}a_{t+4})$.\n\n\\subsection{Performance Evaluation}\n\nIn this section, for each algorithm, we report the performance results as the mean error over 10 trials. For each trial, a uniformly randomly generated test sequence with length $L=20,000$ was used for testing the accuracy of the learned model.\n\nTwo kinds of experiments were conducted. The first experiment evaluates the prediction performance by fixing the number of tests at 100 and running Algorithm~\\ref{Algo:selecting} for 10 rounds. For each round, we reported the average one-step and four-step prediction errors for both approaches; the results are shown in Fig.~\\ref{fig:fix}. The number at each point in Fig.~\\ref{fig:fix} is the average entropy~(for our approach) or the average largest singular value~(for the bound-based approach) in the corresponding round, which also shows that for the entropy-based approach, a higher entropy value results in a lower prediction accuracy, while there is no clear relationship between the singular values and the prediction accuracy. The second experiment evaluates the prediction performance by varying the number of tests. For each number of tests, both approaches were run for 10 rounds, and the final results for both one-step and four-step errors after 10 rounds were reported in Fig.~\\ref{fig:vary}. 
For this experiment, we also reported the initial results~(Initial) without the replacement of the tests.\n\n\\begin{figure*}[!ht]\n\\begin{minipage}{6.0in}\n\\begin{minipage}{3in}\n\\includegraphics[width=0.8\\textwidth]{pocmaniterone.pdf}\n\\centerline{{\\scriptsize $(a)$}}\n\\end{minipage}\n\\begin{minipage}{3in}\n\\includegraphics[width=0.8\\textwidth]{pocmaniterfour.pdf}\n\\centerline{{\\scriptsize $(b)$}}\n\\end{minipage}\n\\end{minipage}\n\\caption{{\\small ($a$) One-step; ($b$) four-step prediction error of 10 rounds for fix number of tests}}\n\\label{fig:fix}\n\\end{figure*}\n\n\\begin{figure*}[!ht]\n\\begin{minipage}{6.0in}\n\\begin{minipage}{3in}\n\\includegraphics[width=0.8\\textwidth]{pocmantestone.pdf}\n\\centerline{{\\scriptsize $(a)$}}\n\\end{minipage}\n\\begin{minipage}{3in}\n\\includegraphics[width=0.8\\textwidth]{pocmantestfour.pdf}\n\\centerline{{\\scriptsize $(b)$}}\n\\end{minipage}\n\\end{minipage}\n\\caption{{\\small ($a$) One-step; ($b$) four-step prediction error of different number of tests}}\n\\label{fig:vary}\n\\end{figure*}\n\nAs can be seen from Figs.~\\ref{fig:fix} and~\\ref{fig:vary}, both for the one-step and four-step predictions, and both for the two kinds of experiments, in all the cases, our algorithm performs very well and outperforms the bound-based approaches. Considering the error reported is only the prediction error for one time step, the improvement on prediction accuracy on a long sequence is remarkable. Meanwhile, for the first kind of experiment, as the number of rounds increases, our algorithm reduces its prediction error while the bound-based approaches with increasing rounds do not improve their performances and are very unstable. What also can be seen from the experimental results is that with the increase of the number of tests, for all the approaches, the prediction accuracy of the obtained PSR model also increases, which demonstrates the importance of including more tests in the learning of the models.\n\n\\section{Conclusion}\n\nHow to choose the bases is a very important problem in spectral learning of PSRs. However, until now, there are very little work that can address this issue successfully. In this paper, by introducing the model entropy for measuring the model accuracy and showing the close relevance between the entropy and model accuracy, we propose an entropy-based basis selection strategy for spectral learning of PSRs. Several experiments were conducted on the PocMan environment, and the results show that compared to the state-of-the-art bound-based approach, our technique is more stable and achieved much better performance.\n\n\\acks{This work was supported by the National Natural Science Foundation of China (No. 61375077).}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{intro}\nSerre \\cite{Serre} developed the theory of $p$-adic and mod $p$ modular\nforms and produced several interesting results.\nIn his theory, the Ramanujan operator\n$\\theta :f=\\sum a_nq^n \\longmapsto \\theta (f):=\\sum n\\,a_nq^n$\nplayed an important role. The notion of such operator was extended\nto the case of Siegel modular forms. 
The theta operator (generalized\nRamanujan operator) on Siegel modular forms is defined by\n$$\n\\varTheta : F=\\sum_T a(T)q^T \\longmapsto \n\\varTheta (F):=\\sum_T\\text{det}(T)\\cdot a(T)q^T,\n$$\nwhere $F=\\sum a(T)q^T$ is the Fourier expansion (generalized $q$-expansion)\nof $F$.\n\nFor a prime number $p$, the theta operator acts on the algebra of mod $p$\nSiegel modular forms (cf. B\\\"{o}cherer-Nagaoka \\cite{B-N}). In our study, we found Siegel\nmodular forms $F$ which satisfy the property\n$$\n\\varTheta (F) \\equiv 0 \\pmod{p}.\n$$\nThe space consisting of such Siegel modular forms is called the {\\it mod $p$\nkernel of the theta operator}. In this terminology, we can say that the Igusa cusp form\nof weight 35 is an element of the the mod 23 kernel of the \ntheta operator (cf. Kikuta-Kodama-Nagaoka \\cite{K-K-N}). Moreover the theta series attached to \nthe Leech lattice is also in the mod 23 kernel of the theta operator \n(cf. Nagaoka-Takemori \\cite{N-T}).\n\nThe main purpose of this paper is to extend the notion of the theta operator\nto the case of Hermitian modular forms and, to give some examples of Hermitian \nmodular forms which are in the mod $p$ kernel of the theta operator.\nThe first half concerns the Eisenstein series. Let \n$\\Gamma^2(\\mathcal{O}_{\\boldsymbol{K}})$ be the Hermitian modular group\nof degree 2 with respect to an imaginary quadratic number field $\\boldsymbol{K}$.\nKrieg \\cite{Krieg} constructed a weight $k$ Hermitian modular form \n$F_{k,\\boldsymbol{K}}$ which coincides with the weight $k$ Eisenstein series\n$E_{k,\\boldsymbol{K}}^{(2)}$ for $\\Gamma^2(\\mathcal{O}_{\\boldsymbol{K}})$.\n\nThe first result says that the Hermitian modular form $F_{p+1,\\boldsymbol{K}}$\nis in the mod $p$ kernel of the theta operator under some condition\non $p$, namely\n$$\n\\varTheta (F_{p+1,\\boldsymbol{K}}) \\equiv 0\\pmod{p}\n\\quad (\\text{cf. Theorem \\ref{main1}}).\n$$\nAs a corollary, we can show that the weight $p+1$ Hermitian Eisenstein\nseries $E_{p+1,\\boldsymbol{K}}^{(2)}$ satisfies\n$$\n\\varTheta( E_{p+1,\\boldsymbol{K}}^{(2)}) \\equiv 0 \\pmod{p}\n$$\nif the class number of $\\boldsymbol{K}$ equals one.\n\nIn the remaining part, we give various examples which are in the mod $p$\nkernel of the theta operator. The first example we show is related to the\ntheta series attached to positive definite, even unimodular Hermitian lattice \nover the Gaussian field.\nLet $\\mathcal{L}$ be a positive definite, even unimodular Hermitian lattice of rank\n$r$ with the Gram matrix $H$. We denote the corresponding Hermitian theta series\nof degree $n$ by $\\vartheta_{\\mathcal{L}}^{(n)}=\\vartheta_H^{(n)}$. It is known\nthat the rank $r$ is divisible by 4 and $\\vartheta_{\\mathcal{L}}^{(n)}$ becomes a\nHermitian modular form of weight $r$. \nIn the case $r=12$, we have a positive definite, even integral Hermitian lattice\n$\\mathcal{L}_{\\mathbb{C}}$ of rank 12, which does not have any vector of\nlength one. In this paper we call it the Hermitian Leech lattice. \nThe theta series attached to $\\mathcal{L}_{\\mathbb{C}}$ satisfies\n$$\n\\varTheta (\\vartheta_{\\mathcal{L}_{\\mathbb{C}}}^{(2)}) \\equiv\n0 \\pmod{11}\n\\quad (\\text{cf. Theorem \\ref{mod11cong}}).\n$$\n\nThe next example is connected with the Hermitian theta constant. \nLet $\\mathcal{E}$ be a set of mod 2 even characteristics of degree 2 (cf. 
$\\S$ \\ref{thetaconstant}).\nWe consider the theta constant $\\theta_{\\boldsymbol{m}}$\\,$(\\boldsymbol{m}\\in\\mathcal{E})$.\nIt is known that the function\n$$\n\\psi_{4k}:=\\frac{1}{4}\\sum_{\\boldsymbol{m}\\in\\mathcal{E}}\\theta_{\\boldsymbol{m}}^{4k}\n$$\ndefines a Hermitian modular form of weight $4k$ (cf. Freitag \\cite{Freitag}).\nThe final result can be stated as\n$$\n\\varTheta (\\psi_8) \\equiv 0 \\pmod{7},\\qquad\n\\varTheta (\\psi_{12}) \\equiv 0 \\pmod{11}\n\\quad (\\text{cf. Theorem \\ref{thetaconstantth}}).\n$$\n\nOur proof is based on the fact that the image of a weight $k$ modular form of the theta\noperator is congruent to a weight $k+p+1$ cusp form mod $p$ (cf. Theorem 2.7) and then\nwe use the Sturm bound (Corollary 2.6).\n\\section{Hermitian modular forms}\n\\label{Sect.2}\n\\subsection{Notation and definition}\n\\label{Sect.2.1}\nThe Hermitian upper half-space of degree $n$ is defined by\n$$\n\\mathbb{H}_n:=\\{\\,Z\\in Mat_n(\\mathbb{C})\\,\\mid\\, \\frac{1}{2i}(Z-{}^tZ)>0\\,\\},\n$$\nwhere ${}^t\\overline{Z}$ is the transposed complex conjugate of $Z$.\nThe space $\\mathbb{H}_n$ contains the Siegel upper-half space of degree\n$n$\n$$\n\\mathbb{S}_n:=\\mathbb{H}_n\\cap {\\rm Sym}_n(\\mathbb{C}).\n$$\nLet $\\boldsymbol{K}$ be an imaginary quadratic number field with discriminant\n$d_{\\boldsymbol{K}}$ and ring of integers $\\mathcal{O}_{\\boldsymbol{K}}$.\nThe Hermitian modular group\n$$\n\\Gamma^n(\\mathcal{O}_{\\boldsymbol{K}}):=\\left\\{\\, M\\in\n{\\rm Mat}_{2n}(\\mathcal{O}_{\\boldsymbol{K}})\n\\,\\mid\\,\n{}^t\\overline{M}J_nM=J_n:=\n\\begin{pmatrix}0 & -1_n \\\\ 1_n & 0\\end{pmatrix}\\,\\right\\}\n$$\nacts on $\\mathbb{H}_n$ by fractional transformation\n$$\n\\mathbb{H}_n\\ni Z\\longmapsto M\\langle Z\\rangle:=\n(AZ+B)(CZ+D)^{-1},\\;\nM=\\begin{pmatrix}A&B\\\\ C&D\\end{pmatrix}\n\\in \\Gamma^n(\\mathcal{O}_{\\boldsymbol{K}}).\n$$\n\nLet $\\Gamma \\subset \\Gamma^n(\\mathcal{O}_{\\boldsymbol{K}})$ be a\nsubgroup of finite index and $\\nu_k$\\,$(k\\in\\mathbb{Z})$ an abelian character\nof $\\Gamma$ satisfying $\\nu_k\\cdot\\nu_{k'=}\\nu_{k+k'}$.\nWe denote by $M_k(\\Gamma,\\nu_k)$ the space of Hermitian modular forms\nof weight $k$ and character $\\nu_k$ with respect to $\\Gamma$. Namely\nit consists of holomorphic functions $F:\\mathbb{H}_n\\longrightarrow\\mathbb{C}$\nsatisfying\n$$\nF\\mid_kM(Z):={\\rm det}(CZ+D)^{-k}F(M\\langle Z\\rangle)=\\nu_k(M)\\cdot F(Z)\n$$\nfor all $M=\\binom{*\\,*}{CD}\\in \\Gamma$. When\n$\\nu_k$ is trivial, we write it by $M_k(\\Gamma)$ simply. The subspace\n$S_k(\\Gamma,\\nu_k)$ of cusp forms is characterized by the condition\n$$\n\\Phi\\left(F\\mid_k\\begin{pmatrix}{}^t\\overline{U}&0\\\\ 0& U\\end{pmatrix}\\right)\n\\equiv 0,\\quad\n(U\\in GL_n(\\mathcal{O}_{\\boldsymbol{K}})),\n$$\nwhere $\\Phi$ is the Siegel operator. A modular form $F\\in M_k(\\Gamma,\\nu_k)$\nis called {\\it symmetric} if\n$$\nF({}^t\\!Z)=F(Z).\n$$\nWe denote by $M_k(\\Gamma,\\nu_k)^{\\rm sym}$ the subspace consisting of symmetric\nmodular forms. 
Moreover\n$$\nS_k(\\Gamma,\\nu)^{\\rm sym}:=M_k(\\Gamma,\\nu_k)^{\\rm sym}\\cap\nS_k(\\Gamma,\\nu_k).\n$$\nIf $F\\in M_k(\\Gamma,\\nu_k)$ satisfies the condition\n$$\nF(Z+B)=F(Z)\\qquad {\\rm for\\; all} \\;B\\in Her_n(\\mathcal{O}_{\\boldsymbol{K}}),\n$$\nthen $F$ has a Fourier expansion of the form\n\\begin{equation}\n\\label{Fourier}\nF(Z)\n=\\sum_{0\\leq H\\in\\Lambda_n(\\boldsymbol{K})}\na(F;H)\\text{exp}(2\\pi i\\text{tr}(HZ)),\n\\end{equation}\nwhere\n$$\n\\Lambda_n(\\boldsymbol{K}):=\\{\\,H=(h_{jl})\\in Her_n(\\boldsymbol{K})\\,\\mid\\,\nh_{jj}\\in\\mathbb{Z},\\;\\sqrt{d_{\\boldsymbol{K}}}\\,h_{jl}\\in\\mathcal{O}_{\\boldsymbol{K}}\\,\\}.\n$$\n\nWe assume that any $F\\in M_k(\\Gamma,\\nu_k)$ has the Fourier expansion above.\nFor any subring $R\\subset\\mathbb{C}$, we write as\n$$\nM_k(\\Gamma,\\nu_k)_R:=\\{\\,F\\in M_k(\\Gamma,\\nu_k)\\,\\mid\\,\na(F;H)\\in R\\;\\;(\\forall H\\in\\Lambda_n(\\boldsymbol{K}))\\,\\}.\n$$\nIn the Fourier expansion (\\ref{Fourier}), we use the abbreviation\n$$\n\\boldsymbol{q}^H:=\\text{exp}(2\\pi i\\text{tr}(HZ)).\n$$\nThe generalized $\\boldsymbol{q}$-expansion $F=\\sum a(F;H)\\boldsymbol{q}^H$\ncan be considered as an element in a formal power series ring\n$\\mathbb{C}[\\![\\boldsymbol{q}]\\!]$ (cf. Munemoto-Nagaoka \\cite{M-N}, p.248) from which we have\n$$\nM_k(\\Gamma,\\nu_k)_R \\subset R[\\![\\boldsymbol{q}]\\!].\n$$\n\nLet $p$ be a prime number and $\\mathbb{Z}_{(p)}$ the local ring at $p$, namely, \nring of $p$-integral rational numbers. For \n$F_i\\in M_{k_i}(\\Gamma,\\nu_{k_i})_{\\mathbb{Z}_{(p)}}$\\,\n($i=1,2$), we write $F_1 \\equiv F_2 \\pmod{p}$ when\n$$\na(F_1;H) \\equiv a(F_2;H) \\pmod{p}\n$$\nfor all $H\\in\\Lambda_n(\\boldsymbol{K})$.\n\\subsection{Hermitian modular forms of degree 2}\n\\label{Sect.2.2}\nIn the rest of this paper, we deal with Hermitian modular forms of degree 2.\n\\subsubsection{Eisenstein series}\n\\label{Sect.2.2.1}\nWe consider the Hermitian Eisenstein series of degree 2.\n$$\nE_{k,\\boldsymbol{K}}^{(2)}(Z)\n:=\\sum_{M=\\binom{*\\,*}{CD}}\\text{det}^{k\/2}(M)\\,\\text{det}(CZ+D)^{-k},\n\\quad Z\\in \\mathbb{H}_2,\n$$\nwhere $k>4$ is even and $M=\\binom{*\\,*}{CD}$ runs over a set\nof representaives of \n$\\left\\{\\binom{*\\,*}{0\\,*}\\right\\}\\backslash \\Gamma^2(\\mathcal{O}_{\\boldsymbol{K}})$.\nThen\n$$\nE_{k,\\boldsymbol{K}}^{(2)}\\in\nM_k(\\Gamma^2(\\mathcal{O}_{\\boldsymbol{K}}),\\text{det}^{-k\/2})_{\\mathbb{Q}}^{\\text{sym}}.\n$$\nMoreover, $E_{4,\\boldsymbol{K}}^{(2)}\\in\nM_4(\\Gamma^2(\\mathcal{O}_{\\boldsymbol{K}}),\\text{det}^{-2})_{\\mathbb{Q}}^{\\text{sym}}$ is constructed by the Maass\nlift (Krieg \\cite{Krieg}). For an even integer $k\\geq 4$, $E_k^{(1)}=\\Phi (E_{k,\\boldsymbol{K}}^{(2)})$\nis the normalized Eisenstein series Eisenstein series of weight $k$ for $SL_2(\\mathbb{Z})$.\n\\subsubsection{Structure of the graded ring in the case \n$\\boldsymbol{K}=\\mathbb{Q}(i)$}\n\\label{Sect.2.2.2}\nIn this section, we assume that $\\boldsymbol{K}=\\mathbb{Q}(i)$. 
In \\cite{K-N},\nthe authors defined some Hermitian cusp forms\n$$\n\\chi_8\\in S_8(\\Gamma^2(\\mathcal{O}_{\\boldsymbol{K}}))_{\\mathbb{Z}}^{\\text{sym}},\nF_{10}\\in S_{10}(\\Gamma^2(\\mathcal{O}_{\\boldsymbol{K}}),\\text{det}^5)_{\\mathbb{Z}}^{\\text{sym}},\nF_{12}\\in S_{12}(\\Gamma^2(\\mathcal{O}_{\\boldsymbol{K}}))_{\\mathbb{Z}}^{\\text{sym}},\n$$\ncharacterized by\n$$\n\\chi_8\\mid_{\\mathbb{S}_2}\\equiv 0,\\quad\nF_{10}\\mid_{\\mathbb{S}_2}=6X_{10},\\quad\nF_{12}\\mid_{\\mathbb{S}_2}=X_{12},\n$$\nwhere $X_k$\\,($k=10,12$) is Igusa's Siegel cusp form of weight $k$ \nwith integral Fourier coefficients (cf. Kikuta-Nagaoka \\cite{K-N}).\n\\begin{Thm}\n\\label{structure}\nLet $\\boldsymbol{K}=\\mathbb{Q}(i)$. The graded ring\n$$\n\\bigoplus_{k\\in\\mathbb{Z}}\nM_k(\\Gamma^2(\\mathcal{O}_{\\boldsymbol{K}}),\\text{det}^{k\/2})^{\\text{sym}}.\n$$\nis generated by\n$$\nE_{4,\\boldsymbol{K}}^{(2)},\\quad\nE_{6,\\boldsymbol{K}}^{(2)},\\quad\n\\chi_8,\\quad\nF_{10}, \\;\\;\\text{and}\\;\\;\nF_{12}.\n$$\n\\end{Thm}\nFor the proof, we should consult, for example, Kikuta-Nagaoka \\cite{K-N}.\n\\begin{Rem}\n$E_{4,\\boldsymbol{K}}^{(2)}\\in M_4(\\Gamma^2(\\mathcal{O}_{\\boldsymbol{K}}))_{\\mathbb{Z}}^{\\text{sym}} $,\n$E_{6,\\boldsymbol{K}}^{(2)}\\in M_6(\\Gamma^2(\\mathcal{O}_{\\boldsymbol{K}}),\\text{det}^{3})_{\\mathbb{Z}}^{\\text{sym}} $.\n\\end{Rem}\n\\subsubsection{Sturm bound}\n\\label{Sect.2.2.3}\nSturm gave some condition of $p$-divisibility of the Fourier coefficients of modular form. \nLater the bound is studied in the case of modular forms with several variables.\nFor example, the first author \\cite{C-C-K} studied the bound for Hermitian modular\nforms of degree 2 with respect to \n$\\boldsymbol{K}=\\mathbb{Q}(i)$ and $\\mathbb{Q}(\\sqrt{3}\\,i)$. However the statement\nis incorrect. We correct it here.\n\nWe assume that $\\boldsymbol{K}=\\mathbb{Q}(i)$ and use an abbreviation\n\\begin{equation}\n\\label{abb}\n[m,a+bi,n]:=\\begin{pmatrix}m & \\frac{a+bi}{2}\\\\ \\frac{a-bi}{2} & n \\end{pmatrix}\n\\in\\Lambda_2(\\boldsymbol{K}).\n\\end{equation}\nWe define a lexicographic order for the different element elements\n$$\nH=[m,a+bi,n],\\quad H'=[m',a'+b'i,n']\n$$\nof $\\Lambda_2(\\boldsymbol{K})$ by\n\\begin{align*}\nH \\succ H'\\quad\\Longleftrightarrow\\quad & (1)\\; \\text{tr}(H) > \\text{tr}(H')\\quad \\text{or}\\\\\n & (2)\\; \\text{tr}(H)=\\text{tr}(H'),\\;m>m'\\quad \\text{or}\\\\\n & (3)\\; \\text{tr}(H)=\\text{tr}(H'),\\;m=m',\\;a>a'\\quad \\text{or}\\\\\n & (4)\\; \\text{tr}(H)=\\text{tr}(H'),\\;m=m',\\;a=a',\\;b>b'.\n\\end{align*}\nLet $p$ be a prime number and \n$F\\in M_k(\\Gamma^2(\\mathcal{O}_{\\boldsymbol{K}}))_{\\mathbb{Z}_{(p)}}$.\nWe define the order of $F$ by\n$$\n\\text{ord}_p(F):=\\text{min}\\{\\,H\\in\\Lambda_2(\\boldsymbol{K})\\,\\mid\\,\na(F;H)\\not\\equiv 0 \\pmod{p}\\,\\},\n$$\nwhere the ``minimum'' is defined in the sense of the above order. If\n$F \\equiv 0 \\pmod{p}$, then we define $\\text{ord}_p(F)=(\\infty)$. 
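For example, under this order we have $[2,1+i,1] \\succ [1,1,2]$: both matrices have trace $3$, but the $(1,1)$-entries satisfy $2>1$. Similarly $[1,1+i,1] \\succ [1,1,1]$, since the traces, the $(1,1)$-entries and the values of $a$ coincide, while $b=1>0$.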
In the case of\n$\\boldsymbol{K}=\\mathbb{Q}(\\sqrt{3}\\,i)$, we can also define the order in a similar\nway.\n\nIt is easy to check that\n\\begin{Lem}\n\\label{order}\nThe following equality holds.\n$$\n{\\rm ord}_p(FG)={\\rm ord}_p(F)+{\\rm ord}_p(G).\n$$\n\\end{Lem}\nThen we have\n\\begin{Thm}\n\\label{sturm}\nLet $k$ be an even integer and $p$ a prime number with $p\\geq 5$.\nLet $\\boldsymbol{K}=\\mathbb{Q}(i)$ or $\\boldsymbol{K}=\\mathbb{Q}(\\sqrt{3}\\,i)$.\nFor $F\\in M_k(\\Gamma^2(\\mathcal{O}_{\\boldsymbol{K}}),\\nu_k)_{\\mathbb{Z}_{(p)}}^{\\text{sym}}$,\nassume that\n$$\n{\\rm ord}_p(F)\\succ\n\\begin{cases}\n\\displaystyle\n\\left[ \\left[\\frac{k}{8}\\right],2 \\left[\\frac{k}{8}\\right], \\left[\\frac{k}{8}\\right]\\right] & \n\\text{if $\\boldsymbol{K}=\\mathbb{Q}(i)$}\n\\vspace{2mm}\n\\\\\n\\displaystyle\n\\left[\\left[\\frac{k}{9}\\right],2\\left[\\frac{k}{9}\\right], \\left[\\frac{k}{9}\\right]\\right] & \n\\text{if $\\boldsymbol{K}=\\mathbb{Q}(\\sqrt{3}\\,i)$}\n\\end{cases},\n$$\nThen we have ${\\rm ord}_p(F)=(\\infty)$, i.e., $F \\equiv 0\\pmod{p}$.\nHere\n$$\n\\nu_k=\n\\begin{cases}\n{\\rm det}^{k\/2} & \\text{if $\\boldsymbol{K}=\\mathbb{Q}(i)$}\\\\\n{\\rm det}^{k} & \\text{if $\\boldsymbol{K}=\\mathbb{Q}(\\sqrt{3}\\,i)$}\n\\end{cases}\n$$\nand $[x]$ inside of the bracket means the greatest integer such that $\\leq x$.\n\\end{Thm}\n\\begin{Rem}\nFor \n$\\boldsymbol{K}=\\mathbb{Q}(i)$ (resp. $\\boldsymbol{K}=\\mathbb{Q}(\\sqrt{3}\\,i)$)\nthe matrix $\\left[ [\\frac{k}{8}],2[\\frac{k}{8}],[\\frac{k}{8}] \\right]$\\;\n$\\left(\\text{resp.} \\left[ [\\frac{k}{9}],2[\\frac{k}{9}],[\\frac{k}{9}] \\right]\\right)$\nis the maximum of the elements in $\\Lambda_2(\\boldsymbol{K})$ of the form\n$\\left[ [\\frac{k}{8}],*,[\\frac{k}{8}] \\right]\\geq 0$\\;\n(resp. $\\left[ [\\frac{k}{9}],*,[\\frac{k}{9}] \\right]\\geq 0$).\n\\end{Rem}\n\\begin{proof} {\\it of Theorem \\ref{sturm}}\nLet $\\boldsymbol{K}=\\mathbb{Q}(i)$. We use the induction\non the weight $k$. \nWe can confirm that it is true for small $k$.\nSuppose that the statement is true for any $k$ with $kr$, then\n$F$ is called a {\\it mod $p$ singular Hermitian modular form} (e.g. cf. B\\\"{o}cherer-Kikuta \\cite{B-K}). \nIt is obvious that,\nif $F$ is a mod $p$ singular Hermitian modular form, then $F$ is an element of\nmod $p$ kernel of the theta operator.\n\nThe main purpose of this paper is to give some examples of Hermitian modular\nform in the mod $p$ kernel of the theta operator in the case that $n=2$.\n\\subsubsection{Basic property of theta operator}\n\\label{Sect.2.3.1}\nAs we stated above, the image $\\varTheta (F)$ is not necessarily a Hermitian\nmodular form. However the following result holds:\n\\begin{Thm}\n\\label{thetamodp}\nAssume that $\\boldsymbol{K}=\\mathbb{Q}(i)$ and $p$ is a prime number such\nthat $p\\geq 5$. For any \n$F\\in M_k(\\Gamma^2(\\mathcal{O}_{\\boldsymbol{K}}),{\\rm det}^{k\/2})_{\\mathbb{Z}_{(p)}}^{\\text{sym}}$, there is a cusp form\n$$\nG\\in S_{k+p+1}(\\Gamma^2(\\mathcal{O}_{\\boldsymbol{K}}),{\\rm det}^{(k+p+1)\/2})_{\\mathbb{Z}_{(p)}}^{{\\rm sym}}\n$$ \nsuch that\n$$\n\\varTheta (F) \\equiv G \\pmod{p}.\n$$\n\\end{Thm}\n\\begin{proof}\nA corresponding statement in the case of Siegel modular forms can be found\nin B\\\"{o}cherer-Nagaoka \\cite{B-N}, Theorem 4. 
The proof here follows the same line.\nWe consider the normailized Rankin-Cohen bracket $[F_1,F_2]$ for\n$F_i\\in M_{k_i}(\\Gamma^2(\\mathcal{O}_{\\boldsymbol{K}}),\\text{det}^{k_i\/2})_{\\mathbb{Z}_{(p)}}^{\\text{sym}}$\\\\\n$(i=1,2)$\n(e.g. cf. Martin-Senadheera \\cite{M-S}). We can show that $[F_1,F_2]$ becomes\na Hermitian cusp form\n$$\n[F_1,F_2]\\in S_{k_1+k_2+2}\n(\\Gamma^2(\\mathcal{O}_{\\boldsymbol{K}}),\\text{det}^{(k_1+k_2+2)\/2})\n_{\\mathbb{Z}_{(p)}}^{\\text{sym}}.\n$$\nIf $p$ is a prime number such that $p\\geq 5$, then there is a Hermitian\nmodular form $G_{p-1}\\in M_{p-1}(\\Gamma^2(\\mathcal{O}_{\\boldsymbol{K}}),\\text{det}^{(p-1)\/2})_{\\mathbb{Z}_{(p)}}^{\\text{sym}}$ \nsuch that\n$$\nG_{p-1} \\equiv 1 \\pmod{p}\\quad (\\text{cf. {\\rm Kikuta-Nagaoka} \\cite{K-N}. Proposition 5}).\n$$\nFor a given $F\\in M_k(\\Gamma^2(\\mathcal{O}_{\\boldsymbol{K}}),\\text{det}^{k\/2})_{\\mathbb{Z}_{(p)}}^{\\text{sym}}$, we have\n$$\n\\varTheta (F) \\equiv [F,G_{p-1}] \\pmod{p}.\n$$\nHence we may put\n$$\nG:=[F,G_{p-1}]\\in S_{k+p+1}(\\Gamma^2(\\mathcal{O}_{\\boldsymbol{K}}),\\text{det}^{(k+p+1)\/2})_{\\mathbb{Z}_{(p)}}^{\\text{sym}}.\n$$\n\\end{proof}\n\\begin{Ex}\n\\label{ex.1}\n$$\n\\varTheta (E_{4,\\boldsymbol{K}}) \\equiv 5E_{4,\\boldsymbol{K}}\\cdot \\chi_8+2F_{12}\n\\pmod{7}.\n$$\n\\end{Ex}\n\\section{Eisenstein case}\n\\label{Sect.3}\nIn this section we deal with the Hermitian modular forms related to the Eisenstein series\nof degree 2.\n\\subsection{Krieg's result}\n\\label{Sect.3.1}\nWe denote by $h_{\\boldsymbol{K}}$ the class number of $\\boldsymbol{K}$ and\n$w_{\\boldsymbol{K}}$ the order of the unit group of $\\boldsymbol{K}$.\n\nGiven a prime $q$ dividing $D_{\\boldsymbol{K}}:=-d_{\\boldsymbol{K}}$ define the $q$-factor $\\chi_q$ of \n$\\chi_{\\boldsymbol{K}}$ (cf. Miyake \\cite{Miyake}, p.80). Then $\\chi_{\\boldsymbol{K}}$ can be\ndecomposed as\n$$\n\\chi_{\\boldsymbol{K}}=\\prod_{q\\mid D_{\\boldsymbol{K}}}\\chi_q.\n$$\nWe set\n$$\na_{D_{\\boldsymbol{K}}}(\\ell):=\\prod_{q\\mid D_{\\boldsymbol{K}}}(1+\\chi_q(-\\ell)).\n$$\nLet $D_{\\boldsymbol{K}}=mn$ with coprime $m$,\\,$n$. 
We set\n$$\n\\psi_m:=\\prod_{\\substack{q:\\text{prime}\\\\ q\\mid m}}\\chi_q,\\qquad\n\\psi_1:=1.\n$$\nFor $H\\in\\Lambda_2(\\boldsymbol{K})$ with $H\\ne O_2$, we define\n$$\n\\varepsilon (H):=\\text{max}\\{ \\ell\\in\\mathbb{N}\\,\\mid\\, \\ell^{-1}H\\in\n\\Lambda_2(\\boldsymbol{K})\\}.\n$$\nKrieg's result is stated as follows:\n\\begin{Thm}\n\\label{Krieg}\n{\\rm (Krieg \\cite{Krieg})}\\;\nAssume that $k \\equiv 0 \\pmod{w_{\\boldsymbol{K}}}$ and $k>4$.\nThen there exists a modular form\n$F_{k,\\boldsymbol{K}}\\in M_k(\\Gamma_2(\\mathcal{O}_{\\boldsymbol{K}}))_{\\mathbb{Q}}^{{\\rm sym}}$\nwhose Fourier coefficient $a(F_{k,\\boldsymbol{K}};H)$ is given by\n$$\na(F_{k,\\boldsymbol{K}};H)=\n\\begin{cases}\n\\displaystyle\\frac{4k(k-1)}{B_k\\cdot B_{k-1,\\chi_{\\boldsymbol{K}}}}\n\\sum_{00,\\\\\n\\displaystyle\n-\\frac{2k}{B_k}\\sum_{03$ is a prime number such that $\\chi_{\\boldsymbol{K}}(p)=-1$ and\n$h_{\\boldsymbol{K}} \\not\\equiv 0 \\pmod{p}$, then\n$$\n\\varTheta (F_{p+1,\\boldsymbol{K}}) \\equiv 0 \\pmod{p}.\n$$\n\\end{Thm}\n\\begin{Cor}\n\\label{cor1}\nAssume that $h_{\\boldsymbol{K}}=1$ and $p>3$ is a prime number such\nthat $\\chi_{\\boldsymbol{K}}(p)=-1$.\nThen the weight $p+1$ Hermitian Eisenstein series $E_{p+1,\\boldsymbol{K}}^{(2)}$ satisfies\n$$\n\\varTheta (E_{p+1,\\boldsymbol{K}}^{(2)}) \\equiv 0 \\pmod{p}.\n$$\n\\end{Cor}\n\\begin{Rem}\n{\\rm (1)}\\;\nIn the above theorem, the weight condition $k=p+1 \\equiv 0 \\pmod{w_{\\boldsymbol{K}}}$\nis automatically satisfied.\nIn fact, \nin the case $\\boldsymbol{K}=\\mathbb{Q}(i)$, the condition $\\chi_{\\boldsymbol{K}}(p)=-1$\nimplies $p \\equiv 3 \\pmod{4}$. Then $p+1 \\equiv 0 \\pmod{4}$. In the case \n$\\boldsymbol{K}=\\mathbb{Q}(\\sqrt{3}\\,i)$, it follows from \nthe condition $\\chi_{\\boldsymbol{K}}(p)=-1$ that $p \\equiv -1 \\pmod{3}$. Since $p$ is odd,\nwe have $p+1 \\equiv 0 \\pmod{6}$. Since $w_{\\boldsymbol{K}}=2$ in the other cases,\n$p+1 \\equiv 0 \\pmod{w_{\\boldsymbol{K}}}$ is obvious.\n\\\\\n{\\rm (2)}\\; It is known that there are infinitely many $\\boldsymbol{K}$ and $p$ satisfying\n$$\n\\chi_{\\boldsymbol{K}}(p)=-1\\;\\; \\text{and}\\;\\;\nh_{\\boldsymbol{K}} \\not\\equiv 0 \\pmod{p}\n$$\n{\\rm (e.g. cf. Horie-Onishi \\cite{H-O})}.\\\\\n{\\rm (3)}\\; Our interest is to construct an element of mod $p$ kernel of the theta operator with the possible minimum weight \n(i.e., the weight is its filtration. For the details on the filtration \nof the mod $p$ modular forms, see Serre \\cite{Serre}). \nIf we do not restrict on the weight, we can construct some trivial examples in several ways. \nFor example, the power $F^p$ of a modular form $F$ is such a trivial example. \nIf $F$ is of weight $k$, then its weight is $pk$ and this is too large.\nWe suppose that the possible minimum weight is $p+1$ for mod $p$ non-singular cases\n(cf. B\\\"{o}cherer-Kikuta-Takemori \\cite{B-K-T}).\n\\end{Rem}\nFor the proof of the theorem, it is sufficient to show the following:\n\\begin{Prop}\n\\label{prop1}\nAssume that\n$p>3$ is a prime number such that $\\chi_{\\boldsymbol{K}}(p)=-1$ and\n$h_{\\boldsymbol{K}} \\not\\equiv 0 \\pmod{p}$. Let $a(F_{k,\\boldsymbol{K}};H)$\nbe the Fourier coefficient of $F_{k,\\boldsymbol{K}}$ at $H$. 
If ${\\rm det}(H)\\not\\equiv 0\\pmod{p}$,\nthen\n\\begin{equation}\n\\label{star}\na(F_{p+1,\\boldsymbol{K}};H) \\equiv 0 \\pmod{p}.\n\\end{equation}\n\\end{Prop}\n\\begin{proof}\nBy Theorem \\ref{Krieg}, the Fourier coefficient $a(F_{p+1,\\boldsymbol{K}};H)$ is expressed as\n$$\na(F_{p+1,\\boldsymbol{K}};H)=\\frac{4(p+1)p}{B_{p+1}\\cdot B_{p,\\chi_{\\boldsymbol{K}}}}\n\\sum_{00$. First we look at the factor\n$$\nA:=\\frac{4(p+1)p}{B_{p+1}\\cdot B_{p,\\chi_{\\boldsymbol{K}}}}.\n$$\nBy Kummer's congruence relation, we obtain\n\\begin{align*}\n& \\displaystyle \\bullet\\quad\\frac{B_{p+1}}{p+1} \\equiv \\frac{B_2}{2}=\\frac{1}{12} \\pmod{p}\n\\vspace{2mm}\n\\\\\n& \\displaystyle \\bullet\\quad\\frac{B_{p,\\chi_{\\boldsymbol{K}}}}{p}\n\\equiv (1-\\chi_{\\boldsymbol{K}}(p))B_{1,\\chi_{\\boldsymbol{K}}} \n = (1-\\chi_{\\boldsymbol{K}}(p))\\frac{-2h_{\\boldsymbol{K}}}{w_{\\boldsymbol{K}}}\n \\pmod{p}.\n\\end{align*}\nSince $p>3$, $\\chi_{\\boldsymbol{K}}(p)=-1$, and $h_{\\boldsymbol{K}}\\not\\equiv 0\\pmod{p}$, the factor $A$ is a $p$-adic unit.\n\nNext we shall show that, if $\\text{det}(H)\\not\\equiv 0\\pmod{p}$, then the factor\n$$\nB:=\\sum_{03$ is a prime number such that\n$p \\equiv 3 \\pmod{4}$ and $\\boldsymbol{K}=\\mathbb{Q}(\\sqrt{p}\\,i)$. Let $F_{k,\\boldsymbol{K}}$\nbe the Hermitian modular form introduced in Theorem \\ref{Krieg}. Then the modular form\n$F_{\\frac{p+1}{2},\\boldsymbol{K}}$ is a mod $p$ singular Hermitian modular form.\n\\end{Thm}\n\\begin{proof}\nFrom Theorem \\ref{Krieg}, we have\n$$\na(F_{\\frac{p+1}{2},\\boldsymbol{K}};H)\n=\\frac{(p+1)(p-1)}{B_{\\frac{p+1}{2}}\\cdot B_{\\frac{p-1}{2},\\chi_{\\boldsymbol{K}}}}\n\\sum_{00$. Since the factor of the summand on the right-hand side is rational integer,\nit is sufficient to show\n\\begin{equation}\n\\label{bigstar}\n\\frac{(p+1)(p-1)}{B_{\\frac{p+1}{2}}\\cdot B_{\\frac{p-1}{2},\\chi_{\\boldsymbol{K}}}}\n \\equiv 0 \\pmod{p}.\n\\end{equation}\nWe have the following result:\n\\begin{Lem}\n\\label{lemma3}\nAssume that \n$p>3$, $p \\equiv 3 \\pmod{4}$, and $\\boldsymbol{K}=\\mathbb{Q}(\\sqrt{p}\\,i)$. \nThen we have\n$$\n{\\rm (i)}\\;B_{\\frac{p+1}{2}}\\not\\equiv 0\\pmod{p}\n\\qquad\\qquad\n{\\rm (ii)}\\; p\\cdot B_{\\frac{p-1}{2},\\chi_{\\boldsymbol{K}}} \\equiv -1 \\pmod{p}.\n$$\n\\end{Lem}\n\\begin{proof}\n(i) The statement $B_{\\frac{p+1}{2}}\\not\\equiv 0\\pmod{p}$ can be found in, for example,\nWashington \\cite{Washington}, p.86, Exercise 5.4.\\\\\n(ii)\\; The congruence $p\\cdot B_{\\frac{p-1}{2},\\chi_{\\boldsymbol{K}}} \\equiv -1 \\pmod{p}$\nis a special case of the theorem of von Staudt-Clausen for the generalized Bernoulli\nnumbers. For the proof see Carlitz \\cite{Carlitz}, Theorem 3.\n\\end{proof}\nWe return to the proof of (\\ref{bigstar}). From the above lemma, we have\n$$\nB_{\\frac{p+1}{2}}\\cdot B_{\\frac{p-1}{2},\\chi_{\\boldsymbol{K}}}\\in \\frac{1}{p}\\mathbb{Z}_{(p)}^\\times. 
\n$$\nThis implies (\\ref{bigstar}).\n\\end{proof}\n\\section{Theta series case}\n\\label{Sect.4}\nIn this section we construct Hermitian modular forms in the mod $p$ kernel of\ntheta operator defined from theta series.\n\nFor a positive Hermitian lattice $\\mathcal{L}$ of rank $r$, we associate the\n{\\it Hermitian theta series}\n$$\n\\vartheta_{\\mathcal{L}}^{(n)}(Z)=\\vartheta_H^{(n)}(Z)=\n\\sum_{X\\in\\mathcal{O}_{\\boldsymbol{K}}^{(r,n)}}\\text{exp}(\\pi i\\text{tr}({}^t\\overline{X}HXZ)),\n\\quad Z\\in\\mathbb{H}_n,\n$$\nwhere $H$ is the corresponding Gram matrix of $\\mathcal{L}$.\n\nIn the rest of this paper, we assume that\n$$\n\\boldsymbol{K}=\\mathbb{Q}(i).\n$$\n\\subsection{Theta series for unimodular Hermitian lattice of rank 8}\n\\label{rank8}\nWe denote by $\\mathcal{U}_r(\\mathcal{O}_{\\boldsymbol{K}})=\\mathcal{U}_r(\\mathbb{Z}[i])$\nthe set of even integral, positive definite unimodular Hermitian matrices of rank $r$ over\n$\\mathcal{O}_{\\boldsymbol{K}}=\\mathbb{Z}[i]$. It is known that $4\\mid r$.We denote by \n$\\widetilde{\\mathcal{U}}_r(\\mathcal{O}_{\\boldsymbol{K}})$ the set of unimodular equivalence\nclasses. It is also known that \n$|\\widetilde{\\mathcal{U}}_8(\\mathcal{O}_{\\boldsymbol{K}})|=3$.\nWe fix a set of representatives $\\{ H_1,H_2,H_3\\}$, in which $H_i$ have the following\ndata:\n$$\n|\\text{Aut}(H_1)|=2^{15}\\cdot 3^5\\cdot 5^2\\cdot 7,\\quad\n|\\text{Aut}(H_2)|=2^{22}\\cdot 3^2\\cdot 5\\cdot 7,\\quad\n|\\text{Aut}(H_3)|=2^{21}\\cdot 3^4\\cdot 5^2,\n$$\n(cf. http:\/\/www.math.uni-sb.de\/ag\/schulze\/Hermitian-lattices\/).\n\nThe following identity is a special case of Siegel's main formula for Hermitian\nforms:\n\\begin{equation}\n\\label{mainformula}\n\\frac{\\vartheta_{H_1}^{(2)}}{2^{15}\\cdot 3^5\\cdot 5^2\\cdot 7}+\n\\frac{\\vartheta_{H_2}^{(2)}}{2^{22}\\cdot 3^2\\cdot 5\\cdot 7}+\n\\frac{\\vartheta_{H_3}^{(2)}}{2^{21}\\cdot 3^4\\cdot 5^2}\n=\\frac{61}{2^{22}\\cdot 3^5\\cdot 5\\cdot 7}E_{8,\\boldsymbol{K}}^{(2)},\n\\end{equation}\nwhere $\\frac{61}{2^{22}\\cdot 3^5\\cdot 5\\cdot 7}$ is the mass of the genus of the\nunimodular Hermitian lattices in rank 8. The space\n$M_8(\\Gamma^2(\\mathcal{O}_{\\boldsymbol{K}}),\\nu_8)^{\\text{sym}}=\nM_8(\\Gamma^2(\\mathcal{O}_{\\boldsymbol{K}}))^{\\text{sym}}$ is spanned by \n$(E_{4,\\boldsymbol{K}}^{(2)})^2$ and $\\chi_8$ (cf. 
Theorem \\ref{structure}).\n\\begin{Lem}\n\\label{span}\nWe have the following identities\n$$\n\\vartheta_{H_1}^{(2)}=(E_{4,\\boldsymbol{K}}^{(2)})^2-5760\\chi_8,\\;\\;\n\\vartheta_{H_2}^{(2)}=(E_{4,\\boldsymbol{K}}^{(2)})^2-3072\\chi_8,\\;\\;\n\\vartheta_{H_1}^{(2)}=(E_{4,\\boldsymbol{K}}^{(2)})^2.\n$$\n\\end{Lem}\n\\begin{proof}\nThe above identities come from the following data:\n\\begin{align*}\n& \\begin{cases}\n a(\\vartheta_{H_1}^{(2)};[1,0,1])=120960,\\\\\n a(\\vartheta_{H_1}^{(2)};[1,1+i,1])=0,\n \\end{cases}\n\\begin{cases}\n a(\\vartheta_{H_2}^{(2)};[1,0,1])=131712,\\\\\n a(\\vartheta_{H_2}^{(2)};[1,1+i,1])=2688,\n \\end{cases}\n\\\\\n& \\begin{cases}\n a(\\vartheta_{H_3}^{(2)};[1,0,1])=144000,\\\\\n a(\\vartheta_{H_3}^{(2)};[1,1+i,1])=5760.\n \\end{cases}\n\\end{align*}\nThe calculations of the Fourier coefficients were done by Till Dieckmann.\n\\end{proof}\n\\begin{Thm}\n\\label{mod7}\nWe have $\\vartheta_{H_i}^{(2)}\\in M_8(\\Gamma^2(\\mathcal{O}_{\\boldsymbol{K}}))_{\\mathbb{Z}}^{\\text{sym}}$ ($i=1,2,3$) and\n$$\n\\varTheta(\\vartheta_{H_1}^{(2)}) \\equiv \\varTheta(\\vartheta_{H_2}^{(2)})\n\\equiv 0 \\pmod{7}.\n$$\n\\end{Thm}\n\\begin{proof}\nThe first statement is a consequence of the unimodularity of $H_i$.\nBy (\\ref{mainformula}), we see that\n$$\n4\\vartheta_{H_1}^{(2)} \\equiv 5\\cdot 61E_{8,\\boldsymbol{K}}^{(2)} \\pmod{7}.\n$$\nMoreover by Lemma \\ref{span}, we have\n$$\n\\vartheta_{H_1}^{(2)} \\equiv \\vartheta_{H_2}^{(2)} \\pmod{7}.\n$$\nSince $\\varTheta(E_{8,\\boldsymbol{K}}^{(2)}) \\equiv 0 \\pmod{7}$ (cf. Corollary \\ref{cor1}),\nwe obtain\n$$\n\\varTheta(\\vartheta_{H_1}^{(2)}) \\equiv \\varTheta(\\vartheta_{H_2}^{(2)})\n\\equiv 0 \\pmod{7}.\n$$\n\\end{proof}\n\\subsection{Theta series for unimodular Hermitian lattice of rank 12}\n\\label{HermitianLeech}\nIt is known that there is a unimodular Hermitian lattice \n$\\mathcal{L}_{\\mathbb{C}}\\in\\mathcal{U}_{12}(\\mathbb{Z}[i])$\nwhich does not have any vector of length one.\nThe transfer of this lattice to $\\mathbb{Z}$ is the Leech lattice\n(cf. http:\/\/www.math.uni-sb.de\/ag\/schulze\/Hermitian-lattices\/).\nFor this lattice, we have\n$$\n\\vartheta_{\\mathcal{L}_{\\mathbb{C}}}^{(2)}\\mid_{\\mathbb{S}_2} =\n\\vartheta_{\\text{Leech}}^{(2)},\n$$\nnamely, the restriction of Hermitian theta series $\\vartheta_{\\mathcal{L}_{\\mathbb{C}}}^{(2)}$\nto the Siegel upper half-space coincides with the Siegel theta series\n$\\vartheta_{\\text{Leech}}^{(2)}$ attached to the Leech lattice. 
\n\n\nWe fix the lattice\n$\\mathcal{L}_{\\mathbb{C}}$ and call it here the {\\it Hermitian Leech lattice}.\n\\begin{Thm}\n\\label{mod11cong}\nLet $\\mathcal{L}_{\\mathbb{C}}$ be the Hermitian Leech lattice.\nThe attached Hermitian theta series \n$\\vartheta_{\\mathcal{L}_{\\mathbb{C}}}^{(2)}\n\\in M_{12}(\\Gamma^2(\\mathcal{O}_{\\boldsymbol{K}}))_{\\mathbb{Z}}^{\\text{sym}}$\nsatisfies the following congruence relations.\n\\vspace{2mm}\n\\\\\n{\\rm (1)}\\; $\\varTheta(\\vartheta_{\\mathcal{L}_{\\mathbb{C}}}^{(2)}) \\equiv 0 \\pmod{11}$.\n\\vspace{2mm}\n\\\\\n{\\rm (2)}\\; $\\vartheta_{\\mathcal{L}_{\\mathbb{C}}}^{(2)} \\equiv 1 \\pmod{13}$.\n\\end{Thm}\n\\begin{proof}\n(1)\\, By Theorem \\ref{thetamodp}, there is a Hermitian cusp form\n$$\nG\\in S_{12+11+1}(\\Gamma^2(\\mathcal{O}_{\\boldsymbol{K}})_{\\mathbb{Z}_{(11)}}^{\\text{sym}}\n=S_{24}(\\Gamma^2(\\mathcal{O}_{\\boldsymbol{K}})_{\\mathbb{Z}_{(11)}}^{\\text{sym}}\n$$ \nsuch that\n$$\n\\varTheta(\\vartheta_{\\mathcal{L}_{\\mathbb{C}}}^{(2)}) \\equiv G \\pmod{11}.\n$$\nBy Table 2, we see that\n$$\na(\\varTheta(\\vartheta_{\\mathcal{L}_{\\mathbb{C}}}^{(2)});H) \\equiv\na(G;H) \\pmod{11}\n$$\nfor any $H\\in\\Lambda_2(\\boldsymbol{K})$ with $\\text{rank}(H)=2$ and\n$\\text{tr}(H)\\leq\\displaystyle 2\\left[\\frac{24}{8}\\right]=6$. Applying Sturm's\nbound (Corollary \\ref{sturmcorollary}), we obtain\n$$\n\\varTheta(\\vartheta_{\\mathcal{L}_{\\mathbb{C}}}^{(2)}) \\equiv G \\equiv 0 \\pmod{11}.\n$$\n(2)\\, We can confirm\n$$\n\\Phi(\\vartheta_{\\mathcal{L}_{\\mathbb{C}}}^{(2)})\n=(E_4^{(1)})^3-720\\Delta \\equiv E_{12}^{(1)} \\equiv 1 \\pmod{13},\n$$\nwhere $\\Phi$ is the Siegel operator and \n$\\Delta=\\frac{1}{1728}((E_4^{(1)})^3-(E_6^{(1)})^2)$ is Ramanujan's weight 12 cusp\nform for $SL_2(\\mathbb{Z})$. This shows that\n$$\na(\\varTheta(\\vartheta_{\\mathcal{L}_{\\mathbb{C}}}^{(2)});H) \\equiv\na(\\varTheta(E_{12,\\boldsymbol{K}}^{(2)});H) \\pmod{13}\n$$\nfor any $H\\in\\Lambda_2(\\boldsymbol{K})$ with $\\text{rank}(H)\\leq 1$. Considering\nthis fact and Table 2, we see that this congruence relation holds for any\n $H\\in\\Lambda_2(\\boldsymbol{K})$ with $\\text{rank}(H)\\leq 2$. Applying Sturm's\nbound again, we obtain\n$$\n\\vartheta_{\\mathcal{L}_{\\mathbb{C}}}^{(2)} \\equiv E_{12,\\boldsymbol{K}}^{(2)}\n\\equiv 1 \\pmod{13}.\n$$\n\\end{proof}\nIn this section,\nwe constructed Hermitian modular forms in the mod $p$ kernel of the theta\noperator by theta series attached to unimodular Hermitian lattices. \nBy the results above,\nit is expected that, if $p$ is a prime number such that $p \\equiv 3 \\pmod{4}$,\nthen there is a unimodular lattice $\\mathcal{L}$ of rank $p+1$ such that\n$$\n\\varTheta(\\vartheta_{\\mathcal{L}}^{(2)}) \\equiv 0 \\pmod{p}.\n$$\n\\subsection{Theta constants}\n\\label{thetaconstant}\nIn the previous sections, we gave examples of Hermitian modular form in the mod $p$\nkernel of the theta operator. 
In this section we give another example.\n\nThe Hermitian\n{\\it theta constant} on $\\mathbb{H}_2$ with characteristic $\\boldsymbol{m}$\nover $\\boldsymbol{K}=\\mathbb{Q}(i)$ is defined by\n\\begin{align*}\n\\theta_{\\boldsymbol{m}}(Z) &=\\theta(Z;\\boldsymbol{a},\\boldsymbol{b})\\\\\n&:=\\sum_{\\boldsymbol{g}\\in\\mathbb{Z}[i]^{(2,1)}}\\text{exp}\n\\left[\n\\frac{1}{2}\\left(Z\\left\\{\\boldsymbol{g}+\\frac{1+i}{2}\\boldsymbol{a}\\right\\}\n+2\\text{Re}\\frac{1+i}{2}{}^t\\boldsymbol{b}\\boldsymbol{a}\\right)\n\\right],\n\\quad Z\\in \\mathbb{H}_2,\n\\end{align*}\nwhere $\\boldsymbol{m}=\\binom{\\boldsymbol{a}}{\\boldsymbol{b}}$, \n$\\boldsymbol{a},\\,\\boldsymbol{b}\\in\\mathbb{Z}[i]^{(2,1)}$, $A\\{ B\\}={}^t\\overline{B}AB$.\nDenote by $\\mathcal{E}$ the set of even characteristic of degree 2 mod 2\n(cf. \\cite{Freitag}), namely, \n$$\n\\mathcal{E}=\n\\left\\{\n\\begin{pmatrix}0\\\\0\\\\0\\\\0\\end{pmatrix},\n\\begin{pmatrix}0\\\\0\\\\0\\\\1\\end{pmatrix},\n\\begin{pmatrix}0\\\\0\\\\1\\\\0\\end{pmatrix},\n\\begin{pmatrix}0\\\\0\\\\1\\\\1\\end{pmatrix},\n\\begin{pmatrix}0\\\\1\\\\0\\\\0\\end{pmatrix},\n\\begin{pmatrix}0\\\\1\\\\1\\\\0\\end{pmatrix},\n\\begin{pmatrix}1\\\\0\\\\0\\\\0\\end{pmatrix},\n\\begin{pmatrix}1\\\\0\\\\0\\\\1\\end{pmatrix},\n\\begin{pmatrix}1\\\\1\\\\0\\\\0\\end{pmatrix},\n\\begin{pmatrix}1\\\\1\\\\1\\\\1\\end{pmatrix}\n\\right\\}\n$$\n\\begin{Thm}\n{\\rm (Freitag \\cite{Freitag})}\\; Set\n$$\n\\psi_{4k}(Z):=\\frac{1}{4}\\sum_{\\boldsymbol{m}\\in\\mathcal{E}}\\theta_{\\boldsymbol{m}}^{4k}(Z),\n\\quad (k\\in\\mathbb{N}),\n$$\nThen\n$$\n\\psi_{4k}\\in M_{4k}(\\Gamma^2(\\mathcal{O}_{\\boldsymbol{K}}))_{\\mathbb{Z}}^{\\text{sym}}.\n$$\n\\end{Thm}\n\\begin{Rem}\n(1)\\, $\\psi_4=E_{4,\\boldsymbol{K}}^{(2)}$.\\\\\n(2)\\,$F_{10}=2^{-12}\\displaystyle\\prod_{\\boldsymbol{m}\\in\\mathcal{E}}\\theta_{\\boldsymbol{m}}$,\nwhere $F_{10}$ is a Hermitian modular form given in $\\S$ \\ref{Sect.2.2.2}.\n\\end{Rem}\n\\begin{Thm}\n\\label{thetaconstantth}\nThe following congruence relations holds.\n\\vspace{2mm}\n\\\\\n{\\rm (1)}\\; $\\varTheta (\\psi_8) \\equiv 0\\pmod{7}$.\n\\vspace{2mm}\n\\\\\n{\\rm (2)}\\; $\\varTheta (\\psi_{12}) \\equiv 0 \\pmod{11}$.\n\\end{Thm}\n\\begin{proof}\nThe form $\\psi_8$ can be expressed as\n$$\n\\psi_8=\\frac{14}{75}(E_{4,\\boldsymbol{K}}^{(2)})^2+\\frac{61}{75}E_{8,\\boldsymbol{K}}^{(2)}.\n$$\nSince $\\varTheta(E_{8,\\boldsymbol{K}}^{(2)}) \\equiv 0 \\pmod{7}$, we obtain\n $\\varTheta (\\psi_8) \\equiv 0\\pmod{7}$.\\\\\n(2)\\, We have the following expression.\n\\begin{align*}\n& \\vartheta_{\\mathcal{L}_{\\mathbb{C}}}^{(2)}\n =a_1\\psi_{12}+a_2(E_{4,\\boldsymbol{K}}^{(2)})^3\n+a_3E_{4,\\boldsymbol{K}}^{(2)}E_{8,\\boldsymbol{K}}\n+a_4E_{12,\\boldsymbol{K}}^{(2)},\\\\\n&\na_1=\\frac{1470105}{8511808},\\quad\na_2=\\frac{167218051}{638385600},\\quad\na_3=-\\frac{147340193}{212795200},\\\\\n&\na_4=\\frac{802930253}{638385600},\n\\end{align*}\nwhere $\\mathcal{L}_{\\mathbb{C}}$ is the Hermitian Leech lattice as before.\nIt should be noted that $a_2 \\equiv a_3 \\equiv 0\\pmod{11}$, and $a_1\\not\\equiv 0\\pmod{11}$.\nSince \n$\\varTheta (\\vartheta_{\\mathcal{L}_{\\mathbb{C}}}^{(2)}) \\equiv \n\\varTheta(E_{12,\\boldsymbol{K}}^{(2)}) \\equiv 0\\pmod{11}$, we obtain\n$$\n\\varTheta (\\psi_{12}) \\equiv 0 \\pmod{11}.\n$$\n\\end{proof}\n\\section{Tables}\nIn this section, we summarize tables which are needed in the proof of our statements\nin the previous sections.\n\\subsection{Theta series for rank 8 unimodular Hermitian lattices}\nWe introduced Hermitian 
theta series $\\vartheta_{H_i}^{(2)}$ in $\\S$ \\ref{rank8}.\nThe following table gives some examples of Fouirer coefficients of $\\vartheta_{H_i}^{(2)}$\n$(i=1,2)$ and $\\vartheta_{[2,2,4]}^{(2)}$.\n\\begin{table*}[hbtp]\n\\caption{Fourier coefficients of hermitian theta series (rank 8)}\n\\label{tab.1}\n\\begin{center}\n\\begin{tabular}{llll} \\hline\n$H$ & 4det$(H)$ & $a(\\vartheta_{H_1}^{(2)};H)$ & $a(\\vartheta_{H_2}^{(2)};H)$ \\\\ \\hline\n$[0,0,0]$ & $0$ & $1$ & $1$ \\\\ \\hline\n$[1,0,0]$ & $0$ & $480$ & $480$ \\\\ \\hline\n$[2,0,0]$ & $0$ & $61920$ & $61920$ \\\\ \\hline\n$[3,0,0]$ & $0$ & $1050240$ & $1050240$ \\\\ \\hline\n$[4,0,0]$ & $0$ & $7926240$ & $7926240$ \\\\ \\hline\n$[1,1+i,1]$ & $2$ & $0$ & $2688=2^7\\cdot 3\\cdot 7$ \\\\ \\hline\n$[1,1,1]$ & $3$ & $26880=2^8\\cdot 3\\cdot 5\\cdot 7$& $21504=2^{10}\\cdot 3\\cdot 7$ \\\\ \\hline\n$[2,2,1]$ & $4$ & $120960=2^7\\cdot 3^3\\cdot 5\\cdot 7$& $131712=2^7\\cdot 3\\cdot 7^3$ \\\\ \\hline\n$[2,1+i,1]$ & $6$ & $1505280=2^{11}\\cdot 3\\cdot 5\\cdot 7^2$ & $1483776=2^{10}\\cdot 3^2\\cdot 7\\cdot 23$ \\\\ \\hline\n$[3,2+i,1]$ & $7$ & $3663360=2^9\\cdot 3^3\\cdot 5\\cdot 53$ & $3717120=2^{11}\\cdot 3\\cdot 5\\cdot 11^2$ \\\\ \\hline\n$[2,0,1]$ & $8$ & $8346240=2^7\\cdot 3^4\\cdot 5\\cdot 7\\cdot 23$ & $8217216=2^7\\cdot 3^2\\cdot 7\\cdot 1019$\\\\ \\hline\n$[2,2+2i,2]$ & $8$ & $8346240=2^7\\cdot 3^4\\cdot 5\\cdot 7\\cdot 23$ & $8561280=2^7\\cdot 3\\cdot 5\\cdot 7^3\\cdot 13$\\\\ \\hline\n$[3,1+i,1]$ & $10$ & $30965760=2^{15}\\cdot 3^3\\cdot 5\\cdot 7$ & $30992640=2^8\\cdot 3\\cdot 5\\cdot 7\\cdot 1153$ \\\\ \\hline\n$[2,2+i,2]$ & $11$& $55883520=2^8\\cdot 3^4\\cdot 5\\cdot 7^2\\cdot 11$ & $55716864=2^{10}\\cdot 3\\cdot 7\\cdot 2591$ \\\\ \\hline\n$[4,2+i,1]$ & $11$& $55883520=2^8\\cdot 3^4\\cdot 5\\cdot 7^2\\cdot 11$ & $55716864=2^{10}\\cdot 3\\cdot 7\\cdot 2591$ \\\\ \\hline\n$[3,0,1]$ & $12$ & $67751040=2^7\\cdot 3\\cdot 5\\cdot 7\\cdot 71^2$ & $68353152=2^7\\cdot 3\\cdot 7\\cdot 59\\cdot 431$ \\\\ \\hline\n$[2,2,2]$ & $12$ & $96875520=2^{10}\\cdot 3\\cdot 5\\cdot 7\\cdot 17\\cdot 53$& $96789504=2^{10}\\cdot 3\\cdot 7^2\\cdot 643$ \\\\ \\hline\n$[2,1+i,2]$ & $14$ & $240537600=2^{12}\\cdot 3^4\\cdot 5^2\\cdot 29$& $240752640=2^{11}\\cdot 3\\cdot 5\\cdot 17\\cdot 461$ \\\\ \\hline\n$[4,1+i,1]$ & $14$ & $240537600=2^{12}\\cdot 3^4\\cdot 5^2\\cdot 29$& $240752640=2^{11}\\cdot 3\\cdot 5\\cdot 17\\cdot 461$ \\\\ \\hline\n$[2,1,2]$ & $15$& $358095360=2^9\\cdot 3\\cdot 5\\cdot 7\\cdot 6661$ & $358041600=2^{11}\\cdot 3^3\\cdot 5^2\\cdot 7\\cdot 37$ \\\\ \\hline\n$[4,1,1]$ & $15$&$358095360=2^9\\cdot 3\\cdot 5\\cdot 7\\cdot 6661$ & $358041600=2^{11}\\cdot 3^3\\cdot 5^2\\cdot 7\\cdot 37$ \\\\ \\hline\n$[2,0,2]$ & $16$ & $544440960=2^7\\cdot 3^3\\cdot 5\\cdot 7^2\\cdot 643$& $544612992=2^7\\cdot 3\\cdot 7\\cdot 11\\cdot 113\\cdot 163$ \\\\ \\hline\n$[4,0,1]$ & $16$ & $528958080=2^7\\cdot 3^3\\cdot 5\\cdot 7\\cdot 4373$ & $527753856=2^7\\cdot 3\\cdot 7\\cdot 196337$ \\\\ \\hline \n\\end{tabular}\n\\end{center}\n\\end{table*}\n\\newpage\n\\subsection{Theta series for rank 12 unimodular Hermitian lattice}\nThe table deals with the Fourier coefficients for theta series\n$\\vartheta_{\\mathcal{L}_{\\mathbb{C}}}^{(2)}$ where $\\mathcal{L}_{\\mathbb{C}}$\nis the Hermitian Leech lattices introduced in $\\S$ \\ref{HermitianLeech}.\n\\begin{table*}[hbtp]\n\\caption{Non-zero Fourier coefficients $a(\\vartheta_{\\mathcal{L}}^{(2)};H)$ with rank$(H)=2$ and tr$(H)\\leq 6$}\n\\label{tab.2}\n\\begin{center}\n\\begin{tabular}{lll} \\hline\n$H$ & 4det$(H)$ & 
$a(\\vartheta_{\\mathcal{L}}^{(2)};H)$ \\\\ \\hline\n$[2,0,2]$ & $16$ & $8484315840=2^6\\cdot 3^5\\cdot 5\\cdot 7\\cdot 11\\cdot 13\\cdot 109$ \\\\ \\hline\n$[2,1,2]$ & $15$ & $4428103680=2^{15}\\cdot 3^3\\cdot 5\\cdot 7\\cdot 11\\cdot 13$ \\\\ \\hline\n$[2,2,2]$ & $12$ & $484323840=2^9\\cdot 3^3\\cdot 5\\cdot 7^2\\cdot 11\\cdot 13$ \\\\ \\hline\n$[2,1+i,2]$ & $14$ & $2214051840=2^{14}\\cdot 3^3\\cdot 5\\cdot 7\\cdot 11\\cdot 13$ \\\\ \\hline\n$[2,2+i,2]$ & $11$ & $201277440=2^{14}\\cdot 3^3\\cdot 5\\cdot 7\\cdot 13$ \\\\ \\hline\n$[2,2+2i,2]$ & $8$ & $8648640=2^6 \\cdot 3^3\\cdot 5\\cdot 7\\cdot 11\\cdot 13$ \\\\ \\hline\n$[2,0,3]$ & $24$ & $480449249280=2^{14}\\cdot 3^3\\cdot 5\\cdot 7^2\\cdot 11\\cdot 13\\cdot 31$ \\\\ \\hline\n$[2,1,3]$ & $23$ & $314395361280=2^{15}\\cdot 3^3\\cdot 5\\cdot 7\\cdot 11\\cdot 13\\cdot 71$ \\\\ \\hline\n$[2,2,3]$ & $20$ & $77491814400=2^{14}\\cdot 3^3\\cdot 5^2\\cdot 7^2\\cdot 11\\cdot 13$ \\\\ \\hline\n$[2,1+i,3]$ & $22$ & $201679994880=2^{15}\\cdot 3^4\\cdot 5\\cdot 7\\cdot 13\\cdot 167$ \\\\ \\hline\n$[2,2+i,3]$ & $19$ & $46495088640=2^{14}\\cdot 3^4\\cdot 5\\cdot 7^2\\cdot 11\\cdot 13$ \\\\ \\hline\n$[2,2+2i,3]$ & $16$ & $8302694400=2^{12}\\cdot 3^4\\cdot 5^2\\cdot 7\\cdot 11\\cdot 13$ \\\\ \\hline\n$[2,0,4]$ & $32$ & $8567040081600=2^6 \\cdot 3^3\\cdot 5^2\\cdot 7\\cdot 11\\cdot 13\\cdot 19\\cdot 10427$ \\\\ \\hline\n$[2,1,4]$ & $31$ & $6230341877760=2^{15}\\cdot 3^4\\cdot 5\\cdot 7^2\\cdot 11\\cdot 13\\cdot 67$ \\\\ \\hline\n$[2,2,4]$ & $28$ & $2254596664320=2^{10}\\cdot 3^4\\cdot 5\\cdot 7\\cdot 11\\cdot 13\\cdot 5431$ \\\\ \\hline\n$[2,4,4]$ & $16$ & $8484315840=2^6 \\cdot 3^5\\cdot 5\\cdot 7\\cdot 11\\cdot 13\\cdot 109$ \\\\ \\hline\n$[2,1+i,4]$ & $30$ & $4487883079680=2^{14}\\cdot 3^3\\cdot 5\\cdot 7\\cdot 11\\cdot 13\\cdot 2027$ \\\\ \\hline\n$[2,2+i,4]$ & $27$ & $1565334650880=2^{14}\\cdot 3^3\\cdot 5\\cdot 7^2\\cdot 11\\cdot 13\\cdot 101$ \\\\ \\hline\n$[2,2+2i,4]$ & $24$ & $482870868480=2^9 \\cdot 3^3\\cdot 5\\cdot 7^2\\cdot 11\\cdot 13\\cdot 997$ \\\\ \\hline\n$[3,0,3]$ & $36$ & $27374536949760=2^{16}\\cdot 3^3\\cdot 5\\cdot 7\\cdot 11^2\\cdot 13\\cdot 281$ \\\\ \\hline\n$[3,1,3]$ & $35$ & $20648247459840=2^{15}\\cdot 3^3\\cdot 5\\cdot 7\\cdot 11\\cdot 13\\cdot 4663$ \\\\ \\hline\n$[3,2,3]$ & $32$ & $8431662919680=2^{12}\\cdot 3^3\\cdot 5\\cdot 7\\cdot 11\\cdot 13\\cdot 15233$ \\\\ \\hline\n$[3,3,3]$ & $27$ & $1539504046080=2^{15}\\cdot 3^2\\cdot 5\\cdot 7^2\\cdot 11\\cdot 13\\cdot 149$ \\\\ \\hline\n$[3,1+i,3]$ & $34$ & $15436369428480=2^{16}\\cdot 3^4\\cdot 5\\cdot 7^2\\cdot 11\\cdot 13\\cdot 83$ \\\\ \\hline\n$[3,2+i,3]$ & $31$ & $6137351700480=2^{16}\\cdot 3^5\\cdot 5\\cdot 7^2\\cdot 11^2\\cdot 13$ \\\\ \\hline\n$[3,3+i,3]$ & $26$ & $1053888675840=2^{16}\\cdot 3^3\\cdot 5\\cdot 7^2\\cdot 11\\cdot 13\\cdot 17$ \\\\ \\hline\n$[3,2+2i,3]$ & $28$ & $2218479943680=2^{15}\\cdot 3^4\\cdot 5\\cdot 7\\cdot 11\\cdot 13\\cdot 167$ \\\\ \\hline\n$[3,3+2i,3]$ & $23$ & $309967257600=2^{16}\\cdot 3^3\\cdot 5^2\\cdot 7^2\\cdot 11\\cdot 13$ \\\\ \\hline\n$[3,3+3i,3]$ & $18$ & $26568622080=2^{16}\\cdot 3^4\\cdot 5\\cdot 7\\cdot 11\\cdot 13$ \\\\ \\hline\n\\end{tabular}\n\\end{center}\n\\end{table*}\nAny non-zero Fourier coefficient $a(\\vartheta_{\\mathcal{L}_{\\mathbb{C}}}^{(2)};H)$ \nwith rank$(H)=2$ and tr$(H)\\leq 6$\ncoincides with one in the above list.\n\nThe calculation is based on the following 
expression:\n$$\n\\vartheta_{\\mathcal{L}_{\\mathbb{C}}}^{(2)}:=\n\\frac{7}{12}(E_{4,\\boldsymbol{K}}^{(2)})^3\n+\\frac{5}{12}(E_{6,\\boldsymbol{K}}^{(2)})^2\n-10080E_{4,\\boldsymbol{K}}^{(2)}\\chi_8-60480F_{12}.\n$$\n\\newpage\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{Introduction}\n\\subsection{Motivation}\nThere is growing interest in combining observational and experimental data to draw causal conclusions \\citep{hartman2015sate,athey2020combining,YangDing2020,Chen2021,Oberst2022,rosenman2020combining}. Experimental data from randomized controlled trials (RCTs) are considered the gold standard for causal inference and can provide unbiased estimates of average treatment effects. However, the scale of the experimental data is usually limited and the trial participants might not represent those in the target cohort. For example, the recruitment criteria for an RCT may prescribe that participants must be less than 65 years old and satisfy certain health criteria, whereas the target population considered for treatment may cover all age groups. This problem is known as the lack of transportability \\citep{pearl2011transportability,rudolph2017robust}, generalizability \\citep{cole2010generalizing,hernan2011compound,dahabreh2019extending}, representativeness \\citep{campbell1957factors} and external validity \\citep{rothwell2005external,westreich2019target}. By contrast, observational data usually has both the scale and the scope desired, but one can never prove that there is no hidden confounding. Any unmeasured confounding in the observational data may lead to a biased estimate of the causal effect. When it comes to estimating the causal effect in the target population, combining obervational and experimental data provides an avenue to exploit the benefits of both.\n\n\n\n\nExisting literature has proposed several methods of integrating RCT and observational data to address the issue of the RCT population not being representative of the target cohort. \\cite{kallus2018removing}, considered the case where the supports do not fully overlap, and proposed a linear correction term to approximate the difference between the causal estimates from observational data and experimental data caused by hidden confounding.\\par\n\n\nSometimes even though the domain of observational data overlaps with the experimental data, sub-populations with certain traits may be over- or under-represented in the RCT compared to the target cohort. This difference can lead to a biased estimate of the average treatment effect, and as a result, the causal conclusion may not be generalizable to the target population. In this case, reweighting the RCT population to make it applicable\nto the target cohort is a common choice of remedy \\citep{hartman2015sate,andrews2017weighting}. In particular, Inverse Probability of Sampling Weighting (IPSW) has been a popular estimator for reweighting \\citep{cole2008constructing,cole2010generalizing,stuart2011use}. In this paper, we base our theoretical results \non the IPSW estimator. \n\n\n\n\n\n\n\n\n\n\n\\subsection{Design Trumps Analysis}\nMost of the existing literature, including those discussed above, focuses on the analysis stage after RCTs are completed, and propose methods to analyse the data as given. This means, the analysis methods, including reweighting through IPSW, are to passively deal with the RCT data as they are. However, the quality of the causal inference is largely predetermined by the data collected. 
`Design trumps analysis' \\citep{rubin2008objective}; a carefully designed experiment benefits the causal inference by far more than the analysis that follows. Instead of marginally improving through analyses, we focus on developing a strategy for the design phase, specifically the selection of RCT participants with different characteristics \n, to fundamentally improve the causal inference.\\par\n\n\nWhen designing an RCT sample to draw causal conclusions on the target cohort, a heuristic strategy that practitioners tend to opt for is to construct the RCT sample that looks exactly like a miniature version of target cohort. For example, suppose that we want to examine the efficacy of a drug on a target population consisting $30\\%$ women and $70\\%$ men. If the budget allows us to recruit 100 RCT participants in total, then the intuition is to recruit exactly $30$ females and $70$ males. This intuition definitely works, yet, is it efficient? We refer to the efficiency of the reweighted causal estimator for the average treatment effect in the target population, and specifically, its variance \\footnote[1]{We note that the efficiency of an unbiased estimator $T$ is formally defined as $e(T) = \\frac{\\mathcal{I}^{-1}(\\theta)}{var(T)}$, that is, the ratio of its lowest possible variance over its actual variance. For our purpose, we do not discuss the behaviour of the Fisher information of the data but rather focus on reducing the variance of the estimators. With slight abuse of terminology, in this paper, when we say that one estimator is more efficient than another, we mean that the variance of the former is lower. Similarly, we say that an RCT sample is more efficient if it eventually leads to an estimator of lower variance.}.\n\nIn fact, we find that RCTs following the exact covariate distribution of the target cohort do not necessarily lead to the most efficient estimates after reweighting. Instead, our result suggests that the optimal covariate allocation of experiment samples is the target cohort distribution adjusted by the conditional variability of potential outcomes. Intuitively, this means that an optimal strategy is to sample more from the segments where the causal effect is more volatile or uncertain, even if they do not make up a large proportion of the target cohort. \n\n\n\n\n\n\\subsection{Contributions}\nIn this work, we focus on the common practice of generalizing the causal conclusions from an RCT to a target cohort. We aim at fundamentally improving the estimation efficiency by improving the selection of individuals into the trial, that is, the allocation of a certain number of places in the RCT to individuals of certain characteristics. We derive the optimal covariate allocation that minimizes the variance of the causal estimate of the target cohort. Practitioners can use this optimal allocation as a guide when they decide `who' to recruit for the trial. We also formulate a deviation metric that quantifies how far a given RCT allocation is from optimal, and practitioners can use this metric to decide when they are presented with several candidate RCT allocations to choose from.\\par\nWe develop variations of the main results to cater for various practical scenarios such as where the total number of participants in the trial is fixed, or the total recruitment cost is fixed while unit costs can differ, or with different precision requirements: best overall precision, equal segment precision or somewhere in between. 
In this paper, we provide practitioners with a clear strategy and versatile tools to select the most efficient RCT samples. \n\n\n\n\\subsection{Outline} \nThe remainder of this paper is organized as follows: In Section~\\ref{sec:1setup}, we introduce the problem setting and notation, state the main assumptions, and provide more details on the IPSW estimator that we consider. In Section~\\ref{sec:2res}, we derive the optimal covariate allocation for RCT samples to improve estimation efficiency, propose a deviation metric to assess candidate experimental designs, and illustrate how this metric influences estimation efficiency. Section~\\ref{sec:3estimate} provides an estimate of the optimal covariate allocation and the corresponding assumptions to ensure consistency. Section~\\ref{sec:4pras} extends the main results and proposes design strategies under other practical scenarios such as heterogeneous unit costs and equal-precision requirements. In Section~\\ref{sec:5numerical}, we use two numerical studies, a synthetic simulation and a semi-synthetic simulation with real-world data, to corroborate our theoretical results.\n\n\n\n\n\\section{Setup, Assumptions and Estimators} \\label{sec:1setup}\n\n\\subsection{Problem Setup and Assumptions}\nIn this paper, we base our notation and assumptions on the potential outcome framework \\citep{rubin1974estimating}. We assume that we have two datasets: an RCT dataset and an observational dataset. We also make the assumption that the target cohort of interest is contained in the observational data. \\par\nDefine $S \\in \\{0,1\\}$ as the sample indicator, where $s = 1$ indicates membership of the experimental data and $s = 0$ the target cohort, and $T \\in \\{0,1\\}$ as the treatment indicator, where $t = 1$ indicates treatment and $t = 0$ indicates control. Let $Y_{is}^{(t)}$ denote the potential outcome for a unit $i$ assigned to data set $s$ and treatment $t$. We define $X$ as a set of observable pre-treatment variables, which can consist of discrete and\/or continuous variables. Let $n_0$, $n_1$, $n = n_0 + n_1$ denote the number of units in the target cohort, RCT, and the combined dataset, respectively. We use $f_1(x)$ and $f_0(x)$ to denote the distribution of $X$ in the RCT population and target cohort, respectively. 
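As a toy illustration of this notation~(our own, with a single binary covariate and made-up proportions), the two samples and the empirical covariate distributions $f_1$ and $f_0$ could be generated as follows.\n\\begin{verbatim}\nimport numpy as np\nimport pandas as pd\n\nrng = np.random.default_rng(0)\nn1, n0 = 200, 1000  # trial size and target-cohort size\nrct    = pd.DataFrame({"S": 1, "X": rng.choice([0, 1], n1, p=[0.5, 0.5])})\ntarget = pd.DataFrame({"S": 0, "X": rng.choice([0, 1], n0, p=[0.3, 0.7])})\ndata = pd.concat([rct, target], ignore_index=True)\n\n# empirical covariate distributions f_1(x) and f_0(x)\nf1 = data.loc[data.S == 1, "X"].value_counts(normalize=True)\nf0 = data.loc[data.S == 0, "X"].value_counts(normalize=True)\n\\end{verbatim}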
\n\nThe causal quantity of interest here is the average treatment effect (ATE) on the target population, denoted by $\\tau$.\n\\begin{dfn} (ATE on target cohort)\n$$\n \\tau := \\mathbb{E} \\left [Y^{(1)} - Y^{(0)} \\mid S = 0 \\right].\n$$\n\\end{dfn}\nWe also define the CATE on the trial population, denoted by $\\tau(x)$.\n\\begin{dfn} (CATE on trial population)\n$$\n \\tau(x) := \\mathbb{E} \\left [Y^{(1)} - Y^{(0)} \\mid X = x, S = 1 \\right].\n$$\n\\end{dfn}\n\n\n\nTo ensure an unbiased estimator of the ATE on the target population after reweighting the estimates from the RCT, we need to make several standard assumptions.\n\\begin{assumption}(Identifiability of CATE in the RCT data)\n\\label{assump::identifiability1}\nFor all the observations in the RCT data, we assume the following conditions hold.\n\\begin{itemize}\n \\item[(i)] Consistency: $Y_{i} = Y_{i1}^{(t)}$ when $T=t$ and $S=1$;\n \\item[(ii)] Ignorability: $Y_{i}^{(t)} \\indep T \\mid (X, S =1)$;\n \\item[(iii)] Positivity: $0 < \\mathbb{P}(T=t \\mid X, S =1) < 1$ for all $t \\in \\{0,1\\}$.\n\\end{itemize}\n\\end{assumption}\nThe ignorability condition assumes that the experimental data is unconfounded, and the positivity condition is guaranteed to hold in conditionally randomized experiments. The ignorability and positivity assumptions combined are also referred to as strong ignorability. Under Assumption~\\ref{assump::identifiability1}, the causal effect conditioned on $X = x$ in the experimental sample can be estimated without bias using:\n\\begin{eqnarray*}\n\\hat \\tau(x) &=& \\frac{ \\sum_{S_i=1,X_i = x} \\frac{T_i Y_i}{e(x)} - \\frac{(1-T_i) Y_i }{ 1-e(x)}}{\\sum_{S_i=1,X_i = x} 1},\n\\end{eqnarray*}\nwhere $e(x) = \\mathbb{P}(T=1 \\mid X=x, S=1)$ is the probability of treatment assignment in the experimental sample. This estimator is also known as the Horvitz-Thompson estimator \\citep{horvitz1952generalization}, on which we provide more details later in this section.\n\nTo make sure that we can `transport' the effect from the experimental data to the target cohort, we make the following transportability assumption.\n\\begin{assumption}(Transportability) \n\\label{assump::transport}\n$Y^{(t)} \\indep S \\mid (X, T=t)$.\n\\end{assumption}\nAssumption~\\ref{assump::transport} can be interpreted from several perspectives, as elaborated in \\cite{hernan2010causal}. First, it assumes that all the effect modifiers are captured by the set of observable covariates $X$. Second, it also requires that the treatment $T$ stays the same across the two populations. If the assigned treatment differs between the study population and the target population, then the magnitude of the causal effect of treatment will differ too. Lastly, the transportability assumption prescribes that there is no interference across the two populations. That is, treating one individual in one population does not interfere with the outcome of individuals in the other population.\\par\n\n\nFurthermore, we require that the trial population fully overlaps with the target cohort, so that we can reweight the CATE in the experimental sample to estimate the ATE in the target cohort. That is, for each individual in the target cohort, we want to make sure that we can find a comparable counterpart in the experimental sample with the same characteristics. 
\n\n\\begin{assumption}\n(Positivity of trial participation) \n\\label{assump::positivity}\n$0 < \\mathbb{P}(S=1 \\mid T=t, X = x) < 1$ for all $x \\in \\text{supp}(X\\,|\\, S=0) $.\n\\end{assumption}\nIn Assumption~\\ref{assump::positivity}, $\\text{supp}(X\\,|\\, S=0)$ denotes the support of the target cohort, in other words, the set of values that $X$ can take for individuals in the target cohort. Mathematically, $x \\in \\text{supp}(X\\,|\\, S=0)$ is equivalent to $\\mathbb{P}\\left(\\|X-x\\| \\leq \\delta \\mid S = 0\\right)>0$, $\\forall \\delta>0$. Assumption~\\ref{assump::positivity} requires that the support of the experimental sample includes the target cohort of interest.\n\n\\subsection{Estimators and related work}\nInverse Propensity (IP) weighted estimators were proposed by \\cite{horvitz1952generalization} for surveys in which subjects are sampled with unequal probabilities.\n\\begin{dfn}(Horvitz-Thompson estimator)\n\\begin{align*}\n \\widehat Y^{(t)}_{\\text{HT}} &= \\frac{1}{n_1} \\sum_{i=1}^{n_1} \\frac{I(T_i=t) Y_i}{\\mathbb{P}\\left(T_i = t \\,|\\, X = X_i \\right)},\\\\ \n \\hat{\\tau}_{\\text{HT}} &= \n \\widehat Y^{(1)}_{\\text{HT}} - \n \\widehat Y^{(0)}_{\\text{HT}} = \\frac{1}{n_1} \\sum_{i=1}^{n_1} \\frac{T_i Y_i}{e\\left(X_i\\right)} - \\frac{(1- T_i) Y_i}{1- e\\left(X_i\\right)},\n\\end{align*}\n\\end{dfn}\nwhere the probability of treatment $e(X_i)$ is assumed to be known as we focus on the design phase of experiments. In practice, we can extend the Horvitz-Thompson estimator by replacing $e(x)$ with an estimate $\\hat{e}(x)$, for example the Hajek estimator \\citep{hajek1971comment} and the difference-in-means estimator.\n\n\\begin{dfn} (Augmented Inverse Propensity Weighted estimator)\n\\begin{eqnarray*}\n \\widehat Y^{(1)}_{\\text{AIPW}} &=&\\frac{1}{n_1} \\sum_{i=1}^{n_1}\\left[\\hat{m}^{(1)}\\left(X_i\\right)+\\frac{T_i}{\\hat{e}\\left(X_i\\right)}\\left(Y_i-\\hat{m}^{(1)}\\left(X_i\\right)\\right)\\right], \\\\\n \\widehat Y^{(0)}_{\\text{AIPW}}&=&\\frac{1}{n_1} \\sum_{i=1}^{n_1}\\left[\\hat{m}^{(0)}\\left(X_i\\right)+\\frac{(1-T_i)}{1- \\hat{e}\\left(X_i\\right)}\\left(Y_i-\\hat{m}^{(0)}\\left(X_i\\right)\\right)\\right], \\\\\n \\hat{\\tau}_{\\text{\\text{AIPW}}} & =& \\frac{1}{n_1} \\sum_{i=1}^{n_1} \\frac{T_i(Y_i-\\hat{m}^{(1)}(X_i))}{\\hat{e}\\left(X_i\\right)} - \\frac{(1-T_i)(Y_i-\\hat{m}^{(0)}(X_i)))}{1-\\hat{e}\\left(X_i\\right)} + \\hat{m}^{(1)}(X_i) - \\hat{m}^{(0)}(X_i),\n\\end{eqnarray*}\nwhere $m^{(t)}(x)$ denotes the average outcome of treatment $t$ given covariate $X = x$, that is, $m^{(t)}(x)=\\mathbb{E} [Y \\mid T=t, X=x, S =1]$, and $\\hat{m}^{(t)}(x)$ is an estimate of $m^{(t)}(x)$ \\citep{robins1994correcting}. 
\n\\end{dfn}\nThe estimator $\\hat{\\tau}_{\\text{AIPW}}$ is doubly robust: $\\hat{\\tau}_{\\text{AIPW}}$ is consistent if either (1) $\\hat{e}\\left(X_i\\right)$ is consistent or (2) $\\hat{m}^{(t)}(x)$ is consistent.\n\n\\begin{dfn}(Inverse Propensity Sample Weighted (IPSW) estimator)\n\\begin{eqnarray*}\n\\hat{\\tau}_{\\text{IPSW}}^* &=& \\frac{1}{n_1} \\sum_{i \\in \\{i:S_i = 1\\}} w\\left(X_i\\right){\\left(\\frac{Y_i T_i}{e(X_i)}-\\frac{Y_i\\left(1-T_i\\right)}{1-e(X_i)}\\right)},\n\\\\\nw(x) &=& \n\\frac{f_0(x)}{f_1(x)}.\n\\end{eqnarray*}\n\\end{dfn}\nWe can see that the IPSW estimator extends the Horvitz-Thompson estimator by adding a weight $w(x)$, which is the ratio between the probability of observing an individual with characteristics $X = x$ in the target population and that in the trial population \\citep{stuart2011use}\\footnote[2]{The definition of the weight $w(x)$ differs slightly from that in \\cite{stuart2011use}, where $w(x)$ is defined as $\\frac{P\\left(S =1 \\mid X=x\\right)}{P\\left(S = 0 \\mid X = x\\right)}$. That is, the ratio of the probability of being selected into the trial over that of being selected into the target cohort. This definition is based on the problem setting where there is a super-population from which the target cohort and trial cohort are sampled. Our definition here agrees with that in \\cite{colnet2022reweighting}.}. We use an asterisk in the notation to denote that it is an oracle definition where we assume both $f_1(X)$ and $f_0(X)$ are known, which is usually unrealistic. The IPSW estimator of the average treatment effect on the target cohort is proven to be unbiased under Assumptions~\\ref{assump::identifiability1}--\\ref{assump::positivity}.\\par \nA concurrent study of high relevance to our work by \\cite{colnet2022reweighting} investigated the performance of IPSW estimators. In particular, they defined different versions of IPSW estimators, where $f_1(X)$ and $f_0(X)$ are either treated as known or estimated, and derived the expression of the asymptotic variance for each version. They concluded that the semi-oracle estimator, where $f_1(X)$ is estimated and $f_0(X)$ is treated as known, outperforms the other two versions, giving the lowest asymptotic variance.\n\n\\begin{dfn}\\label{dfn:semioracleIPSW}(Semi-oracle IPSW estimator, \\cite{colnet2022reweighting})\n\\begin{eqnarray*}\n\\hat{\\tau}_{\\text{IPSW}} &=& \\frac{1}{n_1} \\sum_{i \\in \\{i:S_i = 1\\}} \\frac{f_0(X_i)}{\\hat{f_1}(X_i)}{\\left(\\frac{Y_i T_i}{e(X_i)}-\\frac{Y_i\\left(1-T_i\\right)}{1-e(X_i)}\\right)}, \\text{ and} \\\\\n\\hat{f_1}(x) &=& \\frac{1}{n_1} \\sum_{S_i=1} \\mathbbm{1}_{X_i = x}.\n\\end{eqnarray*}\n\\end{dfn}\n\n The re-weighted ATE estimator we use in this paper, $\\hat{\\tau}$, coincides with their semi-oracle IPSW estimator defined above, where $f_1(X)$ is estimated from the RCT data.\n\n\\section{Main Results}\n\\label{sec:2res}\nIn this section, we start with the case where the number of possible covariate values is finite and derive the optimal covariate allocation of RCT samples that minimizes the variance of the ATE estimate, $\\hat{\\tau}$. We then develop a deviation metric, $\\mathcal{D}(f_1)$, that quantifies how much a candidate RCT sample composition with covariate distribution $f_1$ deviates from the optimal allocation. We prove that this deviation metric, $\\mathcal{D}(f_1)$, is proportional to the variance of $\\hat{\\tau}$ and can therefore be used as a metric for selection. 
Finally, we derive the above results in presence of continuous covariates.\n\\label{sec:theoreticalresults}\n\n\\subsection{Variance-Minimizing RCT Covariate Allocation}\nWe first consider the more straight-forward case, where the number of possible covariate values is finite. \nRecall that $e(x)$ denotes the propensity score,\nWe assume that the exact value of $e(x)$ is known for the RCT.\n\n\nWhen units in the experimental dataset cover all the possible covariate values, for $m=1,\\ldots,M$, recall the Horvitz-Thompson inverse-propensity weighted estimators \\citep{horvitz1952generalization} of CATE:\n\\begin{eqnarray*}\n\\hat \\tau(x_m) &=& \\frac{ \\sum_{S_i=1,X_i = x_m} \\frac{T_i Y_i}{e(x_m)} - \\frac{(1-T_i) Y_i }{ 1-e(x_m)}}{\\sum_{S_i=1,X_i = x_m} 1}.\n\\end{eqnarray*}\n\n\nDiscrete covariates can be furthered divided into two types: ordinal, for example, test grade, and categorical such as blood type. For ordinal covariates, we can construct a smoother estimator by applying kernel-based local averaging:\n\\begin{eqnarray*}\n \\hat \\tau_\\text{K} (x_m) = \\frac{\\frac{1}{n_1 h^k} \\sum_{S_i=1} \\left( \\frac{T_i Y_i}{ e (X_i)} - \\frac{(1-T_i) Y_i}{1- e (X_i)}\\right) K \\left(\\frac{X_i -x_m}{h}\\right)}{\\frac{1}{n_1 h^k} \\sum_{S_i = 1} K\\left(\\frac{X_i -x_m}{h}\\right)},\n\\end{eqnarray*}\nwhere $K(\\cdot)$ is kernel function and $h$ is the smoothing parameter. Conceptually, the kernel function measures how individuals with covariates in proximity to $x_m$ influence the estimation of $\\hat \\tau_\\text{K} (x_m)$.\nThis kernel-based estimator works even if the observational data does not fully overlap with the experimental data. The estimator $\\hat\\tau_\\text{K}$ is inspired by \\cite{abrevaya2015cate}, who used it to estimate the CATE. Specifically, if the covariate is ordinal and the sample size of a sub-population with a certain covariate value is small or even zero, we can consider $\\hat \\tau_\\text{K}(x)$, as it applies local averaging so that each CATE is informed by more data.\n\n\nTo study the variance of CATE estimates $\\hat \\tau(x_m)$ and $\\hat \\tau_\\text{K} (x_m)$, we define the following terms:\n\\begin{eqnarray*}\n \\sigma_\\psi^2(x) &=& \\mathbb{E}\\left[ \\left( \\psi(X,Y,T) - \\tau(x) \\right)^2 \\mid X=x, S=1 \\right],\\\\\n \\psi(x,y,t) &=& \\frac{t(y-m^{(1)}(x))}{e(x)} - \\frac{(1-t)(y-m^{(0)}(x))}{1-e(x)} + m^{(1)}(x) - m^{(0)}(x).\n \n\\end{eqnarray*}\nThe random vector $\\psi(X,Y,T)$ is the influence function of the AIPW estimator \\citep{bang2005doubly}. 
Term $\\sigma_\\psi^2(x)$ measures the conditional variability of the difference in potential outcomes given covariate $X = x$, and $m^{(t)}(x)$ denotes the average outcome with treatment $t$ given covariate $X = x$.\n\n\\begin{assumption}\\label{con::clt}\nAs $n$ goes to infinity, $n_1\/n$ has a limit in $(0,1)$.\n\\end{assumption}\nAssumption~\\ref{con::clt} suggests that when we consider the asymptotic behavior of our estimators, sample sizes for both experimental data and observational data go to infinity, though usually there is more observational samples than experimental samples.\n\n\\begin{theorem}\\label{thm::fclt}\nUnder Assumption~\\ref{con::clt}, for $m=1,\\ldots, M$, we have\n\\begin{eqnarray*}\n\\sqrt {n_1} (\\hat \\tau(x_m) - \\tau(x_m)) &\\stackrel{d}{\\rightarrow}& N \\left( 0, \\frac{\\sigma_\\psi^2(x_m)}{f_1(x_m)} \\right),\\\\\n\\sqrt{n_1 h} (\\hat \\tau_\\text{K} (x_m) - \\tau(x_m)) &\\stackrel{d}{\\rightarrow}&\n\\mathcal{N} \\left( 0, \\frac{\\Vert K \\Vert_2^2 \\sigma_\\psi^2(x_m) }{f_1(x_m)} \\right),\n\\end{eqnarray*}\nwhere $\\Vert K \\Vert_2 = (\\int K(u)^2 du)^{1\/2}$.\n\\end{theorem}\n\n\nTheorem~\\ref{thm::fclt} shows the asymptotic distribution of the two CATE estimators for every possible covariate value. Complete randomization in experiments ensures that $\\hat \\tau(x)$ is unbiased. Based on the idea of IPSW estimator, we then construct the following two reweighted estimators for ATE:\n$$\n \\hat \\tau = \\sum_{m=1}^M f_0(x_m) \\hat \\tau(x_m), \\quad\n \\hat \\tau_\\text{K} = \\sum_{m=1}^M f_0(x_m) \\hat \\tau_\\text{K}(x_m).\n$$\nIt is easy to see that the $\\hat{\\tau}$ above is the same as the semi-oracle IPSW estimator defined in Definition~\\ref{dfn:semioracleIPSW} once we substitute in the expression of $\\hat{\\tau}(x_m)$.\n\n\n\\begin{theorem}\\label{thm::covreal}\nUnder Assumption~\\ref{assump::identifiability1}--\\ref{con::clt}, we have\n\\begin{eqnarray*}\n n_1 \\text{var}(\\hat \\tau) &=& \\sum_{m=1}^M f_0^2(x_m)\\frac{\\sigma_\\psi^2(x_m)}{f_1(x_m)},\\\\\n n_1 h \\text{var}_\\text{a}(\\hat \\tau_\\text{K}) &=& \\Vert K \\Vert_2^2 \\sum_{m=1}^M f_0^2(x_m)\\frac{\\sigma_\\psi^2(x_m)}{f_1(x_m)},\n\\end{eqnarray*}\nwhere $\\text{var}_\\text{a}(\\cdot)$ denotes the asymptotic variance. For $m=1, \\ldots, M$, the optimal covariate RCT distribution to minimize both $\\text{var}(\\hat \\tau)$ and $\\text{var}_\\text{a}(\\hat \\tau_\\text{K})$ is \n\\begin{eqnarray*}\n f_1^*(x_m) = \\frac{f_0(x_m) \\sigma_\\psi(x_m)}{\\sum_{j=1}^M f_0(x_j) \\sigma_\\psi(x_j) }.\n\\end{eqnarray*}\n\\end{theorem}\nTheorem~\\ref{thm::covreal} indicates that even if the covariate distribution of experimental data is exactly the same as that of the target cohort, it does not necessarily produce the most efficient estimator. The optimal RCT covariate distribution also depends on the conditional variability of potential outcomes. In fact, $f_1^*$ is essentially the target covariate distribution adjusted by the variability of conditional causal effects. This result suggests that we should sample relatively more individuals from sub-populations where the causal effect is more volatile, even if they do not take up a big proportion of the target cohort. Moreover, the two estimators, $\\hat{\\tau}$ and $\\hat \\tau_\\text{K}$, share the same optimal covariate weight no matter whether local averaging is applied. 
\n\nIn practice, if the total number of samples is fixed, experiment designers can select RCT samples with covariate allocation identical to $f_1^*$ to improve the efficiency of IPSW estimate.\n \n\\subsection{Deviation Metric}\n\n\\begin{corollary}\\label{cor::Dmetrix}\nUnder Assumption~\\ref{assump::identifiability1}--\\ref{con::clt}, we have\n\\begin{eqnarray*}\n n_1 \\text{var}(\\hat \\tau) &= & \n \n \\left( \\sum_{m=1}^M f_0(x_m) \\sigma_\\psi(x_m) \\right)^2 \\times\n \\left\\{ \\mathcal{D}(f_1) + 1\\right\\}\n \\\\\n n_1 h \\text{var}_\\text{a}(\\hat \\tau_\\text{K}) &= & \n \n \\Vert K \\Vert_2^2 \\left( \\sum_{m=1}^M f_0(x_m) \\sigma_\\psi(x_m) \\right)^2 \\times\n \\left\\{ \\mathcal{D}(f_1) + 1\\right\\},\n\\end{eqnarray*}\nwhere $\\text{var}_{1}(\\cdot) = \\text{var}_{X \\mid S=1}(\\cdot)$, and we define\n$$\n\\mathcal{D}(f_1) = \\text{var}_{1} \\left( \\frac{f_1^*(X)}{f_1(X)}\\right)\n$$\nas the deviation metric of experiment samples as it measures the difference between the optimal covariate distribution $f_1^*$ and the real covariate distribution $f_1$. We have $\\mathcal{D}(f_1) \\geq 0$, and $\\mathcal{D}(f_1^*) = 0$ if and only if the real covariate distribution of experiment samples is identical to the optimal one, i.e. ${f_1^*(x)} = {f_1(x)}$ for $\\forall x \\in \\{ x_1, \\ldots, x_M \\}$.\n\\end{corollary}\n\nAccoring to Corollary~\\ref{cor::Dmetrix}, the variance of $\\hat \\tau$ depends on two parts: the first part $ \\sum_{m=1}^M f_0(x_m) \\sigma_\\psi(x_m)$ depends on the true distribution of the target population, while the second part $\\mathcal{D}(f_1)$ is a measure of the deviation of the RCT sample allocation $f_1$ compared to the optimal variability-adjusted allocation $f_1^*$, and can thus reflect the representativeness of our RCT samples.\nAs Corollary~\\ref{cor::Dmetrix} shows, the variance of IPSW estimator for the population, $\\hat \\tau$, is proportional to $\\mathcal{D}(f_1)$. \n\nThe deviation metric equips us with a method to compare candidate experiment designs. To be specific, if experiment designers have several potential plans for RCT samples, they can choose one with the smallest deviation metric to maximize the estimation efficiency.\n \n\\subsection{Including Continuous Covariates}\n\nFor continuous covariates, for instance, body mass index (BMI), we apply stratification based on propensity score. By considering an appropriate partition of the support $\\{A_1, \\ldots, A_L\\}$ with finite $L \\in \\mathbb{N}$, we can turn it into the discrete case above. \n\n\\begin{assumption}\\label{con::cont}\nFor $l = 1, \\ldots, L$, and $x, x^\\prime \\in A_l$, we have\n\\begin{itemize}\n \\item[(i)] $\\Pr(T=1 \\mid X=x) = \\Pr (T=1 \\mid X = x^\\prime)$;\n \\item[(ii)] $\\mathbb{E}(Y^{(1)} -Y^{(0)} \\mid X=x, S=1 ) = \\mathbb{E}(Y^{(1)} -Y^{(0)} \\mid X=x^\\prime, S=1)$.\n\\end{itemize}\n\\end{assumption}\n\nAssumption~\\ref{con::cont} assumes that units within each stratum share the same propensity score and CATE. This is a strong but reasonable condition if we make each stratum $A_l$ sufficiently small. Under Assumption~\\ref{con::cont}, let $\\hat\\tau(A_l)$, $\\hat\\tau_\\text{K}(A_l)$, $\\sigma_\\psi^2(A_l)$, $e(A_l)$ denote the causal effect estimate, causal effect estimate with kernel-based local averaging, variance of influence function, propensity score, that are conditioned on $X \\in A_l$. Let $f_0(A_l) = \\Pr(X \\in A_l \\mid S=0)$ and $f_1(A_l) = \\Pr(X \\in A_l \\mid S=1)$. 
We can then construct two IPSW estimators:\n$$\n \\hat \\tau = \\sum_{l=1}^L f_0(A_l) \\hat \\tau(A_l), \\quad\n \\hat \\tau_\\text{K} = \\sum_{l=1}^L f_0(A_l) \\hat \\tau_\\text{K}(A_l).\n$$\n\nAs shown in Corollary~\\ref{thm::covc}, we have similar results to Theorem~\\ref{thm::covreal}, but instead of the optimal covariate distribution, we derive the optimal probability on each covariate set $A_l$.\n\n\\begin{corollary}\\label{thm::covc}\nUnder Assumptions~\\ref{assump::identifiability1}--\\ref{con::cont}, we have\n\\begin{eqnarray*}\n n_1 \\text{var}(\\hat \\tau) &=& \\sum_{l=1}^L f_0^2(A_l)\\frac{\\sigma_\\psi^2(A_l)}{f_1(A_l)},\\\\\n n_1 h \\text{var}_\\text{a}(\\hat \\tau_\\text{K}) &=& \\Vert K \\Vert_2^2 \\sum_{l=1}^L f_0^2(A_l)\\frac{\\sigma_\\psi^2(A_l)}{f_1(A_l)}.\n\\end{eqnarray*}\nFor $l=1, \\ldots, L$, the optimal distribution on each covariate set to minimize both $\\text{var}(\\hat \\tau)$ and $\\text{var}_\\text{a}(\\hat \\tau_\\text{K})$ is \n\\begin{eqnarray*}\n f_1^*(A_l) = \\frac{f_0(A_l) \\sigma_\\psi(A_l)}{\\sum_{j=1}^L f_0(A_j) \\sigma_\\psi(A_j) }.\n\\end{eqnarray*}\nMoreover,\n\\begin{eqnarray*}\n \\sum_{l=1}^L f_0^2(A_l)\\frac{\\sigma_\\psi^2(A_l)}{f_1(A_l)}\n &=& \n \\left( \\sum_{l=1}^L f_0(A_l) \\sigma_\\psi(A_l) \\right)^2 \\times\n \\left\\{ \\mathcal{D}(f_1) + 1\\right\\},\n\\end{eqnarray*}\nwhere $A(x) = \\{ A: x \\in A; A \\in \\{A_1, \\ldots, A_L\\}\\}$.\n\\end{corollary}\n\n\n\n\n\n\nIn the sections that follow, for simplicity, we illustrate our method in the scenario where the covariates are all discrete with finitely many possible values. The results can easily be extended to include continuous covariates following the same logic as described in this section.\n\n\\section{Estimating Conditional Variability}\n\\label{sec:3estimate}\nThe optimal covariate allocation derived above can benefit the planning of the composition of RCT samples. However, it is difficult, or even impossible, to estimate the conditional variability of potential outcomes prior to the RCT being carried out. 
In this section, we provide a practical strategy using information from the observational data to estimate the theoretical optimal covariate distribution, and derive conditions under which our strategy yields consistent results.\n\nIn completely randomized experiments, we can show that\n\\begin{eqnarray*}\n\\sigma_\\psi^2(x) = \\frac{1}{e(x)} \\text{var}(Y^{(1)} \\mid X=x) + \\frac{1}{1-e(x)} \\text{var}(Y^{(0)} \\mid X=x).\n\\end{eqnarray*}\nTo estimate $\\sigma_\\psi^2(x)$ by observational data, $\\forall x$, let\n\n\\begin{align*}\n\\widehat Y^{(0)}(x) &= \\frac{\\sum_{S_i=0,T_i=0} Y_i}{\\sum_{S_i=0,T_i=0} 1 }, \\quad&\n\\widehat Y^{(1)}(x) &= \\frac{\\sum_{S_i=0,T_i=1} Y_i}{\\sum_{S_i=0,T_i=1} 1 },\\\\\n\\widehat S^{(0)}(x) &= \\frac{\\sum_{S_i=0,T_i=0} \\left(Y_i - \\widehat Y^{(0)}(x) \\right)^2}{\\sum_{S_i=0,T_i=0} 1 - 1}, \\quad&\n\\widehat S^{(1)}(x) &= \\frac{\\sum_{S_i=0,T_i=1} \\left(Y_i - \\widehat Y^{(1)}(x) \\right)^2}{\\sum_{S_i=0,T_i=1} 1 - 1}.\n\\end{align*}\n\nWe can then estimate the conditional variability of potential outcomes, the optimal covariate distribution and the deviation metric of RCT samples from observational data as follows\n\\begin{eqnarray*}\n\\hat{\\sigma}_\\psi^2(x) &=& \\frac{1}{e(x)}\\widehat S^{(1)}(x) + \\frac{1}{1-e(x)}\\widehat S^{(0)}(x)\\\\\n\\hat f_1^*(x_m) &= &\\frac{f_0(x_m) \\hat\\sigma_\\psi(x_m)}{\\sum_{j=1}^M f_0(x_j) \\hat\\sigma_\\psi(x_j) },\\\\\n\\widehat{\\mathcal{D}}(f_1) &=& \\text{var}_{1} \\left( \\frac{\\hat f_1^*(X)}{f_1(X)}\\right).\n\\end{eqnarray*}\n\n\nAssumption~\\ref{cond::prop} below ensures the consistency of the estimated conditional variance of potential outcomes $\\hat{\\sigma}_\\psi^2(x)$. The main problem of estimating $\\hat{\\sigma}_\\psi^2(x)$ from observational data is the possibility of unobserved confounding. Instead of assuming unconfoundedness, \nour Assumption~\\ref{cond::prop} is weaker and requires that the expectation of estimate $\\hat{\\sigma}_\\psi^2(x)$ \nis proportional to the target conditional variability, which is a weaker condition.\n\\begin{assumption}\\label{cond::prop}\nFor $\\forall x$, suppose\n\\begin{eqnarray*}\n&& \\frac{1}{e(x)}\\text{var}{(Y^{(1)} \\mid X=x, T=1, S=0)} + \\frac{1}{1-e(x)}\\text{var}{(Y^{(0)} \\mid X=x, T=0, S=0)} \\\\\n&=& c \\left[ \\frac{1}{e(x)}\\text{var}{(Y^{(1)} \\mid X=x, S=0)} + \\frac{1}{1-e(x)}\\text{var}{(Y^{(0)} \\mid X=x, S=0)} \\right],\n\\end{eqnarray*}\nwhere $c>0$ is an unknown constant.\n\\end{assumption}\nThe left-hand side of the equation above is the conditional variance of observed outcomes that can be estimated from the observational data, and the right-hand side is the theoretical conditional variance of potential outcomes that we want to approximate. Assumption~\\ref{cond::prop} requires these two quantities to be proportional, rather than absolutely equal. Intuitively, the assumption supposes that the covariate segments in the observational data that exhibit high volatility in observed outcomes, also have high variance in their potential outcomes, although it is not required that the absolute levels of variance have to be the same.\n\n\\begin{theorem}\\label{thm::asyocd}\nUnder Assumption~\\ref{con::clt} and Assumption~\\ref{cond::prop}, $\\forall x$ we have\n$$\n \\hat f_1^*(x) \\rightarrow f_1^*(x), \\quad \\widehat{\\mathcal{D}}(f_1) \\rightarrow \\mathcal{D}(f_1).\n$$\nThus, $\\hat f_1^*(x)$ and $\\widehat{\\mathcal{D}}(f_1)$ are consistent. 
\n\\end{theorem}\n\nBased on Theorem~\\ref{thm::asyocd}, we propose a novel strategy to select efficient RCT samples. Specifically, we select the candidate experimental design with covariate allocation $f_1$ that minimizes the estimated deviation metric $\\widehat{\\mathcal{D}}(f_1)$. By contrast, a naive strategy prefers the candidate experimental design with $f_1 = f_0$, which mimics exactly the covariate distribution in the target cohort. If the conditional variability of potential outcomes $\\sigma_\\psi^2(x)$ varies widely with $x$, our strategy can lead to a much more efficient treatment effect estimator than the naive strategy.\n\n\\section{Practical Scenarios}\n\\label{sec:4pras}\n\\subsection{Heterogeneous Unit Cost}\nWe also consider experimental designs with a cost constraint and heterogeneous costs for different sub-populations. The goal is to find the optimal RCT sample allocation that minimizes the variance of the proposed estimator subject to a cost constraint. For $m = 1, \\ldots, M$, let $C_m$ denote the cost to collect a sample in the sub-population with $X = x_m$. \n\n\\begin{theorem}\\label{thm::cost}\nUnder the cost constraint that\n$$\n \\sum_{m=1}^M C_m \\left( \\sum_{S_i=1, X_i = x_m} 1 \\right) = C,\n$$\nthe optimal sample allocation $f_1(X)$ that minimizes $\\text{var}(\\hat \\tau)$ is\n$$\n f_1^{\\text{c},*} (x_m) = \\frac{f_0(x_m)\\sigma_\\psi(x_m)\/\\sqrt{C_m}} {\\sum_{i=1}^M f_0(x_i)\\sigma_\\psi(x_i)\/\\sqrt{C_i}}\n$$\nfor $m = 1, \\ldots, M$. Here we use the superscript $c$ to denote the cost constraint.\n\\end{theorem}\n\nTheorem~\\ref{thm::cost} suggests that the optimal RCT covariate allocation under the given cost constraint is the covariate allocation of the target cohort adjusted by both the heterogeneous costs for the sub-populations and the conditional variability of potential outcomes. Intuitively, compared to the case without heterogeneous costs, we should include more RCT samples from sub-populations with lower unit cost.\n\n\n\\subsection{Different Precision Requirements}\nTheorem~\\ref{thm::covc} shows the optimal sample allocation to maximize the efficiency of the average treatment effect estimator for the target cohort. 
If we require the same precision for estimators in each domain, we need the sample allocation as follows:\n$$\n f_1^{\\text{s},*}(x_m) = \\frac{ \\sigma^2_\\psi(x_m)}{\\sum_{j=1}^M \\sigma^2_\\psi(x_j) },\n$$\nwhere we use the superscript s to denote the requirement of same precision for the CATE estimate in each segment.\n\nIntuitively, to take both objectives into consideration, we propose a compromised allocation that falls between the two optimum allocations $\\forall k \\in [0,1]$:\n$$\n f_1^{k,*}(x_m) = \\frac{f_0^k(x_m) \\sigma^{2-k}_\\psi(x_m)}{\\sum_{j=1}^M f_0^k(x_j) \\sigma^{2-k}_\\psi(x_j) }.\n$$\n\n\n\\begin{corollary}\\label{prop::1}\nIf for $m = 1, \\ldots, M$, \n$$\n f_0(x_m) = \\frac{\\sigma_\\psi(x_m)}{\\sum_{j=1}^M \\sigma_\\psi(x_j)},\n$$\nwe have for $\\forall k \\in [0,1]$,\n$$\n f_1^*(X) = f_1^{\\text{s},*}(X) = f_1^{k,*}(X).\n$$\n\nThe deviation metric for sample allocation under same precision strategy and compromise strategy are\n\\begin{eqnarray*}\n \\mathcal{D}(f_1^{\\text{s},*}) &=& \\text{var}_1\\left( \\frac{f_0(X)}{\\sigma_\\psi(X)} \\right) \\left( \\frac{\\sum_{m=1}^M \\sigma^2_\\psi(x_m)}{\\sum_{m=1}^M f_0(x_m) \\sigma_\\psi(x_m)} \\right)^2,\\\\\n \\mathcal{D}(f_1^{k,*}) &=& \\text{var}_1\\left( \\frac{f_0(X)}{\\sigma_\\psi(X)} \\right)^{1-k} \\left( \\frac{\\sum_{m=1}^M f_0^{k}(x_m) \\sigma^{2-k}_\\psi(x_m)}{\\sum_{m=1}^M f_0(x_m) \\sigma_\\psi(x_m)} \\right)^2,\n\\end{eqnarray*}\nrespectively.\n\\end{corollary}\n\n\\section{Numerical Study}\n\\label{sec:5numerical}\n\\subsection{Simulation}\nIn this section, we conduct a simulation study to illustrate how representativeness of experiment samples influences the estimation efficiency of average treatment effect for a target cohort, and demonstrate how the representativeness metric $\\mathcal{D}(f_1)$ can facilitate the selection from candidate RCT sample designs. We set the size of observational dataset $n_0 = 10000$ and the size of experimental dataset $n_1 = 200$. For the units in the observational data, we draw covariates $x$ from $\\{1, 2, 3\\}$ with probability 0.3, 0.2 and 0.5 respectively. We then set\n\\begin{align*}\n Y^{(0)} = 2X + X^4\\epsilon, \\quad Y^{(1)} = 1 - X + \\epsilon, \\text{ then,}\\\\\n Y^{(1)} - Y^{(0)} = 1 -3X + (X^4 -1)\\epsilon,\n\\end{align*}\nwhere $\\epsilon \\sim \\mathcal{N}(0,1)$. We can then compute the conditional variability of potential outcomes $\\sigma^2_\\psi(x)$ and thus the optimal covariate distribution $f^*_1$ from the true population. Our model engenders distinctive conditional variability $\\sigma^2_\\psi(x)$ given different $x$, making the optimal covariate distribution $f_1^*$ very different from the target covariate distribution $f_0$.\n\nFor experimental data, we simulate 100 different candidate experimental sample designs. In each design, we randomly draw experiment samples from the target cohort with probability\n$$\n \\Pr(S=1 \\mid X=x) = \\frac{e^{p_x}}{e^{p_1} + e^{p_2} + e^{p_3}},\n$$\nwhere $p_1, p_2, p_3$ are i.i.d. samples drawn from standard normal distribution. We can then compute the real covariate distribution $f_1(x)$ and the repressiveness metric $\\mathcal{D}(f_1)$. To estimate the efficiency of average treatment effect estimator, we conduct 1000 experiments for each design. In each experiment, the treatment for each unit follows a Bernoulli distribution with probability 0.5. The simulation result is shown in Fig~\\ref{fig:sim}. 
The relationship between the variance and the deviation metric $\\mathcal{D}(f_1)$ can be fitted by a straight line, which is consistent with our result that $\\text{var}(\\hat \\tau) \\propto \\mathcal{D}(f_1)$. The red line shows the value of $\\mathcal{D}(f_1)$ for the naive strategy mimicking exactly the target cohort distribution, which is not zero; we can see that it is not the optimal RCT sample and does not produce the most efficient causal estimator. \n\n\n\\begin{figure}\n \\centering\n \\includegraphics[width = 0.5\\textwidth]{OS1.pdf}\n \\caption{How the deviation metric of experiment samples $\\mathcal{D}$ correlates with the estimated variance of $\\hat{\\tau}$. The red line marks the deviation metric $\\mathcal{D}$ of a trial sample selected following the na\\\"ive strategy.}\n \\label{fig:sim}\n\\end{figure}\n\nFor the case with heterogeneous unit cost, we set the costs for the sub-populations with covariate $X$ being 1, 2, 3 to be 20, 30, 40, respectively. The total budget available is 30000. Instead of randomly drawing experiment samples from the target cohort with a fixed total number of subjects, we randomly assign budgets to the different sub-populations with a fixed total cost. Given the budget assigned to each sub-population, we then draw subjects randomly from the sub-population, where the number of subjects is determined by the assigned budget. The simulation result is illustrated by Figure~\\ref{fig:simc}. We can see that under the cost constraint, experiment samples that follow a distribution closer to $f_1^{c,*}$ lead to a more efficient causal estimator.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width = 0.5\\textwidth]{OS2.pdf}\n \\caption{How the deviation metric of experiment samples with a cost constraint influences the variance of $\\hat{\\tau}$.}\n \\label{fig:simc}\n\\end{figure}\n\n\\subsection{Real Data Illustration}\nWe use the well-cited Tennessee Student\/Teacher Achievement Ratio (STAR) experiment to assess how the covariate distribution of experiment samples influences the estimation efficiency of the average treatment effect for the target cohort. STAR is a randomized experiment started in 1985 to measure the effect of class size on student outcomes, measured by standardized test scores. Similar to the exercise in \\cite{kallus2018removing}, we focus on a binary treatment: $T=1$ for small classes (13-17 pupils), and $T=0$ for regular classes (22-25 pupils). Since many students only started the study at first grade, we took as treatment their class-type at first grade. The outcome $Y$ is the sum of the listening, reading, and math standardized test scores at the end of first grade. We use the following covariates $X$ for each student: student race ($X_1 \\in \\{ 1, 2 \\}$) and school urbanicity ($X_2 \\in \\{1,2,3,4\\}$). We exclude units with missing covariates. The records of 4584 students remain, with 1733 assigned to treatment (small class, $T=1$), and 2413 to control (regular size class, $T=0$). Before the analysis, we impute the missing outcomes by linear regression based on the treatment and the two covariates, so that both potential outcomes $Y_0$ and $Y_1$ are known for each student.\n\nWe simulate 500 candidate experiment sample allocations. For each allocation, we select $n_1 = 500$ experiment units from the dataset with probability\n$$\n \\Pr(S=1 \\mid X=x) = \\frac{e^{p_{x_1x_2}}}{e^{p_{11}} + e^{p_{12}} + e^{p_{13}}+ e^{p_{14}}+ e^{p_{21}}+ e^{p_{22}}+ e^{p_{23}}+ e^{p_{24}}},\n$$\nwhere $p_{11}, p_{12}, p_{13}, p_{14}, p_{21}, p_{22}, p_{23}, p_{24}$ are i.i.d. 
samples drawn from a standard normal distribution. We can then compute the real covariate distribution $f_1(x)$ and the deviation metric of the experiment samples $\\mathcal{D}(f_1) = \\text{var}_1 (f_1^*(X) \/ f_1(X))$. To estimate the efficiency of the average treatment effect estimator, we conduct 200 experiments for each design. In each experiment, the treatment follows a Bernoulli distribution with probability $1733\/4584 = 0.378$. The simulation result is shown in Fig~\\ref{fig:real}. The relationship between the variance and the deviation metric $\\mathcal{D}(f_1)$ can be fitted by a straight line, which is consistent with our result that $\\text{var}(\\hat \\tau) \\propto \\mathcal{D}(f_1)$.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width = 0.5\\textwidth]{sim2.pdf}\n \\caption{How the deviation metric of experiment samples $\\mathcal{D}$ correlates with the estimated variance of $\\hat{\\tau}$. The red line marks the deviation metric $\\mathcal{D}$ of a trial selected following the na\\\"ive strategy.}\n \\label{fig:real}\n\\end{figure}\n\n\n\\section{Conclusion}\nIn this paper, we examine the common procedure of generalizing causal inference from an RCT to a target cohort. We approach this as a problem where we can combine an RCT with an observational dataset. The observational data has two roles in the combination: one is to provide the exact covariate distribution of the target cohort, and the other is to provide a means to estimate the conditional variability of the causal effect by covariate values.\n\nWe give the expression for the variance of the Inverse Propensity Sample Weighted (IPSW) estimator as a function of the covariate distribution in the RCT. We subsequently derive the variance-minimizing optimal covariate allocation in the RCT, under the constraint that the size of the trial population is fixed. Our result indicates that the optimal covariate distribution of the RCT does not necessarily follow the exact distribution of the target cohort, but is instead adjusted by the conditional variability of potential outcomes. Practitioners who are at the design phase of a trial can use the optimal allocation result to plan the group of participants to recruit into the trial. \\par\nWe also formulate a deviation metric quantifying how far a given RCT allocation is from optimal. The advantage of this metric is that it is proportional to the variance of the final ATE estimate, so that, when presented with several candidate RCT cohorts, practitioners can compare and choose the most efficient RCT according to this metric.\\par\nThe above results depend on the estimation of the conditional variability of the causal effect by covariate values, which remains unknown before the trial. We propose to estimate it using the observational data and outline the mild assumptions that need to be met. \nIn reality, practitioners usually have complex considerations when designing a trial, for instance cost constraints and precision requirements. We develop variants of our main results to apply in such practical scenarios. Finally, we use two numerical studies to corroborate our theoretical results. \n\n\n\n\n\\bibliographystyle{abbrvnat}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction }\n\nThe Standard Electroweak Model based on the gauge group $ SU(2)\\times U(1)$ gives a good description of electroweak processes.\nOne of the unsolved problems is the origin of electroweak symmetry breaking. 
\nIn the standard formulation the scalar field (Higgs boson) performs this task via the Higgs mechanism, which generates mass terms for the vector bosons. \nThe rather artificial Higgs mechanism, with its imaginary bare mass, is a naive relativistic analog of the phenomenological description of superconductivity \\cite{Sh-09}.\nHowever, it has not yet been experimentally verified whether electroweak symmetry is broken by such a Higgs mechanism, or by something else.\n The emergence of a large number of Higgsless models \\cite{C-05}--\\cite{MT-08} was stimulated by difficulties with the Higgs boson. These models are mainly based on extra dimensions of different types or larger gauge groups. \nA finite electroweak model without a Higgs particle, which uses a regularized quantum field theory \\cite{E-66},\\cite{E-67}, was developed in \\cite{MT-08}. \n\nOne of the important ingredients of the Standard Model is the simple group $ SU(2)$. The notion of group contraction \\cite{IW-53}, i.e. a limit operation which transforms, for example, a simple or semisimple group into a non-semisimple one, has been well known in physics for more than fifty years. From a general point of view, for a better understanding of a physical system it is useful to investigate its properties for limiting values of its physical parameters.\nIn particular, for a gauge model one such limiting case corresponds to a model with a contracted gauge group.\nGauge theories for non-semisimple groups whose Lie algebras admit invariant non-degenerate metrics were considered in \\cite{NW-93},\\cite{T-95}.\n\n\n\nIn the present paper a modified formulation of the Higgsless Electroweak Model and its limiting case for the contracted gauge group \n $ SU(2;j)\\times U(1)$ is considered.\nFirstly, we observe that the quadratic form $\\phi^\\dagger\\phi=\\phi_1^*\\phi_1+\\phi_2^*\\phi_2=R^2$\nof the complex matter field $\\phi \\in {\\bf C}_2$ is invariant with respect to the gauge group transformations \n$ SU(2)\\times U(1)$, so we can restrict the fields to this quadratic form without loss of gauge invariance of the model. This quadratic form defines a three-dimensional sphere in the four-dimensional Euclidean space of the real components of $\\phi,$ on which the non-Euclidean spherical geometry is realized. 
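\n\nAs a purely numerical illustration of this invariance (the transformation parameters and field values below are arbitrary), the following Python sketch applies an $SU(2)\\times U(1)$ transformation to a matter field $\\phi=(\\phi_1,\\phi_2)$ and checks that $\\phi^\\dagger\\phi$ is unchanged.\n\\begin{verbatim}\n# Check numerically that phi^dagger phi is invariant under SU(2) x U(1).\nimport numpy as np\n\nalpha, beta = 0.6 + 0.3j, -0.2 + 0.7j                  # arbitrary parameters\nnorm = np.sqrt(abs(alpha) ** 2 + abs(beta) ** 2)\nalpha, beta = alpha / norm, beta / norm                # |alpha|^2 + |beta|^2 = 1\n\nu = np.array([[alpha, beta],\n              [-beta.conjugate(), alpha.conjugate()]]) # SU(2) element\nu = np.exp(0.4j) * u                                   # extra overall U(1) phase\n\nphi = np.array([1.0 + 2.0j, -0.5 + 0.1j])              # matter field (phi_1, phi_2)\nphi_new = u @ phi\n\nprint(np.vdot(phi, phi).real)                          # R^2 before the transformation\nprint(np.vdot(phi_new, phi_new).real)                  # and after: the same value\n\\end{verbatim}\n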
\nThe limiting case of the modified Higgsless Electroweak Model is regarded in Sec. 4. \nWhen contraction parameter tends to zero $j \\rightarrow 0$ or takes nilpotent value $j=\\iota$ the field space is fibered \\cite{Gr-09} in such a way that electromagnetic, Z-boson and electron fields are in the base whereas charged W-bosons and neutrino fields are in the fiber.\nWithin framework of the limit model the base fields can be interpreted as an external ones with respect to the fiber fields\n in the sence that the fiber fields do not effect on the base fields. \nThe field interactions are simplified under contraction.\nSec. 5 is devoted to the conclusions.\n\n\n\\section{Contracted Special Unitary group $SU(2;j)$ }\n\nLet us regard two dimensional complex fibered vector space $\\Phi_2(j)$ with one dimensional base $\\left\\{\\phi_1\\right\\}$\nand one dimensional fiber $\\left\\{\\phi_2\\right\\}$ \\cite{Gr-94}. This space has two hermitian forms: first in the base $\\bar{\\phi_1}\\phi_1=|\\phi_1|^2$ and second in the fiber $\\bar{\\phi_2}\\phi_2=|\\phi_2|^2,$ where bar denotes complex conjugation. Both forms can be written by one formula\n\\begin{equation}\n\\phi^\\dagger(j)\\phi(j)=|\\phi_1|^2+ j^2|\\phi_2|^2,\n\\label{g1}\n\\end{equation} \nwhere $\\phi^\\dagger(j)=(\\bar{\\phi_1},j\\bar{\\phi_2}), $\nparameter $j=1, \\iota$ and $\\iota$ is nilpotent unit $\\iota^2=0.$ \nFor nilpotent unit the following heuristic rules be fulfiled: \n1) division of a real or complex numbers by $\\iota$ is not defined, i.e. for a real or complex $a$ the expression $\\frac{a}{\\iota}$\nis defined only for $a=0$, \n2) however identical nilpotent units can be cancelled $\\frac{\\iota}{\\iota}=1.$\n \n\n The special unitary group $SU(2;j)$ is defined as a transformation group of $\\Phi_2(j)$ which keep invariant the hermitian form (\\ref{g1}), i.e.\n$$ \n\\phi'(j)=\n\\left(\\begin{array}{c}\n\\phi'_1 \\\\\nj\\phi'_2\n\\end{array} \\right)\n=\\left(\\begin{array}{cc}\n\t\\alpha & j\\beta \\\\\n-j\\bar{\\beta}\t & \\bar{\\alpha}\n\\end{array} \\right)\n\\left(\\begin{array}{c}\n\\phi_1 \\\\\nj\\phi_2\n\\end{array} \\right)\n=u(j)\\phi(j), \\quad\n$$\n\\begin{equation}\n\\det u(j)=|\\alpha|^2+j^2|\\beta|^2=1, \\quad u(j)u^{\\dagger}(j)=1.\n\\label{g3}\n\\end{equation} \n\n The fundamental representation of the one-parameter subgroups of $SU(2;j)$ are easily obtained\n\\begin{equation}\nu_1(\\alpha_1;j)=e^{\\alpha_1T_1(j)}=\\left(\\begin{array}{cc}\n\t\\cos \\frac{j\\alpha_1}{2} & i\\sin \\frac{j\\alpha_1}{2} \\\\\ni\\sin \\frac{j\\alpha_1}{2}\t & \\cos \\frac{j\\alpha_1}{2}\n\\end{array} \\right),\n\\label{g4}\n\\end{equation} \n\\begin{equation}\nu_2(\\alpha_2;j)=e^{\\alpha_2T_2(j)}=\\left(\\begin{array}{cc}\n\t\\cos \\frac{j\\alpha_2}{2} & \\sin \\frac{j\\alpha_2}{2} \\\\\n-\\sin \\frac{j\\alpha_2}{2}\t & \\cos \\frac{j\\alpha_2}{2}\n\\end{array} \\right),\n\\label{g5}\n\\end{equation} \n\\begin{equation}\nu_3(\\alpha_3;j)=e^{\\alpha_3T_3(j)}=\\left(\\begin{array}{cc}\n\te^{i\\frac{\\alpha_3}{2}} & 0 \\\\\n0\t & e^{-i\\frac{\\alpha_3}{2}}\n\\end{array} \\right).\n\\label{g6}\n\\end{equation} \nThe corresponding generators \n$$ \n T_1(j)= j\\frac{i}{2}\\left(\\begin{array}{cc}\n\t0 & 1 \\\\\n\t1 & 0\n\\end{array} \\right)=j\\frac{i}{2}\\tau_1, \\quad \nT_2(j)= j\\frac{i}{2}\\left(\\begin{array}{cc}\n\t0 & -i \\\\\n\ti & 0\n\\end{array} \\right)=j\\frac{i}{2}\\tau_2,\n$$\n\\begin{equation} \nT_3(j)= \\frac{i}{2}\\left(\\begin{array}{cc}\n\t1 & 0 \\\\\n\t0 & -1\n\\end{array} \\right)=\\frac{i}{2}\\tau_3, 
\n\\label{g7}\n\\end{equation} \nwith $\\tau_k$ being Pauli matrices, \nare subject of commutation relations\n$$ \n[T_1(j),T_2(j)]=-j^2T_3(j), \\quad [T_3(j),T_1(j)]=-T_2(j), \n$$\n\\begin{equation} \n [T_2(j),T_3(j)]=-T_1(j),\n\\label{g8}\n\\end{equation}\n and form the Lie algebra $su(2;j)$ with the general element\n\\begin{equation} \n T(j)=\\sum_{k=1}^{3}a_kT_k(j)= \\frac{i}{2}\\left(\\begin{array}{cc}\n\ta_3 & j(a_1-ia_2) \\\\\n\tj(a_1+ia_2) & -a_3\n\\end{array} \\right)=-T^{\\dagger}(j).\n\\label{g8-1}\n\\end{equation}\n\n\n \nThere are two more or less equivalent way of group contraction. We can put the contraction parameter equal to the nilpotent unit $j=\\iota$ or tend it to zero $j\\rightarrow 0$. Sometimes it is convenient to use the first (mathematical) approach, sometimes the second (physical) one. For example, the matrix $u(j)$ (\\ref{g3}) has non-zero nilpotent non-diagonal elements for $j=\\iota$,\nwhereas for $j\\rightarrow 0$ they are formally equal to zero. Nevertheless both approaches lead to the same final results.\n \nLet us describe the contracted group $SU(2;\\iota)$ in detail. For $j=\\iota$ it follows from (\\ref{g3}) that \n$\\det u(\\iota)=|\\alpha|^2=1,$ i.e. $\\alpha=e^{i\\varphi},$ therefore\n\\begin{equation} \nu(\\iota)= \n\\left(\\begin{array}{cc}\ne^{i\\varphi}\t & \\iota\\beta \\\\\n-\\iota\\bar{\\beta}\t & e^{-i\\varphi}\n\\end{array} \\right), \\quad\n\\beta=\\beta_1+i\\beta_2 \\in {\\bf C}. \n\\label{g9}\n\\end{equation}\nFunctions of nilpotent arguments are defined by their Taylor expansion, in particular, \n$\\cos\\iota x=1,\\;\\sin\\iota x=\\iota x.$ Then one-parameter subgroups of $SU(2;\\iota)$ take the form \n\\begin{equation}\nu_1(\\alpha_1;\\iota) \n=\\left(\\begin{array}{cc}\n\t1 & \\iota i\\frac{\\alpha_1}{2} \\\\\n\t\\iota i\\frac{\\alpha_1}{2} & 1\n\\end{array} \\right), \\quad\nu_2(\\alpha_2;\\iota) \n=\\left(\\begin{array}{cc}\n1 & \\iota \\frac{\\alpha_2}{2} \\\\\n-\\iota \\frac{\\alpha_2}{2}\t & 1\n\\end{array} \\right).\n\\label{g11}\n\\end{equation} \nThe third subgroup does not changed and is given by (\\ref{g6}).\nThe simple group $SU(2)$ is contracted to the non-semisimple group $SU(2;\\iota)$, which is isomorphic to the real Euclid group $E(2).$\nFirst two generators of the Lie algebra $su(2;\\iota)$ are commute $[T_1(\\iota),T_2(\\iota)]=0$ and the rest commutators are given by (\\ref{g8}). 
For the general element (\\ref{g8-1}) \nof $su(2;\\iota)$ the corresponding group element of $SU(2;\\iota)$ is as follows \n\\begin{equation} \nu(\\iota)=e^{T(\\iota)}=\n\\left(\\begin{array}{cc}\ne^{i\\frac{a_3}{2}}\t & \\iota i\\frac{\\bar{a}}{a_3}\\sin \\frac{a_3}{2} \\\\\n\\iota i\\frac{a}{a_3}\\sin \\frac{a_3}{2} & e^{-i\\frac{a_3}{2}}\n\\end{array} \\right), \\quad a=a_1+ia_2\\in C.\n\\label{g12}\n\\end{equation} \n \n \nThe actions of the unitary group $U(1)$ and the electromagnetic subgroup $U(1)_{em}$ \nin the fibered space $\\Phi_2(\\iota)$ are given by the same matrices as on the space $\\Phi_2$, namely\n\\begin{equation}\nu(\\beta)=e^{\\beta Y}=\\left(\\begin{array}{cc}\n\te^{i\\frac{\\beta}{2}} & 0 \\\\\n0\t & e^{i\\frac{\\beta}{2}}\n\\end{array} \\right), \\quad\nu_{em}(\\gamma)=e^{\\gamma Q}=\\left(\\begin{array}{cc}\n\te^{i\\gamma} & 0 \\\\\n0\t & 1\n\\end{array} \\right),\n\\label{g13}\n\\end{equation} \nwhere $Y=\\frac{i}{2}{\\bf 1}, \\; Q=Y+T_3.$\n\nRepresentations of groups $SU(2;\\iota), U(1), U(1)_{em}$ are linear ones, that is they are realised by linear operators in the fibered space $\\Phi_2(\\iota)$.\n\n\\section{ Electroweak Model for $SU(2;j)\\times U(1)$ gauge group}\n\n\nThe fibered space $\\Phi_2(j)$ can be obtained from $\\Phi_2$ by \nsubstitution $\\phi_2 \\rightarrow j\\phi_2$ in (\\ref{g1}), \nwhich induces another ones for Lie algebra $su(2)$ generators\n$T_1 \\rightarrow jT_1,\\; T_2 \\rightarrow jT_2,\\;T_3 \\rightarrow T_3. $\nAs far as the gauge fields take their values in Lie algebra, we can substitute gauge fields instead of transformation of generators, namely\n\\begin{equation}\nA_{\\mu}^1 \\rightarrow jA_{\\mu}^1, \\;\\; A_{\\mu}^2 \\rightarrow jA_{\\mu}^2,\\; \\;A_{\\mu}^3 \\rightarrow A_{\\mu}^3, \\;\\;\nB_{\\mu} \\rightarrow B_{\\mu}.\n\\label{eq28}\n\\end{equation} \n \n \n \n \n \nThese substitutions in the Lagrangian $L$ of the\nHiggsless Electroweak Model \\cite{Gr-07}\ngive rise to the Lagrangian $L(j)$\n\nof the contracted model \nwith $U(2;j)=SU(2;j)\\times U(1)$ gauge group\n\\begin{equation}\nL(j)=L_A(j) + L_{\\phi}(j),\n\\label{eq1}\n\\end{equation}\nwhere\n$$ \n L_A(j)=\\frac{1}{2g^2}\\mbox{tr}(F_{\\mu\\nu}(j))^2 + \\frac{1}{2g'^2}\\mbox{tr}(\\hat{B}_{\\mu\\nu})^2= \n $$\n \\begin{equation}\n= -\\frac{1}{4}[j^2(F_{\\mu\\nu}^1)^2+j^2(F_{\\mu\\nu}^2)^2+(F_{\\mu\\nu}^3)^2]-\\frac{1}{4}(B_{\\mu\\nu})^2\n\\label{eq2}\n\\end{equation}\nis the gauge fields Lagrangian\nand\n\\begin{equation} \n L_{\\phi}(j)= \\frac{1}{2}(D_\\mu \\phi(j))^{\\dagger}D_\\mu \\phi(j) \n\\label{eq3}\n\\end{equation} \nis the {\\it free} (without any potential term) matter field Lagrangian (summation on the repeating Greek indexes is always understood). 
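\n\nThe commutation relations (\\ref{g8}) can also be checked symbolically, keeping $j$ as a formal parameter so that the case $j=1$ and the contracted case are covered at once. The following Python sketch (an illustration added here for concreteness, assuming the sympy library is available) performs this check.\n\\begin{verbatim}\n# Symbolic check of the su(2;j) commutation relations with formal j.\nimport sympy as sp\n\nj = sp.symbols('j')\nI = sp.I\ntau1 = sp.Matrix([[0, 1], [1, 0]])\ntau2 = sp.Matrix([[0, -I], [I, 0]])\ntau3 = sp.Matrix([[1, 0], [0, -1]])\n\nT1, T2, T3 = j * I / 2 * tau1, j * I / 2 * tau2, I / 2 * tau3\n\ndef comm(a, b):\n    return sp.simplify(a * b - b * a)\n\nprint(comm(T1, T2) + j ** 2 * T3)   # zero matrix: [T1, T2] = -j^2 T3\nprint(comm(T3, T1) + T2)            # zero matrix: [T3, T1] = -T2\nprint(comm(T2, T3) + T1)            # zero matrix: [T2, T3] = -T1\n\\end{verbatim}\n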
\nHere $D_{\\mu}$ are the covariant derivatives\n \\begin{equation}\nD_\\mu\\phi(j)=\\partial_\\mu\\phi(j) + g\\left(\\sum_{k=1}^{3}T_k(j)A^k_\\mu \\right)\\phi(j) + g'YB_\\mu\\phi(j),\n\\label{eq4}\n\\end{equation} \nwhere $T_k(j)$ are given by (\\ref{g7})\n and \n$Y=\\frac{i}{2}{\\bf 1}$ is generator of $U(1).$ \nTheir actions on components of $\\phi(j)$ are given by\n$$\nD_\\mu \\phi_1=\\partial_\\mu \\phi_1 + \\frac{i}{2}(gA_\\mu^3+g'B_\\mu)\\phi_1 + j^2\\frac{ig}{2}(A_\\mu^1-iA_\\mu^2)\\phi_2,\n$$\n\\begin{equation}\nD_\\mu \\phi_2=\\partial_\\mu \\phi_2 - \\frac{i}{2}(gA_\\mu^3-g'B_\\mu)\\phi_2 + \\frac{ig}{2}(A_\\mu^1+iA_\\mu^2)\\phi_1.\n\\label{eq5}\n\\end{equation}\n\n\n\nThe gauge fields \n$$ \nA_\\mu (x;j)=g\\sum_{k=1}^{3}T_k(j)A^k_\\mu (x)=g\\frac{i}{2}\\left(\\begin{array}{cc}\n\tA^3_\\mu & j(A^1_\\mu -iA^2_\\mu ) \\\\\nj(A^1_\\mu + iA^2_\\mu ) & -A^3_\\mu \n\\end{array} \\right),\n$$\n\\begin{equation}\n \\hat{B}_\\mu (x)=g'YB_\\mu (x)=g'\\frac{i}{2}\\left(\\begin{array}{cc}\n\tB_{\\mu} & 0 \\\\\n0 & B_{\\mu}\n\\end{array} \\right)\n\\label{eq6-a}\n\\end{equation}\n take their values in Lie algebras $su(2;j),$ $u(1)$ respectively, and the stress tensors are\n$$ \nF_{\\mu\\nu}(x;j)={\\cal F}_{\\mu\\nu}(x;j)+[A_\\mu(x;j),A_\\nu(x;j)]=\n$$\n$$\n=g\\frac{i}{2}\\left(\\begin{array}{cc}\n\tF^3_\\mu & j(F^1_\\mu -iF^2_\\mu ) \\\\\nj(F^1_\\mu + iF^2_\\mu ) & -F^3_\\mu \n\\end{array} \\right), \n$$\n\\begin{equation} \nB_{\\mu\\nu}=\\partial_{\\mu}B_{\\nu}-\\partial_{\\nu}B_{\\mu}, \n\\label{eq7-a} \n\\end{equation}\nor in components \n$$\nF_{\\mu\\nu}^1={\\cal F}_{\\mu\\nu}^1 + g(A_\\mu^2A_\\nu^3-A_\\mu^3A_\\nu^2), \\quad\nF_{\\mu\\nu}^2={\\cal F}_{\\mu\\nu}^2 +g(A_\\mu^3A_\\nu^1-A_\\mu^1A_\\nu^3),\n$$\n\\begin{equation}\nF_{\\mu\\nu}^3={\\cal F}_{\\mu\\nu}^3 + j^2g(A_\\mu^1A_\\nu^2-A_\\mu^2A_\\nu^1),\n\\label{eq8-a} \n\\end{equation}\nwhere ${\\cal F}_{\\mu\\nu}^k=\\partial_\\mu A_\\nu^k- \\partial_\\nu A_\\mu^k. $ \n\n\n\nThe Lagrangian $L(j)$ (\\ref{eq1}) describe massless fields. \nIn a standard approach to generate mass terms for the vector bosons the \"`sombrero\"' potential is added to the matter field Lagrangian \n$ L_{\\phi}(j=1)$ (\\ref{eq3}) and after that the Higgs mechanism is used.\nThe different way \n\\cite{Gr-07} is based on the fact that the quadratic form $\\phi^{\\dagger}\\phi=\\rho^2$\nis invariant with respect to gauge transformations. This quadratic form define the 3-dimensional sphere $S_3$ of the radius $\\rho>0$ in the target space $\\Phi_2$ which is ${\\bf C_2}$ or ${\\bf R_4}$ if real components are counted. In other words the radial coordinates $R_{+}\\times S_3$ are introduced in ${\\bf R_4}.$ \nThe vector boson masses are generated by the transformation of Lagrangian $L(j=1)$ (\\ref{eq1}) to the coordinates on the sphere $S_3$ and are the same as in the standard model. \nHiggs boson field does not appeared if the sphere radius does not depend on the space-time coordinates $\\rho=R=const$ \\cite{Gr-07}. For $\\rho\\neq const$ the real positive massless scalar field --- analogy of dilaton or kind of Goldstone mode --- is presented in the model \\cite{F-08}.\n\n\n\n\nThe complex space $\\Phi_2(j)$ can be regarded as 4-dimensional real space ${\\bf R}_4(j)$. \n Let us introduce the real fields \n\\begin{equation} \n\\phi_1=r(1+i\\psi_3), \\quad \\phi_2=r(\\psi_2+i\\psi_1). 
\n\\label{eq7}\n\\end{equation} \nThe substitution $\\phi_2 \\rightarrow j\\phi_2$ induces the following substitutions \n \\begin{equation}\n\\psi_1 \\rightarrow j\\psi_1,\\;\\psi_2 \\rightarrow j\\psi_2,\\;\\psi_3 \\rightarrow \\psi_3,\\; r\\rightarrow r\n\\label{eq7-1}\n\\end{equation} \nfor the real fields. \n\n \n For the real fields the form (\\ref{g1}) is written as\n$r^2(1 + \\bar{\\psi}^2(j))=R^2, $ where $\\bar{ \\psi}^2(j)=j^2(\\psi_1^2+ \\psi_2^2)+\\psi_3^2,$ therefore \n\\begin{equation} \nr=\\frac{R}{\\sqrt{1 + \\bar{ \\psi}^2(j)}}. \n\\label{eq9}\n\\end{equation} \nHence there are \nthree independent real fields $\\bar{\\psi}(j)=(j\\psi_1,j\\psi_2,\\psi_3).$ These fields belong to the space $ \\Psi_3(j) $\nwith noneuclidean geometry which is realized on the 3-dimensional \"`sphere\"' of the radius $R$ in the 4-dimensional space \n${\\bf R}_4(j)$. The fields $\\bar{\\psi}(j)$ are intrinsic Beltrami coordinates on $ \\Psi_3(j)$. The space $ \\Psi_3(j=1)\\equiv S_3 $\nhas non degenerate spherical geometry, but $ \\Psi_3(j=\\iota) $ is fibered space of constant curvature with 1-dimensional base $\\{\\psi_3\\}$ and 2-dimensional fiber $\\{\\psi_1,\\psi_2\\}$ \\cite{Gr-09}, so-called semi-spherical space \\cite{P-65}, which can be interpreted as nonrelativistic $(1+2)$ kinematic with curvature or Newton kinematic \\cite{Gr-90}.\n\n\nThe {\\it free} Lagrangian (\\ref{eq3}) is transforms to \n the {\\it free} gauge invariant matter field Lagrangian $L_\\psi(j) $ on $ \\Psi_3(j) $, which is defined with the help of the metric tensor $g_{kl}(j)$ \\cite{Gr-07} of the space $ \\Psi_3(j) $\n$$ \ng_{11}=\\frac{1+\\psi_3^2+j^2\\psi_2^2}{(1+\\bar{\\psi}^2(j))^2}, \\quad\ng_{22}=\\frac{1+\\psi_3^2+j^2\\psi_1^2}{(1+\\bar{\\psi}^2(j))^2}, \\quad\ng_{33}=\\frac{1+j^2(\\psi_1^2+\\psi_2^2)}{(1+\\bar{\\psi}^2(j))^2}, \n$$\n$$\ng_{12}=g_{21}=\\frac{-j^2\\psi_1 \\psi_2}{(1+\\bar{\\psi}^2(j))^2},\\;\ng_{13}=g_{31}=\\frac{-j\\psi_1 \\psi_3}{(1+\\bar{\\psi}^2(j))^2},\\;\ng_{23}=g_{32}=\\frac{-j^2\\psi_2 \\psi_3}{(1+\\bar{\\psi}^2(j))^2}\n$$\n in the form \n$$\nL_\\psi(j)=\\frac{R^2}{2}\\sum_{k,l=1}^3g_{kl}(j)D_\\mu\\psi_k(j)D_\\mu\\psi_l(j)=\n$$\n\\begin{equation}\n=\\frac{R^2\\left[(1+\\bar{\\psi}^2(j))(D_{\\mu}\\bar{\\psi}(j))^2-(\\bar{\\psi}(j),D_{\\mu}\\bar{\\psi}(j))^2\\right] }{2(1+\\bar{\\psi}^2(j))^2}.\n\\label{eq10}\n\\end{equation}\n\n\nThe covariant derivatives (\\ref{eq4}) are obtained from the the representations of generators for the algebras $su(2),$ $u(1)$ in the space $\\Psi_3$ \\cite{Gr-07} with the help of the substitutions (\\ref{eq7-1}) \n$$\nT_1\\bar{\\psi}(j)=\\frac{i}{2}\\left(\\begin{array}{c}\n-j(1+j^2\\psi_1^2)\t \\\\\n j(\\psi_3-j^2\\psi_1\\psi_2) \\\\\n -j^2(\\psi_2+\\psi_1\\psi_3)\n\\end{array} \\right), \\quad\nT_2\\bar{\\psi}(j)=\\frac{i}{2}\\left(\\begin{array}{c}\n-j(\\psi_3+j^2\\psi_1\\psi_2) \\\\\n -j(1+j^2\\psi_2^2)\\\\\n \tj^2(\\psi_1-\\psi_2\\psi_3)\n\\end{array} \\right),\n$$\n$$ \nT_3\\bar{\\psi}(j) =\\frac{i}{2}\\left(\\begin{array}{c}\nj(-\\psi_2+\\psi_1\\psi_3)\t\\\\\nj(\\psi_1+\\psi_2\\psi_3) \\\\\n 1+\\psi_3^2\n\\end{array} \\right), \\quad\nY\\bar{\\psi}(j) =\\frac{i}{2}\\left(\\begin{array}{c}\n-j(\\psi_2+\\psi_1\\psi_3)\t\\\\\n j(\\psi_1-\\psi_2\\psi_3) \\\\\n -(1+\\psi_3^2)\n\\end{array} \\right)\n$$\nand are as follows: \n$$ \nD_\\mu \\psi_1=\\partial_\\mu \\psi_1 - \\frac{g'}{2}(\\psi_2+\\psi_1\\psi_3)B_\\mu + \n$$\n$$\n+\\frac{g}{2}\\left[-(1+j^2\\psi_1^2)A_\\mu^1 -(\\psi_3+j^2\\psi_1\\psi_2)A_\\mu^2-(\\psi_2-\\psi_1\\psi_3)A_\\mu^3 \\right], \n$$ 
\n$$ \n D_\\mu \\psi_2=\\partial_\\mu \\psi_2 + \\frac{g'}{2}(\\psi_1-\\psi_2\\psi_3)B_\\mu + \n $$\n $$\n +\\frac{g}{2}\\left[(\\psi_3-j^2\\psi_1\\psi_2)A_\\mu^1 -(1+j^2\\psi_2^2)A_\\mu^2 + (\\psi_1+\\psi_2\\psi_3)A_\\mu^3 \\right], \n$\n$$\n D_\\mu \\psi_3=\\partial_\\mu \\psi_3 - \\frac{g'}{2}(1+\\psi_3^2)B_\\mu +\n $$\n \\begin{equation} \n +\\frac{g}{2}\\left[-j^2(\\psi_2+\\psi_1\\psi_3)A_\\mu^1 +j^2(\\psi_1-\\psi_2\\psi_3)A_\\mu^2+(1+\\psi_3^2)A_\\mu^3 \\right]. \n\\label{eq11}\n\\end{equation} \nThe gauge fields Lagrangian (\\ref{eq2}) does not depend on the fields $\\phi$ and therefore remains unchanged.\nSo the full Lagrangian (\\ref{eq1}) is given by the sum of (\\ref{eq2}) and (\\ref{eq10})\n\nFor small fields, the second order part of the Lagrangian (\\ref{eq10}) is written as\n\\begin{equation} \nL_\\psi^{(2)}(j)=\\frac{R^2}{2}\\left[(D_\\mu \\bar{\\psi}(j))^{(1)}\\right]^2 = \n\\frac{R^2}{2}\\sum_{k=1}^3\\left[(D_\\mu \\psi_k(j))^{(1)}\\right]^2 , \n\\label{eq12}\n\\end{equation} \nwhere linear terms in covariant derivates (\\ref{eq11}) have the form\n$$\n(D_\\mu \\psi_1)^{(1)}= \\partial_\\mu\\psi_1-\\frac{g}{2}A_\\mu^1= -\\frac{g}{2}\\left(A_\\mu^1-\\frac{2}{g}\\partial_\\mu\\psi_1\\right) =-\\frac{g}{2}\\hat{A}_\\mu^1,\n$$\n$$ \n(D_\\mu \\psi_2)^{(1)}= \\partial_\\mu\\psi_2-\\frac{g}{2}A_\\mu^2= \n-\\frac{g}{2}\\left(A_\\mu^2-\\frac{2}{g}\\partial_\\mu\\psi_2\\right)= -\\frac{g}{2}\\hat{A}_\\mu^2,\n$$\n$$ \n(D_\\mu \\psi_3)^{(1)}=\\partial_\\mu\\psi_3+\\frac{g}{2}A_\\mu^3-\\frac{g'}{2}B_\\mu=\n\\frac{1}{2}\\sqrt{g^2+g'^2}Z_\\mu.\n$$\nThe new fields \n$$ \nW^{\\pm}_\\mu = \\frac{1}{\\sqrt{2}}\\left(\\hat{A}^1_\\mu \\mp i \\hat{A}^2_\\mu \\right), \\quad\n Z_\\mu =\\frac{gA^3_\\mu-g'B_\\mu + 2\\partial_\\mu \\psi_3}{\\sqrt{g^2+g'^2}}, \\quad \n A_\\mu =\\frac{g'A^3_\\mu+gB_\\mu}{\\sqrt{g^2+g'^2}}\n$$\nare transformed as \n$$\nW^{\\pm}_\\mu \\rightarrow j W^{\\pm}_\\mu,\\; Z_\\mu \\rightarrow Z_\\mu,\\; A_\\mu \\rightarrow A_\\mu \n$$ \nand Lagrangian (\\ref{eq12}) is rewritten as follows\n$$ \n L_{\\psi}^{(2)}=\n j^2 \\frac{R^2g^2}{4}W^{+}_\\mu W^{-}_\\mu +\\frac{R^2(g^2+g'^2)}{8}\\left(Z_\\mu \\right)^2.\n$$\n\nThe quadratic part of the full Lagrangian \n$$ \nL_0(j)= L_A^{(2)}(j) + L_{\\psi}^{(2)}(j)= \n$$\n$$ \n= - \\frac{1}{4}({\\cal F}_{\\mu\\nu})^2 -\\frac{1}{4}({\\cal Z}_{\\mu\\nu})^2 +\\frac{m_Z^2}{2}\\left(Z_\\mu \\right)^2 +\nj^2\\left\\{ -\\frac{1}{2}{\\cal W}^{+}_{\\mu\\nu}{\\cal W}^{-}_{\\mu\\nu} + m_{W}^2W^{+}_\\mu W^{-}_\\mu \\right\\} \\equiv\n$$ \n\\begin{equation} \n\\equiv L_b + j^2 L_f,\n\\label{neq1}\n\\end{equation} \nwhere \n\\begin{equation} \nm_W=\\frac{Rg}{2}, \\quad m_Z=\\frac{R}{2}\\sqrt{g^2+g'^2},\n\\label{eq13}\n\\end{equation}\nand \n$\n{\\cal Z}_{\\mu\\nu}=\\partial_\\mu Z_\\nu-\\partial_\\nu Z_\\mu, \\; \n{\\cal F}_{\\mu\\nu}=\\partial_\\mu A_\\nu-\\partial_\\nu A_\\mu, \\; \n{\\cal W^{\\pm}}_{\\mu\\nu}=\\partial_\\mu W^{\\pm}_\\nu-\\partial_\\nu W^{\\pm}_\\mu \\; \n$ \nare abelian stress tensors \n describes all the experimentally verified parts of the standard Electroweek Model \n but does not include the scalar Higgs field. 
\n \nThe interaction part of the full Lagrangian in the first degree of approximation is given by\n$$ \nL_{int}^{(1)}(j)=j^2\\left[L_{A}^{(3)} + L_{\\psi}^{(3)}\\right],\n$$\nwhere the third order terms of the gauge field Lagrangian (\\ref{eq2}) are\n\\begin{eqnarray*}\n L_A^{(3)} =-\\frac{g}{\\sqrt{g^2+g'^2}}\\left\\{i\\left( \\mathcal{W}_{\\mu\\nu}^-W_\\mu^+ - \\mathcal{W}_{\\mu\\nu}^+W_\\mu^- \\right)\\left(g'A_\\nu+gZ_\\nu \\right) - \\right. \\nonumber\\\\\n-\\frac{\\sqrt{2}}{g}\\left(g'A_\\mu+gZ_\\mu \\right) \n \\left[\\mathcal{W}_{\\mu\\nu}^+(\\partial_{\\nu}\\psi_2-i\\partial_{\\nu}\\psi_1) + \\mathcal{W}_{\\mu\\nu}^- (\\partial_{\\nu}\\psi_2+i\\partial_{\\nu}\\psi_1) \\right]- \\nonumber\\\\\n-\\frac{2ig}{\\sqrt{g^2+g'^2}}\\left(\\mathcal{W}_{\\mu\\nu}^-W_\\mu^+ - \\mathcal{W}_{\\mu\\nu}^+W_\\mu^- \\right)\\partial_{\\nu}\\psi_3 - \\nonumber\\\\\n-\\frac{2\\sqrt{2}}{\\sqrt{g^2+g'^2}}\\left[\\mathcal{W}_{\\mu\\nu}^+(\\partial_{\\nu}\\psi_2-i\\partial_{\\nu}\\psi_1) + \\mathcal{W}_{\\mu\\nu}^-(\\partial_{\\nu}\\psi_2+i\\partial_{\\nu}\\psi_1) \\right]\\partial_{\\nu}\\psi_3 + \\nonumber\\\\\n+\\left(g'\\mathcal{F}_{\\mu\\nu}+ g\\mathcal{Z}_{\\mu\\nu}\\right)\n \\left\\{\\frac{i}{4}\\left[\\left(W_{\\mu}^+ \\right)^2 - \\left(W_{\\mu}^- \\right)^2 \\right] +\n\\frac{4}{g^2}\\partial_{\\mu}\\psi_1\\partial_{\\nu}\\psi_2 + \\right. \\nonumber\\\\\n\\left. \\left. \n+ \\frac{\\sqrt{2}}{g}\\left[W_\\mu^+ (\\partial_{\\nu}\\psi_2-i\\partial_{\\nu}\\psi_1) + W_\\mu^- (\\partial_{\\nu}\\psi_2+i\\partial_{\\nu}\\psi_1) \\right] \\right\\} \\right\\} \\nonumber\\\\\n\\end{eqnarray*}\nand those of the matter field Lagrangian (\\ref{eq10}) are\n\\begin{eqnarray*}\n L_\\psi^{(3)}= \\frac{R^2g}{2\\sqrt{2}}\\left\\{\nW_{\\mu}^+\\left[\\psi_3(\\partial_\\mu \\psi_2 - i\\partial_\\mu \\psi_1) -\n\\frac{g^2-g'^2}{g^2+g'^2}(\\psi_2 -i\\psi_1) \\partial_\\mu \\psi_3 + \\right. \\right. \\nonumber\\\\\n\\left. + \\frac{g'\\left(gA_{\\mu}-g'Z_{\\mu} \\right)}{\\sqrt{g^2+g'^2}}(\\psi_2 -i\\psi_1) \\right] \n+W_{\\mu}^-\\left[\\psi_3(\\partial_\\mu \\psi_2 + i\\partial_\\mu \\psi_1) - \\right.\\nonumber\\\\\n\\left. - \\frac{g^2-g'^2}{g^2+g'^2}(\\psi_2 +i\\psi_1) \\partial_\\mu \\psi_3 + \n \\frac{g'\\left(gA_{\\mu}-g'Z_{\\mu} \\right)}{\\sqrt{g^2+g'^2}}(\\psi_2 +i\\psi_1) \\right]+ \\nonumber\\\\\n\\left.+\\frac{1}{g}\\sqrt{g^2+g'^2}Z_{\\mu}\\left(\\psi_1\\partial_\\mu \\psi_2 - \\psi_2\\partial_\\mu \\psi_1 \\right)\n\\right\\}. \n\\end{eqnarray*}\n\nThe fermion Lagrangian of the standard Electroweek Model\nis taken in the form \\cite{R-99}\n\\begin{equation} \nL_F=L_l^{\\dagger}i\\tilde{\\tau}_{\\mu}D_{\\mu}L_l + e_r^{\\dagger}i\\tau_{\\mu}D_{\\mu}e_r -\nh_e[e_r^{\\dagger}(\\phi^{\\dagger}L_l) +(L_l^{\\dagger}\\phi)e_r],\n\\label{eq14}\n\\end{equation}\nwhere\n$\nL_l= \\left(\n\\begin{array}{c}\n\te_l\\\\\n\t\\nu_{e,l}\n\\end{array} \\right)\n$\nis the $SU(2)$-doublet, $e_r $ the $SU(2)$-singlet, $h_e$ is constant and $e_r, e_l, \\nu_e $ are two component Lorentzian spinors. \nHere $\\tau_{\\mu}$ are Pauli matricies, \n$\\tau_{0}=\\tilde{\\tau_0}={\\bf 1},$ $\\tilde{\\tau_k}=-\\tau_k. $ \nThe covariant derivatives $D_{\\mu}L_l $ are given by (\\ref{eq4}) with $L_l$ instead of \n$\\phi$ and $D_{\\mu}e_r=(\\partial_\\mu + ig'B_\\mu)e_r. $\n The convolution on the inner indices of $SU(2)$-doublet is denoted by $(\\phi^{\\dagger}L_l)$.\n\nThe matter field $\\phi$ appears in Lagrangian (\\ref{eq14}) only in mass terms. 
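Manipulations of the lepton Lagrangian (\\ref{eq14}) rely on standard identities for the matrices $\\tau_{\\mu}$, $\\tilde{\\tau}_{\\mu}$ introduced above. The following sympy sketch is our own consistency check, not part of the paper: with $\\tau_{0}=\\tilde{\\tau}_{0}={\\bf 1}$ and $\\tilde{\\tau}_{k}=-\\tau_{k}$, the combination $\\tau_{\\mu}\\tilde{\\tau}_{\\nu}+\\tau_{\\nu}\\tilde{\\tau}_{\\mu}$ is proportional to the unit matrix with coefficient $2\\,\\mathrm{diag}(1,-1,-1,-1)_{\\mu\\nu}$.
\\begin{verbatim}
# Our own consistency check of the two-component spinor matrices used in (eq14):
# tau_mu tilde{tau}_nu + tau_nu tilde{tau}_mu = 2 diag(1,-1,-1,-1)_{mu nu} * 1.
import sympy as sp

I2 = sp.eye(2)
tau = [I2,
       sp.Matrix([[0, 1], [1, 0]]),
       sp.Matrix([[0, -sp.I], [sp.I, 0]]),
       sp.Matrix([[1, 0], [0, -1]])]
tau_tilde = [I2] + [-t for t in tau[1:]]   # tilde{tau}_0 = 1, tilde{tau}_k = -tau_k

eta = sp.diag(1, -1, -1, -1)
ok = all(sp.simplify(tau[m] * tau_tilde[n] + tau[n] * tau_tilde[m]
                     - 2 * eta[m, n] * I2) == sp.zeros(2, 2)
         for m in range(4) for n in range(4))
print(ok)   # expected: True
\end{verbatim}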
\nWhen the gauge group $SU(2)$ is contracted to $SU(2;j)$ and the matter field is fibered to $\\phi(j)$\nthe same take place with doublet $L_l$, namely, the first component $\te_l$ does not changed, but the second component is multiplied by contraction parameter: $\\nu_{e,l} \\rightarrow j\\nu_{e,l}$. \nWith the use of (\\ref{eq7}),(\\ref{eq9}) and these substitution the mass terms are rewritten in the form\n$$\nh_e[e_r^{\\dagger}(\\phi^{\\dagger}(j)L_l(j)) +(L_l^{\\dagger}(j)\\phi(j))e_r]=\n\\frac{h_eR}{\\sqrt{1+\\bar{\\psi}^2(j)}}\\left\\{e_r^{\\dagger}e_l + e_l^{\\dagger}e_r + \\right.\n$$\n\\begin{equation} \n\\left. +i\\psi_3\\left(e_l^{\\dagger}e_r - e_r^{\\dagger}e_l \\right) + \nij^2 \\left[\\psi_1\\left(\\nu_{e,l}^{\\dagger}e_r - e_r^{\\dagger}\\nu_{e,l} \\right)+\ni\\psi_2\\left(\\nu_{e,l}^{\\dagger}e_r + e_r^{\\dagger}\\nu_{e,l} \\right)\\right]\n\\right\\},\n\\label{n1-1}\n\\end{equation}\nwhere\nthe $SU(2)$-singlet $e_r $ does not transforms under contraction.\n\n\n\\section{Limiting case of Higgsless Electroweak Model}\n\n\nAs it was mentioned the vector boson masses are automatically (without any Higgs mechanism) generated by the transformation of the free Lagrangian of the standard Electroweak Model to the Lagrangian (\\ref{eq1}),(\\ref{eq2}),(\\ref{eq10}) expressed in some coordinates on the sphere $\\Psi_3(j)$. \nAnd this statement is true for both values of contraction parameter $j=1,\\iota$.\nWhen contraction parameter tends to zero $j^2\\rightarrow 0$, then the contribution of W-bosons fields to the quadratic part of the Lagrangian (\\ref{neq1}) will be small in comparison with the contribution of Z-boson and electromagnetic fields. \nIn other words the limit Lagrangian includes only Z-boson and electromagnetic fields. Therefore charded W-bosons fields does not effect on these fields.\nThe part $L_f$ form a new Lagrangian for W-bosons fields and their interactions with other fields. \nThe appearance of two Lagrangians $L_b$ and $L_f$ for the limit model is in correspondence with two hermitian forms of fibered space $\\Phi_2(\\iota)$, which are invariant under the action of contracted gauge group $SU(2;\\iota)$. \n Electromagnetic and Z-boson fields can be regarded as external fields with respect to the W-bosons fields. \n\nIn mathematical language the field space $\\left\\{ A_{\\mu}, Z_{\\mu}, W_{\\mu}^{\\pm}\\right\\}$\nis fibered after contraction $j=\\iota$ to the base $\\left\\{ A_{\\mu}, Z_{\\mu}\\right\\}$ and the fiber \n$\\left\\{W_{\\mu}^{\\pm}\\right\\}.$ \n(In order to avoid terminological misunderstanding let us stress that we have in view locally trivial fibering, which\nis defined by the projection $pr:\\; \\left\\{ A_{\\mu}, Z_{\\mu}, W_{\\mu}^{\\pm}\\right\\} \\rightarrow \\left\\{ A_{\\mu}, Z_{\\mu}\\right\\}$ in the field space. \nThis fibering is understood in the context of semi-Riemannian geometry \\cite{Gr-09} and has nothing to do with the principal fiber bundle.)\nThen $L_b$ in (\\ref{neq1}) presents Lagrangian in the base and $L_f$ is Lagrangian in the fiber. In general, properties of a fiber are depend on a points of a base and not the contrary.\nIn this sense fields in the base are external one with respect to fields in the fiber. \n\nThe fermion Lagrangian (\\ref{eq14})\nfor nilpotent value of the contraction parameter $j=\\iota $ is also splited on electron part in the base and neutrino part in the fiber. 
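The effect of the nilpotent value $j=\\iota$, $\\iota^2=0$, on expressions such as $L_0(j)=L_b+j^2L_f$ in (\\ref{neq1}) or the mass terms (\\ref{n1-1}) can be mimicked by dual-number arithmetic. The toy sketch below is our own illustration, not part of the paper; it only shows that any term carrying a factor $j^2$ drops out identically once $j$ is replaced by a nilpotent unit, which is the mechanism behind the splitting into base and fiber discussed here.
\\begin{verbatim}
# A toy dual-number model (our own illustration) of the nilpotent contraction
# parameter iota with iota**2 = 0: terms multiplied by j**2, such as j**2 * L_f
# in (neq1), vanish identically at j = iota.
class Dual:
    """Numbers of the form a + b*iota with iota**2 = 0."""
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.a + other.a, self.b + other.b)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # (a1 + b1*iota)(a2 + b2*iota) = a1*a2 + (a1*b2 + a2*b1)*iota
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)
    __rmul__ = __mul__
    def __repr__(self):
        return f"{self.a} + {self.b}*iota"

iota = Dual(0.0, 1.0)
L_b, L_f = 2.5, 7.0                     # numerical stand-ins for the two pieces of L_0(j)
print(iota * iota)                      # 0.0 + 0.0*iota : the defining relation iota^2 = 0
print(L_b + (iota * iota) * L_f)        # 2.5 + 0.0*iota : only the base Lagrangian survives
\end{verbatim}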
\nThis means that in the limit model electron field is external one relative to neutrino field.\nThe mass terms (\\ref{n1-1}) for $j=\\iota $ are\n\\begin{equation}\nh_e[e_r^{\\dagger}(\\phi^{\\dagger}(\\iota)L_l(\\iota)) +(L_l^{\\dagger}(\\iota)\\phi(\\iota))e_r]\n=\\frac{h_eR}{\\sqrt{1+\\psi_3^2}} \\left[ e_r^{\\dagger}e_l^{-} + e_l^{- \\dagger}e_r \n +i\\psi_3\\left(e_l^{- \\dagger}e_r - e_r^{\\dagger}e_l^{-} \\right) \\right]. \n\\label{n1}\n\\end{equation}\nIts second order terms \n$ h_eR\\left(e_r^{\\dagger}e_l^{-} + e_l^{- \\dagger}e_r \\right)$ \nprovide the electron mass $m_e=h_eR $ and neutrino remain massless.\n\n\nLet us note that field interactions in contracted model are more simple as compared with the standard Electroweak Model due to nullification of some terms.\n\n\n\n\\section{Conclusions} \n\n\nThe modified formulation of the Electroweak Model with the gauge group $SU(2)\\times U(1)$ based on\nthe 3-dimensional spherical geometry in the target space is suggested. This model describes all experimentally observed fields and does not include the (up to now unobserved) scalar Higgs field. \nThe {\\it free} Lagrangian in the spherical matter field space is used instead of Lagrangian with\nthe potential of the special \"`sombrero\"' form.\nThe gauge field Lagrangian is the standard one. \nThere is no need in Higgs \nmechanism since the vector field masses are generated automatically.\n\n\nWe have discussed the limiting case of the modified Higgsless Electroweak Model, which corresponds to the contracted gauge group $SU(2;j)\\times U(1)$, where $j=\\iota$ or $j \\rightarrow 0$.\nThe masses of the all experimentally verified particles involved in the Electroweak Model remain the same under contraction, but interactions of the fields are changed in two aspects. \nFirstly all field interactions become more simpler due to nullification of some terms in Lagrangian. \nSecondly interrelation of the fields become more complicated. All fields are divided on two classes: fields in the base\n(Z-boson, electromagnetic and electron) and fields in the fiber (W-bosons and neutrino). \nThe base fields can be interpreted as external ones with respect to the fiber fields, i.e. Z-boson, electromagnetic and electron fields can interact with W-bosons and neutrino fields, but W-bosons and neutrino fields do not effect on these fields within framework of the limit model.\n\n\nThis work has been supported in part \nby the Russian Foundation for Basic Research, grant 08-01-90010-Bel-a\nand the program \"`Fundamental problems of nonlinear dynamics\"' of Russian Academy of Sciences. 
\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe \\emph{competition graph} $C(D)$ of a digraph $D$ is an undirected graph which\nhas the same vertex set as $D$ and which has an edge $xy$\nbetween two distinct vertices $x$ and $y$\nif and only if for some vertex $z \\in V$,\nthe arcs $(x,z)$ and $(y,z)$ are in $D$.\n\nLet $d$ be a positive integer.\nFor $x = (x_1,x_2,\\ldots, x_d)$,\n$y = (y_1,y_2,\\ldots, y_d) \\in \\mathbb{R}^d$,\nwe write\n$x \\prec y$\nif $x_i 0 \\}$.\nFor a point $v=(v_1, v_2, v_3) \\in \\mathcal{H}_+$,\nlet $p_{1}^{(v)}$, $p_{2}^{(v)}$, and $p_{3}^{(v)}$\nbe points in $\\mathbb{R}^3$ defined by\n$p_{1}^{(v)} := (-v_2-v_3, v_2, v_3)$,\n$p_{2}^{(v)} := (v_1, -v_1-v_3, v_3)$, and\n$p_{3}^{(v)} := (v_1, v_2, -v_1-v_2)$,\nand let $\\triangle(v)$ be the convex hull of the points\n$p_{1}^{(v)}$, $p_{2}^{(v)}$, and $p_{3}^{(v)}$, i.e.,\n$\n\\triangle(v) := \\text{{\\rm Conv}}(p_{1}^{(v)},p_{2}^{(v)},p_{3}^{(v)})\n= \\left\\{ \\sum_{i=1}^3 \\lambda_i p_{i}^{(v)}\n\\mid \\sum_{i=1}^3 \\lambda_i=1, \\lambda_i \\geq 0 \\ (i=1,2,3) \\right\\}.\n$\nThen it is easy to check that $\\triangle(v)$ is an closed equilateral triangle\nwhich is contained in the plane $\\mathcal{H}$.\nLet $A(v)$ be the relative interior of the closed triangle $\\triangle(v)$, i.e.,\n$\nA(v) := \\text{rel.int}(\\triangle(v))\n= \\left\\{ \\sum_{i=1}^3 \\lambda_i p_{i}^{(v)}\n\\mid \\sum_{i=1}^3 \\lambda_i=1, \\lambda_i > 0 \\ (i=1,2,3) \\right\\}.\n$\nThen\n$A(v)$ and $A(w)$ are homothetic for any $v,w \\in \\mathcal{H}_+$.\n\nFor $v \\in \\mathcal{H}_+$ and $(i,j) \\in \\{(1,2),(2,3),(1,3)\\}$,\nlet $l_{ij}^{(v)}$ denote the line through\nthe two points $p^{(v)}_{i}$ and $p^{(v)}_{j}$, i.e.,\n$\nl_{ij}^{(v)} := \\{ x \\in \\mathbb{R}^3 \\mid\nx = \\alpha p^{(v)}_{i} + (1 - \\alpha) p^{(v)}_{j}, \\alpha \\in \\mathbb{R} \\},\n$\nand let $R_{ij}(v)$ denote the following region:\n\\[\nR_{ij}(v) := \\{ x \\in \\mathbb{R}^3 \\mid\nx = (1-\\alpha - \\beta)p^{(v)}_{k} + \\alpha p^{(v)}_{i} + \\beta p^{(v)}_{j} ,\n0 \\leq \\alpha \\in \\mathbb{R}, 0 \\leq \\beta \\in \\mathbb{R}, \\alpha + \\beta \\geq 1 \\},\n\\]\nwhere $k$ is the element in $\\{1,2,3\\} \\setminus \\{i,j\\}$;\nfor $k \\in \\{1,2,3\\}$, let $R_{k}(v)$ denote the following region:\n\\[\nR_{k}(v) := \\{ x \\in \\mathbb{R}^3 \\mid\nx = (1 + \\alpha + \\beta)p^{(v)}_{k} - \\alpha p^{(v)}_{i} - \\beta p^{(v)}_{j},\n0 \\leq \\alpha \\in \\mathbb{R}, 0 \\leq \\beta \\in \\mathbb{R} \\},\n\\]\nwhere $i$ and $j$ are elements such that\n$\\{i,j,k\\} = \\{1,2,3\\}$.\n(See Figure~\\ref{region} for an illustration.)\n\n\\begin{figure}\n\\psfrag{A}{ $p^{(v)}_{1}$}\n\\psfrag{B}{ $p^{(v)}_{2}$}\n\\psfrag{C}{ $p^{(v)}_{3}$}\n\\psfrag{D}{ $\\triangle(v)$}\n\\psfrag{E}{ $R_{23}(v)$}\n\\psfrag{F}{ $R_3(v)$}\n\\psfrag{G}{ $R_{13}(v)$}\n\\psfrag{H}{ $R_1(v)$}\n\\psfrag{I}{ $R_{12}(v)$}\n\\psfrag{J}{ $R_2(v)$}\n\\psfrag{K}{ $l_{12}^{(v)}$}\n\\psfrag{L}{ $l_{13}^{(v)}$}\n\\psfrag{M}{ $l_{23}^{(v)}$}\n\\begin{center}\n\\includegraphics[height=5cm]{region3.eps}\n\\end{center}\n\\vskip-1em\n\\caption{The regions determined by $v$. 
By our assumption, for any vertex $u$ of a graph considered in this paper, $p_1^{(u)}$, $p_2^{(u)}$, $p_3^{(u)}$ correspond to $p_1^{(v)}$, $p_2^{(v)}$, $p_3^{(v)}$ respectively.\n}\n\\label{region}\n\\end{figure}\n\nIf a graph $G$ satisfies $\\dim_{\\text{{\\rm poc}}}(G) \\leq 3$, then, by Theroem~\\ref{thm:intersectiongeneral}, we may assume that $V(G) \\subseteq \\mathcal{H}_+$ by translating each of the vertices of $G$ in the same direction and by the same amount.\n\n\\begin{Lem}\\label{lem:not-incl1}\nLet $D$ be a $3$-partial order and\nlet $G$ be the competition graph of $D$.\nSuppose that $G$ contains an induced path $uvw$ of length two.\nThen neither $A(u) \\cap A(v) \\subseteq A(w)$ nor $A(v) \\cap A(w) \\subseteq A(u)$.\n\\end{Lem}\n\n\\begin{proof}\nWe show by contradiction.\nSuppose that $A(u) \\cap A(v) \\subseteq A(w)$ or $A(v) \\cap A(w) \\subseteq A(u)$.\nBy symmetry, we may assume without loss of generality that $A(u) \\cap A(v) \\subseteq A(w)$.\nSince $u$ and $v$ are adjacent in $G$,\nthere exists a vertex $a \\in V(G)$ such that\n$\\triangle(a) \\subseteq A(u) \\cap A(v)$\nby Theorem~\\ref{thm:intersectiongeneral}.\nTherefore\n$\\triangle(a) \\subseteq A(w)$.\nSince $\\triangle(a) \\subseteq A(u)$,\n$u$ and $w$ are adjacent in $G$ by Theorem~\\ref{thm:intersectiongeneral},\nwhich is a contradiction to the assumption that\n$u$ and $w$ are not adjacent in $G$.\nHence the lemma holds.\n\\end{proof}\n\n\\begin{Defi}\nFor $v,w \\in \\mathcal{H}_+$,\nwe say that $v$ and $w$ are\n\\emph{crossing} if\n$A(v) \\cap A(w) \\neq \\emptyset$,\n$A(v) \\setminus A(w) \\neq \\emptyset$, and\n$A(w) \\setminus A(v) \\neq \\emptyset$.\n\\end{Defi}\n\n\\begin{Lem}\\label{lem:not-incl2}\nLet $D$ be a $3$-partial order and\nlet $G$ be the competition graph of $D$.\nSuppose that $G$ contains an induced path $xuvw$ of length three.\nThen $u$ and $v$ are crossing.\n\\end{Lem}\n\n\\begin{proof}\nSince $u$ and $v$ are adjacent in $G$, there exists a vertex $a \\in V(G)$\nsuch that $\\triangle(a) \\subseteq A(u) \\cap A(v)$\nby Theorem~\\ref{thm:intersectiongeneral}.\nTherefore $A(u) \\cap A(v) \\neq \\emptyset$.\nIf $A(v) \\subseteq A(u)$, then $A(v) \\cap A(w) \\subseteq A(u)$,\nwhich contradicts Lemma~\\ref{lem:not-incl1}.\nThus $A(v) \\setminus A(u) \\neq \\emptyset$.\nIf $A(u) \\subseteq A(v)$, then $A(x) \\cap A(u) \\subseteq A(v)$,\nwhich contradicts Lemma~\\ref{lem:not-incl1}.\nThus $A(u) \\setminus A(v) \\neq \\emptyset$.\nHence $u$ and $v$ are crossing.\n\\end{proof}\n\n\n\\begin{Lem}\\label{lem:intersecting3}\nIf $v$ and $w$ in $\\mathcal{H}_+$ are crossing,\nthen $p_k^{(x)} \\in \\triangle(y)$ for some $k \\in \\{1,2,3\\}$\nwhere $\\{x,y\\} = \\{v,w\\}$.\n\\end{Lem}\n\n\\begin{proof}\nSince $v$ and $w$ are crossing,\nwe have\n$A(v) \\cap A(w) \\neq \\emptyset$,\n$A(v) \\setminus A(w) \\neq \\emptyset$, and\n$A(w) \\setminus A(v) \\neq \\emptyset$.\nThen one of the vertices of the triangles $\\triangle(v)$ and $\\triangle(w)$\nis contained in the other triangle, thus the lemma holds.\n\\end{proof}\n\n\n\\begin{Defi}\nFor $k \\in \\{1,2,3\\}$,\nwe define a binary relation $\\stackrel{k}{\\rightarrow}$ on $\\mathcal{H}_+$ by\n\\[\nx \\stackrel{k}{\\rightarrow} y\n\\quad \\Leftrightarrow \\quad\n\\text{ $x$ and $y$ are crossing, and } p_k^{(y)} \\in \\triangle(x)\n\\]\nfor any $x, y \\in \\mathcal{H}_+$.\n\\end{Defi}\n\n\\begin{Lem}\\label{lem:transitive}\nLet $x,y,z \\in \\mathcal{H}_+$.\nSuppose that $x \\stackrel{k}{\\rightarrow} y$\nand $y \\stackrel{k}{\\rightarrow} z$ for some $k \\in 
\\{1,2,3\\}$\nand that $x$ and $z$ are crossing.\nThen $x \\stackrel{k}{\\rightarrow} z$.\n\\end{Lem}\n\n\\begin{proof}\nSince $x \\stackrel{k}{\\rightarrow} y$, $p_l^{(x)} \\not\\in R_i(y) \\cup R_{ij}(y) \\cup R_j(y)$ for each $l \\in \\{1,2,3\\}$,\nwhere $\\{i,j,k\\} = \\{1,2,3\\}$\nSince $y \\stackrel{k}{\\rightarrow} z$, $p_l^{(z)} \\in R_i(y) \\cup R_{ij}(y) \\cup R_j(y)$ for each $l \\in \\{i,j\\}$.\nSince $x$ and $z$ are crossing, $p_k^{(z)} \\in \\triangle(x)$.\n\\end{proof}\n\n\n\\begin{Defi}\nFor $k \\in \\{1,2,3\\}$,\na sequence $(v_1, \\ldots, v_m)$ of $m$ points in $\\mathcal{H}_+$, where $m \\geq 2$,\nis said to be \\emph{consecutively tail-biting in Type $k$}\nif $v_i \\stackrel{k}{\\rightarrow} v_j$ for any $i < j$\n(see Figure~\\ref{consecutive}).\nA finite set $V$ of points in $\\mathcal{H}_+$\nis said to be \\emph{consecutively tail-biting}\nif there is an ordering $(v_1, \\ldots, v_m)$ of $V$\nsuch that $(v_1, \\ldots, v_m)$ is consecutively tail-biting.\n\\end{Defi}\n\n\\begin{figure}\n\\psfrag{A}{ $A(v_1)$}\n\\psfrag{B}{ $A(v_2)$}\n\\psfrag{C}{ $A(v_3)$}\n\\psfrag{D}{ $A(v_4)$}\n\\psfrag{E}{(a)}\n\\psfrag{F}{(b)}\n\\psfrag{G}{(c)}\n\\begin{center}\n\\includegraphics[height=5cm]{consecutive1.eps} \\hskip2.5em\n\\includegraphics[height=5cm]{consecutive2.eps} \\hskip2.5em\n\\includegraphics[height=5cm]{consecutive3.eps}\n\\end{center}\n\\caption{The sequences $(v_1,v_2,v_3,v_4)$ in (a), (b), (c) are consecutively tail-biting of Type 1, 2, 3, respectively.}\n\\label{consecutive}\n\\end{figure}\n\n\\section{The partial order competition dimensions of diamond-free chordal graphs}\\label{sec:chordal}\n\n\nIn this section, we show that\na chordal graph has partial order competition dimension at most three\nif it is diamond-free.\n\n\nA \\emph{block graph} is a graph such that each of its maximal $2$-connected subgraphs\nis a complete graph.\nThe following is well-known.\n\n\\begin{Lem}[\\hskip-0.0025em {\\cite[Proposition 1]{BM86}}]\\label{lem:blockchara}\nA graph is a block graph\nif and only if the graph is a diamond-free chordal graph.\n\\end{Lem}\n\nNote that\na block graph having no cut vertex is a disjoint union of complete graphs.\nFor block graphs having cut vertices, the following lemma holds.\n\n\\begin{Lem}\\label{lem:blockgraph}\nLet $G$ be a block graph having at least one cut vertex.\nThen $G$ has a maximal clique that contains exactly one cut vertex.\n\\end{Lem}\n\n\\begin{proof}\nLet $H$ be the subgraph induced by the cut vertices of $G$.\nBy definition, $H$ is obviously a block graph, so $H$ is chordal and there is a simplicial vertex $v$ in $H$.\nSince $v$ is a cut vertex of $G$, $v$ belongs to at least two maximal cliques of $G$.\nSuppose that each maximal clique containing $v$ contains another cut vertex of $G$.\nTake two maximal cliques $X_1$ and $X_2$ of $G$ containing $v$\nand let $x$ and $y$ be cut vertices of $G$ belonging to $X_1$ and $X_2$, respectively.\nThen both $x$ and $y$ are adjacent to $v$ in $H$.\nSince $G$ is a block graph,\n$X_1 \\setminus \\{v\\}$ and $X_2 \\setminus \\{v\\}$\nare contained in distinct connected components of $G-v$.\nThis implies that $x$ and $y$ are not adjacent in $H$,\nwhich contradicts the choice of $v$.\nTherefore there is a maximal clique $X$ containing $v$\nwithout any other cut vertex of $G$.\n\\end{proof}\n\n\n\\begin{Lem}\\label{lem:blockgraph2}\nEvery block graph $G$ is the intersection graph\nof a family $\\mathcal{F}$ of homothetic closed equilateral triangles\nin which every clique of $G$ is consecutively 
tail-biting.\n\\end{Lem}\n\n\\begin{proof}\nWe show by induction on the number of cut vertices of $G$.\nIf a block graph has no cut vertex, then it is a disjoint union of complete graphs and the statement\nis trivially true as the vertices of each complete subgraph can be formed as a sequence which is consecutively tail-biting (refer to Figure~\\ref{consecutive}).\n\nAssume that the statement is true for any block graph $G$ with $m$ cut vertices where $m \\geq 0$.\nNow we take a block graph $G$ with $m+1$ cut vertices.\nBy Lemma~\\ref{lem:blockgraph},\nthere is a maximal clique $X$ that contains exactly one cut vertex, say $w$.\nBy definition, the vertices of $X$ other than $w$ are simplicial vertices.\n\nDeleting the vertices of $X$ other than $w$\nand the edges adjacent to them,\nwe obtain a block graph $G^*$ with $m$ cut vertices.\nThen, by the induction hypothesis, $G^*$ is the intersection graph of a family ${\\cal F}^*$\nof homothetic closed equilateral triangles satisfying the statement.\nWe consider the triangles corresponding to $w$.\nLet $C$ and $C'$ be two maximal cliques of $G^*$ containing $w$.\nBy the induction hypothesis,\nthe vertices of $C$ and $C'$ can be ordered as\n$v_{1}, v_{2}, \\ldots, v_{l}$ and $v'_{1}, v'_{2}, \\ldots, v'_{l'}$, respectively,\nso that $v_{i} \\stackrel{k}{\\rightarrow} v_{j}$ if $i < j$, for some $k \\in \\{1,2,3\\}$ and\nthat $v'_{i'} \\stackrel{k'}{\\rightarrow} v'_{j'}$ if $i' < j'$, for some $k' \\in \\{1,2,3\\}$.\n\nSuppose that $\\triangle(v_i) \\cap \\triangle(v'_j) \\neq \\emptyset$ for $v_i$ and $v'_j$ which are distinct from $w$.\nThen $v_i$ and $v'_j$ are adjacent in $G^*$, which implies the existence of a diamond in $G$\nsince maximal cliques have size at least two.\nWe have reached a contradiction to Lemma~\\ref{lem:blockchara} and so $\\triangle(v_i) \\cap \\triangle(v'_j) = \\emptyset$ for any $i,j$.\nTherefore there is a segment of a side on $\\triangle(w)$ (with a positive length) that does not intersect with the triangle assigned to any vertex in $G^*$ other than $w$\nsince there are finitely many maximal cliques in $G^*$ that contain $w$.\nIf the side belongs to $l_{ij}^{(w)}$ for $i,j \\in \\{1,2,3\\}$,\nthen we may order the deleted vertices and assign the homothetic closed equilateral triangles\nwith sufficiently small sizes to them\nso that the closed neighborhood of $v$ is consecutively tail-biting in Type $k$ for $k \\in \\{1,2,3\\} \\setminus \\{i,j\\}$\nand none of the triangles intersects with the triangle corresponding to any vertex other than $w$ in $G^*$.\nIt is not difficult to see that the set of the triangles in $\\mathcal{F}^*$\ntogether with the triangles just obtained is the one desired for $\\mathcal{F}$.\n\\end{proof}\n\n\n\\begin{Thm}\nFor any diamond-free chordal graph $G$, $\\dim_{\\text{{\\rm poc}}}(G) \\leq 3$.\n\\end{Thm}\n\n\\begin{proof}\nThe theorem follows from\nCorollary~\\ref{cor:closed}\nand Lemma~\\ref{lem:blockgraph2}.\n\\end{proof}\n\n\n\n\\section{Chordal graphs having partial order competition dimension greater than three}\\label{sec:dimpocmorethanthree}\n\nIn this section,\nwe present infinitely many chordal graphs $G$\nwith $\\dim_{\\text{{\\rm poc}}}(G) > 3$.\nWe first show two lemmas which will be repeatedly used\nin the proof of the theorem in this section.\n\n\\begin{Lem}\\label{lem:3triangles}\nLet $D$ be a $3$-partial order and\nlet $G$ be the competition graph of $D$.\nSuppose that $G$\ncontains a diamond $K_4-e$ as an induced subgraph,\nwhere $u$, $v$, $w$, $x$ are the vertices of 
the diamond and $e=vx$.\nIf the sequence $(u, v, w)$ is consecutively tail-biting in Type $k$ for some $k \\in \\{1,2,3\\}$,\nthen $p_i^{(x)} \\in R_i(v)$ and $p_j^{(x)} \\notin R_j(v)$ hold or $p_i^{(x)} \\notin R_i(v)$ and $p_j^{(x)} \\in R_j(v)$ hold where $\\{i,j,k\\} = \\{1,2,3\\}$.\n\\end{Lem}\n\n\\begin{proof}\nWithout loss of generality, we may assume that $k=3$.\nWe first claim that $p_1^{(x)} \\in R_1(v) \\cup R_2(v) \\cup R_{12}(v)$.\nSuppose not.\nThen $p_1^{(x)} \\in R:=\\mathcal{H} \\setminus (R_1(v) \\cup R_2(v) \\cup R_{12}(v))$.\nSince $A(x)$ and $A(v)$ are homothetic, $A(x) \\subseteq R$.\nThus $A(w) \\cap A(x) \\subseteq A(w) \\cap R$.\nSince $(u,v,w)$ is consecutively tail-biting in Type 3, $A(w) \\cap R \\subseteq A(v)$.\nTherefore $A(w) \\cap A(x)\\subseteq A(v)$, which contradicts Lemma~\\ref{lem:not-incl1}.\nThus $p_1^{(x)} \\in R_1(v) \\cup R_2(v) \\cup R_{12}(v)$.\nBy symmetry, $p_2^{(x)} \\in R_1(v) \\cup R_2(v) \\cup R_{12}(v)$.\n\nSuppose that both $p_1^{(x)}$ and $p_2^{(x)}$ are in $R_{12}(v)$.\nSince $A(x)$ and $A(v)$ are homothetic, $A(x) \\cap R \\subseteq A(v)$.\nBy the hypothesis that $(u,v,w)$ is consecutively tail-biting in Type 3,\nwe have $A(u) \\subseteq R$. Therefore $A(x) \\cap A(u) \\subseteq A(x) \\cap R$.\nThus $A(x) \\cap A(u) \\subseteq A(v)$, which contradicts Lemma~\\ref{lem:not-incl1}.\nTherefore $p_1^{(x)} \\in R_1(v) \\cup R_2(v)$ or $p_2^{(x)} \\in R_1(v) \\cup R_2(v)$.\nSince $p_1^{(x)} \\in R_2(v)$ (resp. $p_2^{(x)} \\in R_1(v)$) implies\n$p_2^{(x)} \\in R_2(v)$ (resp. $p_1^{(x)} \\in R_1(v)$), which is impossible, we have\n$p_1^{(x)} \\in R_1(v)$ or $p_2^{(x)} \\in R_2(v)$.\n\nSuppose that both $p_1^{(x)} \\in R_1(v)$ and $p_2^{(x)} \\in R_2(v)$ hold.\nThen $A(v) \\subseteq A(x)$ since $A(v)$ and $A(x)$ are homothetic.\nThen $A(u) \\cap A(v) \\subseteq A(x)$, which contradicts Lemma~\\ref{lem:not-incl1}.\nHence $p_1^{(x)} \\in R_1(v)$ and $p_2^{(x)} \\notin R_2(v)$ hold or $p_1^{(x)} \\notin R_1(v)$ and $p_2^{(x)} \\in R_2(v)$ hold.\n\\end{proof}\n\n\\begin{figure}\n\\psfrag{t}{\\small $t$}\n\\psfrag{u}{\\small $u$}\n\\psfrag{v}{\\small $v$}\n\\psfrag{w}{\\small $w$}\n\\psfrag{x}{\\small $x$}\n\\psfrag{y}{\\small $y$}\n\\begin{center}\n\\includegraphics{config.eps}\n\\end{center}\n\\caption{The graph $\\overline{\\mathrm{H}}$}\n\\label{fig:config}\n\\end{figure}\n\nLet $\\overline{\\mathrm{H}}$ be the graph on vertex set $\\{t,u,v,w,x,y\\}$ such that $\\{t,u,v,w\\}$ forms a complete graph $K_4$, $x$ is adjacent to only $t$ and $v$, and $y$ is adjacent to only $u$ and $w$ in $\\overline{\\mathrm{H}}$\n(see Figure~\\ref{fig:config} for an illustration).\n\n\\begin{Lem}\\label{lem:4triangles}\nLet $D$ be a $3$-partial order and let $G$ be the competition graph of $D$.\nSuppose that $G$ contains the graph $\\overline{\\mathrm{H}}$ as an induced subgraph and\n$(t, u, v, w)$ is consecutively tail-biting in Type $k$ for some $k \\in \\{1,2,3\\}$.\nThen, for $i,j$ with $\\{i,j,k\\} = \\{1,2,3\\}$,\n$p_i^{(x)} \\in R_i(u)$ implies $p_j^{(y)} \\in R_j(v)$.\n\\end{Lem}\n\n\\begin{proof}\nWithout loss of generality, we may assume that $k=3$.\nIt is sufficient to show that $p_1^{(x)} \\in R_1(u)$ implies $p_2^{(y)} \\in R_2(v)$.\nNow suppose that $p_1^{(x)} \\in R_1(u)$.\nSince $(t,u,v,w)$ is a tail-biting sequence of Type 3,\n$(t,u,v)$ and $(u,v,w)$ are tail-biting sequences of Type 3.\nSince $\\{t,u,v,x\\}$ induces a diamond and\n$(t,u,v)$ is a consecutively tail-biting sequence of Type 3,\nit follows from Lemma~\\ref{lem:3triangles} 
that\n$p_1^{(x)} \\in R_1(u)$ and $p_2^{(x)} \\not\\in R_2(u)$ hold or $p_1^{(x)} \\notin R_1(u)$ and $p_2^{(x)} \\in R_2(u)$ hold.\nSince $p_1^{(x)} \\in R_1(u)$, it must hold that $p_1^{(x)} \\in R_1(u)$ and $p_2^{(x)} \\not\\in R_2(u)$.\nSince $A(u)$ and $A(x)$ are homothetic and $p_1^{(x)} \\in R_1(u)$,\nwe have $A(u) \\subseteq A(x) \\cup R_{23}(x)$.\n\nSince $\\{u,v,w,y\\}$ induces a diamond and\n$(u,v,w)$ is a consecutively tail-biting sequence of Type 3, it follows from Lemma~\\ref{lem:3triangles} that $p_1^{(y)} \\in R_1(v)$ and $p_2^{(y)} \\not\\in R_2(v)$ hold or $p_1^{(y)} \\not\\in R_1(v)$ and $p_2^{(y)} \\in R_2(v)$ hold.\nWe will claim that the latter is true as it implies $p_2^{(y)} \\in R_2(v)$.\nTo reach a contradiction, suppose the former, that is, $p_1^{(y)} \\in R_1(v)$ and $p_2^{(y)} \\not\\in R_2(v)$.\nSince $A(v)$ and $A(y)$ are homothetic and $p_1^{(y)} \\in R_1(v)$, we have $A(v) \\subseteq A(y) \\cup R_{23}(y)$.\nWe now show that $A(x) \\cap A(v) \\subseteq A(y)$.\nTake any $a \\in A(x) \\cap A(v)$.\nSince $A(v) \\subseteq A(y) \\cup R_{23}(y)$, we have $a \\in A(y) \\cup R_{23}(y)$.\nSuppose that $a \\not\\in A(y)$. Then $a \\in R_{23}(y)$.\nThis together with the fact that $a \\in A(x)$\nimplies $A(y) \\cap R_{23}(x) = \\emptyset$.\nSince $A(u) \\subseteq A(x) \\cup R_{23}(x)$, we have\n\\begin{align*}\nA(u) \\cap A(y)\n&\\subseteq (A(x) \\cup R_{23}(x)) \\cap A(y) \\\\\n&= (A(x) \\cap A(y)) \\cup (R_{23}(x)) \\cap A(y)) \\\\\n&= (A(x) \\cap A(y)) \\cup \\emptyset \\\\\n&= A(x) \\cap A(y) \\subseteq A(x).\n\\end{align*}\nTherefore $A(u) \\cap A(y) \\subseteq A(u) \\cap A(x)$.\nSince $u$ and $y$ are adjacent in $G$,\nthere exists $b \\in V(G)$ such that\n$\\triangle(b) \\subseteq A(u) \\cap A(y)$.\nThen $\\triangle(b) \\subseteq A(u) \\cap A(x)$,\nwhich is a contradiction to the fact that $u$ and $x$ are not adjacent in $G$.\nThus $a \\notin R_{23}(y)$ and so $a \\in A(y)$.\nHence we have shown that $A(x) \\cap A(v) \\subseteq A(y)$.\nSince $x$ and $v$ are adjacent in $G$,\nthere exists $c \\in V(G)$ such that\n$\\triangle(c) \\subseteq A(x) \\cap A(v)$.\nThen $\\triangle(c) \\subseteq A(v) \\cap A(y)$,\nwhich is a contradiction to the fact that $v$ and $y$ are not adjacent in $G$.\nThus we have $p_1^{(y)} \\not\\in R_1(v)$ and $p_2^{(y)} \\in R_2(v)$.\nHence the lemma holds.\n\\end{proof}\n\n\n\\begin{Defi}\\label{def:expansion}\nFor a positive integer $n$,\nlet $G_n$ be the graph obtained from the complete graph $K_n$\nby adding a path of length $2$\nfor each pair of vertices of $K_n$,\ni.e.,\n$V(G_n) = \\{ v_i \\mid 1 \\leq i \\leq n \\}\n\\cup \\{ v_{ij} \\mid 1 \\leq i < j \\leq n \\}$\nand\n$E(G_n) = \\{ v_i v_j \\mid 1 \\leq i < j \\leq n \\}\n\\cup \\{ v_i v_{ij} \\mid 1 \\leq i < j \\leq n \\}\n\\cup \\{ v_j v_{ij} \\mid 1 \\leq i < j \\leq n \\}$.\n\\end{Defi}\n\n\\begin{Defi}\nFor a positive integer $m$,\nthe \\emph{Ramsey number} $r(m,m,m)$\nis the smallest positive integer $r$\nsuch that any $3$-edge-colored complete graph $K_r$ of order $r$\ncontains a monochromatic complete graph $K_m$ of order $m$.\n\\end{Defi}\n\n\n\\begin{Lem}\\label{lem:tail}\nLet $m$ be a positive integer at least $3$ and\nlet $n$ be an integer greater than or equal to\nthe Ramsey number $r(m,m,m)$.\nIf $\\dim_{{\\rm poc}}(G_n) \\leq 3$,\nthen\nthere exists a sequence\n$(x_1, \\ldots, x_m)$ of vertices of $G_n$\nsuch that $\\{ x_1, \\ldots, x_m \\}$\nis a clique of $G_n$\nand that any subsequence\n$(x_{i_1}, \\ldots, x_{i_l})$ of $(x_1, \\ldots, x_m)$\nis consecutively 
tail-biting,\nwhere $2 \\leq l \\leq m$\nand $1 \\leq i_1 < \\cdots < i_l \\leq m$.\n\\end{Lem}\n\n\n\\begin{proof}\nSince the vertices $v_i$ and $v_j$ of $G_n$\nare internal vertices of an induced path\nof length three by the definition of $G_n$,\nit follows from Lemma \\ref{lem:not-incl2} that\nthe vertices $v_i$ and $v_j$ of $G_n$\nare crossing.\nBy Lemma \\ref{lem:intersecting3}, for any $1 \\leq i < j \\leq n$,\nthere exists $k \\in \\{1,2,3\\}$ such that\n$v_i \\stackrel{k}{\\rightarrow} v_j$ or $v_j \\stackrel{k}{\\rightarrow} v_i$.\nNow we define an edge-coloring\n$c:\\{v_iv_j \\mid 1 \\leq i 3$.\n\\end{Thm}\n\n\\begin{proof}\nWe prove by contradiction.\nSuppose that $\\dim_{{\\rm poc}}(G_n) \\leq 3$\nfor some $n \\geq r(5,5,5)$.\nBy Lemma~\\ref{lem:tail},\n$G_n$ contains\na consecutively tail-biting sequence $(v_1, \\ldots, v_5)$ of five vertices in Type $k$\nsuch that $\\{v_1, \\ldots, v_5\\}$ is a clique of $G_n$\nand that\n$(v_{i_1}, v_{i_2}, v_{i_3})$\nis a consecutively tail-biting sequence for any $1 \\leq i_1 < i_2 < i_3 \\leq 5$\nand\n$(v_{i_1}, v_{i_2}, v_{i_3}, v_{i_4})$\nis a consecutively tail-biting sequence for any $1 \\leq i_1 < i_2 < i_3 < i_4 \\leq 5$.\nWithout loss of generality, we may assume that $k=3$.\n\nSince $\\{v_1,v_2,v_3,v_{13}\\}$ induces a diamond and\n$(v_1,v_2,v_3)$ is a consecutively tail-biting sequence of Type 3,\nit follows from Lemma~\\ref{lem:3triangles} that\n$p_1^{(v_{13})} \\in R_1(v_2)$ and $p_2^{(v_{13})} \\not\\in R_2(v_2)$ hold or $p_1^{(v_{13})} \\notin R_1(v_2)$ and $p_2^{(v_{13})} \\in R_2(v_2)$ hold.\n\nWe first suppose that $p_1^{(v_{13})} \\in R_1(v_2)$ and $p_2^{(v_{13})} \\not\\in R_2(v_2)$.\nSince $\\{v_1,v_2,v_3,v_4,v_{13},v_{24}\\}$ induces an $\\overline{\\mathrm{H}}$ and\n$(v_1,v_2,v_3,v_4)$ is a consecutively tail-biting sequence of Type 3,\nit follows from Lemma~\\ref{lem:4triangles} and $p_1^{(v_{13})} \\in R_1(v_2)$ that\n$p_2^{(v_{24})} \\in R_2(v_3)$.\nSince $\\{v_1,v_2,v_3,v_5,v_{13},v_{25}\\}$ induces an $\\overline{\\mathrm{H}}$ and\n$(v_1,v_2,v_3,v_5)$ is a consecutively tail-biting sequence of Type 3,\nit follows from Lemma~\\ref{lem:4triangles} and $p_1^{(v_{13})} \\in R_1(v_2)$ that\n\\begin{equation}\\label{eqn:three}\np_2^{(v_{25})} \\in R_2(v_3).\n\\end{equation}\nSince $\\{v_2,v_3,v_4,v_5,v_{24},v_{35}\\}$ induces an $\\overline{\\mathrm{H}}$ and\n$(v_2,v_3,v_4,v_5)$ is a consecutively tail-biting sequence of Type 3,\nit follows from Lemma~\\ref{lem:4triangles} and $p_2^{(v_{24})} \\in R_2(v_3)$ that\n\\begin{equation}\\label{eqn:four}\np_1^{(v_{35})} \\in R_1(v_4).\n\\end{equation}\nSince $\\{v_1,v_3,v_4,v_{14}\\}$ induces a diamond and\n$(v_1,v_3,v_4)$ is a consecutively tail-biting sequence of Type 3,\nit follows from Lemma~\\ref{lem:3triangles} that\n$p_1^{(v_{14})} \\in R_1(v_3)$ and $p_2^{(v_{14})} \\not\\in R_2(v_3)$ hold or $p_1^{(v_{14})} \\notin R_1(v_3)$ and $p_2^{(v_{14})} \\in R_2(v_3)$ hold.\nSuppose that $p_1^{(v_{14})} \\in R_1(v_3)$ and $p_2^{(v_{14})} \\not\\in R_2(v_3)$.\nSince $\\{v_1,v_3,v_4,v_5,v_{14},v_{35}\\}$ induces an $\\overline{\\mathrm{H}}$ and\n$(v_1,v_3,v_4,v_5)$ is a consecutively tail-biting sequence of Type 3,\nit follows from Lemma~\\ref{lem:4triangles} and $p_1^{(v_{14})} \\in R_1(v_3)$ that\n\\begin{equation}\\label{eqn:four-2}\np_2^{(v_{35})} \\in R_2(v_4).\n\\end{equation}\nSince $\\{v_3,v_4,v_5,v_{35}\\}$ induces a diamond and\n$(v_3,v_4,v_5)$ is a consecutively tail-biting sequence of Type 3,\nit follows from Lemma~\\ref{lem:3triangles} that\n$p_1^{(v_{35})} \\in 
R_1(v_4)$ and $p_2^{(v_{35})} \\notin R_2(v_4)$ hold or $p_1^{(v_{35})} \\notin R_1(v_4)$ and $p_2^{(v_{35})} \\in R_2(v_4)$ hold,\nwhich is a contradiction to the fact that both (\\ref{eqn:four}) and (\\ref{eqn:four-2}) hold.\nThus\n\\begin{equation}\\label{eqn:five}\np_1^{(v_{14})} \\not\\in R_1(v_3) \\text{ and } p_2^{(v_{14})} \\in R_2(v_3).\n\\end{equation}\nSince $\\{v_1,v_2,v_4,v_{14}\\}$ induces a diamond and\n$(v_1,v_2,v_4)$ is a consecutively tail-biting sequence of Type 3,\nit follows from Lemma~\\ref{lem:3triangles} that\n$p_1^{(v_{14})} \\in R_1(v_2)$ and $p_2^{(v_{14})} \\notin R_2(v_2)$ hold or $p_1^{(v_{14})} \\notin R_1(v_2)$ and $p_2^{(v_{14})} \\in R_2(v_2)$ hold.\nSuppose that $p_1^{(v_{14})} \\not\\in R_1(v_2)$ and $p_2^{(v_{14})} \\in R_2(v_2)$.\nSince $\\{v_1,v_2,v_4,v_5,v_{14},v_{25}\\}$ induces an $\\overline{\\mathrm{H}}$ and\n$(v_1,v_2,v_4,v_5)$ is a consecutively tail-biting sequence of Type 3,\nit follows from Lemma~\\ref{lem:4triangles} and $p_2^{(v_{14})} \\in R_2(v_2)$ that\n\\begin{equation}\\label{eqn:six}\np_1^{(v_{25})} \\in R_1(v_4).\n\\end{equation}\nBy (\\ref{eqn:three}) and (\\ref{eqn:six}),\nsince $A(v_4)$ and $A(v_{25})$ are homothetic,\nwe have\n\\begin{equation}\\label{eqn:six-2}\np_2^{(v_{25})} \\in R_2(v_4).\n\\end{equation}\nSince $\\{v_2,v_4,v_5,v_{25}\\}$ induces a diamond and\n$(v_2,v_4,v_5)$ is a consecutively tail-biting sequence of Type 3,\nit follows from Lemma~\\ref{lem:3triangles} that\n$p_1^{(v_{25})} \\in R_1(v_4)$ and $p_2^{(v_{25})} \\notin R_2(v_4)$ hold or $p_1^{(v_{25})} \\notin R_1(v_4)$ and $p_2^{(v_{25})} \\in R_2(v_4)$ hold,\nwhich is a contradiction to the fact that both (\\ref{eqn:six}) and (\\ref{eqn:six-2}) hold.\nThus\n$p_1^{(v_{14})} \\in R_1(v_2)$ and $p_2^{(v_{14})} \\not\\in R_2(v_2)$.\n\nSince $A(v_3)$ and $A(v_{14})$ are homothetic,\nwe have\n\\begin{equation}\\label{eqn:five-2}\np_1^{(v_{14})} \\in R_1(v_3),\n\\end{equation}\ncontradicting (\\ref{eqn:five}).\n\nIn the case where $p_1^{(v_{13})} \\not\\in R_1(v_2)$ and $p_2^{(v_{13})} \\in R_2(v_2)$, we also reach a contradiction by applying a similar argument.\n\nHence, $\\dim_{{\\rm poc}}(G_n) > 3$ holds for any $n \\geq r(5,5,5)$.\n\\end{proof}\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzeidu b/data_all_eng_slimpj/shuffled/split2/finalzzeidu new file mode 100644 index 0000000000000000000000000000000000000000..eebf84369fd08cebfafe48d0c8379a39a20a41e6 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzeidu @@ -0,0 +1,5 @@ +{"text":"\n\\section{Introduction}\n\nOne of the major achievements of HERA was the experimental evidence that \namong the whole set of $\\gamma^* p \\to X$ deep inelastic scattering events, almost 10\\% are diffractive (DDIS), of the form $\\gamma^* p \\to X Y$ with a rapidity gap between the proton remnants $Y$\nand the hadrons $X$\ncoming from the fragmentation region of the initial virtual photon~\\cite{Chekanov:2004hy-Chekanov:2005vv-Chekanov:2008fh,\nAktas:2006hx-Aktas:2006hy-Aaron:2010aa-Aaron:2012ad-Aaron:2012hua}.\nDiffraction can be theoretically described according to several approaches, important for phenomenological applications. The first approach involves\na {\\em resolved} Pomeron contribution (with a parton distribution function inside the Pomeron), while the second one\nrelies on a {\\em direct} Pomeron contribution involving the coupling of a Pomeron with the diffractive state. 
The diffractive states can be modelled in perturbation theory by a $q \\bar{q}$ pair (for moderate $M^2$, where $M$ is the invariant mass of the diffractively produced state $X$) or by higher Fock states as a $q \\bar{q} g$ state for larger values of $M^2$. Based on such a model, with\n a two-gluon exchange picture for the Pomeron, a good description of HERA data for diffraction could be achieved~\\cite{Bartels:1998ea}. One of the important features of this approach is that the $q \\bar{q}$ component with a longitudinally polarized photon plays a crucial role in the region of small\ndiffractive mass $M$, although it is a\ntwist-4 contribution.\nIn the direct components considered there, the $q \\bar{q} g$ diffractive state has been studied in two particular limits. The first one, valid for very large $Q^2$, corresponds to a collinear approximation in which the transverse momentum of the gluon is assumed to be much smaller than the transverse momentum of the emitter~\\cite{Wusthoff:1995hd-Wusthoff:1997fz}. \nThe second one~\\cite{Bartels:1999tn,Bartels:2002ri}, valid for very large $M^2$, is based on the assumption of a strong ordering of longitudinal momenta, encountered in BFKL equation~\\cite{Fadin:1975cb-Kuraev:1976ge-Kuraev:1977fs-Balitsky:1978ic}. Both these approaches were combined in order to describe HERA data for DDIS~\\cite{Marquet:2007nf}. \n\nBased on these very successful developments led at HERA\nin order to understand the QCD dynamics with diffractive events, \nit would be appropriate to look for similar hard diffractive events at LHC. \nThe idea there is to adapt the concept of photoproduction of diffractive jets, which was performed at HERA~\\cite{Chekanov:2007rh,Aaron:2010su}, now with a flux of\nquasi-real photons in ultraperipheral collisions (UPC)~\\cite{Baltz:2007kq-Baur:2001jj}, relying on the notion of equivalent photon approximation. In both cases, \n the hard scale is provided by the invariant mass of the tagged jets.\n\nWe here report on our computation~\\cite{Boussarie:2014lxa} of the $\\gamma^* \\to q \\bar{q} g$ impact factor at tree level with an arbitrary number of $t$-channel gluons described within the Wilson line formalism, also called QCD shockwave approach~\\cite{Balitsky:1995ub-Balitsky:1998kc-Balitsky:1998ya-Balitsky:2001re}. As an aside, we rederive the $\\gamma^* \\to q \\bar{q}$ impact factor. In particular, the \n$\\gamma^* \\to q \\bar{q} g$ transition is computed without any soft or collinear approximation for the emitted gluon, in contrast with the above mentioned calculations. These results provide necessary generalization of building blocks for inclusive DDIS as well as for two- and three-jet diffractive production. Since the results we derived can account for an arbitrary number of $t$-channel gluons, this could allow to include higher twist effects which are suspected to be rather important in DDIS for $Q^2 \\lesssim 5$ GeV$^2$~\\cite{Motyka:2012ty}. \n\n\n\n\n\n\\section{Formalism}\n\nAs stated before, we use Balitsky's shockwave formalism. 
\nIts application shows that this method is very powerful in determining evolution equations and impact factors at next-to-leading order for inclusive processes~\\cite{Balitsky:2010ze-Balitsky:2012bs}, at semi-inclusive level for $p_t$-broadening in $pA$ collisions~\\cite{Chirilli:2011km-Chirilli:2012jd} or in the evaluation of the triple Pomeron vertex beyond the planar limit~\\cite{Chirilli:2010mw}, when compared with usual methods based on summation of contributions of individual Feynman diagrams computed in momentum space. It is an effective way of estimating the effect of multigluon exchange. Its formulation in coordinate space makes it natural in view of describing saturation~\\cite{GolecBiernat:1998js-GolecBiernat:1999qd}.\nOne introduces Wilson lines as \n\\begin{equation}\nU_{i}=U_{\\vec{z}_{i}}=U\\left( \\vec{z}_{i},\\eta\\right) =P \\exp\\left[{ig\\int_{-\\infty\n}^{+\\infty}b_{\\eta}^{-}(z_{i}^{+},\\vec{z}_{i}) \\, dz_{i}^{+}}\\right]\\,.\n\\label{WL}%\n\\end{equation}\nThe operator $b_{\\eta}^{-}$ is the external shock-wave field built from slow gluons \nwhose momenta are limited by the longitudinal cut-off defined by the rapidity $\\eta$\n\\begin{equation}\nb_{\\eta}^{-}=\\int\\frac{d^{4}p}{\\left( 2\\pi\\right) ^{4}}e^{-ip \\cdot z}b^{-}\\left(\np\\right) \\theta(e^{\\eta}-|p^{+}|).\\label{cutoff}%\n\\end{equation}\nWe use the light cone gauge\n$\\mathcal{A}\\cdot n_{2}=0,$\nwith $\\mathcal{A}$ being the sum of the external field $b$ and the quantum field\n$A$%\n\\begin{equation}\n\\mathcal{A}^{\\mu} = A^{\\mu}+b^{\\mu},\\quad b^{\\mu}\\left( z\\right) =b^{-}(z^{+},\\vec{z}\\,) \\,n_{2}%\n^{\\mu}=\\delta(z^{+})B\\left( \\vec{z}\\,\\right) n_{2}^{\\mu}\\,,\\label{b}%\n\\end{equation}\nwhere\n$B(\\vec{z})$ is a profile function.\nThe dipole operator \n$\\mathbf{U}_{12}=\\frac{1}{N_{c}}\\rm{tr}\\left( U_{1}U_{2}^{\\dagger}\\right) -1$\nwill be used extensively. \n\n\n\\section*{Impact factor for $\\gamma\\rightarrow q\\bar{q}$ transition}\n\n\\begin{figure}\n\\scalebox{.89}{\\begin{tabular}{cc}\n\\raisebox{3.75cm}{\\includegraphics[scale=0.5]{qqbar.pdf}} &\n\\includegraphics[scale=0.65]{NLO.pdf}\n\\end{tabular}}\n\\caption{Left: diagram for $\\gamma\\rightarrow q\\bar{q}$ transition. Right: the 4 diagrams for $\\gamma\\rightarrow q\\bar{q}g$ transition.}\n\\label{Fig:diagrams}\n\\end{figure}\n\nFor $q \\bar{q}$ production one can write, after projection on the color singlet state and subtraction of the non-interacting term\n\\begin{equation}\nM_{0}^{\\alpha}=N_c \\int d\\vec{z}_{1}d\\vec{z}_{2}F\\left( p_{q},p_{\\bar{q}}%\n,z_{0},\\vec{z}_{1},\\vec{z}_{2}\\right) ^{\\alpha} \\mathbf{U}_{12}\\,.\n\\label{M0int}%\n\\end{equation}\nDenoting $Z_{12} = \\sqrt{x_{q}x_{\\bar{q}}\\vec{z}_{12}^{\\,\\,2}}$, we get for a longitudinal photon\n\\begin{eqnarray}\n\\label{FL}\nF\\left( p_{q},p_{\\bar{q}},k,\\vec{z}_{1},\\vec{z}_{2}\\right) ^{\\alpha\n}\\varepsilon_{L\\alpha}=\\theta(p_{q}^{+})\\,\\theta(p_{\\bar{q}}^{+})\\frac\n{\\delta\\left( k^{+}-p_{q}^{+}-p_{\\bar{q}}^{+}\\right) }{(2\\pi)^{2}}%\ne^{-i\\vec{p}_{q}\\cdot \\vec{z}_{1}-i\\vec{p}_{_{\\bar{q}}}\\cdot\\vec{z}_{2}}\n(-2i)\\delta_{\\lambda_{q},-\\lambda_{\\bar{q}}}\\,x_{q}x_{\\bar{q}}%\n\\,Q\\,K_{0}\\left(Q \\, Z_{12}\\right)\\,,\n\\end{eqnarray}\nand for a transverse photon\n\\begin{eqnarray}\n\\label{FT}\nF( p_{q},p_{\\bar{q}},k,\\vec{z}_{1},\\vec{z}_{2}) ^{j}%\n\\varepsilon_{Tj}\\!=\\theta(p_{q}^{+})\\,\\theta(p_{\\bar{q}}^{+})\\frac{\\delta(\nk^{+}\\!\\!-\\!p_{q}^{+}\\!-p_{\\bar{q}}^{+}\\!) 
}{(2\\pi)^{2}}e^{-i\\vec{p}_{q}\\cdot\\vec\n{z}_{1}-i\\vec{p}_{_{\\bar{q}}}\\cdot\\vec{z}_{2}}\n\\delta_{\\lambda_{q},-\\lambda_{\\bar{q}}}( x_{q}-x_{\\bar{q}%\n}+s\\lambda_{q}) \\frac{\\vec{z}_{12} \\cdot \\vec{\\varepsilon}_{T}}{\\vec{z}_{12}^{\\,\\,2}}\nQ \\,Z_{12} K_{1}(Q\\, Z_{12})\\,.\\!\\!\\!\\!\\!\n\\end{eqnarray}\n\n\\section*{Impact factor for $\\gamma\\rightarrow q\\bar{q}g$ transition}\n\nFor $q \\bar{q} g$ production, projecting on the color singlet state and subtracting the non-interacting term again, one can write\n\\begin{eqnarray}\n\\nonumber \nM^{\\alpha} &=& N_c^2 \\int d\\vec{z}_{1}d\\vec{z}_{2}d\\vec{z}_{3} \\, F_{1}\\left( p_{q},p_{\\bar{q}}%\n,p_{g},z_{0},\\vec{z}_{1},\\vec{z}_{2},\\vec{z}_{3}\\right) ^{\\alpha}\\frac{1}{2}\n\\left( \\mathbf{U}_{32} + \\mathbf{U}_{13} - \\mathbf{U}_{12} + \\mathbf{U}_{32}\\mathbf{U}_{13} \\right)\n\\\\\n&+& N_c \\int d\\vec{z}_{1}d\\vec{z}_{2} \\, F_{2}\\left( p_{q},p_{\\bar{q}},p_{g},z_{0}%\n,\\vec{z}_{1},\\vec{z}_{2}\\right) ^{\\alpha}\\frac{N_{c}^{2}-1}{2N_{c}} \\mathbf{U}_{12}\\,.\n\\label{F2tilde}%\n\\end{eqnarray}\nThe first and the second line of this equation correspond respectively to the two last diagrams of the first line and to the second line of diagrams of Fig.~\\ref{Fig:diagrams}.\nFor a longitudinally polarized photon, they read\n\\begin{eqnarray}\n&&\\hspace{-1cm}F_{1}\\left( p_{q},p_{\\bar{q}},p_{g},k,\\vec{z}_{1},\\vec{z}_{2},\\vec{z}%\n_{3}\\right) ^{\\alpha}\\varepsilon_{L\\alpha}=2\\,Q\\,g\\,\\delta(k^{+}-p_{g}^{+}-p_{q}%\n^{+}-p_{_{\\bar{q}}}^{+})\\theta(p_{g}^{+}-\\sigma)\\frac{e^{-i\\vec{p}_{q} \\cdot %\n\\vec{z}_{1}-i\\vec{p}_{_{\\bar{q}}} \\cdot \\vec{z}_{_{2}}-i\\vec{p}_{g} \\cdot \\vec{z}_{3}}}%\n{\\pi\\sqrt{2p_{g}^{+}}}\n\\nonumber \\\\\n&\\times&\\delta_{\\lambda_{q},-\\lambda_{\\bar{q}}}\\left\\{ (x_{_{\\bar{q}}}%\n+x_{g}\\delta_{-s_{g}\\lambda_{q}})x_{q}\\frac{\\vec{z}_{32} \\cdot \\vec{\\varepsilon}%\n_{g}^{\\,\\,\\ast}}{\\vec{z}_{32}^{\\,\\,2}}-(x_{q}+x_{g}\\delta_{-s_{g}%\n\\lambda_{\\bar{q}}})x_{_{\\bar{q}}}\\frac{\\vec{z}_{31} \\cdot \\vec{\\varepsilon}%\n_{g}^{\\,\\,\\ast}}{\\vec{z}_{31}^{\\,\\,2}}\\right\\} K_{0}(QZ_{123}) \\,,\\\\\n\\label{F1eL}%\n\\label{resF2L}\n&&\\hspace{-1cm}\\tilde{F}_{2}\\left( p_{q},p_{\\bar{q}},p_{g},k,\\vec{z}_{1},\\vec{z}_{2}\\right)\n^{\\alpha}\\varepsilon_{L\\alpha}=4ig \\, Q\\,\\theta(p_{g}^{+}-\\sigma)\\delta(k^{+}%\n-p_{g}^{+}-p_{q}^{+}-p_{_{\\bar{q}}}^{+})\\frac{e^{-i\\vec{p}_{q} \\cdot \\vec{z}%\n_{1}-i\\vec{p}_{_{\\bar{q}}} \\cdot \\vec{z}_{2}}}{\\sqrt{2p_{g}^{+}}}%\n\\nonumber \\\\\n&&\\times\\delta_{\\lambda_{q},-\\lambda_{\\bar{q}}}\\frac{x_{q}\\left( x_{g}%\n+x_{\\bar{q}}\\right) \\left( \\delta_{-s_{g}\\lambda_{q}}x_{g}+x_{\\bar{q}%\n}\\right) }{x_{\\bar{q}} \\, x_g }\\frac{\\vec{P}_{\\bar{q}} \\cdot %\n\\vec{\\varepsilon}_{g}^{\\,\\,\\ast}}{\\vec{P}_{\\bar{q}}^2}\\,e^{-i\\vec{p}_{g} \\cdot \\vec{z}_{2}}K_{0}%\n(QZ_{122})-\\left( q\\leftrightarrow\\bar{q}\\right) ,\n\\end{eqnarray}\nwhile for a transversally polarized photon, we have \n\\begin{eqnarray}\n&&\\hspace{-.7cm}F_{1}\\left( p_{q},p_{\\bar{q}},p_{g},k,\\vec{z}_{1},\\vec{z}_{2},\\vec{z}%\n_{3}\\right) ^{\\alpha}\\!\\varepsilon_{T\\alpha}=\\!-2i\\,g\\,Q\\delta(k^{+}-p_{g}^{+}%\n-p_{q}^{+}-p_{_{\\bar{q}}}^{+})\\theta(p_{g}^{+}-\\sigma)\n\\frac{e^{-i\\vec{p}_{q} \\cdot \\vec{z}_{1}-i\\vec{p}_{_{\\bar{q}}} \\cdot \\vec{z}_{_{2}%\n}-i\\vec{p}_{g} \\cdot \\vec{z}_{3}}}{\\pi Z_{123}\\sqrt{2p_{g}^{+}}}\\delta_{\\lambda\n_{q},-\\lambda_{\\bar{q}}}K_{1}(QZ_{123})\\\\\n&&\\hspace{-.8cm}\\times\\left\\{ 
\\frac{\\left( \\vec{z}%\n_{23} \\cdot \\vec{\\varepsilon}_{g}^{\\,\\,\\ast}\\right) \\left( \\vec{z}_{13} \\cdot %\n\\vec{\\varepsilon}_{T}\\right) }{\\vec{z}_{23}{}^{2}}x_{q}\\left( x_{q}%\n-\\delta_{s\\lambda_{\\bar{q}}}\\right) \\left( x_{\\bar{q}}+x_{g}\\delta\n_{-s_{g}\\lambda_{q}}\\right) +\\frac{\\left( \\vec{z}_{23} \\cdot \\vec{\\varepsilon}_{g}^{\\,\\,\\ast}\\right)\n\\left( \\vec{z}_{23} \\cdot \\vec{\\varepsilon}_{T}\\right) }{\\vec{z}_{23}{}^{2}}%\nx_{q}x_{\\bar{q}}\\left( x_{\\bar{q}}+x_{g}\\delta_{-s_{g}\\lambda_{q}}%\n-\\delta_{s\\lambda_{q}}\\right) \\right\\} -\\left( q\\leftrightarrow\\bar\n{q}\\right) \\, ,\\nonumber\\\\\n\\label{F1eT}%\n\\label{resF2tildeT}\n&&\\hspace{-.8cm}\\tilde{F}_{2}\\left( p_{q},p_{\\bar{q}},p_{g},k,\\vec{z}_{1},\\vec{z}_{2}\\right)\n^{\\alpha}\\varepsilon_{T\\alpha}=-4g\\,\\theta(p_{g}^{+}-\\sigma)\\,\\delta(k^{+}%\n-p_{g}^{+}-p_{q}^{+}-p_{_{\\bar{q}}}^{+})\\frac{e^{-i\\vec{p}_{q} \\cdot \\vec{z}%\n_{1}-i\\vec{p}_{_{\\bar{q}}} \\cdot \\vec{z}_{2}}}{\\sqrt{2p_{g}^{+}}}\\delta_{\\lambda\n_{q},-\\lambda_{\\bar{q}}}%\n\\nonumber \\\\\n&&\\times \n\\frac{\\left( \\delta\n_{\\lambda_{\\bar{q}}s}-x_{q}\\right) \\left( \\delta_{-s_{g}\\lambda_{q}}%\nx_{g}+x_{\\bar{q}}\\right)}{x_{\\bar{q}} \\, x_g} \n\\frac{\\vec{P}_{\\bar{q}} \\cdot %\n\\vec{\\varepsilon}_{g}^{\\,\\,\\ast}}{\\vec{P}_{\\bar{q}}^2}\n\\frac{\\vec{z}_{12} \\cdot \\vec{\\varepsilon}_{T}}{\\vec{z}_{12}^2} \n\\, Q \\, Z_{122}\nK_{1}(QZ_{122})e^{-i\\vec{p}_{g} \\cdot \\vec{z}_{2}%\n}-\\left( q\\leftrightarrow\\bar{q}\\right) \\,. \n\\end{eqnarray}\nWe denote \n$ F_{2}\\left( p_{q},p_{\\bar{q}},p_{g},z_{0},\\vec{z}_{1},\\vec{z}_{2}\\right)^{\\alpha}\\!=\\!\\tilde{F}_{2}\\left( p_{q},p_{\\bar{q}},p_{g},z_{0},\\vec{z}_{1}%\n ,\\vec{z}_{2}\\right) ^{\\alpha}\\!+\\!\\int d\\vec{z}_{3}\\,F_{1}\\left( p_{q},p_{\\bar{q}%\n },p_{g},z_{0},\\vec{z}_{1},\\vec{z}_{2},\\vec{z}_{3}\\right) ^{\\alpha}\\!.$\n\n\n\\section*{2- and 3-gluon approximation}\n\n\nLet us notice that the dipole operator $\\mathbf{U}_{ij}$ is of order $g^2$. Hence for only two or three exchanged gluons one can neglect the quadrupole term in the amplitude $M^{\\alpha}$ and get \n\\begin{eqnarray}\n\\label{M3gBis}\nM^{\\alpha} \\overset{\\mathrm{g^3}}{=} \\frac{1}{2}\\int d\\vec{z}_{1}d\\vec{z}%\n_{2} \\mathbf{U}_{12} \\left[ \\left( N_{c}^{2}-1\\right)\n\\tilde{F}_{2}\\left( \\vec{z}_{1},\\vec{z}%\n_{2}\\right) ^{\\alpha} + \\int d\\vec{z}_{3} \\left\\{ N_{c}^{2}F_{1}\\left(\n\\vec{z}_{1},\\vec{z}_{3},\\vec{z}_{2}\\right)^{\\alpha} +N_{c}^{2}F_{1}\\left( \\vec{z}_{3},\\vec{z}%\n_{2},\\vec{z}_{1}\\right) ^{\\alpha} - F_{1}\\left( \\vec{z}_{1},\\vec{z}_{2},\\vec{z}_{3}\\right) ^{\\alpha} \\right\\} \\right].\n %\n\\end{eqnarray}\nFor $\\vec{p}_q=\\vec{p}_g=\\vec{p}_{\\bar{q}}=\\vec{0}$, those integrals can be performed analytically. Otherwise they can be expressed as a simple convergent integral over $[0,1]$ that can be performed numerically for any future phenomenological study. \n\n\\section*{Conclusion}\n\nThe measurement of dijet production in DDIS was recently performed~\\cite{Aaron:2011mp}, and a precise comparison of \ndijet versus triple-jet production, which has not been performed yet at HERA~\\cite{Adloff:2000qi}, would be very useful to get a deeper understanding of the QCD mechanism underlying diffraction. 
Recent investigations of the azimuthal distribution of dijets in diffractive photoproduction performed by ZEUS~\\cite{Guzik:2014iba} show sign of a possible need for a 2-gluon exchange model, which is part of the shock-wave mechanism. Our calculation could be used for phenomenological studies of those experimental results.\nA similar and very complementary study could be performed at LHC with UPC events. One should note that getting a full quantitative first principle analysis of this would require an evaluation of virtual corrections to the $\\gamma^* \\rightarrow q\\bar{q}$ impact factor, which are presently under study~\\cite{Boussarie:prep}.\n\nDiffractive open charm production was measured at HERA~\\cite{Aktas:2006up} \nand studied in the large $M$ limit based on the direct coupling between a Pomeron and a $q \\bar{q}$ or a $q\\bar{q}g$ state, with massive quarks~\\cite{Bartels:2002ri}. Such a program could also be performed at LHC, \nagain based on UPCs and on\nthe extension of the above mentioned impact factors to the case of a massive quark. This could be further extended \nto $J\/\\Psi$ production, which are copiously produced at LHC.\n\n\n\\begin{theacknowledgments}\nA. V. G. acknowledges support of president grant MK-7551.2015.2 and\nRFBR grant 13-02-01023. This work was partially supported by the PEPS-PTI PHENODIFF,\nthe PRC0731 DIFF-QCD, the Polish Grant NCN No. DEC-2011\/01\/B\/ST2\/03915, the ANR PARTONS (ANR-12-MONU-0008-01), the COPIN-IN2P3 Agreement\nand the Joint Research Activity Study of Strongly Interacting Matter (acronym HadronPhysics3,\nGrant Agreement n.283286) under the Seventh Framework Programme of the\nEuropean Community\n\\end{theacknowledgments}\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{sec:Introduction}\n\nOhta-Kawasaki (OK) model is introduced in \\cite{OhtaKawasaki_Macromolecules1986} and has been extensively applied for the study of phase separation of diblock copolymers, which\nhave generated much interest in materials science in the past years due to their remarkable ability for self-assembly into nanoscale ordered structures \\cite{Hamley_Wiley2004}. Diblock copolymers are chain molecules made by two different segment species, say $A$ and $B$ species. Due to the chemical incompatibility, the two species tend to be phase-separated; on the other hand, the two species are connected by covalent chemical bonds, which leads to the so-called microphase separation. The OK model can describe such microphase separation for diblock copolymers via a free energy functional:\n\\begin{align}\\label{functional:OK}\nE^{\\text{OK}}[\\phi] = \\int_{\\mathbb{T}^d} \\dfrac{\\epsilon}{2}|\\nabla\\phi|^2 + \\dfrac{1}{\\epsilon}W(\\phi)\\ \\text{d}x + \\dfrac{\\gamma}{2}\\int_{\\mathbb{T}^d} |(-\\Delta)^{-\\frac{1}{2}}(f(\\phi)-\\omega)|^2\\ \\text{d}x,\n\\end{align}\nwith a volume constraint\n\\begin{align}\\label{eqn:Volume}\n\\int_{\\mathbb{T}^d} (f(\\phi) - \\omega)\\ \\text{d}x = 0.\n\\end{align}\nHere $\\mathbb{T}^d = \\prod_{i=1}^d [-X_i, X_i] \\subset \\mathbb{R}^d, d = 2, 3$ denotes a periodic box and $0<\\epsilon \\ll 1$ is an interface parameter that indicates the system is in deep segregation regime. $\\phi = \\phi(x)$ is a phase field labeling function which represents the concentration of $A$ species. By the assumption of incompressibility for the binary system, the concentration of $B$ species can be implicitly represented by $1-\\phi(x)$. 
Function $W(\\phi) = 18(\\phi^2-\\phi)^2$ is a double well potential which enforces the phase field function $\\phi$ to be equal to 1 inside the interface and 0 outside the interface. Near the interfacial region, the phase field function $\\phi$ rapidly but smoothly transitions from 0 to 1. A new term of $f(\\phi) = 3\\phi^2 - 2\\phi^3$ is introduced in the free energy functional to resemble $\\phi$ as the indicator for the $A$ species. The first integral in (\\ref{functional:OK}) is a local surface energy which represents the short-range interaction between the chain molecules and favors the large domain; while the second integral in (\\ref{functional:OK}) is a term for the long-range (nonlocal) repulsive interaction with $\\gamma >0$ being the strength of the repulsive force. Finally, $\\omega\\in(0,1)$ is the relative volume of the $A$ species.\n\nTo study the microphase separation and the pattern formation of the diblock copolymer, we consider the $L^2$ gradient flow dynamics of the OK model. On the other hand, to relax the volume constraint (\\ref{eqn:Volume}), we can incorporate a penalty term into (\\ref{functional:OK}) and change it into an unconstrained one:\n\\begin{align}\\label{functional:pOK}\nE^{\\text{pOK}}[\\phi] = \\int_{\\mathbb{T}^d} \\dfrac{\\epsilon}{2}|\\nabla\\phi|^2 + \\dfrac{1}{\\epsilon}W(\\phi)\\ \\text{d}x + \\dfrac{\\gamma}{2}\\int_{\\mathbb{T}^d} |(-\\Delta)^{-\\frac{1}{2}}(f(\\phi)-\\omega)|^2\\ \\text{d}x + \\dfrac{M}{2}\\left[ \\int_{\\mathbb{T}^d} f(\\phi)-\\omega\\ \\text{d}x \\right]^2,\n\\end{align}\nwhere $M\\gg1$ is a penalty constant. Then we can consider the corresponding penalized $L^2$ gradient flow dynamics with given initial $\\phi(x,t=0) = \\phi_0$, which thereafter is called penalized Allen-Cahn-Ohta-Kawasaki (pACOK) equation:\n\\begin{align}\\label{eqn:pACOK}\n\\dfrac{\\partial}{\\partial t} \\phi = \\epsilon\\Delta\\phi - \\dfrac{1}{\\epsilon}W'(\\phi) - \\gamma(-\\Delta)^{-1}(f(\\phi)-\\omega)f'(\\phi) - M\\int_{\\mathbb{T}^d} (f(\\phi)-\\omega)\\ \\text{d}x\\cdot f'(\\phi).\n\\end{align}\n\nOur main contribution in this paper is threefold. Firstly, the new form of $f(\\phi)$ guarantees that the pACOK equation is maximum principle preserving (MPP), namely, if the initial data is bounded $0\\le\\phi_0\\le1$, then $0\\le\\phi(x)\\le1$ for any later time. Secondly, we adopt a linear splitting method to the pACOK equation and then apply a semi-implicit scheme for the numerical simulations. This scheme treats the linear terms implicitly but all the nonlinear and nonlocal terms explicitly, and with proper choice for the splitting constant, it inherits the MPP property at both time-discrete and fully-discrete levels. Besides, just as the energy dissipation law (energy stability) is obeyed by the continuous $L^2$ gradient flow (\\ref{eqn:pACOK}), the proposed numerical scheme also successfully inherits the energy stability at both time-discrete and fully-discrete levels. Thirdly, the error estimate analysis is carried and the rate of convergence is verified by numerical simulations.\n\nThe inclusion of a new nonlinear term $f(\\phi)$ in the OK model makes the key novelty in this paper. 
On one hand, the term $f(\\phi)$ mimics the behavior of $\\phi$ as the indicator of the $A$-rich region, so it satisfies the condition:\n\\begin{align}\\label{eqn:f_condition}\nf(0) = 0, \\quad f(1) = 1.\n\\end{align}\nOn the other hand, in order to pin the phase field label $\\phi$ at 1 and 0 inside and outside the $A$-$B$ interface, respectively, we set an extra condition\n\\begin{align}\\label{eqn:f'_condition}\nf'(0) = 0, \\quad f'(1) = 0,\n\\end{align}\nso that the evolution of the pACOK dynamics (\\ref{eqn:pACOK}) only updates the phase field $\\phi$ near the interface but not in the away-from-interface region. This helps maintain the desired tanh profile of $\\phi$ better than simply taking $f(\\phi) = \\phi$. See \\cite{Zhao_2018CMS, WangRenZhao_CMS2019, XuZhao_JSC2019} for numerical comparisons between linear and nonlinear choices of $f(\\phi)$. The polynomial of the smallest degree satisfying both conditions (\\ref{eqn:f_condition}) and (\\ref{eqn:f'_condition}) is\n\\begin{align}\\label{eqn:f}\nf(\\phi) = 3\\phi^2 - 2\\phi^3.\n\\end{align}\nIn some scenarios, a polynomial of higher degree might be required. For instance, in the study of energy stable numerical schemes based on operator splitting, one needs to extend the nonlinearity $f$ linearly with continuity up to the second derivative, and then the minimal degree has to be five \\cite{XuZhao_JSC2019}. The crucial role that $f(\\phi)$ in (\\ref{eqn:f}) plays is that it preserves the maximum principle. The key observation is that $f'$ and $W'$ share a common factor $\\phi - \\phi^2$, so that any possible growth of $\\phi$ (which could potentially break the MPP) is controlled by the double well potential term, and the MPP is preserved. See the proof of Theorem \\ref{theorem:MPP_continuous} for details.\n\n\n\nThe MPP is an important property held by the Allen-Cahn equation. It says that if the initial data is bounded between 0 and 1, then the solution remains between 0 and 1 for any later time. In recent years, efforts have been devoted to investigating MPP numerical schemes for the Allen-Cahn equation. Tang and Yang studied a first order implicit-explicit scheme for the MPP property for the Allen-Cahn equation in \\cite{TangYang_JCM2016}. They further extended the results to the generalized Allen-Cahn equation in \\cite{ShenTangYang_CMS2016}. Later, some attempts have been made to study second order MPP schemes for the fractional-in-space Allen-Cahn equation \\cite{HouTangYang_JSC2017} and the nonlocal Allen-Cahn equation \\cite{DuJuLiQiao_SINA2019}. Recently, some adaptive second order MPP schemes have been considered for the Allen-Cahn equation \\cite{LiaoTangZhou_SINA2019} and the time-fractional Allen-Cahn equation \\cite{LiaoTangZhou_JCP2019}.\n\nIn this paper, as a first attempt, we will explore the MPP scheme for an Allen-Cahn type dynamics with a long-range interaction term. The new ingredient in the MPP scheme is inspired by the continuous MPP property, namely, the nonlinear function $f(\\cdot)$ which satisfies the condition (\\ref{eqn:f'_condition}). However, two things are different between the continuous and the discrete settings. One is that $f(\\cdot)$ has to be extended by the constant values 0 and 1 outside $[0,1]$ in the continuous case but not in the discrete case. See the equation (\\ref{eqn:f_extension}) and the related discussion. 
The other is that in the continuous case, $f(\\cdot)$ has to be of the smallest degree satisfying (\\ref{eqn:f'_condition}) in order to be well controlled by the double well potential $W$, while in the discrete case, any polynomial $f(\\cdot)$ satisfying (\\ref{eqn:f'_condition}) (and (\\ref{eqn:f_condition})) suffices. Weaker conditions are allowed for the discrete schemes because the MPP of $\\phi(t)$ in the continuous pACOK dynamics depends on the entire history before $t$, while the discrete MPP of $\\phi^{n}$ only depends on the $k$ previous states $\\{\\phi^{j}\\}_{j=n-k}^{n-1}$ (in this paper, we focus on the case of $k=1$). See the proofs of Theorem \\ref{theorem:MPP_continuous}, Theorem \\ref{theorem:MPP_semi} and Theorem \\ref{theorem:MPP_fully} for details. This indeed provides much flexibility in choosing $f(\\cdot)$ to exploit various discrete MPP schemes for Allen-Cahn type dynamics with long-range interactions.\n\nFor the discrete MPP schemes, Lemma \\ref{lemma:MPP_condition} (for the time-discrete case) and Lemma \\ref{lemma:MPP_condition2} (for the fully-discrete case) play the key role; they are crucial for the analysis of not only the first order MPP scheme in this paper but potentially also of other higher order ones. In addition, these two lemmas suggest that the nonlocal terms might have to be treated explicitly in order to satisfy the discrete MPP.\n\n\nOur work is by no means a mere extension of the existing work on MPP from one model to another. This work has a potentially wider impact on many other applications. Indeed, we further extend this model to binary systems with various long-range interactions. See Section \\ref{section:GeneralFramework} for the detailed discussion on the extension. Our work could provide a general framework to explore MPP numerical schemes for other applications such as the micromagnetic model for garnet films \\cite{CondetteMelcherSuli_MathComp2010}, the FitzHugh-Nagumo system \\cite{RenTruskinovsky_Elasticity2000}, the implicit solvation model \\cite{Zhao_2018CMS}, etc.\n\n\nSince discrete energy stability is a byproduct when exploring the MPP schemes, we briefly review some of the existing work on energy stable numerical methods. Energy stable schemes, first studied by Du and Nicolaides in \\cite{DuNicolaides_SINA1991} with a second order accurate unconditionally stable time-stepping scheme for the Cahn-Hilliard equation, have been extensively developed for various $L^2$ and $H^{-1}$ gradient flow dynamics such as the standard Allen-Cahn and Cahn-Hilliard equations \\cite{ShenYang_DCDSA2010}, the phase field crystal model \\cite{WiseWangLowengrub_SINA2009, HuWiseWangLowengrub_JCP2009}, the modified phase field crystal model \\cite{WangWise_SINA2011}, and the epitaxial thin film growth model \\cite{ChenCondeWangWangWise_JSC2012}. Several popular numerical schemes adopted by the community are listed below. One is the convex splitting method \\cite{Eyre_Proc1998} in which the double well potential $W(\\phi)$ is split into the sum of a convex function and a concave one, and the convex part is treated implicitly while the concave one is treated explicitly. However, a nonlinear system usually needs to be solved at each time step, which induces a high computational cost. Another widely adopted method is the stabilized semi-implicit method \\cite{XuTang_SINA2006,ShenYang_DCDSA2010} in which $W(\\phi)$ is treated explicitly. A linear stabilizing term is added to maintain the energy stability. 
Another recent method is the IEQ method \\cite{ChengYangShen_JCP2017, Yang_JCP2016} in which all nonlinear terms are treated semi-implicitly, the energy stability is preserved and the resulting numerical schemes lead to a symmetric positive definite linear system to be solved at each time step. A variation of the IEQ method, which is called SAV method, is well studied in the last couple of years \\cite{ShenXuYang_SIAMReview2019}. For a more comprehensive review on the topics of the modeling and numerical methods of phase field approach, we refer the interested readers to \\cite{DuFeng_Handbook2020}.\n\nSome conventional notations adopted throughout the paper are collected here. We will denote by $\\|\\cdot\\|_{L^p}$ and $\\|\\cdot\\|_{H^s}$ the standard norms for the periodic Sobolev spaces $L^p_{\\text{per}}(\\mathbb{T}^d)$ and $H^s_{\\text{per}}(\\mathbb{T}^d)$. The standard $L^2$ inner product will be denoted by $\\langle \\cdot, \\cdot \\rangle$. In order to make the MPP satisfied by the pACOK equation (\\ref{eqn:pACOK}), the nonlinear function $f$ needs to be extended to $\\tilde{f}$ as follows:\n\\begin{align}\\label{eqn:f_extension}\n\\tilde{f} =\n\\begin{cases}\n0, \\hspace{0.64in} s < 0; \\\\\n3s^2 - 2s^3, \\quad 0 \\le s \\le 1; \\\\\n1, \\hspace{0.64in} s > 1.\n\\end{cases}\n\\end{align}\nWe still use $f$ to denote such an extension for the brevity of notations. Indeed, the extension of $f$ is only used for the proof of the MPP property for the continuous pACOK equation (\\ref{eqn:pACOK}). For the time-discrete and fully-discrete pACOK equations, the unextended $f(s) = 3s^2 - 2s^3$ suffices to guarantee the MPP and energy stability, see the proofs of Theorem \\ref{theorem:MPP_continuous}, Theorem \\ref{theorem:MPP_semi} and Theorem \\ref{theorem:MPP_fully} for the details. We take\n\\[\nL_{W''}: = \\|W''\\|_{L^{\\infty}[0,1]}, \\quad L_{f''}: = \\|f''\\|_{L^{\\infty}[0,1]}, \\quad L_{f'} = \\|f'\\|_{L^{\\infty}[0,1]}.\n\\]\nNext, $\\|(-\\Delta)^{-1}\\|$ denotes the optimal constant such that $\\|(-\\Delta)^{-1}f\\|_{L^{\\infty}}\\le C \\|f\\|_{L^{\\infty}}$, namely, it is the norm of the operator $(-\\Delta)^{-1}$ from $L^{\\infty}(\\mathbb{T}^d)$ to itself. We will take $[\\![ n]\\!]$ to be the set of integers $\\{1,2,\\cdots, n\\}$. Lastly, we denote $\\tilde{\\omega} = \\max\\{\\omega, 1-\\omega\\}$.\n\nThe rest of the paper is organized as follows. In Section 2, we will prove the MPP property for the continuous pACOK dynamics. In Section 3, a first order time-discrete numerical scheme will be studied which inherits the MPP and energy stability. In Section 4, we will conduct analysis of MPP and energy stability for the fully-discrete scheme. The error estimate will be carried as well. The extension of the MPP to general binary systems with long-range interactions is discussed in Section 5. We will present some numerical results to support our theoretical findings in Section 6, followed by a summary in Section 7. In the appendix, we present the wellposedness of the pACOK equation and the $L^{\\infty}$ bound for the weak solution of the pACOK equation.\n\n\n\n\n\n\\section{MPP for the continuous pACOK dynamics}\n\nIn this section, we will prove that the continuous pACOK equation (\\ref{eqn:pACOK}) satisfies the MPP, and one can see the critical role that $f(\\phi)$ plays in the theory. 
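For later use, we record the derivatives of the two nonlinearities: a direct computation from $W(\\phi) = 18(\\phi^2-\\phi)^2$ and $f(\\phi) = 3\\phi^2 - 2\\phi^3$ gives, for $\\phi\\in[0,1]$,\n\\[\nW'(\\phi) = 36(\\phi^2-\\phi)(2\\phi-1), \\qquad f'(\\phi) = 6(\\phi-\\phi^2),\n\\]\nso $W'$ and $f'$ indeed share the common factor $\\phi-\\phi^2$; this factorization is exactly what drives the estimates in the proof below.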
Note that in this section, $f(\\cdot)$ represents the extended version (\\ref{eqn:f_extension}).\n\n\n\n\\begin{theorem}\\label{theorem:MPP_continuous}\nThe pACOK equation (\\ref{eqn:pACOK}) is MPP, namely, if $0\\le \\phi_0 \\le 1$, then $0\\le \\phi(t) \\le 1$ for any $t>0$, provided that $\\phi_0\\in H^1(\\mathbb{T}^d)$ and\n\\begin{align}\\label{eqn_MPPcondition_continuous}\n\\frac{\\epsilon\\tilde{\\omega}}{6}\\Big[ \\gamma\\|(-\\Delta)^{-1}\\| + M |\\mathbb{T}^d| \\Big] \\le 1.\n\\end{align}\n\\end{theorem}\n\\begin{proof}\nMultiplying on the two sides of (\\ref{eqn:pACOK}) by $2\\phi-1$, one has\n\\begin{align*}\n\\dfrac{\\partial}{\\partial t} (\\phi^2-\\phi) =&\\ \\epsilon \\Delta(\\phi^2-\\phi) - 2\\epsilon|\\nabla\\phi|^2 - \\dfrac{36}{\\epsilon}(\\phi^2-\\phi)(2\\phi-1)^2 \\\\\n&- \\gamma(-\\Delta)^{-1}(f(\\phi)-\\omega)\\cdot 6(\\phi-\\phi^2)(2\\phi-1) - M\\int_{\\mathbb{T}^d} (f(\\phi)-\\omega)\\ \\text{d}x\\cdot 6(\\phi-\\phi^2)(2\\phi-1)\n\\end{align*}\nMultiplying on the two sides of the above equation by $(\\phi^2-\\phi)^+$ and taking integral over $\\mathbb{T}^d$, one has\n\\begin{align*}\n&\\dfrac{1}{2}\\dfrac{\\partial}{\\partial t} \\int_{\\mathbb{T}^d} |(\\phi^2-\\phi)^+|^2\\ \\text{d}x \\\\\n=&\\ -\\epsilon \\int_{\\mathbb{T}^d} |\\nabla(\\phi^2-\\phi)^+|^2\\ \\text{d}x - 2\\epsilon \\int_{\\mathbb{T}^d} |\\nabla\\phi|^2 (\\phi^2-\\phi)^+\\ \\text{d}x - \\dfrac{36}{\\epsilon}\\int_{\\mathbb{T}^d}|(\\phi^2-\\phi)^+|^2 (2\\phi-1)^2\\ \\text{d}x \\\\\n& + 6\\gamma \\int_{\\mathbb{T}^d} (-\\Delta)^{-1}(f(\\phi)-\\omega) |(\\phi^2-\\phi)^+|^2(2\\phi-1)\\ \\text{d}x + 6M\\int_{\\mathbb{T}^d} (f(\\phi)-\\omega)\\ \\text{d}x \\int_{\\mathbb{T}^d} |(\\phi^2-\\phi)^+|^2(2\\phi-1)\\ \\text{d}x \\\\\n=&\\ -\\epsilon \\int_{\\mathbb{T}^d} |\\nabla(\\phi^2-\\phi)^+|^2\\ \\text{d}x - 2\\epsilon \\int_{\\mathbb{T}^d} |\\nabla\\phi|^2 (\\phi^2-\\phi)^+\\ \\text{d}x - \\dfrac{36}{\\epsilon}\\int_{\\mathbb{T}^d}|(\\phi^2-\\phi)^+|^2 \\Big( (2\\phi-1)^2 - (A+B)(2\\phi-1) \\Big)\\ \\text{d}x\n\\end{align*}\nwhere\n\\[\nA = \\dfrac{\\gamma\\epsilon}{6}(-\\Delta)^{-1}(f(\\phi)-\\omega), \\quad B = \\dfrac{M\\epsilon}{6}\\int_{\\mathbb{T}^d}(f(\\phi)-\\omega)\\ \\text{d}x.\n\\]\nWhen $\\phi^2-\\phi\\ge0$, one has $|2\\phi-1| \\ge 1$. 
Note that the condition (\\ref{eqn_MPPcondition_continuous}) implies $\\|A\\|_{L^{\\infty}} + |B|\\le 1$, therefore $(2\\phi-1)^2 - (A+B)(2\\phi-1)\\ge0$, which implies that\n\\[\n\\dfrac{1}{2}\\dfrac{\\partial}{\\partial t} \\int_{\\mathbb{T}^d} |(\\phi^2-\\phi)^+|^2\\ \\text{d}x \\le 0.\n\\]\nTaking integral for time from 0 to $t$ leads\n\\[\n\\int_{\\mathbb{T}^d} |(\\phi^2-\\phi)^+|^2(t)\\ \\text{d}x \\le \\int_{\\mathbb{T}^d} |(\\phi^2-\\phi)^+|^2(0)\\ \\text{d}x.\n\\]\nIf $0\\le \\phi(0) \\le 1$, $\\int_{\\mathbb{T}^d} |(\\phi^2-\\phi)^+|^2(0)\\ \\text{d}x=0$, then\n\\[\n\\int_{\\mathbb{T}^d} |(\\phi^2-\\phi)^+|^2(t)\\ \\text{d}x \\le 0 \\Rightarrow 0\\le \\phi(t) \\le 1,\n\\]\nwhich completes the proof.\n\\end{proof}\n\n\n\n\\begin{remark}\nThe wellposedness of the pACOK equation (\\ref{eqn:pACOK}) can be well established by using the standard minimization movement scheme, see Theorem \\ref{theorem:wellposedness} in the Appendix for the related discussion.\n\\end{remark}\n\n\n\\begin{remark}\nFor the condition (\\ref{eqn_MPPcondition_continuous}), it is theoretically easy to achieve due to the smallness of the interfacial width $\\epsilon$, though the long-range repulsion strength $\\gamma$ and the penalty constant $M$ are supposed to be large.\n\\end{remark}\n\n\\begin{remark}\nThe extension of $f$ is critical in order to bound the $A$ term as\n\\[\n\\|A\\|_{L^{\\infty}} \\le \\frac{\\gamma\\epsilon}{6} \\|(-\\Delta)^{-1}\\|\\cdot \\|f(\\phi)-\\omega\\|_{L^{\\infty}} \\le \\frac{\\gamma\\epsilon}{6} \\|(-\\Delta)^{-1}\\|\\cdot \\tilde{\\omega}.\n\\]\nOn the other hand, in the 2d case, one can still have the MPP held for non-extended $f(\\phi)$ by showing that $\\|f(\\phi)\\|_{L^{\\infty}} \\le C$ for some generic constant $C$ which depends on $\\|\\phi_0\\|_{H^1(\\mathbb{T}^2)}, \\epsilon^{-1}, \\omega, \\gamma$ and $M$. See the Theorem \\ref{theorem: Linfity} in the Appendix for the $L^{\\infty}$ bound for the 2d weak solution $\\phi$. Then $A$ is still bounded as\n\\[\n\\|A\\|_{L^{\\infty}} \\le \\frac{C\\gamma\\epsilon}{6} \\|(-\\Delta)^{-1}\\|\\tilde{\\omega}.\n\\]\nHowever, to bound the quantity $\\|A\\|_{L^{\\infty}}+|B|$ by 1, one has to take sufficiently small value of $\\gamma$, which is theoretically acceptable but unrealistic in applications.\n\\end{remark}\n\n\n\n\n\\section{Time-discrete Scheme: MPP and Energy Stability}\n\n\n\n\n\nNow we will consider a semi-discrete scheme for the pACOK equation (\\ref{eqn:pACOK}), and show that such a scheme satisfies the MPP and energy stability under some conditions. Given time interval $[0,T]$ and an integer $N>0$, we take the uniform time step size $\\tau = T\/N$ and $t_n = n\\tau$ for $n=0,1,\\cdots,N$. Let $\\phi^n(x) \\approx \\phi(t_n,x)$ be the temporal semi-discrete approximation of the solution $\\phi$ at $t_n$. 
Given initial data $\\phi^0 = \\phi_0$ and a splitting constant (or stabilizer) $\\kappa > 0$, we consider the following stabilized time-discrete scheme:\n\\begin{align}\\label{eqn:pACOK_SemiImplicit}\n\\left(\\dfrac{1}{\\tau}+\\dfrac{\\kappa}{\\epsilon}\\right)(\\phi^{n+1}-\\phi^n) = &\\ \\epsilon\\Delta\\phi^{n+1} - \\dfrac{1}{\\epsilon}W'(\\phi^n) \\nonumber\\\\\n& - \\gamma(-\\Delta)^{-1}(f(\\phi^n)-\\omega)f'(\\phi^n) - M\\int_{\\mathbb{T}^d} (f(\\phi^n)-\\omega)\\text{d}x\\cdot f'(\\phi^n),\n\\end{align}\nwhich can be rewritten as\n\\begin{align}\\label{eqn:pACOK_SemiImplicit2}\n\\left(\\left(1+\\dfrac{\\tau\\kappa}{\\epsilon}\\right)I - \\tau\\epsilon\\Delta \\right)\\phi^{n+1} = &\\left(1+\\dfrac{\\tau\\kappa}{\\epsilon}\\right)\\phi^n - \\dfrac{\\tau}{\\epsilon}W'(\\phi^n) \\nonumber\\\\\n&- \\tau\\gamma(-\\Delta)^{-1}(f(\\phi^n)-\\omega)f'(\\phi^n) - \\tau M\\int_{\\mathbb{T}^d} (f(\\phi^n)-\\omega)\\text{d}x\\cdot f'(\\phi^n).\n\\end{align}\nA simple calculation reveals that the eigenvalues of the operator $(1+\\tau\\kappa\\epsilon^{-1})I - \\tau\\epsilon\\Delta$ on the left hand side of (\\ref{eqn:pACOK_SemiImplicit2}) are all positive. Therefore the scheme is unconditionally uniquely solvable.\n\n\n\n\n\\subsection{MPP for time-discrete scheme}\n\n\n\n\nIn this section, we will show that the scheme (\\ref{eqn:pACOK_SemiImplicit2}) is MPP. To this end, we begin with a lemma.\n\n\\begin{lemma}\\label{lemma:MPP_condition}\nLet\n\\[\n\\mathcal{F}(\\psi) = \\left(1+\\dfrac{\\tau\\kappa}{\\epsilon}\\right)\\psi - \\dfrac{\\tau}{\\epsilon}W'(\\psi) - \\tau\\gamma(-\\Delta)^{-1}(f(\\psi)-\\omega)f'(\\psi) - \\tau M\\int_{\\mathbb{T}^d} (f(\\psi)-\\omega)\\emph{d}x f'(\\psi).\n\\]\nIf $\\psi(x)\\in [0,1]$, then we have\n\\[\n\\max_{\\psi\\in[0,1]} \\mathcal{F}(\\psi) = 1+\\dfrac{\\tau\\kappa}{\\epsilon} ; \\quad \\min_{\\psi\\in[0,1]} \\mathcal{F}(\\psi) = 0,\n\\]\nprovided that\n\\begin{align}\\label{eqn:MPP_condition}\n\\frac{1}{\\tau}+ \\dfrac{\\kappa}{\\epsilon} \\ge \\dfrac{L_{W''}}{\\epsilon} + \\tilde{\\omega}L_{f''} \\Big(\\gamma\\|(-\\Delta)^{-1}\\|+M|\\mathbb{T}^d| \\Big).\n\\end{align}\n\\end{lemma}\n\\begin{proof}\nNote that $f(\\cdot)$ satisfies $f'(0) = f'(1) = 0$, it follows that for $\\psi\\equiv 0$, $\\mathcal{F}(\\psi) = 0$; for $\\psi\\equiv 1$, $\\mathcal{F}(\\psi) = 1+\\tau\\kappa\/\\epsilon$. For any other $\\psi$ such that $0\\le \\psi \\le 1$ and any $x\\in\\mathbb{T}^d$, one has\n\\begin{align*}\n\\mathcal{F}(\\psi(x)) & = \\mathcal{F}(0) + \\left(1+\\dfrac{\\tau\\kappa}{\\epsilon}\\right)\\psi(x) - \\frac{\\tau}{\\epsilon}\\psi(x)W''(\\xi_0) \\\\\n& \\quad - \\tau\\gamma \\Big( (-\\Delta)^{-1}(f(\\psi)-\\omega) \\Big)(x) \\cdot \\psi(x) f''(\\eta_0) - \\tau M \\int_{\\mathbb{T}^d} (f(\\psi)-\\omega ) \\text{d}x \\cdot \\psi(x) f''(\\eta_0) \\\\\n& \\ge \\mathcal{F}(0) + \\left(1+\\dfrac{\\tau\\kappa}{\\epsilon}\\right)\\psi(x) - \\dfrac{\\tau}{\\epsilon}\\psi L_{W''} - \\tau\\gamma\\psi\\|(-\\Delta)^{-1}\\|\\tilde{\\omega}L_{f''} - \\tau M \\psi |\\mathbb{T}^d| \\tilde{\\omega} L_{f''} \\ge \\mathcal{F}(0),\n\\end{align*}\nwhere $\\xi_0, \\eta_0 \\in (0,\\psi(x)) \\subset (0,1)$ are constants obtained from Taylor expansion. 
On the other hand,\n\\begin{align*}\n\\mathcal{F}(1-\\psi(x)) & = \\mathcal{F}(1) - \\left(1+\\dfrac{\\tau\\kappa}{\\epsilon}\\right)\\psi(x) + \\frac{\\tau}{\\epsilon}\\psi(x)W''(\\xi_1) \\\\\n&\\quad + \\tau\\gamma \\Big( (-\\Delta)^{-1}(f(1-\\psi)-\\omega)\\Big)(x)\\cdot\\psi(x) f''(\\eta_1) + \\tau M \\int_{\\mathbb{T}^d}(f(1-\\psi)-\\omega)\\text{d}x\\cdot \\psi(x) f''(\\eta_1) \\\\\n& \\le \\mathcal{F}(1) -\\left(1+\\dfrac{\\tau\\kappa}{\\epsilon}\\right)\\psi(x) + \\dfrac{\\tau}{\\epsilon}\\psi(x) L_{W''} + \\tau\\gamma\\psi(x)\\|(-\\Delta)^{-1}\\| \\tilde{\\omega}L_{f''} - \\tau M \\psi |\\mathbb{T}^d| \\tilde{\\omega}L_{f''} \\le \\mathcal{F}(1),\n\\end{align*}\nwhere $\\xi_1, \\eta_1 \\in (1-\\psi(x), 1) \\subset (0,1)$ are constants by Taylor expansion. Consequently we have the desired bounds for $\\mathcal{F}(\\psi)$.\n\\end{proof}\n\nNow we present the MPP property for the scheme (\\ref{eqn:pACOK_SemiImplicit}) or (\\ref{eqn:pACOK_SemiImplicit2}).\n\n\\begin{theorem}\\label{theorem:MPP_semi}\nThe stabilized time-discrete semi-implicit scheme (\\ref{eqn:pACOK_SemiImplicit}) or (\\ref{eqn:pACOK_SemiImplicit2}) is MPP, namely\n\\[\n0\\le \\phi^0 \\le 1 \\Rightarrow 0\\le \\phi^n \\le 1, \\quad \\forall n \\in [\\![ N]\\!].\n\\]\nprovided that the condition (\\ref{eqn:MPP_condition}) holds.\n\\end{theorem}\n\\begin{proof}\nWe can prove the result by induction. Assume that $0\\le \\phi^n \\le 1$, and $\\phi^{n+1}$ is obtained by the scheme (\\ref{eqn:pACOK_SemiImplicit2}). Assume $\\phi^{n+1}$ reaches the maximal value at $x^*$, then $-\\tau\\epsilon\\Delta\\phi^{n+1}(x^*)\\ge0$, and\n\\[\n\\left(1+\\dfrac{\\tau\\kappa}{\\epsilon}\\right) \\phi^{n+1}(x^*) \\le \\mathcal{F}(\\phi^n(x^*)) \\le 1+\\dfrac{\\tau\\kappa}{\\epsilon} \\Rightarrow \\phi^{n+1} \\le 1.\n\\]\nSimilarly let $x_*$ be a minimal point for $\\phi^{n+1}$, then $-\\tau\\epsilon\\Delta\\phi^{n+1}\\le0$, and\n\\[\n\\left(1+\\dfrac{\\tau\\kappa}{\\epsilon}\\right) \\phi^{n+1}(x_*) \\ge \\mathcal{F}(\\phi^n(x_*)) \\ge 0\\Rightarrow \\phi^{n+1} \\ge 0.\n\\]\nwhich completes the proof.\n\\end{proof}\n\n\\begin{remark}\nNote that the condition (\\ref{eqn:MPP_condition}) holds for sufficiently large stabilizer $\\kappa$ no matter what value of $\\tau>0$. 
Therefore the stabilized semi-implicit scheme (\\ref{eqn:pACOK_SemiImplicit}) or (\\ref{eqn:pACOK_SemiImplicit2}) is unconditionally MPP for sufficiently large $\\kappa$.\n\\end{remark}\n\n\n\n\n\n\n\n\\subsection{Energy stability for time-discrete scheme}\n\nWhile the stabilized semi-discrete scheme (\\ref{eqn:pACOK_SemiImplicit2}) is MPP, it is also energy stable as shown in the following theorem.\n\\begin{theorem}\\label{theorem:EnergyStability}\nAssume the initial $\\phi^0$ satisfies $0\\le \\phi^0\\le 1$, then the stabilized semi-implicit scheme (\\ref{eqn:pACOK_SemiImplicit}) or (\\ref{eqn:pACOK_SemiImplicit2}) is unconditionally energy stable in the sense that\n\\begin{align}\nE^{\\emph{pOK}}[\\phi^{n+1}] \\le E^{\\emph{pOK}}[\\phi^{n}]\n\\end{align}\n provided that\n\\begin{align}\\label{eqn:ES_condition}\n\\frac{\\kappa}{\\epsilon} \\ge \\frac{L_{W''}}{\\epsilon} + (L_{f'}^2+\\tilde{\\omega}L_{f''})\\Big(\\gamma\\|(-\\Delta)^{-1}\\| + M |\\mathbb{T}^d|\\Big).\n\\end{align}\n\\end{theorem}\n\\begin{proof}\nTaking the $L^2$ inner product with $\\phi^{n+1}-\\phi^{n}$ on the two sides of (\\ref{eqn:pACOK_SemiImplicit}) , we have\n\\begin{align}\\label{eqn:estimate1}\n&\\dfrac{1}{\\tau} \\|\\phi^{n+1} - \\phi^{n}\\|_{L^2}^2 \\nonumber\\\\\n= &\\ - \\dfrac{\\kappa}{\\epsilon}\\|\\phi^{n+1} - \\phi^{n}\\|_{L^2}^2 \\underbrace{-\\epsilon \\langle \\nabla\\phi^{n+1}, \\nabla\\phi^{n+1} - \\nabla\\phi^{n} \\rangle }_{\\text{I}} \\underbrace{ - \\epsilon^{-1} \\langle W'(\\phi^{n}), \\phi^{n+1} - \\phi^{n}\\rangle }_{\\text{II}} \\nonumber\\\\\n& \\underbrace{ -\\gamma \\left\\langle (-\\Delta)^{-1}(f(\\phi^n)-\\omega)f'(\\phi^n), \\phi^{n+1}-\\phi^{n} \\right\\rangle }_{\\text{III}} \\underbrace{ -M \\textstyle{\\int_{\\mathbb{T}^d}} (f(\\phi^n)-\\omega)\\text{d}x \\left\\langle f'(\\phi^n), \\phi^{n+1}-\\phi^{n} \\right\\rangle }_{\\text{IV}}.\n\\end{align}\nUsing the identity $a\\cdot (a-b) = \\frac{1}{2}|a|^2 - \\frac{1}{2}|b|^2 + \\frac{1}{2}|a-b|^2$ and $b\\cdot (a-b) = \\frac{1}{2}|a|^2 - \\frac{1}{2}|b|^2 - \\frac{1}{2}|a-b|^2$, we have:\n\\begin{align*}\n\\text{I} =&\\ - \\frac{\\epsilon}{2} \\left( \\|\\nabla\\phi^{n+1}\\|_{L^2}^2 - \\|\\nabla\\phi^{n}\\|_{L^2}^2 + \\|\\nabla\\phi^{n+1}-\\nabla\\phi^n\\|_{L^2}^2 \\right); \\\\\n\\text{II} =& - \\epsilon^{-1}\\left\\langle 1, W'(\\phi^n)(\\phi^{n+1}-\\phi^n) \\right\\rangle = -\\epsilon^{-1}\\left\\langle 1, W(\\phi^{n+1}) \\right\\rangle + \\epsilon^{-1} \\left\\langle 1, W(\\phi^{n}) \\right\\rangle + (2\\epsilon)^{-1} W''(\\xi^n) \\|\\phi^{n+1}-\\phi^n\\|_{L^2}^2 ; \\\\\n \\text{III} =&\\ -\\gamma \\Big\\langle (-\\Delta)^{-1}(f(\\phi^{n})-\\omega), f(\\phi^{n+1}) - f(\\phi^{n}) \\Big\\rangle + \\dfrac{\\gamma}{2} \\Big\\langle (-\\Delta)^{-1}(f(\\phi^{n})-\\omega), f''(\\eta^n)(\\phi^{n+1} - \\phi^{n})^2 \\Big\\rangle \\\\\n \\quad=&\\ -\\dfrac{\\gamma}{2} \\left( \\|(-\\Delta)^{-\\frac{1}{2}}(f(\\phi^{n+1})-\\omega)\\|_{L^{2}}^2 - \\|(-\\Delta)^{-\\frac{1}{2}}(f(\\phi^{n})-\\omega)\\|_{L^{2}}^2 - \\|(-\\Delta)^{-\\frac{1}{2}}(f(\\phi^{n+1})-f(\\phi^n))\\|_{L^{2}}^2 \\right) \\\\\n& \\ + \\dfrac{\\gamma}{2} \\Big\\langle (-\\Delta)^{-1}(f(\\phi^{n})-\\omega), f''(\\eta^n)(\\phi^{n+1} - \\phi^{n})^2\\Big\\rangle ;\\\\\n \\text{IV} = & -\\dfrac{M}{2}\\left( \\left| \\textstyle{\\int_{\\mathbb{T}^d}} ( f(\\phi^{n+1}) - \\omega ) \\text{d}x \\right|^2 -\n \\left| \\textstyle{\\int_{\\mathbb{T}^d}} ( f(\\phi^{n}) - \\omega ) \\text{d}x \\right|^2 -\n \\left| \\textstyle{\\int_{\\mathbb{T}^d}} ( f(\\phi^{n+1}) - f(\\phi^n) \\text{d}x 
\\right|^2\n \\right)& \\\\\n& + \\dfrac{M}{2} \\left( \\textstyle{\\int_{\\mathbb{T}^d}} ( f(\\phi^{n}) - \\omega ) \\text{d}x\\right) f''(\\eta^n) \\|\\phi^{n+1}-\\phi^n\\|_{L^2}^2,\n\\end{align*}\nwhere $\\xi^n$ and $\\eta^n$ are between $\\phi^n$ and $\\phi^{n+1}$. Note that the condition (\\ref{eqn:ES_condition}) implies (\\ref{eqn:MPP_condition}), so Theorem \\ref{theorem:MPP_semi} gives $\\phi^n,\\phi^{n+1} \\in [0,1]$ and consequently $\\xi^n, \\eta^n \\in (0,1)$. Therefore, we do not need the extension of $f$ as in (\\ref{eqn:f_extension}) to perform the Taylor expansions above. Finally, inserting the equalities for I--IV back into (\\ref{eqn:estimate1}) and noting that $|f'|\\le L_{f'}$ and $|f''|\\le L_{f''}$ on $[0,1]$, the condition (\\ref{eqn:ES_condition}) guarantees that all the remainder terms are controlled by the stabilizing term $-\\frac{\\kappa}{\\epsilon}\\|\\phi^{n+1}-\\phi^n\\|_{L^2}^2$, and therefore $E^{\\text{pOK}}[\\phi^{n+1}] \\le E^{\\text{pOK}}[\\phi^{n}]$, which completes the proof.\n\\end{proof}\n\nWe end this section with an error estimate for the time-discrete scheme (\\ref{eqn:pACOK_SemiImplicit}).\n\n\\begin{theorem}\nLet $T>0$ and an integer $N>0$ be such that $\\tau = \\frac{T}{N}$ and $t_n = n\\tau$ for $n=0,1,\\cdots, N$. Assume that $\\phi_t \\in L^2(0,T; L^2)$ and $\\phi_{tt} \\in L^2(0,T; H^{-1})$. Then for $\\kappa$ satisfying the energy stability condition (\\ref{eqn:ES_condition}), if the time step size $\\tau \\le \\epsilon\/(10\\kappa)$, we have\n\\begin{align}\\label{eqn:errorestimate_time}\n\\|\\phi(t_n) - \\phi^n\\|_{L^2} \\le \\tilde{C}\\tau, \\quad \\forall n \\in [\\![ N]\\!],\n\\end{align}\nwhere $\\tilde{C} = e^{6T\\kappa\/\\epsilon} \\Big( \\frac{1+\\frac{|\\mathbb{T}^d|}{4\\pi^2}}{3\\epsilon\/2} \\|\\phi_{tt}\\|_{L^2(0,T; H^{-1})}^2 + \\frac{4\\kappa}{\\epsilon} \\|\\phi_t\\|_{L^2(0,T;L^2)}^2 \\Big)^{\\frac{1}{2}} $ is a constant independent of $\\tau$ and $N$.\n\\end{theorem}\n\n\n\n\\section{Fully-discrete Scheme: Maximum Principle Preservation and Energy Stability}\n\nIn this section, we propose a fully-discrete scheme by discretizing the spatial operators with a second order finite difference approximation. To this end, we adopt some notations for the finite difference approximation. For brevity of notation, we will focus the discussion on the 2D case, which can be easily extended to the 3D formulation.\n\n\\subsection{Second order finite difference scheme for spatial discretization}\n\nWe consider $\\mathbb{T}^2 = \\prod_{i=1}^2 [-X_i,X_i] \\subset \\mathbb{R}^2$. Let $N_1, N_2$ be positive even integers. Take $h_i = \\frac{2X_i}{N_i}, i=1,2$ and $\\mathbb{T}^2_h = \\mathbb{T}^2\\ \\cap (\\otimes_{i=1}^2 h_i\\mathbb{Z})$. 
We define the index set:\n\\begin{align*}\nS_h &= \\left\\{ (k_1,k_2)\\in\\mathbb{Z}^2 | 1\\le k_i \\le N_i, i=1,2 \\right\\}.\n\\end{align*}\nDenote by $\\mathcal{M}_h$ the collection of periodic grid functions on $\\mathbb{T}^2_h$:\n\\begin{align*}\n\\mathcal{M}_h = \\left\\{ f: \\mathbb{T}^2_h\\rightarrow\\mathbb{R} | f_{k_1+m_1N_1, k_2+m_2N_2} =f_{k_1,k_2}, \\forall (k_1,k_2)\\in S_h, \\forall (m_1,m_2)\\in \\mathbb{Z}^2 \\right\\}.\n\\end{align*}\nFor any $f,g\\in\\mathcal{M}_h$ and $\\textbf{f} = (f^1,f^2)^T, \\textbf{g} = (g^1,g^2)^T\\in\\mathcal{M}_h\\times\\mathcal{M}_h$, we define the discrete $L^2$ inner product $\\langle\\cdot,\\cdot\\rangle_h$, discrete $L^2$ norm $\\|\\cdot\\|_{h,L^2}$ and discrete $L^{\\infty}$ norm $\\|\\cdot\\|_{h, L^{\\infty}}$ as follows:\n\\begin{align*}\n\\langle f, g\\rangle_h &= h_1 h_2 \\sum_{(i,j)\\in S_h} f_{ij} g_{ij}, \\quad \\|f\\|_{h,L^2} = \\sqrt{\\langle f, f \\rangle_h}, \\quad \\|f\\|_{h, L^{\\infty}} = \\max_{(i,j)\\in S_h} |f_{ij}| ; \\\\\n\\langle \\textbf{f}, \\textbf{g}\\rangle_h &= h_1 h_2 \\sum_{(i,j)\\in S_h} \\left( f_{ij}^1 g_{ij}^1 + f_{ij}^2 g_{ij}^2 \\right), \\quad \\|\\textbf{f}\\|_{h,L^2} = \\sqrt{\\langle \\textbf{f}, \\textbf{f}\\rangle_h}.\n\\end{align*}\nLet $\\mathring{\\mathcal{M}}_h = \\{f\\in\\mathcal{M}_h | \\langle f, 1 \\rangle_h = 0\\}$ be the collection of all periodic grid functions with zero mean.\n\nWe define the second order central difference approximation of the Laplacian operator $\\Delta$ as a discrete linear operator $\\Delta_h: \\mathring{\\mathcal{M}}_h \\rightarrow \\mathring{\\mathcal{M}}_h$,\n\\begin{align}\n\\Delta_h u = f: \\Delta_h u_{ij} = \\frac{1}{h_1^2}(u_{i-1,j} - 2u_{ij} + u_{i+1,j}) + \\frac{1}{h_2^2}(u_{i,j-1} - 2u_{ij} + u_{i,j+1}),\n\\end{align}\nwhere the periodic boundary condition applies when the indices $i \\notin [\\![ N_1]\\!]$ or $j \\notin [\\![ N_2]\\!]$. Since $\\Delta_h: \\mathring{\\mathcal{M}}_h \\rightarrow \\mathring{\\mathcal{M}}_h$ is one-to-one, it is safe to define its inverse $(\\Delta_h)^{-1}: \\mathring{\\mathcal{M}}_h \\rightarrow \\mathring{\\mathcal{M}}_h$ by\n\\begin{align}\n(\\Delta_h)^{-1} f = u \\quad \\text{if and only if} \\quad \\Delta_h u = f.\n\\end{align}\nWe denote by $\\|(-\\Delta_h)^{-1}\\|$ the optimal constant such that $\\|(-\\Delta_h)^{-1}f\\|_{h,L^\\infty}\\le C \\|f\\|_{h,L^\\infty}$, namely, the norm of the operator $(-\\Delta_h)^{-1}$ from $L^{\\infty}(\\mathring{\\mathcal{M}}_h)$ to itself.\n\n\nGiven the discrete Laplacian operator $\\Delta_h$ defined above, and denoting by $\\Phi^{n}\\approx \\phi(x,t_n)|_{\\mathbb{T}^2_h}$ the numerical solution, we arrive at the following first order fully-discrete semi-implicit scheme for the pACOK equation (\\ref{eqn:pACOK}): for any $n \\in [\\![ N]\\!]$, find $\\Phi^{n+1} = (\\Phi_{ij}^{n+1}) \\in \\mathcal{M}_h$ such that\n\\begin{align}\\label{eqn:pACOK_FullDiscrete}\n\\left(\\dfrac{1}{\\tau}+\\dfrac{\\kappa_h}{\\epsilon}\\right)(\\Phi^{n+1}-\\Phi^n) = \\epsilon\\Delta_h\\Phi^{n+1} - \\dfrac{1}{\\epsilon}W'(\\Phi^n) &- \\gamma(-\\Delta_h)^{-1}(f(\\Phi^n)-\\omega) \\odot f'(\\Phi^n) \\nonumber \\\\\n&- M \\langle f(\\Phi^n)-\\omega, 1 \\rangle_h \\text{d}\\mathbf{x} f'(\\Phi^n),\n\\end{align}\nwith $\\Phi^0 = (\\Phi_{ij}^0) = \\phi_0|_{\\mathbb{T}^2_h}$ being the given initial data, and $\\kappa_h$ the stabilization constant. Here $\\text{d}\\mathbf{x} = h_1h_2$ and $\\odot$ represents pointwise multiplication. 
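For readers who wish to experiment with the scheme, the discrete Laplacian $\\Delta_h$ defined above admits a very compact implementation. The following is a minimal NumPy sketch (the function name \\texttt{laplacian\\_h} and its arguments are our own illustrative choices, not notation from this paper), in which the periodic boundary condition is realized by cyclic index shifts:\n\\begin{verbatim}\nimport numpy as np\n\ndef laplacian_h(u, h1, h2):\n    # second order five-point discrete Laplacian with periodic BC;\n    # np.roll implements the cyclic indexing i-1 and i+1 modulo N1\n    # (respectively j-1 and j+1 modulo N2)\n    return ((np.roll(u, 1, axis=0) - 2.0*u + np.roll(u, -1, axis=0))/h1**2\n            + (np.roll(u, 1, axis=1) - 2.0*u + np.roll(u, -1, axis=1))/h2**2)\n\\end{verbatim}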
The scheme can be reformulated as\n\\begin{align}\\label{eqn:pACOK_FullDiscreteII}\n\\left(\\left(1+\\dfrac{\\tau\\kappa}{\\epsilon}\\right)I - \\tau\\epsilon\\Delta_h\\right)\\Phi^{n+1} = \\left(1+\\dfrac{\\tau\\kappa}{\\epsilon}\\right)\\Phi^{n} - \\dfrac{\\tau}{\\epsilon}W'(\\Phi^n) & - \\tau\\gamma(-\\Delta_h)^{-1}(f(\\Phi^n)-\\omega)\\odot f'(\\Phi^n) \\nonumber \\\\\n&- \\tau M \\langle f(\\Phi^n)-\\omega, 1 \\rangle_h \\text{d}\\mathbf{x} f'(\\Phi^n),\n\\end{align}\nfrom which the unconditional unique solvability can be guaranteed by realizing the positivity of all the eigenvalues of the operator\n$\\left( 1+ \\tau\\kappa\\epsilon^{-1} \\right) I - \\tau \\epsilon \\Delta_h$ on the left hand side of (\\ref{eqn:pACOK_FullDiscreteII}).\n\n\n\n\\subsection{Maximum principle preservation for fully-discrete scheme}\n\nIn this section, we will show that the full-discrete scheme (\\ref{eqn:pACOK_FullDiscrete}) is MPP under a condition similar to (\\ref{eqn:MPP_condition}). To this end, a discrete counterpart of Lemma \\ref{lemma:MPP_condition} is needed.\n\n\n\\begin{lemma}\\label{lemma:MPP_condition2}\nLet $\\Psi\\in\\mathcal{M}_h$ be such that $0\\le\\Psi\\le 1$, and define $\\mathcal{F}_h: \\mathcal{M}_h \\rightarrow \\mathcal{M}_h$ as follows:\n\\[\n\\mathcal{F}_h(\\Psi) = \\left(1+\\dfrac{\\tau\\kappa_h}{\\epsilon}\\right)\\Psi - \\dfrac{\\tau}{\\epsilon}W'(\\Psi) - \\tau\\gamma(-\\Delta_h)^{-1}(f(\\psi)-\\omega)\\odot f'(\\Psi) - \\tau M \\langle f(\\psi)-\\omega,1 \\rangle\\emph{d}\\mathbf{x} f'(\\Psi),\n\\]\nthen we have\n\\[\n\\max_{0\\le\\Psi\\le 1} \\{\\mathcal{F}_h(\\Psi)\\}= 1+\\dfrac{\\tau\\kappa_h}{\\epsilon} ; \\quad \\min_{0\\le\\Psi\\le 1} \\{ \\mathcal{F}_h(\\Psi) \\} = 0,\n\\]\nprovided that\n\\begin{align}\\label{eqn:MPP_condition2}\n\\frac{1}{\\tau} + \\dfrac{\\kappa_h}{\\epsilon} \\ge \\dfrac{L_{W''}}{\\epsilon} + \\tilde{\\omega}L_{f''}\\Big(\\gamma\\|(-\\Delta_h)^{-1}\\| + M|\\mathbb{T}^2| \\Big).\n\\end{align}\n\\end{lemma}\n\nThe proof of Lemma \\ref{lemma:MPP_condition2} is similar to that of Lemma \\ref{lemma:MPP_condition}. The only difference is that the Laplacian operator $-\\Delta$ is replaced by a discrete Laplacian operator $-\\Delta_h$, and the integral term $\\int (f(\\psi)-\\omega) \\text{d}x$ is replaced by the Riemann sum. We therefore omit the details.\n\nNow we present the MPP for the fully-discrete sheme (\\ref{eqn:pACOK_FullDiscrete}) or (\\ref{eqn:pACOK_FullDiscreteII}).\n\n\\begin{theorem}\\label{theorem:MPP_fully}\nThe stabilized fully-discrete scheme (\\ref{eqn:pACOK_FullDiscrete}) or (\\ref{eqn:pACOK_FullDiscreteII}) is MPP provided that the condition (\\ref{eqn:MPP_condition2}) holds.\n\\end{theorem}\n\\begin{proof}\nAssume that $0\\le \\Phi^n \\le 1$, and $\\Phi^{n+1}$ is obtained by the scheme (\\ref{eqn:pACOK_FullDiscrete}). 
Assume $\\Phi^{n+1}$ reaches the maximal value at the index $(i^*,j^*)$, then\n\\[\n(\\Delta_h\\Phi^{n+1})_{i^*j^*} = \\frac{(\\Phi^n)_{i^*-1,j^*}+(\\Phi^n)_{i^*+1,j^*}-2(\\Phi^n)_{i^*j^*}}{h_1^2} + \\frac{(\\Phi^n)_{i^*,j^*-1}+(\\Phi^n)_{i^*,j^*+1}-2(\\Phi^n)_{i^*j^*}}{h_2^2} \\le 0,\n\\]\nand\n\\[\n\\left(1+\\dfrac{\\tau\\kappa_h}{\\epsilon}\\right) (\\Phi^{n+1})_{i^*j^*} \\le \\Big(\\mathcal{F}_h((\\Phi^n)\\Big)_{i^*j^*} \\le 1+\\dfrac{\\tau\\kappa}{\\epsilon} \\Rightarrow \\Phi^{n+1} \\le 1.\n\\]\nSimilarly let $(i_*, j_*)$ be the index for the smallest component of $\\Phi^{n+1}$, then\n\\[\n(\\Delta_h\\Phi^{n+1})_{i_*j_*} = \\frac{(\\Phi^n)_{i_*-1,j_*}+(\\Phi^n)_{i_*+1,j_*}-2(\\Phi^n)_{i_*j_*}}{h_1^2} + \\frac{(\\Phi^n)_{i_*,j_*-1}+(\\Phi^n)_{i_*,j_*+1}-2(\\Phi^n)_{i_*j_*}}{h_2^2} \\ge 0,\n\\]\nand\n\\[\n\\left(1+\\dfrac{\\tau\\kappa_h}{\\epsilon}\\right) (\\Phi^{n+1})_{i_*j_*} \\ge \\Big(\\mathcal{F}_h((\\Phi^n)\\Big)_{i_*j_*} \\ge 0 \\Rightarrow \\Phi^{n+1} \\ge 0,\n\\]\nwhich completes the proof.\n\\end{proof}\n\n\n\n\n\\subsection{Energy stability for fully-discrete scheme}\n\nWhile the stabilized fully-discrete scheme (\\ref{eqn:pACOK_FullDiscrete}) is MPP, it is also energy stable for the discrete OK energy functional defined below:\n\\begin{align}\\label{eqn:discreteEnergy}\nE_h^{\\text{pOK}}[\\Phi] = & -\\frac{\\epsilon}{2} \\langle \\Delta_h\\Phi,\\Phi \\rangle_h + \\frac{1}{\\epsilon} \\langle W(\\Phi),1\\rangle_h + \\frac{\\gamma}{2} \\Big\\langle (-\\Delta_h)^{-1}(f(\\Phi)-\\omega), (f(\\Phi)-\\omega) \\Big\\rangle_h \\nonumber \\\\\n& + \\frac{M}{2} \\Big( \\langle f(\\Phi^n)-\\omega, 1 \\rangle_h \\text{d}\\mathbf{x} \\Big)^2.\n\\end{align}\n\n\n\n\\begin{theorem}\nAssume the initial $\\Phi^0$ satisfies $0\\le \\Phi^0\\le 1$, then the stabilized fully-discrete semi-implicit scheme (\\ref{eqn:pACOK_FullDiscrete}) or (\\ref{eqn:pACOK_FullDiscreteII}) is unconditionally energy stable in the sense that\n\\begin{align}\nE_h^{\\emph{pOK}}[\\Phi^{n+1}] \\le E_h^{\\emph{pOK}}[\\Phi^{n}]\n\\end{align}\n provided that\n\\begin{align}\\label{eqn:discreteES_condition}\n\\frac{\\kappa_h}{\\epsilon} \\ge \\frac{L_{W''}}{\\epsilon} + (L_{f'}^2+\\tilde{\\omega}L_{f''})\\Big(\\gamma\\|(-\\Delta_h)^{-1}\\| + M |\\mathbb{T}^2|\\Big).\n\\end{align}\n\\end{theorem}\n\\begin{proof}\nThe proof is similar to that of Theorem \\ref{theorem:EnergyStability}. To see how the discrete operators apply in the proof, we will still show it in details. 
Taking the discrete $L^2$ inner product with $\\Phi^{n+1}-\\Phi^{n}$ on both sides of (\\ref{eqn:pACOK_FullDiscrete}), we have\n\\begin{align}\\label{eqn:estimate2}\n&\\dfrac{1}{\\tau} \\|\\Phi^{n+1} - \\Phi^{n}\\|_{h, L^2}^2 \\nonumber\\\\\n= &\\ - \\dfrac{\\kappa_h}{\\epsilon}\\|\\Phi^{n+1} - \\Phi^{n}\\|_{h,L^2}^2 +\\underbrace{\\epsilon \\langle \\Delta_h\\Phi^{n+1}, \\Phi^{n+1} - \\Phi^{n} \\rangle_h }_{\\text{I}} \\underbrace{ - \\epsilon^{-1} \\langle W'(\\Phi^{n}), \\Phi^{n+1} - \\Phi^{n}\\rangle_h }_{\\text{II}} \\nonumber\\\\\n& \\underbrace{ -\\gamma \\left\\langle (-\\Delta_h)^{-1}(f(\\Phi^n)-\\omega)f'(\\Phi^n), \\Phi^{n+1}-\\Phi^{n} \\right\\rangle_h }_{\\text{III}} \\underbrace{ -M \\langle f(\\Phi^n)-\\omega, 1\\rangle_h\\text{d}\\textbf{x} \\left\\langle f'(\\Phi^n), \\Phi^{n+1}-\\Phi^{n} \\right\\rangle_h }_{\\text{IV}}.\n\\end{align}\nUsing the identity $a\\cdot (a-b) = \\frac{1}{2}|a|^2 - \\frac{1}{2}|b|^2 + \\frac{1}{2}|a-b|^2$ and $b\\cdot (a-b) = \\frac{1}{2}|a|^2 - \\frac{1}{2}|b|^2 - \\frac{1}{2}|a-b|^2$, we have:\n\\begin{align*}\n\\text{I} =&\\ \\frac{\\epsilon}{2} \\Big( \\Big\\langle \\Delta_h\\Phi^{n+1}, \\Phi^{n+1} \\Big\\rangle_h - \\Big\\langle \\Delta_h\\Phi^{n}, \\Phi^{n} \\Big\\rangle_h + \\Big\\langle \\Delta_h(\\Phi^{n+1}-\\Phi^{n}), \\Phi^{n+1}-\\Phi^n \\Big\\rangle_h \\Big); \\\\\n\\text{II} =& -\\epsilon^{-1}\\left\\langle 1, W(\\Phi^{n+1}) \\right\\rangle_h + \\epsilon^{-1} \\left\\langle 1, W(\\Phi^{n}) \\right\\rangle_h + (2\\epsilon)^{-1} W''(\\xi^n) \\|\\Phi^{n+1}-\\Phi^n\\|_{h, L^2}^2 ; \\\\\n \\text{III} =&\\ -\\gamma \\Big\\langle (-\\Delta_h)^{-1}(f(\\Phi^{n})-\\omega), f(\\Phi^{n+1}) - f(\\Phi^{n})\\Big\\rangle_h + \\dfrac{\\gamma}{2} \\Big\\langle (-\\Delta_h)^{-1}(f(\\Phi^{n})-\\omega), f''(\\eta^n)(\\Phi^{n+1} - \\Phi^{n})^2\\Big\\rangle_h \\\\\n \\quad=&\\ -\\dfrac{\\gamma}{2} \\Big( \\Big\\langle (-\\Delta_h)^{-1}(f(\\Phi^{n+1})-\\omega), f(\\Phi^{n+1})-\\omega \\Big\\rangle_{h} - \\Big\\langle (-\\Delta_h)^{-1}(f(\\Phi^{n})-\\omega), f(\\Phi^{n})-\\omega \\Big\\rangle_{h} \\\\\n& \\hspace{0.4in} - \\Big\\langle (-\\Delta_h)^{-1}(f(\\Phi^{n+1})-f(\\Phi^{n})),f(\\Phi^{n+1})-f(\\Phi^{n}) \\Big\\rangle_{h} \\Big) \\\\\n& \\ + \\dfrac{\\gamma}{2} \\Big\\langle (-\\Delta_h)^{-1}(f(\\Phi^{n})-\\omega), f''(\\eta^n)(\\Phi^{n+1} - \\Phi^{n})^2\\Big\\rangle_h ;\\\\\n \\text{IV} = & -\\dfrac{M}{2}\\Big( \\left(\\langle f(\\Phi^{n+1}) - \\omega, 1 \\rangle_h \\text{d}\\textbf{x} \\right)^2 -\n \\left(\\langle f(\\Phi^{n}) - \\omega, 1 \\rangle_h \\text{d}\\textbf{x} \\right)^2 -\n \\left(\\langle f(\\Phi^{n+1}) - f(\\Phi^{n}), 1 \\rangle_h \\text{d}\\textbf{x} \\right)^2\n \\Big)& \\\\\n& + \\dfrac{M}{2} \\langle f(\\Phi^{n}) - \\omega, 1 \\rangle_h \\text{d}\\textbf{x} f''(\\eta^n) \\|\\Phi^{n+1}-\\Phi^n\\|_{h,L^2}^2,\n\\end{align*}\nwhere $\\xi^n$ and $\\eta^n$ are between $\\Phi^n$ and $\\Phi^{n+1}$, owing to the smoothness of $f$ and $W$ up to the second order derivative. Note that the condition (\\ref{eqn:discreteES_condition}) implies (\\ref{eqn:MPP_condition2}), so by Theorem \\ref{theorem:MPP_fully} one has $\\Phi^n,\\Phi^{n+1} \\in [0,1]$ and therefore $\\xi^n, \\eta^n \\in (0,1)$. Finally, inserting the equalities for I--IV back into (\\ref{eqn:estimate2}) and noting that $|f'|\\le L_{f'}$ and $|f''|\\le L_{f''}$ on $[0,1]$, the condition (\\ref{eqn:discreteES_condition}) guarantees that all the remainder terms are controlled by the stabilizing term $-\\frac{\\kappa_h}{\\epsilon}\\|\\Phi^{n+1}-\\Phi^n\\|_{h,L^2}^2$, and therefore $E_h^{\\text{pOK}}[\\Phi^{n+1}] \\le E_h^{\\text{pOK}}[\\Phi^{n}]$, which completes the proof.\n\\end{proof}\n\nWe close this section with an error estimate for the fully-discrete scheme (\\ref{eqn:pACOK_FullDiscrete}).\n\n\\begin{theorem}\nLet $T>0$ and an integer $N>0$ be such that $\\tau = T\/N$ and $t_n = n\\tau$ for $n=0,1,\\cdots,N$. Assume the initial value $\\phi_0$ is smooth, periodic and bounded $0\\le \\phi_0 \\le 1$, and the exact solution $\\phi(x,t)$ is sufficiently smooth. 
Let the stabilization constant $\\kappa_h$ satisfy the condition (\\ref{eqn:discreteES_condition}). We denote by $\\{\\Phi^n\\}_{n=1}^N = \\{(\\Phi_{ij}^n)\\}_{n=1}^N$ the approximate solution calculated by the scheme (\\ref{eqn:pACOK_FullDiscrete}) with $\\Phi^0 = \\phi_0|_{\\mathbb{T}^2}$. If the step size $\\tau$ is sufficiently small, we have\n\\begin{align}\n\\|\\phi(t_n) - \\Phi^{n}\\|_{h,L^2} \\le C (\\tau + h_1^2 + h_2^2), \\quad n \\in [\\![ N]\\!],\n\\end{align}\nwhere $C>0$ is some generic constant which depends on $\\phi, T, \\kappa_h, \\epsilon, \\gamma, M, |\\mathbb{T}^2|$ but is independent of $\\tau, h_1, h_2$.\n\\end{theorem}\n\n\n\n\n\n\n\n\n\n\n\\section{MPP schemes for a general Allen-Cahn type model}\\label{section:GeneralFramework}\n\nOur study on MPP can be extended into a more general setting. Consider a general Allen-Cahn type dynamics:\n\\begin{align}\\label{eqn:generalAC}\n\\dfrac{\\partial}{\\partial t} \\phi = \\epsilon\\Delta\\phi - \\dfrac{1}{\\epsilon}W'(\\phi) - \\mathcal{L}(f(\\phi)-\\omega)f'(\\phi) - M\\int_{\\mathbb{T}^d} (f(\\phi)-\\omega)\\ \\text{d}x\\cdot f'(\\phi),\n\\end{align}\nwhere $\\mathcal{L}$ is a positive semi-definite linear operator from $L^{\\infty}(\\mathbb{T}^d)$ to $L^{\\infty}(\\mathbb{T}^d)$ with the norm denoted by $\\|\\mathcal{L}\\|$. The last term on the right hand size counts for a possible volume constraint (\\ref{eqn:Volume}) when necessary. For instance, if $\\mathcal{L} = (-\\Delta)^{-1}$, then the volume constraint (\\ref{eqn:Volume}) is necessary, and it recovers the pACOK dynamics (\\ref{eqn:pACOK}). If $\\mathcal{L} = (I - \\gamma^2\\Delta)^{-1}$, then the volume constraint is unnecessary, and we set $M = 0$. This general dynamics can be viewed as the $L^2$ gradient flow dynamics associated to the free energy functional\n\\begin{align}\\label{eqn:generalframework}\nE^{\\text{ge}}[\\phi] = \\int_{\\mathbb{T}^d} \\frac{\\epsilon}{2}|\\nabla\\phi|^2 + \\frac{1}{\\epsilon}W(\\phi)\\ \\text{d}x + \\int_{\\mathbb{T}^d} |\\mathcal{L}^{\\frac{1}{2}}(f(\\phi)-\\omega)|^2\\ \\text{d}x + \\frac{M}{2}\\Big( \\int_{\\mathbb{T}^d} f(\\phi)-\\omega\\ \\text{d}x \\Big)^2,\n\\end{align}\nwhere depending on the different form of the operator $\\mathcal{L}$, we might or might not need the volume constraint.\n\nHere are some examples which fit into the general framework described above.\n\\begin{itemize}\n\\item In the micromagnetic model for garnet films \\cite{CondetteMelcherSuli_MathComp2010} with $d = 2$, the operator $\\mathcal{L}$ is being characterized by its eigenvalues $\\lambda(k) = \\frac{1-\\exp(-\\delta|k|)}{\\delta|k|}$. Here $\\delta>0$ corresponds to the relative film thickness. There is no volume constraint in this model.\n\n\\item A nonlocal geometric variational problem studied by \\cite{RenTruskinovsky_Elasticity2000} takes $\\mathcal{L} = (I - \\gamma^2\\Delta)^{-1}$ with no volume constraint. This problem can lead to the FitzHugh-Nagumo system \\cite{ChenChoiHuRen_SIMA2018}.\n\n\\item We can consider a positive semi-define linear operator $\\mathcal{L}$ which is the inverse of the following nonlocal operator $\\mathcal{K}: L^2(\\mathbb{T}^d) \\rightarrow L^2(\\mathbb{T}^d)$\n\\begin{align*}\n\\mathcal{K}: v(x) \\longmapsto \\int_{\\mathbb{T}^d} K(x-y) (v(x) - v(y))\\ \\text{d}y,\n\\end{align*}\nin which the kernel $K$ is nonnegative, radial, $\\mathbb{T}^d$-periodic with bounded second moment \\cite{Du_Book2020}. 
This can be viewed as a nonlocal OK model for the diblock copolymer system.\n\n\\item One example that cannot fit into the general framework (\\ref{eqn:generalframework}) but still satisfies the MPP property is the phase field variational implicit solvation model (pVISM), in which the free energy is formulated as \\cite{Zhao_2018CMS}:\n\\[\nE^{\\text{pVISM}}[\\phi] = \\int_{\\mathbb{T}^d} \\frac{\\epsilon}{2}|\\nabla\\phi|^2 + \\frac{1}{\\epsilon}W(\\phi)\\ \\text{d}x + \\int_{\\mathbb{T}^d} f(\\phi(x)) U(x;X)\\ \\text{d}x.\n\\]\nHere $X = (x_1,\\cdots, x_m)$ are the locations of the $m$ solute atoms, and $U$ is the potential between the solute atoms $X$ and solvent molecules $x$ (for instance, water). The phase field $\\phi$ labels the solvent so that the interaction with $U$ is integrated only over the solvent region. The potential $U$ in pVISM typically consists of two parts, the solute-solvent van der Waals interaction and the electrostatic interaction. Additionally, the potential $U$ is cut off as a constant near the solute atoms $X$ so that it remains bounded. See \\cite{Zhao_2018CMS} and the references therein for the detailed discussion. In the next section, we will use pVISM as an example to show that choosing $f(\\phi) = 3\\phi^2 - 2\\phi^3$ preserves the MPP while $f(\\phi) = \\phi$ violates it. See Subsection \\ref{subsection:pVISM} for the details.\n\\end{itemize}\n\nIn this general setting, the $L^2$ gradient flow dynamics always holds the MPP, as stated in the following theorem.\n\\begin{theorem}\\label{theorem:generalMPP_continuous}\nThe general $L^2$ gradient flow dynamics (\\ref{eqn:generalAC}) is maximum principle preserving, namely, if $0\\le \\phi_0 \\le 1$, then $0\\le \\phi(t) \\le 1$ for any $t>0$, provided that\n\\begin{align}\\label{eqn_MPPcondition_continuous_generalsetting}\n\\frac{\\epsilon\\tilde{\\omega}}{6}\\Big[ \\|\\mathcal{L}\\| + \\tilde{M} |\\mathbb{T}^d| \\Big] \\le 1,\n\\end{align}\nwhere $\\tilde{M} = M$ if there is a volume constraint (\\ref{eqn:Volume}), and $\\tilde{M} = 0$ if there is no volume constraint.\n\\end{theorem}\nThe proof is identical to that of Theorem \\ref{theorem:MPP_continuous} by replacing $\\gamma(-\\Delta)^{-1}$ with $\\mathcal{L}$, so we omit it.\n\n\n\n\\section{Numerical simulations}\n\n\nIn this section, some numerical examples will be presented to validate the proposed schemes. Moreover, some interesting patterns arising from the OK model will be shown. To begin with, let us briefly explain how to implement the numerical scheme (\\ref{eqn:pACOK_FullDiscreteII}). The implementation is as follows (a code sketch of one such time step is given below):\n\n\\begin{enumerate}\n\\item At the $n$-th step, take the Discrete Fourier Transform (DFT) of the right hand side of (\\ref{eqn:pACOK_FullDiscreteII}) to obtain $\\widehat{RHS}_{jk}$;\n\\item Calculate the DFT of $\\Phi^{n+1}$ as $\\hat{\\Phi}^{n+1}_{jk} = \\frac{\\widehat{RHS}_{jk}}{\\left(1+\\frac{\\tau\\kappa}{\\epsilon}\\right) + \\tau\\epsilon\\left[\\frac{4}{h_1^2}\\sin^2\\left(\\frac{j\\pi h_1}{2X_1}\\right) + \\frac{4}{h_2^2}\\sin^2\\left(\\frac{k\\pi h_2}{2X_2}\\right)\\right] }$;\n\\item Take the inverse DFT of $\\hat{\\Phi}^{n+1}_{jk}$ to obtain $\\Phi^{n+1}$ and move to the $(n+1)$-th step.\n\\end{enumerate}\n\nNow we solve the pACOK equation using the scheme (\\ref{eqn:pACOK_FullDiscreteII}) coupled with periodic boundary conditions. In this section, we fix $\\mathbb{T}^2 = [-1,1)^2 \\subset \\mathbb{R}^2$ and $N = N_1 = N_2 = 256$ unless stated otherwise. 
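To make the three steps above concrete, the following is a minimal NumPy sketch of one time step of the scheme (\\ref{eqn:pACOK_FullDiscreteII}). It is only an illustration under our own naming conventions (the function \\texttt{pacok\\_step} and its arguments are not notation from this paper); $(-\\Delta_h)^{-1}$ is applied to the mean-zero part of $f(\\Phi^n)-\\omega$, consistent with its definition on $\\mathring{\\mathcal{M}}_h$, and the volume term uses the discrete integral $\\langle f(\\Phi^n)-\\omega, 1\\rangle_h$ as in the time-discrete scheme (\\ref{eqn:pACOK_SemiImplicit}).\n\\begin{verbatim}\nimport numpy as np\n\ndef W_prime(phi):\n    return 36.0*(phi**2 - phi)*(2.0*phi - 1.0)\n\ndef f(phi):\n    return 3.0*phi**2 - 2.0*phi**3\n\ndef f_prime(phi):\n    return 6.0*(phi - phi**2)\n\ndef pacok_step(phi, tau, eps, kappa, gamma, M, omega, h1, h2):\n    # one step of the stabilized semi-implicit scheme, solved in Fourier space\n    N1, N2 = phi.shape\n    dx = h1*h2\n    # eigenvalues of -Delta_h: (4/h^2)*sin^2(pi*k/N) in each direction\n    lam1 = (4.0/h1**2)*np.sin(np.pi*np.arange(N1)/N1)**2\n    lam2 = (4.0/h2**2)*np.sin(np.pi*np.arange(N2)/N2)**2\n    lam = lam1[:, None] + lam2[None, :]\n    # (-Delta_h)^{-1} applied to the mean-zero part of f(phi) - omega\n    g = f(phi) - omega\n    ghat = np.fft.fft2(g)\n    ghat[0, 0] = 0.0\n    lam_safe = lam.copy()\n    lam_safe[0, 0] = 1.0\n    nonlocal_term = np.real(np.fft.ifft2(ghat/lam_safe))\n    # fully explicit right hand side of the scheme\n    rhs = ((1.0 + tau*kappa/eps)*phi\n           - (tau/eps)*W_prime(phi)\n           - tau*gamma*nonlocal_term*f_prime(phi)\n           - tau*M*np.sum(g)*dx*f_prime(phi))\n    # steps 1-3: DFT of the RHS, division by the symbol of the implicit\n    # operator, and inverse DFT to obtain the updated solution\n    denom = (1.0 + tau*kappa/eps) + tau*eps*lam\n    return np.real(np.fft.ifft2(np.fft.fft2(rhs)/denom))\n\\end{verbatim}\nEach call advances the solution by one step; iterating it and monitoring $\\|\\Phi^{n+1}-\\Phi^{n}\\|_{h,L^{\\infty}}\/\\tau$ reproduces the stopping criterion used below.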
We set the stopping criteria for the time iteration by:\n\\begin{align}\n\\dfrac{\\|\\Phi^{n+1}-\\Phi^{n}\\|_{h,L^{\\infty}}}{\\tau} \\le \\text{TOL} = 10^{-3}.\n\\end{align}\nThe penalty constant is taken to be sufficiently large $M \\gg 1$. We take a sufficiently large value of $\\kappa_h>0$ to fulfill the energy stability condition (\\ref{eqn:discreteES_condition}) (and therefore fulfill the MPP condition (\\ref{eqn:MPP_condition2})), say $\\kappa_h = 2000$. Other parameters such as $\\epsilon, \\gamma, \\tau, \\omega$ might vary for different simulations.\n\n\\subsection{Rate of convergence}\n\nWe first of all test the convergence rates and the spatial accuracy of the scheme (\\ref{eqn:pACOK_FullDiscreteII}). For this numerical experiment, we fix $\\omega = 0.1$, and take a round disk as the initial data $\\Phi^0 = 0.5 + 0.5\\tanh(\\frac{r_0-r}{\\epsilon\/3})$ with $r_0 = \\sqrt{\\omega|\\mathbb{T}^2|\/\\pi}+0.1$. The simulation is performed until $T = 0.02$. For the rate of convergence, we take the solution generated by the scheme (\\ref{eqn:pACOK_FullDiscreteII}) with $\\tau = 10^{-6}$ and $N=2^8$ (consequently $h = h_1 = h_2 = \\frac{2X_1}{N}=\\frac{1}{128}$) as the benchmark solution. Then we take several values of step size larger than $\\tau = 10^{-6}$, each is the half of the previous one, and compute the discrete $L^2$ error between the numerical solutions with larger step sizes and the benchmark one. Table \\ref{table:convergence_rates} presents the errors and the convergence rates based on the data at $T = 0.02$ for the scheme (\\ref{eqn:pACOK_FullDiscreteII}) with time step sizes being halved from $\\tau = 10^{-4}$ to $10^{-4}\/16$. We test the convergence rates for three different values of $\\epsilon = 5h, 10h $ and $20h$. $\\gamma = 100$ is fixed. We can see from the table that the numerically computed convergence rates all tend to approach the theoretical value 1.\n\n\\label{table:convergence_rates}\n\\begin{table}[H]\n\\begin{center}\n\\begin{tabular}{ccccccc}\n\n & \\multicolumn{2}{c}{$\\epsilon = 5h$} &\\multicolumn{2}{c}{$\\epsilon = 10h$} &\\multicolumn{2}{c}{$\\epsilon = 20h$} \\\\ \\hline\n$\\tau $ & Error & Rate & Error & Rate & Error & Rate \\\\ \\hline\n1e-4\/$2^0$ & 1.936e-1 & --- & 1.555e-1 & --- & 5.902e-2 & --- \\\\\n\n1e-4\/$2^1$ & 1.542e-1& 0.33 & 9.465e-2 & 0.72 & 2.376e-2 & 1.31\n\t \t \\\\\n1e-4\/$2^2$ & 1.076e-1 & 0.52 & 4.858e-2& 0.96 & 9.233e-3 & 1.36\n\t \t \\\\\n1e-4\/$2^3$ & 6.423e-2 & 0.74 & 2.247e-2& 1.11 & 3.752e-3 & 1.29\n\t\t \\\\\n1e-4\/$2^4$ & 3.270e-2 & 0.97 & 9.787e-3 & 1.20 & 1.556e-3 & 1.27\n\t \t \\\\ \t\n1e-6 (BM) & --- & ---& --- & ---& --- & ---\\\\ \\hline\n\\end{tabular}\n\\end{center}\n\\caption{The errors and the corresponding convergence rates at time $T=0.02$ by the scheme (\\ref{eqn:pACOK_FullDiscreteII}) for different values of $\\epsilon$. In this simulation, $\\omega = 0.1, \\gamma = 100, M = 1000, \\kappa_h = 2000, N = 256$.}\n\\end{table}\n\n\n\n\\subsection{Comparison between $f(\\phi) = 3\\phi^2 - 2\\phi^3$ and $f(\\phi) = \\phi$ regarding to MPP} \\label{subsection:pVISM}\n\nIn this section, we show an example to see the effect of $f(\\phi) = 3\\phi^2 - 2\\phi^3$ on MPP. We consider the 1D pVISM system with $f(\\phi) = 3\\phi^2 - 2\\phi^3$, which holds the MPP theoretically, and with $f(\\phi) = \\phi$, the traditional choice which might lose the MPP. The parameters are taken as $L_x = 5, N = 1024, \\epsilon = 50h, \\kappa_h = 2000$. The solute atom $X = {(0)}$ (i.e. 
single solute atom system), and the potential function $U(x;X)$ reads:\n\\[\nU(x;X) = \\rho_{\\text{w}}\\cdot 4\\epsilon_{\\text{LJ}}\\bigg[\\Big(\\frac{\\sigma_0}{x_{\\text{cut}}}\\Big)^{12} - \\Big(\\frac{\\sigma_0}{x_{\\text{cut}}}\\Big)^{6}\\bigg] + \\frac{Q^2}{8\\pi\\epsilon_0}\\Big( \\frac{1}{\\epsilon_{\\text{w}}} - \\frac{1}{\\epsilon_{\\text{m}}}\\Big)\\frac{1}{x_{\\text{cut}}^2}.\n\\]\nHere $\\rho_{\\text{w}} = 0.0333 \\mathring{\\text{A}}^{-3}$ is the constant solvent (water) density, $\\epsilon_{\\text{LJ}} = 0.3 k_{\\text{B}}T$ is the depth of the Lennard-Jones potential well associated with the solute atom, $\\sigma_0 = 3.5 \\mathring{\\text{A}}$ is the finite distance at which the Lennard-Jones potential of the solute atom is zero, $x_{\\text{cut}} = \\max\\{|x|,2.5\\}$ is the cutoff distance of $x$ from the solute atom, $Q = 1e$ is the partial charge of the solute atom, $\\epsilon_0 = 1.4321\\times 10^{-4} e^2\/(k_{\\text{B}}T\\mathring{\\text{A}})$ is the vacuum permittivity, $\\epsilon_{\\text{m}} = 1$ is the relative permittivity of the solute, and $\\epsilon_{\\text{w}} = 80$ is the relative permittivity of the solvent. See \\cite{Zhao_2018CMS} for the model details.\n\nFigure \\ref{fig:pVISM} depicts the numerical equilibria obtained by taking $f(\\phi) = 3\\phi^2 - 2\\phi^3$ and $f(\\phi) = \\phi$ in the pVISM system. One can see that for the model with $f(\\phi) = 3\\phi^2 - 2\\phi^3$, the numerical equilibrium remains bounded between 0 and 1, in agreement with the theoretical prediction. On the contrary, if $f(\\phi) = \\phi$, the numerical equilibrium becomes smaller than 0 inside the interface, and greater than 1 outside the interface. Of course, the violation of the MPP can be mitigated by letting $\\epsilon \\rightarrow 0$, in view of the $\\Gamma$-convergence theory \\cite{LiZhao_SIAM2013}. However, in real applications, especially in 3D simulations, $\\epsilon$ has to remain relatively large to reduce the computational cost. Therefore, the choice of $f(\\phi) = 3\\phi^2 - 2\\phi^3$ is advantageous in keeping the hyperbolic tangent profile of $\\phi$, bounding $0\\le \\phi \\le 1$, and localizing the forces only near the interfaces even for a relatively large $\\epsilon$.\n\n\\begin{figure}[!htbp]\n\\centerline{\n\\includegraphics[width=150mm]{Figures\/pVISM.eps}\n }\n\\caption{Numerical comparison between the model with $f(\\phi) = 3\\phi^2 - 2\\phi^3$ and the one with $f(\\phi) = \\phi$ for the pVISM system. The model with $f(\\phi) = 3\\phi^2 - 2\\phi^3$ holds the MPP, while the MPP is violated when taking $f(\\phi) = \\phi$.}\n\\label{fig:pVISM}\n\\end{figure}\n\n\n\n\\subsection{1D coarsening dynamics and MPP}\n\n\n\nIn this section, we verify the MPP and energy stability for the numerical scheme (\\ref{eqn:pACOK_FullDiscreteII}) in the 1D case. We take a piecewise constant function as the initial data, with the constant values generated randomly between 0 and 0.8. In the simulation, the parameter values are $T = 1000, \\omega = 0.3, \\gamma = 500, M = 2000, \\tau = 10^{-3}, \\kappa = 2000$. Figure \\ref{fig:1dcoarsening_gamma500} shows the coarsening dynamics in which the system undergoes phase separation from the random initial data, then bumps appear from coarsening, evolve into the same size, and finally become equally spaced. 
The light blue curve (values labeled on the left $y$-axis) records the discrete $L^{\\infty}$ norm of $2\\Phi^n-1$ (note that $|2\\Phi^n-1|\\le 1$ is equivalent to $0\\le \\Phi^n \\le 1$), which clearly implies the boundedness of $\\Phi^n$ between 0 and 1. The red curve (values labeled on the right $y$-axis) represents the discrete energy $E_h^{\\text{pOK}}[\\Phi^n]$ in (\\ref{eqn:discreteEnergy}), which is monotonically decreasing. Indicated by different colors, the four insets correspond to the four snapshots at $t = 0, 10, 500, 1000$ of the coarsening dynamics.\n\nNow we keep all parameter values as in Figure \\ref{fig:1dcoarsening_gamma500} but increase $\\gamma$ to 2000. As $\\gamma$ represents the strength of the long-range repulsive interaction, we expect that a larger $\\gamma$ generates more bumps. This is verified by Figure \\ref{fig:1dcoarsening_gamma2000}, in which the system still starts from a randomly generated initial datum but ends up with six equally-sized, equally-spaced bumps. Meanwhile, the MPP and energy stability still hold as expected.\n\n\n\n\\begin{figure}[!htbp]\n\\centerline{\n\\includegraphics[width=150mm]{Figures\/OK_1d_kappa500.eps}\n }\n\\caption{A 1D coarsening dynamics process with a small repulsive strength $\\gamma$. In this simulation, the parameter values are $T = 1000, \\omega = 0.3, \\gamma = 500, M = 2000, \\tau = 10^{-3}, \\kappa = 2000$. The light blue curve records the discrete $L^{\\infty}$ norm of $2\\Phi^n-1$, which shows that $\\Phi^n$ is clearly bounded between 0 and 1. The red curve represents the discrete energy $E_h^{\\text{pOK}}[\\Phi^n]$ in (\\ref{eqn:discreteEnergy}), which is monotonically decreasing. The four insets are snapshots at different times.}\n\\label{fig:1dcoarsening_gamma500}\n\\end{figure}\n\n\n\\begin{figure}[!htbp]\n\\centerline{\n\\includegraphics[width=150mm]{Figures\/OK_1d_kappa2000.eps}\n }\n\\caption{A 1D coarsening dynamics process with a large repulsive strength $\\gamma$. In this simulation, the parameter values are $T = 1000, \\omega = 0.3, \\gamma = 2000, M = 2000, \\tau = 10^{-3}, \\kappa = 2000$.}\n\\label{fig:1dcoarsening_gamma2000}\n\\end{figure}\n\n\n\n\n\\subsection{2D coarsening dynamics and MPP}\n\nIn this section, we solve the equation (\\ref{eqn:pACOK_FullDiscreteII}) in 2D and explore the corresponding discrete MPP and discrete energy stability. We take a $256\\times 256$ mesh grid and $T = 100, \\omega = 0.15, \\gamma = 1000, M = 10^{4}, \\tau = 2\\cdot 10^{-4}, \\kappa = 2000$. Similarly to the 1D case, a 2D random initial datum is generated on a coarse grid. The coarsening dynamics is presented in Figure \\ref{fig:2dcoarsening_gamma1000}, in which the random initial datum is phase separated within a very short time period, resulting in a group of bubbles with different sizes; then the tiny bubbles disappear, the other bubbles evolve into equal size, and eventually all the equally-sized bubbles become equally spaced, forming a hexagonal pattern in the 2D domain $\\mathbb{T}^2$. Just like the 1D case, we see that the 2D coarsening dynamics also enjoys the MPP property and energy stability in the discrete sense, as the theory predicts in the previous sections. 
The insets are snapshots taken at $t = 0, 1, 10, 100$, each of which has a colored title indicating the corresponding colored marker on the two curves.\n\nWhen the value of $\\gamma$ becomes larger, say $\\gamma = 2000$, with the other parameter values fixed, the stronger long-range repulsive interaction between bubbles leads to more bubbles of equal size and equal spacing. This result is depicted in Figure \\ref{fig:2dcoarsening_gamma2000}, in which the MPP and energy stability still hold.\n\n\\begin{figure}[!htbp]\n\\centerline{\n\\includegraphics[width=150mm]{Figures\/OK_2d_kappa1000.eps}\n }\n\\caption{A 2D coarsening dynamics process with a small repulsive strength $\\gamma$. In this simulation, the parameter values are $T = 100, \\omega = 0.15, \\gamma = 1000, M = 10^4, \\tau = 2\\cdot10^{-4}, \\kappa = 2000$. The light blue curve is the discrete $L^{\\infty}$ norm of $2\\Phi^n-1$, which implies the bound of $\\Phi^n$ between 0 and 1, while the red curve indicates the monotonic decay of the discrete energy $E_h^{\\text{pOK}}$. The four insets are snapshots at different times. }\n\\label{fig:2dcoarsening_gamma1000}\n\\end{figure}\n\n\\begin{figure}[!htbp]\n\\centerline{\n\\includegraphics[width=150mm]{Figures\/OK_2d_kappa2000.eps}\n }\n\\caption{A 2D coarsening dynamics process with a large repulsive strength $\\gamma = 2000$. Other parameters are the same as those for Figure \\ref{fig:2dcoarsening_gamma1000}. A larger $\\gamma$ leads to more bubbles forming a hexagonal pattern.}\n\\label{fig:2dcoarsening_gamma2000}\n\\end{figure}\n\n\n\n\n\\section{Summary}\n\nIn this paper, we explore the MPP property for the pACOK equation and propose a first order stabilized linear semi-implicit scheme which inherits the MPP and the energy stability at the discrete level. The third order polynomial $f(\\phi) = 3\\phi^2 - 2\\phi^3$ plays a key role in the proof of the MPP for the system. We prove the MPP and energy stability at the semi-discrete and fully-discrete levels, in which the nonlinear terms $W$ and $f$ need not be extended to have bounded second order derivatives.\n\nIn the numerical experiments, we test the rate of convergence for the proposed scheme. We also show that in some examples, the traditional choice of $f(\\phi) = \\phi$ could violate the MPP. When $\\omega \\ll 1$, the pACOK dynamics displays patterns of hexagonal bubble assemblies. When the repulsive long-range interaction becomes stronger, more bubbles appear in the hexagonal equilibria.\n\nThis work can be extended along several directions. Firstly, we can study higher order MPP schemes for the pACOK equation, or more generally for binary systems with long-range interactions. Secondly, we can further consider the MPP scheme for ternary systems, or a more general system of $N+1$ constituents in which $N$ phase field functions $\\{\\phi_j\\}_{j=1}^N$ are introduced to represent the densities of the $N$ constituents, and the $(N+1)$-th one is implicitly represented by $1- \\sum_{j=1}^N \\phi_j$.\n\nIn this paper, we mainly explore the numerical scheme for $L^2$ gradient flow dynamics based on the operator splitting technique. 
Some other numerical methods, such as exponential time differencing based schemes, could be alternative choices for MPP-preserving schemes; these will also be considered in the future.\n\n\n\\section{Appendix}\\label{s:Appendix}\n\nIn the appendix, we briefly discuss the wellposedness of the pACOK dynamics (\\ref{eqn:pACOK}) and the $L^{\\infty}$ bound for the solution of (\\ref{eqn:pACOK}).\n\n\\begin{definition}\nLet $d = 2$ or 3. We call $\\phi(t,x)$ a global weak solution to problem (\\ref{eqn:pACOK}) if for any $T>0$, $\\phi(t,x)$ satisfies\n\\[\n\\phi \\in C([0,T]; L^p(\\mathbb{T}^d)) \\cap L^{\\infty}(0,T; H^1(\\mathbb{T}^d)) \\cap L^{2}(0,T;H^2(\\mathbb{T}^d)), \\quad p \\in [2,6)\n\\]\nand the initial condition $\\phi(0,x) = \\phi_0(x)$. Further, for any $t\\in(0,T]$ and any test function $w\\in L^2(\\mathbb{T}^d)$, it holds that\n\\begin{align*}\n&\\frac{\\text{d}}{\\text{d}t} \\int_{\\mathbb{T}^d} \\phi(t,x) w(x) \\ \\text{d}x \\\\\n= & \\int_{\\mathbb{T}^d} \\bigg[\\epsilon\\Delta\\phi - \\dfrac{1}{\\epsilon}W'(\\phi) - \\gamma(-\\Delta)^{-1}(f(\\phi)-\\omega)f'(\\phi) - M\\int_{\\mathbb{T}^d} (f(\\phi)-\\omega)\\ \\text{d}x\\cdot f'(\\phi) \\bigg] w(x) \\text{d}x\n\\end{align*}\nin the distributional sense in $(0,T)$.\n\\end{definition}\n\nWith the definition of the weak solution for the problem (\\ref{eqn:pACOK}), we are now ready to state the theorem for its wellposedness.\n\n\\begin{theorem}\\label{theorem:wellposedness}\nLet $d = 2$ or 3, and let the initial data satisfy $\\phi_0\\in H^1(\\mathbb{T}^d)$. Then there exists a unique global weak solution $\\phi$ to the problem (\\ref{eqn:pACOK}). Further, the free energy $E^{\\emph{pOK}}$ in (\\ref{functional:pOK}) decreases as time evolves.\n\\end{theorem}\nThe proof follows the standard procedure of De Giorgi's minimizing movement scheme \\cite{Ambrosio_Rend1995,DeGiorgi_RMA1993}. We have a preprint discussing the wellposedness of a more complicated ternary system with long-range interaction, for which the proof of Theorem \\ref{theorem:wellposedness} can be viewed as a straightforward application. Therefore we omit the proof here and refer the reader to \\cite{JooXuZhao_Preprint2020} for the details.\n\nOur next result concerns the $L^{\\infty}$ bound for the weak solution $\\phi(t,x)$ of the problem (\\ref{eqn:pACOK}), which can be achieved by De Giorgi's iteration \\cite{Chen_Book2003,WuYinWang_Book2006}. Note that in this case, the result holds only for $d = 2$. To begin with, we need an algebraic lemma. To avoid confusion, we point out that the notations $M, h, k, d, \\alpha, \\beta$ used below are exclusive to Lemma \\ref{lemma-algebra} and may not have the same meaning as elsewhere.\n\n\n\\begin{lemma}\\label{lemma-algebra}\nLet $\\mu(t)$ be a nonnegative, non-increasing function on $[k_0, +\\infty)$ that satisfies\n\\begin{equation}\\label{growth-condition}\n\\mu(h)\\leq\\Big(\\frac{M}{h-k}\\Big)^{\\alpha}\\mu(k)^{\\beta}, \\quad\\forall h>k\\geq k_0,\n\\end{equation}\nwhere $M>0$, $\\alpha>0$, $\\beta>1$ are all constants. 
Then we can find a constant $d>0$ such that\n$$\n \\mu(h)=0, \\quad\\forall h\\geq k_0+d.\n$$\n\\end{lemma}\n\\begin{proof}\nTo begin with, let\n\\[\nk_s=k_0+d-\\frac{d}{2^s}, \\quad\\forall s\\in\\mathbb{Z}^+,\n\\]\nwhere $d$ is defined to be\n\\begin{align}\\label{definition-d}\nd=M2^{\\frac{\\beta}{\\beta-1}}\\mu(k_0)^{\\frac{\\beta-1}{\\alpha}}.\n\\end{align}\nTake $h = k_{s+1}$ and $k = k_s$ in (\\ref{growth-condition}), it yields the recursive relation\n\\begin{equation}\\label{recursive}\n\\mu(k_{s+1})\\leq\\frac{M^{\\alpha}2^{(s+1)\\alpha}}{d^{\\alpha}}\\mu(k_s)^{\\beta}, \\quad\\forall s\\in\\mathbb{Z}^+.\n\\end{equation}\n\nNext we claim that\n\\begin{equation}\\label{claim}\n\\mu(k_s)\\leq\\frac{\\mu(k_0)}{r^s}, \\quad\\forall s\\in\\mathbb{Z}^+,\n\\end{equation}\nwhere $r>0$ is a constant defined as\n\\begin{equation}\\label{definition-r}\nr=2^{\\frac{\\alpha}{\\beta-1}}>1.\n\\end{equation}\nOnce \\eqref{claim} is verified, the proof of Lemma \\ref{lemma-algebra} is done by simply passing $s\\rightarrow \\infty$\nand using the assumption that $\\mu$ is non-increasing.\n\nWe finally prove (\\ref{claim}) by induction. Suppose (\\ref{claim}) is valid for $s$, then we obtain from (\\ref{definition-d}), (\\ref{recursive}) and (\\ref{definition-r}) that\n\\begin{align}\n\\mu(k_{s+1})\\leq\\frac{M^{\\alpha}2^{(s+1)\\alpha}}{d^{\\alpha}}\\frac{\\mu(k_0)^\\beta}{r^{\\beta{s}}}\n = \\frac{M^{\\alpha}2^{(s+1)\\alpha}}{M^{\\alpha}r^{\\beta}\\mu(k_0)^{\\beta-1}}\\frac{\\mu(k_0)^\\beta}{r^{\\beta{s}}}\n = \\dfrac{\\mu(k_0)}{r^{s+1}} \\dfrac{r^{s+1}2^{(s+1)\\alpha}}{r^{\\beta(s+1)}}\n = \\frac{\\mu(k_0)}{r^{s+1}}\n\\end{align}\nHence \\eqref{claim} is also valid if $s$ is replaced by $s+1$.\n\\end{proof}\n\n\n\nNow we can present our result regarding to the $L^{\\infty}$ bound for the weak solution to the problem (\\ref{eqn:pACOK}) with initial data $\\phi_0$ in 2D.\n\n\\begin{theorem}\\label{theorem: Linfity}\nFor any $\\phi_0\\in H^1(\\mathbb{T}^2)\\cap L^\\infty(\\mathbb{T}^2)$ and $T>0$, the unique weak solution\n\\[\n \\phi\\in L^\\infty(0, T; H^1(\\mathbb{T}^2))\\cap L^2(0, T; H^2(\\mathbb{T}^2))\n\\]\nto the problem (\\ref{eqn:pACOK}) satisfies\n\\begin{equation}\n\\|\\phi\\|_{L^\\infty([0,T]\\times\\mathbb{T}^2)}\\leq\\|\\phi_0\\|_{L^\\infty}+C^\\ast,\n\\end{equation}\nwhere $C^\\ast>0$ is a constant that only depends on $\\|\\phi_0\\|_{H^1}, \\epsilon^{-1}, \\omega, \\gamma$ and $M$.\n\\end{theorem}\n\\begin{proof}\nLet us denote\n\\[\n l=\\|\\phi_0\\|_{L^\\infty},\n\\]\nthe test function\n\\[\n\\xi(t,x)=(\\phi(t,x)-k)^+\\chi_{[t_1,t_2]},\\quad\\forall k>l\\ \\text{and}\\ t_2>t_1\n\\]\nand\n\\[\n\\tilde{F}(\\phi)=-\\frac{1}{\\epsilon}W'(\\phi)-\\gamma(-\\Delta)^{-1}\\big(f(\\phi)-\\omega\\big)f'(\\phi)-M\\Big(\\int_{\\mathbb{T}^2}(f(\\phi) - \\omega)\\,\\mathrm{d}{x}\\Big)f'(\\phi)\n\\]\nThen it is immediate to check for any $p>2$, there exists $M_1>0$ that only depends on $p$, $\\|\\phi_0\\|_{H^1}$, and coefficients of the equation, such that\n\\[\n\\|\\tilde{F}(\\phi(t))\\|_{L^p}\\leq M_1, \\quad\\forall t\\in [0, T].\n\\]\nConsider $\\xi$ as a test function for the weak solution $\\phi$, we obtain that\n\\begin{align}\\label{integral-equality-1}\n&\\iint_{[0, T]\\times\\mathbb{T}^2}\\partial_t(\\phi-k)^+(\\phi-k)^+\\chi_{[t_1,t_2]}\\,\\mathrm{d}{x}\\mathrm{d}{t}+\\iint_{[0, T]\\times\\mathbb{T}^2}\\big|\\nabla(\\phi-k)^+\\big|^2\\chi_{[t_1,t_2]}\\,\\mathrm{d}{x}\\mathrm{d}{t}\\nonumber\\\\\n=&\\iint_{[0, 
T]\\times\\mathbb{T}^2}\\tilde{F}(\\phi)(\\phi-k)^+\\chi_{[t_1,t_2]}\\,\\mathrm{d}{x}\\mathrm{d}{t}.\n\\end{align}\nIf we denote\n\\[\n\\mathrm{I}_k(t)=\\int_{\\mathbb{T}^2}|(\\phi(t,x)-k)^+|^2\\,\\mathrm{d}{x},\n\\]\nwe get from \\eqref{integral-equality-1} that\n\\begin{align}\\label{integral-inequality-1}\n\\frac12\\Big[\\mathrm{I}_k(t_2)-\\mathrm{I}_k(t_1)\\Big]+\\int_{t_1}^{t_2}\\int_{\\mathbb{T}^2}\\big|\\nabla(\\phi-k)^+\\big|^2\\,\\mathrm{d}{x}\\mathrm{d}{t}\n\\leq\\int_{t_1}^{t_2}\\int_{\\mathbb{T}^2}\\big|\\tilde{F}(\\phi)\\big|(\\phi-k)^+\\,\\mathrm{d}{x}\\mathrm{d}{t}.\n\\end{align}\n\nSuppose $\\mathrm{I}_k(t)$ attains its maximum value at $s\\in [0, T]$ (assume $s>0$ without loss of generalization). Then\n\\[\n\\mathrm{I}_k(s)-\\mathrm{I}_k(s-\\eps)\\geq 0,\n\\]\nfor any $0<\\eps0$ is a generic constant. If we further denote\n$$\n F(t,x)=|\\tilde{F}(\\phi(t,x))|+|\\varphi(t, x)|,\n$$\nthen $\\forall p>2$ we get\nafter combining \\eqref{key-inequality-1} with \\eqref{Sobolev-embedding} that\n\\begin{equation}\\label{key-inequality-2}\n\\Big(\\int_{\\mathbb{T}^2}|\\varphi(s, x)|^p\\,\\mathrm{d}{x}\\Big)^{\\frac{2}{p}}\\leq C\\int_{\\mathbb{T}^2}F(s, x)|\\varphi(s,x)|\\,\\mathrm{d}{x}.\n\\end{equation}\nNote that\n\\begin{equation}\\label{bound-F}\n\\|F(t)\\|_{L^p}\\leq M_2, \\quad\\forall t\\in [0, T].\n\\end{equation}\n\nNext, we denote\n$$\n B_k(t)=\\{x\\in\\mathbb{T}^2: \\phi(t, x)>k\\}.\n$$\nThen it follows from \\eqref{key-inequality-2} and H\\\"{o}lder's inequality that\n$$\n \\Big(\\int_{B_k(s)}|\\varphi(s, x)|^p\\,\\mathrm{d}{x}\\Big)^{\\frac{2}{p}}\\leq C\\int_{B_k(s)}F(s, x)|\\varphi(s,x)|\\,\\mathrm{d}{x}\n \\leq C\\Big(\\int_{B_k(s)}|\\varphi(s, x)|^p\\,\\mathrm{d}{x}\\Big)^{\\frac{1}{p}}\\Big(\\int_{B_k(s)}|F(s, x)|^q\\,\\mathrm{d}{x}\\Big)^{\\frac{1}{q}} ,\n$$\nwhere $q$ is the H\\\"older conjugate of $p$. It further implies\n\\begin{equation}\\label{key-inequality-3}\n\\Big(\\int_{B_k(s)}|\\varphi(s, x)|^p\\,\\mathrm{d}{x}\\Big)^{\\frac{1}{p}}\\leq C\\Big(\\int_{B_k(s)}|F(s, x)|^q\\,\\mathrm{d}{x}\\Big)^{\\frac{1}{q}},\n\\end{equation}\nwhere $C>0$ only depends on $p$, $\\|\\phi_0\\|_{H^1}$, and coefficients of equation (\\ref{eqn:pACOK}). As a consequence, for any $1k,\\,t\\in [0, T]\n\\end{align}\ndue to the fact that $\\varphi\\geq h-k$ on $B_h(t)$ and $B_h(t)\\subset B_k(t)$. Therefore, if we denote\n$$\n \\mu(k)=\\sup_{t\\in[0, T]}|B_k(t)|,\n$$\nwe get from \\eqref{direction-1} and \\eqref{direction-2} that\n\\begin{equation}\\label{iterative-inequality}\n\\mu(h)\\leq \\Big(\\frac{C}{h-k}\\Big)^2\\mu(k)^{\\frac{2p-2}{mp}+\\frac{p-2}{p}}.\n\\end{equation}\nNote that\n$$\n \\frac{2p-2}{mp}+\\frac{p-2}{p}>1\n$$\nby the choice of $m$, $p$, hence using Lemma \\ref{lemma-algebra} we know that\n$$\n \\mu(l+C^\\ast)=\\sup_{t\\in [0, T]}|B_{k+C^\\ast}(t)|=0,\n$$\nwhich indicates\n\\begin{equation}\n\\phi(t, x)\\leq l+C^\\ast, \\quad\\forall (t, x)\\in [0, T]\\times\\mathbb{T}^2.\n\\end{equation}\nHence the proof is complete.\n\n\\end{proof}\n\n\n\n\n\n\n\n\n\\section{Acknowledgements}\n\nX. Xu's work is supported by a grant from the Simons Foundation through grant No. 635288;\nY. Zhao's work is supported by a grant from the Simons Foundation through Grant No. 
357963.\n\n\n\n\n\\newpage\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe detection of the primordial B-mode power spectrum by the BICEP2 collaboration confirms\nthe existence of primordial gravitational wave, and the observed B-mode power\nspectrum gives the constraint on the tensor-to-scalar ratio\nwith $r=0.20^{+0.07}_{-0.05}$ at $1\\sigma$ level for the lensed-$\\Lambda$CDM model \\cite{Ade:2014xna}.\nFurthermore, $r=0$ is disfavored at $7.0\\sigma$ level. The new constraints on $r$\nand the spectral index $n_s$ exclude a wide class of inflationary models.\nFor the inflation model with non-minimal coupling with gravity \\cite{Kallosh:2013tua}, a universal\nattractor at strong coupling was found with $n_s=1-2\/N$ and $r=12\/N^2$. This model is inconsistent\nwith the BICEP2 result $r\\gtrsim 0.1$ at $2\\sigma$ level\nbecause the BICEP2 constraint on $r$ requires the number of e-folds $N=\\sqrt{12\/r}\\lesssim \\sqrt{120}\\approx 11$\nwhich is not enough to solve the horizon problem. If we require $N=50$,\nthen $r=0.0048$, so the model is excluded by the BICEP2 result. For the small-field inflation\nlike the hilltop inflation with the potential $V(\\phi)=V_0[1-(\\phi\/\\mu)^p]$ \\cite{Albrecht:1982wi,Boubekeur:2005zm},\n$r\\sim 0$, so the model is excluded by the BICEP2 result.\n\nWithout the running of the spectral index, the combination of {\\em Planck}+WP+highL data gives\n$n_s=0.9600\\pm 0.0072$ and $r_{0.002}<0.0457$ at the 68\\% confidence level for\nthe $\\Lambda$CDM model \\cite{Ade:2013zuv,Ade:2013uln} which is in tension\nwith the BICEP2 result. When the running of the spectral index is included in the data fitting,\nthe same combination gives $n_s=0.957\\pm 0.015$, $n_s'=dn_s\/d\\ln k=-0.022^{+0.020}_{-0.021}$\nand $r_{0.002}<0.263$ at the 95\\% confidence level \\cite{Ade:2013zuv,Ade:2013uln}.\nTo give a consistent constraint on $r$\nfor the combination of {\\em Planck}+WP+highL data and the BICEP2 data, we require\na running of the spectral index $n_s'<-0.002$ at the 95\\% confidence level.\nFor the single field inflation, the spectral index $n_s$ for the\nscalar perturbation deviates from the Harrison-Zel'dovich value of $1$ in the order of $10^{-2}$, so\n$n_s'$ is in the order of $10^{-3}$. The explanation of large $r$ and $n_s'$ is a challenge to\nsingle field inflation. In light of the BICEP2 data, several attempts were proposed to explain the large value\nof $r$ \\cite{Lizarraga:2014eaa,Harigaya:2014qza,Contaldi:2014zua,Collins:2014yua,Byrnes:2014xua,Anchordoqui:2014uua,Harigaya:2014sua,\nNakayama:2014koa,Zhao:2014rna,\nCook:2014dga,Kobayashi:2014jga,Miranda:2014wga,Masina:2014yga,Hamada:2014iga,Hertzberg:2014aha,\nDent:2014rga,Joergensen:2014rya,Freese:2014nla,Ashoorioon:2014nta,Ashoorioon:2013eia,\nChoudhury:2014kma,Choudhury:2013iaa,Hotchkiss:2011gz,BenDayan:2009kv}.\nIn this Letter, we use the chaotic and natural inflation models to\nexplain the challenge.\n\n\\section{Slow-roll Inflation}\n\nThe slow-roll parameters are defined as\n\\begin{gather}\n\\label{slow1}\n\\epsilon=\\frac{M_{pl}^2V_\\phi^2}{2V^2},\\\\\n\\label{slow2}\n\\eta=\\frac{M_{pl}^2V_{\\phi\\phi}}{V},\\\\\n\\label{slow3}\n\\xi=\\frac{M_{pl}^4V_\\phi V_{\\phi\\phi\\phi}}{V^2},\n\\end{gather}\nwhere $M^2_{pl}=(8\\pi G)^{-1}$, $V_\\phi=dV(\\phi)\/d\\phi$, $V_{\\phi\\phi}=d^2V(\\phi)\/d\\phi^2$\nand $V_{\\phi\\phi\\phi}=d^3V(\\phi)\/d\\phi^3$. 
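\n\nAs a quick sanity check of the definitions (\\ref{slow1})--(\\ref{slow3}), the following short symbolic computation (a minimal sketch using \\texttt{sympy}; the power-law potential is chosen only as an illustration) evaluates the three slow-roll parameters:\n\\begin{verbatim}\nimport sympy as sp\n\nphi, Mpl, Lam, p = sp.symbols('phi M_pl Lambda p', positive=True)\nV = Lam**4 * (phi / Mpl)**p                  # illustrative potential\n\neps = Mpl**2 / 2 * (sp.diff(V, phi) / V)**2             # eq. (slow1)\neta = Mpl**2 * sp.diff(V, phi, 2) / V                   # eq. (slow2)\nxi  = Mpl**4 * sp.diff(V, phi) * sp.diff(V, phi, 3) / V**2   # eq. (slow3)\n\nprint(sp.simplify(eps))  # p**2*M_pl**2/(2*phi**2)\nprint(sp.simplify(eta))  # p*(p - 1)*M_pl**2/phi**2\nprint(sp.simplify(xi))   # p**2*(p - 1)*(p - 2)*M_pl**4/phi**4\n\\end{verbatim}\n\n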
For the single field inflation, the spectral indices,\nthe tensor-to-scalar ratio and the running are\ngiven by\n\\begin{gather}\n\\label{nsdef}\nn_s-1\\approx 2\\eta-6\\epsilon,\\\\\n\\label{rdef}\nr\\approx 16\\epsilon\\approx -8n_t,\\\\\n\\label{rundef}\nn_s'=dn_s\/d\\ln k\\approx 16\\epsilon\\eta-24\\epsilon^2-2\\xi.\n\\end{gather}\nThe number of e-folds before the end of inflation is given by\n\\begin{equation}\n\\label{efolddef}\nN(t)=\\int_t^{t_e}Hdt\\approx \\frac{1}{M_{pl}^2}\\int_{\\phi_e}^\\phi\\frac{V(\\phi)}{V_\\phi(\\phi)}d\\phi,\n\\end{equation}\nwhere the value $\\phi_e$ of the inflaton field at the end of inflation is defined by $\\epsilon(\\phi_e)=1$.\nThe scalar power spectrum is\n\\begin{equation}\n\\label{power}\n\\mathcal{P}_{\\mathcal{R}}=A_s\\left(\\frac{k}{k_*}\\right)^{n_s-1+n_s'\\ln(k\/k_*)\/2},\n\\end{equation}\nwhere the subscript ``*\" means the value at the horizon crossing, the scalar amplitude\n\\begin{equation}\n\\label{power1}\nA_s\\approx \\frac{1}{24\\pi^2M^4_{pl}}\\frac{\\Lambda^4}{\\epsilon}.\n\\end{equation}\nWith the BICEP2 result $r=0.2$, the energy scale of inflation is $\\Lambda\\sim 2.2\\times 10^{16}$GeV.\n\nFor the chaotic inflation with the power-law potential $V(\\phi)=\\Lambda^4(\\phi\/M_{pl})^p$ \\cite{linde83}, the slow-roll parameters\nare $\\epsilon=p\/(4N_*)$, $\\eta=(p-1)\/(2N_*)$ and $\\xi=(p-1)(p-2)\/(4N^2_*)$. The spectral index $n_s=1-(p+2)\/(2N_*)$,\nthe running of the spectral index $n_s'=-(2+p)\/(2N_*^2)=-2(1-n_s)^2\/(p+2)<0$ and the tensor-to-scalar ratio $r=4p\/N_*=8p(1-n_s)\/(p+2)$.\nWe plot the $n_s-r$ and $n_s-n_s'$ relations in Figs.\n\\ref{pwrnsr} and \\ref{pwrrun} for $p=1$, $p=2$, $p=3$ and $p=4$. In Fig. \\ref{pwrnsr}, we also show the points with\n$N_*=50$ and $N_*=60$. From Figs. \\ref{pwrnsr} and \\ref{pwrrun}, we see that $r$ increases with the power $p$, but\n$|n_s'|$ decreases with the power $p$. Therefore, it is not easy to satisfy both the requirements $r\\gtrsim 0.1$ and\n$n_s'<-0.002$. The chaotic inflation with $20$, $i = 1, \\dots, N$, that determines the direction ${\\bf e}(\\Theta_i) = (\\cos \\Theta_i, \\sin \\Theta_i)$ of self-propulsion at constant speed $v_0$. The orientation $\\Theta_i$ also undergoes free rotational diffusion with diffusion coefficient $D_R$. \nIn its more general form, particles interact through a pair potential $u(r, \\varphi)$ with total potential energy \n\\begin{align}\\label{e:U}\n U = \\chi\\sum_{1\\le i \\epsilon, \\forall j\\ne i, \\\\\n\t\\label{sde_angle_hc}\n\t\\mathrm{d} \\Theta_i &= \\sqrt{2 D_R} \\mathrm{d} W_i.\n\\end{align}\n\\end{subequations}\nThis represents particles as hard disks of diameter $\\epsilon$: particles only sense each other when they come into contact, and they are not allowed to get closer than $\\epsilon$ to each other (mutual impenetrability condition). In comparison with the mean-field scaling, here instead the scaling is $\\chi = 1, \\ell = \\epsilon \\ll 1$ so that each particle only interacts with the few particles that are within a distance $O(\\epsilon)$, the interaction is very strong. 
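\n\nBefore passing to the macroscopic description, we note that the hard-core dynamics \\eqref{sde_model_hc} is straightforward to simulate. The following minimal sketch in Python assumes a periodic unit square, combines the self-propulsion $v_0{\\bf e}(\\Theta_i)$ with translational noise of strength $D_T$ as in \\eqref{sde_x}, and replaces the exact treatment of the non-overlap constraint by a simple rejection of overlapping moves, which is only a crude approximation of the hard-core dynamics.\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)\nN, eps = 50, 0.01                       # particle number, hard-core diameter\nv0, D_T, D_R, dt = 1.0, 0.1, 1.0, 1e-4\nX = rng.random((N, 2))                  # positions on the periodic unit square\ntheta = 2 * np.pi * rng.random(N)       # orientations\n\ndef overlaps(X, i):\n    d = X - X[i]\n    d -= np.round(d)                    # minimum-image convention\n    r = np.hypot(d[:, 0], d[:, 1])\n    r[i] = np.inf\n    return np.any(r < eps)\n\nfor n in range(10000):\n    for i in range(N):                  # O(N^2) per step; a sketch, not optimised\n        e = np.array([np.cos(theta[i]), np.sin(theta[i])])\n        old = X[i].copy()\n        X[i] = (X[i] + v0 * e * dt + np.sqrt(2 * D_T * dt) * rng.standard_normal(2)) % 1.0\n        if overlaps(X, i):              # enforce |X_i - X_j| > eps by rejection\n            X[i] = old\n    theta += np.sqrt(2 * D_R * dt) * rng.standard_normal(N)\n\\end{verbatim}\n\n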
Using the method of matched asymptotics, from \\eqref{sde_model_hc} one obtains to order $\\phi$ the following model:\n\\begin{align}\\label{model3}\n\\partial_t f + v_0\\nabla \\cdot \\left[ f (1-\\phi \\rho) {\\bf e}(\\theta) + \\phi {\\bf p} f\\right] &= D_T \\nabla \\cdot \\left[ (1- \\phi \\rho) \\nabla f + 3 \\phi f \\nabla \\rho \\right] + D_R \\partial_{\\theta}^2 f.\n\\end{align}\nHere $\\phi$ is the effective occupied area $\\phi = (N-1)\\epsilon^2 \\pi\/2$. Model \\eqref{model3} is obtained formally in the limit of $\\epsilon$ and $\\phi$ small. \nNote that this equation is consistent with the case $N=1$: if there is only one particle, then $\\phi = 1$ and we recover a linear PDE (no interactions). The equation for the spatial density is\n\\begin{equation}\n\\partial_t \\rho + v_0 \\nabla \\cdot {\\bf p} = D_T \\nabla \\cdot \\left[ (1+ 2\\phi \\rho) \\nabla \\rho \\right],\n\\end{equation}\nwhich indicates the collective diffusion effect: the higher the occupied fraction $\\phi$, the higher the effective diffusion coefficient. \nWe note that, due to the nature of the excluded-volume interactions, models \\eqref{model2} and \\eqref{model3} are obtained via approximations (closure at the pair correlation function and matched asymptotic expansions, respectively) and no rigorous results are available. A nice exposition of the difficulties of going from micro to macro in the presence of hard-core non-overlapping constraints is given in \\cite{Maury:2011eu}. In particular, they consider hard-core interacting particles in the context of congestion handling in crowd motion. In contrast to \\eqref{sde_model_hc}, the dynamics involve only position and are deterministic. Collisions can then be handled via the projection of velocities onto the set of feasible velocities. In \\cite{Maury:2011eu} they do not attempt to derive a macroscopic model from the microscopic dynamics but instead propose a PDE model for the population density $\\rho({\\bf x},t)$ that expresses the congestion assumption by setting the velocity to zero whenever $\\rho$ attains a saturation value (which they set to one). \n\n\\subsection{Discrete random walks}\\label{sec:discrete}\n\nNext we discuss fully discrete models for active particles with size exclusion effects. We start by considering a \nsimple exclusion model for active particles on a one-dimensional lattice, which has been investigated in \\cite{kourbane2018exact}. The brief description of the microscopic lattice model is as follows: $N$ particles of size $\\epsilon$ evolve on a discrete ring of $1\/\\epsilon$ sites, with occupancy $\\phi = \\epsilon N \\le 1$. Each lattice is occupied by at most one particle (thus modelling a size exclusion), and particles can either be moving left ($-$ particles) or right ($+$ particles). The configuration can be represented using occupation numbers $\\sigma_i$ at site $i$ with values in $\\{-1, 0, 1\\}$. \nThe dynamics combine three mechanisms: \n\\begin{enumerate}[label=(\\alph*)]\n\t\\item Diffusive motion: for each bond $(i, i+1)$, $\\sigma_i$ and $\\sigma_{i+1}$ are exchanged at rate $D_T$.\n\t\\item Self-propulsion and size exclusion: for each bond $(i, i+1)$, a $+$ particle in $i$ jumps to $i+1$ if $\\sigma_{i+1} = 0$; or a $-$ particle in $i+1$ jumps to $i$ if $\\sigma_i = 0$ at rate $\\epsilon v_0$.\n\t\\item Tumbling: particles switch direction $\\sigma_i \\to - \\sigma_i$ at rate $\\epsilon^2 \\lambda$. 
\n\\end{enumerate}\nRescaling space and time as $\\epsilon i$ and $\\epsilon^2 t$ respectively, and starting from a smooth initial conditions, the macroscopic equations can be derived exactly as \\cite{kourbane2018exact}\n\\begin{align} \\label{model_lattice1D}\n\\begin{aligned}\n\\partial_t f_+ + v_0\\partial_x [f_+ (1-\\phi \\rho)] &= D_T\\partial_{xx} f_+ + \\lambda (f_- - f_+), \\\\\n\\partial_t f_- - v_0\\partial_x [f_- (1-\\phi \\rho)] &= D_T\\partial_{xx} f_- + \\lambda (f_+ - f_-),\n\\end{aligned}\n\\end{align}\nwhere $f_+$ and $f_-$ are the probability densities corresponding to the $+$ and $-$ particles, respectively, and $\\rho = f_+ + f_-$. \nIntroducing the number densities\n\\begin{align}\\label{number_densities}\nr({\\bf x},t) = N f_+({\\bf x},t), \\qquad b({\\bf x},t) = N f_-({\\bf x},t),\n\\end{align}\nwhich integrate to $N_1$ and $N_2$ respectively, we can rewrite \\eqref{model_lattice1D} as \n\\begin{align}\\label{model_lattice1D_number_densities}\n\\begin{aligned}\n\\partial_t r + v_0\\partial_x [r (1-\\bar \\rho)] &= D_T\\partial_{xx} r + \\lambda (b - r), \\\\\n\\partial_t b - v_0\\partial_x [b (1- \\bar\\rho)] &= D_T\\partial_{xx} b + \\lambda (r - b),\n\\end{aligned}\n\\end{align}\nwith $\\bar \\rho = \\epsilon (r + b)$.\nOne can also consider the same process in higher dimensions with a finite set of orientations ${\\bf e}_k, k = 1, \\dots, m$. The most straightforward generalisation of \\eqref{model_lattice1D} is to consider a two-dimensional square lattice with $m=4$ directions, namely ${\\bf e}_1 = (1,0), {\\bf e}_2 = (0,1), {\\bf e}_3 = (-1,0), {\\bf e}_4 = (0,-1)$ (see Fig.~1 in \\cite{kourbane2018exact}). In this case, the configuration would take five possible values, $\\sigma_i= \\{-1, -i, 0, i, 1\\}$ and the resulting macroscopic model would consist of a system of four equations for the densities of each subpopulations\n\\begin{equation} \\label{lattice2D}\n\t\\partial_t f_k + v_0\\nabla \\cdot [f_k (1-\\phi \\rho) {\\bf e}_k] = D_T \\Delta f_k + \\lambda (f_{k+1} + f_{k-1} - 2 f_k), \\qquad k = 1, \\dots,4\n\\end{equation}\nwhere now $\\phi = \\epsilon^2 N$, $f_k({\\bf x},t)$ stands for the probability density of particles going in the ${\\bf e}_k$ direction, and $\\rho = \\sum_k f_k$. Periodicity in angle implies that $f_5=f_1, f_{-1} = f_4$. \n\nNote how the model in \\cite{kourbane2018exact} differs from an asymmetric simple exclusion processes (ASEP) in that particles are allowed to swap places in the diffusive step (see (a) above). As a result, the macroscopic models \\eqref{model_lattice1D} and \\eqref{lattice2D} lack any cross-diffusion terms. We can also consider an actual ASEP process, in which simple exclusion is also added to the diffusive step, that is, point (a) above is replaced by\n\\begin{enumerate}[label=(\\alph*')]\n\t\\item Diffusive motion: a particle in $i$ jumps to $i+ 1$ at rate $D_T$ if $\\sigma_{i+ 1} = 0$ (and similarly to $i-1$).\n\\end{enumerate}\nIn this case, the resulting macroscopic model is\n\\begin{equation} \\label{ASEP_2D}\n\t\\partial_t f_k + v_0\\nabla \\cdot [ f_k (1- \\phi \\rho) {\\bf e}_k] = D_T \\nabla \\cdot [(1- \\phi \\rho) \\nabla f_k + \\phi f_k \\nabla \\rho] + \\lambda (f_{k+1} + f_{k-1} - 2 f_k), \\qquad k = 1, \\dots,4.\n\\end{equation}\n\n\n\n\\subsection{Hybrid random walks}\\label{sec:hybrid}\n\nIn the previous two subsections we have discussed models that consider both the position and the orientation as continuous, or discrete. 
Here we discuss hybrid random walks, that is, when positions are continuous and orientations finite, or vice-versa.\n\nThe first hybrid model we consider is an active exclusion process whereby the orientation is a continuous process in $[0, 2 \\pi)$ evolving according to a Brownian motion with diffusion $D_R$, \\eqref{sde_angle}, while keeping the position evolving according to a discrete asymmetric exclusion process (ASEP) \\cite{Bruna:2021tb}. The advantage of this approach is to avoid the anisotropy imposed by the underlying lattice. \nHere we present the model in two-dimensions so that we can compare it to the models presented above. \n\nWe consider a square lattice with spacing $\\epsilon$ and orientations ${\\bf e}_k, k=1, \\dots, 4$ as given above. A particle at lattice site ${\\bf x}$ can jump to neighbouring sites ${\\bf x}+\\epsilon {\\bf e}_k$ if the latter is empty at a rate $\\pi_k(\\theta)$ that depends on its orientation $\\theta$, namely \n\\begin{equation*}\n\t\\pi_k(\\theta) = \\alpha_\\epsilon \\exp(\\beta_\\epsilon {\\bf e}(\\theta) \\cdot {\\bf e}_k),\n\\end{equation*}\nwhere $\\alpha_\\epsilon = D_T \/\\epsilon^2$ and $\\beta_\\epsilon = v_0 \\epsilon\/(2 D_T)$. Therefore, the diffusive and self-propulsion mechanisms in \\eqref{model_lattice1D} are now accounted for together: jumping in the direction opposite to your orientation reduces the rate to $\\sim \\alpha_\\epsilon(1-\\beta_\\epsilon)$, whereas the there is a positive bias $\\sim \\alpha_\\epsilon(1+\\beta_\\epsilon)$ towards jumps in the direction pointed to by ${\\bf e}(\\theta)$. The tumbling (point 3 above) is replaced by a rotational Brownian motion. \nTaking the limit $\\epsilon\\to 0$ while keeping the occupied fraction $\\phi =N \\epsilon^2$ finite one obtains the following macroscopic model for $f = f({\\bf x},\\theta,t)$\n\\begin{align} \\label{model_hy}\n\\partial_t f + v_0\\nabla \\cdot [ f (1- \\phi \\rho) {\\bf e}(\\theta)] &= D_T \\nabla \\cdot ( (1- \\phi \\rho) \\nabla f + \\phi f \\nabla \\rho) + D_R \\partial_{\\theta}^2 f.\n\\end{align}\nThis model can be directly related to the fully discrete model \\eqref{ASEP_2D}: they are exactly the same if one considers \\eqref{ASEP_2D} as the discretised-in-angle version of \\eqref{model_hy} by identifying \n$$\nD_R \\partial_\\theta^2 f_k \\approx D_R \\frac{f_{k+1} + f_{k-1} - 2 f_k}{(2\\pi\/m)^2}, \n$$\nthat is, $\\lambda = D_R m^2\/(2 \\pi^2)$, where $m$ is the number of orientations in the fully discrete model.\n\n\n\nThe other possible hybrid model is to consider a continuous random walk with interactions in space \\eqref{sde_x}, while only allowing a finite number of orientations, $\\Theta_i \\in \\{\\theta_1, \\dots, \\theta_m\\}$. In its simplest setting, we can consider that $\\theta_k$ are equally spaced in $[0, 2\\pi)$ and a constant switching rate $\\lambda$ between the neighbouring angles. 
The $N$ particles evolve according to the stochastic model:\n\\begin{subequations} \\label{hybrid2}\n\\begin{align}\n\\label{hybrid_x}\n\\mathrm{d} {\\bf X}_i &= \\sqrt{2 D_T} \\mathrm{d} {\\bf W}_i - \\nabla_{{\\bf x}_i} U\\mathrm{d} t + v_0 {\\bf e}(\\Theta_i) \\mathrm{d} t,\\\\\n\\label{hybrid_angle}\n\\Theta_i &= \\{\\theta_k\\}_{k=1}^m, \\quad \\theta_k \\xrightarrow{\\ \\lambda \\ } \\theta_{k+ 1}\\pmod{2\\pi}, \\quad \\theta_k \\xrightarrow{\\ \\lambda \\ } \\theta_{k- 1}\\pmod{2\\pi}.\n\\end{align}\n\\end{subequations}\nIf we assume exclude-volume interactions through a hard-core potential, the resulting model is \\cite{Wilson:2018fg}\n\\begin{equation}\\label{model_wilson}\n\t\\partial_t f_k + v_0\\nabla \\cdot \\left[ f_k (1-\\phi \\rho) {\\bf e}_k + \\phi {\\bf p} f_k\\right] = D_T \\nabla \\cdot \\left[ (1- \\phi \\rho) \\nabla f_k + 3 \\phi f_k \\nabla \\rho \\right] + \\lambda \\left ( f_{k+1} + f_{k-1} -2 f_k \\right),\n\\end{equation}\nwhere $\\rho = \\sum_{k=1}^m f_k$, ${\\bf p} = \\sum_{k=1}^m f_k {\\bf e}(\\theta_k)$, and ${\\bf e}_k = {\\bf e}(\\theta_k)$. \nThe density $f_k({\\bf x},t)$ represents the probability of finding a particle at position ${\\bf x}$ at time $t$ with orientation $\\theta_k$ (naturally, we identify $f_{m+1} = f_1$ and $f_{-1} = f_m$). Here $\\phi = (N-1) \\epsilon^2 \\pi\/2$ represents the effective excluded region as in \\eqref{model3}. We note how this model is consistent with the continuous model \\eqref{model3}, in that if we had discretised angle in \\eqref{model3} we would arrive at the cross-diffusion reaction model \\eqref{model_wilson}. \n\nA variant of the hybrid model \\eqref{hybrid2} is to allow for jumps to arbitrary orientations instead of rotations of $2\\pi\/m$, namely, from $\\theta_k$ to $\\theta_{j}\\pmod{2\\pi}$, $j\\ne k$, at a constant rate $\\lambda$ independent of the rotation. This is a convenient way to model the tumbles of a run-and-tumble process, such as the one used to describe the motion of \\emph{E. Coli} \\cite{Berg:1993ug}, see also \\S \\ref{sec:biological_transport}. In this case, the reaction term in \\eqref{model_wilson} changes to \n\\begin{equation}\\label{hybrid3}\n\t\\partial_t f_k + v_0\\nabla \\cdot \\left[ f_k (1-\\phi \\rho) {\\bf e}(\\theta_k) + \\phi {\\bf p} f_k\\right] = D_T \\nabla \\cdot \\left[ (1- \\phi \\rho) \\nabla f_k + 3 \\phi f_k \\nabla \\rho \\right] + \\lambda \\sum_{j \\ne k } \\left( f_{j} - f_k \\right).\n\\end{equation}\n\nWe may generalise the jumps in orientation by introducing a turning kernel $T(\\theta, \\theta')$ as the probability density function for a rotation from $\\theta'$ to $\\theta$. That is, if $\\Theta_i(t)$ is the orientation of the $i$th particle at time $t$ and the jump occurs at $t^*$, \n$$\nT(\\theta, \\theta') \\mathrm{d} \\theta = \\mathbb P\\left \\{ \\theta \\le \\Theta_i(t^*_+) \\le \\theta + \\mathrm{d} \\theta \\ | \\ \\Theta_i(t^*_-) = \\theta' \\right \\}.\n$$\nClearly for mass conservation we require that $\\int T(\\theta, \\theta') \\mathrm{d} \\theta = 1$. \nThe jumps may only depend on the relative orientation $\\theta - \\theta'$ in the case of a homogeneous and isotropic medium, in which case $T(\\theta, \\theta') \\equiv T(\\theta-\\theta')$. 
This is the case of the two particular examples above: in \\eqref{model_wilson}, the kernel is \n$$\nT(\\theta, \\theta') = \\frac{1}{2} \\left [ \\delta(\\theta -\\theta'- \\Delta) + \\delta(\\theta-\\theta' + \\Delta) \\right], \\qquad \\Delta = \\frac{2\\pi}{m},\n$$\nwhereas the rotation kernel in \\eqref{hybrid3} is\n$$\nT(\\theta, \\theta') = \\frac{1}{m-1} \\sum_{k = 1}^{m-1} \\delta(\\theta -\\theta' + k\\Delta), \\qquad \\Delta = \\frac{2\\pi}{m},\n$$\nwhere the argument of the delta function is taken to be $2\\pi$-periodic. If the turning times $t^*$ are distributed according to a Poisson process with intensity $\\lambda$, the resulting macroscopic model for the phase density $f = f({\\bf x},\\theta,t)$ with a general turning kernel $T$ becomes\n\\begin{equation}\\label{run-tumble-cont}\n\t\\partial_t f + v_0\\nabla \\cdot \\left[ f (1-\\phi \\rho) {\\bf e}(\\theta) + \\phi {\\bf p} f\\right] = D_T \\nabla \\cdot \\left[ (1- \\phi \\rho) \\nabla f + 3 \\phi f \\nabla \\rho \\right] -\\lambda f + \\lambda \\int_0^{2\\pi} T(\\theta,\\theta') f({\\bf x},\\theta',t) \\mathrm{d} \\theta '.\n\\end{equation}\nWe note that the microscopic process associated with \\eqref{run-tumble-cont} is continuous (and not hybrid) if the support of $T$ has positive measure.\n\n\n\n\\section{Models for externally activated particles}\\label{s:nonact}\n\nIn this section we go from active to passive particles and consider models with time reversal at the microscopic level. As mentioned in the introduction, the defining factor of active matter models is the self-propulsion term, which makes them out-of-equilibrium. Mathematically, this can be expressed by saying that even the microscopic model lacks a gradient-flow structure (either due to the term ${\\bf e}(\\theta)$ in the transport term, see \\eqref{model3}, or the reaction terms in \\eqref{model_lattice1D}, \\eqref{lattice2D}, see section \\ref{sec:gen_structure}).\n \nIn the previous section we have seen the role the orientation $\\theta$ plays. If it is kept continuous, the resulting macroscopic model is of kinetic type for the density $f({\\bf x},\\theta,t)$. If instead only a fixed number $m$ of orientations are allowed, then these define a set of $m$ species, whereby all the particles in the same species have the same drift term. This motivates the connection to cross-diffusion systems for passive particles, which are obtained by turning off the active change in directions in the models of section \\ref{sec:active} and look at the resulting special cases. This is a relevant limit in many applications, such as in pedestrian dynamics (see section \\ref{sec:pedestrian}).\nOnce the orientations are fixed, we are left with two possible passive systems: either originating from a spatial Brownian motion or a spatial ASEP discrete process. \n\n\\subsection{Continuous models}\nThe starting point is the microscopic model \\eqref{sde_model} taking the limit $D_R \\to 0$. We could still keep the interaction potential as depending on the relative orientations, which would lead to different self- and cross-interactions (which might be useful in certain applications). 
Here for simplicity we assume interactions are all the same regardless of the orientations:\n\\begin{subequations}\n\t\\label{sde_model_passive}\n\\begin{align}\n\\mathrm{d} {\\bf X}_i &= \\sqrt{2 D_T} \\mathrm{d} {\\bf W}_i - \\nabla_{{\\bf x}_i} U\\mathrm{d} t + v_0 {\\bf e}(\\Theta_i) \\mathrm{d} t,\\\\\n\\Theta_i(t) &= \\theta_k, \\qquad \\text{if } i \\in \\mathcal I_k, \\qquad k = 1,\\dots, m,\n\\end{align}\n\\end{subequations}\nwhere $\\mathcal I_k$ is the set of particles belonging to species $k$. The number of particles in each species is $|\\mathcal I_k| = N_k$.\n\nThe mean-field limit of \\eqref{sde_model_passive} is given by (taking $N = \\sum_k N_k \\to \\infty$ as in \\eqref{1_mfa})\n\\begin{equation}\n\t\\label{mfa_passive}\n\t\\partial_t f_k({\\bf x}, t) = \\nabla_{{\\bf x}} \\cdot \\left[ D_T \\nabla_{\\bf x} f_k -v_0 {\\bf e}(\\theta_k) f_k + f_k \\nabla_{{\\bf x}} (u \\ast \\rho) \\right],\n\\end{equation}\nand $\\rho({\\bf x},t) = \\sum_k f_k$. For consistency with the active models, here we do not take $f_k$ to be probability densities but to integrate to the relative species fraction, whereas as before the total density $\\rho$ has unit mass:\n\\begin{equation} \\label{normalisation_passive}\n\t\\int_\\Omega f_k({\\bf x},t) \\mathrm{d} {\\bf x} = \\frac{N_k}{N},\\qquad \\int_\\Omega \\rho({\\bf x},t) \\mathrm{d} {\\bf x} = 1.\n\\end{equation}\nThus $f_k = f_k({\\bf x},t)$ describes the probability that a particle is at position ${\\bf x}$ at time $t$, \\emph{and} is in the $\\mathcal I_k$ set.\n\nThe microscopic model \\eqref{sde_model_passive} with the interaction term $U$ replaced by a hard-core potential for particles with diameter $\\epsilon$ can be dealt with via the method of matched asymptotics. In this case, the resulting cross-diffusion model is\n\\begin{align}\\label{eq:MF_cross_diff_sys}\n\t\\partial_t f_k + v_0\\nabla \\cdot \\left[ f_k {\\bf e}_k + \\phi_{kl}({\\bf e}_l - {\\bf e}_k) f_k f_l \\right] &= D_T \\nabla \\cdot \\left[ (1 + \\phi_{kk} f_k) \\nabla f_k + \\phi_{kl} (3 f_k \\nabla f_l - f_l \\nabla f_k) \\right], \\qquad l \\ne k,\n\\end{align}\nwhere $\\phi_{kk} = (N_k -1) N\/N_k \\epsilon^2 \\pi$, $\\phi_{kl} = N\\epsilon ^2 \\pi \/2$ for $l\\ne k$, and $f_k({\\bf x},t)$ are defined as above. \nThis model was first derived in \\cite{Bruna:2012wu} for just two species but in a slightly more general context, whereby particles many have different sizes and diffusion coefficients\n(also, note that in \\cite{Bruna:2012wu}, \\eqref{eq:MF_cross_diff_sys} appears written in terms of probability densities). \nEquation \\eqref{eq:MF_cross_diff_sys} can be directly related to model \\eqref{model_wilson} with $\\lambda = 0$ if in both models we assume $N_k$ large enough such that $N_k - 1 \\approx N_k, N-1 \\approx N$: \n\\begin{equation}\\label{cross-diff_num_den}\n\t\\partial_t f_k + v_0\\nabla \\cdot \\left[ f_k (1-\\phi \\rho) {\\bf e}(\\theta_k) + \\phi {\\bf p} f_k\\right] = D_T \\nabla \\cdot \\left[ (1- \\phi \\rho) \\nabla f_k + 3 \\phi f_k \\nabla \\rho\t \\right],\n\\end{equation}\nwhere $\\phi = N \\epsilon^2 \\pi\/2$, $\\rho = \\sum_k f_k$, and ${\\bf p} =\\sum_k f_k {\\bf e}(\\theta_k)$. Model \\eqref{cross-diff_num_den} is the cross-diffusion system for red and blue particles studied in \\cite{BBRW2017} in disguise. 
First, set the number of species to $m=2$ and define the number densities\n\\begin{align}\\label{number_densitiesb}\nr({\\bf x},t) = N f_1({\\bf x},t), \\qquad b({\\bf x},t) = N f_2({\\bf x},t),\n\\end{align}\nwhich integrate to $N_1$ and $N_2$ respectively. Then define the potentials $V_r = - (v_0\/D_T) {\\bf e}(\\theta_1) \\cdot {\\bf x}$ and $V_b = - (v_0\/D_T) {\\bf e}(\\theta_2) \\cdot {\\bf x}$. In terms of these new quantities, system \\eqref{cross-diff_num_den} becomes\n\\begin{subequations}\\label{e:aa_cross_sys}\n\\begin{align}\n\\partial_t r &= D_T \\nabla \\cdot \\left[ (1+ 2\\varphi r - \\varphi b) \\nabla r + 3\\varphi r \\nabla b + r \\nabla V_r + \\varphi r b \\nabla (V_b - V_r) \\right],\\\\\n\\partial_t b &= D_T \\nabla \\cdot \\left[ (1+2\\varphi b - \\varphi r) \\nabla b + 3\\varphi b \\nabla r + b \\nabla V_b + \\varphi r b \\nabla (V_r - V_b) \\right],\n\\end{align}\n\\end{subequations}\nwhere $\\varphi = \\epsilon^2 \\pi\/2$.\nThis is exactly the cross-diffusion system for particles of the same size and diffusivity studied in \\cite{BBRW2017} for $d=2$ (see Eqs. (11) in \\cite{BBRW2017}).\\footnote{We note a typo in \\cite{BBRW2017}: the coefficient $\\beta$ below system (11) should have read $\\beta = (2d-1)\\gamma$.} \n\n\\subsection{Discrete models}\\label{sec:discrete_passive}\n\n\nIn this category there are discrete processes in space without changes in orientations. The most well-known model in the context of excluded-volume interactions is ASEP, which was used above in combination of either continuous change in angle, see \\eqref{model_hy}, or discrete jumps, see \\eqref{ASEP_2D}. We obtain the corresponding passive process by either setting $D_R$ or $\\lambda$ to zero, respectively. The resulting model in either case is\n\\begin{align} \\label{model3_passive}\n\\partial_t f_k + v_0\\nabla \\cdot [ f_k (1- \\phi \\rho) {\\bf e}(\\theta_k)] &= D_T \\nabla \\cdot [(1- \\phi \\rho) \\nabla f_k + \\phi f_k \\nabla \\rho], \\qquad k = 1, \\dots, m,\n\\end{align}\nwhere $f_k$ satisfy \\eqref{normalisation_passive} as before, and $\\phi = N \\epsilon^2$. We notice three differences with its continuous passive counterpart \\eqref{cross-diff_num_den}: in the latter, the effective occupied fraction $\\phi$ has a factor of $\\pi\/2$, the coefficient in the cross-diffusion term $f_k \\rho$ has a factor in three, and the transport term has an additional nonlinearity that depends on the polarisation.\nThe cross-diffusion system \\eqref{model3_passive} was derived in \\cite{Simpson:2009gi} and analysed in \\cite{Burger:2010gb} for two species ($m=2$). 
Specifically, if we introduce the number densities $r, b$ and general potentials $V_r, V_b$ as above, it reads\n\\begin{subequations}\n\\label{eq:MF_cross_diff}\n\\begin{align}\n\\partial_t r &= D_T \\nabla \\cdot \\left[(1-\\bar \\rho) \\nabla r + r \\nabla \\bar \\rho + r (1- \\bar \\rho) \\nabla V_r \\right]\\\\\n\\partial_t b &= D_T \\nabla \\cdot \\left[(1-\\bar \\rho) \\nabla b + b \\nabla \\bar \\rho + b (1-\\bar \\rho) \\nabla V_b \\right],\n\\end{align}\n\\end{subequations}\nwhere $\\bar \\rho = \\epsilon^2 (r+b) = \\epsilon^2 (N_1 f_1 + N_2 f_2)$ (compare with (3.7)-(3.8) in \\cite{Burger:2010gb}).\\footnote{In the system (3.7)-(3.8) of \\cite{Burger:2010gb}, $r$ and $b$ are volume concentrations, thus having a factor of $\\epsilon^2$ compared to those used in \\eqref{eq:MF_cross_diff}, and the diffusivities of the two species are $1$ and $D$ instead of $D_T$ for both.} \n\n\\section{General model structure}\\label{sec:gen_structure}\n\nWe now put the models presented in the previous sections into a more general picture. We assume that $f = f({\\bf x}, \\theta, t)$, where $\\theta$ is a continuous variable taking values in $[0, 2\\pi)$ or a discrete variable taking values $\\theta_k$ for $k = 1, \\dots, m$ (ordered increasingly on $[0,2\\pi)$). In the latter case we shall also use the notation $f_k({\\bf x},t) = f({\\bf x},\\theta_k,t).$ We also recall the definition of the space density $\\rho$ and the polarisation $\\mathbf{p}$:\n \\begin{align*}\n \\rho({\\bf x},t) = \\int_0^{2\\pi} f({\\bf x}, \\theta, t ) \\, \\mathrm{d} \\mu(\\theta) \\quad \\text{and} \\quad \\mathbf{p}({\\bf x},t) = \\int_0^{2\\pi} {\\bf e}(\\theta) f({\\bf x}, \\theta, t) \\, \\mathrm{d} \\mu(\\theta),\n \\end{align*}\n where the integral in $\\theta$ is either with respect to the Lebesgue measure for continuum angles or with respect to a discrete measure (a finite sum) for discrete angles.\n \nThe models presented have the following general model structure:\n\\begin{equation}\n \\label{eq:genform}\n \\partial_t f + v_0 \\nabla \\cdot ( f ((1-\\phi \\rho) {\\bf e}(\\theta) + a\\phi \\mathbf{p} f)) = \n D_T \\nabla \\cdot \\left( \\mathcal{B}_1(\\rho ) \\nabla f + \\mathcal{B}_2(f) \\nabla \\rho \\right) + D_R \\Delta_\\theta f.\n\\end{equation}\nwith $a \\in \\lbrace 0,1 \\rbrace$. In \\eqref{eq:genform} the derivative operator $\\nabla$ is the standard gradient with respect to the spatial variable $x$, while the Laplacian $\\Delta_\\theta$ is either the second derivative $\\partial_{\\theta \\theta}$ with respect to $\\theta$ in the continuous case, the second order difference\n$$ \\Delta_\\phi f = {\\cal D}^2f = (f_{k+1} + f_{k-1} - 2f_k) $$\nwith cyclic extension of the index $k$, or the graph Laplacian with uniform weights\n$$ \\Delta_\\phi f = {\\cal D}_G f = \\sum_{j \\neq k} (f_j - f_k). $$ \nLet us mention that similar structures and results hold true for graph Laplacians with other non-negative weights.\nWe provide an overview of the respective differential operators and constants for most of the presented models in Table \\ref{tab:my_label}.\n\n \\setlength{\\tabcolsep}{12pt}\n\\renewcommand{\\arraystretch}{1.2}\n\n \\begin{table}[ht]\n \\centering\n \\begin{tabular}{|c|c|c|c|c|c|c|}\n \\hline\n Eq. Nr. 
& $\\Delta_{\\theta}$ &$a$ & $\\mathcal{B}_1$ & $\\mathcal{B}_2$ & $D_R$ \\\\\n \\hline\n \\eqref{model2} & $\\partial_{\\theta \\theta}$ & $0 $ & 1 & 0 & $> 0$ \\\\\n \\hline\n \\eqref{model3} & $\\partial_{\\theta \\theta}$ & $1$ &$(1-\\phi \\rho)$ & $3 \\phi f$ & $> 0$ \\\\\n \\hline \n \\eqref{lattice2D} & ${\\cal D}^2$ & $0$ & 1 & 0 & $=\\lambda>0$ \\\\\n \\hline \n \\eqref{ASEP_2D} & ${\\cal D}^2$ & $0$ & $(1-\\phi\\rho)$ & $\\phi f$ & $=\\lambda>0$\\\\\n \\hline\n \\eqref{model_hy} & $\\partial_{\\theta \\theta}$ & $0$& $(1-\\phi \\rho)$& $\\phi f$ & $ > 0$ \\\\\n \\hline \n \\eqref{model_wilson} & ${\\cal D}^2$ & $1$& $(1-\\phi \\rho)$ & $3 \\phi f$ & $> 0$\\\\\n \\hline \n \\eqref{hybrid3} & ${\\cal D}_G$ & $1$& $(1-\\phi \\rho)$ & $3 \\phi f$ & $> 0$\\\\\n \\hline\n \\eqref{eq:MF_cross_diff_sys} & & $0$ & $(1-\\phi \\rho)$ & $3\\phi f$ & 0\\\\\n \\hline\n \\eqref{model3_passive} & & $0$ & $(1-\\phi \\rho)$ & $\\phi f$ & 0 \\\\\n \\hline\n \\end{tabular}\n \\caption{Table recasting most models in the general form of \\eqref{eq:genform}.}\n \\label{tab:my_label}\n \\end{table}\n \n\\paragraph{Small and large speed}\n \nNatural scaling limits for the general system \\eqref{eq:genform} are the ones for small and large speed, i.e. $v_0 \\rightarrow 0$ and $v_0 \\rightarrow \\infty$, respectively. The first case is rather obvious, since at $v_0=0$ the model is purely diffusive, i.e. \n$$ \\partial_t f = \n D_T \\nabla \\cdot ( \\mathcal{B}_1(f,\\rho ) \\nabla f + \\mathcal{B}_2(f,\\rho) \\nabla \\rho) + D_R \\Delta_\\theta f.$$\nThe model can then be written as a gradient flow (respectively a generalised gradient structure in the case of discrete angles, see for example \\cite{M2011,PRST2020}) for an entropy of the form \n \\begin{equation}\n{\\cal E}(f) = \\int \\int f \\log f ~\\mathrm{d} {\\bf x} ~\\mathrm{d} \\theta + \nc \\int (1-\\rho) \\log (1 -\\rho) ~\\mathrm{d} {\\bf x},\n\\end{equation}\nwith $c \\in \\{0,1,3\\}$ corresponding to the coefficients of ${\\cal B}_2$.\nIn the case of small $v_0 > 0$ the gradient flow structure is broken, but we still expect the diffusive part to dominate. In particular we expect long-time convergence to a unique stationary solution. \n\nIn the case $v_0 \\rightarrow \\infty$ there are two relevant time scales. At a small time scale, i.e. $\\tau = \\frac{L}{v_0}$, where $L$ is a typical length scale, the evolution is governed by the first order equation\n$$\\partial_t f + \\nabla \\cdot ( f ((1-\\phi \\rho) {\\bf e}(\\theta) + a\\phi \\mathbf{p})) = 0 .$$\nThe divergence of the corresponding velocity field \n$u = (1-\\phi \\rho) {\\bf e}(\\theta) + a\\phi \\mathbf{p}$ is given by\n$$ \\nabla \\cdot u = - \\phi \\nabla \\rho \\cdot {\\bf e}(\\theta) + a \\phi \\nabla \\cdot \\mathbf{p}. $$\nIn particular in the case of $a=0$ we see that the question of expansion or compression of the velocity field is determined by the angle between $\\nabla \\rho$ and the unit vector ${\\bf e}(\\theta)$. Unless $\\nabla \\rho = 0$, the velocity field is compressible for a part of the directions and expansive for the opposite directions. A consequence to be expected is the appearance of patterns with almost piecewise constant densities. Inside the structures with constant densities ($\\nabla \\rho = 0$) the velocity field is incompressible, while the compression or expansion arises at the boundaries of such regions. This is rather described by a large time scale, i.e. the equation without time rescaling. 
Then one expects a slow interface motion, which is also observed in numerical simulations. In a simple case with only one direction this has been made precise in \\cite{burger2008asymptotic}.\n \n\\paragraph{Small and large rotational diffusion}\n\nThe limit of small rotational diffusion $D_R=0$ corresponds to a more standard nonlinear Fokker-Planck system with a given linear potential. \n$$ \\partial_t f + v_0 \\nabla \\cdot ( f ((1-\\phi \\rho) {\\bf e}(\\theta) + a\\phi \\mathbf{p})) = \n D_T \\nabla \\cdot ( \\mathcal{B}_1(f,\\rho ) \\nabla f + \\mathcal{B}_2(f,\\rho) \\nabla \\rho). $$\nModels of this kind have been investigated previously, see for example \\cite{Burger:2010gb, BHRW2016}. They tend to develop patterns such as jams or lanes, depending on the initial condition. This happens in particular for large speeds $v_0$.\n\nThe case of large rotational diffusion $D_R \\rightarrow \\infty$ will formally lead to $f$ being constant with respect to $\\theta$ at leading order. The corresponding equation at leading order can thus be obtained by averaging \\eqref{eq:genform} in $\\theta$. Since $f$ does not depend on $\\theta$ the polarisation is zero, that is\n$$ \\mathbf{p} = \\int_0^{2\\pi} f {\\bf e}(\\theta) ~ \\mathrm{d} \\theta = 0, $$\nand the transport term drops out in all the models. Indeed the nonlinear diffusion terms in any case average to linear diffusion with respect to $x$. Hence, the evolution of $f$ at leading order is governed by a linear diffusion equation. \n\n\n\n\\subsection{Wasserstein gradient flows}\n\nWe have seen in the previous subsection that several models for externally activated particle have an underlying gradient flow structure, which should ideally be maintained in the mean field limit. Adams et al. \\cite{ADPZ2011} showed in their seminal work that then the Wasserstein metric arises naturally in the mean-field limit (under suitable scaling assumptions). However, this limit is only well understood in a few cases (for example for point particles) and rigorous results are often missing. In case of excluded-volume effects, as discussed in subsections \\ref{sec:evi} and \\ref{sec:hybrid}, the only known rigorous continuum models are derived in 1D \\cite{Rost1984, BV2005, Gavish:2019tu}, with only approximate models for higher space dimension. We will see that these approximate mean-field limits often lack a full gradient flow structure, but are sufficiently close to it. In the following we give a brief overview on how Wasserstein gradient flows and energy dissipation provides useful a-priori estimates that can be used in existence proofs or when studying the long time behaviour of solutions. These techniques are particularly useful for systems with cross diffusion terms, for which standard existence results do not necessarily hold.\n\nWe will briefly outline the main ideas for functions $f = f({\\bf x}, \\theta, t)$ where $\\theta$ is either continuous or taking discrete values $\\theta_k$ with $k = 1, \\ldots m$. We say that a mean-field model has a Wasserstein gradient flow structure if it can be written as\n \\begin{align}\n \\label{e:w2sys}\n \\partial_t f({\\bf x}, \\theta, t) &=\n \\nabla \\cdot \\left( \\mathcal{M}(f) \\nabla_{{\\bf x}, \\theta}\n \\partial_f \\mathcal{E}\n \\right),\n \\end{align}\n where $\\mathcal{M}$ is the mobility operator and $w = \\partial_f \\mathcal{E}$ the variational derivatives of an entropy\/energy functional $\\mathcal{E}$ with respect to $f$. 
Note that for discrete $\\theta_k$, $k=1, \\ldots m$ the mobility $\\mathcal{M}$ is a positive definite matrix in $\\mathbb{R}^{m \\times m}$ and $\\partial_f \\mathcal{E}$ is replaced by the vector $\\partial_{f_k} \\mathcal{E}$. We have seen possible candidates for energies in the previous subsection - the usually comprise negative log entropy terms of the particle distribution and the total density (corresponding to linear and non-linear diffusion) as well as potentials (relating to the operators $\\mathcal{B}_1$ and $\\mathcal{B}_2$).\n \nIf the system has a Wasserstein gradient flow structure \\eqref{e:w2sys} then the entropy $\\mathcal{E}$ changes in time as\n\\begin{align}\\label{e:entropydiss}\n \\frac{d \\mathcal{E}}{dt} = \\int_{\\Omega} \\partial_t f \\cdot w \\, \\mathrm{d} {\\bf x} = -\\int_{\\Omega} \\bar{\\mathcal{M}}(w) \\lvert \\nabla w \\rvert^2 \\, \\mathrm{d} {\\bf x},\n \\end{align}\n where $\\bar{\\mathcal{M}}$ is the transformed mobility matrix. If $\\bar{\\mathcal{M}}$ is positive definite, then the energy is dissipated. In the next subsection we will define an entropy for the general model \\eqref{eq:genform} and show that the system is dissipative for several of the operator choices listed in Table \\ref{tab:my_label}. \n\nNote that these entropy dissipation arguments are mostly restricted to unbounded domains and bounded domains with no-flux or Dirichlet boundary conditions. It is possible to generalise them in the case of non-equilibrium boundary conditions, as such discussed in Section \\ref{s:bc}, but a general theory is not available yet. We will see in the next subsection that entropy dissipation may also hold for systems, which do not have a full gradient flow structure. \n \nSince system \\eqref{e:w2sys} is dissipative, we expect long time convergence to an equilibrium solution. The respective equilibrium solutions $f_\\infty$ to \\eqref{e:w2sys} then correspond to minimisers of the entropy $\\mathcal{E}$. To show exponential convergence towards equilibrium it is often helpful to study the evolution of the so-called relative entropy, that is\n\\begin{align*}\n \\mathcal{E}(f, f_{\\infty}) := \\mathcal{E}(f) - \\mathcal{E}(f_{\\infty}).\n\\end{align*}\nIn general one wishes the establish so-called entropy-entropy dissipation inequalities for the relative entropy\n\\begin{align*}\n \\frac{d\\mathcal{E}}{dt} \\leq -c \\mathcal{E},\n\\end{align*}\nwith $c>0$. Then Gronwall's lemma gives desired exponential convergence. This approach is also known as the Bakry-Emery method, see \\cite{BE1985}.\n\n We discussed the challenges in the rigorous derivation of mean-field models in the previous sections and how often only formal or approximate limiting results are available. These approximate mean-field models are often 'close' to a full gradient flow, meaning that they only differ by higher order terms (which were neglected in the approximation). This closeness motivated the definition of so-called asymptotic gradient flow, see \\cite{BBRW2017}. 
A dynamical system of the form \n \\begin{align}\\label{e:agf}\n \\partial_t z = \\mathcal{F}(z; \\epsilon)\n \\end{align}\n has a an asymptotic gradient flow structure of order $k$ if \n \\begin{align*}\n \\mathcal{F}(z; \\epsilon) + \\sum_{j=k+1}^{2k} \\epsilon^j \\mathcal{G}_j(z) = -\\mathcal{M}(z; \\epsilon) \\mathcal{E}'(z, \\epsilon),\n \\end{align*}\n for some parametric energy functional $\\mathcal{E}.$ For example, \\eqref{eq:MF_cross_diff_sys} exhibits a GF structure if the red and blue particles have the same size and diffusivity, but lacks it for differently sized particles (a variation of the model not discussed here). The closeness of AGF to GF can be used to study for example its stationary solutions and the behaviour of solution close to equilibrium, see \\cite{ABC2018, ARSW2020, BBRW2017}.\n \n\\subsection{Entropy dissipation}\\label{sec:entropydiss}\n\nNext we investigate the (approximate) dissipation of an appropriate energy for the general formulation \\eqref{eq:genform}. The considered energy functional is motivated by the entropies of the scaling limits considered before. In particular we consider\n \\begin{equation}\n{\\cal E}(f) = \\int \\int f \\log f + V({\\bf x},\\theta) f ~\\mathrm{d} {\\bf x}~\\mathrm{d} \\mu(\\theta) + \nc \\int (1-\\rho) \\log (1 -\\rho) ~\\mathrm{d} {\\bf x},\n\\end{equation}\nfor which the models can be formulated as gradient flows in the case $D_R=0$ with $c \\in \\{0,1,3\\}$ chosen appropriately. For simplicity we set $\\phi = 1$ as well as $D_T=1$ in the following in order to shorten the computations. As before, we interpret integrals in $\\theta$ with respect to the Lebesgue measure for continuum angles and with respect to the discrete measure (sum) in case of a finite number of directions. We recall that the potential $V$ is given by \n$$\nV({\\bf x}, \\theta) = - v_0 \\, {\\bf e}(\\theta) \\cdot {\\bf x} = - v_0\\, (\\cos \\theta_k x + \\sin \\theta_k y).\n$$\nIn the following we provide a formal computation assuming sufficient regularity of all solutions. 
We have\n\\begin{align*}\n \\frac{d \\mathcal{E}}{dt} &= \\int \\int \\partial_t f ( \\log f + V - c \\log(1-\\rho)) ~\\mathrm{d} {\\bf x}~\\mathrm{d} \\theta \\\\\n &= - \\int \\int \\nabla ( \\log f + V - c\\log(1-\\rho)) \n (- v_0 ( f ((1-\\rho) {\\bf e}(\\theta) + a \\mathbf{p} f)) + \n ( \\mathcal{B}_1(f,\\rho ) \\nabla f + \\mathcal{B}_2(f,\\rho) \\nabla \\rho)) \n ~\\mathrm{d} {\\bf x}~\\mathrm{d} \\theta\\\\\n & \\phantom{=} + D_R \\int \\int ( \\log f + V -c \\log(1-\\rho)) \\Delta_\\theta f ~\\mathrm{d} {\\bf x}~\\mathrm{d} \\theta\n\\end{align*}\n\nLet us first investigate the last term.\nSince $\\rho$ is independent of $\\theta$, we have using the properties of the generalised Laplacian $\\Delta_\\theta$ with periodic boundary conditions\n$$ \\int \\log(1-\\rho) \\Delta_\\theta f~d\\theta = \n\\log(1-\\rho) \\int \\Delta_\\theta f~d\\theta = 0.$$\nUsing the fact that $\\Delta_\\theta {\\bf e}(\\theta)$ is uniformly bounded in all cases, we find\n\\begin{align*}\n \\int \\int ( \\log f + V - c\\log(1-\\rho)) \\Delta_\\theta f ~d{\\bf x}~d\\theta &= \n - \\int \\int {\\cal F}_\\theta(f) - v_0 \\Delta_\\theta {\\bf e}(\\theta) {\\bf x} f ~d{\\bf x}~d\\theta, \\\\\n &\\leq C |v_0| \\int |x| f~d{\\bf x}~d\\theta = C |v_0| \\int |x| \\rho~d{\\bf x},\n\\end{align*}\nwhere ${\\cal F}_\\theta(f) \\geq 0$ is the Fisher information with respect to the generalised Laplacian $\\Delta_\\theta$ \n$$ {\\cal F}_\\theta(f) = \\left\\{ \\begin{array}{ll} \\frac{|\\partial_\\theta f|^2}f & \\text{for }\\partial_{\\theta \\theta} \\\\\n\\frac{|f_{k+1}-f_k|^2}{M(f_k,f_{k+1})}& \\text{for } {\\cal D}^2 \\\\\n\\sum_j \\frac{|f_{j}-f_k|^2}{M(f_j,f_k)} & \\text{for } {\\cal D}_G\n\\end{array} \\right. , $$\nwhere \n$$M(f,g) = \\frac{f-g}{\\log(f) - \\log(g)}$$\nis the logarithmic mean. \n\nNow we further investigate the first term for the models with $a =0$, where for appropriate choice of $c$ we can achieve\n\\begin{align*}\n\\int \\int \\nabla ( \\log f + V - c\\log(1-\\rho)) (\n v_0 f (1- \\rho) {\\bf e}(\\theta) - & ( \\mathcal{B}_1(f,\\rho ) \\nabla f + \\mathcal{B}_2(f,\\rho) \\nabla \\rho) )~d{\\bf x}~d\\theta \\\\\n &= - \\int \\int f(1-\\rho) |\\nabla ( \\log f + V - c\\log(1-\\rho)) |^2~d{\\bf x}~d\\theta \\leq 0.\n\\end{align*}\nOverall we finally find\n$$ \\frac{d \\mathcal{E}}{dt} \\leq C ~|v_0|~\\int |{\\bf x}| \\rho~d{\\bf x} \\leq C ~|v_0|~ \\sqrt{\\int |{\\bf x}|^2 \\rho~d{\\bf x}} . $$\nThus, the growth of the entropy in time is limited by the second moment. Note that for $a=1$ one can employ analogous reasoning to obtain the above negative term. However it is unclear how to control the additional term \n$\\int \\int \\nabla ( \\log f + V - c\\log(1-\\rho)) \n v_0 {\\bf p} f ~d{\\bf x}~d\\theta$.\\\\\n The obtained bounds provide useful a-priori estimates, which can be used in existence results and to study the long-time behaviour, see for example \\cite{BSW2012, J2015}.\n\n\n\n\\section{Boundary effects}\\label{s:bc}\n\n\n\nSo far, we have concentrated on the dynamics on domains with periodic boundary conditions, thus neglecting its effects entirely. In the following we will outline how boundaries as well as inflow and outflow conditions can be included in all models on the micro- as well as macroscopic level. We remark that the continuous random walks models are difficult to treat on bounded domains. 
Thus we only mention a few aspects and comment in more detail on the time-discrete situation which is easier to tackle, see remark \\ref{rem:time_discrete}.\n\n\n\\subsubsection*{Mass conserving boundary conditions} We first discuss conditions that conserve the total mass, i.e. the number of particles for discrete models, or the integral of the density for continuous ones, in a given domain.\nIn case of the coupled SDE model \\eqref{sde_model}, we are interested in conditions that ensure that particles remain inside the domain. Intuitively, particles need to be reflected whenever they hit the boundary. However, as we are dealing with a problem that is continuous in time, we have to ensure that the particle path remains continuous. In his seminal paper \\cite{Skorokhod1961_bounded} Skorokhod solved this problem by introducing an additional process that increases whenever the original process hits the boundary, see \\cite{Pilipenko2014_reflection} for a detailed discussion. \nFor the microscopic models on a lattice, such boundary conditions \ncorrespond to allowing only jumps into the domain whenever a particle is located at the boundary. \nFor the macroscopic models, mass conservation corresponds to no-flux boundary conditions that are implemented by setting the normal flux over the boundary to zero, i.e. \n\\begin{align}\\label{eq:noflux}\n{\\bf J} \\cdot {\\bf n} = 0 \\text{ a.e. in } \\Upsilon \\times (0,T),\n\\end{align}\nwhere, using the general form \\eqref{eq:genform}, the flux density is given as \n\\begin{align}\\label{eq:J_general}\n {\\bf J} =v_0 ( f ((1-\\phi \\rho) {\\bf e}(\\theta) + a\\phi \\mathbf{p})) - D_T ( \\mathcal{B}_1(\\rho ) \\nabla f + \\mathcal{B}_2(f) \\nabla \\rho).\n\\end{align}\n\n\n\\subsubsection*{Flux boundary conditions}\nApart from periodic or no-flux boundary conditions, there is also the possibility of boundary conditions that allow for the in- or outflow of particles (mass) via the boundary. Such effects are of particular interest in the context of this chapter, since they yield an active system even if the motion of the particles within the domain is purely passive (i.e. due to diffusion). \n\nFor the SDE model \\eqref{sde_model}, such boundary conditions correspond to partially reflecting or radiation conditions.\nIntuitively, once a particle reaches the boundary it is, with a certain probability, either removed or otherwise reflected, see \\cite{grebenkov2006partially} and \\cite[Section 4]{Lions1984}.\nFor the discrete models of section \\ref{sec:discrete} let us consider first the special case of a single species, which yields the setting of an asymmetric simple exclusion process (ASEP) with open boundary conditions, one of the paradigmatic models in non-equilibrium thermodynamics, \\cite{chou2011_nonequilibrium}. \nThe dynamics of such a process is well understood and can be solved explicitly, \\cite{Derrida1993:TASEP,derrida1998_exactly} (see also \\cite{Wood2009:TASEP_boundary}). \nWe denote by $\\alpha$ and $\\beta$ the rates at which particles enter (at the left boundary) or exit (at the right boundary) the lattice. 
Then, the key observation here is that in the steady state, the system can be in one of three distinct states, characterised by the values of the one-dimensional current and the density as follows:\n\\begin{itemize}\n \\item \\emph{low density} or \\emph{influx limited} meaning the density takes the value $\\alpha$ and the flux $\\alpha(1-\\alpha)$; occurs whenever $\\alpha < \\min\\{\\beta, 1\/2\\}$\n \\item \\emph{high density} or \\emph{outflux limited} meaning density $1-\\beta$ and flux $\\beta(1-\\beta)$; occurs whenever $\\beta < \\min\\{\\alpha, 1\/2\\}$\n \\item \\emph{maximal density} or \\emph{maximal current} if the density is $1\/2$ and the flux $1\/4$; occurs whenever $\\alpha > 1\/2$ and $\\beta > 1\/2$.\n \\end{itemize}\n \nA similar behaviour can be verified for the macroscopic, on-lattice model \\eqref{eq:MF_cross_diff} (or also \\eqref{model_lattice1D_number_densities} with $\\lambda =0$) for a single species on the domain $\\Omega = [0,L]$, which reduces to a single equation for the unknown density $r$, i.e.\n\\begin{align*}\n \\partial_t r + \\partial_x j = 0 \\text{ with } j = -D_T \\partial_x r + r(1-r) \\partial_x V.\n\\end{align*}\nWe supplement the equation with the flux boundary conditions \n\\begin{align}\\label{eq:inoutflux}\n -j \\cdot n = \\alpha (1- r) \\text{ at } x = 0 \\text{ and } j \\cdot n = \\beta r \\text{ at } x = L,\n\\end{align}\nsee \\cite{Burger2016}. Indeed, one can show that for positive $D_T> 0$, stationary solutions are close to one of the regimes and as $D_T \\to 0$, they attain the exact values for flux and density. Interestingly, for positive $D_T$ it is possible to enter the maximal current regime for values of $\\alpha$ and $\\beta$ strictly less than $1\/2$. The long time behaviour of these equations, using entropy--entropy-dissipation inequalities, has been studied in \\cite{Burger2016}.\n\nFor the kinetic models \\eqref{model3} and \\eqref{model_hy}, a similar condition can be formulated for the unknown quantity $f$. However, as $f$ depends not only on $\\bf{x}$ and $t$ but also on the angle $\\theta$, the coefficients may also depend on it. In the most general situation we obtain\n\\begin{align}\\label{eq:flux_bc}\n {\\bf J} \\cdot {\\bf n} =- \\alpha(\\theta, {\\bf n}) {(1- \\phi\\rho)} + \\beta(\\theta, {\\bf n})f,\n\\end{align}\nwith ${\\bf J}$ defined in \\eqref{eq:J_general}.\nHere, the choice of the functions $\\alpha$ and $\\beta$ is subject to modelling assumptions or properties of microscopic stochastic models for the in- and outflow. Typically one has a separation into inflow and outflow regions, which means that $\\alpha$ is supported on inward pointing directions, ${\\bf e}(\\theta) \\cdot {\\bf n} < 0$, while $\\beta$ is supported on outward pointing directions, ${\\bf e}(\\theta) \\cdot {\\bf n} > 0$.\n\n\n\n\\subsubsection*{Other boundary conditions}\nLet us also discuss other types of boundary conditions. Homogeneous Dirichlet boundary conditions can be applied to all types of models: for the SDE \\eqref{sde_model}, one has to remove a particle once it reaches the boundary. The same holds for the discrete random walk models. For the macroscopic models, one sets the trace at the boundary to zero. 
Finally, also mixed boundary conditions are possible, combining the effects described above on different parts of the boundary.\n\n\\begin{remark}[Boundary conditions for discrete time random walks] \\label{rem:time_discrete}\nWe briefly comment on the situation for time-discrete random walks, that is when the SDE \\eqref{sde_model} is replaced by the time-discrete system\n\\begin{subequations}\n\t\\label{sde_model_discrete}\n\\begin{align}\n\\label{sde_x_discrete}\n\t{\\bf X}_i(t+\\Delta t) &= {\\bf X}_i(t) + \\Delta t\\sqrt{2 D_T} \\zeta_i - \\Delta t \\nabla_{{\\bf x}_i} U + \\Delta t v_0 {\\bf e}(\\Theta_i),\\\\\n\t\\label{sde_angle_discrete}\n\t\\Theta_i(t + \\Delta t) &= \\Theta_i(t) + \\Delta t\\sqrt{2 D_R} \\bar \\zeta_i - \\Delta t\\partial_{\\theta_i} U,\n\\end{align}\n\\end{subequations}\nfor some time step size $\\Delta t > 0$ and where $\\zeta_i, \\, \\bar \\zeta_i$ are normally distributed random variables with zero mean and unit variance. To implement boundary conditions, one has to calculate the probability that ${\\bf{X}}_i(t + \\Delta t) \\notin \\Omega$ (considering also the case that the particles leaves the domain but moves back into it within the time interval $[t, t+\\Delta t]$), see \\cite{Andrews2004_time_discrete} for detailed calculations in the case of pure diffusion. If a particle is found to have left the domain, it can either be removed with probability one (corresponding to homogeneous Dirichlet boundary conditions) or less than one, called a partially reflective boundary condition (corresponding to Robin boundary conditions). In our setting, this probability can depend on the current angle of the particle, $\\Theta_i(t)$, allowing for additional modelling. \nIt is also possible to add a reservoir of particles at the boundary to implement flux boundary conditions in the spirit of \\eqref{eq:flux_bc} by prescribing a probability to enter the domain. In the case of excluded volume, the probability to enter will depend on the number of particles close to the entrance.\n\\end{remark}\n\n\n\n\n\n\n\n\n\n\n\\section{Active crowds in the life and social science}\\label{sec:applications}\n\n\\subsection{Pedestrian dynamics} \\label{sec:pedestrian}\n\nA prominent example of active and externally activated dynamics in the context of socio-economic applications is the motion of large pedestrian crowds. There is an extensive literature on mathematical modelling for pedestrians in the physics and the transportation community, which is beyond the scope of this paper. We will therefore review the relevant models in the context of active crowds only and refer for a more comprehensive overview to \\cite{CPT2014, MF2018}. \n\n\\paragraph{Microscopic models for pedestrian flows}\nMicroscopic off-lattice models are the most popular approach in the engineering and transportation research literature. Most software packages and simulations are based on the so called social force model by Helbing \\cite{Helbing1995:social,Helbing2000:social}. The social force model is a second order SDE model, which does not take the form of active models considered here. However, it is easy to formulate models for pedestrians in the context of active particles satisfying \\eqref{sde_model}. For example, assume that all pedestrians move with the same constant speed in a desired direction $\\Theta_d$ avoiding collisions with others. 
Then their dynamics can be described by the following system:\n\\begin{subequations} \n\\label{e:activepedestrians}\n\\begin{align}\n \\mathrm{d} {\\bf X}_i &= -\\nabla_{{\\bf X}_i} U \\mathrm{d} t + v_0 \\frac{e(\\Theta_i)- \\Theta_d}{\\tau}dt + \\sqrt{2 D_T}\\, \\mathrm{d} {\\bf W}_i\\\\\n \\mathrm{d} \\Theta_i &= -\\partial_{\\Theta_i} U \\mathrm{d} t + \\sqrt{2D_R}\\,\\mathrm{d}{\\bf W}_i.\n\\end{align}\n\\end{subequations}\nThe potential $U$ takes the form \\eqref{e:U}, where the pairwise interactions $u$ should be related to the likelihood of a collision. One could for example consider\n\\begin{align*}\n u(\\lvert {\\bf X}_i - {\\bf X}_j \\rvert\/\\ell, \\Theta_i - \\Theta_j) = C \\frac{\\Theta_i - \\Theta_j}{\\lvert {\\bf X}_i - {\\bf X}_j\\rvert},\n\\end{align*}\nwhere $C \\in \\mathbb{R}^+$ and $\\ell$ relates to the personal comfort zone. Another possibility corresponds to a Lennard-Jones type potential to model short range repulsion and long range attraction. \nAnother popular microscopic approach is given by so-called cellular automata, see \\cite{Kirchner2002:Cellular}, which are on-lattice models related to the discrete active and externally activated models discussed before, cf. Section \\ref{sec:discrete_passive}. In cellular automata a certain number of pedestrians can occupy discrete lattice sites, and individuals move to available (not fully occupied) neighbouring sites at given transition rates. These rates may depend on a given external potential, as discussed in the previous sections, which relates to the desired direction $\\Theta_d$. Cellular automata often serve as the basis for the macroscopic pedestrian models, which will be discussed in the next paragraph, see for example \\cite{BMP2011, Burger2016:sidestepping}. \n\n\\paragraph{Macroscopic models for pedestrian flows}\nMean field models derived from microscopic off-lattice approaches have been used successfully to analyse the formation of directional lanes or aggregates in bi-directional pedestrian flows. This segregation behaviour has been observed in many experimental and real-life situations. Several models, which fall into the category of externally activated particles introduced in section \\ref{sec:discrete_passive}, were proposed and investigated in this context. These models take the form \\eqref{eq:MF_cross_diff}, in which the densities $r$ and $b$ relate to different directions of motion. For example in the case of bi-directional flows in a straight corridor 'red particles' correspond to individuals moving to the right, while blue ones move to the left. We will see in section \\ref{sec:numerics} that we can observe temporal as well as stationary segregated states. Depending on the initial and inflow conditions directional lanes or jams occur. The gradient flow structure can then be used to investigate the stability of stationary states using for example the respective entropy functionals. 
Due to the segregated structure of stationary solutions, one can also use linear stability analysis around constant steady states to understand for example the formation of lanes, see \\cite{Pietschmann2011:lane}.\n\nMore pronounced segregated states and lanes can be observed when allowing for side-stepping. In the respective microscopic on lattice models, individuals step aside when approached by a member of the other species. The respective formally derived mean-field model has a perturbed gradient flow structure, which can be used to show existence of solutions, see \\cite{Burger2016:sidestepping}. \nMore recently, a model containing both an active and a passive species has been introduced, see \\cite{Cirillo2020:ActivePassive}.\n\n\n\\subsection{Transport in biological systems}\\label{sec:biological_transport}\nAnother example where active particles play an important role is transport processes in biological systems. We will discuss two important types of such processes in the following: chemotaxis and transport in neurons. \n\n\\subsubsection*{Chemotaxis}\n\nWe consider bacteria in a given domain that aim to move along the gradient of a given chemical substance, called chemo-attractant and modelled by a function $c:\\Omega \\to \\mathbb{R}_+$. Due to their size, bacteria cannot sense a gradient by, say, comparing the value of $c$ at their head with that at their tail. \nThus, they use a different mechanism based on comparing values of $c$ at different time instances and different points in space, called run-and-tumble. In a first step, they perform a directed motion into a fixed direction (run), then rotate randomly (tumble). These two steps are repeated; however, the probability of tumbling depends on $c$ as follows: If the value of $c$ is decreasing in time, bacteria tumble more frequently as they are not moving up the gradient. If the value of $c$ increases, they turn less often. \nRoughly speaking, this mechanism reduces the amount of diffusion depending on the gradient of $c$. Here, we consider a slightly different idea that fits into the hybrid random walk model introduced in \\eqref{hybrid2}, assuming $D_T$ to be small (run) and that the rate of change of the angle depends on $c$. To this end $\\lambda$ is taken different for each angle (thus denoted by $\\lambda_k$) and is assumed to depend on the difference of the external signal $c$ at the current and past positions only. Denoting by $t_n$, $n=1,2, \\ldots$ the times at which the angle changes, at time $t_n$ we have $\\lambda_k = \\lambda_k(c(\\mathbf{X}_i(t_n))-c(\\mathbf{X}_i(t_{n-1})))$.\nAdditionally, we introduce a fixed base-line turning frequency $\\bar\\lambda$, and consider\n$$\n\\lambda_k = \\bar \\lambda + (c(\\mathbf{X}_i(t_{n-1})) - c(\\mathbf{X}_i(t_n))).\n$$ \nNow going from discrete to time-continuous jumps, i.e. $t_{n} - t_{n-1} \\to 0$, and appropriate rescaling, we obtain via the chain rule\n$$\n\\lambda_k = \\bar \\lambda - \\dot{\\mathbf{X}}_i\\cdot \\nabla c(\\mathbf{X}_i).\n$$\nHowever, due to the stochastic nature of the equation governing the evolution of $\\mathbf{X}_i$, its time derivative is not defined. Thus, as a modelling choice, we replace this velocity vector by $v_0 {\\bf e}(\\theta_k)$, i.e. the direction of the active motion of the respective particle. This is also motivated by the fact that for $D_T=0$ and $U=0$ in \\eqref{hybrid_x}, this is exact. 
We obtain\n$$\n\\lambda_k = \\bar \\lambda - v_0{\\bf e}(\\theta_k)\\cdot \\nabla c(\\mathbf{X}_i).\n$$\nIn the particular case of one spatial dimension with only two possible angles (denoted by $+$ and $-$) and for $v_0=1$ this reduces to\n$$\n\\lambda_\\pm = \\bar\\lambda \\mp \\partial_x c,\n$$\nwhich is exactly the model analysed in \\cite{Ralph:2020cj}. There, it was also shown that using an appropriate parabolic scaling, one can obtain a chemotaxis-like model with linear transport but non-linear diffusion in the diffusive limit.\n\n\\subsubsection*{Transport in neurons}\nAnother interesting example is transport processes within cells, and we focus on the example of vesicles in neurons. Vesicles are small bubbles of cell membrane that are produced in the cell body (soma) and are then transported along extensions of the cell called axons. The transport itself is carried out by motor proteins that move along microtubules and are allowed to change their direction of motion. \nThis situation can be modelled using the discrete random walks from section \\ref{sec:discrete} by considering the one-dimensional case which, in the macroscopic limit, yields equations \\eqref{model_lattice1D}. \nSince we are now dealing with two species $f_-$ and $f_+$, denoting left- and right-moving complexes, we also have to adapt our boundary conditions as follows: Denoting by $j_+$ and $j_-$ the respective fluxes,\n\\begin{align*}\n -j_+ &= \\alpha_+ (1-\\phi\\rho), \\quad j_- = \\beta_- f_- \\qquad\\text{ at } x = 0,\\\\\n -j_- &= \\alpha_- (1-\\phi\\rho), \\quad j_+ = \\beta_+ f_+ \\qquad\\text{ at } x = 1.\n\\end{align*}\nSystem \\eqref{model_lattice1D} has, to the best of our knowledge, not yet been considered with these boundary conditions. From an application point of view, it is relevant to study whether these models are able to reproduce the almost uniform distribution of motor complexes observed in experiments, see \\cite{Bressloff2015_democracy, Bressloff2016_exclusion} for an analysis.\n\nMore recently, the influence of transport in developing neurites has been studied in \\cite{humpert_role_2019} with an emphasis on the mechanism that decides which of the growing neurites becomes an axon. To model this situation, the concentration of vesicles at soma and growth cones is modelled separately by ordinary differential equations which are connected to instances of \\eqref{eq:MF_cross_diff} via flux boundary conditions. \n\n\n\n\n\n\\section{Numerical simulations}\\label{sec:numerics}\n\n\nIn the following, we present numerical examples in one spatial dimension comparing a subset of models presented above. All simulations are based on a finite element discretisation in space (using P$1$ elements). The time discretisation is based on the following implicit-explicit (IMEX) scheme \n\\begin{equation*}\n \\frac{f^{n+1} - f^n}{\\tau} + v_0 \\nabla \\cdot ( f^n ((1-\\phi \\rho^n) {\\bf e}(\\theta) + a\\phi \\mathbf{p}^n f^n)) = \n D_T \\nabla \\cdot ( \\mathcal{B}_1(\\rho^n ) \\nabla f^{n+1} + \\mathcal{B}_2(f^n) \\nabla \\rho^{n+1}) + D_R \\Delta_\\theta f^n,\n\\end{equation*}\nin which the superscript index $n$ refers to the $n$th time step, that is $t^n = n \\tau$, $\\tau>0.$\nHere transport and rotational diffusion are taken explicitly, while in the diffusive part terms of second order are treated implicitly. Thus, in every time step, a linear system has to be solved. All schemes were implemented using the finite element library NgSolve, see \\cite{Sch\u00f6berl1997}. 
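To make the time discretisation above more concrete, the following is a minimal sketch of one IMEX step for the single-species reduction used in the discussion of flux boundary conditions, i.e. $\\partial_t r + \\partial_x j = 0$ with $j = -D_T \\partial_x r + r(1-r)\\partial_x V$ and the in- and outflux conditions \\eqref{eq:inoutflux}. It is written in Python with a simple finite-difference grid instead of the P$1$ finite element discretisation in NgSolve, and the grid handling, the homogeneous Neumann closure of the implicit diffusion part and all parameter values are illustrative assumptions rather than the actual implementation used for the figures below.
\\begin{verbatim}
import numpy as np

# One-species IMEX sketch: explicit transport and boundary fluxes,
# implicit diffusion (one linear solve per time step).
L, N = 1.0, 150                      # unit interval, number of cells
dx, tau = L / N, 0.01                # mesh size and time step
D_T, alpha, beta = 0.1, 0.6, 0.8     # diffusion and in-/outflux rates (placeholders)
V = np.zeros(N)                      # external potential, e.g. 5*(x-0.5)**2
r = 0.5 * np.ones(N)                 # initial density

# implicit operator (I - tau*D_T*Laplacian) with a no-diffusive-flux closure
Lap = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
       + np.diag(np.ones(N - 1), -1)) / dx**2
Lap[0, 0] = Lap[-1, -1] = -1.0 / dx**2
A = np.eye(N) - tau * D_T * Lap

def imex_step(r):
    dVdx = np.gradient(V, dx)
    j = np.zeros(N + 1)              # advective flux r(1-r)*V_x on cell faces
    for i in range(1, N):
        v = 0.5 * (dVdx[i - 1] + dVdx[i])
        up = r[i - 1] if v > 0 else r[i]      # upwinding by the drift sign
        j[i] = v * up * (1.0 - up)
    j[0] = alpha * (1.0 - r[0])      # influx at x = 0
    j[N] = beta * r[-1]              # outflux at x = L
    rhs = r - tau * np.diff(j) / dx  # explicit transport part
    return np.linalg.solve(A, rhs)   # implicit diffusion solve

for n in range(200):                 # evolve up to t = 2
    r = imex_step(r)
\\end{verbatim}
As in the scheme above, only the second-order (diffusive) part is treated implicitly, so every step amounts to a single linear solve.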
\n\nWe will illustrate the behaviour of solutions for models \\eqref{model_lattice1D_number_densities}, \\eqref{e:aa_cross_sys}, \\eqref{eq:MF_cross_diff}, in case of in- and outflux \\eqref{eq:inoutflux}, no-flux \\eqref{eq:noflux} or periodic boundary conditions in case of two species, referred to as red $r$ and blue $b$ particles. We use subscripts $r$ and $b$ when referring to their respective in- and outflow rates as well as diffusion coefficients. Note that while for the models \\eqref{ASEP_2D}, \\eqref{eq:MF_cross_diff}, the one-dimensional setting is meaningful, for model \\eqref{e:aa_cross_sys}, the simulations are to be understood as two-dimensional but with a potential that is constant in the second dimension.\nFor all simulations, we discretised the unit interval into $150$ elements and chose time steps of size $\\tau = 0.01$.\n\\subsubsection*{Flux boundary conditions} Figures \\ref{fig:Regime2} and \\ref{fig:Regime3} show density profiles for the respective models at time $t=0.5,\\, 2,\\, 3, \\, 30$. In figure \\ref{fig:Regime2}, we chose rather low rates (in particular below $1\/2$) and with $\\alpha_r > \\beta_r$ as well as $\\alpha_b < \\beta_b$, which resulted in species $r$ being in an outflux-limited and species $b$ in an influx-limited phase. We observe that for these low rates, all models are quite close to one another, yet with different shapes of the boundary layers. Model \\eqref{model_lattice1D_number_densities}, having a linear diffusion term, shows a different slope than \\eqref{eq:MF_cross_diff}, where cross-diffusion seems to play a role, with \\eqref{e:aa_cross_sys} being in between.\n\nIn figure \\ref{fig:Regime3} we chose rates above $1\/2$ to obtain the maximal-current phase. There, interestingly, it turns out that the dynamics of model \\eqref{e:aa_cross_sys} shows a completely different behaviour. This constitutes an interesting starting point for further analytical considerations on the phase behaviour. Figure \\ref{fig:mass} displays the evolution of the total mass of the respective species for different in- and outflow rates. We observe that the reaction-diffusion system \\eqref{model_lattice1D_number_densities} and the lattice based cross diffusion system \\eqref{eq:MF_cross_diff} show a similar quantitative behaviour in several in- and outflow regimes, while the cross-diffusion system obtained via asymptotic expansion \\eqref{e:aa_cross_sys} behaves only qualitatively similarly.\n\\subsubsection*{Periodic boundary conditions}\nFor periodic boundary conditions, noting that the velocity is constant, thus periodic, we expect constant stationary solutions whose value is determined by the initial mass. This is indeed observed in figure \\ref{fig:periodic}. However, for earlier times, their dynamics differs substantially; in particular for \\eqref{eq:MF_cross_diff} the influence of cross-diffusion (\"jams\") is most pronounced.\n\n\\subsubsection*{Confining potential}\nFinally, in figure \\ref{fig:confined}, we consider the situation of no-flux conditions together with a confining potential $V(x) = (x-\\frac{1}{2})^2$. Here we observe very similar behaviour for all models, probably due to the fact that the transport term dominates the dynamics. 
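Relating the rate choices above to the phases of the single-species ASEP recalled in the flux boundary condition discussion can be automated; the short sketch below simply evaluates those phase rules. It is only indicative: it treats each species separately, uses the sharp phase boundaries of the $D_T \\to 0$ limit, ignores the cross-diffusion coupling between red and blue particles, and does not treat the coexistence line $\\alpha = \\beta < 1\/2$.
\\begin{verbatim}
def asep_phase(alpha, beta):
    # phase rules of the open single-species ASEP: (label, bulk density, current)
    if alpha < min(beta, 0.5):
        return "low density / influx limited", alpha, alpha * (1 - alpha)
    if beta < min(alpha, 0.5):
        return "high density / outflux limited", 1 - beta, beta * (1 - beta)
    return "maximal current", 0.5, 0.25    # alpha, beta > 1/2 (ties not handled)

# rate pairs of the flux-boundary-condition experiments
for label, a, b in [("alpha=0.02, beta=0.01", 0.02, 0.01),
                    ("alpha=0.01, beta=0.02", 0.01, 0.02),
                    ("alpha=0.6,  beta=0.8 ", 0.60, 0.80),
                    ("alpha=0.7,  beta=0.9 ", 0.70, 0.90)]:
    phase, density, current = asep_phase(a, b)
    print(f"{label}: {phase}, bulk density {density:.2f}, current {current:.3f}")
\\end{verbatim}
In the low-rate case this gives an outflux-limited phase for $\\alpha > \\beta$ and an influx-limited phase for $\\alpha < \\beta$, and the maximal-current phase for rates above $1\/2$, in line with the discussion above.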
\n\n\n\n\n\\begin{figure}\n \\centering\n \\subfigure[$t=0.5$]{\n \\includegraphics[width=0.23\\textwidth]{{figures\/Regime2DensitiesAtTime_t=0.50}.pdf}\n }\n \\subfigure[$t=2$]{\n \\includegraphics[width=0.23\\textwidth]{{figures\/Regime2DensitiesAtTime_t=2.00}.pdf}\n }\n \\subfigure[$t=3$]{\n \\includegraphics[width=0.23\\textwidth]{{figures\/Regime2DensitiesAtTime_t=3.00}.pdf}\n }\n \\subfigure[$t=200$]{\n \\includegraphics[width=0.23\\textwidth]{{figures\/Regime2DensitiesAtTime_t=200.00}.pdf}\n }\n \\caption{Flux boundary conditions with: $\\lambda = 0.01$, $D_r = 0.1,\\, D_b = 0.1,\\, \\alpha_r = 0.02, \\, \\beta_r = 0.01, \\, \\alpha_b = 0.01, \\, \\beta_b = 0.02$ which yields the influx-limited phase for species $r$ and outflux-limited for $b$.}\n \\label{fig:Regime2}\n\\end{figure}\n\n\n\\begin{figure}\n \\centering\n \\subfigure[$t=0.5$]{\n \\includegraphics[width=0.23\\textwidth]{{figures\/Regime3DensitiesAtTime_t=0.50}.pdf}\n }\n \\subfigure[$t=2$]{\n \\includegraphics[width=0.23\\textwidth]{{figures\/Regime3DensitiesAtTime_t=2.00}.pdf}\n }\n \\subfigure[$t=3$]{\n \\includegraphics[width=0.23\\textwidth]{{figures\/Regime3DensitiesAtTime_t=3.00}.pdf}\n }\n \\subfigure[$t=200$]{\n \\includegraphics[width=0.23\\textwidth]{{figures\/Regime3DensitiesAtTime_t=200.00}.pdf}\n }\n \\caption{Flux boundary conditions with: $\\lambda = 0.01$, $D_r = 0.1,\\, D_b = 0.1,\\, \\alpha_r = 0.6, \\, \\beta_r = 0.8, \\, \\alpha_b = 0.7, \\, \\beta_b = 0.9$ which yields the maximal current phase.}\n \\label{fig:Regime3}\n\\end{figure}\n\n\n\n\n\n\n\\begin{figure}\n \\centering\n \n \n \n \\subfigure[$\\alpha_r=0.02$, $\\beta_r=0.01$, $\\alpha_b=0.01$, $\\beta_b=0.02$]{\n \\includegraphics[width=0.275\\textwidth]{{figures\/Regime2Mass}.pdf}\n }\n \\hspace{0.2cm}\n \\subfigure[$\\alpha_r=0.6$, $\\beta_r=0.8$, $\\alpha_b=0.7$, $\\beta_b=0.9$]{\n \\includegraphics[width=0.275\\textwidth]{{figures\/Regime3Mass}.pdf}\n }\n \\hspace{0.2cm}\n \\subfigure[$\\alpha_r=0.1$, $\\beta_r=0.2$, $\\alpha_b=0.2$,\\quad $\\beta_b=0.4$]{\n \\includegraphics[width=0.275\\textwidth]{{figures\/Regime4Mass}.pdf}\n }\n \\caption{Evolution of the total mass for different flux boundary conditions and with $D_r=D_b = 0.1$ and $\\lambda = 0.01$ in all cases.}\n \\label{fig:mass}\n\\end{figure}\n\n\n\n\n\\begin{figure}\n \\centering\n \\subfigure[$t=0$]{\n \\includegraphics[width=0.23\\textwidth]{{figures\/PeriodicDensitiesAtTime_t=0.00}.pdf}\n }\n \\subfigure[$t=0.4$]{\n \\includegraphics[width=0.23\\textwidth]{{figures\/PeriodicDensitiesAtTime_t=0.50}.pdf}\n }\n \\subfigure[$t=1$]{\n \\includegraphics[width=0.23\\textwidth]{{figures\/PeriodicDensitiesAtTime_t=1.00}.pdf}\n }\n \\subfigure[$t=3.9$]{\n \\includegraphics[width=0.23\\textwidth]{{figures\/PeriodicDensitiesAtTime_t=3.90}.pdf}\n }\n \\caption{Periodic boundary conditions with $D_r=D_b = 0.01$ and $\\lambda = 0.01$. 
All models converge to a constant stationary solution.}\n \\label{fig:periodic}\n\\end{figure}\n\n\\begin{figure}\n \\centering\n \\subfigure[$t=0$]{\n \\includegraphics[width=0.23\\textwidth]{{figures\/Regime6DensitiesAtTime_t=0.00}.pdf}\n }\n \\subfigure[$t=0.5$]{\n \\includegraphics[width=0.23\\textwidth]{{figures\/Regime6DensitiesAtTime_t=0.50}.pdf}\n }\n \\subfigure[$t=2$]{\n \\includegraphics[width=0.23\\textwidth]{{figures\/Regime6DensitiesAtTime_t=2.00}.pdf}\n }\n \\subfigure[$t=10$]{\n \\includegraphics[width=0.23\\textwidth]{{figures\/Regime6DensitiesAtTime_t=10.00}.pdf}\n }\n \\caption{No flux boundary conditions with $D_r = D_b = 0.1$, $\\lambda = 0.01$ and a confining potential $V_r = V_b = 5(x-0.5)^2$. }\n \\label{fig:confined}\n\\end{figure}\n\n\n\n\n\\section*{Acknowledgements}\nThe work of MTW was partly supported by the Austrian Academy of Sciences New Frontier's grant NFG-0001. JFP thanks the DAAD for support via the PPP project 57447206. MBu acknowledges partial financial support by European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No. 777826 (NoMADS) and the German Science Foundation (DFG) through CRC TR 154 \"Mathematical Modelling, Simulation and Optimization using the Example of Gas Networks\", Subproject C06. Maria Bruna was partially supported by a Royal Society University Research Fellowship (grant number URF\/R1\/180040) and a Humboldt Research Fellowship from the Alexander von Humboldt Foundation.\n\n\\printbibliography\n\\end{document}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
{"text":"\\section{Introduction}\n\nModern High Energy Physics (HEP) experiments are designed to collect large amounts of data at very high rates. In addition, weak signatures of new physics must be searched for in complex background conditions. In order to meet these requirements, new computing paradigms must be adopted. A novel approach is based on the use of highly parallel computing devices, like Graphics Processing Units (GPU), which deliver high-performance solutions that can be used in HEP. In particular, a massively parallel computation based on General Purpose Graphics Processing Units (GPGPU)~\\cite{nvidia} could dramatically speed up the algorithms for charged particle tracking and fitting, allowing their use for fast decision taking and triggering. \nIn this paper we describe a track recognition algorithm based on the Hough Transform~\\cite{hough:paper,hough:hep1,hough:hep2} and its implementation on Graphics Processing Units (GPU). \n\n\n\n\n\\section{Tracking with the Hough Transform}\n\nThe Hough Transform (HT) is a pattern recognition technique for feature extraction in image processing, and in our case we will use an HT-based algorithm to extract the track parameters from the hits left by charged particles in the detector. A preliminary result on this study has already been presented in~\\cite{tipp}. Our model is based on a cylindrical multi-layer silicon detector installed around the interaction point of a particle collider, with the detector axis on the beam-line. \nThe algorithm works in two serial steps. In the first part, for each hit having coordinates $(x_H,y_H,z_H)$ the algorithm computes all the circles in the $x-y$ transverse plane passing through that hit and the interaction point, where the circle equation is $x^2+y^2-2Ax-2By=0$, and $A$ and $B$ are the two parameters corresponding to the coordinates of the circle centre. 
The circle detection is performed also taking into account the longitudinal ($\\theta$) and polar ($\\phi$) angles. For all the $\\theta$, $\\phi$, $A$, $B$ satisfying the circle equation associated with a given hit, the corresponding $M_H(A,B,\\theta,\\phi)$ Hough Matrix (or Vote Matrix) elements are incremented by one. After processing all the hits, all the $M_H$ elements above a given threshold would correspond to real tracks. Thus, the second step is a local maxima search among the $M_H$ elements.\n\n\nIn our test, we used a dataset of 100 simulated events ($pp$ collisions at LHC energy, Minimum Bias sample with tracks having transverse momentum $p_T>500$ MeV), each event containing up to 5000 particle hits on a cylindrical 12-layer silicon detector centred on the nominal collision point. The four hyper-dimensions of the Hough space have been binned in $4 \\times 16 \\times 1024 \\times 1024$ along the corresponding $A,B,\\theta,\\phi$ parameters. \n\n\nThe algorithm performance compared to a $\\chi^2$ fit method is shown in Fig.~\\ref{hough:perf}: the $\\rho=\\sqrt{A^2+B^2}$ and $\\varphi=\\tan^{-1}(B\/A)$ distributions are shown together with the corresponding resolutions.\n\n\\begin{figure}[h]\n\\centerline{\\includegraphics[width=.8\\textwidth]{rinaldi_lorenzo_fig1.pdf}}\n\\caption{Hough Transform algorithm compared to $\\chi^2$ fit. (a) $\\rho$ distribution; (b) $\\varphi$ distribution; (c) $\\rho$ resolution; (d) $\\varphi$ resolution. }\\label{hough:perf}\n\\end{figure}\n\n\n\\section{SINGLE-GPU implementation}\n\n\nThe HT tracking algorithm has been implemented on GPGPU by splitting the code into two kernels, one for the Hough Matrix filling and one for the local maxima search on it. The implementation has been performed both in CUDA~\\cite{nvidia} and OpenCL~\\cite{opencl}. The GPGPU implementation schema is shown in Fig.~\\ref{GPGPU:schema}. \n\n\n\n\nConcerning the CUDA implementation, for the $M_H$ filling kernel, we set a 1-D grid over all the hits, the grid size being equal to the number of hits of the event. Having fixed the ($\\theta,\\phi$) values, a thread-block is assigned to the $A$ values, and for each $A$, the corresponding $B$ is evaluated. The $M_H(A,B,\\theta,\\phi)$ matrix element is then incremented by one with an {\\tt atomicAdd} operation. The $M_H$ initialisation is done once at the first iteration with {\\tt cudaMallocHost} (pinned memory) and initialised on device with {\\tt cudaMemset}. \nIn the second kernel, the local maxima search is carried out using a 2-D grid over the $\\theta,\\phi$ parameters, the grid dimension being the product of the numbers of bins of all the parameters divided by the maximum number of threads per block, ($N_{\\phi} \\times N_{\\theta} \\times N_A \\times N_B$)\/{\\tt maxThreadsPerBlock}, and 2-D threadblocks, with {\\tt dimXBlock}=$N_A$ and {\\tt dimYBlock=MaxThreadPerBlock}\/$N_A$. Each thread compares one $M_H(A,B,\\theta,\\phi)$ element to its neighbours and, if it is the largest, it is stored in the GPU shared memory and eventually transferred back. 
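To fix the indexing behind the two kernels just described, the following is a hedged CPU reference in Python\/numpy, reduced to a single $(\\theta,\\phi)$ bin so that only the $(A,B)$ plane is voted on. The axis ranges, the nearest-bin assignment via searchsorted, the toy hit sample and the threshold are placeholders and are not taken from the actual CUDA or OpenCL code.
\\begin{verbatim}
import numpy as np

nA, nB = 1024, 1024
A_axis = np.linspace(-50.0, 50.0, nA)        # candidate circle-centre coordinates
B_axis = np.linspace(-50.0, 50.0, nB)
votes = np.zeros((nA, nB), dtype=np.int32)   # one (theta, phi) slice of M_H

def fill(hits):
    # step 1: each hit (x_H, y_H) satisfies x_H^2 + y_H^2 = 2 A x_H + 2 B y_H,
    # a line in the (A, B) plane; vote once per A bin for the matching B bin
    for x_H, y_H in hits:
        if abs(y_H) < 1e-9:
            continue
        B = (x_H**2 + y_H**2 - 2.0 * A_axis * x_H) / (2.0 * y_H)
        jB = np.searchsorted(B_axis, B)
        ok = (jB > 0) & (jB < nB)
        np.add.at(votes, (np.arange(nA)[ok], jB[ok]), 1)

def local_maxima(threshold):
    # step 2: keep cells above threshold that dominate their 8 neighbours
    padded = np.pad(votes, 1)
    neigh = np.max([np.roll(np.roll(padded, i, 0), j, 1)
                    for i in (-1, 0, 1) for j in (-1, 0, 1)
                    if (i, j) != (0, 0)], axis=0)[1:-1, 1:-1]
    iA, jB = np.where((votes >= threshold) & (votes > neigh))
    return list(zip(A_axis[iA], B_axis[jB]))

rng = np.random.default_rng(0)
fill(10.0 * rng.normal(size=(200, 2)))       # placeholder hits
candidates = local_maxima(threshold=3)
\\end{verbatim}
The first routine mirrors the vote-filling kernel (one increment per hit and per $A$ bin), the second the neighbourhood comparison performed by each thread of the local-maxima kernel.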
With such big arrays the actual challenge lies in optimizing array allocation and access and indeed for\nthis kernel a significant speed up has been achieved by tuning matrix access in a coalesced fashion, thus allowing to gain a crucial computational speed-up.\n\\begin{figure}[hb]\n\\centerline{\\includegraphics[width=.6\\textwidth]{rinaldi_lorenzo_fig2.pdf}}\n\\caption{GPGPU implementation schema of the two Hough Transform algorithm kernels.}\\label{GPGPU:schema}\n\\end{figure}\nThe OpenCL implementation has been done using a similar structure used for CUDA. Since in OpenCL there is no direct pinning memory, a device buffer is mapped to an already existing $memallocated$ host buffer ({\\tt clEnqueueMapBuffer}) and dedicated kernels are used for matrices initialisation in the device memory. The memory host-to-device buffer allocation is performed concurrently and asynchronously, saving overall transferring time. \n\n\n\n\n\\subsection{SINGLE-GPU results}\n\n\n\\begin{footnotesize}\n\\begin{table}[h]\n\\centerline{\\begin{tabular}{ | l | c | c | c |}\n\\hline\nDevice & NVIDIA & NVIDIA & NVIDIA \\\\\nspecification & GeForce GTX770 & Tesla K20m & Tesla K40m \\\\\n\\hline\nPerformance (Gflops) & 3213 & 3542 & 4291 \\\\\nMem. Bandwidth (GB\/s) & 224.2 & 208 & 288 \\\\\nBus Connection & PCIe3 & PCIe3 & PCIe3 \\\\\nMem. Size (MB) & 2048 & 5120 & 12228 \\\\\nNumber of Cores & 1536 & 2496 & 2880 \\\\\nClock Speed (MHz) & 1046 & 706 & 1502 \\\\\n\\hline\n\\end{tabular}}\n\\caption{Computing resources setup.}\n\\label{tab:gpus}\n\\end{table}\n\\end{footnotesize}\n\nThe test has been performed using the NVIDIA~\\cite{nvidia} GPU boards listed in table~\\ref{tab:gpus}. The GTX770 board is mounted locally on a desktop PC, the Tesla K20 and K40 are installed in the INFN-CNAF HPC cluster. \n\n\n\n\\begin{figure}[h]\n\\centerline{\\includegraphics[width=1.05\\textwidth]{rinaldi_lorenzo_fig3.pdf}}\n\\caption{Execution timing as a function of the number of analysed hits. (a) Total execution time for all devices; (b) Total execution time for GPU devices only; (c) $M_H$ filling time for all devices; (d) $M_H$ filling timing for GPU devices only; (e) local maxima search timing for all devices; (f) local maxima search timing for GPU devices only; (g) device-to-host transfer time (GPUS) and I\/O time (CPU). }\\label{allgpu}\n\\end{figure}\n\n\n\n\nThe measurement of the execution time of all the algorithm components has been carried out as a function of the number of hits to be processed, and averaging the results over 100 independent runs. The result of the test is summarised in Fig.~\\ref{allgpu}.\nThe total execution time comparison between GPUs and CPU is shown in Fig.~\\ref{allgpu}a, while in Fig.~\\ref{allgpu}b the details about the execution on different GPUs are shown. The GPU execution is up to 15 times faster with respect to the CPU implementation, and the best result is obtained for the CUDA algorithm version on the GTX770 device. The GPUs timing are less dependent on the number of the hits with respect to CPU timing.\n\n\nThe kernels execution on GPUs is even faster with respect to CPU timing, with two orders of magnitude GPU-CPU speed up, as shown in Figs.~\\ref{allgpu}c and \\ref{allgpu}e. When comparing the kernel execution on different GPUs (Figs.~\\ref{allgpu}d) and~\\ref{allgpu}f), CUDA is observed to perform slightly better than OpenCL. 
Figure~\\ref{allgpu}g shows the GPU-to-CPU data transfer timings for all devices together with the CPU I\/O timing, giving a clear idea of the dominant part of the execution time.\n\n\n\n\\section{MULTI-GPU implementation}\n\n\n\nAssuming that the detector model we considered could have multiple readout boards working independently, it is interesting to split the workload over multiple GPUs. We have done this by splitting the transverse plane into four sectors to be processed separately, since the data across sectors are assumed to be read out independently. \nHence, a single HT is executed for each sector, assigned to a single GPU, and eventually the results are merged when each GPU finishes its own process. The main advantage is to reduce the load on a single GPU by using lightweight Hough Matrices and output structures. Only the CUDA implementation has been tested, using the same workload schema discussed in Sec. 3, but using four $M_H(A,B,\\theta)$, each matrix processing the data of a single $\\phi$ sector.\n\\begin{figure}[h]\n\\centerline{\\includegraphics[width=.8\\textwidth]{rinaldi_lorenzo_fig4.pdf}}\n\\caption{Execution timing as a function of the number of the hits for the multi-GPU configuration. (a) Total execution time; (b) $M_H$ filling timing; (c) local maxima search timing; (d) device-to-host transfer time.}\\label{multigpu}\n\\end{figure}\n\n\n\\subsection{MULTI-GPU results}\n\nThe multi-GPU results are shown in Fig.~\\ref{multigpu}. The test has been carried out in a dual-GPU configuration, separately with two NVIDIA Tesla K20 and with two NVIDIA Tesla K40 boards. The overall execution time is smaller with two GPUs in both cases, even if the timing does not scale with the number of GPUs. An approximate halving of the timing is instead observed when comparing the kernel execution times. On the other hand, the transfer time is almost independent of the number of GPUs, and it dominates the overall execution time.\n\n\n\n\n\\section{Conclusions}\n\nA pattern recognition algorithm based on the Hough Transform has been successfully implemented in CUDA and OpenCL, also using multiple devices. The results presented in this paper show that the employment of GPUs in situations where time is critical for HEP,\nlike triggering at hadron colliders, can lead to significant and encouraging speed-ups. Indeed the problem by itself offers wide room for a parallel approach to computation: this is reflected in the results shown, where the speed-up is around a factor of 15 with respect to what is achieved with a normal CPU. There are still many handles for optimising the performance, also taking into account the GPU architecture and board specifications. \nNext steps of this work go towards an interface to actual experimental frameworks, including the management of the experimental data structures and testing with more graphics accelerators and coprocessors.\n\n\n\n\n\n\n\n\n\n\\begin{footnotesize}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
{"text":"\\section{Introduction}\n\\label{sect:Intro}\n\n\n\n\n \n\nHS RS imaging has gradually become one of the most vital achievements in the field of RS since the 1980s \\cite{c173}. 
Different from an initial single-band panchromatic image, a three-band color RGB image, or a several-band multispectral (MS) image, an HS image contains hundreds of narrow and continuous spectral bands, which is enabled by the development of spectral imaging equipment and the improvement of spectral resolution. The broad portion of the spectrum covered by HS imaging can start from the ultraviolet, extend into the visible spectrum, and eventually reach the near-infrared or short-wave infrared \\cite{9395693}. Each pixel of HS images corresponds to a spectral signature and reflects the electromagnetic properties of the observed object. This enables the identification and discrimination of underlying objects in a more accurate manner, especially of those that appear similar in single-band or several-band RS images (such as panchromatic, RGB, MS). As a result, the rich spatial and spectral information of HS images has greatly improved the perceptual ability of Earth observation, which makes the HS RS technique play a crucial role in fields like precision agriculture (e.g., monitoring the growth and health of crops), space exploration (e.g., searching for signs of life on other planets), pollution monitoring (e.g., detection of ocean oil spills), and military applications (e.g., identification of military targets) \\cite{c1,c172,9174822}. \n\nOver the past decade, massive efforts have been made to process and analyze HS RS data after data acquisition. Initial HS data processing considers either a gray-level image for each band or the spectral signature of each pixel. On the one hand, each HS spectral band is regarded as a gray-level image, and the traditional 2-D image processing algorithms are directly introduced band by band \\cite{c175,c176}. On the other hand, the spectral signatures that have similar visible properties (e.g., color, texture) can be used to identify the materials \\cite{c174}. Furthermore, extensive low-rank (LR) matrix-based methods are employed to explore the high correlation of spectral channels with the assumption that the unfolding HS matrix has a low rank \\cite{c20,c177,c178}. Given an HS image of size h\u00d7v\u00d7z, the recovery of an unfolding HS matrix (hv\u00d7z) usually requires the singular value decomposition (SVD), which leads to the computational cost of $O(h^2 v^2 z+ z^3)$ \\cite{c30,c31,c32}. In some typical tensor decomposition-based methods, the complexity of the tensor singular value decomposition (t-SVD) is about $O(hvz\\log z+ hv^2z)$ \\cite{c28,c34,c61}. Compared to matrix forms, tensor decompositions achieve excellent performances with a tolerable increment of computational complexity. However, the traditional LR matrix models reshape each spectral band as a vector, leading to the destruction of the inherent spatial-spectral completeness of HS images. Correct interpretation of HS images and an appropriate choice of intelligent models are needed to reduce the gap between HS tasks and advanced data processing techniques. Both the 2-D spatial information and the 1-D spectral information are considered when an HS image is modeled as a three-order tensor.\n\n \n\n\n\\begin{figure*}[htp!]\n\t\\begin{center}\n \n \\includegraphics[width = 1\\textwidth]{TLFORHSI.pdf}\n\t\\end{center}\n\t\\caption[houston]{A taxonomy of main tensor decomposition-based methods for HS data processing. 
}\n\t\\label{fig:TLreference}\n\\end{figure*}\n\n\n\nTensor decomposition, which originates from Hitchcock's works in 1927 \\cite{c179}, touches upon numerous disciplines, but it has flourished in the fields of signal processing, machine learning, data mining and fusion over the last ten years \\cite{c181,c182,c183}. The early overviews focus on two common decompositions: Tucker decomposition and CANDECOMP\/PARAFAC (CP) decomposition. In 2008, these two decompositions were first introduced into HS restoration tasks to remove Gaussian noise \\cite{c25,c26}. The tensor decomposition-based mathematical models avoid reshaping the original dimensions and also, to some degree, enhance the interpretability and completeness of problem modeling. Different types of prior knowledge (e.g., non-local similarity in the spatial domain, spatial and spectral smoothness) in HS RS are considered and incorporated into the tensor decomposition frameworks. However, on the one hand, additional tensor decomposition methods have been proposed recently, such as block term (BT) decomposition, t-SVD \\cite{c184}, tensor train (TT) decomposition \\cite{c185}, and tensor ring (TR) decomposition \\cite{c126}. On the other hand, as a versatile tool, tensor decomposition related to HS image processing has not been reviewed until now. In this article, we mainly present a systematic overview from the perspective of the state-of-the-art tensor decomposition techniques for HS data processing in terms of the five burgeoning topics previously mentioned, as presented in Fig. \\ref{fig:TLreference}. \n\n\\begin{figure}[htp!]\n\t\\begin{center}\n \\includegraphics[width = 0.45\\textwidth]{papernumber.pdf}\n\t\\end{center}\n\t\\caption[houston]{The number of journal and conference papers published in IEEE Xplore on the subject of \"hyperspectral\" and \"tensor decomposition\" within different time periods. }\n\t\\label{fig:Visio-papernum}\n\\end{figure}\nFig. \\ref{fig:Visio-papernum} displays the dynamics of tensor decompositions used for HS data processing in the HS community. The listed numbers include both scientific journal and conference papers published in IEEE Xplore that contain \"hyperspectral\" and \"tensor decomposition\" as the main keywords in their abstracts. To highlight the increasing trend in the number of publications, the time period has been divided into four equal time slots (i.e., 2007-2010, 2011-2014, 2015-2018, 2019-2022(05 January)).\nThe main contributions of this article are summarized as follows. \n\n\\noindent\n\\hangafter=1\n\\setlength{\\hangindent}{2em}\n(1) To the best of our knowledge, this is the first comprehensive survey of the state-of-the-art tensor decomposition techniques for processing and analyzing HS RS images. More than 100 publications in this field are reviewed and discussed, most of which were published during the last five years.\n\n\\noindent\n\\hangafter=1\n\\setlength{\\hangindent}{2em}\n(2) For each HS topic, major representative works are scrupulously presented in terms of the specific categories of tensor decomposition. We introduce and discuss the pure tensor decomposition-based methods and their variants with other HS priors in sequence. 
The experimental examples are performed for validating and evaluating theoretical methods, followed by a discussion of remaining challenges and further research directions.\n\n\n\\noindent\n\\hangafter=1\n\\setlength{\\hangindent}{2em}\n(3) This article makes a connection between tensor decomposition modeling and HS prior information. Tab. \\ref{tab:tab1} summarizes with the publication years, brief description, and prior information. Either beginners or experiencers are expected to obtain certain harvest pertinent to the tensor decomposition-based frameworks for HS RS. The available codes are also displayed in Tab. \\ref{tab:tab1} for the sake of repeatability and further studies.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\begin{table*}[!htbp]\n\\centering\n\\caption{Tensor decomposition-based approaches for HS RS.}\n\\resizebox{\\textwidth}{!}{\n\\begin{tabular}{c|c|c|c|c|c}\n\\cline{1-6}\nCategory &Years & Methods & Brief Description & Prior Information & Code Links\\\\\n\\hline\nRstoration & 2008 & LRTA \\cite{c25} & Tucker decomposition & Spectral correlation & \\\\\nDenoising& 2008 & PARAFAC \\cite{c26} & CP decomposition & Spectral correlation & \\\\\n& 2013 & R1TD \\cite{c59} & Rank-1 Tensor decomposition & Spectral correlation & \\\\\n&2017& LRTR \\cite{c28} & TNN & Spectral correlation & \\\\\n&2019 & NTRM \\cite{xue2019nonconvex} & Logarithmic TTN & Spectral correlation &\\\\\n&2020 & 3DTNN \/ 3DLogTNN \\cite{c61}& Three-directional TNN \/ Log-based TNN &Spectral correlation & https:\/\/yubangzheng.github.io\/homepage\/ \\\\\n&2014& TDL & Tucker decomposition with dictionary learning & Spectral correlation + Non-local similarity & http:\/\/www.cs.cmu.edu\/~deyum\/ \\\\\n&2018 & NSNTD \\cite{c71} & Non-local similarity based nonnegative tucker decomposition & Spectral correlation + Non-local similarity & \\\\\n&2019 & GNWTTN \\cite{c67} & Global and non-local weighted TTN &Spectral correlation + Non-local similarity & \\\\\n&2015& NLTA-LSM \\cite{c65} & Tensor decomposition with laplacian scale mixture & Spectral correlation + Non-local similarity & \\\\\n& 2016 &ITS\\cite{c63} & CP + Tucker decomposition &Spectral correlation + Non-local similarity & https:\/\/gr.xjtu.edu.cn\/web\/dymeng\/3 \\\\\n&2019 & NLR-CPTD \\cite{xue2019nonlocal} & CP + Tucker decomposition &Spectral correlation + Non-local similarity &\\\\\n&2017& LLRT \\cite{chang2017hyper} & hyper-Laplacian prior + Unidirectional LR tensor & Non-local similarity + Spectral smoothness &https:\/\/owuchangyuo.github.io\/publications\/LLRT \\\\\n&2019 & NGmeet \\cite{he2019non}& Spectral subspace-based unidirectional LR tensor & Spectral correlation + Non-local similarity &https:\/\/prowdiy.github.io\/weihe.github.io\/publication.html\\\\\n&2020& WLRTR \\cite{c116} & Weighted Tucker decomposition & Spectral correlation + Non-local similarity & https:\/\/owuchangyuo.github.io\/publications\/WLRTR \\\\\n& 2020 & NLTR \\cite{c64} & Nonlocal TR decomposition & Spectral correlation + Non-local similarity &https:\/\/chenyong1993.github.io\/yongchen.github.io\/\\\\\n& 2018 & TLR-TV \\cite{c77} & TNN + 2DTV \/ 3DTV & Spectral correlation + Spatial $\\&$ Spectral smoothness & \\\\\n& 2018 & SSTV-LRTF \\cite{c34} & TNN + SSTV & Spectral correlation + Spatial-Spectral smoothness & \\\\\n& 2021 & MLR-SSTV \\cite{8920965} & Multi-directional weighted TNN + SSTV & Spectral correlation + Spatial-Spectral smoothness &\\\\\n&2018& LRTDTV \\cite{c36} & 3DwTV + Tucker decomposition & Spectral correlation + Spatial-Spectral smoothness & 
https:\/\/github.com\/zhaoxile\\\\\n&2021 &TLR-$L_{1\\--2}{\\rm SSTV}$ \\cite{c84} & $L_{1\\--2}{\\rm SSTV}$ + Local-patch TNN&Spectral correlation + Spatial-Spectral smoothness& \\\\\n&2019 &LRTDGS \\cite{c78}& Weighted group sparsity-regularized TV + Tucker decomposition& Spectral correlation + Spatial-Spectral smoothness &https:\/\/chenyong1993.github.io\/yongchen.github.io\/ \\\\\n&2019 & LRTF$L_0$ \\cite{c81}& $l_0$ gradient constraint + LR BT decomposition & Spectral correlation + Spatial-Spectral smoothness & http:\/\/www.xiongfuli.com\/cv\/ \\\\\n&2021&TLR-${l_0}\\text{TV}$ \\cite{c82}&${l_0}\\text{TV}$ + LR tensor & Spectral correlation + Spatial-Spectral smoothness &https:\/\/github.com\/minghuawang666\/TLR-L0TV\\\\\n& 2019 & SNLRSF \\cite{c72}& Subspace-based non-local LR and sparse factorization & Spectral correlation + Non-local tensor subspace & https:\/\/github.com\/AlgnersYJW\/\\\\\n&2020 & LRTF-DFR \\cite{c86}& double-factor-regularized LR tensor factorization & Subspace spectral correlation + spatial \\& spectral constraints & https:\/\/yubangzheng.github.io\/homepage\/\\\\\n&2021 &DNTSLR \\cite{c88}& Difference continuity + Non-local tensor subspace & Spectral correlation + Non-local tensor subspace & \\\\\n\\cline{2-6}\nDeblurring&2020& WLRTR \\cite{c116} & Weighted Tucker decomposition & Spectral correlation + Non-local similarity & https:\/\/owuchangyuo.github.io\/publications\/WLRTR \\\\\n&2021& OLRT \\cite{c117} & Joint spectral and non-local LR tensor & Non-local similarity +Spectral smoothness & https:\/\/owuchangyuo.github.io\/publications\/OLRT \\\\\n\\cline{2-6}\nInpaninting & 2015 & TMac \\cite{c118}& LR TC by parallel matrix factorization (TMac) & Spectral correlation& \\\\\n& 2015 & TNCP \\cite{c119} & TNN + CP decomposition & Spectral correlation & \\\\\n& 2017 & AWTC \\cite{c121} & HaLRTC with well-designed weights & Spectral correlation & \\\\\n& 2019 & LRRTC \\cite{c130}& logarithm of the determinant + TTN & Spectral correlation & \\\\\n& 2020 & LRTC \\cite{c123,c124} & t-SVD & Spectral correlation & \\\\\n& 2019 & TRTV \\cite{c128} & TR decomposition + spatial TV & Spectral correlation + Spatial smoothness & \\\\\n&2020& WLRTR \\cite{c116} & Weighted Tucker decomposition & Spectral correlation + Non-local similarity & https:\/\/owuchangyuo.github.io\/publications\/WLRTR \\\\\n& 2021 & TVWTR \\cite{c129}& Weighted TR decomposition + 3DTV & Spectral correlation + Spatial Spectral smoothness& https:\/\/github.com\/minghuawang666\/TVWTR \\\\\n\\cline{2-6}\nDestriping& 2018 & LRTD \\cite{c132}& Tucker decomposition + Spatial \\& Spectral TV & Spectral correlation + Spatial Spectral smoothness & https:\/\/github.com\/zhaoxile?tab=repositories \\\\\n& 2018 & LRNLTV \\cite{c135} & Matrix nuclear norm + Non-local TV & Spectral correlation + Non-local similarity &\\\\\n& 2020 & GLTSA \\cite{c133} & Global and local tensor sparse approximation & Sparisity + Spatial \\& Spectral smoothness& \\\\\n&2020& WLRTR \\cite{c116} & Weighted Tucker decomposition & Spectral correlation + Non-local similarity & https:\/\/owuchangyuo.github.io\/publications\/WLRTR \\\\\n\\hline\nCS & 2017 & JTenRe3-DTV \\cite{c97} & Tucker decomposition + weighted 3-D TV & Spectral correlation + Spatial \\& Spectral smoothness & https:\/\/github.com\/andrew-pengjj\/Enhanced-3DTV\\\\\n& 2017 & PLTD \\cite{c101} & Non-local Tucker decomposition & Spectral correlation + Non-local similarity & \\\\\n& 2019 & NTSRLR \\cite{c99} & TNN + Tucker decomposition & Spectral correlation + Non-local 
similarity & \\\\\n& 2020 & SNLTR \\cite{c90} & TR decomposition + Subspace representation & Non-local similarity + +Spectral smoothness & \\\\\n& 2015 & 3D-KCHSI \\cite{c106} & KCS with independent sampling dimensions & Spectral correlation & \\\\\n& 2015 & T-NCS \\cite{c95} & Tucker decomposition & Spectral correlation & \\\\\n& 2013 & NBOMP \\cite{6797642} & KCS with a tensor-based greedy algorithm & Spectral correlation & \\\\\n& 2016 & BOSE \\cite{7544443} & KCS with beamformed mode-based sparse estimator & Spectral correlation &\\\\\n& 2020 & TBR \\cite{c107} & KCS with multi-dimensional block-sparsity & Spectral correlation &\\\\\n\\hline\nAD & 2015 & LTDD \\cite{c110} & Tucker decomposition + Umixing & Spectral correlation & \\\\\n& 2016 & TenB \\cite{c111} & Tucker decomposition + PCA & Spectral correlation & \\\\\n& 2019 & TDCW \\cite{c139} & Tucker decomposition + Clustering & Spectral correlation & \\\\\n& 2020 & TEELRD \\cite{c113} & Tucker decomposition + Endmember extraction & Spectral correlation + Subspace Learning &\\\\\n& 2019 & LRASTD \\cite{c115} & Tucker decomposition +TNN & Spectral correlation + Subspace Learning & \\\\\n& 2018 & TPCA \\cite{c112} & TPCA + Fourier transform & Spectral correlation & \\\\ \n& 2020 & PTA \\cite{c114} & TRNN + Spatial TV & Spectral correlation + Spatial smoothness & https:\/\/github.com\/l7170\/PTA-HAD \\\\\n& 2022 & PCA-TLRSR \\cite{minghuaTC} & weighted TNN + Multi-subspace & Spectral correlation + Subspace & https:\/\/github.com\/minghuawang666\/ \\\\\n\\hline\nSR & 2018 & STEREO \\cite{c145} & CP decomposition & Spectral correlation & https:\/\/github.com\/marhar19\/HSR\\_via\\_tensor\\_decomposition \\\\\n& 2020 & NCTCP \\cite{c152} & Nonlocal coupled CP decomposition & Spectral correlation + Non-local similarity & \\\\\n& 2018 & SCUBA \\cite{c151} & CP decomposition with matrix factorization &Spectral correlation & \\\\\n& 2018 & CSTF \\cite{c146} & Tucker decomposition & Spectral correlation & https:\/\/github.com\/renweidian\/CSTF \\\\\n& 2021 & CT\/CB-STAR \\cite{c155} & Tucker decomposition with inter-image variability & Spectral correlation + Spatial Spectral Variability & https:\/\/github.com\/ricardoborsoi\n\\\\\n& 2021 & CNTD \\cite{c170} & Nonnegative Tucker decomposition & Spectral correlation &\\\\\n& 2018 & CSTF-$l_2$ \\cite{c147} & Tucker decomposition & Spectral correlation &\\\\\n& 2020 & SCOTT \\cite{c153} & Tucker decomposition + HOSVD & Spectral correlation & https:\/\/github.com\/cprevost4\/HSR$\\_$Software\\\\\n& 2020 & NNSTF \\cite{c154} & Tucker decomposition + HOSVD & Spectral correlation + Non-local similarity& \\\\\n& 2020 & WLRTR \\cite{c116} & Weighted Tucker decomposition & Spectral correlation + Non-local similarity & https:\/\/owuchangyuo.github.io\/publications\/WLRTR \\\\\n& 2017 & NLSTF \\cite{c156} & Non-local sparse Tucker decomposition &Spectral correlation + Non-local similarity & https:\/\/github.com\/renweidian\/NLSTF \\\\\n& 2020 & NLSTF-SMBF \\cite{c157} & Non-local sparse Tucker decomposition &Spectral correlation + Non-local similarity & https:\/\/github.com\/renweidian\/NLSTF \\\\\n& 2020 & UTVTD \\cite{c160} & Tucker decomposition + Unidirectional TV &Spectral correlation + Spatial-Spectral smoothness & https:\/\/liangjiandeng.github.io\/ \\\\\n& 2020 & NLRTD-SU \\cite{c158} & Non-local Tucker decomposition + SU +3-DTV & Spectral correlation + Non-local similarity + Spatial-Spectral smoothness \\\\\n& 2018 & SSGLRTD \\cite{c159} & Spatial\u2013spectral-graph Tucker decomposition & 
Spectral correlation + Local geometry & \\\\\n& 2021 & gLGCTD \\cite{c161} & Graph Laplacian-guided Tucker decomposition & Spectral correlation + Local geometry & \\\\\n& 2019 & NN-CBTD \\cite{c163} & BT decomposition & Spectral correlation & \\\\\n& 2021 & BSC-LL1 \\cite{c138} & BT decomposition &Spectral correlation & https:\/\/github.com\/MengDing56 \\\\\n& 2021 & GLCBTD \\cite{c164} & Graph Laplacian-guided BT decomposition & Spectral correlation + Local Geometry \\\\\n& 2019 & LTTR \\cite{c148} & Non-local TT decomposition & Spectral correlation + Non-local similarity &https:\/\/github.com\/renweidian\/LTTR \\\\\n& 2021 & NLRSR \\cite{c162} & Non-local TT decomposition & Spatial-Spectral correlation + Non-local similarity\\\\\n& 2022 & CTRF \\cite{c150} & Coupled TR decomposition & Spectral correlation & \\\\\n& 2020 & HCTR \\cite{c149} & High-Order Coupled TR decomposition & Spectral correlation + Local Geometry & \\\\\n& 2021 & FSTRD \\cite{c169} & TR decomposition + TV &Spectral correlation + Spatial-Spectral smoothness & \\\\\n& 2021 & LRTRTNN \\cite{c171} & Non-local TR decomposition + TNN & Spectral correlation + Non-local similarity&\\\\\n& 2019 & LTMR \\cite{c165} & Subspace based LR multi-Rank & Spectral correlation + Non-local similarity &https:\/\/github.com\/renweidian\/LTMR \\\\\n& 2021 & FLTMR \\cite{c166} & LTMR with a Truncation Concept & Spectral correlation + Non-local similarity& \\\\\n& 2019 & NPTSR \\cite{c8} & Non-local tensor sparse representation & Spectral correlation + Non-local similarity& \\\\\n& 2019 & TV-TLMR \\cite{c168} & Tucker decomposition + TV & Spectral correlation + Spatial-Spectral smoothness & \\\\\n& 2021 & LRTA-SR \\cite{c167} & TTN & Spectral correlation & \\\\\n\\hline \nSU& 2007 & NTF-SU \\cite{c190,c191} & CP decomposition & Spectral correlation & \\\\\n& 2020 & ULTRA-V \\cite{c212} & CP decomposition & Spectral correlation & https:\/\/github.com\/talesimbiriba\/ULTRA-V\\\\\n& 2017 & MVNTF \\cite{c192} & BT decomposition & Spectral correlation & https:\/\/gitSUb.com\/bearshng\/mvntf \\\\\n& 2019 & NTF-TV \\cite{c193} & TV + BT decomposition&\tSpectral correlation + Spatial-Spectral smoothness &\thttp:\/\/www.xiongfuli.com\/cv\/ \\\\\n& 2021 & SPLRTF \\cite{c194}\t&LR + sparsity + BT decomposition &\tSpectral correlation & \\\\\t\n& 2019 & svr-MVNTF \\cite{c195} & BT decomposition\t& Spectral correlation + Local Geometry \\\\\n& 2020 & SCNMTF \\cite{c196}\t& BT decomposition + NMF &\tSpectral correlations& \\\\\n& 2021 & NLTR \\cite{c197} & TV + Non-local LR &\tSpectral correlations+ Nonlocal similarity+ Spatial-Spectral smoothness\t& \\\\\n& 2021 & BUTTDL1 \\cite{c208} & sparsity + Tucker decomposition & Spectral correlations \\\\\n& 2021 & SeCoDe \\cite{c198} & Convolution operation + BT decomposition &\tSpectral correlations + Spatial-Spectral smoothness &\thttps:\/\/gitSUb.com\/danfenghong\/IEEE\\_TGRS\\_SeCoDe \\\\\n& 2020 & WNLTDSU \\cite{c210} & Weighted non-local LR + TV & Spectral correlation + Sparsity + Spatial smoothness & https:\/\/github.com\/sunlecncom\/WNLTDSU\\\\\n& 2021 & NL-TSUn \\cite{c210} & Non-local LR + Joint sparsity & Spectral correlation + Sparsity & \\\\\n& 2021 & LRNTF\\cite{c204} &\tBT decomposition &\tSpectral correlations &\thttps:\/\/gitSUb.com\/LinaZhuang\/HSI\\_nonlinear\\_unmixing\\_LR\\-NTF \\\\\n\\hline\n\\end{tabular}}\n\\label{tab:tab1}\n\\end{table*}\n\n \n\\section{Notations and Preliminaries}\n\\label{sect:Notations}\nIn this section, we introduce some notations and preliminaries. 
For clear description, the notations are list in Table \\ref{tab:tab0}. The main abbreviations used in this article are given in Table \\ref{tab:abbreviation}.\n\n\n\n\\begin{table}[!htbp]\n\\centering\n\\caption{The notations used in the paper}\n\\begin{spacing}{1.3}\n\\scalebox{0.75}{\n\\begin{tabular}{c|c}\n\\hline\n\\cline{1-2}\n\\cline{1-2}\nNotation&Description\\\\\n\\hline\n\n\\cline{1-2}\n$x$ &scalars\\\\\n$\\textbf{x}$ & vectors\\\\\n$\\textbf{X}$ & matrices\\\\\n$vec(\\textbf{X})$ & $vec(\\textbf{X}$) stacks the columns of $\\textbf{X}$\\\\\n$\\mathcal{X} \\in \\mathbb{R}^{h \\times v \\times z}$ & tensors with 3-modes\\\\\n$\\mathcal{X}_{h_i,v_i,z_i}$ & the ($h_i,v_i,z_i$)-element of $\\mathcal{X}$\\\\\n$\\mathcal{X}(i,:,:)$, $\\mathcal{X}(:,i,:)$ and $\\mathcal{X}(:,:,i)$ & the $i^{th}$ horizontal, lateral and frontal slices \\\\\n $||\\mathcal{X}||_1=\\sum_{h_i,v_i,z_i}{|\\mathcal{X}_{h_i,v_i,z_i}|}$ & $l_1$ norm\\\\\n $||\\mathcal{X}||_F= \\sqrt{\\sum_{h_i,v_i,z_i}{|\\mathcal{X}_{h_i,v_i,z_i}|^2}}$ & Frobenius norm\\\\\n ${\\sigma}_i(\\textbf{X})$ & the singular values of matrix $\\textbf{X}$ \\\\\n $||\\textbf{X}||_* = \\sum_i {\\sigma}_i(\\textbf{X})$ & nuclear norm\\\\ \n $||\\textbf{x}||_2 = \\sqrt{\\sum_i {|\\textbf{x}_i|^2}}$ & $l_2$ norm \\\\\n $ \\hat{\\mathcal{X}}=$fft$(\\mathcal{X},[],3)$ & Fourier transformation of $\\mathcal{X}$ along mode-3\\\\\n\\hline\n\\end{tabular}\n}\n\\end{spacing}\n\\label{tab:tab0}\n\\end{table}\n\n\\begin{table}[!htbp]\n\\centering\n\\caption{Main abbreviations used in the paper}\n\\begin{spacing}{1.3}\n\\scalebox{0.75}{\n\\begin{tabular}{c|c}\n\\hline\n\\cline{1-2}\n\\cline{1-2}\nAbbreviation &Full name\\\\\n\\hline\nAD & Anomaly detection\\\\\nCP & CANDECOMP\/PARAFAC\\\\\nCS & Compressive sensing\\\\\nBT & Block term\\\\\nHS & Hyperspectral\\\\\nLMM & Linear mixing model\\\\\nLR & Low-rank\\\\\nNMF & Nonnegative matrix factorization\\\\\nRS & Remote sensing\\\\\nSNN & The sum of the nuclear norm\\\\\nSU & Spectral unmixing\\\\\nTNN & Tensor nuclear norm\\\\\nTV & Total variation\\\\\nTT & Tensor train\\\\\nTTN & Tensor trace norm\\\\\nTR & Tensor ring\\\\\n1-D & One dimensional\\\\\n2-D & Two dimensional\\\\\n3-D & Three dimensional\\\\\n4-D & Four dimensional\\\\\n\\hline\n\\end{tabular}\n}\n\\end{spacing}\n\\label{tab:abbreviation}\n\\end{table}\n\n$\\textbf{Definition 1}$ (T-product \\cite{c43}): The T-product of two three-order tensors $\\mathcal{A} \\in \\mathbb{R}^{n_1 \\times n_2 \\times n_3}$ and $\\mathcal{B} \\in \\mathbb{R}^{n_2 \\times n_4 \\times n_3}$ is denoted by $\\mathcal{C} \\in \\mathbb{R}^{n_1 \\times n_4 \\times n_3}$:\n\\begin{equation}\n\\label{eq:product}\n\\mathcal{C}(i,k,:)= \\sum_{j=1}^{n_2} \\mathcal{A}(i,j,:) \\star \\mathcal{B}(j,k,:)\n\\end{equation}\nwhere $\\star$ represents the circular convolution between two tubes.\n\n$\\textbf{Definition 2}$ (Tensor $n$-mode product \\cite{c181}): The $n$-mode product of a tensor $\\mathcal{A} \\in \\mathbb{R}^{r_1 \\times r_2 \\times ... \\times r_N }$ and a matrix $\\mathbf{B} \\in \\mathbb{R}^{ B \\times r_n}$ is the tensor $\\mathcal{X} \\in \\mathbb{R}^{r_1 \\times r_2 \\times ... r_{n-1} \\times B \\times r_{n+1} ... 
\\times r_N } $ defined by\n\\begin{equation}\n\\begin{aligned}\n\\label{eq:sec-product}\n\\mathcal{X} = \\mathcal{A} \\times_n \\mathbf{B}\n\\end{aligned}\n\\end{equation}\nThe unfolding matrix form of Eq.(\\ref{eq:sec-product}) is\n\\begin{equation}\n\\begin{aligned}\n\\label{eq:sec-product2}\n\\mathbf{X}_{(n)} = \\mathbf{B} \\times \\mathbf{A}_{(n)} \n\\end{aligned}\n\\end{equation}\n\n$\\textbf{Definition 3}$ (Four Special tensors \\cite{c27}):\n\nConjugate transpose: The conjugate transpose of a three-order tensor $ \\mathcal{X}\\in \\mathbb{R}^{h \\times v \\times z}$ is the tensor $ {\\rm conj}(\\mathcal{X})=\\mathcal{X}^* \\in \\mathbb{R}^{v \\times h \\times z}$, which can be obtained by conjugately transposing each front slice and reversing the order of transposed frontal 2 through $z$.\n\nIdentity tensor: The identity tensor denoted by $\\mathcal{I}\\in \\mathbb{R}^{h \\times v \\times z}$ is the tensor whose first frontal slice is an identity matrix and all other frontal slices are zero.\n\nOrthogonal tensor: A three-order tensor $\\mathcal{Q}$ is orthogonal if it satisfies $\\mathcal{Q}^* * \\mathcal{Q}= \\mathcal{Q} * \\mathcal{Q}^*=\\mathcal{I}$.\n\nF-diagonal tensor: A three-order tensor $\\mathcal{S}$ is f-diagonal if all of its slices are diagonal matrices.\n\n$\\textbf{Definition 4}$ (First Mode-$k$ Unfolding\/matricization \\cite{c181}): This operator noted unfold$(\\mathcal{X},k)$ converts a tensor $\\mathcal{X} \\in \\mathbb{R}^{I_1 ... I_k \\times I_{k+1} ...I_N }$ into a matrix $\\textbf{X}_{(k)} \\in \\mathbb{R}^{I_k \\times I_1..I_{k-1}I_{k+1}...I_N }$. Inversely, fold($\\textbf{X}_{(k)}, k$) denotes the folding of the matrix into a tensor.\n\n\n$\\textbf{Definition 5}$ (Second Mode-$k$ Unfolding\/matricization \\cite{c126}): For a tensor $\\mathcal{X} \\in \\mathbb{R}^{I_1 ... I_k \\times I_{k+1} ...I_N }$, its second Mode-$k$ Unfolding matrix represented by $\\textbf{X}_{} \\in \\mathbb{R}^{I_k \\times I_{k+1}...I_N I_1..I_{k-1} }$. The inverse operation is matrix folding (tensorization).\n\n\n\n$\\textbf{Definition 6}$ (Mode-$k$ permutation \\cite{c29}): \nFor a three tensor $\\mathcal{X} \\in \\mathbb{R}^{ I_1 \\times I_2 \\times ... \\times I_N }$, this operator noted by $\\mathcal{X}^k$=permutation($\\mathcal{X}$, $k$) changes its permutation order with $k$ times and obtain a new tensor $\\mathcal{X}^k \\in \\mathbb{R}^{ I_k \\times ... \\times I_N \\times I_1 \\times ... \\times I_{k-1} } $. The inverse operator is defined as $\\mathcal{X}$ = ipermutation($\\mathcal{X}^k$, $k$). For example, three mode-$k$ permutation of an HS tensor $\\mathcal{X}^1\\in \\mathbb{R}^{ h \\times v \\times z }$ can be written as\n$\\mathcal{X}^1\\in \\mathbb{R}^{v \\times z \\times h}$, $\\mathcal{X}^2\\in \\mathbb{R}^{z \\times h \\times v}$, $\\mathcal{X}^3\\in \\mathbb{R}^{h \\times v \\times z}$. \n\n\n$\\textbf{Definition 7}$ (Tensor Trace Norm (TTN) \\cite{c44}) It is the sum of the nuclear norm (SNN) of the mode-$k$ unfolding matrix for a $3$-way HS tensor:\n\\begin{equation}\n\\label{eq:tracenorm}\n||\\mathcal{X}||_{\\rm SNN}:=\\sum_{k=1}^{3} \\alpha _k ||\\textbf{X}_{(k)}||_*\n\\end{equation}\nwhere weights $\\alpha_k$ satisfy $\\alpha_k \\geq 0 (k=1,2,3)$ and $\\sum_{k=1}^{3} \\alpha_k =1$. \n\n$\\textbf{Definition 8}$ (Tucker decomposition \\cite{c181,c186,c187}): The Tucker decomposition of an $N$-order tensor $\\mathcal{X} \\in \\mathbb{R}^{I_1 \\times I_2 \\times ... 
\\times I_N}$ is defined as\n\\begin{equation}\n\\begin{aligned}\n\\label{eq:Tucker1}\n\\mathcal{X}= \\mathcal{A} \\times_1 \\mathbf{B}_1 \\times_2 \\mathbf{B}_2 ... \\times_N \\mathbf{B}_N\n\\end{aligned}\n\\end{equation}\nwhere $\\mathcal{A} \\in \\mathbb{R}^{r_1 \\times r_2 \\times... \\times r_N}$ stands for a core tensor and $\\mathbf{B}_n \\in \\mathbb{R}^{I_n \\times r_n}, n=1,2,..,N$ represent factor matrices. The Tucker ranks are represented by ${\\rm rank}_{\\rm Tucker}(\\mathcal{X}) = [r_1, r_2,..., r_N] $.\n\n\n$\\textbf{Definition 9}$ (CP decomposition \\cite{c180,c181,c188}):\nThe CP decomposition of an $N$-order tensor $\\mathcal{X} \\in \\mathbb{R}^{I_1 \\times I_2 \\times ... \\times I_N} $ is defined as\n\\begin{equation}\n\\begin{aligned}\n\\label{eq:CP1}\n \\mathcal { X }=\\sum_{r=1}^{R} \\tau_r \\mathbf{b}^{(1)}_{r} \\circ \\mathbf{b}^{(2)}_{r} \\circ ... \\circ \\mathbf{b}^{(N)}_{r} \n\\end{aligned}\n\\end{equation}\nwhere $\\tau_r$ are non-zero weight parameters, and $\\mathbf{b}^{(1)}_{r} \\circ \\mathbf{b}^{(2)}_{r} \\circ ... \\circ \\mathbf{b}^{(N)}_{r} $ denotes a rank-one tensor with $\\mathbf{b}^{(n)}_{r} \\in \\mathbb{R}^{I_n}$. The CP rank, denoted by ${\\rm rank}_{\\rm CP}(\\mathcal{X}) = R $, is the minimum number of rank-one tensors needed in the sum.\n\n$\\textbf{Definition 10}$ (BT decomposition \\cite{c213}): \nThe BT decomposition of a three-order tensor $\\mathcal{X} \\in \\mathbb{R}^{h \\times v \\times z} $ is defined as\n\\begin{equation}\n \\begin{aligned}\n \\label{eq:BTD1}\n \\mathcal{X} = \\sum_{r=1}^{R} \\mathcal{G}_r \\times_1 \\mathbf{A}_r \\times_2 \\mathbf{B}_r \\times_3 \\mathbf{C}_r\n \\end{aligned}\n\\end{equation}\nwhere $\\mathcal{G}_r \\in \\mathbb{R}^{L_h \\times L_v \\times L_z}$, $\\mathbf{A}_r \\in \\mathbb{R}^{h \\times L_h}$, $\\mathbf{B}_r \\in \\mathbb{R}^{v \\times L_v}$, and $\\mathbf{C}_r \\in \\mathbb{R}^{z \\times L_z}$. Each of the $R$ component tensors can be expressed by a rank-($L_h$, $L_v$, $L_z$) Tucker decomposition, so the BT decomposition can be regarded as a combination of the Tucker and CP decompositions. On the one hand, Eq. (\\ref{eq:BTD1}) reduces to the Tucker decomposition when $R=1$. On the other hand, when each component is represented by a rank-($L$, $L$, $1$) tensor, Eq. (\\ref{eq:BTD1}) can be written as\n\\begin{equation}\n \\begin{aligned}\n \\label{eq:BTD2}\n \\mathcal{X} =\\sum_{r=1}^{R} (\\mathbf{A}_{r} \\cdot \\mathbf{B}_{r}^{T}) \\circ \\mathbf{c}_{r} \n \\end{aligned}\n\\end{equation}\nwhere the matrices $\\mathbf{A}_{r} \\in \\mathbb{R}^{h \\times L}$ and $\\mathbf{B}_{r} \\in \\mathbb{R}^{v \\times L}$ both have rank $L$. If the rank-$L$ matrix $\\mathbf{E}_{r} \\in \\mathbb{R}^{h \\times v}$ is factorized as $\\mathbf{A}_{r} \\cdot \\mathbf{B}_{r}^{T}$, Eq. 
(\\ref{eq:BTD2}) can be rewritten as\n\\begin{equation}\n \\begin{aligned}\n \\label{eq:BTD3}\n \\mathcal{X} =\\sum_{r=1}^{R} \\mathbf{E}_{r} \\circ \\mathbf{c}_{r} \n \\end{aligned}\n\\end{equation}\n\n\n$\\textbf{Definition 11}$ (Tensor Nuclear Norm (TNN) \\cite{c123}) Let $\\mathcal{X}=\\mathcal{U} * \\mathcal{S} * \\mathcal{V}^{*}$ be the t-SVD of $\\mathcal{X} \\in \\mathbb{R}^{h \\times v \\times z}$, TNN is the sum of singular values of $\\mathcal{X}$, that is,\n\\begin{equation}\n\\label{eq:TNN}\n||\\mathcal{X}||_*:=\\sum_{k=1}^{z} \\mathcal{S}(k,k,1),\n\\end{equation}\nand also can be expressed as the sum of nuclear norm of all the frontal slices of $\\hat{\\mathcal{X}}$ :\n\\begin{equation}\n\\label{eq:TNN2}\n||\\mathcal{X}||_*:=\\sum_{k=1}^{z} ||\\hat{\\mathcal{X}}(:,:,k)||_*.\n\\end{equation}\n\n\\begin{figure*}[htb]\n\t\\begin{center}\n\t\t\\includegraphics[width = 1\\textwidth]{Visio-TD.pdf}\n\t\\end{center}\n\t\\caption[restoration]{ Illustration to show six tensor decompositions of third-order tensor: (a) Tucker decomposition, (b) CP decomposition, (c) BT decomposition, (d) t-SVD, (e) TT decomposition, (f) TR decomposition. }\n\t\\label{fig:td}\n\\end{figure*}\n\nFor more intuitively understanding the above mentioned tensor decompositions, examples for third-order tensor are shown in Fig. \\ref{fig:td}, which benefits the consequent tensor decomposition-based researches of third-order HS data.\n\n$\\textbf{Definition 12}$ (t-SVD \\cite{c123}) $\\mathcal{X} \\in \\mathbb{R}^{h \\times v \\times z}$ can be factorized as\n\\begin{equation}\n\\label{eq:tsvd}\n\\mathcal{X}=\\mathcal{U} * \\mathcal{S} * \\mathcal{V}^{*}\n\\end{equation}\nwhere $\\mathcal{U} \\in \\mathbb{R}^{h \\times h \\times z}$, $\\mathcal{V} \\in \\mathbb{R}^{v \\times v \\times z}$ are orthogonal tensors and $\\mathcal{S} \\in \\mathbb{R}^{h \\times v \\times z}$ is a f-diagonal tensor. The details of t-SVD are described in the Algorithm \\ref{alg:tsvd1}.\n\\begin{algorithm}[htb]\n\t\\caption{t-SVD} \\label{alg:tsvd1}\n\t\\begin{algorithmic}[1]\n\t\t\\REQUIRE $\\; \\mathcal{X}\\in \\mathbb{R}^{h \\times v \\times z}$\\\\\n\t\t\\NoDo\n \\STATE $ \\hat{\\mathcal{X}}=$fft$(\\mathcal{X},[],3)$;\n\t\t\\NoDo \\FOR{$i=0,1,\\dots, [\\frac{z+1}{2}]$}\n\t\t\\STATE $ [\\hat{\\mathcal{U}}(:,:,i), \\hat{\\mathcal{S}}(:,:,i), \\hat{\\mathcal{V}}(:,:,i)]=$SVD$(\\hat{\\mathcal{X}}(:,:,i))$;\t\t\n\t\t\\ENDFOR\n\t\t\\NoDo \\FOR{$i= [\\frac{z+1}{2}+1],\\dots,z$}\n\t\t\\STATE $ \\hat{\\mathcal{U}}(:,:,i)=$conj($\\hat{\\mathcal{U}}(:,:,z-i+2)$);\n\t\\STATE $ \\hat{\\mathcal{S}}(:,:,i)=(\\hat{\\mathcal{S}}(:,:,z-i+2)$)\n\t\\STATE$ \\hat{\\mathcal{V}}(:,:,i)=$conj($\\hat{\\mathcal{V}}(:,:,z-i+2)$);\t\t\n\t\t\\ENDFOR\n\t\t\\STATE $\\mathcal{U} =$ifft$(\\hat{\\mathcal{U}},[],3)$, $\\mathcal{S} =$Ifft$(\\hat{\\mathcal{S}},[],3)$,$ \\mathcal{V}=$fft$(\\hat{\\mathcal{V}},[],3)$;\n\t\t\\ENSURE $\\mathcal{U},\\mathcal{S},\\mathcal{V}$.\n\t\\end{algorithmic}\n\\end{algorithm}\n\n\n$\\textbf{Definition 13}$ ( TT decomposition \\cite{c185}) The TT decomposition of an $N$-order $\\mathcal{X} \\in \\mathbb{R}^{I_1 \\times I_2 \\times ... \\times I_N} $ is represented by cores $\\mathcal{G} = \\{ \\mathcal{G}^{(1)},..., \\mathcal{G}^{(N)} \\}$, where $\\mathcal{G}^{(n)} \\in \\mathbb{R}^{ r_{n-1} \\times I_n \\times r_n} $, $n=1,2,...,N$, $r_0 = r_N =1$. The rank of TT decomposition is defined as $ {\\rm rank}_{\\rm TT}(\\mathcal{X}) =[r_0,r_1,...,r_{N}]$. 
Each entry of the tensor $\\mathcal{X}$ is formulated as\n\\begin{equation}\n\\label{eq:ttd}\n\\mathcal{X}(i_1,...,i_N) = \\mathcal{G}^{(1)}(:,i_1,:)\\mathcal{G}^{(2)}(:,i_2,:)...\\mathcal{G}^{(N)}(:,i_N,:).\n\\end{equation}\n\n$\\textbf{Definition 14} $ (TR decomposition \\cite{c126}): The purpose of TR decomposition is to represent a high-order $\\mathcal{X}$ by multi-linear products of a sequence of three-order tensors in circular form. These three-order tensors are named TR factors $\\{\\mathcal{G}^{(n)}\\}^N_{n=1} = \\{\\mathcal{G}^{(1)}, \\mathcal{G}^{(2)},...,\\mathcal{G}^{(N)}\\}$, where $\\mathcal{G} ^{(n)}\\in \\mathbb{R}^{r_n \\times I_n \\times r_{n+1}}$, $n=1, 2, ..., N$, $r_{N+1} = r_1$. In this case, the element-wise relationship of TR decomposition with factors $\\mathcal{G}$ can be written as\n\\begin{equation}\n\\label{eq:TR1}\n\\begin{aligned}\n\\mathcal{X}(i_1, i_2,...,i_N) &= {\\rm Tr}(\\mathcal{G}^{(1)}(:,i_1,:)\\mathcal{G}^{(2)}(:,i_2,:)...\\mathcal{G}^{(N)}(:,i_N,:))\\\\\n&= {\\rm Tr}(\\prod_{n=1}^{N} \\mathcal{G}^{(n)}(:,i_n,:))\n\\end{aligned}\n\\end{equation} \nwhere ${\\rm Tr}$ denotes the matrix trace operation.\n\n\n\n\n\n$\\textbf{Definition 15} $ (Multi-linear Product \\cite{c16}): Given two TR factors $\\mathcal{G}^{(n)}$ and $\\mathcal{G}^{(n+1)}$, their multi-linear product $\\mathcal{G}^{(n,n+1)} \\in \\mathbb{R}^{r_n \\times I_n I_{n+1}\\times r_{n+1}}$ is calculated as\n\\begin{equation}\n\\label{eq:multilinear}\n\\begin{aligned}\n\\mathcal{G}^{(n,n+1)}(:,I_n(i_k -1) +j_k,:)=\\mathcal{G}^{(n)}(:,i_k,:)\\mathcal{G}^{(n+1)}(:,j_k,:)\n\\end{aligned}\n\\end{equation} \nfor $i_k = 1,2,...,I_n, j_k = 1, 2,...,I_{n+1}$.\n\nFrom the above $\\textbf{Definition 15} $, the multi-linear product of all the TR factors can be induced as $[\\mathcal{G}] = \\prod^N_{n=1}\\mathcal{G}^{(n)} = \\mathcal{G}^{(1,2,...,N)} = \\{ \\mathcal{G}^{(1)},\\mathcal{G}^{(2)},...,\\mathcal{G}^{(N)}\\} \\in \\mathbb{R}^{r_1 \\times I_1 I_2... I_N \\times r_1}$. The TR decomposition can be rewritten as $\\mathcal{X} = \\Phi({\\mathcal{G}})$, where $\\Phi$ is a dimensional shifting operator $\\Phi: \\mathbb{R}^{r_1 \\times I_1 I_2...I_N \\times r_1} \\rightarrow \\mathbb{R}^{I_1 \\times I_2 \\times...\\times I_N}$. \n\n$\\textbf{Lemma 1} $ (Circular Dimensional Permutation Invariance \\cite{c126}): If the TR decomposition of $\\mathcal{X}$ is $\\mathcal{X} = \\Phi(\\mathcal{G}^{(1)},\\mathcal{G}^{(2)},...,\\mathcal{G}^{(N)})$ and ${\\stackrel{\\leftarrow}{\\mathcal{X}}}^n \\in \\mathbb{R}^{I_n \\times I_{n+1} \\times ... \\times I_1 \\times ... \\times I_{n-1}} $ is defined by circularly shifting the dimensions of $\\mathcal{X}$ by $n$, we obtain the following relation:\n\\begin{equation}\n\\label{eq:circluar}\n\\begin{aligned}\n\\stackrel{\\leftarrow}{\\mathcal{X}}^n = \\Phi( \\mathcal{G}^{(n)},\\mathcal{G}^{(n+1)},...,\\mathcal{G}^{(N)},\\mathcal{G}^{(1)},\\mathcal{G}^{(2)},...,\\mathcal{G}^{(n-1)})\n\\end{aligned}\n\\end{equation} \n\n\n\n\n\\begin{figure*}[htb]\n\t\\begin{center}\n\t\t\\includegraphics[height=9cm]{Visio-denoising_framework.pdf}\n\t\\end{center}\n\t\\caption[restoration]{A schematic diagram of HS image restoration. 
}\n\t\\label{fig:denoisingfr}\n\\end{figure*}\n\n\n\n\n$\\textbf{Definition 16} $ (Mixed $l_{1,0}$ pseudo-norm \\cite{c41}): Given a vector $\\textbf{y} \\in \\mathbb{R}^{m}$ and index sets $\\theta_1,...,\\theta_i,...,\\theta_n (1 \\leq n \\leq m)$ that satisfies \n\\begin{itemize}\n\\item Each $\\theta_i$ is a subset of {1,...,m},\n\\item $\\theta_i \\cap \\theta_l = \\emptyset$ for any $i \\neq l$,\n\\item $\\cup^n_{i=1}\\theta_i ={1,...,m}$,\n\\end{itemize}\nthe mixed $l_{1,0}$ pseudo-norm of $y$ is defined as:\n\\begin{equation}\n||\\textbf{y}||^{\\theta}_{1,0} = ||(||\\textbf{y}_{\\theta_1}||_1,...,||\\textbf{y}_{\\theta_i}||_1,...||\\textbf{y}_{\\theta_n}||_1)||_0,\n\\end{equation}\nwhere $\\textbf{y}_{\\theta_i}$ denotes a sub-vector of $\\textbf{y}$ with its entries specified by $\\theta_i$ and $|| \\cdot ||_0$ calculates the number of the non-zero entries in ($\\cdot$). \n\n\n\n\n\n\n\n\n\\section{HS Restoration}\n\\label{sect:restoration}\n\n\nIn the actual process of HS data acquisition and transformation, external environmental change and internal equipment conditions inevitably lead to noises, blurs, and missing data (including clouds and stripes) \\cite{GRSM2022,li2021progressive} which degrade the visual quality of HS images and the efficiency of the subsequent HS data applications, such as a fine HS RS classification for crops and wetlands \\cite{5779697,9598903} and the refinement of spectral information for target detection \\cite{zhangTD2012,2009OE}. Fig. \\ref{fig:denoisingfr} depicts the HS RS degradation and Restoration. Therefore, HS image restoration appears as a crucial pre-processing step for further applications. \n\nMathematically, an observed degraded HS image can be formulated as follows\n \\begin{equation}\n\t\\begin{split}\n\t\t\\label{eq:degrade}\n\\mathcal{T}=M(\\mathcal{X}) + \\mathcal{S} + \\mathcal{N}\n\t\\end{split}\n\\end{equation} \nwhere $\\mathcal{T} \\in \\mathbb{R}^{h \\times v \\times z}$, $\\mathcal{X} \\in \\mathbb{R}^{h \\times v \\times z}$, $\\mathcal{S} \\in \\mathbb{R}^{h \\times v \\times z}$ and $\\mathcal{N} \\in \\mathbb{R}^{h \\times v \\times z}$ represents an observed HS image, the restored HS image, the sparse error and additive noise, respectively, and $M(\\cdot)$ denotes different linear degradation operators for different HS restoration problems: (a) when $M(\\cdot)$ is a blur kernal also called as point spread function (PSF), Eq. (\\ref{eq:degrade}) becomes HS deblurring problem; \n(b) when $M(\\cdot)$ is a binary operation, i.e., 1 for original pixels, and 0 for missing data, Eq. (\\ref{eq:degrade}) turns into the HS inpainting problem;\n(c) when $M(\\mathcal{X})$ keeps $\\mathcal{X}$ constant, i.e., $M(\\mathcal{X}) = \\mathcal{X}$, Eq. (\\ref{eq:degrade}) is reformulated as the HS destriping problem ($\\mathcal{T}=\\mathcal{X} + \\mathcal{S}$) or HS denoising problem (only consider Gaussian noise $\\mathcal{T}=\\mathcal{X} + \\mathcal{N}$ or consider mixed noise $\\mathcal{T}=\\mathcal{X} + \\mathcal{S} + \\mathcal{N}$). The HS restoration task is to estimate recovered HS images $\\mathcal{X}$ from the given HS images $\\mathcal{T}$. This ill-posed problem suggests that extra constraints on $\\mathcal{X}$ need to be enforced for the optimal solution of $\\mathcal{X}$. These additional constraints reveal the HS desired property and various types of HS prior information, such as non-local similarity, spatial and spectral smoothness, and subspace representation. 
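To make the three cases of Eq. (\\ref{eq:degrade}) more concrete, the following minimal NumPy sketch simulates an observation $\\mathcal{T}=M(\\mathcal{X}) + \\mathcal{S} + \\mathcal{N}$; it is only an illustration of the degradation model, and the operator choices, noise levels, and function names are ours rather than those of any specific method surveyed here.\n\\begin{verbatim}\nimport numpy as np\n\ndef degrade(X, mode='denoise', mask=None, kernel=None,\n            sigma=0.05, p_impulse=0.05):\n    # Simulate T = M(X) + S + N for an HS cube X of shape (h, v, z).\n    h, v, z = X.shape\n    if mode == 'deblur' and kernel is not None:\n        # M(.): band-wise 2-D circular convolution with a PSF\n        MX = np.stack([np.real(np.fft.ifft2(np.fft.fft2(X[:, :, k])\n             * np.fft.fft2(kernel, s=(h, v)))) for k in range(z)], axis=2)\n    elif mode == 'inpaint' and mask is not None:\n        MX = X * mask        # binary mask: 1 = observed, 0 = missing\n    else:\n        MX = X.copy()        # identity: denoising / destriping\n    S = np.zeros_like(X)     # sparse error (impulse noise, stripes, ...)\n    idx = np.random.rand(h, v, z) < p_impulse\n    S[idx] = np.random.choice([-1.0, 1.0], size=int(idx.sum()))\n    N = sigma * np.random.randn(h, v, z)   # additive Gaussian noise\n    return MX + S + N\n\\end{verbatim}\n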
The HS restoration problem can be summarized as \n\\begin{equation}\n \\begin{aligned}\n \\label{eq:summ}\n \\underset{\\mathcal{X}}{ \\min } \\frac{1}{2} || \\mathcal{T} - M(\\mathcal{X}) - \\mathcal{S} ||^2_F + \\tau f(\\mathcal{X}) + \\lambda g(\\mathcal{S})\n \\end{aligned}\n\\end{equation}\nwhere $f(\\mathcal{X})$ and $g(\\mathcal{S})$ stand for the regularizations to explore the desired properties on the recovered $\\mathcal{X}$ and sparse part $\\mathcal{S}$, respectively. $\\tau$ and $\\lambda$ are regularization parameters.\n\n\n\n\n\n\n\\subsection{HS Denoising}\n\\label{sect:HS_Denoising}\nThe observed HS images are often corrupted by mixed noise, including Gaussian noise, salt and pepper noise, and dead-line noise. Several noise types of HS images are shown in Fig. \\ref{fig:Visio-noisyHSI}. \n\\begin{figure}[htb]\n\t\\begin{center}\n\t\t\\includegraphics[height=3.2cm]{Visio-noisyHSI.pdf}\n\t\\end{center}\n\t\\caption[washington]{HS data sets with different noise types: (a) the Urban data set, (b) the Indian Pines data set, (c) the Salinas data set. }\n\t\\label{fig:Visio-noisyHSI}\n\\end{figure}\nThe wealthy spatial and spectral information of HS images can be extracted by different prior constraints like LR property, sparse representation, non-local similarity, and total variation. Different LR tensor decomposition models are introduced for HS denoising. Consequently, one or two kinds of other prior constraints are combined with these tensor decomposition models.\n\n\n\n\\subsubsection{LR Tensor Decomposition}\n\\label{sect:LRT}\n \nIn this section, the LR tensor decomposition methods are divided into two categories: 1) factorization-based approaches and 2) rank minimization-based approaches. The former one needs to predefine rank values and update decomposition factors. The latter directly minimizes tensor ranks and updates LR tensors.\n \n$\\textbf{(1) Factorization-based approaches}$ \n\nTwo typical representatives are used in the HS image denoising literature, namely, Tucker decomposition and CP decomposition. Renard \\textit{et al}. \\cite{c25} considered Gaussian noise and suggested a LR tensor approximation (LRTA) model to complete an HS image denoising task:\n \\begin{equation}\n\t\\begin{split}\n\t\t\\label{eq:lrta}\n&\\underset{\\mathcal{X}}{\\textrm{min}}\\;||\\mathcal{T} - \\mathcal{X} ||^2_F \\\\ \n&{\\rm s.t.} \\; \\mathcal{X}= \\mathcal{A} \\times_1 \\mathbf{B}_1 \\times_2 \\mathbf{B}_2 \\times_3 \\mathbf{B}_3\n\t\\end{split}\n\\end{equation} \n\n Nevertheless, users should manually pre-define the multiple ranks along all modes before running the Tucker decomposition-related algorithm, which is intractable in reality. In Eq. (\\ref{eq:lrta}), the Tucker decomposition constraint is easily replaced by other tensor decomposition, such as CP decomposition. Liu \\textit{et al}. \\cite{c26} used a Parallel Factor Analysis (PARAFAC) decomposition algorithm and still assumed that HS images were corrupted by white Gaussian noise. Guo \\textit{et al}. \\cite{c59} presented an HS image noise-reduction model via rank-1 tensor decomposition, which was capable of extracting the signal-dominant features. However, the smallest number of rank-1 factors is served as the CP rank, which needs high computation cost to be calculated. \n\n $\\textbf{(2) Rank minimization approaches}$ \n \n The tensor rank bounds are rarely available in many HS noisy scenes. 
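For instance, a factorization-based denoiser in the spirit of Eq. (\\ref{eq:lrta}) must be given the multilinear ranks $(r_1, r_2, r_3)$ in advance; the following truncated-HOSVD sketch (a generic NumPy illustration, not the exact LRTA solver of \\cite{c25}) makes this dependence explicit.\n\\begin{verbatim}\nimport numpy as np\n\ndef unfold(X, mode):\n    # mode-k matricization: the mode-th axis becomes the rows\n    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)\n\ndef lrta_denoise(T, ranks):\n    # Truncated-HOSVD Tucker approximation of a noisy cube T (h, v, z);\n    # the multilinear ranks (r1, r2, r3) must be chosen by the user.\n    B = [np.linalg.svd(unfold(T, k), full_matrices=False)[0][:, :r]\n         for k, r in enumerate(ranks)]\n    # core tensor A = T x_1 B1^T x_2 B2^T x_3 B3^T\n    A = np.einsum('ijk,ia,jb,kc->abc', T, B[0], B[1], B[2])\n    # low-rank reconstruction X = A x_1 B1 x_2 B2 x_3 B3\n    return np.einsum('abc,ia,jb,kc->ijk', A, B[0], B[1], B[2])\n\\end{verbatim}\n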
To avoid the occurrence of rank estimation, another kind of methods focus on minimizing the tensor rank directly, which can be formulated as follows:\n \\begin{equation}\n\t\\begin{split}\n\t\t\\label{eq:rank_mini}\n&\\underset{\\mathcal{X}}{\\textrm{min}}\\; {\\rm rank}(\\mathcal{X}) \\\\ \n&{\\rm s.t.} \\; \\mathcal{T}= \\mathcal{X} + \\mathcal{S} + \\mathcal{N} \\\\\n\t\\end{split}\n\\end{equation} \nwhere ${\\rm rank(\\mathcal{X})}$ denotes the rank of HS tensor $\\mathcal{X}$ and includes different rank definitions like Tucker rank, CP rank, TT rank, and tubal rank. Due to the above rank minimizations belong to non-convex problems, these problems are NP-hard to compute. Nuclear norms are generally used as the convex surrogate of non-convex rank function. Zhang \\textit{et al}. \\cite{c27} proposed a tubal rank related TNN to characterize the 3-D structural complexity of multi-linear data. Based on the TNN, Fan \\textit{et al}. \\cite{c28} presented an LR Tensor Recovery (LRTR) model to remove Gaussian noise and sparse noise:\n \\begin{equation}\n\t\\begin{split}\n\t\t\\label{eq:LRTR}\n&\\underset{\\mathcal{X},\\mathcal{S},\\mathcal{N}}{\\textrm{min}}\\; ||\\mathcal{X}||_* + \\lambda_1 || \\mathcal{S}||_1 + \\lambda_2 ||\\mathcal{N}||_F^2 \\\\ \n&{\\rm s.t.} \\; \\mathcal{T}= \\mathcal{X} + \\mathcal{S} + \\mathcal{N} \\\\\n\t\\end{split}\n\\end{equation} \n\nXue \\textit{et al}. \\cite{xue2019nonconvex} applied a non-convex logarithmic surrogate function into a TTN for tensor completion and (tensor robust principal component analysis) TRPCA tasks. \nZheng \\textit{et al}. \\cite{c61} explored the LR properties of tensors along three directions and proposed two tensor models: a three-directional TNN (3DTNN) and a three-directional log-based TNN (3DLogTNN) as its convex and nonconvex relaxation. Although these pure LR tensor decomposition approaches utilize the LR prior knowledge of HS images, they are hardly effective to suppress mixed noise due to the lack of other useful information. \n\n\\subsubsection{Other priors regularized LR Tensor Decomposition}\n\\label{sect:OLRT}\nVarious types of priors are combined with an LR tensor decomposition model to optimize the model solution including non-local similarity, spatial and spectral smoothness, spatial sparsity, subspace learning. \n\n$\\textbf{ (1) Non-local similarity }$\n\nAn HS image often possesses many repetitive local spatial patterns, and thus a local patch always has many similar patches across this HS image \\cite{c66}. Peng \\textit{et al}. \\cite{c62} designed a tensor dictionary learning (TDL) framework. In Fig. \\ref{fig:nonlocal}, an HS image is segmented into 3-D full band patches (FBP). The similar FBPs are clustered together as a 4-D tensor group to simultaneously leverage the non-local similarity of spatial patches and the spectral correlation. TDL is the first model to exploit the non-local similarity and the LR tensor property of 4-D tensor groups, as shown in Fig. \\ref{fig:nonlocal} (b). Instead of a traditional alternative least square based tucker decomposition, Bai \\textit{et al}. \\cite{c71} improved a hierarchical least square based nonnegative tucker decomposition method. Kong \\textit{et al}. \\cite{c67} incorporated the weighted tensor norm minimization into the Tucker decompositions of 4-D patches. \n\n\n\\begin{figure*}[htb]\n\t\\begin{center}\n\t\t\\includegraphics[height=10cm]{Visio-nonlocal.pdf}\n\t\\end{center}\n\t\\caption[nonlocal]{Flowchart of non-local LR tensor-based methods. 
}\n\t\\label{fig:nonlocal}\n\\end{figure*}\n\n\nDiffer from references \\cite{c62,c71,c67}, other works \\cite{c65,c63,xue2019nonlocal,chang2017hyper,he2019non,c64} obtained a 3-D tensor by stacking all non-local similar FBPs converted as matrices with a spatial mode and a spectral mode in Fig. \\ref{fig:nonlocal}(d). Based on a non-local similar framework, Dong \\textit{et al}. \\cite{c65} proposed a Laplacian Scale Mixture (LSM) regularized LR tensor approximation method for denoising.\nXie \\textit{et al}. \\cite{c63} conducted a tensor sparsity regularization named intrinsic tensor sparsity (ITS) to encode the spatial and spectral correlation of the non-local similar FBP groups. With the non-local similarity of FBPs, $\\mathcal{X}$ is estimated from its corruption $\\mathcal{T}$ by solving the following problem \n \\begin{equation}\n\t\\begin{split}\n\t\t\\label{eq:ITS}\n \\min _{\\mathcal{X}} \\Lambda (\\mathcal{X})+\\frac{\\gamma}{2}\\|\\mathcal{T}_{i}-\\mathcal{X}_i \\|_{F}^{2}\n\t\\end{split}\n\\end{equation} \nwhere the sparsity of a tensor $\\mathcal{X}$ is $\\Lambda (\\mathcal{X})=t\\|\\mathcal{A}\\|_{0}+(1-t) \\prod_{i=1}^{N} {\\rm rank}(X_{(i)})$ and $\\mathcal{A}$ is the core tensor of $\\mathcal{X}$ via the Tucker decomposition $\\mathcal{X}= \\mathcal{A} \\times_1 \\mathbf{B}_1 \\times_2 \\mathbf{B}_2 \\times_3 \\mathbf{B}_3$. Xue \\textit{et al}. \\cite{xue2019nonlocal} presented a non-local LR regularized CP tensor decomposition (NLR-CPTD) algorithm. However, the Tucker or CP decomposition-related methods are subject to the heavy computational burden issues. \n\nChang \\textit{et al}. \\cite{chang2017hyper} discovered the LR property of the non-local patches and used a hyper-Laplacian prior to model additional spectral information. He \\textit{et al}. \\cite{he2019non} developed a new paradigm, called non-local meets global (NGmeet) method, to fuse the spatial non-local similarity and the global spectral LR property. Chen \\textit{et al}. \\cite{c64} analyzed the advantages of a novel TR decomposition over the Tucker and CP decompositions. The proposed non-local TR decomposition method for HS image denoising is formulated as:\n \\begin{equation}\n\t\\begin{split}\n\t\t\\label{eq:TR}\n\\min _{\\mathcal{X}_{i}, \\mathcal{G}_{i}} \\frac{1}{2}\\|\\mathcal{T}_{i}-\\mathcal{X}_{i}\\|_{F}^{2} \\quad \\text { s.t. } \\; \\mathcal{X}_{i}=\\Phi([\\mathcal{G}_{i}])\n\t\\end{split}\n\\end{equation} \n\nThe non-local similarity-based tensor decomposition methods focus on removing Gaussian noise from corrupted HS images and unavoidably cause a computational burden in practice.\n\n$\\textbf{(2) Spatial and spectral smoothness }$\n\nHS images are usually captured by airborne or space-borne platforms far from the Earth's surface. The low measurement accuracy of imaging spectrometers leads to low spatial resolutions of HS images. In general, the distribution of ground objects varies gently. Moreover, high correlations exist between different spectral bands. HS images always have relatively smoothing characteristics in the spatial and spectral domains. \n\nAn original TV method was first proposed by Rudin \\textit{et al}. \\cite{c76} to remove the noise of gray-level images due to the ability to preserve edge information and promote piecewise smoothness. The HS image smoothness can be constrained by either an isotropic TV norm or an anisotropic TV norm \\cite{c35}. The obvious blurring artifacts are hardly eliminated in the denoised results of the isotropic model \\cite{c75}. 
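The difference between the two forms can be made concrete with a short NumPy sketch that evaluates both the isotropic and the anisotropic spatial TV of a single band; this is an illustration only, and the forward differences follow the same zero-boundary convention as Eqs. (\\ref{eq:dh}) and (\\ref{eq:dv}) below.\n\\begin{verbatim}\nimport numpy as np\n\ndef band_tv(B, isotropic=False):\n    # spatial TV of one band B (h x v); forward differences, zero at border\n    dh = np.zeros_like(B); dv = np.zeros_like(B)\n    dh[:, :-1] = B[:, 1:] - B[:, :-1]   # horizontal gradient D_h\n    dv[:-1, :] = B[1:, :] - B[:-1, :]   # vertical gradient D_v\n    if isotropic:\n        return np.sum(np.sqrt(dh ** 2 + dv ** 2))   # l2-coupled gradients\n    return np.abs(dh).sum() + np.abs(dv).sum()      # anisotropic l1 form\n\\end{verbatim}\n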
Thus, anisotropic TV norms for HS image denoising are investigated in this paper.\nWe take the Washington DC (WDC) data set as a typical example to depict the gradient images along three directions in Fig. \\ref{fig:sstv1}. The smooth areas and edge information are much clearer in the gradient images than in the original band. \n\\begin{figure}[htb]\n\t\\begin{center}\n\t\t\\includegraphics[height=2.5cm]{Visio-spatialspectral-smoothness.pdf}\n\t\\end{center}\n\t\\caption[washington]{The spatial smooth properties of Washington DC: (a) original band, (b) the gradient image along the spatial horizontal direction, (c) the gradient image along the spatial vertical direction, (d) the gradient image along the spectral direction. }\n\t\\label{fig:sstv1}\n\\end{figure}\n\nInspired by the TV applications to gray-level images, the 2-D spatial TV norm of $\\mathcal{X}$ is easily introduced to an HS image in a band-by-band manner \\cite{c30}. This simple band-by-band TV norm is defined as follows:\n \\begin{equation}\n\t\\begin{split}\n\t\t\\label{eq:TV}\n\t\t|| \\mathcal{X} ||_{\\rm TV} = ||D_h \\mathcal{X} ||_1 + ||D_v \\mathcal{X} ||_1\n\t\t\t\\end{split}\n\\end{equation} \nwhere $D_h$ and $D_v$ stand for first-order linear difference operators corresponding to the horizontal and vertical directions, respectively. These two operators are usually defined entry-wise as:\n \\begin{equation}\n\t\\begin{split}\n\t\\label{eq:dh}\n(D_h \\mathcal{X})(i,j,k) =\\left\\{\n\\begin{aligned}\n\\mathcal{X}(i,j+1,k)-\\mathcal{X}(i,j,k) ,\\quad & \\quad 1 \\leq j < v \\\\\n0, \\quad &\\quad j=v\n\\end{aligned}\n\\right.\n\\end{split}\n\\end{equation}\n\n \\begin{equation}\n\t\\begin{split}\n\t\\label{eq:dv}\n\t(D_v \\mathcal{X})(i,j,k) =\\left\\{\n\\begin{aligned}\n\\mathcal{X}(i+1,j,k)-\\mathcal{X}(i,j,k) ,\\quad & \\quad 1 \\leq i < h \\\\\n0, \\quad &\\quad i=h\n\\end{aligned}\n\\right.\n\\end{split}\n\\end{equation}\n\nTo enforce the spatial piecewise smoothness and the spectral consistency of HS images, a 3DTV norm \\cite{c35} and an SSTV norm \\cite{c19} are formulated, respectively:\n \\begin{equation}\n\t\\begin{split}\n\t\t\\label{eq:3DTV}\n\t\t|| \\mathcal{X} ||_{\\rm 3DTV} = ||D_h \\mathcal{X} ||_1 + ||D_v \\mathcal{X} ||_1 + ||D_z \\mathcal{X} ||_1\n\t\t\t\\end{split}\n\\end{equation} \n \\begin{equation}\n\t\\begin{split}\n\t\t\\label{eq:SSTV}\n||\\mathcal{X}||_{\\rm{SSTV}} = ||D_z(D_h \\mathcal{X}) ||_1 +||D_z(D_v \\mathcal{X}) ||_1\n\t\t\t\\end{split}\n\\end{equation} \nwhere $D_z$ is a 1-D finite-difference operator along the spectral direction and is defined as:\n \\begin{equation}\n\t\\begin{split}\n\t\\label{eq:dz}\n\t(D_z \\mathcal{X})(i,j,k) =\\left\\{\n\\begin{aligned}\n\\mathcal{X}(i,j,k+1)-\\mathcal{X}(i,j,k) ,\\quad & \\quad 1 \\leq k < z \\\\\n0, \\quad &\\quad k=z\n\\end{aligned}\n\\right.\n\\end{split}\n\\end{equation}\n\nConsidering the degradation model with mixed noise, Chen \\textit{et al}. \\cite{c77} integrated both the 2DTV and the 3DTV regularizations into the TNN. Fan \\textit{et al}. \\cite{c34} injected the above SSTV norm into LR tensor factorization. Wang \\textit{et al}. \\cite{wang2020hyperspectral} used an SSTV term in a multi-directional weighted LR tensor framework. Based on the different contributions of the three gradient terms to the 3DTV regularization, Wang \\textit{et al}. 
\\cite{c36} proposed the TV-regularized LR tensor decomposition (LRTDTV) method:\n \\begin{equation}\n\t\\begin{split}\n\t\\label{eq:LRTDTV}\n&\\min _{\\mathcal{X}, \\mathcal{S}, \\mathcal{N}} \\tau\\|\\mathcal{X}\\|_{\\mathrm{3DwTV}}+\\lambda\\|\\mathcal{S}\\|_{1}+\\beta\\|\\mathcal{N}\\|_{F}^{2} \\\\\n&\\text { s.t. } \\mathcal{T}=\\mathcal{X}+\\mathcal{S}+\\mathcal{N} \\\\\n&\\mathcal{X}=\\mathcal{A} \\times_{1} \\mathbf{B}_{1} \\times_{2} \\mathbf{B}_{2} \\times_{3} \\mathbf{B}_{3}, \\mathbf{B}_{i}^{T} \\mathbf{B}_{i}=\\mathbf{I}(i=1,2,3)\n\\end{split}\n\\end{equation}\nwhere the 3DwTV term is defined as:\n \\begin{equation}\n\t\\begin{split}\n\t\t\\label{eq:3DwTV}\n\t\t|| \\mathcal{X} ||_{\\rm 3DwTV} = w_1||D_h \\mathcal{X} ||_1 + w_2||D_v \\mathcal{X} ||_1 + w_3||D_z \\mathcal{X} ||_1\n\t\t\t\\end{split}\n\\end{equation} \n\nZeng \\textit{et al}. \\cite{c84} integrating the advantages of both a global $L_{1\\--2}{\\rm SSTV}$ and the local-patch TNN.\nChen \\textit{et al}. \\cite{c78} exploited the row sparse structure of gradient images and proposed a weighted group sparsity-regularized TV combined with LR Tucker decomposition (LRTDGS) for HS mixed noise removal. \n\n\nDue to mentioned TV norms just penalizing large gradient magnitudes and easily blurring real image edges, a new $l_0$ gradient minimization was proposed to sharpen image edges \\cite{c37}. Actually, $l_1$ TV norm is a relaxation form of the $l_0$ gradient. Xiong \\textit{et al}. \\cite{c81} and Wang \\textit{et al}. \\cite{8920965} applied the $l_0$ gradient constraint in an LR BT decomposition and Tucker decomposition, respectively. However, the degrees of smoothness of this $l_0$ gradient form are controlled by a parameter, without any physical meaning. To alleviate this limitation, Ono \\cite{c41} proposed a novel $l_0$ gradient projection, which directly adopts a parameter to represent the smoothing degree of the output image. Wang \\textit{et al}. \\cite{c82} extended the ${l_0}\\text{TV}$ model into an LR tensor framework (TLR-${l_0}\\text{TV}$) to preserve more information for classification tasks after HS image denoising. The optimization model of TLR-${l_0}\\text{TV}$ is formulated as:\n\\begin{equation}\n\\label{eq:mlr-l0htv}\n\\begin{aligned}\n&\\underset{\\mathcal{X},\\mathcal{S}}{\\textrm{min}}\\; \\sum_{k=1}^{m} \\alpha _k E_k(\\mathcal{X})_{\\omega} +\\lambda\\|\\mathcal{S}\\|_1 +\\mu \\|\\mathcal{T}-\\mathcal{X}-\\mathcal{S}\\|_F^2,\\\\\n&s.t. \\ ||{B} D \\mathcal{X} ||^{\\theta}_{1,0} \\leq \\gamma,\n\\end{aligned}\t\n\\end{equation}\nwhere the functions $E_k(\\mathcal{X})_{\\omega}$ are set to be $||\\textbf{X}_{(k)}||_{\\omega,*}$ in the WSWNN-$l_0$TV-based method and $||\\mathcal{X}^k||_{\\omega,*}$ in the WSWTNN-$l_0$TV-based method. Operator ${B}$ forces boundary values of gradients to be zero when $ i = h $ and $j = v $. Operator $D$ is an operator to calculate both horizontal and vertical differences. Compared with many other TV-based LR tensor decompositions, TLR-${l_0}\\text{TV}$ achieves better denoising performances for mixed noise removal of HS images. In particular, HS classification accuracy is improved more effectively after denoising by TLR-l0TV.\n\n\n$\\textbf{(3) Subspace representation}$\n\nAs Fig. \\ref{fig:subspace} shows, an unfolding matrix $\\mathbf{X}$ of a denoised HS image can be projected into a orthogonal subspace, i.e., $\\mathbf{X} = \\mathbf{E}\\mathbf{Z}$. 
$\\mathbf{E} \\in \\mathbb{R}^{z \\times l}$ represents the basis of the subspace $S_l$ and $\\mathbf{Z} \\in \\mathbb{R}^{l \\times hv}$ denotes the representation coefficient of $\\mathbf{X}$ with respect to $\\mathbf{E}$. $\\mathbf{E}$ is reasonably assumed to be orthogonal, i.e., $\\mathbf{E}^{T} \\mathbf{E} =\\mathbf{I}$ \\cite{6736073}. \n\n\\begin{figure}[htb]\n\t\\begin{center}\n\t\t\\includegraphics[height=4cm]{subspace.pdf}\n\t\\end{center}\n\t\\caption[nonlocal]{A schematic diagram of the subspace representation. }\n\t\\label{fig:subspace}\n\\end{figure}\n\nCao \\textit{et al}. \\cite{c72} combined a LR and sparse factorization with the non-local tensor constraint of subspace coefficients, dubbed SNLRSF. Each spectral band of an observed HS image $\\mathcal{T} \\in \\mathbb{R}^{h \\times v \\times z}$ is reshaped as each row of an HS unfolding matrix $\\mathbf{T} \\in \\mathbb{R}^{z \\times hv}$. The spectral vectors is assumed to lie in a $l$-dimensional subspace $S_l$ ($l \\ll z$), and the optimization model can be written as\n\\begin{equation}\n\\label{eq:SNLRSF}\n\\begin{aligned}\n&\\underset{\\mathbf{E}, \\mathbf{Z}, \\mathcal{L}_{i}, \\mathbf{S}}{\\arg \\min } \\frac{1}{2}\\|\\mathbf{T}-\\mathbf{E Z}-\\mathbf{S}\\|_{F}^{2}+\\lambda_{2}\\|\\mathbf{S}\\|_{1} \\\\\n&+\\lambda_{1} \\sum_{i}\\left(\\frac{1}{\\delta_{i}^{2}}\\left\\|\\Re_{i} \\mathbf{Z}-\\mathcal{L}_{i}\\right\\|_{F}^{2}+ ||\\mathcal{L}_i ||_{\\rm TTN} \\right) \\quad \\text { s.t. } \\quad \\mathbf{E}^{T} \\mathbf{E}=\\mathbf{I}\n\\end{aligned}\t\n\\end{equation}\nwhere $\\Re_{i} \\mathbf{Z}$ is divided into three steps: 1) reshape the reduced-dimensionality coefficient image $\\mathbf{Z} \\in \\mathbb{R}^{l \\times hv}$ as a tensor $\\mathbf{Z} \\in \\mathbb{R}^{h \\times v \\times l}$; 2) segment the tensor $\\mathbf{Z}$ as an overlapped patch tensor $\\mathbf{Z}_i \\in \\mathbb{R}^{p \\times p \\times l}$; and 3) cluster $d$ similar patches in a neighborhood area by computing Euclidean distance.\n\n\n\nFrom one side, a spectral LR tensor model is explored according to the fact that spectral signatures of HS images lie in a low-dimensional subspace. From another side, a non-local LR factorization is employed to take the non-local similarity along the spatial direction into consideration. Following the line of SNLRSF, Zheng \\textit{et al}. \\cite{c86} employed LR matrix factorization to decouple spatial and spectral models. The group-sparse structure of HS images is introduced on spatial difference images (SpatDIs). A continuity constraint was applied in the spectral factor to promote the group sparsity of SpatDIs and the spectral continuity of HS images. Sun \\textit{et al}. \\cite{c88} projected the noisy HS images into a non-local tensor subspace spanned by a spectral difference continuous basis. The continuity of the restored HS data is significantly promoted by this difference regularization.\n\n\\subsubsection{Experimental results and analysis}\n\\label{sect:experiment_denoising}\n\\begin{figure*}[htb]\n\t\\begin{center}\n\t\t\\includegraphics[height=3.3cm]{gaussian-result.pdf}\n\t\\end{center}\n\t\\caption[pavia80]{The different methods for Gaussian noise removal. (a) Original HS image, (b) Gaussian noise, (c) LRTA, (d) TDL, (e) ITS, (f) LLRT, (g) NGmeet. }\n\t\\label{fig:Visio-denoising_gaussian}\n\\end{figure*}\n\n\n\n\n\\begin{figure*}[htb]\n\t\\begin{center}\n\t\t\\includegraphics[height=3.5cm]{mixed-result.pdf}\n\t\\end{center}\n\t\\caption[pavia80]{The different methods for mixed noise removal. 
(a) Original HS image, (b) Mixed noise, (c) LRTR, (d) LRTDTV, (e) LRTDGS, (f) 3DTNN, (g) TLR-$L_0$TV }\n\t\\label{fig:Visio-denoising_mixed}\n\\end{figure*}\n\nAn HS subimage is selected from the Pavia University data set and is normalized to $[0,1]$. The zero-mean Gaussian noise of noise variance $0.12$ is added into each band and shown in Fig. \\ref{fig:Visio-denoising_gaussian} (b). In a mixed noise case, the same Gaussian noise is also adopted. Each band is corrupted by the salt and pepper noise with a proportion of $0-20\\%$. Dead-lines are randomly added from band $61$ to band $80$, with the width of stripes generated from $1$ to $3$, and the number of stripes randomly selected from $3$ to $10$. In addition, bands $61-70$ are corrupted by some stripes with the number randomly selected from $20$ to $40$. Four different quantitative quality indices are chosen: the mean of peak signal-to-noise ratio (MPSNR), the mean of structural similarity (MSSIM), relative dimensional global error in synthesis (ERGAS), and the mean spectral angle distance (MSAD). Larger MPSNR and MSSIM values indicate better-denoised image quality. These two indices pay attention to the restoration precision of spatial pixels. In contrast, smaller ERGAS and MSAD values illustrate better performances of denoised results. \n\nFor Gaussian noise removal, all the competing approaches achieve good results to some degree in Fig. \\ref{fig:Visio-denoising_gaussian}, in which the enlarged subregions are delineated in red boxes. But residual noise remains in the result denoised by LRTA. Compared with TDL, ITS fails to preserve detailed spatial information. LLRT provides a rather similar result with NGmeet. Consistent with the visual observation, NGmeet outperforms the other methods and obtains the highest metric values among the denoising models in Tab. \\ref{tab:tab-gaussiannoise}. The non-local LR tensor methods including ITSReg, TDL, and LLRT gain better performances than LRTA, due to the formers exploiting two types of HS prior knowledge. The LRTA method is the fastest one among all the competing algorithms since LRTA just considers the spectral correlation. \n\nFig. \\ref{fig:Visio-denoising_mixed} shows the restoration results by five different methods under a heavy noise case. Dead-lines remaining in the images denoised by LRTR and 3DTNN are more obvious than the ones restored by LRTDTV and LRTDGS. The LR tensor-based model is employed in LRTR and 3DTNN, yet LRTDTV, LRTDGS, and TLR-$L_0$TV considered two kinds of prior knowledge: spectral correlation and spatial-spectral smoothness. LRTDTV and LRTDGS are more sensitive to dead-lines than TLR-$L_0$TV, leading to more or fewer artifacts in the denoised results. TLR-$L_0$TV removes most of the mixed noise and preserves image details like texture information and edges. To further evaluate the differences among competing denoising methods, we calculate four quality indices and show them in Tab. \\ref{tab:tab-mixednoise}, with the best results in bold. TLR-$L_0$TV obtains the highest denoising performance among all the approaches. For MPSNR, LRTDTV and LRTDGS are slightly larger than 3DTNN, whereas the SSIM and ERGAS values of LRTDTV and LRTDGS are better than those of 3DTNN. LRTR and LRTDGS are the first and second faster, but they hardly handle the complex mixed noise case with some dead-lines retaining. 
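\n\nFor reference, the MPSNR and MSAD scores reported in Tabs. \\ref{tab:tab-gaussiannoise} and \\ref{tab:tab-mixednoise} can be computed as in the following NumPy sketch (MPSNR as the band-wise PSNR average and MSAD as the per-pixel spectral angle average); normalization details may differ slightly across the original implementations.\n\\begin{verbatim}\nimport numpy as np\n\ndef mpsnr(ref, est, peak=1.0):\n    # mean PSNR over bands; ref and est are (h, v, z) cubes in [0, peak]\n    mse = np.mean((ref - est) ** 2, axis=(0, 1))\n    return float(np.mean(10.0 * np.log10(peak ** 2 / mse)))\n\ndef msad(ref, est, eps=1e-12):\n    # mean spectral angle (degrees) between reference and estimated spectra\n    a = ref.reshape(-1, ref.shape[2]); b = est.reshape(-1, est.shape[2])\n    cos = (a * b).sum(1) / (np.linalg.norm(a, axis=1)\n          * np.linalg.norm(b, axis=1) + eps)\n    return float(np.degrees(np.mean(np.arccos(np.clip(cos, -1.0, 1.0)))))\n\\end{verbatim}\n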
\n\n\n\n\\begin{table}[!htbp]\n\\centering\n\\caption{Quantitative comparison of different selected algorithms for Gaussian noise removal.}\n\\begin{spacing}{1.0}\n\\scalebox{0.93}{\n\\begin{tabular}{c|ccccc}\n\\cline{1-6}\n& \\multicolumn{5}{|c}{Gaussian noise removal} \\\\\n\\hline\nIndex & LRTA & TDL & ITS & LLRT & NGmeet \\\\\n\\hline\nPSNR & 32.14 & 34.54 & 34.38 & \\underline{35.96} & $\\mathbf{37.06}$ \\\\\n \nSSIM & 0.9097 & {0.9484} & 0.9466 & \\underline{0.9637} & $\\mathbf{0.9707}$\\\\\n \nERGAS & 5.7044 & 4.3392 & 4.3981 & \\underline{4.0462} & $\\mathbf{3.2344}$\\\\\n \nMSAD & 6.6720 & 5.0701 & 5.0912 & \\underline{4.2402} & $\\mathbf{3.7804}$ \\\\\n \nTIME(s) & $ \\mathbf{1.48} $ & \\underline{13.77} & 650.49 & 506.84 & 29.58 \\\\\n\\hline\n\\end{tabular}\n}\n\\label{tab:tab-gaussiannoise}\n\\end{spacing}\n\\end{table}\n \n \n \\begin{table}[!htbp]\n\\centering\n\\caption{Quantitative comparison of different selected algorithms for mixed noise removal.}\n\\begin{spacing}{1.0}\n\\scalebox{0.8}{\n\\begin{tabular}{c|ccccc}\n\\cline{1-6}\n& \\multicolumn{5}{|c}{Mixed noise removal} \\\\\n\\hline\nIndex & LRTR & LRTDTV & LRTDGS & 3DTNN & TLR-$L_0$TV\\\\\n\\hline\nMPSNR & 26.89 & \\underline{30.76} & \\underline{30.76} & 30.20 & $\\mathbf{31.59}$\\\\\n \nMSSIM & 0.8157 & 0.8821 & 0.7852 & \\underline{0.8945} & $\\mathbf{0.8973}$ \\\\\n \nERGAS & 10.8842 & 7.8154 & 9.5527 & \\underline{7.4915} & $\\mathbf{7.1748}$\\\\\n \nMSAD & 9.9624 & \\underline{7.3689} & 10.4568 & 7.4712 & $\\mathbf{7.2055}$\\\\\n \nTIME(s) & $\\mathbf{19.67}$ & 35.29 & \\underline{25.11} & 44.25 & 325.67 \\\\\n\\hline\n\\end{tabular}\n}\n\\label{tab:tab-mixednoise}\n\\end{spacing}\n\\end{table}\n\n\n \\iffalse \n \\begin{table*}[!htbp]\n\\centering\n\\caption{A quantitative comparison of different selected algorithms for Gaussian \/ mixed noise removal.}\n\\begin{spacing}{1.0}\n\\scalebox{0.8}{\n\\begin{tabular}{c|ccccc|ccccc}\n\\cline{1-11}\n& \\multicolumn{5}{|c}{Gaussian noise removal} & \\multicolumn{5}{|c}{Mixed noise removal} \\\\\n\\hline\nIndex & LRTA & TDL & ITS & LLRT & NGmeet & LRTR & LRTDTV & LRTDGS & 3DTNN & TLR-$L_0$TV\\\\\nMPSNR & 32.14 & 34.54 & 34.38 & \\underline{35.96} & $\\mathbf{37.06}$ & 26.89 & \\underline{30.76} & \\underline{30.76} & 30.20 & $\\mathbf{31.59}$\\\\\n \nMSSIM & 0.9097 & \\underline{0.9484} & 0.9466 & 0.9637 & $\\mathbf{0.9707}$ & 0.8157 & 0.8821 & 0.7852 & \\underline{0.8945} & $\\mathbf{0.8973}$ \\\\\n \nERGAS & 5.7044 & 4.3392 & 4.3981 & \\underline{4.0462} & $\\mathbf{3.2344}$ & 10.8842 & 7.8154 & 9.5527 & \\underline{7.4915} & $\\mathbf{7.1748}$\\\\\n \nMSAD & 6.6720 & 5.0701 & 5.0912 & \\underline{4.2402} & $\\mathbf{3.7804}$ & 9.9624 & \\underline{7.3689} & 10.4568 & 7.4712 & $\\mathbf{7.2055}$\\\\\n \nTIME(s) & $ \\mathbf{1.48} $ & \\underline{6.14} & 650.49 & 506.84 & 29.58 & $ \\mathbf{19.67} $ & 35.29 & \\underline{25.11} & 44.25 & 325.67 \\\\\n\\hline\n\\end{tabular}\n}\n\\label{tab:tab3}\n\\end{spacing}\n\\end{table*}\n\\fi\n\n\n\n\n\\begin{figure*}[htb]\n\t\\begin{center}\n\t\t\\includegraphics[height=3.1cm]{Visio-deblur.pdf}\n\t\\end{center}\n\t\\caption[wdc]{ OLRT for different blur cases. (a) Original WDC image, (b) the light Gaussian blur on WDC ($8 \\times 8$, Sigma = 3) and corresponding deblurred image (c), (d) the heavy Gaussian blur on WDC ($17 \\times 17$, Sigma = 7) and corresponding deblurred image (e), (f) the Uniform blur and corresponding deblurred image (g). 
}\n\t\\label{fig:Visio-deblur}\n\\end{figure*}\n\n\\begin{table*}[!htbp]\n\\centering\n\\caption{A quantitative evaluation of OLRT for different blur cases.}\n\\begin{spacing}{1.0}\n\\scalebox{0.93}{\n\\begin{tabular}{c|ccccc}\n\\cline{1-6}\nBlur cases & MPSNR & MSSIM & ERGAS & MSAD & TIME(s) \\\\\n\\hline\nGaussian blur (8*8, Sigma = 3) & 43.50 & 0.9912 & 1.5910 & 1.9486 &314.50 \\\\\nGaussian blur (17*17, Sigma = 7) & 39.63 & 0.9807 & 2.5407 & 3.0819 &305.70 \\\\\n Uniform blur & 39.39 & 0.9784 & 2.9332 & 3.8355 & 314.28 \\\\\n \n\\hline\n\\end{tabular}\n}\n\\label{tab:tab-deblur}\n\\end{spacing}\n\\end{table*}\n\n\n\\subsection{HS Deblurrring} \n\\label{sect:HS_Deblurring}\n\nThe atmospheric turbulence or fundamental deviation of some imaging systems often blur HS images during the data acquisition process, which unfortunately damages the high-frequency components and the edge features of HS images. HS deblurring aims to recover sharp latent images from blurred ones. Chang \\textit{et al}. \\cite{c117} discussed the LR correlations along HS spatial, spectral, and non-local similarity modes and proposed a unified optimal LR tensor (OLRT) framework for multiple HS restoration tasks. But a matrix nuclear norm is used to constrain the LR property of unfolding non-local patch groups. Consequently, Chang \\textit{et al}. \\cite{c116} proposed a weighted LR tensor recovery (WLRTR) algorithm with a reweighted strategy. Considering spectral correlation and non-local similarity, the HS deblurring optimization problem can be formulated as follows\n\\begin{equation}\n \\begin{aligned}\n \\label{eq:WLRTR}\n &\\underset{\\mathcal{X}, \\mathcal{A}_{i}, \\mathbf{B}_{j}}{ \\min } \\frac{1}{2}\\|\\mathcal{T}-M( \\mathcal{X} ) \\|_{F}^{2}+\\\\\n &\\eta \\sum_{i}\\left(\\left\\|\\mathcal{R}_{i} \\mathcal{X}-\\mathcal{A}_{i} \\times_{1} \\mathbf{B}_{1} \\times_{2} \\mathbf{B}_{2} \\times_{3} \\mathbf{B}_{3}\\right\\|_{F}^{2}\\right.\\left.+\\sigma_{i}^{2}\\left\\|\\boldsymbol{w}_{i} \\circ \\mathcal{A}_{i}\\right\\|_{1}\\right) \n\\end{aligned}\n\\end{equation}\nwhere $w_i$ is a reweighting factor inversely proportional to singular values of $\\mathcal{L}_i$ with $\\mathcal{L}_i = \\mathcal{A}_{i} \\times_{1} \\mathbf{B}_{1} \\times_{2} \\mathbf{B}_{2} \\times_{3} \\mathbf{B}_{3}$, and higher-order SVD (HOSVD) is applied to see the different sparsities of higher-order singular values, i.e., LR property. The last term $\\|\\mathcal{Y}-M( \\mathcal{X} ) \\|_{F}^{2}$ is a data fidelity item, which can be replaced by $ || \\mathcal{T} - M(\\mathcal{X}) - \\mathcal{S} - \\mathcal{N} ||^2_F$ for HS inpainting, destriping, and denoising problems. \n\n\nAn experimental example is given to display the deblurred performances of OLRT for the Gaussian blur with different levels and the uniform blur on the WDC data set. Fig. \\ref{fig:Visio-deblur} shows the visual results under different blur cases. The specific texture information is hardly distinguished in the three blurred images shown in Fig. \\ref{fig:Visio-deblur} (b), (d), and (f). The optimal LR tensor prior knowledge of OLRT reliably reflects the intrinsic structural correlation of HS images, which benefits the recovery of structural information and image edges. The quantitative results under different blur cases are reported in Tab. \\ref{tab:tab-deblur}.\n\n\\subsection{HS Inpainting}\n\\label{sect:HS_Inpainting}\n\n\nIn this section, we introduce and discuss LR tensor-based methods for HS inpainting. 
These methods are also suitable for missing data recovery of high-dimensional RS (HDRS) images. \nRS images such as HS, MS, and multi-temporal images often from missing data problems, such as dead pixels, thick clouds, and cloud shadows, as shown in Fig. \\ref{fig:Visio-missingdata}. The goal of inpainting is to estimate the missing data from observed images, which can be regarded as a tensor completion problem. \n\\begin{figure}[htb]\n\t\\begin{center}\n\t\t\\includegraphics[height=3.2cm]{missing-data.pdf}\n\t\\end{center}\n\t\\caption[md]{ Examples of RS data with missing information. (a) Reflectance of Aqua MODIS band 6 with sensor failure. (b) Digital number values of Landsat ETM+ with the SLC-off problem. (c) Digital number\nvalues of a Landsat image with cloud obscuration. }\n\t\\label{fig:Visio-missingdata}\n\\end{figure}\n\nLR tensor completion theory has been successfully applied for HS inpainting \\cite{c117,c116,c118,c119,c120,c130,c131}. Liu \\textit{et al}. \\cite{c119} suggested a trace norm regularized CP decomposition for missing data recovery. Ng \\textit{et al}. \\cite{c121} learned from high-accuracy LR tensor completion (HaLRTC) \\cite{c120} for recovering the missing data of HDRS and proposed an adaptive weighted TC (AWTC) method. The proposed AWTC model is expressed as\n\\begin{equation}\n \\begin{aligned}\n \\label{eq:AWTC}\n \\underset{\\mathcal{X}}{ \\min } \\frac{\\eta}{2} || \\mathcal{T} - M(\\mathcal{X}) ||^2_F + \\sum_{i=1}^3 w_i || \\mathbf{X}_{(i)}||_*\n\\end{aligned}\n\\end{equation} \nwhere $w_i$ is well-designed parameter related to the singular values of $\\mathbf{X}_{(i)}$.\nXie \\textit{et al}. \\cite{c130} proposed a LR regularization-based TC (LRRTC), fusing the logarithm of the determinant with a TTN. With the definitions of a new TNN and its t-SVD \\cite{c123}, Wang \\textit{et al}. \\cite{c124} and Srindhuna \\textit{et al}. \\cite{c125} proposed new low-tuba-rank TC methods to estimate the missing values in HDRS images. Consequently, a novel TR decomposition is formulated to represent a high-dimensional tensor by circular multi-linear products on a sequence of third-order tensors \\cite{c126}. Based on the TR theory, He \\textit{et al}. \\cite{c128} fused the spatial TV into the TR framework and developed two solving algorithms: ALM and ALS. Similarly, Wang \\textit{et al}. \\cite{c129} incorporated a 3DTV regularization into a novel weighted TR decomposition framework. The proposed TV-WTV model is formulated as:\n\\begin{equation}\n\\label{eq:TVWTR}\n\\begin{aligned}\n&\\underset{\\mathcal{X},[\\mathcal{G}]}{\\textrm{min}}\\; \\sum^N_{n=1}\\sum^3_{i=1} \\theta_i ||\\textbf{G}^{(n)}_{(i)}||_*+ \\frac{\\lambda}{2}||\\mathcal{X} - \\Phi([\\mathcal{G}])||^2_F+\\tau||\\mathcal{X}||_{\\rm 3DTV} \\\\\n&s.t.\\; \\mathcal{X}_{\\Omega}=\\mathcal{T}_{\\Omega}\n\\end{aligned}\n\\end{equation}\n\n\n\nFor HS image inpainting tasks, we test three methods: HaLRTC, LRTC, and TVWTR on the random missing data problem and the text removal problem. A subimage is chosen from the Houston 2013 data set for our experimental study. Fig. \\ref{fig:Visio-inpainting1} shows the results of the Houston2013 data set before and after recovery under ratio = $80\\%$. Although missing pixels disappear in the results of HaLRTC and LRTC, these methods produce more or fewer artifacts in the top-right corner of the zoomed area. The TVWTR method performs the best among all the compared algorithms and recovers the details like the red square center of the zoom area. In Fig. 
\\ref{fig:Visio-inpainting2}, original HS bands are corrupted by different texts that do not appear randomly as in previous cases. The text corruption is eliminated by three tensor decomposition-based algorithms. Few text artifacts exist in the enlarged area of LRTC. Due to the consideration of the spectral correlation and the spatial-spectral smoothness, TVWTR provides the best result with reconstructing most information of the original image.\n\nThe corresponding quantitative results of two inpainting tasks are reported in Tab. \\ref{tab:tab-inpainting}. Taking account of two types of prior knowledge, TVWTR gives a significantly fortified performance under two cases, as compared with the other competing methods. HaLRTC and LRTC are the fastest and second-fastest among all the comparing methods.\n\n\\begin{figure*}[htb]\n\t\\begin{center}\n\t\t\\includegraphics[height=6cm]{inpainting_results.pdf}\n\t\\end{center}\n\t\\caption[houston]{ The inpainting results by different methods under $80\\%$ missing ratio. (a) Original Houston2013 image, (b) Missing, (c) HaLRTC, (d) LRTC, (e) TVWTR. }\n\t\\label{fig:Visio-inpainting1}\n\\end{figure*}\n\n\\begin{figure*}[htb]\n\t\\begin{center}\n\t\t\\includegraphics[height=6cm]{inpainting_text_results.pdf}\n\t\\end{center}\n\t\\caption[houston]{ Inpainting results by different methods for the text removal case. (a) Original Houston2013 image, (b) Missing, (c) HaLRTC, (d) LRTC, (e) TVWTR. }\n\t\\label{fig:Visio-inpainting2}\n\\end{figure*}\n\n\\begin{table*}[!htbp]\n\\centering\n\\caption{A quantitative evaluation of different methods for inpainting.}\n\\begin{spacing}{1.0}\n\\scalebox{0.93}{\n\\begin{tabular}{c|ccc|ccc}\n\\cline{1-7}\n& \\multicolumn{3}{c}{Missing data ($80\\%$)} & \\multicolumn{3}{|c}{Text removal} \\\\\n\\hline\nIndex & HaLRTC & LRTC & TVWTR& HaLRTC & LRTC & TVWTR \\\\\n\\hline\n MPSNR & 36.54 & 38.49 & $\\mathbf{49.05}$ & 50.29 & 53.39& $\\mathbf{57.31}$ \\\\\n MSSIM & 0.9391 & 0.9555 & $\\mathbf{0.9947}$ & 0.9965 & 0.9975 & $\\mathbf{0.9991}$ \\\\\n ERGAS & 4.5703 & 3.7704 & $\\mathbf{1.0211}$ & 1.0108 & 1.1345 & $\\mathbf{0.4012}$\\\\ \n MSAD & 5.3845 & 4.6162 & $\\mathbf{1.3792}$ & 1.2424 &1.1134 & $\\mathbf{0.5497}$\\\\\n TIME(s) &$\\mathbf{9.34}$ & 23.74 & 347.46 & $\\mathbf{8.38}$ & 24.90 & 345.66\\\\\n \n\\hline\n\\end{tabular}\n}\n\\label{tab:tab-inpainting}\n\\end{spacing}\n\\end{table*}\n\n\n\n\n\\subsection{HS Destriping}\n\\label{sect:HS_Destriping}\nIn the past three decades, plenty of airborne and space-borne imaging spectrometers have adopted a whiskbroom sensor or a pushbroom sensor commonly. The former one is built with linear charge-coupled device (CCD) detector arrays. The corresponding HS imaging systems scans the target pixel by pixel and then acquires a spatial image by track scanning with a scan mirror forward motion \\cite{c134}. The latter one contains area CCD arrays. A pushbroom sensor scans the target line by line, one direction of which is utilized for spatial imaging, and the other for spectral imaging. The incoherence of the system mechanical motion and the failure of CCD arrays lead to the non-uniform response of neighboring detectors, mainly generating stripe noise. The periodic or noneriodic stripes generally distributed along the scanning direction have a certain width and length. The values of stripes are brighter or darker than their surrounding pixels. The inherent property of stripes, i.e., $g(\\mathcal{S})$ should be considered in the HS destriping model. \n\nChen \\textit{et al}. 
\\cite{c132} were the first to develop an LR tensor decomposition for an MS image destriping task. The high correlation of the stripe component along the spatial domain is depicted by an LR Tucker decomposition. The final minimization model for solving the destriping problem is expressed as follows:\n \\begin{equation}\n \\begin{aligned}\n\\min _{\\mathcal{X}, \\mathcal{S}, \\mathcal{A}, \\mathbf{B}_{i}} & \\frac{1}{2}\\|\\mathcal{Y}-\\mathcal{X}-\\mathcal{S}\\|_{F}^{2}+\\eta_{1}\\left\\|D_{h} \\mathcal{X}\\right\\|_{1}+\\eta_{2}\\left\\|D_{z} \\mathcal{X}\\right\\|_{1} \\\\\n&+\\lambda\\|\\mathcal{S}\\|_{2,1} \\\\\n\\text { s.t. } \\mathcal{S}=& \\mathcal{A} \\times_{1} \\mathbf{B}_{1} \\times_{2} \\mathbf{B}_{2} \\times_{3} \\mathbf{B}_{3}, \\mathbf{B}_{i}^{T} \\mathbf{B}_{i}=\\mathbf{I}(i=1,2,3)\n\\end{aligned}\n\\end{equation}\nwhere $\\|\\mathcal{S}\\|_{2,1} = \\sum^z_{k=1} \\sum^v_{j=1} \\sqrt{\\sum^h_{i=1} \\mathcal{S}_{i,j,k}^2 } $.\n\nCao \\textit{et al}. \\cite{c135} implemented the destriping task by the matrix nuclear norm of stripes and the non-local similarity of image patches in the spatio-spectral volumes. WLRTR and OLRT \\cite{c117,c116} are also effective for an HS destriping task. Chang \\textit{et al}. \\cite{c117} simultaneously considered the LR properties of the stripe cubes and non-local patches. The OLRT algorithm is reformulated for modeling both the recovered and stripe components as follows\n\\begin{equation}\n\\begin{aligned}\n& \\min _{\\mathcal{X}, \\mathcal{L}_{i}^{j}, \\mathcal{S}} \\frac{1}{2}\\|\\mathcal{T}-\\mathcal{X}-\\mathcal{S}\\|_{F}^{2}+\\rho \\operatorname{rank}_{1}(\\mathcal{S}) \\\\\n&+\\omega_{j} \\sum_{j} \\sum_{i}(\\frac{1}{\\delta_{i}^{2}}\\|\\mathcal{R}_{i}^{j} \n\\mathcal{X}-\\mathcal{L}_{i}^{j}\\|_{F}^{2}+\\operatorname{rank}_{j} (\\mathcal{L}_{i}^{j}) )\n\\end{aligned}\n\\end{equation}\n\nIn \\cite{c133}, an HS destriping model is transformed into a tensor framework, in which the tensor-based non-convex sparse model uses both $l_0$ and $l_1$ sparse priors to estimate stripes from noisy images.\n\n\n\\begin{figure*}[htb]\n\t\\begin{center}\n\t\t\\includegraphics[height=6cm]{destriping.pdf}\n\t\\end{center}\n\t\\caption[houston2018]{The destriping results by different methods. (a) Original Houston 2018 image, (b) nonperiodic stripes, (c) WLRTR, (d) LRTD. }\n\t\\label{fig:Visio-destriping}\n\\end{figure*}\n\n\n\n \n\nWe take an example with nonperiodic stripes of intensity 50 and stripe ratio 0.2, which is presented in Fig. \\ref{fig:Visio-destriping} (b). Fig. \\ref{fig:Visio-destriping} (c) and (d) display the destriping results of WLRTR and LRTD. The stripes are estimated and removed correctly by WLRTR and LRTD since both models consider non-local similarity and spectral correlation. Considering the third type of prior knowledge -- spatial and spectral smoothness -- LRTD preserves more details, such as clear edges, than WLRTR. The quantitative comparison is in accordance with the above-mentioned visual results. Tab. \\ref{tab:tab-destriping} reports the destriping results in terms of four quantitative indices. 
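For reference, two of the indices used throughout our experimental tables, MPSNR and MSAD, can be computed as in the following minimal and purely illustrative Python\/numpy sketch (the function and variable names are ours, MPSNR is taken as the band-wise PSNR averaged over bands with an assumed peak value of 1, and MSAD as the per-pixel spectral angle in degrees averaged over pixels):\n\\begin{verbatim}\nimport numpy as np\n\ndef mpsnr(ref, est, peak=1.0):\n    # mean of band-wise PSNR values for two H x V x B cubes\n    mse = ((ref - est) ** 2).mean(axis=(0, 1))\n    return (10 * np.log10(peak ** 2 / mse)).mean()\n\ndef msad(ref, est, eps=1e-12):\n    # mean spectral angle (in degrees) over all pixels\n    r = ref.reshape(-1, ref.shape[-1])\n    e = est.reshape(-1, est.shape[-1])\n    cos = (r * e).sum(1) / (np.linalg.norm(r, axis=1)\n                            * np.linalg.norm(e, axis=1) + eps)\n    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))).mean()\n\\end{verbatim}\n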
LRTD achieves higher evaluation values than WLRTR.\n\n \\begin{table}[!htbp]\n\\centering\n\\caption{A quantitative comparison of different selected algorithms for destriping.}\n\\begin{spacing}{1.0}\n\\scalebox{0.8}{\n\\begin{tabular}{c|cccc}\n\\hline\n & MPSNR & MSSIM & ERGAS & MSAD \\\\\n\\hline\nWLRTR & 39.77 & 0.9844 &7.7883 & 5.1765\\\\\nLRTD & $\\mathbf{47.87}$ & $\\mathbf{0.9912}$ & $\\mathbf{3.8388}$ & $\\mathbf{1.1276}$\\\\\n\n\\hline\n\\end{tabular}\n}\n\\label{tab:tab-destriping}\n\\end{spacing}\n\\end{table}\n\\subsection{Future challenges}\n\\label{sect:challenges_restoration}\n\nVarious tensor optimization models have been developed to solve the HS restoration problem and show impressive performances. Nevertheless, these models can still be improved in future work:\n\n\nAs prior information is effective in guiding the search for the optimal solution, novel tensor-based approaches should utilize as many types of priors as possible. Therein, how best to design a unified framework that simultaneously exploits non-local similarity, spatial and spectral smoothness, and subspace representation is a crucial challenge.\n\nThe addition of different regularizations leads to the manual adjustment of the corresponding parameters. Therefore, a noise-adjusted parameter pre-definition strategy needs to be studied to enhance the robustness of tensor optimization models. \n\n It is worth noting that we are usually blind to the location of the stripes or clouds. The locations of the stripes or mixed noise between neighboring bands are often different and need to be estimated. How best to predict the degradation positions and design blind estimation algorithms deserves further study in future research.\n\nSince HS images contain hundreds of spectral bands, the high dimensionality of an HS tensor leads to a heavy computational burden. The complexity of tensorial models should be reduced while guaranteeing the efficiency and accuracy of HS restoration. \n\n\\section{HS CS}\n\\label{sect:HSI-CS}\nTraditional HS imaging techniques are based on the Nyquist sampling theory for data acquisition. A signal must be sampled at a rate greater than twice its maximum frequency component to ensure unambiguous data \\cite{c108,c109}. This sampling scheme requires substantial computation and storage. Meanwhile, the ever-increasing spectral resolution of HS images also leads to the high expense and low efficiency of transmission from airborne or space-borne platforms to ground stations. The goal of CS is to compressively sample and reconstruct signals based on sparse representation to reduce the cost of signal storage and transmission. As illustrated in Fig. \\ref{fig:CSfr}, based on the image-forming principle of a single-pixel camera, which uses a digital micromirror device (DMD) to accomplish the CS sampling, an HS sensor can span the necessary wavelength range and record the intensity of the light reflected by the modulator at each wavelength \\cite{c215}. Since the CS rate can be far lower than the Nyquist rate, the limitation of high cost caused by the sheer volume of HS data will be alleviated. A contradiction usually exists between the massive HS data and the limited bandwidth of the satellite transmission channel. HS images can be compressed first to reduce the pressure on channel transmission. 
Therefore, the HS CS technique is conducive to onboard burst transmission and real-time processing in RS \\cite{c223}.\n\n\\begin{figure*}[htb]\n\t\\begin{center}\n\t\t\\includegraphics[height=6cm]{Visio-cs-fra.pdf}\n\t\\end{center}\n\t\\caption[restoration]{A schematic diagram of HS CS. }\n\t\\label{fig:CSfr}\n\\end{figure*}\n\nCS of HS images aims to precisely reconstruct HS data $\\mathcal{X} \\in \\mathbb{R}^{h \\times v \\times z}$ from a few compressive measurements $\\textbf{y} \\in \\mathbb{R}^m$ by effective HS CS algorithms. The compressive measurements $\\textbf{y}$ can be formulated by:\n\\begin{equation}\n\\label{eq:y1}\n\\textbf{y} = \\Psi (\\mathcal{X})\n\\end{equation}\nwhere $\\Psi$ is a measurement operator instantiated as $\\Psi= \\textbf{D} \\cdot \\textbf{H} \\cdot \\textbf{P}$, where $\\textbf{D}$ is a random downsampling operator, $\\textbf{H}$ is a random permutation matrix, $\\textbf{P}$ is a Walsh-Hadamard transform, and the mapping of $\\Psi$ is $ \\mathbb{R}^{h \\times v \\times z} \\rightarrow \\mathbb{R}^m$ (the sampling ratio is $m\/(hvz)$). The exact reconstruction of $\\mathcal{X}$ from $\\mathbf{y}$ is guaranteed by CS theory when $\\Psi$ satisfies the restricted isometry property (RIP). This compressive operator has been successfully adopted for various HS CS tasks \\cite{c100,c92,c93,c94}. However, operator $\\Psi$ can be replaced according to practical\ndemands. Apparently, it is an ill-posed inverse problem to directly recover $\\mathcal{X}$ from Eq. (\\ref{eq:y1}). Extra prior information needs to be investigated to optimize the HS CS problem. The HS CS task can be generalized as the following optimization problem:\n\\begin{equation}\n\\label{eq:hs_cs}\n\\begin{aligned}\n\\underset{\\mathcal{X}}{\\textrm{min}}\\;&\\|\\textbf{y}-\\Psi (\\mathcal{X})\\|_F^2 + \\lambda F(\\mathcal{X}),\n\\end{aligned}\t\n\\end{equation}\nwhere $F(\\mathcal{X})$ denotes the additional regularization term that exploits different types of HS prior information such as spectral correlation, spatial and spectral smoothness, and non-local similarity. \n\n\n\n\\subsection{Tensor decomposition-based HS CS reconstruction methods}\n\\label{sect:TD-CS}\n\n\n\nTucker decomposition-based methods have attracted wide attention for HS CS. Tucker decomposition was first introduced into the compression of HS images to constrain the discrete wavelet transform coefficients of spectral bands \\cite{c96}. Most of the follow-up works study Tucker decomposition-based variants for HS CS \\cite{c100,c97,c98,c99,c95,c214}. \n\n$\\textbf{(1) Tucker decomposition with TV}$\n\nIn one earlier work \\cite{c100}, a 2-D TV norm was penalized within an LR matrix framework, which robustly recovers a large-size HS image when the sampling ratio is only 3\\%. A spectral LR model is rarely sufficient to depict the inherent property of HS images. \nJoint tensor Tucker decomposition with a weighted 3-D TV (JTenRe3-DTV) \\cite{c97} injected a weighted 3-D TV into the LR Tucker decomposition framework to model the global spectral correlation and local spatial-spectral smoothness of an HS image. Considering the disturbance $\\mathcal{E}$, the JTenRe3-DTV optimization problem for HS CS can be expressed as\n\\begin{equation}\n\\label{eq:TDTV_cs}\n\\begin{aligned}\n&\\min _{\\mathcal{X}, \\mathcal{E}, \\mathcal{C}, \\mathbf{U}_{i}} \\frac{1}{2}\\|\\mathcal{E}\\|_{F}^{2}+\\lambda\\|\\mathcal{X}\\|_{3{\\rm DwTV}} \\\\\n&\\text { s.t. 
} \\mathbf{y}=\\Psi(\\mathcal{X}), \\mathcal{X}=\\mathcal{A} \\times_{1} \\mathbf{B}_{1} \\times_{2} \\mathbf{B}_{2} \\times_{3} \\mathbf{B}_{3}+\\mathcal{E}\n\\end{aligned}\t\n\\end{equation}\nIn \\cite{c102}, the LR tensor constraint of Eq.(\\ref{eq:TDTV_cs}) was replaced by the TNN. \n\n$\\textbf{ (2) Tucker decomposition with non-local similarity}$\n\nThe Tucker decomposition methods with non-local similarity either cluster similar patches into a 4-D group or unfold 2-D patches into a 3-D group. \nDu \\textit{et al}. \\cite{c101} represented each local patch of HS images as a 3-D tensor and grouped similar tensor patches to form a 4-D tensor per cluster. Each tensor group can be approximately decomposed by a sparse coefficient tensor and a few matrix dictionaries. \nXue \\textit{et al}. \\cite{c99} unfolded a series of 3-D cubes into 2-D matrices along the spectral modes and stacked these matrices as a new 3-D tensor. The spatial sparsity, the non-local similarity, and the spectral correlation were simultaneously employed to obtain the proposed model\n\\begin{equation}\n\\label{eq:TDNON_cs}\n\\begin{aligned}\n\\min _{\\mathbf{x}, \\mathcal{A}_{p}, \\mathbf{B}_{1 p}, \\mathbf{B}_{2 p}, \\mathbf{B}_{3 p}} &\\sum_{p=1}^{P} \\frac{\\lambda_{1}}{2}\\left\\|\\mathcal{X}_{p}-\\mathcal{A}_{p} \\times_{1} \\mathbf{B}_{1 p} \\times_{2} \\mathbf{B}_{2 p} \\times_{3} \\mathbf{B}_{3 p}\\right\\|_{F}^{2}\\\\\n&+\\lambda_{2}\\left\\|\\mathcal{A}_{p}\\right\\|_{1}+\\lambda_{3} L(\\mathcal{X}_{p}) \\\\\n\\text { s.t. } \\mathbf{y}=\\Phi \\mathbf{x}, \\mathcal{X}_{p}&=\\mathcal{A}_{p} \\times_{1} \\mathbf{B}_{1 p} \\times_{2} \\mathbf{B}_{2 p} \\times_{3} \\mathbf{B}_{3 p}, \\\\\n\\mathbf{B}_{i p}^{T} \\mathbf{B}_{i p}= \\mathbf{I}&(i=1,2,3)\n\\end{aligned}\t\n\\end{equation}\nwhere $p = 1, . . . P$, and $P$ denotes the group number, $\\mathbf{x} \\in \\mathbb{R}^{hvz}$ denotes the vector form of X, $L(\\mathbf{X})$ is the TTN of $\\mathcal{X}$,\n\n\n$\\textbf{(3) TR-based methods}$\n\nUnlike the Tucker decomposition methods \\cite{c101,c99} which directly captured the LR priror in the original image space at the cost of high computation, a novel subspace-based non-local TR decomposition (SNLTR) approach projected an HS image into a low-dimensional subspace \\cite{c90}. The non-local similarity of the subspace coefficient tensor is constrained by a TR decomposition model. The SNLTR model is presented as\n\\begin{equation}\n\\label{eq:SNLTR-cs}\n\\begin{aligned}\n&\\min _{\\mathbf{E}, \\mathbf{Z}, \\mathcal{L}_{i}, \\mathcal{G}_{i}} \\frac{1}{2}\\|\\mathbf{y}-\\Psi(\\mathbf{E} \\mathbf{Z})\\|_{F}^{2}+\\lambda \\sum_{i}\\left(\\frac{1}{2}\\left\\|\\Re_{i} \\mathbf{Z}-\\mathcal{L}_{i}\\right\\|_{F}^{2}\\right) \\\\\n&\\text { s.t. } \\mathbf{E}^{T} \\mathbf{E}=\\mathbf{I}, \\quad \\mathcal{L}_{i}=\\Phi\\left(\\left[\\mathcal{G}_{i}\\right]\\right)\n\\end{aligned}\t\n\\end{equation}\n\n\\subsection{HS Kronecker CS methods}\n\\label{sect:TD-KCS}\n\nUnlike the current 1-D or 2-D sampling strategy, Kronecker CS (KCS) comprises Kronecker-structured sensing matrices and sparsifying bases for each HS dimension \\cite{c214,8000407}. Based on multidimensional multiplexing, Yang \\textit{et al}. \\cite{c106} used a tensor measurement and a nonlinear sparse tensor coding to develop a self-learning tensor nonlinear CS (SLTNCS) algorithm. The sampling process and sparse representation can be represented as the model based on Tucker decomposition. 
Generally, an HS image $\\mathcal{X} \\in \\mathbb{R}^{k_1 \\times k_2 \\times k_3} $ can be expressed as the following Tucker model:\n\\begin{equation}\n\\label{eq:KCS}\n\\begin{aligned}\n\\mathcal{X}=\\mathcal{S} \\times_{1} \\Phi_{1} \\times_{2} \\Phi_{2} \\times_{3} \\Phi_{3}\n\\end{aligned}\t\n\\end{equation}\nwhere $\\mathcal{S} \\in \\mathbb{R}^{m_1 \\times m_2 \\times m_3} $ stands for an approximately block-sparse core tensor in terms of a set of three basis matrices {$\\Phi_{j} \\in \\mathbb{R}^{ k_j \\times m_j}$}, with $m_j \\ll k_j, j=1,2,3 $.\n\nIn the context of KCS, three measurement or sensing matrices denoted by $\\Psi_j, j=1,2,3$, of size $n_j \\times k_j$ with $n_j \\ll k_j$, are used to reduce the dimensionality of the measurement tensor. The compressive sampling model is given as\n\\begin{equation}\n\\label{eq:KCS-psi}\n\\begin{aligned}\n\\mathcal{Y}&=\\mathcal{X} \\times_{1} \\Psi_{1} \\times_{2} \\Psi_{2}\\times_{3} \\Psi_{3} \\\\\n&=\\mathcal{S} \\times_{1} \\mathbf{Q}_{1} \\times_{2} \\mathbf{Q}_{2} \\times_{3} \\mathbf{Q}_{3}\n\\end{aligned}\t\n\\end{equation}\nwhere $\\mathbf{Q}_j = \\Psi_j \\Phi_j, j=1,2,3$.\n\nZhao \\textit{et al}. \\cite{c106} designed a 3-D HS KCS mechanism to achieve independent samplings in three dimensions. The suitable sparsifying bases were selected and the corresponding optimized measurement matrices were generated, which adjusted the distribution of the sampling ratio for each dimension of HS images. Yang \\textit{et al}. \\cite{c95} constrained the number of nonzero entries of the Tucker core tensor to explore the spatial-spectral correlation. To address the issue of the computational burden on the data reconstruction of early HS KCS techniques, researchers have proposed several tensor-based methods such as the tensor-form greedy algorithm, N-way block orthogonal matching pursuit (NBOMP) \\cite{6797642}, the beamformed mode-based sparse estimator (BOSE) \\cite{7544443}, and tensor-based Bayesian reconstruction (TBR) \\cite{c107}. The TBR model exploited the multi-dimensional block-sparsity of tensors, which was more consistent with the sparse model in HS KCS than the conventional CS methods. A Bayesian reconstruction algorithm was developed to achieve the decoupling of hyperparameters by a low-complexity technique. \n\n \n\n\n\\subsection{Experimental results and analysis}\n\\label{sect:experiment_cs}\n\nAn HS data experiment is employed to validate the effectiveness of tensor-based models on HS CS with four different sampling ratios, i.e., $1\\%$, $5\\%$, $10\\%$, and $20\\%$. The Reno data set selected for HS CS experiments is of size $150 \\times 150 \\times 100$. The randomly permuted Hadamard transform is adopted as the compressive operator. Tab. \\ref{tab:tab-cs} compares the reconstruction results by SLTNCS and JTenRe3DTV. The quality of both methods decays as the sampling ratio decreases, and SLTNCS obtains poorer results than JTenRe3DTV at lower sampling ratios. \n\nFor visual comparison, one representative band at the $10\\%$ sampling ratio is presented in Fig. \\ref{fig:Visio-cs}. The basic texture information can be found in the results of both HS CS algorithms. 
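As an implementation note, the randomly permuted Hadamard operator used as $\\Psi$ here can be sketched in a few lines of Python; the snippet below is purely illustrative, with our own variable names, a toy cube whose length $hvz$ is a power of two, and SciPy's dense Hadamard matrix, whereas practical systems rely on fast transforms.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.linalg import hadamard\n\nrng = np.random.default_rng(0)\nh, v, z = 8, 8, 4                   # tiny illustrative cube, n = h*v*z = 256\nx = rng.random((h, v, z)).ravel()   # vectorized HS cube\nn = x.size\nm = int(0.10 * n)                   # number of measurements (10% sampling)\n\nP = hadamard(n) / np.sqrt(n)        # Walsh-Hadamard transform (n a power of two)\nperm = rng.permutation(n)           # random permutation H\nrows = rng.choice(n, m, replace=False)  # random downsampling D\n\ny = (P @ x)[perm][rows]             # y = D H P x, a vector in R^m\nprint(y.shape)                      # (m,)\n\\end{verbatim}\n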
As shown in the enlarged area, SLTNCS causes some artifacts, but JTenRe3DTV produces a more acceptable result with a smoother white area.\n\n\n \\begin{table}[!htbp]\n\\centering\n\\caption{A quantitative comparison of different selected algorithms for HS CS.}\n\\begin{spacing}{1.0}\n\\scalebox{0.8}{\n\\begin{tabular}{c|c|cccc}\n\\hline\nMethod & Index & $1\\%$ & $5\\%$ & $10\\%$ & $20\\%$ \\\\\n\\hline\nSLTNCS & MPSNR & 18.70 & 24.44 &27.72 & 32.14 \\\\\n& MSSIM & 0.3273 & 0.6593 & 0.8047 & 0.9159\\\\\n& ERGAS & 23.3411 & 12.1203 & 8.3119 & 5.0263\\\\\n& MSAD & 22.0.35 & 11.2031 & 7.6354 & 4.6003\\\\\n\\hline\nJTenRe3DTV & MPSNR & 27.91 & 34.54 & 36.28 & 37.41 \\\\\n& MSSIM & 0.8116 & 0.9443 & 0.9638 & 0.9709\\\\\n& ERGAS & 8.2422 & 4.0139 & 3.2990 & 2.9124\\\\\n& MSAD & 7.5545 & 3.5703 & 2.9233& 2.5723\\\\\n\\hline\n\\end{tabular}\n}\n\\label{tab:tab-cs}\n\\end{spacing}\n\\end{table}\n\n\\begin{figure}[htb]\n\t\\begin{center}\n\t\t\\includegraphics[height=5.2cm]{Visio-cs-exp.pdf}\n\t\\end{center}\n\t\\caption[houston]{ HS CS reconstruction results by different methods under a $10\\%$ sampling ratio. (a) Original Reno image, (b) SLTNCS, (c) JTenRe3DTV. }\n\t\\label{fig:Visio-cs}\n\\end{figure}\n\n\\subsection{Future challenges}\n\\label{sect:challenges_cs} \nThe low acquisition rate of CS opens up new development potential for HS RS. Many tensor-based methods have been proposed to achieve remarkable HS CS reconstruction results at a lower sampling ratio. However, here we briefly point out some potential challenges.\n\nSome novel tensor decomposition approaches need to be explored. In past research, Tucker decomposition has been successfully applied to HS CS. However, with the development of tensorial mathematical theory, many other tensor decomposition models have been proposed and introduced in other HS applications. Therefore, how best to find more appropriate tensor decompositions for HS CS is a vital challenge. \n\nNoise degradation usually has a negative influence on HS CS sampling and reconstruction and can hardly be ignored in the real HS CS imaging process. As a result, accounting for noise interference and enhancing the robustness to noise in the CS process remain challenging.\n\n\\section{HS AD}\n\\label{sect:HSI-AD}\n\n\n HS AD aims to discover and separate potential man-made objects from observed image scenes, which is typically valuable for defense and surveillance applications in RS fields, such as mine exploration and military reconnaissance. For instance, aircraft in a suburban scene and vehicles in a bridge scene are usually referred to as anomalies or outliers. As shown in Fig. \\ref{fig:Visio-AD}, AD can be regarded as an unsupervised two-class classification problem where anomalies occupy small areas compared with their surrounding background. The key to coping with this problem is to exploit the discrepancy between anomalies and their background. Anomalies commonly occur with low probabilities and their spectral signatures are quite different from those of their neighbors.\n\\begin{figure}[htb]\n\t\\begin{center}\n\t\t\\includegraphics[height=7.5cm]{Visio-AD.pdf}\n\t\\end{center}\n\t\\caption[AD]{A schematic diagram of HS image anomaly detection. }\n\t\\label{fig:Visio-AD}\n\\end{figure}\n\nHS images containing two spatial dimensions and one spectral dimension are intrinsically regarded as a third-order tensor. Tensor-based approaches have gradually been attracting attention for HS AD in recent years. 
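To convey the intuition behind these tensor-based detectors before reviewing them, the following toy Python\/numpy sketch separates a synthetic low-rank background from a single injected anomaly via a truncated SVD of the spectral unfolding; this matrix surrogate only illustrates the low-rank-plus-sparse idea formalized below, and all names, sizes, and values are our own assumptions.\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(1)\nh, v, z, r = 30, 30, 20, 3\nA = rng.random((h * v, r))          # abundance-like mixing weights\nE = rng.random((r, z))              # r background spectral signatures\nX = (A @ E).reshape(h, v, z)        # low-rank background\nS = np.zeros_like(X)\nS[5, 7, :] = 2.0                    # one anomalous pixel\nT = X + S                           # observed cube\n\nM = T.reshape(h * v, z)             # spectral (mode-3 style) unfolding\nU, s, Vt = np.linalg.svd(M, full_matrices=False)\nback = (U[:, :r] * s[:r]) @ Vt[:r]  # rank-r background estimate\nresid = np.linalg.norm(M - back, axis=1).reshape(h, v)\nprint(np.unravel_index(resid.argmax(), resid.shape))  # expected: (5, 7)\n\\end{verbatim}\n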
Tucker decomposition is the first and most essential type of tensor-decomposition method used for HS AD. Therefore, in the following sections, we mainly focus on the Tucker decomposition-based methods and a few other types of tensor-based methods.\n\n\\subsection{Tensor decomposition-based HS AD methods}\n\\label{sect:TD_ad}\n\n$\\textbf{ (1) Tucker decomposition-based methods }$\n\n An observed HS image $\\mathcal{T}$ can be decomposed into two parts by Tucker decomposition, i.e.,\n\\begin{equation}\n \\begin{aligned}\n \\mathcal{T} = \\mathcal{X} + \\mathcal{S}\n \\end{aligned}\n\\end{equation}\nwhere $\\mathcal{X}$ is the LR background tensor and $\\mathcal{S}$ is the sparse tensor consisting of anomalies. The Tucker decomposition for AD is formulated as the following optimization\n\\begin{equation}\n\t\\begin{split}\n\t\\label{eq:Tucker-AD}\n\t\\left\\{\\begin{array}{l}\n\\mathcal{X}=\\mathcal{A} \\times_{1} \\mathbf{B}_{1} \\times_{2} \\mathbf{B}_{2} \\times_{3} \\mathbf{B}_{3} \\\\\n\\mathcal{S}=\\mathcal{T}-\\mathcal{X}\n\\end{array}\\right.\n\\end{split} \n\\end{equation}\n\nMany Tucker decomposition-based variants have been studied to improve the AD accuracy. Li \\textit{et al}. \\cite{c110} proposed an LR tensor decomposition-based AD (LTDD) model, which employed Tucker decomposition to obtain the core tensor of the LR part. The final spectral signatures of anomalies are extracted by an unmixing approach. After the Tucker decomposition processing, Zhang \\textit{et al}. \\cite{c111} utilized a reconstruction-error-based method to eliminate the background pixels and retain the anomaly information. Zhu \\textit{et al}. \\cite{c139} advocated a weighting strategy based on tensor decomposition and cluster weighting (TDCW). In TDCW, Tucker decomposition was adopted to obtain the anomaly part. K-means clustering and segmentation were assigned as post-processing steps to achieve a performance boost.\nSong \\textit{et al}. \\cite{c113} proposed a tensor-based endmember extraction and LR decomposition (TEELRD) algorithm, where Tucker decomposition and k-means are employed to construct a high-quality dictionary.\n\nBased on Tucker decomposition, Qin \\textit{et al}. \\cite{c115} proposed an LR and sparse tensor decomposition (LRASTD) model. The LRASTD model can be formulated as\n\\begin{equation}\n\t\\begin{split}\n\t\\label{eq:LRASTD}\n&\t\\min_{\\mathcal{A},\\mathcal{S}} || \\mathcal{A} ||_* + \\beta ||\\mathcal{A} ||_1 + \\lambda || \\mathcal{S} ||_{2,2,1}\\\\\n & {\\rm s.t.} \\quad\t\\mathcal{X}=\\mathcal{A} \\times_{1} \\mathbf{B}_{1} \\times_{2} \\mathbf{B}_{2} \\times_{3} \\mathbf{B}_{3} + \\mathcal{S}\n\\end{split} \n\\end{equation}\nwhere $|| \\mathcal{S} ||_{2,2,1} = \\sum^z_{k=1} || \\mathcal{S}(:,:,k) ||_F$.\n\n$\\textbf{ (2) Other tensor-based methods }$\n\n Chen \\textit{et al}. \\cite{c112} presented a TPCA-based pre-processing method to separate a principal component part and a residual part. Li \\textit{et al}. \\cite{c114} proposed a prior-based tensor approximation (PTA) approach, where the background was constrained by a truncated nuclear norm (TRNN) regularization and a spatial TV. The proposed PTA can be expressed as\n \\begin{equation}\n \\begin{aligned}\n&\\arg \\min _{\\mathcal{X}, \\mathcal{S}} \\frac{1}{2}\\left(\\left\\|\\mathbf{D}_{H} \\mathbf{X}_{(1)}\\right\\|_{F}^{2}+\\left\\|\\mathbf{D}_{v} \\mathbf{X}_{(2)}\\right\\|_{F}^{2}\\right)+\\alpha\\left\\|\\mathbf{X}_{(3)}\\right\\|_{r}+\\beta\\left\\|\\mathbf{S}_{(3)}\\right\\|_{2,1} \\\\\n&\\text { s.t. 
}\\left\\{\\begin{array}{l}\n\\mathcal{Y}=\\mathcal{X}+\\mathcal{S} \\\\\n\\mathcal{X}_{1}=\\operatorname{unfold}_{1}(\\mathcal{X}) \\\\\n\\mathcal{X}_{2}=\\operatorname{unfold}_{2}(\\mathcal{X}) \\\\\n\\mathcal{X}_{3}=\\operatorname{unfold}_{3}(\\mathcal{X}) \\\\\n\\mathcal{S}_{3}=\\operatorname{unfold}_{3}(\\mathcal{S})\n\\end{array}\\right.\n\\end{aligned}\n \\end{equation}\nwhere $\\mathbf{D}_{H} \\in \\mathbb{R}^{(h-1) \\times h}$ and $\\mathbf{D}_{v} \\in \\mathbb{R}^{(v-1) \\times v}$ are defined as\n$\\mathbf{D}_{H}=\\left[\\begin{array}{ccccc}\n1 & -1 & & & \\\\\n& 1 & -1 & & \\\\\n& & \\ddots & \\ddots & \\\\\n& & & 1 & -1\n\\end{array}\\right]$\\\\\nand\n$\\mathbf{D}_{v}=\\left[\\begin{array}{ccccc}\n1 & -1 & & & \\\\\n& 1 & -1 & & \\\\\n& & \\ddots & \\ddots & \\\\\n& & & 1 & -1\n\\end{array}\\right]$\n\nWang \\textit{et al}. \\cite{minghuaTC} proposed a novel tensor LR and sparse representation method with a PCA pre-processing step, namely PCA-TLRSR, which was the first time to expand the concept of Tensor LR representation in HS AD and exploited the 3-D inherent structure of HS images. Assisted by the multi-subspace learning of the tensor domain and the sparsity constraint along the joint spectral-spatial dimensions, the LR background and anomalies are separated in a more accurate manner.\n\n\n\n\\subsection{Experimental results and analysis}\n\\label{sect:experiment_ad}\n\nHerein, we take an example of PTA on three HS data sets for AD. The San Diego data set \\cite{c216} was captured by the Airborne Visible\/Infrared Imaging Spectrometer (AVIRIS) sensor over the San Diego airport, CA, USA. Three flights are obviously observed in the selected region with the size $100 \\times 100 \\times 189$. The Airport-1 and Airport-2 \\cite{c217} were also acquired by AVIRIS sensor. As shown in the second column of Fig. \\ref{fig:AD-exp}, flights are regarded as anomalies in different airport scenes. \n\\begin{figure}[htb]\n\t\\begin{center}\n\t\t\\includegraphics[height=8.5cm, angle = 90]{AD-EXP2.pdf}\n\t\\end{center}\n\t\\caption[AD]{Original HS images, ground-truth maps, detection maps, and AUC curves of PTA on different data sets. }\n\t\\label{fig:AD-exp}\n\\end{figure}\n\nAs the detection maps of Fig. \\ref{fig:AD-exp} display, most of flights are clearly detected by PTA. Except for the visual observation of resulted anomaly maps, the receiver operating characteristic (ROC) curve \\cite{c218} and the area under the ROC curve (AUC) \\cite{c219} are employed to quantitatively assess the detection accuracy of the tensor-based method. The ROC curve plots the varying relationship of the probability of detection (PD) and false alarm rate (FAR) for extensive possible thresholds. The area under this curve is calculated as AUC, whose ideal result is 1. PTA is capable to achieve a high detection rate and low FAR. The AUC values derived from PTA is higher than 0.9.\n\n\\subsection{Future challenges}\n\\label{sect:challenges_ad} \n\nTucker decomposition-based models have been well developed by researchers, yet other types of tensor decompositions are rarely investigated in the HS AD community. In other words, how best to introduce other novel tensor decomposition frameworks into AD is a key challenge.\n\nAlthough most anomalies are successfully detected, some background pixels like roads and roofs usually remain. The more complex background and the fewer targets make the difficulty of AD increase. 
To solve this problem, researchers need to explore multiple features and suitable regularizations.\n\nThe background and anomalies are often modeled as the LR part and the sparse part of HS images. The inherent 3-D structure of HS images is exploited by tensor decomposition-based methods. The spatial sparsity and the inherent 3-D structure of anomalies should be considered by a consolidated optimization strategy.\n\n\n \\section{HS-MS fusion}\n\\label{sect:HSI-SR}\n\n\\begin{figure*}[htb]\n\t\\begin{center}\n\t\t\\includegraphics[height=8cm]{SR-fr2.pdf}\n\t\\end{center}\n\t\\caption[unmixing]{Illustration of HS and MS fusion. }\n\t\\label{fig:Visio-sr-fr}\n\\end{figure*}\n\nHS images provide abundant and varied spectral information, yet hardly attain high spatial resolution owing to the limitations of sun irradiance \\cite{c140} and imaging systems \\cite{c141}. In contrast, MS images are captured with low spectral resolution and high spatial resolution. HS and MS fusion aims to improve the spatial resolution of HS images with the assistance of MS images and generate final HS images with high spatial resolution and the original spectral resolution. The high-quality fused HS images benefit the in-depth recognition and\nunderstanding of materials, which contributes to many real RS applications, such as object classification and change detection of wetlands and farms \\cite{7182258,7895167,8961105,gu2019superpixel,hong2021graph,chen2022fccdn,hong2022Spectral}.\n\nFig. \\ref{fig:Visio-sr-fr} depicts an HS and MS fusion process to generate an HR-HS image. Suppose that a desired high-spatial-spectral resolution HS (HR-HS) image, a low-resolution HS (LR-HS) image, and a high-resolution MS (HR-MS) image are denoted by $\\mathcal{X} \\in \\mathbb{R}^{ H \\times V \\times B}$, $\\mathcal{Y} \\in \\mathbb{R}^{ h \\times v \\times B}$, and $\\mathcal{Z} \\in \\mathbb{R}^{ H \\times V \\times b}$ ($H \\gg h$, $V \\gg v$, $B \\gg b$), respectively. An LR-HS image is seen as a spatially blurred and downsampled version of $\\mathcal{X}$, and an HR-MS image is a spectrally downsampled version of $\\mathcal{X}$. The two degradation models are expressed as follows\n\\begin{equation}\n \\begin{aligned}\n \\label{eq:HSR}\n \\mathbf{Y}_{(3)} = \\mathbf{X}_{(3)} \\mathbf{R} + \\mathbf{N}_h\n \\end{aligned}\n\\end{equation}\n\\begin{equation}\n \\begin{aligned}\n \\label{eq:HSR2}\n \\mathbf{Z}_{(3)} = \\mathbf{G} \\mathbf{X}_{(3)} + \\mathbf{N}_m\n \\end{aligned}\n\\end{equation}\nwhere $\\mathbf{R} = \\mathbf{B} \\mathbf{K}$, $\\mathbf{B}$ denotes a convolution blurring operation, $\\mathbf{K}$ is a spatial downsampling matrix, and $\\mathbf{G}$ represents the spectral-response function of an MS image sensor, which can be regarded as a spectral downsampling matrix. $\\mathbf{N}_h$ and $\\mathbf{N}_m$ stand for noise.\n\nAccording to references \\cite{c142,c143,c144}, $\\mathbf{R}$ and $\\mathbf{G}$ are assumed to be given in advance of solving the HS SR problem \n\\begin{equation}\n \\begin{aligned}\n \\label{eq:HSR-pro}\n \\min_{\\mathcal{X}} ||\\mathbf{Y}_{(3)} - \\mathbf{X}_{(3)} \\mathbf{R} ||^2_F +\n ||\\mathbf{Z}_{(3)} - \\mathbf{G} \\mathbf{X}_{(3)} ||^2_F + \\tau f ({\\mathcal{X}})\n \\end{aligned}\n\\end{equation}\nwhere the first and second F-norm terms are data-fidelity terms with respect to models (\\ref{eq:HSR}) and (\\ref{eq:HSR2}), and $f ({\\mathcal{X}})$ represents the prior regularization pertinent to the desired property of the HR-HS image ${\\mathcal{X}}$. 
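To make the two degradation models concrete, the following Python\/numpy sketch simulates an LR-HS and an HR-MS observation from a random HR-HS cube according to Eqs. (\\ref{eq:HSR}) and (\\ref{eq:HSR2}); the box blur, the band-averaging spectral response, the noise level, and all variable names are illustrative assumptions rather than the operators used by any specific method.\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)\nH, V, B = 32, 32, 40        # HR-HS size (illustrative only)\ns, b = 4, 4                 # spatial downsampling factor, number of MS bands\nh, v = H // s, V // s\n\nX = rng.random((H, V, B))                 # desired HR-HS image\nX3 = X.reshape(H * V, B).T                # mode-3 unfolding, B x (H V)\n\n# R = B K: average (blur) each s x s block and keep one sample per block\nR = np.zeros((H * V, h * v))\nfor i in range(h):\n    for j in range(v):\n        blk = np.zeros((H, V))\n        blk[i*s:(i+1)*s, j*s:(j+1)*s] = 1.0 / s**2\n        R[:, i * v + j] = blk.ravel()\n\n# G: simple band-averaging spectral response of a hypothetical MS sensor\nG = np.zeros((b, B))\nfor k in range(b):\n    G[k, k*(B//b):(k+1)*(B//b)] = 1.0 / (B // b)\n\nY3 = X3 @ R + 0.001 * rng.standard_normal((B, h * v))   # LR-HS, Eq. (eq:HSR)\nZ3 = G @ X3 + 0.001 * rng.standard_normal((b, H * V))   # HR-MS, Eq. (eq:HSR2)\nprint(Y3.shape, Z3.shape)   # (B, h*v), (b, H*V)\n\\end{verbatim}\n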
\nIn the next section, we review currently advanced HS SR methods from two categories: tensor decomposition models and prior-based tensor decomposition models.\n\n\n\\subsection{Tensor Factorizations for SR}\n\\label{sect:TDforSR} \n\n\\subsubsection{CP Decomposition Model}\n\n Initially, Kanatsoulis \\textit{et al}. \\cite{c145} employed a coupled CP decomposition framework for HS SR. The CP decomposition of an HR-HS tensor $\\mathcal{X}$ can be expressed as\n \\begin{equation}\n \\begin{aligned}\n \\label{eq:sr-cp}\n {\\mathcal { X }} &=\\sum_{r=1}^{R} \\mathbf{a}_{r} \\circ \\mathbf{b}_{r} \\circ \\mathbf{c}_{r} \\\\\n & =\\llbracket \\mathbf{A}, \\mathbf{B}, \\mathbf{C} \\rrbracket\\\\\n \\end{aligned}\n \\end{equation}\nwhere the latent LR factors are $\\mathbf{A}= [\\mathbf{a}_1,...,\\mathbf{a}_R ]$, $\\mathbf{B}= [\\mathbf{b}_1,...,\\mathbf{b}_R ]$, and $\\mathbf{C}= [\\mathbf{c}_1,...,\\mathbf{c}_R ]$.\n In \\cite{c145}, the coupled CP decomposition adopted the following assumption\n \\begin{equation}\n \\begin{aligned}\n \\label{eq:sr-cp-assum}\n {\\mathcal { Y }} \n =\\llbracket \\mathbf{P}_1 \\mathbf{A}, \\mathbf{P}_2 \\mathbf{B}, \\mathbf{C} \\rrbracket;\n {\\mathcal { Z }} \n =\\llbracket \\mathbf{A}, \\mathbf{B}, \\mathbf{P}_3 \\mathbf{C} \\rrbracket\n \\end{aligned}\n \\end{equation}\n where $\\mathbf{P}_1 \\in \\mathbb{R}^{h \\times H} $, $\\mathbf{P}_2 \\in \\mathbb{R}^{v \\times V} $, and $\\mathbf{P}_3 \\in \\mathbb{R}^{b \\times B} $ are three linear degradation matrices. The identifiability of HS SR based on the algebraic properties of CP decomposition is guaranteed under relaxed conditions. However, the LR properties of different dimensions are treated equally, which is rarely suitable for real HS SR. Subsequently, Kanatsoulis \\textit{et al}. \\cite{c151} proposed an SR cube algorithm (SCUBA) that combined the advantages of CP decomposition and matrix factorization. Xu \\textit{et al}. \\cite{c152} improved the CP decomposition-based method by adding a non-local tensor extraction module. \n \n\n \n \n\n\\subsubsection{Tucker Decomposition Model}\n\n Li \\textit{et al}. \\cite{c146} developed a coupled sparse tensor factorization (CSTF) approach, in which the fusion problem was transformed into the estimation of dictionaries along three modes and the corresponding sparse core tensor. A tensor $\\mathcal{X}$ can be decomposed by Tucker decomposition as\n\\begin{equation}\n \\begin{aligned}\n \\label{eq:sr-tucker}\n \\mathcal{X}=\\mathcal{W} \\times_{1} \\mathbf{A} \\times_{2} \\mathbf{B} \\times_{3} \\mathbf{C}\n \\end{aligned}\n\\end{equation}\n\nThen the LR-HS and HR-MS degradation models are rewritten as\n\\begin{equation}\n \\begin{aligned}\n \\label{eq:sr-tucker2}\n \\mathcal{Y}&=\\mathcal{W} \\times_{1}(\\mathbf{P}_{1} \\mathbf{A}) \\times_{2}(\\mathbf{P}_{2} \\mathbf{B}) \\times_{3} \\mathbf{C} \\\\\n &=\\mathcal{W} \\times_{1}\\mathbf{A}^* \\times_{2}\\mathbf{B}^*\\times_{3} \\mathbf{C} \\\\\n \\end{aligned}\n\\end{equation}\n\n\\begin{equation}\n \\begin{aligned}\n \\mathcal{Z}&=\\mathcal{W} \\times_{1}\\mathbf{A} \\times_{2} \\mathbf{B} \\times_{3} (\\mathbf{P}_{3}\\mathbf{C} )\\\\\n &=\\mathcal{W} \\times_{1}\\mathbf{A} \\times_{2} \\mathbf{B} \\times_{3} \\mathbf{C}^*\n \\end{aligned}\n\\end{equation}\nwhere $\\mathbf{A}^* = \\mathbf{P}_{1} \\mathbf{A}$, $\\mathbf{B}^* = \\mathbf{P}_{2} \\mathbf{B}$, and $\\mathbf{C}^* = \\mathbf{P}_{3}\\mathbf{C}$ are the downsampled dictionaries along the three modes. Taking the sparsity of the core tensor $\\mathcal{W}$ into account, Li \\textit{et al}. 
formulated the fusion problem as follows\n\\begin{equation}\n \\begin{aligned}\n \\label{eq:sr-CSTF}\n \\min_{ \\mathbf{A}, \\mathbf{B}, \\mathbf{C}, \\mathcal{W}} &|| \\mathcal{Y}-\\mathcal{W} \\times_{1} \\mathbf{A}^* \\times_{2}\\mathbf{B}^*\\times_{3} \\mathbf{C} ||^2_F + \\\\ \n &|| \\mathcal{Z} - \\mathcal{W} \\times_{1}\\mathbf{A} \\times_{2} \\mathbf{B} \\times_{3} \\mathbf{C}^* ||^2_F + \\lambda || \\mathcal{W} ||_1\n \\end{aligned}\n\\end{equation}\n\nThe $l_1$ norm in Eq. (\\ref{eq:sr-CSTF}) was replaced by an $l_2$ norm in \\cite{c147}. Pr\u00e9vost \\textit{et al}. \\cite{c153} assumed that HR-HS images have approximately low multilinear rank and developed an SR algorithm based on coupled Tucker tensor approximation (SCOTT) with HOSVD. In the coupled Tucker decomposition and BT decomposition frameworks (named CT-STAR and CB-STAR) \\cite{c155}, an additive variability term was admitted for the study of general identifiability with theoretical guarantees. Zare \\textit{et al}. \\cite{c170} offered a coupled non-negative Tucker decomposition (CNTD) method to constrain the nonnegativity of the two Tucker spectral factors.\n\n $\\textbf{Non-local Tucker decomposition:}$ Wan \\textit{et al}. \\cite{c154} grouped 4-D tensor patches using the spectral correlation and similarity under Tucker decomposition. Dian \\textit{et al}. \\cite{c156} offered a non-local sparse tensor factorization (NLSTF) method, which induced core tensors and corresponding dictionaries from HR-MS images, and spectral dictionaries from LR-HS images. A modified NLSTF\\_SMBF version was developed for the semi-blind fusion of HS and MS images \\cite{c157}. However, the dictionary and the core tensor for each cluster are estimated separately by NLSTF and NLSTF\\_SMBF. \n\n $\\textbf{Tucker decomposition + TV:}$ Xu \\textit{et al}. \\cite{c160} presented a Tucker decomposition model with a unidirectional TV. Wang \\textit{et al}. \\cite{c158} advocated a non-local LR tensor decomposition and SU-based approach to leverage spectral correlations, non-local similarity, and spatial-spectral smoothness. \n\n\n\n $\\textbf{Tucker decomposition + Manifold:}$ Zhang \\textit{et al}. \\cite{c159} suggested a spatial-spectral-graph-regularized LR tensor decomposition (SSGLRTD). In SSGLRTD, the spatial and spectral manifolds between HR-MS and LR-HS images are assumed to be similar to those embedded in HR-HS images. Bu \\textit{et al}. \\cite{c161} presented a graph Laplacian-guided coupled tensor decomposition (gLGCTD) model that incorporated global spectral correlation and complementary submanifold structures into a unified framework.\n\n\\subsubsection{BT Decomposition Model}\n \n\nZhang \\textit{et al}. \\cite{c163} noted that the identifiability guarantees in \\cite{c145,c146} come at the cost of a lack of physical meaning for the latent factors under CP and Tucker decomposition. Therefore, they employed an alternative coupled nonnegative BT tensor decomposition (NN-CBTD) approach for HS SR. The NN-CBTD model with rank-($L_r,L_r,1$) for HS SR is given as\n\\begin{equation}\n \\begin{aligned}\n \\label{eq:NNCBTD}\n \\min _{\\mathbf{A}, \\mathbf{B}, \\mathbf{C}} &\\|\\mathcal{Y}-\\sum_{r=1}^{R}(\\mathbf{P}_{1} \\mathbf{A}_{r}\\left(\\mathbf{P}_{2} \\mathbf{B}_{r}\\right)^{\\top}) \\circ \\mathbf{c}_{r}\\|_{F}^{2} \\\\\n&+\\|\\mathcal{Z}-\\sum_{r=1}^{R}(\\mathbf{A}_{r} \\mathbf{B}_{r}^{\\top}) \\circ \\mathbf{P}_{3} \\mathbf{c}_{r}\\|_{F}^{2} \\\\\n\\text { s. t. 
} &\\mathbf{A} \\geq \\mathbf{0}, \\mathbf{B} \\geq \\mathbf{0}, \\mathbf{C} \\geq \\mathbf{0}\n \\end{aligned}\n\\end{equation}\n\nCompared with a conference version \\cite{c163}, the journal version \\cite{c138} additionally gave more recoverability analysis and more flexible decomposition framework by using a advocated LL1 model and a block coordinate descent algorithm. Jiang \\textit{et al}. \\cite{c164} introduced a graph manifold named Graph Laplacian into the CBTD framework.\n\n\n\\subsubsection{TT Decomposition Model}\n\nDian \\textit{et al}. \\cite{c148} proposed a low tensor-train rank (LTTR)-based HS SR method. A LTTR prior was designed for learning correlations among the spatial, spectral, and non-local modes of 4-D FBP patches. The HS SR optimization can be obtained as\n\\begin{equation}\n\\begin{aligned}\n\\label{eq:LTTR}\n \\min _{\\mathbf{X}_{(3)}}\\|\\mathbf{Y}_{(3)}-\\mathbf{X}_{(3)} \\mathbf{R}\\|_{F}^{2}+\\|\\mathbf{Z}_{(3)}-\\mathbf{G X}_{(3)}\\|_{F}^{2}+\\tau \\sum_{k=1}^{K}\\|\\mathcal{X}_{k}\\|_{\\mathrm{TT}}\n\\end{aligned}\n\\end{equation}\nwhere $K$ denotes the number of clusters, the TT rank of tensor $\\mathcal{Z}_k$ is defined\n\\begin{equation}\n\\begin{aligned}\n\\label{eq:LTTR-tt}\n\\| \\mathcal{Z}_{k} \\|_{\\mathrm{TT}}=\\sum_{t=1}^{3} \\alpha_{t} \\operatorname{LS}(\\mathbf{Z}_{k \\langle t\\rangle})\n\\end{aligned}\n\\end{equation} \nand $\\mathrm{LS}(\\mathbf{A})=\\sum_{i} \\log (\\sigma_{i}(\\mathbf{A})+\\varepsilon)$ with a small positive value $\\varepsilon$.\n\nLi \\textit{et al}. \\cite{c162} presented nonlocal LR tensor approximation and sparse representation (NLRSR) that formed the non-local similarity and sparial-spectral correlation by the TT rank constraint of 4-D non-local patches.\n\n\n\\subsubsection{TR Decomposition Model}\n The TR decomposition of an HR-HS tensor $\\mathcal{X} \\in \\mathbb{R}^{H \\times V \\times B}$ is represented as\n \\begin{equation}\n\\begin{aligned}\n\\label{eq:SR-TR-x}\n\\mathcal{X}=\\Phi [\\mathcal{G}^{(1)}, \\mathcal{G}^{(2)}, \\mathcal{G}^{(3)}]\n\\end{aligned}\n\\end{equation}\nwhere three TR factors are denoted by $\\mathcal{G}^{(1)} \\in \\mathbb{R}^{r_{1} \\times H \\times r_{2}}$, $\\mathcal{G}_{(2)} \\in \\mathbb{R}^{r_{2} \\times V \\times r_{3}}$, and $\\mathcal{G}^{(3)} \\in \\mathbb{R}^{r_{3} \\times B \\times r_{1}}$ with TR ranks $r = [r_1, r_2, r_3]$. Based on the TR theory, an LR-HS image is rewritten as\n\\begin{equation}\n\\begin{aligned}\n\\label{eq:SR-TR-y}\n\\mathcal{Y}={\\Phi}[\\mathcal{G}^{(1)} \\times{ }_{2} \\mathbf{P}_{1}, \\mathcal{G}^{(2)} \\times_{2} \\mathbf{P}_{2}, \\mathcal{G}^{(3)}]\n\\end{aligned}\n\\end{equation}\nand an HR-MS image can be expressed as\n\\begin{equation}\n\\begin{aligned}\n\\label{eq:SR-TR-z}\n\\mathcal{Z}=\\boldsymbol{\\Phi}[\\mathcal{G}^{(1)}, \\mathcal{G}^{(2)}, \\mathcal{G}^{(3)} \\times_{2} \\mathbf{P}_{3}]\n\\end{aligned}\n\\end{equation}\n\nHe \\textit{et al}. \\cite{c150} presented a coupled TR factorization (CTRF) model and a modified CTRF version (NCTRF) with the nuclear norm regularization of third\/spectral TR factor. 
The NCTRF model is formulated as\n\\begin{equation}\n\\begin{aligned}\n\\label{eq:NCTRF}\n&\\min _{\\mathcal{G}^{(1)}, \\mathcal{G}^{(2)}, \\mathcal{G}^{(3)}}\\|\\mathcal{Y}-\\boldsymbol{\\Phi} [\\mathcal{G}^{(1)} \\times_{2} \\mathbf{P}_{1}, \\mathcal{G}^{(2)} \\times_{2} \\mathbf{P}_{2}, \\mathcal{G}^{(3)} ] \\|_{F}^{2} \\\\\n&+\\|\\mathcal{Z}-\\boldsymbol{\\Phi} [\\mathcal{G}^{(1)}, \\mathcal{G}^{(2)}, \\mathcal{G}^{(3)} \\times_{2} \\mathbf{P}_{3} ] \\|_{F}^{2}+\\lambda\\|\\mathbf{G}_{(2)}^{(3)}\\|_{*}\n\\end{aligned}\n\\end{equation}\n\nEq. (\\ref{eq:NCTRF}) becomes the CTRF model when the last term is removed. In \\cite{c150}, the benefit of TR decomposition for SR is elaborated via theoretical and experimental proof related to a low-dimensional TR subspace.\nThe relationship between the TR spectral factors of LR-HS images and HR-MS images was explored in \\cite{c149} with a high-order representation of the original HS image. The spectral structures of HR-HS images were kept consistent with those of LR-HS images by a graph-Laplacian regularization. \nChen \\textit{et al}. \\cite{c169} presented a factor-smoothed TR decomposition (FSTRD) to capture the spatial-spectral continuity of HR-HS images.\nBased on the basic CTRF model, Xu \\textit{et al}. \\cite{c171} advocated an LR TR decomposition based on the TNN (LRTRTNN), which exploited the LR properties of non-local similar patches and their TR factors.\n\n\\begin{figure*}[htb]\n\t\\begin{center}\n\t\\includegraphics[height=12cm]{SR-result.pdf}\n\t\\end{center}\n\t\\caption[sr]{The fusion results of five different HS-MS fusion methods. (a) REF, (b) Blind-STEREO, (c) CSTF, (d) LTMR, (e) LTTR, and (f) SC-LL1. }\n\t\\label{fig:Visio-sr-re}\n\\end{figure*}\n\\begin{table*}[!htbp]\n\\centering\n\\caption{A quantitative comparison of different methods for HS-MS fusion.}\n\\begin{spacing}{1.0}\n\\scalebox{0.93}{\n\\begin{tabular}{c|ccccc}\n\\hline\nIndex & Blind-STEREO &\tCSTF & LTTR & LTMR & SC-LL1 \\\\\n\\hline\n MPSNR & 53.49 & $\\mathbf{54.28}$ &\t38.11 & 39.58 & 54.13\\\\\n ERGAS &0.3584 & 0.3317 & 1.9761 & 1.5851\t& $\\mathbf{0.3076}$\\\\ \n SAM & 1.0132\t& 0.8841 & 3.7971 & 3.0495 & $\\mathbf{0.8213}$ \\\\\n RMSE &0.0027\t& 0.0025 & 0.0165 & 0.0132\t& $\\mathbf{0.0023}$\\\\\nCC &0.9993\t& 0.9994 & 0.9819 & 0.9870\t& $\\mathbf{0.9995}$\\\\\n\\hline\n\\end{tabular}\n}\n\\label{tab:tab-fusion}\n\\end{spacing}\n\\end{table*}\n\n\\subsubsection{Tensor Rank Minimization for SR}\n\\label{sect:TRMforSR} \n\nBased on t-SVD, Dian \\textit{et al}. \\cite{c165} developed a subspace-based low tensor multi-rank (LTMR) method that induces an HR-HS image from the spectral subspace and the corresponding coefficients of grouped FBPs. The specific LTMR model is expressed as\n\\begin{equation}\n \\begin{aligned}\n \\label{eq:LTMR}\n \\min_{\\mathcal{X}} ||\\mathbf{Y}_{(3)} - \\mathbf{X}_{(3)} \\mathbf{R} ||^2_F +\n ||\\mathbf{Z}_{(3)} - \\mathbf{G} \\mathbf{X}_{(3)} ||^2_F + \\tau \\sum_{k=1}^{K}\\|\\mathcal{X}_{k}\\|_{\\mathrm{TMR}}\n \\end{aligned}\n\\end{equation}\nwhere the multi-rank of tensor $\\mathcal{X}$ is defined as $\\|\\mathcal{X}\\|_{\\mathrm{TMR}}=\\frac{1}{B} \\sum_{b=1}^{B} \\operatorname{LS}(\\hat{\\mathcal{X}}(:,:,b))$, and $B$ is the dimension of the third mode of $\\mathcal{X}$. To speed up the estimation of LTMR, Long \\textit{et al}. \\cite{c166} introduced the concept of truncation value and obtained a fast LTMR (FLTMR) algorithm. Xu \\textit{et al}. 
\\cite{c8} presented a non-local patch tensor sparse representation (NPTSR) model that characterized the spectral and spatial similarities among non-local HS patches by the t-product based tensor sparse representation.\n\nConsidering HS image degradation by noise, some researchers have studied the noise-robust HS SR problem. Li \\textit{et al}. \\cite{c168} proposed a TV regularized tensor low-multilinear-rank (TV-TLMR) model to improve the performance of the mixed-noise-robust HS SR task. Liu \\textit{et al}. \\cite{c167} transformed the HS SR problem into a convex TTN optimization, which permits an SR process robust to HS image striping. \n\n\n\\subsection{Experimental results and analysis}\n\\label{sect:experiment_sr}\n\nIn this section, we select five representative tensor decomposition-based HS-MS fusion approaches: a Tucker decomposition-based method, i.e., CSTF \\cite{c146}; a CP decomposition-based method, i.e., Blind-STEREO \\cite{c145}; a TT decomposition-based method, i.e., LTTR \\cite{c148}; a BT decomposition-based method, i.e., SC-LL1 \\cite{c138}; and a tensor singular value decomposition-based method, i.e., LTMR \\cite{c165}.\n\nThe quality assessment is conducted within a simulation study following Wald's protocol \\cite{c220}. One RS-HS data set is selected for the data fusion, i.e., the University of Houston campus used for the 2018 IEEE GRSS Data Fusion Contest. The original data were acquired by the ITRES CASI 1500 HS camera, covering a 380-1050 nm spectral range with 48 bands at a 1-m GSD. A sub-image of $400 \\times 400 \\times 46$ is chosen as the ground truth after discarding some noisy bands. The input HR-MS image is generated from the reference image using the spectral response of WorldView-2, and the input LR-HS image is obtained via a Gaussian blurring kernel whose size equals five. Five quantitative metrics are used to assess the performance of the reconstructed HR-HS image, including MPSNR, ERGAS, root-mean-square error (RMSE), spectral angle mapper (SAM), and cross-correlation (CC). SAM measures the spectral angles between the reconstructed HR-HS image and the reference image, and smaller SAMs correspond to better performance. CC is a score between 0 and 1, where 1 represents the best estimation result. \n\n\n\n\nFig. \\ref{fig:Visio-sr-re} presents the reconstructed false-color images, enlarged local images, SAM error heatmaps, and mean relative absolute error (MRAE) heatmaps of the five HS-MS fusion methods. From Fig. \\ref{fig:Visio-sr-re}, all five methods provide good spatial reconstruction results. However, LTMR and LTTR produce severe spectral distortions at the edges of the objects. In Tab. \\ref{tab:tab-fusion}, the conclusion of the quantitative evaluation is consistent with the visual one. In other words, LTTR and LTMR perform poorly in terms of spectral reconstruction quality. The other three methods show a competitive ability in HS-MS fusion. 
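For reference, the CC score reported in Tab. \\ref{tab:tab-fusion} can be computed as the band-wise Pearson correlation averaged over bands, as in the following short and purely illustrative numpy sketch (the function name is ours):\n\\begin{verbatim}\nimport numpy as np\n\ndef mean_cc(ref, est, eps=1e-12):\n    # band-wise Pearson correlation averaged over bands (H x V x B cubes)\n    r = ref.reshape(-1, ref.shape[-1])\n    e = est.reshape(-1, est.shape[-1])\n    r = r - r.mean(0)\n    e = e - e.mean(0)\n    cc = (r * e).sum(0) / (np.linalg.norm(r, axis=0)\n                           * np.linalg.norm(e, axis=0) + eps)\n    return cc.mean()\n\\end{verbatim}\n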
In particular, CSTF gains the best MPSNR score, and SC-LL1 achieves the best ERGAS, SAM, RMSE, and CC values among the competing approaches.\n\n\n\n\n\n\n\n\n\\subsection{Future challenges}\n\\label{sect:challenges_sr} \n\nThough tensor decomposition-based HS-MS fusion technology has advanced rapidly in recent years and shows a promising reconstruction ability due to its strong exploitation of spatial-spectral structure information, a number of challenges remain.\n\n\n$ \\textbf{Non-registered HS-MS fusion}$: Tensor decomposition-based HS-MS fusion methods focus on pixel-level image fusion, which implies that image registration between the two input modalities is a necessary prerequisite and that the fusion quality heavily depends on the registration accuracy. However, most of the current methods pay more attention to the follow-up fusion step, ignoring the importance of registration. As a challenging task, image registration handles the inputs of two modalities acquired from different platforms and at different times. In the future, efforts should be made to accomplish non-registered HS-MS fusion tasks.\n\n\n\n$ \\textbf{Blind HS-MS fusion}$: Existing tensor decomposition-based HS-MS fusion methods rely on the appropriate design of handcrafted priors to derive the desired reconstruction results. However, in most tensor-based methods, the degradation models are often assumed to be given, without estimating the real PSF and spectral response function. It is intractable to obtain precisely the degradation functions of real cases due to the uncertainty of sensor degradation.\nHow to devise blind HS-MS fusion methods with unknown degradation functions is a key challenge. \n\n\n$\\textbf{Inter-image variability}$: Differences in the acquisition times or platforms of the HS and MS modalities lead to discrepancies, referred to as inter-image variability. However, tensor decomposition-based approaches usually assume that the two modalities are acquired under the same conditions, and hence ignore the spectral and spatial variability that usually occurs in practice. Taking the inter-image variability phenomenon into consideration when modeling the degradation process is a key challenge for future research.\n\n\n\n\n\n\\section{HS SU}\n\\label{sect:HSI-unmixing}\n\n\\begin{figure*}[htb]\n\t\\begin{center}\n\t\t\\includegraphics[height=7cm]{unmixing_fr.pdf}\n\t\\end{center}\n\t\\caption[unmixing]{Illustration of HS unmixing based on the linear mixing model and nonlinear mixing model. (a) Linear mixing, (b) intimate mixture, (c) multilayered mixture. }\n\t\\label{fig:Visio-unmixing-fr}\n\\end{figure*}\n\nOwing to the continuous abundance maps it produces, SU has been widely used to solve the inversion problems of typical ground object parameters, such as vegetation index, surface temperature, and water turbidity, over several decades \\cite{sonnentag2007mapping,alcantara2009improving, deng2013estimating}, and has been successfully applied in some RS applications, such as forest monitoring and land cover change detection \\cite{hlavka1995unmixing}. In addition, due to the mixing phenomenon caused by the heterogeneity and stratified distribution of ground objects, SU can effectively realize crop identification and monitoring \\cite{lobell2004cropland, iordache2014dynamic, chi2014spectral }.\n\nWhen the mixing scale is macroscopic and each ray of incident light reaching the sensor has interacted with just one material, the measured spectrum is usually regarded as a linear mixture, as shown in Fig. \\ref{fig:Visio-unmixing-fr} (a). 
However, due to the existence of nonlinear interactions in real scenarios, several physics-based approximations of the nonlinear mixing model (NLMM) have been proposed, mainly covering two types of mixing assumptions: the intimate mixture (Fig. \\ref{fig:Visio-unmixing-fr} (b)) and the multilayered mixture (Fig. \\ref{fig:Visio-unmixing-fr} (c)).\n The former describes the interactions undergone at a microscopic scale by a surface composed of particles. The intimate mixture usually occurs in scenes containing sand or mineral mixtures and requires a certain kind of prior knowledge of the geometric positioning of the sensor to establish the mixture model. The latter characterizes the light reflectance of various surface materials at a macroscopic scale. The multilayered mixture usually occurs in scenes composed of materials with some height differences, such as forest, grassland, or rocks, containing many nonlinear interactions between the ground and the canopy. In general, multilayered mixtures of order higher than two are ignored owing to their negligible interactions. For the second-order multilayered mixture model, the family of bilinear mixing models is usually adopted to solve the NLMM.\nDue to the low spatial resolution of sensors, many pixels mixed by different pure materials exist in HS imagery, which inevitably conceals useful information and hinders high-level image processing. SU aims to separate the observed spectrum into a suite of basic components, also called endmembers, and their corresponding fractional abundances. \n\n\\subsection{Linear Mixing Model}\nUnder the assumption of a single interaction between the incident light and the material, representative SU methods are based on the following linear mixing model (LMM) \\cite{c189,RenL2021A}:\n\\begin{equation}\n \\begin{aligned}\n \\label{eq:LMM}\n \\mathbf{X} = \\mathbf{E} \\mathbf{A} + \\mathbf{N}\n \\end{aligned}\n\\end{equation}\nwhere $\\mathbf{X} \\in \\mathbb{R}^{z \\times hv}$, $\\mathbf{E} \\in \\mathbb{R}^{z \\times r}$, $\\mathbf{A} \\in \\mathbb{R}^{r \\times hv}$, and $\\mathbf{N} \\in \\mathbb{R}^{z \\times hv}$ denote the observed unfolded HS matrix, the endmember matrix, the abundance matrix, and the additive noise, respectively. The LMM-based methods have drawn much attention due to their model simplicity and desirable performance \\cite{c205,c206,c207}. However, current LMM-based matrix factorization methods usually convert the 3-D HS cube into a 2-D matrix, leading to the loss of spatial information in the relative positions of pixels. Tensor factorization-based approaches have been dedicated to SU to overcome this limitation of the LMM.\n\n\\subsubsection{CP or Tucker Decomposition Model}\n\nZhang \\textit{et al}. \\cite{c190,c191} first introduced nonnegative tensor factorization (NTF) into SU via CP decomposition. However, this NTF-SU method hardly considers the relationship between the LMM and NTF, giving rise to a lack of physical interpretability. \nImbiriba \\textit{et al}. \\cite{c212} considered the underlying variability of spectral signatures and developed a flexible approach, named unmixing with LR tensor regularization algorithm accounting for EM variability (ULTRA-V). The ranks of the abundance tensor and the endmember tensor were estimated with only two easily adjusted parameters.\nSun \\textit{et al}. 
\\cite{c208} first introduced Tucker decomposition for blind unmixing and enhanced the sparsity of the abundance tensor.\n\n\\subsubsection{BT Decomposition Model}\n\nIn terms of tensor notation, an HS data tensor can be represented by the sum of outer products of an endmember (vector) and its abundance fraction (matrix). This enables a matrix-vector third-order tensor factorization that consists of $R$ component tensors:\n\\begin{equation}\n \\begin{aligned}\n \\label{eq:SU-BTD}\n \\mathcal{X} & =\\sum_{r=1}^{R} \\mathbf{A}_{r} \\cdot \\mathbf{B}_{r}^{T} \\circ \\mathbf{c}_{r} +\\mathcal{N}\\\\\n &=\\sum_{r=1}^{R} \\mathbf{E}_{r} \\circ \\mathbf{c}_{r} +\\mathcal{N}\n \\end{aligned}\n\\end{equation}\nwhere $\\mathbf{E}_{r}$, calculated as the product of $\\mathbf{A}_{r}$ and $\\mathbf{B}_{r}^{T}$, denotes the abundance matrix, $\\mathbf{c}_{r}$ is the endmember vector, and $\\mathcal{N}$ represents the additive noise. Apparently, this matrix-vector tensor decomposition has the same form as the BT decomposition, setting up a straightforward link with the previously mentioned LMM. Qian \\textit{et al}. \\cite{c192} proposed a matrix-vector NTF unmixing method, called MVNTF, by combining the characteristics of CPD and Tucker decomposition to extract the complete spectral-spatial structure of HS images. The MVNTF method for SU is formulated as \n\\begin{equation}\n \\begin{aligned}\n \\label{eq:MVNTF}\n &\\min_{\\mathbf{E},\\mathbf{c}} || \\mathcal{X}- \\sum_{r=1}^{R} \\mathbf{E}_{r} \\circ \\mathbf{c}_{r} ||^2_F \\\\\n & {\\rm s.t.} \\quad \\mathbf{A}_{r} , \\mathbf{B}_{r}^{T}, \\mathbf{c}_{r} \\geq 0\n \\end{aligned}\n\\end{equation}\n\nMVNTF is essentially derived from the BT decomposition and establishes a physical connection with the LMM. Compared with NMF-based unmixing approaches, MVNTF can achieve better unmixing performance in most cases. Nevertheless, the abundance results extracted by MVNTF may be over-smoothed and lose detailed information due to the strict LR constraint of NTF. Various spatial and spectral structures, such as spatial-spectral smoothness and non-local similarity, have proven useful for tackling the problems of pure MVNTF. \n\nXiong \\textit{et al}. \\cite{c193} presented a TV regularized NTF (NTF-TV) method to make locally smooth regions share similar abundances between neighboring pixels and to suppress the effect of noise. Zheng \\textit{et al}. \\cite{c194} offered a sparse and LR tensor factorization (SPLRTF) method to flexibly achieve the LR and sparsity characteristics of the abundance tensor. Feng \\textit{et al}. \\cite{c195} imposed three additional constraints, namely sparseness, volume, and nonlinearity, on the MVNTF framework to improve the accuracy of impervious surface area fraction\/classification maps. Li \\textit{et al}. \\cite{c196} integrated NMF into MVNTF by making full use of their individual merits to characterize the intrinsic structure information. Besides, a sparsity-enhanced convolutional operation (SeCoDe) method \\cite{c198} incorporated a 3-D convolutional operation into MVNTF for the blind SU task.\n\n\n\\begin{figure*}[htb]\n\t\\begin{center}\n\t\\includegraphics[height=12cm]{unmixing_exp.pdf}\n\t\\end{center}\n\t\\caption[unmixing1]{Abundance maps of different methods on the Urban data set. }\n\t\\label{fig:Visio-unmixing1}\n\\end{figure*}\n\n\n\\begin{figure*}[htb]\n\t\\begin{center}\n\t\\includegraphics[height=8cm]{Endmember_result.pdf}\n\t\\end{center}\n\t\\caption[unmixing2]{Endmember results of different methods on the Urban data set. 
(a) Asphalt, (b) Grass, (c) Tree, and (d) Roof. }\n\t\\label{fig:Visio-unmixing2}\n\\end{figure*}\n\n\\begin{table*}[!htbp]\n\\centering\n\\caption{A quantitative comparison of different methods for HS SU.}\n\\begin{spacing}{1.0}\n\\scalebox{0.93}{\n\\begin{tabular}{c|c|cccc}\n\\hline\n\\multicolumn{2}{c|}{Method}&\tMVNTF & MVNTF-TV & SeCoDe & LR-NTF \\\\\n\\hline\n\\multirow{4}{*}{SAD} & Asphalt & 0.3738\t& 0.2606 & 0.2190 & $\\mathbf{0.1127}$ \\\\\n&Grass & 0.2572 &0.1722\t& $\\mathbf{0.0450}$ & 0.1349 \\\\\n&Tree\t&0.1474\t&0.1450 & 0.0854 &$\\mathbf{0.0632}$ \\\\\n&Roof\t&0.2825&\t0.2273&\t0.3861&\t$\\mathbf{0.0395}$\\\\\n\\cline{1-2}\n\\multicolumn{2}{c|}{MSAD} &\t0.2652&\t0.2013&\t0.1839&\n$\\mathbf{0.0876}$ \\\\\n\\multicolumn{2}{c|}{RMSE}\t&0.2638&\t0.2588\t&0.1453&$\\mathbf{\t0.1451}$ \\\\\n\\hline\n\\end{tabular}\n}\n\\label{tab:tab-unmixing}\n\\end{spacing}\n\\end{table*}\n\n\\subsubsection{Mode-$3$ Tensor Representation Model}\n\nUnder the definition of the tensor mode-$n$ multiplication, LMM (\\ref{eq:LMM}) is equivalent to\n\\begin{equation}\n \\begin{aligned}\n \\label{eq:mul-LMM}\n \\mathcal{X} = \\mathcal{A} \\times_3 \\mathbf{E} + \\mathcal{N} \n \\end{aligned}\n\\end{equation}\nwhere $\\mathcal{A} \\in \\mathbb{R}^{h \\times v \\times R}$ denotes the abundance tensor containing $R$ endmembers. In \\cite{c197}, the non-local LR tensor and 3-DTV regularization of the abundance tensor were introduced further extract the spatial contextual information of HS data. With abundance nonnegative constraint (ANC) and abundance sum-to-one constraint (ASC) \\cite{c211}, the objective function of NLTR for SU is expressed as\n\\begin{equation}\n \\begin{aligned}\n \\label{eq:NLTR-SU}\n&\\min _{\\mathcal{A}} \\frac{1}{2}\\left\\|\\mathcal{X}-\\mathcal{A} \\times_{3} \\mathrm{E}\\right\\|_{F}^{2}+\\lambda_{\\mathrm{TV}}\\|\\mathcal{A}\\|_{\\mathrm{2DTV}}+\\lambda_{\\mathrm{NL}} \\sum_{k=1}^{K}\\left\\|\\mathcal{A}^{k}\\right\\|_{\\mathrm{NL}}\\\\\n&\\text { s.t. } \\mathcal{A} \\geq \\mathbf{0}, \\quad \\mathcal{A} \\times_{1} \\mathbf{1}_{P}=\\mathbf{1}_{h \\times v} \n \\end{aligned}\n\\end{equation}\nwhere $\\mathbf{1}_{P}$ is a $P$-dimensional vector of all 1, $\\mathbf{1}_{h \\times v} $ denotes a matrix of element 1, and the non-local LR regularization is defined as\n\\begin{equation}\n \\begin{aligned}\n\\|\\mathcal{A}^{k}\\|_{\\mathrm{NL}}=\\sum_{i=1}^{p} \\operatorname{LS}(\\mathbf{A}^{(i)})\n\\end{aligned} \n\\end{equation} \n\nANC, ASC, and the sparseness of abundance are often introduced into sparse unmixing models \\cite{Iordache2011Sparse,Iordache2014Collaborative}, which produces the endmembers and corresponding abundance coefficients by a known spectral library instead of extracting endmembers from HS data \\cite{RenL2022A,RenL2020A}. Sun \\textit{et al}. \\cite{c210} developed a weighted non-local LR tensor decomposition method for HS sparse unmixing (WNLTDSU) by adding collaborative sparsity and 2DTV of the endmember tensor into a weighted non-local LR tensor framework. The LR constraint and joint sparsity in the non-local abundance tensor were imposed in a non-local tensor-based sparse unmixing (NL-TSUn) algorithm \\cite{c209}.\n\n\n\n\n\\subsection{NonLinear Mixing Model}\nTo this end, numerous NLMMs have been studied in SU by modeling different order scatterings effects and producing more accurate unmixing results. 
Traditional NLMMs, such as bilinear mixture models (BMMs) \\cite{c199,c200,c201}, usually transform an HS cube into a 2-D matrix and therefore share the same drawback as LMMs \\cite{c202,c203,yao2019nonconvex}. \n\n\nTo effectively address the nonlinear unmixing problem, Gao \\textit{et al}. \\cite{c204} expressed an HS cube $\\mathcal{X} \\in \\mathbb{R}^{h \\times v \\times z}$ in tensor notation in the following form\n\\begin{equation}\n \\begin{aligned}\n \\label{eq:NLMM1}\n \\mathcal{X}=\\mathcal{A} \\times_{3} \\mathbf{C}+\\mathcal{B} \\times_{3} \\mathbf{E}+\\mathcal{N}\n \\end{aligned}\n\\end{equation}\nwhere $\\mathbf{C} \\in \\mathbb{R}^{z \\times R}$, $\\mathcal{B} \\in \\mathbb{R}^{h \\times v \\times R(R-1)\/2}$, and $\\mathbf{E} \\in \\mathbb{R}^{z \\times R(R-1)\/2}$ represent the mixing matrix, the nonlinear interaction abundance tensor, and the bilinear interaction endmember matrix, respectively. The nonlinear unmixing method of \\cite{c204} was the first one based on NTF; it takes advantage of the LR property of the abundance maps and nonlinear interaction maps, and it validated the potential of tensor decomposition in nonlinear unmixing. \n\n\n\\subsection{Experimental results and analysis}\n\\label{sect:experiment_SU}\n\nThe Urban HS data set, obtained by the HYDICE sensor over an urban area in Texas, USA, is selected for evaluating the performance of different unmixing methods, including MVNTF \\cite{c192}, MVNTF-TV \\cite{c193}, SeCoDe \\cite{c198}, and LR-NTF \\cite{c204}. For a fair comparison, the HS signal subspace identification by minimum error (HySime) \\cite{c221} and vertex component analysis (VCA) \\cite{c222} algorithms are adopted to determine the number of endmembers and the endmember initialization. The Urban data set contains $307 \\times 307$ pixels and 210 bands ranging from 0.4 to 2.5 $\\mu$m. Due to water vapor and atmospheric effects, 162 bands remain after removing the affected channels. Four main materials in this scene are investigated, that is, $\\#1$ Asphalt, $\\#2$ Grass, $\\#3$ Tree, and $\\#4$ Roof. Two quantitative metrics are utilized to evaluate the extracted abundance and endmember results, namely RMSE and SAD. \n\nFor illustrative purposes, Fig. \\ref{fig:Visio-unmixing1} and Fig. \\ref{fig:Visio-unmixing2} display the extracted abundances and the corresponding endmember results of different tensor decomposition-based SU approaches. The quantitative results on the Urban data are reported in Tab. \\ref{tab:tab-unmixing}, where the best results are marked in bold. MVNTF yields poor unmixing performance for both endmember extraction and abundance estimation compared with the other tensor-based unmixing methods, since it only considers the tensor structure to represent the spectral-spatial information of HS images and ignores other useful prior regularizations. Compared with MVNTF, MVNTF-TV integrates the advantages of TV and tensor decomposition, bringing certain performance improvements in terms of SAD, MSAD, and RMSE. SeCoDe effectively addresses the problem of spectral variability in a convolutional decomposition fashion, thereby yielding further improvement of the endmember and abundance results. Different from SeCoDe, LR-NTF considers the nonlinear unmixing model of tensor decomposition and the low-rankness regularization of abundances. 
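For reference, one common convention for the two metrics reported in Tab. \\ref{tab:tab-unmixing} is sketched below in Python (a minimal illustration with hypothetical variable names, assuming reference endmembers and abundances are available; it is not tied to any specific unmixing method):\n\\begin{verbatim}\nimport numpy as np\n\ndef sad(e_est, e_ref):\n    # spectral angle (radians) between estimated and reference endmembers\n    cos = np.dot(e_est, e_ref) \/ (np.linalg.norm(e_est) * np.linalg.norm(e_ref))\n    return float(np.arccos(np.clip(cos, -1.0, 1.0)))\n\ndef rmse(a_est, a_ref):\n    # root mean square error between estimated and reference abundance maps\n    return float(np.sqrt(np.mean((a_est - a_ref) ** 2)))\n\n# illustrative shapes for the Urban scene: z bands, r endmembers, h*v pixels\nz, r, hv = 162, 4, 307 * 307\nE_est, E_ref = np.random.rand(z, r), np.random.rand(z, r)\nA_est, A_ref = np.random.rand(r, hv), np.random.rand(r, hv)\nsads = [sad(E_est[:, i], E_ref[:, i]) for i in range(r)]\nprint('SAD:', sads, 'MSAD:', np.mean(sads), 'RMSE:', rmse(A_est, A_ref))\n\\end{verbatim}\nHere MSAD is simply the mean of the per-endmember SAD values, matching the corresponding row of Tab. \\ref{tab:tab-unmixing}.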
The unmixing results of LR-NTF are superior to those of other competitive approaches on the urban data, demonstrating its superiority and effectiveness.\n\n\n\n\n\n\n\n\\subsection{Future challenges}\n\\label{sect:challenges_SU} \n\nSeveral advanced tensor decomposition-based methods have recently achieved effectiveness in HS SU. Nonetheless, there is still a long way to go towards the definition of statistical models and the design of algorithms. In the following, we briefly summarize some aspects that deserve further consideration:\n\nThe most commonly utilized evaluation indices for HS SU include RMSE (that measures the error between the estimated abundance map and the reference abundance map) and SAD (which assesses the similarity of the extracted endmember signatures and the true endmember signatures). However, RMSE and SAD just contribute to a quantitative comparison of SU results when the ground truth for abundances and endmembers exists. If there are no references in the real scenario, meaningful and suitable evaluation metrics should be developed in future work.\n\nTraditional NLMMs are readily interpreted as matrix factorization problems. The tensor decomposition-based NLMM has been springing up in the recent few years. We should consider complex interactions like the intimate and multilayered mixture for establishing general and robust tensor models.\n\n\nAnother important challenge is the high time consumption required by high-performance SU architectures, which hinders their applicability in real scenarios. Especially, as the number of end members and the size of the image increase, the current NTF-based unmixing methods are difficult to deal with this situation owing to a large amount of computational consumption. Therefore, the exploration of more computationally efficient tensor-based approaches will be an urgent research direction in the future.\n\n\n\n\n\n\\section{Conclusion}\n\\label{sect:conclusion}\n\nHS technique accomplishes the acquisition, utilization, and analysis of nearly continuous spectral bands and permeates through a broad range of practical applications, having attached incremental attention from researchers worldwide. In HS data processing, large-scale and high-order properties are often involved in collected data. The ever-growing volume of 3-D HS data puts higher demands on the processing algorithms to replace the 2-D matrix-based methods. Tensor decomposition plays a crucial role in both problem modelings and methodological approaches, making it realizable to leverage the spectral information of each complete 1-D spectral signature and the spatial structure of each complete 2-D spatial image. In this article, we presented a comprehensive and technical review of five representative HS topics, including HS restoration, CS, AD, HS-MS fusion, and SU. Among these tasks, we reviewed current tensor decomposition-based methods with main formulations, experimental illustrations, and remaining challenges. The most important and compatible challenges related to consolidating tensor decomposition techniques for HS data processing should be emphasized and summarized in five aspects: model applicability, parameter adjustment, computational efficiency, methodological feasibility, and multi-mission applications.\n\n$\\textbf{Model applicability}$: Tensor decomposition theory and practice offer us versatile and potent weapons to solve various HS image processing problems. 
A high-dimensional tensor is often decomposed by different categories of tensor decomposition into several decomposition factors\/cores. One sign reveals that the mathematical meaning of different factors\/cores should be made connection with the physical properties of HS structure. Another sign is that each HS task contains multiple modeling problems, such as various types of HS noise (i.e., Gaussian noise, stripes, or mixed noise) caused by different kinds of sensors or external conditions. The tensor decomposition-based models should be capable of characterizing the specific HS properties and being used in different scenarios.\n\n\n$\\textbf{Parameter adjustment}$: In the algorithmic solution, parameter adjustment is an indispensable portion to achieve the significant performances of HS data processing. Parameters can be gradually tuned via extensive simulated experiments, while sometimes, they should be reset for various data sets due to the uncertainty of data size. In practice, users are most likely to be non-professional with little knowledge of a special algorithm, leading\nto improper parameter setting and unsatisfactory processing results. Therefore, in the future, efforts should be made to design a fast proper-parameter search scheme or reduce the number of parameters to increase algorithmic practicability.\n\n\n\n$\\textbf{Computational consumption}$: Tensor decomposition-based methods have achieved satisfactory results in HS data processing, yet they sometimes cause high computational consumption. For instance, a non-local LR tensor denoising model, TDL spends more than 10 min under a data set of $200 \\times 200 \\times 80$. As the image size increases, the increasing number of non-local FBPs will cause a larger amount of time consumption. Thus, there still exists a vast room for promotion and innovation of improving the optimization efficiency of HS data processing.\n\n\n\n\n$\\textbf{Methodological feasibility}$: Unlike deep learning-based methods, designing handcrafted priors is the key to tensor decomposition-based methods. Existing methods exploit the structure information of the underlying target image by implementing various handcrafted priors, such as LR, TV, and non-local similarity. However, different priors assumptions apply to specific scenarios, making it challenging to choose suitable priors according to the characteristics of HS images to be processed. Deep learning-based methods automatically learn the prior information implicitly from data sets themselves without the trouble of manually designing a manual regularizer. As an advisable approach, deep learning can be incorporated into tensor-based methods to mine essential multi-features and enhance the methodological feasibility.\n\n\n\n$\\textbf{Multi-mission applications}$: The extremely broad field of HS imagery makes it impossible to provide an exhaustive survey on all of the promising HS RS applications. It is certainly of significant interest to develop tensor decomposition-based models for other noteworthy processing and analysis chains in future work, including classification, change detection, large-scale land cover mapping, and image quality assessment. Some HS tasks serve as the pre-processing step for high-level vision. For example, the accuracy of HS classification can be improved after an HS denoising step. 
How to apply tensor decomposition for high-level vision and even multi-mission frameworks may be a key challenge.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\bibliographystyle{IEEEtran}\n\n\\section{Motivation and significance}\n\\label{sect:ms}\n\nOver the past decade, massive efforts have been made to process and analyze HS RS data after the data acquisition. Initial HS data processing considers either the gray-level image for each band or the spectral signature of each pixel \\cite{c2}. From one side, each HS spectral band is regarded as a gray-level image, and the traditional 2-D image processing algorithms are directly introduced band by band \\cite{c175}. From another side, the spectral signatures that have similar visible properties (e.g., color, texture) can be used to identify the materials \\cite{c174}. Furthermore, extensive low-rank (LR) matrix-based methods are proposed to explore the high correlation of spectral channels with the assumption that the unfolding HS matrix has a low rank \\cite{c20}. However, these traditional LR models reshape each spectral band as a vector, leading to the destruction of the inherent spatial-spectral completeness of HS images. Correct interpretations of HS images and the appropriate choice of the intelligent models should be determined to reduce the gap between HS tasks and the advanced data processing technique. Both 2-D spatial information and 1-D spectral information are considered when an HS image is modeled as a three-order tensor.\n\n\\begin{figure}[htp!]\n\t\\begin{center}\n \\includegraphics[width = 0.45\\textwidth]{papernumber.pdf}\n\t\\end{center}\n\t\\caption[houston]{The number of journal and conference papers that published in IEEE Xplore on the subject of \"hyperspectral\" and \"tensor decomposition\" within different time periods. }\n\t\\label{fig:Visio-papernum}\n\\end{figure}\n\n\\begin{figure*}[htp!]\n\t\\begin{center}\n \\includegraphics[width = 0.85\\textwidth]{total-fr.pdf}\n\t\\end{center}\n\t\\caption[houston]{A taxonomy of main tensor decomposition-based methods for hyperspectral data processing. Brackets enclose the number of papers for each topics that appeared in IEEE Xplore. }\n\t\\label{fig:Visio-total}\n\\end{figure*}\nTensor decomposition, which originates from Hitchcock's works in 1927 \\cite{c179}, touches upon numerous disciplines, but it has recently become prosperous in the fields of signal processing and machine learning over the last ten years \\cite{c181}. The early overviews focus on two common decomposition ways: Tucker decomposition and CANDECOMP\/PARAFAC (CP) decomposition. In 2008, these two decompositions were first introduced into HS restoration tasks to remove the Gaussian noise \\cite{c25}. The tensor decomposition-based mathematical models avoid converting the original dimensions, and also to some degree, enhance the interpretability and completeness for problem modeling. Different types of prior knowledge (e.g, non-local Similarity in the spatial domain, spatial and spectral smoothness) in HS RS are considered and incorporated into the tensor decomposition frameworks. However, on the one hand, additional tensor decomposition methods have been proposed recently, including block term (BT) decomposition, tensor-singular value decomposition (T-SVD) \\cite{c184}, tensor train (TT) decomposition \\cite{c185}, and tensor ring (TR) decomposition \\cite{c126}. On the other hand, as a versatile tool, tensor decomposition related to HS data processing has not been reviewed until. \n\nFig. 
\\ref{fig:Visio-papernum} displays the dynamics of tensor decompositions used for HS data processing in the HS community. The listed numbers contain both scientific journal and conference papers published in IEEE Xplore, which regards \"hyperspectral\" and \"tensor decomposition\" as the main keywords in abstracts. To highlight the increasing trend of number of publications, time period has been divided into four equal time slots (i.e., 2007-2010, 2011-2014, 2015-2018, 2019-2022(05 January)).\nIn this article, we mainly present a systematic overview from the perspective of the state-of-the-art tensor decomposition techniques for HS data processing in terms of the five burgeoning topics previously mentioned. \n\n\n\n\n\n\\begin{figure*}[htb]\n\t\\begin{center}\n\t\t\\includegraphics[width = 0.8\\textwidth]{total-fr-fig.pdf}\n\t\\end{center}\n\t\\caption[restoration]{A schematic diagram of HS data processing, including restoration, compressive sensing, anomaly detection, hyperspectral-multispectral (HS-MS) fusion, and spectral unmixing. }\n\t\\label{fig:Visio-total2}\n\\end{figure*}\n\n\\noindent\n\\hangafter=1\n\\setlength{\\hangindent}{2em}\n(1) To the best of our knowledge, this is the first time to provide a comprehensive survey of the state-of-the-art tensor decomposition techniques for processing and analyzing HS RS images. More than 100 publications in this field are reviewed and discussed, most of which were published during the last five years.\n\n\\noindent\n\\hangafter=1\n\\setlength{\\hangindent}{2em}\n(2) For each HS topic, major representative works are scrupulously presented in terms of the specific categories of tensor decomposition. We introduce and discuss the pure tensor decomposition-based methods and their variants with other HS priors in sequence. The experimental examples are performed for validating and evaluating theoretical methods, followed by a discussion of remaining challenges and further research directions.\n\n\n\\noindent\n\\hangafter=1\n\\setlength{\\hangindent}{2em}\n(3) This article makes a connection between tensor decomposition modeling and HS prior information. We summarizes with the publication years, brief description, and prior information. Either beginners or experiencers are expected to obtain certain harvest pertinent to the tensor decomposition-based frameworks for HS RS. The available codes are also displayed for the sake of repeatability and further studies in the final submission.\n\n \n\n\n \n\n\\section{Outline}\n\\label{sect:outline}\n\nThis paper provides a brief introduction for various tensor decomposition models. Fig. \\ref{fig:Visio-total} illustrates a taxonomy of main tensor decomposition-based methods for HS data processing. Very recently, some excellent performances of tensor decompositions for HS data processing have garnered growing attention from researchers, leading to the novel prosperity of tensor modelings and related solving algorithms. These issues and solutions pose fresh challenges to research on optimizations, which inspires the further development of both tensor decompositions and HS data processing. Fig. \\ref{fig:Visio-total2} presents the illustration for each topic. 
\n\n\n\n\n\\subsection{Restoration}\n\\label{sect:restoration}\n\n\n\nIn the actual process of HS data acquisition and transformation, external environmental changes and internal equipment conditions inevitably lead to noise, blur, and missing data (including clouds and stripes), which degrade the visual quality of HS images and the efficiency of the subsequent HS data analysis. Therefore, HS image restoration appears as a crucial pre-processing step for further applications. Mathematically, an observed degraded HS image can be formulated as follows\n \\begin{equation}\n\t\\begin{split}\n\t\t\\label{eq:degrade}\n\\mathcal{T}=M(\\mathcal{X}) + \\mathcal{S} + \\mathcal{N}\n\t\\end{split}\n\\end{equation} \nwhere $\\mathcal{T} \\in \\mathbb{R}^{h \\times v \\times z}$, $\\mathcal{X} \\in \\mathbb{R}^{h \\times v \\times z}$, $\\mathcal{S} \\in \\mathbb{R}^{h \\times v \\times z}$ and $\\mathcal{N} \\in \\mathbb{R}^{h \\times v \\times z}$ represent an observed HS image, the restored HS image, the sparse error, and additive noise, respectively, and $M(\\cdot)$ denotes different linear degradation operators for different HS restoration problems: (a) when $M(\\cdot)$ is a blur kernel, also called the point spread function (PSF), Eq. (\\ref{eq:degrade}) becomes the HS deblurring problem; \n(b) when $M(\\cdot)$ is a binary mask, i.e., 1 for observed pixels and 0 for missing data, Eq. (\\ref{eq:degrade}) turns into the HS inpainting problem;\n(c) when $M(\\mathcal{X})$ keeps $\\mathcal{X}$ constant, i.e., $M(\\mathcal{X}) = \\mathcal{X}$, Eq. (\\ref{eq:degrade}) is reformulated as the HS destriping problem ($\\mathcal{T}=\\mathcal{X} + \\mathcal{S}$) or the HS denoising problem (considering only Gaussian noise, $\\mathcal{T}=\\mathcal{X} + \\mathcal{N}$, or mixed noise, $\\mathcal{T}=\\mathcal{X} + \\mathcal{S} + \\mathcal{N}$). The HS restoration task is to estimate the recovered HS image $\\mathcal{X}$ from the given HS image $\\mathcal{T}$. This is an ill-posed problem, so extra constraints on $\\mathcal{X}$ need to be enforced to obtain a meaningful solution. The HS restoration problem can be summarized as \n\\begin{equation}\n \\begin{aligned}\n \\label{eq:summ}\n \\underset{\\mathcal{X}}{ \\min } \\frac{1}{2} || \\mathcal{T} - M(\\mathcal{X}) - \\mathcal{S} ||^2_F + \\tau f(\\mathcal{X}) + \\lambda g(\\mathcal{S})\n \\end{aligned}\n\\end{equation}\nwhere $f(\\mathcal{X})$ and $g(\\mathcal{S})$ stand for regularizers that promote the desired properties of the recovered $\\mathcal{X}$ and of the sparse part $\\mathcal{S}$, respectively, and $\\tau$ and $\\lambda$ are regularization parameters.\n\n\n\n\\subsection{Compressive sensing}\n\\label{sect:HSI-CS}\n\n\n\nCompressive sensing (CS) of HS images aims to precisely reconstruct an HS cube $\\mathcal{X} \\in \\mathbb{R}^{h \\times v \\times z}$ from a few compressive measurements $\\textbf{y} \\in \\mathbb{R}^m$ by means of effective HS CS algorithms. The compressive measurements $\\textbf{y}$ can be formulated as:\n\\begin{equation}\n\\label{eq:y1}\n\\textbf{y} = \\Psi (\\mathcal{X})\n\\end{equation}\nwhere $\\Psi$ is a measurement operator instantiated as $\\Psi= \\textbf{D} \\cdot \\textbf{H} \\cdot \\textbf{P}$, in which $\\textbf{D}$ is a random downsampling operator, $\\textbf{H}$ is a random permutation matrix, and $\\textbf{P}$ is a Walsh-Hadamard transform; the mapping of $\\Psi$ is $ \\mathbb{R}^{h \\times v \\times z} \\rightarrow \\mathbb{R}^m$ with $m \\ll hvz$ (the sampling ratio is $m\/(hvz)$). 
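To make the action of $\\Psi$ concrete, the following Python sketch (purely illustrative: a small cube is used so that $hvz$ is a power of two and an explicit Walsh-Hadamard matrix exists) generates $m$ compressive measurements from an HS cube:\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.linalg import hadamard\n\nrng = np.random.default_rng(0)\nh, v, z = 8, 8, 16            # toy cube; hvz = 1024 is a power of two\nm = 128                       # number of compressive measurements, m << hvz\nX = rng.random((h, v, z))     # hypothetical HS cube\nx = X.reshape(-1)             # vectorize the cube\nn = x.size\nP = hadamard(n) * n ** -0.5   # Walsh-Hadamard transform (P)\nperm = rng.permutation(n)     # random permutation (H)\nkeep = rng.choice(n, size=m, replace=False)  # random downsampling (D)\ny = (P @ x)[perm][keep]       # y = D H P x, sampling ratio m\/(hvz)\n\\end{verbatim}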
The exact reconstruction of $\\mathcal{X}$ from $\\mathbf{y}$ is guaranteed by CS theory when $\\Psi$ satisfies the restricted isometry property (RIP). The HS CS task can be generalized as the following optimization problem:\n\\begin{equation}\n\\label{eq:hs_cs}\n\\begin{aligned}\n\\underset{\\mathcal{X}}{\\textrm{min}}\\;&\\|\\textbf{y}-\\Psi (\\mathcal{X})\\|_F^2 + \\lambda F(\\mathcal{X}),\n\\end{aligned}\t\n\\end{equation}\nwhere $F(\\mathcal{X})$ denotes an additional regularization term that exploits different types of HS prior information, such as spectral correlation, spatial and spectral smoothness, and non-local similarity. \n\n\n\n\\subsection{Anomaly detection}\n\\label{sect:HSI-AD}\n\n\n HS anomaly detection (AD) aims to discover and separate potential man-made objects from the observed image scene, which is typically useful for defense and surveillance applications. The key to coping with this problem is to exploit the discrepancy between anomalies and their background. Anomalies commonly occur with low probabilities, and their spectral signatures are quite different from those of their neighbors.\n\nAn HS image, containing two spatial dimensions and one spectral dimension, is intrinsically a third-order tensor. Tensor-based approaches have gradually attracted attention for HS AD in recent years. Tucker decomposition is the first and most essential type of tensor decomposition used for HS AD. Therefore, in the following sections, we mainly focus on the Tucker decomposition-based methods and a few other types of tensor-based methods. An observed HS image $\\mathcal{T}$ can be decomposed into two parts by Tucker decomposition, i.e.,\n\\begin{equation}\n \\begin{aligned}\n \\mathcal{T} = \\mathcal{X} + \\mathcal{S}\n \\end{aligned}\n\\end{equation}\nwhere $\\mathcal{X}$ is the LR background tensor and $\\mathcal{S}$ is the sparse tensor containing the anomalies.\n\n\n\n\n \\subsection{HS-MS fusion}\n\\label{sect:HSI-SR}\n\nHS and MS fusion aims to improve the spatial resolution of HS images with the assistance of MS images, generating HS images with high spatial resolution and the original spectral resolution. \nSuppose that a desired high-spatial-spectral resolution HS (HR-HS) image, a low-resolution HS (LR-HS) image, and a high-resolution MS (HR-MS) image are denoted by $\\mathcal{X} \\in \\mathbb{R}^{ H \\times V \\times B}$, $\\mathcal{Y} \\in \\mathbb{R}^{ h \\times v \\times B}$ and $\\mathcal{Z} \\in \\mathbb{R}^{ H \\times V \\times b}$ ($H \\gg h$, $V \\gg v$, $B \\gg b$), respectively. An LR-HS image is seen as a spatially blurred and downsampled version of $\\mathcal{X}$, and an HR-MS image is a spectrally downsampled version of $\\mathcal{X}$. The two degradation models are expressed as follows\n\\begin{equation}\n \\begin{aligned}\n \\label{eq:HSR}\n \\mathbf{Y}_{(3)} = \\mathbf{X}_{(3)} \\mathbf{R} + \\mathbf{N}_h\n \\end{aligned}\n\\end{equation}\n\\begin{equation}\n \\begin{aligned}\n \\label{eq:HSR2}\n \\mathbf{Z}_{(3)} = \\mathbf{G} \\mathbf{X}_{(3)} + \\mathbf{N}_m\n \\end{aligned}\n\\end{equation}\nwhere $\\mathbf{R} = \\mathbf{B} \\mathbf{K}$, $\\mathbf{B}$ denotes a convolution blurring operation, $\\mathbf{K}$ is a spatial downsampling matrix, and $\\mathbf{G}$ represents the spectral-response function of an MS image sensor, which can be regarded as a spectral downsampling matrix. 
$\\mathbf{N}_h$ and $\\mathbf{N}_m$ stand for noise.\nAccording to references \\cite{c142}, $\\mathbf{R}$ and $\\mathbf{G}$ are assumed to be given in advance of solving the HS SR problem \n\\begin{equation}\n \\begin{aligned}\n \\label{eq:HSR-pro}\n \\min_{\\mathcal{X}} ||\\mathbf{Y}_{(3)} - \\mathbf{X}_{(3)} \\mathbf{R} ||^2_F +\n ||\\mathbf{Z}_{(3)} - \\mathbf{G} \\mathbf{X}_{(3)} ||^2_F + \\tau f ({\\mathcal{X}})\n \\end{aligned}\n\\end{equation}\n\n\n\\subsection{Spectral unmixing}\n\\label{sect:HSI-unmixing}\n\n\nDue to the low spatial resolution of sensors, many pixels mixed by different pure materials exist in HS imagery, which inevitably conceals useful information and hinders the further image processing. Spectral unmixing aims to separate the observed spectrum into a suite of basic components, also called endmembers, and their corresponding fractional abundances. \n\nAn HSI data tensor can be represented by sum of the outer products of an endmember (vector) and its abundance fraction (matrix). This enables a matrix-vector third-order tensor factorization that consists of $R$ component tensors:\n\\begin{equation}\n \\begin{aligned}\n \\label{eq:SU-BTD}\n \\mathcal{X} & =\\sum_{r=1}^{R} \\mathbf{A}_{r} \\cdot \\mathbf{B}_{r}^{T} \\circ \\mathbf{c}_{r} +\\mathcal{N}\\\\\n &=\\sum_{r=1}^{R} \\mathbf{E}_{r} \\circ \\mathbf{c}_{r} +\\mathcal{N}\n \\end{aligned}\n\\end{equation}\nwhere $\\mathbf{E}_{r}$ calculated by the product of $\\mathbf{A}_{r}$ and $\\mathbf{B}_{r}^{T}$ denotes the abundance matrix, $\\mathbf{c}_{r}$ is the endmember vector, and $\\mathcal{N}$ represented the additional noise. The tensor factorization of endmember and its abundance can replaced by other decompositions.\n\nIn the final submission, we will offer specific tensor decomposition modelings, show the experimental performances and pose fresh challenges for each topic.\n\n\n\n\n\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{\\label{sec:Introduction}Introduction}\n\n\n\nQuantum information processing arbitrates controlled interaction of the Hilbert space of a quantum system, for the purpose of generating a target probability distribution expressed in a computational basis defined by an experimental measurement scheme. The Hilbert space of a quantum system generally grows exponentially with the number of degrees of freedom, but for the purpose of quantum information processing it needs to be opportunistically partitioned in order to execute algorithms. The most common encoding exploits a collection of qubits, two-level systems, and it is known that the dimensionality of the Hilbert space is maximized when the states are arranged as a collection of qutrits, three-level systems, for a fixed number of allowed quantum states \n \\cite{greentree2004maximizing}. \nWithout loss of generality, multiple qudits can be merged into the definition of a new qudit, and a qudit can be mapped via binary encodings into a minimum of $\\log_2(d)$ qubits and viceversa. For instance, the binary expansion\n\n\\begin{eqnarray}\n\\bigotimes_{k=0}^{N-1}|s_k\\rangle& \n\\xrightleftharpoons[qubits]{qudits}\n&\\left|\\sum_{k=0}^{N-1} s_k d^k\\right\\rangle\n\\label{eq:quditmapping}\n\\end{eqnarray}\nwould map the computational basis state of a ququart (i.e. 
a four-level qudit) onto two qubits $|0\\rangle\\rightarrow|00\\rangle$, $|1\\rangle\\rightarrow|01\\rangle$, $|2\\rangle\\rightarrow|10\\rangle$, $|3\\rangle\\rightarrow|11\\rangle$.\n\nIt is known since the beginning of quantum computing architecture research that universal quantum computing could be achieved by operating constructively on single-qubit and two-qubit at a time~\\cite{divincenzo1995two} via the implementation of quantum gates temporally arranged into quantum circuits. A similar result is known for qudits of arbitrary dimension~\\cite{bullock2005asymptotically,wang2020qudits}, which can provide hardware-efficient solutions \\cite{liu2021constructing} and lower-depth gate compilation and noise improvement compared to qubit-based systems \\cite{gokhale2019asymptotic, otten2021impacts, blok2021quantum, gustafson2021prospects, gustafson2022noise}. Of particular interest in the current period of technological maturity of quantum processors (the NISQ Era~\\cite{preskill2018quantum}) are variational algorithms such as the Quantum Approximate Optimization Algorithm (QAOA) and the Variational Quantum Eigensolver (VQE) that might achieve some quantum advantage without the fault-tolerance overhead of active error-correction~\\cite{cerezo2021variational}. Typically the quantum circuits of these algorithms feature unitary gates implementing a set of parametrized single-qudit rotations $U_M(\\beta)$ depending on some real angle $\\beta$. For instance, let us consider the set of $SU(2)$ rotations around the $X$-axis of the Bloch sphere for qubit systems, and the set of $SO(3)$ rotations that leave invariant the $|0\\rangle+|1\\rangle+|2\\rangle$ state for qutrits. Their matrix representations $U_{M}^{(2)}(\\beta)$ and $U_{M}^{(3)}(\\beta)$ are, respectively:\n\\setlength{\\thickmuskip}{0mu}\n\\setlength{\\medmuskip}{0mu}\n\\begin{eqnarray}\n U_{M}^{(2)}&\\equiv&\\begin{bmatrix} c_{\\frac{\\beta}{2}}&-is_{\\frac{\\beta}{2}} \\\\is_{\\frac{\\beta}{2}}&c_{\\frac{\\beta}{2}}\\end{bmatrix}\\label{eq:UMix}\\\\\n U_{M}^{(3)}&\\equiv&\\frac{1}{3}\\begin{bmatrix}\n 1\\text{+}2c_\\beta & 1-c_\\beta-\\sqrt{3}s_\\beta & 1-c_\\beta\\text{+}\\sqrt{3}s_\\beta\\\\\n 1-c_\\beta\\text{+}\\sqrt{3}s_\\beta & 1\\text{+}2c_\\beta & 1-c_\\beta-\\sqrt{3}c_\\beta\\\\\n 1-c_\\beta-\\sqrt{3}s_\\beta & 1-c_\\beta\\text{+}\\sqrt{3}s_\\beta & 1\\text{+}2c_\\beta,\\nonumber\n \\end{bmatrix}\n\\end{eqnarray}\n\\setlength{\\thickmuskip}{2mu}\n\\setlength{\\medmuskip}{2mu}\nwhere $c_x$, $s_x$ indicate $\\cos(x)$ and $\\sin(x)$ and the computational basis states are ordered in the canonical ascending way.\nThe two-qudit gates of interest for QAOA\/VQE ans\\\"atze are often diagonal in the computational basis. 
For instance, the following two-qudit and two-qutrit unitary gates $U_C(\\gamma)$ introduce a phase shift by the angle $\\gamma$ if the two qudits have the same computational state:\n\\begin{eqnarray}\n U_C^{(2)}&\\equiv&\\begin{bmatrix}\n e^{i\\gamma} & 0 & 0 & 0\\\\\n 0 & 1 & 0 & 0\\\\\n 0 & 0 & 1 & 0\\\\\n 0 & 0 & 0 & e^{i\\gamma}\n \\end{bmatrix}\\label{eq:UCost}\\\\\n U_C^{(3)}&\\equiv&\n \\begin{bmatrix}\n e^{i\\gamma} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\\\\n 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\\\\n 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\\\\n 0 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\\\\n 0 & 0 & 0 & 0 & e^{i\\gamma} & 0 & 0 & 0 & 0\\\\\n 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 0\\\\\n 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0\\\\\n 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0\\\\\n 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & e^{i\\gamma}\\\\\n \\end{bmatrix},\\nonumber\n\\end{eqnarray}\nwhere the canonically ordered basis for the matrix representation is used~\\footnote{$|00\\rangle$, $|01\\rangle$, $|10\\rangle$, $|11\\rangle$ for qubits, and $|00\\rangle$, $|01\\rangle$, $|02\\rangle$, $|10\\rangle$, $|11\\rangle$, $|12\\rangle$, $|20\\rangle$, $|21\\rangle$, $|22\\rangle$ for qutrits}. Note that for the most common case of qubits $U_C^{(2)}\\propto\\exp(i (\\gamma\/2)\\sigma_z\\otimes\\sigma_z)$, where $\\sigma_z$ are the standard Pauli matrices. For circuit quantum electrodynamics (cQED) systems, note also that there are ways to find effective spin models, which is generally used for encoding of quantum heuristic algorithms \\cite{miyazaki2022effective}.\n\n\n\nImplementing parametrized gates such as (\\ref{eq:UMix}-\\ref{eq:UCost}) starting from the elementary interactions provided by a NISQ processor is a non-trivial problem of \\emph{synthesis}~\\cite{magann2021pulses}, which often can be tackled only via heuristic numerical approaches and online experimental calibration~\\cite{klimov2020snake}. In this work, we consider the problem of synthesis of gates of the type (\\ref{eq:UMix}-\\ref{eq:UCost}) by driving with carefully optimized time-dependent interactions in a system of interacting states.\nMore specifically, the Hilbert space we are considering is spanned by a truncated set of anharmonic bosonic modes, defined with second quantized operators $a_m$, coupled in a density-density fashion. The corresponding many-body Hamiltonians and their truncated diagonal first quantization representations are:\n\\begin{eqnarray}\nH_m &=& \\omega_m a_m^\\dagger a_m + \\xi_m (a_m^\\dagger a_m)^2\\label{eq:Hm}\\\\\n \\Big|_{n_m\\ }&\\xrightarrow{}&|0\\rangle \\langle 0|+\\sum_{n=1}^{n_m-1} \\left[\\omega_m n + \\xi_m n^2\\right] |n\\rangle \\langle n|,\\nonumber\\\\\nH_{mm\\prime}^{int} &=& \\xi_{mm^\\prime} a_m^\\dagger a_m a^\\dagger_{m^\\prime} a_{m\\prime}\\label{eq:Hmm}\\\\\n \\Big|_{{n_m}\\atop{n_{m^\\prime}}}&\\xrightarrow{}&|00\\rangle \\langle 00|+\\sum_{n=1}^{n_m-1} \\sum_{k=1}^{n_{m\\prime}-1} \\xi_{mm^\\prime}nk |nk\\rangle\\langle nk|,\\nonumber\n\\end{eqnarray}\nwhere $n_m$, $n_{m\\prime}$ are the number of levels considered for each mode. In photonic implementations, $\\xi_m$ is called the self-Kerr coefficient for mode $m$ and $\\xi_{mm^\\prime}$ is called the cross-Kerr coefficient between modes $m$ and $m^\\prime$. \n\\begin{figure}[!htbp]\n\\begin{centering}\n\\includegraphics[width=\\columnwidth]{Figure1v4.pdf}\n\\par\\end{centering}\n\\caption{\\label{fig:System_and_Spectrum}\nTop: System figure with colored waves representing different cavity electromagnetic modes. 
Eigenspectrum for $\\mathcal{H}^{(A)}$ (left) and $\\mathcal{H}^{(B)}$ (right). \nSystem parameters and resonant frequencies are given in Section \\ref{sec:quantum_control}.\nArrows indicate the transition frequencies.\nDashed (continuous) arrows represent transitions between different energy levels with $|0\\rangle_{T}$ and $|0\\rangle_{T}$ ($|1\\rangle_{T}$).}\n\\end{figure}\nWe are considering two illustrative setups in order to describe how quantum information could be manipulated in systems featuring Hamiltonians of the type (\\ref{eq:Hm}-\\ref{eq:Hmm}). \nIn particular we consider that there is one \"control qubit mode\" $T$ whose Hilbert space is truncated to the first two computational states: \n\\begin{eqnarray}\nH_T &=& |0\\rangle_T \\langle 0|_T+(\\omega_T+\\xi_T)|1\\rangle_T \\langle 1|_T,\\label{eq:HT}\n\\end{eqnarray}\nand either one or two computational modes ($C$) interacting with the control mode, respectively truncated to the first 8 and 3 computational states:\n\\begin{eqnarray}\n\\mathcal{H}^{(A)} &=& H_T + H_m + H_{Tm}^{int}\\Big|_{\\substack{n_T=2\\\\n_m=8}}\n\\label{eq:H2}\n\\\\\n\\mathcal{H}^{(B)} &=& \\left. \\begin{array}{l} \n H_T + H_l + H_m\\\\\n + H_{Tl}^{int} + H_{Tm}^{int} \\nonumber\\\\ \\end{array}\\right|_{^{\\substack{n_T=2\\\\n_l=3\\\\n_m=3}}},\n \n\\end{eqnarray}\nwhere the dependence over the $\\omega$ and $\\xi$ parameters of the Hamiltonians is implied. This setup is a specific case of a generalized Jaynes-Cummings model~\\cite{blais2021circuit}. Note that for $\\mathcal{H}^{(B)}$, each $C$ mode is naturally a qutrit, while as noted in Eq.~(\\ref{eq:quditmapping}), the quantum occupation numbers of the cavity modes could be directly associated to qubit registers via binary expansion~\\cite{sawaya2020resource}. \n\nIn Fig.~\\ref{fig:System_and_Spectrum}, we show the energy spectrum of two specifications of $\\mathcal{H}^{(A)}$ and $\\mathcal{H}^{(B)}$ as well as a pictorial representation of a possible experimental setup that could be described by such effective Hamiltonians: A transmon circuit is embedded into a multimode 3D superconducting cavity, driven by the field of a coupled antenna. Indeed, our reference Hamiltonians can be derived by considering the superconducting transmon to be coupled with the cavity resonator in a dispersive way, i.e., by considering the effective interaction derived by perturbation theory assuming that the ratio of the transmon-cavity coupling and the difference between the transmon and the cavity fundamental frequencies is small, and neglecting the small effective couplings (i.e. cross-Kerr) between the cavity modes~\\cite{blais2021circuit, ma2021quantum}. The quantum control drive can be introduced in the model by adding a time-dependent term that allows to create and destroy excitations of a mode $m$:\n\\begin{equation}\n H^{drive}_{m}(t) = d_m(t) a_m + \\bar{d}_m(t) a_m^\\dagger,\\label{eq:HD}\n\\end{equation}\nwhere $d_m(t)$ are complex functions. This control Hamiltonian could be related to the (comparatively slowly varying) field generated by the antenna via phenomenologically justified approximations~\\cite{gerry2005introductory}. \n\n\nHaving introduced the main definitions and the systems under study, we outline the rest of the paper. In section \\ref{sec:quantum_control} we present the synthesis problem from a numerical point of view, following the implementation of quantum optimal control numerics in the open source package \\texttt{Juqbox.jl}~\\cite{Juqbox_Github}. 
Subsection \\ref{subseq:evaluation} will present results for the synthesis of simple QAOA proof-of-concept circuits based on the parallel execution of gates (\\ref{eq:UMix})-(\\ref{eq:UCost}). Finally,\nin Section \\ref{sec:discussion} \nwe will discuss future work, including improvements and generalizations of our case study to larger and more realistic systems, and what is needed for this method to be applied in practice to the compilation of variational quantum algorithms on realistic bosonic quantum processors based on 3D cQED technology.\n\n\n\\section{\\label{sec:quantum_control} Pulse Engineering Approach}\n\nThe \\emph{gate synthesis} problem that we are facing could be framed as the task of discovering the functions $d_m(t)$ that allow the Schr\\\"odinger evolution of $\\mathcal{H}+H^{drive}$ for a time $\\tau$ to match a target unitary operation $U$ as closely as possible: \n\\begin{eqnarray}\n U &\\;\\simeq\\;& \\mathcal{U}(\\tau)=\\mathcal{T} \\exp\\left[-\\frac{i}{\\hbar}\\int_0^\\tau dt\\left( \\mathcal{H}+H^{drive}(t)\\right)\\right]\n \\label{eq:schroevol}\n\\end{eqnarray}\nIn particular, as discussed in the previous section, we will be considering Eqs.~(\\ref{eq:H2}) for $\\mathcal{H}$ and Eqs.~(\\ref{eq:UMix}-\\ref{eq:UCost}) as target unitary matrices. In order to solve the synthesis problem numerically, it is cast into an optimization over a finite number of real parameters, which can be tackled following the theory of quantum optimal control (QOC)~\\cite{palao2002quantum}. There are multiple strategies currently implemented for gate synthesis via QOC or machine learning, each with its own benefits and tradeoffs. However, these methods have so far been tested on specific, limited cases, and the resulting insights are difficult to generalize; see, e.g.,~\\cite{riaz2019optimal, niu2019universal, PRXQCTRL}. In this paper we follow the techniques described in Ref.~\\cite{petersson2021optimal}, which specifically target cQED models, and we now briefly review and contextualize them for the systems under study. \n\nWe leverage a key simplification of the QOC problem, consisting of the decomposition of the $d_m(t)$ control functions into a truncated basis spanned by a linear combination of $N_b$ quadratic B-spline polynomials, $S_b(t)$, corresponding to wavelets modulated with $N_f$ resonant frequencies, i.e.\n\\begin{eqnarray}\n d_m(t)&=&\\sum_k^{N_f} e^{i\\Omega_{m,k}t} W_{m,k}(t)\\nonumber\\\\\n W_{m,k}(t)&=&\\sum_b^{N_b}\\alpha_{m,k,b}S_b(t),\n \\label{eq:Bsplines}\n\\end{eqnarray}\nwhere the $\\alpha$'s are complex coefficients representing the unknowns of the optimization problem. The choice of B-splines as a basis for the expansion is motivated by the computational efficiency of this parametrization of the control functions.\nThe resonant frequencies $\\Omega_{m,k}$ are defined by considering the energy differences between the states corresponding to the creation or annihilation of a boson, leaving the remaining occupations unchanged. Signals tuned to these frequencies initiate transitions, as can be shown by first-order time-dependent perturbation theory.\n\nWe show in Fig.~\\ref{fig:System_and_Spectrum} the resonant frequencies for our illustrative systems: for $\\mathcal{H}^{(A)}$, we count 8 transitions related to $T$-bosons and 14 transitions for $C$-bosons, for a total of 22 resonant frequencies. For $\\mathcal{H}^{(B)}$, there are 9 resonant frequencies in total that trigger $T$ transitions, and 24 transitions related to the $C$ modes. 
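Since the model Hamiltonians are diagonal in the Fock basis, these single-boson transition frequencies can be enumerated with a few lines of Python; the sketch below (illustrative and independent of \\texttt{Juqbox.jl}) uses for $\\mathcal{H}^{(B)}$ the parameter values quoted below in the text:\n\\begin{verbatim}\nimport numpy as np\nfrom itertools import product\n\n# H^(B) parameters in GHz, as quoted in the text\nw = {'T': 5.0, 'l': 4.0, 'm': 3.0}\nxi = {'T': 0.2, 'l': 0.0009, 'm': 0.0006}\nxi_int = {('T', 'l'): 0.01342, ('T', 'm'): 0.01095}\ndims = {'T': 2, 'l': 3, 'm': 3}\nmodes = ('T', 'l', 'm')\n\ndef energy(n):\n    # diagonal energy of the Fock state n = (n_T, n_l, n_m)\n    e = sum(w[k] * n[k] + xi[k] * n[k] ** 2 for k in modes)\n    return e + sum(c * n[a] * n[b] for (a, b), c in xi_int.items())\n\nfreqs = set()\nfor occ in product(*(range(dims[k]) for k in modes)):\n    n = dict(zip(modes, occ))\n    for k in modes:               # add one boson in mode k, if allowed\n        if n[k] + 1 < dims[k]:\n            up = dict(n, **{k: n[k] + 1})\n            freqs.add(round(energy(up) - energy(n), 9))\nprint(len(freqs), sorted(freqs))  # 17 distinct carrier frequencies\n\\end{verbatim}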
However, some transitions are degenerate -- only 17 different frequencies are required. \n\nWe consider the following values of parameters, with reference to a perspective reference cQED potential implementation: $\\omega_{\\mathrm{T}}\/2\\pi=$\\,5\\,GHz; $\\omega_{\\mathrm{m}}\/2\\pi=$\\,3\\,GHz, $\\omega_{\\mathrm{l}}\/2\\pi=$\\,4\\,GHz; $\\xi_m\/2\\pi=$\\,0.6\\,MHz, $\\xi_l\/2\\pi=$\\,0.9\\,MHz; $\\xi_T\/2\\pi =$\\,200\\,MHz. In line with our inspiration of a cavity-transmon systems in the dispersive regime~\\cite{nigg2012black}, we assign interaction parameters to be the geometric means of the local self-interactions $\\xi_{Tm}\/2\\pi = \\sqrt{\\xi_m \\times \\xi_T}\/2\\pi =$\\,10.95\\,MHz and $\\xi_{Tl}\/2\\pi = \\sqrt{\\xi_l \\times \\xi_T}\/2\\pi =$\\,13.42\\,MHz. The parameters that we used for $\\mathcal{H}^{(A)}$ and $\\mathcal{H}^{(B)}$ are inspired from expectations of results that would be obtained applying black-box quantization to Tesla-cavity systems~\\cite{romanenko2020three} coupled dispersively to transmons with coherence times $\\simeq$ 100 $\\mu s$ \\cite{nersisyan2019manufacturing}. Following that inspiration, we assume small linewidth for the cavity mode compared to their separations and we set the minimum frequency difference between the transmon and cavity mode frequencies to be of the order of the GHz, in order to justify independent access of the control pulses to the transmon and for each cavity modes.\n\n\nFollowing \\texttt{Juqbox.jl}~\\cite{Juqbox_Github}, the \\emph{pulse engineering} algorithm attempts to discover the best $\\alpha_{m,k,b}$ coefficients (i.e. $2 \\times N_f \\times N_b$ real parameters), which works as follows. Initially, a random pulse is selected by initializing the vector of parameters using random positive numbers uniformly distributed within ${[0, 0.2\\,\\text{MHz})}$. Then, an objective function is calculated (see Subsection \\ref{subseq:evaluation}) and the pulse is iteratively updated by computing the Schr\\\"odinger evolution and gradients efficiently by symplectic time-integration of adjoint equations~\\cite{petersson2020discrete}. Note that due to the B-spline parametrization, the number of control parameters does not depend directly on the pulse total duration $\\tau$. However, the number of B-splines $N_b$ defines the design of the temporal structure of the pulses, so one needs to choose large enough $\\tau$ and $N_b$ to allow the method to converge to a numerically robust solution. In particular, the slowest frequency resolution of the pulses is given by 1\/$\\tau$. We choose to vary $\\tau$ in the 500-8000 ns range for our numerical experiments on $\\mathcal{H}^{(A)}$ and $\\mathcal{H}^{(B)}$, allowing for a frequency resolution of 0.125-2 MHz. \nThe B-splines vary on the time scale $\\tau\/N_b$. Hence we choose $N_b$ = 10 to allow resolution at the scale of $\\xi_{Tm}$, which controls multiple energy separations in the spectrum. The values of $\\xi_l$, $\\xi_m$ define the smallest resonant frequencies. \n\n\\begin{figure*}[!htbp]\n\\begin{centering}\n\\includegraphics[width = .99 \\textwidth]{Fig2.pdf}\n\\par\\end{centering}\n\\caption{\\label{fig:EngineeredPulses} \n(a) Prototype circuits for the synthesis of Max-k-Cut QAOA. Single C-mode represent 8 computational states (equivalent to 3 qubits). (b) Illustrative Fourier spectrum of a high-fidelity engineered pulse via \\texttt{Juqbox.jl}. Top row shows results for $d_T(t)$ while the bottom row shows the control of the computational modes ($d_m(t)$ and $d_l(t)$). 
Darker tones (black, blue, orange) indicate the pulses that synthesize mixing layers, while light tones (gray, cyan, yellow) refer to phase-separation layers. (c) Fidelity for pulse engineered QAOA layers of the prototype circuits. Black lines indicate the mean across angles, individually plotted in gray. Each line is the mean of 10 random restarts (20-80 percentiles across restarts is plotted as shaded area). Leakage plots are presented in the Supplementary Material.}\n\\end{figure*}\n\n\\subsection{Evaluation test case: QAOA}\\label{subseq:evaluation}\n\nOur numerical prototype experiment is based on the synthesis of QAOA-like quantum circuits, which in their basic implementation consist of the layered alternated application of \\emph{phase-separation} unitary gates and \\emph{mixing} gates~\\cite{hadfield2019quantum}. With reference to the known Max-k-Cut qudit mapping of QAOA~\\cite{fuchs2021efficient}, where k corresponds to the dimensionality of the qudits, we can craft the phase-separation layers using $U_C(\\gamma)_{ij}$ gates and we have the freedom of designing the mixing layers using the $U_M(\\beta)_{i}$ gates in Eqs.~\\ref{eq:UMix}-\\ref{eq:UCost}, where $i$, $j$ indicate the distinguishable qudits that are targeted by the specific gate execution. Other choices would also be appropriate~\\cite{deller2022quantum}.\nFor clarity, in Fig.~\\ref{fig:EngineeredPulses}-a, we show the two toy-model circuits that we are going to synthesize, respectively via pulse engineering on $\\mathcal{H}^{(A)}$ and $\\mathcal{H}^{(B)}$. For completeness, the test circuit include the \\emph{initialization} operation, which is usually taken to be a generalized Hadamard gate (although it could be substituted by a mixing over the $|0\\rangle^{\\otimes N}$ state).\n\nWe note that quantum processor programmers have formally the freedom to execute gates sequentially or in parallel, and to exchange them in temporal execution order if they commute. However, in a real world implementation, if the processor is not fault-tolerant, under reasonable assumptions we expect decoherence and dephasing errors to be roughly proportional to execution time, so a compiler for NISQ algorithms often tries to parallelize gate execution as much as possible~\\cite{venturelli2018compiling}. Moreover, considering the mapping of the computational variables to the spectrum of the Hamiltonians (Fig.~\\ref{fig:System_and_Spectrum}), the possible qudit identity assignments are inequivalent with respect to pulse engineering, although it would be inconsequential if the synthesis was perfect. \\texttt{SWAP} operations could restrict the number of active qudits, by relegating some states to be just memory storage and not participate in processing. However, these operations and controls for our Hamiltonians need to be synthesized as well, increasing the complexity of the entire compilation significantly. \nBearing in mind these considerations, in our case study we choose to implement the single-qudit gates in parallel when possible, without implementing \\texttt{SWAP}s but directly synthesizing all required two-body interactions instead across the entire Hilbert space. 
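For concreteness, the target unitaries entering one layer of the two-qutrit prototype circuit synthesized on $\\mathcal{H}^{(B)}$ can be assembled directly from the definitions of $U_M^{(3)}$ and $U_C^{(3)}$ given in the introduction (the mixer being the $SO(3)$ rotation that leaves $|0\\rangle+|1\\rangle+|2\\rangle$ invariant); the following Python sketch (illustrative and independent of the pulse-level synthesis) builds them and checks that the resulting layer is unitary:\n\\begin{verbatim}\nimport numpy as np\n\ndef u_mix_qutrit(beta):\n    # qutrit mixer: SO(3) rotation leaving |0>+|1>+|2> invariant\n    c, s, r = np.cos(beta), np.sin(beta), np.sqrt(3.0)\n    return np.array([[1 + 2 * c, 1 - c - r * s, 1 - c + r * s],\n                     [1 - c + r * s, 1 + 2 * c, 1 - c - r * s],\n                     [1 - c - r * s, 1 - c + r * s, 1 + 2 * c]]) \/ 3.0\n\ndef u_cost_two_qutrits(gamma):\n    # diagonal phase-separation gate: phase on equal computational states\n    d = np.ones(9, dtype=complex)\n    d[[0, 4, 8]] = np.exp(1j * gamma)   # |00>, |11>, |22> acquire the phase\n    return np.diag(d)\n\nbeta = gamma = np.pi \/ 5\nlayer = np.kron(u_mix_qutrit(beta), u_mix_qutrit(beta)) @ u_cost_two_qutrits(gamma)\nassert np.allclose(layer.conj().T @ layer, np.eye(9))  # the target layer is unitary\n\\end{verbatim}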
We will discuss in Section~\\ref{sec:discussion} the scalability issues associated to this approach.\n\nNoting that in cQED implementations, the Hamiltonians in Eqs.~(\\ref{eq:H2}) are defined on truncated versions of a physically infinite Hilbert space, it is customary to include a few additional \\emph{guard states} corresponding to high occupation of boson modes to help the robustness of the numerical optimization, i.e., the following parameters are renormalized $n_T\\rightarrow\\tilde{n}_T= n_T+\\delta n_T$, $n_m\\rightarrow\\tilde{n}_m=n_m+\\delta n_m$ and $n_l\\rightarrow\\tilde{n}_l=n_l+\\delta n_l$, where $\\delta n$ represent guard states with values in Table \\ref{tab:parameters}.\n\n\n\n\nFollowing \\cite{petersson2021optimal}, the optimization objective to be minimized is chosen to be a sum of the infidelity and average leakage. The infidelity is a measure of a similarity score between the synthesized unitary matrix and the target, which can be defined as $O_F=1-|\\Tr(\\mathcal{U}(\\tau)^\\dagger U)\/E|^2$, where E is a normalization constant. The average leakage is defined as $O_L=(1\/\\tau)\\int_0^\\tau \\Tr(\\mathcal{U}^\\dagger(t) W \\mathcal{U}(t))dt$, where $W$ is a diagonal matrix which is non-zero only on the indices corresponding to the guard levels. \nThe weights in $W$ are set to be 1.0 for the highest guard state and then decrease exponentially in powers of 10 for each lower state. The objective of the numerics is to minimize $O=O_F+O_L$ by solving the related optimization problem on the $\\alpha$ parameters by using the IPOPT L-BFGS optimizer~\\cite{wachter2006implementation} and using the efficient \\texttt{Juqbox.jl} numerical integration scheme to compute the required $O$ and $\\nabla_\\alpha O$.\n\n\\begin{table}[!htbp]\n\\centering\n\\caption{\\label{tab:parameters}Parameters used for prototype (See Fig.~\\ref{fig:EngineeredPulses})}\n\\begin{tabular}{|c||c||c||}\n\\hline\n\\bf{parameter} & \\bf{$\\mathcal{H}^{(A)}$} & \\bf{$\\mathcal{H}^{(B)}$}\\\\[0.5ex]\n\\hline\\hline\nB-splines $N_b$ & 10 & 10\\\\\ncarrier frequencies $N_f$ & 22 & 17\\\\\n$T$ guard states $\\delta n_T$ & 3 & 3\\\\\n$C$ guard states $\\delta n_m$, $\\delta n_l$ & 2 & 2\\\\\nmax iterations & 100 & 30-150\\\\\nnumber of restarts & 10 & 10\\\\\ntarget fidelity 1-$O_F$ & 0.99 & 0.99\\\\\n \\hline\\hline\n\\end{tabular}\n\\end{table}\n\nThe optimization heuristics has a stopping condition based on either the achievement of a target threshold fidelity (1-$O_F$) or the execution of a maximum number of iterations. As mentioned, we perform multiple restarts initializing the optimization with different random pulses (see Table \\ref{tab:parameters} for a summary of some of the parameters used for the numerical experiments). Computations have been performed allowing an optimization time in the order of days. See Supplemental Material for computational details.\n\n\nTo give a sense of the resulting control signals that generate the QAOA circuit layers, we show the resulting Fourier transform of one engineered $d_T(t)$, $d_m(t)$, $d_l(t)$ functions in Fig.~\\ref{fig:EngineeredPulses}-b, for one random seed and pulse time $\\tau$ = 8000 ns, which in retrospect we know guaranteeing high fidelity of the synthesis. The angle parameters $\\beta$ and $\\gamma$ have been set to a fixed arbitrary value of $\\pi\/5$ for illustration but the qualitative features of the pulses that we are describing are preserved for different $\\tau$ and angles. 
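The qualitative structure of such control signals can be reproduced offline with a few lines of Python; the sketch below (illustrative: random B-spline amplitudes drawn as in the initialization described above, and a handful of hypothetical carrier frequencies near $\\omega_m$) assembles one control function of the form of Eq.~(\\ref{eq:Bsplines}) and computes its spectrum:\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.interpolate import BSpline\n\ntau, nb = 8000.0, 10                    # pulse length (ns), number of B-splines\ncarriers = 2 * np.pi * np.array([2.99, 3.0, 3.011])  # hypothetical carriers (GHz)\nrng = np.random.default_rng(0)\nt = np.linspace(0.0, tau, 2 ** 16)\nknots = np.concatenate(([0.0, 0.0], np.linspace(0.0, tau, nb - 1), [tau, tau]))\n\nd = np.zeros(t.size, dtype=complex)\nfor om in carriers:\n    re = BSpline(knots, rng.uniform(0.0, 2e-4, nb), 2)(t)  # W_{m,k}(t), real part\n    im = BSpline(knots, rng.uniform(0.0, 2e-4, nb), 2)(t)  # W_{m,k}(t), imag part\n    d += np.exp(1j * om * t) * (re + 1j * im)\n\nfreq = np.fft.rfftfreq(t.size, d=t[1] - t[0])   # GHz, since t is in ns\nspec = np.abs(np.fft.rfft(d.real))              # peaks cluster near the carriers\n\\end{verbatim}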
As evident from the plots, the scheme and parameters described above clearly generate peaks around the identified resonant frequencies corresponding to the single-boson transitions in Fig.~\\ref{fig:System_and_Spectrum}. In particular, for the $d_T(t)$ controls, the highest peak corresponds to $\\omega_T$ while the other equispaced peaks are centered among multiples of $\\xi_{Tm}$, $\\xi_{Tl}$ or integer combinations of the two energy values for the $\\mathcal{H}^{(B)}$ system. For the C-mode controls, the peaking frequencies are topped by $\\omega_m$, $\\omega_l$ and generally $\\xi_{Tm}$, $\\xi_{Tl}$ plus multiples of $\\xi_m$, $\\xi_l$ respectively.\nClearly, each reported spectrum corresponds to a real-time microwave combination of pulses that can be crafted via an arbitrary waveform generator (AWG) in an experimental setup.\n\n\n\nIn Fig.~\\ref{fig:EngineeredPulses}-c, we provide the aggregated performance of the pulse engineering approach, plotting the fidelity between the final pulse $\\mathcal{U}(\\tau)$ and target circuit layers (phase separation and mixing) and Hadamard gates, for different pulse times. We show the mean fidelity, estimated averaging 10 random initializations, i.e. restart of the L-BFGS optimizer (the default optimizer for \\texttt{Juqbox.jl}), for QAOA layers parametrized with 11 different $\\gamma$ and $\\beta$ (from -$\\pi$ to $\\pi$ in fractions of $\\pi$\/5). As expected, notwithstanding outliers, the statistics is sufficient to indicate that the method can reach the target 0.99 fidelity if the pulse is allowed to be sufficiently long. \n\n\n\n\n\n\\section{Discussion and Outlook}\\label{sec:discussion}\n\nIn the previous section, we described a proof-of-concept of numerical synthesis for simple quantum circuits describing the building blocks of Max-k-Cut QAOA using qubits (mapped onto qudits) and qutrits, on bosonic quantum processors. The main question that is left to be addressed is if the synthesis approach we employed is sufficiently robust to be applied at application-scale. We break down the question in a discussion of three scalability challenges: Computational effort, realistic implementation, and circuit fidelity.\n\n\\paragraph{Computational Effort:} \nAs mentioned, the computational effort required by numerical packages to obtain high-fidelity in our case study is already very significant and scales both with the Hilbert space size and with the pulse duration. This means that the proposed methodology will most certainly not be viable if straightforwardly applied to systems at large scale, although larger synthesis can be achieved if the code is optimized to leverage GPU clusters. The envisioned practical synthesis of larger circuits will necessarily need to be broken down in modules, each of which working on a subspace of the entire Hilbert space. The requirement for this modularization is that the gate synthesized numerically in a system with few modes will have to be applied in a system with several modes and levels. The optimal gate from numerics should ideally act as an identity on the degree of freedoms that were not considered in the synthesis in order not to cause the crosstalk problem \\cite{ozguler2022dynamics}. 
Scaling up the single-mode case $\\mathcal{H}^{(A)}$ that we used will not likely be viable, since the non-local mapping onto qubits would require any gate to address the entire level structure independently from the locality of the gates, which is why we opted to synthesize the entire phase-separation circuit as opposed to the individual two-qubit gates independently. However, it is envisionable to generalize the $\\mathcal{H}^{(B)}$ system adding more C-modes, i.e., considering the Hamiltonian\n\\begin{eqnarray}\n\\mathcal{H}^{(B)}_{multi}[N] &=& \n H_T + \\sum_{j=1}^N \\left[H_{m_j}\n + H_{Tm_j}^{int}\\right], \n \\label{eq:multi-qutrits}\n\\end{eqnarray} which is $\\mathcal{H}^{(B)}$ for N=2. If the $\\xi$ parameters of each C-mode are sufficiently separated, the peaked frequency structure of the engineered pulses suggests that it is possible that none of the peaks in the final pulses would correspond to resonances with single-boson excitations that we don't want to trigger, which would likely induce very small leakage outside the two-mode target computational space. This needs to be verified theoretically or numerically in future work. Ultimately, frequency crowding will be an issue and more sophisticated numerics or frequency spacing and bandwidth engineering will be required.\n\nIt should be noted that if the modularization works as expected, the computing time spent synthesizing algorithmic primitives would be an offline \\emph{una tantum} cost to be paid to populate a lookup table (LUT) that would be accessed at runtime by the perspective user of the quantum solver. Indeed, similarly as in other domains, it is envisioned that the LUT would be computed for a large grid of parameters (angles $\\gamma$ and $\\beta$ in our QAOA example) and then machine learning algorithms would learn and return an interpolation of the engineered pulses if the compiler is called for a parameter that was not pre-computed, or would use nearby known points to initialize a fast optimization round to engineer a new pulse on the fly \\cite{xu2022neural}. \n\n\n\n\n\n\n\n\\paragraph{Realistic Implementation:} \n\nWhile the described technique is generically applicable to any bosonic interacting system, our case study has a specific 3D cQED implementation in mind, as illustrated in the inset of Fig.~\\ref{fig:System_and_Spectrum}. \nIt should be noted that the general framework that we employed, pulse engineering via QOC, while proven powerful~\\cite{heeres2017implementing} is not the only known approach to achieve universal synthesis of unitary quantum gates defined in the Fock space for these kind of systems. For instance, the use of selective number-dependent arbitrary phase (SNAP) protocol~\\cite{heeres2015cavity,fosel2020efficient} or echoed conditional displacement~\\cite{eickbusch2021fast} are strong candidates for the universal control of a single-mode system. Qudits have potential to be affected by noise less so than qubits \\cite{otten2021impacts} but working with large photon-number states comes with additional complications in terms of decoherence, which are still theoretically not entirely understood~\\cite{hanai2021intrinsic}.\n\nThe multiqudit system (Eq.~\\ref{eq:multi-qutrits}) could be viable but its practical implementation will likely suffer from the aforementioned quantum and classical crosstalk problems whose handling is currently one of the main active research topics of the 3D multimode cQED domain~\\cite{chakram2020seamless}. 
Even assuming that the bandwidth of the control pulses and the level spacing has sufficient resolution, there is a need for the co-design of a NISQ cQED architecture that would allow two-mode gates to operate in large Hilbert space with a controllable effect over spectator modes that are subject to an always-on interaction \\cite{alam2022quantum}. Theory results on quantum adiabatic protocols~\\cite{das2008colloquium, ozguler2018steering} on bosonic systems could provide an initial reference point to be generalized~\\cite{pino2018quantum, starchl2022unraveling}.\n\n\\paragraph{Fidelity:} The fidelity target we used in our prototype (0.99) is in line with the fidelity of native gates in industrial grade quantum processors but it is of course somewhat arbitrary. In accordance with conservative models of uncorrelated errors, we could estimate the final fidelity of the entire circuits in Fig.~\\ref{fig:EngineeredPulses}-a as the product of the fidelities of each synthesized layer, which means that ultimately the fidelity decreases exponentially with the number of layers. Hence, quantum-volumetric tests~\\cite{blume2020volumetric} would fail rather fast if we were to scale our circuits beyond few variables. However, it should be noted that for quantum optimization algorithms of the variational type, it is not clear if high fidelities are required, considering that the underlying computational principle is preserved for Lindblad evolution~\\cite{yang2017optimizing}. The degree of freedom of parameter setting might contribute to mitigate the misspecification of the gates due to poor synthesis. The non-requirement of exact synthesis is intuitive, since for optimization tasks we are not necessarily trying to reproduce a quantum process but rather to drive the system towards a probability distribution, which might be achievable also with partially coherent systems or in the presence of spurious unknown interactions that give rise to systematic coherent errors. So, as long as the nature of the errors is not specifically adversarial against the optimization tasks, there is still reasonable hope that a low-fidelity circuit could deliver speedup in the NISQ era. An important contribution that we are considering to improve the fidelity would be to generalize the technique of \\texttt{Juqbox.jl} to open systems, and fit the experimental noise to solve for a more realistic model. Fortunately, there has already been active development in that direction, including enabling quantum optimal control and pulse-level programming in XACC \\cite{nguyen2020extending, nguyen2021enabling} with \\texttt{QuaC} plugin \\cite{otten2017quac}, and a recently released open-source package for high-performance optimal control, \\texttt{Quandary}~\\cite{gunther2021quandary}.\n\n\n\n\nIn conclusion, we investigated the application of quantum optimal control techniques to design unitary gates for a class of physical systems that could be programmed to act as qudit-based quantum computers. We used variational algorithms such as QAOA for qubits (mapped onto a single qudit) and qutrits as targets for our case-study. Our current results, similar to other applied quantum computing works for multimode cQED~\\cite{kurkcuoglu2021quantum}, are still limited on small proof-of-concept models, due to limitations in computational effort, realistic implementation and achievable fidelity. 
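As a rough illustration of the last limitation, under the uncorrelated-error estimate discussed above a circuit composed of $L$ synthesized layers, each meeting the $0.99$ fidelity target, would retain an overall fidelity of only about\n\\begin{equation}\nF_{\\rm tot}\\approx (0.99)^{L},\n\\end{equation}\ni.e.\\ roughly $0.90$ for $L=10$ and $0.82$ for $L=20$; these numbers are meant purely as an order-of-magnitude indication.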
While we identified pathways to overcome such limitations, we should note that for the purpose of variational optimization there are multiple recent attempts to employ co-designed digital-analog approaches that are directly related to QOC as optimization algorithms~\\cite{magann2021pulses,gokhale2019partial, choquette2021quantum}, and might not require the burdens of high-fidelity gate synthesis. We envision that our work could also contribute to those innovative methods that have already been delivering promising results.\n\n\n\n\\section*{Acknowledgments}\n\nWe thank Jens Koch, Srivatsan Chakram, Taeyoon Kim, Joshua Job, Matthew Reagor, Matthew Otten, Keshav Kapoor, Silvia Zorzetti, Sohaib Alam, Doga Kurkcuoglu and the SQMS 3D Algorithms Group and SQMS Codesign Group for discussions and feedback. We thank Adam Lyon, Jim Kowalkowski, Yuri Alexeev and Norm Tubman for their assistance on computing aspects, including support through XSEDE computational Project no. TG-MCA93S030 providing compute time at Bridges-2 of the Pittsburgh Supercomputer Center. A.B.\\\"O. thanks Gabriel Perdue, Adam and Jim for their guidance during his early career years. We thank Anders Petersson for his support in configuring \\texttt{Juqbox.jl}. D.V. acknowledges support via NASA Academic Mission Service (NNA16BD14C). This material is based upon work supported by the U.S. Department of Energy, Office of Science, National Quantum Information Science Research Centers, Superconducting Quantum Materials and Systems Center (SQMS) under contract number DE-AC02-07CH11359. We gratefully acknowledge the computing resources provided on Bebop, a high-performance computing cluster operated by the Laboratory Computing Resource Center at Argonne National Laboratory.\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\nThe intrinsic notion of boundary has been extensively studied for both noncollapsed $\\RCD(K,N)$ spaces ($\\ncRCD(K,N)$ in short) and Alexandrov spaces. When we say Alexandrov spaces, we always mean complete, geodesic, finite dimensional Alexandrov space. For an Alexandrov space $(A,\\mathsf{d}_A)$, { Burago, Gromov and Perelman introduced the definition of boundary in \\cite{BGP92}}, deonted by $\\mathcal{F}A$, see \\eqref{eq:Alexboundary}. {From the uniqueness of tangent cones along in interiors of geodesics proved by Petrunin }in \\cite{Petrunin98}, it can be deduced that the interior of an Alexandrov space, i.e.\\ $A\\setminus\\mathcal{F}A$, is strongly convex, which means that any geodesic joining points in the interior does not intersect $\\mathcal{F}A$. For a $\\ncRCD(K,N)$ space $(X,\\mathsf{d},\\mathcal{H}^N)$, there are $2$ intrinsic definitions of boundary. One is defined by Kapovitch-Mondino in \\cite{KapMon19}, in the same spirit of defining the boundary for an Alexandrov space, we denote this boundary also by $\\mathcal{F}X$, see \\eqref{eq:KMboundary}. The other is defined by De Philippis-Gigli in \\cite{DPG17}, making use of the stratification of the singular set. We denote this boundary by $\\partial X$, see \\eqref{eq:DPGboundary}.\n\nIn parallel to the strong convexity of the interior of an Alxeandrov space, it is conjectured by De Philippis and Gigli \\cite[Remark 3.8]{DPG17} that the interior of $X$, i.e.\\ $X\\setminus\\partial X$ is strongly convex. 
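Explicitly, strong convexity of the interior means that for every geodesic $\\gamma:[0,1]\\to X$ with $\\gamma(0),\\gamma(1)\\in X\\setminus\\partial X$ one has $\\gamma(t)\\in X\\setminus\\partial X$ for every $t\\in[0,1]$.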
We will see that this conjecture follows from the conjecture that the two notions of the boundary of $\\ncRCD(K,N)$ spaces agree.\n\n\n\nIn this paper, we look at the boundary from an extrinsic point of view, namely, given $K\\in \\R$ and a positive integer $N$ we consider the following two situations:\n\n\\begin{enumerate}\n \\item\\label{item:alex} an $N$-dimensional Alexandrov space has an $N$-dimensional Alexandrov subspace;\n \\item \\label{item:RCD} a $\\ncRCD(K,N)$ space has a $\\ncRCD(K,N)$ subspace with mild boundary control.\n\\end{enumerate} \n\nWe prove that in the case of \\eqref{item:alex} the intrinsic boundary of an Alexandrov subspace coincides with the topological boundary, and in the case of \\eqref{item:RCD} the De Philippis-Gigli boundary coincides with the topological boundary. See the precise statements in Theorem \\ref{thm:main1} and Theorem \\ref{thm:main2} below. A direct consequence is that synthetic curvature bounds on a subspace automatically imply regularity of its topological boundary, for example topological structure and rectifiability, see \\cite{BNS20}.\n\n\\begin{theorem}\\label{thm:main1}\nLet $(X,\\mathsf{d}_X)$ be an $N$-dimensional Alexandrov space, $N\\in \\mathbb{N}$, and let $\\Omega\\subset X$ be open. If $(\\bar{\\Omega},\\mathsf{d}_\\Omega)$ is an Alexandrov space, $\\bar{\\Omega}\\cap\\mathcal{F}X=\\varnothing$, and $\\Omega={\\rm Int}_{\\rm top}(\\bar{\\Omega})$, where ${\\rm Int}_{\\rm top}(\\bar\\Omega)$ is the topological interior, i.e.\\ the largest open subset of $X$ contained in $\\bar\\Omega$, \nthen \n\\begin{enumerate}\n \\item\\label{main1:item1} $\\partial_{\\rm top}\\bar\\Omega=\\mathcal{F}\\bar\\Omega$;\n \\item\\label{main1:item2} any (minimizing) geodesic in $(\\bar\\Omega,\\mathsf{d}_\\Omega)$ joining two points in $\\Omega$ is a local geodesic of $(X,\\mathsf{d}_X)$;\n\n \\item\\label{main1:item3} any (minimizing) geodesic in $(\\bar{\\Omega},\\mathsf{d}_\\Omega)$ is a quasi-geodesic in $(X,\\mathsf{d}_X)$.\n\\end{enumerate}\n\\end{theorem}\n \nTheorem \\ref{thm:main1} will follow from the invariance of domain theorem for Alexandrov spaces, Theorem \\ref{thm:inv}. \nThe proof of Theorem \\ref{thm:inv} was worked out on the mathoverflow website quite a while ago by Belegradek, Petrunin and Ivanov but does not seem to exist in the literature. Since we have an application of this theorem we present the proof, following closely the existing one by Belegradek-Petrunin-Ivanov. It works in the more general, purely topological category of MCS spaces, see Theorem~\\ref{thm:inv-dom-mcs}.\n\n\\begin{remark}\\label{rem:counterex}\nThe assumption $\\Omega={\\rm Int}_{\\rm top}(\\bar \\Omega)$ is clearly necessary and cannot be removed. For example, let $X=\\R^n$, $\\Omega=\\R^n\\setminus\\{0\\}$, which is open and dense. We see that $\\bar\\Omega=X$ is an Alexandrov space without Alexandrov boundary, but the topological boundary of $\\Omega$ is $\\{0\\}$, which is not empty. This shows that item \\ref{main1:item1} does not hold without the assumption $\\Omega={\\rm Int}_{\\rm top}(\\bar \\Omega)$ even for smooth manifolds. \n\nNext, the assumption that $\\bar{\\Omega}\\cap\\mathcal{F}X=\\varnothing$ is also clearly necessary. For example, let $X$ be the closed unit disk in $\\R^2$ and $\\Omega=X$.
Then $\\partial \\bar \\Omega$ is empty while $\\mathcal{F}\\bar\\Omega=\\mathbb S^1$.\n\n\n Also, a geodesic in $(\\bar{\\Omega},\\mathsf{d}_\\Omega)$ joining two points on the boundary need not to be a local geodesic in $(X,\\mathsf{d}_X)$, so the conclusion in item \\ref{main1:item3} of Theorem \\ref{thm:main1} that a geodesic in $(\\bar{\\Omega},\\mathsf{d}_\\Omega)$ is a quasi-geodesic in the ambient space is optimal.\n\n \n \n For example, consider the space $X\\mathrel{\\mathop:}= D^2\\times\\{0\\}\\sqcup_{\\mathbb{S}^1\\times\\{0\\}}\\mathbb{S}^1\\times [0,\\infty)$ with length metric, which is a cylinder glued along the boundary circle with a disk at the bottom, this is an Alexandrov space of non-negative curvature. Then let $\\Omega=\\mathbb{S}^1\\times (0,\\infty)$. Clearly $\\Omega$ is open, however, any geodesic on $\\partial_{\\rm top}\\Omega=\\mathcal{F}\\Omega=\\mathbb{S}^1\\times\\{0\\}$, which is an arc, is never a local geodesic w.r.t. the metric of $X$, since a segment in $D^2$ connecting any 2 points on its boundary circle is always shorter than the corresponding arcs.\nThis example also shows that $\\bar{\\Omega}$ need not to be locally convex. Compare this with Theorem \\ref{thm:han}.\n\n\\end{remark}\n\nFor $\\ncRCD(K,N)$ spaces, we are able to obtain a similar result to Theorem \\ref{thm:main1} under an extra assumption of a local Lipschitz condition on the metric $\\mathsf{d}_\\Omega$ which serves as a weak substitute for the regularity of the topological boundary.\n \\begin{theorem}\\label{thm:main2}\n Let $(X,\\mathsf{d},\\mathcal{H}^N_X)$ be a $\\ncRCD(K,N)$ space, $\\Omega$ be an open subset of $X$ such that $\\Omega={\\rm Int}_{\\rm top}(\\bar \\Omega)$ and $\\bar \\Omega\\cap \\partial X=\\varnothing$. Suppose that $(\\bar{\\Omega},\\mathsf{d}_\\Omega,\\mathcal{H}^N_{\\bar\\Omega})$ is also an $\\RCD(K,N)$ space and for every $x\\in\\partial_{\\rm top}\\bar{\\Omega}$ there exist an neighborhood $U_x$ of $x$ and $C(U_x)>1$ such that $\\mathsf{d}_\\Omega\\le C(U_x) \\mathsf{d}_X$ when restricted to $U_x\\cap\\bar \\Omega$. Then $\\partial_{\\rm top}\\bar \\Omega=\\partial \\bar\\Omega$.\n \n \n \n \\end{theorem}\n Here, $\\mathcal{H}^N_X$ (resp. $\\mathcal{H}^N_{\\bar\\Omega})$) is the Hausdorff measure induced by $\\mathsf{d}_X$ (resp. $\\mathsf{d}_{\\Omega}$). Notice the following relations between the two Hausdorff measures:\n \n \\begin{remark}\\label{rmk:equiHaus}\n From our assumption and the definition of intrinsic length metric, it follows that $\\bar\\Omega$ is embedded in $X$ in a locally biLipschitz way, i.e.\\ for any $x\\in \\partial_{\\rm top}\\bar\\Omega$ and its neighborhood $U_x$, $\\mathsf{d}_X\\le \\mathsf{d}_{\\Omega}\\le C \\mathsf{d}_{X}$ when restricted to $U\\cap\\bar\\Omega$, so the notions such as Hausdorff dimension and measure zero sets for both Hausdorff measures are equivalent for sets in $\\bar\\Omega$ {since we can always find a countable covering by neighborhoods on which the 2 metrics are biLipschitz to each other}.\n \n \n\n\n \\end{remark}\n \nThere are 2 main technical difficulties in proving Theorem \\ref{thm:main2}. The first being that in general there is no topological information on any neighborhood of a singular point. An important fact used to prove the invariance of domain theorem for Alexandrov spaces is that every point has a neighborhood homeomorphic to a cone over its space of directions, which is not available for $\\ncRCD(K,N)$ spaces. 
In particular, as opposed to the situation in Alexandrov spaces, for a given point in an $\\ncRCD(K,N)$ space its tangent cone(s)in general do not carry topological information of its neighborhood. For example, Colding-Naber \\cite{CN11} constructed an example of a noncollapsed Ricci limit space with a singular point at which there are two non-homeomorphic tangent cones. Another difficulty is that the topological boundary may in principle vanish when taking tangent cones. Conjecturally this cannot happen but this is unknown at the moment.\nA model case of this phenomenon would be a cusp, for example $X=\\R^2$, and $\\Omega=\\{(x,y)\\in \\R^2: y<\\sqrt{|x|}\\}$, where $0\\in \\partial_{\\rm top}\\bar\\Omega$ but its tangent cone in $\\bar\\Omega$ and in $X$ are both $\\R^2$. We can quickly rule out this case since if $\\bar\\Omega$ were a $\\ncRCD(K,N)$ space, then $0$ would have density $1$ in $\\bar\\Omega$, which in turn implies the neighborhood of $0$ $\\Omega$ is a manifold, a contradiction. However, this argument does not work if the point on the topological boundary is itself a singular point of the ambient space. A unified way to overcome both difficulties is to find a regular point on the topological boundary, if it is more than De Philippis-Gigli boundary. Indeed, we are able to do this with the help of Deng's H\\\"older continuity of tangent cones along the interior of a geodesic, \\cite{deng2020holder}.\n\nA motivation for studying the extrinsic notion of boundary is provided by the following observation on manifolds. Han in \\cite{han20} showed that for a weighted $n$-dimensional manifold $(M,g, e^{-f}\\vol_g)$ with smooth boundary, the measure valued Ricci tensor \n\\begin{equation}\n \\mathrm{\\bf Ric}(\\nabla \\phi,\\nabla \\phi):=\\mathbf{\\Delta}\\frac{|\\nabla \\phi|^2}{2}-(\\langle\\nabla \\phi,\\nabla \\Delta \\phi\\rangle+|{\\mathrm{Hess}}_{\\phi}|^2)e^{-f}\\vol_g\n\\end{equation}\n defined by Gigli \\cite{Gigli14} can be expressed as\n\\begin{equation}\n \\mathrm{\\mathbf{Ric}}=(\\mathrm{Ric}+{\\mathrm{Hess}}_f) e^{-f}\\vol_g+ \\mathrm{II}_{\\partial M}e^{-f}\\mathcal{H}^{n-1}|_{\\partial M},\n\\end{equation}\n where $\\mathbf{\\Delta}$ is the measure valued Laplacian. If $(M,g, e^{-f}\\vol_g)$ satisfies $\\CD(K,\\infty)$ condition, then $\\mathrm{\\bf Ric}\\ge K e^{-f}\\vol_g$. Combined with Han's expression, this lower bound in particular implies that the second fundamental form is non-negative definite, which means the boundary is convex and it is well known that this implies that geodesics joining interior points do not intersect boundary. Han further interprets this convexity where a subset and its topological boundary are considered, moreover, the boundary is not $C^2$ so it is not possible to define the second fundamental form on it.\n To proceed, we fix some notations. For a length metric space $(X,\\mathsf{d})$, and an open connected subset $\\Omega\\subset X$, denote by $\\mathsf{d}_{\\Omega}$ the intrinsic length metric on $\\Omega$, it extends by continuity to $\\bar\\Omega$. Denote by $\\partial_{\\rm top}\\bar\\Omega$ the topological boundary of $\\bar\\Omega$ in $X$. More precisely, Han proved\n\n\\begin{theorem}[\\cite{han20}]\\label{thm:han}\nLet $(M,g)$ be a complete $n$-dimensional manifold, and $\\Omega\\subset M$ be open. 
Suppose that $(\\bar{\\Omega}, \\mathsf{d}_\\Omega, \\mathfrak{m})$ satisfies that $\\supp(\\mathfrak{m})=\\bar{\\Omega}$ and $\\CD(K,\\infty)$ condition, then $\\mathfrak{m}\\mathbin{\\vrule height 1.6ex depth 0pt width 0.13ex\\vrule height 0.13ex depth 0pt width 1.3ex} \\Omega\\ll \\vol_g\\mathbin{\\vrule height 1.6ex depth 0pt width 0.13ex\\vrule height 0.13ex depth 0pt width 1.3ex} \\Omega$, if furthermore $\\bar\\Omega$ has Lipschitz and $\\mathcal{H}^{n-1}$-a.e.\\ $C^2$ boundary, then $\\mathfrak{m}(\\partial_{\\rm top}\\Omega)=0$ and $(\\bar{\\Omega},\\mathsf{d}_\\Omega)$ is {locally convex}, i.e., every (minimizing) geodesic in $(\\bar{\\Omega},\\mathsf{d}_\\Omega)$ is a local geodesic in $(M,g)$\n\\end{theorem}\n\nIn particular, every minimizing geodesic in $(\\bar{\\Omega},\\mathsf{d}_\\Omega)$ joining $2$ points in $\\Omega$ does not intersect ${\\partial_{\\rm top}\\Omega}$. We would like to generalize to non-smooth setting the {above theorem of Han, but in view of Remark \\ref{rem:counterex}, it is not true that the (synthetic) Ricci curvature lower bound on a closed subset forces the set to be locally convex. The correct notion to consider for metric spaces is the locally totally geodesic property. \n\n\\begin{definition}\n\tLet $(X,\\mathsf{d})$ be a geodesic metric space. A { connected open subset $\\Omega$} is said to be \\emph{locally totally geodesic} if every (minimizing) geodesic {in $(\\bar \\Omega, \\mathsf{d}_{\\Omega})$} joining two points in $\\Omega$ is a local geodesic in $(X,\\mathsf{d})$.\n\\end{definition}\n} \n\n\n\n\n{With this notion, we see from item \\ref{main1:item2} of Theorem \\ref{thm:main1} that we have shown that the synthetic sectional curvature lower bound on the closure of an open subset forces this open subset to be locally totally geodesic.\n\n For $\\ncRCD$ spaces, the natural approach to generalize the fact that Ricci curvature lower bound on a subset forces locally totally geodesic property is to show the equivalence between the intrinsic and topological boundary, since the convexity results for intrinsic boundary will then apply to the topological boundary as well. For example, with extra assumption that Kapovitch-Mondino boundary and De Philippis-Gigli boundary coincide, we can derive that the interior of an $\\ncRCD(K,N)$ subspace is locally totally geodesic by combining Theorem \\ref{thm:main2} and Theorem \\ref{thm:intconv}. See also Corollary \\ref{cor:loc-total-geo}. } \n\n\n However, for $\\ncRCD(K,N)$ spaces, the strong convexity of its (intrinsic) interior is not presently known, to derive it we need an extra assumption that the Kapovitch-Mondino boundary and the De Philippis-Gigli boundary are the same.\n\n\\begin{theorem}[Corollary \\ref{cor:intconv}]\\label{thm:intconv}\n Let $(X,\\mathsf{d},\\mathcal{H}^N)$ be a $\\ncRCD(K,N)$ space. Assume $\\partial X=\\mathcal{F} X$, then ${\\rm Int}(X)\\mathrel{\\mathop:}= X\\setminus \\partial X$ is strongly convex, i.e.\\ any geodesic joining points in ${\\rm Int}(X)$ does not intersect $\\partial X$.\n\\end{theorem}\n\n Although the equivalence between the two boundary notions, hence the strong convexity of the interior of $\\ncRCD(K,N)$ space is unknown, we can still obtain an a.e.\\ version of convexity of the interior of a $\\ncRCD(K,N)$ space. This in turn implies that for a $\\ncRCD(K,N)$ subset, intrinsic geodesics joining most interior points are away from its topological boundary. 
The a.e.\\ convexity of interior follows from the following more general a.e.\\ convexity of regular set at essential dimension which is a slight generalization of pairwise a.e. convexity of $\\mathcal{R}_n$ proved by Deng \\cite[Theorem 6.5]{deng2020holder}.\n \n\\begin{proposition}\\label{thm:almostconvex}\nLet $(X,\\mathsf{d},\\mathfrak{m})$ be an $\\RCD(K,N)$ space of essential dimension $n$. For \\textit{every} $x\\in X$, there exists a subset $R_x\\subset \\mathcal{R}_n$ so that $\\mathfrak{m}(X\\setminus R_x)=0$ and for any $y\\in R_x$ there is a minimizing geodesic joining $x,y$ contained in $\\mathcal{R}_n$ except possibly for $x$. \n\\end{proposition}\n\nFor the proof we need the technique of localization via transport rays of any $1$-Lipschitz function, developed by Cavalletti-Mondino \\cite{CavMon15} in non-smooth setting. \n\nFinally, we conjecture that Theorem \\ref{thm:han} holds in much larger generality including the measure regularity part, see Conjecture \\ref{conj:collapseconv}.\n\n\n\n\n\nThe paper is organized as follows: In section \\ref{sec:prelim}, we recall concisely the structure results for Alexandrov and $\\RCD(K,N)$ spaces. In section \\ref{sec:inv} we prove invariance of domain theorem for Alexandrov spaces. Section \\ref{sec:equi} is devoted to the proof of main theorems Theorem \\ref{thm:main1} and Theorem \\ref{thm:main2}. The last two sections, section \\ref{sec:AppConv} and \\ref{sec:almost} focus on applications of the main theorems to subsets satisfying $\\ncRCD(K,N)$ condition in various ambient spaces.\n\n\\smallskip\\noindent\n\\textbf{Acknowledgement.} The second named author thanks Anton Petrunin for bringing invariance of domain for Alexandrov spaces to his attention, Qin Deng for suggesting Proposition \\ref{thm:almostconvex}, Igor Belegradek and Jikang Wang for several helpful discussions.\n\n\\section{Preliminary}\\label{sec:prelim}\n\n\\subsection{Stratified spaces}\\label{subsec:} \nIn this section we give a brief review of topological stratified spaces. \n\n\\begin{definition}\nA metrizable space $X$ is called an \\emph{MCS-space (space with multiple conic singularities)} of dimension $n$ if every point $x\\in X$ has a neighborhood pointed homeomorphic to the open cone over a compact $(n-1)$-dimensional MCS space. Here we assume the empty set to be the unique $(-1)$-dimensional MCS-space.\n\\end{definition}\n\n\n\\begin{remark}\nA compact $0$-dimensional MCS-space is a finite collection of points with discrete topology. A 1-dimensional MCS-space is a locally finite graph.\n\\end{remark}\n\n\nAn open conical neighborhood of a point in an MCS-space is unique up to pointed homeomorphism~\\cite{Kwun}. However given an open conical neighborhood $U$ of $x\\in X$ pointed homeomorphic to a cone over an $(n-1)$-dimensional space $\\Sigma_x$, the space $\\Sigma_x$ need not be uniquely determined by $U$.\n\n\nIt easily follows from the definition that an MCS space has a natural topological stratification constructed as follows.\n\nWe say that a point $p\\in X$ belongs to the $l$-dimensional stratum $X_l$ if $l$ is the maximal number $m$ such that the conical neighbourhood \nof $p$ is pointed homeomorphic to $\\R^m\\times K(S)$ for some MCS-space $S$. It is clear that $X_l$ is an $l$-dimensional topological manifold. It is also immediate that for $x\\in X_l$ all points in the conical neighborhood of $X$ belong to the union of $X_k$ with $k\\ge l$. 
Therefore the closure $\\bar X_l$ of the $l$-stratum is contained in the union $\\cup_{m\\le l} X_m$ of the strata of dimension at most $l$.\n\nThe $n$ stratum $X_n$ is an $n$-dimensional manifold and by above it is open and dense in $X$. We will also refer to $X_n$ as the \\emph{top stratum} of $X$.\n\n\n\\subsection{Structure theory for $\\RCD(K,N)$ spaces}\\label{subsec:ncRCD}\nWhen writing $\\RCD(K,N)$ space, we always assume that $N\\in [1,\\infty)$.\nWe assume familiarity with the structure theory of $\\RCD(K,N)$ spaces and just collect a few facts to fix notations. \n\n\\begin{definition}\n Given an $\\RCD(K,N)$ space $(X,\\mathsf{d},\\mathfrak{m})$, let $\\mathcal{R}_k$ be the set of points at which the tangent cone is $(\\R^k,|\\cdot|,\\mathcal{L}^k)$, for $k\\in [1,N]\\cap \\mathbb{N}$. $\\mathcal{R}(X)\\mathrel{\\mathop:}= \\cup_k \\mathcal{R}_k$ is called the regular set of $X$. \n\\end{definition}\n\nIf there is no confusion we also write $\\mathcal{R}$ instead of $\\mathcal{R}(X)$. It is shown in \\cite{MN14} that $\\mathfrak{m}(X\\setminus\\cup_{k}\\mathcal{R}_k)=0$ and each $\\mathcal{R}_k$ is $\\mathcal{H}^k$-rectifiable. Then it is shown in \\cite{BrueSemola20Constancy} that there is a unique $n\\in [1,N]\\cap \\mathbb{N}$ such that $\\mathfrak{m}(X\\setminus\\mathcal{R}_n)=0$. Such $n$ is called the essential dimension of $(X,\\mathsf{d},\\mathfrak{m})$ which is also denoted by ${\\rm essdim}$. \n{It is equal to the maximal $k$ such that $\\mathcal{R}_k$ is non empty, see for example \\cite{kitabeppu2017sufficient}.}\nThe singular set $\\mathcal{S}$ is the complement of the regular set, $\\mathcal{S}\\mathrel{\\mathop:}= X\\setminus \\cup_{k}\\mathcal{R}_k$. { The singular set has measure zero.}\n\nThe notion of noncollapsed $\\RCD(K,N)$ ($\\ncRCD(K,N)$ in short) is proposed in \\cite{DPG17}, requiring that $\\mathfrak{m}=\\mathcal{H}^N$, which in turn implies $N\\in\\mathbb{N}$ and the essential dimension of a $\\ncRCD(K,N)$ space is exactly $N$, see \\cite[Theorem 1.12]{DPG17}. When considering $\\ncRCD(K,N)$ spaces, finer structure results are available.\n\nThe density function\n\\begin{equation}\n \\Theta_N(x)\\mathrel{\\mathop:}=\\lim_{r\\to 0}\\frac{\\mathcal{H}^N(B_r(x))}{\\omega_N r^N}\\le 1\n\\end{equation}\nplays a crucial role in the study of regularity of $\\ncRCD(K,N)$ spaces. The existence of the limit and the upper bound $1$ come from the Bishop-Gromov inequality. Note that the density function characterizes the regular points in the following way \\cite[Corollary 1.7]{DPG17}:\n\\begin{equation}\n \\Theta_N(x)=1 \\Leftrightarrow x\\in \\mathcal{R}_N=\\mathcal{R}.\n\\end{equation}\n\n Thanks to the splitting theorem \\cite{Gigli13} and the volume cone to metric cone property \\cite{DPG16} in a $\\ncRCD(K,N)$ space, the singular set $\\mathcal{S}$ is stratified into \n\\[\n\\mathcal{S}_0\\subset \\mathcal{S}_1\\subset \\cdots\\subset \\mathcal{S}_{N-1},\n\\]\nwhere for $0\\le k\\le N-1$, $k\\in \\N$, $\\mathcal{S}_k=\\{x\\in \\mathcal{S}: \\text{no tangent cone at $x$ is isometric to } \\R^{k+1}\\times C(Z)\\text{ for any metric space } Z\\}$, where $C(Z)$ is the metric measure cone over a metric space $Z$. 
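For instance, if $X$ is the half-space $\\R^{N-1}\\times[0,\\infty)$ equipped with the Euclidean distance and the measure $\\mathcal{H}^N$, then every point of the boundary hyperplane belongs to $\\mathcal{S}_{N-1}\\setminus\\mathcal{S}_{N-2}$: its tangent cone is the half-space itself, which splits off a factor $\\R^{N-1}$ but is not isometric to $\\R^N$.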
It is proved in \\cite[Theorem 1.8]{DPG17} that\n\\begin{equation}\\label{eq:sing}\n \\dim_{\\mathcal{H}}(\\mathcal{S}_k)\\le k.\n\\end{equation}\n With the help of the metric Reifenberg theorem \\cite[Theorem A.1.1-A.1.3]{Cheeger-Colding97I}, it can be derived that for points whose the density is close to $1$ there is a neighborhood homeomorphic to a smooth manifold. We have from \\cite[Theorem 1.7, Corollary 2.14]{KapMon19} that\n\n\\begin{theorem}\\label{thm:regular}\n Let $(X,\\mathsf{d},\\mathfrak{m})$ be a $\\ncRCD(K,N)$ space, and $\\alpha\\in (0,1)$. There exists $\\delta\\mathrel{\\mathop:}= \\delta(\\alpha,K,N)>0$ small enough so that if $x\\in X$ satisfies $\\Theta_N(x)> 1-\\delta$, then there is a neighborhood of $x$ biH\\\"older homeomorphic to a smooth manifold with H\\\"older exponent $\\alpha$. Moreover the set $\\{x\\in X: \\Theta_N(x)> 1-\\delta\\}$ is open and dense.\n\\end{theorem}\n\nWe call such points manifold points, and call the complement non-manifold points. It then follows that the set of non-manifold points has Hausdorff codimension at least $1$ since it is contained in $S^{N-1}$. \n \n Finally let us recall here some facts about the boundary of a $\\ncRCD$ space $(X,\\mathsf{d},\\mathcal{H}^N)$. Based on the stratification of $\\mathcal{S}$, De Philippis and Gigli proposed the following definition of the boundary of a $\\ncRCD(K,N)$ space $(X,\\mathsf{d},\\mathfrak{m})$:\n\\begin{equation}\\label{eq:DPGboundary}\n \\partial X\\mathrel{\\mathop:}= \\overline{\\mathcal{S}_{n-1}\\setminus \\mathcal{S}_{n-2}}.\n\\end{equation}\nOn the other hand, Kapovitch-Mondino (\\cite{KapMon19}) proposed another recursive definition of the boundary analogous to that of Alexandrov spaces, for $N\\ge 2$: \n\\begin{equation}\\label{eq:KMboundary}\n \\mathcal{F}X\\mathrel{\\mathop:}=\\{x\\in X: \\exists Y\\in {\\rm Tan}(X,\\mathsf{d},\\mathfrak{m},x), Y=C(Z), \\mathcal{F}Z\\neq \\varnothing\\}.\n\\end{equation}\nIn this definition $Z$ must be a non-collapsed $\\RCD(N-2,N-1)$ space with suitable metric and measure (\\cite[Lemma 4.1]{KapMon19}, after \\cite{Ketterer2015}), so one can inductively reduce the consideration to the case $N=1$, in which case the classification is completed in \\cite{KL15}.\n\n\nThe measure theoretical and topological structure of De Philippis-Gigli's boundary is subsequently studied in \\cite{BNS20} and \\cite{BPS21}. We will need the following relation from combining \\cite[Lemma 4.6]{KapMon19} and \\cite[Theorem 6.6]{BNS20}: \n\\begin{equation}\\label{eq:boundaryrelation}\n \\mathcal{S}^{N-1}\\setminus\\mathcal{S}^{N-2}\\subset\\mathcal{F}X\\subset \\partial X.\n\\end{equation}\n An implication of the above relation is that not having boundary in both senses are the same, { and is equivalent to $ \\mathcal{S}^{N-1}\\setminus\\mathcal{S}^{N-2}=\\varnothing$.} It is conjectured that $\\mathcal{F}X=\\partial X$, and this is verified for Alexandrov spaces and Ricci limit spaces with boundary, see \\cite[Chapter 7]{BNS20}.\n \n \n \n \\subsection{Structure theory of Alexandrov Spaces}\\label{subsec:Alexandrov}\nObserve that the structure theory of $\\ncRCD(K,N)$ spaces holds for Alexandrov spaces since $N$-dimensional Alexandrov spaces with lower curvature bound $K$ are $\\ncRCD(K,N)$ spaces \\cite{Petrunin11}, though some results can have different, usually easier, proofs. Instead of attempting to give a thorough introduction, we collect here the following facts that are necessary for this paper and are more refined than that of $\\ncRCD(K,N)$ spaces. 
We refer readers to \\cite{BGP92, BBI01, Petr-conv} for detailed structure theory of Alexandrov spaces. \n\nFix an $N$-dimensional Alexandrov space $(X,\\mathsf{d})$. We describe the tangent cones, boundary and topological structure of $X$.\n \nTangent cones in an Alexandrov space are nicer than those in $\\ncRCD$ spaces, for example, the tangent cone at every point is unique. To better describe tangent cones, we introduce the space of directions:\n\n\\begin{definition}\nFor any $p\\in X$, we say that any $2$ geodesics emanating from $p$ have the same direction if their angle at $p$ is zero. This induces an equivalence relation on the space of all geodesics emanating from $p$ and the angle induces a metric on the space of equivalent classes of such geodesics. The metric completion of it is the space of directions at $p$, denoted by $\\Sigma_p(X)$.\n\\end{definition} \n\n$\\Sigma_p(X)$ is an $(N-1)$-dimensional Alexandrov space of curvature lower bound $1$ \\cite[Theorem 10.8.6]{BBI01}. The (metric) tangent cone at $p$ is the metric cone over $\\Sigma_p(X)$, this definition is consistent with the (blow-up) tangent cone $T_pX$ obtained by taking the pGH limit of $(X, r^{-1}\\mathsf{d},p)$ as $r\\to 0$. This observation along with Perelman's stability theorem \\cite{Perelman91} implies that $p$ has a neighborhood homeomorphic to a cone over $\\Sigma_p(X)$, therefore $X$ is an n-dimensional MCS-space by induction. For an alternative proof of this result see \\cite{Per-Morse}. \n\nThe boundary $\\mathcal{F}X$ is defined for $N\\ge 2$ as \n\\begin{equation}\\label{eq:Alexboundary}\n \\mathcal{F}X=\\{p\\in X: \\Sigma_p(X) \\text{ has boundary}\\}.\n\\end{equation}\nWhen $N=1$ Alexandrov spaces are manifolds, the boundary is just boundary of a manifold, see \\cite[7.19]{BGP92}. This gives the inspiration to the Kapovitch-Mondino boundary \\eqref{eq:KMboundary}. It is clear that when $(X,\\mathsf{d},\\mathcal{H}^N)$ is viewed as a $\\ncRCD(K,N)$ space, this boundary is exactly the Kapovitch-Mondino boundary, which justifies the use of notation.\n\nSimilar to the $\\ncRCD$ case, the set of manifold points of $X$ is open and dense, the non manifold points of $X$ is of Hausdorff dimension and topological dimension at most $n-1$ if $X$ has boundary and codimension at most $n-2$ if $X$ does not have boundary. This follows by combining \\eqref{eq:sing}, \\eqref{eq:boundaryrelation} and Theorem \\ref{thm:regular}. \n\nWe will also need the notion and properties of quasigeodesics on Alexandrov spaces \\cite{PP-quasigeoodesics}. Recall that a unit speed curve $\\gamma$ in an Alexandrov space is called a \\emph{quasigeodesic} if restrictions of distance functions to $\\gamma$ have the same concavity properties as their restrictions to geodesics. For example, for non-negatively curved Alexandrov space $X$ this means that for any $p\\in X$ the function $t\\mapsto d(\\gamma(t), p)^2$ is 2-concave. Every geodesic is obviously a quasigeodesic but the converse need not be true. 
For example if $X$ is the unit disk in $\\R^2$ then the boundary circle is a quasigeodesic in $X$.\nPetrunin and Perelman showed \\cite{PP-quasigeoodesics} that for every point $p$ in an Alexandrov space and every direction at $p$ there is an infinite quasigeodesic starting at $p$ in that direction.\n\n\n\\section{Invariance of Domain for Alexandrov spaces}\\label{sec:inv}\n\nAs stated in the introduction, the invariance of domain for Alexandrov spaces has long been known to experts; we present here a precise statement and its proof due to Belegradek-Ivanov-Petrunin on mathoverflow \\cite{BIP10}.\n\n\\begin{theorem}\\label{thm:inv}\nLet $(X,\\mathsf{d}_X)$, $(Y,\\mathsf{d}_Y)$ be Alexandrov spaces of the same dimension, and let $f:X\\to Y$ be an injective continuous map. For any open subset $U\\subset X$, if $U\\cap \\mathcal{F}X=\\varnothing$ then $f(U)\\cap \\mathcal{F}Y=\\varnothing$, and $f(U)$ is open in $Y$. \n\\end{theorem}\n\nThis theorem follows from the following purely topological Invariance of Domain Theorem for MCS spaces.\n\n\\begin{theorem}\\label{thm:inv-dom-mcs}\nLet $X,Y$ be $n$-dimensional MCS spaces such that $X_{n-1}=Y_{n-1}=\\varnothing$ and for all points in $Y$ their open conical neighborhoods have connected $n$-strata.\n\nLet $f: X\\to Y$ be continuous and injective.\n\nThen $f(X)$ is open in $Y$ and open conical neighborhoods of all points in $X$ have connected top strata.\n\n\\end{theorem}\n\n\n\n\n\n\nWe need the following lemma regarding the $\\mathbb{Z}_2$-cohomology of MCS spaces, originating from Grove-Petersen \\cite{PG93}, where it was initially stated for compact Alexandrov spaces without boundary. Note that finite dimensional MCS spaces are locally compact and locally contractible, since every point has a neighborhood homeomorphic to a cone, so Alexander-Spanier cohomology, singular cohomology and Cech cohomology all coincide. It is not necessary to specify which cohomology to use. In what follows all cohomology is taken with $\\mathbb{Z}_2$ coefficients.\n\nWe will make use of the following duality which holds for Alexander-Spanier cohomology with compact support \\cite[Chapter 1]{Massey-book}.\nGiven a locally compact Hausdorff space $Y$ and a closed subset $A\\subset Y$ it holds that $H^n_c(Y,A)\\cong H^n_c(Y\\setminus A)$.\n\\begin{lemma}\\label{lem:GP}\nLet $X$ be an $n$-dimensional compact MCS space such that $X_n$ has $k$ connected components and $X_{n-1}=\\varnothing$. Then $H^n(X)\\cong\\mathbb{Z}_2^k$.\n\\end{lemma}\n\\begin{proof}\nThe proof is the same as in \\cite{PG93}.\n\n\nSince $X_n=X\\setminus S$ is an $n$-manifold with $k$ connected components we have that $H^n_c(X\\setminus S)\\cong \\mathbb{Z}_2^k$. On the other hand, by Alexander-Spanier duality we have that $H^n_c(X\\setminus S)\\cong H^n_c(X,S)\\cong H^n(X,S)$, where the last isomorphism holds since $X$ is compact.
Now the result immediately follows from the long exact sequence of the pair $(X,S)$ using the fact that $S$ is the union of strata of dimension $\\le 2$ and hence $H^{n-1}(S)\\cong H^{n}(S)=0$.\n\\end{proof}\n\n \n \n Note that in the above proof we get that $H^n_c(X\\setminus S)\\cong H^n(X,S)\\cong H^n(X)$.\nCompare this to the proof of the following Lemma\n\n\\begin{lemma}\\label{lem:Igor}\n Let $(X,\\mathsf{d}_X)$ be a compact $n$-dimensional MCS space with connected $X_n$ and ${X_{n-1}}=\\varnothing$, take $x\\in X_n$.\n Then we have\n \\begin{enumerate}\n \\item\\label{lem:Igoritem1} $H^n(X\\setminus \\{x\\})=0$;\n \\item\\label{lem:Igoritem2} the inclusion $i: (X,\\varnothing)\\to (X,X\\setminus\\{x\\})$ induces an isomorphism on cohomology, that is \n \n \\begin{equation}\n i^*: H^n(X, X\\setminus \\{x\\})\\to H^n(X)\n \\end{equation}\n is an isomorphism.\n \\end{enumerate}\n\\end{lemma}\n\n\\begin{proof}[Proof of Lemma~\\ref{lem:Igor}]\n{\n\n\n\n We first show item \\ref{lem:Igoritem2}. Let $U\\subset X\\setminus S=X_n$ be connected and open. Since $X\\setminus S$ is a manifold and $U$ is connected, we have that the inclusion $U\\hookrightarrow X\\setminus S$ induces an isomorphism between compactly supported cohomology $H^n_c(U)$ and $H^n_c (X\\setminus S)$. \n \n \n Also, since $X$ is compact we have that $H^n_c(X, X\\setminus U)\\cong H^n(X, X\\setminus U)$ and similarly $H^n_c(X,S)\\cong H^n(X,S)$.\n \n \n With this at disposal, consider the inclusion of pairs $(X,S)\\hookrightarrow (X,X\\setminus U)$, we have \n \n \\begin{tikzcd}\nH^n_c(U) \\arrow[r, \"\\cong\"] \\arrow[d,\"\\cong\"]\n& H^n_c(X\\setminus S) \\arrow[d, \"\\cong\"] \\\\\nH^n(X, X\\setminus U) \\arrow[r]\n& |[]| H^n(X,S),\n\\end{tikzcd}\\\\\n\nwhere the vertical arrows are Alexander-Spanier duality combined with the above isomorphisms $H^n_c(X, X\\setminus U)\\cong H^n(X, X\\setminus U)$ and $H^n_c(X,S)\\cong H^n(X,S)$.\n}\n\nThis gives an isomorphism between $H^n(X, X\\setminus U)$ and $H^n(X,S)$ hence between $H^n(X, X\\setminus U)$ and $H^n(X)$ by inclusion. Note that $X\\setminus \\{x\\}$ deformation retract to $X\\setminus U$ for some open conical open neighborhood $U\\subset X\\setminus S$ of $x$, which implies that $i^*: H^n(X, X\\setminus \\{x\\})\\to H^n(X)$ is an isomorphism. \n\nNext we show item \\ref{lem:Igoritem1}. To compute $H^n(X\\setminus \\{x\\})$, look at the long exact sequence for the pair $(X,X\\setminus \\{x\\})$:\n \\begin{equation}\n \\cdots\\rightarrow H^n(X,X\\setminus \\{x\\})\\xrightarrow{\\cong} H^n(X)\\rightarrow H^n(X\\setminus \\{x\\})\\xrightarrow{0} H^{n+1}(X,X\\setminus \\{x\\})\\rightarrow\\cdots,\n \\end{equation}\n$H^n(X\\setminus \\{x\\})=0$ follows directly. \n\\end{proof}\n\nNow we can prove the invariance of domain for MCS spaces. The strategy is to localize $X,Y$ to suspensions over lower dimensional strata\n, so that the proof reduces to the case of compact MCS\nspaces with connected top stratum and empty codimension 1 stratum, where the above lemmas apply.\n\n\\begin{proof}[Proof of Theorem \\ref{thm:inv-dom-mcs}]\n\n{ Let us first prove the theorem under the extra assumption that for all points in $X$ the top strata of their conical neighborhoods are connected.}\n\nWe break the proof into steps.\n\n{\\bf Step 1: }Localize to suspensions, which are MCS spaces satisfying assumption in Lemma \\ref{lem:GP} and Lemma \\ref{lem:Igor}. \n\n\nLet $x\\in U$ and $y\\mathrel{\\mathop:}= f(x)\\in f(U)$. 
Both $x,y$ have a neighborhood homeomorphic to cones over\n{ some $(n-1)$-dimensional MCS spaces} $\\Sigma_x$, $\\Sigma_y$, respectively. Take cone neighborhoods of $x$, $B_x\\Subset B'_x\\Subset U$, then there exists a cone neighborhood of $y$, say $B_y\\subset f(U)$ such that $B_y\\cap f(\\overline{B'_x}\\setminus B_x)=\\varnothing$. Let $C\\mathrel{\\mathop:}= U\\setminus B_x$ and $D\\mathrel{\\mathop:}= Y\\setminus B_y$. Note that both $U\/C$ and $Y\/D$ are homeomorphic to a suspension over $\\Sigma_x$, $\\Sigma_y$ respectively. The quotient map induces a new map $\\tilde{f}: U\/C\\to Y\/D$ between compact {$n$-dimensional MCS spaces with connected top stratum and empty codimension $1$ stratum. \nObserve that $\\tilde{f}$ remains injective on $f^{-1}(B_y)=f^{-1}(Y\\setminus D)$. \n\nIt suffices to show that $\\tilde{f}$ is surjective onto $B_y$ identified with its image in $Y\/D$. By continuity, it suffices to show every\npoint in $Y_n \\cap B_y$ is in the image of $\\tilde{f}$. \n\n\n\n{\\bf Step 2: }We show that $\\tilde{f}^*: H^n(Y\/D)\\to H^n(U\/C)$ is an isomorphism.\n\n\n First, we claim that there exists a\n point $x'\\in X_n $ such that $y'\\mathrel{\\mathop:}= f(x)\\in Y_n$. To see this, let $x\\in U\\cap X_n$, and take a compact neighborhood $B$, it is of topological dimension $n$, since $f$ is injective and continuous, it is a homeomorphism between $B$ and $f(B)$, so $f(B)$ also has topological dimension $n$, which means $f(B)$ can not be entirely in $\\cup_{k=0}^{n-2}Y_k$,\n which is of topological dimension at most $n-2$. Now that we have $x'\\in X_n$ and $y'\\in Y_n$ , we claim that $\\tilde{f}^*:H^n(Y\/D, Y\/(D\\setminus \\{y'\\}))\\to H^n(U\/C,U\/(C\\setminus\\{x'\\}))$ is an isomorphism. \n\nTo this end, take an excision around the manifold neighborhood of $x',y'$ respectively. The desired claim reduces to showing that $f^*:H^n(B^n,B^n\\setminus \\{x'\\})\\to H^n(f(B^n),f(B^n)\\setminus \\{y'\\})$ is an isomorphism for injective and continuous $f$ such that $f(x')=y'$, where $B^n$ is a ball in $\\mathbb{R}^n$. The invariance of domain for $\\mathbb{R}^n$ has been used to show that $f(B^n)$ is open so that an excision can be applied on $Y\/D$. The invariance of domain for $\\mathbb{R}^n$ also shows $f:(B^n, B^n\\setminus \\{x'\\})\\to (f(B^n),f(B^n)\\setminus \\{y'\\})$ is a homeomorphism, the claim follows.\n\nNow consider the induced map $f^*$ between long exact sequences of the pairs $(Y\/D, Y\/(D\\setminus \\{y'\\}))$ and $(U\/C,U\/(C\\setminus \\{x'\\}))$, taking also into account item \\ref{lem:Igoritem2} of Lemma \\ref{lem:Igor}, by 5-Lemma it follows that $\\tilde{f}^*: H^n(Y\/D)\\to H^n(U\/C)$ is an isomorphism. \n\n\n\n{\\bf Step 3: }Arguing by contradiction assume that\n $\\tilde{f}$ is not surjective onto $(Y_n\\cap B_y)$ identified with its image in $Y\/D$, we show that $\\tilde{f}^* :H^n(Y\/D)\\to H^n(X\/C)$ is a zero map. However, $\\tilde{f}^*$ cannot be both zero map and isomorphism (from Step 2), because by Lemma \\ref{lem:GP}, $H^n(Y\/D)= H^n(U\/C)=\\mathbb{Z}_2$, a contradiction.\n \n\nFor this purpose, suppose that a\npoint $z\\in Y_n\\cap B_y$ is missed by $\\tilde{f}$, then $\\tilde{f}$ can be factored through \n\\begin{equation}\n \\tilde{f}: U\/C\\rightarrow Y\/(D\\setminus \\{z\\})\\rightarrow Y\/D.\n\\end{equation}\nSince $H^n(Y\/(D\\setminus \\{z\\}))=0$ due to item \\ref{lem:Igoritem1} of lemma \\ref{lem:Igor}, $\\tilde{f}^* :H^n(Y\/D)\\to H^n(X\/C)$ is a zero map. 
\n\n\nThis concludes the proof of the theorem under the extra assumption that for all points in $X$ the top strata of their conical neighborhoods are connected. \n\n\n\nTo complete the proof in the general case we will need the following general lemma.\n\n\\begin{lemma}\\label{lem-top-strata}\nLet $Z$ be a connected $n$-dimensional MCS space such that its top stratum $Z_n$ is not connected. Then there exists a point $z\\in Z$ such that the top stratum of its conical neighborhood $U_z$ is not connected.\n\\end{lemma}\n\\begin{proof}[Proof of Lemma \\ref{lem-top-strata}]\nLet $p, q$ be points lying in different connected components of $Z_n$. Since $Z$ is connected there is a path $\\gamma:[0,1]\\to Z$ such that $\\gamma(0)=p, \\gamma(1)=q$.\nBy compactness of $[0,1]$ there exist finitely many connected components $U_1,\\ldots, U_k$ of $Z_n$ whose closures intersect $\\gamma$. Since the top stratum is dense in $Z$ we have that\n$\\gamma$ is contained in $\\bar U_1\\cup \\bar U_2\\cup\\ldots\\cup \\bar U_k$, therefore $[0,1]=\\gamma^{-1}(\\bar U_1)\\cup \\gamma^{-1}(\\bar U_2)\\cup \\ldots\\cup \\gamma^{-1}(\\bar U_k)$.\nAs all these sets are closed and $[0,1]$ is connected, this covering cannot be disjoint and hence there is $t_0\\in [0,1]$ which belongs to at least two of the sets $\\gamma^{-1}(\\bar U_j)$. Then $z=\\gamma(t_0)$ satisfies the conclusion of the lemma.\n\n\\end{proof}\nWe now continue with the proof of Theorem \\ref{thm:inv-dom-mcs}.\n\nRecall that we have proved the theorem under the assumption that all conical neighborhoods of points in $X$ have connected top strata.\n\nNow suppose there are some points in $X$ such that the top strata of their conical neighborhoods are not connected. Let $l$ be the largest number such that $X_l$ contains such a point, and take such an $x\\in X_l$.\n\n\nThen its conical neighborhood $U_x$ has the form $\\R^l\\times C(\\Sigma)$ where $\\Sigma$ is an $(n-l-1)$-dimensional MCS space. \nNote that points in $U_x$ outside of $\\R^l\\times \\{*\\}$ (here $*$ is the cone point in $C(\\Sigma)$) lie in the union of strata of dimension $>l$.\n\nWe claim that $\\Sigma$ has more than one connected component. Indeed, if not, then its top stratum is not connected while $\\Sigma$ itself is connected. Then by Lemma \\ref{lem-top-strata} applied to $\\Sigma$ there exists a point $\\sigma\\in \\Sigma$ such that the top stratum of its conical neighborhood in $\\Sigma$ is not connected. But then the corresponding point in $U_x$ will lie in $X_m$ for some $m>l$ and also have the property that the top stratum of its conical neighborhood is not connected. This contradicts the maximality of $l$ in the choice of $x$. \n\nLet $\\Sigma'$ be one component of $\\Sigma$. Then the subset $W'=\\R^l\\times C(\\Sigma')\\subset U_x$ is an $n$-dimensional MCS space with empty $(n-1)$-stratum and such that the top strata of all conical neighborhoods in $W'$ are connected.\n Then the restriction of $f$ gives an injective continuous map $f: W'\\to Y$, and by the proof above the image $f(W')$ is an open neighborhood of $f(x)$. But the same argument applies to any other component $\\Sigma''$ of $\\Sigma$ and gives another subset $W''\\subset U_x$ which contains $x$ and such that $f(W'')$ is also an open neighborhood of $f(x)$. This contradicts injectivity of $f$ near $x$. Therefore, under the assumptions of the theorem, conical neighborhoods of points in $X$ must necessarily have connected top strata.\n\\end{proof}\n\n\\begin{remark}\nThe connectedness assumption on the top strata of conical neighborhoods in $Y$ is essential.
For example, take $Y=\\R^n\\bigvee \\R^n$ to be the wedge sum of two copies of $\\R^n$ glued at $0$, $X=\\R^n$ and $f:X\\hookrightarrow Y$ be inclusion of the first copy of $\\R^n$. \nThis map is clearly 1-1 but the image is not open since it does not contain any neighborhood of $0$ in $Y$.\n\n\t\n\\end{remark}\n\n\\begin{remark}\nThe conclusion that conical neighborhoods of points in $X$ must be connected can be viewed as a non-embeddability result. In other words the following holds. Suppose $Y$ satisfies the assumption of the theorem and $X$ is an $n$-dimensional MCS space with empty $(n-1)$-stratum and such that there is a point in $X$ such that the top stratum of its conical neighborhood is not connected. Then there is no 1-1 continuous map $f:X\\to Y$.\n\\end{remark} \n\n\n\\begin{proof}[Proof of Theorem \\ref{thm:inv}]\n\tAs pointed out in section \\ref{subsec:Alexandrov}, every Alexandrov space is an MCS space with connected top stratum.\n\t\n\t The assumption $U\\cap \\mathcal{F}X=\\varnothing$ implies that $U$ has empty codimension $1$ stratum. } Next, for every $p\\in Y$ its conical neighborhood $W_p$ is homeomorphic to $T_pY$ which is a nonnegatively curved Alexandrov space.\n\t\n Since the top stratum of $T_pY$ is connected the same is true for $W_p$.\n\t\n\t It suffices to show that $f(U)\\cap\\mathcal{F}Y=\\varnothing$, everything else follows from Theorem \\ref{thm:inv-dom-mcs}. \n\t\n\tAssume $\\mathcal{F}Y\\neq \\varnothing$. Take the metric double $\\tilde{Y}$ of $Y$, $\\tilde{Y}$ an $n$-dimensional Alexandrov space without boundary, and $f:X\\to Y$ extends to an injective and continuous map into $\\tilde Y$ by post composing with the inclusion map $Y\\hookrightarrow \\tilde Y$. We still denote it by $f$. Applying Theorem \\ref{thm:inv-dom-mcs} to $f:X\\to \\tilde{Y}$, we see that $f(U)$ must be open in $\\tilde Y$. If there exists $z\\in f(U)\\cap \\mathcal{F}Y$, then there exists an open neighborhood $V$ of $z$ in $f(U)\\cap\\tilde Y$. By definition of metric double $V$ must intersect both copies of $Y$ in $\\tilde Y$, this is a contradiction to the definition of $f$, from which it follows that $f(U)$ can not intersect $\\mathcal{F}Y$.\n\\end{proof}\n\n\n\n\n\n\n\n\\section{Equivalence of intrinsic and extrinsic boundary}\\label{sec:equi}\n\n\\subsection{Alexandrov case}\n\n\n\\begin{proof}[Proof of Theorem \\ref{thm:main1}]\n\n We first show that $\\partial_{\\rm top}\\bar\\Omega=\\mathcal{F}\\bar\\Omega$. Since tangent cones at points in $\\Omega$ have no boundary, we see that $\\mathcal{F}\\bar\\Omega\\subset\\partial_{\\rm top}\\Omega$. Now take $p\\in \\partial_{\\rm top}\\bar\\Omega$, if to the contrary $p\\notin \\mathcal{F}\\bar\\Omega$, then there is an open set $U\\subset \\bar{\\Omega}$ containing $p$ such that $U\\cap \\mathcal{F}\\bar\\Omega=\\varnothing$, since $\\mathcal{F}\\bar\\Omega$ is closed. The Invariance of Domain Theorem \\ref{thm:inv} applied to inclusion $i:\\bar{\\Omega}\\hookrightarrow X$ yields that $i(U)=U$ is also an open subset of $X$, so $p\\in U\\subset {\\rm Int}_{\\rm top}(\\bar\\Omega)=\\Omega$, a contradiction to $p\\in\\partial_{\\rm top}\\bar\\Omega$. \n\n Now that $\\partial_{\\rm top}\\bar\\Omega=\\mathcal{F}\\bar\\Omega$, it follows immediately that $\\Omega$ coincides with $\\bar\\Omega\\setminus \\mathcal{F}\\bar\\Omega$, which is the interior in the sense of Alexandrov spaces. 
So strong convexity of the interior of an Alexandrov space yields that any geodesic $\\gamma$ of $(\\bar\\Omega,\\mathsf{d}_\\Omega)$ joining two points in $\\Omega$ does not intersect $\\mathcal{F}\\bar\\Omega$, hence does not intersect $\\partial_{\\rm top}\\bar\\Omega$. \n The proof of item \\ref{main1:item2} is completed by noticing that any $\\mathsf{d}_\\Omega$-geodesic connecting points in ${\\rm Int}_{\\rm top}(\\bar\\Omega)=\\Omega$ and entirely contained in $\\Omega$ is a local geodesic of $(X,\\mathsf{d}_X)$.\n\nFor the proof of item \\ref{main1:item3}, let $p,q\\in \\partial_{\\rm top}\\bar\\Omega$, $d=\\mathsf{d}_\\Omega(p,q)$, and let $\\gamma: [0,d]\\to \\bar\\Omega$ be a unit speed geodesic with respect to $\\mathsf{d}_\\Omega$ joining $p,q$ such that $\\gamma(0)=p$, $\\gamma(d)=q$. For any small enough $\\varepsilon\\in (0,d\/3)$, take $p'=\\gamma(\\varepsilon)$ and $q'=\\gamma(d-\\varepsilon)$. \nWe can find sequences $\\{p'_n\\}$ and $\\{q'_n\\}$ in $\\Omega$ so that $p'_n\\to p'$ and $q'_n\\to q'$. The geodesic $\\gamma_n$ of $(\\bar\\Omega,\\mathsf{d}_\\Omega)$ joining $p'_n$ and $q'_n$ must converge to $\\gamma|_{[\\varepsilon,d-\\varepsilon]}$, since otherwise there would be branching geodesics between $p$ and $q$. \nOn the other hand, by item \\ref{main1:item2}, $\\gamma_n$ is a local geodesic of $(X,\\mathsf{d}_X)$ and hence is a quasigeodesic in $X$. Since limits of quasigeodesics are quasigeodesics it follows that \n $\\gamma|_{[\\varepsilon,d-\\varepsilon]}$ is a quasi-geodesic in $X$. Letting $\\varepsilon\\to 0$ we conclude that $\\gamma$ is a quasi-geodesic in $X$ as well.\n\\end{proof}\n\n\n\n\\subsection{$\\ncRCD$ case}\nThe purpose of this section is to prove Theorem \\ref{thm:main2}. We need the following pairwise almost convexity result proved by Deng in \\cite[Theorem 6.5]{deng2020holder}.\n\n\\begin{proposition}\\label{prop:pairconv}\nLet $(X,\\mathsf{d},\\mathfrak{m})$ be an $\\RCD(K,N)$ space with ${\\rm essdim}=n$. For $\\mathfrak{m}\\times\\mathfrak{m}$-a.e.\\ $(x,y)\\in\\mathcal{R}_n\\times \\mathcal{R}_n$, there exists a geodesic joining $x$ and $y$ which is entirely contained in $\\mathcal{R}_n$.\n\\end{proposition}\n\n \n \\begin{proof}[Proof of Theorem \\ref{thm:main2}]\n We assume that $\\bar\\Omega\\neq X$; otherwise either the assumption $\\bar \\Omega\\cap \\partial X=\\varnothing$ is contradicted or the statement is trivial. We break the proof into several steps.\n \n {\\bf Step 1:} We show $\\partial \\bar\\Omega\\subset \\partial_{\\rm top} \\bar\\Omega$. \n \n \n Observe that $\\mathsf{d}_\\Omega$ and $\\mathsf{d}_X$ coincide on sufficiently small open subsets of $\\Omega$, hence tangent cones taken at the same point with respect to the same rescaling sequence in both metrics are isometric for points in $\\Omega$; in particular, tangent cones at points in $\\Omega$ have no boundary. This means that $\\mathcal{S}^{N-1}(\\bar\\Omega)\\setminus \\mathcal{S}^{N-2}(\\bar\\Omega)\\subset \\partial_{\\rm top} \\bar\\Omega$. Since $\\partial_{\\rm top} \\bar\\Omega$ is closed, we have $\\partial \\bar\\Omega\\subset \\partial_{\\rm top} \\bar\\Omega$.\n \n {\\bf Step 2:} Suppose that $\\partial_{\\rm top} \\bar \\Omega\\subset \\partial \\bar\\Omega$ does not hold; we find a point $q\\in \\partial_{\\rm top}\\bar\\Omega\\setminus \\partial \\bar\\Omega$ such that $q\\in \\mathcal{R}(X)$.\n \n \n First, there exists $p\\in \\partial_{\\rm top}\\bar\\Omega\\setminus \\partial \\bar\\Omega$. Since $\\partial_{\\rm top}\\bar\\Omega$ and $\\partial \\bar\\Omega$ are both closed, there exists $\\varepsilon>0$ such that $B_{2\\varepsilon}(p)\\cap \\partial \\bar\\Omega=\\varnothing$.
Now consider any two points in $B_{\\varepsilon\/2}(p)$. By triangle inequality, any geodesic joining such two points lies in $B_{\\varepsilon}(p)$ hence does not intersect $\\partial \\bar\\Omega$, moreover, note that $\\mathcal{H}^N_X(B_{\\varepsilon\/2}(p)\\cap \\Omega)>0$, $\\mathcal{H}^N_X(B_{\\varepsilon\/2}(p)\\cap (X\\setminus \\bar\\Omega))>0$ (recall we assumed $\\bar\\Omega\\neq X$), by Deng's pairwise almost convexity of the regular set, Proposition \\ref{prop:pairconv}, there exist $x\\in B_{\\varepsilon\/2}(p)\\cap \\Omega\\cap \\mathcal{R}(X)$ and $y\\in B_{\\varepsilon\/2}(p)\\cap (X\\setminus \\bar\\Omega)\\cap \\mathcal{R}(X)$ such that some geodesic, denote it by $\\gamma_{xy}$, joining $x,y$ is entirely contained in $\\mathcal{R}(X)$, meanwhile, $\\gamma_{xy}$ must intersect $\\partial_{\\rm top}\\bar\\Omega$, and the point of intersection, denoted by $q$, is the desired point. \n \n {\\bf Step 3:} We show that for the point $q$ we found in step $2$, there exists a neighborhood $U$ so that $\\partial_{\\rm top}\\bar \\Omega\\cap U$ has Hausdorff codimension at least $2$ (recall Remark \\ref{rmk:equiHaus}), and there exists $\\delta\\mathrel{\\mathop:}= \\delta(K,N)>0$ depending only on $K,N$ such that $\\Theta_{\\bar\\Omega}(x)\\le 1-\\delta$ for any $x\\in \\partial_{\\rm top}\\bar \\Omega\\cap U$.\n \n \n Since $q\\in \\mathcal{R}(X)\\cap (\\partial_{\\rm top}\\bar \\Omega\\setminus \\partial \\bar\\Omega)$, there exists an open neighborhood $U$ such that $U$ is homeomorphic to a manifold and $U\\cap \\partial \\bar\\Omega=\\varnothing$. We claim that $\\partial_{\\rm top}\\bar \\Omega\\cap U\\subset \\mathcal{S}^{N-2}(\\bar\\Omega)$. It suffices to show $\\partial_{\\rm top}\\bar \\Omega\\cap U\\subset \\mathcal{S}(\\bar\\Omega)$ since $U$ is disjoint from $\\partial \\bar\\Omega$. \n \n Let $\\delta\\mathrel{\\mathop:}=\\delta(K,N)>0$ be as in Theorem \\ref{thm:regular}, if there exists $x\\in \\partial_{\\rm top}\\bar \\Omega\\cap U$ with $\\Theta_{\\bar\\Omega}(x)> 1-\\delta$ then there exists $V\\subset U\\cap \\bar\\Omega$ containing $x$, open relative to $\\bar\\Omega$, and homeomorphic to a manifold. Now the invariance of domain for manifolds applied to the inclusion $V\\hookrightarrow U$ yields that $V$ is open in $X$, hence $V\\subset \\Omega$. This contradicts that $x\\in \\partial_{\\rm top}\\bar \\Omega$. Therefore for any $x\\in \\partial_{\\rm top}\\bar \\Omega\\cap U$ it holds that $\\Theta_{\\bar\\Omega}(x)\\le 1-\\delta$ which by the choice of $\\delta$ implies that $\\partial_{\\rm top}\\bar \\Omega\\cap U\\subset \\mathcal{S}^{N-2}(\\bar\\Omega)$. \n Since Hausdorff codimension of $\\mathcal{S}^{N-2}(\\bar\\Omega)$ is at least $2$, the proof of this step is completed.\n \n \n {\\bf Step 4:} We show that when we blow up the inclusion map $ i_0:\\bar\\Omega\\hookrightarrow X$ at $q$, the induced map $i_1: T_q\\bar \\Omega\\to T_q X \\cong \\R^N$ is not surjective near $0$, in fact, $0$ is on the topological boundary of $i_1(T_q\\bar \\Omega)$.\n \n Denote by $B^X_r$ (resp. $B^{\\bar\\Omega}_r$) the ball of radius $r$ in metric $\\mathsf{d}_X$ (resp. $\\mathsf{d}_\\Omega$). We claim that $\\mathcal{H}^N_X(B^{\\bar\\Omega}_r(x))=\\mathcal{H}^N_{\\bar\\Omega}(B^{\\bar\\Omega}_r(x))$ for $x\\in U\\cap \\bar\\Omega$ and $r>0$ small enough so that $B^{\\bar\\Omega}_r(x)\\subset U$. 
Observe that the two distances $\\mathsf{d}_X$ and $\\mathsf{d}_{\\Omega}$ coincide on small enough open subsets of $\\Omega$, so $\\mathcal{H}^N_X$ and $\\mathcal{H}^N_{\\bar\\Omega}$ give the same mass to open subsets of $\\Omega$. Now observe that $B^{\\bar\\Omega}_r(x)=(B^{\\bar\\Omega}_r(x)\\cap \\Omega)\\cup (B^{\\bar\\Omega}_r(x)\\cap\\partial_{\\rm top} \\bar\\Omega)$, where the former is open in $\\Omega$ and the latter has codimension at least $2$ by step 3, hence measure zero; this completes the proof of the claim. Recall from steps 2 and 3 that $\\Theta_X(q)=1$ and $\\Theta_{\\bar\\Omega}(q)\\le 1-\\delta$; it follows that\n \n \\begin{equation}\\label{eq:density}\n \\lim_{r\\to 0} \\frac{\\mathcal{H}^N_X(B^{\\bar\\Omega}_r(q))}{\\mathcal{H}^N_X(B^{X}_r(q))}=\\lim_{r\\to 0} \\frac{\\mathcal{H}^N_{\\bar\\Omega}(B^{\\bar\\Omega}_r(q))}{\\mathcal{H}^N_X(B^{X}_r(q))}=\\frac{\\Theta_{\\bar\\Omega}(q)}{\\Theta_{X}(q)}\\le 1-\\delta.\n \\end{equation} \n If $i_1(T_q \\bar \\Omega)$ contains $B^{\\R^N}_{\\varepsilon}(0)$ for some $\\varepsilon>0$, then the local coincidence of the metrics away from the boundary implies $B^{\\R^N}_{\\varepsilon\/2}(0)=B^{T_q\\bar\\Omega}_{\\varepsilon\/2}(0)$, which in turn implies $\\mathcal{H}^N_{\\R^N}(B^{T_q\\bar\\Omega}_{\\varepsilon\/2}(0))=\\mathcal{H}^N_{\\R^N}(B^{\\R^N}_{\\varepsilon\/2}(0))$; this contradicts \\eqref{eq:density}.\n\n {\\bf Step 5:} We derive a contradiction by iteratively blowing up at a topological boundary point. \n \n \n If $N=1$, then the statement is clear thanks to the classification theorem \\cite{KL15}. It suffices to consider the case $N\\ge 2$. In this case the topological boundary of $T_q\\bar\\Omega$ consists of more than a single point. To see this, it is enough to notice that $i_1$ is bi-Lipschitz (recall Remark \\ref{rmk:equiHaus}), so it is a homeomorphism onto its image. \n \n Now we summarize the properties needed for the blow-up procedure. In the setting of this theorem, let $i_0:\\bar \\Omega\\hookrightarrow X$ be the inclusion map, $q\\in \\partial_{\\rm top}\\bar\\Omega$ and $i_1: T_q\\bar\\Omega\\to T_qX$ be the blow-up of $i_0$ at $q$. In order for the cone tip of $T_q\\bar\\Omega$ to be on $\\partial_{\\rm top}i_1(T_q\\bar\\Omega)$, it is sufficient to have:\n \n \\begin{enumerate}\n \\item $q\\in \\mathcal{R}(X)$ and $\\Theta_{\\bar\\Omega}(q)\\le 1-\\delta$; \n \\item $\\mathcal{H}^N_{\\bar\\Omega}(B^{\\bar\\Omega}_r(q))=\\mathcal{H}^N_{X}(B^{\\bar\\Omega}_r(q))$ for sufficiently small $r>0$;\n \\item $q\\notin \\mathcal{F}\\bar\\Omega$.\n \\end{enumerate}\n \n After the blow-up procedure in step 4, the ambient space $T_q X\\cong \\R^N$ has no singular points. Moreover, $q\\notin \\partial \\bar\\Omega$ implies $q\\notin \\mathcal{F}\\bar\\Omega$, which means that iterated tangent cones at $q$ w.r.t.\\ $(\\bar\\Omega, \\mathsf{d}_\\Omega)$ have no boundary. Hence every point on $\\partial_{\\rm top} i_1(T_q\\bar\\Omega)$ (not empty by step 4) still satisfies the conditions listed above, so we can continue blowing up at any point on $\\partial_{\\rm top} i_1(T_q\\bar\\Omega)$ other than the cone tip, each time keeping the base point a point on the topological boundary. 
In finitely many blow-up procedures, we end up with a bi-Lipschitz map $i_N: \\R^N\\to \\R^N$ such that $i_N(0)=0$, $i_N$ not surjective, and $0$ is on the topological boundary of $i_N(\\R^N)$, this is impossible by invariance of domain.\n \n \\end{proof}\n \n \n \n\n\n\n\n\n\\section{Applications}\\label{sec:AppConv}\n In this section we derive from the boundary equivalence in various ambient spaces the {locally totally geodesic} property, i.e., a subset satisfying $\\ncRCD(K,N)$ condition forces the geodesics in intrinsic metric joining interior points to be disjoint from boundary. \n \n We first introduce a technical result which is a direct consequence of H\\\"older continuity along interior of tangent cones pointed out in \\cite[Corollary 1.5]{CN12}, it is available for $\\ncRCD(K,N)$ spaces thanks to Deng's generalization of this statement \\cite{deng2020holder}. \n\n\\begin{proposition}\\label{prop:dentofull}\nLet $(X,\\mathsf{d},\\mathfrak{m})$ be an $\\RCD(K,N)$ space, and $\\gamma$ be a geodesic in $X$. The set of points in $\\gamma$ with unique tangent cone is relatively closed in the interior of $\\gamma$. In particular, for each integer $1\\le k\\le N$, $\\gamma\\cap \\mathcal{R}_k$ is closed relative to the interior of $\\gamma$. If in addition $\\gamma\\cap \\mathcal{R}_k$ is dense in the interior of $\\gamma$, then it is all of the interior.\n\\end{proposition}\n\nWe start with the following simplest setting, where the ambient space is a smooth manifold {but there are no assumption on the regularity of topological boundary}.\n\\begin{theorem}\\label{thm:smooth-rcd-subset}\n Let $(M,g)$ be an $n$-dimensional smooth\n \n manifold, and $\\Omega\\subset M$ be open, connected and such that ${\\rm Int}(\\bar\\Omega)=\\Omega$. If $(\\bar\\Omega,\\mathsf{d}_\\Omega, \\vol_g\\mathbin{\\vrule height 1.6ex depth 0pt width 0.13ex\\vrule height 0.13ex depth 0pt width 1.3ex} \\bar\\Omega)$ is a $\\ncRCD(K,n)$ space, then \n \\begin{enumerate}\n \\item $\\partial_{\\rm top}\\bar \\Omega=\\partial \\bar\\Omega$.\n \\item any minimizing geodesic in $(\\bar\\Omega,\\mathsf{d}_\\Omega)$ joining two points in $\\Omega$ does not intersect $\\partial \\Omega$ hence a local geodesic in $(M,g)$, {i.e., $\\Omega$ is locally totally geodesic};\n \\item any minimizing geodesic in $(\\bar\\Omega,\\mathsf{d}_\\Omega)$ joining two points on $\\partial_{\\rm top}\\bar\\Omega$ is either entirely contained in $\\partial_{\\rm top}\\bar\\Omega$, or its interior is entirely in $\\Omega$. In the latter case the minimizing geodesic is also a local geodesic in $(M,g)$.\n \n \\end{enumerate}\n\\end{theorem}\n\n\\begin{proof}\n We first show that if $p\\in \\partial_{\\rm top}\\bar\\Omega$, then any tangent cone taken w.r.t. $(\\bar\\Omega,\\mathsf{d}_\\Omega, \\vol_g\\mathbin{\\vrule height 1.6ex depth 0pt width 0.13ex\\vrule height 0.13ex depth 0pt width 1.3ex} \\bar\\Omega)$ at $p$ cannot be $\\R^n$. This is contained in step 3 of the proof of Theorem \\ref{thm:main2}. If there is a tangent cone w.r.t. $(\\bar\\Omega,\\mathsf{d}_\\Omega, \\vol_g\\mathbin{\\vrule height 1.6ex depth 0pt width 0.13ex\\vrule height 0.13ex depth 0pt width 1.3ex} \\bar\\Omega)$ at $p$ is $\\R^n$, then there is a neighborhood $V$ of $p$ open in $\\bar\\Omega$ homeomorphic to $\\R^n$, while there is also a neighborhood $U$ of $p$ open in $M$ homeomorphic to $\\R^n$. 
Then the invariance of domain applied to the inclusion $U\\cap V\\hookrightarrow U$ shows that $U\\cap V$ is open in $M$ and $U\\cap V\\subset \\Omega$, a contradiction to $p\\in \\partial_{\\rm top}\\bar\\Omega$. It follows directly that $\\partial_{\\rm top}\\bar\\Omega=\\partial \\bar \\Omega$.\n \n Consider now a minimizing geodesic $\\gamma:[0,1]\\to \\bar\\Omega$ in $(\\bar\\Omega,\\mathsf{d}_{\\Omega})$. Then $\\gamma((0,1))\\cap \\partial_{\\rm top} \\bar\\Omega$ is relatively closed in $\\gamma((0,1))$. By Proposition \\ref{prop:dentofull}, $\\gamma((0,1))\\setminus \\partial_{\\rm top} \\bar\\Omega$ is also relatively closed, as it is the set of points in $\\gamma((0,1))$ having tangent cone $\\R^n$. It follows from the connectedness of $\\gamma((0,1))$ that either $\\gamma((0,1))\\setminus \\partial_{\\rm top} \\bar\\Omega$ or $\\gamma((0,1))\\cap \\partial_{\\rm top} \\bar\\Omega$ is empty.\n\\end{proof}\n\n{\n\\begin{remark}\nNote that Theorem \\ref{thm:smooth-rcd-subset} implies that $\\bar \\Omega$ is locally convex in $M$ and is hence locally Alexandrov (globally Alexandrov if it is compact).\n\\end{remark}\n}\nWe now move to the case where the ambient space is a $\\ncRCD(K,N)$ space. With the extra assumption $\\partial X=\\mathcal{F}X$ and the stability of absence of boundary \\cite[Theorem 1.6]{BNS20} of an $\\RCD(K,N)$ space, the exact same idea can be used to prove that ${\\rm Int}(X)$ is strongly geodesically convex.\n\n\\begin{corollary}\\label{cor:intconv}\n Let $(X,\\mathsf{d},\\mathcal{H}^N)$ be a $\\ncRCD(K,N)$ space. Assume $\\partial X=\\mathcal{F} X$. Then ${\\rm Int}(X)\\mathrel{\\mathop:}= X\\setminus \\partial X$ is strongly convex, i.e.\\ any geodesic joining points in ${\\rm Int}(X)$ does not intersect $\\partial X$.\n\\end{corollary}\n\n \n\\begin{proof}\n For a constant speed geodesic $\\gamma:[0,1]\\to X$ joining two points in ${\\rm Int}(X)$, if $\\gamma\\cap \\partial X\\neq \\varnothing$, then there exists $t_0\\in (0,1)$ such that $t_0=\\sup\\{t: \\gamma([0,t))\\cap \\partial X=\\varnothing\\}$ and $\\gamma (t_0)\\in \\partial X$, since $\\partial X$ is closed. Note that $\\gamma(t_0)$ is an interior point of $\\gamma$ and that for every $t\\in (0,t_0)$ no tangent cone at $\\gamma(t)$ has boundary. Now the stability of absence of boundary \\cite[Theorem 1.6]{BNS20} under pmGH convergence and the H\\\"older continuity of tangent cones along the interior of a geodesic yield that no tangent cone at $\\gamma(t_0)$ has boundary; this contradicts $\\gamma(t_0)\\in \\partial X=\\mathcal{F}X$.\n\\end{proof}\n\n\n\n\t\\begin{corollary}\\label{cor:loc-total-geo}\n\t\tIn the setting of Theorem \\ref{thm:main2}, with the extra assumption that $\\mathcal{F}X=\\partial X$, any (minimizing) geodesic in $(\\bar\\Omega,\\mathsf{d}_\\Omega)$ joining two points in $\\Omega$ is a local geodesic in $(X,\\mathsf{d}_X)$, hence $\\Omega$ is locally totally geodesic.\n\t\\end{corollary}\n\n\n\nIf we consider only a noncollapsed Ricci limit space with boundary $(X,\\mathsf{d},\\mathfrak{m})$, i.e. 
the pmGH limit of $n$-dimensional manifolds with convex boundary and uniform Ricci curvature lower bound in the interior and uniform volume lower bound of ball of radius $1$ centered at points chosen in the pmGH convergence, then $\\mathcal{F}X=\\partial X$ is already verified \\cite[Theorem 7.8]{BNS20}, naturally we have:\n\n\\begin{corollary}\nLet $(X,\\mathsf{d},\\mathfrak{m})$ be a noncollapsed Ricci limit space with boundary, then its interior $X\\setminus\\partial X$ is strongly convex.\n\\end{corollary}\n\nDue to the lack of a notion of intrinsic boundary for collapsed spaces we have been discussing noncollapsed spaces only. Without stratification of singular set, De Philippis-Gigli definition's cannot be applied, and Kapovitch-Mondino's definition also fails to provide the correct definition of boundary, as the metric horn example by Cheeger-Colding \\cite[Example 8.77]{Cheeger-Colding97I} shows a collapsed Ricci limit space can have an interior cusp at which the tangent cone is a half line. Nevertheless we conjecture that Han's Theorem \\ref{thm:han} holds in much larger generality, that is, a subspace in a $\\ncRCD(K,N)$ ambient space along with some reference measure satisfying $\\RCD(K,\\infty)$ condition should still enjoy the property that geodesics in the intrinsic metric joining points in the interior remains away from boundary, and the reference measure gives measure $0$ to the topological boundary. This would provide a partial converse ( different from local-to-global theorem) to the well-known global-to-local theorem for $\\RCD(K,\\infty)$ spaces from \\cite[Theorem 6.18]{AGS14a}: \n\n \\begin{theorem}\n Let $Y$ be a weakly geodesically convex closed subset of an $\\RCD(K,\\infty)$ space $(X, \\mathsf{d},\\mathfrak{m})$ such that $\\mathfrak{m}(Y)>0$ and $\\mathfrak{m}(\\partial_{\\rm top}Y)=0$. Then $(Y,\\mathsf{d}_Y, \\mathfrak{m}\\mathbin{\\vrule height 1.6ex depth 0pt width 0.13ex\\vrule height 0.13ex depth 0pt width 1.3ex} Y)$ is also an $\\RCD(K,\\infty)$ space.\n \\end{theorem}\n\nMore precisely, we conjecture that\n\n\\begin{conjecture}\\label{conj:collapseconv}\n Let $\\Omega$ be an open subset in a $\\ncRCD(K,N)$ space $(X,\\mathsf{d}_X, \\mathcal{H}^N)$, where $N$ is a positive integer, so that for some Radon measure $\\mu$ with $\\supp \\mu=\\bar\\Omega$, $(\\bar{\\Omega}, \\mathsf{d}_{\\Omega},\\mu)$ is an $\\RCD(K,\\infty)$ space. Assume that $\\partial_{\\rm top} \\Omega$ is $\\mathcal{H}^{N-1}$-rectifiable, then $\\mu\\mathbin{\\vrule height 1.6ex depth 0pt width 0.13ex\\vrule height 0.13ex depth 0pt width 1.3ex} \\Omega\\ll \\mathcal{H}^N\\mathbin{\\vrule height 1.6ex depth 0pt width 0.13ex\\vrule height 0.13ex depth 0pt width 1.3ex}\\Omega$ and $\\mu(\\partial_{\\rm top} \\Omega)=0$ and every geodesic joining two points in $\\Omega$ w.r.t. $\\mathsf{d}_\\Omega$ does not intersect $\\partial_{\\rm top}\\bar\\Omega$ hence a local geodesic w.r.t. $\\mathsf{d}_X${, in particular, $\\Omega$ is locally totally geodesic}.\n\\end{conjecture}\n\n\n\n\n\\section{ Almost convexity}\\label{sec:almost}\n\n\n\n\\subsection{1-D localization}\nWe minimally collect the elements of the localization technique introduced in \\cite{Cav14} and \\cite{CavMon15}, we remark that this technique is available for a much general class of metric measure spaces, the so called essentially non-branching ${\\rm MCP}(K,N)$ spaces, which contains essentially non-branching $\\CD(K,N)$ spaces, hence $\\RCD(K,N)$ spaces (\\cite{RajalaSturm12}). 
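\n\nTo fix ideas before the formal framework is recalled, consider the model case $X=\\R^2$ with the Euclidean distance, $\\mathfrak{m}$ the Lebesgue measure and $u(x_1,x_2)=x_1$ (this toy example is added purely as an illustration and is not taken from the references above). The pairs realizing $u(x)-u(y)=\\mathsf{d}(x,y)$ are exactly pairs of points on the same horizontal line with $x_1\\ge y_1$, so the transport rays are the horizontal lines; the quotient set can be identified with the $x_2$-axis, and the disintegration of the Lebesgue measure along these rays reduces to Fubini's theorem, with densities $h_{\\alpha}\\equiv 1$ on each ray.\n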
\n\nLet $(X,\\mathsf{d},\\mathfrak{m})$ be an $\\RCD(K,N)$ space, $u$ be a $1$-Lipschitz function. Define the transport set induced by $u$ as:\n\\[\n\\Gamma(u)\\mathrel{\\mathop:}=\\{(x,y)\\in X\\times X: u(x)-u(y)=\\mathsf{d}(x,y)\\},\n\\]\nand its transpose as $\\Gamma^{-1}(u)\\mathrel{\\mathop:}= \\{(x,y)\\in X\\times X: (y,x)\\in \\Gamma(u)\\}$. The union $R_u\\mathrel{\\mathop:}= \\Gamma^{-1}(u)\\cup \\Gamma(u)$ defines a relation on $X$. By excluding negligible isolated and branching points, one can find a transport set $\\mathcal{T}_u$ such that $\\mathfrak{m}(X\\setminus\\mathcal{T}_u)=0$ and $R_u$ restricted to $\\mathcal{T}_u$ is an equivalence relation. So there is a partition of $\\mathcal{T}_u:=\\cup_{\\alpha\\in Q} X_{\\alpha}$, where $Q$ is a set of indices, denote by $\\mathfrak{Q}:\\mathcal{T}_u\\to Q$ the quotient map. In \\cite[Proposition 5.2]{Cav14}, it is shown that there exists a measurable selection $s:\\mathcal{T}_u\\to \\mathcal{T}_u$ such that if $x R_u y$ then $s(x)=s(y)$, so we can identify $Q$ as $s(\\mathcal{T}_u)\\subset X$. Equip $Q$ with the $\\sigma$-algebra induced by $\\mathfrak{Q}$ and the measure $\\mathfrak{q}\\mathrel{\\mathop:}= \\mathfrak{Q}_{\\sharp}(\\mathfrak{m}\\mathbin{\\vrule height 1.6ex depth 0pt width 0.13ex\\vrule height 0.13ex depth 0pt width 1.3ex}\\mathcal{T}_u)$, we can hence view $\\mathfrak{q}$ as a Borel measure on $X$. Furthermore, each $X_{\\alpha}$ is shown (\\cite[Lemma 3.1]{CavMon15}) be to isometric to an interval $I_{\\alpha}$, the distance preserving map $\\gamma_{\\alpha}: I_{\\alpha}\\to X_{\\alpha}$ extend to an geodesic still denoted by $\\gamma_{\\alpha}:\\bar{I}_{\\alpha}\\to X$. Putting several results together, we have (\\cite[Theorem A.5]{KapMon19}):\n\n\\begin{theorem}\\label{thm:disint}\n Let $(X,\\mathsf{d},\\mathfrak{m})$ be an $\\RCD(K,N)$ space. $u$ be a $1$-Lipschitz function. Then $\\mathfrak{m}$ admits a disintegration:\n \\[\n \\mathfrak{m}=\\int_{Q}\\mathfrak{m}_{\\alpha}\\mathfrak{q}(\\dd\\alpha),\n \\]\n where $\\mathfrak{m}_{\\alpha}$ is a non-negative Radon measure on $X$, such that \n \\begin{enumerate}\n \\item For any $\\mathfrak{m}$-measurable set $B$, the map $\\alpha\\mapsto \\mathfrak{m}_{\\alpha}(B)$ is $\\mathfrak{q}$-measurable.\n \\item\\label{item:strcons} for $\\mathfrak{q}$-a.e.\\ $\\alpha\\in Q$, $\\mathfrak{m}_{\\alpha}$ is concentrated on $X_{\\alpha}=\\mathfrak{Q}^{-1}(\\alpha)$. This property is called strong consistency of the disintegration.\n \\item \\label{item:disint} for any $\\mathfrak{m}$-measurable set $B$ and $\\mathfrak{q}$-measurable set $C$, it holds\n \\[\n \\mathfrak{m}(B\\cap \\mathfrak{Q}^{-1}(C))=\\int_C \\mathfrak{m}_{\\alpha}(B)\\mathfrak{q}(\\dd\\alpha).\n \\]\n \\item\\label{item:pos} for $\\mathfrak{q}$-a.e.\\ $\\alpha\\in Q$, $\\mathfrak{m}_{\\alpha}=h_{\\alpha}\\mathcal{H}^1\\mathbin{\\vrule height 1.6ex depth 0pt width 0.13ex\\vrule height 0.13ex depth 0pt width 1.3ex} X_{\\alpha}\\ll\\mathcal{H}^1\\mathbin{\\vrule height 1.6ex depth 0pt width 0.13ex\\vrule height 0.13ex depth 0pt width 1.3ex} X_{\\alpha}$, where $h_{\\alpha}$ is a $\\log$ concave density, and $(\\bar{X}_{\\alpha}, \\mathsf{d},\\mathfrak{m}_{\\alpha})$ is an $\\RCD(K,N)$ space.\n \\end{enumerate}\n\\end{theorem}\n\n\n\n\n\\subsection{ Proof of Proposition \\ref{thm:almostconvex} and Consequences}\n\n\\begin{proof}[Proof of Proposition \\ref{thm:almostconvex}]\nTake $x\\in X$, disintegrate $\\mathfrak{m}$ w.r.t $\\mathsf{d}_x\\mathrel{\\mathop:}= \\mathsf{d}(x,\\cdot)$. 
Item \\ref{item:disint} in Theorem \\ref{thm:disint} yields that \n\\begin{equation}\n \\begin{split}\n 0=\\mathfrak{m}(X\\setminus \\mathcal{R}_n)=\\int_Q \\mathfrak{m}_\\alpha(X\\setminus \\mathcal{R}_n)\\mathfrak{q}(\\dd\\alpha).\n \\end{split}\n\\end{equation}\nThen for $\\mathfrak{q}$-a.e.\\ $\\alpha\\in Q$, $\\mathfrak{m}_\\alpha(X\\setminus \\mathcal{R}_n)=0$, we set $\\widetilde{Q}\\mathrel{\\mathop:}= \\{\\alpha\\in Q: \\mathfrak{m}_{\\alpha}(X\\setminus \\mathcal{R}_n)=0 \\}$, then $R_x\\mathrel{\\mathop:}= (\\cup_{\\alpha\\in \\widetilde{Q}}X_{\\alpha})\\cap \\mathcal{R}_n$ is the desired set. Indeed, for any $y\\in R_x$, there is a geodesic (segment) $\\gamma$ contained in $X_\\alpha$ joining $x,y$, for some $\\alpha\\in \\widetilde{Q}$, with $\\mathfrak{m}_{\\alpha}(\\gamma\\setminus \\mathcal{R}_n)=0$. the $\\log$-concavity of $h_{\\alpha}$ implies that $h_{\\alpha}$ is $\\mathcal{H}^1$ a.e.\\ positive on $X_{\\alpha}$, so we get that $\\mathcal{H}^1\\mathbin{\\vrule height 1.6ex depth 0pt width 0.13ex\\vrule height 0.13ex depth 0pt width 1.3ex} X_\\alpha(\\gamma\\setminus \\mathcal{R}_n)=0$, which in turn implies that regular points of essential dimension is dense in the interior of $\\gamma$. Now apply Proposition \\ref{prop:dentofull}, we see that the interior of $\\gamma$ is entirely in $\\mathcal{R}_n$ and the end point $y$ is also in $\\mathcal{R}_n$. \n\\end{proof}\n\n\n\n{ Since $\\mathcal R_n\\subset {\\rm Int}(X)$ Proposition \\ref{thm:almostconvex} immediately implies almost convexity of ${\\rm Int}(X) = X\\setminus \\partial X$. }\n\n\n\\begin{corollary}\nLet $(X,\\mathsf{d},\\mathcal{H}^N)$ be a $\\ncRCD(K,N)$ space. For \\textit{every} $x\\in {\\rm Int}(X)$\n there exists a subset $R_x\\subset {\\rm Int}(X)$ so that $\\mathfrak{m}(X\\setminus R_x)=0$ and for any $y\\in R_x$ there is a minimizing geodesic joining $x,y$ and entirely contained in ${\\rm Int}(X)$. \n\n\\end{corollary}\n\n\n\n \n We then naturally obtain the following corollary.\n\n\\begin{corollary}\nIn the setting of Theorem \\ref{thm:main2}, for every point $x\\in \\Omega$, there exists a set $\\mathcal{R}_x\\subset\\Omega$ such that $\\mathcal{H}^N(\\bar\\Omega\\setminus\\mathcal{R}_x)=0$ and for every $y\\in\\mathcal{R}_x$, there is a minimizing geodesic in $(\\bar\\Omega,\\mathsf{d}_\\Omega)$ joining $x,y$ lies entirely in $\\Omega$, hence a local geodesic in $(X,\\mathsf{d}_X)$, {i.e., $\\Omega$ is almost locally totally geodesic}. \n\\end{corollary}\n\n\\bibliographystyle{alpha}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Remark on thermodynamic consistency}\n\\label{sec:Constitutive_functions_Process_consistency}\n\nIn Section \\ref{sec:Curing_model_general_framework} the thermodynamic consistency of the general modelling framework has been considered but not finally proved since specific constitutive functions had not been set up to that point. The evaluation of the two remaining conditions \\eqref{eq:dissi_qdot} and \\eqref{eq:dissi_zdot} is discussed in this section. To this end, constitutive assumptions presented in Sections \\ref{sec:Constitutive_functions_Degree_of_Cure} - \\ref{sec:Constitutive_functions_Process_dependency} are employed.\n\nFirstly, inequality~\\eqref{eq:dissi_zdot} is considered. To prove this condition, the partial derivative of the isochoric part of the free energy function $\\hat{\\psi}_G$ with respect to the intrinsic time scale $z$ has to be calculated. 
Since only the viscoelastic parts $\\hat{\\psi}_{ve,k}$ include the dependency on $z$ (cf. Eq.~\\eqref{eq:psi_G}), it remains to show that\n\\begin{equation}\n\\label{eq:therm_cons_psi_ve}\n - \\sum_{k=1}^{N_k} \\dfrac{\\partial \\hat{\\psi}_{ve,k}}{\\partial z} \\ge 0 \\ .\n\\end{equation}\nFurthermore, it is assumed that not only the sum of all Maxwell elements but also every single Maxwell element meets the consistency condition. Thus, it is sufficient to show that\n\\begin{equation}\n\\label{eq:therm_cons_psi_ve_k}\n - \\dfrac{\\partial \\hat{\\psi}_{ve,k}}{\\partial z} \\ge 0\n\\end{equation}\nholds. According to \\cite{Haupt_Lion_2002} or \\cite{Lion_Kardelky_2004}, the thermodynamic consistency of the ansatz \\eqref{eq:psi_visc_single} is met, if the conditions \n\\begin{equation}\n\\label{eq:therm_cons_psi_ve_conditions}\n G_k(t) \\ge 0 \\ ,\n \\qquad\n \\dfrac{\\rm d}{{\\rm d}t} G_k(t) \\le 0 \\ ,\n \\qquad\n \\dfrac{\\rm d^2}{{\\rm d}t^2} G_k(t) \\ge 0\n\\end{equation}\nhold. Obviously, these conditions are satisfied by the relaxation function \\eqref{eq:kernel}.\n\nNext, the remaining condition \\eqref{eq:dissi_qdot} is considered. Since inequality \\eqref{eq:dissi_qdot} cannot be evaluated in a general form, an estimation under consideration of some physically reasonable assumptions is employed. In the first step, the term $\\partial \\hat{\\psi}_{\\theta C}(\\theta,q) \/ \\partial q $ of Eq.~\\eqref{eq:dissi_qdot} is estimated. Here, an ansatz for the thermochemical part of the specific enthalpy per unit mass is introduced \\cite{Kolmeder_Etal_2011,Lion_Yagimli_2008}\n\\begin{equation}\n\\label{eq:ansatz_enthalpy}\n h_{\\theta C}(\\theta,q) = h_{fluid}(\\theta) \\, (1-q) + h_{solid}(\\theta) \\, q \\ .\n\\end{equation}\nThe functions $h_{fluid}(\\theta)$ and $h_{solid}(\\theta)$ are the specific enthalpy per unit mass of the uncured and fully cured material, respectively. Note that the general ansatz \\eqref{eq:ansatz_enthalpy} depends on the temperature $\\theta$ and the degree of cure $q$. Specific models for the consideration of temperature dependent behaviour have been introduced in \\cite{Kolmeder_Etal_2011} and \\cite{Lion_Yagimli_2008}. However, for the estimation conducted in this section this is omitted and the values for $h_{fluid}$ and $h_{solid}$ are assumed to be constant. \n\nThe next step is to calculate the thermochemical free energy $\\psi_{\\theta C}$ from $h_{\\theta C}$. This can be accomplished by approaches presented in \\cite{Lion_Yagimli_2008} or \\cite{Mahnken_2013}. However, here an alternative formulation of this calculation step is used as follows. Firstly, the Legendre transformation \n\\begin{equation}\n\\label{eq:legendre}\n \\psi + \\theta \\, \\eta = h + \\dfrac{1}{\\JI\\STAPEL\\varrho!^\\SLtilde\\!} \\ I_1(\\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde\\cdot \\Ten2 \\gamma) \n\\end{equation}\nis employed which relates the free energy and the enthalpy (see, for example, \\cite{Lion_Yagimli_2008,Lubarda_2004}). Therein, $\\eta$ is the specific entropy per unit mass and $\\Ten2 \\gamma = (\\STAPEL C!_\\SLstrich!_\\SLstrich-\\STAPEL I!_\\SLstrich!_\\SLstrich)\/2$ is the Green-Lagrange strain tensor. Next it is assumed, that DSC experiments take place at zero mechanical stresses \\cite{Lion_Yagimli_2008}. Thus, the last term of \\eqref{eq:legendre} is neglected. 
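\n\nAs a purely illustrative aside to conditions \\eqref{eq:therm_cons_psi_ve_conditions}, these requirements can also be checked numerically. The short sketch below is not part of the constitutive model; it assumes, as a hypothetical stand-in for the relaxation function of Eq.~\\eqref{eq:kernel}, a single exponential (Maxwell-type) term with arbitrarily chosen parameters.\n\\begin{verbatim}\nimport numpy as np\n\n# assumed Maxwell-type relaxation function G(t) = G0 * exp(-t * inv_tau)\nG0 = 5.0         # hypothetical stiffness parameter\ninv_tau = 0.5    # hypothetical inverse relaxation time\nt = np.linspace(0.0, 20.0, 2001)\nG = G0 * np.exp(-t * inv_tau)\n\ndG = np.gradient(G, t)     # first time derivative\nddG = np.gradient(dG, t)   # second time derivative\n\nprint(np.all(G >= 0.0))        # G(t) nonnegative\nprint(np.all(dG <= 1e-12))     # monotonically decreasing\nprint(np.all(ddG >= -1e-12))   # convex\n\\end{verbatim}\nAll three checks return \\texttt{True} for any positive choice of the assumed parameters, in line with the statement above.\n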
Furthermore, the constitutive relation $\\eta = - \\partial \\psi \/ \\partial \\theta$ at constant stress state is employed and the resulting equation is formulated with respect to the thermochemical potentials $\\psi_{\\theta C}$ and $h_{\\theta C}$. This yields the reduced relation \n\\begin{equation}\n\\label{eq:potentials_dgl}\n \\psi_{\\theta C}(\\theta,q) - \\theta \\, \\dfrac{\\partial \\psi_{\\theta C}(\\theta,q)}{\\partial \\theta} = h_{\\theta C}(\\theta,q) \\ .\n\\end{equation}\nEq.~\\eqref{eq:potentials_dgl} is a differential equation which has to be solved for $\\psi_{\\theta C}$. Its solution reads as\n\\begin{equation}\n\\label{eq:potentials_dgl_solu}\n \\psi_{\\theta C}(\\theta,q) = C \\, \\dfrac{\\theta}{\\theta_0} - \\theta \\, \\int \\dfrac{1}{\\theta^2}\\,h_{\\theta C}(\\theta,q) \\, {\\rm d}\\theta \\ .\n\\end{equation}\nHere, $C$ is an integration constant that does not need to be determined in our evaluation. Next, this general solution is applied to the ansatz for the thermochemical enthalpy which has been introduced in Eq.~\\eqref{eq:ansatz_enthalpy}. This yields a specific model for the thermochemical free energy \n\\begin{equation}\n\\label{eq:potentials_dgl_solu_specific}\n \\psi_{\\theta C}(\\theta,q) = C \\, \\dfrac{\\theta}{\\theta_0} + h_{fluid}\\, (1-q) + h_{solid} \\, q \\ .\n\\end{equation}\nBased on this solution, the term $\\partial \\hat{\\psi}_{\\theta C}(\\theta,q) \/ \\partial q $ of Eq. \\eqref{eq:dissi_qdot} is calculated by\n\\begin{equation}\n\\label{eq:dpsi_dq_solu}\n \\dfrac{\\partial \\hat{\\psi}_{\\theta C}(\\theta,q)}{\\partial q} \n = \\JI\\STAPEL\\varrho!^\\SLtilde\\! \\ \\dfrac{\\partial \\psi_{\\theta C}(\\theta,q)}{\\partial q} \n = \\JI\\STAPEL\\varrho!^\\SLtilde\\! \\ (h_{solid} - h_{fluid}) \\ .\n\\end{equation}\nTo quantify this expression, the maximum specific reaction enthalpy per unit mass $\\Delta h$ of a complete curing experiment has to be taken into account. This quantity has been measured by DSC experiments (see \\cite{Kolmeder_Etal_2011,Lion_Yagimli_2008} for detailed description) and can be related to the model \\eqref{eq:ansatz_enthalpy} by the relation\n\\begin{equation}\n\\label{eq:hsolid_hfluid}\n \\Delta h = h(\\theta,q=1) - h(\\theta,q=0) = h_{solid} - h_{fluid} \\ .\n\\end{equation}\nHere, a value of $\\Delta h \\approx - 300 \\, \\rm J\/g$ has been identified. Furthermore, taking into account the mass density $\\JI\\STAPEL\\varrho!^\\SLtilde\\! \\approx 1.1 \\, \\rm g\/cm^3$, the first term of Eq.~\\eqref{eq:dissi_qdot} is estimated by \n\\begin{equation}\n\\label{eq:dpsidq_value}\n\\partial \\hat{\\psi}_{\\theta C}(\\theta,q) \/ \\partial q = -330 \\ \\rm MPa \\ .\n\\end{equation}\n\nNext, the second term in inequality \\eqref{eq:dissi_qdot} is examined. Therein, the chemical shrinkage parameter $\\beta_q$ can be identified by the help of Eq.~\\eqref{eq:phi_thetaC}. Here, the relation\n\\begin{equation}\n\\label{eq:calc_betaq}\n \\dfrac{1}{J_{\\theta C}}\\,\n \\dfrac{\\partial J_{\\theta C}}{\\partial q} \\,\n = \\beta_q \n\\end{equation}\nholds. Furthermore, a relation to the hydrostatic pressure $p$ is obtained by evaluation of\n\\begin{equation}\n\\label{eq:calc_pressure}\n \\dfrac{\\partial \\hat{\\psi}_V}{\\partial J_M} \\, J_M= - J p\\ ,\n \\quad\n p = - \\dfrac{1}{3} \\, \\dfrac{1}{J} \\, I_1(\\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde\\cdot\\STAPEL C!_\\SLstrich!_\\SLstrich) \\ .\n\\end{equation}\n\nFinally, inequality \\eqref{eq:dissi_qdot} can be evaluated. 
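\n\nBefore this evaluation is carried out, the intermediate result \\eqref{eq:potentials_dgl_solu_specific} can be verified independently. The following sketch (a purely illustrative check assuming the SymPy library; it is not part of the model) confirms symbolically that the specific free energy ansatz satisfies the defining differential equation \\eqref{eq:potentials_dgl} together with the enthalpy ansatz \\eqref{eq:ansatz_enthalpy}.\n\\begin{verbatim}\nimport sympy as sp\n\ntheta, theta0, q, C, h_fluid, h_solid = sp.symbols(\n    'theta theta0 q C h_fluid h_solid', positive=True)\n\n# enthalpy ansatz with constant h_fluid, h_solid, cf. Eq. (ansatz_enthalpy)\nh = h_fluid * (1 - q) + h_solid * q\n\n# proposed solution, cf. Eq. (potentials_dgl_solu_specific)\npsi = C * theta \/ theta0 + h_fluid * (1 - q) + h_solid * q\n\n# residual of Eq. (potentials_dgl): psi - theta * d(psi)\/d(theta) - h\nresidual = sp.simplify(psi - theta * sp.diff(psi, theta) - h)\nprint(residual)   # prints 0\n\\end{verbatim}\nThe residual vanishes identically, confirming that \\eqref{eq:potentials_dgl_solu_specific} indeed solves \\eqref{eq:potentials_dgl}.\n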
To this end, expressions \\eqref{eq:calc_betaq} and \\eqref{eq:calc_pressure} are substituted into Eq. \\eqref{eq:dissi_qdot} which yields\n\\begin{equation}\n\\label{eq:reformulate_cond}\n - \\dfrac{\\partial \\hat{\\psi}_{\\theta C}(\\theta,q)}{\\partial q}\n - \\beta_q\\,J\\,p \\ \\ge 0 \\ .\n\\end{equation}\nMoreover, the estimation \\eqref{eq:dpsidq_value} and the chemical shrinkage parameter $\\beta_q=-0.05$ (cf. Section \\ref{sec:Constitutive_functions_Volume}) are inserted in \\eqref{eq:reformulate_cond}, and the resulting inequality is resolved for the expression $J\\,p$. This finally yields the condition\n\\begin{equation}\n\\label{eq:pressure_cond}\n J\\,p \\ \\ge - 6600 \\ {\\rm MPa} \\ .\n\\end{equation}\nSince $J>0$ holds in general, it can be concluded from \\eqref{eq:pressure_cond} that a hydrostatic pressure with $p>0$ does not endanger the thermodynamic consistency. However, if the material is loaded in hydrostatic tension ($p<0$), the condition \\eqref{eq:pressure_cond} may be violated. If a constant volume is assumed ($J=1$), a hydrostatic tension of $p = -6600 \\, \\rm MPa$ would be necessary to violate thermodynamic consistency. Nevertheless, this value seems to be unrealistic to be achieved in real experiments. Thus, the thermodynamic consistency can be proved for the case of physically reasonable conditions (see also \\cite{Lion_Hoefer_2007,Lion_Yagimli_2008,Mahnken_2013}).\n\n\\section{Introduction}\n\\label{sec:Introduction}\n\nMotivated by the desire for continuous improvement, industrial countries constantly aim to develop innovative concepts and products of highest standards. Within this context, lightweight construction and smart structures are doubtless crucial keywords nowadays. Within the last decade, a number of new developments based on lightweight concepts were successfully established in nearly all fields of engineering \\cite{Wiedemann_Sinapius_2013}. One very challenging aspect for the implementation of such new concepts is the joining technology. Thereby, adhesives are given an important role because they join the most diverse materials, not only locally but also as full-surface bonding \\cite{messler_2004}. In the research field of smart structures, high importance is awarded to piezoceramic patches as they combine static structures with actuator and sensor functionality \\cite{Prasad_Etal_2005}. In that context, the Piezoceramic Fibre Composites (PFC) were shown to be the most promising technology. More precisely, the Macro Fibre Composite (MFC) is the most sophisticated device yet invented \\cite{Lloyd_2004}. \n\nDespite excellent properties of piezoceramic patches, the state of art is their application to the fabricated parts only after manufacturing which leads to a time and cost intense procedure \\cite{Neugebauer_Etal_2010_ProdEng}. Scientific fundamentals for an economic production of active structural components are worked out in the Collaborative Research Center\/Transregio \"PT-PIESA\". One of the pursued concepts, that is considered in this paper, is the joining of sheet metal lightweight construction and piezo elements with structural adhesives to smart Piezo Metal Composites (PMC) by an innovative ma\\-nu\\-fac\\-tu\\-ring approach. The basic idea is to merge the steps of forming and piezo application into one process such that they are no longer separated \\cite{Neugebauer_Etal_2010_ProdEng,Neugebauer_Etal_2013_WGP}. 
A schematic representation of the approach is depicted in Fig.~\\ref{fig:pic_Principle-PMC}.\n\n\\begin{figure}[ht]\n\t\\centering\n\t\\includegraphics[width=0.45\\textwidth]{Fig1}\n\t\\caption{Schematic illustration of the piezo metal composite (top) and manufacturing process (bottom)}\n\t\\label{fig:pic_Principle-PMC}\n\\end{figure}\n\nFirstly, the MFC is entirely surrounded by a structural adhesive, placed inside two light metal sheets, where one of them is the sheet intended to be formed and the other is a local covering sheet, see Fig.~\\ref{fig:pic_Principle-PMC} (top). A specific distance between both metal sheets is adjusted by the help of spacers. Next, the sandwich structure is formed to its final shape while the adhesive is not yet cured. During this stage, the MFC is protected from excessively high loads by a floating support. After the forming process, the adhesive cures to a solid and thereby provides a material closure between the MFC and the light metal structure in the formed state. The principal feasibility of this method could already be demonstrated in earlier studies (see, for instance, \\cite{Drossel_Etal_2009_CIRP,Neugebauer_Etal_2010_ProdEng,Neugebauer_Etal_2013_WGP}). \n\nWithin the PMC, an essential role is given to the adhesive layer. Beside the impact of the adhesive's specific material behaviour, its geometrical design (i.e. the layer thickness) is of great importance. If, on the one hand, the adhesive layer is very thin, the protective function during forming vanishes. On the other hand, if the adhesive layer is too thick, risk of overloads due to volume shrinkage processes increases. Moreover, secondary deformations of the PMC might occur and a thick adhesive layer may lead to loss of the electric field in the piezoceramic due to the additional capacity between actuator and structure \\cite{Seemann_Sattel_1999}. \n\nFirst studies on the influence of the adhesive during forming have been conducted by Neugebauer \\etal \\cite{Neugebauer_Etal_2013_WGP}. However, curing of the adhesive has not been taken into account so far. Thus, the aim of this study is to set up a simulation tool which enables the simulation of curing phenomena in adhesives and to investigate the impact of the curing process on formed PMCs and more precisely on the embedded MFC. One essential part of this work is to provide a phenomenological model which is capable of representing the material behaviour of the adhesive during cure and in the fully cured state. The main characteristics of this model are\n\\begin{itemize}\n \\item the description of the progress of the chemical process during cure,\n \\item the modelling of dependencies of mechanical properties during the curing process as well as at different temperatures, and\n \\item the prediction of volume changes which are caused by chemical shrinkage and heat expansion phenomena.\n\\end{itemize}\nHere, different modelling approaches have been presented before (see, for example, \\cite{Hossain_Etal_2009,Hossain_Etal_2010,Klinge_Etal_2012,Kolmeder_Etal_2011,Liebl_Etal_2012,Lion_Hoefer_2007,Mahnken_2013}). The basic structure of those models is similar. Beside the application to different specific materials, one further basic difference is their employed mechanical submodel. 
For example models of finite strain elasticity \\cite{Hossain_Etal_2009}, finite strain viscoelasticity \\cite{Hossain_Etal_2010,Klinge_Etal_2012,Kolmeder_Etal_2011,Lion_Hoefer_2007,Mahnken_2013} and viscoplasticity at small \\cite{Liebl_Etal_2012} and finite strains \\cite{Landgraf_Ihlemann_2011} have been used. In this paper, a general modelling approach which includes the main characteristics of the Lion and H\\\"ofer model \\cite{Lion_Hoefer_2007} is presented (see Section \\ref{sec:Curing_model}). However, it is formulated in a more general way. Especially, different mechanical submodels can be incorporated to represent the mechanical behaviour during curing. \n\nIn Section~\\ref{sec:Curing_model_constitutive_functions}, a particular model is introduced which is able to capture curing phenomena of one specific two component epoxy based adhesive. To this end, appropriate constitutive material functions are chosen and the thermodynamic consistency is evaluated. Within this specification, the mechanical behaviour is represented by a combination of models of finite strain pseudo-elasticity and viscoelasticity. Furthermore, changes in volume due to heat expansion and chemical shrinkage processes are taken into account.\n\nThe second part of this paper deals with different aspects of the finite element implementation (see Section~\\ref{sec:FEM_implementation}). The numerical integration of constitutive equations as well as the derivation of appropriate stress and material tangent measures for the implementation into the finite element software \\textit{ANSYS}$^{\\rm TM}$ are described. Moreover, a new algorithm is presented, which addresses numerical difficulties that arise due to thermal and chemically related volume changes. The constitutive functions for the representation of heat expansion and chemical shrinkage processes are introduced with respect to specific reference values for the temperature and a degree of cure, which is an internal variable representing the progress of the curing process. If initial values for both variables differ from previously defined reference values, an immediate volume change would be computed which may lead to instant mesh distortion. The new algorithm calculates a correction and thus keeps the initial volume constant for arbitrary initial values.\n\nFinally, the material model is applied to the simulation of curing processes in bonded PMCs which is described in Section \\ref{sec:Finite_element_simulation}. Here, a finite element model of a deep drawn cup geometry is employed in a simplified manner such that only the part directly surrounding the MFC is modelled. To obtain a realistic forming simulation, the geometry of the final formed model relies on data which has been extracted from comprehensive simulations presented by Neugebauer \\etal \\cite{Neugebauer_Etal_2013_WGP}. This simplified approach allows for reduction of computational efforts related to complicated forming simulations and makes it possible to concentrate on phenomena which accompany the curing of the adhesive. An analysis of the strains in the MFC will highlight the benefits of the new process chain of manufacturing described above. \n\n\n\n\n\n\\section{Constitutive modelling of curing phenomena in polymers}\n\\label{sec:Curing_model}\n\nFor the mathematical representation of the phenomenological model presented in this paper, a coordinate free tensor formalism according to Ihlemann \\cite{Ihlemann_2006} is used. 
Thereby, the rank of a tensor is denoted by the number of its underlines. To exemplify, $\\Ten2 X$ and $\\STAPEL K!_\\vrule width \\SLeffbreite height.4pt!_\\vrule width \\SLeffbreite height.4pt!_\\vrule width \\SLeffbreite height.4pt!_\\vrule width \\SLeffbreite height.4pt$ are second- and fourth-rank tensors, respectively. Furthermore, the following general notations are used throughout this article:\n\\begin{itemize}\n \\item second-rank identity tensor: $\\STAPEL I!_\\SLstrich!_\\SLstrich$, \\\\[-2mm]\n \\item first and third principle invariant: $I_1(\\Ten2 X)$ , $I_3(\\Ten2 X)$, \\footnote{ The first principle invariant equals the trace operator of the Cartesian coordinates $X_{ab}$, thus $I_1(\\Ten2 X) = {\\rm trace}[X_{ab}]$. Accordingly, the third principle invariant can be derived by the determinant, thus $I_3(\\Ten2 X) = {\\rm det}[X_{ab}]$}\\\\[-2mm]\n \\item deviatoric part of a tensor: $\\Ten2 X' = \\Ten2 X - \\frac{1}{3}\\,I_1(\\Ten2 X) \\, \\STAPEL I!_\\SLstrich!_\\SLstrich$,\\\\[-2mm]\n \\item unimodular part of a tensor: $\\Ten2 X!^\\vrule width \\SLeffbreite height.4pt = I_3(\\Ten2 X)^{-1\/3} \\, \\Ten2 X$,\\\\[-2mm]\n \\item inverse and transpose of a tensor: $\\Ten2 X^{\\minus 1}$ and $\\Ten2 X^T$,\\\\[-2mm]\n \\item material time derivative: $\\frac{\\rm d}{{\\rm d}t}\\Ten2 X = \\Ten2 X!^\\SLdreieck$.\n\\end{itemize}\n\nA further tensor operation is introduced as follows. Assume two arbitrary second rank tensors $\\Ten2 X$ and $\\Ten2 Y$ and a symmetric second rank tensor $\\Ten2 Z = \\Ten2 Z^T$. Based on these, a tensor operation denoted by superscript $S_{24}$ is defined by\n\\begin{equation}\n\\label{eq:S24}\n \\left( \\Ten2 X \\otimes \\Ten2 Y\\right)^{S_{24}} \\mathbin{\\mathord{\\cdot}\\mathord{\\cdot}} \\Ten2 Z\n = \\dfrac{1}{2} \\left( \\Ten2 X \\cdot \\Ten2 Z \\cdot \\Ten2 Y + \\Ten2 Y^T \\cdot \\Ten2 Z \\cdot \\Ten2 X^T \\right) \\, .\n\\end{equation} \nIn the following, the kinematics and constitutive assumptions of the general modelling approach are presented. \n\n\\subsection{Kinematics}\n\\label{sec:Kinematics}\nThe phenomenological model for the representation of adhesive's curing is built up within the framework of nonlinear continuum mechanics using the deformation gradient $\\STAPEL F!_\\SLstrich!_\\SLstrich$ for the description of the underlying kinematics. The corresponding right Cauchy-Green tensor $\\STAPEL C!_\\SLstrich!_\\SLstrich$ is defined by\n\\begin{equation}\n\\label{eq:rCG}\n \\STAPEL C!_\\SLstrich!_\\SLstrich = \\STAPEL F!_\\SLstrich!_\\SLstrich^T\\cdot\\STAPEL F!_\\SLstrich!_\\SLstrich \\ .\n\\end{equation}\nFurthermore, the total volume ratio is abbreviated by $J = I_3(\\STAPEL F!_\\SLstrich!_\\SLstrich) = {\\rm d} V \/{\\rm d}\\STAPEL V!^\\SLtilde $. 
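\n\nAs a purely illustrative aid (not part of the model itself), the tensor notation collected above can be translated directly into matrix algebra. The following sketch assumes NumPy and hypothetical $3\\times 3$ coordinate matrices; it encodes the deviatoric part, the unimodular part and the operation \\eqref{eq:S24}, and verifies three elementary properties.\n\\begin{verbatim}\nimport numpy as np\n\ndef dev(X):\n    # deviatoric part: X - 1\/3 I1(X) I\n    return X - np.trace(X) \/ 3.0 * np.eye(3)\n\ndef unimodular(X):\n    # unimodular part: I3(X)^(-1\/3) X\n    return np.linalg.det(X) ** (-1.0 \/ 3.0) * X\n\ndef s24_apply(X, Y, Z):\n    # action of (X otimes Y)^{S_24} on a symmetric Z, cf. Eq. (S24)\n    return 0.5 * (X @ Z @ Y + Y.T @ Z @ X.T)\n\nrng = np.random.default_rng(0)\nA = np.eye(3) + 0.2 * rng.random((3, 3))   # hypothetical tensors with positive determinant\nB = np.eye(3) + 0.2 * rng.random((3, 3))\nZ = rng.random((3, 3)); Z = 0.5 * (Z + Z.T)   # symmetric second-rank tensor\n\nprint(np.isclose(np.trace(dev(A)), 0.0))               # deviator is trace-free\nprint(np.isclose(np.linalg.det(unimodular(A)), 1.0))   # unimodular part has unit determinant\nR = s24_apply(A, B, Z)\nprint(np.allclose(R, R.T))                             # result is symmetric for symmetric Z\n\\end{verbatim}\nAll checks return \\texttt{True}; in particular, for a symmetric argument the result of \\eqref{eq:S24} is symmetric again.\n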
To capture different sources of deformation, the deformation gradient gets multiplicatively decomposed as depicted in Fig.~\\ref{fig:defgrad}.\n\n\\begin{figure}[ht]\n\t\\centering\n\t\\includegraphics[width=0.45\\textwidth]{Fig2}\n\t\\caption{Multiplicative decomposition of the deformation gradient}\n\t\\label{fig:defgrad}\n\\end{figure}\nFirstly, $\\STAPEL F!_\\SLstrich!_\\SLstrich$ gets decomposed into a thermochemical part $\\STAPEL F!_\\SLstrich!_\\SLstrich_{\\theta C}$ and a mechanical part $\\STAPEL F!_\\SLstrich!_\\SLstrich_M$ by\n\\begin{equation}\n\\label{eq:split_mech_thermochem}\n \\STAPEL F!_\\SLstrich!_\\SLstrich = \\STAPEL F!_\\SLstrich!_\\SLstrich_{M}\\cdot\\STAPEL F!_\\SLstrich!_\\SLstrich_{\\theta C} \\ .\n\\end{equation}\nThe thermochemical part is related to chemical shrinkage and heat expansion phenomena which are assumed to be isotropic. Thus, $\\STAPEL F!_\\SLstrich!_\\SLstrich_{\\theta C}$ is an isotropic tensor \n\\begin{equation}\n\\label{eq:FthetaC}\n \\STAPEL F!_\\SLstrich!_\\SLstrich_{\\theta C} = J_{\\theta C}^{\\,1\/3} \\ \\STAPEL I!_\\SLstrich!_\\SLstrich \n \\ , \\quad\n J_{\\theta C} = \\varphi_{\\theta C}(\\theta,q) \\ ,\n\\end{equation}\nwhere $J_{\\theta C} = I_3(\\STAPEL F!_\\SLstrich!_\\SLstrich_{\\theta C}) = {\\rm d} V_{\\theta C}\/{\\rm d}\\STAPEL V!^\\SLtilde$ is the scalar valued volume ratio which denotes the pure thermochemical volume change. This volume ratio is constituted by a function $\\varphi_{\\theta C}(\\theta,q)$ which depends on the thermodynamic temperature $\\theta$ and a variable $q$ referred to as degree of cure. A specific ansatz for $\\varphi_{\\theta C}$ is provided in Section \\ref{sec:Constitutive_functions_Volume}. The mechanical part of the deformation gradient $\\STAPEL F!_\\SLstrich!_\\SLstrich_M$ as well as its corresponding right Cauchy-Green tensor $\\STAPEL C!_\\SLstrich!_\\SLstrich_M$ are calculated by substituting Eq.~\\eqref{eq:FthetaC}$_1$ into \\eqref{eq:split_mech_thermochem} which yields\n\\begin{equation}\n\\label{eq:FM_CM}\n \\STAPEL F!_\\SLstrich!_\\SLstrich_M = J_{\\theta C}^{\\,-1\/3} \\STAPEL F!_\\SLstrich!_\\SLstrich \\ , \\quad\n \\STAPEL C!_\\SLstrich!_\\SLstrich_M = \\STAPEL F!_\\SLstrich!_\\SLstrich_M^T\\cdot\\STAPEL F!_\\SLstrich!_\\SLstrich_M = J_{\\theta C}^{\\,-2\/3} \\STAPEL C!_\\SLstrich!_\\SLstrich \\ .\n\\end{equation}\nNext, the mechanical deformation gradient $\\STAPEL F!_\\SLstrich!_\\SLstrich_{M}$ is multiplicatively decomposed into $\\STAPEL F!_\\SLstrich!_\\SLstrich_V$ representing pure mechanical volume changes and a remaining isochoric (i.e. volume-preserving) part $\\STAPEL F!_\\SLstrich!_\\SLstrich!^\\SLstrich$:\n\\begin{equation}\n\\label{eq:split_vol_isochor}\n \\STAPEL F!_\\SLstrich!_\\SLstrich_M = \\STAPEL F!_\\SLstrich!_\\SLstrich!^\\SLstrich\\cdot\\STAPEL F!_\\SLstrich!_\\SLstrich_{V} \\ ,\n \\quad \\STAPEL F!_\\SLstrich!_\\SLstrich_V = J_M^{\\,1\/3} \\ \\STAPEL I!_\\SLstrich!_\\SLstrich \\ .\n\\end{equation}\nTherein, $J_M = I_3(\\STAPEL F!_\\SLstrich!_\\SLstrich_M) = {\\rm d}\\widehat{V} \/{\\rm d}V_{\\theta C}$ is the mechanical volume ratio. Substituting \\eqref{eq:split_vol_isochor}$_2$ into \\eqref{eq:split_vol_isochor}$_1$ yields the isochoric deformation gradient \n\\begin{equation}\n\\label{eq:Fg}\n \\STAPEL F!_\\SLstrich!_\\SLstrich!^\\SLstrich = J_M^{-1\/3} \\ \\STAPEL F!_\\SLstrich!_\\SLstrich_M = J^{\\,-1\/3} \\ \\STAPEL F!_\\SLstrich!_\\SLstrich \\ ,\n\\end{equation}\nwhich exhibits the property $I_3(\\STAPEL F!_\\SLstrich!_\\SLstrich!^\\SLstrich) = 1$. 
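\n\nThe decomposition chain introduced above lends itself to a brief numerical illustration before the corresponding strain measures are formed. The following sketch is purely illustrative: it assumes NumPy, and both the total deformation gradient and the thermochemical volume ratio are arbitrarily chosen numbers rather than model output. It verifies the relations \\eqref{eq:split_mech_thermochem}, \\eqref{eq:FM_CM} and \\eqref{eq:Fg}.\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(1)\nF = np.eye(3) + 0.1 * rng.random((3, 3))   # assumed total deformation gradient\nJ_thetaC = 0.97                            # assumed thermochemical volume ratio\n\nF_thetaC = J_thetaC ** (1.0 \/ 3.0) * np.eye(3)   # Eq. (FthetaC)\nF_M = J_thetaC ** (-1.0 \/ 3.0) * F               # Eq. (FM_CM)\nJ_M = np.linalg.det(F_M)\nF_V = J_M ** (1.0 \/ 3.0) * np.eye(3)             # Eq. (split_vol_isochor)\nF_bar = J_M ** (-1.0 \/ 3.0) * F_M                # Eq. (Fg), isochoric part\n\nprint(np.allclose(F, F_M @ F_thetaC))                 # F = F_M . F_thetaC\nprint(np.allclose(F_M, F_bar @ F_V))                  # F_M = F_bar . F_V\nprint(np.isclose(np.linalg.det(F_bar), 1.0))          # I3(F_bar) = 1\nprint(np.isclose(np.linalg.det(F), J_M * J_thetaC))   # J = J_M * J_thetaC\n\\end{verbatim}\nAll four checks return \\texttt{True}, reflecting that the three factors recover the total deformation gradient and that the isochoric part is indeed volume-preserving.\n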
Its corresponding isochoric right Cauchy-Green tensor is calculated by\n\\begin{equation}\n\\label{eq:Cg}\n \\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLstrich = \\STAPEL F!_\\SLstrich!_\\SLstrich!^\\SLstrich^T\\cdot\\STAPEL F!_\\SLstrich!_\\SLstrich!^\\SLstrich = J_M^{\\,-2\/3} \\ \\STAPEL C!_\\SLstrich!_\\SLstrich_M= J^{-2\/3} \\ \\STAPEL C!_\\SLstrich!_\\SLstrich \\ .\n\\end{equation}\n\nAt this point, all necessary aspects of the underlying kinematics have been introduced. However, for subsequent evaluations, time derivatives of different kinematic quantities have to be calculated as well. In the following, the most important relations will be summarized. \n\nThe material time derivative of the mechanical right Cauchy-Green tensor $\\STAPEL C!_\\SLstrich!_\\SLstrich_M$ (cf. eq. \\eqref{eq:FM_CM}$_2$) is given by\n\\begin{equation}\n\\label{eq:CMdot}\n\\begin{array}{lcl}\n \\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLdreieck_M \n = \\dfrac{\\rm d}{{\\rm d}t} \\Big[ J_{\\theta C}^{\\,-2\/3} \\ \\STAPEL C!_\\SLstrich!_\\SLstrich\\Big] \\\\[3mm]\n \\phantom{\\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLdreieck_M}\n = J_{\\theta C}^{\\,-2\/3} \n \\left\\{ \\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLdreieck \n - \\dfrac{2}{3} \\dfrac{1}{J_{\\theta C}} \n \\left(\n \\dfrac{\\partial J_{\\theta C}}{\\partial \\theta} \\dot{\\theta}\n +\\dfrac{\\partial J_{\\theta C}}{\\partial q} \\dot{q}\n \\right)\\STAPEL C!_\\SLstrich!_\\SLstrich\n \\right\\}.\n\\end{array} \n\\end{equation}\nMoreover, the rate of the mechanical volume ratio $J_M$ equals\n\\begin{equation}\n\\label{eq:JMdot}\n\\begin{array}{lcl}\n \\STAPEL J!^{\\vbox{\\hbox{$\\displaystyle.$}\\vskip.03cm}}_M \n \\, = \\, \\dfrac{\\rm d}{{\\rm d}t} \\bigg[ \\sqrt{I_3(\\STAPEL C!_\\SLstrich!_\\SLstrich_M)}\\bigg] \n \\, = \\, \\dfrac{1}{2} \\, J_M \\, \\STAPEL C!_\\SLstrich!_\\SLstrich_M^{\\minus 1} \\mathbin{\\mathord{\\cdot}\\mathord{\\cdot}} \\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLdreieck_M \\ .\n\\end{array}\n\\end{equation}\nThe material time derivative of $\\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLstrich$ (see eq. \\eqref{eq:Cg}) can be expressed by\n\\begin{equation}\n\\label{eq:Cgdot}\n\\begin{array}{lcl}\n \\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLstrich!^\\SLdreieck \n \\, = \\, \\dfrac{\\rm d}{{\\rm d}t}\\Big[J_M^{-2\/3}J_{\\theta C}^{-2\/3} \\ \\STAPEL C!_\\SLstrich!_\\SLstrich \\Big]\n \\, = \\, \\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLstrich\\cdot \\left( \\STAPEL C!_\\SLstrich!_\\SLstrich_M^{\\minus 1} \\cdot \\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLdreieck_M\\right)' \\ .\n\\end{array}\n\\end{equation}\n\n\\subsection{General modelling framework}\n\\label{sec:Curing_model_general_framework}\n\nIn this section, a general modelling framework is introduced which defines the basic structure of the adhesives material model. To obtain a thermodynamically consist model, the second law of thermodynamics in form of the Clausius-Duhem inequality is considered. In Lagrangian representation it reads as follows\n\\begin{equation}\n\\label{eq:CDU}\n \\dfrac{1}{2} \\, \\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde \\mathbin{\\mathord{\\cdot}\\mathord{\\cdot}} \\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLdreieck \n - \\JI\\STAPEL\\varrho!^\\SLtilde\\! \\dot{\\psi} - \\JI\\STAPEL\\varrho!^\\SLtilde\\! 
\\, \\eta \\, \\dot{\\theta} - \\dfrac{1}{\\theta} \\STAPEL q!_\\SLstrich!^\\SLtilde \\cdot \\STAPEL \\nabla!^\\SLtilde!_{\\hspace{0.5ex}\\SLstrich} \\theta \\ge 0 \\ .\n\\end{equation}\nTherein, the first term is the stress power per unit volume, $\\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde$ is the $2^{\\rm nd}$ Piola-Kirchhoff stress tensor and $\\psi$ and $\\eta$ are the Helmholtz free energy and the entropy, respectively, per unit mass. Furthermore, $\\JI\\STAPEL\\varrho!^\\SLtilde\\!$ is the mass density and $\\STAPEL q!_\\SLstrich!^\\SLtilde$ is the heat flux vector, both defined on the reference configuration. The expression $\\STAPEL \\nabla!^\\SLtilde!_{\\hspace{0.5ex}\\SLstrich} \\theta$ denotes the temperature gradient with respect to the reference configuration.\n\nTo specify the general structure of the adhesive's material model, an ansatz for the Helmholtz free energy function $\\hat{\\psi} = \\JI\\STAPEL\\varrho!^\\SLtilde\\! \\psi$ per unit volume is introduced. It is additively decomposed into three parts according to\n\\begin{equation}\n\\label{eq:free_energy_allg}\n \\hat{\\psi} \n = \\hat{\\psi}_{G}\\Big(\\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLstrich,\\theta,z\\Big) \n + \\hat{\\psi}_V\\Big(J_M\\Big)\n + \\hat{\\psi}_{\\theta C}\\Big(\\theta,q\\Big)\\ .\n\\end{equation}\nTherein, $\\hat{\\psi}_{G}$ represents the stored energy as a result of isochoric deformations described by $\\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLstrich$. Furthermore, this part depends on the current temperature~$\\theta$ and on an additional material function $z$ which reflects some process dependencies.\\footnote{ The isochoric part of the free energy may be extended by additional internal variables. This would be necessary if, for example, models of multiplicative viscoelasticity or viscoplasticity are employed (cf. \\cite{Landgraf_Ihlemann_2011}).} The second contribution of Eq. \\eqref{eq:free_energy_allg} describes the material response due to pure mechanical volume changes and only depends on the volume ratio $J_M$ of the mechanical deformation. The remaining part of Eq.~\\eqref{eq:free_energy_allg} defines the thermochemically stored energy $\\hat{\\psi}_{\\theta C} = \\hat{\\psi}_{\\theta C}(\\theta,q)$ of the material which is a function of the temperature $\\theta$ and the degree of cure $q$. It is attributed to an amount of energy, which is initially stored in the material and which gets released due to the exothermic chemical process during curing. Furthermore, it describes the energy storage related to varying temperatures.\n\nIn Eq.~\\eqref{eq:free_energy_allg}, the variables $q$ and $z$ are treated as internal variables. Thus, they are prescribed by evolution equations which are defined in general form by \n\\begin{equation}\n\\label{eq:qdot}\n \\dot{q} = f_q(q,\\theta,t) \\ge 0 \\ , \\quad q(t=0) = q_0 \\ ,\n\\end{equation}\n\\begin{equation}\n\\label{eq:zdot}\n \\dot{z} = f_z(q,\\theta,t) \\ge 0 \\ , \\quad z(t=0) = z_0 \\ .\n\\end{equation}\nTherein, $q_0$ and $z_0$ are appropriate initial conditions. It can be seen that both variables are monotonically increasing. More details and specific constitutive functions will be provided in Section \\ref{sec:Curing_model_constitutive_functions}. \n\nTo evaluate the ansatz \\eqref{eq:free_energy_allg} within the Clausius-Duhem inequality \\eqref{eq:CDU}, the rate of the Helmholtz free energy function $\\hat{\\psi}$ has to be calculated. 
Taking into account all dependencies of Eq.~\\eqref{eq:free_energy_allg}, its rate reads as \n\\begin{equation}\n\\label{eq:free_energy_derivative}\n\\begin{array}{lcl}\n \\dot{\\hat{\\psi}} \n &=& \\left[ \\dfrac{\\partial \\hat{\\psi}_{G}}{\\partial \\theta}\n +\\dfrac{\\partial \\hat{\\psi}_{\\theta C}}{\\partial \\theta}\n \\right] \\, \\dot{\\theta}\n \\, + \\, \\dfrac{\\partial \\hat{\\psi}_{G}}{\\partial \\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLstrich} \\mathbin{\\mathord{\\cdot}\\mathord{\\cdot}} \\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLstrich!^\\SLdreieck \\ + \n \\\\[4mm]\n & & \\ \\\n \\, + \\, \\dfrac{\\partial \\hat{\\psi}_{V}}{\\partial J_M} \\dot{J}_M\n \\, + \\, \\dfrac{\\partial \\hat{\\psi}_{\\theta C}}{\\partial q} \\, \\dot{q}\n \\, + \\, \\dfrac{\\partial \\hat{\\psi}_{G}}{\\partial z} \\dot{z} \\ .\n\\end{array}\n\\end{equation}\nA substitution of expressions \\eqref{eq:CMdot}~-~\\eqref{eq:Cgdot} and \\eqref{eq:free_energy_derivative} into the Clausius-Duhem inequality \\eqref{eq:CDU} yields the dissipation inequality\n\\begin{equation}\n\\label{eq:dissip_ineq}\n\\begin{array}{rcl}\n \\left\\{ \n \\dfrac{1}{2} \\, \\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde \n - \\left[ \\dfrac{\\partial \\hat{\\psi}_{G}}{\\partial \\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLstrich}\\cdot \\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLstrich\\right]'\\cdot\\STAPEL C!_\\SLstrich!_\\SLstrich^{\\minus 1} + \\dfrac{J_M}{2} \\,\\dfrac{\\partial \\hat{\\psi}_V}{\\partial J_M}\\,\\STAPEL C!_\\SLstrich!_\\SLstrich^{\\minus 1} \n \\right\\} \\mathbin{\\mathord{\\cdot}\\mathord{\\cdot}} \\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLdreieck \\ + \\ \\ \\\\[5mm]\n - \\left\\{\n \\JI\\STAPEL\\varrho!^\\SLtilde\\! \\, \\eta \n + \\dfrac{\\partial \\hat{\\psi}_{\\theta C}}{\\partial \\theta} \n + \\dfrac{\\partial \\hat{\\psi}_{G}}{\\partial \\theta} \n - \\left( \\dfrac{J_M}{J_{\\theta C}}\\,\n \\dfrac{\\partial J_{\\theta C}}{\\partial \\theta}\\,\n \\dfrac{\\partial \\hat{\\psi}_V}{\\partial J_M}\n \\right) \\right\\}\\, \\dot{\\theta} \\ + \\ \\ \\\\[3mm]\n - \\left\\{\n \\dfrac{\\partial \\hat{\\psi}_{\\theta C}}{\\partial q} \n - \\left( \\dfrac{J_M}{J_{\\theta C}}\\,\n \\dfrac{\\partial J_{\\theta C}}{\\partial q}\\,\n \\dfrac{\\partial \\hat{\\psi}_V}{\\partial J_M}\n \\right) \\right\\}\\, \\dot{q} \\ + \\ \\ \\\\[3mm]\n - \\dfrac{\\partial \\hat{\\psi}_{G}}{\\partial z} \\dot{z}\n - \\dfrac{1}{\\theta} \\, \\STAPEL q!_\\SLstrich!^\\SLtilde \\cdot \\STAPEL \\nabla!^\\SLtilde!_{\\hspace{0.5ex}\\SLstrich} \\theta \\ge 0 \\ ,\n\\end{array}\n\\end{equation}\nwhich has to be satisfied for arbitrary thermomechanical processes. Following the standard methods for the evaluation of \\eqref{eq:dissip_ineq} (cf. \\cite{Haupt_2002}), it is firstly stated that the terms in brackets in front of the $\\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLdreieck$ and $\\dot{\\theta}$ have to be zero. This yields the potential relations for the $2^{\\rm nd}$ Piola-Kirchhoff stress tensor\n\\begin{equation}\n\\label{eq:Ttil_allg}\n \\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde \n = 2\\,\\left[ \\dfrac{\\partial \\hat{\\psi}_{G}}{\\partial \\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLstrich}\\cdot \\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLstrich\\right]'\\cdot\\STAPEL C!_\\SLstrich!_\\SLstrich^{\\minus 1} + J_M\\,\\dfrac{\\partial \\hat{\\psi}_V}{\\partial J_M}\\,\\STAPEL C!_\\SLstrich!_\\SLstrich^{\\minus 1} \\ ,\n\\end{equation}\nand the entropy \n\\begin{equation}\n\\label{eq:eta_allg}\n \\JI\\STAPEL\\varrho!^\\SLtilde\\! 
\\, \\eta \n = - \\dfrac{\\partial \\hat{\\psi}_{\\theta C}}{\\partial \\theta} \n - \\dfrac{\\partial \\hat{\\psi}_{G}}{\\partial \\theta} \n + \\left( \\dfrac{1}{J_{\\theta C}}\\,\n \\dfrac{\\partial J_{\\theta C}}{\\partial \\theta}\\,\n \\dfrac{\\partial \\hat{\\psi}_V}{\\partial J_M}\n J_M\n \\right) \\ .\n\\end{equation} \n\nNext, it is assumed that each of the remaining terms of inequality \\eqref{eq:dissip_ineq} has to be non-negative, which is a sufficient but not a necessary condition. The non-negativity of the last term of inequality \\eqref{eq:dissip_ineq} is ensured by Fourier's law. Formulated on the reference configuration, it reads as\n\\begin{equation}\n\\label{eq:Fourier}\n \\STAPEL q!_\\SLstrich!^\\SLtilde = - \\kappa \\, J \\, \\STAPEL C!_\\SLstrich!_\\SLstrich^{\\minus 1} \\cdot \\STAPEL \\nabla!^\\SLtilde!_{\\hspace{0.5ex}\\SLstrich} \\theta\\ .\n\\end{equation} \nHere, $\\kappa \\ge 0$ is the thermal conductivity. Furthermore, taking into account the properties $\\dot{q} \\ge 0$ and $\\dot{z} \\ge 0$ (see Eqs. \\eqref{eq:qdot} and \\eqref{eq:zdot}), the final two restrictions read as\n\\begin{equation}\n\\label{eq:dissi_qdot}\n - \\left\\{\n \\dfrac{\\partial \\hat{\\psi}_{\\theta C}}{\\partial q} \n - \\left( \\dfrac{1}{J_{\\theta C}}\\,\n \\dfrac{\\partial J_{\\theta C}}{\\partial q}\\,\n \\dfrac{\\partial \\hat{\\psi}_V}{\\partial J_M}\n J_M\n \\right) \\right\\} \\ \\ge 0 \\ ,\n\\end{equation} \n\\begin{equation}\n\\label{eq:dissi_zdot}\n - \\dfrac{\\partial \\hat{\\psi}_{G}}{\\partial z} \\ge 0 \\ .\n\\end{equation} \nThese conditions cannot be evaluated in general form. However, for the case of the concretized material model presented in Section~\\ref{sec:Curing_model_constitutive_functions}, the thermodynamic consistency is proved (cf. Section~\\ref{sec:Constitutive_functions_Process_consistency}).\n\n\n\n\\section{Application to an epoxy based adhesive}\n\\label{sec:Curing_model_constitutive_functions}\n\nIn this section, the general modelling framework is specified to simulate the material behaviour of one specific class of adhesives. More precisely, the two-part epoxy based structural adhesive \\textit{DP410}$^{\\rm TM}$ provided by \\textit{3M Scotch-Weld}$^{\\rm TM}$ is modelled \\cite{DP410_2003}. The adhesive is prepared by mixing two paste-like components. Afterwards, the mixture cures to a solid without any further initiation. In particular, curing takes place at room temperature such that no heating is necessary. The fully cured material can be applied within a temperature range of $-55^\\circ C$ to $80^\\circ C$ and the glass transition temperature is about $50^\\circ C$. Furthermore, the mass density is approximately $1.1 \\, \\rm g\/cm^3$ \\cite{DP410_2003}. In the following, different aspects of the model specifications are addressed.\n\n\\subsection{Degree of cure}\n\\label{sec:Constitutive_functions_Degree_of_Cure}\n\nFirst of all, the curing process is examined in more detail. In analogy to the procedures described in \\cite{Kolmeder_Lion_2010} and \\cite{Lion_Yagimli_2008}, the curing process has been measured by Differential Scanning Calorimetry (DSC) experiments. Thus, it is assumed that the curing process can be completely determined by the exothermic heat released during the chemical reaction. According to Halley and Mackay \\cite{Halley_Mackay_1996}, different phenomenological models can be applied to simulate the curing processes of epoxy based materials. 
In this work, the so called $n$-th-order model (model of reaction order $n$) is employed. In view of Eq.~\\eqref{eq:qdot}, this specific ansatz is expressed by\n\\begin{equation}\n\\label{eq:q_ansatz}\n \\dot{q} = f_q(q,\\theta,t) = K_1(\\theta) \\cdot\\Big(1-q\\Big)^n \\cdot f_D(q,\\theta) \\ .\n\\end{equation}\nTherein, $n$ is a constant material parameter and $K_1(\\theta)$ is a temperature dependent thermal activation function which is constituted by the Arrhenius ansatz (cf. \\cite{Halley_Mackay_1996})\n\\begin{equation}\n\\label{eq:q_Kfunc}\n K_1(\\theta) = K_{10} \\, {\\rm exp}\\left[-\\frac{E_1}{R\\,\\theta}\\right] \\ ,\n\\end{equation}\nwhere $K_{10}$ and $E_1$ are constant material parameters and \\mbox{$R = 8.3144 \\, \\rm J\/(mol\\,K)$} is the universal gas constant. To account for diffusion controlled curing, which takes place at temperatures below the glass transition temperature, the ansatz \\eqref{eq:q_ansatz} includes an empirical diffusion factor $f_D(q,\\theta)$ which, according to Fournier \\etal \\cite{Fournier_Etal_1996}, reads as\n\\begin{equation}\n\\label{eq:q_diffusion}\n f_D(q,\\theta) = \\dfrac{2}{1+{\\rm exp}\\left[\\frac{q-q_{end}(\\theta)}{b}\\right]}-1 \\ .\n\\end{equation}\nHere, $b$ is another constant material parameter and $q_{end}$ is the maximum degree of cure, which can be attained at a certain temperature $\\theta$. To evaluate the maximum attainable degree of cure, typically the DiBenedetto equation is adopted \\cite{DiBenedetto_1987,Kolmeder_Lion_2010,Pascault_Williams_1990}:\n\\begin{equation}\n\\label{eq:q_diBenedetto}\n \\dfrac{T_g(q)-T_{g,0}}{T_{g,1}-T_{g,0}} = \\dfrac{\\lambda \\, q}{1- (1-\\lambda) \\, q} \\ .\n\\end{equation}\nTherein, $T_g(q)$ is the glass transition temperature as a function of the degree of cure and $T_{g,0}$ and $T_{g,1}$ are the glass transition temperatures at degree of cure $q=0$ and $q=1$, respectively. Furthermore, $\\lambda$ is a constant material parameter. In order to calculate the maximum attainable degree of cure at certain isothermal curing temperatures, Eq.~\\eqref{eq:q_diBenedetto} has to be solved for $q$ as follows\n\\begin{equation}\n\\label{eq:q_qend}\n q_{end}(\\theta) = \\dfrac{f_T(\\theta)}{f_T(\\theta)-\\lambda \\, f_T(\\theta) + \\lambda} \\ .\n\\end{equation}\nHere, an abbreviation $f_T(\\theta)$ has been introduced \n\\begin{equation}\n\\label{eq:q_fTheta}\n f_T(\\theta) = \\dfrac{\\theta + \\Delta T - T_{g,0}}{T_{g,1}-T_{g,0}} \\ .\n\\end{equation}\nIn Eq.~\\eqref{eq:q_fTheta} the assumption $T_g(q) = \\theta + \\Delta T$ has been employed. Therein, $\\Delta T$ denotes the difference between the glass transition temperature $T_g(q)$ attainable at specific isothermal curing temperatures, and the curing temperature~$\\theta$ itself.\n\nThe material parameters of the model \\eqref{eq:q_ansatz} - \\eqref{eq:q_fTheta} have been identified using the DSC measurements. The corresponding values are listed in Table~\\ref{tab:MatPar_Cure}. Moreover, the phenomenological behaviour of this model is depicted in Fig. 
\\ref{fig:curing} for different temperatures.\n\n\\begin{table}[ht]\n \\centering\n \\caption{Material parameters for Eqs.~\\eqref{eq:q_ansatz} - \\eqref{eq:q_fTheta}} \n {\\begin{tabular}{p{1.6cm}p{2.0cm}p{1.6cm}p{1.4cm}}\n \\hline & & & \\\\[-3mm]\n parameter & value & parameter & value\\\\ \n \\hline & & & \\\\[-3mm]\n $\\ \\ $ $K_{10}$ & $1.608\\cdot10^{10} \\rm$ \n & $\\ \\ $ $T_{g,1}$ & $324.85 \\ \\rm K$ \\\\\n $\\ \\ $ $E_1$ & $79835 \\ \\rm J\/mol$ \n & $\\ \\ $ $T_{g,0}$ & $234.35 \\ \\rm K$ \\\\ \n $\\ \\ $ $b$ & $0.057$ \n & $\\ \\ $ $\\Delta T$ & $11 \\ \\rm K$ \\\\\n $\\ \\ $ $n$ & $1.217$ \n & $\\ \\ $ $\\lambda$ & $1.7$ \\\\ \n \\hline\n \\end{tabular}}\n \\label{tab:MatPar_Cure}\n\\end{table} \n\n\\begin{figure}[ht]\n\t\\centering\n \\includegraphics[width=0.45\\textwidth]{Fig3}\n\t\\caption{Evolution of the degree of cure $q$ for different temperatures $\\theta$}\n\t\\label{fig:curing}\n\\end{figure}\n\n\\subsection{Heat expansion and chemical shrinkage}\n\\label{sec:Constitutive_functions_Volume}\n\nNext, the thermochemical volume change due to chemical shrinkage and heat expansion processes is specified. To this end, an idealized model with the following ansatz has been chosen:\n\\begin{equation}\n\\label{eq:phi_thetaC}\n \\varphi_{\\theta C}(\\theta, q) = {\\rm exp}\\left[ \\alpha_\\theta \\, \\big(\\,\\theta - \\STAPEL \\theta!^\\SLtilde\\,\\big) + \\beta_q \\, q\\right] \\ .\n\\end{equation}\nTherein, $\\alpha_\\theta$ is a volumetric heat expansion coefficient and $\\beta_q$ is the maximum volumetric chemical shrinkage. According to first measurement results, the material parameters have been set to the values listed in Table~\\ref{tab:MatPar_Volume}.\n\n\\begin{table}[ht]\n \\centering\n \\caption{Material parameters for Eq.~\\eqref{eq:phi_thetaC}} \n {\\begin{tabular}{p{1.6cm}p{2.0cm}p{1.6cm}p{1.4cm}}\n \\hline & & & \\\\[-3mm]\n parameter & value & parameter & value\\\\\n \\hline & & & \\\\[-3mm]\n $\\ \\ $ $\\STAPEL \\theta!^\\SLtilde$ & $295 \\ \\rm K$ \n & & \\\\\n $\\ \\ $ $\\alpha_\\theta$ & $5\\cdot10^{-4} \\ \\rm K^{\\minus 1}$ \n & $\\ \\ $ $\\beta_q$ & $-0.05$ \\\\[1mm]\n \\hline\n \\end{tabular}}\n \\label{tab:MatPar_Volume}\n\\end{table} \n\n\\subsection{Free energy and stresses}\n\\label{sec:Constitutive_functions_Free_Energy}\n\nTo complete the curing model, the mechanical parts of the free energy function \\eqref{eq:free_energy_allg} and thus the corresponding stress strain relationships have to be specified. Firstly, the mechanical response due to isochoric deformations is considered. It is described by the free energy contribution $\\hat{\\psi}_{G}(\\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLstrich,\\theta,z)$ in Eq.~\\eqref{eq:free_energy_allg}. This part is modelled by a combination of a finite strain pseudo-elasticity with temperature and degree of cure dependent stiffness and a sum of multiple Maxwell elements, each including process dependencies described by the material function $z(t)$ (cf. Eq.~\\eqref{eq:zdot}). Fig.~\\ref{fig:rheolog_model} illustrates this model by means of a one-dimensional rheological representation. 
\n\n\\begin{figure}[ht]\n\t\\centering\n \\includegraphics[width=0.25\\textwidth]{Fig4}\n\t\\caption{Rheological model of a nonlinear spring connected in parallel to multiple Maxwell elements}\n\t\\label{fig:rheolog_model}\n\\end{figure}\nThe corresponding free energy $\\hat{\\psi}_{G}$ is constituted as a sum of contributions related to a pseudo-elastic part $\\hat{\\psi}_{el}$ and $N_k$ Maxwell elements, each denoted by $\\hat{\\psi}_{ve,k}$:\n\\begin{equation}\n\\label{eq:psi_G}\n \\hat{\\psi}_{G}(\\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLstrich,\\theta,z)\n = \\hat{\\psi}_{el}\\Big(\\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLstrich,\\theta\\Big) \n + \\sum_{k=1}^{N_k} \\hat{\\psi}_{ve,k}\\Big(\\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLstrich,z\\Big) \\ .\n\\end{equation}\nThe pseudo-elastic part is modelled by an ansatz proposed by Lion and Johlitz \\cite{Lion_Johlitz_2012}. It takes the form\n\\begin{equation}\n\\label{eq:psi_ela}\n\\begin{array}{ll}\n \\hat{\\psi}_{el}\\Big(\\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLstrich,\\theta\\Big) = \\Ten2 Q \\mathbin{\\mathord{\\cdot}\\mathord{\\cdot}}\\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLstrich , \n\\end{array}\n\\end{equation}\nwhere the tensor $\\Ten2 Q$ is given by\n\\begin{equation}\n\\label{eq:psi_ela_II}\n\\begin{array}{ll}\n \\displaystyle\n \\Ten2 Q = - \\int\\limits_{-\\infty}^{t}2\\,c_{10}\\Big(\\theta(t),q(s)\\Big)\\,\\Bigg(\\dfrac{{\\rm d}}{{\\rm d}s}\\STAPEL C!_\\SLstrich!_\\SLstrich!^\\vrule width \\SLeffbreite height.4pt^{\\minus 1}(s)\\Bigg)\\ {\\rm d} s\\ .\n\\end{array}\n\\end{equation}\nTherein, a stiffness function $c_{10}(\\theta,q)$ is introduced. It takes into account the dependency on the curing history described by the degree of cure $q(s)$ ($s$ is the integration variable). Furthermore, a dependency on the current temperature $\\theta(t)$ is included. The stiffness function exhibits the properties\n\\begin{equation}\n\\label{eq:psi_ela_stiffness}\n c_{10}\\Big(\\theta,q\\Big)\n \\ge 0 \\ , \\quad\n \\dfrac{\\partial }{\\partial q} \\, c_{10}\\Big(\\theta,q\\Big) \\ge 0 \\ .\n\\end{equation}\nA specific ansatz will be provided in Section \\ref{sec:Constitutive_functions_Process_dependency}. The free energy $\\hat{\\psi}_{ve,k}$ of one single Maxwell element (see Eq.~\\eqref{eq:psi_G}) is modelled according to an ansatz proposed by Haupt and Lion \\cite{Haupt_Lion_2002}:\n\\begin{equation}\n\\label{eq:psi_visc_single}\n \\displaystyle\n \\hat{\\psi}_{ve,k} \n = \\left\\{ - \\int\\limits_{-\\infty}^{z}G_k(z-s)\\,\\Bigg(\\dfrac{{\\rm d}}{{\\rm d}s}\\STAPEL C!_\\SLstrich!_\\SLstrich!^\\vrule width \\SLeffbreite height.4pt^{\\minus 1}(s)\\Bigg)\\ {\\rm d} s\\right\\}\\mathbin{\\mathord{\\cdot}\\mathord{\\cdot}}\\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLstrich \\ .\n\\end{equation}\nTherein, $G_k(z-s)$ is a relaxation function which is constituted by\n\\begin{equation}\n\\label{eq:kernel}\n \\displaystyle\n G_k(z-s)\n = 2 \\, \\mu_k \\, {\\rm e}^{-\\frac{z-s}{\\tau_k}} \\ .\n\\end{equation}\nTherein, ${\\rm e}$ denotes the Euler's number and the parameters $\\mu_k$ and $\\tau_k$ are the stiffness and the relaxation time, respectively, for the $k$-th Maxwell element (cf. Fig.~\\ref{fig:rheolog_model}). Note that Eq.~\\eqref{eq:psi_visc_single} is formulated with respect to the material function $z(t)$ instead of the physical time $t$. The variable $z(t)$ is also referred to as intrinsic time scale and is governed by an evolution equation which has been introduced in general form by Eq.~\\eqref{eq:zdot}. 
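For illustration, consider a state in which $\\dot{z} = a = {\\rm const.}$ holds. In this case $z - s = a\\,(t - t')$ for two physical time instances $t > t'$, and the relaxation function \\eqref{eq:kernel} reduces to $G_k = 2\\,\\mu_k\\,{\\rm e}^{-a\\,(t - t')\/\\tau_k}$, i.e. the $k$-th Maxwell element relaxes in physical time with the effective relaxation time $\\tau_k\/a$. In this sense, the evolution of $z(t)$ rescales the relaxation spectrum according to the current process state.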
Different sources of process dependencies may be defined by choosing appropriate constitutive functions for this evolution equation. However, only temperature and degree of cure dependent behaviour is assumed in this work. A specific ansatz for Eq.~\\eqref{eq:zdot} is provided in Section~\\ref{sec:Constitutive_functions_Process_dependency}.\n\nTo complete the mechanical part of the free energy function \\eqref{eq:free_energy_allg}, the volumetric stress response described by $\\hat{\\psi}_V$ has to be constituted. This contribution is assumed to be pure elastic and is described by the ansatz\n\\begin{equation}\n\\label{eq:psi_vol}\n \\displaystyle\n \\hat{\\psi}_V = \\dfrac{K}{2} \\, \\Big(J_M - 1\\Big)^2 \\ .\n\\end{equation}\nTherein, $K>0$ is the bulk modulus. \n\nFinally, the $2^{\\rm nd}$ Piola-Kirchhoff stress tensor $\\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde$ is calculated by evaluation of Eq.~\\eqref{eq:Ttil_allg} in combination with the specific constitutive relations \\eqref{eq:psi_G} - \\eqref{eq:psi_vol}. The resulting contributions to the $2^{\\rm nd}$ Piola-Kirchhoff stress tensor are summarized in Eqs.~\\eqref{eq:stress_sum} - \\eqref{eq:stress_visc_k}. More information on the process dependent material functions $c_{10}(\\theta,q)$ and~$\\dot{z}$ and a summary of specific values for material parameters are provided in Section \\ref{sec:Constitutive_functions_Process_dependency}. \n\n\\begin{center}\n\\hrule \\footnotesize\n\\nopagebreak\n\\vspace{1ex}\n\\begin{eqnarray}\n\\omit\\rlap{\\text{Total $2^{\\rm nd}$ PK stress}} \\nonumber \\\\[1mm]\n\\quad \\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde \\ \\ \\,\\,\\, \\;\n &=& \\ \\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde_{V} +\\displaystyle \\Big(\\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde_{G}\\cdot\\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLstrich\\Big)'\\cdot\\STAPEL C!_\\SLstrich!_\\SLstrich^{\\minus 1} \\label{eq:stress_sum} \\\\[1mm]\n\\omit\\rlap{\\text{Volumetric part}} \\nonumber \\\\[1mm]\n\\quad \\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde_{V} \\,\\,\\,\\,\n &=& \\ K \\, J_M \\, (J_M - 1 ) \\, \\STAPEL C!_\\SLstrich!_\\SLstrich^{\\minus 1} \\label{eq:stress_vol} \\\\[1mm]\n\\omit\\rlap{\\text{Isochoric part}} \\nonumber \\\\[1mm]\n\\quad \\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde_{G} \\,\\,\\,\\,\n &=& \\ \\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde_{el} + \\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde_{ve} \\label{eq:stress_iso} \\\\[1mm]\n\\omit\\rlap{\\text{Pseudo-elastic part}} \\nonumber \\\\[1mm]\n\\quad \\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde_{el}\\,\\,\\,\\,\n &=& - \\int\\limits_{-\\infty}^{t}2\\,c_{10}\\Big(\\theta(t),q(s)\\Big)\\,\n \\Bigg(\\frac{{\\rm d}}{{\\rm d}s}\\STAPEL C!_\\SLstrich!_\\SLstrich!^\\vrule width \\SLeffbreite height.4pt^{\\minus 1}(s)\\Bigg)\\ {\\rm d} s \\label{eq:stress_ela} \\\\[1mm]\n\\omit\\rlap{\\text{Viscoelastic part}} \\nonumber \\\\[1mm]\n\\quad \\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde_{ve}\\,\\,\\,\n &=& \\sum_{k=1}^{N_k} \\ \\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde_{ve,k} \\label{eq:stress_visc} \\\\[1mm]\n\\quad \\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde_{ve,k}\n &=& - \\int\\limits_{-\\infty}^{z(t)}2 \\mu_k\\,{\\rm e}^{-\\frac{z(t)-s}{\\tau_k}} \n \\bigg(\\dfrac{\\rm d}{{\\rm d}s}\\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLstrich^{\\minus 1}(s)\\bigg) \\,{\\rm d}s \\label{eq:stress_visc_k} \n\\end{eqnarray}\n\\nopagebreak\n\\hrule \n\\end{center}\n\n\n\n\\subsection{Process dependencies of mechanical 
properties}\n\\label{sec:Constitutive_functions_Process_dependency}\n\nIn Eq.~\\eqref{eq:psi_ela} a stiffness parameter $c_{10}(\\theta,q)$ has been introduced which includes dependencies on the temperature $\\theta$ and the degree of cure $q$. The specific ansatz used in this paper consists of a separation of the different physical processes\n\\begin{equation}\n\\label{eq:c10_ansatz}\n c_{10}(\\theta,q) = c_{10,0}\\,f_{c\\theta}(\\theta)\\,f_{cq}(q) \\ .\n\\end{equation}\nTherein, $c_{10,0}$ is a constant stiffness parameter which equals the half shear modulus $G$ in small strain shear experiments. Furthermore, $f_{c\\theta}(\\theta)$ and $f_{cq}(q)$ are normalized functions representing the temperature and degree of cure dependencies, respectively. The normalization is accomplished in a way, such that the corresponding values of both functions range from $0$ to $1$. \n\nFor the representation of the temperature dependency of the fully cured material, the normalized function $f_{c\\theta}(\\theta)$ is constituted by the ansatz\n\\begin{equation}\n\\label{eq:c10_fTheta}\n f_{c\\theta}\\big(\\theta\\big) = \\dfrac{1}{\\pi}\\Bigg\\{{\\rm atan}\\Big[ a_{c\\theta} \\cdot\\big(\\theta - T_{g,1}\\big)\\Big] + \\dfrac{\\pi}{2}\\Bigg\\} \\ .\n\\end{equation}\nIt takes into account the major part of stiffness change near the glass transition temperature $T_{g,1}$ of the cured material (cf. Table~\\ref{tab:MatPar_Cure}). An additional material parameter $a_{c\\theta}$ enables one to adjust the specific shape of the function. Fig.~\\ref{fig:ela_temp_func} illustrates the phenomenology of Eq.~\\eqref{eq:c10_fTheta}. The chosen material parameter $a_{c\\theta}$ is listed in Table~\\ref{tab:MatPar_Ela}.\n\\begin{figure}[ht]\n \\centering\n \\includegraphics[width=0.45\\textwidth]{Fig5}\n \\caption{Temperature dependency $f_{c\\theta}(\\theta)$ of the equilibrium stiffness}\n \\label{fig:ela_temp_func}\n\\end{figure}\n\nThe second normalized function $f_{cq}(q)$ of Eq.~\\eqref{eq:c10_ansatz} represents the change in stiffness due to the curing process and thus only depends on the degree of cure. The chosen ansatz reads as\n\\begin{equation}\n\\label{eq:c10_gC}\n f_{cq}\\big(q\\big) = \\dfrac{1}{d_{cq}}\\Bigg\\{{\\rm atan}\\Big[ a_{cq} \\cdot\\big(q - b_{cq}\\big)\\Big] + c_{cq}\\Bigg\\} \\ .\n\\end{equation}\nTherein, $a_{cq}$ and $b_{cq}$ are material parameters and the variables $c_{cq}$ and $d_{cq}$ are evaluated in a way such that the conditions \\mbox{$f_{cq}(q=0) = 0$} and \\mbox{$f_{cq}(q=1)=1$} hold. An evaluation of both conditions yields the expressions\n\\begin{equation}\n\\label{eq:c10_gC_II}\n c_{cq}= - {\\rm atan}\\Big[ - a_{cq}\\cdot b_{cq} \\Big], \\quad\n\\end{equation}\n\\begin{equation}\n\\label{eq:c10_gC_III}\n d_{cq}= {\\rm atan}\\Big[ a_{cq}\\cdot (1-b_{cq})\\Big] + c_{cq} \\ .\n\\end{equation}\nThe course of the function $f_{cq}(q)$ is depicted in Fig.~\\ref{fig:ela_cure_func}. 
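\n\nFor illustration, the evaluation of the stiffness function \\eqref{eq:c10_ansatz} together with the normalization \\eqref{eq:c10_gC_II} and \\eqref{eq:c10_gC_III} may be sketched as follows. The parameter values are those of Tables~\\ref{tab:MatPar_Cure} and \\ref{tab:MatPar_Ela}; the sketch is purely illustrative and all names are chosen freely, i.e. they do not refer to the actual implementation.\n\\begin{verbatim}
import math

# illustrative parameter values as identified for the DP410 adhesive
T_G1  = 324.85     # glass transition temperature of the cured material [K]
A_CT  = -0.5       # a_{c theta} [1/K]
A_CQ  = 10.0       # a_{cq}
B_CQ  = 0.4        # b_{cq}
C10_0 = 500.0      # c_{10,0} [MPa]

def f_ctheta(theta):
    # normalized temperature dependency (atan ansatz)
    return (math.atan(A_CT * (theta - T_G1)) + math.pi / 2.0) / math.pi

def f_cq(q):
    # normalized cure dependency with f_cq(0) = 0 and f_cq(1) = 1
    c_cq = -math.atan(-A_CQ * B_CQ)
    d_cq = math.atan(A_CQ * (1.0 - B_CQ)) + c_cq
    return (math.atan(A_CQ * (q - B_CQ)) + c_cq) / d_cq

def c10(theta, q):
    # equilibrium stiffness c10(theta, q) = c10_0 * f_ctheta(theta) * f_cq(q)
    return C10_0 * f_ctheta(theta) * f_cq(q)

print(c10(296.15, 1.0))   # approx. 489 MPa for the fully cured adhesive at 23 deg C
\\end{verbatim}\n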
The material parameters $a_{cq}$ and $b_{cq}$ used for illustration are listed in Table~\\ref{tab:MatPar_Ela}.\n\n\\begin{figure}[ht]\n \\centering\n\t \\includegraphics[width=0.45\\textwidth]{Fig6}\n \\caption{Degree of cure dependency $f_{cq}(q)$ of the \n equilibrium stiffness}\n\t\\label{fig:ela_cure_func}\n\\end{figure}\n\n\\begin{table}[ht]\n \\centering\n \\caption{Material parameters for Eqs.~\\eqref{eq:stress_vol}, \\eqref{eq:stress_ela} \n and \\eqref{eq:c10_ansatz} - \\eqref{eq:c10_gC}} \n {\\begin{tabular}{p{1.6cm}p{2.0cm}p{1.6cm}p{1.4cm}}\n \\hline& & & \\\\[-3mm]\n parameter & value & parameter & value\\\\ \n \\hline& & & \\\\[-3mm]\n $\\ \\ $ $K$ & $5000 \\, \\rm MPa$ \n & $\\ \\ $ $c_{10,0}$ & $500 \\, \\rm MPa$ \\\\[1mm]\n $\\ \\ $ $a_{c\\theta}$ & $- 0.5 \\, \\rm K^{\\minus 1}$ \n & $\\ \\ $ $a_{cq}$ & $10$ \\\\[1mm]\n &\n & $\\ \\ $ $b_{cq}$ & $0.4 $ \\\\ [1mm]\n \\hline\n \\end{tabular}}\n \\label{tab:MatPar_Ela}\n\\end{table} \n\nFinally, the process dependency of the viscoelastic part of Eq.~\\eqref{eq:psi_G} is considered. A set of several Maxwell elements has been modelled to capture the materials viscoelastic behaviour. As presented in Section \\ref{sec:Constitutive_functions_Free_Energy}, the process dependency is accommodated by the intrinsic time scale $z(t)$. The following ansatz for the evolution equation \\eqref{eq:zdot} has been chosen\n\\begin{equation}\n\\label{eq:zdot_ansatz}\n \\displaystyle\n \\dot{z} \n \\,=\\, f_z(q,\\theta,t) = 10^{f_{z\\theta}(\\theta)}\\cdot10^{f_{zq}(q)} \\ .\n\\end{equation}\nIn accordance to the pseudo-elastic stiffness \\eqref{eq:c10_ansatz}, the dependencies on the temperature and the degree of cure have been separated in Eq.~\\eqref{eq:zdot_ansatz}. The constitutive equations for both functions $f_{z\\theta}(\\theta)$ and $ f_{zq}(q) $ are\n\\begin{align}\n f_{z\\theta}(\\theta) \n &= \\dfrac{a_z}{\\pi} \\, {\\rm atan}\\Big[b_z\\cdot[\\theta-T_{g,1}]\\Big]\n +\\dfrac{\\pi}{2} \\ ,\\label{eq:zdot_ansatz_function_theta}\n \\\\[2mm]\n f_{zq}(q) \n &= c_z\\cdot(1-q^{n_z}) \\ .\n\\label{eq:zdot_ansatz_function_q}\n\\end{align}\nThe material parameters belonging to the viscoelastic part of the model are listed in Table~\\ref{tab:MatPar_Visc}. \n\n\\begin{table}[ht]\n \\centering\n \\caption{Material parameters for Eqs.~\\eqref{eq:stress_visc}, \\eqref{eq:stress_visc_k}, \\eqref{eq:zdot_ansatz} - \\eqref{eq:zdot_ansatz_function_q}} \n {\\begin{tabular}{p{1.6cm}p{2.0cm}p{1.6cm}p{1.4cm}}\n \\hline & & & \\\\[-3mm]\n parameter & value & parameter & value\\\\ \n \\hline & & & \\\\[-3mm]\n $\\ \\ $ $N_k$ & $7$ \n & $\\ \\ $ $\\mu_{2-7}$ & $ 5 \\, \\rm MPa$ \\\\\n $\\ \\ $ $\\mu_{1}$ & $75 \\, \\rm MPa$ \n & $\\ \\ $ $\\tau_{2-4}$ & $10^{k-4} \\, \\rm s$ \\\\ \n $\\ \\ $ $\\tau_1$ & $10 \\, \\rm s$ \n & $\\ \\ $ $\\tau_{5-7}$ & $10^{k-3} \\, \\rm s$ \\\\[1mm]\n $\\ \\ $ $a_z$ & $6.0$ \n & $\\ \\ $ $b_z$ & $0.05 \\ \\rm K^{\\minus 1}$ \\\\ \n $\\ \\ $ $c_z$ & $5.0$ \n & $\\ \\ $ $n_z$ & $0.6$ \\\\ [1mm]\n \\hline\n \\end{tabular}}\n \\label{tab:MatPar_Visc}\n\\end{table} \n\n\\section{Aspects of finite element implementation}\n\\label{sec:FEM_implementation}\n\nThe constitutive model for the representation of curing phenomena in adhesives has been implemented into the finite element software \\textit{ANSYS}$^{\\rm TM}$. In this section, the numerical integration of constitutive equations as well as the derivation of \\textit{ANSYS}$^{\\rm TM}$ specific stress and material tangent measures are summarized. 
Additionally, a new algorithm is introduced which suppresses undesired initial volume changes. Those volume changes may result from thermal expansion and chemical shrinkage when initial values for the temperature and the degree of cure differ from their reference values. \n\n\n\\subsection{Numerical integration}\n\\label{sec:FEM_implementation_integration}\n\nFor the numerical integration of constitutive equations, a typical time interval ($t_n$, $t_{n+1}$) with $\\Delta t = t_{n+1} - t_n > 0$ is considered. Within this time step, the $2^{\\rm nd}$ Piola-Kirchhoff stress \\eqref{eq:stress_sum} has to be computed. Thus, \n\\begin{equation}\n\\label{eq:stresses_incremental}\n \\indLT{n+1}\\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde\n = \\indLT{n+1}\\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde_{V}\n + \\Big(\\indLT{n+1}\\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde_{G}\\cdot\\indLT{n+1}\\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLstrich\\,\\Big)'\\cdot\\indLT{n+1}\\STAPEL C!_\\SLstrich!_\\SLstrich^{\\minus 1} \n\\end{equation}\nhas to be solved. Here, values at time instance $t_{n+1}$ are denoted by $\\indLT{n+1}(\\cdot)$. Accordingly, values at time instance $t_n$ are indicated by $\\indLT{n}(\\cdot)$ and values that are defined at the midpoint $t_n + \\frac{\\Delta t }{2}$ are represented by $\\indLT{n\/2}(\\cdot)$. Within a single time step $\\Delta t$, it is assumed that the deformation gradients $\\indLT{n}\\STAPEL F!_\\SLstrich!_\\SLstrich$ and $\\indLT{n+1}\\STAPEL F!_\\SLstrich!_\\SLstrich$ as well as the temperatures $\\indLT{n}\\theta$ and $\\indLT{n+1}\\theta$ are known. Furthermore, internal variables of the preceding time step are given. \n\nFirstly, the solution for the degree of cure $\\indLT{n+1}q$ is obtained by numerical integration of the evolution Eq.~\\eqref{eq:q_ansatz}. Here, Euler backward method (Euler implicit) is employed. A formulation of Eq.~\\eqref{eq:q_ansatz} for time instance $t_{n+1}$ and a substitution of the approximation $\\indLT{n+1}{\\dot{q}} \\approx \\frac{1}{\\Delta t}(\\indLT{n+1}q-\\indLT{n}q)$ yields \n\\begin{equation}\n\\label{eq:q_EBM}\n \\indLT{n+1}q = \\indLT{n}q + \\Delta t \\ f_q(\\indLT{n+1}q,\\indLT{n+1}\\theta) \\ ,\n\\end{equation}\nwhich is a nonlinear equation with respect to the solution $\\indLT{n+1}q$. It is computed by application of Newton's method. Additionally, the degree of cure $\\indLT{n\/2}q$ at time instance $t = t_n + \\frac{\\Delta t}{2}$ has to be computed as well. In consideration of the temperature $\\indLT{n\/2}\\theta = \\frac{1}{2}(\\indLT{n+1}\\theta + \\indLT{n}\\theta)$, the value $\\indLT{n\/2}q$ is obtained according to \\eqref{eq:q_EBM} by\n\\begin{equation}\n\\label{eq:q_EBM_n2}\n \\indLT{n\/2}q = \\indLT{n}q + \\dfrac{\\Delta t}{2} \\ f_q(\\indLT{n\/2}q,\\indLT{n\/2}\\theta) \\ .\n\\end{equation}\nNext, the computation of the intrinsic time scale $\\indLT{n+1}z$ is considered. Since this variable is governed by an evolution equation \\eqref{eq:zdot_ansatz} which cannot be solved in closed form, a numerical scheme has to be applied as well. By analogy with Eq.~\\eqref{eq:q_EBM}, Euler backward method and the approximation $\\indLT{n+1}{\\dot{z}} \\approx \\frac{1}{\\Delta t}(\\indLT{n+1}z-\\indLT{n}z)$ are adopted\n\\begin{equation}\n\\label{eq:z_EBM}\n \\indLT{n+1}z = \\indLT{n}z + \\Delta t \\ f_z(\\indLT{n+1}q,\\indLT{n+1}\\theta) \\ .\n\\end{equation}\nIn contrast to Eqs.~\\eqref{eq:q_EBM} and \\eqref{eq:q_EBM_n2}, this relation can directly be evaluated. 
Thus, no iterative procedure has to be applied.\n\nNext, the calculation of the stresses Eq.~\\eqref{eq:stresses_incremental} is considered. While $\\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde_V$ is already completely determined (see Eq.~\\eqref{eq:stress_vol}), the stress contribution $\\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde_G$ has to be regarded in more detail. According to Eq. \\eqref{eq:stress_iso}, it includes a pseudo-elastic part $\\indLT{n+1}\\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde_{el}$ and a sum of several viscoelastic parts summarized by $\\indLT{n+1}\\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde_{ve}$. \n\nTo compute the pseudo-elastic stress tensor, firstly Eq.~\\eqref{eq:stress_ela} has to be formulated for time instance $t_{n+1}$:\n\\begin{equation}\n\\label{eq:Tel_incemental}\n\\displaystyle \\indLT{n+1}\\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde_{el}\n = - \\int\\limits_{-\\infty}^{t_{n+1}}2\\,c_{10}( \\indLT{n+1}\\theta, q(s) ) \\, \\Bigg(\\dfrac{{\\rm d}}{{\\rm d}s}\\STAPEL C!_\\SLstrich!_\\SLstrich!^\\vrule width \\SLeffbreite height.4pt^{\\minus 1}(s)\\Bigg)\\ {\\rm d} s \\ .\n\\end{equation}\nNote that the temperature $\\indLT{n+1}\\theta$ does not depend on the integration variable $s$ but only on time instance $t_{n+1}$. \n\nAfter substituting the ansatz \\eqref{eq:c10_ansatz} for the stiffness $c_{10}(\\theta,q)$, all values that do not depend on the integration variable $s$ are excluded from the integral such that the pseudo-elastic stress can be rewritten by \n\\begin{align}\n \\displaystyle\n \\indLT{n+1}\\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde_{el}\n &= - 2\\, c_{10,0} \\, f_{c\\theta}\\big(\\indLT{n+1}\\theta\\big) \\ \\indLT{n+1}\\STAPEL P!_\\SLstrich!_\\SLstrich_{el} \\label{eq:Tel_incemental_abbrv} \\ .\n\\end{align}\nHere, $\\indLT{n+1}\\STAPEL P!_\\SLstrich!_\\SLstrich_{el}$ is an abbreviation for the remaining integral \n\\begin{align}\n \\displaystyle\n \\indLT{n+1}\\STAPEL P!_\\SLstrich!_\\SLstrich_{el} \n &= \\int\\limits_{-\\infty}^{t_{n+1}} \\, f_{cq}\\big(q(s)\\big) \\, \\Bigg(\\dfrac{{\\rm d}}{{\\rm d}s}\\STAPEL C!_\\SLstrich!_\\SLstrich!^\\vrule width \\SLeffbreite height.4pt^{\\minus 1}(s)\\Bigg)\\ {\\rm d} s \\ .\n \\label{eq:Tel_incemental_reduced}\n\\end{align}\nNext, this integral is split into two sub-integrals by substituting $t_{n+1} = t_n + \\Delta t$ \n\\begin{equation}\n\\label{eq:Tel_incemental_split}\n\\begin{array}{l}\n \\displaystyle\n \\indLT{n+1}\\STAPEL P!_\\SLstrich!_\\SLstrich_{el} \n = \\int\\limits_{-\\infty}^{t_{n}} \\, f_{cq}\\big(q(s)\\big) \\, \\Bigg(\\dfrac{{\\rm d}}{{\\rm d}s}\\STAPEL C!_\\SLstrich!_\\SLstrich!^\\vrule width \\SLeffbreite height.4pt^{\\minus 1}(s)\\Bigg)\\ {\\rm d} s \\ +\\\\\n \\qquad \\qquad\\qquad \\displaystyle\n + \\int\\limits_{t_n}^{t_{n + \\Delta t}} \\, f_{cq}\\big(q(s)\\big) \\, \\Bigg(\\dfrac{{\\rm d}}{{\\rm d}s}\\STAPEL C!_\\SLstrich!_\\SLstrich!^\\vrule width \\SLeffbreite height.4pt^{\\minus 1}(s)\\Bigg)\\ {\\rm d} s \\ .\n\\end{array}\n\\end{equation}\nThe first term on the right-hand side of Eq.~\\eqref{eq:Tel_incemental_split} is the solution of Eq. \\eqref{eq:Tel_incemental_reduced} for time instance $t_n$. Thus, it equals $\\indLT{n}\\STAPEL P!_\\SLstrich!_\\SLstrich_{el} $. 
The second term of Eq.~\\eqref{eq:Tel_incemental_split} is computed numerically by the midpoint method \n\\begin{equation}\n\\label{eq:Tel_incemental_midpoint}\n\\begin{array}{l}\n { \\displaystyle\n \\int\\limits_{t_n}^{t_{n}+ \\Delta t}} f_{cq}\\big(q(s)\\big) \\, \\Big(\\frac{{\\rm d}}{{\\rm d}s}\\STAPEL C!_\\SLstrich!_\\SLstrich!^\\vrule width \\SLeffbreite height.4pt^{\\minus 1}(s)\\Big)\\ {\\rm d} s \\\\\n \\qquad \\qquad \\qquad\n \\approx \\Delta t \\ f_{cq}\\Big(\\indLT{n\/2}q\\Big) \\ \\ \\STAPEL C!_\\SLstrich!_\\SLstrich!^\\vrule width \\SLeffbreite height.4pt!^\\SLdreieck^{\\minus 1}\\Big|_{t_n + \\frac{\\Delta t}{2}} \\ ,\n\\end{array}\n\\end{equation}\nand the material time derivative of the isochoric inverse right Cauchy-Green tensor at time instance $t_n + \\frac{\\Delta t}{2}$ is approximated by\n\\begin{equation}\n\\label{eq:CG_approx}\n\\begin{array}{l}\n \\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLstrich!^\\triangle^{\\minus 1}\\Big|_{t_n + \\frac{\\Delta t}{2}}\n \\approx\n \\dfrac{1}{\\Delta t} \\Big(\\indLT{n+1}\\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLstrich^{\\minus 1} - \\indLT{n}\\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLstrich^{\\minus 1} \\Big) \\ .\n\\end{array} \n\\end{equation}\nA substitution of Eqs.~\\eqref{eq:Tel_incemental_midpoint} and \\eqref{eq:CG_approx} into \\eqref{eq:Tel_incemental_split} yields the incremental representation\n\\begin{equation}\n\\label{eq:Tel_incemental_solu}\n\\begin{array}{l}\n \\displaystyle\n \\indLT{n+1}\\STAPEL P!_\\SLstrich!_\\SLstrich_{el} \n = \\indLT{n}\\STAPEL P!_\\SLstrich!_\\SLstrich_{el} \n + f_{cq}\\Big(\\indLT{n\/2}q\\Big) \\ \\Big(\\indLT{n+1}\\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLstrich^{\\minus 1} - \\indLT{n}\\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLstrich^{\\minus 1} \\Big)\\ ,\n\\end{array}\n\\end{equation}\nand the pseudo-elastic stress $\\indLT{n+1}\\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde_{el}$ can be computed by Eq.~\\eqref{eq:Tel_incemental_abbrv}. Finally, it remains to calculate the viscoelastic stress $\\indLT{n+1}\\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde_{ve}$. Note that the constitutive equations for the pseudo-elastic stress \\eqref{eq:stress_ela} and the viscoelastic stress of one Maxwell element \\eqref{eq:stress_visc_k} have a similar structure. Thus, the procedure for numerical integration is similar as well. However, the procedure described above has to be slightly adapted. 
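\n\nBefore this adaptation is carried out, the update scheme derived so far may be illustrated by a minimal numerical sketch. It combines the implicit updates \\eqref{eq:q_EBM} and \\eqref{eq:q_EBM_n2} of the degree of cure, the explicit update \\eqref{eq:z_EBM} of the intrinsic time scale and the incremental history term \\eqref{eq:Tel_incemental_solu}, using the constitutive functions and parameter values of Section~\\ref{sec:Curing_model_constitutive_functions}. The sketch is purely illustrative; the names are hypothetical and do not refer to the \\textit{USERMAT} implementation.\n\\begin{verbatim}
import numpy as np

R = 8.3144                                         # gas constant [J/(mol K)]
K10, E1, N_R, B   = 1.608e10, 79835.0, 1.217, 0.057
TG0, TG1, DT, LAM = 234.35, 324.85, 11.0, 1.7
AZ, BZ, CZ, NZ    = 6.0, 0.05, 5.0, 0.6
A_CQ, B_CQ        = 10.0, 0.4

def q_end(theta):
    # maximum attainable degree of cure (DiBenedetto relation)
    fT = (theta + DT - TG0) / (TG1 - TG0)
    return fT / (fT - LAM * fT + LAM)

def f_q(q, theta):
    # n-th order cure kinetics with Arrhenius factor and diffusion control;
    # the diffusion factor is clamped at zero so that q never decreases
    K1 = K10 * np.exp(-E1 / (R * theta))
    fD = 2.0 / (1.0 + np.exp((q - q_end(theta)) / B)) - 1.0
    return K1 * (1.0 - q)**N_R * max(fD, 0.0)

def f_z(q, theta):
    # evolution of the intrinsic time scale
    fzt = AZ / np.pi * np.arctan(BZ * (theta - TG1)) + np.pi / 2.0
    fzq = CZ * (1.0 - q**NZ)
    return 10.0**fzt * 10.0**fzq

def f_cq(q):
    # normalized cure dependency of the equilibrium stiffness
    c_cq = -np.arctan(-A_CQ * B_CQ)
    d_cq = np.arctan(A_CQ * (1.0 - B_CQ)) + c_cq
    return (np.arctan(A_CQ * (q - B_CQ)) + c_cq) / d_cq

def implicit_q(q_n, theta, dt):
    # backward Euler step q = q_n + dt*f_q(q, theta), solved by Newton's
    # method with a finite-difference slope (sufficient for a sketch)
    q, h = q_n, 1.0e-8
    for _ in range(25):
        r  = q - q_n - dt * f_q(q, theta)
        dr = 1.0 - dt * (f_q(q + h, theta) - f_q(q, theta)) / h
        q_new = q - r / dr
        if abs(q_new - q) < 1.0e-12:
            break
        q = q_new
    return q_new

def local_update(q_n, z_n, P_n, Cbi_n, Cbi_np1, theta_n, theta_np1, dt):
    # one local time step for the internal variables and the history term
    q_np1 = implicit_q(q_n, theta_np1, dt)
    q_mid = implicit_q(q_n, 0.5 * (theta_n + theta_np1), 0.5 * dt)
    z_np1 = z_n + dt * f_z(q_np1, theta_np1)
    P_np1 = P_n + f_cq(q_mid) * (Cbi_np1 - Cbi_n)
    return q_np1, z_np1, P_np1
\\end{verbatim}\nHere, Cbi_n and Cbi_np1 stand for the isochoric inverse right Cauchy-Green tensors at $t_n$ and $t_{n+1}$, stored as NumPy arrays; the pseudo-elastic stress then follows from Eq.~\\eqref{eq:Tel_incemental_abbrv}.\n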
Firstly, the stress contribution for one single Maxwell element \\eqref{eq:stress_visc_k} is formulated for time instance $t_{n+1}$:\n\\begin{equation}\n\\label{eq:Tve_incemental}\n \\indLT{n+1}\\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde_{ve,k} \n = - \\int\\limits_{-\\infty}^{\\indLT{n+1}z}\n 2 \\mu_k\\,{\\rm e}^{-\\frac{\\indLT{n+1}z-s}{\\tau_k}} \n \\bigg(\\dfrac{\\rm d}{{\\rm d}s}\\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLstrich^{\\minus 1}(s)\\bigg) \\, {\\rm d}s \\ .\n\\end{equation}\nNext, this integral is split into two sub-integrals which yields\n\\begin{equation}\n\\label{eq:Tve_incremental_split}\n\\begin{array}{l}\n\\displaystyle\n \\indLT{n+1}\\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde_{ve,k} \n = - \\int\\limits_{-\\infty}^{\\indLT{n}z}\n 2 \\mu_k\\,{\\rm e}^{-\\frac{\\indLT{n}z + \\Delta z-s}{\\tau_k}} \n \\bigg(\\dfrac{\\rm d}{{\\rm d}s}\\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLstrich^{\\minus 1}(s)\\bigg) \\, {\\rm d}s \n \\\\[5mm]\\displaystyle\n \\qquad \\qquad\n - \\int\\limits_{\\indLT{n}z}^{\\indLT{n}z + \\Delta z}\n 2 \\mu_k\\,{\\rm e}^{-\\frac{\\indLT{n}z + \\Delta z-s}{\\tau_k}} \n \\bigg(\\dfrac{\\rm d}{{\\rm d}s}\\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLstrich^{\\minus 1}(s)\\bigg) \\, {\\rm d}s .\n\\end{array}\n\\end{equation}\nNote that in contrast to the calculation step \\eqref{eq:Tel_incemental_split} here the intrinsic time scale $\\indLT{n+1}z = \\indLT{n}z + \\Delta z$ has been substituted. The first integral on the right-hand side of Eq.~\\eqref{eq:Tve_incremental_split} can be expressed by the solution $\\indLT{n}\\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde_{ve,k}$ as follows\n\\begin{equation}\n\\label{eq:Tve_incremental_first_term}\n\\begin{array}{l}\n\\displaystyle\n-{\\rm e}^{-\\frac{\\Delta z}{\\tau_k}}\n \\int\\limits_{-\\infty}^{\\indLT{n}z}\n 2 \\mu_k\\,{\\rm e}^{-\\frac{\\indLT{n}z -s}{\\tau_k}} \n \\bigg(\\dfrac{\\rm d}{{\\rm d}s}\\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLstrich^{\\minus 1}(s)\\bigg) \\ {\\rm d}s\n \\\\[5mm]\\displaystyle\n \\qquad \\qquad\n = {\\rm e}^{-\\frac{\\Delta z}{\\tau_k}} \\ \\indLT{n}\\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde_{ve,k} \\ .\n \\end{array}\n\\end{equation}\nSince the second term of Eq.~\\eqref{eq:Tve_incremental_split} cannot be solved in closed form, a numerical procedure has to be applied. 
Here again, the midpoint method is employed:\n\\begin{equation}\n\\label{eq:Tve_incremental_midpoint}\n\\begin{array}{l}\n\\displaystyle\n - \\int\\limits_{\\indLT{n}z}^{\\indLT{n}z + \\Delta z}\n 2 \\mu_k\\,{\\rm e}^{-\\frac{\\indLT{n}z + \\Delta z-s}{\\tau_k}} \n \\bigg(\\dfrac{\\rm d}{{\\rm d}s}\\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLstrich^{\\minus 1}(s)\\bigg) \\ {\\rm d}s\n \\\\[4mm]\n\\displaystyle\n \\qquad\\quad \\approx -\\Delta z \\\n 2 \\mu_k\\,{\\rm e}^{-\\frac{ \\Delta z}{2\\,\\tau_k}} \n \\ \\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLstrich!^\\triangle^{\\minus 1}\\Big|_{\\indLT{n}z + \\frac{\\Delta z}{2}} \\ .\n\\end{array} \n\\end{equation}\nFurthermore, the material time derivative $\\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLstrich!^\\SLdreieck^{\\minus 1}$ is approximated by\n\\begin{equation}\n\\label{eq:Tve_CG_approx}\n\\begin{array}{l}\n \\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLstrich!^\\triangle^{\\minus 1}\\Big|_{\\indLT{n}z + \\frac{\\Delta z}{2}}\n \\approx\n \\dfrac{1}{\\Delta z} \\Big(\\indLT{n+1}\\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLstrich^{\\minus 1} - \\indLT{n}\\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLstrich^{\\minus 1} \\Big) \\ .\n\\end{array} \n\\end{equation}\nA substitution of Eqs.~\\eqref{eq:Tve_incremental_first_term} - \\eqref{eq:Tve_CG_approx} into Eq.~\\eqref{eq:Tve_incremental_split} yields the incremental representation of the stresses $\\indLT{n+1}\\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde_{ve,k}$ of one single Maxwell element\n\\begin{equation}\n\\label{eq:Tve_incremental_single}\n\\begin{array}{l}\n \\indLT{n+1}\\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde_{ve,k} \n = {\\rm e}^{-\\frac{\\Delta z}{\\tau_k}} \\ \\indLT{n}\\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde_{ve,k} \\, +\n \\\\[5mm]\\displaystyle\n \\qquad \\qquad \\quad \n - 2 \\mu_k\\,{\\rm e}^{-\\frac{ \\Delta z}{2\\,\\tau_k}} \n \\ \\Big(\\indLT{n+1}\\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLstrich^{\\minus 1} - \\indLT{n}\\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLstrich^{\\minus 1} \\Big) \\ .\n\\end{array}\n\\end{equation}\nBased on this result, the total viscoelastic stress contribution can be calculated by the help of Eq.~\\eqref{eq:stress_visc}.\n\n\n\\subsection{ANSYS specific stress and material tangent}\n\\label{sec:FEM_implementation_ANSYS} \n\nThe material model has been implemented into the commercial finite element software \\textit{ANSYS}$^{\\rm TM}$ by the help of the user subroutine \\textit{USERMAT} \\cite{Ansys_allg_2011}. Within a calculation step $\\Delta t = t_{n+1} - t_{n}$, several input variables are transferred to the user subroutine. These are, for example, the deformation gradients $\\indLT{n}\\STAPEL F!_\\SLstrich!_\\SLstrich$ and $\\indLT{n+1}\\STAPEL F!_\\SLstrich!_\\SLstrich$ as well as user defined internal variables of the previous time step. In the opposite direction, appropriate output values have to be transferred to the software. The output includes the stress and the material tangent operator. Moreover, internal variables have to be updated for the next calculation step. \n\nThe material model described in Sections \\ref{sec:Curing_model} and \\ref{sec:Curing_model_constitutive_functions} has been set up in total Lagrangian representation. Thus, the stresses are given in the form of the $2^{\\rm nd}$ Piola-Kirchhoff stress tensor $\\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde$. 
Its corresponding material tangent operator $\\STAPEL K!_\\vrule width \\SLeffbreite height.4pt!_\\vrule width \\SLeffbreite height.4pt!_\\vrule width \\SLeffbreite height.4pt!_\\vrule width \\SLeffbreite height.4pt!^\\SLtilde$ is defined by\n\\begin{equation}\n\\label{eq:total_lagrange}\n \\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde!^\\SLdreieck = \\STAPEL K!_\\vrule width \\SLeffbreite height.4pt!_\\vrule width \\SLeffbreite height.4pt!_\\vrule width \\SLeffbreite height.4pt!_\\vrule width \\SLeffbreite height.4pt!^\\SLtilde \\mathbin{\\mathord{\\cdot}\\mathord{\\cdot}} \\STAPEL C!_\\SLstrich!_\\SLstrich!^\\SLdreieck \\ ,\n \\qquad\n \\STAPEL K!_\\vrule width \\SLeffbreite height.4pt!_\\vrule width \\SLeffbreite height.4pt!_\\vrule width \\SLeffbreite height.4pt!_\\vrule width \\SLeffbreite height.4pt!^\\SLtilde \n = \\dfrac{\\partial \\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde}{\\partial \\STAPEL C!_\\SLstrich!_\\SLstrich} \\ .\n\\end{equation}\nSince \\textit{ANSYS}$^{\\rm TM}$ uses an updated Lagrangian representation for the formulation of finite strain material models \\cite{Ansys_allg_2011}, appropriate transformations of the stress and material tangent measures have to be performed. An \\textit{ANSYS}$^{\\rm TM}$ specific stress tensor $\\Ten2 \\sigma_U$ and its corresponding material tangent operator $\\STAPEL k!_\\vrule width \\SLeffbreite height.4pt!_\\vrule width \\SLeffbreite height.4pt!_\\vrule width \\SLeffbreite height.4pt!_\\vrule width \\SLeffbreite height.4pt_U$ are defined on the configuration $\\mathcal{K}_U$ which arises from the polar decomposition theorem. (cf. Fig.~\\ref{fig:defgrad_polar}). \n\\begin{figure}[ht]\n\t\\centering\n\t\t\t\\includegraphics[width=0.35\\textwidth]{Fig7}\n\t\\caption{Polar decomposition of the deformation gradient.}\n\t\\label{fig:defgrad_polar}\n\\end{figure}\nAccording to the polar decomposition theorem, the deformation gradient $\\STAPEL F!_\\SLstrich!_\\SLstrich$ gets decomposed into pure stretch tensors and a pure rotation tensor \n\\begin{equation}\n\\label{eq:polar_decompo}\n \\STAPEL F!_\\SLstrich!_\\SLstrich = \\Ten2 V \\cdot \\Ten2 R = \\Ten2 R \\cdot \\Ten2 U \\ .\n\\end{equation}\nTherein, $\\Ten2 V$ and $\\Ten2 U$ are the positive definite symmetric left and right stretch tensor, respectively. Furthermore, $\\Ten2 R$ is the orthogonal rotation tensor, thus $\\Ten2 R^{\\minus 1} = \\Ten2 R^T$ holds. 
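\n\nA possible numerical realization of this decomposition is sketched below; it computes the right stretch tensor as the square root of the right Cauchy-Green tensor via its spectral decomposition and is given for illustration only, i.e. it is not taken from the actual implementation.\n\\begin{verbatim}
import numpy as np

# arbitrary test deformation gradient (illustrative values)
F = np.array([[1.10, 0.05, 0.00],
              [0.00, 0.95, 0.02],
              [0.00, 0.00, 1.01]])

C = F.T @ F                            # right Cauchy-Green tensor
w, V = np.linalg.eigh(C)               # spectral decomposition of C
U = V @ np.diag(np.sqrt(w)) @ V.T      # right stretch tensor U = sqrt(C)
R = F @ np.linalg.inv(U)               # rotation tensor R = F U^{-1}

print(np.allclose(R @ R.T, np.eye(3)))   # True: R is orthogonal
print(np.allclose(F, R @ U))             # True: F = R U
\\end{verbatim}\nThe left stretch tensor follows analogously from the product of the deformation gradient and the transposed rotation tensor.\n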
By the help of the stretch tensor $\\Ten2 U$, the stress tensor $\\Ten2 \\sigma_U$ is obtained by the push forward operation of the $2^{\\rm nd}$ Piola-Kirchhoff stress tensor $\\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde$ \n\\begin{equation}\n\\label{eq:Cauchy_pull_back_final}\n \\Ten2 \\sigma_U \n \\,=\\, \\dfrac{1}{J} \\, \\Ten2 U \\cdot \\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde \\cdot \\Ten2 U \n \\,=\\, \\dfrac{1}{J} \\, \\STAPEL M!_\\vrule width \\SLeffbreite height.4pt!_\\vrule width \\SLeffbreite height.4pt!_\\vrule width \\SLeffbreite height.4pt!_\\vrule width \\SLeffbreite height.4pt_U \\mathbin{\\mathord{\\cdot}\\mathord{\\cdot}} \\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde \\ .\n\\end{equation}\nHere, Eq.~\\eqref{eq:S24} has been applied for the definition of the fourth order tensor\n\\begin{equation}\n\\label{eq:TensM}\n \\STAPEL M!_\\vrule width \\SLeffbreite height.4pt!_\\vrule width \\SLeffbreite height.4pt!_\\vrule width \\SLeffbreite height.4pt!_\\vrule width \\SLeffbreite height.4pt_U\n = \\left( \\Ten2 U \\otimes \\Ten2 U\\right)^{S_{24}} \\ .\n\\end{equation}\nAccording to \\cite{Ihlemann_2006} and \\cite{Rendek_Lion_2010}, the corresponding material tangent $\\STAPEL k!_\\vrule width \\SLeffbreite height.4pt!_\\vrule width \\SLeffbreite height.4pt!_\\vrule width \\SLeffbreite height.4pt!_\\vrule width \\SLeffbreite height.4pt_U$ is calculated by the operation\n\\begin{equation}\n\\label{eq:tangent_euler_rot_final}\n \\STAPEL k!_\\vrule width \\SLeffbreite height.4pt!_\\vrule width \\SLeffbreite height.4pt!_\\vrule width \\SLeffbreite height.4pt!_\\vrule width \\SLeffbreite height.4pt_U\n = \\dfrac{2}{J} \\ \\STAPEL M!_\\vrule width \\SLeffbreite height.4pt!_\\vrule width \\SLeffbreite height.4pt!_\\vrule width \\SLeffbreite height.4pt!_\\vrule width \\SLeffbreite height.4pt_U \\mathbin{\\mathord{\\cdot}\\mathord{\\cdot}}\n \\left\\{\n \\STAPEL K!_\\vrule width \\SLeffbreite height.4pt!_\\vrule width \\SLeffbreite height.4pt!_\\vrule width \\SLeffbreite height.4pt!_\\vrule width \\SLeffbreite height.4pt!^\\SLtilde \n + \\left( \\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde \\otimes \\STAPEL C!_\\SLstrich!_\\SLstrich^{\\minus 1}\\right)^{S_{24}}\n \\right\\}\n \\mathbin{\\mathord{\\cdot}\\mathord{\\cdot}} \\STAPEL M!_\\vrule width \\SLeffbreite height.4pt!_\\vrule width \\SLeffbreite height.4pt!_\\vrule width \\SLeffbreite height.4pt!_\\vrule width \\SLeffbreite height.4pt_U \\ .\n\\end{equation}\n\nEqs. \\eqref{eq:Cauchy_pull_back_final} and \\eqref{eq:tangent_euler_rot_final} have been implemented into the user subroutine \\textit{USERMAT} right after the computation of the Lagrangian tensors $\\STAPEL T!_\\SLstrich!_\\SLstrich!^\\SLtilde$ and $\\STAPEL K!_\\vrule width \\SLeffbreite height.4pt!_\\vrule width \\SLeffbreite height.4pt!_\\vrule width \\SLeffbreite height.4pt!_\\vrule width \\SLeffbreite height.4pt!^\\SLtilde $.\n\n\\subsection{Algorithmic correction of the initial volume}\n\\label{sec:FEM_implementation_correction}\n\nIn this section, an algorithm is presented which enables the consideration of initial conditions that differ from reference values used in constitutive equations for the representation of thermochemical volume changes. \n\nIn Section \\ref{sec:Constitutive_functions_Volume}, a material function $\\varphi_{\\theta C}(\\theta,q)$ has been introduced which describes changes in volume related to heat expansion and chemical shrinkage processes. 
This function has been formulated with respect to a specific reference temperature $\\STAPEL \\theta!^\\SLtilde$ and a reference value \\mbox{$\\STAPEL q!^\\SLtilde = 0$} for the degree of cure. If the current temperature $\\theta(t)$ and the degree of cure $q(t)$ equal those reference values, the function yields \\mbox{$\\varphi_{\\theta C}(\\STAPEL \\theta!^\\SLtilde,\\STAPEL q!^\\SLtilde) = 1$}. Thus, no change in density and volume is computed. If, however, the temperature or the degree of cure differ from their reference values, a certain volume change would be calculated depending on the differences of the input variables and the specific material properties (i.e. heat expansion and chemical shrinkage coefficients). The same situation may occur right at the beginning of a numerical simulation, i.e. in the initial state. In particular, the initial temperature $\\theta_0$ and the initial degree of cure $q_0$ may vary compared to the previously defined reference values $\\STAPEL \\theta!^\\SLtilde$ and $\\STAPEL q!^\\SLtilde$, respectively. In such a case, a value $\\varphi_{\\theta C}(\\theta_0 \\ne \\STAPEL \\theta!^\\SLtilde,q_0\\ne \\STAPEL q!^\\SLtilde) \\ne 1$ and, consequently, an immediate volume change would be calculated right at the beginning of a simulation. In finite element simulations, this would either lead the finite element mesh to change its volume or volumetric stresses would occur initially. Moreover, a distortion of the finite element mesh might occur.\n\nIn order to avoid this undesired behaviour and to keep the initial volume of a finite element mesh constant, a correction is made at the beginning of a simulation. To this end, the reference state and the initial state at $t=0$ of a simulation are strictly separated by introducing a reference configuration $\\STAPEL {\\cal K}!^\\SLtilde$ and an initial configuration $\\mathcal{K}_0$ according to Fig.~\\ref{fig:defgrad_korr}.\\footnote{The initial configuration $\\mathcal{K}_0$ can be interpreted as a new reference \\cite{Shutov_Etal_2012}.}\n\n\\begin{figure}[ht]\n\t\\centering\n\t\t\t\\includegraphics[width=0.35\\textwidth]{Fig8}\n\t\\caption{Decomposition of the deformation gradient for the distinction between reference and initial configuration}\n\t\\label{fig:defgrad_korr}\n\\end{figure}\n\nThe reference configuration $\\STAPEL {\\cal K}!^\\SLtilde$ is constituted by reference values for the temperature $\\STAPEL \\theta!^\\SLtilde$, the degree of cure \\mbox{$\\STAPEL q!^\\SLtilde = 0$} and the mass density $\\JI\\STAPEL\\varrho!^\\SLtilde\\!$. Likewise, the initial state $\\mathcal{K}_0$ at time $t=0$ of a simulation is represented by the initial values \n\\begin{equation}\n\\theta_0 = \\theta(t=0), \\ q_0 = q(t=0), \\ \\varrho_0 = \\varrho(t=0) \\ . \n\\end{equation}\n\nNext, the different deformation paths occurring in Fig. \\ref{fig:defgrad_korr} are defined. Firstly, a new deformation gradient $\\STAPEL F!_\\SLstrich!_\\SLstrich^{\\rm new}$ is introduced which represents the true deformation within a simulation. Accordingly, the deformation gradient $\\STAPEL F!_\\SLstrich!_\\SLstrich_{\\theta C}$ represents the true thermochemical volume change. 
The latter operator is constituted by\n\\begin{equation}\n\\label{eq:F_thetaC_init}\n \\STAPEL F!_\\SLstrich!_\\SLstrich_{\\theta C} = J_{\\theta C}^{\\,1\/3} \\ \\STAPEL I!_\\SLstrich!_\\SLstrich ,\n\\quad\n J_{\\theta C} = \\dfrac{{\\rm d}V_{\\theta C}}{{\\rm d} V_0} = \\dfrac{\\varrho_0}{\\varrho_{\\theta C}} \\ ,\n\\end{equation}\nwhere $J_{\\theta C}$ is the true thermochemical volume ratio that occurs in a simulation. Since the initial state $\\mathcal{K}_0$ is assumed to be deformation free, the condition \n\\begin{equation}\n\\label{eq:cond_init_vol}\n\\STAPEL F!_\\SLstrich!_\\SLstrich^{\\rm new}(t=0) \\, = \\, \\STAPEL F!_\\SLstrich!_\\SLstrich_{\\theta C}(t=0) \\, = \\, \\STAPEL I!_\\SLstrich!_\\SLstrich \n\\end{equation}\nholds. Next, the mapping between the reference configuration $\\STAPEL {\\cal K}!^\\SLtilde$ and the initial configuration $\\mathcal{K}_0$ is represented by an isotropic deformation gradient $\\STAPEL F!_\\SLstrich!_\\SLstrich_0$\n\\begin{equation}\n\\label{eq:F_0}\n \\STAPEL F!_\\SLstrich!_\\SLstrich_{0} \n = J_0^{\\,1\/3} \\ \\STAPEL I!_\\SLstrich!_\\SLstrich , \\qquad\n J_0 \n = \\dfrac{{\\rm d}V_0}{{\\rm d}\\STAPEL V!^\\SLtilde} \n = \\dfrac{\\JI\\STAPEL\\varrho!^\\SLtilde\\!}{\\varrho_0} \\ .\n\\end{equation}\nHere, $J_0$ represents the volume ratio between the reference and the initial configuration. This value is not a function of time and thus remains constant throughout the whole deformation process. Moreover, $J_0$ will be interpreted as a correction of the initial volume. \n\nFinally, a mapping between the reference state $\\STAPEL {\\cal K}!^\\SLtilde$ and the configuration $\\mathcal{K}_{\\theta C}$ is introduced as follows:\n\\begin{equation}\n\\label{eq:F_thetaC_ref}\n \\STAPEL F!_\\SLstrich!_\\SLstrich!^\\SLtilde_{\\theta C} = \\varphi_{\\theta C}^{1\/3}\\big(\\theta,q\\big) \\ \\STAPEL I!_\\SLstrich!_\\SLstrich , \\quad\n \\varphi_{\\theta C}\\big(\\theta,q\\big) = \\dfrac{{\\rm d}V_{\\theta C}}{{\\rm d}\\STAPEL V!^\\SLtilde} = \\dfrac{\\JI\\STAPEL\\varrho!^\\SLtilde\\!}{\\varrho_{\\theta C}} \\ .\n\\end{equation}\nNote that $\\STAPEL F!_\\SLstrich!_\\SLstrich!^\\SLtilde_{\\theta C}$ is defined by the constitutive function $\\varphi_{\\theta C}\\big(\\theta,q\\big)$ (see Eq.~\\eqref{eq:phi_thetaC}) and thus only depends on the current temperature $\\theta(t)$ and degree of cure $q(t)$. It represents a hypothetical volume ratio that would occur if no correction is computed.\n\nNext, the initial correction is calculated. According to Fig. \\ref{fig:defgrad_korr}, the relation between the deformation gradients \\eqref{eq:F_thetaC_init}, \\eqref{eq:F_0} and \\eqref{eq:F_thetaC_ref} at arbitrary values $\\theta$ and $q$ reads as\n\\begin{equation}\n\\label{eq:F0_decompo}\n \\STAPEL F!_\\SLstrich!_\\SLstrich_{\\theta C}(\\theta, q) = \\STAPEL F!_\\SLstrich!_\\SLstrich_0^{\\minus 1} \\cdot \\STAPEL F!_\\SLstrich!_\\SLstrich!^\\SLtilde_{\\theta C}(\\theta, q) \\ ,\n\\end{equation}\nSince $\\STAPEL F!_\\SLstrich!_\\SLstrich_{\\theta C}$, $\\STAPEL F!_\\SLstrich!_\\SLstrich_0$ and $\\STAPEL F!_\\SLstrich!_\\SLstrich!^\\SLtilde_{\\theta C}$ are isotropic, a similar relation can be formulated for the corresponding volume ratios:\n\\begin{equation}\n\\label{eq:J0_decompo}\n J_{\\theta C}(\\theta,q) = \\dfrac{\\varphi_{\\theta C}\\big(\\theta,q\\big)}{J_0} \\ .\n\\end{equation}\nRecall that the function $\\varphi_{\\theta C}\\big(\\theta,q\\big)$ only depends on the current temperature and the current degree of cure, and thus is fully determined. 
In contrast, the values $J_{\\theta C}\\big(\\theta,q\\big)$ and $J_0$ remain to be calculated. To this end, the condition \\eqref{eq:cond_init_vol} is employed. It takes into account that no volume change occurs in the initial state. Thus, it is stated that the initial volume remains unaffected by different initial values $\\theta_0$ and $q_0$. More precisely, \n\\begin{equation}\n\\label{eq:init_vol}\n \\STAPEL F!_\\SLstrich!_\\SLstrich_{\\theta C}(\\theta_0, q_0) = \\STAPEL I!_\\SLstrich!_\\SLstrich \n \\ \\ \\Leftrightarrow \\ \\ \n J_{\\theta C} (\\theta_0, q_0) = 1 \\ \n\\end{equation}\nis assumed. Based on this condition, the initial correction $J_0$ is calculated by evaluation of Eq.~\\eqref{eq:J0_decompo} at the initial state $\\theta = \\theta_0$ and $q = q_0$, which yields\n\\begin{equation}\n\\label{eq:J0_result}\n J_{0} = \\varphi_{\\theta C}(\\theta_0, q_0) = \\text{const.} \\ .\n\\end{equation}\nFurthermore, the initial mass density $\\varrho_0$ for a certain set of values $\\theta_0$ and $q_0$ is adjusted by \n\\begin{equation}\n\\label{eq:rho0_result}\n \\varrho_{0} = \\dfrac{\\JI\\STAPEL\\varrho!^\\SLtilde\\!}{J_0}= \\text{const.} \\ .\n\\end{equation}\nIn summary, the algorithm works as follows. Firstly, the initial correction $J_0$ is calculated within the first load step by Eq.~\\eqref{eq:J0_result}. This value is stored and can be accessed throughout all subsequent load steps of the simulation. Within each load step, the function $\\varphi_{\\theta C}\\big(\\theta,q\\big)$ constituted by Eq.~\\eqref{eq:phi_thetaC} is evaluated and the true thermochemical volume ratio $J_{\\theta C}\\big(\\theta,q\\big)$ is computed by Eq. \\eqref{eq:J0_decompo}.\n\n\n\n\\section{Finite element simulation of PMCs}\n\\label{sec:Finite_element_simulation}\n\nIn this section, the material model is applied within finite element simulations regarding the newly proposed manufacturing process for deep drawn PMCs (cf. Section~\\ref{sec:Introduction}). Since this paper primarily focuses on phenomena related to the adhesive's curing reaction, the simulation of the forming step is reduced to a simplifying approximation based on a more complex forming simulation presented by Neugebauer \\etal \\cite{Neugebauer_Etal_2013_WGP}. Nevertheless, the forming step cannot be omitted since the subsequent volume shrinkage of the adhesive leads to quite different results concerning secondary deformations of the PMC or evolving strains on the MFC. \n\nThe considered finite element model is related to one specific deep drawing geometry (see Section~\\ref{sec:Finite_element_model}). Based on this model, two different manufacturing processes are investigated. Firstly, simulations on the new manufacturing process (see Section~\\ref{sec:Introduction}) are conducted. To this end, Section~\\ref{sec:Part_I_Forming_step} deals with the simplified forming step where the adhesive is not yet cured. The subsequent curing of the adhesive is considered in Section \\ref{sec:Part_II_Curing_process}. To compare the obtained results regarding the impact on the formed PMC and the MFC, an alternative manufacturing process is investigated as well. 
In Section~\\ref{sec:reversed_process} the order of forming and curing is switched, which will illustrate the negative influence on the MFC when forming takes place after the adhesive has been fully cured.\n\n\\subsection{Finite element model}\n\\label{sec:Finite_element_model}\n\nThe specific example of sheet metal forming simulation considered in this paper relies on a deep drawn rectangular cup geometry as presented in \\cite{Neugebauer_Etal_2013_WGP}. A metal sheet with in-plane dimensions of $200 \\, \\rm mm$ $\\times$ $130 \\, \\rm mm$ is to be formed. A cover metal sheet and an MFC are bonded to the structure by an adhesive as described in Section \\ref{sec:Introduction}. A schematic illustration of a quarter of the finally formed deep drawing cup is depicted in Fig.~\\ref{fig:pic_deep_drawing}. \n\n\\begin{figure}[ht]\n\t\\centering\n \\includegraphics[width=0.45\\textwidth]{Fig9}\n\t\\caption{Quarter section of the deep drawn PMC with the section of the finite element model (dashed line) }\n\t\\label{fig:pic_deep_drawing}\n\\end{figure}\nIn accordance with the objectives of this work, the finite element model is confined to the region of the PMC surrounding the MFC (see the dashed line in Fig.~\\ref{fig:pic_deep_drawing}). Furthermore, two planes of symmetry are utilized such that the final model covers a quarter of the inner part of the PMC. Its basic area coincides with the size of the quarter aluminium cover sheet (Fig.~\\ref{fig:pic_model}). Taking into account the thicknesses of the different layers (see Table~\\ref{tab:Thicknesses}), the overall dimensions of the employed finite element model are $42.5 \\, \\rm mm$ $\\times$ $35 \\, \\rm mm$ $\\times$ $2.9 \\, \\rm mm$.\n\n\\begin{figure}[ht]\n\t\\centering\n\t\\includegraphics[width=0.45\\textwidth]{Fig10}\n\t\\caption{Geometry of the model with aluminium layers (1), \n\t\t\t\t\t spacer (2), adhesive layer (3) and \n\t\t\t\t\t macro fibre composite (4) with its active part (5)}\n\t\\label{fig:pic_model}\n\\end{figure}\n\n\n\\begin{table}[ht]\n \\caption{Thicknesses of the PMC layers in the finite element model}\n \\vspace{.5ex}\n \\centering\n {\\begin{tabular}{p{.05cm}p{4.4cm}p{2cm}p{.05cm}}\n \\hline & & & \\\\[-3mm]\n &layer & thickness&\\\\ \n \\hline & & & \\\\[-3mm]\n &aluminium cover sheet & $0.8 \\ \\rm mm$ &\\\\[1mm]\n &adhesive above MFC & $0.15 \\ \\rm mm$ &\\\\[1mm]\n &macro fibre composite & $0.3 \\ \\rm mm$ &\\\\[1mm]\n &adhesive below MFC & $0.15 \\ \\rm mm$ &\\\\[1mm]\n &aluminium bottom sheet & $1.5 \\ \\rm mm$ &\\\\[1mm]\n \\hline\n \\end{tabular}}\n \\label{tab:Thicknesses}\n\\end{table} \n\nBoth the PMC and the finite element model consist of several components with markedly different material behaviour. Therefore, different material models are applied for the layers of the PMC. To represent the curing behaviour of the adhesive layer, the model and its corresponding material parameters presented in Section \\ref{sec:Curing_model} are employed. The material behaviour of the aluminium sheets is described by an elastic-plastic model provided by \\textit{ANSYS}$^{\\text{TM}}$ \\cite{Ansys_allg_2011}. The model incorporates plastic hardening effects represented by a bilinear kinematic hardening rule.\\footnote{ For an even more exact prediction of the residual stresses and spring-back, material models with nonlinear kinematic hardening are needed \\cite{Shutov_Kreissig_2008}.} The corresponding material parameters are chosen according to the manufacturer's specifications \\cite{ENAW5083_2002}. 
The active part of the MFC with its unidirectional piezoceramic fibres aligned in the y-direction is represented by a model of transversely isotropic elasticity \\cite{Giddings_2009,MFC_2012}. The non-active part of the MFC as well as the spacer are modelled by isotropic elasticity laws. A summary of the employed material parameters for the described materials is given in Table~\\ref{tab:MatPar_FE_Model}.\n\n\n\\begin{table}[ht]\n \\caption{Material parameters for the different layers of the PMC}\n \\vspace{.5ex}\n \\centering\n {\\begin{tabular}{|lll|}\n \\hline && \\\\[-3mm]\n \\multicolumn{3}{|l|}\n {\\underline{Metal sheets: Elastoplasticity with kinematic hardening}}\\\\[3mm]\n & $E \\,\\,\\, = \\ 70000 \\ \\rm MPa$ \n & $\\ \\nu \\,\\,\\, = \\ 0.31$ \\\\[1mm] \n & $\\sigma_F = \\ 150 \\ \\rm MPa$ \n & $\\ E_T = \\ 1015 \\ \\rm MPa$ \\\\[4mm]\n \\multicolumn{3}{|l|}\n {\\underline{Active part of the MFC: Transversely isotropic elasticity} }\\\\[3mm]\n & $E_y \\, \\; = \\ 30336 \\ \\rm MPa$\n & $\\ E_x \\, \\; = \\ E_z = 15857 \\ \\rm MPa$ \\\\ [1mm]\n & $\\nu_{xz} \\,\\, = \\ 0.31$ \n & $\\ \\nu_{xy} \\,\\, =\\ \\nu_{zy}= 0.16$ \\\\[1mm]\n & $G_{xy} = \\ G_{zy}= 5515 \\ \\rm MPa$ \n & $\\ G_{zx} = \\ \\frac{E_{x}}{2\\left( 1 + \\nu_{xz} \\right)}$ \\\\[4mm] \n \n \\multicolumn{3}{|l|}\n {\\underline{Non-active part of the MFC: Isotropic elasticity}}\\\\[3mm]\n & $E_{kapt} = \\ 3500 \\ \\rm MPa$ \n & $\\ \\nu_{kapt} = \\ 0.33$ \\\\[4mm]\n \\multicolumn{3}{|l|}\n {\\underline{Spacer: Isotropic elasticity}}\\\\[3mm]\n & $E_{tape} = \\ 8000 \\ \\rm MPa$ \n & $\\ \\nu_{tape} = \\ 0.35$ \\\\[1mm]\n \\hline\n \\end{tabular}}\n \\label{tab:MatPar_FE_Model}\n\\end{table} \n\n\nThe finite element model has been meshed by a bottom-up approach with three-dimensional structural solid elements, each incorporating eight nodes and linear shape functions (Fig.~\\ref{fig:pic_Mesh}). The complete mesh consists of about $40000$ elements, of which $14000$ involve the adhesive's material model. To avoid volume locking effects within the adhesive layer, a mixed u-p-formulation is used for the corresponding elements. Furthermore, radii at the outer edges of the MFC have been modelled to reduce effects of stress concentration (see the highlighted region in Fig.~\\ref{fig:pic_Mesh}).\n\\begin{figure}[ht]\n\t\\centering\n\t\\includegraphics[width=0.45\\textwidth]{Fig11}\n\t\\caption{Finite element mesh with highlighted radii at the outer edge of the MFC}\n\t\\label{fig:pic_Mesh}\n\\end{figure}\n\n\n\\subsection{Part I: Forming step}\n\\label{sec:Part_I_Forming_step}\n\nThe first step of the simulation process approximates the deep drawing of the PMC's inner part. To this end, node displacements from the deep drawing simulation in \\cite{Neugebauer_Etal_2013_WGP} have been interpolated and defined as boundary conditions on the bottom sheet (see Fig.~\\ref{fig:pic_boundary_conditions_I}). This approach of approximating the deformation of the inner part makes it possible to focus on the regions surrounding the adhesive while reducing the computational effort of a complete deep drawing simulation. 
Thus, typical challenges in sheet metal forming simulations such as wrinkling, spring-back and contact formulations are avoided.\n\n\\begin{figure}[ht]\n\t\\centering\n\t\\includegraphics[width=0.3\\textwidth]{Fig12}\n\t\\caption{Interpolated displacement values at the bottom (1) \n\t and symmetry boundary conditions (2) on the model \n\t for the simulation of the forming step}\n\t\\label{fig:pic_boundary_conditions_I}\n\\end{figure}\n\nWithin the forming step, the adhesive has not yet reached its gelation point and, thus, can be treated as a viscoelastic fluid \\cite{Winter_1986}. To represent the fluid-like material behaviour, a simplified model is used which combines purely elastic behaviour with respect to volume changes and viscoelastic behaviour for isochoric deformations. The viscoelastic behaviour is represented by a single Maxwell element with constant material parameters. Here, a built-in \\textit{ANSYS}$^{\\rm TM}$ viscoelastic model has been used. The material parameters are $K = 5000 \\ \\rm MPa$ for the bulk modulus, $c = 2.0 \\ \\rm MPa$ for the Neo-Hookean stiffness, and $\\tau = 20.0 \\ \\rm s$ for the relaxation time. \n\nThe simulation time of the forming step is $1000 \\ \\rm s$. The forming itself takes $30 \\ \\rm s$. The remaining period of $970 \\ \\rm s$ is included to achieve a relaxed state in the adhesive. This procedure has been chosen due to numerical difficulties when using shorter simulation times for the forming step or applying smaller values for the adhesive's relaxation time. Some numerical investigations on appropriate representations of the liquid adhesive during forming were conducted in \\cite{Neugebauer_Etal_2013_WGP}. However, the aim of this simulation step is to obtain a finite element mesh of the formed PMC. Thus, the described procedure is assumed to be sufficient for the needs of this work. \n\nAs a result of the forming simulation, Fig.~\\ref{fig:pic_uz_bottom} shows circular contour lines of the displacement $u_z$, which points to a good reproduction of the profile generated by the rectangular punch with a double curvature of $100 \\, \\rm mm$.\n\\begin{figure}[ht]\n\t\\centering\n\t\\includegraphics[width=0.45\\textwidth]{Fig13}\n\t\\caption{Perspective view (left) and bottom view (right) of the \n\t\t\t\t\t model's $u_z$-displacement due to forming}\n\t\\label{fig:pic_uz_bottom}\n\\end{figure}\n\nIn order to analyse the functionality of the formed PMC, the strain affecting the MFC has to be examined. According to the manufacturer's specifications, the linear elastic tensile strain limit of the MFC's brittle piezoceramic fibres is about $1 \\! \\cdot \\! 10^{-3}$~\\cite{Daue_Kunzmann_2010,MFC_2012}. If this strain level is exceeded, the risk of depolarization effects increases significantly and the MFC might not be able to maintain its sensor and actuator functionalities to the required extent~\\cite{Daue_Kunzmann_2010}. Complete failure of the MFC occurs if the maximum operational tensile strain of $4.5 \\! \\cdot \\! 10^{-3}$ is exceeded \\cite{MFC_2012}. \n\nFor the conducted forming simulation, Fig.~\\ref{fig:pic_MFC_eps_y} reveals that the deformation of the forming step exceeds the linear tensile strain limit by a factor of two to three. However, failure of the MFC is not predicted since the strain magnitudes are below the maximum operational tensile strain. Moreover, it can be seen that there are compressed and stretched regions, which points to a major influence of bending deformation on the MFC. 
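\n\nThe assessment described above amounts to comparing the fibre strain with the two limits quoted from the manufacturer's specifications. The following minimal Python sketch (purely illustrative and not part of the simulation workflow; the sample strain values are hypothetical) makes this classification explicit:\n\\begin{verbatim}\n# strain limits of the MFC quoted above\nLINEAR_LIMIT  = 1.0e-3   # onset of depolarization risk\nFAILURE_LIMIT = 4.5e-3   # maximum operational tensile strain\n\ndef assess_mfc_strain(eps_y):\n    # classify a tensile strain along the fibre direction\n    if eps_y <= LINEAR_LIMIT:\n        return 'within linear elastic limit'\n    elif eps_y <= FAILURE_LIMIT:\n        return 'depolarization risk, no failure predicted'\n    return 'failure predicted'\n\n# hypothetical sample values, e.g. two to three times the linear limit\nfor eps in (0.5e-3, 2.5e-3, 5.0e-3):\n    print(eps, assess_mfc_strain(eps))\n\\end{verbatim}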
\n\n\\begin{figure}[ht]\n\t\\centering\n\t\\includegraphics[width=0.45\\textwidth]{Fig14}\n\t\\caption{Total mechanical strain $\\varepsilon_{y}$ along the orientation of fibres on the top (left) and the bottom (right) of the active part of the MFC}\n\t\\label{fig:pic_MFC_eps_y}\n\\end{figure}\n\n\n\\subsection{Part II: Curing process}\n\\label{sec:Part_II_Curing_process}\n\nThe second part of the simulation gives insight into the impact of the adhesive's curing and its associated volume shrinkage on the PMC. In order to obtain a continuous process chain of simulation, the deformed mesh of the simplified forming simulation presented in Section \\ref{sec:Part_I_Forming_step} is employed as starting point for the curing simulation. To account for curing phenomena in the adhesive, the viscoelastic material model used within the forming step is replaced by the material model presented in Sections~\\ref{sec:Curing_model} and \\ref{sec:Curing_model_constitutive_functions}. The point of time to conduct this change of material laws is set to the gelation point at which the fluid turns into a solid \\cite{Winter_1986}. According to Meuwissen \\etal \\cite{Meuwissen_Etal_2004} the gelation point is represented by a degree of cure \\mbox{$q \\approx 0.5-0.6$}. Here, the initial value is set to $q_0 = 0.5$. \n\nWithin the curing simulation, the applied boundary conditions are confined to symmetries at the two cross-sectional planes as can be seen in Fig.~\\ref{fig:pic_boundary_conditions_I}. The boundary conditions on the bottom sheet are released. Since no residual stresses are transferred from the forming simulation to the curing step, no initial spring-back is observed. The simulation time of the curing simulation is set to $1800 \\, \\rm s$ according to manufacturer's specifications \\cite{DP410_2003}. Additionally, a constant temperature $\\theta = 318 \\, K$ is prescribed. As an example for the decisive effects of adhesive curing process, Fig.~\\ref{fig:pic_J3_section} shows the mechanical volume ratio~$J$ resulting from the material's volume shrinkage. The final degree of cure at this stage is $q = 0.89$.\n\n\\begin{figure}[ht]\n\t\\centering\n\t\\includegraphics[width=0.45\\textwidth]{Fig15}\n\t\\caption{Volume ratio~$J$ of the PMC after the curing process shows the adhesive's chemically induced shrinkage (cover sheet is hidden, highlighted region indicates the adhesive)}\n\t\\label{fig:pic_J3_section}\n\\end{figure}\n\nNote that if a free volume shrinkage is assumed, $J$ would be equally distributed throughout the entire adhesive volume. The apparent deviations of~$J$ shown in Fig.~\\ref{fig:pic_J3_section} are caused by supporting effects of adjacent materials. These supporting effects can also be observed by viewing the displacements in normal direction of the free surfaces of the aluminium layers (see Fig.~\\ref{fig:pic_uz_curing}). As a consequence of their different thicknesses, the resulting normal displacements at the free surface of the top layer are more pronounced than these of the bottom layer. \n\\begin{figure}[ht]\n\t\\centering\n\t\\includegraphics[width=0.45\\textwidth]{Fig16}\n\t\\caption{Surface markings due to the adhesive's volume shrinkage during the curing process on top (left) and at the bottom (right) of the finite element model}\n\t\\label{fig:pic_uz_curing}\n\\end{figure}\n\nAnalogous to the investigations of the simplified for\\-ming simulation (cf. 
Section \\ref{sec:Part_I_Forming_step}), the mechanical strain $\\varepsilon_{y}$ of the piezoceramic fibres is examined and presented in Fig.~\\ref{fig:pic_MFC_eps_y_curing}. \n\\begin{figure}[ht]\n\t\\centering\n\t\\includegraphics[width=0.45\\textwidth]{Fig17}\n\t\\caption{Total mechanical strain~$\\varepsilon_{y}$ in the direction of the orientation of the fibres on the top (left) and the bottom (right) of the MFC due to the chemical curing process}\n\t\\label{fig:pic_MFC_eps_y_curing}\n\\end{figure}\n\nAs can be seen from Fig.~\\ref{fig:pic_MFC_eps_y_curing}, the linear elastic tensile strain limit is not exceeded due to the curing reaction. However, superimposing the strains of the forming and the curing simulation reveals that the MFC is loaded additionally in the compressed zones. For the example presented here, with its adhesive layer thickness, its other geometrical dimensions and the given forming displacements, this influence is rather negligible. Nevertheless, deviations from these conditions (smaller radii or a thicker adhesive layer) may reverse the relative magnitudes of the strain amplitudes caused by the forming and the curing process.\n\n\\subsection{Comparison to reversed manufacturing process}\n\\label{sec:reversed_process}\n\nFinally, an analysis of a reversed manufacturing process demonstrates the adhesive's protective function for the MFC. The concluding simulation is conducted with the material model presented in Sections~\\ref{sec:Curing_model} and \\ref{sec:Curing_model_constitutive_functions}. Here, an initial degree of cure of~$q_0=0.5$ is chosen and a constant temperature of $\\theta = 318 \\,K$ is prescribed. The simulation time of the curing process is $1800 \\, \\rm s$. Within this period of time, the chemical reaction has finished with the value $q = 0.89$, which is in accordance with the simulation in Section \\ref{sec:Part_II_Curing_process}. Subsequently, the forming step with the cured adhesive is simulated. In Fig.~\\ref{fig:pic_MFC_eps_y_reverse}, the resulting tensile strain along the fibre direction is depicted.\n\n\\begin{figure}[ht]\n\t\\centering\n\t\\includegraphics[width=0.45\\textwidth]{Fig18}\n\t\\caption{Total mechanical strain~$\\varepsilon_{y}$ along fibre orientation on the top (left) and the bottom (right) of the MFC due to forming with already cured adhesive}\n\t\\label{fig:pic_MFC_eps_y_reverse}\n\\end{figure}\n\nAs Fig.~\\ref{fig:pic_MFC_eps_y_reverse} reveals, with this strategy the entire MFC is uniformly stretched, and the maximum strain is almost four times higher than in the simulation of the actually intended manufacturing process shown in Sections~\\ref{sec:Part_I_Forming_step} and \\ref{sec:Part_II_Curing_process}. Moreover, not only the linear elastic tensile strain limit but also the maximum operational tensile strain of $4.5 \\! \\cdot \\! 10^{-3}$ is exceeded. Hence, failure of the MFC is predicted in this simulation. In conclusion, it can be confirmed that the liquid adhesive protects the MFC during the forming process (see results in Sections \\ref{sec:Part_I_Forming_step} and \\ref{sec:Part_II_Curing_process}).\n\\section{Conclusion and discussion}\n\\label{sec:Conclusions}\n\nThe present work aims at supporting investigations of the innovative manufacturing process for smart PMCs. Evidently, the role of the adhesive is crucial due to its specific material behaviour and its geometrical design. To enable investigations of this issue, a general material modelling approach is presented in Section \\ref{sec:Curing_model}. 
Furthermore, a concrete model for the representation of curing phenomena in one specific adhesive is described in Section \\ref{sec:Curing_model_constitutive_functions}. This model is able to capture the main characteristics of the two-component epoxy-based adhesive considered in this paper. Moreover, the thermodynamic consistency of the model has been proved. \n\nIn Section \\ref{sec:FEM_implementation}, different aspects of the numerical implementation into the finite element software \\textit{ANSYS}$^{\\rm TM}$ have been discussed. Besides the numerical integration of the constitutive equations and the derivation of software-specific stress and material tangent tensors, a new algorithm has been proposed which suppresses undesired volume changes at the beginning of numerical simulations. Those volume changes may result from thermal expansion and chemical shrinkage when initial values for the temperature and the degree of cure differ from their reference values. Only with the help of the new algorithm does the consideration of heterogeneous initial fields of temperature and degree of cure become possible.\n\nFinally, in Section \\ref{sec:Finite_element_simulation} a finite element model for a deep drawn rectangular cup has been set up as a representative example for the innovative manufacturing process. The model is based on experimental and numerical studies presented by Neugebauer \\etal \\cite{Neugebauer_Etal_2013_WGP}. However, in the present work a simplified model which includes only the closest surroundings of the MFC has been employed. Based on the results of the forming and curing simulations, it can be stated that the simplified forming step is capable of producing a sufficiently accurate geometry as a basis for the curing simulation. \n\nTo highlight the benefit of the new manufacturing process, a reversed sequence of production has been investigated as well and the results of both processes have been compared. In Section \\ref{sec:reversed_process} it has been predicted that failure of the MFC occurs if the PMC is formed after the adhesive is completely cured. Hence, it can be seen that the new manufacturing process considered in this paper exhibits particular potential to overcome these shortcomings. Here, the floating support provided by the uncured adhesive sufficiently prevents overloading or delamination during the forming step.\n\nBased on the present work, which shows the qualitative feasibility of the proposed material model as well as the strategy of numerical simulation, several extensions are planned for the future. These include the complete identification of material parameters based on experimental investigations and the extension to thermomechanically coupled simulations analogous to \\cite{Landgraf_Etal_2012,Landgraf_Etal_2013} or \\cite{Mahnken_2013}. Finally, it is intended to apply the presented approach to the simulation of different deep drawing processes (cf. \\cite{Neugebauer_Etal_2010_ProdEng,Neugebauer_Etal_2013_WGP}). 
\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{Introduction}\n\nA broad range of observations including the cosmic microwave background radiation anisotropies have established\nthe so-called standard cosmological model in which the \nuniverse mainly consists of two unknown substances called dark energy and\ndark matter.\nAccording to the standard model,\nsmall primeval density fluctuations generated in the very early universe grow by gravitational instability, to form \nnonlinear objects called dark matter halos.\n\nDark halos with mass $\\sim 10^5$--$10^6{\\rm \\, M_{\\odot}}$ are thought to be the birth place of the first generation of stars \\citep{1997_Tegmark, 2003_YoshidaAbelHernquist}. \nStar-forming gas clouds are formed through condensation of the primordial gas by molecular hydrogen cooling. \nPopulation~III (\\ion{Pop}{3}) stars are formed \nin the primordial gas clouds when the age of the universe is a few tens\/hundreds million years \n\\citep[e.g,][]{2002_AbelBryanNorman, 2006_Naoz, 2012_Fialkov}.\n\nUltra-violet (UV) and X-ray radiation from the first stars and their remnants ionize and heat the inter-galactic medium (IGM), to initiate cosmic reionization.\nRecent observations including\nthe measurement of the electron scattering optical depth \\citep[e.g.,][]{2016_PlanckCollaboration}, the Gunn-Peterson trough in quasar spectra \\citep{1965_GunnPeterson, Fan:2006, Banados:2018}, and Ly-$\\alpha$ emission from star-forming galaxies \\citep{Mason:2018} suggest that the process of reionization completes by $z\\sim 6$ \\citep[e.g.,][]{Weinberger:2020, Naidu:2020}. \nIt is expected that early stages of reionization can be directly probed by future radio telescopes such as Square Kilometer Array\nthrough observation of redshifted 21-cm emission from neutral hydrogen in the IGM \\citep[see, e.g.,][for a recent review]{2016_Barkana, Mesinger:2019}. \n\nIn the early phase of reionization, the density distribution of the IGM,\nor the so-called gas clumping, is an important factor that critically sets the UV photon budget necessary for reionization.\nDense gas clouds hosted by cosmological minihalos can be significant \nphoton sinks, and their existence and the abundance\naffect the process and duration of reionization.\nUnfortunately, it is nontrivial to derive the abundance \nof the gas clouds and to estimate the effective gas clumping factor,\nbecause small gas clouds are effectively \nphoto-evaporated by the emerging UV background radiation.\nAt the same time, stars can be formed in the gas clouds, \nwhich then act as UV photon {\\it sources}. \n\nA number of studies have investigated early star formation under various environments \\citep[e.g.,][]{1976_Low, 2000_Omukai, 2002_Schneider, 2003_Schneider, 2007_GloverJappsen, 2007_Jappsen, 2009_Jappsen, 2009_Jappsenb, 2009_Smith, 2018_Chiaki, 2019_ChiakiWise}.\nMetal enrichment affects the evolution of gas clouds and subsequent star formation process \nthrough enhanced radiative cooling by heavy element atoms and dust grains \\citep[e.g,][]{2005_Omukai, 2019_Hartwig}. \nAlthough details of low-metallicity star formation \nhas been explored by recent numerical simulations \\citep{2018_Chiaki},\nthe effect of metal-enrichment on halo photoevaporation has not been systematically studied. 
\nIt is important to study the evolution of minihalos \nwith a wide range of metallicities\nin order to model the physical process of cosmic reionization\nin a consistent manner.\n\n\n\nReionization begins as a local process in which an individual\nradiation source generates an \\ion{H}{2} region around it.\n\\cite{1986_Shapiro} and \\cite{1987_ShapiroGiroux} \nuse a one-dimensional model under spherical symmetry to study the ionization front (I-front) propagation through the IGM in a cosmological \ncontext.\nRadiative transfer calculations have been performed in a post-processing\nmanner by using the density field realized in cosmological simulations\n\\citep{1999_Abel, 1999_RazoumovScott, 2001_SokasianAbelHernquist, 2002_Cen, 2003_HayesNorman}. \nFully coupled radiation-hydrodynamics simulations have been used to study reionization in a cosmological volume \\citep{2000_Gnedin, 2001_GnedinAbel, 2002_RicottiGnedinShull, 2004_SusaUmemuraa, 2004_SusaUmemurab, 2014_Wise, 2016_Xu}. Recent simulations explore the process of cosmic reionization employing large cosmological volumes of $\\sim 100^3$ comoving Mpc$^3$ while resolving the first galaxies in halos with masses of $\\sim 10^8 M_\\odot$\n\\citep{Semelin:2017, Ocvirk:2018}. However, even these state-of-the-art simulations do not fully resolve minihalos, \nnor are they able to follow the process of photoevaporation,\nand thus it still remains unclear \nhow the small-scale gas clumping affects reionization.\n\n\\cite{2004_Shapiro} and \\cite{2005_Iliev} study the dynamical evolution of minihalos irradiated by UV radiation.\nThey perform 2D radiation hydrodynamics simulations \nincluding the relevant thermo-chemical processes. They explore a wide range of parameters such as halo mass, redshift, and the strength of UV radiation.\nIt is shown that gas clumping at sub-kiloparsec scales dominates absorption of ionizing photons during the early phase of reionization. \nAn important question is whether or not the minihalos survive\nunder a strong UVB for a long time, over a significant fraction of the age of the universe.\nAnother interesting question is whether or not stars are formed in metal-enriched minihalos. If massive stars are formed, they also contribute to reionization and may thus imprint characteristic features in the $21{\\, \\rm cm}$ signals \\citep{2016_Cohen}.\n\nIn the present paper, we perform a large set of high-resolution radiation hydrodynamics simulations of \nminihalo photoevaporation. \nWe aim at systematically investigating the effects of metal enrichment on photoevaporation.\nWe evaluate the characteristic photoevaporation time and study its metallicity dependence. We also develop an analytic model\nof photoevaporation and compare the model prediction with our\nsimulation results.\nThe rest of the present paper is organized as follows.\nIn \\secref{sec:methods}, we explain the details of our computational methods. \nWe develop an analytical model \nthat describes the physical process of minihalo photoevaporation in \\secref{sec:analytic}. \nThe simulation results are presented in \\secref{sec:results}. \nWe discuss the physics of minihalo photoevaporation in \\secref{sec:discussion}. 
\nFinally, we give summary and concluding remarks in \\secref{sec:conclusions}.\n\nThroughout the present paper, we assume a flat $\\Lambda$CDM cosmology\nwith $(\\Omega_{\\rm m}, \\Omega_\\Lambda, \\Omega_{\\rm b}, h) = (0.27, 0.73, 0.046, 0.7)$ \\citep{2011_Komatsu}.\nAll the physical quantities in the following are given in physical units.\n\n\n\n\n\\section{Numerical Simulations} \\label{sec:methods}\n\tWe perform \n\tradiation-hydrodynamics simulations \n\tof minihalo photoevaporation by an external UV background radiation. \n\t We run a set of simulations systematically by varying the gas metallicity, dark matter halo mass, intensity of the radiation background and redshift. \n\t\n\tOur simulation set up is schematically shown in \\fref{fig:schematic}. \n\t\\begin{figure*}[htbp]\n\t\t\\begin{center}\n\t\t\\includegraphics[clip, width = \\linewidth-5cm]{schematic.pdf}\n\t\t\\caption{\n\t\t\tWe consider plane-parallel radiation incident on a halo with mass $M$ and metallicity $Z$.\n\t\t\tThe I-front reaches the halo at $z = z_{\\scalebox{0.6}{\\rm IN}}$. \n\t\t\t}\n\t\t\\label{fig:schematic}\n\t\t\\end{center}\t\t\n\t\\end{figure*}\n\t\n\tThe numerical method is essentially the same as in \\cite{2019_Nakatani}, where we study\n\tthe dynamical evolution of molecular gas clouds \n\texposed to an external UV radiation field.\n\tBriefly, we use a modified version of PLUTO \\citep[version 4.1;][]{2007_Mignone}\n\tthat incorporates ray-tracing radiative transfer\n\tof UV photons \n\tand non-equilibrium chemistry.\n\tThe details of the implemented physical processes are found in \n\t\\cite{2018_Nakatani, 2018_Nakatanib}.\n\t\n\tThe simulations are configured with 2D cylindrical coordinates. \n\tThe governing equations are\n\t\\gathering{\n\t\t\\frac{\\partial \\rho_{\\rm b}}{\\partial t} + \\nabla \\cdot \\rho_{\\rm b} \\vec{v} = 0 ,\t\n\t\t\\label{eq:continuity}\\\\\n\t\t\\frac{\\partial \\rho_{\\rm b} v_R}{\\partial t} + \\nabla \\cdot \\left( \\rho_{\\rm b} v_R \\vec{v} \\right) \n\t\t= -\\frac{\\partial P}{\\partial R} - \\rho_{\\rm b} \\partialdif{\\Phi}{R},\n\t\t\t\\\\\n\t\t\\frac{\\partial \\rho_{\\rm b} v_x}{\\partial t} + \\nabla \\cdot \\left( \\rho_{\\rm b} v_x \\vec{v} \\right) \n\t\t= - \\frac{\\partial P}{\\partial x } -\\rho_{\\rm b} \\partialdif{\\Phi}{x} , \\\\\n\t\t\\frac{\\partial \\rho_{\\rm b} E}{\\partial t} + \\nabla \\cdot \\left(\\rho_{\\rm b} H \\vec{v} \\right) \n\t\t= - \\rho_{\\rm b} \\vec{v} \\cdot \\nabla \\Phi \n\t\t+\\rho_{\\rm b} \\left( \\Gamma -\\Lambda \\right), \\\\\n\t\t\\frac{\\partial n_{\\text{\\rm H}} y_i }{\\partial t} + \\nabla \\cdot \\left( n_{\\text{\\rm H}} y_i \\vec{v} \\right)\n\t\t= n_{\\text{\\rm H}} C_i , \\label{eq:chemevoeq}\n\t}\n\twhere $R$ and $x$ are the physical radial distance and vertical height, respectively,\n\tand $t$ is proper time in the frame of the halo. \n\tWe denote the gas density, velocity, and pressure as $\\rho_{\\rm b}$, $\\vec{v} = (v_R, v_x)$, \n\tand $P$. In the equation of motion, $\\Phi$ is the external gravitational potential of the host dark halo.\n\tIn the energy equation, $E$ and $H$ are the total specific energy \n\tand total specific enthalpy, and\n\t$\\Gamma$ and $\\Lambda$ are the specific heating rate \n\tand specific cooling rate.\n\tThe abundance of $i$-th chemical species, $\\abn{i}$, is\n\tdefined by the ratio of the species' number density $n_i$ \n\tto the hydrogen nuclei number density $n_{\\text{\\rm H}}$.\n\tThe total reaction rate is denoted as $C_i$. 
The gas is composed of eight chemical species:\n\tH, \\ce{H+}, \\ce{H2}, \\ce{H2+}, He, CO, \\ce{C+}, O, and \\ce{e-}.\n\tThe elemental abundances of carbon and oxygen are normalized by\n\tthe assumed metallicity $Z$ as\n \t$0.926\\e{-4} \\, Z\/Z_{\\odot}$ \n\tand $3.568\\e{-4} \\, Z \/ Z_{\\odot}$, respectively \\citep{1994_Pollack, 2000_Omukai}.\n\t\n\t\t\n\tWe consider halos with a wide range of masses of \n\t$10^{3} {\\rm \\, M_{\\odot}} \\leq M \\leq 10^8{\\rm \\, M_{\\odot}}$. Each halo\n\thas a Navarro, Frenk \\& White density profile scaled appropriately \\citep{1997_NFW}.\n\tWe assume that the halo potential is fixed and is given by a function of \n\tthe spherical radius $r \\equiv \\sqrt{R^2+x^2}$ as\n\t\\eq{\n\t\t\\rho_{\\rm DM} \\propto \\frac{1 }{c_{\\rm N} \\xi ( 1 + c_{\\rm N} \\xi)^2},\t\\label{eq:nfw}\n\t}\n\twhere $c_{\\rm N}$ is the concentration parameter, \n\tand $\\xi$ is the spherical radius normalized by the virial radius, i.e. $\\xi \\equiv r \/ r_{\\rm vir}$.\t\n\tThe virial radius of a halo collapsing at redshift $z$ is\n\t\\eq{\n\t\\splitting{\n\t\tr_{\\rm vir} = &1.51 \\braket{\\frac{\\Omega_m h^2}{0.141}}^{-1\/3}\n\t\t\t\t\t\t\t\\braket{\\frac{M}{10^8 \\,{\\rm \\, M_{\\odot}}}}^{1\/3} \\\\\n\t\t\t\t\t\t&\\times\t\\braket{\\frac{\\Delta_{\\rm c}}{18\\pi^2}}^{-1\/3}\n\t\t\t\t\t\t\t\\braket{\\frac{1+z}{10}}^{-1}\\,\\, {\\rm kpc} \t\\label{eq:rvir}\n\t\t\t\t\t\t\t}\n\t}\n\twhere $\\Delta_{\\rm c}$ is the overdensity relative to the critical density \n\tof the universe at the epoch.\n\tWe adopt $\\Delta_{\\rm c} =18\\pi^2$.\n\tThen the halo potential $\\Phi$ is explicitly given by\n\t\\gathering{\n\t\t\\Phi (r) \n\t\t\t\t\t= - V_{\\rm c}^2\n\t\t\t\t\t\\frac{\\ln\\braket{1 + c_{\\rm N} \\xi}}{c_{\\rm N} \\xi }\n\t\t\t\t\t\\frac{c_{\\rm N} }\n\t\t\t\t\t{\\ln(1+c_{\\rm N}) - \\dfrac{c_{\\rm N}}{1+ c_{\\rm N}}},\n\t\t\t\t\t\\label{eq:potential}\n\t\\\\\n\tV_{\\rm c} \\equiv \\sqrt{\\frac{GM}{r_{\\rm vir}}}\n\t}\n \n\t\n\t\n\t\n\t\n\tThe initial conditions are given by assuming a fully atomic, isothermal gas in hydrostatic equilibrium with the virial temperature\n\t\\eq{\n\tT_{\\rm vir}\t=\t\\frac{GM \\mu m_{\\rm p}}{2r_{\\rm vir} k_{\\rm B}},\n\t}\n\twhere $\\mu$ is the mean molecular weight, \n\t$m_{\\rm p}$ is the proton mass,\n\tand $k_{\\rm B}$ is the Boltzmann constant. \n\tThe initial density profile is \n\t\\eq{\n\t\t\\rho_{\\rm b} (r)\n\t\t\t= \\hat{\\rho_{\\rm b}} \\exp\\left[-\\frac{\\Phi }{ k_{\\rm B}T_{\\rm vir}\/ \\mu m_{\\rm p}} \\right], \t\n\t\t\t\\label{eq:densityprofile}\n\t}\n\twhere $\\hat{\\rho_{\\rm b}}$ is the normalization factor \n\t\\eq{\n\t\t\t\\hat{\\rho_{\\rm b}} \\equiv \n\t\t\t\\frac{M \n\t\t\t\\Omega_{\\rm b} \\Omega_{\\rm m}^{-1}}\n\t\t\t{\n\t\t\t\\displaystyle\n\t\t\t\\int _0 ^{r_{\\rm vir}} {\\rm d} r\\, 4\\pi r^2\n\t\t\t\\exp\\left[-\\dfrac{\\Phi }{ k_{\\rm B}T_{\\rm vir}\/ \\mu m_{\\rm p}} \\right]}.\n\t}\n\tWith this normalization, the mass ratio of baryons to dark matter within $r_{\\rm vir}$ equals the global cosmic baryon fraction\n\t$f_{\\rm b} = \\Omega_{\\rm b} \/ \\Omega_{\\rm m}$.\n\tThe initial density profile is specified by $M$, $c_{\\rm N}$, and $z$.\n\tNote that $\\hat{\\rho_{\\rm b}}$ is not the central density\n\tbut is a geometry-weighted average density,\n\twhich is independent of $M$ but scales as $\\propto (1+z)^3$\n\tfor fixed $\\Delta_{\\rm c}$ and $c_{\\rm N}$. 
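\n\n\tTo make the construction of the initial state concrete, the following minimal Python sketch evaluates the virial radius, the NFW potential, and the hydrostatic density profile defined above. It is purely illustrative and not part of the simulation code; the concentration parameter $c_{\\rm N}$ and the mean molecular weight $\\mu$ are assumed values.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.integrate import quad\n\n# physical constants in cgs\nG, k_B, m_p = 6.674e-8, 1.381e-16, 1.673e-24\nMsun, kpc = 1.989e33, 3.086e21\nOmega_m, Omega_b, h = 0.27, 0.046, 0.7\nf_b = Omega_b / Omega_m\nc_N = 10.0   # assumed concentration parameter\nmu  = 1.22   # assumed mean molecular weight of the neutral gas\n\ndef r_vir(M, z):\n    # virial radius [cm] for Delta_c = 18 pi^2\n    return (1.51 * (Omega_m * h**2 / 0.141)**(-1.0/3)\n            * (M / (1e8 * Msun))**(1.0/3) * (10.0 / (1 + z)) * kpc)\n\ndef Phi(r, M, z):\n    # NFW potential of the host halo\n    rv = r_vir(M, z)\n    xi = r / rv\n    fc = c_N / (np.log(1 + c_N) - c_N / (1 + c_N))\n    return -(G * M / rv) * np.log(1 + c_N * xi) / (c_N * xi) * fc\n\ndef initial_profile(M, z):\n    # return T_vir [K] and the hydrostatic gas density rho_b(r) [g cm^-3]\n    rv = r_vir(M, z)\n    T_vir = G * M * mu * m_p / (2 * rv * k_B)\n    cs2 = k_B * T_vir / (mu * m_p)\n    shape = lambda r: np.exp(-Phi(r, M, z) / cs2)\n    norm, _ = quad(lambda r: 4 * np.pi * r**2 * shape(r), 1e-6 * rv, rv)\n    rho_hat = f_b * M / norm    # enforces M_gas(<r_vir) = f_b M\n    return T_vir, (lambda r: rho_hat * shape(r))\n\nT_vir, rho_b = initial_profile(10**5.5 * Msun, 10.0)\n\\end{verbatim}\n\tWith the values assumed here, this gives virial temperatures well below $10^4{\\rm \\, K}$ for minihalo masses, i.e. the regime where photoevaporation is most effective.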
\n\n\tThe initially fully atomic gas in the halo is exposed to plane-parallel UV radiation as illustrated in Figure \\ref{fig:schematic}.\n\tWe follow photoionization by extreme UV (EUV; $13.6\\,{\\rm eV} < h\\nu < 100\\,{\\rm eV}$) photons\n\tand photodissociation by far-UV photons in the Lyman-Werner (LW) band (FUV; $11.2\\,{\\rm eV} \\lesssim h\\nu < 13.6 \\,{\\rm eV}$).\n\tThe UV spectrum is given by \n\t\\eq{\n\t\tJ(\\nu) = J_{21} \\braket{\\frac{\\nu}{\\nu_1}}^{-\\alpha} \\e{-21} \n\t\t\\unit{erg}{}\\unit{s}{-1}\\unit{cm}{-2}\\unit{Hz}{-1}\\unit{sr}{-1},\t\\label{eq:UVSED}\n\t}\n\twhere $\\nu_1$ is the Lyman limit frequency (i.e., $h\\nu_1 = 13.6\\,{\\rm eV}$). \n\tWe set the UV spectral slope $\\alpha = 1$ and \n consider $J_{21}$ in the range $0.01 \\leq J_{21} \\leq 1$ \\citep{1996_ThoulWeinberg}. \n\tWe calculate the photodissociation rate \n\ttaking into account the self-shielding of hydrogen molecules\n\t\\citep{1996_DraineBertoldi, 1996_Lee}.\n\n\tHeating and cooling rates are calculated self-consistently with the\n\tnon-equilibrium chemistry model of \\cite{2019_Nakatani}.\t\n\tMajor processes are photoionization heating,\n\tLy{\\rm $\\alpha$} cooling,\n\tradiative recombination cooling, \n\t\\ion{C}{2}~line cooling, \n\t\\ion{O}{1}~line cooling, \n\t\\ce{H2} line cooling,\n\tand CO line cooling. \n\tThe corresponding \n\theating\/cooling rates are found in\n\t\\cite{2018_Nakatani, 2018_Nakatanib}.\n\tFUV-induced photoelectric heating is not effective for the FUV intensities and metallicities of interest in this study. We omit it from our thermochemistry model. \n\tFor the present study, we also implement Compton cooling by the CMB photons interacting with free electrons with physical density $n_e$ as \n\t\\eq{\n\t\t\\Lambda_{\\rm Comp} = 5.65\\e{-36}\\nspe{e} (1+z)^4 (T - T_{\\rm CMB}) \\unit{erg}{}{\\, \\rm cm}^{-3} \\unit{s}{-1},\n\t}\n\twhere $T_{\\rm CMB}$ is the CMB temperature given by $T_{\\rm CMB} = 2.73(1+z){\\rm \\, K}$.\n\t\n\n\tOur computational domain extends $0\\, {\\rm kpc} \\leq R \\leq r_{\\rm vir}$ and $-r_{\\rm vir} \\leq x \\leq r_{\\rm vir}$. \n\tWe use a uniformly spaced grid with $N_R \\times N_x = 320\\times 640$ cells. \t\n\tUV photons are injected from the boundary plane at $x = -r_{\\rm vir}$. \n\tWe assume that the cosmological I-front arrives at the plane at a redshift of $z_{\\scalebox{0.6}{\\rm IN}}$. \t\n\tAll our runs start at this time, denoted as $t = 0 \\, {\\rm yr} $. \n\tNote that the external halo potential is fixed; we do not consider growth of halo mass.\n\tWe discuss potential influences of this simplification in \\secref{sec:discussion}. \n\t\n\tWe run a number of simulations varying four parameters in the ranges\n\t$0\\, Z_{\\odot} \\leq Z \\leq 10^{-3} \\, Z_{\\odot}$, $0.01 \\leq J_{21} \\leq 1$,\n\t$10^{3} {\\rm \\, M_{\\odot}} \\leq M \\leq 10^8 {\\rm \\, M_{\\odot}}$,\n\tand $10\\leq z_{\\scalebox{0.6}{\\rm IN}} \\leq 20$, respectively. \n\tA total of 495 ($=5\\times3\\times11\\times3$) \n\tsimulations are performed.\n\tHereafter, we dub each run based on the assumed values of the parameters. A simulation with $(Z, J_{21}, M, z_{\\scalebox{0.6}{\\rm IN}}) = (10^{-a}Z_{\\odot}, 10^{-b}, 10^{c}{\\rm \\, M_{\\odot}}, d)$\n\tis referred to as ``\\sil{$a$}{$b$}{$c$}{$d$}''.\n\tFor example, \\sil{$\\infty$}{0}{5.5}{15} indicates \n\t$(Z, J_{21}, M, z_{\\scalebox{0.6}{\\rm IN}}) =(0 \\,Z_{\\odot}, 1, 10^{5.5}{\\rm \\, M_{\\odot}}, 15)$. 
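\n\nAs an illustration of this run labelling, the full parameter grid can be enumerated with a few lines of Python. This is a minimal sketch for orientation only; the five metallicity values and the plain-text label format are our reading of the naming convention, not code from the simulations.\n\\begin{verbatim}\nimport itertools\n\n# parameter grid read off from the text: 5 x 3 x 11 x 3 = 495 runs\na_vals = ['inf', 6, 5, 4, 3]              # Z = 10^-a Z_sun ('inf' -> Z = 0)\nb_vals = [0, 1, 2]                        # J_21 = 10^-b\nc_vals = [3 + 0.5 * i for i in range(11)] # M = 10^c M_sun, c = 3, 3.5, ..., 8\nd_vals = [10, 15, 20]                     # z_IN\n\nruns = ['Z{}-J{}-M{:g}-z{}'.format(a, b, c, d)\n        for a, b, c, d in itertools.product(a_vals, b_vals, c_vals, d_vals)]\nassert len(runs) == 495   # matches the total quoted above\n\\end{verbatim}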
\n\n\n\\section{Photoevaporation Process} \\label{sec:analytic}\n\\begin{figure*}\n \\centering\n \\includegraphics[clip, width = \\linewidth]{model.pdf}\n \\caption{Schematic view of our analytic model in \\secref{sec:analytic}. Planer UVB is incident on the halo, dividing it into self-shielded region (represented by the blue hemisphere) and photoevaporative flow region. Photoevaporative flows launch from the outermost layer of the self-shielded region. The red line indicates the locus of the I-front, $x_i(R)$. There is a UVB-shade in the downstream region, where the gas is neutral.}\n \\label{fig:analytic}\n\\end{figure*}\nBefore presenting the results from a number of simulations, it would be\nillustrative to describe the basic physics of photoevaporation.\nIt also helps our understanding of the overall evolution of a UV-irradiated minihalo. To this end, we develop an analytic model and \nevaluate the photoevaporation rate numerically so that it can be compared directly with our simulation results.\n\nPhotoevaporation is driven by gas heating associated with\nphoto-ionization.\nIncident UV radiation ionizes the halo gas, forming a sharp boundary between the ionized and neutral (self-shielded) regions.\nPhotoevaporative flows are launched from the outermost layer of the self-shielded region. \nThe number of UV photons incident on the self-shielded region is equal to that of evaporating gas particles. \nThe gas mass evolution of a halo can be described as \n\\eq{\n\\splitting{\n\t\\frac{{\\rm d} M _{\\rm s}}{{\\rm d} t} & = - m \\int_{\\partial V_{\\rm s}} {\\rm d} \\vec{S}_i \\cdot \\hat{x} \\, J \\\\\n\t&=\t- m \\int_{V_{\\rm s}} {\\rm d} V \\, \\nabla \\cdot \\hat{x} \\,J \\\\\n\t&\t=\t - m \\int _0^{R_i } 2\\pi R \\, J (R, x_i) \\, {\\rm d} R, \n\t}\\label{eq:masslosseq}\n}\nwhere $M_{\\rm s}$ is the total gas mass in the self-shielded region, $m n_{\\text{\\rm H}} \\equiv \\rho_{\\rm b}$, $V_{\\rm s}$ is the volume of the self-shielded region, $\\hat{x}$ is a unit vector in the $x$-direction, $J$ is total photon number flux, $R_i$ is the maximum radial extent of the self-shielded region (i.e., the radial position of the I-front), and $x_i(R)$ gives the locus of the I-front on the $R$--$x$ plane (\\fref{fig:analytic}).\n\nWe define the following dimensionless quantities:\n$\\vec{\\xi} = (\\xi_x, \\xi_R) \\equiv (x\/r_{\\rm vir}, R\/r_{\\rm vir})$, \n$\\tilde{t} = t\/ (r_{\\rm vir} V_{\\rm c}^{-1}) $, \n$\\tilde{M} = M_{\\rm s} \/ (f_{\\rm b} M )$ with $f_{\\rm b} = \\Omega_{\\rm b} \/ \\Omega_{\\rm m}$,\nand $\\tilde{J} = J \/ J_0$ $(J_0\\equiv 8.1\\e{5}\\, J_{21} \\unit{s}{-1}\\unit{cm}{-2})$; \nand rewrite \\eqnref{eq:masslosseq} in a dimensionless form\n\\eq{\n\t\\frac{{\\rm d} \\tilde{M} _{\\rm s}}{{\\rm d} \\tilde{t} } = \n\t- \\frac{J_0 r_{\\rm vir}^3 m }{f_{\\rm b} M V_{\\rm c}}\n\t\\int _0^{\\xi_i } 2\\pi \\xi_R \\, \\tilde{J} (\\xi_R, \\xi_{x, i}) \\, {\\rm d} \\xi_R. \n\t\\label{eq:nondimmassloss}\n}\nThe ionizing photon number flux at the I-front, $J(R, x_i)$, is equal to $J_0$ minus the total recombinations along the ray up to the I-front. Note that $J(R,x_i)$ depends on the recombination coefficient and the density, and also on the velocity profile of the photoevaporative flows. \n\nWe approximate the I-front to be a hemispheric surface facing toward the incident radiation in the region $x_i < 0$,\ni.e., $x_i = - R_i \\sqrt{1 - R^2\/ R_i^2}$.\nIn the other region $x_i \\geq 0$, the I-front lies at the surface of a cylinder with radius $ R_i $. 
\nWe further assume that photoevaporative flows are spherically symmetric in $x_i < 0$ and the ionized gas is isothermal with $T = 10^4{\\rm \\, K}$.\nThe wind velocity is assumed to be the sound speed of the ionized gas, $c_i$. \nThe ionizing photon number flux at the I-front is then given by\n\\gathering{\n\t{J}(R, x_i) = \\frac{2 J_0}{1 + \\sqrt{1 + 4\\dfrac{\\alpha_{\\rm B} R_i J_0}{c_i^2} \\dfrac{\\theta_{R,i}}{\\sin \\theta_{R,i} }} } \\nonumber \\\\\n\t\\theta_{R,i} \\equiv \\arccos\\sqrt{1 - \\braket{\\frac{R}{R_i}}^2}\n\t= \\arccos\\sqrt{1 - \\braket{\\frac{\\xi_R}{\\xi_i}}^2}, \\nonumber \n}\nwhere $\\alpha_{\\rm B}$ is the case-B recombination coefficient. \nWith these results, \\eqnref{eq:nondimmassloss} reduces to \n\\gathering{\n\t4\\pi \\xi_i^2 \\tilde{\\rho_{\\rm b}}(\\xi_i) \\frac{{\\rm d} \\xi_i}{{\\rm d} \\tilde{t} } = \n\t- \\eta\n\t\\int _0^{\\xi_i } \\frac{4\\pi \\xi_R \\, {\\rm d} \\xi_R}{1 + \\sqrt{1 + 4q\\xi_i \n\t\\dfrac{\\theta_{R,i}}{\\sin \\theta_{R,i}}\n\t} \n\t}\n\t\\label{eq:difmassloss}\\\\ \n\t\\eta \\equiv \\frac{J_0 r_{\\rm vir}^3 m }{f_{\\rm b} M V_{\\rm c}}\n\t\\approx 14 J_{21} \\braket{\\frac{M}{10^6{\\rm \\, M_{\\odot}}}}^{-1\/3} \\braket{\\frac{1+z}{11}}^{-7\/2}, \\nonumber \\\\\n\tq \\equiv \\dfrac{\\alpha_{\\rm B} r_{\\rm vir} J_0}{c_i^2} \n\t\\approx 1.7\\e{2} J_{21} \\braket{\\frac{M}{10^6{\\rm \\, M_{\\odot}}}}^{1\/3} \\braket{\\frac{1+z}{11}}^{-1}. \\nonumber \n}\nNote that the dimensionless parameter $\\eta$ effectively measures the ratio of Hubble time to the ionization time scale.\nThe other parameter $q$ quantifies the magnitude of UV absorption in the photoevaporative flows; absorption is negligible if $q \\ll 1$, while it is significant if $q \\gg 1$. \nAlthough the differential equation cannot be solved analytically, we can derive the asymptotic behaviour of the gas mass in a few limiting cases. \nFor $q \\gg 1$, the right-hand-side coefficient is approximately proportional to $\\eta q^{- 1\/2} \\propto J_{21}^{1\/2} M^{-1\/2} (1+z)^{-3}$.\nIt follows that there is a similarity in the gas mass evolution among halos having similar $\\eta q^{-1\/2}$ and initial $\\xi_i$.\nHence we define the similarity parameter as \n\\eq{ \n \\chi \\equiv \\eta q ^ {-1\/2}. \\label{eq:chi}\n}\nThe initial $\\xi_i$ is determined by the radius \nat which the I-front turns from R-type to the D-critical type \\citep[e.g.,][]{1978_Spitzer, 1989_Bertoldi}.\nThe radius is obtained by numerically solving the integral equation\n\\gathering{\n\t\\int_{-\\infty}^{-\\xi_i} \\tilde{\\rho_{\\rm b}}^2 (\\xi_R, \\xi_x) \\,{\\rm d}\\xi_x = \n\t\\frac{J_0 }{ r_{\\rm vir} n_0^2 \\alpha_{\\rm B}} \n\t\\braket{1 \n\t- \\frac{2c_i n_0 }{ J_0} \\tilde{\\rho_{\\rm b}} (\\xi_i)\n\t} \\nonumber \\\\\n\tn_0 \\equiv \\frac{f_{\\rm b} M} {m r_{\\rm vir}^3} \\nonumber \\\\\n\t\\tilde{L}_{\\rm s}\\equiv \\frac{J_0 }{ r_{\\rm vir} n_0^2 \\alpha_{\\rm B}} \\nonumber \\\\\n\t\\tilde{u}_{\\rm IF} \\equiv \\frac{2c_i n_0 }{ J_0}, \\nonumber \n}\nwith the initial density profile of \\eqnref{eq:densityprofile}. \nTypically, the initial $\\xi_i$ is larger\nfor lower $J_0$ and for higher $n_0$ (i.e., higher $z_{\\scalebox{0.6}{\\rm IN}}$). \nWe list $\\eta$, $q$, $\\chi$, $\\tilde{L}_{\\rm s}$, and $\\tilde{u}_{\\rm IF}$ for each of our runs in \\tref{tab:data} of \\appref{sec:supplymental}. \n\nIn the above model, we have assumed a constant flow velocity, \nbut in practice the flow is accelerated within the ionized boundary layer after \nbeing launched with a small, negligible velocity. 
Also, we do not consider gravitational force by the host halo, which can decelerate the photoevaporative flows. In \\secref{sec:similarities}, we will introduce a few corrections \nto these simplifications and\nexamine carefully the similarity of gas mass evolution by comparing with the simulation results.\n\n\\section{Simulation Results} \\label{sec:results}\nWe first describe the dynamical evolution of \na minihalo in our fiducial case, and examine the effect of metal- and dust-cooling in \\secref{sec:result1}. \nThen, we focus on the photoevaporation rates \nand study the dependence on metallicity and on \nhalo mass in \\secref{sec:massloss}. The dependence of the photoevaporation \nrates on radiation intensity and the turn-on redshift is\nstudied in \\secref{sec:result2} and \\secref{sec:result3}.\nWe summarize the halo mass evolution in \\secref{sec:similarities}. \nThen we provide an analytical fit to the derived mass evolution as a function of time in \\secref{sec:evatime}. \nFor convenience, we term halos with $T_{\\rm vir} > 10^4{\\rm \\, K}$ atomic cooling (massive) halos in the following sections. The corresponding mass range is $M \\gtrsim 10^{7.5}{\\rm \\, M_{\\odot}}$ ($ 10^7{\\rm \\, M_{\\odot}}$) at $z = 10$ (20). Lower-mass halos ($T_{\\rm vir} < 10^4{\\rm \\, K}$) are referred to as low-mass halos; those with $M \\gtrsim 10^{6.5}$--$10^7{\\rm \\, M_{\\odot}}$ ($10^6$--$10^{6.5}{\\rm \\, M_{\\odot}}$) at $z_{\\scalebox{0.6}{\\rm IN}} = 10$ (15--20) are specifically called molecular cooling halos.\n\n\\subsection{Photoevaporation and Metallicity Dependence}\t\\label{sec:result1}\n\\begin{figure*}[h]\n\\begin{center}\n\\includegraphics[clip, width = \\linewidth]{Haloevap_paperfigsFigSnapshots.pdf}\n\\caption{\n Snapshots of \\sil{4}{0}{5.5}{10} (top panels) and \\sil{$\\infty$}{0}{5.5}{10} (bottom panels).\n\tThe upper and lower half of each panel indicate the density and temperature distributions, respectively. \n\tThe color bars are shown at the right. \n\tThe magenta dashed lines are density contours for $n_{\\text{\\rm H}} = 10^{-2},10^{-1},1, 10, 100 {\\, \\rm cm}^{-3}$, \n\tand the gray dot-dashed lines are ionization degree for $0.1, 0.5, 0.9$. \n\tThe arrows represent velocity fields and are scaled by the magnitude. \n\tThe reference arrow length for $20\\,{\\rm km\\,s^{-1}}$ is shown at the top left in each panel. \n\tNote that the view closes-up as time goes for clarity. \n\tThe planer UV field is incident on the computational \n\tdomain at $x\/r_{\\rm vir} = -1$. \n\tThe UV-heated gas has a temperature of $\\sim 10^4{\\rm \\, K}$ \n\tand streams off the halo. \n\tThe self-shielded regions (orange regions in the temperature maps) have relatively low temperatures. \n}\n\\label{fig:snapshots}\n\\end{center}\n\\end{figure*}\n\n\\fref{fig:snapshots} shows the density and temperature distributions for a halo with $M=10^{5.5}{\\rm \\, M_{\\odot}}$ irradiated by the UV background with $J_{21} = 1$ at $z_{\\scalebox{0.6}{\\rm IN}} =10$.\nWe compare the results with two different metallicities of $Z =0\\,Z_{\\odot}$ and $Z=10^{-4}\\,Z_{\\odot}$ (i.e., \\sil{$\\infty$}{1}{5.5}{10} and \\sil{4}{1}{5.5}{10}). \nIn both runs, we find hot, ionized gas flows (hereafter ``wind region'') \nand a cold, dense region (hereafter ``self-shielded region''). \nThe boundary between the two is the launching \"base\" of the photoevaporative flows. 
\nIn \\fref{fig:snapshots}, the base appears as a transitional layer that divides \na hot ($\\sim 10^4{\\rm \\, K}$; white) region and a cool ($\\lesssim 10^3{\\rm \\, K}$; orange) region.\nThe wind regions are heated by EUV photons, \nand the temperature is $\\sim 10^4{\\rm \\, K}$ near the base,\nbut quickly decreases as the wind expands.\n\nThe self-shielded region of a \nmetal-free halo contracts slowly\nbecause of inefficient cooling.\nWith a slight amount of heavy elements, atomic cooling such as \\ion{C}{2}{} and \\ion{O}{1}{} cooling becomes effective, \nand also the grain-catalyzed \\ce{H2} formation reaction produces abundant \\ce{H2} molecules ($y_{\\ce{H2}} \\sim 10^{-4}$)\nthat further enhance the cooling efficiency. \nSince the thermal coupling of dust and gas is weak at low densities, \nthe dust temperature does not become high enough to sublimate.\nThe radiative cooling rates are roughly comparable\nfor \\ce{H2}, \\ion{C}{2}{}, and \\ion{O}{1}{} species \nin metal-rich halos ($Z \\gtrsim 10^{-4}\\,Z_{\\odot}$).\nTypically, the most efficient coolant is\n\\ce{H2} in a large portion of the self-shielded region,\nwhereas \\ion{C}{2}{} cooling is dominant only in the central, very dense region ($\\gtrsim10^{3}{\\, \\rm cm}^{-3}$).\nThe central low temperature ($T \\sim 10^2{\\rm \\, K}$) part in the upper panels of \\fref{fig:snapshots} is formed through the \nefficient atomic cooling. \n\n\nThe self-shielded region of the metal-free halo (\\sil{$\\infty$}{0}{5.5}{10}) has a temperature close to the\nvirial temperature of the halo.\nThe wind region has a similar thermal and chemical structure \nin the two runs shown in \\fref{fig:snapshots}. Lyman~${\\rm \\alpha}$ cooling \nis the dominant cooling process near the I-front, whereas Compton cooling is \nimportant in outer regions. Note that the efficiency of the latter depends on the cosmological redshift $z_{\\scalebox{0.6}{\\rm IN}}$. The gas temperature in the wind region is $\\sim 5000$--$10000 {\\rm \\, K}$ and decreases as the gas expands outward.\n\nSince the cooling time is progressively shorter in the central, denser part, \nthe gas cools and condenses in an inside-out manner.\nA dense core forms quickly at the halo center, as can be seen in the time evolution\nin \\fref{fig:snapshots}.\nIn metal-enriched halos, a sufficient amount of \\ce{H2} molecules is\nformed via grain-catalyzed reactions, even though the incident radiation continuously dissociates \\ce{H2}.\nWith increasing metallicity, \\ce{H2} molecules form more rapidly,\nand \\ion{C}{2} and \\ion{O}{1} line cooling also lower the gas temperature.\nAn important effect of metal cooling is to lower the minimum \nmass for gas cloud collapse. \nWe discuss whether or not star formation takes place in low-mass, low-metallicity halos in \\secref{sec:starformation}. \n\nWe have focused on the results with $z_{\\scalebox{0.6}{\\rm IN}} = 10$\nand with the fiducial UV radiation intensity \n$J = 10^{-21} {\\rm \\, erg} {\\rm \\, s}^{-1} {\\, \\rm cm}^{-2} {\\rm Hz}^{-1} {\\rm sr}^{-1}$.\nEssentially the same physical processes operate \nin other cases. 
\nWith $\\sim 10^2$--$10^3$ times higher LW intensities, \\ce{H2} molecules are almost completely photodissociated in the halo \\citep{2010_ShangBryanHaiman, 2012_Agarwal, 2014_ReganJohanssonWise, 2015_Hartwig, 2016_ReganJohanssonWisea, 2017_Schauer}.\nWe note that primordial gas clouds formed under strong LW radiation are\nsuggested to be possible birth places of massive black holes \\citep{2001_Omukai, 2010_ShangBryanHaiman, 2012_Agarwal, 2014_ReganJohanssonWise, 2015_Hartwig, 2016_ReganJohanssonWisea, 2017_Schauer}. \nIn metal-enriched halos, \nthe gas can still cool and condense by metal and dust cooling\neven under strong UV radiation.\n \n \nIn contrast to low-mass, \\ce{H2}-cooling halos, Ly$\\alpha$ cooling dominates in \nhalos with $T_{\\rm vir} \\gtrsim 20000{\\rm \\, K}$.\nSince the efficiency of Ly$\\alpha$ cooling is independent of metallicity, \nthe gas condensates quickly in the massive halos with $M \\gtrsim 10^{7.5}{\\rm \\, M_{\\odot}}$. \nIf the gas is enriched to $Z \\gtrsim 10^{-3}\\,Z_{\\odot}$,\ndust-gas collisional heat transfer \ncauses efficient gas cooling for $n_{\\text{\\rm H}} \\gtrsim 10^7{\\, \\rm cm}^{-3}$ \\citep{2005_Omukai, 2008_Omukai, 2016_Chiaki}. \n\n\nOur findings are largely consistent with \\cite{2014_Wise} regarding\nthe correspondence between halo mass and dominant cooling processes.\nHalos with $M \\gtrsim 10^7{\\rm \\, M_{\\odot}}$ are expected to be \nmetal-enriched by Pop~III supernovae triggered in the progenitor halo or in nearby halos,\nand thus metal cooling is dominant or comparable to atomic\/molecular hydrogen cooling. \nFollowing \\cite{2014_Wise}, we call such halos ``metal-cooling halos'', which have masses between the atomic cooling limit ($\\sim 10^{7.5}$--$10^8{\\rm \\, M_{\\odot}}$) and the upper mass limit of molecular cooling halos ($\\sim 10^{6.5}$--$10^7{\\rm \\, M_{\\odot}}$). We find similarly efficient metal cooling for $M \\gtrsim 10^7{\\rm \\, M_{\\odot}}$, but the most important effect of metal enrichment in our cases is to lower the molecular cooling limit by allowing formation of \\ce{H2} through grain-catalyzed reactions, especially at $Z \\gtrsim10^{-4}\\,Z_{\\odot}$.\n\nThe gas in massive halos ($T_{\\rm vir} > 10^4{\\rm \\, K}$) is gravitationally bound even under strong UV radiation.\nWe find rather small mass loss in the runs with $M = 10^{7.5}{\\rm \\, M_{\\odot}}$. \nApproximately 10\\% of the initial gas mass is lost via photoevaporation, \nbut the diffuse, ionized gas follows the concentrated gas toward the center. \nThis process slightly recovers the total gas mass within $r_{\\rm vir}$. \nFor halos with $T_{\\rm vir} > 10^4{\\rm \\, K}$,\noutgoing flows are not excited, and all of the baryons concentrate \nto the halo center regardless of \nthe UV strength. The total mass slightly increases from the initial state by accretion of the diffuse gas in the outer part.\n\n\n\\subsection{Mass Loss} \\label{sec:massloss}\nThe photo-heated gas flows outward from the surface of the self-shielded region, while the central part continues contracting. \nThe rate of the gas mass loss can be calculated as \n\\eq{\n\t\\dot{M}_{\\rm ph} = \\int _ S {\\rm d} \\vec{S} \\cdot( \\rho_{\\rm b} \\vec{v} ), \t\\label{eq:masslossrate}\n}\nwhere \n$S$ is the surface area of the launching base. 
\nNote that the right-hand side of this equation is equal to that of \\eqnref{eq:masslosseq}.\nThe mass-loss rate is essentially determined by the radial extent of the self-shielded region, because the initial velocity of the photoevaporative winds is typically the sound speed of the ionized gas ($\\sim 10\\,{\\rm km\\,s^{-1}}$), and the base density is determined by the EUV flux. The mass flux at the base is not strongly dependent on the gas metallicity \\citep{1989_Bertoldi,2019_Nakatani}. \nWe measure the total gas mass within a halo as \n\\eq{\n\tM_{\\rm b} = \\int_{r \\leq r_{\\rm vir}} 2\\pi R \\rho_{\\rm b} \\, {\\rm d} x \\, {\\rm d} R.\n}\nThe evolution of $M_{\\rm b}$ can be characterized by two phases separated by the time when the diffuse outer part is stripped off and a \"naked\" dense core is left. In the later phase, the core is directly exposed to UV radiation but \nthe net mass loss is small owing to its small geometrical size. The \n photoevaporation rate decreases rapidly during the transition phase. \n A similar process is also known in the study of molecular cloud photoevaporation \\citep[e.g.,][]{1989_Bertoldi, 2019_Nakatani}. \nIt is difficult to follow the photoevaporation process\nin detail after the transition phase, because\nthe small core is resolved only with \nseveral computational cells in our simulations.\nWe thus calculate $M_{\\rm b}$ only up to the transitional phase.\n\nWe empirically determine the transition time by the following\nconditions:\n\\gathering{\n \\frac{1}{M_{\\rm b}}\\int_{\\rho_{\\rm b} > 10^{-3}\\rho_{\\rm b, max}} 2\\pi R \\rho_{\\rm b} \\, {\\rm d} x \\, {\\rm d} R \n > 0.8 \\label{eq:limitingtime1}\\\\\n \\rho_{\\rm b, max} > \\rho_{\\rm b,0}. \\label{eq:limitingtime2}\n}\nHere, $\\rho_{\\rm b, max}$ is the maximum density in the computational domain and $\\rho_{\\rm b,0} \\equiv \\rho_{\\rm b}(t=0,r=0)$.\n\n\\fref{fig:masslossrates} shows the evolution of $M_{\\rm b}$ for halos with $M = 10^{5.5}$--$10^{8} {\\rm \\, M_{\\odot}}$ and various metallicities and other parameters.\n\\begin{figure*}[h]\n\\begin{center}\n\\includegraphics[clip, width = \\linewidth\/2-0.5cm]{Haloevap_metallicity_dependenceMassEvolution_Zdependence_J0M5565z10_showconcentration_annotation.pdf}\n\\includegraphics[clip, width = \\linewidth\/2-0.5cm]{Haloevap_metallicity_dependenceMassEvolution_Mdependence_Z3infJ0z10_showconcentration_annotation.pdf}\n\\includegraphics[clip, width = \\linewidth\/2-0.5cm]{Haloevap_metallicity_dependenceMassEvolution_Jdependence_Z3infM6z10_annotation.pdf}\n\\includegraphics[clip, width = \\linewidth\/2-0.5cm]{Haloevap_metallicity_dependenceMassEvolution_zdependence_Z3infM55z_showconcentration_annotation.pdf}\n\\caption{\n Time evolution of the total gas mass relative to the initial gas mass for selected runs.\n The panels show the dependence of the gas mass evolution on the four simulation parameters $(Z, M, J_{21}, z_{\\scalebox{0.6}{\\rm IN}})$:\n\t\t(a) metallicity dependence of the gas mass evolution for $M = 10^{5.5}, 10^{6.5}$ with $J_{21} = 1$ and $z_{\\scalebox{0.6}{\\rm IN}} = 10$. The line colors and styles differentiate metallicity and the halo mass, respectively. The round marker points indicate the time at which all of the atmospheric gas has been lost, leaving a concentrated core (cf.~\\eqnref{eq:limitingtime1} and \\eqnref{eq:limitingtime2}).\n\t\tNote that the curve for \\sil{5}{0}{5.5}{10} overlaps those of \\sil{$\\infty$}{0}{5.5}{10} and \\sil{6}{0}{5.5}{10}. 
\n\t\t(b) halo mass dependence of the gas mass evolution for $Z = 0, 10^{-3}\\,Z_{\\odot}$ with $J_{21} = 1$ and $z_{\\scalebox{0.6}{\\rm IN}} = 10$. The line styles correspond to halo masses as annotated. The lines for \\sil{$\\infty$}{0}{8}{10} and \\sil{3}{0}{8}{10} overlap. \n\t\t(c) $J_{21}$ dependence of the gas mass evolution for $Z = 0, 10^{-2} \\, Z_{\\odot}$ with $M = 10^6 {\\rm \\, M_{\\odot}}$ and $z_{\\scalebox{0.6}{\\rm IN}} = 10$. Solid, dashed, dotted lines indicate $J_{21} = 1, 0.1, 0.01$, respectively. \n\t\t(d) $z_{\\scalebox{0.6}{\\rm IN}}$ dependence of the gas mass evolution for $Z = 0, 10^{-3}\\,Z_{\\odot}$ with $M = 10^{5.5}{\\rm \\, M_{\\odot}}$ and $J_{21} = 1$. \n\t\t}\n\\label{fig:masslossrates}\n\\end{center}\n\\end{figure*}\nFor a given $M$, \n$M_{\\rm b}$ evolves along the same track on the $t$--$M_{\\rm b}$ plane for various metallicities. \nWe have discussed in \\secref{sec:result1}\nthat the effect of metal-enrichment appears clearly in the concentrating core, but\nthe strength of EUV-driven photoevaporation is independent of metallicity. \nThus the outer diffuse gas has not cooled in the early evolutionary phase even with nonzero $Z$ cases, and\nthe size of the self-shielded region is similar regardless of $Z$ (compare the rightmost panels in \\fref{fig:snapshots}).\nTherefore, the evolution of $M_{\\rm b}$ does not differ significantly \nuntil the diffuse envelope gas is lost. \n\n\nWe show the time evolution of $M_{\\rm b}$ for halos with various $M$\nincluding the high mass case ($T_{\\rm vir} > 10^4{\\rm \\, K}$; $M \\gtrsim 10^{7.5}{\\rm \\, M_{\\odot}}$) \nin \\fref{fig:masslossrates} (Panel~b). \nA large amount of gas is lost from low-mass halos ($T_{\\rm vir} < 10^4{\\rm \\, K}$)\nbut the mass-loss rates are significantly smaller for massive halos with $10^8{\\rm \\, M_{\\odot}}$. \nInterestingly, the gas mass {\\it increases} slightly.\nThe diffuse gas at $r>r_{\\rm vir}$ is accreted while the central part\nkeeps cooling.\nPhotoevaporation is hardly observed in these runs, because the initial temperature, $T_{\\rm vir}$, is higher than the typical temperature of a photo-heated gas in the first place.\nHalo's gravity is so strong that it retains the \nphoto-heated gas. \nAnother important feature is that the gas mass evolution of the massive halos is nearly independent of metallicity. \nRadiative cooling by hydrogen is dominant in these halos (\\secref{sec:result1}). \n\n\n\n\n\n\n\n\n\\subsection{Radiation Intensity}\t\\label{sec:result2}\n\nIn photo-ionized regions, the characteristic ionization time,\n$t_{\\rm ion} \\sim 0.01 J_{21}^{-1} \\, {\\rm Myr} $, is orders of magnitude \nshorter than the typical crossing time of photoevaporative flows,\n$ t_{\\rm cr} \\simeq r_{\\rm vir} \/ 10\\,{\\rm km\\,s^{-1}} \\simeq 10 \\,M_{5}^{1\/3} [(1+z)\/10]^{-1} \\, {\\rm Myr} $ with $M_{5} \\equiv M\/10^{5} {\\rm \\, M_{\\odot}}$. \nFor weak UV radiation, the I-front is located at the outer part \nof the halo where the density is low.\nWe find that the boundary where $\\abn{HII} = 0.5$ is located at \na radius where $\\col{HI} \\sim 10^{18} J_{21} ^{1\/2} {\\, \\rm cm}^{-2}$,\nand the base density is approximately estimated to be $n_{\\text{\\rm H}} \\sim 10^{-1}$--$10^{-0.5} J_{21} {\\, \\rm cm}^{-3}$,\nwhich is consistent with the result of \\cite{2004_Shapiro}. 
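\n\nThe separation of these two timescales can be checked directly from the quoted scalings. The following minimal Python sketch (illustrative only; the coefficients are simply those of the approximate relations above) compares them for a representative minihalo:\n\\begin{verbatim}\ndef t_ion_Myr(J21):\n    # characteristic ionization time, ~0.01 / J21 Myr\n    return 0.01 / J21\n\ndef t_cross_Myr(M_Msun, z):\n    # flow crossing time, ~10 (M/1e5 Msun)^(1/3) [(1+z)/10]^(-1) Myr\n    return 10.0 * (M_Msun / 1e5)**(1.0 / 3) * (10.0 / (1 + z))\n\n# example: a 10^5.5 Msun halo at z_IN = 10 for the J21 values of our grid\nfor J21 in (1.0, 0.1, 0.01):\n    print(J21, t_ion_Myr(J21), t_cross_Myr(10**5.5, 10.0))\n\\end{verbatim}\nFor this example the ionization time is shorter than the crossing time by a factor of about $10$--$10^3$, depending on $J_{21}$.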
\n\n\nThe UV intensity $J_{21}$ does not strongly change the overall dynamical \nphotoevaporation process.\nHowever, the mass loss rate sensitively depends on $J_{21}$,\nbecause the base is located at a larger $r$ for lower $J_{21}$, where \nthe gas density is lower. \nOne may naively expect that the larger geometrical size can increase\nthe photoevaporation rate, \nbut the low density at the large radius mitigates\nthe increase of the mass loss\n(cf.~\\eqnref{eq:masslossrate}). \nThe base density is approximately proportional to the UV flux,\nwhile the base radius increases by only a small factor \nbecause the density \nrapidly decreases with increasing radial distance.\nHence the mass loss rate, $|\\dot{M}_{\\rm b}|$, decreases for smaller\n$J_{21}$ as can be seen in \\fref{fig:masslossrates}-(c). \nThe mass loss rate decreases with time, ${\\rm d} |\\dot{M}_{\\rm b}| \/ {\\rm d} t < 0$, because the geometrical cross-section of the halo decreases. \nWe also note that the characteristic mass-loss time for massive halos \nis longer than the Hubble time \nat the epochs considered. \n\n\\subsection{Turn-on Redshift}\t\\label{sec:result3}\nHalos forming at different redshifts have different properties.\nMost notably, high-redshift halos are more compact and denser\n(cf.~\\eqnref{eq:densityprofile}).\nWe study cases with different \n$z_{\\scalebox{0.6}{\\rm IN}}$, the timing of radiation turn-on,\nwhen the cosmological I-front reaches the halo.\nOne can consider that different $z_{\\scalebox{0.6}{\\rm IN}}$ effectively corresponds \nto different \nreionization histories, or to an inhomogeneous reionization model\nin which the effective $z_{\\scalebox{0.6}{\\rm IN}}$ differs from place to place.\n\nThe process of photoevaporation is essentially the same as that described in \\secref{sec:massloss} and \\secref{sec:result2}.\nThe gas density at the photoevaporative flow \"base\" is primarily set by $J_{21}$, and is not explicitly dependent on $z_{\\scalebox{0.6}{\\rm IN}}$. \nHowever, the relative distance of the base to the halo center, $\\xi (\\equiv r\/r_{\\rm vir})$, \nis larger for halos at higher redshift owing to the higher average density,\nwhile the size of the neutral, self-shielded region is {\\it smaller}.\nThese two effects nearly cancel out and yield photoevaporation rates nearly independent of $z_{\\scalebox{0.6}{\\rm IN}}$. \nWe find that $|\\dot{M}_{\\rm b}|$ increases only by $20$--$30\\%$ with $\\Delta z_{\\scalebox{0.6}{\\rm IN}} = -5$. \n\nIn the characteristic minihalo case with $M=10^{5.5}{\\rm \\, M_{\\odot}}$, $J_{21} = 1$, $Z = 10^{-3}\\,Z_{\\odot}$ and $z_{\\scalebox{0.6}{\\rm IN}} = 10$,\nabout 80\\% of the initial gas mass is lost,\nand the mass loss fraction is only $\\sim 60\\%$ with $z_{\\scalebox{0.6}{\\rm IN}} = 20$. \nA similar trend in the final core mass is seen in other runs with different $M$ and $J_{21}$. This is consistent with the results of\n\\cite{2005_Iliev}, who show a weak dependence of the mass-loss time scale on the turn-on redshift. \n\n\n\\subsection{Gas Mass Evolution}\\label{sec:result4}\nWe have shown that the gas mass evolution depends most sensitively \non $M$ and $J_{21}$. 
Physically, these are the most relevant quantities to the gravitational force and the mass flux, respectively (cf.~\\eqnref{eq:masslosseq}).\nThe results of our numerical simulations can be characterized by two quantities: the half-mass time, $t_{1\/2}$, at which the gas fraction decreases to 0.5, and the remaining mass fraction, $f_{\\rm b,rem}$, which is the mass fraction of the \"remnant\" condensed core.\n\\begin{figure*}[htbp]\n \\centering\n \n \\includegraphics[clip, width = \\linewidth]{Haloevap_paperfigsEvaporationMap.pdf}\n \\caption{We plot $t_{1\/2}$ and $f_{\\rm b, rem}$ for our simulated halos. The left, middle, and right panels shows the results at $z_{\\scalebox{0.6}{\\rm IN}} = 10, 15, 20$, respectively. The circles, triangles, and squares represent $J_{21} = 1, 0.1, 0.01$. The colors of the markers indicate metallicity, and the sizes are scaled according to the halo mass $M$. The size reference is shown at the bottom right. The maximum marker size corresponds to $M = 10^7, 10^{6.5}, 10^{6.5}{\\rm \\, M_{\\odot}}$ for the panels of $z_{\\scalebox{0.6}{\\rm IN}} = 10, 15, 20$, respectively. \n Note that $t_{1\/2}$ is definable only for the halos whose mass reduces to $f_{\\rm b, rem} \\leq 0.5$. }\n \\label{fig:evamap}\n\\end{figure*}\n\\fref{fig:evamap} shows the distribution of $t_{1\/2}$ and $f_{\\rm b, rem}$. \nThis summarizes the overall dependence of gas mass evolution on \nthe simulation parameters $(Z, J_{21}, M, z_{\\scalebox{0.6}{\\rm IN}})$. \nMassive halos at lower $z_{\\scalebox{0.6}{\\rm IN}}$ reflect lower average densities.\nThe mass (symbol size) is larger towards the upper right corner in each panel of \\fref{fig:evamap}, indicating that the remaining mass fraction $f_{\\rm b, rem}$ increases for higher halo mass. Also halos with higher-metallicity have higher $f_{\\rm b, rem}$ owing to\nefficient cooling (\\secref{sec:massloss}). \nFor higher $J_{21}$, both $t_{1\/2}$ and $f_{\\rm b,rem}$ are smaller\nbecause of the faster mass loss for stronger UV radiation (\\secref{sec:result2}). \n\n\n\n\n\\subsection{Similarity in Mass Loss}\t\\label{sec:similarities}\nWe have shown that the mass-loss rate has weak metallicity dependence\nat least until the bulk of the diffuse halo gas is stripped off. In this section, we first derive nontrivial similarity of the gas mass evolution for metal-free halos. We then apply this model to other low-metallicity cases.\n\nIn \\secref{sec:analytic}, we have developed an analytic model with a key parameter $\\chi$ that characterizes the gas mass evolution of a photoevaporating halo. \nThere, the effect of the host halo's gravity has not been incorporated (Eqs. \\ref{eq:masslosseq}, \\ref{eq:nondimmassloss}, and \\ref{eq:difmassloss}). \nWe expect that deceleration by gravity becomes important for halos\nwhose virial temperatures are \ncomparable to the typical temperature of the photo-ionized gas, $\\sim 10^4{\\rm \\, K}$. In such cases, assuming $c_i = 10\\,{\\rm km\\,s^{-1}}$ as the photoevaporative\nflow velocity overestimates the photoevaporation rate (see \\eqnref{eq:difmassloss} and the description above it). \nTo account for the deceleration, we adopt\na \"reduced\" parameter defined as \n\\eq{\n \\chi^\\prime \\equiv \\chi \\frac{c_i - V_{\\rm c}}{c_i}, \\label{eq:modchi}\n}\nin the following discussions. The derived values are listed in \\tref{tab:data} of \\appref{sec:supplymental}. 
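\n\nFor reference, the correction in \\eqnref{eq:modchi} can be evaluated with the minimal Python sketch below. The virial-temperature relation $T_{\\rm vir} = \\mu m_{\\rm H} V_{\\rm c}^2 \/ (2 k_{\\rm B})$ and the mean molecular weight $\\mu \\simeq 1.2$ used here to obtain $V_{\\rm c}$ are illustrative assumptions rather than values taken from the simulation outputs, while $c_i = 10\\,{\\rm km\\,s^{-1}}$ follows the text.\n\\begin{verbatim}\nimport numpy as np\nk_B = 1.380649e-16   # erg K^-1\nm_H = 1.6726e-24     # g\n\ndef V_c_kms(T_vir, mu=1.22):\n    # circular velocity from the assumed relation\n    # T_vir = mu m_H V_c**2 (2 k_B)**-1\n    return np.sqrt(2.0 * k_B * T_vir * (mu * m_H)**-1) * 1e-5\n\ndef chi_prime(chi, T_vir, c_i=10.0):\n    # reduced parameter of eq. (modchi), with c_i ~ 10 km s^-1\n    return chi * (c_i - V_c_kms(T_vir)) * c_i**-1\n\nfor T in (2400.0, 5000.0):\n    print(T, V_c_kms(T), chi_prime(0.54, T))\n# across T_vir = 2400-5000 K the correction reduces chi = 0.54\n# to roughly 0.1-0.23, illustrating how gravitational deceleration\n# suppresses the effective mass-loss parameter.\n\\end{verbatim}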
\n\nThe top panel of \\fref{fig:similarity} shows the gas mass evolution of metal-free halos with various parameter sets in the dimensionless form. \n\\begin{figure}[htbp]\n\\begin{center}\n\\includegraphics[clip, width = \\linewidth]{Haloevap_metallicity_dependenceMassEvolution_similarity.pdf}\n\\includegraphics[clip, width = \\linewidth]{Haloevap_paperfigsEvaporationMap_selfsimilarity.pdf}\n\\caption{\n(top) Similarities in the gas mass evolution for metal-free halos with various parameter sets. Note that we omit ``Z$\\infty$'' from the simulation labels in the legend. Values in the parentheses after each simulation label show corresponding $\\chi(\\equiv \\eta q^{-0.5})$ and $\\chi^\\prime (\\equiv \\eta q^{-0.5} \\Delta c_i \/ c_i)$. The horizontal and vertical axes indicate the dimensionless time and mass, respectively (cf.~\\eqnref{eq:nondimmassloss}). Lines are colored according to $\\chi^\\prime$ as indicated by the color bar. \nLines with similar colors almost overlap on the $\\tilde{t}$--$\\tilde{M}$ plane, indicating that having similar $\\chi^\\prime$ results in similar mass evolution. \n(bottom) $\\chi^\\prime$ vs dimensionless half-mass time $\\tilde{t}_{1\/2}$ for metal-free halos. The markers are used in the same manner as in \\fref{fig:evamap} but the colors represent $z_{\\scalebox{0.6}{\\rm IN}}$. The magenta dashed line is a fit. A negative correlation is seen between $\\chi^\\prime$ and $\\tilde{t}_{1\/2}$. \n}\n\\label{fig:similarity}\n\\end{center}\n\\end{figure}\nHalos with similar $\\chi$ or $\\chi^\\prime$ \nevolve on essentially the same track in the $\\tilde{M}$--$\\tilde{t}$ plane.\nWe explain the similarity further by using a specific example as follows.\nThe simulation parameters of \\sil{$\\infty$}{0}{6.5}{10} (cyan dotted line; $\\chi = 0.54, \\chi^\\prime = 0.24$) are close to those of \\sil{$\\infty$}{0}{6.5}{20} (yellow dashed line; $\\chi = 0.078, \\chi^\\prime = 0.018$), but the mass evolution significantly deviates from each other on the dimensionless plane. \nOn the other hand, it is closer to those of \\sil{$\\infty$}{1}{4.5}{20} (cyan-green solid line; $\\chi = 0.25, \\chi^\\prime = 0.21$) and \\sil{$\\infty$}{2}{5}{10} (cyan dashed line; $\\chi = 0.31, \\chi^\\prime = 0.25$). \nA more straightforward case is \\sil{$\\infty$}{2}{5}{10} (cyan solid line; $\\chi = 0.31, \\chi^\\prime = 0.25$). It is close to \\sil{$\\infty$}{2}{4.5}{10} (light-blue dashed line; $\\chi = 0.54, \\chi^\\prime = 0.48$), as expected from the close parameter values. Interestingly, the \\sil{$\\infty$}{2}{4.5}{10} (blue dashed line; $\\chi = 0.54, \\chi^\\prime = 0.48$) run is also very close to \\sil{$\\infty$}{1}{5.5}{10} (cyan dash-dotted line; $\\chi = 0.54, \\chi^\\prime = 0.4$).\n\nWe find that gravitational deceleration of the photoevaporating gas is \nan important factor.\nThe evolution in Run~Z{$\\infty$}J{0}M{6.5}z{10} (cyan dotted line) is close to \nother cases with green lines, but its $\\chi$ value $(\\approx 0.54)$ is actually closer to those of runs indicated by the blue lines. \nAlso, $\\chi \\approx 0.31$ for Run \\sil{$\\infty$}{0}{7}{10} (yellow-green solid line)\nis close to those of runs indicated by the green lines, but the \nactual evolution apparently deviates. \nGravitational deceleration of the photoevaporating gas reduces the mass-loss rate \nfor these relatively massive minihalos with virial temperature of $2400 - 5000 {\\rm \\, K}$. 
\nClearly, it is important to incorporate the correction of $\\chi$ owing to deceleration.\nWe conclude that $\\chi^\\prime$ is the essential parameter to characterize the gas mass \nevolution of photoevaporating halos. \n\n\nThe bottom panel of \\fref{fig:similarity} shows correlation between $\\chi^\\prime$ and the dimensionless half-mass time $\\tilde{t}_{1\/2} \\equiv t_{1\/2} \/ t_0$ for metal-free halos. There is a tight correlation\ngiven by $\\tilde{t}_{1\/2} = 0.2 {\\chi^\\prime}^{-0.75}$. The correlation confirms the importance of $\\chi^\\prime$ in characterizing the gas mass evolution of photoevaporating halos. \nNote that the same correlation holds for low-metallicity halos.\nThus the fit can be applied for halos with any metallicity that lose more than a half of the initial mass. \n\n\n\\subsection{Fitting function}\t\\label{sec:evatime}\nBased on the similarity studied so far,\nwe derive a fit of $\\tilde{M} $ that can be readily used \nin semi-numerical models \\citep[e.g.,][]{2013_Sobacchi, 2012_Fialkov, 2013_Fialkov, 2015_Fialkov, 2016_Cohen}. \nFrom the result shown in ~\\fref{fig:masslossrates},\nwe propose a function\n\\eq{\n\\splitting{\n\t\\tilde{M}_{\\rm fit} &= f (\\tilde{t}) = \\frac{1-C_1}{(\\tilde{t}\/\\tilde{t}_{\\rm s})^{p} + 1} + C_1, \\label{eq:fittingfunc\n\t}\n}\nwhere $p, C_1$, $\\tilde{t}_{\\rm s}$ are fitting parameters,\nwhich control the steepness of mass decrease, the remaining mass fraction, and the dimensionless time at which $f_{\\rm b} = (1+C_1)\/2$, respectively. \nWe restrict the parameter ranges to $0 \\leq C_1\\leq 1$, $0 \\leq \\tilde{t}_{\\rm s}$, and $0\\leq p$, in order to avoid unphysical fitting results. \n\nWe list the best fit values in \\tref{tab:data} in Appendix.\nThe excellent accuracy of the fit can be seen in \\fref{fig:fitting}\nin comparison with the simulation results.\n\\begin{figure}[htbp]\n\\begin{center}\n\\includegraphics[clip, width = \\linewidth]{Haloevap_paperfigsConcentrationTimes.pdf}\n\\caption{\nTop: We compare the mass evolution given by equation (\\ref{eq:fittingfunc}) and the \nsimulation results. \nThe horizontal and vertical axes are the same as \\fref{fig:similarity}. The solid lines show fits, and the marker points show the mass evolution in several selected runs. The marker's color represents metallicities, but the marker shapes are randomly adopted. \nThe round marker points at the tails of the curves for Z3J2M6z10, Z4J0M7z10, Z5J2M7z20, and Z6J0M8z20 indicate the time at which the bulk of the atmospheric gas is lost as in \\fref{fig:masslossrates}.\nBottom: Relative errors between the simulation and the fit in percent. \n}\n\\label{fig:fitting}\n\\end{center}\n\\end{figure}\nThe three-parameter function fits the simulation results well until the mass decreases to $\\tilde{M} \\approx 0.01$. \nFor halos with $T_{\\rm vir} > 10^4{\\rm \\, K}$,\nthe gas mass increases owing to the halo's strong gravity. 
\nWe do not consider the slight increase of gas mass in deriving the fit of \\eqnref{eq:fittingfunc},\nand simply set $\\tilde{M}_{\\rm fit} = 1$.\n\nSince we have not followed the evolution of the dense core after the \nouter diffuse gas is photoevaporated, the resulting \n$\\tilde{M}_{\\rm fit}$ does not strongly depend on metallicity.\nThe photoevaporation becomes inefficient in the late phase when the concentrated core is directly exposed to the UVB radiation, because of its small geometrical cross section (cf.~\\eqnref{eq:masslosseq}).\nIn order to follow the late phase evolution more accurately, \nwe will need to run simulations with\na much higher spatial resolution so that the small core can be fully resolved.\nHowever, we expect that the mass evolution would not differ significantly \nfrom those obtained in this section, \nbecause the remaining mass fraction is already small in the late phase. \n\n\n\n\n\n\n\\section{Discussion} \t\\label{sec:discussion}\n\n\\subsection{Star Formation in Metal-Enriched Halos}\n\\label{sec:starformation}\n\n\tAs the surrounding gas cools and falls on to the center, the\n\tcentral gas density further increases but the \n temperature remains low, \n and thus the dense core can be gravitationally unstable to induce \n star formation.\n\n\n\n\n\n\n\n\n\n\tIn metal-enriched halos, cooling by metal atoms\/ions and by dust grains\n\tenable the gas to condense even under the influence of strong UV radiation.\n\tHence, metal enrichment can effectively lower the mass threshold of star-forming halos \\citep[$\\sim 10^5$--$10^6{\\rm \\, M_{\\odot}}$; e.g.,][]{1997_Tegmark, 2001_Machacek, 2002_BrommCoppiLarson, 2003_YoshidaAbelHernquist}. \n In this section, we study the relation between metallicity and the minimum star-forming halo mass, $M_{\\rm min}$. \n\t\n\tWe assume that star formation occurs when an enclosed gas mass\n\t\\eq{\n\t\tM_{\\rm enc} ( r) = \\int_{\\leq r} \\rho_{\\rm b} \\, {\\rm d} V .\n\t\t\\label{eq:enclosedmass}\n\t}\n\texceeds the Bonnor-Ebert mass \\citep{1955_Ebert, 1956_Bonnor},\n\t\\eq{\n\t\tM_{\\rm BE} ( r) \\simeq 1.18 \\frac{\\bar{c_{\\rm s}}^4}{\\sqrt{P_{\\rm c} G^3}}, \\label{eq:bemass}\n\t}\n\tat a spherical radius of $r$.\n\tHere, $\\bar{c_{\\rm s}}$ is an average sound speed of gas,\n\tand $P_{\\rm c}$ is confining pressure. \n\tWe calculate $\\bar{c_{\\rm s}}$ and $P_{\\rm c}$ \n\tby integrating the pressure within an enclosed volume $V$ and over the corresponding enclosing surface $\\partial V$, respectively,\n\t\\gathering{\n\t\t\\bar{c_{\\rm s}} (r) = \\sqrt{ M_{\\rm enc} ^{-1} \\int_{\\leq r }P \\, {\\rm d} V } \\\\ \n\t\tP_{\\rm c} (r) = \\dfrac{1 }{4 \\pi r^2} \\int_{\\partial V} P \\, {\\rm d} S.\n\t\t\\label{eq:confiningpressure}\n\t}\n\tSince we are interested in star formation within dark halos,\n\twe set the enclosing radius to the scale radius (core radius), $r_{\\rm s} \\equiv r_{\\rm vir}\/c_{\\rm N}$, in \\eqnref{eq:enclosedmass}--\\eqnref{eq:confiningpressure}. \n\tWe regard a halo as star-forming if it has $M_{\\rm enc}(r_{\\rm s})\/M_{\\rm BE}(r_{\\rm s})$ larger than unity at a certain point during the evolution. \n\t\\fref{fig:starformation} shows star-forming halos and non-star-forming halos defined by this condition. \n\tWe also provide a fit to the resulting $M_{\\rm min}$ as a function of $z_{\\scalebox{0.6}{\\rm IN}}$, $J_{21}$ and $Z$ in \\appref{sec:fitminimum}. 
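\n\nIn practice, the criterion can be evaluated on spherically averaged profiles; the following Python sketch is a minimal, illustrative discretisation of \\eqnref{eq:enclosedmass}--\\eqnref{eq:confiningpressure}, and the toy power-law profile used in the example call is not taken from the simulations.\n\\begin{verbatim}\nimport numpy as np\nG = 6.674e-8       # cm^3 g^-1 s^-2\nk_B = 1.380649e-16 # erg K^-1\nm_H = 1.6726e-24   # g\n\ndef enc_to_be_ratio(r, rho, P, r_s):\n    # discretised M_enc, mean sound speed, confining pressure and\n    # Bonnor-Ebert mass for spherically averaged profiles (cgs units)\n    dV = 4.0 * np.pi * r**2 * np.gradient(r)   # shell volumes\n    inside = r <= r_s\n    M_enc = np.sum((rho * dV)[inside])\n    cs_bar = np.sqrt(np.sum((P * dV)[inside]) * M_enc**-1)\n    P_c = np.interp(r_s, r, P)                 # surface term for a spherical shell\n    M_BE = 1.18 * cs_bar**4 * (P_c * G**3)**-0.5\n    return M_enc * M_BE**-1\n\n# toy isothermal (T ~ 100 K) profile, purely illustrative\nr = np.logspace(17.0, 21.0, 400)               # cm\nrho = 1e-22 * (r * r[0]**-1)**-2.0             # g cm^-3\nP = rho * k_B * 100.0 * (1.22 * m_H)**-1       # erg cm^-3\nprint(enc_to_be_ratio(r, rho, P, 1e20))\n\\end{verbatim}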
\n\t\\begin{figure*}\n\n\t \\centering\n\t \\includegraphics[clip, width = \\linewidth]{Haloevap_paperfigsGetMmin2.pdf}\n\t \\caption{Star formation vs. photoevaporation for all our runs.\n\t The left, middle, and right panels correspond to $z_{\\scalebox{0.6}{\\rm IN}} = 10, 15, 20$, respectively. \n\t The horizontal and vertical axes are halo mass and $J_{21}$ in all the panels. \n\t Each of the three panels is divided into $11\\times 3$ rectangular blocks indicating combinations of $M$ and $J_{21}$. Each block is further divided into 5~square sections to show the dimension of metallicity; the corresponding metallicity is $Z = 0,10^{-6},10^{-5},10^{-4},10^{-3}\\,Z_{\\odot}$ from top to bottom.\n We represent star-forming halos by filling the sections with colored squares as\n\t $10^{-3}\\,Z_{\\odot}$ (navy), $10^{-4}\\,Z_{\\odot}$ (purple), $10^{-5}\\,Z_{\\odot}$ (pink), $10^{-6}\\,Z_{\\odot}$ (orange), $0\\,Z_{\\odot}$ (yellow). \n\t The vertical blue dashed line shows the molecular cooling limit, which we derive using an expression in \\cite{2013_Fialkov} \\citep[cf.][]{2001_Machacek, 2012_Fialkov}, for reference; note that we have not taken into account the correction to the molecular cooling limit due to the relative velocity between baryons and cold dark matter. Incorporating the correction does not significantly change the limit mass for the redshifts of interest here.\n\t }\n\t \\label{fig:starformation}\n\t\\end{figure*}{}\n\t\n\tIn molecular cooling halos, the gas cools to satisfy the unstable condition, $M_{\\rm enc}\/M_{\\rm BE}>1$, at any metallicity including the primordial case. \n\tThe halos retain more than 10\\% of the initial gas.\n\tThe remaining gas mass is larger for higher halo mass and for lower $J_{21}$. \n\tIt is nearly unity for atomic cooling halos ($T_{\\rm vir} > 10^4{\\rm \\, K}$).\n\t\n\tWe find a strong impact of metal enrichment in halos whose mass is lower than the atomic cooling limit. \n The gas cools via \\ce{H2} line emission faster than the bulk of the gas is photoevaporated. \n\tThis effect is clearly seen in higher-metallicity halos (\\fref{fig:starformation}).\n\tInterestingly, the minimum collapse mass is lowered \n\teven with very small metallicities ($Z \\lesssim 10^{-5}\\,Z_{\\odot}$),\n\tand becomes as small as $M_{\\rm min} \\sim 10^4{\\rm \\, M_{\\odot}}$ with $Z \\gtrsim 10^{-4}\\,Z_{\\odot}$. \n\t\n \\citet{2014_Wise} show that star formation is active in the metal-cooling halos, and that the formed stars can provide up to 30\\% of the ionizing photons responsible for reionization. \n Metal-cooling halos are a {\\it heavy} analog of molecular cooling halos and have masses slightly below the atomic cooling limit ($\\sim 10^8{\\rm \\, M_{\\odot}}$). We have found a {\\it light} analog of molecular cooling halos in which the gas cools by \\ce{H2} molecules that are formed by grain-catalyzed reactions. \n Recent numerical simulations suggest the formation of massive or even very massive stars in metal-enriched halos \\citep{2016_Chiaki, 2018_Fukushima}. \n These stars can allow the beginning of reionization to occur earlier than without the metal effects, although this is unlikely to change the redshift at which reionization completes \\citep{2018_Norman}.\n \t\n\n\n\\subsection{Implications for $21{\\, \\rm cm}$ Line Observations}\nThe hyperfine spin-flip emission of atomic hydrogen (the so-called $21{\\, \\rm cm}$ line) is a promising probe of the neutral IGM in the Epoch of Reionization. 
\nBecause the strength of the 21-cm signal depends crucially on astrophysical processes at the early epochs, observations of the emission\/absorption against the CMB will provide invaluable information on star formation and the physical state of the IGM. \n\n Semi-numerical models have been used to predict the large-scale fluctuations of the $21{\\, \\rm cm}$ signal \\citep[e.g.,][]{Mesinger:2011,Fialkov:2014b}.\n Results of such models are often utilized to derive upper limits on high-redshift astrophysics \\citep[e.g.,][]{Monsalve:2018, Monsalve:2019,Ghara:2020,Mondal:2020, Greig:2020}, and to make forecasts for ongoing measurements of the 21-cm power spectrum with experiments such as HERA \\citep{DeBoer:2017}, LOFAR \\citep[e.g.,][]{Mertens:2020}, MWA \\citep[e.g.,][]{Trott:2020}, LWA \\citep{Eastwood:2019}, and the future SKA \\citep{Koopmans:2015}, and of the 21-cm sky-averaged (global) signal using LEDA \\citep{Price:2018}, SARAS \\citep{Singh:2018}, EDGES \\citep{Bowman:2018}, PRIZM \\citep{philip19}, MIST\\footnote{http:\/\/www.physics.mcgill.ca\/mist\/}, and REACH\\footnote{https:\/\/www.kicc.cam.ac.uk\/projects\/reach}. \n \n The speed and convenience of the semi-numerical methods come along with their poor spatial resolution, which is compensated for by the extensive use of sub-grid models. \nFor example, \\citet{2013_Sobacchi, 2016_Cohen} study the effect of the UV\nbackground radiation in terms of a gas cooling threshold $M_{\\rm cool}$. \nHalos more massive than $M_{\\rm cool}$ are regarded as star-forming halos. \nThe simple prescription in previous studies, however, does not take into account the effect of metal enrichment. \nAs we have shown in \\secref{sec:starformation}, star formation can occur in metal-enriched minihalos even during reionization.\n\\cite{2016_Cohen} have shown that star formation in such metal-enriched minihalos affects the $21{\\, \\rm cm}$ signal from high redshifts.\nIn particular, signatures of baryon acoustic oscillations (BAO) imprinted on the $21{\\, \\rm cm}$ signal are amplified and can possibly be detected over a wide range of redshifts. \n\\cite{2016_Cohen} set a threshold mass for cooling (and thus star formation) similar to that of molecular cooling halos. \nInterestingly, this somewhat conservative assumption is well justified by our results, where metal enrichment lowers $M_{\\rm min}$ from the molecular cooling limit by an order of magnitude for $Z \\gtrsim 10^{-4}\\,Z_{\\odot}$ even under the effects of the UVB. Our results support the predicted enhancement and survival of the BAO signature imprinted on $21{\\, \\rm cm}$ signals due to the effects of metal enrichment. \n\n\\input{stellarfeedback}\n\n\\subsection{X-Ray Effects}\nX-rays are attenuated by a larger column $(\\sim 10^{21} {\\, \\rm cm}^{-2})$ compared to EUV \\citep{2000_Wilms}.\nThey can reach halos and pre-ionize\/pre-heat the gas \nbefore the ionization front hits the halos. \nA larger attenuation column also implies larger penetration depths. \nHigher-density photoevaporative flows would be driven \nif the gas temperature increases sufficiently to allow it to escape from the gravitational binding. \nAccordingly, mass-loss rates would be significantly larger than those of EUV-driven photoevaporation. \n\nX-rays may delay the concentration of the self-shielded regions, and thereby star formation, if they efficiently heat the gas. 
\nOn the other hand, X-ray ionization can promote \\ce{H2} formation via the electron-catalyzed reactions: \n\\ce{H + e- -> H- + \\gamma} and \\ce{H- + H -> H2 + e-} \\citep{1996_Haiman, 1999_Bromm, 2003_Glover, 2015_Hummel,2011_Inayoshi,2015_InayoshiTanaka, 2016_Glover, 2016_ReganJohanssonWiseb}. \nIf X-rays are strong, very strong LW intensities are required to photodissociate \\ce{H2} in the entire halo. \nWe expect X-rays to have significant effects on the evolution of irradiated halos and on star formation activities. \nX-ray chemistry is already implemented in our code \\citep{2018_Nakatanib}, and we plan to investigate its influence on halo photoevaporation in the future. \n\n\n\\subsection{Model Limitation}\n\nWe have studied the gas mass evolution for a wide range of model parameters. \nExploring the large volume of the parameter space is essential in order to \nunderstand the evolution of a {\\it population} of photoevaporating halos during reionization. Since we have adopted a few simplifications mainly to save computational time, it is worth examining the limitations of our results. \nWe have fixed the dark matter halo potential throughout our simulations. \nIn practice, halos grow in mass by mergers and accretion, which in turn strengthens the halo's gravity \\citep[e.g.,][]{2013_SobacchiMesinger}. \nThis effect may not be negligible for halos that have a longer mass-loss timescale than the halo growth timescale.\nIn particular, our results indicate that photoevaporation will be significantly suppressed once halos grow to $T_{\\rm vir} \\gtrsim 10^4{\\rm \\, K}$. \nLower mass halos ($T_{\\rm vir} \\ll 10^4{\\rm \\, K}$) disperse quickly, and thus including halo growth is not expected to significantly alter our results. In fact, we find that our results at $Z = 0\\,Z_{\\odot}$ \nare in good agreement with \\cite{2004_Shapiro} and \\cite{2005_Iliev}, \nwhere halo evolution is incorporated in an approximate manner. \n\nThe UVB radiation intensity is also fixed in our simulations.\nIn general, it can vary over time.\nAlthough the UV background intensity is hardly constrained by observations\nfor $z\\gtrsim 5$ \\citep[e.g.,][]{2007_BoltonHaehnelt, 2011_McQuinn}, numerical simulations predict that the UV background builds up over \nseveral hundred million years. As we have shown, $J_{21}$ strongly affects the mass evolution of halos at any metallicity (\\fref{fig:masslossrates}-(c)). \nWe expect that the growth of halos with time partially compensates for the rise in the intensity of the ionizing background; \nwe have shown that halos with similar $\\chi^\\prime$, which is approximately proportional to $J_{21}^{1\/2} M^{-1\/2} (1+z)^{-3}$, evolve in a similar manner (cf.~\\secref{sec:similarities} and Eqs.(\\ref{eq:chi}), (\\ref{eq:modchi})). \n\n\nFinally, the metallicity is also fixed in each individual run of our simulations. The cosmic average metallicity can increase by an order of magnitude on a timescale of $\\sim 1{\\rm \\,Gyr}$ \\citep{2014_MadauDickinson}, \nwhich is much longer than the typical mass-loss timescales that we found in this study. 
\nTime-dependent metallicity might have an effect only on long-lived photoevaporating halos by affecting their thermochemistry.\nHowever, we expect that the metallicity-dependent trend would not significantly differ from what we have reported in this study.\n\n\n\n\n\\section{Conclusions and Summary}\t\\label{sec:conclusions}\nPhotoheating and metal enrichment of halos during the epoch of reionization can affect the star formation efficiency and thus change the course of reionization. The effects of metal enrichment, however, have never been explored systematically in prior works. We have run a suite of hydrodynamics simulations of photoevaporating minihalos ($T_{\\rm vir}\\lesssim10^4{\\rm \\, K}$) irradiated by the UVB, covering a wide range of metallicity, halo mass, UV intensity, and turn-on redshift of UV sources. \n\nOur main findings are summarized as follows: \n\\begin{itemize}\n \\item \n In low-mass minihalos with $T_{\\rm vir} < 10^4{\\rm \\, K}$,\n the gas cools mainly via \\ion{C}{2} and \\ce{H2} line emission to a temperature of $\\lesssim 100{\\rm \\, K}$ if $Z \\gtrsim 10^{-4}\\,Z_{\\odot}$.\n \\ce{H2} molecules are produced through grain-catalyzed reactions in \n the self-shielded, neutral region.\n The cooled gas concentrates towards the potential center to form a dense core. \n \n \\item The evolution of the gas mass is qualitatively the same at any metallicity. The photoevaporation rate decreases after the bulk of the \n diffuse gas is lost. The dispersal of the diffuse gas is completed at an earlier time for halos with higher metallicity. \n\n \n \\item In halos with $T_{\\rm vir} > 20000{\\rm \\, K}$, the gas cools by hydrogen Lyman$\\alpha$ cooling, and thus the overall evolution of photoevaporating halos does not depend on metallicity.\n\n \\item The photoevaporation rate depends only weakly on the turn-on redshift, and it is slightly smaller for higher $z_{\\rm IN}$.\n\n \\item There is a simple scaling relation for the gas mass evolution of photoevaporating minihalos. \n The time evolution is characterized by a parameter ($\\chi^\\prime$) scaling as $\\propto J_{21}^{1\/2} M^{-1\/2} (1+z)^{-3}$. \n This indicates that the obtained evolution applies to any halo with the same $\\chi^\\prime$. \n We give a fit to the gas mass evolution as a function of time. \n \n \\item The concentrating cores of the molecular\/atomic cooling halos are likely a suitable environment for star formation. The efficient cooling of metal-rich halos hastens the concentration, and it results in lowering the molecular cooling limit by a small factor for small metallicities ($Z \\lesssim 10^{-5}\\,Z_{\\odot}$) and by an order of magnitude for metal-rich cases ($Z \\gtrsim 10^{-4}\\,Z_{\\odot}$). \n Stellar feedback from the formed stars may be significant enough to disperse the baryons of the parental molecular cooling halos. \n\n\\end{itemize}\n\nOur study suggests the existence of small-mass, metal-enriched halos\nin which stars are formed even under the influence of the emerging\nUV background radiation.\n\n\n\\acknowledgments %\nWe thank Gen Chiaki and Kana Moriwaki for insightful comments \non this manuscript and technical advice. 
\nRN acknowledges support from Special Postdoctoral Researcher program at RIKEN and from Grant-in-Aid for Research Activity Start-up (19K23469).\nAF is supported by the Royal Society University Research Fellowship.\nThe numerical simulations were carried out on the Cray\nXC50 at the Center for Computational Astrophysics, National\nAstronomical Observatory of Japan.\n\n\n\n\n\\bibliographystyle{aasjournal}\n\n\n\\subsection{Internal Stellar Feedback Effect}\n\n\t\n\tMassive stars formed in metal-enriched halos affect the host halo by UV radiation and stellar winds, and by supernova explosions.\n\tThen the halo gas would not only photoevaporate by external UV radiation\n\tbut also can be dispersed by these internal processes.\n\t\n\tThe stellar feedback is effective if\n\t(i) the enclosed gas mass $M_{\\rm enc}$ (\\eqnref{eq:enclosedmass})\n\texceeds the Bonnor-Ebert mass $M_{\\rm BE}$ (\\eqnref{eq:bemass});\n\t(ii) stellar feedback energy deposited by massive stars, $E_{\\rm dep}$,\n\tis larger than the gravitational binding energy of the neutral gas\n\tin the self-shielded regions\n\t\\eq{\n\t\tE_{\\rm dep} \\gtrsim \\frac{G (M + M_{\\rm s}) }{r_{\\rm s} } M _{\\rm s},\n\t}\n\twhere $r_{\\rm n}$ is the size of the neutral gas clump.\n\tWe first consider only the supernova explosion energy for simplicity. \n\tThe deposited energy by other feedback processes can be easily accounted for\n\tby increasing $E_{\\rm dep}$ by a suitable factor.\n\t\n\tThe cold gas that satisfies the condition~(i) is assumed to form stars with \n\ta star formation efficiency of $c_*$\n\tand with an initial mass function, $\\Psi(M_*)$.\n\tLet $\\epsilon_{\\rm SN}$ be the average \n\tsupernova explosion energy. \n\tThe deposited feedback energy is estimated as \n\t\\eq{\n\t\tE_{\\rm dep} = c_* M_{\\rm cold} \\, \\epsilon_{\\rm SN}\n\t\t\\braket{\\int M_* \\Psi {\\rm d} M_*}^{-1}\n\t\t{\\int_{ M_{\\rm th} }\\Psi {\\rm d} M_*},\n\t}\n\twhere $M_{\\rm cold}$ is the mass of enclosed cold gas,\n\tand $M_{\\rm th}$ is a threshold stellar mass above which stars cause supernova explosion. \n\tWe adopt $c_* = 0.1$, $\\epsilon_{\\rm SN} = 10^{51} {\\rm \\, erg}$,\n\t$M_{\\rm th} = 8{\\rm \\, M_{\\odot}}$, \n\tand use the initial mass function of \\cite{2003_Chabrier}. \n\t\n\n\n\nWhen the conditions~(i) and (ii) are met and stars are formed \nover a free-fall time, \nthe self-shielded region is expected to evolve differently \nfrom what has been shown in \\secref{sec:results}.\nIf the feedback effects are strong enough to disrupt the entire halo, \nthe halo evolution described in \\secref{sec:results} \nis valid only up to one free-fall time. \n\\fref{fig:masslossZdependence_stars} shows the same plots as \\fref{fig:masslossrates}-(a), \nbut the lines extend to the time when the stellar feedback is assumed to be effective.\n\\begin{figure}[h]\n\\begin{center}\n\\includegraphics[clip, width = \\linewidth]{Haloevap_metallicity_dependenceMassEvolution_showstarformation.pdf}\n\\caption{\n\t\tThe same plot as \\fref{fig:masslossrates}-(a), \n\t\tbut the lines are terminated at the time of star formation and feedback (marked with star symbols) \n\t\tfor halos that meet the conditions~(i) and (ii). \n\t\t\t\t}\n\\label{fig:masslossZdependence_stars}\n\\end{center}\n\\end{figure}\nMassive, metal-rich halos are likely to be self-destructed before the gas is lost via photoevaporation. 
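\n\nFor reference, the supernova number per unit stellar mass that enters the estimate of $E_{\\rm dep}$ above can be evaluated numerically as in the Python sketch below. The single-star form of the \\cite{2003_Chabrier} IMF adopted here (a lognormal with $m_c = 0.08{\\rm \\, M_{\\odot}}$ and $\\sigma = 0.69$ below $1{\\rm \\, M_{\\odot}}$, joined continuously to a Salpeter-like power law above it) and the integration range $0.1$--$100{\\rm \\, M_{\\odot}}$ are illustrative assumptions of this sketch.\n\\begin{verbatim}\nimport numpy as np\nfrom scipy.integrate import quad\n\ndef chabrier_dndm(m):\n    # single-star Chabrier (2003) IMF (number per unit mass interval,\n    # arbitrary normalisation): lognormal below 1 Msun, m**-2.3 above\n    if m <= 1.0:\n        return np.exp(-0.5 * (np.log10(m * 0.08**-1) * 0.69**-1)**2) * m**-1\n    A = np.exp(-0.5 * (np.log10(1.0 * 0.08**-1) * 0.69**-1)**2)\n    return A * m**-2.3\n\nm_lo, m_hi, m_th = 0.1, 100.0, 8.0\nmass_formed = (quad(lambda m: m * chabrier_dndm(m), m_lo, 1.0)[0]\n               + quad(lambda m: m * chabrier_dndm(m), 1.0, m_hi)[0])\nn_sn = quad(chabrier_dndm, m_th, m_hi)[0]\nn_sn_per_msun = n_sn * mass_formed**-1   # roughly 0.01 SNe per Msun formed\n\ndef E_dep_erg(M_cold_msun, c_star=0.1, eps_sn=1e51):\n    # deposited energy: c_* M_cold eps_SN times the number of stars\n    # above M_th per unit stellar mass formed\n    return c_star * M_cold_msun * eps_sn * n_sn_per_msun\n\nprint(n_sn_per_msun, E_dep_erg(1e5))\n\\end{verbatim}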
\nNote that the fraction of the cold gas that is converted to stars is small.\n\nIn metal-free halos, the stellar feedback can also be important, especially for massive halos.\nWe expect that the stellar feedback is unimportant\nin low-mass halos with $T_{\\rm vir} \\lesssim 100{\\rm \\, K}$ ($M\\lesssim 10^{4.5} \\,{\\rm \\, M_{\\odot}}$ at $z = 10$) for any metallicity. In these halos, most of the gas is quickly lost by photoevaporation before star formation.\nIn conclusion, \nthe deposited energy due to the stellar feedback can disperse the gas from the host halo,\nsimilarly to the well-known feedback effect in dwarf galaxies devoid of \\ion{H}{1} content \n\\citep{2009_Grcevich, 2014_Spekkens}.\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzfclp b/data_all_eng_slimpj/shuffled/split2/finalzzfclp new file mode 100644 index 0000000000000000000000000000000000000000..1443efea1861fa1f384d360621a4192fcc67e96e --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzfclp @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nIn classical physics, a wave is described by well-defined amplitude and phase, and both quantities can be measured simultaneously with arbitrary accuracy. In the quantum realm the situation is completely different and up to date a well-defined Hermitian phase operator has been elusive\\cite{Dirac,London,Susskind,Louissel,Carruthers,Nieto,Turski,Levy,Loudon,Pegg2,Pegg,Vaccaro3,Bialynicki,Schleich,Perinova,Lynch,Vaccaro,Moya,Royer}.\nThe lack of such a phase operator has yield many scientists to propose diverse approaches to describe quantum phase. In this regard, one may mention the London-Susskind-Glogower phase operators, the variants of phase space representations, ranking from number-phase Wigner functions to radially integrated quasiprobability distributions \\cite{Garraway,Otro,Vaccaro2}. Perhaps the most prominent approach is the so-called Pegg-Barnett (PB) phase operator\\cite{Pegg,Barnett}. This PB phase operator is defined in a finite-dimensional state space and due to this finiteness one can define phase states in a rigorous way. The success of the PB formalism lies on the fact that all expectation values of physical variables in a finite-dimensional Hilbert space give real numbers which depend parametrically on the dimension of the state space \\cite{Gantsog, Varro}. Certainly, many realistic physical systems are discrete and finite, as a result, the PB theory can be readily used to investigate phase-dependent quantum systems.\n\\\\\nAlong those lines, optics occupies a notable place since discrete optical systems can be created either in free space or on-chip, e.g. waveguide lattices \\cite{abo}. In fact, over the past decade, the tremendous progress in fabrication and characterization of photonic structures has allowed us to create arrays of evanescently coupled waveguides with a number of channels ranging from a few to a few hundred \\cite{Christodoulides,Szameit}. The resulting discrete diffraction makes such coupled configurations a perfect paradigm for the realization of quantum particle tunneling in one or two dimensional lattices, and it permits the observation of quantum and condensed matter phenomena in macroscopic integrated systems using classical and quantum light \\cite{Leija,Longhi,Weimann,Grafe,Leija2}. 
In the waveguides, the local refractive index and the width of the channels determine the on-site potentials (propagation constants), while the tunneling amplitude (coupling coefficients) from site to site is adjusted by changing the separation distance between adjacent waveguides \\cite{Szameit2}.\n\\\\\nThe aim of the present work is to show that the so-called Pegg-Barnett phase operator can be obtained by the application of the discrete Fourier transform to the number operator defined in a finite-dimensional Hilbert space. Then, we elucidate that the structure of the London-Susskind-Glogower phase operator is contained into the Hamiltonian of circular waveguide arrays. As a result, light dynamics arrising in such waveguide configurations will feature properties akin to finite phase operators.\n\n\n\\section{Fourier transform and the phase operator}\n\nIn quantum mechanics the position operator ${x}$ and momentum operator $p$ are canonically conjugated variables and they are related via the Fourier operator \\cite{Agarwal,Fan}\n\\begin{eqnarray}\n{p} = i^{\\hat{n}}x(-i)^{\\hat{n}}.\n\\end{eqnarray}\nHere ${\\hat{n}}=a^{\\dagger}a$ is the number operator, with $a$ and $a^{\\dagger}$ being the annihilation and creation operators, respectively. In terms of $a$ and $a^{\\dagger}$, $x$ is written as\n\\begin{eqnarray}\nx=\\frac{a+a^{\\dagger}}{\\sqrt{2}}.\n\\end{eqnarray}\nIn similar manner, $p$ becomes\n\\begin{eqnarray}\\label{PtoQ}\n{p} = e^{i\\frac{\\hat{n}\\pi}{2}}\\frac{a+a^{\\dagger}}{\\sqrt{2}}e^{-i\\frac{\\hat{n}\\pi}{2}}=-i\\frac{a-a^{\\dagger}}{\\sqrt{2}}.\n\\end{eqnarray}\n\\\\\nAccording to Eq.(\\ref{PtoQ}), the momentum operator is obtained from the position operator through a Fourier transform.\n\\\\\nFollowing these ideas, we now define an operator $\\Phi_s$ obtained by the application of the $(s+1)$-dimensional DFT to the finite number operator $N_s$\n\\begin{eqnarray} \\label{Pegg}\n\\Phi_s \\propto {\\cal F}_s N_s{\\cal F}_s^{-1}.\n\\end{eqnarray}\nHere ${\\cal F}_s$ represents the DFT, ${\\cal F}_s^{-1}$ its inverse, and the number operator $N_s$ is defined as\n\\begin{eqnarray}\n N_s=\\sum_{k=0}^s k|k\\rangle\\langle k|.\n\\end{eqnarray}\nIn matrix form the number operator is written as\n\\begin{equation}\n {N_s}= \\left(\n \\begin{array}{ccccc}\n 0 & 0 & 0& \\cdots & 0 \\\\\n 0 & 1 & 0&\\cdots & 0 \\\\\n 0 & 0 & 2&\\cdots & 0\\\\\n \\cdots&&&&\\vdots\\\\\n 0& 0 & \\cdots & 0& s\\\\\n \\end{array}\n \\right).\n\\end{equation}\nThe DFT is given by the Vandermonde matrix \\cite{Soto}\n\\begin{equation}\n {\\cal F}_s= \\frac{1}{\\sqrt{s+1}} \\left(\n \\begin{array}{cccc}\n 1 & 1 & \\cdots & 1 \\\\\n \\lambda_0 & \\lambda_1 & \\cdots & \\lambda_s \\\\\n \\lambda_0^2 & \\lambda_1^2 & \\cdots & \\lambda_s^2\\\\\n \\cdots\\\\\n \\lambda_0^{s} & \\lambda_1^{s} & \\cdots & \\lambda_s^{s}\\\\\n \\end{array}\n \\right)\n\\end{equation}\nwith\n\\begin{equation}\n\\lambda_j = \\exp \\left[ i \\frac{2\\pi}{s+1} j \\right], \\qquad j=0,1,2,\\cdots,s. \\label{eigen}\n\\end{equation}\nNote that, because ${\\cal F}_s^\\dagger={\\cal F}_s^{-1}$ the phase operator defined in Eq.(\\ref{Pegg}) is Hermitian. 
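\n\nThis construction is straightforward to verify numerically. The following Python sketch (for the illustrative choice $s=5$) builds ${\\cal F}_s$ and $N_s$ explicitly, confirms that ${\\cal F}_s N_s {\\cal F}_s^{\\dagger}$ is Hermitian, and checks that its eigenvalues are simply $0,1,\\dots,s$, as expected from the unitarity of ${\\cal F}_s$; the overall $2\\pi\/(s+1)$ normalization is introduced below.\n\\begin{verbatim}\nimport numpy as np\n\ns = 5\nd = s + 1\nn = np.arange(d)\n# DFT matrix with entries exp(i 2 pi n k (s+1)**-1) * (s+1)**-0.5\nF = np.exp(2j * np.pi * np.outer(n, n) * d**-1) * d**-0.5\nN = np.diag(n.astype(float))\n\nPhi = F @ N @ F.conj().T\nprint(np.allclose(F @ F.conj().T, np.eye(d)))   # F is unitary\nprint(np.allclose(Phi, Phi.conj().T))           # F N F^dagger is Hermitian\nprint(np.linalg.eigvalsh(Phi).round(12))        # eigenvalues 0, 1, ..., s\n\\end{verbatim}\n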
In Dirac notation the DFT operator reads\n\\begin{eqnarray}\n{\\cal F}_s = \\frac{1}{\\sqrt{s+1}}\\sum_{n=0}^s \\sum_{k=0}^s |n\\rangle\\langle k|e^{ i \\frac{2\\pi}{s+1} nk } .\n\\end{eqnarray}\nFrom the above equations one can readily see that the DFT of the number operator is\n\\begin{eqnarray}\n{\\cal F}_s N_s {\\cal F}_s^{\\dagger}= \\frac{1}{{s+1}}\\sum_{n=0}^s \\sum_{k=0}^s |n\\rangle\\langle k| \\sum_{m=0}^s me^{ i \\frac{2\\pi}{s+1} (n-k)m }. \\label{transformada}\n\\end{eqnarray}\nBy defining the phase states\n\\begin{equation}\n|\\theta\\rangle =\\frac{1}{\\sqrt{1+s}}\\sum_{n=0}^se^{in\\theta}|n\\rangle,\n\\end{equation}\nwe can rewrite Eq.(\\ref{transformada}) in a more compact form\n\\begin{eqnarray}\n{\\cal F}_s N_s {\\cal F}_s^{\\dagger}= \\sum_{m=0}^s m|\\theta_m\\rangle\\langle \\theta_m|,\n\\end{eqnarray}\nwith $\\theta_m=\\frac{2m\\pi}{s+1}$. Normalization of Eq.(\\ref{transformada}) yields \\cite{Pegg2}\n\\begin{eqnarray}\\label{PO}\n{\\cal F}_s \\frac{2\\pi N_s}{s+1} {\\cal F}_s^{\\dagger}= \\sum_{m=0}^s \\theta_m|\\theta_m\\rangle\\langle \\theta_m|,\n\\end{eqnarray}\nwhich is the so-called Pegg-Barnett phase operator with phase reference set to zero. This demonstrates that indeed the PB phase operator arises as the DFT of the finite number operator.\n\\\\\nWe can further work on Eq.(\\ref{transformada}) to find a simpler form of the transformation\n\\begin{eqnarray}\n{\\cal F}_s N_s {\\cal F}_s^{\\dagger}= \\frac{1}{{s+1}}\\sum_{n=0}^s |n\\rangle\\langle n| \\sum_{m=0}^s m+\\frac{1}{{s+1}}\\sum_{n\\ne k}^s |n\\rangle\\langle k| \\sum_{m=0}^s me^{ i \\frac{2\\pi}{s+1} (n-k)m }\n\\end{eqnarray}\nwhich gives\n\\begin{eqnarray}\n{\\cal F}_s N_s {\\cal F}_s^{\\dagger}= \\frac{s}{{2}}+\\frac{1}{{s+1}}\\sum_{n\\ne k}^s |n\\rangle\\langle k| \\sum_{m=0}^s me^{ i \\frac{2\\pi}{s+1} (n-k)m }.\n\\end{eqnarray}\nNote that the last sum can be cast in a closed form\n\\begin{eqnarray}\n\\sum_{m=0}^s me^{ i \\frac{2\\pi}{s+1} (n-k)m }=-i\\frac{s+1}{2\\sin(\\frac{\\pi}{s+1}[n-k])}e^{i\\frac{2s+1}{s+1}\\pi(n-k)}.\n\\end{eqnarray}\nThus, we obtain\n\\begin{eqnarray}\n{\\cal F}_s N_s {\\cal F}_s^{\\dagger}= \\frac{s}{{2}}+-\\frac{i}{2}\\sum_{n\\ne k}^s \\frac{|n\\rangle\\langle k|e^{i\\frac{2s+1}{s+1}\\pi(n-k)} }{\\sin(\\frac{\\pi}{s+1}[n-k])}.\n\\end{eqnarray}\nAfter some algebra we finally find\n\\begin{eqnarray}\\label{CF}\n{\\cal F}_s N_s {\\cal F}_s^{\\dagger}= \\frac{s}{{2}}+(s+1)\\sum_{n\\ne k}^s \\frac{|n\\rangle\\langle k|}{e^{i\\frac{2\\pi}{s+1}[n-k]}-1} .\n\\end{eqnarray}\n\n\\section{Circular waveguide arrays and the London-Susskind-Glogower phase operator}\n\nIn this section we show that the Hamiltonian describing light propagation in circular waveguide arrays explicitly contains the structure of the London-Susskind-Glogower (exponential) phase operator. 
To do so, we start by considering an array of waveguides in a circular configuration and let the individual guides interact only with their first neighbours; the field in each waveguide then obeys the following system of differential equations\n\\begin{eqnarray}\\label{sistema1}\n i \\frac{d E_0}{dz}&=\\gamma \\left( E_s + E_1\\right)\\nonumber \\\\\n i \\frac{d E_{n}}{dz}&=\\gamma \\left( E_{n-1} + E_{n+1}\\right), \\qquad n=1, ..., s-1,\\\\\n i \\frac{d E_s}{dz}&=\\gamma \\left( E_{s-1} + E_0\\right)\\nonumber\n\\end{eqnarray}\nHere, $z$ represents the propagation distance and $\\gamma$ the coupling constant.\nBy introducing the matrix\n\\begin{equation} \\label{London1}\n {V_s}= \\left(\n \\begin{array}{cccccc}\n 0 & 1 & 0& \\cdots & 0&0 \\\\\n 0 & 0 & 1&0 &\\cdots & 0 \\\\\n 0 & 0 & 0&\\cdots &1& \\vdots\\\\\n 0&0&\\cdots&0&0&1\\\\\n 1& 0 & \\cdots & 0& 0&0\\\\\n \\end{array}\n \\right),\n\\end{equation}\nwe can cast Eq.~(\\ref{sistema1}) as follows\n\\begin{equation}\\label{sistema}\n i \\frac{d \\vec{E}}{dz}=\\gamma \\left(V_s+V_s^\\dagger\\right)\\vec{E} ,\n\\end{equation}\nwhere $V_s^\\dagger$ represents the transpose of $V_{s}$, and\n\\begin{equation}\n \\vec{E}=\\left(\n \\begin{array}{lll}\n E_{0} \\\\\n E_{1} \\\\\n \\vdots \\\\\n E_{s}\n \\end{array}\n \\right).\n\\end{equation}\nTherefore, the field amplitudes over the entire array are given by the formal solution\n\\begin{equation}\n \\vec{E}(z)=\\exp\\left[-i \\gamma z \\left( V_s + V_s^\\dagger \\right) \\right] \\vec{E}(0),\n\\end{equation}\nwhere $\\vec{E}(0)$ represents the initial optical field.\nSince $V_s$ and $V_s^\\dagger$ commute and since they are the inverse of each other, we can write the above solution as\n\\begin{equation}\n \\vec{E}(z)=\\exp\\left[ -\\gamma z \\left( iV_s - \\frac{1}{iV_s} \\right) \\right] \\vec{E}(0).\n\\end{equation}\nHence, by using the generating function of the Bessel functions of the first kind\n\\begin{equation}\n \\exp\\left[\\frac{x}{2}\\left(t-\\frac{1}{t}\\right)\\right]=\\sum_{n=-\\infty}^{\\infty}t^{n}J_{n}(x),\n\\end{equation}\none can show that the field amplitude is given by\n\\begin{equation}\\label{field1}\n \\vec{E}(z)=\\sum_{n=-\\infty}^{\\infty} i^n J_n (-2\\gamma z) (V_s)^n \\vec{E}(0).\n\\end{equation}\nBefore discussing the properties of this solution, we note that by using Dirac notation $V_{s}$ acquires the form\n\\begin{equation}\n {V_s}= \\sum_{n=0}^{s-1} |n\\rangle\\langle n+1| +|s \\rangle\\langle 0|,\n\\end{equation}\nwhich is the London-Susskind-Glogower phase operator defined in an $(s+1)$-dimensional Hilbert space.\\\\\nIn what follows we show the functional relationship between the matrix $V_s$ and the PB phase operator. To do so, we consider the eigenvalues of $V_s$, given by Eq.(\\ref{eigen}), which allow us to compute any function of $V_s$ through the corresponding Vandermonde matrices\\footnote{In this case the similarity (transformation) matrix and the Vandermonde matrix are the same.}\n\\begin{eqnarray}\nf(V_s) ={\\cal F}_s f(D_s) {\\cal F}_s^{\\dagger} \\label{function},\n\\end{eqnarray}\nwhere $D_s$ is a diagonal matrix having the eigenvalues of $V_s$ as elements\n\\begin{equation}\n {D_s}= \\left(\n \\begin{array}{ccccc}\n 1 & 0 & 0& \\cdots & 0 \\\\\n 0 & e^{i \\frac{2\\pi}{s+1} } & 0&\\cdots & 0 \\\\\n 0 & 0 & e^{i \\frac{4\\pi}{s+1} }&\\cdots & 0\\\\\n \\cdots&&&&\\vdots\\\\\n 0& 0 & \\cdots & 0& e^{i \\frac{2s\\pi}{s+1} }\\\\\n \\end{array}\n \\right),\n\\end{equation}\nor in Dirac notation\n\\begin{eqnarray}\n D_s= \\sum_{n=0}^s \\exp \\left[ i 
\\frac{2\\pi}{s+1} n \\right] |n\\rangle\\langle n|.\n\\end{eqnarray}\nFrom Eq.(\\ref{function}) it is clear that\n\\begin{eqnarray}\n\\ln(V_s) ={\\cal F}_s \\sum_{n=0}^s \\ln \\left(\\exp \\left[ i \\frac{2\\pi}{s+1} n \\right] \\right) |n\\rangle\\langle n| {\\cal F}_s^{\\dagger},\n\\end{eqnarray}\nor equivalently\n\\begin{eqnarray}\\label{Phi}\n\\ln(V_s) =i\\frac{2\\pi}{s+1}{\\cal F}_s N_s {\\cal F}_s^{\\dagger}.\n\\end{eqnarray}\nWe note that the right hand side of Eq.~(\\ref{Phi}) is the PB phase operator, see Eq.~(\\ref{PO}). As a result, by assuming $V_s=\\exp(i\\Phi_s)$, which is the exponential of the phase operator, we obtain\n \\begin{eqnarray}\n\\Phi_s =\\frac{\\pi s}{s+1}+ 2\\pi\\sum_{n\\ne k}^s \\frac{|n\\rangle\\langle k|}{e^{i\\frac{2\\pi}{s+1}[n-k]}-1},\n\\end{eqnarray}\nwhere we have used Eq.(\\ref{CF}) to obtain a closed form expression. This indicates that unitary transformations performed by circular waveguide arrays over discrete light fields will exhibit the same dynamics as the action of the phase operators over finite number states $N_{s}$. In the waveguide array each site represents a number state and the evolution operator is represented by the waveguide array itself. \\\\\nNow we turn our attention to the field solution Eq.~(\\ref{field1})\n\\begin{equation}\n \\vec{E}(z)=\\sum_{n=-\\infty}^{\\infty} J_n (-2\\gamma z) i^nV_s^n \\vec{E}(0).\n\\end{equation}\nThe above equation may be rewritten as\n\\begin{equation}\n \\vec{E}(z)=\\sum_{n=0}^{\\infty} J_n (-2\\gamma z) i^nV_s^n \\vec{E}(0)+\\sum_{n=1}^{\\infty} J_{-n} (-2\\gamma z) i^{-n}V_s^{\\dagger n}\\vec{E}(0);\n\\end{equation}\ntaking into account that $V^{s+1}=V^{\\dagger (s+1)}=1$, we find that the sums become finite, for instance\n\\begin{equation}\n \\sum_{n=0}^{\\infty} J_n (-2\\gamma z) i^nV_s^n =\\sum_{n=0}^s F_nV_s^n\n \\end{equation}\nwith $F_n=\\sum_{k=0}^s i^{k(s+1)+n}J_{k(s+1)+n}(-2\\gamma z)$. In addition, the property $V^{s+1}=V^{\\dagger (s+1)}=1$ implies that self-imaging processes can occur in these types of systems. As an example, in Fig.~(\\ref{fig1}) we show the intensity evolution when light is injected into one of the guides of a circular waveguide array having six channels.\n\\begin{figure}[h!]\n\\begin{center}\n\\includegraphics[scale=0.3]{ev.png}\n\\caption{Light intensity evolution in a circular waveguide array having six sites. In this simulation we assume a circular array of identical waveguides and equal coupling coefficients $\\gamma=1$. The blue curve (online only) depicts the intensity evolution along the excited site. Note that the intensities are periodic and that at $z=\\pi$ the intensity of the excited channel becomes one.}\\label{fig1}\n\\end{center}\n\\end{figure}\n\\section{Conclusions}\nBy noting that position and momentum operators are related via the Fourier operator, we propose that the $(s+1)$-dimensional phase and number operators are related by the discrete Fourier transform, which naturally leads to the Pegg-Barnett phase operator. We have given a way to model functions of the Pegg-Barnett phase operator by propagating classical light in circular waveguide arrays.\n\\bigskip\n\\bigskip\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction} \\label{sec:intro}\nPhotometric redshift (photo-$z$\\xspace) estimation continues to be an active research area as it plays a major role in solving the big questions in cosmology. Redshifts provide radial information (distance) to the traditional two-dimensional sky maps of galaxies. 
They are traditionally determined through spectroscopic methods (spectroscopic redshifts, or spec-$z$\\xspace's). Yet since the process requires long telescope time for high completeness, photo-$z$\\xspace's are instrumental for the analysis of large surveys containing of order $10^{8-9}$ galaxies. Photo-$z$\\xspace methodology has been evolving and improving a lot over the past couple of decades \\mbox{\\citep[e.g.][]{brescia_data_2018,salvato_many_2019}}, such that it had been sufficiently useful for most recent cosmological researches. \n\nPhoto-$z$\\xspace, as its name suggests, is often determined through the use of a handful of broadband photometric filters obtained from large sky surveys. Photo-$z$\\xspace estimation methods are generally categorised into two different types: the template-based method, which relies on accurate models of spectral energy distribution (SED) templates of different types of galaxies; and the data-driven empirical method, which relies on training sets of galaxies and machine-learning algorithms. Each method however has its own limitations: template-based methods may produce photo-$z$\\xspace's with large scatter and catastrophic rates without representative templates; while machine-learning methods may perform poorly outside the regions of the parameters covered by the training sample \\citep{disanto_return_2018}. As a result, hybrid methods have been implemented to utilise the best of both worlds \\citep{cavuoti_cooperative_2017,duncan_photometric_2018,duncan_observational_2019}.\n\nMany current and upcoming surveys such as the Dark Energy Survey \\citep[DES,][]{abbott_dark_2005}, Legacy Survey of Space and Time \\citep[LSST,][]{ivezic_lsst:_2008}, \\textit{Euclid} \\citep{laureijs_euclid_2011}, Kilo-Degree Survey \\citep[KiDS,][]{de_jong_kilo-degree_2013}, Wide Field Infrared Survey Telescope \\citep[WFIRST,][]{spergel_wide-field_2013} and Hyper Suprime-Cam \\citep[HSC,][]{aihara_hyper_2018} have set stringent photo-$z$\\xspace requirements to ensure that they meet their science goals, forcing the quality of photo-$z$\\xspace methodology to constantly improve. For example, LSST's photo-$z$\\xspace requirement is to reach a root-mean-square error of $\\sigma_\\rf{RMS}<0.02(1+z)$, while the \\textit{Euclid} requirement is $\\sigma_\\rf{RMS}<0.05(1+z)$. High quality photo-$z$\\xspace's are required for a reliable estimation of e.g. weak lensing \\citep{benjamin_cfhtlens_2013}, angular clustering \\citep{crocce_galaxy_2016}, intrinsic alignment \\citep{johnston_pau_2020}, structure formation, galaxy classification and galaxy properties \\citep{jouvel_photometric_2017,laigle_cosmos2015_2018,siudek_vimos_2018_1}.\n\nThe aforementioned surveys are predominantly broadband surveys which use between $4$-$9$ broadband filters ranging from infrared to ultraviolet. This work however, explores the estimation of photo-$z$\\xspace's in narrowband surveys, focusing on the Physics of the Accelerating Universe Survey \\citep[PAUS,][]{padilla_physics_2019}, which observes the sky using $40$ narrow bands (see Section~\\ref{sec:pau}). Producing high quality photo-$z$\\xspace's for such a survey requires careful optimisation between narrow and broad bands, since machine-learning based methods have to be optimised for a larger number of inputs \\citep{eriksen_pau_2020}, while template-based methods require more attention towards the narrow emission line features. 
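\n\nFor concreteness, statistics of this kind are straightforward to compute from a matched photometric--spectroscopic sample, as in the short Python sketch below. The sketch adopts one common convention, $\\Delta z = (z_{\\rm phot}-z_{\\rm spec})\/(1+z_{\\rm spec})$, with $\\sigma_{68}$ taken as half of the $15.9$--$84.1$ percentile range; the exact definitions used for the comparisons in this work are those given in Section~\\ref{sec:calibration}, and the outlier threshold below is purely illustrative.\n\\begin{verbatim}\nimport numpy as np\n\ndef photoz_metrics(z_phot, z_spec):\n    # dz = (z_phot - z_spec) (1 + z_spec)**-1\n    dz = (z_phot - z_spec) * (1.0 + z_spec)**-1\n    sigma_rms = np.sqrt(np.mean(dz**2))\n    lo, hi = np.percentile(dz, [15.9, 84.1])\n    sigma_68 = 0.5 * (hi - lo)\n    outlier_frac = np.mean(np.abs(dz) > 0.02)   # illustrative threshold\n    return sigma_rms, sigma_68, outlier_frac\n\n# toy example with mock Gaussian scatter only (not survey data)\nrng = np.random.default_rng(1)\nz_spec = rng.uniform(0.1, 1.2, 10000)\nz_phot = z_spec + 0.004 * (1.0 + z_spec) * rng.standard_normal(10000)\nprint(photoz_metrics(z_phot, z_spec))\n\\end{verbatim}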
\n\n\\citet{marti_precise_2014} used simulations to predict that by using PAUS narrowband photometry, the photo-$z$\\xspace quality could reach an unprecedentedly low $68$th percentile error of $\\sigma_{68}=0.0035(1+z)$ at a quality cut of $50$ per cent at $i<22.5$. This has been verified by \\citet{eriksen_pau_2019}, where they combined the $40$ PAUS narrow bands (early data release) with broad bands $uBVriz$ from the Cosmic Evolution Survey \\citep[COSMOS,][]{laigle_cosmos2015_2016}, and using their template-based photo-$z$\\xspace code \\bcnz, they showed that this result is achievable when a $50$ per cent photometric quality cut was imposed on the final testing set. In a more recent work, \\citet{eriksen_pau_2020} used \\textsc{deepz}, a deep learning algorithm on the same data set and showed that it outperformed \\bcnz by reaching $50$ per cent lower in $\\sigma_{68}$. Furthermore, \\citet{alarcon_pau_2021} showed that an ever greater precision can be achieved when using additional photometric bands available in the COSMOS field (a total of $66$ bands).\n\nWe are motivated by the work of \\citet{eriksen_pau_2019}, but instead of using purely template-based methods, we attempt to achieve this PAUS photo-$z$\\xspace precision by utilising Gaussian processes (GPs, see Section~\\ref{sec:delight}) to make empirical adjustments to templates, working on the same data set and conditions. We seek to produce an independent method that is competitive, as that will allow us to exploit synergies with \\bcnz by \\citet{eriksen_pau_2019} as shown in this work, \\textsc{deepz} \\citep{eriksen_pau_2020}, and photo-$z$\\xspace's by \\citet{alarcon_pau_2021} in the future. Therefore the contents of this paper reflect our findings, putting special emphasis on the performance and application of \\textsc{delight}\\xspace \\citep{leistedt_data-driven_2017}, a hybrid template-machine-learning photo-$z$\\xspace code. When carefully calibrated and combined with COSMOS broadband fluxes, \\textsc{delight}\\xspace should achieve equally good results as that of \\bcnz. The main aims of this paper are threefold:\n\n\\begin{enumerate}\n \\item to optimise and test the performance of the hybrid template-machine-learning photo-$z$\\xspace code \\textsc{delight}\\xspace on a narrowband survey; \n \\item to develop an optimal method to calibrate the fluxes between the COSMOS broadbands and the PAUS narrow bands;\n \\item to provide an independent photo-$z$\\xspace solution for PAUS, enabling the study of photometric and spectroscopic redshift outliers.\n\\end{enumerate}\n\nThis paper is structured as follows. In Section~\\ref{sec:data} we first introduce PAUS and the sources of photometry and spectroscopic redshifts used in this work. Section~\\ref{sec:algor} describes the algorithms (\\textsc{delight}\\xspace, \\textsc{annz}$2$\\xspace and \\textsc{bpz}\\xspace) used in this work, together with their optimisation settings and SED templates used. Section~\\ref{sec:calibration} describes the full details of how the photometry and spectroscopy from PAUS, COSMOS and zCOSMOS are cross-matched, how the galaxy fluxes are selected, the three methods to calibrate the broadband and narrowband fluxes, and the performance metrics used in this work to compare the results between runs and codes. Section~\\ref{sec:res_delight} shows the photo-$z$\\xspace results obtained by \\textsc{delight}\\xspace, and a thorough analysis is conducted to compare its performance with \\textsc{annz}$2$\\xspace, \\textsc{bpz}\\xspace and \\bcnz. 
Finally, in Section~\\ref{sec:app} we study the photo-$z$\\xspace outliers of \\textsc{delight}\\xspace and \\bcnz, and derive new metrics with improved photo-$z$\\xspace outlier identifications. Our work is concluded in Section~\\ref{sec:conc}.\n\n\n\\section{Photometry and Spectroscopy}\\label{sec:data}\nIn this work, photometric data were obtained from PAUS (Section~\\ref{sec:pau}) and \\mbox{COSMOS} (Section~\\ref{sec:cosmos}), while spectroscopic redshifts were obtained from \\mbox{zCOSMOS} (Section~\\ref{sec:zcosmos}). In this section, these surveys will be introduced, together with the selection cuts used to obtain our training and testing sets. \n\n\\subsection{PAUS}\\label{sec:pau}\nPAUS is a narrowband photometric galaxy survey aimed at mapping the large-scale structure of the Universe up to $i\\sim23.0$. Using $40$ narrow bands spaced by $100$~\\r{A} in the range between $4500$ to $8500$~\\r{A} \\citep[filter responses visualised in][and Fig.~\\ref{fig:overlapbands}]{eriksen_pau_2019}, PAUS aims to achieve redshifts with a precision of $\\sigma_\\rf{RMS}<0.0035(1+z)$ for galaxies with $i_\\rf{auto}<22.5$. PAUS uses the PAUCam instrument \\citep{padilla_physics_2019} on the $4$ m William Herschel Telescope (WHT) at Observatorio del Roque de los Muchachos (ORM) in La Palma. It has observed more than $50$ deg$^2$ of sky since the beginning of $2016$, and observations to full depth in all narrow bands for $100$ deg$^2$ are planned.\n\nThe PAUS forced-aperture coadded photometry has its aperture defined by using the $50$ per cent light radius ($r_{50}$), the point spread function (PSF), ellipticity and S\\'ersic index of COSMOS morphology, such that the fluxes measure a fixed fraction of light. The reader is referred to \\citet{eriksen_pau_2019} for detailed information on how the PAUS fluxes are measured. In this work we used the early data release from PAUS (objects are observed at least five times, using an elliptical aperture with $62.5$ per cent light radius), and select objects with $i_\\rf{auto}\\le 22.5$, entries with no missing measurement, and the COSMOS flag \\texttt{TYPE=0} (extended objects).\n\n\\subsection{COSMOS}\\label{sec:cosmos}\nThe Cosmic Evolution Survey \\citep[COSMOS,][]{scoville_cosmic_2007} covers a sky area of $2$ deg$^2$ ($149.47^{\\circ}\\le\\alpha\\le150.7^{\\circ}$, $1.62^{\\circ}\\le\\delta\\le2.83^{\\circ}$) and is known for its high sensitivity, depth and an exceptionally low and uniform Galactic extinction ($E_{B\\text{-}V}\\sim 0.02$).\n\nIn this work we used photometry from the COSMOS$2015$ Catalogue \\citep{laigle_cosmos2015_2016}; it is a highly complete mass-selected sample to very high redshifts, highly optimised for the study of galaxy evolution and environments in the early Universe. The COSMOS$2015$ Catalogue provides $30$ band photometry ranging from near UV to near infrared wavelengths, all these have been observed through multiple facilities, two of which are the Canada-Hawaii-France Telescope (CFHT) and Subaru Telescope \\citep{miyazaki_subaru_2002}. From this catalogue we only use the CFHT $u^*$-band \\citep{boulade_megacam:_2003} and Subaru $B$, $V$, $r$, $i^+$ and $z^{++}$ bands \\citep{miyazaki_subaru_2002}, in conjunction with the narrowband photometry of PAUS. 
For simplicity, these bands will be referred to collectively as the $uBVriz$ bands; the superscripts are dropped for easier reading.\n\n\\subsection{zCOSMOS}\\label{sec:zcosmos}\nThe zCOSMOS Survey \\citep{lilly_zcosmos:_2007} targets galaxies in the COSMOS field using the Visible Multi-Object Spectrograph \\citep[VIMOS,][]{le_fevre_commissioning_2003}. zCOSMOS-Bright observed $20\\,689$ galaxies in a sky area of $1.7$~deg$^2$, these galaxies have magnitudes $1519.75$) and small angular sizes ($r_{50}<60$ ACS pixels, or $1.8''$), which describe most galaxies of interest for PAUS. We study several different attributes of these objects, namely their respective photo-$z$\\xspace's by \\textsc{delight}\\xspace, \\bcnz and \\textsc{lephare}\\xspace, photo-$z$\\xspace PDFs, best-fit templates (Brown and GP), spectra and images. We summarise important observations according to their respective attributes below.\n\n\\textit{Photo-$z$\\xspace's}. While these $30$ objects have been identified as outliers when trained using $46$ bands, we find that two-thirds of these objects have non-catastrophic photo-$z$\\xspace's when trained with either only the broad or narrow bands, respectively. In other words, only one-third of these objects have catastrophic photo-$z$\\xspace's regardless of which bands were used in the training or fitting process. This suggests that most of the time, outlier fluxes in the broad or narrow bands may have caused a degradation in photo-$z$\\xspace quality when trained together (more on this in the \\textit{templates} paragraph below). We have also made a comparison between \\textsc{delight}\\xspace photo-$z$\\xspace's with those produced by \\textsc{lephare}\\xspace for the COSMOS2015 catalogue \\citep{laigle_cosmos2015_2016}, and found that in fact half of the $30$ objects have non-catastrophic \\textsc{lephare}\\xspace photo-$z$\\xspace's. This suggests that the infrared $yJHK$ bands could have played a role in improving the PAUS photo-$z$\\xspace's, and could be incorporated in future trainings in case the PAUS photometry is problematic\\footnote{We note that these additional bands will not be available over most of PAUS, which targets Canada-France-Hawaii Telescope Legacy Survey (CFHTLS) wide fields W1 to W4. There is however some infrared data on these fields provided by the Wide-field InfraRed Camera (WIRCam) and the VISTA Kilo-degree Infrared Galaxy Survey (VIKING).}.\n\n\\textit{Photo-$z$\\xspace PDFs}. We inspected the secondary\/tertiary peaks of the PDFs for all \\textsc{delight}\\xspace runs (trained with $6$ broad bands, $40$ narrow bands, or both), and find that less than $20$ per cent of these secondary\/tertiary peaks coincide with their respective spec-$z$\\xspace's. We deduce that despite the importance of secondary PDF peaks in redshift distributions, they do not significantly influence the photo-$z$\\xspace quality of these $30$ objects.\n\n\\textit{Templates}. \\textsc{delight}\\xspace utilises the $129$ \\citet{brown_atlas_2014} templates and the $4203$ training objects to guide the GP to produce the same number of new flux-redshift templates, which are used to produce photo-$z$\\xspace's for the objects. In the training process, \\textsc{delight}\\xspace would always choose one best-fit Brown template for each training galaxy to be trained by the GP. Here we inspected two different kinds of best-fit Brown templates to these $30$ outliers: one fixed at the spec-$z$\\xspace, and the other with the redshift as a free parameter. 
In both cases, we examined\n\\begin{enumerate}\n \\item if the objects fit to the same templates when trained with only broad bands, only narrow bands, or both, respectively;\n \\item if there are any trends in galaxy morphological types, based on the galaxy type classification indicated by the template;\n \\item if there is any correlation between the $\\chi^2$ value of the best-fit templates and the quality of photo-$z$\\xspace's; and\n \\item if any outlier narrowband fluxes can be identified as the cause of the degradation of photo-$z$\\xspace.\n\\end{enumerate}\n\nAs expected, we find that $70$ per cent of the outlier objects have different best-fit Brown templates between the fits at fixed photo-$z$\\xspace and spec-$z$\\xspace, which contrasts with the case for non-outliers at only $35$ per cent. We also find that only slightly more than a third of both the outlier and non-outlier objects were fitted to the same templates when trained using broad bands as compared to trained with all $46$ bands. The high percentage of objects with different template fits at different reference redshifts (photo-$z$\\xspace or spec-$z$\\xspace) and flux combinations (broad bands, narrow bands, or both) also resulted in no trend in galaxy morphological types among the outliers. \n\nHowever, it was found that up to $60$ per cent of the objects have their best-fit template $\\chi^2$ value correlating with the quality in photo-$z$\\xspace, which further affirms the usage of this as a metric to remove unreliable photo-$z$\\xspace's (see Section~\\ref{sec:newmetrics}), as also attempted by \\citet{eriksen_pau_2019} and \\citet{eriksen_pau_2020}.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=\\linewidth]{figs\/sec6_fig13.pdf}\n\\caption{A sample of best-fit Brown templates (unfixed redshift) when fit to only broadband fluxes (top), only narrowbands fluxes (middle), and both fluxes (bottom) for the galaxy with zCOSMOS ID $805216$. $L_{v}(\\lambda)$ is the rest-frame luminosity density (or SED) of the galaxy. This galaxy has $z_\\rf{spec}=0.736$, $z_\\rf{p}$ and $t_\\rf{b}$ in the figure refer to its photo-$z$\\xspace and best-fit Brown template number, respectively. The outlier narrowband flux shown in the middle panel (red circle) has caused a misfit in template type, resulting in erroneous photo-$z$\\xspace's for both cases.} \\label{fig:tempfit}\n\\end{figure}\n\nPerhaps a more significant finding from the study of the best-fit templates is the ability to identify outlier narrowband fluxes. Fig.~\\ref{fig:tempfit} shows an example which highlights the importance of identifying outlier narrowband fluxes, which is shown to significantly affect the photo-$z$\\xspace results. It was found that a third of the $30$ objects contained outlier narrowband fluxes, which results in entirely different template fits and photo-$z$\\xspace's when trained with narrow bands, as compared to when trained with broad bands only. Among these $10$ objects, $8$ of them are shown to have worse photo-$z$\\xspace as compared to training without the narrow bands. We find indications for a significant fraction of narrowband flux outliers also for galaxies without catastrophic redshift failures. Forthcoming PAUS data reductions will therefore implement methods to identify and correct flux outliers.\n\n\\textit{Images}. 
We inspect the individual object images compiled by zCOSMOS DR$3$, these are $5''\\times 5''$ images observed by the Hubble Space Telescope\/Advanced Camera for Surveys (HST\/ACS) in the F$814$W filter \\citep{koekemoer_cosmos_2007}. Among the $30$ outlier objects, we find $63.3$ and $26.7$ per cent of them having bright neighbours within $5''$ and $3''$ of the primary source, respectively. Having said that, we have not found any correlation between the presence of bright neighbours to the other attributes that we have studied thus far. In fact the opposite is true: we find that $60$ per cent of the objects with outlier narrowband fluxes actually have primary sources without any bright neighbours in vicinity.\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=0.9\\linewidth]{figs\/sec6_fig14.pdf}\n\\caption{Spectral line fitting (red) for the original spectra (black) of the galaxy with zCOSMOS ID $804179$. The spec-$z$\\xspace given by zCOSMOS is $0.4217$ (top), while the best-fit using \\textsc{ez} \\citep{garilli_ez_2010} gives a spec-$z$\\xspace of $0.0847$ (bottom), which is closer to the photo-$z$\\xspace value of $0.1150$ estimated by \\textsc{delight}\\xspace.} \\label{fig:ezfit}\n\\end{figure*}\n\n\\textit{Spectra}. So far we have assumed that the zCOSMOS spectra obtained are reliable, as only entries with high-confidence quality flags have been selected for training (see Section~\\ref{sec:zcosmos}). In order to probe further, we examined the one-dimensional spectra obtained by the VIMOS spectrograph, which is processed by the VIMOS Interactive Pipeline and Graphical Interface \\citep[VIPGI,][]{scodeggio_vvds_2005} to produce the zCOSMOS spec-$z$\\xspace's used in this work. The spectra have a range between $5500$~\\r{A} and $9450$~\\r{A}, measured with a resolution of $R\\sim 600$ at $2.5$~\\r{A} per pixel \\citep{lilly_zcosmos_2009}. \n\nWe used the redshift measurement tool \\textsc{ez} \\citep{garilli_ez_2010} to inspect the spectra of the $30$ outlier objects, and compared our best fits to the spectroscopic redshift produced by zCOSMOS, and also the photo-$z$\\xspace's produced by \\textsc{delight}\\xspace, \\bcnz, \\textsc{deepz}\\xspace, \\textsc{lephare}\\xspace (COSMOS2015) and those of \\citet{alarcon_pau_2021}. \n\nUpon inspection, we find that up to $10$ of these objects ($33$ per cent) have disputable zCOSMOS spec-$z$\\xspace (e.g. two possible redshift values, different best-fit redshift values, line confusion, and low signal-to-noise). However, most of these potential spec-$z$\\xspace failures could be forced-fitted to the zCOSMOS spec-$z$\\xspace and still look satisfactory, which leaves only $2$ ($6.7$ per cent) of these objects having truly catastrophic spec-$z$\\xspace's. Both these objects are found to have better \\textsc{ez} fits at redshift values within $10$ per cent uncertainty from the photo-$z$\\xspace's produced by \\textsc{delight}\\xspace and other algorithms. The spectrum of one of these objects is shown in Fig.~\\ref{fig:ezfit}. We have also found one isolated case where the spectra belonged to a bright neighbour and has been mismatched to the PAUS photometry.\n\nGenerally, the higher-redshift objects are identified by clear \\textsc{O II} ($3727.1$~\\r{A}) emission lines, while the lower-redshift objects are identified by clear H$\\alpha$ ($6564.6$~\\r{A}) emission lines. 
We therefore conclude that although catastrophic spec-$z$\\xspace's played a role in this situation, our results did not provide enough evidence to say that it is a major cause for catastrophic photo-$z$\\xspace's produced by \\bcnz and \\textsc{delight}\\xspace. This is not surprising since we have only selected secure spectroscopic redshifts from COSMOS to be used in this work. However this highlights the usefulness of multiple PAUS photo-$z$\\xspace's being used to determine failure rates in insecure spectroscopic redshifts.\n\nTo summarise this part, we believe that the potentially important source for catastrophic photo-$z$\\xspace's in the context of PAUS are the outlier narrowband fluxes, with weak evidence for the existence of a small number of spec-$z$\\xspace failures. We leave the tackling of outlier narrowband fluxes to future work, but in the following section, we attempt to improve our process to identify and remove these outlier photo-$z$\\xspace's.\n\n\n\\subsection{New metrics to remove photo-z outliers} \\label{sec:newmetrics}\nIn Figs.~\\ref{fig:res_meti2} and \\ref{fig:res_metzp2} we have used the Bayesian odds ($\\Theta$) to cut the sample, and the aim of this was to keep as many objects as possible while achieving the goal of $\\sigma_{68}\\le0.0035(1+z)$. Here, we extend our previous results further towards that goal by introducing several new metrics to better separate the photo-$z$\\xspace outliers from the sample. These metrics are motivated by the inspection of the $30$ outliers in Section~\\ref{sec:outlier}, and they are defined as follows:\n\n\\begin{enumerate}\n \\item The \\textit{Delight-BCNz2 metric} ($\\Delta_\\rf{DB}$),\n \\begin{equation}\n \\Delta_\\rf{DB} \\equiv \\frac{\\left| z_\\rf{Delight}-z_\\rf{BCNz} \\right|}{1+\\frac{z_\\rf{Delight}+z_\\rf{BCNz}}{2}},\n \\end{equation}\n a metric used to identify the similarity between \\textsc{delight}\\xspace and \\bcnz photo-$z$\\xspace's. It is plausible that, in general, the closer the photo-$z$\\xspace's between the two algorithms, the more reliable they are;\n \n \\item The \\textit{Delight photo-$z$\\xspace standard deviation} ($\\sigma_\\rf{D}$), which is the standard deviation between all \\textsc{delight}\\xspace photo-$z$\\xspace runs regardless of calibration method and number of bands. Smaller deviations could indicate more reliable photo-$z$\\xspace's;\n \n \\item The \\textit{chi-squared value of the best-fit Brown template} ($\\chi^2_\\rf{t}$), where we identified a trend that the better the fit, the more reliable the photo-$z$\\xspace; and\n \n \\item The \\textit{broadband-narrowband complementary metric} ($\\rho^2$),\n \\begin{equation}\n \\rho^2 \\equiv \\int p_{\\rf{BB}}(z)p_{\\rf{NB}}(z)\\, dz,\n \\end{equation}\n where $p_\\rf{BB}(z)$ and $p_\\rf{NB}(z)$ are the $p(z)$ produced by \\textsc{delight}\\xspace when trained with only broad bands and only narrow bands, respectively. By multiplying these two $p(z)$ and summing over the distribution at each step $i$, we can identify the consistency between the broadband and narrowband $p(z)$. A higher value of $\\rho^2$ means a larger overlap, which indicates more reliable photo-$z$\\xspace's.\n\\end{enumerate}\n\nTogether with $\\Theta$ and the \\textsc{delight}\\xspace photo-$z$\\xspace error ($\\delta z$), we yield a total of $6$ metrics to experiment with. Using the results from the flux calibration method, we generate and test the individual performance for each of these metrics. 
For each metric, we measure the $\\sigma_\\rf{RMS}$ and $\\sigma_{68}$ after systematically removing objects with the worst metric values, $10$ per cent of the total sample size each time, until we reach a sample size of only $40$ per cent. \n\nWe also repeat the exercise by using combined cuts on several metrics, testing all $57$ combinations of the $6$ metrics. We note that we do not combine the metrics by averaging or multiplying them, as it would have diluted the impact of the individual metrics. Instead, we rank the values for each metric individually (from best to worst), and remove objects rank by rank, starting with metric values lying in the worst rank. E.g. for the combination of metrics $\\Theta+\\Delta_\\rf{DB}$, we first remove all objects which share the worst values of $\\Theta$ and $\\Delta_\\rf{DB}$, then remove all objects sharing the second worst values of them, and so on, until we reach a required sample size percentile ($90$, $80$, etc), where we output the values of $\\sigma_\\rf{RMS}$ and $\\sigma_{68}$. We visualise the performance of these metric cuts at several percentiles for $\\sigma_{68}$ with respect to $i_\\rf{auto}$ (cumulative) in Fig.~\\ref{fig:newmetric2}.\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=\\linewidth]{figs\/sec6_fig15.pdf}\n\\caption{Plot of $68$th percentile error ($\\sigma_\\rf{68}$) vs. $i_\\rf{auto}$ (cumulative) when cut using the following metrics: the Bayesian odds ($\\Theta$), best-fit Brown template $\\chi^2_\\rf{t}$ value, \\textsc{delight}\\xspace-\\bcnz metric ($\\Delta_\\rf{DB}$), \\textsc{delight}\\xspace photo-$z$\\xspace error ($\\delta z$) and standard deviation ($\\sigma_\\rf{D}$), and the broadband-narrowband complementary metric ($\\rho^2$). The coloured lines follow the same percentile cuts as shown in Fig.~\\ref{fig:res_meti2}, with the dotted-coloured lines in the background of the bottom panels depicting the results of $\\Theta$ for easier comparison. The bottom-left panel shows the cut in $\\Delta z$ (defined in Section~\\ref{sec:metrics}), the unsurpassable theoretical best used for reference. The bottom-middle panel shows the cuts when all the above metrics were combined, while the bottom-right shows the combination of metrics which yield the best results.} \\label{fig:newmetric2}\n\\end{figure*}\n\nWe find that each performance metric cuts the sample differently: while metric cuts of $\\sigma_\\rf{D}$ and $\\rho^2$ reduce the scatter ($\\sigma_\\rf{RMS}$) significantly, metric cuts of $\\Theta$ and $\\Delta_\\rf{DB}$ reduce the $\\sigma_{68}$ instead. The metric $\\chi^2_\\rf{t}$, however does not seem to bring any significant improvement to the results. We have also plotted a cut in $\\Delta z=\\frac{\\left| z_\\rf{phot}-z_\\rf{spec} \\right| }{1+z_\\rf{spec}}$ (bottom-left panel in Fig.~\\ref{fig:newmetric2}), which is the theoretical `best metric', providing an upper limit to be compared with the performance of each of the metrics. Here we noticed that even with the theoretical best metric, a cut of slightly lesser than $70$ per cent (blue line) on the sample is still necessary to fulfil the PAUS target of $\\sigma_{68}<0.0035(1+z)$ (dotted line) for \\textsc{delight}\\xspace. 
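\n\nFor concreteness, the two new metrics introduced above and the percentile-based cuts can be computed as in the following sketch. This is a simplified Python illustration rather than our exact implementation: the array names, the $p(z)$ grid spacing and the treatment of $\\sigma_{68}$ as the $68$th percentile of the normalised absolute error are assumptions made for the example, and metrics for which larger values indicate better quality (e.g. $\\Theta$ or $\\rho^2$) must have their ordering flipped before ranking.\n\\begin{verbatim}\nimport numpy as np\n\ndef delta_db(z_delight, z_bcnz):\n    # Delight-BCNz2 metric: normalised absolute difference of the photo-z's.\n    return np.abs(z_delight - z_bcnz) / (1.0 + 0.5 * (z_delight + z_bcnz))\n\ndef rho2(p_bb, p_nb, dz):\n    # Broadband-narrowband complementary metric: overlap of the two p(z)'s,\n    # approximated by a discrete sum over redshift bins of width dz.\n    return np.sum(p_bb * p_nb, axis=-1) * dz\n\ndef sigma68(z_phot, z_spec):\n    # 68th percentile of |z_phot - z_spec| / (1 + z_spec).\n    return np.percentile(np.abs(z_phot - z_spec) / (1.0 + z_spec), 68)\n\ndef cut_by_metric(metric, z_phot, z_spec, fractions=(0.9, 0.8, 0.7, 0.6, 0.5, 0.4)):\n    # Rank objects by a quality metric (smaller = better here) and report\n    # sigma68 after keeping only the best fraction of the sample.\n    order = np.argsort(metric)\n    best = lambda f: order[:int(f * len(metric))]\n    return {f: sigma68(z_phot[best(f)], z_spec[best(f)]) for f in fractions}\n\\end{verbatim}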
\n\nTherefore, we select the $60$ per cent cut (navy line, retaining $60$ per cent of galaxies) as a benchmark to assess the performance of these metrics; we do so by locating where this line cuts the dotted line (i.e., finding the maximum value of $i_\\rf{auto}$ where the photo-$z$\\xspace's achieve the PAUS target at $60$ per cent cut). From Fig.~\\ref{fig:newmetric2}, it is clear that cutting in all $6$ metrics does not necessarily outperform cutting with only $\\Theta$, so we searched for the best combination of metrics for $\\sigma_\\rf{RMS}$ and $\\sigma_{68}$ separately.\n\nFor $\\sigma_\\rf{RMS}$, the best combination of metrics is $\\Delta_\\rf{DB}+\\sigma_\\rf{D}+\\rho^2$, and this combination achieves $\\sigma_\\rf{RMS}<0.0035(1+z)$ at $i_\\rf{auto}<19.27$ at $60$ per cent cut, a significant improvement over the case when only $\\Theta$ was used, where it did not cut the line at all. For $\\sigma_{68}$, the best combination of metrics is $\\Theta+\\Delta_\\rf{DB}$, where it reached $\\sigma_{68}<0.0035(1+z)$ at $i_\\rf{auto}<21.25$ at $60$ per cent cut, which is also a significant improvement compared to $\\Theta$ at $i_\\rf{auto}<20.88$. Here we note that, using $\\Delta_\\rf{DB}$ alone, the target can in fact be reached at a higher limit of $i_\\rf{auto}<21.50$, which highlights the significance of a synergy between \\textsc{delight}\\xspace and \\bcnz in selecting a high-quality photo-$z$\\xspace sample.\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=\\linewidth]{figs\/sec6_fig16.pdf}\n\\caption{Plot of the percentage of objects within each photo-$z$\\xspace bin with respect to the cut in performance metric values listed in Fig.~\\ref{fig:newmetric2}. The lines show the same percentile cuts and colour scheme as in Fig.~\\ref{fig:res_meti2}, while the histograms in the background show the relative number of objects in each photo-$z$\\xspace bin. The bottom-right panel shows the case in which the combination of all $6$ metrics is used to cut the sample.} \\label{fig:newmetric3}\n\\end{figure*}\n\nFinally, we also show the performance of the metrics in terms of the completeness with respect to the photo-$z$\\xspace (using \\textsc{delight}\\xspace's flux calibration method), visualised in Fig.~\\ref{fig:newmetric3}. We find that metrics like $\\sigma_\\rf{D}$ and $\\rho^2$ tend to selectively remove high photo-$z$\\xspace objects, while $\\Theta$, $\\chi^2_\\rf{t}$ and $\\Delta_\\rf{DB}$ tend to remove mid-ranged photo-$z$\\xspace objects. 
In general, a cut using all $6$ performance metrics at $60$ per cent cut shows a balanced result in the completeness, keeping a sufficient number of high redshift objects in the sample.\n\nTo summarise the performance of the individual metrics, \n\\begin{itemize}\n \\item $\\chi^2_\\rf{t}$ is the least-performing metric here; it does not bring significant positive impact to the results;\n \\item Cuts in $\\sigma_\\rf{D}$ and $\\rho^2$ help to improve the scatter, however, they tend to selectively remove higher photo-$z$\\xspace objects from the sample;\n \\item $\\Theta$ and $\\delta z$ show very similar results, however $\\Theta$ tends to keep more high photo-$z$\\xspace objects in the sample; and\n \\item $\\Delta_\\rf{DB}$ is the best-performing metric here, and we recommend the use of such a metric to remove outlier photo-$z$\\xspace's from a sample.\n\\end{itemize}\n\n\n\n\\section{Conclusion and Future Work} \\label{sec:conc}\n\nIn this work we have optimised \\textsc{delight}\\xspace, a hybrid template-machine learning algorithm such that it could be used to obtain photo-$z$\\xspace's for PAUS, by utilising its $40$ narrowband fluxes combined with $6$ $uBVriz$ COSMOS broadband fluxes. We have shown three distinct methods to calibrate the broadband and narrowband fluxes, and found that all three methods yield comparable results, although the most stable and the one which produces the lowest value of $\\sigma_{68}$ is what we defined as the \\textit{flux calibration method}: a method where we calibrate the broadband fluxes with respect to the narrowband fluxes by finding the flux ratio of the filter combinations which overlap. This calibration method is entirely photometric, and it was able to produce photo-$z$\\xspace's with a scatter reaching as low as $\\sigma_\\rf{RMS}=0.0331(1+z)$ and $\\sigma_{68}=0.0081(1+z)$ for the full PAUS galaxy sample at $i_\\rf{auto}<22.5$.\n\nWe have also compared the results of \\textsc{delight}\\xspace with a machine learning algorithm (\\textsc{annz}$2$\\xspace) and a template-based algorithm (\\textsc{bpz}\\xspace and \\bcnz). We find that \\textsc{annz}$2$\\xspace underperforms significantly, indicating that \\textsc{annz}$2$\\xspace in its basic form is not suitable for narrowband surveys with large number of bands and small number of training objects. \n\nDespite the photo-$z$\\xspace performance of \\textsc{bpz}\\xspace being within $9$ per cent difference of that of \\textsc{delight}\\xspace, the latter still stood out in terms of the quality of the photo-$z$\\xspace PDF $p(z)$ ($16$ per cent better in $\\rho_\\rf{CRPS}$) and the effectiveness of its Bayesian odds ($\\Theta$) cut in retaining objects with higher quality photo-$z$\\xspace without losing too many high-redshift objects. \\textsc{delight}\\xspace is also shown to produce competitive results as compared to \\bcnz ($5$ per cent lower in $\\sigma_{68}$), the default photo-$z$\\xspace produced for the PAUS.\n\nFurther investigation on the common photo-$z$\\xspace outliers of \\textsc{delight}\\xspace and \\bcnz led to the conclusion that outlier narrowband fluxes are the main cause for erroneous photo-$z$\\xspace's, an insight which will inform improvements in forthcoming PAUS data reductions. We have also inspected the spectra and identified catastrophic spec-$z$\\xspace's, however the effects are shown to be insignificant in this work. 
Motivated by the study of $30$ outliers shared between \\textsc{delight}\\xspace and \\bcnz, we introduced several new metrics to help improve the identification of photo-$z$\\xspace outliers and remove them from the sample to achieve better results. From the $6$ metrics compared, our newly introduced \\textsc{delight}\\xspace-\\bcnz metric ($\\Delta_\\rf{DB}$) is shown to significantly improve our photo-$z$\\xspace quality, allowing it to reach the PAUS target of $\\sigma_{68}<0.0035(1+z)$ at $i_\\rf{auto}<21.5$ while retaining $60$ per cent of the sample objects. These new metrics could be utilised to return more accurate uncertainties in redshift, which are vital in many cosmological studies.\n\nThis opens the door to future studies in finding synergies between different photo-$z$\\xspace algorithms and between broadband and narrowband photometry. Together with the promising developments of deep learning approaches to deal with narrowband data \\citep{eriksen_pau_2020}, these insights will pave the way towards unprecedentedly precise and accurate photometric redshifts for the full PAUS survey and beyond, like the Javalambre Physics of the Accelerating Universe Astrophysical Survey \\citep[J-PAS,][]{benitez_j-pas_2014}.\n\n\\section*{Acknowledgements}\nThe authors wish to thank the referee for the helpful and constructive comments. JYHS would like to thank Boris Leistedt for fruitful discussions and the setup of \\textsc{delight}\\xspace earlier in this work. JYHS also would like to thank Hwee San Lim and Tiem Leong Yoon for assisting in the setup of equipment in Universiti Sains Malaysia where most of the computational work of this paper was completed. JYHS acknowledges financial support from the MyBrainSc Scholarship by the Ministry of Education, Malaysia, a studentship provided by Ofer Lahav, and the Short Term Research Grant by Universiti Sains Malaysia (304\/PFIZIK\/6315395). JYHS and BJ acknowledge support by the University College London Cosmoparticle Initiative. MS acknowledges funding from the National Science Centre of Poland (UMO-2016\/23\/N\/ST9\/02963) and the Spanish Ministry of Science and Innovation through the Juan de la Cierva Formaci\\'on programme (FJC2018-038792-I). H. Hildebrandt acknowledges support by a Heisenberg grant of the Deutsche Forschungsgemeinschaft (Hi 1495\/5-1) and an ERC Consolidator Grant (no. 770935). H. Hoekstra acknowledges support from the Netherlands Organisation for Scientific Research (NWO) through grant 639.043.512. IEEC and IFAE are partially funded by the Instituci\\'o Centres de Recerca de Catalunya (CERCA) and Beatriu de Pin\\'os Programme of Generalitat de Catalunya. Work at Argonne National Lab is supported by UChicago Argonne LLC, Operator of Argonne National Laboratory (Argonne). Argonne, a U.S. Department of Energy Office of Science Laboratory, is operated under contract no. DE-AC02-06CH11357.\n\nThis project has received funding from the European Union's Horizon 2020 Research and Innovation Programme under the Marie Sk\\l odowska-Curie Actions, through the following projects: Latin American Chinese European Galaxy (LACEGAL) Formation Network (no. 734374), the Enabling Weak Lensing Cosmology (EWC) Programme (no. 776247), and Barcelona Institute of Science and Technology (PROBIST) Postdoctoral Programme (no. 754510). \n\nPAUS is partially supported by the Ministry of Economy and Competitiveness (MINECO, grants CSD2007-00060, AYA2015-71825, ESP2017-89838, PGC2018-094773, PGC2018-102021, SEV-2016-0588, SEV-2016-0597 and MDM-2015-0509). 
Funding for PAUS has also been provided by Durham University (ERC StG DEGAS-259586), ETH Zurich, and Leiden University (ERC StG ADULT-279396). \n\nThe PAU data centre is hosted by the Port d'Informaci\\'o Cient\\'ifica (PIC), maintained through a collaboration of CIEMAT and IFAE, with additional support from Universitat Aut\\`onoma de Barcelona and the European Research Development Fund (ERDF).\n\n\n\\section*{Data availability}\nThe data from PAUS (photometry and photo-$z$\\xspace's) is currently not yet publicly available. The data from COSMOS were accessed from the ESO Catalogue Facility (\\url{https:\/\/www.eso.org\/qi\/}, while the data from zCOSMOS (spectra and spec-$z$\\xspace's) were accessed from the zCOSMOS database (\\url{http:\/\/cesam.lam.fr\/zCosmos\/}). The derived data generated in this research will be shared on reasonable request to the corresponding author.\n\n\\bibliographystyle{mnras}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nMotion cues provide us with a rich source of information for perception of our visual world. Objects and surfaces become immediately distinct once they start to move and we instinctively use motion information to group regions together \\cite{koffka2013principles}.\nThe task in Video Object Segmentation (VOS) is to separate the dominant foreground object(s) in a video sequence. Existing learning based approaches can be classified as either semi-supervised or unsupervised methods. \nIn the semi-supervised \\cite{lai2020mast, meinhardt2020make, perazzi2017learning, maninis2018video} setting, the ground-truth is provided for the first frame of the video during inference. The aim is then to propagate this mask and obtain segmentation for all subsequent frames. In unsupervised VOS \\cite{mahadevan2020making, yang2019anchor, ren2021reciprocal}, no such object masks are available during inference. However, existing semi-supervised and unsupervised VOS methods still rely on ground-truth masks to train or use pre-trained optical flow \\cite{ren2021reciprocal} or saliency models \\cite{lu2020learning}. In contrast, we propose a fully unsupervised training pipeline for our model that uses neither annotated labels nor pre-trained semantics as proxy supervision for training. Our network determines the dominant object in a scene through purely pixel motion analysis\n \nA layered approach to representing 2D image sequences \\cite{layered1994} provides a promising model for \ncapturing the details of our 3D world where objects and surfaces move independently. This works by first estimating motions present, separating pixels into layers based on similarity of motion, and finally, moving layers on top of one another to synthesise the next image in the sequence. We adapt this process for a CNN by making the entire layered image synthesis pipeline differentiable for end-to-end training. We estimate two motions in a scene using affine parametric models and segment all image pixels into either of these motion classes. 
\n\n \n\n\\begin{figure}[t!]\n\\centering\n \\begin{subfigure}[b]{0.32\\linewidth}\n\t\t\\includegraphics[width=\\textwidth]{.\/results_dicta\/mainpage\/main0frame1.png}\n\t\\end{subfigure}\n\t\\begin{subfigure}[b]{0.32\\linewidth}\n\t\t\\includegraphics[width=\\textwidth]{.\/results_dicta\/mainpage\/main0gt.png}\n\t\t\t\n\t\\end{subfigure}\n\t\\begin{subfigure}[b]{0.32\\linewidth}\n\t\t\\includegraphics[width=\\textwidth]{.\/results_dicta\/mainpage\/main0seg.png}\n\t\t\t\n\t\\end{subfigure}\n\t\n \\begin{subfigure}[b]{0.32\\linewidth}\n\t\t\\includegraphics[width=\\textwidth]{.\/results_dicta\/mainpage\/main1frame1.png}\n\t\t\\subcaption{Frame1}\n\t\\end{subfigure}\n\t\\begin{subfigure}[b]{0.32\\linewidth}\n\t\t\\includegraphics[width=\\textwidth]{.\/results_dicta\/mainpage\/main1gt.png}\n\t\t\t\t\\subcaption{GT}\n\t\\end{subfigure}\n\t\\begin{subfigure}[b]{0.32\\linewidth}\n\t\t\\includegraphics[width=\\textwidth]{.\/results_dicta\/mainpage\/main1seg.png}\n\t\t\t\t\\subcaption{Prediction} \n\t\\end{subfigure}\n\t\\caption{Our network successfully segments the dominant object in videos through a bottom-up grouping of pixels based on affine motion similarity. We train our network unsupervised using only unannotated RGB image pairs. Visual examples from two different scenes on MovingCars shown.}\n\t\n\n\\label{fig:intro}\n\\end{figure}\n\n \nImage representation using moving layers is well-established in the literature \\cite{darrell1991robust, jepson1993mixture, layered1994} but no implementation so far has employed an end-to-end CNN to achieve this. Sevilla-Lara \\textit{et al.} in \\cite{sevilla} combine a layered model and a neural network but their method is a hybrid approach where a pretrained network is combined with a variational expectation maximisation algrorithm for inference. In \\cite{zhang2018layered}, Zhang \\textit{et al.} use a maxout operation to perform disjoint separation of flow. \nHowever, they do not apply any explicit constraints during flow separation and it is unclear whether flow is reliably grouped based on motion homogeneity. In \\cite{yang2021self}, Yang \\textit{et al.} also propose a foreground object segmentation network that is trained without any manual annotations but they require a pre-computed flow map as input. They then use a slot attention network to group together pixels based on visual homogeneity. In contrast, our model does not need flow as input but only uses unannotated image pairs to group foreground\/background pixels based on affine motion homogeneity. \n\n\nUsing a layered image reconstruction process, we show that photometric loss can be used for learning to segment a dominant object in videos. We do not require pre-computed flow\/saliency maps and use only unlabelled image sequences to train making our approach fully unsupervised during both training and inference.\n\n\n\n\nWe identify our three main contributions as follows:\n\n\\begin{enumerate}\n \\item We propose a novel {\\it unsupervised network\n to learn explicit dense dominant object segmentation} in an end-to-end CNN from only unlabelled pairs of RGB image sequences showing 2 rigid-body motions.\n \\item We introduce a novel {\\it layered differentiable image synthesis (LDIS)} module to separate frame 1 into two affine motion layers and synthesise frame 2. \n \n \n \n \n \\item We demonstrate that our network is able to explicitly segment the dominant foreground object without any annotation\n \n during training or inference. 
We contribute a new real-world dataset of moving cars and show competitive results against state-of-the-art methods. Our code and dataset will be made publicly available. \n\\end{enumerate}\n\n\\section{Related Work}\n\n\\subsection{Video Object Segmentation}\n \nVOS methods have traditionally relied on heuristics such as saliency \\cite{wang2015saliency,guo2008spatio,mahadevan2009spatiotemporal},\nobject proposals using motion\/appearance information \\cite{lee2011key, ma2012maximum,zhang2013video}\nor motion analysis of point trajectories \\cite{brox2010object,ochs2011object}.\nWith the advent of deep learning, the state-of-the-art has been dominated by semi-supervised methods that seek to propagate the given first frame object annotation across subsequent frames. Online methods perform network updates at inference time to better capture object masks as the object moves through the sequence \\cite{maninis2018video, meinhardt2020make}\nwhereas offline methods achieve this without requiring any network fine-tuning during inference \\cite{li2018video,oh2018fast}.\nIn the unsupervised\/self-supervised VOS setting, first frame object annotation is not provided and methods learn to detect and segment objects by exploiting appearance and spatio-temporal information \\cite{tokmakov2019learning, jain2017fusionseg}.\nUsually, pre-computed optical flow maps along with images are used as input in a two-stream framework \\cite{ren2021reciprocal, zhou2020motion, fragkiadaki2015learning}\nto generate object masks. Other methods use only image pairs as input and learn pairwise dependencies between frames using non-local operators to model temporal cues as done in AnchorDiff-VOS \\cite{yang2019anchor}. Similarly, COSNet \\cite{lu2019see} uses co-attention layers instead to capture rich correlations between frames. However, these methods still require ground-truth masks to train, which can be laborious to annotate for real-world images.\n\nIn \\cite{lu2020learning}, Lu \\textit{et al.} train a VOS network without any annotated ground-truth data by relying on a saliency model and CAM maps to generate proxy-labels for foreground discrimination. However, their model only works in the semi-supervised setting. Yang \\textit{et al.} in \\cite{yang2021self} also train a VOS network without annotated labels but require a pre-computed flow map (that is converted to an RGB flow image) as input and use slot attention \\cite{locatello2020object} to perform iterative clustering in RGB image space. \n\nIn contrast, we train our network without any ground-truth labels and require only RGB image sequences as input. Instead of relying on top-down saliency or object priors, we propose a fully unsupervised VOS framework that is based on a bottom-up approach to object segmentation by grouping together pixels based on affine motion similarity. \n \n\\subsubsection{Layered Image Representation}\nMethods incorporating layers for motion estimation and segmentation are well-established in the literature, with \\cite{darrell1991robust, jepson1993mixture, layered1994} providing the first few seminal works. These methods exploit the idea that motion in any scene is not globally homogeneous but appears to be piecewise continuous. \nThere have been many implementations that have since expanded on this idea \\cite{ayer1995layered, hsu1994accurate, sun2013fully}.\nOur main inspiration remains \\cite{layered1994}, where Wang and Adelson detail the process of layered image synthesis that we adapt for our end-to-end CNN implementation. 
This is in contrast to the expectation maximisation (EM) style algorithms that most methods use which can be tricky to optimise. Like \\cite{jepson1993mixture, layered1994, lee1997layered}, we use an affine model to estimate motion for each layer. In this paper, we build a network to handle two motions.\n\nThere have been prior attempts to combine deep learning and a layered model with Sevilla-Lara \\textit{et al.} in \\cite{sevilla} using pre-trained scene semantics from a CNN to initialise layers. However, their method is not end-to-end or unsupervised and flow estimation is done using a variational EM algorithm (as in \\cite{sun2013fully}). In contrast, our method uses a fully end-to-end CNN and only unlabelled RGB image sequences for training and inference. In \\cite{zhang2018layered}, Zhang \\textit{et al.} propose a CNN based layered optical flow estimation that relies on their ``soft-mask\" module to separate of flow into disjoint classes but they do not synthesise images using layers. \nOur LDIS pipeline remains faithful to the traditional layered approach and provides explicit constraints during grouping of pixels using affine motion models allowing us to confidently identify motion homogeneous regions.\n\n\n\\subsubsection{Motion Segmentation}\nMotion segmentation involves partitioning image pixels into groups with homogeneous motion. Typically, methods work towards separating pre-determined feature point trajectories that track moving objects through an image sequence \\cite{brox2010object, arrigoni2019robust}.\nIt can be challenging to compute these tracks, requires an additional step, and the resulting segmentation is generally sparse. This problem has been tackled \nusing geometric constraints \\cite{jung2014rigid}, subspace clustering \\cite{ji2015shape} and through fitting of motion models \\cite{fischler1981random}. Another approach is to first compute optical flow to then cluster regions according to flow variations \\cite{verri1989motion, fragkiadaki2015learning}. However, using flow as a precursor is likely to carry over errors intrinsic to the flow estimation process (e.g., inaccurate flow around motion boundaries due to smoothness regularisation). Some methods use pre-trained semantic information to initialise object regions \\cite{zhuo2019unsupervised}. This approach \nrelies on the accuracy of object recognition of the semantic network that is trained supervised using manual annotations. \nIn \\cite{ranjan2019competitive}, Ranjan \\textit{et al.} obtain motion segmentation using an unsupervised CNN. However, they primarily exploit scene geometry to only distinguish between camera or independent motion, require camera parameters beforehand, and need five consecutive frames for inference. \n\nWe propose an end-to-end CNN that can train without supervision or prior feature point trajectories - only two sequential RGB images are needed as input. We do not require a separate flow estimator to obtain segmentation - affine flow and segmentation are estimated in a symbiotic end-to-end framework. We take a bottom-up approach to motion segmentation by relying entirely on apparent pixel motion - no semantic priors or camera motion assumptions are necessary (as in \\cite{ranjan2019competitive}).\n\n\n\n\n\n\\section{Methodology}\nIn this section, we propose a video object segmentation framework that uses affine parametric models to distinguish the foreground from the background. 
We achieve this through a layered image representation whose theory is well-established in the literature but whose implementation in a deep neural network presents new challenges. A key challenge we tackle in this paper is ensuring that all operations within the architecture are differentiable for end-to-end learning.\n\n\n\n\n\\subsection{Layered Differentiable Image Synthesis} \\label{sec:layered}\nIn \\cite{layered1994}, Wang and Adelson describe a method to decompose an image sequence into layers that represent moving objects\/surfaces. They propose using three maps to define each layer: the intensity (RGB) map, the alpha map (defines pixel opacity\/transparency), and the velocity (optical flow) map. An image is decomposed into multiple disjoint layers using the intensity and alpha map. These layers are then warped (using the flow map) and reassembled to produce a reconstruction of the next image in the sequence.\n\nInspired by this process, we design a layered differentiable image synthesis (LDIS) module that we use to reconstruct frame 2 using frame 1 of an image pair. We assume that each pixel is either opaque or fully transparent and has an associated alpha value that provides pixel support (segmentation). Additionally, each layer consists of an intensity map and a flow map that we use to composite layers for image synthesis. \nWe show the overall image synthesis process in Figure \\ref{fig:overall} and follow with details below.\n\n\\begin{figure}[t!]\n\\includegraphics[width=\\textwidth]{.\/Images\/layersegnet2.png}\n\\caption{Simulated visual example of our layered differentiable image synthesis pipeline, where $I_1$ is separated into layers that are each individually warped\nand combined to obtain $\\hat{I}_2$.}\n\\label{fig:overall}\n\\end{figure}\n\n\n\nLet us first define an input sequence of RGB images $I_1$ and $I_2$\nof fixed and identical size. We use the coordinate notation $\\textbf{x} = (x,y)$ to access pixel values in the image domain of $I_1$ ($\\Omega_1$) or $I_2$ ($\\Omega_2$). Our aim is to separate $I_1$ into disjoint layers, move the layers independently and then combine the layers to synthesise a reconstruction of $I_2$. Each layer consists of an RGB intensity map ($L_i: (\\Omega \\subset \\mathbb{R}^2) \\rightarrow \\mathbb{R}^3$), an alpha map ($\\alpha_i: (\\Omega \\subset \\mathbb{R}^2) \\rightarrow \\mathbb{R}^1$) and a flow map ($w_i: (\\Omega \\subset \\mathbb{R}^2) \\rightarrow \\mathbb{R}^2$). \n\nIn the sequence $I_1, I_2$, let there be two distinct affine motions parameterised by $A_1$ and $A_2$:\n\n\\begin{align}\n W_i(x, y) = & \n \\textbf{A}_i\n \\begin{bmatrix}\n 1\\\\\n x\\\\\n y\\\\\n \\end{bmatrix} =\n \\begin{bmatrix}\n a_i^1 & a_i^2 & a_i^3\\\\\n a_i^4 & a_i^5 & a_i^6\\\\\n \\end{bmatrix}\n \\begin{bmatrix}\n 1\\\\\n x\\\\\n y\\\\\n \\end{bmatrix}\n ,\\ i \\in \\{1, 2\\} \\label{eq:affine}\n\\end{align}\n\\noindent\nwhere $W_i: (\\Omega \\subset \\mathbb{R}^2) \\rightarrow \\mathbb{R}^2$ gives dense flow maps calculated using \naffine parameters $A_i$ for all \\textbf{x}. Corresponding to the two affine motions, we separate all pixels in $I_1$ into layers using alpha maps that define spatial support regions. Ideally, for each pixel, we would have a one-hot vector and assign binary values\nto indicate layer memberships (each pixel can only be associated with one motion). However, such discrete value assignment is not conducive to a differentiable framework.
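\n\nFor illustration, the dense flow maps of eq.~\\ref{eq:affine} can be generated from the six parameters of a layer as in the following sketch. This is a simplified NumPy example with an assumed pixel-coordinate convention ($x$ along the image width, $y$ along the height) and illustrative names; it is not our exact implementation.\n\\begin{verbatim}\nimport numpy as np\n\ndef affine_flow(A, height, width):\n    # Dense flow field W(x, y) = A [1, x, y]^T for a 2x3 affine parameter\n    # matrix A; returns an array of shape (height, width, 2).\n    ys, xs = np.meshgrid(np.arange(height), np.arange(width), indexing='ij')\n    coords = np.stack([np.ones_like(xs), xs, ys], axis=-1).astype(float)\n    return coords @ A.T\n\n# Example: layer 1 translates by (2, -1) pixels, layer 2 is static.\nA1 = np.array([[2.0, 0.0, 0.0],\n               [-1.0, 0.0, 0.0]])\nA2 = np.zeros((2, 3))\nW1 = affine_flow(A1, height=256, width=512)\nW2 = affine_flow(A2, height=256, width=512)\n\\end{verbatim}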
\n\nInstead, we implement softmax binning \\cite{yang2018deep}\nthat approximates a hard binning operation.\nHere, we use two `bins' for our two motion layers to obtain their alpha maps:\n\n\n\n\n\n\n\n\n\n\n\\begin{align}\n \\alpha^i(\\textbf{x}) \\in [0,1], \\ i \\in \\{1,2\\} \\label{eq:alphadefinition}\n\\end{align}\n\\noindent\n\n\nWe explicitly ensure that layer membership is disjoint using a modified maxout operation inspired by\n\\cite{zhang2018layered}. For each pixel, the maxout operation retains the maximal value of the two alpha maps, the non-maximal value is set to 0. We use these alpha maps to obtain the spatial support for flow of each layer:\n\\begin{align}\nw_i = \\alpha^i \\odot W_i \\label{eq:flowsupport}\n\\end{align}\nwhere $\\odot$ denotes element-wise product with broadcasting. Note that $W_i$ gives a dense calculation of flow values using affine parameters $A_i$ for all $\\textbf{x}$ in the image domain.\nThe constrained layer flow map $w_i$ associates each pixel's alpha map to the motion of a particular layer. We also associate the alpha maps to pixel intensities.\nHowever, to prevent input pixel intensities in $I_1$ being scaled by the continuous alpha map, we binarise these maps to separate $I_1$ into 2 layer intensity maps:\n\\begin{align}\nL_i = \\alpha^{i-binary} \\odot I_1 \\label{eq:layerintensity}\\\\ \n\\alpha^{i-binary} = (\\alpha^i > 0.5) \\label{eq:alphabinary}\n\\end{align}\nNote that binarisation of alpha map stops gradients from passing through this particular operation. However, \nthe continuous alpha map in Eq. \\ref{eq:flowsupport} allows\ngradients\nthrough during backpropagation.\nThis acts as a constraint on our network to effectively learn the alpha maps through association to the flow values in $W_i$ without affecting the pixel intensities in $I_1$. \n\nNext, we warp the intensity map ($L_i$) and alpha map ($\\alpha_i$) of each layer using their flows $w_{i\\in\\{1, 2\\}}$ with\nforward warping \\cite{niklaus2020softmax} to obtain the warped layer intensity maps $\\hat{L}_i$:\n\n\\begin{align}\n \\hat{L}_i(\\textbf{x} + w_i(\\textbf{x})) \\leftarrow L_i(\\textbf{x}) \\label{eq:layerwarp}; \\\\\n \\hat{\\alpha}^i(\\textbf{x} + w_i(\\textbf{x})) \\leftarrow \\alpha^i(\\textbf{x}); \\label{eq:alphawarp} \\\\\n \\forall \\textbf{x} \\in \\textbf{m}_i,\\ i \\in \\{1, 2\\} \\nonumber\n\\end{align}\n\nWe are ready to synthesise reconstruction of $I_2$ by compositing the warped layer intensity maps $\\hat{L}_{i\\in\\{1,2\\}}$. However, the warped layers will now contend for positions where they overlap. Our LDIS model explicitly resolves this through depth ordering via the alpha map\nso that the layer closer to the camera occludes the layer beneath. This process mimics how moving objects result in occlusions.\nThis contention for output pixel position due to occlusion is an issue that \narises in an explicit forward warping process.\nIn our case, we first perform warping separately for each layer then \ncombine them using a fixed depth ordering. There is no issue of contention between pixels in the same layer due to our explicit affine motion model (i.e. no many-to-one mappings within a layer). Without a layered approach, a separate approach to occlusion reasoning is needed to solve this issue such as using brightness constancy as a measure of occlusion \\cite{niklaus2020softmax}.\n \nWe synthesise $I_2$ from the warped layers by following a simple rule:\na smaller $i$ represents a layer closer to the camera (i.e. 
pixels of $\\hat{L}_1$ occlude pixels of $\\hat{L}_2$). \n\nFinally, we synthesise the output image $\\hat{I}_2$. Since we only have 2 layers, with layer 1 on top of layer 2, we only need $\\hat{\\alpha}_1$ for reconstruction ($\\hat{\\alpha}_1$ gives the layer support for the warped pixels in $\\hat{L}_1$). We binarise the warped alpha map as in Eq. \\ref{eq:alphabinary} and reconstruct $I_2$ using the warped layers $\\hat{L}_{i\\in\\{1, 2\\}}$:\n\n\\begin{align}\\label{eq:reconstruct}\n \\hat{I}_2 =\\ & \\hat{\\alpha}_{1-binary} \\cdot \\hat{L}_1\\ + (1 - \\hat{\\alpha}_{1-binary}) \\cdot \\hat{L}_2 \n\\end{align}\n\\noindent\nwhere $\\hat{I}_2$ is the synthesised image. \n\n\\subsection{Loss Function} \\label{sec:loss}\n\nWhen layers overlap, it is possible that for some pixel regions none of the layers contribute any pixel-value (resulting in black pixels in $\\hat{I}_2$ in Figure \\ref{fig:overall}). We detect these dis-occluded pixels by simply searching $\\hat{I}_2$ for pixels that are zero-valued. The dis-occluded pixels are masked out during loss calculation using the dis-occlusion mask:\n\n\\begin{align}\n\\textbf{D}(\\textbf{x}) = &\n \\begin{cases}\n 1 &,\\ \\text{if}\\ \\hat{I}_2(\\textbf{x}) = 0 \\\\\n 0 &,\\ \\text{otherwise}\n \\end{cases}\n\\end{align}\n\nThe learning is driven by a photometric loss that \nuses the robust generalised Charbonnier penalty function \\cite{bruhn2005lucas}: \n\n\n\\begin{align}\nLoss = & \\sum_{\\textbf{x} \\in \\Omega_2} (1 - \\textbf{D}(\\textbf{x})) \\rho(I_2(\\textbf{x}) - \\hat{I_2}(\\textbf{x})) \\\\\n\\rho(\\textbf{x}) = & \\sqrt{\\textbf{x}^2 + 0.001^2} \\label{eq-char}\n\\end{align}\n\nWe have now established the blueprint of our LDIS framework. We follow with specifics of implementation in a CNN.\n \n\\section{Network Implementation}\\label{sec:implementation}\nOur network LayerSegNet is inspired by encoder-decoder flow architectures \\cite{dosovitskiy2015flownet,pwcnet}. We use the encoding structure from pwc-net \\cite{pwcnet} that obtains separate feature encodings for $I_1$ and $I_2$. The outputs from the deepest level of the encoder are stacked then passed on to both the optical flow and segmentation pipelines. \n \n\\begin{figure}[t!]\n\\includegraphics[width=\\textwidth]{.\/Images\/layersegnet_arch_largefont.png}\n\\caption{Illustration of LayerSegNet that estimates flow maps ($W_1$, $W_2$) and alpha maps ($\\alpha_1$, $\\alpha_2$). Numbers beneath layers indicate number of feature channels.}\\label{fig:network}\n\\end{figure}\n\n\\subsubsection{Optical Flow Module:}\nIt contains three convolutional layers (96-strided, 64-strided and 48 feature channels) followed by four fully connected layers (512, 256, 64 and six output units). Each convolution is followed by a leaky ReLU. This module outputs two sets of affine motion parameters: $\\textbf{A}_1$ and $\\textbf{A}_2$. Using these parameters and the process outlined in equation (\\ref{eq:affine}) we obtain two flow maps $W_1,W_2 \\in \\mathbb{R}^{W\\times H\\times 2}$.\n\n\\subsubsection{Segmentation Module:}\nThis is a decoder module that takes the same input as the optical flow module and outputs a single value for each pixel $\\textbf{x}$ that is continuous in the range $[0,1]$. It performs three bilinear upsamplings (followed by convolutions) to output a single-channeled map. 
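\n\nThe sketch below illustrates, in simplified PyTorch-style code, how such a single-channeled map can be turned into the two disjoint alpha maps of Section~\\ref{sec:layered}. The two-channel soft assignment shown is only an illustrative stand-in for the softmax binning of \\cite{yang2018deep}, and the tensor shapes and names are assumptions made for the example rather than our exact implementation.\n\\begin{verbatim}\nimport torch\n\ndef soft_assignment(m, tau=0.1):\n    # Illustrative stand-in for two-bin soft binning: m has shape (B, 1, H, W)\n    # with values in [0, 1]; returns soft memberships of shape (B, 2, H, W).\n    logits = torch.cat([m, 1.0 - m], dim=1) / tau\n    return torch.softmax(logits, dim=1)\n\ndef maxout_disjoint(alpha):\n    # Keep, per pixel, only the larger of the two alpha channels and zero the\n    # other one, so that the two layer supports are disjoint.\n    keep = (alpha == alpha.max(dim=1, keepdim=True).values).float()\n    return alpha * keep\n\ndef binarise(alpha, threshold=0.5):\n    # Hard alpha maps used to split the input image into layer intensity maps;\n    # gradients do not pass through this thresholding.\n    return (alpha > threshold).float()\n\\end{verbatim}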
\n\nThis single-channeled map is passed through the softmax binning followed by maxout operation resulting in a dual-channeled map that at each pixel location gives the corresponding layer alpha value.\n\n\nIn \\cite{maas2013rectifier}, Maas {\\it et al.} proposed the leaky ReLU that restricts the slope of the ReLU \\cite{nair2010rectified} for input values below 0.\nEmpirically, we discovered that a modified leaky ReLU, when used as the activation function for estimating alpha maps, results in a significant boost to performance (see Section \\ref{subsec:ablation}). We modify the leaky ReLU by restricting the output slope for inputs below 0 as well as above 1. We refer to it as the leaky double-rectified linear unit (leaky DoReLU), show a comparison between leaky ReLU and leaky DoReLU in Figure \\ref{fig:drelu}, and define it as follows:\n\n\\begin{align}\n y(x) = &\n \\begin{cases}\n 1 + \\frac{x - 1}{\\gamma} &,\\ x > 1 \\\\\n x &,\\ \\text{if}\\ 0 \\leq x \\leq 1 \\\\\n \\frac{x}{\\gamma} &,\\ x < 0.\n \\end{cases}\n\\end{align}\n\nNote that our leaky DoReLU imposes leaky upperbound and lowerbound restrictions to the input signal and is different in implementation compared to the dual rectified linear unit, \\cite{godin2018dual}.\nIn this module, each convolution is followed by a leaky DoReLU with $\\gamma = 10$ except the last activation function that caps values to be strictly within $[0,1]$ (leaky part is removed) giving the required input to the softmax binning operation.\n\n\\begin{figure}[t!]\n\\centering\n \n \n\t\\includegraphics[width=0.9\\textwidth]{.\/Images\/Leaky_dorelu_vs_relu.png}\n \n \n\\caption{Visualisation of our leaky DoReLU (left) vs leaky ReLU (right). The x-axis is input $x$ and y-axis is output $y(x)$.} \\label{fig:drelu}\n\\end{figure}\n\n\n\\subsubsection{Image Reconstruction:}\nUsing $W_1$, $W_2$, $\\alpha_1$, $\\alpha_2$ and input image $I_1$, we perform the following steps to reconstruct $I_2$:\n\n\n\\begin{enumerate}\n\\item Extract the flow for each layer (eq.~\\ref{eq:flowsupport}): $w_{i} = \\alpha^i * W_i$\n\\item Extract layer intensity maps (eq.~\\ref{eq:layerintensity}): $L_i = \\alpha^{i-binary} * I_1; \\alpha^{i-binary} = (\\alpha^i > 0)$\n\\item Warp layer intensity maps with flows (eq.~\\ref{eq:layerwarp}): $\\hat{L}_i = \\textbf{warp}(L_i, w_{i})$\n\\item Warp $\\alpha^1$ with flow $w_{1}$ (eq.~\\ref{eq:alphawarp}): $\\hat{\\alpha}^1 = \\textbf{warp}(\\alpha^1, w_{1})$\n\\item Combine warped layers (eq.~\\ref{eq:reconstruct}): $\\hat{I}_2 = \\hat{\\alpha}^{1-binary} \\cdot \\hat{L}_1\\ + (1 - \\hat{\\alpha}^{1-binary}) \\cdot \\hat{L}_2$\n\\end{enumerate}\n\\noindent\nwhere $i=\\{1,2\\}$ is the layer index, $\\alpha^{i-binary}$ gives a binarised alpha map and \\textbf{warp()} is the forward warping process.\n\n \n\\section{Experiments}\n\n\\begin{figure*}[t!]\n\\centering\n 
\n\t\\begin{subfigure}[b]{0.19\\linewidth}\n\t\t\\includegraphics[width=\\textwidth]{.\/results_dicta\/comparison\/3.png}\n\t\\end{subfigure}\n\t\\begin{subfigure}[b]{0.19\\linewidth}\n\t\t\\includegraphics[width=\\textwidth]{.\/results_dicta\/comparison\/3gt.png}\n\t\\end{subfigure}\n\t\\begin{subfigure}[b]{0.19\\linewidth}\n\t\t\\includegraphics[width=\\textwidth]{.\/results_dicta\/comparison\/3anchor.png}\n\t\\end{subfigure}\n\t\\begin{subfigure}[b]{0.19\\linewidth}\n\t\t\\includegraphics[width=\\textwidth]{.\/results_dicta\/comparison\/3cos.png}\n\t\\end{subfigure}\n\t\\begin{subfigure}[b]{0.19\\linewidth}\n\t\t\\includegraphics[width=\\textwidth]{.\/results_dicta\/comparison\/3ours.png}\n\t\\end{subfigure}\n\t \n\t\\begin{subfigure}[b]{0.19\\linewidth}\n\t\t\\includegraphics[width=\\textwidth]{.\/results_dicta\/comparison\/5.png}\n\t\\end{subfigure}\n\t\\begin{subfigure}[b]{0.19\\linewidth}\n\t\t\\includegraphics[width=\\textwidth]{.\/results_dicta\/comparison\/5gt.png}\n\t\\end{subfigure}\n\t\\begin{subfigure}[b]{0.19\\linewidth}\n\t\t\\includegraphics[width=\\textwidth]{.\/results_dicta\/comparison\/5anchor.png}\n\t\\end{subfigure}\n\t\\begin{subfigure}[b]{0.19\\linewidth}\n\t\t\\includegraphics[width=\\textwidth]{.\/results_dicta\/comparison\/5cos.png}\n\t\\end{subfigure}\n\t\\begin{subfigure}[b]{0.19\\linewidth}\n\t\t\\includegraphics[width=\\textwidth]{.\/results_dicta\/comparison\/5ours.png}\n\t\\end{subfigure}\n\t \n\t\\begin{subfigure}[b]{0.19\\linewidth}\n\t\t\\includegraphics[width=\\textwidth]{.\/results_dicta\/comparison\/2.png}\n\t\\end{subfigure}\n\t\\begin{subfigure}[b]{0.19\\linewidth}\n\t\t\\includegraphics[width=\\textwidth]{.\/results_dicta\/comparison\/2gt.png}\n\t\\end{subfigure}\n\t\\begin{subfigure}[b]{0.19\\linewidth}\n\t\t\\includegraphics[width=\\textwidth]{.\/results_dicta\/comparison\/2anchor.png}\n\t\\end{subfigure}\n\t\\begin{subfigure}[b]{0.19\\linewidth}\n\t\t\\includegraphics[width=\\textwidth]{.\/results_dicta\/comparison\/2cos.png}\n\t\\end{subfigure}\n\t\\begin{subfigure}[b]{0.19\\linewidth}\n\t\t\\includegraphics[width=\\textwidth]{.\/results_dicta\/comparison\/2ours.png}\n\t\\end{subfigure}\n\t \n\t\\begin{subfigure}[b]{0.19\\linewidth}\n\t\t\\includegraphics[width=\\textwidth]{.\/results_dicta\/comparison\/4.png}\n\t\\end{subfigure}\n\t\\begin{subfigure}[b]{0.19\\linewidth}\n\t\t\\includegraphics[width=\\textwidth]{.\/results_dicta\/comparison\/4gt.png}\n\t\\end{subfigure}\n\t\\begin{subfigure}[b]{0.19\\linewidth}\n\t\t\\includegraphics[width=\\textwidth]{.\/results_dicta\/comparison\/4anchor.png}\n\t\\end{subfigure}\n\t\\begin{subfigure}[b]{0.19\\linewidth}\n\t\t\\includegraphics[width=\\textwidth]{.\/results_dicta\/comparison\/4cos.png}\n\t\\end{subfigure}\n\t\\begin{subfigure}[b]{0.19\\linewidth}\n\t\t\\includegraphics[width=\\textwidth]{.\/results_dicta\/comparison\/4ours.png}\n\t\\end{subfigure}\n\t \n 
\\begin{subfigure}[b]{0.19\\linewidth}\n\t\t\\includegraphics[width=\\textwidth]{.\/results_dicta\/comparison\/1.png}\n\t\t\\subcaption*{Frame1}\n\t\\end{subfigure}\n\t\\begin{subfigure}[b]{0.19\\linewidth}\n\t\t\\includegraphics[width=\\textwidth]{.\/results_dicta\/comparison\/1gt.png}\n\t\t\\subcaption*{GT}\n\t\\end{subfigure}\n\t\\begin{subfigure}[b]{0.19\\linewidth}\n\t\t\\includegraphics[width=\\textwidth]{.\/results_dicta\/comparison\/1anchor.png}\n\t\t\\subcaption*{AnchorDiff-VOS}\n\t\\end{subfigure}\n\t\\begin{subfigure}[b]{0.19\\linewidth}\n\t\t\\includegraphics[width=\\textwidth]{.\/results_dicta\/comparison\/1cos.png}\n\t\t\\subcaption*{COSNet}\n\t\\end{subfigure}\n\t\\begin{subfigure}[b]{0.19\\linewidth}\n\t\t\\includegraphics[width=\\textwidth]{.\/results_dicta\/comparison\/1ours.png}\n\t\t\\subcaption*{LayerSegNet (Ours)}\n\t\\end{subfigure}\n\\caption{Results on the MovingCars dataset of LayerSegNet (ours) trained without any annotated masks purely on a synthetic dataset (no finetuning) against AnchorDiff-VOS \\cite{yang2019anchor} and COSNet \\cite{lu2019see}. LayerSegNet, due to its purely affine motion segmentation approach successfully segments the dominant moving car in Rows 1 and 2 where AnchorDiff-VOS and COSNet both label all instances of cars (even cars that are not moving as in Row 2) due to their reliance on object features.}\n\\label{fig:results_mov_cars}\n\\end{figure*}\n \n\\subsection{Datasets}\n\n\nWe train our network on a synthetic dataset named RoamingSeg, created using images from \\cite{janai}, where a randomly scaled foreground moves on top of a moving background (all movements are random and linear). We create 11,000 sequences of resolution 256x512 (HxW) with ground-truth flow and segmentation maps. \n \nWe create a testing dataset named MovingCars to demonstrate object segmentation on real-world car images with affine motions. In each video scene, a dominant foreground object (car) is moving across the background. We prepare five sequences of moving cars with different backgrounds. The videos were taken handheld using a mobile phone camera that pans along with the moving cars. We do not finetune our network on MovingCars to test the generalisation capability of our method on real world street scenes.\n\n\nWe also create another dataset named MovingObjects under more controlled setting to demonstrate dense motion segmentation on real-world images with two rigid-body motions. In the image scene, a foreground object and a background surface both undergo small translations. We prepare a fine-tuning set for training that contains sequences of three objects \nagainst a common background. We build a test set that adds four additional unseen object classes set against unseen backgrounds. The images were taken using a mobile phone camera that was rested on and moved along a solid surface. For both these datasets, each class in the test set contains three sequential images with ground-truth segmentation masks for the first two images. \n\n\n\n\n \\subsection{Training Details}\nWe use a batchsize of 8 and train our network end-to-end using adam \\cite{kingma2014adam} with $\\beta_1=0.9$ and $\\beta_2=0.999$. Similar to pwc-net, our network outputs a quarter-resolution flow and segmentation maps that are upsampled to input image size to improve training efficiency. On RoamingSeg, we train for 600,000 iterations and start with a learning rate of $1.25\\times 10^{-5}$. 
We use this network to test on the MovingCars dataset (without any finetuning) comparing against two recent VOS methods that are trained with annotated ground-truth masks. We found no methods that directly match our approach of VOS using using unlabelled image pairs during both training and inference. \n\n\nTo finetune on MovingObjects, we train for 400,000 iterations with a learning rate of $6.25\\times 10^{-6}$\nThe learning rates are halved every 200,000 iterations. Note that we train our network from scratch fully unsupervised using our photometric loss. \nAt no stage of training do we make use of any semantic information or explicit supervision using ground-truth maps. \n\n\\subsection{Evaluation}\nWe evaluate our network using the mean Jaccard Index $\\mathcal{J}$ which is the intersection over union of predicted and GT object masks.\nWe demonstrate our network's ability to segment the dominant foreground object\non MovingCars (Figure \\ref{fig:results_mov_cars}), show results for rigid-body motion segmentation on MovingObjects (Figure \\ref{fig:results}) and perform an ablation study for our DoReLU activation (Table \\ref{table:ablation}). \n\n\n\\subsection{LayerSegNet vs Maxout baseline}\n \nTo the best of our knowledge, our method is the first implementation of an end-to-end layered image model in a CNN. \nIn \\cite{zhang2018layered}, Zhang \\textit{et al.}\nintroduce their ``softmask\" module that performs layered flow estimation using a CNN but without layered image modelling. This was achieved through a modified maxout operation\nto obtain $k$ disjoint soft-masks that are multiplied to $k$ estimated flow maps ($k$ determined beforehand). These resulting disjoint flow maps are added to obtain the final flow. The authors state that the softmask module can be used with any base flow estimation network to improve flow.\n\nWe use the same modified maxout operation in our network to ensure that the estimated alpha maps are disjoint. However, the ability of our network to achieve motion segmentation comes from our overall LDIS module. We posit that obtaining disjoint flow maps using maxout is not sufficient to obtain segmentation of motion-homogeneous regions. To investigate this and compare motion segmentation results against our LDIS framework, we create a baseline network using the softmask module.\n\n\n\nSince we were unable to obtain source code for \\cite{zhang2018layered}, we reproduce a similar network by modifying our network to accomodate the softmask module while removing LDIS. The segmentation module is changed to output two soft-masks and the optical flow module to give two dense flow maps (as in \\cite{zhang2018layered}). The soft-masks are made disjoint using maxout then multiplied to the flow maps and fused to obtain a final (single) flow map. We train this network supervised on RoamingSeg using ground-truth flow and on the same training regimen as our LayerSegNet. 
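\n\nFor concreteness, one plausible reading of this construction (our notation; the exact operation in \\cite{zhang2018layered} may differ) is, per pixel $\\textbf{x}$,\n\\begin{align}\n M_k(\\textbf{x}) = \\begin{cases} S_k(\\textbf{x}) & \\text{if } k = \\arg\\max_j S_j(\\textbf{x}) \\\\ 0 & \\text{otherwise,} \\end{cases}\n\\end{align}\nwhere $S_1$ and $S_2$ are the two soft-mask outputs, followed by the per-pixel fusion $W_{\\rm final}(\\textbf{x}) = M_1(\\textbf{x})\\,W_1(\\textbf{x}) + M_2(\\textbf{x})\\,W_2(\\textbf{x})$.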
\n\n\\begin{figure}[t!]\n\\centering\n \\begin{subfigure}[b]{0.19\\linewidth}\n\t\t\\includegraphics[width=\\textwidth]{.\/Images_supp\/464000_frame1.jpg}\n\t\\end{subfigure}\n \\begin{subfigure}[b]{0.19\\linewidth}\n\t\t\\includegraphics[width=\\textwidth]{.\/Images_supp\/464000_flow_gt.jpg}\n\t\\end{subfigure}\n \\begin{subfigure}[b]{0.19\\linewidth}\n\t\t\\includegraphics[width=\\textwidth]{.\/Images_supp\/464000_flow.jpg}\n\t\\end{subfigure}\n \\begin{subfigure}[b]{0.19\\linewidth}\n\t\t\\includegraphics[width=\\textwidth]{.\/Images_supp\/600000_seg_l0_thresr.png}\n\t\\end{subfigure}\n \\begin{subfigure}[b]{0.19\\linewidth}\n\t\t\\includegraphics[width=\\textwidth]{.\/Images_supp\/600000_seg_l1_thresr.png}\n\t\\end{subfigure}\n\t\n\t\\begin{subfigure}[b]{0.19\\linewidth}\n\t\t\\includegraphics[width=\\textwidth]{.\/Images_supp\/roaming\/3_frame1.png}\n\t\t\\subcaption{}\n\t\\end{subfigure}\n \\begin{subfigure}[b]{0.19\\linewidth}\n\t\t\\includegraphics[width=\\textwidth]{.\/Images_supp\/roaming\/3_flow_gt.png}\n\t\t\\subcaption{}\n\t\\end{subfigure}\n \\begin{subfigure}[b]{0.19\\linewidth}\n\t\t\\includegraphics[width=\\textwidth]{.\/Images_supp\/roaming\/3_pred_flow.png}\n\t\t\\subcaption{}\n\t\\end{subfigure}\n \\begin{subfigure}[b]{0.19\\linewidth}\n\t\t\\includegraphics[width=\\textwidth]{.\/Images_supp\/roaming\/3_segl0_thres.png}\n\t\t\\subcaption{}\n\t\\end{subfigure}\n \\begin{subfigure}[b]{0.19\\linewidth}\n\t\t\\includegraphics[width=\\textwidth]{.\/Images_supp\/roaming\/3_segl1_thres.png}\n\t\t\\subcaption{}\n\t\\end{subfigure}\n\t\n\\caption{Visual comparison of results between maxout baseline (top row) and our LayerSetNet (bottom row) on RoamingSeg. From the left column: (a) Frame 1, (b) Ground-truth flow, (c) Estimated flow, (d) Layer 1, (e) Layer 2.}\n\\label{fig:ablation}\n\\end{figure}\n\n\n\n \n \n \n \nIn the maxout baseline, we find that although the flow is accurately estimated, there is no visible separation of motion observable in the softmasks (Figure \\ref{fig:ablation}). We observe that the network only uses one intermediate flow map to estimate the final flow as evident by the lack of activations in Layer 1 and complete activations in Layer 2. In contrast, our proposed network successfully separates the foreground from the background (bottom row in Figure \\ref{fig:ablation}). \n\n\n\n\\subsection{Results}\n\\subsubsection{MovingCars}\nOn the MovingCars dataset, we compare our network LayerSegNet against two recent unsupervised VOS methods that are most similar to our approach: they are both end-to-end and do not need any annotations during inference. However, unlike us, they both require annotated masks during training. \nAnchorDiff-VOS \\cite{yang2019anchor} performs feature propagation across image sequences using non-local operators while COSNet \\cite{lu2019see} uses co-attention layers for global matching correspondences across images.\nWe show qualitative results in Figure \\ref{fig:results_mov_cars} where we show competitive performance against these methods. Our LayerSegNet performs purely bottom-up segmentation of dominant object based on affine motions whereas AnchorDiff-VOS and COSNet are both more reliant on feature similarity. We postulate this is why, in Row 2, we find that both of these methods have erroneously segmented the two parked cars on the right half of the image. Those cars do not move and only the car at the center (as seen in GT) shows motion. 
Our network does not give this false positive result and only segments the moving car.\n\\begin{table}[t]\n\\centering\n\\begin{tabular}{P{3cm} P{2cm} P{2cm}}\nMethod & Supervised Training & MovingCars ($\\mathcal{J}$ Mean $\\uparrow$)\\\\\n\\hline\\hline\nCOSNet \\cite{lu2019see} & \\cmark & 0.68 \\\\\nAnchorDiff-VOS\\cite{yang2019anchor} & \\cmark & 0.73\\\\\n\\hline\nLayerSegNet \\textbf{Ours} & \\xmark & 0.70\\\\\n\\hline\n\\end{tabular}\n\\caption{Quantitative results of our Network LayerSegNet compared against two recent VOS methods on MovingCars.}\\label{table:main}\n\\end{table}\n\nSimilarly, in Row 1, we find that both AnchorDiff-VOS and COSNet attempt to segment all cars in the image whereas ours can identify the dominant foreground object in the scene. In other scenes (Rows 3-5) where only a single moving car is present, we show comparable results to these two methods. Quantitative results on MovingCars in Table \\ref{table:main} show that our network outperforms COSNet by 2.9\\% and is only 4.2\\% behind AnchorDiff-VOS. Our LayerSegNet uses no annotated masks to train and is trained purely on a synthetic dataset with zero finetuning. In contrast, both AnchorDiff-VOS and COSNet rely on extensive ground-truth segmentation masks to train their networks. Note that our approach requires the dominant object to be moving with affine motion. If a car is moving more directly toward the camera its motion will not be affine, and the vehicle may only be partially detected. Examples of failed scenes are shown in the supplementary material.\n\n\\begin{figure}[t!]\n\\centering\n \\begin{subfigure}[b]{0.3\\linewidth}\n\t\t\\includegraphics[width=\\textwidth]{.\/Images\/resultsnew\/1_frame1.png}\n\t\\end{subfigure}\n\t\\begin{subfigure}[b]{0.3\\linewidth}\n\t\t\\includegraphics[width=\\textwidth]{.\/Images\/resultsnew\/1_segl0_thres.png}\n\t\\end{subfigure}\n\t\\begin{subfigure}[b]{0.3\\linewidth}\n\t\t\\includegraphics[width=\\textwidth]{.\/Images\/resultsnew\/1_seg_gt.png}\n\t\\end{subfigure}\n\t\n\t\\begin{subfigure}[b]{0.3\\linewidth}\n\t\t\\includegraphics[width=\\textwidth]{.\/Images\/resultsnew\/2_frame1.png}\n\t\\end{subfigure}\n\t\\begin{subfigure}[b]{0.3\\linewidth}\n\t\t\\includegraphics[width=\\textwidth]{.\/Images\/resultsnew\/2_segl0_thres.png}\n\t\\end{subfigure}\n\t\\begin{subfigure}[b]{0.3\\linewidth}\n\t\t\\includegraphics[width=\\textwidth]{.\/Images\/resultsnew\/2_seg_gt.png}\n\t\\end{subfigure}\n\n\n\n\n\t\\begin{subfigure}[b]{0.3\\linewidth}\n\t\t\\includegraphics[width=\\textwidth]{.\/Images\/resultsneweccv\/2_frame1.png}\n\t\\end{subfigure}\n\t\\begin{subfigure}[b]{0.3\\linewidth}\n\t\t\\includegraphics[width=\\textwidth]{.\/Images\/resultsneweccv\/2_segl0_thres.png}\n\t\\end{subfigure}\n\t\\begin{subfigure}[b]{0.3\\linewidth}\n\t\t\\includegraphics[width=\\textwidth]{.\/Images\/resultsneweccv\/2_seg_gt.png}\n\t\\end{subfigure}\n\n\n\t\\begin{subfigure}[b]{0.3\\linewidth}\n\t\t\\includegraphics[width=\\textwidth]{.\/Images\/resultsneweccv\/1_frame1.png}\n\t\\end{subfigure}\n\t\\begin{subfigure}[b]{0.3\\linewidth}\n\t\t\\includegraphics[width=\\textwidth]{.\/Images\/resultsneweccv\/1_segl0_thres.png}\n\t\\end{subfigure}\n\t\\begin{subfigure}[b]{0.3\\linewidth}\n\t\t\\includegraphics[width=\\textwidth]{.\/Images\/resultsneweccv\/1_seg_gt.png}\n\t\\end{subfigure}\n\t\n\t\n\t\\begin{subfigure}[b]{0.3\\linewidth}\n\t\t\\includegraphics[width=\\textwidth]{.\/Images\/resultsneweccv\/3_frame1.png}\n\t\\end{subfigure}\n\t\\begin{subfigure}[b]{0.3\\linewidth}\n\t\t\\includegraphics
[width=\\textwidth]{.\/Images\/resultsneweccv\/3_segl0_thres.png}\n\t\\end{subfigure}\n\t\\begin{subfigure}[b]{0.3\\linewidth}\n\t\t\\includegraphics[width=\\textwidth]{.\/Images\/resultsneweccv\/3_seg_gt.png}\n\t\\end{subfigure}\n\t\n\t \n\\caption{Our results on some classes of MovingObjects show good segmentation of rigid-body motions for a variety of objects (Rows 1-2 are objects not seen during finetuning).}\n\\label{fig:results}\n\\end{figure}\n\n\n\n\\subsubsection{MovingObjects}\nWe show quantitative results for MovingObjects in Table \\ref{table:ablation}. Leaky DoReLU (ours) shows strong performance in terms of Jaccard mean. Visual inspection of the segmentation results in Figure \\ref{fig:results} shows favourable results. Moving objects are correctly identified including for unseen objects (rows 1-4) and unseen backgrounds (rows 1-3). The segmentation maps show approximate coverage of the rectangular object classes. The gymnastic ring (fifth row) shows incomplete segmentation. As it has little internal texture, it may be that the motion is not consistently recovered.\n The maxout baseline failed to produce any foreground segmentation. \n\n\\subsection{Ablation}\n\n\n\n\n\nOur network fails to learn to segment motions without many vital components of our approach.\nFor example, we investigated relaxing the affine model to a regular optical flow map with smoothness regularisation for each layer. However, the network failed to learn accurately separate pixels into motion layers - we suspect applying only smoothness penalty is insufficient constraint towards segmenting pixels into motion layers.\nWhen separating $I_1$ pixels into layers, we found that binarising the $\\alpha$ maps is crucial as this prevents the pixel intensities from being scaled by the continuous $\\alpha$ values. This ensures that Frame 2 can be recreated without the network having to learn to modulate the pixel intensities. Without this change, the network fails to achieve any segmentation at all (Figure \\ref{fig:failures}).\n\n\\subsubsection{Leaky ReLU vs Leaky DoReLU activation}\\label{subsec:ablation}\nAs reported in Section \\ref{sec:implementation}, we use a leaky double rectified linear unit (DoReLU) as activation in our segmentation module. Two LayerSegNets, identical except for the activation (leaky DoReLU vs leaky ReLU) in the segmentation module, were trained and their results compared. \n\nOn MovingObjects, we obtained a mean IoU of 0.67 for leaky DoReLU (ours) vs 0.40 for leaky ReLU - a performance increase of 67.5\\%. Since our alpha map values are enforced to $[0,1]$, we postulate that the reason for the improved performance of leaky DoReLU is due to stable gradients when activation outputs are restricted below 0 and above 1 (compared to leaky ReLU that has no upperbound restriction).\n\n\\begin{figure}[t]\n\\centering\n \\begin{subfigure}[b]{0.29\\linewidth}\n\t\t\\includegraphics[width=\\textwidth]{.\/results_dicta\/failures\/26000_frame1.jpg}\n\t\\end{subfigure}\n \\begin{subfigure}[b]{0.29\\linewidth}\n\t\t\\includegraphics[width=\\textwidth]{.\/results_dicta\/failures\/26000_seg_binary.jpg}\n\t\\end{subfigure}\n \\begin{subfigure}[b]{0.29\\linewidth}\n\t\t\\includegraphics[width=\\textwidth]{.\/results_dicta\/failures\/26000_seg_gt.jpg}\n\t\\end{subfigure}\n\t \n\\caption{Visual example of a failed segmentation due to scaling of $I_1$ pixel intensities by continuous-valued $\\alpha$ maps. 
From the left column: (a) Frame 1, (b) Estimated segmentation, (c) GT.}\n\\label{fig:failures}\n\\end{figure}\n\n\\begin{table}[h]\n\\centering\n\\begin{tabular}{P{3.5cm} P{2.5cm} P{2cm}}\nActivation & $\\mathcal{J}$ Mean $\\uparrow$ & $\\Delta \\mathcal{J}$(\\%)\\\\\n\\hline\\hline\nLeaky ReLU & 0.40 & 0.0\\%\\\\\nLeaky DoReLU (ours) & 0.67 & +67.5\\%\\\\\n\\hline\n\\end{tabular}\n\\caption{Comparing segmentation accuracy of our LayerSegNet using leaky ReLU vs leaky DoReLU activation on the MovingObjects test set.\n}\\label{table:ablation}\n\\end{table}\n\n\n\n\n\n\n\n\n\n\n\n\\section{Conclusions}\nWe propose a bottom-up approach to segmenting a dominant foreground object in videos by grouping pixels based on affine motion. By leveraging a representation of images using moving layers, we successfully obtain dense segmentation without requiring pre-trained saliency priors, optical flow maps or any manual annotations during training\/inference.\n\nOur two layered affine motion model works remarkably well on the real world MovingCars dataset where we show favourable results against methods that were trained using annotated ground-truths. \nWe hope our demonstration on our datasets will prove beneficial by drawing attention and enabling further innovations to this novel task of segmenting dominant moving object in videos without any human supervision. \nTo this end, we make our model and datasets publicly available.\n\n\n\n\n\n\n\\clearpage\n\n\n\n\n\n\\section*{Supplementary Material}\n\n\n\\subsection{Forward Warping Formulation}\\label{sec:fwdwarp}\n\nIn \\cite{jaderberg}, Jaderberg \\textit{et al.} propose a spatial transformer module that facilitates differentiable image warping. This warping process is identified in literature as `backward warping' where intensity values are interpolated from the input intensity-space and transported to discreet pixel locations in the output image grid. Mathematically, the backward warping process can be written as:\n\n\\begin{align}\n & \\hat{I}(\\textbf{x}) \\leftarrow I(\\textbf{x} + w(\\textbf{x})),\\ \\forall \\textbf{x} \\in \\Omega_{\\hat{I}},\n\\end{align}\n\n\\noindent\nwhere $\\Omega_{\\hat{I}}$ gives the image domain of output $\\hat{I}$, $w(\\textbf{x}) = (u,v)$ denotes flow vectors at particular spatial locations $\\textbf{x} = (x,y)$ and $\\hat{I}$ gives the warped output. Note that $\\textbf{x}$ gives pixel locations in the output image grid $\\hat{I}$ and the flow is applied (and thus associated) to pixel in $\\hat{I}$.\n\n\\begin{figure}[h!]\n \\centering\n \\includegraphics[width=\\textwidth]{.\/Images\/warping_fwbw.png}\n\\caption{\\textit{Forward warping} (Left): Some pixels from input $I$ being warped to non-integer output coordinates in I. \\textit{Backward Warping} (Right): Some empty pixels in output $\\hat{I}$ being populated with values interpolated from non-integer input coordinates in I. Circles represent pixel locations that have a value (filled with colour) or are empty (not filled).}\n\\label{fig:warping}\n\\end{figure}\n\nOur proposed layered motion model aims to segment pixels in the input image based on motion. For this, we require a warping process where the flow is associated, and applied, to discreet pixels in the input (not output as in backward warping). 
Mathematically, the forward warping process is given by:\n\n\\begin{align}\n & \\hat{I}(\\textbf{x} + w(\\textbf{x})) \\leftarrow I(\\textbf{x}),\\ \\forall \\textbf{x} \\in \\Omega_I,\n\\end{align}\n\\noindent\nwhere $\\Omega_I$ gives the image domain of input $I$, $w(\\textbf{x})$ denotes flow vectors at particular spatial locations $\\textbf{x}$ and $\\hat{I}$ gives the warped output. Here, the flow is applied to pixels in the input image grid. This enables us to associate flow values to discreet input pixels and thus segregate these input pixels based on their flow. A visual comparison between forward and backward warping is shown in Figure \\ref{fig:warping}.\n\n\n\\begin{figure}[t!]\n \\centering\n \\includegraphics[width=\\textwidth]{.\/Images_supp\/baseline_arch.png}\n\\caption{Network architecture of maxout baseline incorporating the softmask pipeline from \\cite{zhang2018layered}.}\n\\label{fig:baseline}\n\\end{figure}\n\nIf the flow applied is non-integer valued, warped pixel-values are mapped to locations \\textit{between} the pixel grid in the output. To populate the output pixels, each warped pixel-value that ends up between the output grid is interpolated bi-linearly into the surrounding four (empty) output pixels. The final output is obtained after performing summation of pixel-value contributions from all warped input pixels.\n\nLet $V_i(x_i^V, y_i^V)$ represent pixels in the output grid and $W_j(x_j^W, y_j^W)$ represent the warped pixel-values from the input grid. The interpolation process can then be written as:\n\n\\begin{align}\n\tV_i^c = \\sum\\limits_j^{HW} W_{j}^c & \\max(0, 1-\\lvert {x_j^W - x_i^V}\\rvert)\\max(0, 1-\\lvert {y_j^W - y_i^V}\\rvert) \\\\ \n\t& \\forall i \\in [1\\dots HW],\\ \\ \\forall c \\in [R,G,B], \\nonumber\n\\end{align} \n\\noindent\nwhere $H$ and $W$ give the height and width of the input and output image grids and $c$ represents the image channels. This sampling process is sub-differentiable and allows gradients to pass through the network facilitating end-to-end learning. It is highly similar in implementation to the sampling mechanism for backward warping outlined in \\cite{jaderberg} and can be run very efficiently on GPUs. Our forward warping formulation is equivalent to the ``summation splatting\" presented in \\cite{niklaus2020softmax}. \n\n\n\n\\begin{figure*}[t!]\n\\centering \n\t\\begin{subfigure}[b]{0.32\\linewidth}\n\t\t\\includegraphics[width=\\textwidth]{.\/results_dicta\/failures\/supp\/0006_frame1seg.png}\n\t\\end{subfigure}\n\t\\begin{subfigure}[b]{0.32\\linewidth}\n\t\t\\includegraphics[width=\\textwidth]{.\/results_dicta\/failures\/supp\/0138_frame1seg.png}\n\t\\end{subfigure}\n\t\\begin{subfigure}[b]{0.32\\linewidth}\n\t\t\\includegraphics[width=\\textwidth]{.\/results_dicta\/failures\/supp\/0273_frame1seg.png}\n\t\\end{subfigure}\n\t\n\t\\begin{subfigure}[b]{0.32\\linewidth}\n\t\t\\includegraphics[width=\\textwidth]{.\/results_dicta\/failures\/supp\/0414_frame1seg.png}\n\t\\end{subfigure}\n\t\\begin{subfigure}[b]{0.32\\linewidth}\n\t\t\\includegraphics[width=\\textwidth]{.\/results_dicta\/failures\/supp\/0853_frame1seg.png}\n\t\\end{subfigure}\n\t\\begin{subfigure}[b]{0.32\\linewidth}\n\t\t\\includegraphics[width=\\textwidth]{.\/results_dicta\/failures\/supp\/1214_frame1seg.png}\n\t\\end{subfigure}\n\t\n\\caption{Results on the MovingCars dataset of LayerSegNet (ours) showing some failure cases where the dominant foreground moving object is not identified correctly. 
This usually happens when the object motion cannot be handled by the network's affine parametric model.}\n\n\n\\label{fig:failures-supp}\n\\end{figure*}\n\n\\begin{figure}[t!]\n\\includegraphics[width=\\textwidth]{.\/Images\/DICTA_softmax_binning.png}\n\\caption{Figure showing the output of softmax binning for different $\\tau$ values. X-axis gives the input variable p and y-axis gives the $\\alpha$ values after softmax for layer-0 (red) and layer-1 (blue).}\n\\label{fig:softmax_binning}\n\\end{figure}\n\n\n\n\n\\subsection{Maxout Baseline Network Details}\n\n\nWe create a baseline network for motion segmentation following steps to implement the soft-mask pipeline from \\cite{zhang2018layered}. We change the optical flow pipeline of our LayerSegNet to output two dense flow maps ($W_1$ and $W_2$) instead of 2 affine parameters. The segmentation module structure remains unchanged: the output is still two segmentation masks. However, we set all activation functions to leaky ReLU (as in optical flow pipeline) including in the last layer i.e. the values of the segmentation masks are not capped to $[0,1]$ as in our LayerSegNet. This is as outlined in \\cite{zhang2018layered}.\n\nThe segmentation masks are passed through the maxout operation to obtain two disjoint masks $M_1$ and $M_2$. These are multiplied and added to $W_1$ and $W_2$ to obtain the final flow map:\n\\begin{align}\n W_{final} = W_1*M_1 + W_2*M_2\n\\end{align}\n\nWe pass $W_{final}$ to a supervised loss where it is compared against ground-truth flow. We show the network structure in Figure \\ref{fig:baseline}.\n\n\n\\subsection{MovingCars Dataset}\nWe take videos on a smartphone to capture our MovingCars Dataset by following the trajectory of the moving car. Our affine parametric model provides strong constraint for segmenting pixels into motion layers but it fails to fully segment, or fails completely, in scenes where the dominant motion is too complex for an affine model. We show some example images in Figure \\ref{fig:failures-supp}.\n\n\n\\subsection{Softmax Binning}\nSoftmax binning removes the need for two separate variables to decide motion layer classification. Instead, there is only one variable that is mapped to the class label depending on which interval it lies in. \nHere, we use two `bins' for our two motion layers to obtain their alpha maps:\n\n\n\nThus, for each pixel $\\textbf{x}$, we bin a continuous valued input $p \\in [0,1]$ into one of 2 intervals: $[0.0, 0.5]$ or $(0.5, 1.0]$. We then associate a pixel's layer classification depending on which interval it falls under. First, we map $p$ to the respective intervals: \n\n\\begin{align}\n \\alpha_{w, b, \\tau}^{i}(p) = (w p - b)\/\\tau\n\\end{align}\nwhere $\\alpha^i$ gives the mapped value for each interval, $i \\in \\{0, 1, ..., n-1\\}$ gives the interval index and $n \\in \\mathbb{Z}^{+}$ gives the total number of intervals ($n = 2$ for LDIS), $w_i \\in \\{1, 2, ... , n\\}$ is a constant, $b_i \\in \\{0, \\beta_1, \\beta_2, ... , \\beta_n\\}$ are the cut-off points that separate each of the intervals and $\\tau > 0$ is a temperature factor. 
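As a purely illustrative example (anticipating the two-interval parameters used below, with $\\tau$ chosen arbitrarily): for $\\tau = 0.05$, an input $p = 0.7$ maps to $\\alpha^0 = 0.7\/0.05 = 14$ and $\\alpha^1 = (1.4 - 0.5)\/0.05 = 18$, so the subsequent softmax assigns the pixel to layer-1 with weight $1\/(1+e^{-4}) \\approx 0.98$, whereas $p = 0.3$ maps to $\\alpha^0 = 6$ and $\\alpha^1 = 2$ and the pixel is assigned to layer-0 with the same weight. 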
For our two intervals, $\\alpha^0$ and $\\alpha^1$: $w_i \\in \\{1, 2\\}$ and $b_i \\in \\{0, 0.5\\}$: \n\n\\begin{align}\n \\alpha^0(p) &= (w_0 * p - b_0)\/\\tau = x\/\\tau \\\\\n \\alpha^1(p) &= (w_1 * p - b_1)\/\\tau = (2x - 0.5)\/\\tau \n\\end{align}\n\nWe then concatenate and pass these through a softmax operation to get the final motion layer label for all pixels $\\textbf{x}$: \n\\begin{align}\n [\\alpha^0, \\alpha^1] = \\text{Softmax}([\\alpha^0, \\alpha^1])\n \\label{eq:softmaxbinning}\n\\end{align}\nwhere $\\alpha^0$ and $\\alpha^1$ give the respective class label values. As $\\tau \\rightarrow 0$, the alpha maps simulate a one-hot vector \n. We show this behaviour in Figure \\ref{fig:softmax_binning}. We plot the input $p$ in x-axis and the alpha map values in y-axis for both the layers (red curve for layer-0 and blue curve for layer-1). We can see that the outputs for both layers start to resemble a one-hot encoding as $\\tau \\rightarrow 0$ and the y values stay consistent as $p$ gets closer and closer to the cut-off point. \n\n\n\nThis interval is determined by the cut-off point which, for only two class labels, is 0.5. So, $\\alpha^0(p) \\rightarrow 1$ if $\\textbf{x}$ belongs to layer-0 and $\\alpha^1(p) \\rightarrow 1$ if it belongs to layer-1. This binning process also ensures that each pixel $\\textbf{x}$ can only belong to a single layer. More details on softmax binning can be found in \\cite{yang2018deep}. \n\n\n\n\n\\bibliographystyle{splncs}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{Methods}\n\\label{s:methods}\n\\subsection*{Gamma-ray observations}\nFor each system in our sample, we analysed 11.4 years of observations taken by the Large Area Telescope (LAT) onboard the \\textit{Fermi Gamma-ray Space Telescope}. We selected \\texttt{SOURCE}-class photons detected with reconstructed energies $50\\,\\textrm{MeV} < E <300\\,\\textrm{GeV}$, and with reconstructed directions from within a $3^\\circ$ region-of-interest (RoI) around each pulsar, according to the \\texttt{P8R3\\_SOURCE\\_V2} instrument response functions \\citep{Pass8,Bruel2018+P305}. \n\nSensitive unbinned-likelihood based methods for detecting eclipses \\citep[e.g.][]{Kerr2019+godot} account for each photon individually, and therefore must account for the relative probability of each photon having been emitted by the target source, as opposed to by a fore\/background source. This is achieved by \\textit{weighting} the contribution of each photon to the relevant statistic \\citep{Kerr2011+Weights}. Computing these weights requires an accurate spectral and spatial model of the emission from the target pulsar and all fore\/background sources in the RoI \\citep{Bruel2019+Weights}. For this, we used the 10-year incremental version (DR2) of the \\textit{Fermi}-LAT Fourth Source Catalog \\citep[4FGL,][]{4FGL,Ballet2020+4FGLDR2} (\\url{https:\/\/fermi.gsfc.nasa.gov\/ssc\/data\/access\/lat\/10yr_catalog\/}) and the \\texttt{gll\\_iem\\_v07.fits} Galactic diffuse emission and \\texttt{iso\\_P8R3\\_SOURCE\\_V3\\_v1.txt} isotropic diffuse emission models to describe the diffuse background emission. The parameters of the spectra of the target pulsars were then refined such that the resulting photon weights maximize the significances of their gamma-ray pulsations, as described in \\citep{Bruel2019+Weights}. 
These photon weights make use of the ``PSF'' event types (\\url{https:\/\/fermi.gsfc.nasa.gov\/ssc\/data\/analysis\/documentation\/Cicerone\/Cicerone_Data\/LAT_DP.html}) to benefit from the narrower point-spread function for well-reconstructed photon events. \n\nThe required timing ephemerides for each spider pulsar were compiled as part of an upcoming third iteration of the \\textit{Fermi}-LAT Pulsar Catalogue \\citep{2PC}. For each pulsar, we computed the orbital phase at which each photon was emitted according to these ephemerides using the \\texttt{TEMPO2} software \\citep{Edwards2006+Tempo2}. The orbital ephemeris validity was verified by the presence of gamma-ray pulsations throughout the data.\n\n\\subsection*{Test statistic for eclipse detection}\nTo test for possible eclipses we adopted the unbinned likelihood estimation methods described in \\citep{Kennedy2020+J0427} and \\citep{Kerr2019+godot}. Under this model, we assume that the eclipse has sharp in\/egresses, is centred on the pulsar's superior conjunction, and lasts for a fraction $\\theta$ of the orbital period. Within the eclipse, we assume a constant flux level, which we parameterise with $\\alpha$, the fractional flux level within the eclipse relative to the overall average flux. The increase in the log-likelihood for such an eclipse, compared to the null hypothesis of photons being uniformly distributed in orbital phase, is\n\\begin{equation}\n \\begin{split}\n \\label{e:logL}\n \\delta\\log\\mathcal{L}(\\alpha,\\theta) = &\\sum_{i\\in\\Theta}\\log \\left(w_i\\,\\alpha + 1-w_i\\right)\\\\ &+ \\sum_{i\\in\\bar{\\Theta}}\\log \\left(w_i \\frac{1-\\alpha\\theta}{1-\\theta} + 1-w_i\\right) \\\\ &-\\left(\\alpha\\eta_{\\Theta} + \\frac{1-\\alpha\\theta}{1-\\theta}\\eta_{\\bar{\\Theta}}-1\\right)\\sum_i w_i,\n\\end{split}\n\\end{equation}\nwhere $w_i$ is the photon probability weight for the $i$-th photon; $\\Theta$\/$\\bar{\\Theta}$ refer to photons with orbital phases inside\/outside the eclipse, respectively; and $\\eta_{\\Theta}$\/$\\eta_{\\bar{\\Theta}}$ denotes the fractional exposure inside\/outside the eclipse, respectively. The last term in Equation~\\ref{e:logL} accounts for variations in the exposure as a function of orbital phase. We computed the exposure for each pulsar in $30$\\,s time intervals over the \\textit{Fermi} mission using \\texttt{godot} \\citep{Kerr2019+godot}, and folded these on the orbital period to compute $\\eta_{\\Theta}$ and $\\eta_{\\bar{\\Theta}}$. After several years of observations, corresponding to several thousands of orbits of each pulsar system included here, the exposure is usually very evenly distributed across all orbital phases and hence exposure variations typically have very little effect on the resulting likelihood calculation; we correct for this effect nevertheless.\n\nFor each pulsar, we tested the hypothesis of a complete eclipse of the gamma-ray emission, corresponding to $\\alpha = 0$, testing for $\\theta \\in [0,0.2)$ with fine spacing. The upper bound on this range is more than twice as large as the maximum possible eclipse duration for our studied population, assuming that companion stars do not overflow their Roche lobes. 
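As a rough guide (used here only for illustration, not in the analysis itself), a Roche-lobe-filling companion viewed edge-on and treated as a sphere of volume-equivalent radius given by the Eggleton approximation, $R_{\\rm L}\/a = 0.49\\,q_{\\rm c}^{2\/3}\/[0.6\\,q_{\\rm c}^{2\/3} + \\ln(1+q_{\\rm c}^{1\/3})]$ with $q_{\\rm c} \\equiv M_{\\rm c}\/M_{\\rm psr}$, eclipses the pulsar for a fraction $\\theta \\approx \\arcsin(R_{\\rm L}\/a)\/\\pi$ of the orbit; for $q_{\\rm c} = 1\/4$ this gives $\\theta \\approx 0.086$, slightly larger than the exact value quoted below because the Roche lobe is elongated towards the pulsar. 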
The pulsar with the smallest mass ratio in our population is PSR~J2129$-$0429, with $q\\equiv M_{\\rm psr}\/M_{\\rm c} = 3.93\\pm0.06$, which would eclipse for 8.4\\% of an orbit if the companion filled its Roche lobe and was observed at $i=90^{\\circ}$.\n\nFor pulsars in which significant eclipses were detected we also tested alternative eclipse models with $0 < \\alpha < 1$ and with curved rather than sharp ingresses and egresses. No significant log-likelihood improvements were observed.\n\n\\subsection*{Posterior photon weights}\nA significant improvement in sensitivity when searching for eclipses can be obtained by incorporating into our analysis the fact that the gamma-ray emission is pulsed, i.e. that gamma-ray photons observed at pulse phases that fall within a peak in the gamma-ray pulse profile are more likely to have originated from the pulsar than from the background. To make use of this knowledge, we use the photon re-weighting method of \\citet{Kerr2019+godot}, which we briefly describe here.\n\nA photon weight, $w$, computed as above using the spectral and spatial model of the RoI, is our best estimate for the probability of that photon having been emitted by the pulsar, before including knowledge of the pulsar rotational phase at which the photon was emitted. We can denote this as a prior probability $P(S) = w$, where $S$ denotes the binary statement that the photon was emitted by our target source. The probability for the opposite case, $B$, where the photon is emitted by a background source, is then $P(B) = 1 - w$. The re-weighting method updates our knowledge of the probability of the photon being emitted by the target source, based on the rotational phase $\\phi$ at which the photon was emitted, by applying Bayes' theorem,\n\\begin{equation}\n P(S \\,|\\, \\phi) = \\frac{p(\\phi \\,|\\, S) \\,P(S)}{p(\\phi \\,|\\, S) \\,P(S) + p(\\phi \\,|\\, B)\\, P (B)}\\,.\n \\label{e:bayes_rule}\n\\end{equation}\nHere $P(S \\,|\\, \\phi)$ is now the posterior probability of the photon having been emitted by the pulsar, given its rotational phase; $p(\\phi \\,|\\, S)$ is the phase distribution of photons emitted by the pulsar, i.e. the pulsar's pulse profile, which we hereafter denote as $f(\\phi)$; and $p(\\phi \\,|\\, B)$ is the phase distribution of background photons, which we can safely assume to be uniform when folding on the millisecond pulse periods of the pulsars included here, hence $p(\\phi \\,|\\, B) = 1$. Re-writing equation \\ref{e:bayes_rule} with these values gives us the re-weighting equation,\n\\begin{equation}\n P(S \\,|\\, \\phi) = w^{\\prime} = \\frac{w f(\\phi)} {w f(\\phi) + 1 - w}\\,.\n \\label{e:posterior_weights}\n\\end{equation}\nWe hereafter refer to $w$ as the prior weights, and $w^{\\prime}$ as the posterior weights. For phases within peaks of the pulse profile, where $f(\\phi) > 1$, these posterior weights are always greater than the prior weights, and for phases outside of peaks, where $f(\\phi) < 1$, the posterior weights are always lower. Thus, photons within pulse peaks are up-weighted, while the rest are down-weighted. When searching for eclipses, the posterior weights help to increase the detection statistic values for true eclipses by downweighting the detrimental effect of photons that by chance have high weights, and fall within the eclipse region, but whose rotational phases do not lie within a pulse peak and are therefore less likely to have been emitted by the pulsar than initially predicted by the prior weight. 
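For instance, with purely illustrative numbers, a photon with prior weight $w = 0.6$ that falls within the putative eclipse window but arrives at an off-pulse rotational phase where $f(\\phi) = 0.2$ is down-weighted to $w^{\\prime} = 0.6\\times0.2\/(0.6\\times0.2 + 0.4) \\approx 0.23$, greatly reducing the penalty it contributes to $\\delta\\log\\mathcal{L}$ under the full-eclipse hypothesis. 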
Similarly, photons lying outside the eclipse region but within a pulse peak, and therefore more likely to have been emitted by the pulsar, have a larger positive contribution to the eclipse log-likelihood.\n\nTo obtain the pulse profile models, $f(\\phi)$, we fit a set of wrapped Gaussian functions to the prior-weighted photon phases using the maximum-likelihood method described by \\citet{2PC}. The number of Gaussian functions used to model each pulse profile was chosen to minimise the Bayesian Information Criterion \\citep{Schwarz1978+BIC}. \n\nWe initially performed our search using the prior weights, but changed to using the posterior weights after finding that they significantly improved the sensitivity to eclipses. Of the four significant eclipses that were found using the posterior weights for the eclipse search (prior weights were used for PSR~J1048$+$2339 as discussed below), three were originally significantly detected with the prior weights, but the posterior weights give significantly larger log-likelihood values, with $\\delta\\log\\mathcal{L}$ increasing by at least 2.6 for these pulsars. \nOnly the eclipse from PSR~J1555$-$2908 is undetected using the prior weights, with $\\delta\\log\\mathcal{L}$ = 3.35 compared to $\\delta\\log\\mathcal{L}$ = 10.07 with posterior weights, likely owing to its weak overall flux but very narrow pulse peaks. \n\nFor one pulsar in which a significant eclipse is found, PSR~J1048$+$2339, the radio timing ephemeris only covers a shorter 3-yr portion of the LAT mission, with variations in the orbital period and a low photon flux preventing generation of a full timing ephemeris using the LAT data. The radio timing ephemeris for this pulsar contains several orbital frequency derivatives to model these variations, but this ephemeris becomes highly uncertain when extrapolating outside the time interval in which it was derived. For our eclipse study, we removed these orbital frequency derivatives from the ephemeris and computed orbital phases assuming a constant orbital period. During the radio timing interval, these orbital period variations cause orbital phase shifts of up to $\\sim 10^{-3}$\\ orbits \\citep{Deneva2016+J1048}. This is around 2\\% of the duration of the eclipse detected in this system, and therefore we do not expect this additional source of uncertainty to substantially affect our results. For this pulsar, since pulsations are not observed outside the period covered by the radio ephemeris, we used the prior weights $w$ rather than the posterior weights $w^{\\prime}$ when searching for eclipses. \n\nWe include the two gamma-ray detected transitional MSPs, PSRs~J1023$+$0038 and J1227$-$4853 in our study, classifying these as RBs, as they appear to be very similar to this class when in their non-accreting state. We note that the source of their increased gamma-ray flux during the accreting states is unclear, but we assume that it also originates close to the neutron star (as indeed seems to be the case for the gamma-ray eclipsing transitional MSP candidate 4FGL~J0427.8-6704 \\citealt{Strader2016+J0427,Kennedy2020+J0427}), and include data from both the accreting- and non-accreting states in our analysis. Pulsations are not detected from these pulsars in their accreting states, and so for these we again use the prior weights, rather than the posterior weights.\n\n\\subsection*{Significance calibration via Monte-Carlo simulations}\nThe search over the eclipse width $\\theta$ introduces an unknown number of independent trials to our search. 
We therefore calibrated false-alarm probabilities ($P_{\\rm FA}$) via Monte-Carlo analysis. For each pulsar, we took the observed set of posterior weights, randomly sampled orbital phases from a uniform distribution, computed the log-likelihood of Equation~\\ref{e:logL} for the same set of $\\theta$ values as used in the eclipse search, and stored the maximum value, iterating $10^{7}$ times. \n\nFigure~\\ref{f:MonteCarlo} shows the results of the Monte-Carlo simulations that we used to calibrate the statistical significances of these eclipses. Of the five eclipsing pulsars, the eclipse in PSR J1555$-$2908 has the lowest significance, but still has a false-alarm probability $P_{\\rm FA} \\approx 5\\times10^{-5}$.\n\nThe $\\delta\\log\\mathcal{L}$ values observed from PSRs~J1816$+$4510 and J2129$-$0429 are larger than any obtained in our simulations. To estimate their false-alarm probabilities, we therefore performed a simple linear fit to the observed $\\delta\\log\\mathcal{L}$ vs. $\\log(P_{\\rm FA})$ curves for these pulsars, and extrapolated to the observed values.\n\nIn Figure~\\ref{f:MonteCarlo}, we also show the empirical survival function, i.e. the fraction of pulsars whose measured eclipse log-likelihoods would survive a given threshold. If there were no eclipses in our data set, then the set of measured log-likelihood values would be drawn from the null-hypothesis distribution and this empirical survival function curve would closely follow the simulated curves. The ratio between the empirical and simulated curves at the highest measured log-likelihood value illustrates the significance of the largest outlier, given the number of pulsars included in the sample. \n\nIf we remove the five eclipsing systems, then the empirical survival function curve does closely follow the simulated null-hypothesis curve, and only starts to deviate for the final pulsar, PSR~J0251$+$2606, which has a false-alarm probability of around $0.2\\%$. With $n_{\\rm psr} = 37$ pulsars remaining in this sample, the largest outlier should have a false-alarm probability of around $1 \/ n_{\\rm psr} = 1\/37 \\approx 2.7\\%$. This pulsar therefore has an eclipse log-likelihood value that has an estimated false-alarm rate around ten times lower than expected for the largest outlier from our study, given the number of pulsars included. This could be viewed as marginal evidence for an eclipse, with all other measured $\\delta\\log\\mathcal{L}$ values being consistent with the null-hypothesis. \n\n\\subsection*{Pulsar mass constraints}\nThe significant detection or exclusion of a gamma-ray eclipse provides a constraint on the binary inclination angle that depends on the angular size of the companion star as seen from the pulsar. The angular size of the companion star's Roche lobe only depends on the binary mass ratio, $q$, and hence it is convenient to parameterise the size of the companion star by $q$, and its Roche-lobe filling factor $f_{\\rm RL}$ (which we define as the radius of the star along the binary separation vector divided by the Roche lobe radius in the same direction). These parameters can be constrained by optical observations. The mass ratio is derived from measurements of the pulsar and companion projected radial velocity amplitudes ($K_{\\rm psr}$ and $K_{\\rm c}$), measured via pulsar timing and optical spectroscopy, respectively, with $q=M_{\\rm psr}\/M_{\\rm c} = K_{\\rm c}\/K_{\\rm psr}$. 
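(For the near-circular orbits considered here, $K_{\\rm psr}$ follows directly from the timing measurement of the projected semi-major axis $x \\equiv a_{\\rm psr}\\sin i\/c$, via $K_{\\rm psr} = 2\\pi c\\,x\/P_{\\rm orb}$.) 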
The Roche-lobe filling factor can be estimated from rotational broadening or surface gravity measurements via optical spectroscopy \\citep[e.g.][]{Kaplan2013+J1816} or from the amplitude of the ``ellipsoidal'' component of an observed optical light curve \\citep{Icarus}. However, this parameter is often correlated with the estimated inclination, and so previous estimates of $f_{\\rm RL}$ are not necessarily consistent with new inclination constraints from an eclipse detection or exclusion. \n\nTo compute expected eclipse durations, we generated model stars using the \\texttt{Icarus} \\citep{Icarus} binary modelling software. \\texttt{Icarus} assumes that the surface of the star follows an equipotential contour within its Roche lobe, and therefore the simulated surface accounts for the non-spherical shape of the star due to tidal and rotational deformation. For a given binary mass ratio and Roche-lobe filling factor we can then compute the range of orbital phases at which the pulsar is eclipsed by the model star, when viewed from a given inclination. We assume that the pulsar is effectively a point-source of gamma-ray emission, since gamma-ray emission is thought to either be produced inside, or just outside, the pulsar's light cylinder \\citep{Kalapotharakos2019+FP}, which is thousands of times smaller than the orbital separation in a spider binary. As we do not detect gradual in\/egresses in the eclipses, and since the density profile of the outer envelope of the companion star is unknown, we assume that any line-of-sight crossing the photosphere will be fully eclipsed.\n\nThe pulsar and companion masses can be estimated, as a function of inclination, from the binary mass function,\n\\begin{align}\n M_{\\rm psr} &= \\frac{K_{\\rm c}^3 P_{\\rm orb} (1 + 1\/q)^2}{2\\pi G \\sin^3 i}\\,,\\\\\n M_{\\rm c} &= \\frac{K_{\\rm psr}^3 P_{\\rm orb} (1 + q)^2}{2\\pi G \\sin^3 i}\\,.\n\\end{align}\nTables~~\\ref{t:eclipses} and \\ref{t:non_eclipses} list our resulting constraints on the inclination and component masses for eclipsing and non-eclipsing systems, respectively, with existing companion radial velocity measurements. When an eclipse is detected, we assume that the companion fills its Roche lobe to obtain a lower bound on the inclination, and hence a conservatively high upper bound on the pulsar and companion masses, while assuming $i=90^\\circ$ provides a strict lower limit on the masses with no assumption on the filling factor. For systems without detected eclipses, we assume a low $f_{\\rm RL} = 0.5$ to obtain an upper bound on the inclination, and lower bound on the component masses. This limit is based on the low filling factor for PSR~J1816$+$4510 estimated by \\citet{Kaplan2013+J1816} using the surface gravity determined by optical spectroscopy. Optical models for BW and RB systems tend to have rather higher estimated filling factors \\citep[e.g.][]{Draghis2019+LCs}, and so we adopt this value as a conservative estimate. \n\nWhere possible, we take radial velocity amplitudes that have been corrected for heating effects that shift the centre-of-light away from the companion's centre-of-mass. These corrections tend to increase the inferred $K_{\\rm c}$, and hence increase the pulsar mass estimate. Three RB pulsars in our list do not have published centre-of-light-corrected radial velocity amplitudes (PSRs J1431$-$4715, J1622$-$0315 and J1625$-$3205), and so the mass limits for these may be slightly underestimated. 
However, all three have optical light curves that suggest very little heating effect is present, so the required corrections are likely to be small for these systems.\n\nAll systems studied here that do not have radial velocity measurements are BWs (which tend to be fainter at optical wavelengths, and hence often inaccessible to spectroscopic studies). For these, we assume a typical mass ratio of $q=70$, and list the resulting inclination constraints in Supplementary Table~2. While the inclination constraints vary slowly with $q$ at values typical for BWs, the component masses do depend strongly on the assumed value of $q$, and so we do not list mass constraints here. \n\n\\subsection*{Optical constraints for eclipsing pulsars}\nPrevious results from optical observations and modelling of PSR~B1957$+$20 are discussed in the main text. In the paragraphs below we discuss the existing multiwavelength observations for the other four eclipsing pulsars. Where previous works provide constraints on the Roche-lobe filling factor, we use these constraints to obtain larger (but less robust) lower limits on the inclination angle (and therefore tighter upper bounds on the pulsar masses) than are obtained by assuming $f_{\\rm RL} = 1$ in the previous section. We also use these estimates to obtain upper limits on the inclination angle, rather than simply assuming $i < 90^{\\circ}$ (which may imply a very low filling factor). \n\nOptical observations of PSR~J1048$+$2339 show significant long-term variability, with the optical maximum varying by up to a magnitude \\citep{Cho2018+VariableRBs,Yap2019+J1048}. Such variability cannot yet be taken into account by precise light-curve modelling, and so no measurement of the inclination angle from optical modelling exists in the literature to date. When modelling their optical observations of this pulsar, \\citet{Yap2019+J1048} fixed the inclination to $i=76^{\\circ}$, the maximum value that was compatible with the lack of an observed X-ray eclipse. However, while a thermal X-ray component from the neutron star surface would indeed be eclipsed at higher inclinations, X-ray emission in RBs tends to be dominated by emission from an extended intra-binary shock, and so the lack of an X-ray eclipse does not necessarily preclude a higher inclination. \\citet{Yap2019+J1048} find that $f_{\\rm RL} \\approx 0.85$ is compatible with multiple light curves despite long-term variability. From optical spectroscopy, \\citet{MiravalZanon2021+J1048} find an observed companion radial velocity amplitude of $343.3\\pm4.4$km~s$^{-1}$. Using these values, we find that the observed eclipse duration requires an inclination greater than $80.9^{\\circ}$ (c.f. $80.4^{\\circ}$ assuming $f_{\\rm RL} = 1$ in Table~\\ref{t:eclipses}). Heating corrections reduce the estimated centre-of-mass velocity to $298.7\n\\pm7.7$\\,km\\,s$^{-1}$, for a larger mass ratio (and hence lower minimum inclination of $i=80.1^{\\circ}$) but a lower pulsar mass, $M_{\\rm psr}\\approx1.1M_{\\odot}$. We use the uncorrected value of $K_{\\rm c}$ in Table~\\ref{fig:eclipses} to obtain a conservative bound on the pulsar mass. 
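Since $M_{\\rm psr} \\propto K_{\\rm c}^3$ at fixed orbital period, mass ratio and inclination, the roughly 15\\% difference between the two velocity amplitudes corresponds to approximately a factor $(343.3\/298.7)^3 \\approx 1.5$ in the inferred pulsar mass, which is why adopting the uncorrected $K_{\\rm c}$ yields the more conservative (higher) upper bound. 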
As noted in the main text, eclipses longer than 8\\% of an orbit are consistent with the data, but would require the companion star to be significantly overflowing its Roche lobe.\n\nPSR~J1555$-$2908 is a BW pulsar that was recently discovered by \\citet{Ray2022+J1555} in a targeted radio search of a steep-spectrum radio continuum source identified within a pulsar-like gamma-ray source by \\citet{Frail2018+SteepSpectrumCands}. Modelling of both optical photometry and spectroscopy by \\citet{Kennedy2022+J1555}, using a model that includes the possibility of heat diffusion across the terminator, revealed the companion's projected radial velocity to be $397 \\pm 2$\\,km\/s, and indicated a high binary inclination of $i > 75^{\\circ}$, giving a maximum pulsar mass of $1.82\\,M_{\\odot}$. The Roche-lobe filling factor is found to be high $f_{\\rm RL} > 0.93$. The duration of the gamma-ray eclipse observed here requires an inclination $83.0^{\\circ} < i < 86.2^{\\circ}$, for a pulsar mass $1.58\\,M_{\\odot} < M_{\\rm psr} < 1.71\\,M_{\\odot}$ with the uncertainty dominated by that of the companion's radial velocity.\n\nOptical spectroscopy of PSR~J1816$+$4510 has been modelled by \\citet{Kaplan2013+J1816}. They find that this system is perhaps more similar to a white-dwarf companion than a normal RB, owing to its extremely high temperature, but due to the presence of radio eclipses that are not otherwise seen in pulsar--white-dwarf binaries we categorise it here as the latter. Detailed modelling of optical photometry to determine the inclination or Roche-lobe filling factor has not been performed, but from spectroscopic models \\citet{Kaplan2013+J1816} estimated a radius that corresponds to $f_{\\rm RL}\\sim 0.5$, which is much smaller than observed in other RBs, motivating our use of this value as a low estimate for $f_{\\rm RL}$ in Table~\\ref{t:non_eclipses}. Adopting this value instead of $f_{\\rm RL} = 1$ results in a higher minimum inclination of $82.6^{\\circ}$ and a lower pulsar mass range of $1.68\\,M_{\\odot} < M_{\\rm psr} < 2.11\\,M_{\\odot}$\n\nFor PSR~J2129$-$0429, \\citet{Bellm2016+J2129} measured a projected companion radial velocity amplitude of $K_2 = 250 \\pm 4$\\,km\\,s$^{-1}$ (for mass ratio $q=3.93\\pm0.06$), and inferred a filling factor of $f_{\\rm RL} = 0.82\\pm0.03$ and an inclination $i > 68^\\circ$ from optical light curve modelling. With these values of $q$ and $f_{\\rm RL}$, the duration of the gamma-ray eclipse requires an inclination between $76.6^{\\circ} < i < 78.3^{\\circ}$, consistent with the range allowed by optical modelling. This corresponds to a pulsar mass range of $1.61\\,M_{\\odot} < M_{\\rm psr} < 1.88\\,M_{\\odot}$ at 95\\% confidence. \\citet{AlNoori2018+J2129} also found dips in the \\textit{XMM-Newton} light curve for PSR~J2129$-$0429, consistent with a thermal X-ray component from the neutron star surface being eclipsed by the companion. \n\n\\subsection*{Searching for eclipses in recently-discovered redbacks and candidates}\nSeven \\textit{Fermi}-LAT sources have been found to contain periodic optical and X-ray sources that are almost certainly spider binary systems \\citep{Strader2014+J0523,Li2016+J0212,Linares2017+J0212,Halpern2017+J0838,Li2018+J0954,Swihart2020+J2333, Swihart2021+J0940, Li2021+J0336}. 
Shortly before submitting this paper, millisecond radio pulsations were detected from three of these objects (4FGL~J0838.7$-$2827, 4FGL~J0955.3$-$3949 and 4FGL~J2333.1$-$5527, hereafter PSRs~J0838$-$2827, J0955$-$3949 and J2333$-$5526, respectively) by the TRAPUM collaboration (\\url{http:\/\/trapum.org\/discoveries.html}), but a full timing solution is not yet available for them, and gamma-ray pulsations have not yet been detected. Pulsations have not yet been detected at any wavelength from the remaining four of these systems. A further four similar systems \\citep{Pletsch2012+J1311,Nieder2020+J1653,Ray2020+J2339,Clark2021+J2039} were initially identified in the same way, but have since been confirmed as spiders through radio or gamma-ray pulsation discoveries, and hence are already included in our search. \n\nTo search for gamma-ray eclipses in these systems, we prepared \\textit{Fermi}-LAT data sets in the same way as for the confirmed spider cases, but included 12.4 years of data. Since these data sets are not bound to the validity period of a pulsar timing ephemeris, we included this extra year of data to allow for stronger detections to partially mitigate the large trials factor (see below). For these data sets we used \\texttt{gtsrcprob} to compute the photon weights, rather than optimising these to maximise the pulsation significance (since this optimisation is not possible here without a gamma-ray pulsation detection), and used the prior probability weights, since posterior weights are unavailable in the absence of pulsations.\n\nUnlike the majority of systems studied here, where the pulsar's timing ephemeris provides precise orbital period and phase measurements over the \\textit{Fermi}-LAT data set, for these systems we only have imprecise orbital phase information from optical light curves and radial velocity curves. We therefore had to additionally search small ranges of orbital phases (parameterised by each pulsar's ascending node epoch, $T_{\\rm asc}$ and orbital period, $P_{\\rm orb}$). We chose the search ranges to be $\\pm 3\\times$ the published uncertainty on these parameters. Our step sizes in each parameter were chosen such that the maximum offset on the orbital phase for each photon would be $0.001$ orbits. \n\nThis searching introduces yet more trials in our search, reducing our sensitivity. For each of these sources, we again calibrated our search significances via Monte-Carlo simulations. We took the observed set of photon phases, assigned randomly-generated arrival times evenly distributed throughout the \\textit{Fermi} mission interval, performed the search over orbital period and phase as above, and took the maximum resulting $\\delta\\log\\mathcal{L}$, iterating 1000 times, or 10000 times if the resulting false-alarm probability was low.\n\nSensitivity is greatly reduced in these searches, with a false-alarm probability of $P_{\\rm FA} = 0.01$ corresponding to $\\delta\\log\\mathcal{L}\\approx9$, as opposed to $\\delta\\log\\mathcal{L}\\approx4.5$ when the pulsar's precise orbital ephemeris is known. Nevertheless, there is evidence of eclipses in two systems, PSRs~J0838$-$2827 and J2333$-$5526, with $\\delta\\log\\mathcal{L}=10.8$ and $\\delta\\log\\mathcal{L}=11.3$, corresponding to trials-corrected false-alarm probabilities of $3\\times10^{-3}$ and $1\\times10^{-3}$, respectively. 
We show the $\\delta\\log\\mathcal{L}$ over the searched parameter space for these two systems and the corresponding Monte Carlo calibrations in Figure~\\ref{fig:candidates}.\n\nFor both PSR~J0838$-$2827 and PSR~J2333$-$5526, although significant detections are made within the searched parameter space, the eclipse likelihoods are still high towards the borders of the searched regions. We therefore searched outside this region and found higher $\\delta\\log\\mathcal{L}$ values ($\\delta\\log\\mathcal{L}=11.8$ and $\\delta\\log\\mathcal{L}=14.9$, respectively), at orbital phases that are $\\Delta T_{\\rm asc} \\approx 300$\\,s and $\\Delta T_{\\rm asc} \\approx 620$\\,s later than that predicted by the ephemerides obtained from fitting the companions' radial velocity curves, corresponding to $\\approx 4\\sigma$ and $\\approx 11\\sigma$ deviations, respectively. Such offsets may be caused by the irradiation of the companion star by the pulsar, which can cause the radial velocity curve to depart slightly from a simple sinusoid, although \\citet{Swihart2020+J2333} did not find any evidence for this effect in their modelling of PSR~J2333$-$5526. The recent detection of radio pulsations from these pulsars will likely clarify these tensions by providing precise orbital ephemerides.\n\n\\subsection*{Expected number of eclipsing spiders}\nAlthough our population is small, we can also use the number of detected eclipses to probe whether or not the population of \\textit{Fermi}-LAT detected spiders are viewed from randomly-distributed inclination angles. This is not necessarily expected; as many of these sources have been discovered by targeting \\textit{Fermi}-LAT sources, the population may be biased towards those that are bright gamma-ray emitters, and gamma-ray emission models predict that MSPs are brightest around their rotational equator, which should in turn be aligned with the orbital plane during recycling. This would manifest in our population as a greater than expected number of eclipsing pulsars. Alternatively, if we observe a smaller number of eclipses than expected, this would be evidence that the companion stars in these systems tend to fill only a small fraction of their Roche lobes. \n\nUnder the assumption of randomly distributed orbital axes, the binary inclination angles will be drawn from a probability distribution $p(i) = \\sin(i)$, which we adopt as a prior. We restrict $i$ to $i \\leq 90^{\\circ}$, since systems at inclination $i$ are indistinguishable from those at $180^{\\circ} - i$ as the orbital direction cannot be determined. The prior probability of a pulsar being eclipsed is therefore the integral of this prior over inclination angles greater than the minimum inclination at which a pulsar would be eclipsed, $i_{\\rm ecl}$, i.e. $P(i > i_{\\rm ecl}) = \\cos(i_{\\rm ecl})$. Assuming Roche-lobe filling companions, for typical BW and RB mass ratios ($q \\equiv M_{\\rm psr}\/M_{\\rm c}$) of $q=70$ and $q=5$, respectively, this gives a prior probability for a BW or RB being eclipsed of 10\\% or 23\\%, respectively. The probability of observing a certain number of eclipses from the studied population follows a binomial distribution with these success factors. From the 28 BWs and 16 RBs in our sample (including the candidates discussed in the previous section) that are bright enough for us to significantly detect or rule out an eclipse, we find 2 eclipsing BWs and 5 eclipsing RBs. 
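Concretely, the probability of observing exactly $k$ eclipsing systems among $n$ is given by the binomial term $\\binom{n}{k}\\,p_{\\rm ecl}^{\\,k}\\,(1-p_{\\rm ecl})^{n-k}$, with $p_{\\rm ecl}$ the prior eclipse probability quoted above. 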
The binomial probabilities for these samples are 24\\% and 16\\% respectively, entirely consistent with our assumptions of randomly distributed inclinations and Roche-lobe filling companions, and indeed seven eclipses is the second most likely number of eclipses to observe from the combined population. \n\n\\section*{Data Availability}\nThe \\textit{Fermi}-LAT data are available from the Fermi Science Support Center \\url{http:\/\/fermi.gsfc.nasa.gov\/ssc}. Ephemerides and folded \\textit{Fermi}-LAT data sets including prior and posterior photon weights for systems with detected eclipses are available on Zenodo \\url{https:\/\/doi.org\/10.5281\/zenodo.7133502}. Ephemerides and folded data sets for other pulsars included in this study may contain unpublished information about unrelated scientific results; these are available from the authors upon request.\n\n\\section*{Code Availability}\nThe Fermitools, including \\texttt{gtsrcprob}, used for analysing \\textit{Fermi}-LAT data, are available from \\url{https:\/\/fermi.gsfc.nasa.gov\/ssc\/data\/analysis\/software\/}. \\texttt{TEMPO2} \\citep{Edwards2006+Tempo2}, used for computing rotational and orbital photon phases is available at \\url{https:\/\/bitbucket.org\/psrsoft\/tempo2}. \\texttt{Icarus} \\citep{Icarus}, used for computing expected eclipse durations to derive pulsar mass estimates, is available from \\url{https:\/\/github.com\/bretonr\/Icarus}. \\texttt{godot} \\citep{Kerr2019+godot}, used for computing \\textit{Fermi}-LAT exposure, is available from \\url{https:\/\/github.com\/kerrm\/godot}. \\texttt{PINT} \\citep{PINT}, used for evaluating template pulse profiles to derive posterior weights, is available from \\url{https:\/\/github.com\/nanograv\/PINT}. The scripts used to perform the eclipse searches and false-alarm calibrations are available on Zenodo \\url{https:\/\/doi.org\/10.5281\/zenodo.7133502}. \n\n\\section*{Acknowledgements}\nC.~J.~C. would like to thank Bruce Allen for useful discussions that led to the use of posterior weights that increased the significances of the detected eclipses. We would like to thank Seth Digel, Tyrel Johnson, Melissa Pesce-Rollins, David Thompson and Zorawar Wadiasingh for carefully reviewing the manuscript on behalf of the \\textit{Fermi}-LAT collaboration.\n\nC.~J.~C., R.~P.~B, M.~R.~K., D.~M.~S. and G.~V. acknowledge support from the ERC under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 715051; Spiders). This work was supported by the Max-Planck-Gesellschaft~(MPG). B.~B. acknowledges the support of the Department of Atomic Energy, Government of India, under project No. 12-R\\&D-TFR-5.02-0700. Support for H.~T.~C. was provided by NASA through the NASA Hubble Fellowship Program grant \\#HST-HF2-51453.001 awarded by the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., for NASA, under contract NAS5-26555. V.~S.~D. was supported by the STFC. M.~R.~K acknowledges support from the Irish Research Council in the form of a Government of Ireland Postdoctoral Fellowship (GOIPD\/2021\/670: Invisible Monsters). S.~M.~R. is a CIFAR Fellow and is supported by the NSF Physics Frontiers Center award 1430284. D.~M.~S. 
also acknowledges the Fondo Europeo de Desarrollo Regional (FEDER) and the Canary Islands government for the financial support received in the form of a grant with number PROID2020010104.\n\nPulsar research at Jodrell Bank Centre for Astrophysics and access to the Lovell telescope is\nsupported by a consolidated grant from the UK Science and Technology Facilities Council (STFC). Work at the Naval Research Laboratory was supported by the NASA Fermi program. The MeerKAT telescope is operated by the South African Radio Astronomy Observatory, which is a facility\nof the National Research Foundation, an agency of the Department of Science and Innovation. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. \n\nThe Fermi LAT Collaboration acknowledges generous ongoing support from a number of agencies and institutes that have supported both the development and the operation of the LAT as well as scientific data analysis. These include the National Aeronautics and Space Administration and the Department of Energy in the United States, the Commissariat \\`{a} l'Energie Atomique and the Centre National de la Recherche Scientifique \/ Institut National de Physique Nucl\\'{e}aire et de Physique des Particules in France, the Agenzia Spaziale Italiana and the Istituto Nazionale di Fisica Nucleare in Italy, the Ministry of Education, Culture, Sports, Science and Technology (MEXT), High Energy Accelerator Research Organization (KEK) and Japan Aerospace Exploration Agency (JAXA) in Japan, and the K. A. Wallenberg Foundation, the Swedish Research Council and the Swedish National Space Board in Sweden. \n\nAdditional support for science analysis during the operations phase is gratefully acknowledged from the Istituto Nazionale di Astrofisica in Italy and the Centre National d'Etudes Spatiales in France. This work performed in part under DOE Contract DE-AC02-76SF00515.\n\nOur analysis made extensive use of the \\texttt{numpy} \\citep{numpy}, \\texttt{scipy} \\citep{scipy}, \\texttt{matplotlib} \\citep{matplotlib}, \\texttt{astropy} \\citep{astropy:2013,astropy:2018} and \\texttt{pycuda} \\citep{pycuda} python packages.\n\n\\section*{Author Contributions Statement}\nC.~J.~C. performed the eclipse search analyses, and wrote the manuscript. M.~Kerr assisted the analysis and exposure calculations. P.~B. and L.~G. produced the \\textit{Fermi}-LAT data sets and photon probability weights for each pulsar. R.~P.~B, V.~S.~D., M.~R.~K., D.~M.~S. and G.~V. contributed to the interpretation of optical modelling and discussion of gravitational lensing. M.~Kerr, B.~B., R.~P.~B, V.~S.~D., M.~R.~K., L.~N., M.~S.~E.~R. and D.~M.~S. reviewed the manuscript and provided comments. All remaining authors contributed pulsar timing ephemerides required to phase-fold the \\textit{Fermi}-LAT data.\n\n\\section*{Competing Interests Statement}\nThe authors declare no competing interests.\n\n\\clearpage\n\n\\begin{table*}\n \\centering\n \\scriptsize\n \\caption{Constraints for pulsars with detected eclipses. $\\theta^{\\rm min}$ and $\\theta^{\\rm max}$ are the minimum and maximum eclipse durations (in orbits) at 95\\% confidence. 
$i^{\\rm min}$ is the limiting inclination at which the minimum eclipse duration would be reached for a Roche-lobe filling companion, assuming the 2$\\sigma$ lower limit on the companion radial velocity amplitude $K_{\\rm c}$, which is taken from the references listed in the final column, with 1$\\sigma$ uncertainties quoted. $M_{\\rm psr}$ and $M_{\\rm c}$ give the (conservative) range of pulsar and companion masses that are allowed by the eclipse detection. The minimum masses are found by assuming $i=90^{\\circ}$ and the 2$\\sigma$ lower limit on $K_{\\rm c}$, and require companions to substantially underfill their Roche lobes (with the exception of PSR~J1048$+$2339). The maximum masses are found by assuming the 2$\\sigma$ upper limit on $K_{\\rm c}$, and the minimum inclination required to produce the minimum eclipse duration with the conservative assumption of a Roche-lobe filling companion.}\n \\label{t:eclipses}\n \\begin{tabular}{lcccccccccc}\n \\hline\n Pulsar & Class & $\\delta\\log\\mathcal{L}$ & $P_{\\rm FA}$ & $\\theta^{\\rm min}$ & $\\theta^{\\rm max}$ & $K_{\\rm c}$ (km s$^{-1}$) & $i^{\\rm min}$ ($^{\\circ}$) & $M_{\\rm psr}$ ($M_{\\odot}$) & $M_{\\rm c}$ ($M_{\\odot}$) & Ref.\\\\\n \\hline\n B1957$+$20 & BW & $12.63$ & $2\\times10^{-6}$ & $0.007$ & $0.011$ & $353.0 \\pm 4.0$ & $84.1$ & $1.67$--$1.94$ & $0.025$--$0.027$ & \\citep{vanKerkwijk2011+B1957} \\\\\n J1048$+$2339 & RB & $13.28$ & $2\\times10^{-6}$ & $0.058$ & $0.120$ & $343.3 \\pm 4.4$ & $80.4$ & $1.44$--$1.72$ & $0.31$--$0.35$ & \\citep{MiravalZanon2021+J1048} \\\\\n J1555$-$2908 & BW & $10.07$ & $4\\times10^{-5}$ & $0.023$ & $0.040$ & $397.0 \\pm 2.0$ & $83.1$ & $1.58$--$1.71$ & $0.057$--$0.060$ & \\citep{Kennedy2022+J1555} \\\\\n J1816$+$4510 & RB & $17.32$ & $<1\\times10^{-7}$ & $0.014$ & $0.019$ & $343.0 \\pm 7.0$ & $79.0$ & $1.64$--$2.17$ & $0.18$--$0.22$ & \\citep{Kaplan2013+J1816} \\\\\n J2129$-$0429 & RB & $17.67$ & $<1\\times10^{-7}$ & $0.030$ & $0.036$ & $250.3 \\pm 4.3$ & $76.3$ & $1.48$--$1.93$ & $0.39$--$0.47$ & \\citep{Bellm2016+J2129} \\\\\n \\hline\n \\end{tabular}\n\\end{table*}\n\n\\begin{table*}\n \\centering\n \\scriptsize\n \\caption{Constraints for pulsars without detected eclipses. Parameters are the same as in Table~\\ref{t:eclipses}, but inclination upper limits $i^{\\rm max}$ and mass lower limits $M^{\\rm min}_{\\rm psr}$ and $M^{\\rm min}_{\\rm c}$ are computed for the maximum eclipse duration, and assuming the companion is 50\\% Roche-lobe filling, to obtain lower limits on the component masses. 
}\n \\label{t:non_eclipses}\n \\begin{tabular}{lccccccccc}\n \\hline\n Pulsar & Class & $\\delta\\log\\mathcal{L}$ & $\\theta^{\\rm max}$ & $K_{\\rm c}$ (km\/s) & $i^{\\max}$ ($^{\\circ}$) & $M^{\\min}_{\\rm psr}$ ($M_{\\odot}$) & $M^{\\min}_{\\rm c}$ ($M_{\\odot}$) & Ref.\\\\\n \\hline\n J0952$-$0607 & BW & $0.36$ & $0.010$ & $376.1 \\pm 5.1$ & $86.3$ & $1.40$ & $0.020$ & \\citep{Romani2022+J0952} \\\\\n J1023$+$0038 & RB & $0.00$ & $0.007$ & $295.0 \\pm 3.0$ & $81.9$ & $0.65$ & $0.09$ & \\citep{Stringer2021+tMSPs} \\\\\n J1227$-$4853 & RB & $0.00$ & $0.002$ & $294.4 \\pm 4.0$ & $81.1$ & $1.01$ & $0.18$ & \\citep{Stringer2021+tMSPs} \\\\\n J1301$+$0833 & BW & $0.31$ & $0.013$ & $284.9 \\pm 4.4$ & $85.7$ & $0.63$ & $0.014$ & \\citep{Romani2016+J1301} \\\\\n J1311$-$3430 & BW & $0.00$ & $0.001$ & $633.9 \\pm 5.3$ & $86.9$ & $1.66$ & $0.009$ & \\citep{Romani2015+J1311} \\\\\n J1431$-$4715 & RB & $2.54$ & $0.066$ & $278.0 \\pm 3.0$ & $90$ & $1.13$ & $0.11$ & \\citep{Strader2019+RBSpec} \\\\\n J1622$-$0315 & RB & $0.05$ & $0.009$ & $423.0 \\pm 8.0$ & $83.4$ & $1.33$ & $0.10$ & \\citep{Strader2019+RBSpec} \\\\\n J1628$-$3205 & RB & $2.72$ & $0.009$ & $358.0 \\pm 10.0$ & $82.2$ & $1.09$ & $0.14$ & \\citep{Strader2019+RBSpec} \\\\\n J1653$-$0158 & BW & $0.74$ & $0.000$ & $700.2 \\pm 7.9$ & $86.7$ & $1.76$ & $0.012$ & \\citep{Nieder2020+J1653} \\\\\n J1810$+$1744 & BW & $0.00$ & $0.001$ & $462.3 \\pm 2.2$ & $84.7$ & $1.59$ & $0.049$ & \\citep{Romani2021+J1810} \\\\\n J2039$-$5617 & RB & $0.40$ & $0.001$ & $327.2 \\pm 5.0$ & $81.7$ & $1.02$ & $0.14$ & \\citep{Strader2019+RBSpec,Clark2021+J2039} \\\\\n J2215$+$5135 & RB & $0.00$ & $0.001$ & $412.3 \\pm 5.0$ & $81.6$ & $1.58$ & $0.23$ & \\citep{Linares2018+J2215} \\\\\n J2339$-$0533 & RB & $0.00$ & $0.000$ & $377.6 \\pm 17.7$ & $81.1$ & $1.21$ & $0.24$ & \\citep{Romani2011+J2339} \\\\\n \\hline\n \\end{tabular}\n\\end{table*}\n\\clearpage\n\n\\begin{figure}\n \\centering\n\t\\includegraphics[width=\\columnwidth]{fig1}\n \\caption{Gamma-ray orbital light curves of seven eclipsing spider pulsars. The red dashed line shows the estimated background level. Phase zero corresponds to the pulsar's ascending node. The phase of the pulsar's superior conjunction, where eclipses would be expected to occur, has been placed at the centre of a phase bin, and is shown at the centre of the plot for emphasis. Bin widths have been chosen to be close to the best-fitting eclipse duration. Bin heights show the sum of the photon weights in each orbital phase bin, and error bars show the corresponding 1$\\sigma$ Poisson uncertainties.}\n \\label{fig:eclipses}\n\\end{figure}\n\n\\begin{figure}\n\\centering\n\t\\includegraphics[width=0.75\\columnwidth]{fig2}\n \\caption{Results of Monte-Carlo simulations used to calibrate eclipse false-alarm probabilities. Vertical lines show the measured log-likelihood values, maximised over eclipse widths, for each pulsar. Those for pulsars with significant eclipses are marked in colour. The coloured curves show the false-alarm probability from simulations using the distribution of photon weights from each of the five eclipsing pulsars. Horizontal dashed lines show the corresponding false-alarm probability according to the Monte-Carlo calibration. The dotted and solid black curves show the empirical survival function (i.e. the fraction of pulsars which survive a given log-likelihood threshold) for the real population of spiders studied here, before and after removing the five pulsars with significant eclipses, respectively. 
The diagonal dashed line is an extrapolation of the fit to the simulated false-alarm probability curves used to estimate the false-alarm probabilities for the most significant eclipses.}\n \\label{f:MonteCarlo}\n\\end{figure}\n\n\\begin{figure*}\n \\includegraphics[width=\\textwidth]{fig3}\n\\caption{Search results for the two candidate RB systems in which eclipses are detected. Left panels: Eclipse log-likelihoods as a function of the orbital parameters, maximised over eclipse durations. The blue crosshairs denote the $2\\sigma$ ranges around the orbital ephemeris from optical observations (see references in Supplementary Material). Contour lines are drawn at log-likelihoods corresponding to false-alarm probabilities of $5\\%$ (yellow) and $1\\%$ (red). These levels are also marked on the colour bar, along with the log-likelihood corresponding to a false-alarm probability of $0.1\\%$ (dark red), although this level is never reached. The position of the maximum likelihood is marked by a cyan cross, and the corresponding log-likelihood value is also marked in cyan on the colour bar. Right panels: the results of the Monte-Carlo simulations used to calibrate these false-alarm probabilities. The $5\\%$,$1\\%$ and $0.1\\%$ levels are marked in the same colours used in the left panels, with the maximum log-likelihood value found in the search and corresponding false-alarm probability marked in cyan. Top panels are for PSR~J0838$-$2827, lower panels for PSR~J2333$-$5526. In all other candidate systems, the false-alarm probability for the maximum log-likelihood value found was greater than $40\\%$. } \n\\label{fig:candidates}\n\\end{figure*}\n\n\\begin{figure}\n \\centering\n\t \\includegraphics[width=0.75\\columnwidth]{fig4}\n \\caption{Neutron star mass constraints for gamma-ray detected spider MSPs, using the constraints obtained from the detection or exclusion of gamma-ray eclipses from this work. The five pulsars with detected eclipses are highlighted in bold. The two additional eclipsing systems, PSRs~J0838$-$2827 and J2333$-$5526, are excluded from this plot, as their mass ratios are not yet known from pulsar timing, and so their masses cannot yet be estimated. The colour of each point indicates the sub-class of spider system: black widows are shown in black, redbacks shown in red. For pulsars with no detected eclipses, we show lower limits on the pulsar mass, indicated by arrows with arbitrary length. }\n \\label{f:mass_list}\n\\end{figure}\n\n\\clearpage\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nScintillating fiber trackers constitute an interesting alternative to\ngaseous tracking detectors. Impressive progress has been made in the last\nyears in both the quality of plastic fibers and the optical readout \\cite\n{leu95}\\cite{dam96}\\cite{adam95}. Fiber detectors avoid all the problems\nrelated to HV connections, sparks, electronic noise and pick-up in wire\nchambers and MSGC detectors. In view of these advantages the Hera-B \\cite\n{herab} collaboration investigated the possibility to build an inner tracker\nof scintillating fibers as an alternative to a MSGC system. The fiber\nsolution was finally abandoned due to the higher cost for a 150000 channel\nsystem and expected radiation damage. Nevertheless the results of the study\nare of possible interest for detectors operating under different\nexperimental conditions.\n\nSensitive light detection is most efficiently achieved with\nphotomultipliers. 
To reduce cost and space one has to use multichannel\nsystems. Interesting alternative solutions provide multichannel PMs (MCPM)%\n\\cite{leu95}\\cite{dam94} built in standard technique, hybrid PMs (HPMT)\\cite\n{dam94a}\\ and visible light counters (VLPC)\\cite{adam95}. The latter provide\nby far the best photoefficiency of up to 80 \\%, but need cooling to below\n10K. HPMTs allow for a finer spatial segmentation than MCPMs but require\nhigher voltages and low noise amplification. In this article we present\nmeasurements with a readout system consisting of fibers coupled to small\nsize 16 and 64 channel photomultipliers \\cite{yos97}.\n\nIn the following sections we describe the setup, the simulation of the\nphoton propagation and, finally, the measurements which concentrated on the\ninvestigation of the influence of magnetic fields and the cross-talk between\nchannels.\n\n\\section{The setup}\n\n\\FRAME{ftbpFU}{3.5258in}{2.7562in}{0pt}{\\Qcb{Setup to measure the optical\ncross-talk. }}{}{aufbau.ps}{\\special{language \"Scientific Word\";type\n\"GRAPHIC\";display \"PICT\";valid_file \"F\";width 3.5258in;height 2.7562in;depth\n0pt;original-width 568.9375pt;original-height 818.8125pt;cropleft\n\"0.0436875\";croptop \"0.7954013\";cropright \"0.9663484\";cropbottom\n\"0.2158048\";filename 'C:\/PMPAPER\/Aufbau.ps';file-properties \"XNPEU\";}}We\ninvestigated multichannel photomultiplier tubes\\footnote{%\nHAMAMATSU Photonics K.K., Japan} \\cite{yos97} with 16 (M16) and 64 channels\n(M64) on a square sensitive area of $18\\times 18mm^2$. The tubes have\nbialkali photocathodes and metal channel dynode structures of front pad\nsizes of $4\\times 4mm^2$ (M16) and $2\\times 2mm^2$ (M64). The gaps between\nthe dynodes of the matrix next to the window are $0.5mm$ (M16) and $0.3mm$\n(M64) respectively.\n\nTwo different setups were used to measure the gain uniformity of the PM\ntubes and the optical cross-talk between adjacent channels. For the\nuniformity measurement a LED was operated at a distance of $30cm$ in front\nof the PM tube. The experimental arrangement for the cross-talk measurement\n(see Fig.1) consisted of a fiber guide mounted on a x-y-z-table which was\nmoved under computer control across the PM window. The fiber was pressed by\na spring onto the PM window.\n\nIn order to investigate the PM performance in magnetic fields, the PM was\nplaced between two Helmholtz coils providing fields of up to $130$ Gauss.\n\nFor the uniformity measurement the light was produced by standard green or\nblue LEDs. Short light pulses of the LED were generated by a special high\ncurrent control circuit with adjustable pulse width. The optical cross-talk\nmeasurements were performed with scintillation light produced by $\\beta ^{-}$\nrays from a Sr$^{90}$-source in scintillating fibers.\n\n\\FRAME{ftbpFU}{2.0124in}{3.8164in}{0pt}{\\Qcb{Active high voltage divider\nchain.}}{}{schalt.eps}{\\special{language \"Scientific Word\";type\n\"GRAPHIC\";maintain-aspect-ratio TRUE;display \"USEDEF\";valid_file \"F\";width\n2.0124in;height 3.8164in;depth 0pt;original-width 305.5625pt;original-height\n583.125pt;cropleft \"0\";croptop \"1\";cropright \"1\";cropbottom \"0\";filename\n'C:\/pmpaper\/schalt.eps';file-properties \"XNPEU\";}} The anode signals of the\nPM channels were amplified by standard NIM linear amplifiers with a fixed\ngain factor of $10$. They were digitized by CAMAC ADCs coupled to a PC.\n\nIn the Hera-B experiment rates of $200kHz$ per channel are expected with\naverage signal of 20 photoelectrons. 
The total photocurrent reaches $0.1mA$\nfor the 64 channel tube. To reduce the bleeder current an active voltage\ndivider circuit for the dynodes (Fig. 2) was developed. Compared to a\npassive chain the current and correspondingly the number of HV power\nsupplies and the heat dissipation can be reduced by a factor of five.\n\n\\section{Simulation of photon propagation}\n\nPhotons produced in a scintillating fiber are usually transferred to the\nphotosensitive device via clear plastic fibers. The coupling of the fibers,\nreflection losses and the connection of the clear fiber to the PM affect the\nphotoelectron yield and their lateral distribution. We have simulated these\nprocesses. For the numerical estimates we used refraction indices for the\nfiber ($n_f$), its gladding ($n_{cl}$), the PM window ($n_w$) and the\nphotocathode ($n_{ca}$) of $n_f=1.60$, $n_{cl}=1.42$, $n_w=1.53$ and $%\nn_{ca}=3.4$. \\FRAME{ftbpFU}{3.525in}{2.6852in}{0pt}{\\Qcb{Trapping efficiency\nacross a scintillating fiber. Spiraling photons are lost when the\nscintillating fiber is coupled at the center of a clear fiber.}}{}{fibef.ps}{%\n\\special{language \"Scientific Word\";type \"GRAPHIC\";display \"PICT\";valid_file\n\"F\";width 3.525in;height 2.6852in;depth 0pt;original-width\n568.9375pt;original-height 818.8125pt;cropleft \"0.0223125\";croptop\n\"0.7359812\";cropright \"1.0061639\";cropbottom \"0.2788293\";filename\n'C:\/pmpaper\/FIBEF.ps';file-properties \"XNPEU\";}}\n\nIn the following we assume that the cross section of the fibers is circular.\nThe trapping probability of the photons then depends on the distance of the\nproduction point from the surface. The trapping efficiency across the fiber\nis shown in Figure 3. The high efficiency for photons produced near the\nsurface is due to the trapping of photons with rather large angles with\nrespect to the fiber axis which then spiral along the surface in azimutal\ndirection. The average trapping efficiency is $t=1-(n_{cl}\/n_f)^2=0.213$.\nWhen the scintillating fiber is coupled to a clear fiber a large fraction of\nthe spiraling photons is lost, the exact amount depending on the ratio of\nthe diameters and the relative lateral positions of the axises. In the limit\nwhere photons are produced uniformly in the scintillating fiber, where the\nscintillating fiber is coupled axially at the center of the clear fiber and\nwhere the radius of the scintillating fiber is negligible compared to the\nclear fiber radius the trapping efficiency is reduced by a factor of about\ntwo: $t=1-n_{cl}\/n_f=0.113$.\n\nSpiraling photons suffer from a high number of reflections at the fiber\nsurface and thus have a higher probability to be lost than photons crossing\nthe fiber axis. The losses depend on the quality of the surface and on the\ncircular symmetry of the fiber. Often the cross section of so-called round\nfibers is in reality slightly elliptical and varying along the fiber. These\nlosses which are difficult to estimate have also a positive aspect: They\nreduce the light divergence at the exit of the fiber.\n\nFor an efficient detection of minimum ionization particles in scintillating\nfibers with PMs of about $10\\%$ conversion efficiency a minimum scintillator\nthickness of about 1 to 2 mm is required, the exact value depending on the\nquality of the fiber and many other parameters related to the light\ndetection system. Usually several thin scintillating fibers have to be\ncoupled to one clear fiber to obtain an efficient and precise tracking. 
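The average trapping efficiencies quoted above follow directly from the refraction indices; a short numerical check (a sketch, separate from the full simulation) is:
\\begin{verbatim}
# Average trapping efficiencies for n_f = 1.60 and n_cl = 1.42.
n_f, n_cl = 1.60, 1.42
t_all     = 1 - (n_cl / n_f)**2   # ~0.213, all trapped photons (incl. spiraling ones)
t_coupled = 1 - n_cl / n_f        # ~0.113, spiraling photons lost at the coupling
print(t_all, t_coupled)
\\end{verbatim}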
The\nminimum radius of the clear fiber which one chooses as small as possible is\ndetermined by the number and the cross section of the scintillating fibers\nto be coupled. On the other hand the maximum allowable cross section depends\non the size of the PM channels, the window thickness and the light\ndivergence at the fiber exit.\\FRAME{ftbpFU}{3.6685in}{2.5538in}{0pt}{\\Qcb{%\nPhoton loss due to non-parallel coupling of the fibers. The solid curve\nrepresents the worst case, where the scintillating fiber is coupled at the\ncenter of a clear fiber. The upper curves correspond to fibers coupled at\nthe periphery.}}{}{tilt.ps}{\\special{language \"Scientific Word\";type\n\"GRAPHIC\";display \"PICT\";valid_file \"F\";width 3.6685in;height 2.5538in;depth\n0pt;original-width 568.9375pt;original-height 818.8125pt;cropleft\n\"0.0221525\";croptop \"0.5326729\";cropright \"0.6886484\";cropbottom\n\"0.1806249\";filename 'C:\/pmpaper\/TILT.ps';file-properties \"XNPEU\";}}\n\nFigure 4 shows the effect of a tilt of the two axises in the coupling of the\nfibers. Losses start to become important only for rather large tilt angles.\n\nThe cross-talk to neighbouring PM channels is illustrated in Figure 5. The\ncurves correspond to the fraction of photoelectrons collected by neigbouring\nchannels surrounding one of the central pads. (Of course this fraction is\nlower for channels at the borders or corners of the PM.) The photons are\nproduced in a bundle of seven scintillating fibers ($0.5mm$ diameter)\ncoupled to one clear fiber of $3m$ length. The cross-talk becomes disturbing\nwhen the fiber diameter approaches the size of the square channel pads and\nis especially important for the standard window of $1.3mm$ thickness. For a\nsignal of in average 10 photoelectrons and single photoelectron detection%\n\\FRAME{ftbpFU}{4.4278in}{3.5224in}{0pt}{\\Qcb{Simulation of optical\ncross-talk for 64 channel PM as a function of the fiber radius.}}{}{%\nctsimula.ps}{\\special{language \"Scientific Word\";type \"GRAPHIC\";display\n\"USEDEF\";valid_file \"F\";width 4.4278in;height 3.5224in;depth\n0pt;original-width 568.9375pt;original-height 818.8125pt;cropleft\n\"0.0183592\";croptop \"0.764805\";cropright \"0.86705\";cropbottom\n\"0.2969302\";filename 'C:\/pmpaper\/CTSIMULA.ps';file-properties \"XNPEU\";}} a\nchannel multiplicity between $1.6$ and $2.2$ for a $1.5mm$ diameter fiber is\nexpected.\n\nThe large cross-talk for small radia and negligible reflection loss is due\nto the increase of the number of spiraling photons when the coupled\nscintillating fibers cover the full cross section of the clear fiber. If\nsufficient light is available, the cross-talk can be reduced drastically by\nsetting the threshold above the one photoelectron signal.\n\n\\section{Measurements and results}\n\nWe have tested six M16 tubes and three 64 channel prototypes with window\nthicknesses of $1.3mm$, $1.0mm$ and $0.8mm$.\n\n\\subsection{Signals, amplification}\n\nThe photomultiplier tubes have a fast time response. For the 16 channel\ndevice the pulse width for a single photoelectron is of the order of $1$ to $%\n2ns$, the transit time spread is $0.3ns$ and the gain at the nominal voltage\nof $800V$ is $3\\cdot 10^6$ $^2$. The numbers for the 64 channel tube are\nvery similar except for a lower gain of $3\\cdot 10^5$ $\\footnote{%\nHAMAMATSU Product specifications.}.$\n\nFor optical fiber read-out in fiber trackers the photomultipliers have to\nwork with very low light levels. 
Efficient detection of single\nphotoelectrons is therefore required. Figure 6 shows pulse height\ndistributions for single photoelectrons measured at different high voltage\nsettings for the M16 type. At voltages above $900V$ the single photoelectron\nsignal is clearly separated from noise with a signal to noise ratio larger\nthan ten$.$ The M64 type has a smaller gain than the 16 channel version but\na detection of single photoelectrons is also possible with an adapted\nreadout system.\\FRAME{ftbpFU}{4.28in}{3.5155in}{0pt}{\\Qcb{Single\nphotoelectron pulse height distribution for 16 channel PM for different HV\nsettings.}}{}{hv6.ps}{\\special{language \"Scientific Word\";type\n\"GRAPHIC\";display \"PICT\";valid_file \"F\";width 4.28in;height 3.5155in;depth\n0pt;original-width 568.9375pt;original-height 818.8125pt;cropleft\n\"0.063629\";croptop \"0.900244\";cropright \"0.841535\";cropbottom\n\"0.435104\";filename 'C:\/pmpaper\/HV6.ps';file-properties \"XNPEU\";}}\n\n\\subsection{Uniformity}\n\nThe channel to channel gain variations of the M16 and M64 photomultipliers\nwere measured using a LED at a distance of $30cm$ from the PM which\nilluminated the whole photocathode. This configuration generated single\nphotoelectron signals. All channels were read-out in parallel. The relative\ngain was calculated from the single photoelectron peak position in the pulse\nheight distribution. The maximum gain variation between different channels\nof the same tube is less than a factor of two for the M16 PM's , for M64\nPM's the variation is larger reaching a factor of four. The gain uniformity\nwas measured for 6 PM's of the M16 type and the first two available\nprototypes of the M64 series. Figure 7 shows for both cases a typical gain\nhistogram.\n\nThe mean gain variation of the six tested photomultipliers of the M16 type\nis $25\\%$.\\FRAME{ftbpFU}{4.0577in}{5.9845in}{0pt}{\\Qcb{Uniformity of the\nresponse of the channels for the 64 and the 16 channel PMs.}}{}{hom64_16.ps}{%\n\\special{language \"Scientific Word\";type \"GRAPHIC\";maintain-aspect-ratio\nTRUE;display \"PICT\";valid_file \"F\";width 4.0577in;height 5.9845in;depth\n0pt;original-width 568.9375pt;original-height 818.8125pt;cropleft\n\"0.006314\";croptop \"1.032848\";cropright \"1.04174\";cropbottom\n\"-0.006119\";filename 'C:\/pmpaper\/HOM64_16.ps';file-properties \"XNPEU\";}}\n\n\\subsection{Position sensitivity and cross-talk}\n\nWe have measured the position sensitivity of the PMs between the individual\nchannels by scanning across three pads with a thin fiber of $50\\mu m$\ndiameter, where light was injected by a LED parallel to the axis. The\nresponse is rather uniform except for the gap region where the efficiency is\nreduced to an average of 60\\%. Thus the effective inefficient region is $%\n0.2mm$ for the M16 and $0.1mm$ for the M64 tube.\n\nIn fiber tracker applications cross-talk between adjacent channels increases\nthe apparent occupancy and produces fake signals. There are two possible\nsources for cross-talk: optical and electrical coupling.\n\nElectrical cross-talk between adjacent dynode structures might occur during\nthe electron multiplication processes. Up to the level of about ten\nphotoelectrons electrical cross-talk was not observed in our measurements\nindicating that for applications with low light levels the electrical\ncross-talk is negligible. 
This is explained by path length variations and\nfluctuations in the light emission time, coincidences between several\nphotoelectrons within the short transit time are unlikely in most cases.\n\nThe optical cross-talk was studied in a similar way as the sensitivity by\nmoving a fiber of diameters of $1.5mm$ in steps of 100 $\\mu m$ across three\ndynode pads. However, the light was produced by irradiating a fiber bundle\nof seven scintillating fibers with a Sr$^{90}$-source to achieve a more\nrealistic light distribution. The fiber bundle was coupled to a $2m$ long\nclear fiber which was connected to the PM. When the fiber is centered at a\npad the neigbouring channel receives a fraction of $0.023$ of the light\n(Fig. 8). Thus for four adjacent channels and a ten photoelectron average\nsignal the channel multiplicity will be about two for a single\nphotoelectrons threshold. This effect is especially disturbing since\nadjacent PM channels will not always correspond to adjacent fibers in the\ntracking detector.\n\n\\FRAME{ftbpFU}{4.35in}{3.1842in}{0pt}{\\Qcb{Relative photon yield as a\nfunction of the fiber position for two different window thicknesses.\nAdjacent channels detect $2.3\\%$ ($1.3mm$ window) and $0.9\\%$ ($0.8mm$\nwindow).}}{}{vgl13_08.ps}{\\special{language \"Scientific Word\";type\n\"GRAPHIC\";display \"PICT\";valid_file \"F\";width 4.35in;height 3.1842in;depth\n0pt;original-width 568.9375pt;original-height 818.8125pt;cropleft\n\"0.024887\";croptop \"0.754395\";cropright \"0.884614\";cropbottom\n\"0.302749\";filename 'C:\/pmpaper\/VGL13_08.ps';file-properties \"XNPEU\";}}\n\nTo reduce the optical cross-talk we asked the PM supplier\\ to decrease the\nentrance window thickness of the photomultiplier tubes to the minimum\nthickness which is technically possible. We received two prototype tubes\nwith entrance window thicknesses of $1.0mm$ and $0.8mm$. The cross-talk was\nreduced by a factor $2.5$ for the $0.8mm$ thick window.\n\nOur simulations (Fig. 5) predict for the two cases cross-talks of $0.05$ ($%\n0.8mm$ window) and $0.13$ ($1.3mm$ window) for four adjacent channels and\nnegligible reflection losses and correspondingly $0.005$ and $0.06$ when we\nassume that we loose a fraction of $0.001$ for each photon reflection. The\nmeasurements ($0.035$) and ($0.09$) are located in between these predictions.\n\n\\subsection{Behaviour in a magnetic field}\n\nAt Hera-B the photomultipliers would have to be operated in regions with\nmagnetic fields of several Gauss up to one Tesla. The gain of conventional\nphotomultipliers is strongly affected by magnetic fields. As a result of the\ndeflection of the electrons by the magnetic field in MCPMs in addition to\nreduced gain increased cross-talk might be expected.\\FRAME{ftbpFU}{4.3734in}{%\n3.186in}{0pt}{\\Qcb{PM signal as a function of the magnetic field strenth.}}{%\n}{magrst.ps}{\\special{language \"Scientific Word\";type \"GRAPHIC\";display\n\"PICT\";valid_file \"F\";width 4.3734in;height 3.186in;depth 0pt;original-width\n568.9375pt;original-height 818.8125pt;cropleft \"0.0363237\";croptop\n\"0.694499\";cropright \"0.9105767\";cropbottom \"0.258447\";filename\n'C:\/pmpaper\/MAGRST.ps';file-properties \"XNPEU\";}}\n\nThe influence of a magnetic field on the PM performance was measured with\nfields up to $100G$ in all three directions with respect to the PM. We used\nsingle photoelectron signals. 
The magnetic field behaviour of the M16 and\nM64 types is very similar as expected because both tubes have the same metal\nchannel dynode structure except for different lateral dimensions. Figure 9\nshows the relative anode current for the 64 channel PM as a function of the\nmagnetic field in the x-, y- and z-directions where the z axis coincides\nwith the PM axis. The PMs are rather insensitive to magnetic fields. Within\nthe precision of our measurement there is no effect for fields parallel to\nthe y-axis. For the two other field orientations the PM signal is reduced to \n$80\\%$ (z-axis) and $70\\%$ (x-axis). About half of the loss is due to a gain\nreduction, the remaining fraction is loss in efficiency probably due to the\ndeflection of the first photoelectron.\n\nA position scan with and without magnetic field showed no difference in the\namount of cross-talk. Thus the MCPMs can well be operated in magnetic fields\nbelow $100G$.\n\n\\section{Conclusions}\n\nWe have studied the possibility to use multichannel photomultipliers to\nread-out scintillating fiber trackers. The device is well suited for this\npurpose. It is fast, sensitive to single photoelectrons and can be operated\nin magnetic fields up to about $0.01$ T. No electric cross-talk was\nobserved. The optical cross-talk is of the order of $10\\%$ for the $1.3mm$\nversion and single photoelectron detection for fiber diameters near the pad\ndimensions. It was reduced by a factor $2.5$ by replacing the $1.3mm$ window\nby a $0.8mm$ version. The measured cross-talk is compatible with the\nsimulations which also show that it depends strongly on the amount of\nreflection losses in the fibers. The gain variations from channel to channel\nare rather large for the 64 channel tube and require individual off-line\ncorrections for applications where the analog signal has to be recorded.\n\n\\textbf{Acknowledgement}\n\nWe thank Drs. Y. Yoshizawa and J. Takeuchi for lending to us the\nphotomultiplier prototypes and for providing us with valuable informations.\nWe are grateful to our ingeneers W. U. Otto and R. Seibert for the\ndevelopment of high voltage dividers and LED drivers.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzhsrq b/data_all_eng_slimpj/shuffled/split2/finalzzhsrq new file mode 100644 index 0000000000000000000000000000000000000000..52278427e040cf854e253f6a1f7a190ff462ad2b --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzhsrq @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nReinforcement learning deals with taking sequences of actions in previously unknown environments, to maximize cumulative rewards. Deep reinforcement learning combines the power of deep neural networks for function approximation with reinforcement learning algorithms. Such algorithms have recently been successful in solving several challenging problems \\cite{mnih2015human, silver2017mastering, schulman2017proximal}. However, these algorithms are often not sample-efficient, typically requiring an unreasonable amount of interactions in an environment \\cite{mnih2015human, schulman2017proximal, duan2016benchmarking}. In real world applications, it is often the case that collecting data is expensive, motivating the need for algorithms that can learn from a minimal number of interactions with the environment.\n\nEffectively using a model of the environment for learning could result in a significant increase of sample-efficiency \\cite{chua2018deep, lowrey2018plan}. 
This is because models introduce an inductive bias that the environment behaves in a forward generative causal direction. Models can be directly used for planning actions that maximize the sum of expected rewards in a finite horizon. Previous works have successfully used fast and accurate simulators of the environment for planning, to solve challenging tasks such as control of a Humanoid \\cite{lowrey2018plan, tassa2012synthesis}. However, most real-world problems do not come with a fast and accurate simulator and engineering one from scratch is a laborious task. Even if a simulator exists: 1)~maintaining the simulator to accommodate for changes in the real-world is a lot of manual work, and 2)~matching the true state of the environment and the simulator is not trivial. This motivates the need for \\emph{learning} dynamics models of the environment. Dynamics models of the environment can be efficiently learned using supervised learning techniques \\cite{nagabandi2018neural} and also efficiently adapted to changes in the environment \\cite{clavera2018learning}.\n\nPlanning with learned dynamics models is challenging because the planner can exploit the inaccuracies of the model to arrive at actions that are imagined to produce highly over-optimistic rewards (see Figure~\\ref{f:traj_opt}). In this paper, we propose to regularize model-based planning with an energy-based model trained on the same transitions as the dynamics model. The planning procedure is augmented with an additive cost of minimizing the energy of the imagined transitions. This penalizes the planner from producing trajectories with transitions that are outside the training data distribution (that is, transitions with high energy estimates). We demonstrate that the proposed method is effective at regularizing model-based planning from exploiting inaccuracies of the model. Previous works have proposed to use ensembles of models to deal with this problem \\cite{chua2018deep}. We show that the proposed method can further improve the performance of planning on top of pre-trained ensemble of models. Furthermore, we show that the proposed method enables sample-efficient learning to achieve competitive performance in five popular continuous control tasks.\n\n\n\\section{Model-Based Planning}\n\nIn this section, we formalize the problem setting as a Markov Decision Process (MDP). At every discrete time-step $t$, the environment is in state $s_t$, the agent takes an action $a_t$ to receive a scalar reward $r_t = r(s_t, a_t)$ and the environment transitions to the next state $s_{t+1}$, following the dynamics $s_{t+1} = f(s_t, a_t)$. The goal of the agent is to choose actions $a_t$ so as to maximize the sum of expected future rewards (called return), $G = \\mathbb{E} \\left[ \\sum_{t=0}^\\infty r(s_t, a_t) \\right]$.\n\nIn this paper, we focus on finite-horizon planning with learned dynamics models $\\hat{f}$. Also, we assume that the reward function $r(s_t, a_t)$ is known and that the state $s_t$ is fully observed. Forward dynamics models of the environment can be learned using supervised learning techniques to predict the next state $s_{t+1}$ from the current state $s_t$ and action $a_t$:\n\\[ s_{t+1} = \\hat{f}(s_t, a_t) \\,. \\]\n\n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics[width=0.5\\textwidth, trim=180 120 180 120, clip]{training_loop.pdf}\n\\end{center}\n\\caption{Overview of training loop. 
We initially perform random exploration for one or more episodes to train the dynamics model and then interact with the environment by planning using the learned dynamics model. At the end of each episode, we re-train the dynamics model on the past experience.}\n\\label{f:training-loop}\n\\end{figure}\n\nAt time-step $t$, the agent can plan a sequence of actions $\\{a_t, \\ldots, a_{t+H}\\}$ by unrolling the learned dynamics model to maximize the sum of rewards:\n\\begin{align}\na^*_t, \\ldots, a^*_{t+H} = \\argmax_{a_t, \\ldots, a_{t+H}} \\mathbb{E} \\left[ \\sum_{\\tau=t}^{t+H} r(s_\\tau, a_\\tau) \\right] \n\\,,\n\\label{eq:obj-true}\n\\end{align}\nsuch that $s_{\\tau+1} = f(s_\\tau, a_\\tau)$. Since we do not have access to the true dynamics $f$, we plan using the following objective as a proxy to the true objective:\n\\begin{align}\na^*_t, \\ldots, a^*_{t+H} = \\argmax_{a_t, \\ldots, a_{t+H}} \\sum_{\\tau=t}^{t+H} r(s_\\tau, a_\\tau) \n\\,,\n\\label{eq:obj-proxy}\n\\end{align}\nsuch that $s_{\\tau+1} = \\hat{f}(s_\\tau, a_\\tau)$. We use model-predictive control (MPC) \\cite{garcia1989model, nagabandi2018neural} to adapt our plans to new states, that is we apply just the first action from the optimized sequence and re-plan at next step. This reduces the effect of error accumulation due to multi-step predictions using the model.\n\nWe initially train the dynamics model using data collected by executing a random policy for one or more episodes. More episodes of initial exploration reduces the effect of the initial model weights. The initial data can also be collected by an existing policy, for example human demonstrations or human-engineered controllers. After training the dynamics models, we interact with the environment using model-based planning (Equation~\\ref{eq:obj-proxy}). The $(s_t, a_t, s_{t+1})$ transition observed at each interaction is stored in a replay buffer along with the initial data and the model is re-trained on the replay buffer at the end of each episode. We iterate this alternating process of training the dynamics model and model-based planning. An overview of the training loop is illustrated in Figure~\\ref{f:training-loop}.\n\n\\subsection{Regularizing Model-Based Planning}\n\n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics[width=\\textwidth, trim=20 80 20 100, clip]{comp_graph.pdf}\n\\end{center}\n\\caption{Computational graph of regularizing model-based planning with energy-based models. At each timestep $t-1$, the environment is in a state $s_{t-1}$ and we initially consider an action trajectory $\\{a_t, a_{t+1}, \\ldots, a_{t+H}\\}$ that we could apply for the next $H$ timesteps. The dynamics model is used to predict the future state transitions $\\{\\tilde{s}_t, \\tilde{s}_{t+1}, \\ldots, \\tilde{s}_{t+H}\\}$ at each timestep for the considered action trajectory. The rewards at each time-step is computed directly from the state-action pairs, resulting in a prediction of the finite-horizon cumulative reward. An additive regularization term is augmented to the planning objective by computing the energy of each $(\\tilde{s}_\\tau, a_\\tau, \\tilde{s}_{\\tau+1})$ transition. The action trajectory $\\{a_t, a_{t+1}, \\ldots, a_{t+H}\\}$ can be optimized to maximize this regularized planning objective.}\n\\label{f:comp-graph}\n\\end{figure}\n\nDirectly optimizing the objective in Equation~\\ref{eq:obj-proxy} is challenging because $\\hat{f}$ is only an approximation of the true dynamics $f$. 
Deep neural networks are commonly used as function approximators to learn $\\hat{f}$ and they are amenable to erroneous predictions for samples outside the data distribution.\nAn effective optimizer can be easily deceived by these erroneous predictions, converging to action trajectories that are imagined to produce very high rewards but is not the case in reality. This problem can be alleviated by penalizing the optimizer from considering trajectories that are outside the training distribution of the learned dynamics model $\\hat{f}$. This can be achieved by augmenting the planning objective with an additive term to maximize the probability of imagined transitions:\n\\begin{align}\na^*_t, \\ldots, a^*_{t+H} = \\argmax_{a_t, \\ldots, a_{t+H}} \\sum_{\\tau=t}^{t+H} r(s_\\tau, a_\\tau) + \n\\alpha \\log p(s_t, a_t, s_{t+1}, \\ldots, s_{t+H}, a_{t+H}, s_{t+H+1})\n\\,.\n\\nonumber\n\\end{align}\nwhere the scalar $\\alpha$ modulates the weight between both costs. We approximate the joint probability of the whole trajectory as a sum of joint probabilities of each transition in the trajectory:\n\\begin{align}\na^*_t, \\ldots, a^*_{t+H} = \\argmax_{a_t, \\ldots, a_{t+H}} \\sum_{\\tau=t}^{t+H} \\left[ r(s_\\tau, a_\\tau) +\n\\alpha \\log p(s_\\tau, a_\\tau, s_{\\tau+1}) \\right]\n\\,.\n\\label{eq:obj-main}\n\\end{align}\n\nAssume that we want to learn the probability density function using a parameterized model $p(x_\\tau; \\theta)$, where $x_\\tau = [s_\\tau, a_\\tau, s_{\\tau+1}]$. The energy function $E(x_\\tau; \\theta)$ is the unnormalized log-density, that is the probability density function $p(x_\\tau)$ is defined as:\n\\begin{align}\np(x_\\tau; \\theta) = \\frac{1}{Z(\\theta)} \\exp{(-E(x_\\tau;\\theta))}\n\\,,\n\\nonumber\n\\end{align}\nwhere $Z(\\theta) = \\int{\\exp{(-E(x'_\\tau;\\theta))}} dx'_\\tau$ is the partition function which normalizes the probability density. Computing the partition function is generally intractable in practice and is not important for regularization in Equation~\\ref{eq:obj-main} since it does not depend on $x_\\tau$.\nWe can instead learn and use the energy function for regularizing model-based planning:\n\\begin{align}\na^*_t, \\ldots, a^*_{t+H} = \\argmax_{a_t, \\ldots, a_{t+H}} \\sum_{\\tau=t}^{t+H} \\left[ r(s_\\tau, a_\\tau) -\n\\alpha E(s_\\tau, a_\\tau, s_{\\tau+1}) \\right]\n\\,.\n\\label{eq:obj-energy}\n\\end{align}\n\n\n\\section{Energy-Based Models}\n\\label{sec:energy}\n\nIn principle, any energy-based model \\cite{lecun2006tutorial} can be used to estimate $E(s_\\tau, a_\\tau, s_{\\tau+1})$ in Equation~\\ref{eq:obj-energy}. In this paper, we use the recently introduced Deep Energy Estimator Networks (DEEN) \\cite{saremi2018deep, saremi2019neural} for energy estimation. In this section, we introduce deep energy estimator networks and further contrast it against direct score function estimation using a denoising autoencoder. We show that deep energy estimator networks offers a principled and scalable way to estimate both the energy and score functions, making it a good choice for regularization in both gradient-free and gradient-based planning.\n\nConsider a random variable $Y$ that is a noisy observation of another unknown random variable $X$ with density function $p(x)$. 
\n\\citet{robbins1956empirical} derived the least squares estimators of variable $X$ from observed value $y$ of variable $Y$ for Poisson, geometric and binomial noise distributions and \\citet{miyasawa1961empirical} later extended the work to derive the estimator for univariate Gaussian noise. \\citet{raphan2011least} generalized these results into a unified framework and derived least square estimators for more distributions including multivariate Gaussian distribution. The empirical Bayes least squares estimator or the optimal denoising function $g(y)$ for zero-mean multivariate Gaussian noise is given by:\n\\begin{align}\ng(y) = y + \\sigma^2 \\nabla_y \\log p(y)\n\\,,\n\\label{eq:opt-denoising}\n\\end{align}\nwhere $y \\sim x + N(0, \\sigma^2I_d)$. Assume that we have access to samples $x_i \\in X$. Then, we can corrupt the samples using zero-mean Gaussian noise to obtain samples $y_i \\in Y$. We can train a feedforward neural network $\\hat{g}$ to denoise each sample $y_i$ to predict $x_i$. Such a function $\\hat{g}$ can be implemented with a denoising autoencoder (DAE)\nand based on Equation~\\ref{eq:opt-denoising} we can use it to approximate the score function $\\nabla_y \\log p(y)$ of the corrupted distribution as follows:\n\\begin{align}\n\\nabla_y \\log p(y) \\propto \\hat{g}(y) - y\n\\,.\n\\label{eq:dae}\n\\end{align}\n\n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics[width=\\textwidth, trim=10 10 10 10, clip]{deen_vs_dae.pdf}\n\\end{center}\n\\caption{Comparison of score function estimation using DEEN vs DAE. We consider a simple mixture of 1d Gaussian distributions and generate 1000 samples from it. The ground truth probability density function $p(x)$, $\\log\\ p(x)$ and score function $\\frac{\\partial\\log\\ p(x)}{\\partial x}$ are shown in the first three figures. We corrupt the training data using additive Gaussian noise with a scale of 0.5. We train a DEEN and a DAE on this training data and the energy and score function estimates of the corrupted distribution $p(y)$ are shown in the last three figures. DEEN provides good smooth estimates of the energy and score function. DAE also provides a reasonable estimate of the score function.}\n\\label{f:deen-dae}\n\\end{figure}\n\n\\citet{boney2019regularizing} proposed to use this approximation to directly estimate the gradient of $\\log p(s_\\tau, a_\\tau, s_{\\tau+1})$ in Equation~\\ref{eq:obj-main}. This can be used in gradient-based planning by using the penalty term $\\|\\hat{g}(x) - x\\|^2$ instead of $\\log p(s_\\tau, a_\\tau, s_{\\tau+1})$ in Equation~\\ref{eq:obj-main} and stopping the gradient propagation through the denoising autoencoder $g$.\n\nGradient-free planning with the regularized objective in Equation~\\ref{eq:obj-energy} requires explicit energy estimates. \\citet{boney2019regularizing} used DAE regularization for gradient-free planning, which is not accurate as the denoising error does not correspond to energy estimation. Gradient-free planning offers some attractive properties: 1)~It is very easy to parallelize, 2)~There is no need to backpropogate gradients, 3)~Gradient-based planning involves backpropagating through very deep networks (where the same network is repeatedly applied for a finite horizon), where the problem of exploding and vanishing gradients may arise, and 4)~The learned dynamics model can become chaotic leading to high variance in the backpropagated gradients \\cite{parmas2018pipps}. 
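Before turning to explicit energy estimation, we note that the DAE-based penalty described above amounts to the following sketch (assuming a trained denoiser \\texttt{g}; gradients are blocked through the denoiser, so the gradient of the penalty with respect to the input reduces, up to sign and scale, to the score estimate of Equation~\\ref{eq:dae}):
\\begin{verbatim}
import torch

def dae_penalty(g, x):
    # || g(x) - x ||^2 with g(x) treated as a constant, so that
    # d(penalty)/dx = -2 (g(x) - x), i.e. proportional to the estimated score.
    with torch.no_grad():
        gx = g(x)
    return ((gx - x)**2).sum(dim=-1)
\\end{verbatim}
The scalar value of this penalty is a denoising error rather than an energy estimate, which is why it is less suitable for gradient-free planning.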
In this paper, we propose to use differentiable energy-based models to obtain explicit energy estimates, from which the score function can be computed (if needed).\n\n\\citet{saremi2018deep} proposed to explicitly parameterize the energy function $E(y; \\theta)$ with a neural network\nand to compute the derivative of the energy by backpropagating through the network. Such a network can be trained by minimizing the following objective based on the relation in Equation~\\ref{eq:opt-denoising}:\n\\begin{align}\n\\argmin_\\theta \\sum_{x_i \\in X, y_i \\in Y} \\left\\| x_i - y_i + \\sigma^2 \\frac{\\partial E(y=y_i; \\theta)}{\\partial y} \\right\\|^2\n\\label{eq:deen}\n\\end{align}\n\nThe energy function network $E(y; \\theta)$ is called a deep energy estimator network (DEEN).\nNote that minimizing this objective involves double backpropagation at each step. Optimizing the objective in Equation~\\ref{eq:deen} ensures that the score function $\\partial E(y; \\theta) \/ \\partial y$ satisfies the relation in Equation~\\ref{eq:opt-denoising}. This leads the energy network $E$ to explicitly learn the energy function of the corrupted distribution such that the gradient of the network also corresponds to the score function of the corrupted distribution. In this paper, we propose to use the energy network $E(y; \\theta)$ to learn the energy function $E(s_\\tau, a_\\tau, s_{\\tau+1})$ and use it for regularization in the planning objective (Equation~\\ref{eq:obj-energy}). A computational graph of regularizing model-based planning with energy estimation using a DEEN network is illustrated in Figure~\\ref{f:comp-graph}.\n\nNote, however, that both the denoising autoencoder and the DEEN method approximate the score function of the corrupted distribution $p(y)$ instead of the true data distribution $p(x)$. This can potentially behave better in practice since $p(y)$ can be seen as a Parzen window estimate of $p(x)$ with variance $\\sigma^2$ as the smoothing parameter \\cite{saremi2019neural, vincent2011connection}.\n\nIn Section~\\ref{sec:exp}, we show that energy estimation with DEEN is more effective than direct score function estimation using DAE for regularizing model-based planning. Deep energy estimator networks have been shown to be robust for score function estimation since the score function is computed from explicit energy estimates \\cite{saremi2018deep}. We compare score function estimation using denoising autoencoders and deep energy estimator networks on a toy example in Figure~\\ref{f:deen-dae}. Previous works \\cite{saremi2019neural, alain2014regularized} have also observed that directly estimating the score function is not robust in practice.\n\n\n\\section{Experiments}\n\\label{sec:exp}\n\nWe compare the proposed energy-based regularization method to probabilistic ensembles with trajectory sampling (PETS) \\cite{chua2018deep} and DAE regularization \\cite{boney2019regularizing}. PETS is a state-of-the-art model-based algorithm which involves learning an ensemble of probabilistic dynamics models.\nWe perform the comparison in five popular continuous control benchmarks from \\cite{brockman2016openai}: Cartpole, Reacher, Pusher, Half-cheetah and Ant. We use the cross-entropy method (CEM) \\cite{botev2013cross} as the optimizer in all our experiments since it is computationally significantly faster than the Adam optimizer used in \\cite{boney2019regularizing} and was also able to achieve competitive or even better results in these benchmarks. 
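To make the procedure concrete, a minimal sketch of the DEEN training loss (Equation~\\ref{eq:deen}) and of CEM planning with the regularized objective in Equation~\\ref{eq:obj-energy} is given below. The dynamics model, reward function and energy network are placeholders, and the population sizes and $\\alpha$ are illustrative values rather than the exact settings used in our experiments.
\\begin{verbatim}
import numpy as np
import torch

def deen_loss(energy_net, x, sigma):
    # Denoising objective: || x - y + sigma^2 dE/dy ||^2 with y = x + noise.
    # Note the double backpropagation through dE/dy (create_graph=True).
    y = (x + sigma * torch.randn_like(x)).requires_grad_(True)
    dEdy = torch.autograd.grad(energy_net(y).sum(), y, create_graph=True)[0]
    return ((x - y + sigma**2 * dEdy)**2).sum(dim=-1).mean()

def regularized_return(s0, actions, dynamics, reward_fn, energy_fn, alpha):
    # Sum of r(s, a) - alpha * E(s, a, s') along one imagined trajectory.
    ret, s = 0.0, s0
    for a in actions:
        s_next = dynamics(s, a)
        ret += reward_fn(s, a) - alpha * energy_fn(s, a, s_next)
        s = s_next
    return ret

def cem_plan(s0, horizon, act_dim, dynamics, reward_fn, energy_fn,
             alpha=0.05, population=400, elites=40, iterations=5):
    # Cross-entropy method over action sequences; only the first action of the
    # optimised sequence is executed and the plan is recomputed at the next step.
    mean, std = np.zeros((horizon, act_dim)), np.ones((horizon, act_dim))
    for _ in range(iterations):
        cand = mean + std * np.random.randn(population, horizon, act_dim)
        scores = np.array([regularized_return(s0, seq, dynamics, reward_fn,
                                              energy_fn, alpha) for seq in cand])
        elite = cand[np.argsort(scores)[-elites:]]
        mean, std = elite.mean(axis=0), elite.std(axis=0)
    return mean[0]
\\end{verbatim}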
In Section~\\ref{sec:exp-pre}, we test the energy-based regularization method on top of the pre-trained PETS models\nto show that energy-based regularization further improves planning. In Section~\\ref{sec:exp-scratch}, we\nshow that energy-based regularization enables sample-efficient learning to solve all tasks from just a handful of trials.\n\n\\subsection{Experiments on Pre-Trained Dynamics Models}\n\\label{sec:exp-pre}\n\nWe test the proposed regularization on top of the state-of-the-art model-based RL algorithm: PETS. We trained an ensemble of probabilistic dynamics models using the code provided by the authors of \\cite{chua2018deep}. The results are shown in Table~\\ref{t:pretrained-results}. We trained PETS on the Half-cheetah benchmark for 300 episodes and performed closed-loop planning by augmenting the planning objective with an additive term consisting of the energy estimates (Equation~\\ref{eq:obj-energy}). Both DAE and DEEN regularization are able to improve upon PETS, with DEEN regularization performing the best. Similar to \\cite{boney2019regularizing}, we did not observe any improvements on tasks with low-dimensional action spaces: Cartpole, Reacher and Pusher.\n\n\n\\begin{table}[h]\n\\caption{Comparison of planning using pre-trained PETS models with different optimizers}\n\\label{t:pretrained-results}\n\\centering\n\\begin{tabular}{lllll}\n\\toprule\nOptimizer & CEM & CEM + DAE & Adam + DAE & CEM + DEEN \\\\\n\\midrule\nReturn & $10955 \\pm 2865$ & $12967 \\pm 3216$ & $12796 \\pm 2716$ & $\\mathbf{13052 \\pm 2814}$ \\\\\n\\bottomrule\n\\end{tabular}\n\\end{table}\n\n\n\\subsection{Experiments on Learning from Scratch}\n\\label{sec:exp-scratch}\n\n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics[width=\\textwidth]{corl_results.pdf}\n\\end{center}\n\\caption{Results of our experiments on learning from scratch on five continuous control benchmarks: Cartpole, Reacher, Pusher, Half-cheetah and Ant. We compare to PETS \\cite{chua2018deep} and DAE regularization \\cite{boney2019regularizing} using the return (cumulative reward) obtained in each episode. \nPETS is a state-of-the-art model-based RL algorithm and DAE regularization has been shown to be effective in sample-efficient learning in these tasks. We also compare against planning with a Gaussian Process (GP) dynamics model in the Cartpole task. We show the mean and standard deviation of each setting averaged across 5 seeds.}\n \\label{f:scratch-results}\n\\end{figure}\n\nIn this section, we test the effectiveness of regularization using energy-based models for learning from scratch. We perform one or more episodes of random exploration, train a dynamics model and the regularizer on the transitions, and further interact with the environment using MPC by planning at each time-step with the regularized objective in Equation~\\ref{eq:obj-energy}. We maintain a replay buffer of all past transitions and at the end of each episode the dynamics model and regularizer are re-trained on the whole replay buffer. In these experiments, we use a single feedforward network as the dynamics model (as opposed to an ensemble of models in \\cite{chua2018deep}) to test the efficacy of the proposed method in a simple setting. The results are shown in Figure~\\ref{f:scratch-results}. In low-dimensional environments such as Cartpole, Reacher and Pusher, CEM optimization with DEEN regularization is comparable or better than the other methods. 
In Half-cheetah, CEM optimization with DEEN regularization clearly performs the best, obtaining good asymptotic performance in just 20,000 timesteps (corresponding to 16.6 minutes of experience). In Ant, CEM optimization with DEEN regularization also performs the best, learning to walk reasonably in just 4,000 timesteps (corresponding to 3.3 minutes of experience)\\footnote{Videos of the training progress are available at \\href{https:\/\/sites.google.com\/view\/regularizing-mbrl}{https:\/\/sites.google.com\/view\/regularizing-mbrl}.}. It can also be observed that CEM optimization with DEEN regularization performs competitively or better than Adam optimization with DAE regularization, which requires much more computation.\nIn the Half-cheetah benchmark, state-of-the-art model-free methods \\cite{haarnoja2018soft, fujimoto2018addressing} and PETS obtains better asymptotic performance during later stages of learning. \nWe postulate that this is due to lack of a proper exploration mechanism in the proposed approach. However, DEEN regularization enables excellent performance during the early stages of training and also shows consistent improvements after each episode. This is very important for practical applications in the real-world, where the agent is expected to perform sensible actions from the very beginning. The proposed method enables efficient exploitation of the learned dynamics model and combining it with an explicit exploration mechanism would facilitate controlled exploration and exploitation. We leave complementation of the proposed method with an effective exploration strategy to future research. For example, energy estimates of the transitions could be used as bonuses for curiosity-driven exploration \\cite{pathak2017curiosity} to visit novel states and try novel actions.\n\nWe visually demonstrate the effectiveness of DEEN regularization in Figure~\\ref{f:traj_opt}. To compare with \\cite{boney2019regularizing}, we visualize trajectory optimization on the Half-cheetah task using dynamics models obtained after 5 episodes of training. We perform actions in the environment for 50 timesteps using model-based planning to arrive at a stable state and then perform trajectory optimization using a randomly initialized population of action trajectories. Without any regularization, the planning procedure leads to trajectories that are predicted to produce high rewards but is not the case in reality. It can be observed that while DAE regularization is also effective, DEEN regularization is clearly better, being able to successfully prevent the reality and imagination from diverging and also leading to trajectories with a better outcome.\n\n\n\\begin{figure}[t]\n\\begin{center}\n\\includegraphics[width=.8\\textwidth]{traj_opt.pdf}\n\\end{center}\n\\caption{Visualization of trajectory optimization after 5 episodes of training in the Half-cheetah environment. We perform actions in the environment for 50 timesteps using MPC and then initialize the CEM optimization with the a random population of trajectories and optimize the planning objective in three different settings: 1) without any regularization, 2) with DAE regularization \\cite{boney2019regularizing}, and 3) with DEEN regularization. Here, the red lines denote the rewards predicted by the model (imagination) and the black lines denote the true rewards obtained when applying the sequence of optimized actions (reality). 
It is noticeable that planning without any regularization exploits inaccuracies of the dynamics model but DAE and DEEN regularization is able to prevent this.\n}\n\\label{f:traj_opt}\n\\end{figure}\n\n\\subsection{Model Architecture and Hyperparameters}\n\nThe important hyperparameters used in our experiments are reported in Table~\\ref{t:hyperparameters}.\nFollowing \\cite{chua2018deep, boney2019regularizing}, we used a Bayesian neural network (BNN) with mean and variance predictions as our dynamics model. Although the predicted variance is not used for planning, it was found to have a regularizing effect during training \\cite{chua2018deep}. We used a vanilla feedforward network to model the energy function\nThe energy estimation network is trained by corrupting the transitions in the replay buffer using additive Gaussian noise and minimizing the objective in Equation~\\ref{eq:deen}. We found the noise scale $\\sigma$, cost multiplier $\\alpha$ and number of training epochs to be the most sensitive hyperparameters. \nTo prevent overfitting to the replay buffer, we explored simple strategies like increasing the number of initial episodes with random exploration and decaying the number of training epochs after each episode. In Half-cheetah, we perform random exploration for the first 3 episodes and decay the number of training epochs to 8 after 10 episodes. In Ant, we decay the number of training epochs of the dynamics model and the DEEN network by factors of 0.6 and 0.7 respectively.\n\n\\begin{table}[h]\n\\caption{Important hyperparameters used in our experiments}\n\\label{t:hyperparameters}\n\\centering\n\\begin{tabular}{llccccc}\n\\toprule\n& Hyperparameter & Cartpole & Reacher & Pusher & Half-cheetah & Ant \\\\\n\\midrule\n\\multirow{4}{*}{Model} & Hidden layers & 3 & 3 & 3 & 4 & 4 \\\\\n& Hidden size & 200 & 200 & 200 & 200 & 400 \\\\\n& Epochs & 500 & 500 & 100 & 300 & 600 \\\\\n& Batch Size & 32 & 32 & 32 & 128 & 400 \\\\\n\\midrule\n\\multirow{6}{*}{DEEN} & Hidden layers & 3 & 3 & 3 & 5 & 3 \\\\\n& Hidden size & 200 & 200 & 200 & 500 & 300 \\\\\n& Epochs & 500 & 500 & 100 & 100 & 800 \\\\\n& Batch Size & 32 & 32 & 32 & 32 & 64 \\\\\n& Noise scale $\\sigma$ & 0.1 & 0.1 & 0.1 & 0.37 & 0.9 \\\\\n& Cost multiplier $\\alpha$ & 0.001 & 0.001 & 0.01 & 0.05 & 0.035 \\\\\n\\midrule\nCEM & Iterations & 5 & 5 & 5 & 5 & 7 \\\\\n\\bottomrule\n\\end{tabular}\n\\end{table}\n\n\n\\section{Related Work}\n\nThis paper is inspired by the recent works on planning with learned dynamics models. PILCO \\cite{deisenroth2011pilco} is a well-known method for sample-efficient learning in low-dimensional control problems. However, it has been difficult to scale such methods to high-dimensional problems. \\citet{nagabandi2018neural} used neural networks as dynamics models for planning to demonstrate sample-efficient learning on challenging continuous control problems and further fine-tuned the learned policy using model-free methods for good asymptotic performance. \\citet{mishra2017prediction} introduced deep generative models based on convolutional autoregressive models and variational autoencoders \\cite{kingma2013auto} and used it for gradient-based trajectory optimization and policy optimization. \\citet{chua2018deep} used ensembling of probabilistic dynamics models and trajectory sampling to deal with inaccuracies of learned dynamics models and demonstrated sample-efficient learning to achieve high asymptotic performance on challenging continuous control problems. 
While the use of ensembling in \\cite{chua2018deep} allows for a better dynamics model, energy-based regularization also bounds the transitions of the agent to be similar to it's previous experience, which might be important for safety-critical applications. While KL divergence penalty between action distribution has been used in previous works \\cite{levine2013guided, kumar2016optimal}, energy-based regularization also bounds the familiarity of states. \\citet{boney2019regularizing} proposed to use denoising autoencoders to prevent trajectory optimization from exploiting modelling inaccuracies and demonstrated sample-efficient learning from very few episodes, using gradient-based trajectory optimization. \\citet{hafner2018learning} introduced a novel latent dynamics model for planning in high-dimensional observation spaces, such as images. Recent works on model-based RL have also explored Dyna-style \\cite{sutton1990integrated} architectures where learned dynamics models are used to generate data to train model-free methods \\cite{kurutach2018modelensemble, clavera2018model, ha2018recurrent}.\n\nEnergy-based models can be directly used for planning by sampling future trajectories from the generative model. \\citet{du2019implicit} showed that energy-based models can be used to sample diverse predictions of future trajectories and it was later extended in \\cite{du2019model} for model-based planning. DEEN \\cite{saremi2018deep} could also be directly used for planning by sampling future trajectories using the novel walk-jump sampling algorithm introduced in \\cite{saremi2019neural}. However, sampling from such models at each timestep for planning is expensive and we instead use a separate forward dynamics model for directly predicting the future trajectories but only use the energy-based model for regularization. This can be seen as an ensemble of two different kinds of models.\n\n\n\\section{Conclusion}\n\nPlanning with learned dynamics models is challenging because planning can exploit the inaccuracies in the model to produce over-optimistic trajectories. In this paper, we propose to regularize planning using energy estimates of state transitions in the environment. We use a recently proposed energy estimation method called DEEN for this purpose. We demonstrated that an energy estimation network can be trained on the past experience of pre-trained dynamics models to further improve planning. We also demonstrated that the energy regularization enables sample-efficient learning on challenging tasks such as Half-cheetah and Ant, in just a few minutes of interaction.\n\nOne of the limitations of the proposed and related model-based planning algorithms is the additional hyperparameter tuning required for learning dynamics models. AutoML algorithms can be potentially used to automate the training of effective dynamics model by splitting the replay buffer into a training set and validation set and optimizing the prediction performance on the validation set. This could enable automatic architecture and hyperparameter tuning of dynamics models using more computational resources, without any additional data or human supervision. 
This would be an interesting line of future work.\n\n\n\n\\clearpage\n\n\\acknowledgments{We would like to thank Saeed Saremi for valuable discussions about his work on deep energy estimator networks and neural empirical Bayes.}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nIn diverse families of strongly correlated electron systems, including cuprates, iron-pnictides, and heavy fermion compounds, superconductivity is often found near a quantum critical point (QCP) where a magnetic phase vanishes in the limit of zero temperature, pointing to a magnetic glue as the source of electron pairing \\cite{Mathur,Shibauchi2014,Keimer}. In these materials, microscopic coexistence of superconducting and magnetically ordered phases both involving the same charge carriers is a striking example for unusual emergent electronic phases. Moreover, superconductivity is frequently the strongest near the QCP, suggesting that the proliferation of critical magnetic excitations emanating from the QCP plays an important role in Cooper pairing. Despite tremendous research, however, the entangled relationship between superconductivity and magnetism has remained largely elusive. \n\n\n\\begin{figure}[b]\n\\begin{center}\n\\includegraphics[width=1.0\\linewidth]{Fig1.eps}\n\\caption{\n(a) Schematic figure of the interaction between $d$-wave superconductivity (SC) and static antiferromagnetic (AFM) order via the interface. (b) Interaction between two competing orders under pressure near quantum critical point (QCP), where AFM order disappears. \n(c) High resolution cross-sectional TEM image for CeCoIn$_5$($5$)\/CeRhIn$_5$($5$) superlattice. (d) The TEM image of the boxed area in (c).\n}\n \\end{center}\n \\end{figure}\n\nRecently, realization that interactions between superconducting electrons and bosonic excitations through an atomic interface can have a profound influence on Cooper-pair formation has raised the exciting possibility of a new route to controlling superconductivity. For instance, when a monolayer of FeSe is grown on a SrTiO$_3$ substrate, the interaction between FeSe electrons and SrTiO$_3$ phonons via the interface enhances the pairing interaction, giving rise to the highest transition temperature $T_c$ among iron-based superconductors \\cite{Huang,DHLee,JJLee,Rademaker}. This discovery raises the possibility of a magnetic analogue in which the pairing interaction is influenced by magnetic fluctuations though an interface between an unconventional superconductor and a magnetic metal. This concept is illustrated schematically in Figs.\\,1(a) and 1(b). Besides allowing a new approach to revealing the entangled relationship between magnetism and unconventional superconductivity, this concept has the advantage that magnetic excitations are tunable as a magnetic transition is driven toward zero temperature, unlike phonon excitations in SrTiO$_3$. The state-of-the-art molecular beam epitaxy (MBE) technique enables realization of this idea through fabrication of artificial Kondo superlattices with alternating layers of Ce-based heavy fermion superconductors and magnets that are atomic layer thick \\cite{Shishido2010,Mizukami,Shimozawa2016}. These artificially engineered materials are particularly suitable systems to elucidate the mutual interaction through the interface, providing a new platform to study the interplay of competing orders. 
\n \nThe layered heavy fermion compounds Ce$M$In$_5$ ($M=$\\,Co, Rh) are ideal model systems in which the interplay between magnetism and superconductivity can be explored, because of their high purity and small energy scales \\cite{Thompson,Kenzelman,Knebel2010}. They have similar Fermi surface structures and similar pressure-temperature ($p$-$T$) phase diagrams. At ambient pressure, CeCoIn$_5$ is a superconductor ($T_c$=2.3\\,K) with $d_{x^2-y^2}$-wave symmetry \\cite{Izawa,Allan,Zhou}. The normal state possesses non-Fermi-liquid properties in zero field, including $T$-linear resistivity, indicative of a nearby underlying QCP \\cite{Sidorov,Nakajima}. In contrast, CeRhIn$_5$ orders antiferromagnetically at atmospheric pressure ($T_{\\rm N}$=3.8\\,K) \\cite{Bao}. Its magnetic transition is suppressed by applying pressure and the ground state becomes purely superconducting state at $p>p^\\ast\\approx$1.7\\, GPa, indicating the presence of a pressure induced QCP \\cite{Park2006,Knebel2008,Park2008,Shishido2005}. As disorder may seriously influence physical properties especially near a QCP, there is a great benefit in examining quantum critical systems which are stoichiometric, and hence, relatively disorder free; both compounds are ones of a small number of such systems. Both host a wide range of fascinating superconducting properties including an upper critical field $H_{c2}$ that is limited by extremely strong Pauli pair-breaking \\cite{Izawa,Knebel2008}. \n\nTo realize hybrid heterostructures shown in Figs.\\,1(a) and 1(b), we fabricate superlattice films with alternating block layers (BLs) of $n$ unit-cell-thick (UCT) CeCoIn$_5$ and $m$-UCT CeRhIn$_5$, CeCoIn$_5$($n$)\/CeRhIn$_5$($m$).\nWe demonstrate that the pairing interaction in a $d$-wave superconductor is tuned by injecting magnetic fluctuations through the atomic interface. Moreover, we show that the pairing strength is maximized near the critical pressure where AFM order vanishes. \n\n\n\nThe hybrid superlattices CeCoIn$_5$($n$)\/CeRhIn$_5$($m$) with $c$ axis oriented structure are grown on MgF$_2$ substrate by the MBE technique \\cite{Shishido2010,Mizukami,Shimozawa2016}. \nFigure\\,1(c) displays a high-resolution cross-sectional transmission electron microscope (TEM) image of a CeCoIn$_5$($5$)\/CeRhIn$_5$($5$) superlattice. The TEM image displayed in Fig.\\,1(d) (the boxed area in Fig.\\,1(c)) demonstrate that the Rh and Co atoms are clearly distinguished by bright and dark spots, respectively. No discernible atomic inter-diffusion between the neighboring Co and Rh layers is seen, which is also confirmed by lateral satellite peaks in an X-ray diffraction pattern. The epitaxial growth of each layer with atomic flatness is confirmed by reflection high energy electron diffraction (Fig.\\,S1 in \\cite{SM}). These results indicate the successful fabrication of epitaxial superlattices with sharp interfaces. \nHigh-pressure resistivity measurements have been performed under hydrostatic pressure up to 2.4\\,GPa using a piston cylinder cell with oil as pressure transmitting medium.\n\n\n\n\n \\begin{figure}[t]\n \t\\begin{center}\n \t\n \t\t\\includegraphics[width=1.0\\linewidth]{Fig2.eps}\n \t\n \t\t\\caption{\n\t\n\t\t(a), (b) $p$-$T$ phase diagrams of thin films and single crystals of (a) CeCoIn$_5$ and (b) CeRhIn$_5$. \n\t\n\t\t(c) Temperature dependence of the resistivity of CeCoIn$_5$ thin film at ambient pressure and at $p=2.1$\\,GPa. 
(d) and (e) show temperature dependence of the resistivity (solid lines, left axes) and its temperature derivative $d\\rho(T)\/dT$ (dotted lines, right axes) for CeRhIn$_5$ thin film and CeCoIn$_5$(5)\/CeRhIn$_5$(5) superlattice at ambient pressure and at $p=2.1$\\,GPa, respectively. The peak of $d\\rho(T)\/dT$ corresponds to AFM transition. \n \t\t}\n \t\t\\label{fig:Fig1.eps}\n \t\\end{center}\n \\end{figure}\n\nFigures\\,2(a) and 2(b) depict the resistively determined $p$-$T$ phase diagrams of separate, MBE-grown epitaxial thin films of CeCoIn$_5$ and CeRhIn$_5$, whose resistivities ($\\rho$) are shown in Figs.\\,2(c) and 2(d), respectively. The $p$-$T$ phase diagrams of both films are essentially those of single crystals. \n$T_c$ (=2.0\\,K) in the CeCoIn$_5$ thin film, however, is slightly reduced from the bulk value, possibly due to strain induced by a slight lattice mismatch with the substrate, while $T_{\\rm N}$ (=3.7\\,K) of CeRhIn$_5$ thin film is almost the same as that in a single crystal. With pressure, $T_c$ of CeCoIn$_5$ thin film increases and shows a broad peak near $p\\sim$1.7\\,GPa. CeRhIn$_5$ thin film undergoes the superconducting transition with no signature of AFM transition at $p\\approx$2.1\\,GPa. Similar to CeRhIn$_5$ single crystals \\cite{Park2006,Sidorov,Park2008}, superconductivity in the thin films develops at $p\\agt$1\\,GPa where it coexists with magnetic order, and there is only a purely superconducting state at $p\\agt$2.1\\,GPa (Fig.\\,S2 in \\cite{SM}), a slightly higher pressure than in single crystals. \n\n\n\\begin{figure}[t]\n \t\\begin{center}\n \t\n \t\t\\includegraphics[width=1.0\\linewidth]{Fig3.eps}\n \t\n \t\t\\caption{\n\t\n\t\n\t\t(a) $p$-$T$ phase diagram of CeCoIn$_5$(5)\/CeRhIn$_5$(5) superlattice. \n\t\tOut-of-plane upper critical field $H_{c2\\perp}$ normalized by $T_c$, $H_{c2\\perp}\/T_c$, measures the coupling strength of the superconductivity. \n\t\t(b) Temperature dependence of in-plane and out-of-plane upper critical fields at ambient pressure and at $p=1.8$ and 2.1\\,GPa. (c) Anisotropy of upper critical field, $H_{c2\\parallel}\/H_{c2\\perp}$, near $T_c$ of superlattices at ambient pressure and at 2.1\\,GPa, along with the data of CeCoIn$_5$ thin film. (d) Angular dependence of upper critical field of superlattice at $p=1.8$ and 2.1\\,GPa. The inset is an expanded view of the low angle region.\n \t\t}\n \t\t\\label{fig:Fig1.eps}\n \t\\end{center}\n \\end{figure}\n \n \nFigure\\,2(e) compares the $T$-dependence of $\\rho(T)$ and its temperature derivative $d\\rho(T)\/dT$ for a CeCoIn$_5$(5)\/CeRhIn$_5$(5) superlattice at ambient pressure and at $p=2.1$\\,GPa. At ambient pressure, a distinct peak in $d\\rho(T)\/dT$ associated with an AFM transition can be seen at 3\\,K in addition to a superconducting transition at $\\sim1.4$\\,K \\cite{Knebel2008}. While $T_c$ and $T_{\\rm N}$ of the hybrid superlattice are lower than that of the CeCoIn$_5$ and CeRhIn$_5$ thin films, respectively, they are still larger than that of respective CeCoIn$_5$\/YbCoIn$_5$ and CeRhIn$_5$\/YbRhIn$_5$ superlattices (Fig.\\,S2 in \\cite{SM}), indicating the importance of mutual interaction between the CeCoIn$_5$ and CeRhIn$_5$ BLs. On the other hand, at $p=2.1$\\,GPa, there is no signature for magnetic order, while the superconductivity remains with slightly higher $T_c$ than at ambient pressure. In Fig.\\,3(a), we plot the $p$-dependence of $T_c$ and $T_{\\rm N}$ determined by the peak in $d\\rho(T)\/dT$. 
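As an illustration of this criterion, $T_{\\rm N}$ can be read off numerically from the resistivity data roughly as follows; the array names and the low-temperature cutoff are hypothetical and are shown only to make the peak criterion explicit, not to reproduce our actual analysis code.
\\begin{verbatim}
import numpy as np

# T and rho: 1-D arrays of temperature (K) and resistivity, sorted in T.
# T_N is taken as the temperature at which the temperature derivative of
# rho peaks; in practice the data are smoothed first and the search is
# restricted to temperatures above the superconducting transition.
def estimate_T_N(T, rho, T_min=2.0):
    drho_dT = np.gradient(rho, T)
    mask = T > T_min
    return T[mask][np.argmax(drho_dT[mask])]
\\end{verbatim}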
At $p\\sim2$\\,GPa, $T_c$ is a maximum, forming a dome-shaped $p$-dependence. With pressure, $T_{\\rm N}$ is suppressed gradually at low $p$, followed by a rapid suppression at $p\\agt1$\\,GPa (Fig.\\,S3 in \\cite{SM}). At $p\\agt 1.6$\\,GPa, evidence for magnetic order is hidden beneath the superconducting dome. A simple extrapolation of $T_{\\rm N}(p)$ gives a critical pressure $p_c\\sim2$\\,GPa at which the magnetic transition reaches zero temperature and $T_c$ shows a maximum. \n\nWe demonstrate that two-dimensional (2D) superconductivity is realized in CeCoIn$_5$ BLs in the whole pressure regime. Figures\\,3(b) and 3(c) depict the $T$-dependence of the upper critical field determined by the mid point of the resistive transition in a magnetic field $H$ applied parallel ($H_{c2\\parallel}$) and perpendicular ($H_{c2\\perp}$) to the $ab$ plane and the $T$-dependence of the anisotropy of upper critical fields, $H_{c2\\parallel}\/H_{c2\\perp}$, respectively. The anisotropy diverges on approaching $T_c$, in sharp contrast to the CeCoIn$_5$ thin film whose anisotropy shows little $T$-dependence up to $T_c$. This diverging anisotropy in the superlattice is a characteristic feature of 2D superconductivity, in which $H_{c2\\parallel}$ increases as $\\sqrt{T_c-T}$ due to the Pauli paramagnetic limiting, but $H_{c2\\perp}$ increases as $T_c-T$ due to orbital limiting near $T_c$. This result, along with the fact that the thickness of the CeCoIn$_5$-BL is comparable to the perpendicular superconducting coherence length $\\xi_{\\perp}\\sim3$--4\\,nm, indicates that each 5-UCT CeCoIn$_5$ BL effectively acts as a 2D superconductor \\cite{Mizukami}. The 2D superconductivity is reinforced by the angular variation of $H_{c2}(\\theta)$. Figure \\,3(d) and its inset show $H_{c2}(\\theta)$ below and above $p^*$. For both pressures, at $T\\ll T_c$,\n$H_{c2}(\\theta)$ in the regime $|\\theta|\\alt30^{\\circ}$ is enhanced with decreasing $|\\theta|$ and exhibits a sharp cusp at $\\theta=0$. This cusp behavior is typical for a Josephson coupled layered superconductor \\cite{Tinkham}.\n\nWe note that in stark contrast to CeRhIn$_5$ single crystal and our thin film, each CeRhIn$_5$ BL in CeCoIn$_5$(5)\/CeRhIn$_5$(5) superlattice is not fully superconducting even when the AFM order is suppressed under pressure, which leads to the realization of 2D superconductivity in a wide range of pressure. In fact, as shown in Fig.\\,3(d), overall angle dependence of $H_{c2}(\\theta)$ including the cusp structure near $\\theta=0$ is observed at $p=1.8$\\,GPa, where the bulk superconductivity is not observed in CeRhIn$_5$ thin film (Fig.\\,2(b) and Fig.\\,S2 in \\cite{SM}). Essentially very similar angle dependence of $H_{c2}(\\theta)$ is observed at $p=2.1$\\,GPa above $p_c$. These results imply that 2D superconductivity occurs in CeCoIn$_5$ BLs even above $p_c$. Moreover, in CeRhIn$_5$(5)\/YbRhIn$_5$(5) \nsuperlattice zero resistivity is not attained under pressure (Fig.\\,S4 in \\cite{SM}). \nWith the reduction of BL thickness, the superconductivity of CeRhIn$_5$ is strongly suppressed, in stark contrast to CeCoIn$_5$. This may be related to the incommensurate magnetic structure of CeRhIn$_5$ with ordering vector $\\bm{q}=(0.5,0.5, 0.297)$ \\cite{Bao}, in which the long-wave-length AFM fluctuations perpendicular to the layers are suppressed in CeRhIn$_5$ BLs with atomic layer thickness. In CeCoIn$_5$, on the other hand, AFM fluctuations with different $\\bm{q}=(0.45, 0.45, 0.5)$ are dominant \\cite{Raymond}. 
This commensurability along the $c$ axis would be better compatible with the superlattice structure, and as a result, the superconductivity is robust against the reduction of BL thickness \\cite{Yamanaka}. \nWe here comment on the low temperature anisotropy of $H_{c2}$ of the CeCoIn$_5$(5)\/CeRhIn$_5$(5) superlattice (Fig.\\,3(b)). At $p=2.1$\\,GPa, $H_{c2\\perp}$ exceeds $H_{c2\\parallel}$ at low temperatures. Such a reversed anisotropy of $H_{c2}$ has been reported in CeRhIn$_5$ single crystal above the pressure where the AFM order disappears \\cite{Thompson,Park2008}. However, similar reversed anisotropy ($H_{c2\\perp}>H_{c2\\parallel}$) is preserved at $p=1.8$\\,GPa, where $H_{c2\\parallel}$ exceeds $H_{c2\\perp}$ in CeRhIn$_5$ single crystal and thin film. This indicates that anisotropy reversal of $H_{c2}$ occurs under pressure in 5-UCT CeCoIn$_5$ BLs. Based on these results, we conclude that 2D superconducting CeCoIn$_5$ BLs in CeCoIn$_5$(5)\/CeRhIn$_5$(5) are coupled by the Josephson effect in the whole pressure regime.\n\n\n\n\\begin{figure}[t]\n \t\\begin{center}\n \t\n \t\t\\includegraphics[width=1.0\\linewidth]{Fig4.eps}\n \t\n \t\t\\caption{\n\t\n\t\t(a) Out-of-plane upper critical field $H_{c2\\perp}$ normalized by the orbital-limited upper critical field at $T=0$\\,K, $H_{c2\\perp}\/H_{c2\\perp}^{\\rm orb}(0)$, for CeCoIn$_5$(5)\/CeRhIn$_5$(5) superlattice is plotted as a function of the normalized temperature $T\/T_c$. Two extreme cases, i.e. the result of the bulk CeCoIn$_5$ dominated by Pauli paramagnetic effect and the WHH curve with no Pauli effect, are also shown. (b) Pressure dependence of $H_{c2}^{\\rm orb}(0)$ of CeCoIn$_5$($n$)\/CeRhIn$_5$($n$) superlattices with $n=4$ and 5 for ${\\bm H}$$\\parallel$$c$. For comparison, $H_{c2}^{\\rm orb}(0)$ of CeRhIn$_5$ single crystals for ${\\bm H}$$\\parallel$$a$ and that of CeCoIn$_5$ single crystal for ${\\bm H}$$\\parallel$$c$ are shown. \nSolid and dashed arrows represent $p_c$ for CeCoIn$_5$($n$)\/CeRhIn$_5$($n$) superlattices and CeRhIn$_5$ single crystal, respectively. \n \t\t}\n \t\t\\label{fig:Fig1.eps}\n \t\\end{center}\n \\end{figure}\n\n\n\nApplication of the pressure leads to a drastic change in the nature of superconductivity in the hybrid superlattices. Figure\\,4(a) depicts the $T$-dependence of $H_{c2\\perp}$, normalized by the orbital-limited upper critical field at $T=0$\\,K, $H_{c2\\perp}^{\\rm orb}(0)$, which is obtained from the Werthamer-Helfand-Hohenberg (WHH) formula, $H_{c2\\perp}^{\\rm orb}(0)=-0.69T_c(dH_{c2\\perp}\/dT)_{T_c}$. We also include two extreme cases: $H_{c2\\perp}\/H_{c2\\perp}^{\\rm orb}(0)$ for bulk CeCoIn$_5$ \\cite{Tayama}, in which $H_{c2}$ is dominated by Pauli paramagnetism, and the WHH curve with no Pauli effect. Pressure dramatically enhances $H_{c2\\perp}\/H_{c2\\perp}^{\\rm orb}$. What is remarkable is that near the critical pressure $p_c\\sim 2$\\,GPa at which evidence for magnetic order disappears, $H_{c2\\perp}\/H_{c2\\perp}^{\\rm orb}$ nearly coincides with the WHH curve, indicating that $H_{c2\\perp}$ is limited solely by orbital pair-breaking. \n\n\nThe fact that $H_{c2\\perp}$ approaches the orbital limit provides important insight on superconductivity of the hybrid superlattice. 
In CeCoIn$_5$\/YbCoIn$_5$, where YbCoIn$_5$ is a conventional metal, Pauli pair-breaking effect is weaken in the superlattice compared with the bulk due to local inversion symmetry breaking at the interfaces, which splits the Fermi surfaces with spin texture and thus effectively suppresses the Zeeman effect \\cite{Goh,Maruyama2012}. This leads to the Rashba-induced anisotropic suppression of the Zeeman effect \\cite{Shimozawa2016}, which may be partly responsible for the observed reversed anisotropy $H_{c2\\parallel}\/H_{c2\\perp}<1$ at low temperatures (Fig.\\,3(d)). However, this effect is less important in CeCoIn$_5$($n$)\/CeRhIn$_5$($n$) superlattices compared with CeCoIn$_5$\/YbCoIn$_5$, which is evidenced by the fact that $H_{c2\\perp}\/H_{c2\\perp}^{\\rm orb}(0)$ does not strongly depend on $n$ (Fig.\\,S5 in \\cite{SM}). Moreover, such an effect is not expected to have significant pressure dependence. Therefore, there must be a different mechanism that significantly enhances the Pauli-limiting field $H_{c2\\perp}^{\\rm Pauli}=\\sqrt{2}\\Delta\/g\\mu_B$, where $g$ is the $g$-factor of electrons and $\\mu_B$ is the Bohr magneton. An enhancement of $H_{c2\\perp}^{\\rm Pauli}$ is not due to a dramatic suppression of $g$ by pressure, because it is highly unlikely that the Ce crystalline electric field state, which determines $g$-factor, strongly depends on pressure. Therefore the enhancement of $H_{c2\\perp}^{\\rm Pauli}$ is attributed to a strong increase in the superconducting gap $\\Delta$. This is supported by the observed enhancement of $H_{c2\\perp}\/T_c$ upon approaching $p_c$ shown in Fig.\\,3(a). Because $H_{c2\\perp}\\approx H_{c2\\perp}^{\\rm Pauli} \\ll H_{c2\\perp}^{\\rm orb}(0)$ in the low $p$ regime and $H_{c2\\perp}\\approx H_{c2\\perp}^{\\rm orb}(0) \\ll H_{c2\\perp}^{\\rm Pauli}$ near $p\\sim p_c$, the enhancement of $H_{c2\\perp}\/k_BT_c$ directly indicates an enhancement of $H_{c2\\perp}^{\\rm Pauli}\/T_c$ and hence $\\Delta\/k_BT_c$. This behavior contrasts with observations on CeCoIn$_5$ single crystals, in which $H_{c2}\/T_c$ decreases with pressure. The enhancement of $\\Delta\/k_BT_c$ is caused as a consequence of enhancement of pairing interaction. In spin fluctuation mediated scenario, the pairing interaction is mainly provided by high energy spin fluctuations whose energy scale is well above $\\Delta$ and low energy fluctuations cause the pair-breaking. Since the high energy fluctuations enhance $T_c$ while low energy ones reduce $T_c$, the enhancement of pairing interaction can give rise to the increase of $\\Delta\/k_BT_c$ without accompanying a large enhancement of $T_c$, which is consistent with the observed behavior. Thus, the present results demonstrate that the pairing interaction in CeCoIn$_5$ BLs is strikingly enhanced as a result of the quantum critical magnetic fluctuations that develop in CeRhIn$_5$ BLs, which are injected into CeCoIn$_5$ BLs through the interface. \n\n\nIt is well established that quantum fluctuations strongly influence normal and superconducting properties in many classes of unconventional superconductors. One of the most striking is a diverging effective quasiparticle mass $m^*$ upon approaching the QCP, as reported in cuprate, pnictide and heavy-fermion systems \\cite{Shibauchi2014,Shishido2005,Ramshaw}. Such a mass enhancement gives rise to a corresponding enhancement of $H_{c2}^{\\rm orb}$, which is proportional to $(m^*\\Delta)^2$. Here we stress that there is a fundamental difference in the present hybrid superlattices. 
Figure\\,4(b) depicts the $p$-dependence of $H_{c2\\perp}^{\\rm orb}$ of the CeCoIn$_5$($n$)\/CeRhIn$_5$($n$) superlattices with $n=4$ and 5, along with the result for CeCoIn$_5$ and CeRhIn$_5$ single crystals \\cite{Park2008,Knebel2010}. In contrast to a CeRhIn$_5$ single crystal which shows a sharp peak at the critical pressure, $H_{c2\\perp}^{\\rm orb}$ of the superlattices depends weakly on pressure with no significant anomaly at $p_c$. Compared to the monotonic decrease observed in single crystal CeCoIn$_5$, this weak dependence is consistent with an enlarged gap $\\Delta$, but the results suggest the absence of\nmass enhancement in the CeCoIn$_5$ BL. Such a behavior is in contrast to usual expectations for quantum criticality, details of which deserve further studies. \n\n\n\nIn summary, we have designed and fabricated hybrid superlattice CeCoIn$_5$\/CeRhIn$_5$ formed by alternating atomically thick layers of a $d$-wave heavy fermion superconductor CeCoIn$_5$ and an AFM metal CeRhIn$_5$. \nThe present results demonstrate the importance of the interface between which unconventional superconducting and nonsuperconducting magnetic layers can interact with each other. \nIn particular, the strength of the pairing interaction can be tuned by magnetic fluctuations, or paramagnons, injected through the interface, highlighting that the pairing interaction can be maximized by the critical fluctuations emanating from the magnetic QCP without an accompanying mass enhancement. The fabrication of a wide variety of hybrid superlattices paves a new way to study the entangled relationship between unconventional superconductivity and magnetism, offering a route to exploring the emergence of novel superconducting systems and the roles of their interface. \n\n\n\nWe thank E.-A. Kim, H. Kontani, A. H. Nevidomskyy, R. Peters, and Y. Yanase for fruitful discussions. This work was supported by Grants-in-Aid for Scientific Research (KAKENHI) (Nos. 25220710, 15H02014, 15H02106, and 15H05457) and on Innovative Areas `Topological Material Science' (No. JP15H05852) and `3D Active-Site Science' (No. 26105004) from Japan Society for the Promotion of Science (JPSJ). Work at Los Alamos National Laboratory was performed under the auspices of the U.S. Department of Energy, Office of Basic Energy Sciences, Division of Materials Sciences and Engineering.\n \n \n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{intro}\nNeural networks frequently suffer from the problem of \\textit{over-parameterization}, such that the model can be compressed by a large factor to drastically reduce memory footprint, computation as well as energy consumption while maintaining similar performance. \nThis is especially pronounced for models for computer vision~\\citep{simonyan2014very}, speech recognition~\\citep{pratap2020massively} and large text understanding models such as BERT~\\citep{devlin2018bert}. \nThe improvements obtained from intelligently reducing the number of model parameters has several benefits, such as reduction in datacenter power consumption, faster inference and reduced memory footprint on edge devices such as mobile phones {which also enable decentralized techniques ex. 
federated learning~\\citep{kairouz2019advances}.} \n\nThere are several techniques to reduce model size while maintaining similar generalization performance, such as model quantization~\\citep{polino2018model}, NAS (Neural Architecture Search)~\\citep{elsken2019neural} and model distillation through teacher-student networks~\\citep{gou2021knowledge}. \nFor the scope of this paper, we consider pruning as a technique to remove trainable weights in the network, and save on computation costs for the FBNet family of models. \nThe motivations for this are two-fold. \nFirstly, state-of-the-art models such as FBNet~\\citep{wu2019fbnet} already adopt the best practices in the area of efficient hardware-aware design of convolutional neural network based models, and are widely used across different vision tasks. \nThis makes them suitable baselines to understand whether pruning can offer any performance gain over their already optimized behavior. \n{While there has been limited work on pruning for efficient convolution network models they investigate older architectures such as EfficientNet and MobileNet~\\citep{aflalo2020knapsack} or integrate pruning into expensive techniques such as joint prune-and-architecture search~\\citep{wang2020apq}. }\n\nFor each of the constituent models of the FBNetV3 family (FBNetV3A, FBNetV3B,..., FBNetV3G) we reduce the number of parameters using two pruning based approaches: \n(1) \\textit{Global magnitude-based pruning}: \nStarting with the pre-trained model, we prune all weights whose magnitude is below a threshold chosen in order to achieve a target number of FLOPs for the pruned model; \n(2) \\textit{Uniform magnitude-based pruning}: \nStarting with the pre-trained model, we prune weights in each layer whose magnitude is below a level-specific threshold in order to yield a pruned model achieving a target number of FLOPs with the same sparsity in each layer. \nAfter either pruning method is applied, \nwe fine-tune the pruned model for a certain number of epochs until convergence is reached.\nWithin the scope of our study in this paper, we are mostly interested in the following research questions:\n\\begin{itemize}[leftmargin=*]\n \\item \\requ1: Pruning to improve computation vs.~performance tradeoff. Can a model obtained by pruning a larger FBNetV3 model \\textbf{M1} (optimized using NAS) achieve higher generalization performance than a smaller FBNetV3 model \\textbf{M2} when the pruned model has the same number of FLOPs as \\textbf{M2}? \n \\item \\textbf{RQ2}: Pruning as an efficient paradigm. When a larger FBNetV3 model \\textbf{M1} is available and computational resources are limited, is pruning a faster and less computationally expensive approach to obtain a model with higher accuracy at a desired computation level (FLOPs) than running a full-fledged architecture search?\n\\end{itemize}\n\\textit{Pruning to improve computation vs.~performance tradeoff (\\requ1).}\nThere have been recent research advances in the area of building hardware-aware efficient models~\\citep{deng2020model}. \nThese can provide good generalization performance while adhering to constraints on memory, inference latency and battery power, which are often dictated by the hardware environment where inference happens. 
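For clarity, the two magnitude-based schemes introduced above can be written down in a few lines. The sketch below uses plain binary masks and hypothetical helper names rather than a specific pruning library; in both cases the sparsity argument is chosen so that the pruned model meets a target FLOPs budget, as detailed in Section~\\ref{experimental-setup}.
\\begin{verbatim}
import torch

def uniform_magnitude_masks(weights, sparsity):
    # One threshold per layer: remove the smallest-magnitude `sparsity`
    # fraction of weights in each layer independently.
    masks = {}
    for name, w in weights.items():
        k = int(sparsity * w.numel())
        if k == 0:
            masks[name] = torch.ones_like(w)
            continue
        thresh = w.abs().flatten().kthvalue(k).values
        masks[name] = (w.abs() > thresh).float()
    return masks

def global_magnitude_masks(weights, sparsity):
    # A single threshold shared across all prunable layers, so layers with
    # many small weights end up sparser than the others.
    all_w = torch.cat([w.abs().flatten() for w in weights.values()])
    k = max(1, int(sparsity * all_w.numel()))
    thresh = all_w.kthvalue(k).values
    return {name: (w.abs() > thresh).float() for name, w in weights.items()}

# `weights` maps layer names to the 1x1-convolution weight tensors of the
# pre-trained model; after masking, the model is fine-tuned with the masks
# held fixed.
\\end{verbatim}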
\nExperiments described in existing work on efficient vision models such as ChamNet~\\citep{dai2019chamnet}, MobileNet~\\citep{howard2017mobilenets}, EfficientNet~\\citep{tan2019efficientnet} and FBNetV2~\\citep{wan2020fbnetv2} have shown that it is possible to achieve even higher performances on standard image recognition tasks such as ImageNet~\\citep{deng2009imagenet} at a certain level of FLOPs. \nHowever the efficient design of these models does not solve the over-parameterization problem completely, and none of these approaches study how model pruning can be performed to obtain even better trade-offs between computation and model accuracy. \nThis paper is the first of its kind to understand how we can improve on the state-of-the-art in this problem space. \n\n\\textit{Pruning as an efficient paradigm (\\textbf{RQ2}).}\nIn addition to achieving state-of-the-art performance with reduced FLOPs, we are also interested in understanding how such pruned models can be obtained \\textit{inexpensively} with limited resources that are generally available to a machine learning practitioner who has access to existing optimized models but limited computing resources. \nFor example, the FBNetV3 models are freely available through Facebook's Mobile Model Zoo\\footnote{FBNetV3 models available here \\url{http:\/\/https:\/\/github.com\/facebookresearch\/mobile_cv\/model_zoo\/models\/model_info\/fbnet_v2\/model_info_fbnet_v3.json}}, while EfficientNet models can be obtained at GitHub\\footnote{EfficientNet models available here \\url{https:\/\/github.com\/mingxingtan\/efficientnet}}. \nWhile the techniques needed to obtain computation- and latency-friendly models have been democratized through open-sourcing the source code as well as the models themselves, fully applying these techniques necessitates costly operations such as finding an optimal network topology through meta-learning approaches~\\citep{you2020greedynas} and search algorithms such as Genetic Algorithms (GAs)~\\citep{goldberg1991comparative}.\n\nGiven the high-degree of intractability of this problem, expensive computational resources are often needed in this case, easily exceeding the budget available to a university research laboratory or an angel-stage startup~\\citep{zoph2016neural}. \nWhen a starting model is already available, for example through open-sourcing, the best option would be to perform a cheap modification of the model to fit a certain target FLOPs\/latency requirement. \nIn this paper we have compared the NAS approaches for training FBNetV3 models with our pruning techniques on a computational complexity metric (GPU-hours) to effectively answer \\textbf{RQ2}.\n\n\\textit{Benchmark results.}\nIn addition to experimental outcomes for answering \\requ1 and \\textbf{RQ2}, we also benchmark pruned FBNetV3 models using available open-sourced quantized sparse kernels and conduct ablation studies to obtain additional insights into pruning performance. \nThese results augment our main observations and demonstrate that with existing hardware support, it is possible to deploy pruned cutting-edge computer vision models with practical latency reductions and improve further beyond the performance vs. FLOPs trade-off.\n\nWe conduct our experiments on ImageNet, which is an object-recognition task on a large training dataset of 1.2 million images. 
\nWe show that computationally less intensive techniques such as uniform and global magnitude-based pruning of larger FBNetV3 models can yield higher test accuracies than small models while having the same number of FLOPs. \nGiven a target computation budget for an efficient model, we show that it is more practically advantageous (both in terms of performance and running time) to simply prune the larger model than run a neural architecture search to find the target model from scratch. \n\n{The technique we have employed for pruning (unstructured sparsity) is already tried and tested, however our novelty lies in studying whether efficient image recognition models such as FBNetV3 can be optimized further to improve on the FLOPs-accuracy curve, and the contributions are two fold : (1) FBNets are themselves state-of-the-art in efficient vision models and we achieve better accuracy-FLOPs tradeoff over these models and (2) from the standpoint of computational overhead, we significantly reduce the amount of GPU hours required to obtain such models. Pruning a publicly available NAS optimized model incurs $\\approx$4x less GPU hours to achieve a target FLOPs level, compared to training a full-fledged NAS to obtain a model which has less accuracy at the same FLOPs level.}\n\n\\textit{Paper organization.}\nThe remainder of this paper is organized as follows. \nIn Section~\\ref{related-work}, we describe related work in the area of efficient vision model design and also provide an introduction to different pruning techniques. \nIn Section~\\ref{experimental-setup}, we discuss our experimental setup, including a description of the baseline models and the \\textit{global} and \\textit{uniform} pruning approaches we have employed. \nSection~\\ref{results} describes our main findings and we conclude the paper in Section~\\ref{conclusions}.\n\n\\section{Related Work}~\\label{related-work}\nWe discuss related literature in the areas of \\textit{computationally efficient vision models} and \\textit{model pruning}.\nWithin the scope of our work, we mainly focus on inference efficiency of models in contrast to training efficiency.\n\\par\n\\textit{Computationally efficient vision models:} Neural networks for computer vision are generally characterized by convolutional layers and fully-connected layers, along with blocks such as residual or skip connections. \nThis makes such networks resource intensive in terms of FLOPs, which affects the memory storage and power consumed, and also leads to increased latency. \nIt is of paramount importance to design more efficient networks which can provide higher performance for the same FLOPs or latency level, or even to optimize them appropriately to provide the same performance at reduced FLOPs\/latency. This can be performed either through the design of new simplified layers, for example in deep residual learning~\\citep{he2016deep} or though explicit model compression as in weight quantization~\\citep{polino2018model}.\nExtremely deep networks for image recognition often suffer from not only high complexity and inference latency, but also from the issue of \\textit{vanishing gradients}~\\citep{pascanu2013difficulty}. This was addressed through deep residual networks which effectively simplified network design through skip-connections. \nMobileNets~\\citep{howard2017mobilenets} are one of the earlier approaches to building small low-latency networks by using depthwise separable convolutions with two parameters, \\textit{width} and \\textit{resolution} multipliers. 
They demonstrate the effectiveness of MobileNets across different vision tasks, such as face embeddings and object detection. MobileNetV2~\\citep{sandler2018mobilenetv2} extends MobileNets by utilizing inverted residual filter structures and linear bottlenecks, obtaining improvements on state-of-the-art models both in terms of accuracy and computational complexity. ShuffleNets~\\citep{zhang2018shufflenet} propose dedicated residual units where 1\\ensuremath{\\times}1\\xspace convolutions are replaced with pointwise group convolutions and channel shuffling reducing FLOPs computations. \n\\par\nMore recently, the focus on building efficient neural network models has shifted to techniques that treat the design of efficient networks as a search problem, falling under the umbrella of Neural Architecture Search (NAS).\nEfficientNets~\\citep{tan2019efficientnet} propose a novel scaling method which adjusts the network's length, width, and resolution to optimize performance subject to target memory and FLOPs constraints. They also define a novel baseline that is optimized by a multi-objective neural architecture search. The FBNet collections of models---FBNet~\\citep{wu2019fbnet}, FBNetV2~\\citep{wan2020fbnetv2} and FBNetV3~\\citep{dai2021fbnetv3}---employ neural architecture search to obtain highly-optimized models that improve on the state-of-the-art for different visual understanding tasks. \nFBNet frames the architecture search as a differentiable meta-learning problem with gradient based techniques, namely \\textit{DNAS}---Differentiable Neural Architecture Search---by \\cite{wu2019fbnet}, and avoids selecting the optimized model over a discrete set. \nThe subsequent entry in this collection, FBNetV2, expands the search space over conventional DNAS, and employs a masking scheme to maintain the same level of computational complexity while searching over this expanded space. \nFBNetV3 further improves on the state-of-the-art by employing Neural Architecture Recipe Search (NARS) and searching over the space of not only architectures, but also corresponding recipes (which are generally hyper-parameters). In this paper, we consider FBNetV3 models as our baselines as they are state-of-the-art. \nWe are interested in understanding if they are overparameterized and evaluate how much model pruning can improve performance at a certain FLOPs level over the state-of-the-art in this family of models.\n\\par\n\\textit{Model Pruning:} Modern neural networks, particularly those processing complex sensory inputs (such as speech, vision and language) for perception applications, are often over-parameterized. \nIt is only to be expected that we should be able to compress such networks significantly to maintain the same level of performance at decreased level of computation (fewer weights and reduced FLOPs), memory footprint and power consumption. Foundational efforts in this space include the \\textit{Optimal Brain Surgeon}~\\citep{hassibi1993second} and \\textit{Optimal Brain Damage}~\\citep{lecun1990optimal}. \nRecently the idea of network pruning has been formalized through the lottery ticket hypothesis~\\citep{frankle2018lottery}, which claims that randomly initialized, feed-forward networks have winning sub-networks that perform just as well as the original network on an unseen test dataset. \nModel pruning is generally of two types: unstructured and structured pruning. 
\nUnstructured pruning, as the name suggests, doesn't adhere to any structure and prunes neurons based on chosen criteria (such as magnitude). This has the advantage of providing higher performance, but is difficult to implement in hardware, as it needs dedicated support for efficient sparse matrix multiplications. \nMeanwhile, structured pruning is the practice of removing entire groups of neurons (e.g., blocks within the weight matrix, or channels in convolutional neural networks). \nThis is easy to implement without dedicated hardware support, but has the issue of lower generalization performance than unstructured pruning~\\citep{yao2019balanced}. \nIn the literature, there have also been several studies, for example investigating whether rewinding (training from scratch with a fixed mask) can perform just as well as the fine-tuning on top of the original unpruned network~\\citep{renda2020comparing}. {~\\cite{blalock2020state} provide an overview survey of recent advances and open problems in neural network pruning.}\n\\par\nIn the research area of designing efficient networks for computer vision, there has not been much focus on understanding how pruning can be applied to the current generation of models.\nMost literature on pruning is based on older networks such as VGGNet, ResNet~\\citep{he2016deep}, and MobileNet~\\citep{sandler2018mobilenetv2}.\nOur work improves upon these existing studies by understanding how pruning can improve the FLOPs-accuracy tradeoff over existing state-of-the-art networks.\n\n\\section{Pruning Techniques and Setup}\n\\label{experimental-setup}\nIn this section, we describe the main components of our techniques and experimental setup, including \\textit{Baseline Models}, \\textit{Pruning Techniques}, \\textit{Latency Measurement} and \\textit{Metrics}. We have mainly used standard splits of the ImageNet dataset, further details are in Section~\\ref{dataset} of the appendix.\n\n\\subsection{Baseline Models}\\label{baseline-models}\n\\cite{dai2020fbnetv3} address the previous limitations of NAS-based architecture search where these approaches can only search over architectures given a training recipe (set of hyperparameters), and thus cannot optimize over both. \nAs described in Section~\\ref{related-work}, the most recent state-of-the-art models are based on NARS (Neural Architecture-Recipe Search), which we select as baseline models. Table~\\ref{tab:baseline-models} lists the accuracy of FBNetV3 models~\\citep{dai2021fbnetv3} on the ImageNet classification task, along with the number of model parameters and computation complexity in terms of FLOPs. \n\\par\nEach baseline model consists of multiple IRF (Inverted Residual Filter) blocks, which contain convolutional layers of different kernel sizes. \nFor our experiments, we are mostly interested in 1\\ensuremath{\\times}1\\xspace convolutions as potentially prunable, since within each FBNetV3 model, the 1\\ensuremath{\\times}1\\xspace convolution layers constitute >80\\% of total model FLOPs for all models in the family, and the open-sourced sparsity kernel support we use for latency benchmarking is available only for fully connected layers. 
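The FLOP share quoted above (more than 80\\% from 1\\ensuremath{\\times}1\\xspace convolutions) can be audited directly from the layer shapes. The sketch below counts multiply-accumulate operations of every convolution via forward hooks; the function name, the ImageNet-sized dummy input and the decision to ignore bias terms are our illustrative assumptions.
\\begin{verbatim}
import torch
import torch.nn as nn

def conv_flops_breakdown(model, input_shape=(1, 3, 224, 224)):
    # Returns total convolution FLOPs and the part contributed by 1x1
    # convolutions (the layers treated as prunable in this work).
    flops = {'total': 0, 'one_by_one': 0}

    def hook(module, inputs, output):
        out_h, out_w = output.shape[-2:]
        # Multiply-accumulates per output position equal the number of
        # weights in the kernel (this also handles grouped convolutions).
        layer_flops = module.weight.numel() * out_h * out_w
        flops['total'] += layer_flops
        if module.kernel_size == (1, 1):
            flops['one_by_one'] += layer_flops

    handles = [m.register_forward_hook(hook)
               for m in model.modules() if isinstance(m, nn.Conv2d)]
    with torch.no_grad():
        model(torch.zeros(input_shape))
    for h in handles:
        h.remove()
    return flops
\\end{verbatim}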
\nA 1\\ensuremath{\\times}1\\xspace convolution can be transformed into an equivalent fully connected layer with a few tensor reshape operations without any significant loss of performance or latency.\n\nFor each initial and target FBNetV3 model $X$ and $Y$, where $X$ is larger than $Y$, we prune $X$ to a \\emph{sparsity level} of $S$ so that the FLOP count is the same as for $Y$. The number of FLOPs consumed by a linear layer of sparsity $S$ is proportional to the number of sparse matrix multiplications performed and is given by $S * F$, where $F$ is the corresponding dense FLOPs. \nThus if $F_{1\\ensuremath{\\times}1\\xspace}(X)$ is the number of FLOPs consumed by the 1\\ensuremath{\\times}1\\xspace convolution layers and $F(x)$ is the total number of FLOPs consumed by model $X$, we have:\n\\begin{equation}\\label{flops-eq}\n S = {(F(X) - F(Y))}\/{F_{1\\ensuremath{\\times}1\\xspace}(X)}\n\\end{equation}\nHence, sparsity measures the fraction of 1\\ensuremath{\\times}1\\xspace convolution weights removed, and so \nhigher sparsity indicates a smaller model. \nFor the uniform pruning scnario, Table~\\ref{sparsity-table} shows the amount of sparsity required to prune each larger FBNetV3 model to a smaller one based on Eq.~(\\ref{flops-eq}). For global pruning, (\\ref{flops-eq}) does not hold, and we compute the target sparsities empirically from the layer shapes instead with details provided in Section~\\ref{global_flops}.\nWe prune each larger FBNetV3 model to a discrete FLOPs target based on a defined set of smaller models in the family, and not to a continuous range of FLOPs values, as it makes it easier to compare models directly based on a given computation budget. \nIf we can demonstrate that for the same computation level, the pruned larger FBNetV3 model has higher performance than a smaller model with the same FLOPs, it is sufficient to demonstrate that we can improve on the FLOPs-accuracy curve over the state-of-the-art.\n\n\\subsection{Pruning Techniques}\\label{pruning-techniques}\nIn this paper, we utilize a pre-trained FBNetV3 model with higher number of FLOPs without training an image classification model from scratch with sparsity, which would be time consuming and computationally intensive. There are several approaches in the literature such as prune-and-fine-tune~\\citep{han2015learning} and iterative pruning with sparsity scheduling~\\citep{frankle2018lottery}. \nWe have utilized the former for our experiments, as although studies have shown that iterative and incremental pruning approaches lead to better generalization performance, they typically require training for high number of epochs, need tuning and selection of optimal sparsity schedules and are computationally resource intensive. We have therefore not considered them in our experiments. {For our prune and fine-tune experiments, we have used 8-GPU boxes, with each box having Nvidia V100 (Volta) 32G GPUs.}\nAs described in Section~\\ref{intro}, we perform both global and magnitude-based pruning experiments. For the latency benchmarking, we also perform magnitude-based uniform pruning with a sparse block size of $1\\times4$ as explained in Section~\\ref{latency}.\n\nWe have conducted a hyper-parameter tuning for the learning rate parameter, with LR {values in the set} \\{4e-5, 8e-5, 1.6e-4\\}, as fine-tuning generally admits smaller learning rates than training from scratch. 
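As an aside, the reshaping of a 1\\ensuremath{\\times}1\\xspace convolution into an equivalent fully connected layer (mentioned in Section~\\ref{baseline-models} and relied upon for the latency benchmarking in Section~\\ref{latency}) can be sketched as follows; stride 1, no padding and ungrouped weights are assumed, and the helper names are ours.
\\begin{verbatim}
import torch.nn as nn

def as_linear(conv1x1):
    # Reuse the weights of a (stride-1, unpadded, ungrouped) 1x1 convolution
    # as a fully connected layer acting on each spatial position.
    fc = nn.Linear(conv1x1.in_channels, conv1x1.out_channels,
                   bias=conv1x1.bias is not None)
    fc.weight.data.copy_(conv1x1.weight.data.view(conv1x1.out_channels, -1))
    if conv1x1.bias is not None:
        fc.bias.data.copy_(conv1x1.bias.data)
    return fc

def forward_as_linear(fc, x):
    # x: (N, C, H, W). Move channels last, apply the linear map per pixel,
    # and move channels back; the result matches the original convolution.
    n, c, h, w = x.shape
    y = fc(x.permute(0, 2, 3, 1).reshape(-1, c))
    return y.reshape(n, h, w, -1).permute(0, 3, 1, 2)
\\end{verbatim}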
We have found that using the same learning rate for all models, along with the same hyper-parameter settings used for training the seed model is sufficient to obtain pruned networks which are superior to the baseline FBNetV3 models. Hence minimal hyper-parameter tuning was required for our experiments and we have used values of settings such as weight decay and momentum to be the same as those used for training the baseline FBNetV3 models. During fine-tuning after pruning, we have used a smoothed validation loss to stop the process early after a convergence tolerance (0.01\\%) is reached between two consecutive epochs. Generally, we have observed fine-tuning to converge around $\\sim$250 epochs\n\n\\subsection{latency measurements and Metrics} \\label{latency}\nWe are interested not only in the sparsity level of our pruned models and the image recognition performance, but also in metrics which potentially improve due to model sparsity, such as number of parameters, the FLOP count and the model latency. \nFor reporting model performance under pruning, we use standard image recognition metrics such as Top-1 and Top-5 {test} accuracies.\nWe measure overall model sparsity, which is different to the layer sparsity since we \nonly prune 1\\ensuremath{\\times}1\\xspace convolution layers, as explained in Section~\\ref{baseline-models}. \nWe report the model FLOPs, because this metric captures the computational footprint of the model and its power consumption. \n\nLast, we record the total latency (in ms.) under pruning. The sparse kernels used in our experiments are already in open-source and released under the PyTorch sparse quantization library\\footnote{https:\/\/github.com\/pytorch\/pytorch\/blob\/master\/torch\/ao\/nn\/sparse\/quantized\/linear.py}. Prior to using these kernels, we perform uniform layer-wise block-based pruning with block sizes of $1\\times4$. Magnitude based pruning is implemented at block level, and the model is quantized to 8-bit integers (int8) before latency benchmarking{, which is performed on Intel CPUs designed using the Skylake micro-architecture.}\nWhile we would expect sparsity to translate to tangible inference speedups, this is highly dependent on the sparse kernel support provided by hardware. \nCurrent hardware is not well-suited for unstructured randomly sparse matrix multiplications and tend to do better with structured sparsity in models~\\citep{anwar2017structured}. We have utilized block sparsity within the weight matrix for latency experiments.\nHowever this often tends to come at a decreased level of model performance. \nThe design of highly performant sparse models under structured sparsity with reasonable inference speedups remains an important research topic outside the scope of this paper.\n\n\n\\section{Results}~\\label{results}\n\\subsection{Pruned FBNetV3 model performance}\\label{pruning_performance}\nTo answer \\textbf{RQ1}, we consider the family of FBNetV3 models as baselines and seed models for further pruning. For each pair of models $X$, $Y$ in the family, we calculate the amount of sparsity required to prune the larger model $X$ to a model that consumes the same number of FLOPs as the target smaller model $Y$, via Equation~\\ref{flops-eq}.\nThere are 21 potential seed and target model pairs, however we conduct pruning experiments only for a depth of 2 for tractability. 
For example, given FBNetV3E as the seed, we only prune it to FLOPs targets corresponding to FBNetV3D and FBNetV3C.\nTable~\\ref{flops_table} presents the accuracy and number of parameters of the pruned models at each target FLOPs level. The improvement in performance is apparent even at lower FLOPs targets, where we might expect baseline models such as FBNetV3A to not be over-parameterized. \nFor example, pruning FBNetV3C to a target of 356.6 MFLOPs obtains a network which is 1.43\\% better than FBNetV3A. Figure~\\ref{flops-curve} plots the Top-1 ImageNet testing accuracy vs. FLOPs for the best pruned models as seen from Table~\\ref{flops_table}. This clearly shows that pruning FBNetV3 models with minimal fine-tuning can significantly improve on the state-of-the-art for FLOPs vs. Top-1 accuracy trade-off. \nThis analysis is performed for both uniform layer-wise and global magnitude-based prune with fine-tune settings. Global pruning ranks the weights of the entire network in contrast to uniform layer-wise pruning, which ranks each layer's weights to determine the sparsity mask. It would be expected that global pruning performs better than uniform pruning for the same target sparsity level or number of non-sparse parameters. However in our experiments we determine the pruning threshold based on FLOPs targets, and find global pruning to require higher sparsity levels, which results in uniform pruning outperforming global pruning in Top-1 ImageNet accuracy in most cases.\n\\begin{table}[]\n\\centering\n\\caption{Sparsity level (in percentage) and performance of pruned FBNetV3 networks on ImageNet dataset for different target MFLOPs. The best accuracy obtained at each target FLOPs level is highlighted in bold.}\n\\label{sparsity-table}\n\\label{flops_table}\n\\setlength{\\tabcolsep}{2pt}\n\\begin{tabular}{|l|l|l|l|l|l|l|l|l|l|}\n\\hline\n\\multicolumn{1}{|c|}{\\multirow{2}{*}{\\begin{tabular}[c]{@{}c@{}}Seed\\\\ network\\\\FBNetV3\\_\\end{tabular}}} &\n \\multicolumn{1}{c|}{\\multirow{2}{*}{\\begin{tabular}[c]{@{}c@{}}Target\\\\ network\\\\FBNetV3\\_\\end{tabular}}} &\n \\multicolumn{1}{c|}{\\multirow{2}{*}{\\begin{tabular}[c]{@{}c@{}}Target\\\\ MFLOPs\\end{tabular}}} &\n \\multirow{2}{*}{\\begin{tabular}[c]{@{}l@{}}Baseline\\\\ Accuracy\\end{tabular}} &\n \\multicolumn{3}{c|}{Uniform pruning} &\n \\multicolumn{3}{c|}{Global pruning} \\\\ \\cline{5-10} \n\\multicolumn{1}{|c|}{} &\n \\multicolumn{1}{c|}{} &\n \\multicolumn{1}{c|}{} &\n &\n \\multicolumn{1}{c|}{\\begin{tabular}[c]{@{}c@{}}Sparsity \\\\ level(\\%)\\end{tabular}} &\n \\begin{tabular}[c]{@{}l@{}}Top-1 \\\\ Acc.\\end{tabular} &\n \\multicolumn{1}{c|}{Gain(\\%)} &\n \\multicolumn{1}{c|}{\\begin{tabular}[c]{@{}c@{}}Sparsity \\\\ level(\\%)\\end{tabular}} &\n \\multicolumn{1}{c|}{\\begin{tabular}[c]{@{}c@{}}Top-1 \\\\ Acc.\\end{tabular}} &\n Gain(\\%) \\\\ \\hline\nB & A & 356.6 & 79.6 & 26.59 & 80.308 & 0.887 & 39.5 & 80.232 & 0.793 \\\\ \\hline\n C & A & 356.6 & 79.6 & 40.7 & \\textbf{80.738} & 1.43 & 57.9 & \\textbf{80.476} & 1.1 \\\\ \\hline\n C & B & 461.6 & 80.2 & 19.4 & 80.996 & 0.992 & 28.9 & 80.998 & 0.985 \\\\ \\hline\n D & B & 461.6 & 80.2 & 31.47 & \\textbf{81.116} & 1.142 & 43.7 & \\textbf{81.08} & 1.097 \\\\ \\hline\n D & C & 557.0 & 80.8 & 15.04 & 81.278 & 0.591 & 21.5 & \\textbf{81.208} & 1.256 \\\\ \\hline\n E & C & 557.0 & 80.8 & 31.0 & \\textbf{81.282} & 0.596 & 43.6 & 81.184 & 0.475 \\\\ \\hline\n E & D & 644.4 & 81.0 & 17.8 & 81.118 & 0.145 & 25.8 & 81.388 & 0.479 \\\\ \\hline\n F & D & 644.4 & 81.0 & 38.2 & 
\\textbf{82.00} & 1.234 & 67.8 & \\textbf{81.484} & 0.597 \\\\ \\hline\n F & E & 762.0 & 81.3 & 29.8 & \\textbf{82.19} & 1.094 & 54.7 & \\textbf{81.97} & 0.824 \\\\ \\hline\nG & E & 762.0 & 81.3 & 71.67 & 81.166 & -0.16 & 85.5 & 79.934 & -1.68 \\\\ \\hline\n G & F & 1181.6 & 82.0 & 49.69 & \\textbf{82.528} & 0.643 & 63.8 & \\textbf{82.454} & 0.553 \\\\ \\hline\n\n\\end{tabular}\n\\end{table}\n\n\\begin{figure}[h] \n\\centering\n\\includegraphics[scale=0.40]{new_pruned_paper.pdf}\n\\caption{FLOPs vs. performance (ImageNet Top-1 acc.) for different pruned FBNetV3 networks. For comparison, the existing FBNetV3 networks are also shown here.}\n\\label{flops-curve}\n\\end{figure}\n\n\\subsection{Pruning Complexity}\nIn addition to demonstrating the improvement over state-of-the-art obtained by pruning FBNetV3 models, \nit is also important to quantify the reduction in computational complexity obtained in pruning a larger FBNetV3 model compared to training an FBNetV3 model directly through NAS (Network Architecture Search). \n\\textbf{RQ2} (pruning for efficient model search) asks if the pruning and subsequent fine-tuning approach in Section~\\ref{pruning_performance} is faster than a full-fledged neural architecture search. \nDuring pruning and subsequent fine-tuning, we train the pruned networks till the validation loss converges to within a pre-specified tolerance, as described in Section~\\ref{pruning-techniques}.\nThe time needed is generally less than when training the original FBNetV3 models, which runs for 400 epochs. \nThe number of GPU-hours is computed as (number of training GPU nodes) * (number of GPUs per node) * (training time to convergence) for each network.\nIn Table~\\ref{gpu-hours}, for each of the best performing uniformly-pruned models in Section~\\ref{pruning_performance} we report the number of GPU-hours consumed by the prune and fine-tune strategy, along with the GPU-hours consumed when obtaining a FBNetV3 model through architecture search using the method described in~\\cite{dai2020fbnetv3}. \nThe results are quite conclusive---we not only obtain pruned models superior in performance to the original neural search optimized models, but also as described in Section~\\ref{intro}, computational cost is significantly lower when starting from a pre-trained model with higher FLOPs. \nGiven the performance improvements obtained with lower computational resources, this approach is beneficial for an experimental setting where researchers have access to open-sourced pre-trained models and limited GPU resources, for example in a small startup or an academic environment. \nWe observe that the degree of speedup reduces as the network size gets bigger (e.g., in FBNetV3A vs. FBNetV3C) due to higher training time to convergence.\nNevertheless, we still obtain a speedup of 3-5 times compared to a full NAS (Neural Architecture Search). \n\n\\begin{table}[]\n\\centering\n\\caption{Computation speedup in term of GPU-hours when comparing NAS (neural Architecture Search) with pruning and fine-tuning approaches. 
{The selected seed networks are drawn from those in Table~\\ref{flops_table} with the best performance at target FLOPs.}}\n\\label{gpu-hours}\n\\begin{tabular}{|l|l|l|l|}\n\\hline\n\\multicolumn{1}{|c|}{\\begin{tabular}[c]{@{}c@{}}Target FLOPs \\\\ (FBNetV3 Model)\\end{tabular}} &\n \\multicolumn{1}{c|}{\\begin{tabular}[c]{@{}c@{}}GPU-hours \\\\ in NAS\\end{tabular}} &\n \\multicolumn{1}{c|}{\\begin{tabular}[c]{@{}c@{}}GPU-hours\\\\ in pruning \\\\ and fine-tuning\\end{tabular}} &\n \\multicolumn{1}{c|}{\\begin{tabular}[c]{@{}c@{}}Computational cost\\\\ speedup\\end{tabular}} \\\\ \\hline\n356.6 (FBNetV3A) & 10.7k & 2.240k & 4.77 \\\\ \\hline\n557.0 (FBNetV3C) & 10.7k & 2.496k & 4.28 \\\\ \\hline\n762.0 (FBNetV3E) & 10.7k & 3.456k & 3.09 \\\\ \\hline\n\\end{tabular}\n\\end{table}\n\\subsection{Latency Experiments}\nWe also measure the latency-performance tradeoff for the pruned FBNetV3G models. FBNetV3G is the largest model in the family and so is expected to have the best generalization performance under high sparsity levels. \nAs described in Section~\\ref{latency}, we prune the network using block sparsity (where the block size is $1\\times4$) to sparsity levels in the set \\{40\\%, 50\\%, 60\\%\\}. \nWe have not utilized lower sparsity levels, as we have observed that for the selected kernels we need at least 40\\% sparsity to yield any significant latency benefits. \nWe have pruned all 1\\ensuremath{\\times}1\\xspace convolution layers uniformly here and subsequently converted them to fully-connected layers for compatibility with the quantized sparse kernels. \nIn Figure~\\ref{latency-curve}, we present the Top-1 ImageNet accuracy vs. latency curve after pruning the FBNetV3G network for different sparsity levels. \nThe pruned FBNetV3G models show marked performance reduction with lower latency as expected, with a sparsity level of 60\\% translating to around 7\\% absolute accuracy reduction with a latency reduction of 18 ms (16\\% relative). While the 1\\ensuremath{\\times}1\\xspace convolution layers account for >80\\% of FLOPs, they only constitute 25\\% of overall network latency. \nThis is consistent with previous literature~\\citep{dudziak2020brp} which shows that computational complexity (ex. FLOPs) and latency are not well-correlated, and indeed the latter is more dependent on layer shapes. \nThis result underscores the need to develop more latency-friendly pruning techniques which can potentially improve on the state-of-the-art in this domain.\n\n\\begin{figure}\n \\centering\n \\begin{subfigure}[b]{0.49\\textwidth}\n \\centering\n \\includegraphics[width=0.75\\textwidth]{latency}\n \\caption{Latency vs. 
Top-1 accuracy on ImageNet}\n \\label{latency-curve}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[b]{0.5\\textwidth}\n \\centering\n \\includegraphics[width=0.75\\textwidth]{sparsity_pattern.png}\n \\caption{Layer-wise sparsity pattern for FBNetV3E}\n \\label{sparsity_pattern}\n \\end{subfigure} \\\\\n \\begin{subfigure}[b]{0.5\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{flops_curve.png}\n \\caption{Layer-wise FLOPs distribution for FBNetV3E}\n \\label{flops_pattern}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[b]{0.4\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{boxplots}\n \\caption{Performance distribution for layer type}\n \\label{sensitivity_pattern}\n \\end{subfigure}\n \\caption{Latency benchmarking on FBNetV3G for different sparsity levels \\{40\\%, 50\\%, 60\\%\\} and layer-wise sparsity\/FLOPs\/accuracy sensitivity for a pruned FBNetV3E network.}\n \\label{fig:three graphs}\n\\end{figure}\n\\subsection{Insights into pruning experiments}\nOur pruning experiments demonstrate that we can improve on the state-of-the-art FBNetV3 models in generalization performance for a given FLOPs level. In this subsection, we obtain an insight into\n(1) the sparsity pattern under global magnitude-based pruning and \n(2) the sensitivity of each layer when pruned in isolation under uniform layer-wise magnitude pruning (sparsity level of 95\\%). For (1) in Figure~\\ref{sparsity_pattern}, we plot the amount of sparsity obtained per 1\\ensuremath{\\times}1\\xspace convolution layer. The model being considered is an FBNetV3E network pruned to a sparsity level of 43.6\\%, to the same FLOPs level as FBNetV3C and subsequently fine-tuned. We note that the sparsity level in lower layers is lower which is potentially required for maintaining the performance~\\cite{}. Higher sparsity can be admitted in upper layers of the network where it has learnt more redundant representations. SE (Squeeze and Excitation) 1\\ensuremath{\\times}1\\xspace convolution layers generally tend to get pruned more compared to other layers, with the sparsity being >99\\% for two such SE layers in stage $xif5\\_0$. This indicates that we can also consider revisiting SE layer role in FBNetV3 networks, and even remove entire layers in future work to yield additional latency and FLOPs benefits. \n\nFor analysis (2) we prune each 1\\ensuremath{\\times}1\\xspace convolution layer in isolation at a sparsity target of 95\\% and record the Top-1 test accuracy obtained on ImageNet dataset. For each type of layer, PW:expansion, PWL: bottleneck, SE: Squeeze-Excitation we plot the distribution of accuracies in Figure~\\ref{sensitivity_pattern}. We observe that the PW and PWL layers are most sensitive to high sparsity, while SE layers are able to retain performance adequately. We could also avoid pruning the most sensitive layers (appearing as outliers in the figure) to maintain generalization performance. This observation corroborates findings from analysis (1), and motivates us to revisit the role of squeeze-excitation layers in future work. \n\n\\section{Conclusions}~\\label{conclusions}\nIn this paper, we have investigated the problem of improving on the current state-of-the-art FLOPs vs. performance trade-off for FBNets which have been pre-optimized by NAS (Neural Architecture Search). We have employed network pruning techniques, and our results demonstrate that we can further improve on performance over FBNetV3 at a given FLOPs target through global as well as uniform magnitude-based pruning. 
\n\n\\section{Introduction}\\label{intro}\nNeural networks frequently suffer from the problem of \\textit{over-parameterization}, such that the model can be compressed by a large factor to drastically reduce memory footprint, computation, and energy consumption while maintaining similar performance. \nThis is especially pronounced for models for computer vision~\\citep{simonyan2014very}, speech recognition~\\citep{pratap2020massively} and large text understanding models such as BERT~\\citep{devlin2018bert}. \nThe improvements obtained from intelligently reducing the number of model parameters have several benefits, such as reduction in datacenter power consumption, faster inference and reduced memory footprint on edge devices such as mobile phones {which also enables decentralized techniques such as federated learning~\\citep{kairouz2019advances}.} \n\nThere are several techniques to reduce model size while maintaining similar generalization performance, such as model quantization~\\citep{polino2018model}, NAS (Neural Architecture Search)~\\citep{elsken2019neural} and model distillation through teacher-student networks~\\citep{gou2021knowledge}. \nFor the scope of this paper, we consider pruning as a technique to remove trainable weights in the network, and save on computation costs for the FBNet family of models. \nThe motivations for this are two-fold. \nFirstly, state-of-the-art models such as FBNet~\\citep{wu2019fbnet} already adopt the best practices in the area of efficient hardware-aware design of convolutional neural network based models, and are widely used across different vision tasks. \nThis makes them suitable baselines to understand whether pruning can offer any performance gain over their already optimized behavior. \n{Secondly, while there has been limited work on pruning efficient convolutional network models, it investigates older architectures such as EfficientNet and MobileNet~\\citep{aflalo2020knapsack} or integrates pruning into expensive techniques such as joint prune-and-architecture search~\\citep{wang2020apq}. }\n\nFor each of the constituent models of the FBNetV3 family (FBNetV3A, FBNetV3B, ..., FBNetV3G), we reduce the number of parameters using two pruning-based approaches: \n(1) \\textit{Global magnitude-based pruning}: \nStarting with the pre-trained model, we prune all weights whose magnitude is below a threshold chosen in order to achieve a target number of FLOPs for the pruned model; \n(2) \\textit{Uniform magnitude-based pruning}: \nStarting with the pre-trained model, we prune weights in each layer whose magnitude is below a layer-specific threshold in order to yield a pruned model achieving a target number of FLOPs with the same sparsity in each layer.
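\nAs an illustration only, the following minimal PyTorch sketch shows how these two pruning modes could be realized with the standard \\texttt{torch.nn.utils.prune} utilities; it assumes a generic model whose prunable layers are its 1\\ensuremath{\\times}1\\xspace convolutions and is not the exact implementation used in our experiments.\n\\begin{verbatim}\nimport torch.nn as nn\nimport torch.nn.utils.prune as prune\n\ndef pointwise_convs(model):\n    # (module, parameter-name) pairs for all 1x1 convolutions,\n    # the only layers treated as prunable in this sketch.\n    return [(m, 'weight') for m in model.modules()\n            if isinstance(m, nn.Conv2d) and m.kernel_size == (1, 1)]\n\ndef prune_uniform(model, sparsity):\n    # Uniform magnitude pruning: every 1x1 conv layer loses the same\n    # fraction of its smallest-magnitude weights.\n    for module, name in pointwise_convs(model):\n        prune.l1_unstructured(module, name=name, amount=sparsity)\n\ndef prune_global(model, sparsity):\n    # Global magnitude pruning: weights of all 1x1 conv layers are\n    # ranked together, so per-layer sparsity may differ.\n    prune.global_unstructured(pointwise_convs(model),\n                              pruning_method=prune.L1Unstructured,\n                              amount=sparsity)\n\\end{verbatim}\nHere \\texttt{sparsity} is the target sparsity level whose computation from FLOPs targets is described in Section~\\ref{experimental-setup}.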
\nAfter either pruning method is applied, \nwe fine-tune the pruned model for a certain number of epochs until convergence is reached.\nWithin the scope of our study in this paper, we are mostly interested in the following research questions:\n\\begin{itemize}[leftmargin=*]\n \\item \\textbf{RQ1}: Pruning to improve computation vs.~performance tradeoff. Can a model obtained by pruning a larger FBNetV3 model \\textbf{M1} (optimized using NAS) achieve higher generalization performance than a smaller FBNetV3 model \\textbf{M2} when the pruned model has the same number of FLOPs as \\textbf{M2}? \n \\item \\textbf{RQ2}: Pruning as an efficient paradigm. When a larger FBNetV3 model \\textbf{M1} is available and computational resources are limited, is pruning a faster and less computationally expensive approach to obtain a model with higher accuracy at a desired computation level (FLOPs) than running a full-fledged architecture search?\n\\end{itemize}\n\\textit{Pruning to improve computation vs.~performance tradeoff (\\textbf{RQ1}).}\nThere have been recent research advances in the area of building hardware-aware efficient models~\\citep{deng2020model}. \nThese can provide good generalization performance while adhering to constraints on memory, inference latency and battery power, which are often dictated by the hardware environment where inference happens. \nExperiments described in existing work on efficient vision models such as ChamNet~\\citep{dai2019chamnet}, MobileNet~\\citep{howard2017mobilenets}, EfficientNet~\\citep{tan2019efficientnet} and FBNetV2~\\citep{wan2020fbnetv2} have shown that it is possible to achieve even higher performance on standard image recognition tasks such as ImageNet~\\citep{deng2009imagenet} at a given FLOPs level. \nHowever, the efficient design of these models does not solve the over-parameterization problem completely, and none of these approaches study how model pruning can be performed to obtain even better trade-offs between computation and model accuracy. \nThis paper is the first of its kind to understand how we can improve on the state-of-the-art in this problem space. \n\n\\textit{Pruning as an efficient paradigm (\\textbf{RQ2}).}\nIn addition to achieving state-of-the-art performance with reduced FLOPs, we are also interested in understanding how such pruned models can be obtained \\textit{inexpensively} by a machine learning practitioner who has access to existing optimized models but only limited computing resources. \nFor example, the FBNetV3 models are freely available through Facebook's Mobile Model Zoo\\footnote{FBNetV3 models available here \\url{https:\/\/github.com\/facebookresearch\/mobile_cv\/model_zoo\/models\/model_info\/fbnet_v2\/model_info_fbnet_v3.json}}, while EfficientNet models can be obtained at GitHub\\footnote{EfficientNet models available here \\url{https:\/\/github.com\/mingxingtan\/efficientnet}}. 
\nWhile the techniques needed to obtain computation- and latency-friendly models have been democratized through open-sourcing the source code as well as the models themselves, fully applying these techniques necessitates costly operations such as finding an optimal network topology through meta-learning approaches~\\citep{you2020greedynas} and search algorithms such as Genetic Algorithms (GAs)~\\citep{goldberg1991comparative}.\n\nGiven the high degree of intractability of this problem, expensive computational resources are often needed in this case, easily exceeding the budget available to a university research laboratory or an angel-stage startup~\\citep{zoph2016neural}. \nWhen a starting model is already available, for example through open-sourcing, the best option would be to perform a cheap modification of the model to fit a certain target FLOPs\/latency requirement. \nIn this paper, we compare the NAS approach used for training FBNetV3 models with our pruning techniques on a computational complexity metric (GPU-hours) to effectively answer \\textbf{RQ2}.\n\n\\textit{Benchmark results.}\nIn addition to experimental outcomes for answering \\textbf{RQ1} and \\textbf{RQ2}, we also benchmark pruned FBNetV3 models using available open-sourced quantized sparse kernels and conduct ablation studies to obtain additional insights into pruning performance. \nThese results augment our main observations and demonstrate that with existing hardware support, it is possible to deploy pruned cutting-edge computer vision models with practical latency reductions and improve further on the performance vs. FLOPs trade-off.\n\nWe conduct our experiments on ImageNet, which is an object-recognition task on a large training dataset of 1.2 million images. \nWe show that computationally less intensive techniques such as uniform and global magnitude-based pruning of larger FBNetV3 models can yield higher test accuracies than smaller models with the same number of FLOPs. \nGiven a target computation budget for an efficient model, we show that it is more practically advantageous (both in terms of performance and running time) to simply prune the larger model than run a neural architecture search to find the target model from scratch. \n\n{The technique we have employed for pruning (unstructured sparsity) is already tried and tested; however, our novelty lies in studying whether efficient image recognition models such as FBNetV3 can be optimized further to improve on the FLOPs-accuracy curve, and the contributions are two-fold: (1) FBNets are themselves state-of-the-art in efficient vision models, and we achieve a better accuracy-FLOPs tradeoff over these models, and (2) from the standpoint of computational overhead, we significantly reduce the amount of GPU-hours required to obtain such models. Pruning a publicly available NAS-optimized model incurs $\\approx 4\\times$ fewer GPU-hours to achieve a target FLOPs level, compared to running a full-fledged NAS to obtain a model which has lower accuracy at the same FLOPs level.}\n\n\\textit{Paper organization.}\nThe remainder of this paper is organized as follows. \nIn Section~\\ref{related-work}, we describe related work in the area of efficient vision model design and also provide an introduction to different pruning techniques. \nIn Section~\\ref{experimental-setup}, we discuss our experimental setup, including a description of the baseline models and the \\textit{global} and \\textit{uniform} pruning approaches we have employed. 
\nSection~\\ref{results} describes our main findings, and we conclude the paper in Section~\\ref{conclusions}.\n\n\\section{Related Work}\\label{related-work}\nWe discuss related literature in the areas of \\textit{computationally efficient vision models} and \\textit{model pruning}.\nWithin the scope of our work, we mainly focus on the inference efficiency of models, in contrast to training efficiency.\n\\par\n\\textit{Computationally efficient vision models:} Neural networks for computer vision are generally characterized by convolutional layers and fully-connected layers, along with blocks such as residual or skip connections. \nThis makes such networks resource-intensive in terms of FLOPs, which affects the memory storage and power consumed, and also leads to increased latency. \nIt is of paramount importance to design more efficient networks which can provide higher performance for the same FLOPs or latency level, or even to optimize them appropriately to provide the same performance at reduced FLOPs\/latency. This can be performed either through the design of new simplified layers, for example in deep residual learning~\\citep{he2016deep}, or through explicit model compression as in weight quantization~\\citep{polino2018model}.\nExtremely deep networks for image recognition often suffer not only from high complexity and inference latency, but also from the issue of \\textit{vanishing gradients}~\\citep{pascanu2013difficulty}. This was addressed through deep residual networks, which effectively simplified network design through skip-connections. \nMobileNets~\\citep{howard2017mobilenets} are one of the earlier approaches to building small low-latency networks by using depthwise separable convolutions with two parameters, \\textit{width} and \\textit{resolution} multipliers. They demonstrate the effectiveness of MobileNets across different vision tasks, such as face embeddings and object detection. MobileNetV2~\\citep{sandler2018mobilenetv2} extends MobileNets by utilizing inverted residual filter structures and linear bottlenecks, obtaining improvements on state-of-the-art models both in terms of accuracy and computational complexity. ShuffleNets~\\citep{zhang2018shufflenet} propose dedicated residual units where 1\\ensuremath{\\times}1\\xspace convolutions are replaced with pointwise group convolutions and channel shuffling, reducing the FLOP count. \n\\par\nMore recently, the focus on building efficient neural network models has shifted to techniques that treat the design of efficient networks as a search problem, falling under the umbrella of Neural Architecture Search (NAS).\nEfficientNets~\\citep{tan2019efficientnet} propose a novel scaling method which adjusts the network's length, width, and resolution to optimize performance subject to target memory and FLOPs constraints. They also define a novel baseline that is optimized by a multi-objective neural architecture search. The FBNet collection of models---FBNet~\\citep{wu2019fbnet}, FBNetV2~\\citep{wan2020fbnetv2} and FBNetV3~\\citep{dai2021fbnetv3}---employs neural architecture search to obtain highly-optimized models that improve on the state-of-the-art for different visual understanding tasks. \nFBNet frames the architecture search as a differentiable meta-learning problem with gradient-based techniques, namely \\textit{DNAS}---Differentiable Neural Architecture Search---by \\cite{wu2019fbnet}, and avoids selecting the optimized model over a discrete set. 
\nThe subsequent entry in this collection, FBNetV2, expands the search space over conventional DNAS, and employs a masking scheme to maintain the same level of computational complexity while searching over this expanded space. \nFBNetV3 further improves on the state-of-the-art by employing Neural Architecture Recipe Search (NARS) and searching over the space of not only architectures, but also corresponding recipes (which are generally hyper-parameters). In this paper, we consider FBNetV3 models as our baselines, as they are state-of-the-art. \nWe are interested in understanding whether they are over-parameterized and evaluating how much model pruning can improve performance at a given FLOPs level over the state-of-the-art in this family of models.\n\\par\n\\textit{Model Pruning:} Modern neural networks, particularly those processing complex sensory inputs (such as speech, vision and language) for perception applications, are often over-parameterized. \nIt is only to be expected that we should be able to compress such networks significantly to maintain the same level of performance at a decreased level of computation (fewer weights and reduced FLOPs), memory footprint and power consumption. Foundational efforts in this space include the \\textit{Optimal Brain Surgeon}~\\citep{hassibi1993second} and \\textit{Optimal Brain Damage}~\\citep{lecun1990optimal}. \nRecently, the idea of network pruning has been formalized through the lottery ticket hypothesis~\\citep{frankle2018lottery}, which claims that randomly initialized, feed-forward networks have winning sub-networks that perform just as well as the original network on an unseen test dataset. \nModel pruning is generally of two types: unstructured and structured pruning. \nUnstructured pruning, as the name suggests, does not adhere to any structure and prunes individual weights based on chosen criteria (such as magnitude). This has the advantage of providing higher performance, but is difficult to implement in hardware, as it needs dedicated support for efficient sparse matrix multiplications. \nMeanwhile, structured pruning is the practice of removing entire groups of neurons (e.g., blocks within the weight matrix, or channels in convolutional neural networks). \nThis is easier to implement without dedicated hardware support, but has the issue of lower generalization performance than unstructured pruning~\\citep{yao2019balanced}. \nIn the literature, there have also been several studies, for example investigating whether rewinding (training from scratch with a fixed mask) can perform just as well as fine-tuning on top of the original unpruned network~\\citep{renda2020comparing}. 
{~\\cite{blalock2020state} provide a survey of recent advances and open problems in neural network pruning.}\n\\par\nIn the research area of designing efficient networks for computer vision, there has not been much focus on understanding how pruning can be applied to the current generation of models.\nMost literature on pruning is based on older networks such as VGGNet, ResNet~\\citep{he2016deep}, and MobileNet~\\citep{sandler2018mobilenetv2}.\nOur work improves upon these existing studies by understanding how pruning can improve the FLOPs-accuracy tradeoff over existing state-of-the-art networks.\n\n\\section{Pruning Techniques and Setup}\n\\label{experimental-setup}\nIn this section, we describe the main components of our techniques and experimental setup, including \\textit{Baseline Models}, \\textit{Pruning Techniques}, \\textit{Latency Measurement} and \\textit{Metrics}. We have mainly used standard splits of the ImageNet dataset; further details are in Section~\\ref{dataset} of the appendix.\n\n\\subsection{Baseline Models}\\label{baseline-models}\n\\cite{dai2020fbnetv3} address a previous limitation of NAS-based architecture search, namely that these approaches can only search over architectures given a training recipe (set of hyperparameters), and thus cannot optimize over both. \nAs described in Section~\\ref{related-work}, the most recent state-of-the-art models are based on NARS (Neural Architecture-Recipe Search), which we select as baseline models. Table~\\ref{tab:baseline-models} lists the accuracy of FBNetV3 models~\\citep{dai2021fbnetv3} on the ImageNet classification task, along with the number of model parameters and the computational complexity in terms of FLOPs. \n\\par\nEach baseline model consists of multiple IRF (Inverted Residual Filter) blocks, which contain convolutional layers of different kernel sizes. \nFor our experiments, we are mostly interested in 1\\ensuremath{\\times}1\\xspace convolutions as potentially prunable, since within each FBNetV3 model, the 1\\ensuremath{\\times}1\\xspace convolution layers constitute >80\\% of total model FLOPs for all models in the family, and the open-sourced sparsity kernel support we use for latency benchmarking is available only for fully connected layers. \nA 1\\ensuremath{\\times}1\\xspace convolution can be transformed into an equivalent fully connected layer with a few tensor reshape operations without any significant loss of performance or latency.\n\nFor each initial and target FBNetV3 model $X$ and $Y$, where $X$ is larger than $Y$, we prune $X$ to a \\emph{sparsity level} of $S$ so that the FLOP count is the same as for $Y$. The number of FLOPs consumed by a linear layer of sparsity $S$ is proportional to the number of remaining (non-zero) multiply-accumulate operations and is given by $(1-S) \\cdot F$, where $F$ is the corresponding dense FLOP count; the FLOPs saved by pruning are therefore $S \\cdot F$. \nThus if $F_{1\\ensuremath{\\times}1\\xspace}(X)$ is the number of FLOPs consumed by the 1\\ensuremath{\\times}1\\xspace convolution layers and $F(X)$ is the total number of FLOPs consumed by model $X$, we have:\n\\begin{equation}\\label{flops-eq}\n S = \\frac{F(X) - F(Y)}{F_{1\\ensuremath{\\times}1\\xspace}(X)}\n\\end{equation}\nHence, sparsity measures the fraction of 1\\ensuremath{\\times}1\\xspace convolution weights removed, and so \nhigher sparsity indicates a smaller model. \nFor the uniform pruning scenario, Table~\\ref{sparsity-table} shows the amount of sparsity required to prune each larger FBNetV3 model to a smaller one based on Eq.~(\\ref{flops-eq}). 
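\nAs a purely illustrative sketch of Eq.~(\\ref{flops-eq}), the snippet below computes a uniform-pruning sparsity target in Python; the seed and target MFLOPs correspond to the FBNetV3C\/FBNetV3A pair of Table~\\ref{flops_table}, while the 1\\ensuremath{\\times}1\\xspace-convolution share of the seed model (85\\% here) is an assumed placeholder rather than a measured value.\n\\begin{verbatim}\ndef target_sparsity(flops_seed, flops_target, flops_pointwise_seed):\n    # Eq. (flops-eq): fraction of 1x1-conv weights to remove so that the\n    # pruned seed model matches the target model's FLOP count.\n    return (flops_seed - flops_target) \/ flops_pointwise_seed\n\nflops_seed, flops_target = 557.0, 356.6    # MFLOPs of seed C and target A\ns = target_sparsity(flops_seed, flops_target, 0.85 * flops_seed)\nprint(round(s, 3))                         # roughly 0.42, i.e. 42% sparsity\n\\end{verbatim}\n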
For global pruning, Eq.~(\\ref{flops-eq}) does not hold, and we compute the target sparsities empirically from the layer shapes instead, with details provided in Section~\\ref{global_flops}.\nWe prune each larger FBNetV3 model to a discrete FLOPs target based on a defined set of smaller models in the family, and not to a continuous range of FLOPs values, as this makes it easier to compare models directly based on a given computation budget. \nDemonstrating that, for the same computation level, the pruned larger FBNetV3 model has higher performance than a smaller model with the same FLOPs is sufficient to show that we can improve on the state-of-the-art FLOPs-accuracy curve.\n\n\\subsection{Pruning Techniques}\\label{pruning-techniques}\nIn this paper, we utilize a pre-trained FBNetV3 model with a higher number of FLOPs rather than training an image classification model from scratch with sparsity, which would be time-consuming and computationally intensive. There are several approaches in the literature, such as prune-and-fine-tune~\\citep{han2015learning} and iterative pruning with sparsity scheduling~\\citep{frankle2018lottery}. \nWe have utilized the former for our experiments: although studies have shown that iterative and incremental pruning approaches lead to better generalization performance, they typically require training for a high number of epochs, need tuning and selection of optimal sparsity schedules, and are computationally resource-intensive, so we have not considered them in our experiments. {For our prune-and-fine-tune experiments, we have used 8-GPU boxes, each equipped with Nvidia V100 (Volta) 32GB GPUs.}\nAs described in Section~\\ref{intro}, we perform both global and uniform magnitude-based pruning experiments. For the latency benchmarking, we also perform magnitude-based uniform pruning with a sparse block size of $1\\times4$ as explained in Section~\\ref{latency}.\n\nWe have conducted hyper-parameter tuning for the learning rate, with LR {values in the set} \\{4e-5, 8e-5, 1.6e-4\\}, as fine-tuning generally admits smaller learning rates than training from scratch. We have found that using the same learning rate for all models, along with the same hyper-parameter settings used for training the seed model, is sufficient to obtain pruned networks which are superior to the baseline FBNetV3 models. Hence, minimal hyper-parameter tuning was required for our experiments, and we have kept settings such as weight decay and momentum the same as those used for training the baseline FBNetV3 models. During fine-tuning after pruning, we have used a smoothed validation loss to stop the process early once a convergence tolerance (0.01\\%) is reached between two consecutive epochs. Generally, we have observed fine-tuning to converge after $\\sim$250 epochs.\n\n\\subsection{Latency Measurements and Metrics}\\label{latency}\nWe are interested not only in the sparsity level of our pruned models and the image recognition performance, but also in metrics which potentially improve due to model sparsity, such as the number of parameters, the FLOP count and the model latency. \nFor reporting model performance under pruning, we use standard image recognition metrics such as Top-1 and Top-5 {test} accuracies.\nWe measure overall model sparsity, which is different from the layer sparsity since we only prune 1\\ensuremath{\\times}1\\xspace convolution layers, as explained in Section~\\ref{baseline-models}. 
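\nTo make this distinction concrete, the following sketch (illustrative only) computes the overall model sparsity as the fraction of zero-valued weights across all parameters; it is lower than the sparsity of the pruned 1\\ensuremath{\\times}1\\xspace convolution layers themselves because all other layers remain dense.\n\\begin{verbatim}\ndef overall_sparsity(model):\n    # Fraction of zero weights over the whole network; differs from the\n    # per-layer sparsity because only 1x1 convolutions are pruned.\n    zeros, total = 0, 0\n    for p in model.parameters():\n        zeros += (p == 0).sum().item()\n        total += p.numel()\n    return zeros \/ total\n\\end{verbatim}\n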
\nWe report the model FLOPs, because this metric captures the computational footprint of the model and its power consumption. \n\nLast, we record the total latency (in ms) under pruning. The sparse kernels used in our experiments are open-sourced and released as part of the PyTorch sparse quantization library\\footnote{\\url{https:\/\/github.com\/pytorch\/pytorch\/blob\/master\/torch\/ao\/nn\/sparse\/quantized\/linear.py}}. Prior to using these kernels, we perform uniform layer-wise block-based pruning with block sizes of $1\\times4$. Magnitude-based pruning is implemented at the block level, and the model is quantized to 8-bit integers (int8) before latency benchmarking{, which is performed on Intel CPUs designed using the Skylake micro-architecture.}\nWhile we would expect sparsity to translate to tangible inference speedups, this is highly dependent on the sparse kernel support provided by hardware. \nCurrent hardware is not well-suited for unstructured, randomly sparse matrix multiplications and tends to do better with structured sparsity in models~\\citep{anwar2017structured}. We have utilized block sparsity within the weight matrix for latency experiments.\nHowever, this often comes at the cost of decreased model performance. \nThe design of highly performant sparse models under structured sparsity with reasonable inference speedups remains an important research topic outside the scope of this paper.\n\n\n\\section{Results}\\label{results}\n\\subsection{Pruned FBNetV3 model performance}\\label{pruning_performance}\nTo answer \\textbf{RQ1}, we consider the family of FBNetV3 models as baselines and seed models for further pruning. For each pair of models $X$, $Y$ in the family, we calculate the amount of sparsity required to prune the larger model $X$ to a model that consumes the same number of FLOPs as the target smaller model $Y$, via Eq.~(\\ref{flops-eq}).\nThere are 21 potential seed and target model pairs; however, for tractability we conduct pruning experiments only for a depth of 2, i.e., each seed model is pruned only to the FLOPs targets of the two next-smaller models. For example, given FBNetV3E as the seed, we only prune it to FLOPs targets corresponding to FBNetV3D and FBNetV3C.\nTable~\\ref{flops_table} presents the sparsity levels and accuracy of the pruned models at each target FLOPs level. The improvement in performance is apparent even at lower FLOPs targets, where we might expect baseline models such as FBNetV3A not to be over-parameterized. \nFor example, pruning FBNetV3C to a target of 356.6 MFLOPs obtains a network which is 1.43\\% better than FBNetV3A. Figure~\\ref{flops-curve} plots the Top-1 ImageNet testing accuracy vs. FLOPs for the best pruned models as seen from Table~\\ref{flops_table}. This clearly shows that pruning FBNetV3 models with minimal fine-tuning can significantly improve on the state-of-the-art FLOPs vs. Top-1 accuracy trade-off. \nThis analysis is performed for both uniform layer-wise and global magnitude-based prune-and-fine-tune settings. Global pruning ranks the weights of the entire network, in contrast to uniform layer-wise pruning, which ranks each layer's weights to determine the sparsity mask. It would be expected that global pruning performs better than uniform pruning for the same target sparsity level or number of non-sparse parameters. 
However, in our experiments we determine the pruning threshold based on FLOPs targets and find global pruning to require higher sparsity levels, which results in uniform pruning outperforming global pruning in Top-1 ImageNet accuracy in most cases.\n\\begin{table}[]\n\\centering\n\\caption{Sparsity level (in percentage) and performance of pruned FBNetV3 networks on the ImageNet dataset for different target MFLOPs. The best accuracy obtained at each target FLOPs level is highlighted in bold.}\n\\label{sparsity-table}\n\\label{flops_table}\n\\setlength{\\tabcolsep}{2pt}\n\\begin{tabular}{|l|l|l|l|l|l|l|l|l|l|}\n\\hline\n\\multicolumn{1}{|c|}{\\multirow{2}{*}{\\begin{tabular}[c]{@{}c@{}}Seed\\\\ network\\\\FBNetV3\\_\\end{tabular}}} &\n \\multicolumn{1}{c|}{\\multirow{2}{*}{\\begin{tabular}[c]{@{}c@{}}Target\\\\ network\\\\FBNetV3\\_\\end{tabular}}} &\n \\multicolumn{1}{c|}{\\multirow{2}{*}{\\begin{tabular}[c]{@{}c@{}}Target\\\\ MFLOPs\\end{tabular}}} &\n \\multirow{2}{*}{\\begin{tabular}[c]{@{}l@{}}Baseline\\\\ Accuracy\\end{tabular}} &\n \\multicolumn{3}{c|}{Uniform pruning} &\n \\multicolumn{3}{c|}{Global pruning} \\\\ \\cline{5-10} \n\\multicolumn{1}{|c|}{} &\n \\multicolumn{1}{c|}{} &\n \\multicolumn{1}{c|}{} &\n &\n \\multicolumn{1}{c|}{\\begin{tabular}[c]{@{}c@{}}Sparsity \\\\ level(\\%)\\end{tabular}} &\n \\begin{tabular}[c]{@{}l@{}}Top-1 \\\\ Acc.\\end{tabular} &\n \\multicolumn{1}{c|}{Gain(\\%)} &\n \\multicolumn{1}{c|}{\\begin{tabular}[c]{@{}c@{}}Sparsity \\\\ level(\\%)\\end{tabular}} &\n \\multicolumn{1}{c|}{\\begin{tabular}[c]{@{}c@{}}Top-1 \\\\ Acc.\\end{tabular}} &\n Gain(\\%) \\\\ \\hline\nB & A & 356.6 & 79.6 & 26.59 & 80.308 & 0.887 & 39.5 & 80.232 & 0.793 \\\\ \\hline\n C & A & 356.6 & 79.6 & 40.7 & \\textbf{80.738} & 1.43 & 57.9 & \\textbf{80.476} & 1.1 \\\\ \\hline\n C & B & 461.6 & 80.2 & 19.4 & 80.996 & 0.992 & 28.9 & 80.998 & 0.985 \\\\ \\hline\n D & B & 461.6 & 80.2 & 31.47 & \\textbf{81.116} & 1.142 & 43.7 & \\textbf{81.08} & 1.097 \\\\ \\hline\n D & C & 557.0 & 80.8 & 15.04 & 81.278 & 0.591 & 21.5 & \\textbf{81.208} & 1.256 \\\\ \\hline\n E & C & 557.0 & 80.8 & 31.0 & \\textbf{81.282} & 0.596 & 43.6 & 81.184 & 0.475 \\\\ \\hline\n E & D & 644.4 & 81.0 & 17.8 & 81.118 & 0.145 & 25.8 & 81.388 & 0.479 \\\\ \\hline\n F & D & 644.4 & 81.0 & 38.2 & \\textbf{82.00} & 1.234 & 67.8 & \\textbf{81.484} & 0.597 \\\\ \\hline\n F & E & 762.0 & 81.3 & 29.8 & \\textbf{82.19} & 1.094 & 54.7 & \\textbf{81.97} & 0.824 \\\\ \\hline\nG & E & 762.0 & 81.3 & 71.67 & 81.166 & -0.16 & 85.5 & 79.934 & -1.68 \\\\ \\hline\n G & F & 1181.6 & 82.0 & 49.69 & \\textbf{82.528} & 0.643 & 63.8 & \\textbf{82.454} & 0.553 \\\\ \\hline\n\n\\end{tabular}\n\\end{table}\n\n\\begin{figure}[h] \n\\centering\n\\includegraphics[scale=0.40]{new_pruned_paper.pdf}\n\\caption{FLOPs vs. performance (ImageNet Top-1 acc.) for different pruned FBNetV3 networks. For comparison, the existing FBNetV3 networks are also shown here.}\n\\label{flops-curve}\n\\end{figure}\n\n\\subsection{Pruning Complexity}\nIn addition to demonstrating the improvement over the state-of-the-art obtained by pruning FBNetV3 models, \nit is also important to quantify the reduction in computational complexity obtained by pruning a larger FBNetV3 model compared to training an FBNetV3 model directly through NAS (Neural Architecture Search). \n\\textbf{RQ2} (pruning for efficient model search) asks if the pruning and subsequent fine-tuning approach in Section~\\ref{pruning_performance} is faster than a full-fledged neural architecture search. 
\nDuring pruning and subsequent fine-tuning, we train the pruned networks until the validation loss converges to within a pre-specified tolerance, as described in Section~\\ref{pruning-techniques}.\nThe time needed is generally less than when training the original FBNetV3 models, which run for 400 epochs. \nThe number of GPU-hours is computed as (number of training GPU nodes) $\\times$ (number of GPUs per node) $\\times$ (training time to convergence) for each network.\nIn Table~\\ref{gpu-hours}, for each of the best-performing uniformly-pruned models in Section~\\ref{pruning_performance} we report the number of GPU-hours consumed by the prune-and-fine-tune strategy, along with the GPU-hours consumed when obtaining an FBNetV3 model through architecture search using the method described in~\\cite{dai2020fbnetv3}. \nThe results are quite conclusive: not only do we obtain pruned models superior in performance to the original NAS-optimized models, but, as described in Section~\\ref{intro}, the computational cost is also significantly lower when starting from a pre-trained model with higher FLOPs. \nGiven the performance improvements obtained with lower computational resources, this approach is beneficial for an experimental setting where researchers have access to open-sourced pre-trained models and limited GPU resources, for example in a small startup or an academic environment. \nWe observe that the degree of speedup decreases as the network size grows (e.g., in FBNetV3A vs. FBNetV3C) due to higher training time to convergence.\nNevertheless, we still obtain a speedup of 3--5 times compared to a full NAS (Neural Architecture Search). \n\n\\begin{table}[]\n\\centering\n\\caption{Computational speedup in terms of GPU-hours when comparing NAS (Neural Architecture Search) with the pruning and fine-tuning approach. {The selected seed networks are drawn from those in Table~\\ref{flops_table} with the best performance at the target FLOPs.}}\n\\label{gpu-hours}\n\\begin{tabular}{|l|l|l|l|}\n\\hline\n\\multicolumn{1}{|c|}{\\begin{tabular}[c]{@{}c@{}}Target FLOPs \\\\ (FBNetV3 Model)\\end{tabular}} &\n \\multicolumn{1}{c|}{\\begin{tabular}[c]{@{}c@{}}GPU-hours \\\\ in NAS\\end{tabular}} &\n \\multicolumn{1}{c|}{\\begin{tabular}[c]{@{}c@{}}GPU-hours\\\\ in pruning \\\\ and fine-tuning\\end{tabular}} &\n \\multicolumn{1}{c|}{\\begin{tabular}[c]{@{}c@{}}Computational cost\\\\ speedup\\end{tabular}} \\\\ \\hline\n356.6 (FBNetV3A) & 10.7k & 2.240k & 4.77 \\\\ \\hline\n557.0 (FBNetV3C) & 10.7k & 2.496k & 4.28 \\\\ \\hline\n762.0 (FBNetV3E) & 10.7k & 3.456k & 3.09 \\\\ \\hline\n\\end{tabular}\n\\end{table}\n\\subsection{Latency Experiments}\nWe also measure the latency-performance tradeoff for the pruned FBNetV3G models. FBNetV3G is the largest model in the family and so is expected to have the best generalization performance under high sparsity levels. \nAs described in Section~\\ref{latency}, we prune the network using block sparsity (where the block size is $1\\times4$) to sparsity levels in the set \\{40\\%, 50\\%, 60\\%\\}. \nWe have not utilized lower sparsity levels, as we have observed that for the selected kernels we need at least 40\\% sparsity to yield any significant latency benefits. \nWe have pruned all 1\\ensuremath{\\times}1\\xspace convolution layers uniformly here and subsequently converted them to fully-connected layers for compatibility with the quantized sparse kernels. \nIn Figure~\\ref{latency-curve}, we present the Top-1 ImageNet accuracy vs. 
latency curve after pruning the FBNetV3G network for different sparsity levels. \nAs expected, the pruned FBNetV3G models show a marked performance reduction along with lower latency, with a sparsity level of 60\\% translating to an absolute accuracy reduction of around 7\\% for a latency reduction of 18 ms (16\\% relative). While the 1\\ensuremath{\\times}1\\xspace convolution layers account for >80\\% of FLOPs, they only constitute 25\\% of overall network latency. \nThis is consistent with previous literature~\\citep{dudziak2020brp}, which shows that computational complexity (e.g., FLOPs) and latency are not well-correlated, and indeed the latter is more dependent on layer shapes. \nThis result underscores the need to develop more latency-friendly pruning techniques which can potentially improve on the state-of-the-art in this domain.\n\n\\begin{figure}\n \\centering\n \\begin{subfigure}[b]{0.49\\textwidth}\n \\centering\n \\includegraphics[width=0.75\\textwidth]{latency}\n \\caption{Latency vs. Top-1 accuracy on ImageNet}\n \\label{latency-curve}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[b]{0.5\\textwidth}\n \\centering\n \\includegraphics[width=0.75\\textwidth]{sparsity_pattern.png}\n \\caption{Layer-wise sparsity pattern for FBNetV3E}\n \\label{sparsity_pattern}\n \\end{subfigure} \\\\\n \\begin{subfigure}[b]{0.5\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{flops_curve.png}\n \\caption{Layer-wise FLOPs distribution for FBNetV3E}\n \\label{flops_pattern}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[b]{0.4\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{boxplots}\n \\caption{Performance distribution for layer type}\n \\label{sensitivity_pattern}\n \\end{subfigure}\n \\caption{Latency benchmarking on FBNetV3G for different sparsity levels \\{40\\%, 50\\%, 60\\%\\} and layer-wise sparsity\/FLOPs\/accuracy sensitivity for a pruned FBNetV3E network.}\n \\label{fig:three graphs}\n\\end{figure}\n\\subsection{Insights into pruning experiments}\nOur pruning experiments demonstrate that we can improve on the state-of-the-art FBNetV3 models in generalization performance for a given FLOPs level. In this subsection, we obtain insights into\n(1) the sparsity pattern under global magnitude-based pruning and \n(2) the sensitivity of each layer when pruned in isolation under uniform layer-wise magnitude pruning (sparsity level of 95\\%). For (1), in Figure~\\ref{sparsity_pattern} we plot the amount of sparsity obtained per 1\\ensuremath{\\times}1\\xspace convolution layer. The model being considered is an FBNetV3E network pruned to a sparsity level of 43.6\\%, to the same FLOPs level as FBNetV3C, and subsequently fine-tuned. We note that the sparsity level in lower layers is lower, which is potentially required for maintaining performance. Higher sparsity can be admitted in the upper layers of the network, where it has learned more redundant representations. SE (Squeeze and Excitation) 1\\ensuremath{\\times}1\\xspace convolution layers generally tend to get pruned more than other layers, with the sparsity being >99\\% for two such SE layers in stage $xif5\\_0$. This indicates that we can also consider revisiting the role of SE layers in FBNetV3 networks, and even removing entire layers in future work to yield additional latency and FLOPs benefits. \n\nFor analysis (2), we prune each 1\\ensuremath{\\times}1\\xspace convolution layer in isolation at a sparsity target of 95\\% and record the Top-1 test accuracy obtained on the ImageNet dataset. 
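\nA minimal sketch of this sensitivity sweep is shown below. It is illustrative only; the \\texttt{evaluate} callable standing in for our ImageNet validation routine is a hypothetical placeholder.\n\\begin{verbatim}\nimport copy\nimport torch.nn as nn\nimport torch.nn.utils.prune as prune\n\ndef sensitivity_sweep(model, evaluate, sparsity=0.95):\n    # Prune each 1x1 convolution in isolation on a fresh copy of the\n    # model and record the resulting Top-1 accuracy.\n    results = {}\n    for name, module in model.named_modules():\n        if isinstance(module, nn.Conv2d) and module.kernel_size == (1, 1):\n            probe = copy.deepcopy(model)\n            target = dict(probe.named_modules())[name]\n            prune.l1_unstructured(target, name='weight', amount=sparsity)\n            results[name] = evaluate(probe)  # assumed Top-1 evaluator\n    return results\n\\end{verbatim}\n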
For each type of layer, PW:expansion, PWL: bottleneck, SE: Squeeze-Excitation we plot the distribution of accuracies in Figure~\\ref{sensitivity_pattern}. We observe that the PW and PWL layers are most sensitive to high sparsity, while SE layers are able to retain performance adequately. We could also avoid pruning the most sensitive layers (appearing as outliers in the figure) to maintain generalization performance. This observation corroborates findings from analysis (1), and motivates us to revisit the role of squeeze-excitation layers in future work. \n\n\\section{Conclusions}~\\label{conclusions}\nIn this paper, we have investigated the problem of improving on the current state-of-the-art FLOPs vs. performance trade-off for FBNets which have been pre-optimized by NAS (Neural Architecture Search). We have employed network pruning techniques, and our results demonstrate that we can further improve on performance over FBNetV3 at a given FLOPs target through global as well as uniform magnitude-based pruning. This happens not only for relatively over-parameterized networks such as FBNetV3G, but also smaller networks such as FBNetV3A which have lower computational complexity. On average, the GPU-hours incurred during pruning is about $\\sim\\!\\!4\\times$ less than that consumed by a full-scale NAS. We have also performed latency measurements on the FBNetV3G model and conducted an analysis to understand the sparsity patterns and sensitivity of different FBNetV3 layers to pruning. For future work, we plan to investigate squeeze-excitation layers in more detail, and explore structured pruning approaches such as channel and layer pruning to further improve on the latency-performance tradeoff for this family of models. \n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{}\n\\vspace{-1cm}\n\n\n\n\\footnotetext{\\textit{$^{a}$~Institut Charles Sadron, CNRS UPR22 - Universit\\'{e} de Strasbourg, Strasbourg, France; Fax: 33 (0)3 88 41 40 99; Tel: 33 (0)3 88 41 40 43; E-mail: Wiebke.Drenckhan@ics-cnrs.unistra.fr}}\n\\footnotetext{\\textit{$^{b}$~Sorbonne Universit\\'{e}s, UPMC Univ Paris 06, CNRS-UMR 7588, Institut des NanoSciences de Paris, 4 place Jussieu, 75005 Paris, France; Email: hohler@insp.upmc.fr} }\n\\footnotetext{\\textit{$^{c}$~Universit\\'{e} Gustave Eiffel , 5 Bd Descartes, Champs-sur-Marne, F-77454 Marne-la-Vall\\'{e} cedex 2, France. }}\n\\footnotetext{\\textit{$^{d}$~TU Dortmund University, Department of Physics, 44221 Dortmund, Germany }}\n\n\\footnotetext{Electronic supplementary information (ESI) available. See DOI: 10.1039\/D1SM01109J}\n\n\n\n\n\n\n\n\n\n\n\\section{Introduction}\n\\label{sec:Intro}\n\n\nThe mechanical response of interfaces separating immiscible fluids enters into many fundamental and applied problems of topical interest. Within the current desire to describe complex liquid interfaces \\cite{Edwards1991,Rehage_RheoAct_2002,Sagis_RevModPhys_2011,Fuller_SoftMatter_2011,Fuller_2012,Sagis_COCIS_2014,Verwijlen_ACIS_2014}, two scientific communities meet, accustomed to treating either \\textit{drops or bubbles} with \\textit{fluid-like} interfaces, or \\textit{capsules} and \\textit{balloons} whose membranes have \\textit{solid-like} mechanical properties. \n\nThe interfacial tension of complex \\textit{fluid interfaces} of drops or bubbles commonly depends on the adsorption of surfactant molecules and on their interactions (top of Fig. \\ref{fig:interfaces}). 
Their interfacial stress is isotropic and in the static limit insensitive to shear deformations. Such fluid systems can present an elastic stress response to dilation in addition to the surface tension. This is commonly called \textit{Gibbs Elasticity} if surfactant exchange between interface and bulk can be neglected. \n\nThe stress in the \textit{solid-like membranes} bounding capsules or balloons (bottom of Fig. \ref{fig:interfaces}) strongly depends on both shear deformation and compression away from a stress-free \"reference state\". These membranes are often thin enough for the elastic bending energy to be negligible compared to those associated with dilation and shear. These \"skins\" behave like 2D solids, with an elastic response characterised for small deformations by an \textit{interfacial dilational modulus} and an \textit{interfacial shear modulus}. \n\\par Like the physics of simple drops\/bubbles \cite{Miller1998}, the physics of capsules\/balloons \cite{Pozrikidis2003,Mueller_Strehlow_2004,Neubauer_ACIS_2014,Fery_Pol_2007,Sagis2015,Sagis2015a} is now quite well understood. \nHowever, \"intermediate\" systems are of increasing interest, which we shall name \"droploons\" or \"bubbloons\". Their interfacial properties combine those encountered respectively in drops\/bubbles and capsules\/balloons: interfacial tension and solid-like membrane stresses coexist. Here, the reference state is defined by the absence of a solid-like stress contribution so that only capillary stress is present. \nA multitude of bubbloon- and droploon-like systems have been investigated in the past, involving interfacially active particles, proteins, cross-linked surfactant monolayers, polymer multi-layers, polymer-surfactant mixtures, etc. \cite{Edwards1991,Rehage_RheoAct_2002,Sagis_RevModPhys_2011,Erni2011,Fuller_SoftMatter_2011,Fuller_2012,Sagis_COCIS_2014,Verwijlen_ACIS_2014,Pepicelli_SocRheo_2019}. In most of these systems, liquid- and solid-like elastic contributions are intricately entangled, \ncalling for physical models and experimental approaches helping to distinguish and study these contributions. In the following, we provide a very short state of the art of relevant approaches before introducing the one taken for this article. \nInterfacial stresses may in general be of a dynamic or static nature, and they may present a plastic response depending on deformation history. Here we shall concentrate on the quasi-static response of interfaces. For more details, the reader is referred to recent books and review articles \cite{Edwards1991,Rehage_RheoAct_2002,Miller2009,Sagis_RevModPhys_2011,Fuller_SoftMatter_2011,Fuller_2012,Sagis_COCIS_2014,Verwijlen_ACIS_2014,Pepicelli_SocRheo_2019}. For simplicity, we will also only talk about \textit{droploons} and \textit{liquid\/liquid} interfaces, but all derived concepts apply equally to \textit{bubbloons} and \textit{gas\/liquid} interfaces. \n\n\begin{figure}\n \centering\n \includegraphics[width=7cm,keepaspectratio]{FIGURES\/LiquidSolidInterface.png}\n \caption{Contrast between the elastic response of fluid and solid interfaces. \textit{Top}: Dilation of the interface reduces the concentration of adsorbed surfactant molecules, which creates an elastic contribution to surface stress. If the exchange of the surfactants with the bulk is inhibited, this response is static and called \"Gibbs elasticity\". \textit{Bottom}: An elastic stress also appears when a solid-like skin covering the interface is stretched. 
\\label{fig:interfaces}}\n\\end{figure}\n\nThe development of dedicated interfacial shear rheometers has enabled reliable measurements of the \\textit{interfacial shear modulus} \\cite{Miller2009,Kragel2010}. However, the characterisation of the \\textit{dilational modulus} remains challenging due to the experimental difficulty of applying an accurately controlled homogeneous dilation to an interface and of assessing the accuracy of the modulus measurement if the deformation is only approximately a homogeneous dilation. \\par\nRecently, Vermant and coworkers \\cite{Verwijlen_ACIS_2014} constructed a special Langmuir trough in which the surface dilation is achieved by the action of twelve fingers arranged circularly. They used this set-up to investigate successfully the static and dynamic dilational response of complex interfaces. However, in order to access the surface stresses, this technique uses a Wilhelmy Balance which introduces potential errors in the measurement due to the influence of the contact line configuration on the Wilhelmy plate. Moreover, the large surfaces required for these measurements are prone to attract impurities, to encourage evaporation and make it challenging to work with liquid\/liquid systems. \n\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[width=8.5cm]{FIGURES\/Sceme_Variables_3.png}\n\\caption{Of interest here is the in- and deflation of spherical drops around a reference state of radius $R_0$. These drops are either isolated or attached to a capillary with circular cross-section of radius $R_n$.}\n\\label{fig:Schemes}\n\\end{figure}\n\n\\par\nSince the volume change of a sphere leads to a perfect dilation of its surface, measuring the pressure-radius relation of a small, spherical droploon should be the preferred method to determine the dilational modulus. This has been implemented for capsules using osmotic pressure variations \\cite{Gao_EPJ_2001} or acoustic pressure fields \\cite{Dollet2019}\\footnote{Note that many other techniques have been developed which squeeze initially spherical droploons between two plates, use AFM, spinning drops or investigate the deformation of droploons in controlled flow fields. However, the associated deformations are all a combination of interfacial shear and dilation, making the quantitative analysis extremely complex.}. However, these approaches introduce physico-chemical or technical complexity. It is much more convenient to study the pressure\/shape relation of drops held by a capillary with circular cross-section, a technique called \"capillary tensiometry\" or \"pressure tensiometry\" when shape or pressure analysis is used, respectively. In the past, it has been used extensively for droploons deformed under gravity, a variant called \"capillary elastometry\" \\cite{Knoche2013,Hegemann2018}. However, the drop shapes are in this case non-spherical with complex interfacial deformations combining shear and dilational components, so that numerical fitting procedures with numerous parameters are required for the data analysis, introducing many uncertainties. 
Various improvements have been made to these approaches in the past, including improved shape fitting algorithms or combined shape\/pressure analysis \\cite{Danov_CollIntSci_2015,Nage_l2017}, yet without removing the complexity arising from the non-trivial object shape.\n\nIn the aim to identify and validate a quantitative technique for measuring the interfacial dilational modulus, we propose here to use the simplest possible geometry: an initially spherical droploon attached to a capillary in the absence of gravity. Combining simulations and analytical modelling, we investigate how pressure-deformation relations depend on the interplay between surface tension and solid-like interfacial elasticity. \nPressure tensiometry of hemi-spherical drops has been exploited in the past \\cite{Russev_2008,Kotula_JRheo_2015}, but in all previous work homogeneous isotropic interfacial dilation was assumed. This is an uncontrolled approximation, since such an idealised deformation is incompatible with the boundary imposed by the attachment to the capillary. The exact bubble shape depending on surface tension, the elastic properties of the skin and the gas pressure cannot be calculated analytically. We therefore perform pioneering simulations for this configuration using the Surface Evolver Software - a finite element tool graciously developed and provided by Ken Brakke - in which the combined effects of interfacial tension and specific local mechanical constitutive laws can be implemented. Surface Evolver has already been successfully applied to advance our understanding of systems composed of simple drops \\cite{Weaire1999} or of capsules\/membranes without surface tension \\cite{Bouzidi_CompStruct_2004,Quilliet2016}. Surprisingly, its power has not yet been exploited to perform predictive simulations of droploon-type systems where surface tension and solid-like elasticity are combined. Since direct numerical schemes can be used for the axisymmetric droploon problem (Section \\ref{sec:ModelKierfeld}), it provides an ideal benchmark test for the Surface Evolver simulations. The latter will be necessary to predict the response of more complex objects, such as droploon assemblies, where direct numerical schemes will fail.\n\nWe treat here a simple model interface, as sketched in the top row of Fig. \\ref{fig:Schemes}. We assume it to be composed of a liquid\/liquid interface of interfacial tension $\\gamma_0$, on which a permanently cross-linked, polymeric gel of thickness $h_0$ is grown. The liquid phase containing the gel is supposed to be a good solvent for the gel, such that the interfacial tension between the gel and the solvent is negligibly small. We furthermore assume that this gel layer is thick enough to be considered a bulk material with bulk shear modulus $G$ and that its mechanical response can be described by a Neo-Hookean model (Section \\ref{sec:Theory}). For this purpose, we make the simplifying assumption that the gel can be considered as incompressible, in the sense that its bulk modulus is much larger than its shear modulus. 
Last but not least, we make the assumption that the gel is dilute enough such that neither its presence nor its deformation modifies the liquid\/liquid interfacial tension, which is thus equal to that of the pure solvent, $\\gamma_0$.\n\\par\n\nAfter a general introduction to the main theoretical concepts (Section \\ref{sec:TheoryFundamentals}), we provide exact analytical relations for the pressure-deformation relations of spherical droploons (Section \\ref{sec:TheorySpheres}) and, for the first time, closely matching analytical approximations for droploons on capillaries (Section \\ref{sec:TheoryNeedle}). We then show how Surface Evolver can be used to provide reliable simulations of the equilibrium shapes and pressure-deformation relations of this simple physical scenario (Section \\ref{sec:SE}), and we show excellent agreement with direct numerical predictions (Section \\ref{sec:ModelKierfeld} \\cite{Knoche2013,Hegemann2018}). In Section \\ref{sec:ResultsDropsNeedles}, we combine theory and simulation to show that the main influence of the capillary results from the change in geometry and not from the induced deformation anisotropy. The influence of the capillary on the pressure-volume relationship of a droploon represents a challenging and unsolved theoretical problem because of the interplay of the curved droploon equilibrium shape with the presence of a rigid inclusion, which induces anisotropic elastic deformations of the droploon. We show that this stress anisotropy is strongly localised around the capillary and provide for the first time analytical relations to estimate the parameter ranges over which the anisotropy at the capillary has negligible impact on the pressure-deformation relation, i.e., over which the provided analytical pressure-deformation relations may be used reliably to analyse experiments. We regularly compare with analytical predictions obtained for perfectly fluid interfaces with Gibbs elasticity as a reference. \n\n We note that in most experimental systems the interfacial stress may not only depend on deformation but also on the exchange of surfactant molecules between the bulk and the surface or on temperature changes. Sufficiently thick skins may also present a bending stiffness. In addition to an elastic, reversible mechanical response, viscous and plastic behavior is commonly observed. None of these effects will be considered in the present paper, which focuses on the already challenging simplest case of linear and nonlinear 2D elastic skin behavior.\n\n\n\n\n\\section{Theory}\n\\label{sec:Theory}\n\n\\subsection{Theoretical framework}\n\n\\label{sec:TheoryFundamentals}\n\nSince the recent literature has seen many debates about the physically correct description of the deformation of complex interfaces, we consider it necessary to start here with a fairly general introduction to clarify our point of view before introducing the specific concepts used later in the article.\n\n\nInterfaces are characterised by the amount of \\textit{interfacial free energy per surface area}, which we will denote $f$. If the \\textit{interfacial stress} is independent of area changes, the work needed to increase the area by $dA$ is $\\gamma dA = f dA $; $f$ and $\\gamma$ are in this case equivalent quantities. However, this is no longer true if the stress and energy density are modified by interfacial area changes. This can be due to interacting surfactant molecules in a fluid-like interface (top of Fig. 
\\ref{fig:interfaces}), or due to a solid, elastic (polymer) skin adsorbed to the interface (bottom of Fig. \\ref{fig:interfaces}), or due to a mixture of both.\n\nIn this general case, the interfacial stress is no longer necessarily isotropic and its description requires a second rank tensor $\\sigma_{ij}$, where $i,j=1,2$ specify components in a 2D cartesian coordinate system locally tangent to the interface. Assuming that the stresses due to the liquid interfacial tension $\\gamma\\delta_{ij}$ and those due to the adsorbed elastic skin $\\tau_{ij}$ are simply additive one may write \\cite{Jaensson_COCIS_2018}\n\\begin{equation}\n \\sigma_{ij} = \\gamma\\delta_{ij} + \\tau_{ij},\n \\label{eq:combine}\n\\end{equation}\nwhere $\\delta_{ij}$ is the Kronecker symbol with $\\delta_{ij}=1$ if $i=j$ and $\\delta_{ij}=0$ otherwise. $\\tau_{ij}$ may contain both isotropic and anisotropic contributions, in contrast to $\\gamma\\delta_{ij}$ which is purely isotropic. The additive decomposition in Eq. (\\ref{eq:combine}) should not be taken for granted: if surfactants are cross-linked or co-adsorbed with a polymeric skin, the different contributions to the interfacial stress may be hard to tell apart, not only experimentally but also conceptually. In the present paper, we will not consider this issue further. \n\n\n Any measure of interfacial strain is based on the coordinates of a given interfacial point: $X_i$ in the reference state and $x_i$ after the\ndeformation ($i=1,2,3$). From these, one may derive the displacement field $U_i(X_i)=x_i - X_i$, where $U_1$ and $U_2$ are the tangential displacements\nand $U_3$ the displacement normal to the interface.\nFor an interface with the two principal radii of curvature in the reference shape $R_{01}$ and $R_{02}$,\ndisplacements give rise to an \\textit{infinitesimal} strain tensor \\cite{LandauLifshitz}\n\\begin{equation}\n \\epsilon_{ij}=\\frac{1}{2}\\left(\\frac{\\partial U_i}{\\partial X_j}\n +\\frac{\\partial U_j}{\\partial X_i} +\n \\frac{\\partial U_3}{\\partial X_i} \\frac{\\partial U_3}{\\partial X_j}\n \\right)\n +\\delta_{ij}\\frac{U_3}{2}\\left(\\frac{1}{R_{01}}+\\frac{1}{R_{02}}\\right)\n \\label{eq:epsilon}\n\\end{equation}\ndescribing the interfacial 2D strains ($i,j=1,2$). For a spherical surface, the two principal curvature radii are equal ($R_{01}=R_{02}=R_0$) and $\\frac{1}{2}(\\frac{1}{R_{01}}+\\frac{1}{R_{02}})=\\frac{1}{R_0}$.\nIt contains information about the deformation which is\ninvariant to rotation and translation \\cite{LandauLifshitz}.\n Following Kirchhoff's hypothesis \\cite{Ventsel2001}, we apply\n classical thin shell approximations, and\n neglect all strains in the plane normal to the interface,\n $\\epsilon_{i3}=\\epsilon_{3i}=0$ ($i=1,2,3$).\n Both in the Surface Evolver simulations and in the shape equation\n calculus we will employ alternative \\textit{finite strain}\n measures, which are introduced below. Their relation to the infinitesimal strain tensor is provided in Appendix \\ref{AppendixA}.\n\n\n\n\n\nFor fluid-like interfaces, stress and strain are isotropic, and in this case scalar quantities of the stress $\\sigma$ and the strain $\\epsilon$ are useful. They are defined as \n\\begin{eqnarray}\n \\sigma= \\frac{1}{2}(\\sigma_{11}+\\sigma_{22}) \\\\ \n \\epsilon =\\epsilon_{11}+\\epsilon_{22}.\n \\label{eq:isotropicStressStrain}\n\\end{eqnarray}\n$\\epsilon$ is equal to the relative variation of surface area $dA\/A$. 
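As an illustration, consider a sphere of radius $R_0$ inflated homogeneously to a radius $R$. The displacement is then purely normal, $U_1=U_2=0$ and $U_3=R-R_0$, so that the gradient terms in Eq. (\\ref{eq:epsilon}) vanish and, using $R_{01}=R_{02}=R_0$,\n\\[\n \\epsilon_{ij}=\\delta_{ij}\\,\\frac{R-R_0}{R_0}, \\qquad \\epsilon=\\epsilon_{11}+\\epsilon_{22}=2\\,\\frac{R-R_0}{R_0},\n\\]\nwhich coincides, to first order in $(R-R_0)\/R_0$, with the relative area variation $(R^2-R_0^2)\/R_0^2$.\n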
\\par\nA rigorous description of \\textit{finite strains} can be derived either by considering nonlinear corrections to the kinematics based on the infinitesimal strain tensor \\cite{LandauLifshitz,Audoly_2010} or by using the displacement gradient tensor \\cite{Beatty1987, Mal1991}\n\\begin{equation}\n F_{ij} = \\frac{\\partial x_i}{\\partial X_j},\n \\label{eq:defgrad}\n \\end{equation}\n and finally the left Cauchy-Green strain tensor\n \\begin{equation}\n B_{ij}= F_{ik} F_{jk},\n \\label{eq:defB}\n\\end{equation}\nor the right Cauchy-Green tensor\n\n\\begin{equation}\n C_{ij}= F_{ki} F_{kj},\n \\label{eq:defC}\n\\end{equation}\n which extract from $F_{ik}$ information about the strain which is independent of rotation and translation. Please note that in this paper we consider right Cauchy-Green tensors in 2 and 3 dimensions. To avoid confusion, we denote them respectively as $\\mathbf{C}$ and $\\mathcal{C}$.\n In this paper, Surface Evolver computes numerically the strain of the surface using the right Cauchy-Green tensor, whose explicit expression in the finite element method is derived in Appendix \\ref{AppendixA}. For theoretical expressions, however, we will use the left Cauchy-Green tensor, to conform to the commonly used stress-strain expression derived using the Cayley-Hamilton theorem \\cite{Macosko}. As stressed by Beatty \\cite{Beatty1987}, both tensors have identical principal values (Tr($B_{ij}$)=Tr($C_{ij}$), Tr($B_{ij}^2$)=Tr($C_{ij}^2$), det($B_{ij}$)=det($C_{ij}$)), and are hence equivalent regarding the computation of strain energy.\n \n In Eqs. (\\ref{eq:defgrad}) and (\\ref{eq:defB}), we use Einstein's summation convention: indices occurring twice should be summed over. \\par\n In some models, the Hencky strain is found to be convenient. In the case of an extension that transforms a length $L$ measured in the reference state into a length $L'$, the infinitesimal strain definition in this scalar case would yield $(L'-L)\/L$ while the Hencky strain is defined as $\\ln(L'\/L)$. Extensions of the Hencky strain to the tensorial case have been discussed in the literature \\cite{Verwijlen_ACIS_2014}.\n\nTo build constitutive laws, the strain must be connected to energy density and stress.\nShuttleworth has demonstrated the following general relation between surface stress $\\sigma_{ij}$ and surface energy density, assuming constant temperature \n\\cite{Shuttleworth_ProcPhysSocA_1950}\n\n\\begin{equation}\n \\sigma_{ij} = f\\delta_{ij} + \\frac{\\partial f}{\\partial \\epsilon_{ij}},\n \\label{eq:shuttleworth}\n\\end{equation}\nwhere $i,j=1,2$. $f$ combines potential energy contributions due to the excess energy of solvent molecules at the interface, adsorbed molecules or the elastic potential energy of the skin. \n\nIn the case of fluid interfaces without skins where the stress is isotropic, a scalar model is sufficient. By taking half of the trace of Eq. 
(\\ref{eq:shuttleworth}) and using Eq.s (\\ref{eq:isotropicStressStrain}) we obtain the average surface stress, which is equal to the surface tension \n\\begin{equation}\n \\sigma(\\epsilon) = \\gamma(\\epsilon) = f + \\frac{\\partial f}{\\partial {\\epsilon}}.\n \\label{eq:isotropicshuttleworth}\n\\end{equation}\n \n\n \n \n\n\n\n \n \n \n\n \nFor the more general case, we can consider a first order expansion of $\\sigma (\\epsilon)$ around the reference state yielding\n\\begin{equation}\n \\sigma(\\epsilon) =\\sigma(0) +K \\epsilon,\n \\label{eq:K}\n\\end{equation}\n\nwhere we have introduced the elastic dilational modulus\n\\begin{equation}\n K= \\left. \\frac{\\partial f}{\\partial\\epsilon}\\right|_{\\epsilon=0}.\n \\label{eq:liquid1}\n\\end{equation}\n \n\nIn the spirit of the Hencky strain, the following alternative definition of a dilational modulus, commonly called \"Gibbs modulus\", is often used \\cite{Mysels1961,Kitchener1962}\n\\begin{equation}\n K_G= \\frac{\\partial f}{\\partial\\ln A}.\n \\label{eq:Gibbs}\n\\end{equation}\n For infinitesimal strains, $d \\mathrm{ln}A =dA\/A = \\epsilon $ and both definitions (Eq.s (\\ref{eq:liquid1}) and ( \\ref{eq:Gibbs})) coincide so that $K = K_G$. For finite strains, there is a distinction between $dA\/A$ where the area $A$ evolves along the deformation and $dA\/A_0=\\epsilon$ where $A_0$ is the area in the reference state. However, since the Gibbs modulus and the dilational modulus can vary independently as a function of strain, there is no contradiction between the two definitions. Using the Gibbs modulus and assuming its independence of strain amounts to choosing a particular type of constitutive law which appears to describe well some experimental systems \\cite{Salonen_2016,Verwijlen_ACIS_2014}. \\\\\n \n Let us now turn to interfaces with an adsorbed solid skin. Eq. (\\ref{eq:combine}) illustrates our simple hypothesis that the total surface stress is the sum of an interfacial tension and the elastic stress from the skin. To model this latter contribution, we focus on the case where plastic or viscous response is negligible so that the stress can be derived from a mechanical potential energy. Such materials are called hyperelastic. We focus further on incompressible materials and recall that in this case, the most general constitutive law relating the three-dimensional elastic stress to deformation can be cast in the form \\cite{Beatty1987, Mal1991}\n \\begin{equation}\n \\sigma^{3D}_{ij} = -p \\delta_{ij} + \\beta_1 \\mathcal{B}_{ij} - \\beta_{-1} \\mathcal{B}_{ij}^{-1},\n \\label{eq:constitutive}\n \\end{equation}\nwhere i,j=1,2,3 and where $p$ is the 3D pressure. The so-called response functions $\\beta_1$ and $\\beta_{-1}$ depend on the properties of the material and must be expressed as functions of the invariants of the strain tensor to ensure frame invariance. In the simplest case, they are constants leading to what is commonly called the \"Mooney-Rivlin\" model. It has proven successful in describing many polymers \\cite{Macosko,Mueller_Strehlow_2004}. Within this class of models, the case $\\beta_{-1} = 0$ is of particular interest. 
It leads to the so-called Neo-Hookean model where $\\beta_1$ is equal to the shear modulus $G$ \\cite{Beatty1987} so that\n\\begin{equation}\n \\sigma^{3D}_{ij} = -p \\delta_{ij} + G\\, \\mathcal{B}_{ij}.\n\\end{equation}\nThis Neo-Hookean model has been derived from a simplified microscopic description of polymer dynamics using statistical mechanics \\cite{Larson1998,Mueller_Strehlow_2004}, and it successfully describes the stress response under finite strains. Since the Neo-Hooke model remains very close to the Mooney-Rivlin model for moderate deformations, it is the model of choice for our simulations.\nIn the limit of small deformations, the Neo-Hookean model reduces to the well known Hookean model of linear elastic response.\n The 3D mechanical elastic energy density of a Neo-Hookean solid can be expressed as \n \\begin{equation}\n W = \\frac{G}{2} (I_\\mathcal{B} - 3),\n \\end{equation}\n where $I_\\mathcal{B}$ is the first invariant (the trace) of the left Cauchy-Green tensor defined in Eq. (\\ref{eq:defB}). This will be useful for the simulations presented in Section \n \\ref{sec:Modelling}.\\par\n \n\n\n\\subsection{Perfectly spherical droploons}\n\\label{sec:TheorySpheres}\n\nAs given in Eq. (\\ref{eq:combine}) and sketched in Figs. \\ref{fig:interfaces} and \\ref{fig:Schemes}, we assume that the total interfacial stress can be modeled as the sum of surface tension and an elastic contribution. In the case of fluid-like interfaces, this elastic contribution is given by a Gibbs elasticity. In the case of a solid-like interface, the extra elastic stresses arise from a (Neo-)Hookean skin. \n\nIf the interface is fluid, i.e. only Gibbs elasticity is present, one can integrate Eq. (\\ref{eq:Gibbs}) assuming a constant Gibbs dilational modulus $K_G$. In the limit of negligible gravity (i.e. low density mismatch between the phases or $\\Delta\\rho gR_0^2\/\\gamma_0\\ll 1$), the reference shape of the drop is spherical and the principal radii of curvature can be assumed to be equal ($R_{01}=R_{02}\\equiv R_0$). This gives for a spherical droploon of radius $R$ \n\\begin{equation}\n \\sigma(A) = \\gamma(A) = \\gamma_0 + K_{G} \\ln{\\left( \\frac{A}{A_0}\\right)} = \\gamma_0 + 2K_{G} \\ln{\\left( \\frac{R}{R_0} \\right)}.\n \\label{eq:GibbsGamma}\n\\end{equation}\n\nFrom this, the pressure drop $\\Delta P$ across the interface is obtained via the Young-Laplace law\n \\begin{equation}\n \\Delta P = \\frac{2\\gamma}{R}.\n \\label{eq:Laplace}\n \\end{equation}\n\nIn the reference state $R=R_0$ and $\\gamma =\\gamma_0$ so that $\\Delta P_0 = 2 \\gamma_0\/R_0$.\n\nTo prepare our analysis of solid-like and fluid-like contributions, we introduce the following normalised quantities.\nWe define an \"elastocapillary number\"\n\\begin{equation}\n \\alpha = \\frac{K}{\\gamma_0},\n \\label{eq:alpha}\n\\end{equation}\nwhich compares the dilational elastic modulus $K$ to the interfacial tension $\\gamma_0$ of the reference state. $K$ is either due to Gibbs elasticity (denoted $K_G$ in this case) or to a solid-like elasticity, as given later.\n\nFor spheres, the stretch $\\lambda$ is given by\n\\begin{equation}\n\\lambda = \\frac{R}{R_0}.\n \\label{eq:areaStretch}\n\\end{equation}\n\nMoreover, we introduce the normalised interfacial stress\n\n\\begin{equation}\n\\hat{\\sigma} = \\frac{\\sigma}{\\gamma_0}. 
\\label{eq:NormalisedStress}\n\\end{equation}\nIn the case where only Gibbs elasticity is present, the total interfacial stress is therefore given by\n\\begin{equation}\n\\hat{\\sigma} = \\hat{\\gamma}= 1 + 2\\alpha \\ln{\\lambda}.\n\\label{eq:GibbsND}\n\\end{equation}\n In the small-deformation limit this reduces to \n \\begin{equation}\n \\hat{\\sigma} = \\hat{\\gamma} = 1+ 2\\alpha (\\lambda-1).\n \\label{eq:GibbsSmallDef}\n \\end{equation}\nWhatever the origin of the tension and elastic response may be, the normalised pressure is obtained using\n \n\\begin{equation}\n \\Delta \\hat{P} = \\frac{\\Delta P}{\\Delta P_0} = \\frac{\\hat{\\sigma}}{\\lambda}.\n \\label{eq:NormalisedPressure}\n\\end{equation}\n \nLet us now consider solid-like interfaces. For the case of a spherical balloon with initial skin thickness $h_0\\ll R_0$, starting from Eq. (\\ref{eq:constitutive}), Beatty \\cite{Beatty1987} derived an expression valid for any hyperelastic material,\n \\begin{equation}\n \\Delta P(\\lambda)=\\frac{2\\sigma}{R}=\\frac{2 h_0}{\\lambda R_0}\\left[ 1-\\frac{1}{\\lambda^6}\\right]\\left(\\beta_1-\\lambda^2\\beta_{-1}\\right).\n \\end{equation}\nIn the neo-Hookean case ($\\beta_1=G$, $\\beta_{-1}=0$) this yields the following expression for the stress in the skin \n \\begin{equation}\n \\sigma_{Balloon}= G h_0\\left[ 1-\\lambda^{-6}\\right].\n \\end{equation}\n\nIn several more recent models of non-linear mechanical behavior, nonlinear variations of the response functions with the strain invariants are considered, as reviewed in \\cite{Horgan2015,Puglisi2015}. However, for the remainder of this paper we restrict ourselves to the use of the Neo-Hookean model.\n\n \n\n\nWe characterise the elastic skin, assumed to be isotropic and incompressible, by its 3D shear modulus $G$. To link it to the 2D dilational modulus, we note that the skin is in a state of plane stress, and that in this case\n\\begin{equation}\n \\epsilon=\\epsilon_{11}+\\epsilon_{22}=\\frac{\\sigma_{11}+\\sigma_{22}}{2 E} = \\frac{\\sigma}{h_0 E}, \n\\end{equation}\nwhere $E$ is Young's modulus. Here, the biaxial stress in the solid induced by stretching is expressed as a skin tension divided by the skin thickness. In view of Eq. (\\ref{eq:K}), this means that $K=E h_0$ in the present case. For incompressible materials $E=3G$, so that for isotropic, small deformations \n\\begin{equation}\n K= 3G h_0. \n \\label{eq:SolidModulus}\n\\end{equation}\n\nIn the case of an elastic skin attached to an interface with tension $\\gamma_0$ we therefore obtain for the elastocapillary number \n \\begin{equation}\n \\alpha = \\frac{3Gh_0}{\\gamma_0}.\n \\label{eq:alphaNH}\n \\end{equation}\nThe total interfacial stress of a spherical neo-Hookean droploon is therefore given by \n \\begin{equation}\n \\hat{\\sigma} = 1+\\frac{G h_0}{\\gamma_0}(1-\\lambda^{-6})=1+\\frac{\\alpha}{3}(1-\\lambda^{-6}).\n \\label{eq:NeoHookeTension}\n \\end{equation}\nIn the small deformation limit one obtains the prediction of the linear elastic Hooke model\n \\begin{equation}\n \\hat{\\sigma} = 1+ 2\\alpha(\\lambda-1),\n \\label{eq:HookeTension}\n \\end{equation}\n which is identical to Eq. (\\ref{eq:GibbsSmallDef}). 
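Indeed, expanding Eq. (\\ref{eq:NeoHookeTension}) to first order in the deformation $(\\lambda-1)$ gives\n\\[\n \\lambda^{-6}=\\left[1+(\\lambda-1)\\right]^{-6}\\approx 1-6(\\lambda-1), \\qquad \\hat{\\sigma}\\approx 1+\\frac{\\alpha}{3}\\,6(\\lambda-1)=1+2\\alpha(\\lambda-1).\n\\]\n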
This result shows that in the limit of isotropic and small deformations both Gibbs elasticity and Neo-Hookean elasticity lead to a linear elastic response captured by Hooke's law in two dimensions with a dilational modulus $K = K_G = 3Gh_0$.\\par\n \n \\begin{table*}\n\\renewcommand{\\arraystretch}{2}\n \\centering\n \\begin{tabular}{|c|c|c|c|}\n \\hline\nSphere model & Normalised surface stress $\\hat{\\sigma}$ & Critical stretch $\\lambda_{A,c}$ & Stretch at maximum pressure $\\lambda_{A,m}$ \\\\ \\hline\nGibbs (liquid) & $1 + \\alpha \\ln{\\lambda_A}$ & $ \\exp{\\left(-\\frac{1}{\\alpha}\\right)}$ & $ \\exp{\\left(2-\\frac{1}{\\alpha}\\right)}=e^2\\lambda_{A,c}$\\\\ \\hline\nNeo-Hooke (solid) & $1 + \\frac{\\alpha}{3} (1-\\lambda_A^{-3})$ & $ \\left( \\frac{\\alpha}{\\alpha+3} \\right) ^{1\/3} $ & $\\left( \\frac{7\\alpha}{\\alpha+3} \\right)^{\\frac{1}{3}} =7^{\\frac{1}{3}}\\lambda_{A,c}$ \\\\ \\hline\nHooke & $1 + 2\\alpha(\\lambda_A^{1\/2}-1) $ & $ \\left( 1-\\frac{1}{2\\alpha} \\right)^2$ (for $\\alpha>0.5$) & no maximum \\\\\n\\hline\n \\end{tabular}\n \\caption{Summary of the expressions for the normalised surface stress $\\hat{\\sigma}=\\sigma\/\\gamma_0$; the critical stretch $\\lambda_{A,c}$ at which the pressure changes sign; and the stretch at maximum pressure $\\lambda_{A,m}$ for the Gibbs, Neo-Hooke and Hooke models.}\n \\label{tab:models}\n\\end{table*}\n \nEq. (\\ref{eq:HookeTension}) shows that for $\\alpha > 1\/2$ the normalised pressure $\\Delta \\hat{P}=\\hat{\\sigma}\/\\lambda$ increases upon an extensional stretch, providing a restoring response, while for $\\alpha < 1\/2$ it decreases upon extension, which favors further deformation. Analogous tendencies are predicted for compression. The condition $\\alpha = 1\/2$ has therefore received particular attention and is often called the \"Gibbs criterion\" since the physical response of a system may change fundamentally around this value. This is known, for example, for the case of bubble dissolution and foam coarsening \\cite{Stocco2009,Salonen_2016}.\n\nIn the case of spheres, it is natural to express interfacial stresses and curvatures via the radial stretch $\\lambda$. However, for more general surfaces, the relationship between the two depends on the geometry of the surface. In this case it is more appropriate to express the dilational stresses via the area stretch $\\lambda_A = A\/A_0$. \nFor spheres, the relationship between area and radial stretch is simply \n\\begin{equation}\n\\lambda = \\frac{R}{R_0} = \\left( \\frac{A}{A_0} \\right)^{1\/2} = \\lambda_A^{1\/2}.\n \\label{eq:radiusStretch}\n\\end{equation}\nIn Table \\ref{tab:models} we summarise the interfacial stresses for the Gibbs, Neo-Hookean and Hookean models expressed via their area stretches, together with some critical stretches which are discussed in Section \\ref{sec:ResultsSpheres}. In the following we will use these relations.\n\n\n\n\\subsection{Droploons on capillaries}\n\\label{sec:TheoryNeedle}\nLet us now consider droploons attached to capillaries with circular cross-section of radius $R_n$ (Fig. \\ref{fig:Schemes}). In this case one geometrically removes a cap of radius $R_n$ from the droploon and fixes the perimeter of the resulting circular hole to the end of the capillary. For fluid interfaces with Gibbs elasticity, the interfacial stresses are isotropic and constant everywhere in the interface, even if the droploon is inflated or deflated. 
Hence, the droploon shapes remain spherical sectors and, as we show below, all pressure-deformation relations can be calculated analytically, giving useful insight into the impact of the geometry change. In the case of interfaces with a solid skin, this is much less straightforward. Fixing the interface points on the capillary boundary induces shear deformation in the vicinity of the capillary upon inflation or deflation and hence deviations from the shape of a perfect sphere. The presence of the capillary in the case of a solid-like skin therefore combines a geometrical impact (as for the Gibbs elasticity) with one of a non-isotropic deformation. Both contributions are coupled and their relative importance depends on the elastocapillary number $\\alpha$, on the deformation $A\/A_0$ and on the capillary-to-drop size ratio $R_n\/R_0$.\\\\\nLet us assume in the following that shear stresses remain negligible and that we can estimate the droploon shape by spherical sectors derived from perfect spheres of radius $R$ from which a cap of radius $R_n$ is removed, as depicted in Fig. \\ref{fig:Schemes}. The interfacial area $A$ is then given by \n\\begin{equation}\n \\begin{split}\n A(R) & = 2\\pi R^2\\left(1 \\pm \\sqrt{1-\\left(\\frac{R_n}{R}\\right)^2}\\right),\n \\end{split}\n \\label{eq:interfacialarea01}\n\\end{equation}\nwhere the two signs correspond to droploons larger than a hemisphere (\"+\") or smaller than a hemisphere (\"-\"). The latter geometry introduces a major difference between drops with and without capillaries: the radius of the drop \\textit{increases} upon further deflation from the hemisphere. This dramatically changes the pressure-deformation relation, which is why we will exclude this case in the remaining discussion.\\\\\nEq. (\\ref{eq:interfacialarea01}) can be used to relate the area stretch $\\lambda_A$ and the radial stretch $\\lambda$ via\n\n\\begin{equation}\n \\begin{split}\n \\lambda = & \\;\\lambda_A^{1\/2}\\frac{1+\\sqrt{1-\\left(\\frac{R_n}{R_0}\\right)^2}}{\\sqrt{2\\left[1+\\sqrt{1-\\left(\\frac{R_n}{R_0}\\right)^2}\\right] -\\left(\\frac{R_n}{R_0}\\right)^2\\frac{1}{\\lambda_A}}} \\\\\n = & \\;\\lambda_A^{1\/2} \\; \\mathpzc{f}\\left( \\frac{R_n}{R_0},\\lambda_A\\right).\n \\end{split}\n \\label{eq:GeometricalCorrection}\n\\end{equation}\n\nThat is, when comparing with the full-sphere expression of Eq. (\\ref{eq:radiusStretch}), the presence of the capillary introduces a correction factor $\\mathpzc{f}\\left( \\frac{R_n}{R_0},\\lambda_A\\right)$ to the relationship between the radial and the area stretch. \n\nFor a given area stretch $\\lambda_A$ - which is experimentally and computationally more easily accessible than the radial stretch $\\lambda$ - we can then rewrite the pressure-deformation relation as\n\n\\begin{equation}\n \\Delta \\hat{P}=\\frac{\\hat{\\sigma}(\\lambda_A)}{\\lambda}=\\frac{\\hat{\\sigma}(\\lambda_A)}{\\lambda_A^{1\/2}} \\mathpzc{f}^{-1} = \\Delta \\hat{P}_S \\mathpzc{f}^{-1},\n \\label{eq:PressureDefNeedle}\n\\end{equation}\nwhere $\\Delta \\hat{P}_S$ is the pressure of the sphere with the same area stretch and the interfacial stress $\\hat{\\sigma}$ is given in Table \\ref{tab:models} for the different models. Hence, in the approximation of negligible shear contributions, the capillary may be considered to impose a simple geometrical correction on the pressure-deformation relation which depends only on the capillary size $\\frac{R_n}{R_0}$ and the area stretch $\\lambda_A$. In the case of fluid-like interfaces (Gibbs elasticity), Eq. 
(\\ref{eq:PressureDefNeedle}) is accurate, while in the case of solid-like interfaces (Neo-Hooke \\& Hooke), this is an approximation. We shall see in Section \\ref{sec:ResultsDropsNeedles} that this remains nevertheless an excellent approximation over a wide range of parameters.\n\nHere we have chosen to express the pressure-deformation relations in terms of area stretch $\\lambda_A$ since it simplifies comparison with simulations and experiments. One may also choose to express them in terms of radial stretch $\\lambda$. In this case it is the expression of the interfacial stress $\\hat{\\sigma}$ which needs to be modified, leading to more complex expressions. We provide these relations for the interested reader in Annex \\ref{annex:PressDefNeedle}. \n\n\n\n\n\\section{Numerical modelling}\n\\label{sec:Modelling}\n\\subsection{Surface Evolver simulations}\n\\label{sec:SE}\n\\label{Subsec:SEPrinciple}\n\nSurface Evolver\\cite{Brakke1992} is a widely used software that determines the equilibrium structure of systems containing several fluid phases separated by interfaces. It uses the principle that in equilibrium, the interfacial energy must be minimal under the constraints imposed by boundary conditions. Examples of this are foams where the volume of each bubble is fixed \\cite{Buffel2014,Weaire2017,Hohler2017,Ginot2019}. Surface Evolver can also be used to model elastic membranes \\cite{Bouzidi_CompStruct_2004,Quilliet2016}.\n\n\nIn Surface Evolver simulations, interfaces are represented as meshes of triangular facets whose energy is evaluated. Most previous studies on bubble or drop shapes focus on systems where this energy is proportional to the interfacial area, the proportionality factor being the surface tension $\\gamma$. Additionally to this contribution, Surface Evolver simulations can also take into account an elastic energy induced by the deformation of each facet, simulating an elastic skin. Several constitutive laws are implemented in the Evolver Software and can be used: Hooke's law describing linear elastic response, as well as the non-linear Saint-Venant or Neo-Hooke's law\\cite{Bouzidi_CompStruct_2004}. In the work reported here, we use Neo-Hooke's law introduced in Section \\ref{sec:TheoryFundamentals}. We implement, for the first time to our knowledge, an interface with both surface tension and neo-Hooke interfacial elasticity. As a first implementation, we thoroughly compare Surface Evolver results to the numerical solution of the shape equations (Section \\ref{sec:ModelKierfeld}), and ensure that it provides physically sound results in the investigated range of parameters.\n\nIn contrast to fluid interfaces where the interfacial area uniquely determines the energy, the energy of elastic skins depends on their deformation with respect to a reference state. The reference state of an interface element is given by a shape with zero interfacial elastic stress. This state is encoded in the reference positions of the facet vertices. The implementation of elastic stress in the framework of the Surface Evolver requires an expression of the facet deformation energy for arbitrary large strains, given as a function of the vertex positions. A detailed presentation of this feature and the implementation of elastic energy in the Surface Evolver has not been published so far to our knowledge. We therefore provide this information in the Appendix \\ref{AppendixA} to clarify for the interested reader how exactly the software operates. 
Here we shall concentrate on a very general description of the approach.\n\n\nOur Surface Evolver calculations simulate an experiment where a bubble or drop is inflated at the tip of a cylindrical hollow capillary inserted into a liquid, as illustrated in Fig. \\ref{fig:DroploonSimulations}. In the first step, we need to obtain a physically correct reference shape for a drop without interfacial elasticity. For this purpose, an initially very coarse mesh is attached to a cylindrical boundary representing the capillary. The interfacial area is then minimised for the given drop target volume assuming that the interfacial energy is due only to a uniform and constant surface tension\\footnote{This could represent a physical system where the elastic skin forms progressively at an initially \"naked\" interface.}. Successive refinements and energy minimisations of the mesh are then performed to simulate the drop shape and the pressure in the reference bubble accurately. When the relative variation of total interfacial energy $|E^{n+1}-E^{n}|\/E^n$ remains smaller than $10^{-8}$ over 100 iteration steps we consider that convergence has been achieved.\\\\\nIn the second step of the simulation, an elastic skin is added to the drop surface of the obtained reference state, so that initially there is no elastic stress. Numerically, this consists of saving the current positions $\\{\\vec{X}_i\\}$ of the vertices as their reference positions, and setting a non-zero elastic modulus value for the interfacial energy computation in further minimisation iterations. How reference and current positions are used for the deformation computation is detailed in Appendix \\ref{AppendixA}.\\\\\nThe third step consists of inflating or deflating this droploon up to a new volume where mechanical equilibrium is again established via progressive mesh relaxation. Frequent merging of facets significantly smaller than average and refinement of facets larger than average hastens convergence whilst avoiding trapping the system in local energy minima. These operations are all performed by built-in Surface Evolver routines as part of a standard energy minimisation procedure. When the mesh management and energy minimisation have converged ($|E^{n+1}-E^{n}|\/E^n<10^{-8}$), the elastic stress in the skin, the pressure in the bubble and the bubble shapes are recorded.\\\\\n\n\n\n\\subsection{Numerical integration of the shape equations} \n\\label{sec:ModelKierfeld}\n\nWe solve for the shape and stress\/strain profile of an axi-symmetric capsule by \nnumerically integrating the \\emph{shape equations} \\cite{Hegemann2018,Knoche2013}. Because we impose axial symmetry, \nthe droploon can be parametrised as a single arc with\narc length $s$ and arc angle $\\Psi$. The transformation from arc length parametrisation to cylindrical coordinates $\\{r, \\phi, z\\}$ gives the first two shape equations\n\\begin{equation}\n \\label{eqn:shape_eqn_rz} \\frac{\\mathrm{d}r}{\\mathrm{d}s} = \\cos\\Psi ~~~\\text{and}~~~\n \\frac{\\mathrm{d}z}{\\mathrm{d}s} = \\sin\\Psi \\,.\n\\end{equation}\nThe remaining shape equations, needed to close the set of ordinary differential equations, take into account the constitutive material law and reflect the force balance at every point along the arc $s$. 
They are derived by searching for the stationary solutions of the appropriate energy functional.\n\n\\begin{figure}[h]\n \\centering\n \\includegraphics[width=7cm,keepaspectratio]{FIGURES\/drop_visualization.pdf}\n \\caption{A pendant droploon parametrised in arc-length $s$ and arc-angle\n $\\Psi$.\n \n \\label{fig:parametrization}}\n\\end{figure}\n\nIn the experimentally relevant setting we control either the droploon volume or the mechanical pressure at the capillary inlet. Thus, the appropriate energy functional is the enthalpy\n\\begin{equation}\n H = \\int \\mathrm{d}A_0 \\, W_{2D} + \\int \\mathrm{d}A \\, \\gamma_0 - \\int \\mathrm{d}V \\, \\Delta P \\,,\n \\label{eqn:enthalpy_functional}\n\\end{equation}\nwith a contribution from the surface energy $W_{2D}$, measured \nwith respect to the undeformed area $A_0$, from the surface tension $\\gamma_0$ and from the volumetric work \nagainst a pressure difference $\\Delta P$.\nWe find the stationary states of the enthalpy $H$ of Eq.\\ \\eqref{eqn:enthalpy_functional} via the first \nvariation, $\\delta H = 0$ (see \\cite{Knoche2013, Hegemann2018} for details), leading to the shape equations\n\\begin{align}\n \\label{eqn:shape_eqn_psi} \\frac{\\mathrm{d} \\Psi}{\\mathrm{d}s} &= \\kappa_s = \\frac{1}{\\sigma_s} \\left(\\Delta P - \n \\kappa_\\phi \\sigma_\\phi \\right)\n \\,, \\\\\n \\label{eqn:shape_eqn_taus} \\frac{\\mathrm{d} \\sigma_s}{\\mathrm{d}s} &= \\frac{\\cos\\Psi}{r} \\left( \\sigma_\\phi - \\sigma_s \\right) \\,,\n\\end{align}\nwhere ($\\kappa_s,\\kappa_{\\phi}$) and ($\\sigma_{s},\\sigma_{\\phi}$) are the meridional and circumferential curvatures and surface stresses, respectively. The curvatures are given by \n$\\kappa_\\phi = {\\sin \\Psi}\/{r}$ and \n$\\kappa_s = {\\mathrm{d}\\Psi}\/{\\mathrm{d}s}$.\nNote that the shape equations \\eqref{eqn:shape_eqn_rz}, \\eqref{eqn:shape_eqn_psi} and \\eqref{eqn:shape_eqn_taus} still require a constitutive material law for closure.\nAt this point, no detailed knowledge about the 2D surface energy functional $W_{2D}$ is required, as we\ndefine \n\\begin{equation}\n \\sigma_{s, \\phi} = \\frac{1}{\\lambda_{\\phi, s}}\\left(\\frac{\\partial W_{2D}}{\\partial \\lambda_{s, \\phi}} \n + \\frac{\\partial (\\gamma_0 \\lambda_s \\lambda_\\phi)}{\\partial \\lambda_{s, \\phi}}\\right) \\,,\n \\label{eqn:stress_energy_functional}\n\\end{equation}\nwhere $\\lambda_s$ and $\\lambda_\\phi$ are the meridional and circumferential\nstretch ratios of the droploon. The shape equations \\eqref{eqn:shape_eqn_rz},\n\\eqref{eqn:shape_eqn_psi} and \\eqref{eqn:shape_eqn_taus}\nare written in terms of the arc length $s$ of the deformed \nshape. For the numerical solution we reparametrise in terms of the\n\\emph{undeformed} arc length coordinate $s_0$ of the original undeformed\nshape by using the relation\n$\\mathrm{d}s \/ \\mathrm{d}s_0 = \\lambda_s$, which is necessary in order to\ngain access to the meridional stretches $\\lambda_s$.\nThe circumferential stretch $\\lambda_\\phi = r\/r_0$\nis given by the ratio of the deformed and undeformed \nradial coordinates.\n\nThe surface energy $W_{2D}$ accounts for the material-specific model and can incorporate various effects, such as film thinning. 
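As a simple consistency check of Eq.\\ \\eqref{eqn:stress_energy_functional}, the surface-tension contribution alone produces an isotropic stress equal to $\\gamma_0$,\n\\[\n \\frac{1}{\\lambda_\\phi}\\,\\frac{\\partial (\\gamma_0 \\lambda_s \\lambda_\\phi)}{\\partial \\lambda_s}=\\gamma_0\n \\quad\\text{and}\\quad\n \\frac{1}{\\lambda_s}\\,\\frac{\\partial (\\gamma_0 \\lambda_s \\lambda_\\phi)}{\\partial \\lambda_\\phi}=\\gamma_0,\n\\]\nwhich is the origin of the additive $\\gamma_0$ term in the constitutive law given below.\n\n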
To express the constitutive equation in terms of our parametrisation we write the right 2D Cauchy-Green tensor, discussed in Section \\ref{sec:Modelling} and in Appendix \\ref{AppendixA}, as \n\\begin{equation}\n \\mathbf{C} = \\mathrm{diag}(\\lambda_s^2, \\lambda_\\phi^2)\\,.\n\\end{equation}\nFor a two-dimensional Neo-Hookean elastic material the surface energy is given by Eq. \\eqref{eq:2D energy density final} from Appendix \\ref{sec:energy},\n\\begin{equation}\n W_{2D} =\\frac{G h_0}{2} \\left( \\mathrm{Tr}\\mathbf{C} + \\mathcal{C}_{33} + \\frac{G}{\\Lambda} \\mathcal{C}^2_{33}\\right),\n\\end{equation}\nwith 3D Lam\u00e9 parameters $G$ and $\\Lambda$.\nHere, $\\mathbf{C}$ is the 2D Cauchy-Green tensor describing deformations within the surface, while \n$\\mathcal{C}_{33}$ is the component of the 3D Cauchy-Green tensor describing normal (thickness) deformations of the elastic skin. Requiring the absence of normal stresses, $\\mathcal{C}_{33}$ becomes a function of $G\/\\Lambda$ and $\\mathrm{det}\\mathbf{C}$, as derived in Appendix \\ref{sec:energy}.\n\nFrom this surface energy, we extract the constitutive law needed to close the shape equations using Eq.\\ \\eqref{eqn:stress_energy_functional},\n\\begin{equation}\n \\sigma_{s, \\phi} = G h_0 \\left( \\frac{\\lambda_{s, \\phi}}{\\lambda_{\\phi, s}} - \\frac{\\mathcal{C}_{33}}{\\lambda_s \\lambda_\\phi} \\right) + \\gamma_0 \\,.\n\\end{equation}\nIn the following, we focus on the incompressible limit $G \/ \\Lambda \\ll 1$, where $\\mathcal{C}_{33} \\approx 1 \/ \\mathrm{det}\\mathbf{C} = 1 \/ \\lambda_s^2 \\lambda_\\phi^2$. \n\nFor a given undeformed shape (described by a function $r_0(s_0)$),\nthe shape equations, along with the constitutive equations, are numerically integrated from the apex ($s=0$) to the attachment point at the capillary ($s=L$) using a Runge-Kutta scheme, paired with a shooting algorithm to satisfy the boundary conditions\n\\begin{equation}\n r(s = 0) = z(s = 0) = \\Psi(s = 0) = 0~~~\\text{and}~~~\n r(s = L) = R_n .\n\\end{equation}\nIn the shooting procedure, \nwe prescribe an apex stress $\\sigma_s(s = 0)$ and iteratively search for\na pressure drop $\\Delta P$ satisfying the attachment boundary condition at the\ncapillary. Moreover, we restrict the prescribed apex stresses to the\nphysically relevant range for our context, $\\sigma_s(s = 0) > 0$ (no\ncompressive stresses), and do not exceed the maximal possible apex stress\nallowed by the constitutive equations,\n$\\sigma_{s, \\phi}(s = 0)^{\\text{max}} = G h_0 + \\gamma_0$.\n\n\n\n\n\n\\section{Results}\n\\label{sec:Results}\n\n\nIn Section \\ref{sec:ResultsSpheres} we compare the theoretical predictions of the different elastic laws in Eqs. \\eqref{eq:GibbsND}, \\eqref{eq:HookeTension} and \\eqref{eq:NeoHookeTension} with the results obtained from Surface Evolver simulations. In Section \\ref{sec:ResultsDropsNeedles}, we compare the numerical simulations to the analytical predictions where the needle is treated as a geometrical perturbation truncating an isotropic droploon (Section \\ref{sec:TheoryNeedle}). These two results are compared to the direct numerical predictions (Section \\ref{sec:ModelKierfeld}), which account both for the geometrical perturbation and the anisotropy imposed by the needle. Finally, we quantify the perturbation of the pressure induced by the needle, and show that it can be in large part explained by the geometrical perturbation. 
In the last step, we use the direct numerical predictions to quantify the importance of anisotropic stretches, and provide experimentalists with guidelines to predict the parameter ranges over which the influence of the capillary (shape change and\/or stress anisotropy) can be neglected.\n\n\\subsection{Spherical droploons}\n\\label{sec:ResultsSpheres}\n\n\\begin{figure*}[h]\n\\centering\n\\includegraphics[width=13cm,keepaspectratio]{FIGURES\/Quad_ElasticSphere.png}\n\\caption{Normalised pressure as a function of area stretch $\\lambda_A$ for spherical droploons whose skin elasticity is described by Gibbs', Neo-Hooke's or Hooke's law. Four characteristic elastocapillary number values ($\\alpha = 0.1$, $0.5$, $1$, $10$) are investigated. The Surface Evolver data are obtained assuming Neo-Hookean elasticity. }\n\\label{fig:PressureDeformationSPhere}\n\\end{figure*}\n \n\\begin{figure}[h!]\n\\centering\n\\includegraphics[width=9.5cm]{FIGURES\/lambda_Ac_lambda_Am_pressure_vs_epsilon_LambdaArea.png}\n\\caption{Variation of characteristic features (critical area stretch $\\lambda_{A,c}$, stretch $\\lambda_{A,m}$ at maximum pressure and maximum pressure $\\Delta \\hat{P}(\\lambda_{A,m})$) with the elastocapillary number $\\alpha$ predicted for droploons with skins presenting Gibbs, Neo-Hookean and Hookean elasticity. Surface Evolver simulations are performed for the Neo-Hooke case. }\n\\label{fig:SummFeaturesSphere}\n\\end{figure}\n\nWe run Neo-Hookean Surface Evolver simulations (Section \\ref{sec:SE}) for spheres with four different elastocapillary numbers ($\\alpha =$ 0.1, 0.5, 1, 10), imposing inflation and deflation while recording the normalised pressure difference $\\Delta \\hat{P}$. The results are shown in Fig. \\ref{fig:PressureDeformationSPhere} as a function of area stretch $\\lambda_A$ along with the theoretical predictions for the Gibbs, Hooke and Neo-Hooke models provided in Section \\ref{sec:TheorySpheres}. \n\nThe simulations show excellent agreement with the Neo-Hookean theory over the full range of investigated deformations. As expected and discussed in Section \\ref{sec:TheorySpheres}, all three models coincide in the small deformation limit $\\lambda_A \\approx 1$. However, for deformations of a few percent, the three models already show very pronounced differences, indicating the importance of choosing the physically most realistic model for the interpretation of pressure-deformation relations. \n\nFor non-zero $\\alpha$, in the case of Gibbs and Neo-Hookean elasticity, the initially monotonically decreasing Young-Laplace-like behaviour is replaced by a pressure-deformation relation with a well-pronounced pressure maximum $\\Delta\\hat{P}(\\lambda_{A,m})$ at a characteristic stretch $\\lambda_{A,m}$. Upon deflation ($\\lambda_A<1$), this leads to the appearance of a critical stretch $\\lambda_{A,c}$ at which the pressure difference is zero, and beyond which it becomes negative. This point corresponds to elastic instabilities of compressed interfaces, which manifest themselves in buckling phenomena \\cite{LandauLifshitz,Zoldesi1998,Sacanna2011}. A proper handling of this range requires taking into account the bending energies of the interfaces. 
Since buckling is neither of interest here, nor implemented in our simulations, we stay away from the buckling range in our analysis.\n\nThe variation of $\\lambda_{A,c}$, $\\lambda_{A,m}$ and of the pressure difference $\\Delta\\hat{P}$ at $\\lambda_{A,m}$ with elastocapillary number $\\alpha$ for the different models is shown in Fig. \\ref{fig:SummFeaturesSphere}. The corresponding analytical expressions are given in Table \\ref{tab:models}. They reveal clear differences between the Gibbs, Hookean and Neo-Hookean models. In comparison to Gibbs elasticity, the Neo-Hookean critical and maximal stretches vary only mildly with $\\alpha$. The Surface Evolver results again agree very well with theory. The critical stretch for Hooke's model appears when the elastocapillary number crosses the Gibbs criterion $\\alpha=0.5$. The Gibbs critical stretch tends exponentially towards 0 with decreasing $\\alpha$, as $\\lambda_{A,c}=\\mathrm{exp}(-1\/\\alpha)$. In the limit of large $\\alpha$, the critical stretches all converge towards $\\lambda_{A,c}=1$, that is, a shell so rigid that it buckles as soon as it is compressed. \nHooke elasticity does not predict a local pressure maximum at any elastocapillary number. It predicts, however, an interesting deformation-independent pressure for $\\alpha=0.5$, i.e. at the \"Gibbs criterion\". Gibbs and Neo-Hooke, on the other hand, have a maximal pressure stretch increasing with $\\alpha$. In particular, at the Gibbs criterion $\\alpha=0.5$, the maximal pressure is reached at null deformation ($\\lambda_A=1$). Lower elastocapillary numbers move $\\lambda_{A,m}$ to the compression regime ($\\lambda_{A,m}<1$), while $\\alpha>0.5$ shifts $\\lambda_{A,m}$ to the dilation regime ($\\lambda_{A,m}>1$).\nThe most remarkable features of the elastocapillary transition (onset of a significant critical stretch, variation of the maximal pressure stretch) occur for elastocapillary numbers between $0.1$ and $10$. For this reason, we present in this article results for $\\alpha=0.1$, $1$ and $10$, so as to span two decades of elastocapillary numbers. Because of its significance as the Gibbs criterion and its role as the pivot point between capillarity and elasticity, $\\alpha=0.5$ will also be represented.\n\n\\subsection{Droploons on capillaries}\n\\label{sec:ResultsDropsNeedles}\n\n\\begin{figure}[h!]\n\\centering\n\\includegraphics[width=8.5cm,keepaspectratio]{FIGURES\/DropSimulation_Alpha05.png}\n\\caption{Examples of neo-Hookean droploons at different area stretches and capillary ratios $R_n\/R_0$ obtained for $\\alpha = 0.5$ using Surface Evolver.}\n\\label{fig:DroploonSimulations}\n\\end{figure}\n\nIn a second step, we run Surface Evolver simulations of pendant droploons attached to a capillary with circular cross-section of radius $R_n$ (Fig. \\ref{fig:Schemes}). The droploons are inflated and deflated while their interfacial area and inner pressure are recorded (Section \\ref{sec:SE}). Three ratios between the capillary radius $R_n$ and the radius $R_0$ of the droploon in the reference configuration are used: $R_n\/R_0$ = 0.1, 0.5 and 0.9. Representative examples of the obtained droploon shapes are shown in Fig. \\ref{fig:DroploonSimulations} for three characteristic area stretches ($\\lambda_A=$ 0.8, 1, 2) for the case of $\\alpha = 0.5$. \n\nIn Fig. \\ref{fig:PressureDeformationSPhereNeedle} we show the obtained pressure-deformation relations for the elastocapillary numbers $\\alpha = 0.1, 0.5, 1, 10$. 
Along with the Surface Evolver results (crosses) we plot results obtained by direct numerical predictions (empty circles) using the Neo-Hookean shape equations for the same set of parameters (Section \\ref{sec:ModelKierfeld}). The excellent agreement between both for all elastocapillary numbers, capillary radii and deformations demonstrates the reliability of Surface Evolver simulations for such systems.\n\nThe solid lines shown in Fig. \\ref{fig:PressureDeformationSPhereNeedle} correspond to the analytical approximation given in Eq. (\\ref{eq:PressureDefNeedle}) which models droplets as spherical sectors covered with a Neo-Hookean skin. The agreement is excellent in the whole deformation range for all capillary sizes and elastocapillary numbers. This means that in this parameter range the deviation from the predictions for spherical droploons without any capillary (gray line in Fig. \\ref{fig:PressureDeformationSPhereNeedle}) are essentially a result of the associated change of the geometry induced by the capillary, rather than due to the shear deformation in the vicinity of the capillary. Deviations from the simple model set in only for large capillary sizes ($R_n\/R_0=0.9$) and large elastocapillary numbers ($\\alpha = 10$). \n\nTo investigate why the spherical sector approximations fit the results so well, Fig.\\ \\ref{fig:anisotropy_area_stretch} plots different measures of the anisotropy of the stretch distributions on the droploon surface obtained from the Neo-Hookean shape equations for the same parameter ranges as in Fig.\\ \\ref{fig:PressureDeformationSPhereNeedle}. In the case of fully isotropic deformation, corresponding to a spherical sector shape, the deviation of the mean stretch ratio along the contour $\\left\\langle \\frac{\\lambda_s}{\\lambda_{\\phi}} \\right\\rangle-1$ (Fig.\\ \\ref{fig:anisotropy_area_stretch}a,b) and the standard deviation of the meridional and circumferential stretches $\\mathrm{std}_s(\\lambda_{s,\\phi})$ (Fig.\\ \\ref{fig:anisotropy_area_stretch}c,d) are both zero. Since we neglect gravitational effects, it is clear that the unstressed shape\nof the capsule at $\\lambda_A = 1$ \\emph{must} be a spherical sector. The\nstretched shape will be anisotropically stressed, in general, because of the\nboundary condition imposed by the attachment at the capillary. We can find,\nhowever, another particular stretch, where the \\emph{stressed} shape is a\nspherical sector. This is reached at the critical stretch $\\lambda_{A,c}$\n(see also Section \\ref{sec:ResultsSpheres}) at which $\\Delta \\hat {P}=0$. The\nforce balance for every point on the capsule requires that the pressure force\ncancels the tension force. For $\\Delta \\hat {P} = 0$, we therefore have\n$\\sigma_s = \\sigma_\\phi = 0$ all over the surface, i.e. the surface is\nstress-free everywhere at this critical stretch. Since\n$\\sigma_s = \\sigma_\\phi = 0$ implies isotropic stretching, the shape at this\npoint is again correctly described by the spherical sector equation\n(\\ref{eq:PressureDefNeedle}). If the stretch is further decreased to\n$\\lambda_A<\\lambda_{A,c}$ both $\\sigma_s<0$ and $\\sigma_\\phi<0$ will become compressive and buckling or wrinkling instabilities of the droploon interface will \noccur \\cite{LandauLifshitz,Knoche2013}. 
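\n\nFor readers who wish to evaluate the spherical-sector approximation themselves, the solid analytical curves discussed here follow directly from Eq. (\\ref{eq:PressureDefNeedle}) combined with the Neo-Hookean stress of Table \\ref{tab:models} and the geometrical correction of Eq. (\\ref{eq:GeometricalCorrection}). The following minimal Python sketch (purely illustrative and independent of our Surface Evolver and shape-equation codes; the function names are arbitrary) performs this evaluation:\n\\begin{verbatim}\nimport numpy as np\n\ndef sigma_hat_neo_hooke(lam_A, alpha):\n    # Normalised Neo-Hookean surface stress (sphere-model table)\n    return 1.0 + (alpha \/ 3.0) * (1.0 - lam_A**(-3))\n\ndef geom_correction(lam_A, ratio):\n    # Correction factor f(Rn\/R0, lam_A) of Eq. (GeometricalCorrection),\n    # for droploons larger than a hemisphere\n    s = 1.0 + np.sqrt(1.0 - ratio**2)\n    return s \/ np.sqrt(2.0 * s - ratio**2 \/ lam_A)\n\ndef pressure_hat(lam_A, alpha, ratio):\n    # Eq. (PressureDefNeedle): normalised pressure = sigma_hat \/ lambda,\n    # with the radial stretch lambda = geom_correction * sqrt(lam_A)\n    return sigma_hat_neo_hooke(lam_A, alpha) \/ (\n        np.sqrt(lam_A) * geom_correction(lam_A, ratio))\n\nlam_A = np.linspace(0.7, 3.0, 200)\ncurve = pressure_hat(lam_A, alpha=0.5, ratio=0.5)  # Rn\/R0 = 0.5\n\\end{verbatim}\n\n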
\n\n\\begin{figure*}\n\\centering\n\\includegraphics[width=15cm,keepaspectratio]{FIGURES\/NeoHooke_Needles.png}\n\\caption{Normalised pressure as a function of area stretch $\\lambda_A$ of Neo-Hookean droploons on capillaries for three ratios of capillary and initial droploon radius ($R_n\/R_0 = 0.1, 0.5, 0.9$), and four characteristic elastocapillary numbers ($\\alpha = 0.1, 0.5,1,10$). Surface Evolver simulations are compared with direct numerical predictions (Section \\ref{sec:Modelling}) and with the analytical expression of Eq.\\ (\\ref{eq:NeoHookeTension}) using a simple geometrical correction to the perfect sphere theory.}\n\\label{fig:PressureDeformationSPhereNeedle}\n\\end{figure*}\n\n\\begin{figure*}\n \\centering\n \\includegraphics[width=15cm]{FIGURES\/comparison_anisotropy.pdf}\n \\caption{ Characterization of the stretch anisotropy and the\n stretch inhomogeneity. (a,b) The \n mean ratio of meridional and circumferential stretches\n $\\left\\langle \\frac{\\lambda_s}{\\lambda_{\\phi}} \\right\\rangle-1$\n along the contour characterizes stretch\n anisotropy and is shown \n for (a) $\\alpha \\leq 0.5$ and (b) $\\alpha > 0.5 $.\n The standard deviations of (c) meridional stretches\n $\\lambda_s$ and (d) circumferential stretches $\\lambda_{\\phi}$\n along the contour characterize the inhomogeneity of\n stretches. We show the critical stretches $\\lambda_{A,c}$ as red diamonds in (a-d).\n }\n \\label{fig:anisotropy_area_stretch}\n\\end{figure*}\n\nFor stretch values other than $\\lambda_A = 1$ or $\\lambda_{A,c}$, the droploon\nshape is non-spherical, because of the anisotropy\n($\\lambda_s \\neq \\lambda_\\phi$) introduced by the boundary condition at the\ncapillary. This can clearly be seen in Figs.\\\n\\ref{fig:anisotropy_area_stretch}a,b. For inflated shapes $\\lambda_A > 1$, we find\n$\\left\\langle \\frac{\\lambda_s}{\\lambda_{\\phi}} \\right\\rangle - 1 >0$\nindicating that stretching is biased towards meridional deformations resulting\nin slightly prolate shapes, whereas for deflated shapes $\\lambda_A < 1$,\n$\\left\\langle \\frac{\\lambda_s}{\\lambda_{\\phi}} \\right\\rangle-1<0$ and\ncircumferential deformations are preferred, resulting in slightly oblate\nshapes.\n The mean anisotropy increases upon inflation before\ndecreasing again at much higher stretches (see the insets in\nFig.\\ \\ref{fig:anisotropy_area_stretch}a,b for a wider deformation range), when the influence of the capillary\nbecomes again negligible.\n\nFurthermore, the standard deviation of the stretches along the contour\n$\\mathrm{std}_s(\\lambda_s )$ and $\\mathrm{std}_s(\\lambda_\\phi)$ shown in\nFigs.\\ \\ref{fig:anisotropy_area_stretch}c,d characterizes the inhomogeneity of\nthe stretches along a contour. A standard deviation of\n$\\mathrm{std}_s(\\lambda_s ) = \\mathrm{std}_s(\\lambda_s ) = 0$ corresponds to a\nspherical sector. The meridional and circumferential stretches of an inflated\ndroploon are isotropic at the apex with\n$\\lambda_s(s = 0) = \\lambda_\\phi(s = 0)\\propto \\lambda_A^{1\/2}$. At the\ncapillary, the attachment condition mandates $\\lambda_\\phi^\\mathrm{cap} = 1$\nwhile $\\lambda_s^\\mathrm{cap}$ increases with $\\lambda_A$, which introduces\nanisotropy and inhomogeneity into the problem with meridional stresses\naccumulating at the capillary. 
The spherical approximation will hold well for\nshapes where the stretches are approximately \\textit{homogeneous} over a large\narc length, corresponding to a small standard deviation of the stretches, and\n\\textit{isotropic}, corresponding to a mean stretch ratio along the contour\n$\\left\\langle \\frac{\\lambda_s}{\\lambda_{\\phi}} \\right\\rangle$ close to unity.\nThis is fulfilled at the two spherical configurations\n$\\lambda_A = 1$ and $\\lambda_{A,c}$. The spherical configuration \nwith $\\lambda_{A,c}$ appears to be highly sensitive, and small changes in\n$\\lambda_A$ lead to large deviations in the anisotropy (and inhomogeneity).\nIt is interesting to note that at small deformations around $\\lambda_A= 1$,\nthe anisotropy evolution depends only on the ratio $R_n \/ R_0$ and not on $\\alpha$.\n\n\nWe argue that the evolution of the anisotropy and inhomogeneity can be grasped by considering that the capillary acts similarly to a rigid inclusion\nin a stretched elastic membrane, as both enforce the absence of circumferential\nstretching ($\\lambda_\\phi = 1$)\nat their boundary. A rigid inclusion\nin a stretched elastic membrane is known to concentrate the meridional\nstresses, creating anisotropy and inhomogeneity, similar to the stress\nconcentration around a crack tip. For flat membranes, a rigid inclusion is a\nclassic problem that was studied for neo-Hookean membranes by Wong and Shield\n\\cite{Wong1969}. For the droploon we have a curved geometry, which gives rise to an even more pronounced increase of anisotropy around the capillary.\n\nWe see clear evidence of the increased anisotropy \naround the capillary in numerical solutions to\nthe full anisotropic shape equations from Section\n\\ref{sec:ModelKierfeld}, as shown in Fig.\\ \\ref{fig:decayDetails}. \nIn \nFig.\\ \\ref{fig:decayDetails}a,b,c, we show the stretch ratios $\\lambda_s$ and $\\lambda_\\phi$ and \nthe redistribution of arc length along the contour of inflated \ndroploons. These results show the rise of the meridional stretch \nclose to the capillary.\nFig.\\ \\ref{fig:decayDetails}d reveals that the\nresulting \nstretch anisotropy $\\lambda_s\/\\lambda_\\phi-1$ is localized at the\ncapillary and that it decays exponentially\nover a characteristic arc length\n$s_0^*$ away from the capillary.\nHere, $s_0$ is the arc length of the undeformed reference shape\n(the spherical droplet),\nwhich is related to the arc length $s$ of the deformed\nshape by the meridional stretch ratio,\n$\\mathrm{d}s \/ \\mathrm{d}s_0 = \\lambda_s$\n(see Section \\ref{sec:ModelKierfeld}).\nWe use the logarithmic derivative\nof $\\lambda_s\/\\lambda_\\phi-1$ to numerically determine the size $s_0^*$\nof the zone of increased anisotropy around the capillary. \n\n\nWe propose that the relative meridional extent\n of the anisotropy zone along\n the \\emph{deformed} droploon contour provides a non-dimensional number $Q$,\n which is suitable for\n characterizing the importance of elastic anisotropy effects\n in the regime $\\alpha>1$, where elastic energies dominate.\n We thus define $ Q \\equiv {s^*}\/{L} $, where\n $s^*$ is the meridional length of the anisotropy region\n measured in terms of the \\emph{deformed} arc length, while $L$ is the\n total arc length of the deformed droploon contour. \n For $\\alpha<1$,\n elastic energies are small compared to the droplet surface tension such\n that elastic anisotropy also becomes less important. 
\n\n\nIn order to evaluate the anisotropy parameter $Q$, we \nuse the general relation $\\mathrm{d}s \/ \\mathrm{d}s_0 = \\lambda_s$\nbetween deformed and undeformed arc length at the capillary\nand $L \\sim \\pi R_0 \\lambda_A^{1\/2}$ for the total arc length $L$\nin the limit $R_n \\ll R_0$ to obtain\n\\begin{equation}\n Q \\equiv \\frac{s^*}{L} \\sim \\frac{s_0^* \\lambda_s^\\mathrm{cap}}{L}\n \\sim \\frac{s_0^* \\lambda_s^\\mathrm{cap}}{\\pi R_0} \\lambda_A^{-1\/2}\n \\label{eqn:Q}\n\\end{equation}\nwhere $\\lambda_s^\\mathrm{cap}$ is the meridional stretch at the\ncapillary. To make further progress, we derive relations\nfor the size $s_0^*$ of the anisotropy zone and the\nstretch ratio $\\lambda_s^\\mathrm{cap}$ at the capillary\n from numerical results shown in Fig.\\ \\ref{fig:Q}.\n\nBecause the \n maximal stretch anisotropy is found at the capillary and \n $\\lambda_{\\phi}=1$ at the capillary, the meridional stretch\n at the capillary actually equals the maximal stretch anisotropy, \n$\\max{\\left(\\frac{\\lambda_s} {\\lambda_\\phi}\\right)} =\n\\lambda_s^\\mathrm{cap}$. While in the case of flat membranes the maximal\naniosotropy $\\lambda_s^\\mathrm{cap} \\propto \\lambda_s(s=\\infty)$ is\nproportional to the radial stretch at infinity \\cite{Wong1969},\nour numerical results for curved droploons indicate \nthat $\\lambda_s^\\mathrm{cap}$ first increases upon inflation $\\lambda_A>1$ but\nsaturates for highly inflated droploons with\narea stretches $\\lambda_A$ exceeding\na fairly well-defined value $\\lambda_A^\\dag$,\nas shown in Fig.\\ \\ref{fig:Q}c for the\ncase of $\\alpha = 10$. Further numerical analysis of the\nsaturation value as performed in Fig.\\ \\ref{fig:Q}b\nallows us to quantify the saturation value as \n\\begin{equation}\n \\max{\\left(\\frac{\\lambda_s} {\\lambda_\\phi}\\right)}\n \\approx \\lambda_s^\\mathrm{cap} \\equiv\n {\\rm const} \\left( \\frac{R_n}{R_0}\\right)^{-1\/3} \n \\label{eqn:maximal_anisotropy}\n \\end{equation}\n with ${\\rm const} \\approx 1.47$ in the regime $\\alpha >1$. This saturation\n value is solely determined by the geometrical parameter $R_n \/ R_0$ of the\n undeformed droploon, which demonstrates that saturation is induced by\n droplet curvature.\n We also find $\\lambda_A^\\dag \\sim (\\lambda_s^\\mathrm{cap})^{3 \/ 2}$\n for the\n area stretch, where saturation of the maximal anisotropy sets in. \n The maximal anisotropy given in Eq.\\ (\\ref{eqn:maximal_anisotropy})\n diverges in the limit $R_n \/ R_0 \\approx 0$, which\nseems counter-intuitive at first, because the spherical approximation works\nbest for exactly this limit. This issue will be resolved below. \n\n\n \n\\begin{figure*}\n \\centering\n \\includegraphics[width=15cm]{FIGURES\/decay_details.pdf}\n \\caption{\n Stretch anisotropy of droploon shapes with $\\alpha = 10$ for\n three values of $R_n\/R_0$ for each of three area stretches\n $\\lambda_A \\gg \\lambda_A^\\dag$, $\\lambda_A > \\lambda_A^\\dag$, and\n $\\lambda_A < \\lambda_A^\\dag$ (see also Fig.\\ \\ref{fig:Q}\n for a definition of the characteristic \n area stretch $\\lambda_A^\\dag$). \n (a,b) Stretch ratios $\\lambda_s$ and\n $\\lambda_\\phi$ as a function of the undeformed arc length $s_0\/L_0$\n along the contour. While $\\lambda_\\phi$ is approaching the undeformed\n value of 1 at the capillary ($s_0\/L_0=1$), $\\lambda_s$ rises at the\n capillary. (c) shows that the deformed arc length $s$ considerably\n deviates from the the undeformed arc length $s_0$ along the contour. 
(d)\n The resulting stretch anisotropy $\\lambda_s \/ \\lambda_\\phi - 1$ is\n localized at the capillary. The size of the anisotropy zone around the\n capillary can be characterized by an exponential decay arc\n length $s_0^*$,\n which is calculated from the logarithmic derivative of\n $\\lambda_s \/ \\lambda_\\phi - 1$ at the capillary for the solid lines and\n shown as colored dots in all plots (a-d). We also show the maximal stretch at the capillary from Eq.\\ (\\ref{eqn:maximal_anisotropy}) as red diamonds in (a) and (d). }\n \\label{fig:decayDetails}\n\\end{figure*}\n\n\nLet us quantify the\nsize $s_0^*$ of the anisotropy zone around the capillary. From Fig.\\ \\ref{fig:Q}a, we find a conservative bound\n\\begin{equation}\n s_0^* \\leq \\frac{R_n}{2}.\n \\label{eqn:decaylength}\n \\end{equation}\nThis relation reveals that the size of the\nstretch anisotropy zone\n is set by the geometry parameter $R_n\/R_0$ of the reference\n state rather than the elastocapillary number $\\alpha$.\n\n \n Using Eq.\\ (\\ref{eqn:decaylength}) for $s_0^*$\n and the saturation value given in Eq.\\ (\\ref{eqn:maximal_anisotropy})\n for $\\lambda_s^\\mathrm{cap}$ in (\\ref{eqn:Q}), we obtain \n\\begin{equation}\n Q = \\frac{\\rm const}{2\\pi} \\left(\\frac{R_n}{R_0}\\right)^{2 \/ 3}\n \\frac{1}{\\lambda_A^{1 \/ 2}} \n \\label{eqn:QLargeLambda}\n \\end{equation}\n for the anisotropy parameter $Q$ for highly inflated droploons\n $\\lambda_A>\\lambda_A^\\dag$.\n This parameter remains small for $R_n \\ll R_0$ indicating that\n we can neglect anisotropy effects in this limit.\n\n At smaller deformations $1 < \\lambda_A < \\lambda_A^\\dag$, \n where\n saturation of the capillary anisotropy has not yet set in, we numerically find\n that the maximal stretch anisotropy scales\n with $\\log(\\lambda_A)$ (see Fig.\\ \\ref{fig:Q}c), giving\n\\begin{equation}\n Q = \\frac{R_n}{R_0} \\frac{\\lambda_s^\\mathrm{cap} - 1}\n {3 \\pi \\log(\\rm \\lambda_s^\\mathrm{cap})}\n \\frac{\\log(\\lambda_A)}{\\lambda_A^{1 \/ 2}},\n \\label{eqn:anisotropyQuantifier}\n\\end{equation}\nwhere we again use the saturation value $\\lambda_s^\\mathrm{cap}$ from Eq.\\ (\\ref{eqn:maximal_anisotropy}).\n\nWe obtain a full contour plot of the anisotropy\nparameter $Q$ in Fig.\\ \\ref{fig:Q}d by\njoining the results in the two regimes\n( $\\lambda_A > \\lambda_A^\\dag$ and $\\lambda_A < \\lambda_A^\\dag$) with\na smooth interpolating function. This plot confirms that $Q$\nis small ($Q \\ll 1$) for shapes where the spherical approximation works\nbest. In particular, we find that\nwe can neglect anisotropy effects ($Q\\ll 1$)\nin the limit $R_n \/ R_0 \\approx 0$,\nresolving the counter-intuitive behaviour of the maximal anisotropy. We\nemphasize the fact that Eq.\\ \\eqref{eqn:anisotropyQuantifier} only\ndepends on\n$R_n \/ R_0$ and $\\lambda_A$ and \\emph{not} on $\\alpha$, as long as\n$\\alpha > 1$. 
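\nA compact numerical sketch of this two-regime estimate is given below; it is an illustration under the assumptions stated above (constant prefactor ${\rm const} \approx 1.47$, crossover at $\lambda_A^\dag \sim (\lambda_s^\mathrm{cap})^{3\/2}$, inflation only), and it simply switches between Eqs.\ (\ref{eqn:anisotropyQuantifier}) and (\ref{eqn:QLargeLambda}) at the crossover instead of using the smooth interpolation employed for Fig.\ \ref{fig:Q}d. The function name is ours, for illustration only.\n\begin{verbatim}\nimport numpy as np\n\nCONST = 1.47  # prefactor of the saturation value, Eq. (maximal_anisotropy)\n\ndef Q_anisotropy(Rn_over_R0, lam_A):\n    # valid for inflation, lam_A >= 1, and alpha > 1\n    # saturated meridional stretch at the capillary\n    lam_s_cap = CONST * Rn_over_R0 ** (-1.0 \/ 3.0)\n    # area stretch at which saturation sets in\n    lam_A_dag = lam_s_cap ** 1.5\n    if lam_A >= lam_A_dag:\n        # highly inflated droploons, Eq. (QLargeLambda)\n        return CONST \/ (2.0 * np.pi) * Rn_over_R0 ** (2.0 \/ 3.0) \/ np.sqrt(lam_A)\n    # moderate inflation, Eq. (anisotropyQuantifier)\n    return (Rn_over_R0 * (lam_s_cap - 1.0) \/ (3.0 * np.pi * np.log(lam_s_cap))\n            * np.log(lam_A) \/ np.sqrt(lam_A))\n\end{verbatim}\n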
This indicates that stretch anisotropy is\nmainly governed by geometry rather than by elastic energy\ncontributions.\nAs already pointed out above, elastic contributions and, thus,\nalso elastic anisotropy effects become increasingly irrelevant\nfor $\\alpha < 1$, where surface tension dominates and the\nshape resembles a spherical liquid droplet.\nThe regions $\\lambda_A > \\lambda_A^\\dag$ and $\\lambda_A < \\lambda_A^\\dag$\ndiffer markedly in their functional dependence on \n $\\lambda_A$.\nThis results in a maximum of the parameter $Q$ for area stretches\n$\\lambda_A\\sim \\lambda_A^\\dag\\propto (R_n \/ R_0)^{- 1 \/ 2}$\nat a fixed value of $R_n\/R_0$.\nThis, in turn, indicates that stretch anisotropy is most relevant\nfor these intermediate area stretches. \n\n \\begin{figure*}\n \\centering\n \\includegraphics[width=15cm]{FIGURES\/QPlot.pdf}\n \\caption{Analysis of the anisotropy zone and the anisotropy\n parameter $Q$ from\n numerical solutions of the anisotropic shape\n equations. (a) The size of the anisotropy zone $s_0^*$ is roughly constant\n giving rise to the bound (\\ref{eqn:decaylength}).\n (b) The saturation value is mainly determined by the\n parameter ($R_n\/R_0$), see Eq.\\\n (\\ref{eqn:maximal_anisotropy}).\n (c) As a function of the area stretch $\\lambda_A$, the maximum\n anisotropy saturates at large deformations beyond a value\n $\\lambda_A^\\dag$ (results for $\\alpha =10$ shown as colored diamonds). \n (d) Contour plot of the non-dimensional anisotropy parameter $Q$\n according to Eq.\\ (\\ref{eqn:Q}).\n Stretch anisotropy effects are negligible for $Q\\ll 1$.}\n \\label{fig:Q}\n\\end{figure*}\n\nThe possibility of approximating the droploon shape by a spherical sector over a wide range of parameters is an important piece of information for experimentalists since it means that the analytical expression of Eq. (\\ref{eq:PressureDefNeedle}) can be used to quantify reliably the elastocapillary properties of the droploon interfaces over a reasonably wide range of elastocapillary numbers. We also remind the reader that from the expressions it evident that within our geometrical approximations, the critical area stretch at which the pressure changes sign is independent of the size of the capillary. \nThe combined numerical analysis provides another important piece of information: for reasonably small capillary sizes ($R_n\/R_0<0.5$), the pressure-deformation relation is actually well described by the simple sphere equations without capillary (Section \\ref{sec:TheorySpheres}), making the quantitative interpretation of experimental data fairly straightforward. In order to quantify the deviation from the simple sphere theory, we plot in Fig.\\ref{fig:ErrorDroploonNeedle} the heatmap of the normalised deviation of the numerically predicted pressure $\\Delta\\hat{P}$ with capillary (using Surface Evolver) from that predicted by the sphere theory $\\Delta \\hat{P}_S$ for a given area stretch, i.e. we plot\n\\begin{equation}\n \\left| \\frac{\\Delta\\hat{P}_S-\\Delta\\hat{P}}{\\Delta\\hat{P}}\\right| = \\left| 1 - \\frac{\\Delta\\hat{P}_S}{\\Delta\\hat{P}}\\right|.\n \\label{eq:ErrorHeatMap}\n\\end{equation}\nMaking the spherical sector hypothesis of Eq. (\\ref{eq:PressureDefNeedle}), this expression becomes simply \n\\begin{equation}\n \\left| 1 - \\frac{\\Delta\\hat{P}_S}{\\Delta\\hat{P}} \\right| = \\left| 1 - \\mathpzc{f} \\right|,\n \\label{eq:ErrorHeatMapSpherical}\n\\end{equation}\nwhich is plotted as lines of equal relative error. 
These isolines are identical in all four graphs of Fig.\\ref{fig:ErrorDroploonNeedle} since they are independent of $\\alpha$ (see Eq.\\eqref{eq:GeometricalCorrection}). \n\nDeviations of the heatmaps in Fig. \\ref{fig:ErrorDroploonNeedle} from the geometrical prediction have two origins: imperfect relaxation in the simulations and the influence of shear contributions of the solid skin which are neglected in the geometrical approximations. The first is at the origin of most of the deviations for $\\alpha < 10$, while the latter starts to be clearly visible for $\\alpha = 10$. Nevertheless, this latter difference remains small ($<0.5\\%$), confirming again that shear contributions play a minor role in most of the investigated parameter range in accordance with the non-dimensional $Q$-parameter plotted in Fig. \\ref{fig:Q}d. Our geometrically-corrected pressure-deformation relation of Eq. (\\ref{eq:PressureDefNeedle}), although not accounting for stretch anisotropy, is therefore a very good approximation for pendant drops with Neo-Hookean elastic interfaces within the parameter range investigated here. \n\nLet us now turn to the analysis of the heatmaps themselves. They indicate that in the small deformation limit ($\\lambda_A \\approx 1$), the error made in using the sphere approximation remains smaller than $1\\%$ at any radii ratio and elastocapillary number. For larger deformations in the inflation regime ($\\lambda_A>1$), the approximation error is still smaller than $1\\%$ for small capillary radii ($R_n\/R_0<0.2$). Similar behaviour is observed in the deflation regime. However, the prediction systematically fails when approaching the critical stretch $\\lambda_{A,c}$. This is because wrinkling instabilities in the skin may become relevant in this regime. This phenomenon can be captured neither within the sphere approximation, nor by our Surface Evolver simulations where the skin bending energy - crucial for wrinkling - is not taken into account. Skin bending can be implemented in Surface Evolver, but is beyond the scope of this paper. In the heatmaps we have therefore colored these zones in gray. \n\nAt small $\\alpha$ and large $R_n\/R_0$ an additional zone of large approximation error ($>10\\%$) appears for pressures $\\Delta\\hat{P}\\approx 1$. This deviation arises from the increasing difference between sphere and truncated sphere geometry: As the truncated sphere shrinks, it reaches the shape of a half-sphere of radius $R_n$. Any further decrease in drop volume causes an actual increase in curvature radius which is not captured by the sphere theory, hence the failure of the analytical prediction beyond this point in the parameters space.\n\nDespite those considerations for large capillary radii, the heatmaps of Fig.\\ref{fig:ErrorDroploonNeedle} provide very good news for the experimentalist aiming to quantify the elastic properties of droploon surfaces: when working with reasonable capillary sizes ($R_n\/R_0<0.5$), reasonably small deformations (<0.1) and reasonable elastocapillary numbers ($\\alpha < 10$), experimental data can be confidently fitted by the simple sphere theory (without capillary) since experimental errors are likely to outweigh the small error introduced by the sphere assumption. 
\n\n\n\n\n\\begin{figure*}\n\\centering\n \\includegraphics[width=\\linewidth,keepaspectratio]{FIGURES\/DeltaQuad.png}\n \\caption{Relative error of pressure difference between Surface Evovler and neo-Hookean perfect sphere, at the same area stretch $\\lambda_A$ for four elastocapillary numbers ($\\alpha=0.1$,$0.5$,$1$,$10$). The grey boxes delimit the stretch values below critical stretch value $\\lambda_{A,c}$. Full lines are lines of equal relative error between the neo-Hookean perfect sphere and the neo-Hookean truncated sphere, given by Equ. \\eqref{eq:ErrorHeatMapSpherical}. }\n \\label{fig:ErrorDroploonNeedle}\n\\end{figure*}\n\n\n\n\\clearpage\n\n\\newpage\n\n\\section{Conclusion and outlook}\n\nTreating the seemingly simple problem of a drop covered by an elastic skin attached to a circular capillary in the absence of gravity, we have been able to show that Surface Evolver simulations are a powerful tool to study systems in which surface tension and nonlinear (Neo-Hookean) elasticity co-exist within the same interface. We have chosen on purpose such a simple geometry, in order to avail of independent theoretical and numerical predictions relying on cylindrical symmetry (Section \\ref{sec:Theory} and Section \\ref{sec:ModelKierfeld}) which can be compared to the Surface Evolver solutions. In all cases, they showed excellent agreement. Surface Evolver will therefore be useful to tackle more complex geometries, such as droploons on complex capillary shapes, interacting droploons or complete emulsions composed of droploons, where theory or alternative numerical predictions requiring symmetry will not be available. In contrast to other finite element tools, the energy minimisation approach of Surface Evolver, widely used in the communities studying foams and emulsions, provides access to a wide range of problems in which interfaces of complex geometry play a key role. In the Appendix we provide a detailed description of the implementation of nonlinear elasticity in Surface Evolver simulations to facilitate future developments, and we also provide our Surface Evolver code for download in the Supplementary Materials. Taking into account bending stiffness in the simulations would be an interesting perspective for future work.\n\nFor simplicity, we have been talking about drops\/droploons all along. However, all presented concepts are equally valid for bubbles\/bubbloons and hence foams. Our analysis shows how complex the interplay of capillary and elastic forces at an interface is, even for the relatively simple geometry of an initially spherical droploon inflated on a circular capillary. Due to the intricate coupling of changes in interfacial curvature and area, accurate theoretical models and simulations are required to extract interfacial properties quantitatively from measured pressure-deformation relations.\n\nThe problem of the pressure deformation of a\ndroploon covered by an elastic skin and attached to a capillary in the absence of gravity is a seemingly simple problem. From the point of view of elasticity theory it is challenging, however, because the elastic skin represents a closed curved shell and the capillary\na rigid circular inclusion within this shell.\nHoles or rigid inclusions in elastic membranes are known to produce stress anisotropies and stress concentration upon stretching. 
Here, the droploon skin is stretched by inflation, contains a rigid inclusion and features the additional complication of a background curvature because the initial relaxed shape is spherical\n(neglecting gravity). We obtained theoretical predictions regarding the influence of the stress anisotropy induced by the capillary\nonto the pressure-deformation relation from Surface Evolver simulations and a careful numerical analysis\nof stresses and strains in the shape equation approach. \nA full analytical solution remains an open problem for future research.\n\n\nIn the parameter range investigated by our simulations, we have been able to show that for elastocapillary numbers of $\\alpha < 10$ the influence of the capillary on the pressure-deformation relation is essentially of geometrical nature, i.e. the capillary modifies in the first place the relationship between the area stretch (related to interfacial stress) and the interface curvature. In this case, the droploon shapes can be represented approximately by spherical sectors and the pressure-deformation relation is given by Eq. (\\ref{eq:PressureDefNeedle}). For interfaces with Gibbs elasticity, this expression is exact, while for (Neo-)Hookean interfaces it remains an (excellent) approximation. Deviations from this simple geometrical approximation are starting to be significant for the largest capillary sizes ($R_n\/R_0=0.9$) and elastocapillary number ($\\alpha = 10$) simulated by us, suggesting that the anisotropic contribution to the interfacial stress and deformation near the capillary is starting to play a role. \n\nTo show that this anisotropy is indeed strongly localised at the capillary, we calculate, as a function of position on the interface, the deviation of the ratio of meriodional and circumferential stretches from one. This quantity decays nearly exponentially with the distance from the capillary, over a characteristic length $s^*$. The extent of this anisotropically strained zone can be compared to the total droploon size by defining the non-dimensional ratio $Q=s^*\/L$, where $L$ is the total arc length of the droploon. For droploon inflation and for $\\alpha > 1$, we find \n\\begin{equation}\n Q = \\frac{R_n}{R_0} \\frac{\\rm \\lambda_s^\\mathrm{cap} - 1}{3 \\pi \\log(\\rm \\lambda_s^\\mathrm{cap})} \\frac{\\log(\\lambda_A)}{\\lambda_A^{1 \/ 2}},\n \\label{eqn:anisotropyQuantifier2}\n\\end{equation}\nwith \n\\begin{equation}\n \\lambda_s^\\mathrm{cap} \\equiv\n {\\rm const} \\left( \\frac{R_n}{R_0}\\right)^{- 1 \/ 3}, \n \\label{eqn:maximal_anisotropy2}\n \\end{equation}\nbeing the \"saturation\" meridional stretch reached at the capillary for large deformations and const = 1.47. \nFor large deformations, we therefore obtain\n\\begin{equation}\n Q = \\frac{\\rm const}{2\\pi} \\left(\\frac{R_n}{R_0}\\right)^{2\/3} \\frac{1}{\\lambda_A^{1 \/ 2}}. \n \\label{eqn:QLargeLambda2}\n\\end{equation}\nThese relations and their analysis provided in Section \\ref{sec:ResultsDropsNeedles} and Fig. \\ref{fig:Q} put in evidence that the extent of the anisotropic zone (and hence its influence on the pressure-deformation relation), is mainly controlled by the reference geometry of the droploon ($R_n\/R_0$) and by the stretch $\\lambda_A$. We therefore show for the first time that the extent of this zone is essentially governed by geometrical features while the influence of the elastocapillary number $\\alpha$ remains negligible. 
These are very good news for experimentalists who can rely on the spherical droploon equations given in Table \\ref{tab:models} combined with the geometrical correction of Eq. (\\ref{eq:GeometricalCorrection}) to fit their data for a wide range of $\\alpha$ as long as $R_n\/R_0$ and $\\lambda_A$ remain reasonable. The heatmaps and relations provided in Section \\ref{sec:ResultsDropsNeedles} will help to estimate the appropriate parameter ranges.\n\n\nMore importantly for the analysis of experimental data, we have also shown that when working with sufficiently small capillaries ($R_n\/R_0<0.5$) and at small deformations ($\\sim 5\\%$ area), the simple analytical pressure-deformation relations of spheres \\textit{without} capillaries (Table \\ref{tab:models}) provide excellent approximations to the pressure-deformation relations of droploons on capillaries. The much simpler analytical relations of Table \\ref{tab:models} can therefore be used to extract quantitative interfacial properties from fits to experimental data. Experimentalists are referred to Fig. \\ref{fig:ErrorDroploonNeedle} to estimate the error they make using this approximation. \n\nIn Section \\ref{sec:TheorySpheres} we showed that for small deformations, the Gibbs, Neo-Hookean and Hookean models for liquid- and solid-like interfaces all predict the same kind of pressure-deformation relation. In view of the analysis presented above, this may explain why a lot of experimental data for solid-like interfaces seems to have been successfully fitted in the past by the Gibbs model. Indeed, our analysis shows that at small deformations, pendant drop experiments with nearly spherical droploons do not allow to discriminate between liquid-like and solid-like interfaces. Alternative experiments, such as interfacial shear rheology measurements or the Capillary Mensicus Dynanometry\\cite{Danov_CollIntSci_2015} are required to obtain this information. \n\nWe have chosen here a minimal model of a droploon interface where the elastic extra stress of a Neo-Hookean solid material is simply added to a constant interfacial tension. Real interfaces are not as simple \\cite{Edwards1991,Rehage_RheoAct_2002,Sagis_RevModPhys_2011,Erni2011,Fuller_SoftMatter_2011,Fuller_2012,Sagis_COCIS_2014,Verwijlen_ACIS_2014,Pepicelli_SocRheo_2019}. Surface tension and elasticity tend to be coupled in a complex manner \\cite{Verwijlen_ACIS_2014}, and the description of the response of the elastic membrane is likely to require taking into account an anisotropic, viscous and plastic response as well as non-linearities which are more complex than those of the Neo-Hookean model. Nevertheless, our simple approach already gives important insight into some fundamental properties of pressure-deformation relations of pendant droploons.\n\nConsidering that pendant drop experiments, even in the simplest configuration without gravity, overlay a geometric non-linearity with non-linearities in the material response of a solid-like interfacial material, it remains questionable if this is the appropriate experimental choice to discriminate between appropriate models to describe solid-like interfaces. Differences between models are likely to show up only at larger deformations which makes the interpretation extremely difficult. However, due to their simplicity, pendant drop experiments remain an excellent choice for a phenomenological characterisation of the dilational visco-elastic properties at small deformation. 
\n\n\nLast but not least, all our investigations have been performed without gravity, while pendant drops (and bubbles) are prone to gravity-driven deformations rendering them non-spherical. We recall that for a nearly spherical drop the Bond number $ Bo = \\Delta\\rho g R_0^2\/\\gamma_0$ indicates the ratio of the hydrostatic pressure difference between the top and the bottom of the bubble $\\Delta\\rho g 2R_0$ and the Laplace pressure which is due to surface tension $2 \\gamma_0 \/R_0$. The impact of gravity on bubble shape is negligible if $Bo \\ll 1$. If density-matched systems cannot be used, very small bubbles may therefore be a solution \\cite{Kotula_JRheo_2015} to reduce the impact of gravity. This also has the advantage to increase the interface curvature, and hence the pressure and therefore experimental sensitivity.\n\nIf gravity-driven deformation cannot be completely avoided, the following two aspects need to be taken into account. The first influence of gravity is on the shape of the droploon in the reference state. Gravity may create a concave neck close to the capillary, which creates additional stress localisation. Using numerical investigations of the droplet shape bifurcation diagram (yellow line of bifurcations in Figs.\\ 4 and 5 of Ref.\\ \\cite{Kratz2020}), we could show in previous work that only for\n\\begin{equation}\n\\frac{R_n}{R_0} < 2.6 \\mathrm{Bo}^{1.64},\n\\end{equation}\nthe drop remains fully convex and neck formation can be neglected. \n\nThe second aspect concerns deformation with elastic skins, where the increasing droploon size upon inflation or the decreasing effective surface stresses upon deflation may make the system increasingly sensitive to gravity. In this case one may want to introduce an elastic Bond number which contains the deformation-dependent elastic contribution to the surface stress based on the Hookean expression (\\ref{eq:HookeTension})\n\n\\begin{equation}\n Bo_{el} = \\frac{\\Delta \\rho g}{\\gamma_0(1+2\\alpha(\\lambda-1))}\\lambda^2 R_0^2.\n\\end{equation}\n \n\nFor sufficiently small elastic Bond numbers, gravity can then be neglected. Since gravity can be implemented easily in Surface Evolver, future investigations may explore the influence of gravity more quantitatively. \n\n\n\\section*{Acknowledgements}\nThis work has been conducted in the framework of an ERC Consolidator Grant (agreement 819511 \u2013 METAFOAM). It has also profited from an IdEx Unistra \"Attractivity grant\" (Chaire W. Drenckhan) and has, as such, benefited from funding from the state, managed by the French National Research Agency as part of the 'Investments for the future' program. The authors would like to thank Fran\u00e7ois Schosseler, Leandro Jacomine, Stephane Pivard and Aur\u00e9lier Hourlier-Fargette for regular in-depth discussions concerning the experimental characterisation of pressure-deformation relations of droploons, which has stimulated greatly this numerical investigation. \n\n\n\n\\section{Appendix A : numerical determination of the interfacial deformation}\n\\label{AppendixA}\n\n\nWe use the Surface Evolver software to determine the bubble or droplet shapes for which interfacial energy is minimal, respecting volume constraints and boundary conditions. The case where an elastic skin is attached to the interface raises the question how local strain should be deduced from the representation of the interface as an assembly of triangular facets. Section \\ref{sec:convected} explains how {\\it convected } coordinates are used for this. 
Section \\ref{sec:energy} provides details about the calculation of the elastic energy density, based on the neo Hooke constitutive model. \n\\subsection{Strain represented using convected coordinates \\label{sec:convected}}\n\\begin{figure}\n\\centering\n \\includegraphics[width=\\linewidth,keepaspectratio]{FIGURES\/Figure9.JPG}\n \\caption{A triangular finite element of an interface is represented in the reference configuration and in the current, deformed configuration. The figure illustrates the notations used in the text: $\\vec{\\bf x}$ for vectors pointing to vertices and $\\vec{\\bf s}$ for finite element edge vectors. Capital letters are used for the reference configuration and small letters for the current configuration. For the sake of simplicity, only one of the three vectors pointing to vertices is shown in each configuration. The contravariant components of both $\\vec{\\bf X}$ and $\\vec{\\bf x}$ are indicated on the same set of Cartesian axes.}\n \\label{fig:convected}\n\\end{figure}\nThe shape of the triangular facets used in the Surface Evolver as finite elements is fully defined if two edge vectors are given. Upon deformation of the investigated bubble, the facet is generally displaced and the edge vectors are changed, spanning a facet of modified shape. In the spirit of a linear discretization, an affine displacement field is assumed within each facet. One could describe the facet deformation using a coordinate system whose origin is attached to a given vertex of the facet, and express how the Cartesian coordinates of each point on the facet evolve. Alternatively, one may interpret the edge vectors as basis vectors which evolve upon a deformation and which are therefore in general non orthogonal. In this latter approach, the coordinates of each point of the interface are fixed and the deformation is represented in terms of a change of the basis vectors. This \"convected coordinate\" method goes back to pioneering work by Hencky \\cite{Hencky1925}.\nIn the Surface Evolver this method is convenient because the relevant edge vectors can easily be derived from the three facet vertex positions in the current configuration, denoted $\\vec{x}_1,\\vec{x}_2,\\vec{x}_3$ and in the reference configuration $\\vec{X}_1,\\vec{X}_2,\\vec{X}_3$,\n\\begin{equation}\n\\label{eq:defS}\n\\begin{split}\n \\vec{S}_1&=\\vec{X}_3-\\vec{X}_1 , \\quad \\vec{s}_1=\\vec{x}_3-\\vec{x}_1,\\\\\n \\vec{S}_2&=\\vec{X}_2-\\vec{X}_1 , \\quad \\vec{s}_2=\\vec{x}_2-\\vec{x}_1.\n\\end{split}\n\\end{equation}\nThe edge vectors are represented using a cartesian orthonormal basis $(\\vec{e}_x,\\vec{e}_y)$ such that $\\vec{S}_i=S_{ix}\\vec{e}_x+S_{iy}\\vec{e}_y$ and $\\vec{s}_i=s_{ix}\\vec{e}_x+s_{iy}\\vec{e}_y$. \\par \nAs mentioned, convected coordinates remain constant upon a deformation; this introduces simplicity. But this choice also introduces complexity since the expression of the scalar product is no longer given by contraction $\\vec{a}\\cdot\\vec{b}=a_ib_i$, additional terms appear since the basis vectors are generally not orthogonal. To avoid such complexity, one represents vectors and tensors that one wishes to associate in products using two different bases: a \"covariant\" and contravariant one. Covariant basis vectors follow the deformation of the edge facets. They are denoted $\\vec{G}_1,\\vec{G}_2$ in the reference state and $\\vec{g}_1,\\vec{g}_2$ in the current state. 
Covariant quantities are identified by lower indices, \n\begin{equation}\n\begin{split}\n \vec{G}_1&= \vec{S}_1,\quad \vec{g}_1= \vec{s}_1,\\\n \vec{G}_2&=\vec{S}_2,\quad \vec{g}_2= \vec{s}_2.\n\end{split}\n\end{equation}\nContravariant basis vectors $(\vec{G}^1,\vec{G}^2)$ or $(\vec{g}^1,\vec{g}^2)$ are identified by upper indices, and they are defined through the following orthogonality relations:\n\begin{equation}\n \vec{G}^i\cdot\vec{G}_j=\delta^i_j, \quad \vec{g}^i\cdot\vec{g}_j=\delta^i_j,\n \label{eq:orthogonality}\n\end{equation}\nwhere $\delta^i_j =1$ if $i=j$ and $\delta^i_j =0$ otherwise. The Cartesian coordinate system is a special case within this general framework where covariant and contravariant bases coincide. Using co- and contravariant bases simplifies the expressions of the scalar products of vectors and tensors in the case of non-orthogonal basis vectors.\n\nAn arbitrary vector $d\vec{X}$ representing a small line element on the surface reads in terms of the covariant basis\n\begin{equation}\n d\vec{X} = d\Theta^j \vec{G}_j.\n\end{equation} \nThe $d\Theta^j$ are the convected contravariant coordinates. We use the Einstein summation convention\n and sum over repeated indices.\n \nDescriptions of strain in large deformation continuum mechanics are commonly based on the deformation gradient tensor $\mathbf{F}$, represented by a matrix that transforms a line element $d\vec{X}$ in the reference state into $d\vec{x}$ in the current state,\n\begin{equation}\n d\vec{x} = \mathbf{F}\, d\vec{X}. \n\end{equation}\nIn terms of convected coordinates, $\mathbf{F}$ may be written\n\begin{equation}\n\mathbf{F}=\vec{g}_j\otimes \vec{G}^j.\n\label{Equation:RightCauchyGreen}\n\end{equation}\nThe symbol $\otimes$ indicates an operation assembling two vectors into a tensor, called the tensor product. \nIndeed, in view of Eq.\ (\ref{eq:orthogonality}) we have\n\begin{equation}\n \mathbf{F}\, d \vec{X} = (\vec{g}_i\otimes \vec{G}^i)\,d\Theta^j \vec{G}_j=d\Theta^i \vec{g}_i = d \vec{x}.\n\end{equation}\nThe deformation gradient tensor contains information about rotations that is irrelevant for the interfacial energy. The interfacial energy in the Surface Evolver is computed using the 2D right Cauchy-Green strain tensor $\mathbf{C}$, which is invariant to rotations \cite{Mal1991}, contrary to $\mathbf{F}$:\n\begin{equation}\n\mathbf{C}=\mathbf{F}^T \mathbf{F}=(\vec{G}^i\otimes \vec{g}_i)(\vec{g}_j\otimes \vec{G}^j)=g_{ij}\,\vec{G}^i\otimes \vec{G}^j.\n\label{Equation:RightCauchyGreenAppendix}\n\end{equation}\nHere $g_{ij}$ is the metric tensor in the current configuration, defined as follows:\n\begin{equation}\n g_{ij} =\vec{g}_i \cdot \vec{g}_j.\n \label{eq:metric}\n\end{equation}\n\n\n\nTo determine the elastic energy of a facet in a simulation, $\mathbf{C}$ needs to be determined numerically. 
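\nAs an illustration of this numerical step, the following minimal sketch (an example with names of our choosing, not the Surface Evolver implementation itself) assembles $\mathbf{C}$ for a single facet directly from the six vertex positions, using the edge vectors of Eq.\ (\ref{eq:defS}) and the Gram matrices introduced below; the closed form $\mathbf{C}=\mathbf{s}\,\mathbf{S}^{-1}$ used in the last line is derived in the remainder of this section.\n\begin{verbatim}\nimport numpy as np\n\ndef right_cauchy_green_2d(X1, X2, X3, x1, x2, x3):\n    # edge vectors in the reference (S_i) and current (s_i) configurations\n    S1, S2 = np.asarray(X3) - np.asarray(X1), np.asarray(X2) - np.asarray(X1)\n    s1, s2 = np.asarray(x3) - np.asarray(x1), np.asarray(x2) - np.asarray(x1)\n    # Gram matrices of the edge vectors (only dot products are needed,\n    # so the vertex coordinates may be given in 3D)\n    S = np.array([[S1 @ S1, S1 @ S2], [S2 @ S1, S2 @ S2]])\n    s = np.array([[s1 @ s1, s1 @ s2], [s2 @ s1, s2 @ s2]])\n    # 2D right Cauchy-Green tensor, C = s S^{-1}\n    return s @ np.linalg.inv(S)\n\end{verbatim}\n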
The components of the contravariant basis vectors in the reference state $\vec{G}^i$ are deduced from the covariant ones using the orthogonality properties (\ref{eq:orthogonality}): \n\n\begin{equation}\n\begin{aligned}\n\vec{G}^1\cdot\vec{G}_1=1=S^{1x}S_{1x}+S^{1y}S_{1y} &\rightarrow &S^{1x}=\frac{1-S^{1y}S_{1y}}{S_{1x}}\\\n\vec{G}^2\cdot\vec{G}_1 = 0 =S^{2x}S_{1x}+S^{2y}S_{1y} & \rightarrow & S^{2x}=-S^{2y}\frac{S_{1y}}{S_{1x}}\\\n\vec{G}^2\cdot\vec{G}_2 = 1 =S^{2x}S_{2x}+S^{2y}S_{2y} & \rightarrow & S^{2x}=\frac{1-S^{2y}S_{2y}}{S_{2x}}\\\n\vec{G}^1\cdot\vec{G}_2 = 0 =S^{1x}S_{2x}+S^{1y}S_{2y} & \rightarrow & S^{1x}=-S^{1y}\frac{S_{2y}}{S_{2x}}.\n\end{aligned}\n\label{Equation:OrthogonalMaterialBasis}\n\end{equation} \n\nSolving the system (\ref{Equation:OrthogonalMaterialBasis}) yields the components of the vectors $\vec{G}^i$: \n\begin{equation}\begin{aligned}\n\vec{G}^1=\vec{S}^1=\left(\frac{S_{2y}}{S_{1x}S_{2y}-S_{1y}S_{2x}}, -\frac{S_{2x}}{S_{1x}S_{2y}-S_{1y}S_{2x}}\right) \\\n\vec{G}^2=\vec{S}^2=\left(-\frac{S_{1y}}{S_{1x}S_{2y}-S_{1y}S_{2x}}, \frac{S_{1x}}{S_{1x}S_{2y}-S_{1y}S_{2x}}\right).\n\end{aligned}\n\label{Equation:ContravariantMaterialCoordinates}\n\end{equation}\n\nTo express the Cauchy-Green strain tensor directly as a function of the edge vectors, it is convenient to introduce Gram matrices. The Gram matrix of two arbitrary vectors $\vec{v}_1$ and $\vec{v}_2$ is a $2\times 2$ matrix whose element $ij$ is by definition given by the scalar product $\vec{v}_i\cdot \vec{v}_j$. The covariant metric tensor defined in Eq.\ (\ref{eq:metric}) is thus the Gram matrix of the edge vectors in the current configuration. Following the notation used in the Surface Evolver manual, we will call this quantity $\mathbf{s}$: \n \begin{equation}\n \mathbf{s} = \begin{pmatrix}\n \vec{s}_1\cdot\vec{s}_1 & \vec{s}_1\cdot\vec{s}_2 \\\n \vec{s}_2\cdot\vec{s}_1 & \vec{s}_2\cdot\vec{s}_2\n \end{pmatrix}=g_{ij}.\n \end{equation}\n\nThe Gram matrix of the edge vectors in the reference state is denoted $\mathbf{S}$:\n\begin{equation}\n \mathbf{S} = \begin{pmatrix}\n \vec{S}_1\cdot\vec{S}_1 & \vec{S}_1\cdot\vec{S}_2 \\\n \vec{S}_2\cdot\vec{S}_1 & \vec{S}_2\cdot\vec{S}_2\n \end{pmatrix}.\n \end{equation}\n\nWe note that the denominators in Eqs.\ (\ref{Equation:ContravariantMaterialCoordinates}) are the determinant of $\mathbf{S}$: \n\begin{equation}\n\mathrm{det}\,\n \mathbf{S} = \left(\vec{S}_1\cdot\vec{S}_1\right)\cdot\left(\vec{S}_2\cdot\vec{S}_2\right) - \left(\vec{S}_1\cdot\vec{S}_2\right)^2=\left(S_{1x}S_{2y}-S_{1y}S_{2x} \right)^2.\n\label{Equation:CovariantGramDeterminant}\n\end{equation}\n\nSince the components of the tensor $G^i\otimes G^j$ are the scalar products of $G^i$ and $G^j$ \cite{Kelly}, we can now write Eq.\ (\ref{Equation:RightCauchyGreenAppendix}) in terms of the Cartesian components of $\mathbf{S}$, using Eqs.\ (\ref{Equation:ContravariantMaterialCoordinates}) and (\ref{Equation:CovariantGramDeterminant}),\n\n\begin{equation}\begin{aligned}\n\vec{G}^1\cdot\vec{G}^1 &= \frac{S_{2x}S_{2x}+S_{2y}S_{2y}}{\mathrm{det}(\mathbf{S})} =& \frac{\vec{S}_2\cdot\vec{S}_2}{\mathrm{det}(\mathbf{S})} \\\n\vec{G}^2\cdot\vec{G}^2 &= \frac{S_{1x}S_{1x}+S_{1y}S_{1y}}{\mathrm{det}(\mathbf{S})} =&\frac{\vec{S}_1\cdot\vec{S}_1}{\mathrm{det}(\mathbf{S})} \\\n\vec{G}^1\cdot\vec{G}^2 &= -\frac{S_{1x}S_{2x}+S_{1y}S_{2y}}{\mathrm{det}(\mathbf{S})} 
=&-\frac{\vec{S}_1\cdot\vec{S}_2}{\mathrm{det}(\mathbf{S})}.\n\end{aligned}\end{equation}\n\nThis result shows that $G^i\otimes G^j$ is the inverse of the Gram matrix $\mathbf{S}$,\n\begin{equation}\nG^{i}\otimes G^{j} = \frac{1}{\mathrm{det}(\mathbf{S})}\n\begin{pmatrix}\n\vec{S}_2\cdot\vec{S}_2 & -\vec{S}_1\cdot\vec{S}_2 \\\n-\vec{S}_1\cdot\vec{S}_2 & \vec{S}_1\cdot\vec{S}_1\n\end{pmatrix}\n=\mathbf{S}^{-1}.\n\end{equation}\n\n We can finally express the 2D right Cauchy-Green tensor (Eq.\ \ref{Equation:RightCauchyGreenAppendix}), needed in Section \ref{sec:energy} to calculate the elastic energy, in terms of the Gram matrices $\mathbf{s}$ and $\mathbf{S}$:\n\begin{equation}\n\mathbf{C}=\mathbf{F}^T\mathbf{F} = \mathbf{s}\,\mathbf{S}^{-1}.\n\label{Equation:RightCauchyGreenExplained}\n\end{equation}\n\nWe note that Eq.\ (\ref{Equation:RightCauchyGreenExplained}) can also be used to compute the Green-Lagrange strain tensor $\mathbf{E}=\mathbf{F}^T\mathbf{F}-\mathbf{I}$ from the vertex coordinates. $\mathbf{E}$ converges to the infinitesimal strain tensor $\mathbf{\varepsilon}$ in the limit of small deformations. Eq.\ (\ref{Equation:RightCauchyGreenExplained}) is thus the key result for evaluating strain in Surface Evolver calculations. We note that Eq.\ \eqref{Equation:RightCauchyGreenExplained} also gives the correct strain for displacements of vertices normal to the surface.\n\n\n\n\subsection{Elastic energy \label{sec:energy}}\nIn this section we explain how the elastic contribution to the interfacial energy is determined in our simulations. According to the compressible 3D neo-Hookean model implemented in the Surface Evolver \cite{Bouzidi_CompStruct_2004}, and commonly used in the literature \cite{Pence2015}, the elastic energy per volume is\n\begin{equation}\nW_{3D} = \frac{G}{2} (Tr\, \mathcal{C}-3)-G \ln J +\frac{\Lambda}{2} (\ln J)^2.\n\label{eq:3D energy density 1}\n\end{equation} \n$G$ and $\Lambda$ are the Lam\u00e9 parameters.\n $J^2=\mathrm{det}(\mathcal{C})$ is an invariant of $\mathcal{C}$, a scalar quantity independent of the reference frame. It is given by the ratio of the volumes of a material element in the current deformed and initial states. \nIn the limit of small deformations, the energy density of Eq.\ (\ref{eq:3D energy density 1}) reduces, as expected, to the one deduced from Hooke's law for linear elastic isotropic materials \cite{LandauLifshitz}, using the infinitesimal strain tensor $\mathbf{\epsilon}$ defined by Eq.\ (\ref{eq:epsilon}):\n\begin{equation}\nW_{3D} = \frac{\Lambda}{2} \left(Tr\,\mathbf{\epsilon}\right)^2+G\, Tr(\mathbf{\epsilon}^2).\n\label{eq:linear_energy_density}\n\end{equation}\n\n\par \nThe elastic skins considered in our work are so thin that their bending stiffness is negligible. Their resistance to shear deformations in which the two opposite faces are displaced relative to each other is very strong; we therefore neglect this mode of deformation and assume a state of {\it plane stress}, consistent with the Kirchhoff hypotheses of thin shell theory \cite{axelrad1987}. Using Cartesian coordinates with an $x_3$ axis perpendicular to an element of the skin, this is expressed as $\mathcal{C}_{31}=\mathcal{C}_{32}=\mathcal{C}_{13}=\mathcal{C}_{23}=0$. 
\nIn the same spirit, we consider the case where the stress normal to the skin has a negligible effect on its shape, so that we can assume $\sigma_{33}=0$ without loss of generality.\nFor plane stress, the changes of volume and changes of skin thickness are directly related. To analyse this feature, we recall a general relation between the energy density and the Cauchy stress of hyperelastic materials \cite{Mal1991},\n\begin{equation}\n J \mathbf{F}^{-1} \mathbf{\sigma}\mathbf{F}^{-T} =2 \frac{\partial W_{3D}}{\partial\mathcal{C}}.\n\end{equation}\nThe plane stress condition can thus be expressed as \n\begin{equation}\n \frac{\partial W_{3D}}{\partial\mathcal{C}_{33}}=0.\n\end{equation}\nUsing Eq.\ \eqref{eq:3D energy density 1}, this yields\n\begin{equation}\n \Lambda \ln J = G (1 - \mathcal{C}_{33}).\n \label{eq:JC33}\n\end{equation}\nPhysically speaking, this equation, previously derived for a similar constitutive law \cite{Pascon2019}, relates the squared ratio of the current and initial skin thicknesses, given by $\mathcal{C}_{33}$, to the ratio of the current and initial skin volumes, expressed by $J$. \nWith the aim of deriving a 2D energy density, we write Eq.\ \eqref{eq:JC33} as a function of the components of $\mathcal{C}$, taking into account that many of them are zero in the case of plane stress, as pointed out above:\n\begin{equation}\n \mathcal{C}_{33}(\mathcal{C}_{11}\mathcal{C}_{22}-\mathcal{C}_{12}^2)=\exp\left[\frac{2 G}{\Lambda}(1-\mathcal{C}_{33})\right].\n \label{eq:JC332}\n\end{equation}\nTo represent the skin as a 2D material whose deformation is fully specified by $\mathcal{C}_{11},\mathcal{C}_{22}$ and $\mathcal{C}_{12}$, we need to express $\mathcal{C}_{33}$ in terms of these other variables. This can be done by solving Eq.\ \eqref{eq:JC332} either numerically \cite{Pascon2019}, or analytically, using Lambert's $W$ function \cite{corless1996}:\n\begin{equation}\n\begin{split}\n\mathcal{C}_{33} &= \frac{\Lambda}{2 G }\, W\left[\frac{2 G \, \exp(2 G \/\Lambda)}{\Lambda\, \mathrm{det}(\mathbf{C}) }\right]\\\n&= \frac{\Lambda}{2 G }\, W\left[\frac{2 G \, \exp(2 G \/\Lambda)}{\Lambda(\mathcal{C}_{11}\,\mathcal{C}_{22}-\mathcal{C}_{12}^2)}\right].\n\end{split}\n\end{equation}\nThe latter option has been implemented by R.\ Bouzidi in the Surface Evolver software. Inserting the expression of $\mathcal{C}_{33}$ in Eq.\ \eqref{eq:JC33} and the resulting expression for $\ln J$ into the 3D energy density of Eq.\ \eqref{eq:3D energy density 1}, we obtain the following 2D energy density for a neo-Hookean skin, where $h_0$ is the skin thickness in the reference state,\n\begin{equation}\nW_{2D} =G h_0\left( \frac{1}{2} (Tr\, \mathcal{C}-3)- \frac{G}{\Lambda}(1-\mathcal{C}_{33}) +\frac{G}{2\Lambda} (1-\mathcal{C}_{33})^2 \right).\n\label{eq:2D energy density}\n\end{equation}\n$G h_0$ may be interpreted as a 2D shear modulus.\nNeglecting constant terms, which are irrelevant for a potential energy, and expressing the result in terms of the 2D right Cauchy-Green tensor using\n$\mathrm{Tr}\, \mathcal{C} = \mathrm{Tr}\, \mathbf{C} + \mathcal{C}_{33}$, we obtain\n\begin{equation}\nW_{2D} =\frac{G h_0}{2}\left( Tr\, \mathbf{C} +\mathcal{C}_{33} + \frac{G}{\Lambda} \mathcal{C}_{33}^2 \right).\n\label{eq:2D energy density final}\n\end{equation}\n\n\n\n\nThe skin materials considered in the present paper are much easier to shear than to compress, such that $G \ll \Lambda$. 
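\nFor completeness, a minimal numerical sketch of this evaluation is given below (assuming the SciPy implementation of Lambert's $W$ function and its principal branch, which is the relevant one here since the argument is positive; the function name is ours, for illustration only). It returns the energy density of Eq.\ \eqref{eq:2D energy density final} for a given facet tensor $\mathbf{C}$, computed for instance with the sketch of the previous section.\n\begin{verbatim}\nimport numpy as np\nfrom scipy.special import lambertw\n\ndef energy_density_2d(C, G, Lam, h0):\n    # C: 2x2 right Cauchy-Green tensor of the facet,\n    # G, Lam: Lame parameters, h0: reference skin thickness\n    detC = np.linalg.det(C)\n    # plane-stress elimination of C_33 via Lambert's W function\n    C33 = (Lam \/ (2.0 * G)) * np.real(\n        lambertw(2.0 * G * np.exp(2.0 * G \/ Lam) \/ (Lam * detC)))\n    # 2D energy density, Eq. (eq:2D energy density final)\n    return 0.5 * G * h0 * (np.trace(C) + C33 + (G \/ Lam) * C33 ** 2)\n\end{verbatim}\n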
In the limit $G \ll \Lambda$, the last term in Eq.\ \eqref{eq:2D energy density final} can be neglected.\n \par Besides the neo-Hookean model discussed so far, the Surface Evolver software provides an alternative energy density expression called \"linear elastic model\", which yields behavior consistent with Eq.\ \eqref{eq:linear_energy_density} in the limit of small deformations. However, one should be aware that for large deformations this numerical model, based on the right Cauchy-Green tensor, is not consistent with Eq.\ (\ref{eq:linear_energy_density}). \n\n\n\n\n\n\section{Pressure-deformation relations of droploons on capillaries expressed via radial stretch}\n\label{annex:PressDefNeedle}\n\nIn the main body of the article we expressed all relations in terms of the area stretch $\lambda_A$. The same approach can be followed for the radial stretch $\lambda$, leading, however, to expressions which are less intuitive and less directly accessible by experiments and simulations. For completeness, we provide the resulting equations here. \n\nWe can rewrite the interfacial area $A$ for a droploon on a capillary larger than a hemisphere as \n\begin{equation}\n \begin{split}\n A & = 2\pi R^2\left(1 + \sqrt{1-\left(\frac{R_n}{R}\right)^2}\right) \\\n & = 2\pi R^2 \mathpzc{f}(R_n\/R).\n \end{split}\n \label{eq:interfacialarea}\n\end{equation}\nThe function $\mathpzc{f}(R_n\/R)$ defined by Eq.\ (\ref{eq:interfacialarea}) helps to express the result in a more concise way.\n\nThe term $\ln(A\/A_0)$ in the Gibbs relation (\ref{eq:GibbsGamma}) can then be rewritten using Eq.\ (\ref{eq:interfacialarea}) to give the normalised surface stress of the droploon on the capillary\n \begin{equation}\n \hat{\sigma} = 1+ 2\alpha \ln \lambda + \alpha \ln \xi.\n \label{eq:GibbsNeedle}\n \end{equation}\n The last term, which depends on the geometric factor \n \begin{equation}\n \xi = \frac{\mathpzc{f}(R_n\/R)}{\mathpzc{f}(R_n\/R_0)},\n \label{eq:GeomFactor}\n \end{equation}\n expresses the impact of a capillary on the elastic stress at the surface of a sphere, assuming a spherical sector shape.\n \n In the first two terms one recognises the result previously obtained for the perfect sphere (Eq.\ (\ref{eq:GibbsGamma})). One can therefore rewrite\n \begin{equation}\n \hat{\sigma} = \hat{\sigma}_{sphere} + \alpha \ln \xi.\n \label{eq:GibbsNeedleSphere}\n \end{equation}\n \nCompared to a sphere with the same radius, the presence of the capillary introduces a corrective term in the surface stress which depends on $\alpha$, $R$, $R_n$ and $R_0$. \n \nFor neo-Hookean droploons, the droploon shapes on capillaries are no longer perfect spherical sectors, making analytical descriptions much harder, which is why numerical simulations are required. Nevertheless, we shall make here the seemingly crude assumption that the shapes can be approximated as spherical sectors. 
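\nBefore turning to the neo-Hookean case, the Gibbs result can be illustrated with a short numerical sketch (function names are ours, for illustration only); it evaluates the sector area factor as written in Eq.\ (\ref{eq:interfacialarea}), the geometric factor of Eq.\ (\ref{eq:GeomFactor}) and the normalised surface stress of Eq.\ (\ref{eq:GibbsNeedle}), and it assumes that the droploon remains larger than a hemisphere ($R \geq R_n$).\n\begin{verbatim}\nimport numpy as np\n\ndef f_sector(x):\n    # spherical-sector area factor of Eq. (eq:interfacialarea), x = R_n\/R\n    return 1.0 + np.sqrt(1.0 - x ** 2)\n\ndef gibbs_surface_stress(lam, Rn, R0, alpha):\n    # lam = R\/R0 is the radial stretch, alpha the elastocapillary number\n    R = lam * R0\n    # geometric factor of Eq. (eq:GeomFactor)\n    xi = f_sector(Rn \/ R) \/ f_sector(Rn \/ R0)\n    # normalised surface stress of Eq. (eq:GibbsNeedle)\n    return 1.0 + 2.0 * alpha * np.log(lam) + alpha * np.log(xi)\n\end{verbatim}\n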
\n\nUsing exactly the same approach as for the Gibbs interface but with the neo-Hookean relation(see Table \\ref{tab:models}), one finds for a neo-Hookean droploon on a capillary \n\n \\begin{equation}\n \\hat{\\sigma} = 1 + \\frac{\\alpha}{3} \\left( 1 - \\lambda^{-6} \\xi^{-3} \\right).\n \\label{eq:NeoHookeNeedleAnnexe1}\n \\end{equation}\n \n After some algebra, this can be rewritten as the expression for the perfect sphere with a corrective term taking account of the capillary\n \\begin{equation}\n \\hat{\\sigma} = \\hat{\\sigma}_{sphere} + \\frac{\\alpha}{3} \\left(1 - \\xi^{-3}\\right)\\lambda^{-6}.\n \\label{eq:NeoHookeNeedleAnnexe2}\n \\end{equation}\n \n In the limit of small deformations, our results for both Gibbs and neo-Hooke elastiticy yield the same relation \n \\begin{equation}\n \\hat{\\sigma} = \\hat{\\sigma}_{sphere} + \\alpha \\left(\\xi - 1 \\right)\\lambda,\n \\label{eq:HookeNeedleAnnexe3}\n \\end{equation}\nconsistently with what one would obtain for a perfectly spherical sector droploon with Hookean skin on a capillary. In all cases, the corrective term is zero in the reference state where $R=R_0$. Once the interfacial stresses are known, the pressure-deformation relation can be calculated using the Young-Laplace law given in Eq. (\\ref{eq:NormalisedPressure}).\n\nTable \\ref{tab:modelsNeedles} summarises normalised expressions derived from this simple geometrical approximation model, together with expressions for the critical stretch.\n\n\\begin{table*}[t]\n \\centering\n \\begin{tabular}{|c|c|c|}\n \\hline\n Model on capillary & Normalised surface stress $\\hat{\\sigma}$ & Critical stretch $\\lambda_c$ \\\\ \\hline\nGibbs & $\\hat{\\sigma}_{sphere} + \\alpha \\ln \\xi$ & $\\frac{R_0\\left(1+\\sqrt{1-(\\frac{R_n}{R_0})^2}\\right)e^{-\\frac{1}{\\alpha}}}{\\sqrt{2R_0^2(1+\\sqrt{1-(\\frac{R_n}{R_0})^2})e^{-\\frac{1}{\\alpha}}-R_n^2}}$ \\\\ \\hline\nNeo-Hooke & $\\hat{\\sigma}_{sphere} + \\frac{\\alpha}{3} \\left(1-\\xi^{-3}\\right)\\lambda^{-6}$ & $\\frac{R_0(1+\\sqrt{1-(\\frac{R_n}{R_0})^2})\\left(1-\\frac{1}{2\\alpha} \\right)^2}\n{ \\sqrt{2R_0^2\\left(1+\\sqrt{1-(\\frac{R_n}{R_0})^2}\\right)\\left(1-\\frac{1}{2\\alpha} \\right)^2-R_n^2}}$ \\\\ \\hline\nHooke & $\\hat{\\sigma}_{sphere} + \\alpha \\left(\\xi - 1 \\right)\\lambda$ & $\\frac{R_0(1+\\sqrt{1-(\\frac{R_n}{R_0})^2})\\left(\\frac{\\alpha}{\\alpha+3}\\right)^{\\frac{1}{3}} }\n{ \\sqrt{2R_0^2\\left(1+\\sqrt{1-(\\frac{R_n}{R_0})^2}\\right)\\left(\\frac{\\alpha}{\\alpha+3}\\right)^{\\frac{1}{3}}-R_n^2}}$\\\\\n\\hline\n \\end{tabular}\n \\caption{Summary of the normalised expressions for the surface stress of drops on capillaries using the approximation that the drop can be described by a spherical sector. While for Gibbs droploons these are correct, they are only approximations for Hookean and neo-Hookean droploons. The expressions for $\\hat{\\sigma}_{sphere}$ are given in Table \\ref{tab:models}. The geometric factor $\\xi$ is given in Eq. (\\ref{eq:GeomFactor}). }\n \\label{tab:modelsNeedles}\n\\end{table*}\n\n\n\n\n\n\n\\clearpage\n\n\n\n\n\\balance\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\\subsection{Gopakumar-Vafa invariants}\nA smooth complex projective 4-fold $X$ is {\\em holomorphic symplectic} if\nit is equipped with a non-degenerate holomorphic 2-form $\\sigma\\in H^0(X,\\Omega^2_X)$. \nThe ordinary Gromov-Witten invariants of $X$ always vanish for non-zero curve classes. 
Instead a reduced Gromov-Witten theory is\ndefined by Kiem-Li's cosection localization \\cite{KiL}. \n\nGiven cohomology classes $\\gamma_i \\in H^{\\ast}(X,\\mathbb{Z})$,\nthe (reduced) Gromov-Witten invariants of $X$ in \na non-zero curve class \n$\\beta \\in H_2(X,\\mathbb{Z})$ \nare defined by\n\\begin{align}\\label{intro GWinv}\n\\mathrm{GW}_{g, \\beta}(\\gamma_1, \\ldots, \\gamma_l)\n=\\int_{[\\overline{M}_{g, l}(X, \\beta)]^{\\rm{vir}}}\n\\prod_{i=1}^l \\mathrm{ev}_i^{\\ast}(\\gamma_i),\n\\end{align}\nwhere \n\\begin{equation*}[\\overline{M}_{g, l}(X, \\beta)]^{\\mathrm{vir}}\\in A_{2-g+l}(\\overline{M}_{g, l}(X, \\beta)) \\end{equation*}\nis the (reduced) virtual class and $\\mathrm{ev}_i \\colon \\overline{M}_{g,l}(X, \\beta)\\to X$\nis the evaluation map at the $i$-th marking. \nWe refer to \\cite{O1, O3, OSY} for some references on computations for \\eqref{intro GWinv}.\nGromov-Witten invariants are in general rational numbers because the moduli space $\\overline{M}_{g,l}(X, \\beta)$ of stable maps is a Deligne-Mumford stack. \nIt is an interesting question to find out integer-valued invariants\nwhich underlie them.\n\nIn \\cite{COT1}, we studied this question and defined \\textit{genus 0 Gopakumar-Vafa invariants}\n\\begin{equation}\\label{intro gv invs1}n_{0,\\beta}(\\gamma_1, \\ldots, \\gamma_l)\\in \\mathbb{Q} \\end{equation}\nfor any non-zero curve class $\\beta$ and \\textit{genus 1 and 2 Gopakumar-Vafa invariants}\n\\begin{equation}\\label{intro gv invs2}n_{1,\\beta}(\\gamma)\\in \\mathbb{Q}, \\,\\,\\forall \\,\\, \\gamma\\in H^4(X,\\mathbb{Z}); \\quad n_{2,\\beta} \\in \\mathbb{Q} \\end{equation}\nfor any primitive curve class $\\beta$ (i.e.~it is not a multiple of a non-zero curve class in $H_2(X,{\\mathbb{Z}})$) from Gromov-Witten invariants \\eqref{intro GWinv} (see \\S \\ref{sect on gv} for details). \nThis may be compared with the previous works of Gopakumar and Vafa \\cite{GV} on Calabi-Yau 3-folds, Klemm and Pandharipande \\cite{KP} on Calabi-Yau 4-folds and Pandharipande and Zinger \\cite{PZ} on Calabi-Yau 5-folds. \n\nIn loc.~cit., we conjectured the integrality of \\eqref{intro gv invs1}, \\eqref{intro gv invs2} and provided substantial evidence for it. \nThe aim of this paper is to give a sheaf theoretic interpretation of these Gopakumar-Vafa invariants using moduli spaces of stable pairs, in analogy with the discussion of \\cite{CMT2, CT1} on ordinary Calabi-Yau 4-folds. \n\n\\subsection{GV\/Pairs correspondence}\nLet $F$ be a one dimensional coherent sheaf on $X$ and $s\\in H^0(F)$ be a section.\nFor an ample divisor $\\omega$ on $X$, we denote the slope function by $\\mu(F)=\\chi(F)\/(\\omega \\cdot [F])$.\nThe pair $(F,s)$ is called $Z_t$-\\textit{stable} $($$t\\in\\mathbb{R}$$)$ if\n\\begin{enumerate}\n\\renewcommand{\\labelenumi}{(\\roman{enumi})}\n\\item for any subsheaf $0\\neq F' \\subseteq F$, we have \n$\\mu(F')t$. \n\\end{enumerate}\nFor a non-zero curve class $\\beta \\in H_2(X, \\mathbb{Z})$ and $n\\in \\mathbb{Z}$, \nwe denote by\n\\begin{align*}\nP_n^t(X, \\beta)\n\\end{align*}\nthe moduli space of \n$Z_t$-stable pairs \n$(F, s)$ with $([F], \\chi(F))=(\\beta, n)$. It has a wall-chamber structure and for a \\textit{general} $t \\in \\mathbb{R}$ (i.e.~outside a finite subset of rational numbers in $\\mathbb{R}$), \nit is a projective scheme. \n\nWhen $t<\\frac{n}{\\omega \\cdot \\beta}$, $P_n^t(X, \\beta)$ is empty. 
The first nontrivial chamber appears when \n$t=\\frac{n}{\\omega \\cdot \\beta}+0^+$, which we call \\textit{Joyce-Song (JS) chamber} (here $0^+$ denotes a sufficiently small positive number \nwith respect to the fixed $\\omega,\\beta,n$). When $t\\gg 1$, it recovers the moduli space of \\textit{Pandharipande-Thomas (PT) stable pairs} \\cite{PT} (Proposition \\ref{prop:chambers}).\n\nFor general $t\\in \\mathbb{R}$, by Theorem \\ref{existence of proj moduli space}, we can define its $\\mathop{\\rm DT}\\nolimits_4$ virtual class following \\cite{BJ, OT} (see also \\cite{CL1}).\nHowever, by a cosection argument the virtual class vanishes, see \\cite{KiP, Sav}. \nUsing Kiem-Park's cosection localization \\cite{KiP}, we have a (reduced) virtual class\n\\begin{align*}[P^t_n(X,\\beta)]^{\\mathrm{vir}}\\in A_{n+1}(P^t_n(X,\\beta),\\mathbb{Q}), \\end{align*}\ndepending on the choice of orientation \\cite{CGJ, CL2}. More precisely, for each connected component of $P^t_n(X,\\beta)$, there \nare two choices of orientation which affect the virtual class by a sign (component-wise).\nTo define its counting invariants, let \n\\begin{align*}\\tau: H^{m}(X,\\mathbb{Z})\\to H^{m-2}(P_n^t(X,\\beta),\\mathbb{Z}), \\end{align*}\n\\begin{align*}\\tau(\\gamma):=\\pi_{P\\ast}\\left(\\pi_X^{\\ast}\\gamma \\cup\\mathop{\\rm ch}\\nolimits_{3}(\\mathbb{F})\\right),\n\\end{align*}\nwhere $\\mathbb{I}=(\\mathcal{O}\\to \\mathbb{F})$ is the universal $Z_t$-stable pair and $\\pi_P, \\pi_X$ are projections from $P_n^t(X,\\beta)\\times X$ onto its factors. \nFor $\\gamma_i \\in H^{m_i}(X, \\mathbb{Z})$, the $Z_t$-\\textit{stable pair invariants} are defined by \n\\begin{align*} \nP_{n,\\beta}^t(\\gamma_1,\\ldots,\\gamma_l):=\\int_{[P_n^t(X,\\beta)]^{\\rm{vir}}}\\prod_{i=1}^l\\tau(\\gamma_i)\\in\\mathbb{Q}.\n\\end{align*}\nWhen $n=-1$, we also write \n$$P_{-1,\\beta}^t:=\\int_{[P_{-1}^t(X,\\beta)]^{\\rm{vir}}}1. $$\nHere is the main conjecture of this paper, which gives a sheaf theoretic interpretation of\nall genus Gopakumar-Vafa invariants using $Z_t$-stable pair invariants. \n\\begin{conj}\\emph{(Conjecture \\ref{conj on DT4\/GV})}\\label{intro conj on DT4\/GV}\nFix $n\\in\\mathbb{Z}$, $\\beta\\in H_2(X,\\mathbb{Z})$ and let $t>\\frac{n}{\\omega\\cdot \\beta}$ be generic. \nFor certain choice of orientation, we have \n\\begin{enumerate}\n\\item If $n\\geqslant 2$, then \n\\begin{align*} \nP_{n,\\beta}^t(\\gamma_1,\\ldots,\\gamma_l)=0.\n\\end{align*}\n\\item If $n=1$, then\n\\begin{align*} \nP_{1,\\beta}^t(\\gamma_1,\\ldots,\\gamma_l)=n_{0, \\beta}(\\gamma_1, \\ldots, \\gamma_l). \\end{align*}\n\\item If $n=0$ and $\\beta$ is primitive, then\n\\begin{align*} \nP_{0,\\beta}^t(\\gamma)=n_{1, \\beta}(\\gamma).\n\\end{align*}\n\\item If $n=-1$ and $\\beta$ is primitive, then \n\\begin{align*} \nP_{-1,\\beta}^t=n_{2,\\beta}.\n\\end{align*}\n\\end{enumerate}\n\\end{conj}\nWe verify this conjecture by a computation in an ideal geometry where curves deform in families of expected dimensions and \nhave expected generic properties (see \\S \\ref{sect on heur}). Besides this, we study several examples and prove our conjecture in those cases.\n\n\n\\subsection{Verification of conjectures I: $K3\\times K3$}\nLet $X=S\\times T$ be the product of two $K3$ surfaces. 
\nWhen the curve class $\\beta \\in H_2(S \\times T, {\\mathbb{Z}})$\nis of non-trivial degree over both $S$ and $T$, then\none can construct two linearly independent cosections for moduli spaces of stable maps,\nwhich imply that the (reduced) Gromov-Witten invariants of $X$ in this class vanish. Therefore we always restrict to consider curve classes of form\n\\begin{equation*}\\beta\\in H_2(S,\\mathbb{Z})\\subseteq H_2(X,\\mathbb{Z}). \\end{equation*} \n\n\\begin{thm}\\emph{(Theorem \\ref{thm on g=0 conj on prod}, \\ref{thm on g=1 conj on prod}, \\ref{thm on P_-1}, Remark \\ref{rmk on pri g=0})}\nLet $X=S\\times T$ be as above. Then Conjecture \\ref{intro conj on DT4\/GV}\nholds for any primitive curve class $\\beta\\in H_2(S,\\mathbb{Z})\\subseteq H_2(X,\\mathbb{Z})$. \n\\end{thm}\nIn fact, by the global Torelli theorem (see e.g.~\\cite{Ver, Huy}), primitive curve classes on $K3$ surfaces can be deformed to irreducible curve classes. \nBy deformation invariance, we only need to deal with an irreducible curve class $\\beta$, in which case we have an isomorphism (Proposition \\ref{prop on smoothness}):\n\\begin{equation*}P^t_{n}(X,\\beta)\\cong P^t_{n}(S,\\beta)\\times T, \\end{equation*}\nand a forgetful map \n\\begin{equation}\\label{intro fort map}P^t_{n}(S,\\beta)\\to M_n(S,\\beta), \\end{equation}\nwhere $M_n(S,\\beta)$ is the coarse moduli space of one dimensional stable sheaves $F$ on $S$ with $[F]=\\beta$, $\\chi(F)=n$.\nBoth $P^t_{n}(S,\\beta)$ and $M_n(S,\\beta)$ are smooth schemes. We can then determine the $\\mathop{\\rm DT}\\nolimits_4$ virtual class of $P^t_{n}(X,\\beta)$ (Theorem \\ref{thm on vir clas}) and its pushforward (under the forgetful map) by the Thom-Porteus formula (Proposition \\ref{deg loci}). This enables us to reduce the computation\nof $Z_t$-stable pair invariants to certain tautological integrals on $M_n(S,\\beta)$.\nBy Markman's framework of monodromy operators \\cite{Markman}, we relate such integrals to certain tautological integrals\non Hilbert schemes of points on $S$ (see \\S \\ref{sect on trans} for details), \nwhich we explicitly determine using \\cite{COT1} (see the proof of Theorem \\ref{thm on g=1 conj on prod}, \\ref{thm on P_-1} for details). \n \n\n\n\n\\subsection{Verification of conjectures II: $T^*\\mathbb{P}^2$}\nLet $H \\in H^2(T^{\\ast} {\\mathbb{P}}^2)$ be the pullback of the hyperplane class and let us identify $H_2(T^{\\ast} {\\mathbb{P}}^2, {\\mathbb{Z}}) \\equiv {\\mathbb{Z}}$ by taking the degree against $H$.\n\nBy explicitly describing the moduli spaces and virtual classes, we obtain: \n\\begin{prop}\\emph{(Proposition \\ref{prop on tp2})}\\label{intro prop on tp2}\nFor certain choice of orientation, we have \n$$P_{1,1}(H^2,H^2)=1, \\quad P_{1,2}(H^2,H^2)=-1, \\quad P_{1,3}(H^2,H^2)=0, $$\n$$P_{0,1}(H^2)=P_{0,2}(H^2)=0, \\quad P_{0,3}(H^2)=1, \\quad P_{-1,1}=P_{-1,2}=P_{-1,3}=0.$$\nMoreover, $P^t_{n}(X,d)$ is independent of the choice of $t>n\/d$ in the listed cases above. \n\nIn particular, for $X=T^*\\mathbb{P}^2$, we have\n\\begin{itemize}\n\\item Conjecture \\ref{intro conj on DT4\/GV} (2) holds when $d\\leqslant 3$. \n\\item Conjecture \\ref{intro conj on DT4\/GV} (3), (4) hold. \n\\end{itemize}\n\\end{prop}\n\n \n\\subsection{Verification of conjectures III: exceptional curves on $\\mathop{\\rm Hilb}\\nolimits^2(K3)$} \nLet $S$ be a $K3$ surface and $\\mathop{\\rm Hilb}\\nolimits^2(S)$ be the Hilbert scheme of two points on $S$. 
Consider the Hilbert-Chow map \n$$\pi: \mathop{\rm Hilb}\nolimits^2(S)\to \mathop{\rm Sym}\nolimits^2(S) $$\nto the symmetric product of $S$. Let $D$ be the exceptional divisor fitting into the Cartesian diagram: \n\begin{align*} \xymatrix{\nD \ar[d]_{\pi} \ar[r]^{i \quad \,\,\, } & \mathop{\rm Hilb}\nolimits^2(S) \ar[d]^{\pi} \\\nS \ar[r]^{\Delta \quad \,\,\, } & \mathop{\rm Sym}\nolimits^2(S), } \quad \quad\n\end{align*}\nwhere $\Delta$ is the diagonal embedding and $\pi: D\to S$ is a $\mathbb{P}^1$-bundle.\nThe following provides a verification of our (genus 0) conjecture for imprimitive curve classes. \n\begin{thm}\emph{(Theorem \ref{thm on hilbS})}\label{intro thm on hilbS}\nIn the JS chamber, Conjecture \ref{intro conj on DT4\/GV} (1),~(2) hold for multiple fiber classes $\beta=r[\mathbb{P}^1]$ $($$r\geqslant 1$$)$ of $\pi$ as above. \n\end{thm} \nIn fact, by the Jordan-H\"older filtration and a dimension counting, the JS pair invariants of $P_{n}^{\mathrm{JS}}(X,r[\mathbb{P}^1])$ are zero unless $n=r$, \nin which case we have \n$$P_{n}^{\mathrm{JS}}(X,n[\mathbb{P}^1]) \cong \mathop{\rm Hilb}\nolimits^n(S). $$\nThen the proof makes use of the Chern class operator of tautological bundles by Lehn \cite{Lehn}. \n\n\subsection{Multiple fiber classes of elliptic fibrations} \nLet $p: S\rightarrow\mathbb{P}^{1}$ be an elliptic $K3$ surface and consider the elliptic fibration: \n$$\bar{p}:=p\times \textrm{id}_T: X:=S\times T\to \mathbb{P}^{1}\times T=:Y, $$\nwhere $T$ is a $K3$ surface.\nDenote by $f$ a generic fiber of $\bar{p}$ and by ${\mathsf{p}}\in H_0(T)$ the point class.\nThe following gives a closed formula for $Z_t$-stable pair invariants of multiple fiber classes. \n\begin{thm}\emph{(Theorem \ref{thm2 on g=1 of multiple fiber})}\label{intro thm2 on g=1 of multiple fiber}\nLet $t>0$. Then for certain choice of orientation, we have \n\begin{align*}\n\sum_{r\geqslant 0}P^t_{0,r[f]}(\gamma)\,q^r=24\,\left(\int_{S \times {\mathsf{p}}} \gamma\right)\cdot \sum_{m\geqslant 1}\sum_{n | m}n^2q^m. \end{align*}\n\end{thm}\nAs for the proof, we note that there is an isomorphism \n$$\bar{p}^*: \mathop{\rm Hilb}\nolimits^r(Y) \cong P^t_0(X,r[f]), \quad I_Z\mapsto \bar{p}^*I_Z, $$\nunder which the (reduced) virtual classes \n$$(-1)^{n+1}[\mathop{\rm Hilb}\nolimits^r(Y)]^{\mathrm{vir}}=[P^t_0(X,r[f])]^{\mathrm{vir}}\in A_1(\mathop{\rm Hilb}\nolimits^r(Y)) $$\ncan be identified for certain choice of orientation on the right hand side. \nThen we are left to evaluate an integral on $[\mathop{\rm Hilb}\nolimits^r(Y)]^{\mathrm{vir}}$ which can be done via the degeneration method and a Behrend function argument \cite{B, OS}.\nWe refer to Theorem \ref{thm1 on g=1 of multiple fiber} for a similar result for the trivial elliptic fibration $E\times E\times T\to E\times T$ \nand the proof therein for details. \n\nThe formula in Theorem \ref{intro thm2 on g=1 of multiple fiber} seems to support our speculation of a GV\/Pairs correspondence in genus 1 for imprimitive curve classes (see \S \ref{sect on impri} for details).
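\n\nFor concreteness, the divisor-power sum appearing in Theorem \ref{intro thm2 on g=1 of multiple fiber} begins\n\begin{align*}\n\sum_{m\geqslant 1}\sum_{n | m}n^2q^m=q+5q^2+10q^3+21q^4+26q^5+50q^6+\cdots,\n\end{align*}\nso $P^t_{0,r[f]}(\gamma)=24\left(\int_{S\times {\mathsf{p}}}\gamma\right)\sum_{n|r}n^2$ for every $r\geqslant 1$.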
\n\n\subsection{A conjectural virtual pushforward formula} \nFinally we remark that for a general holomorphic symplectic 4-fold $X$ and an irreducible curve class $\beta\in H_2(X,\mathbb{Z})$, \nwe have a forgetful map as in \eqref{intro fort map}:\n\begin{equation*}P^t_{n}(X,\beta)\to M_n(X,\beta), \end{equation*}\nwhere $M_n(X,\beta)$ is the coarse moduli space of one dimensional stable sheaves $F$ on $X$ with $[F]=\beta$, $\chi(F)=n$.\nIn Appendix \S \ref{sect on app}, we conjecture a virtual pushforward formula for this map (which we verify for the product of $K3$ surfaces, see Proposition \ref{prop on prod of k3 app}). Together with Conjecture \ref{intro conj on DT4\/GV} (4), this formula implies a conjectural relation between genus 2 Gopakumar-Vafa invariants and certain descendent invariants on \n$M_1(X,\beta)$ (Proposition \ref{prop on appe}), which appears as \cite[Conj.~2.2 (iii)]{COT1}.\n\n\subsection{Notation and convention}\nIn this paper, all varieties and schemes are defined over $\mathbb{C}$. \nFor a morphism $\pi \colon X \to Y$ of schemes, \nand for $\mathcal{F}, \mathcal{G} \in \mathrm{D^{b}(Coh(\textit{X\,}))}$, we denote by \n$\mathbf{R} \mathcal{H} om_{\pi}(\mathcal{F}, \mathcal{G})$ \nthe functor $\mathbf{R} \pi_{\ast} \mathbf{R} \mathcal{H} om_X(\mathcal{F}, \mathcal{G})$. \n\nA class $\beta\in H_2(X,\mathbb{Z})$ is called \textit{effective} if there exists a non-empty curve $C \subset X$ with class $[C] = \beta$. An effective class $\beta$ is called \textit{irreducible} if it is not the sum of two effective classes, and it is called \textit{primitive} if it is not a positive integer multiple of an effective class.\n\nA holomorphic-symplectic variety is a smooth projective variety $X$\nwith a non-degenerate holomorphic two form $\sigma\in H^0(X,\Omega^2_X)$. \nA holomorphic-symplectic variety is irreducible \textit{hyperk\"ahler}\nif $X$ is simply connected and $H^0(X, \Omega_X^2)$ is generated by a symplectic form.\nA $K3$ surface is an (irreducible) hyperk\"ahler variety of dimension $2$.\n\n\n\n\subsection*{Acknowledgement} \nWe thank Luca Battistella, Chen Jiang, Young-Hoon Kiem, Sergej Monavari, Rahul Pandharipande and Hyeonjun Park for helpful discussions.\n\nY. C. is partially supported by RIKEN Interdisciplinary Theoretical and Mathematical Sciences\nProgram (iTHEMS), World Premier International Research Center Initiative (WPI), MEXT, Japan, \nJSPS KAKENHI Grant Number JP19K23397 and Royal Society Newton International Fellowships Alumni 2020 and 2021. \nG.O. is partially supported by Deutsche Forschungsgemeinschaft (DFG) - OB 512\/1-1. \nY. T. is partially supported by World Premier International Research Center Initiative (WPI initiative), MEXT, Japan, and\nGrant-in-Aid for Scientific Research (No.~19H01779) from MEXT, Japan.\n\n\n\n\n\n\section{Definitions and conjectures}\n\n\subsection{Gopakumar-Vafa invariants}\label{sect on gv}\n\nLet $X$ be a holomorphic symplectic 4-fold and $\overline{M}_{g, l}(X, \beta)$\nbe the moduli stack of genus $g$, $l$-pointed stable maps\nto $X$ with non-zero curve class $\beta$.
Its virtual class \cite{BF, LT} vanishes due to a trivial factor in the obstruction sheaf.\nBy Kiem-Li's theory of cosection localization \cite{KiL}, one can define a (reduced) virtual class\,\footnote{The virtual class mentioned in this paper\nis always assumed to be the reduced one.}\n$$[\overline{M}_{g, l}(X, \beta)]^{\mathrm{vir}}\in A_{2-g+l}(\overline{M}_{g, l}(X, \beta)). $$\nFor integral classes\n\begin{align}\label{gamma}\n\gamma_i \in H^{m_i}(X, \mathbb{Z}), \\n1\leqslant i\leqslant l,\n\end{align}\nthe (primary) Gromov-Witten invariant is defined by\n\begin{align}\label{GWinv}\n\mathrm{GW}_{g, \beta}(\gamma_1, \ldots, \gamma_l)\n=\int_{[\overline{M}_{g, l}(X, \beta)]^{\rm{vir}}}\n\prod_{i=1}^l \mathrm{ev}_i^{\ast}(\gamma_i)\in \mathbb{Q},\n\end{align}\nwhere $\mathrm{ev}_i \colon \overline{M}_{g,l}(X, \beta)\to X$\nis the $i$-th evaluation map.\n\nWhen $g=0$, the\nvirtual dimension of $\overline{M}_{0, l}(X, \beta)$\nis $l+2$, and (\ref{GWinv})\nis zero unless\n\begin{align}\label{sum1}\n\sum_{i=1}^{l}(m_i-2)=4.\n\end{align}\nSimilarly to the case of Calabi-Yau 4-folds and 5-folds \cite{KP, PZ}, we make the following definition: \n\begin{defi}\emph{(\cite[Def.~1.5]{COT1})}\label{def of g=0 GV inv}\nFor any $\gamma_1, \ldots, \gamma_l \in H^{\ast}(X,{\mathbb{Z}})$, \nwe define the genus $0$ Gopakumar-Vafa invariant $n_{0, \beta}(\gamma_1, \ldots, \gamma_l) \in {\mathbb{Q}}$ recursively by the multiple cover formula: \n$$\mathrm{GW}_{0, \beta}(\gamma_1, \ldots, \gamma_l)=\sum_{\begin{subarray}{c}k\geqslant 1, k|\beta \end{subarray}}k^{l-3}\, n_{0, \beta\/k}(\gamma_1, \ldots, \gamma_l). $$\n\end{defi}\nWhen $g=1$, the virtual dimension of\n$\overline{M}_{1, l}(X, \beta)$ is $l+1$, and (\ref{GWinv})\nis zero unless\n\begin{align}\label{sum2}\n\sum_{i=1}^{l}(m_i-2)=2.\n\end{align}\nIn this paper, we concentrate on the case when $l=1$ and $m_1=4$. \nBecause curves in imprimitive curve classes are very difficult to control,\nwe restrict ourselves to the case of a primitive curve class.\n\begin{defi}\emph{(\cite[Def.~1.6]{COT1})}\label{def of g=1 GV inv}\nAssume that $\beta \in H_2(X,{\mathbb{Z}})$ is primitive. For any $\gamma\in H^4(X, \mathbb{Z})$, we define the genus 1 Gopakumar-Vafa invariant $n_{1, \beta}(\gamma)\in \mathbb{Q}$ by\n$$\mathrm{GW}_{1, \beta}(\gamma)=n_{1,\beta}(\gamma) - \frac{1}{24} \mathrm{GW}_{0,\beta}(\gamma,c_2(X)), $$\nwhere $c_2(X)$ is the second Chern class of $T_X$. \n\end{defi}\nWhen $g=2$, the virtual dimension of\n$\overline{M}_{2, 0}(X, \beta)$ is zero, so we can consider (\ref{GWinv}) without insertions:\n\begin{align*}\n\mathrm{GW}_{2, \beta}:=\int_{[\overline{M}_{2, 0}(X, \beta)]^{\rm{vir}}}1\in \mathbb{Q}.\n\end{align*}\n\begin{defi}\emph{(\cite[Def.~1.7]{COT1})}\label{def of g=2 GV inv}\nAssume that $\beta \in H_2(X,{\mathbb{Z}})$ is primitive. We define the genus $2$ Gopakumar-Vafa invariant $n_{2,\beta}\in \mathbb{Q}$ by\n\[\mathrm{GW}_{2, \beta}=n_{2,\beta}\n- \frac{1}{24} n_{1,\beta}(c_2(X))\n+ \frac{1}{2 \cdot 24^2} \mathrm{GW}_{0, \beta}(c_2(X),c_2(X))\n+ \frac{1}{24} N_{\mathrm{nodal},\beta}.
\]\nHere $n_{1,\beta}(-)$ is given in Definition \ref{def of g=1 GV inv} and $N_{\mathrm{nodal},\beta}\in \mathbb{Q}$ is the virtual count of rational nodal curves \cite{NO} \nas defined by \n\begin{equation} \label{Nnodal}\nN_{\mathrm{nodal},\beta}:=\n\frac{1}{2}\left[\n\int_{[\overline{M}_{0,2}(X,\beta)]^{\mathrm{vir}}} (\mathop{\rm ev}\nolimits_1 \times \mathop{\rm ev}\nolimits_2)^{\ast}(\Delta_X) - \int_{[ \overline{M}_{0,1}(X,\beta) ]^{\mathrm{vir}}} \frac{\mathop{\rm ev}\nolimits_1^{\ast}(c(X))}{1-\psi_1}\n\right], \n\end{equation}\nwhere \n\begin{itemize}\n\item $\Delta_X \in H^8(X \times X)$ is the class of the diagonal, and\n\item $c(X) = 1 + c_2(X) + c_4(X)$ is the total Chern class of $T_X$.\n\end{itemize}\n \end{defi}\n\n\n\n\n\subsection{$Z_t$-stable pair invariants}\n\nLet $\omega$ be an ample divisor on $X$ and let $t\in\mathbb{R}$. We recall the following notion of $Z_t$-stable pairs.\n\begin{defi}\label{def Zt sta}\emph{(\cite[Lem.~1.7]{CT1})}\nLet $F$ be a one dimensional coherent sheaf and $s: \mathcal{O}_X\to F$ be a section. For an ample divisor $\omega$, we denote the slope function\nby $\mu(F)=\chi(F)\/(\omega \cdot [F])$.\n\nWe say $(F,s)$ is a $Z_t$-(semi)stable pair $($$t\in\mathbb{R}$$)$ if \n\begin{enumerate}\n\renewcommand{\labelenumi}{(\roman{enumi})}\n\item for any subsheaf $0\neq F' \subseteq F$, we have \n$\mu(F')<(\leqslant)t$,\n\item for any\nsubsheaf $ F' \subsetneq F$ \nsuch that $s$ factors through $F'$, \nwe have \n$\mu(F\/F')>(\geqslant)t$. \n\end{enumerate}\n\end{defi}\nThere are two distinguished stability conditions appearing as \nspecial cases of $Z_t$-stability. \n\begin{defi}\label{defi:PTJSpair}\emph{(\cite{PT}, \cite[Def.~1.10]{CT1})} \n\n(i) A pair $(F,s)$ is a PT stable pair if\n$F$ is a pure one dimensional sheaf and $s$ is surjective in dimension one. \n\n(ii) A pair $(F,s)$ is a JS stable pair if $s$ is a non-zero morphism, $F$ is $\mu$-semistable and \nfor any subsheaf $0\neq F' \subsetneq F$ such that $s$ factors through \n$F'$ we have $\mu(F')<\mu(F)$. \n\end{defi}\n\begin{prop}\label{prop:chambers}\emph{(\cite[Prop.~1.11]{CT1})} \nFor a pair $(F,s)$ with $[F]=\beta$ and $\chi(F)=n$, its\n\n(i) $Z_t$-stability with $t\to \infty$ is exactly PT stability, \n\n(ii) $Z_t$-stability with $t=\frac{n}{\omega\cdot \beta}+0^+$ is exactly JS stability. \n\end{prop}\nFor $\beta \in H_2(X, \mathbb{Z})$ and $n\in \mathbb{Z}$, we denote by\n$$P^t_n(X, \beta)\subseteq \mathcal{P}^t_n(X, \beta) $$\nthe moduli stack of $Z_t$-stable (semistable) pairs $(F,s)$ with $[F]=\beta$ and $\chi(F)=n$.\n\nBy Proposition \ref{prop:chambers}, there are two distinguished moduli spaces, \nthe PT moduli space and the JS moduli space, obtained \nby specializing $t\to \infty$ and $t=\frac{n}{\omega\cdot \beta}+0^+$ respectively:\n\begin{align*}\nP_n(X, \beta) \cneq P_n^{t\to \infty}(X, \beta), \quad\nP_n^{\mathrm{JS}}(X, \beta) \cneq \nP_n^{t=\frac{n}{\omega\cdot \beta}+0^+}(X, \beta).
\n\end{align*}\nBy a GIT construction, \n$P^t_n(X, \beta)$ is a quasi-projective scheme, and $\mathcal{P}^t_n(X, \beta)$ admits a good moduli space\n\begin{align*}\n\mathcal{P}^t_n(X, \beta) \to \overline{P}_n^t(X, \beta),\n\end{align*}\nwhere $\overline{P}_n^t(X, \beta)$ is a projective \nscheme which parametrizes $Z_t$-polystable objects.\nThe following result shows that moduli stacks of $Z_t$-stable pairs are indeed open substacks of moduli stacks of objects in the derived categories of coherent sheaves.\n\begin{thm}\label{existence of proj moduli space}\emph{(\cite[Thm.~0.1]{CT1})} \n$P^t_n(X, \beta)$ admits an open immersion \n$$P^t_n(X, \beta)\to \mathcal{M}_0, \quad (F,s)\mapsto (\mathcal{O}_X\stackrel{s}{\to} F) $$\nto the moduli stack $\mathcal{M}_0 $ of $E\in D^b\mathop{\rm Coh}\nolimits (X)$ with $\mathop{\rm Ext}\nolimits^{<0}(E,E)=0$ and $\det(E)\cong \mathcal{O}_X$.\n\end{thm}\nTherefore for a general choice of $t$ (i.e.~outside a finite subset of rational numbers in $\mathbb{R}$), $P^t_n(X, \beta)$ is a projective scheme which can\nbe given a $(-2)$-shifted symplectic derived scheme structure \cite{PTVV} and has a virtual class \cite{BJ, OT} (see also \cite{CL1}). \n\nParallel to GW theory, the virtual class of $P_n^t(X,\beta)$ vanishes \cite{KiP, Sav}. \nOne can define \na reduced virtual class due to Kiem-Park \cite[Def.~8.7, Lem.~9.4]{KiP}: \n\begin{align}\label{red vir class}[P_n^t(X,\beta)]^{\mathrm{vir}}\in A_{n+1}(P_n^t(X,\beta),\mathbb{Q}), \end{align}\ndepending on the choice of orientation \cite{CGJ, CL2}. \nTo define its counting invariants, let \n\begin{align}\label{equ on pri ins}\tau: H^{m}(X,\mathbb{Z})\to H^{m-2}(P_n^t(X,\beta),\mathbb{Z}), \end{align}\n\begin{align*}\tau(\gamma):=\pi_{P\ast}\left(\pi_X^{\ast}\gamma \cup\mathop{\rm ch}\nolimits_{3}(\mathbb{F})\right),\n\end{align*}\nwhere $\mathbb{I}=(\mathcal{O}\to \mathbb{F})$ is the universal $Z_t$-stable pair and $\pi_P, \pi_X$ are projections from $P_n^t(X,\beta)\times X$ onto its factors. \n\begin{defi}\label{def DT4 inv}\nLet $t\in \mathbb{R}$ be generic and $\gamma_i \in H^{m_i}(X, \mathbb{Z})$ $(1\leqslant i\leqslant l)$. The $Z_t$-stable pair invariants are \n\begin{align*} \nP_{n,\beta}^t(\gamma_1,\ldots,\gamma_l):=\int_{[P_n^t(X,\beta)]^{\rm{vir}}}\prod_{i=1}^l\tau(\gamma_i)\in\mathbb{Q}.\n\end{align*}\nWhen $n=-1$, we write \n$$P_{-1,\beta}^t:=\int_{[P_{-1}^t(X,\beta)]^{\rm{vir}}}1. $$\nIn PT and JS stabilities, we also write \n$$P_{n,\beta}(\gamma_1,\ldots,\gamma_l):=P_{n,\beta}^{t\to \infty}(\gamma_1,\ldots,\gamma_l), \,\,\, \nP^{\mathrm{JS}}_{n,\beta}(\gamma_1,\ldots,\gamma_l):=P_{n,\beta}^{t=\frac{n}{\omega\cdot \beta}+0^+\n}(\gamma_1,\ldots,\gamma_l). $$\n\end{defi}\n\begin{rmk}\nBy Definition \ref{def Zt sta} and a dimension counting, $Z_t$-stable pair invariants are non-zero only if \nboth of the following conditions hold: \n\begin{align*} \nt>\frac{n}{\omega\cdot \beta}, \quad \sum_{i=1}^{l}(m_i-2)=2n+2.\n\end{align*}\n\end{rmk}\n\nIn \cite{CMT2, CT1}, similar invariants are used to give sheaf theoretic interpretations of Gopakumar-Vafa type invariants for ordinary Calabi-Yau 4-folds \cite{KP}. \nBelow, we give a parallel proposal for holomorphic symplectic 4-folds using Definition \ref{def DT4 inv}. \n\n\n\subsection{Conjecture}\nWe state the main conjecture of this paper.
\n\\begin{conj}\\label{conj on DT4\/GV}\nLet $X$ be a holomorphic symplectic 4-fold with an ample divisor $\\omega$.\nFix $n\\in\\mathbb{Z}$ and $\\beta\\in H_2(X,\\mathbb{Z})$ and let $t>\\frac{n}{\\omega\\cdot \\beta}$ be generic. \nFor certain choice of orientation, we have \n\\begin{enumerate}\n\\item If $n\\geqslant 2$, then \n\\begin{align*} \nP_{n,\\beta}^t(\\gamma_1,\\ldots,\\gamma_l)=0.\n\\end{align*}\n\\item If $n=1$, then\n\\begin{align*} \nP_{1,\\beta}^t(\\gamma_1,\\ldots,\\gamma_l)=n_{0, \\beta}(\\gamma_1, \\ldots, \\gamma_l) \\in \\mathbb{Z}. \\end{align*}\n\\item If $n=0$ and $\\beta$ is primitive, then\n\\begin{align*} \nP_{0,\\beta}^t(\\gamma)=n_{1, \\beta}(\\gamma) \\in \\mathbb{Z}.\n\\end{align*}\n\\item If $n=-1$ and $\\beta$ is primitive, then \n\\begin{align*} \nP_{-1,\\beta}^t=n_{2,\\beta} \\in \\mathbb{Z}.\n\\end{align*}\n\\end{enumerate}\n\\end{conj}\n\\begin{rmk}\nBy the global Torelli theorem \\cite{Ver, Huy}, primitive curve classes on irreducible hyperk\\\"ahler varieties can be deformed to irreducible curve classes. Therefore $Z_t$-stable pair invariants \nare independent of the choice of $t>\\frac{n}{\\omega\\cdot \\beta}$ for such cases by \\cite[Prop.~1.12]{CT1}.\n\\end{rmk}\n\\begin{rmk}\nOur conjecture implies that there is no nontrivial wall-crossing for $Z_t$-stable pairs invariants when $t>\\frac{n}{\\omega\\cdot \\beta}$, contrary to the \nordinary $\\mathrm{CY_4}$ case \\cite{CT1, CT3, CT4}.\n\\end{rmk}\n\\begin{rmk}\nSimilarly to \\cite[Conj.~0.3]{CK2}, we may use counting invariants on Hilbert schemes $I_n(X,\\beta)$ of curves to give \na sheaf theoretic interpretation of Gopakumar-Vafa invariants in which case zero dimensional subschemes \\cite{CK1} (conjecturally) will not contribute, i.e. $``\\mathop{\\rm DT}\\nolimits=\\mathop{\\rm PT}\\nolimits\"$. \nIt is curious whether one can do a $K$-theoretic refinement as \\cite{CKM1}. \n\\end{rmk}\n\n\\subsection{Heuristic argument}\\label{sect on heur}\nIn this section, we verify Conjecture \\ref{conj on DT4\/GV} using heuristic argument in an ideal geometry (ref.~\\cite[\\S 1.4, \\S 1.5]{COT1}).\nTo be specific, as the virtual dimension of $\\overline{M}_{g,0}(X,\\beta)$ is $2-g$, we assume that:\n\\begin{quote}\nAny genus $g$ curve moves in a smooth compact $(2-g)$-dimensional family.\n\\end{quote}\nIn particular, there are no curves of genus $g \\geqslant 3$.\nUnfortunately, complicated phenomena still arise even in the ideal case, for example, one can have two (resp.~one) dimensional \nfamilies of reducible rational (resp.~elliptic) curves, and any member of a rational curve family is expected to intersect nontrivially with\nsome member in the same family (see \\cite[\\S 1.4]{COT1} for details). 
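\n\nFor the reader's convenience, we recall where the number $2-g$ comes from: since $c_1(T_X)=0$ for a holomorphic symplectic 4-fold, the standard Gromov-Witten dimension formula gives\n\begin{align*}\n(\dim_{\mathbb{C}}X-3)(1-g)+\int_{\beta}c_1(T_X)=1-g\n\end{align*}\nfor the unreduced theory without markings, and passing to the reduced class raises the virtual dimension by one, which yields $2-g$ (consistent with $[\overline{M}_{g, l}(X, \beta)]^{\mathrm{vir}}\in A_{2-g+l}(\overline{M}_{g, l}(X, \beta))$ recalled in \S \ref{sect on gv}).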
\n\nHowever, things will be simplified if we make the following additional assumptions:\n\begin{itemize}\n\item $X$ is irreducible hyperk\"ahler,\n\item the effective curve class $\beta \in H_2(X,{\mathbb{Z}})$ is primitive.\n\end{itemize}\nBy the global Torelli for (irreducible) hyperk\"ahler varieties \cite{Ver, Huy},\nthe pair $(X,\beta)$ is deformation equivalent (through a deformation which keeps $\beta$ of Hodge type)\nto a pair $(X', \beta')$, where $\beta' \in H_2(X',{\mathbb{Z}})$ is irreducible, so we may without loss of generality assume:\n\begin{itemize}\n\item the effective curve class $\beta \in H_2(X,{\mathbb{Z}})$ is irreducible.\n\end{itemize}\nUnder these assumptions, our ideal geometry of curves simplifies to the following form:\n\begin{enumerate}\n\item\nThe rational curves in $X$ of class $\beta$\nmove in a proper 2-dimensional smooth family of embedded irreducible rational curves. Except for a finite number of rational nodal curves, the rational curves are smooth, with normal bundle ${\mathcal O}_{{\mathbb{P}}^1} \oplus {\mathcal O}_{{\mathbb{P}}^1} \oplus \mathcal{O}_{\mathbb{P}^{1}}(-2)$. \n\item\nThe arithmetic genus $1$ curves in $X$ of class $\beta$ move in a proper 1-dimensional smooth family of embedded irreducible genus 1 curves. Except for a finite number of rational nodal curves, the genus one curves are smooth elliptic curves with normal bundle $L\oplus L^{-1}\oplus \mathcal{O}$, where $L$ is a generic degree zero line bundle.\n\item\nAll genus two curves are smooth and rigid.\n\item\nThere are no curves of genus $g\geqslant 3$.\n\end{enumerate}\nWe need to compute $Z_t$-stable pair invariants in this ideal setting. \nThe key heuristic we use is that only $Z_t$-stable pairs with \textit{connected support} will `contribute' to our invariants. \n\nThe observation is that for a $Z_t$-stable pair $I=(\mathcal{O}_X\to F)$ which is supported on a disconnected curve $C=C_1\sqcup C_2$, we may write \n$$I= I_1 \oplus I_2, \quad I_1=(\mathcal{O}_X\to F_1), \,\, I_2=(\mathcal{O}_X\to F_2), $$\nwhere $I_i$ is supported on $C_i$ ($i=1,2$). \nThen the obstruction space \nsatisfies \n$$\mathop{\rm Ext}\nolimits^2(I,I)_0=\mathop{\rm Ext}\nolimits^2(I_1,I_1)_0\oplus \mathop{\rm Ext}\nolimits^2(I_2,I_2)_0. $$\nTherefore the surjective isotropic cosections (see~\cite[Lem.~9.4]{KiP}) of the obstruction spaces in the RHS give rise to a (mutually orthogonal) two dimensional space of isotropic cosections of the LHS. Heuristically speaking, such $Z_t$-stable pairs will not `contribute' to the reduced virtual class as the reduced obstruction space still has a surjective isotropic cosection.\n\nBy Definition \ref{def DT4 inv} and the above discussion, $Z_t$-stable pair invariants\n\begin{align*} \nP_{n,\beta}^t(\gamma_1,\ldots,\gamma_l)=\int_{[P_n^t(X,\beta)]^{\rm{vir}}}\prod_{i=1}^l\tau(\gamma_i)\n\end{align*}\ncount $Z_t$-stable pairs whose support is connected and incident to cycles dual to $\gamma_1,\ldots, \gamma_l$. \nSay such an incident $Z_t$-stable pair is supported on a $(2-g)$-dimensional family:\n$$p:\mathcal{C}^g_\beta\to S^g_\beta $$ \nof genus $g$ curves ($g=0,1,2$), where $\mathcal{C}^g_\beta$ is the total space of this family.\nEach cycle $\gamma_i$ will cut down the real dimension of $S^g_\beta$ by $\deg(\gamma_i)-2$. Since \n$$\sum_{i=1}^{l}(\deg(\gamma_i)-2)=2n+2, $$\nthe insertions in total cut down the real dimension of $S^g_\beta$ by $2n+2$.
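\nIn other words, the total cut matches the real dimension $2(2-g)$ of the base $S^g_\beta$ exactly when\n\begin{align*}\n2n+2=2(2-g) \ \Longleftrightarrow\ n=1-g,\n\end{align*}\nso the genus $g=1-n$ family is the one whose dimension is precisely matched by the insertions; the case-by-case analysis below confirms that it is indeed the only contributing one.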
\n\n${}$ \\\n\textbf{The case $n\geqslant 1$}. When $n\geqslant 2$, the dimension cut down by insertions is bigger than the largest possible dimension of $S^g_\beta$, so there cannot be such incident stable pairs and \n$$P_{n\geqslant 2,\beta}^t(\gamma_1,\ldots,\gamma_l)=0. $$ \nThis confirms Conjecture \ref{conj on DT4\/GV} (1). \n\nWhen $n=1$, insertions cut down the real dimension of $S^g_\beta$ by $4$, so any incident $Z_t$-stable pair $I=(\mathcal{O}_X\to F)$ can only be supported on the genus 0 family. \nAs in \cite[\S4.1]{CT1}, by Harder-Narasimhan and Jordan-H\"older filtrations, we know \n$$F\cong \mathcal{O}_C, $$\nfor some rational curve $C$ in $S^0_{\beta}$. Therefore incident $Z_t$-stable pairs (with $\chi=1$) are in one to one correspondence with \nintersection points of $\mathcal{C}^0_{\beta}$ with cycles dual to $\gamma_1,\ldots,\gamma_l$ and \n\begin{align*} \nP_{1,\beta}^t(\gamma_1,\ldots,\gamma_l)=\int_{S^0_\beta}\prod_{i=1}^lp_*(f^*\gamma_i), \end{align*}\nwhere $f: \mathcal{C}^0_{\beta}\to X$ is the evaluation map.\nTherefore Conjecture \ref{conj on DT4\/GV} (2) is confirmed in this ideal case as both sides are (virtually) enumerating rational curves of class \n$\beta$ incident to cycles dual to $\gamma_1,\ldots,\gamma_l$.\n\n${}$ \\\n\textbf{The case $n=0$}. Since $Z_t$-stable pairs $I=(\mathcal{O}_X\to F)$ supported on genus $0$ curves satisfy $\chi(F)>0$ and \na 4-cycle $\gamma\in H^4(X)$ misses genus 2 curves in general position, when $[F]=\beta$ is irreducible the pair \nmust be scheme theoretically supported on an elliptic curve $C$ and \n$$I=(\mathcal{O}_X\twoheadrightarrow \mathcal{O}_C\stackrel{s}{\to} L), $$\nwhere $L$ is a line bundle on $C$ with $\chi(C,L)=0$. By $Z_t$-stability, $s$ is non-trivial, so $s$ must \nbe an isomorphism by the stability of line bundles. Therefore incident $Z_t$-stable pairs (with $\chi=0$) are in one to one correspondence with \nintersection points of the 4-cycle $\gamma$ with the genus 1 curve family $\mathcal{C}^1_\beta$ of class $\beta$ and \n\begin{align*} \nP_{0,\beta}^t(\gamma)=\int_{\mathcal{C}^1_\beta}f^*\gamma, \end{align*}\nwhere $f: \mathcal{C}^1_{\beta}\to X$ is the evaluation map. \nTherefore Conjecture \ref{conj on DT4\/GV} (3) is confirmed in this ideal setting as both sides are (virtually) enumerating elliptic curves of class \n$\beta$ incident to $\gamma$.\n\n${}$ \\\n\textbf{The case $n=-1$}. Any $Z_t$-stable pair $I=(\mathcal{O}_X\to F)$ with irreducible curve class $[F]=\beta$ \nis scheme theoretically supported on a smooth rigid genus 2 curve $C$: \n$$I=(\mathcal{O}_X\twoheadrightarrow \mathcal{O}_C\stackrel{s}{\to} L), $$\nwhere $L$ is a line bundle on $C$ with $\chi(C,L)=-1$. As above, by $Z_t$-stability, $s$ must \nbe an isomorphism. Hence $P_{-1}^t(X,\beta)$ is identified with \nthe set of all rigid genus 2 curves in class $\beta$ in the ideal geometry, whose count gives exactly the genus 2 Gopakumar-Vafa invariant $n_{2,\beta}$. \nTherefore Conjecture \ref{conj on DT4\/GV} (4) is confirmed in the ideal setting.\n\n\n\subsection{Speculations for general curve classes}\label{sect on impri}\nFor a smooth projective Calabi-Yau 4-fold $X$ and $\gamma\in H^4(X,\mathbb{Z})$, \nwe have genus $0$, $1$ Gopakumar-Vafa type invariants $n_{0,\beta}(\gamma), n_{1,\beta}\in\mathbb{Q}$ defined \nby Klemm and Pandharipande\nfrom Gromov-Witten theory \cite{KP} and stable pair invariants $P_{n,\beta}(\gamma)\in \mathbb{Z}$ \cite{CMT2}.
\nThey are related by the following conjectural formula \\cite[\\S 1.7]{CMT2}: \n\\begin{align}\\label{equ on pt\/gv on cy4}\n\\sum_{n,\\beta}\\frac{P_{n,\\beta}(\\gamma)}{n!}y^n q^{\\beta}\n=\\prod_{\\beta>0}\\Big(\\exp(yq^{\\beta})^{n_{0,\\beta}(\\gamma)}\\cdot \nM(q^{\\beta})^{n_{1,\\beta}}\\Big), \n\\end{align}\nwhere $M(q)=\\prod_{k\\geqslant 1}(1-q^{k})^{-k}$ is the MacMahon function. \n\nBy taking logarithmic differentiation with respect to $y$, we obtain \n\\begin{align*}\ny\\frac{d}{dy}\\log\\left(\\sum_{n,\\beta}\\frac{P_{n,\\beta}(\\gamma)}{n!}y^n q^{\\beta}\\right)=\\sum_{\\beta>0}y\\frac{d}{dy}n_{0,\\beta}(\\gamma)yq^\\beta =\\sum_{\\beta>0}n_{0,\\beta}(\\gamma)yq^\\beta.\n\\end{align*}\nIf we view it as an equality for corresponding reduced invariants on holomorphic symplectic 4-folds, \nit surprisingly recovers Conjecture \\ref{conj on DT4\/GV} (i),~(ii) (i.e. the genus zero part). \n\nWe do similar manipulations for genus one invariants. Note that $y^0q^\\beta$ parts of \\eqref{equ on pt\/gv on cy4} are \n$$\\sum_{\\beta}P_{0,\\beta}q^\\beta=\\prod_{\\beta>0}M(q^{\\beta})^{n_{1,\\beta}}. $$\nThis equality is written down by a computation in the ``$\\mathrm{CY_4}$ ideal geometry\" (ref.~\\cite[\\S 2.5]{CMT2}), where rational curves contribute zero and each super-rigid elliptic curve \n(on an ideal $\\mathrm{CY_4}$)\nin class $\\beta$ contributes by $M(q^\\beta)$ (ref.~\\cite[Thm.~5.10]{CMT2}). \nTaking logarithmic differentiation with respect to $q$:\n\\begin{align*}q\\frac{d}{dq}\\log\\left(M(q) \\right)=\\sum_{d\\geqslant 1}q^d\\sum_{i\\geqslant1,i|d}i^2. \\end{align*}\nWe then wonder whether in the holomorphic symplectic 4-folds setting, each ideal elliptic curve family in class $\\beta$ contributes to $P_{0,d\\beta}(\\gamma)$ by \n$$\\sum_{i\\geqslant1,i|d}i^2. $$\nSumming over all elliptic curve families, this would imply\n\\begin{align}\\label{general pt\/gv on hk4}P_{0,\\beta}(\\gamma)=\\sum_{d\\geqslant1,d|\\beta}n_{1,\\beta\/d}(\\gamma) \\sum_{i\\geqslant1,i|d}i^2. \\end{align}\nIt is quite curious whether the above formula gives the correct PT\/GV correspondence. For multiple fiber classes \nof elliptic fibrations, our computations show the formula seems correct (see Theorem \\ref{thm1 on g=1 of multiple fiber}, \\ref{thm2 on g=1 of multiple fiber}, Remark \\ref{rmk on impr}). As for $P_{-1,\\beta}$ and genus 2 Gopakumar-Vafa invariants, we haven't found analogous \nformula for general curve classes. \n\n\n\n\\section{Product of $K3$ surfaces}\nIn this section, we consider the product of two $K3$ surfaces: \n$$X=S\\times T, \\quad \\mathrm{with} \\,\\, \\beta\\in H_2(S,\\mathbb{Z})\\subseteq H_2(X,\\mathbb{Z}). $$ \nAs observed in \\cite[\\S 5]{COT1}, this contains all interesting curve classes on $X$ because if $\\beta \\in H_2(X, {\\mathbb{Z}})$\nis of non-trivial degree over both $S$ and $T$, one can construct two linearly independent cosections,\nwhich imply that reduced Gromov-Witten invariants of $X$ in this class vanish.\n\n\\subsection{Gopakumar-Vafa invariants}\nRecall Gopakumar-Vafa invariants specified in Definitions \\ref{def of g=0 GV inv}, \\ref{def of g=1 GV inv}, \\ref{def of g=2 GV inv}. 
\nThey are computed in \cite[Prop.~5.1]{COT1} as follows: \nwrite $\gamma,\gamma'\in H^{4}(X)$ as \n\begin{align*}\n \gamma&=A_1\cdot 1\otimes {\mathsf{p}}+D_1\otimes D_2+A_2\cdot {\mathsf{p}}\otimes 1, \\\n\gamma'&=A'_1\cdot 1\otimes {\mathsf{p}}+D'_1\otimes D'_2+A'_2\cdot {\mathsf{p}}\otimes 1, \n\end{align*}\nbased on the K\"unneth decomposition:\n$$H^{4}(X)\cong (H^0(S)\otimes H^4(T))\oplus (H^2(S)\otimes H^2(T))\oplus (H^4(S)\otimes H^0(T)). $$\nFix also a curve class\n$$\alpha=\theta_1\otimes {\mathsf{p}}+{\mathsf{p}}\otimes \theta_2\in H^6(X) \cong (H^2(S)\otimes H^4(T))\oplus (H^4(S)\otimes H^2(T)).$$\n\begin{prop}\emph{(\cite[Prop.~5.1]{COT1})}\label{prop on gw on prod}\nFor $\beta\in H_2(S,\mathbb{Z})\subseteq H_2(X,\mathbb{Z})$, we have \n\begin{align*}\nn_{0,\beta}(\gamma, \gamma') &=(D_1\cdot\beta)\cdot (D_1'\cdot\beta)\cdot\int_T(D_2\cdot D_2')\cdot N_{0}\left(\frac{\beta^2}{2}\right), \\\nn_{0,\beta}(\alpha)&=(\theta_1\cdot \beta)\,N_{0}\left(\frac{\beta^2}{2}\right). \n\end{align*}\nIf $\beta$ is primitive, we have\n\begin{align*}\nn_{1, \beta}(\gamma)= 24 A_2\, N_1\left(\frac{\beta^2}{2}\right), \quad n_{2,\beta}= N_2\left( \frac{\beta^2}{2} \right), \n\end{align*}\nwhere\n\begin{align}\sum_{l\in\mathbb{Z}}N_{0}(l)\, q^l&=\frac{1}{q} \prod_{n\geqslant 1}\frac{1}{(1-q^n)^{24}}, \label{equ on N0} \\\n\sum_{l \in {\mathbb{Z}}} N_{1}(l)\,q^l &=\left(\frac{1}{q} \prod_{n\geqslant 1}\frac{1}{(1-q^n)^{24}}\right)\left(q \frac{d}{dq}G_2(q)\right), \label{equ on N1} \\\n\sum_{l\in\mathbb{Z}}N_{2}(l)\, q^l&=\left(\frac{1}{q} \prod_{n\geqslant 1}\frac{1}{(1-q^n)^{24}}\right) \left( 24 q \frac{d}{dq} G_2 - 24 G_2 - 1 \right), \label{equ on N2}\end{align}\nwith the Eisenstein series: \n$$G_2(q) = -\frac{1}{24} + \sum_{n \geqslant 1} \sum_{d|n} d q^n. $$\n\end{prop}\n\n\n\n\subsection{Moduli spaces of $Z_t$-stable pairs}\nFor a point $t\in T$, let $i_t \colon S\to S\times \{t\}\hookrightarrow X$ be the inclusion. \nConsider the pushforward map \n\begin{align}\label{equ psf map}i_*:P^t_{n}(S,\beta)\times T\to P^t_{n}(X,\beta), \end{align}\n\begin{align*}(\mathcal{O}_S\stackrel{s}{\to} F,\,t)\mapsto (\mathcal{O}_X\twoheadrightarrow i_{t*}\mathcal{O}_{S}\stackrel{i_{t*}s}{\to} i_{t*}F), \end{align*}\nwhere $P^t_n(S,\beta)$ is the moduli space of $Z_t$-stable pairs $(F,s)$ on $S$ with $[F]=\beta$ and $\chi(F)=n$.\n\nWe restrict to the following setting. \n\begin{set}\label{setting} \nWe consider the case when the following conditions are satisfied:\n\begin{enumerate}\n\item The map \eqref{equ psf map} is an isomorphism and $P^t_{n}(S,\beta)$ is smooth of dimension $\beta^2+n+1$.\n\item There is a well-defined forgetful map \n$$f: P^t_{n}(S,\beta)\to M_n(S,\beta), \quad (\mathcal{O}_S\to F)\mapsto F, $$\nto the coarse moduli scheme $M_n(S,\beta)$ of one dimensional stable sheaves $F$ on $S$ with $[F]=\beta$ and $\chi(F)=n$.\n\end{enumerate}\n\end{set}\n\begin{prop}\label{prop on smoothness}\nSetting \ref{setting} is satisfied when $\beta$ is irreducible. \n\end{prop}\n\begin{proof}\nWhen $\beta$ is irreducible, $P^t_{n}(X,\beta)$ is independent of the choice of $t>\frac{n}{\omega\cdot \beta}$ \cite[Prop.~1.12]{CT1},\nso we can set $t\to \infty$ and work with PT stability.
\nThe isomorphism follows from similar argument as \\cite[Prop.~3.11]{CMT2}.\nThe key point is that for any such $Z_t$-stable pair $(F,s)$, $F$ is stable and therefore scheme theoretically supported on $S\\times \\{t\\}$ for some \n$t\\in T$ (\\cite[Lem.~2.2]{CMT1}). \nThe smoothness of $P^t_{n}(S,\\beta)$ follows from \\cite{KY}, \\cite[Prop.~C.2]{PT2}.\n\\end{proof}\n\n\\subsection{Virtual classes}\nWe determine the virtual class of $P^t_{n}(X,\\beta)$ in Setting \\ref{setting}. Firstly recall:\n\\begin{defi}$($\\cite[Ex.~16.52,~pp.~410]{Sw}, \\cite[Lem.~5]{EG}$)$\nLet $E$ be a $\\mathrm{SO}(2n,\\mathbb{C})$-bundle with a non-degenerate symmetric bilinear form $Q$ on a connected scheme $M$. \nDenote $E_+$ to be its positive real form\\,\\footnote{This means a real half dimensional subbundle such that $Q$ is real and positive definite on it. By\nhomotopy equivalence $\\mathrm{SO}(m,\\mathbb{C})\\sim \\mathrm{SO}(m,\\mathbb{R})$, it exists and is unique up to isomorphisms.}.\nThe half Euler class of $(E,Q)$ is \n$$e^{\\frac{1}{2}}(E,Q):=\\pm\\,e(E_+)\\in H^{2n}(M,\\mathbb{Z}), $$\nwhere the sign depends on the choice of orientation of $E_+$. \n\\end{defi}\n\n\\begin{defi}$($\\cite{EG},~\\cite[Def.~8.7]{KiP}$)$\nLet $E$ be a $\\mathrm{SO}(2n,\\mathbb{C})$-bundle with a non-degenerate symmetric bilinear form $Q$ on a connected scheme $M$. \nAn isotropic cosection of $(E,Q)$ is a map \n$$\\phi: E\\to \\mathcal{O}_M, $$\nsuch that the composition \n$$\\phi\\circ \\phi^{\\vee}: \\mathcal{O}_M\\to E^{\\vee}\\stackrel{Q}{\\cong} E \\to \\mathcal{O}_M$$\nis zero. If $\\phi$ is furthermore surjective, we define the (reduced) half Euler class: \n$$e_{\\mathrm{red}}^{\\frac{1}{2}}(E,Q):=e^{\\frac{1}{2}}\\left((\\phi^{\\vee}\\mathcal{O}_M)^{\\perp}\/(\\phi^{\\vee}\\mathcal{O}_M),\\bar{Q}\\right)\\in H^{2n-2}(M,\\mathbb{Z}), $$\nas the half Euler class of the isotropic reduction.\nHere $\\bar{Q}$ denotes the induced non-degenerate symmetric bilinear form on $(\\phi^{\\vee}\\mathcal{O}_M)^{\\perp}\/(\\phi^{\\vee}\\mathcal{O}_M)$. \n\\end{defi}\nWe show reduced half Euler classes are independent of the choice of surjective isotropic cosection. \n\\begin{lem}$($\\cite[Lem.~5.5]{COT1}$)$\\label{lem on indep of cosec}\nLet $E$ be a $\\mathrm{SO}(2n,\\mathbb{C})$-bundle with a non-degenerate symmetric bilinear form $Q$ on a connected scheme $M$ and \n$$\\phi: E\\to \\mathcal{O}_M $$\nbe a surjective isotropic cosection.\nThen we can write the positive real form $E_+$ of $E$ as \n$$E_+=\\mathcal{E}_+\\oplus \\underline{\\mathbb{R}}^2$$\nsuch that \n$$e_{\\mathrm{red}}^{\\frac{1}{2}}(E,Q)=\\pm\\,e(\\mathcal{E}_+). $$\nMoreover, it is independent of the choice of surjective cosection. \n\nIn particular, when $E=\\mathcal{O}^{\\oplus2} \\oplus V$ such that $Q=\\begin{pmatrix}\n0 & 1 \\\\\n1 & 0\n\\end{pmatrix} \\oplus Q|_{V}$, we have \n$$e_{\\mathrm{red}}^{\\frac{1}{2}}(E,Q)=\\pm\\,e^{\\frac{1}{2}}(V,Q|_{V}).$$\n\\end{lem}\nRecall a $\\mathrm{Sp}(2r,\\mathbb{C})$-bundle (or symplectic vector bundle) is a complex vector bundle of rank $2r$ \nwith a non-degenerate anti-symmetric bilinear form.\nOne class of quadratic vector bundles is given by tensor product of two symplectic vector bundles $V_1, V_2$. \nTheir half Euler classes\ncan be computed using Chern classes of $V_1,V_2$. 
\nFor our purpose, we restrict to the following case.\n\\begin{lem}$($\\cite[Lem.~5.6]{COT1}$)$\\label{lem on compu of half euler class}\nLet $(V_1,\\omega_1)$, $(V_2,\\omega_2)$ be a $\\mathrm{Sp}(2r,\\mathbb{C})$ $($resp.~$\\mathrm{Sp}(2,\\mathbb{C})$-bundle$)$ on a connected scheme $M$. \nThen\n$$(V_1\\otimes V_2,\\omega_1\\otimes \\omega_2)$$ \ndefines a $SO(4r,\\mathbb{C})$-bundle whose half Euler class satisfies \n$$e^{\\frac{1}{2}}(V_1\\otimes V_2,\\omega_1\\otimes \\omega_2)=\\pm\\,\\big(e(V_1)-c_{2r-2}(V_1)\\cdot e(V_2)\\big). $$\n\\end{lem}\nWe determine the (reduced) virtual class of $P^t_{n}(X,\\beta)$. \n\\begin{thm}\\label{thm on vir clas}\nIn Setting \\ref{setting}, for certain choice of orientation, we have \n\\begin{equation}\\label{vir class StimesT}\n[P^t_{n}(X,\\beta)]^{\\mathrm{vir}}=\n\\left([P^t_{n}(S,\\beta)]\\cap f^*e(T_{M_n(S,\\beta)})\\right)\\times[T]-e(T)\\left([P^t_{n}(S,\\beta)]\\cap f^*c_{\\beta^2}(T_{M_n(S,\\beta)})\\right), \n\\end{equation}\nwhere $f: P^t_{n}(S,\\beta)\\to M_{n}(S,\\beta)$ is the map as in Setting \\ref{setting}. \n\\end{thm}\n\\begin{proof}\nThe proof is similar as \\cite[Prop.~4.7]{CMT2}. \nUnder the isomorphism \\eqref{equ psf map}: \n\\begin{align*}P^t_{n}(S,\\beta)\\times T\\cong P^t_{n}(X,\\beta), \\end{align*}\nthe universal stable pair $\\mathbb{I}_X=(\\mathcal{O}\\to \\mathbb{F}_X)$ of $P^t_{n}(X,\\beta)$ satisfies \n\\begin{align}\\label{equ of univ sheaf on prod}\\mathbb{F}_X=\\mathbb{F}_S\\boxtimes \\mathcal{O}_{\\Delta_T}, \\end{align}\nwhere $\\mathbb{I}_S=(\\mathcal{O}\\to \\mathbb{F}_S)$ is the universal stable pair of $P^t_{n}(S,\\beta)$ and $\\Delta_T$ denotes the diagonal.\n\nAs in \\cite[Eqn.~(29)]{CMT2}, we have a distinguished triangle\n\\begin{align}\\label{equ on dist tri1}\\mathbf{R}\\mathcal{H} om_{\\pi_{P_X}}(\\mathbb{I}_X,\\mathbb{F}_X)\n\\to \\mathbf{R}\\mathcal{H} om_{\\pi_{P_X}}(\\mathbb{I}_X,\\mathbb{I}_X)_0[1]\\to \\mathbf{R}\\mathcal{H} om_{\\pi_{P_X}}(\\mathbb{F}_X,\\mathcal{O})[2], \\end{align}\nwhere $\\pi_{P_X}: P^t_{n}(X,\\beta)\\times X\\to P^t_{n}(X,\\beta)$ is the projection. \n\nFrom stable pair $\\mathbb{I}_X=(\\mathcal{O}\\to \\mathbb{F}_X)$ and Eqn.~\\eqref{equ of univ sheaf on prod}, we get a distinguished triangle\n\\begin{align}\\label{equ on dist tri2}\\mathbf{R}\\mathcal{H} om_{\\pi_{P_X}}(\\mathbb{F}_S\\boxtimes \\mathcal{O}_{\\Delta_T},\\mathbb{F}_S\\boxtimes \\mathcal{O}_{\\Delta_T})\n\\to \\mathbf{R}\\mathcal{H} om_{\\pi_{P_X}}(\\mathcal{O},\\mathbb{F}_S\\boxtimes \\mathcal{O}_{\\Delta_T})\\to \\mathbf{R}\\mathcal{H} om_{\\pi_{P_X}}(\\mathbb{I}_X,\\mathbb{F}_X). \\end{align}\nBy adjunction, we get an isomorphism \n\\begin{align}\\label{equ on iso of rhom}\\mathbf{R}\\mathcal{H} om_{\\pi_{P_X}}(\\mathbb{F}_S\\boxtimes \\mathcal{O}_{\\Delta_T},\\mathbb{F}_S\\boxtimes \\mathcal{O}_{\\Delta_T})\n\\cong \\mathbf{R}\\mathcal{H} om_{\\pi_{P_S}}(\\mathbb{F}_S,\\mathbb{F}_S)\\boxtimes \\wedge^iT_T[-i],\n\\end{align}\nwhere $\\pi_{P_S}\\colon P^t_{n}(S,\\beta)\\times S\\to P^t_{n}(S,\\beta)$ is the projection. 
\n\nCombining \eqref{equ on dist tri2} and \eqref{equ on iso of rhom}, we obtain \n\begin{align}\label{equ on iso on rhomIf}\n\mathbf{R}\mathcal{H} om_{\pi_{P_X}}(\mathbb{I}_X,\mathbb{F}_X)\cong \mathbf{R}\mathcal{H} om_{\pi_{P_S}}(\mathbb{I}_S,\mathbb{F}_S)\oplus \n\mathbf{R}\mathcal{H} om_{\pi_{P_S}}(\mathbb{F}_S,\mathbb{F}_S)\boxtimes (T_T\oplus \mathcal{O}_T[-1]).\n\end{align}\nCombining \eqref{equ on dist tri1} and \eqref{equ on iso on rhomIf}, we obtain \n$$\mathcal{E} xt^1_{\pi_{P_X}}(\mathbb{I}_X,\mathbb{I}_X)_0\cong \mathcal{E} xt^0_{\pi_{P_S}}(\mathbb{I}_S,\mathbb{F}_S)\oplus T_T, $$\nand an exact sequence \n\begin{align}\label{equ on exa seq}\n0\to \mathcal{E} xt^1_{\pi_{P_S}}(\mathbb{I}_S,\mathbb{F}_S)\oplus \mathcal{E} xt^1_{\pi_{P_S}}(\mathbb{F}_S,\mathbb{F}_S)\boxtimes T_T\oplus \n\mathcal{E} xt^0_{\pi_{P_S}}(\mathbb{F}_S,\mathbb{F}_S)\boxtimes\mathcal{O}_T \to \n\mathcal{E} xt^2_{\pi_{P_X}}(\mathbb{I}_X,\mathbb{I}_X)_0 \to \cdots. \end{align}\nWe claim that the second arrow above is an isomorphism, which can be checked by a dimension count. \nIn fact, for $I=(\mathcal{O}_S\to F)\in P^t_{n}(S,\beta)$, the cohomology of the distinguished triangle \n$$\mathop{\dR\mathrm{Hom}}\nolimits_S(F,F)\to \mathop{\dR\mathrm{Hom}}\nolimits_S(\mathcal{O}_S,F)\to \mathop{\dR\mathrm{Hom}}\nolimits_S(I,F)$$\nimplies that $\mathop{\rm Ext}\nolimits^i_S(I,F)=0$ for $i\geqslant 2$. \nIn Setting \ref{setting}, we know $\mathop{\rm ext}\nolimits^0_S(I,F)=\beta^2+n+1$, therefore \n$$\mathop{\rm ext}\nolimits^1_S(I,F)=1. $$\nAs $F$ is stable, we have \n$$\mathop{\rm ext}\nolimits^2_S(F,F)=\mathop{\rm ext}\nolimits^0_S(F,F)=1, \quad \mathop{\rm ext}\nolimits^1_S(F,F)=\beta^2+2. $$ \nSo the rank of the second term of \eqref{equ on exa seq} is $2\beta^2+6$. One can easily check the rank of the third term in \eqref{equ on exa seq} \nis also $2\beta^2+6$ by the Riemann-Roch formula and the first condition of Setting \ref{setting}. \nTo sum up, we get an isomorphism:\n$$\mathcal{E} xt^1_{\pi_{P_S}}(\mathbb{I}_S,\mathbb{F}_S)\oplus \mathcal{E} xt^1_{\pi_{P_S}}(\mathbb{F}_S,\mathbb{F}_S)\boxtimes T_T\oplus \n\mathcal{E} xt^0_{\pi_{P_S}}(\mathbb{F}_S,\mathbb{F}_S)\boxtimes\mathcal{O}_T \cong \n\mathcal{E} xt^2_{\pi_{P_X}}(\mathbb{I}_X,\mathbb{I}_X)_0. $$\nAs in \cite[Prop.~4.7]{CMT2}, one can show the decomposition in the LHS is compatible with the Serre duality pairing on $\mathcal{E} xt^2_{\pi_{P_X}}(\mathbb{I}_X,\mathbb{I}_X)_0$. Then our claim follows from Lemmata \ref{lem on indep of cosec} and \ref{lem on compu of half euler class}.\n\end{proof}\n\n\n\subsection{Thom-Porteus formula}\nAs our insertion \eqref{equ on pri ins} depends only on the fundamental cycle of the universal sheaf, it is useful to know \nthe pushforward of the virtual class \eqref{vir class StimesT} under the forgetful map. \nIn this section, let $\beta \in H_2(S,{\mathbb{Z}})$ be an irreducible curve class. Then $P^t_{n}(X,\beta)$ is independent of the choice of $t>\frac{n}{\omega\cdot \beta}$ \cite[Prop.~1.12]{CT1},\nso we can set $t\to \infty$ and work with PT stability. \nConsider the forgetful map\n\[ f : P_n(S,\beta) \to M_n(S,\beta), \quad (\mathcal{O}_S\to F)\mapsto F.
\]\nRecall that $P_n(S,\beta)$ is smooth of dimension $\beta^2 + n + 1$ and $M_n(S,\beta)$ is smooth of dimension $\beta^2 + 2$.\nThe image of $f$ in $M_n(S,\beta)$ is the locus\n\begin{equation} \big\{ F \in M_n(S,\beta)\,|\,h^0(F) \geqslant 1 \big\}, \label{x} \end{equation}\nindeed, since $\beta$ is irreducible and $F$ is pure, any non-zero section $s \in H^0(S,F)$ has zero-dimensional cokernel, so it defines a PT stable pair.\nThe expected dimension of the space of sections is $\chi(F) = n$,\nso the image is everything if $n=1$, a divisor if $n=0$ and a codimension $2$ cycle if $n=-1$.\n\nLet $\mathbb{F}_S$ be a (twisted) universal sheaf on $M_n(S,\beta) \times S$.\nIf $n=1$ (or, more generally, if there exists a $K$-theory class on $S$ which pairs to $1$ with the class of the sheaves parametrized by $M_n(S,\beta)$),\nthe twisted sheaf can be taken to be an actual sheaf.\nFor us here the difference will not matter, since we are only interested in the Chern character of the universal sheaf,\nwhich can also be easily defined in the twisted case. We refer to \cite{Markman} for a discussion.\n\nLet $\pi_{M}: M_n(S,\beta) \times S\to M_n(S,\beta)$ be the projection. \nWe resolve the complex\n$\mathbf{R} \pi_{M\ast}(\mathbb{F}_S)$ by a $2$-term complex of vector bundles: \n$$\mathbf{R} \pi_{M\ast}(\mathbb{F}_S)\cong (E_0 \xrightarrow{\sigma} E_1). $$\nThen \eqref{x} is the \textit{degeneracy locus}\n$$D_1(\sigma) = \big\{ x \in M_n(S,\beta)\,|\, \dim_{\mathbb{C}} \ker(\sigma(x)) \geqslant 1 \big\}. $$\nBy the \textit{Thom-Porteus formula} \cite[\S14.4]{Ful}\n(see \cite[Prop.~1]{GT} for a modern treatment and observe that $P_n(S,\beta)$ is precisely what is called $\tilde{D}_1(\sigma)$ there), we get the following:\n\begin{prop}\label{deg loci}\n\begin{equation*}f_{\ast} [P_n(S,\beta)] =c_{1-n}(-\mathbf{R} \pi_{M\ast}(\mathbb{F}_S))\cap [M_n(S,\beta)]. \end{equation*}\n\end{prop}\n\nWe can calculate the right hand side above by the Grothendieck-Riemann-Roch formula\n\[ \mathop{\rm ch}\nolimits( - \mathbf{R} \pi_{M\ast}(\mathbb{F}_S) ) = - \pi_{M\ast}( \mathop{\rm ch}\nolimits(\mathbb{F}_S)\cdot\pi_S^*\mathop{\rm td}\nolimits(S )). \]\nWe obtain the following:\n\begin{equation} \label{Pn expressions}\n\begin{aligned}\nf_{\ast} [P_1(S,\beta) ] & = 1, \\\nf_{\ast} [P_0(S,\beta) ] & = -\pi_{M\ast}(\mathop{\rm ch}\nolimits_3(\mathbb{F}_S))-2\pi_{M\ast}(\mathop{\rm ch}\nolimits_1(\mathbb{F}_S)\pi_S^*{\mathsf{p}}), \\\nf_{\ast} [P_{-1}(S,\beta) ] & = \frac{1}{2}\left(c_{1}(-\mathbf{R}\n\pi_{M\ast}(\mathbb{F}_S))\right)^2+\pi_{M\ast}(\mathop{\rm ch}\nolimits_4(\mathbb{F}_S))+2\pi_{M\ast}(\mathop{\rm ch}\nolimits_2(\mathbb{F}_S)\pi_S^*{\mathsf{p}}), \n\end{aligned}\n\end{equation}\nwhere we used Poincar\'e duality on the right to identify homology and cohomology and \n${\mathsf{p}}\in H^4(S)$ denotes the point class. \nA small calculation shows that the right hand side is indeed \nindependent of the choice of universal family $\mathbb{F}$ (i.e. the formulae stay invariant under replacing $\mathbb{F}$ by $\mathbb{F} \otimes \pi_{M}^*{\mathcal L}$\nfor ${\mathcal L}\in \mathop{\rm Pic}\nolimits(M_n(S,\beta))$).\nThis will be useful later on.\n\n\subsection{Genus 0 in irreducible classes}\nIn this section, we prove Conjecture \ref{conj on DT4\/GV} (1), (2) for irreducible curve classes.
\nWe first recall a result of Fujiki \cite{Fuji} and its generalization in \cite[Cor.~23.17]{GHJ}.\n\begin{thm}\label{fujiki result}$($\cite{Fuji}, \cite[Cor.~23.17]{GHJ}$)$\nLet $M$ be a hyperk\"ahler variety of dimension $2n$. Assume $\alpha\in H^{4j}(M,\mathbb{C})$ is of type $(2j, 2j)$ on all small deformations of $M$. Then there exists a constant $C(\alpha)\in\mathbb{C}$ depending only on $\alpha$ $($called the Fujiki constant of $\alpha$$)$ such that\n$$\int_{M}\alpha\cdot\beta^{2n-2j}=C({\alpha})\cdot q_{M}(\beta)^{n-j}, \quad \forall\,\, \beta\in H^2(M, \mathbb{C}), $$\nwhere $q_M: H^2(M, \mathbb{C}) \to \mathbb{C}$ denotes the Beauville-Bogomolov-Fujiki form. \n\end{thm}\n\begin{thm}\label{thm on g=0 conj on prod}\nLet $X=S\times T$ and $\beta\in H_2(S,\mathbb{Z})\subseteq H_2(X,\mathbb{Z})$ be an irreducible curve class. Then Conjecture \ref{conj on DT4\/GV} (1), (2)\nhold.\n\end{thm}\n\begin{proof}\nBy Proposition \ref{prop on smoothness}, we have a forgetful map \n$$\bar{f}=(f,\textrm{id}_T): P_{n}(X,\beta)=P_{n}(S,\beta)\times T\to M_{n}(S,\beta)\times T. $$\nSince our insertion \eqref{equ on pri ins} only involves the fundamental cycle of the universal one dimensional sheaf, it is the pullback $\bar{f}^*$\nof a cohomology class from $M_{n}(S,\beta)\times T$. \n \nWhen $n>1$, we have \n$$\dim_{\mathbb{C}} P_{n}(S,\beta)=\beta^2+n+1>\beta^2+2=\dim_{\mathbb{C}}M_{n}(S,\beta). $$ \nBy Theorem \ref{thm on vir clas} and Proposition \ref{deg loci}, it is easy to see \n$$P_{n,\beta}(\gamma_1,\ldots,\gamma_l)=0, \quad n>1. $$\nWhen $n=1$, we take insertions $\gamma,\gamma'\in H^4(X)$ for example (other cases follow from easier versions of the same argument).\nBased on the K\"unneth decomposition:\n$$H^{4}(X)\cong (H^0(S)\otimes H^4(T))\oplus (H^2(S)\otimes H^2(T))\oplus (H^4(S)\otimes H^0(T)), $$\nwe write \n\begin{align*}\n\gamma &=A_1\cdot 1\otimes {\mathsf{p}}+D_1\otimes D_2+A_2\cdot {\mathsf{p}}\otimes 1, \\\n\gamma' &=A'_1\cdot 1\otimes {\mathsf{p}}+D'_1\otimes D'_2+A'_2\cdot {\mathsf{p}}\otimes 1. \n\end{align*}\nBy Eqn.~\eqref{equ of univ sheaf on prod}, the insertion becomes (see also \cite[Proof~of~Thm.~5.8]{COT1}):\n\begin{align}\label{equ on pri ins on prod}\tau(\gamma)=(D_1\cdot\beta)\otimes D_2+A_2f^*\pi_{M*}(\pi_S^*{\mathsf{p}}\cdot \mathop{\rm ch}\nolimits_1(\mathbb{F}_S))\otimes 1, \end{align}\nwhere $\pi_S$, $\pi_{M}$ are projections from $S\times M_n(S,\beta)$ to its factors. Hence\n$$\tau(\gamma)\cdot\tau(\gamma')=(D_1\cdot\beta)\cdot (D_1'\cdot\beta)\otimes (D_2\cdot D_2')+A_2A_2'f^*\left(\pi_{M*}\left(\pi_S^*{\mathsf{p}}\cdot \mathop{\rm ch}\nolimits_1(\mathbb{F}_S)\right)\right)^2\otimes 1+\mathrm{others}, $$\nwhere ``others'' lie in $H^2(P_{1}(S,\beta))\otimes H^2(T)$.
\nBy Theorem \\ref{thm on vir clas}, we get \n\\begin{align*}P_{1,\\beta}(\\gamma,\\gamma')&=(D_1\\cdot\\beta)\\, (D_1'\\cdot\\beta)\\,\\int_T(D_2\\cdot D_2')\\int_{P_{1}(S,\\beta)}f^*e(T_{M_1(S,\\beta)}) \\\\\n& \\quad -e(T)A_2A_2' \\int_{P_{1}(S,\\beta)}f^*\\left(c_{\\beta^2}(T_{M_1(S,\\beta)})\\cdot \\pi_{M*}\\left(\\pi_S^*{\\mathsf{p}}\\cdot \\mathop{\\rm ch}\\nolimits_1(\\mathbb{F}_S)\\right)^2\\right) \n\\\\\n&=(D_1\\cdot\\beta)\\, (D_1'\\cdot\\beta)\\,\\int_T(D_2\\cdot D_2')\\int_{M_{1}(S,\\beta)}e(T_{M_1(S,\\beta)}) \\\\\n& \\quad -e(T)A_2A_2' \\int_{M_{1}(S,\\beta)}c_{\\beta^2}(T_{M_1(S,\\beta)})\\cdot \\pi_{M*}\\left(\\pi_S^*{\\mathsf{p}}\\cdot \\mathop{\\rm ch}\\nolimits_1(\\mathbb{F}_S)\\right)^2 \\\\\n&=(D_1\\cdot\\beta)\\, (D_1'\\cdot\\beta)\\,\\int_T(D_2\\cdot D_2')\\,e(M_1(S,\\beta)),\n\\end{align*}\nwhere the second equality follows from Proposition \\ref{deg loci}\nand the last equality is proved using Fujiki formula (Theorem \\ref{fujiki result}) \nand the evaluation\n\\[ q_M( \\pi_{M*}\\left(\\pi_S^*{\\mathsf{p}}\\cdot \\mathop{\\rm ch}\\nolimits_1(\\mathbb{F}_S)\\right) ) = 0, \\]\n(which follows for example from \\cite[Proof~of~Thm.~5.8]{COT1}).\nConjecture \\ref{conj on DT4\/GV} (2) then reduces to \\cite[Thm.~5.8]{COT1}.\n\\end{proof}\n\n\n\\subsection{Transport of integrals to the Hilbert schemes}\\label{sect on trans}\nTo compute the stable pair theory on $P_n(S,\\beta)$ for $n \\leqslant 0$ we will need to\nhandle more complicated descendent integrals on $M_n(S,\\beta)$.\nAs in \\cite[\\S 4.4]{COT1} which deals with the $n = 1$ case,\nwe use here the general framework of monodromy operators of Markman \\cite{Markman} (see also \\cite{OUniversality})\nto reduce to the Hilbert schemes.\n\nConsider the Mukai lattice, which is the lattice $\\Lambda = H^{\\ast}(S,{\\mathbb{Z}})$ endowed with the Mukai pairing\n\\[ \\langle x , y \\rangle := - \\int_S x^{\\vee} y, \\]\nwhere, if we decompose an element $x \\in \\Lambda$ according to degree as $(r,D,n)$, we have written $x^{\\vee} = (r,-D,n)$.\nGiven a sheaf or complex $E$ on $S$ the Mukai vector of $E$ is defined by\n\\[ v(E) = \\sqrt{\\mathop{\\rm td}\\nolimits_S} \\cdot \\mathop{\\rm ch}\\nolimits(E) \\in \\Lambda. 
\]\nLet $M(v)$ be a proper smooth moduli space of stable sheaves on $S$ with Mukai vector $v \in \Lambda$ (where stability is with respect to some fixed polarization).\nWe assume that there exists a universal family ${\mathbb{F}}$ on $M(v) \times S$.\nIf it does not exist, everything below can be made to work with the Chern character $\mathop{\rm ch}\nolimits({\mathbb{F}})$\nof a quasi-universal family, see \cite{Markman} or \cite{OUniversality}.\nLet $\pi_M, \pi_S$ be the projections to $M(v)$ and $S$.\nOne has the Mukai morphism \n$$\theta_{{\mathbb{F}}} : \Lambda \to H^2(M(v)), $$\n\[ \theta_{{\mathbb{F}}}(x) = \left[ \pi_{M \ast}( \mathop{\rm ch}\nolimits({\mathbb{F}}) \cdot \sqrt{\mathop{\rm td}\nolimits_S} \cdot x^{\vee} ) \right]_{\deg = 2}, \]\nwhere $[ - ]_{\deg = k}$ stands for extracting the degree $k$ component\nand (as we will also do below) we have suppressed the pullback maps from the projection to $S$.\nDefine the universal class\n\[ u_v = \exp\left( \frac{ \theta_{{\mathbb{F}}}(v) }{\langle v,v \rangle} \right) \mathop{\rm ch}\nolimits({\mathbb{F}}) \sqrt{\mathop{\rm td}\nolimits_S}, \]\nwhich is independent of the choice of universal family ${\mathbb{F}}$.\nFor $x \in \Lambda$, consider the normalized descendents:\n\[ B(x) := \pi_{M\ast}( u_v \cdot x^{\vee} ), \]\nand let $B_k(x) = [ B(x) ]_{\deg=2k}$ be its degree $2k$ component.\n\n\n\begin{example}\nFor $v=(1,0,1-d)$, the moduli space becomes the punctual Hilbert scheme: $M(v) = S^{[d]}$.\nThen we have\n\[ u_v = \exp\left( \frac{-\delta}{2d-2} \right) \mathop{\rm ch}\nolimits( {\mathcal I}_{{\mathcal Z}} ) \sqrt{\mathop{\rm td}\nolimits_S}, \]\nwhere we let $\delta = \pi_{\ast} \mathop{\rm ch}\nolimits_3( {\mathcal O}_{{\mathcal Z}} )$ (so that $-2 \delta$ is the class of the locus of non-reduced subschemes).\n\nWe define the standard descendents on the Hilbert scheme by\n\[ {\mathfrak{G}}_d(\alpha) = \pi_{\ast}( \pi_S^{\ast}(\alpha) \mathop{\rm ch}\nolimits_d({\mathcal O}_{{\mathcal Z}}) ) \in H^{\ast}(S^{[d]}). \]\nOne obtains that\n\begin{align*}\nB_1({\mathsf{p}}) & = - \frac{\delta}{2d-2}, \\\nB_2({\mathsf{p}}) & = \frac{1}{2} \frac{\delta^2}{(2d-2)^2} - {\mathfrak{G}}_2({\mathsf{p}}). \n\end{align*}\nFor a divisor $D \in H^2(S)$, one finds\n\begin{align*}\nB_1(D) & = {\mathfrak{G}}_2(D), \\\nB_2(D) & = {\mathfrak{G}}_3(D) - \frac{\delta}{2d-2} {\mathfrak{G}}_2(D).\n\end{align*}\n\end{example}\nUsing the descendents $B_k(x)$, one can move between any two moduli spaces of stable sheaves on $S$\njust by specifying a Mukai lattice isomorphism $g : \Lambda \to \Lambda$.\nWe give the details in the case of interest to us; \nsee \cite{Markman, OUniversality} for the general case.\n\nAs before let $\beta \in \mathop{\rm Pic}\nolimits(S)$ be an irreducible effective class of square $\beta \cdot \beta = 2d-2$,\nand let $n \in {\mathbb{Z}}$. We want to connect the moduli spaces\n\[ M_n(S,\beta) \,\, \rightsquigarrow \,\, S^{[d]}.
\]\nLet $\beta = e + (d-1) f$ where $e, f \in H^2(S,{\mathbb{Z}})$ span a hyperbolic lattice: ${\mathbb{Z}} e \oplus {\mathbb{Z}} f \cong \binom{0\ 1}{1\ 0}$.\nWe do not require $e,f$ to be effective here.\nDefine the isomorphism $g : \Lambda \to \Lambda$ by\n\begin{align*}\n1 \mapsto (0,-e, n ), \quad {\mathsf{p}} \mapsto (0,f,0), \quad e \mapsto (1, -nf, 0), \quad f \mapsto (0,0,-1), \quad\ng|_{ \{ 1,{\mathsf{p}}, e, f \}^{\perp}} = \textrm{id}.\n\end{align*}\nOne sees that $g$ is an isometry of the Mukai lattice and that\n\[ g_{\ast} ( 0, \beta, n) = (1,0, 1-d). \]\nThen one has:\n\begin{thm}$($Markman \cite{Markman}, reformulation as in \cite[Thm.~4]{OUniversality}$)$ \label{thm:Markman} For any $k_i \geqslant 0$ and $\alpha_i \in H^{\ast}(S)$ and any polynomial $P$,\n\[\n\int_{ M_n(S,\beta) } P( B_{k_i}(\alpha_i) , c_j( T_{M_n(S,\beta)} ) ) \n=\n\int_{ S^{[d]} } P( B_{k_i}(g \alpha_i) , c_j( T_{S^{[d]}} ) ). \n\]\n\end{thm}\n\n\n\n\n\subsection{Genus 1 in irreducible classes}\nRecall the genus $1$ Gopakumar-Vafa invariants (Proposition \ref{prop on gw on prod}).\nOn the stable pair side, we have the following:\n\begin{thm} \label{thm on g=1 conj on prod}\nLet $\beta\in H_2(S,\mathbb{Z})\subseteq H_2(X,\mathbb{Z})$ be an irreducible curve class. Then for certain choice of orientation, we have \n\begin{align}\label{equ on P0 pro}P_{0,\beta}(\gamma)=e(T)\, N_{1}\left(\frac{\beta^2}{2}\right) \int_{S\times{\mathsf{p}}}\gamma. \end{align}\nIn particular, Conjecture \ref{conj on DT4\/GV} (3) holds in this case. \n\end{thm}\n\begin{proof}\nThe strategy is as follows: First we write our stable pair invariants as integrals on the moduli space $M_0(S,\beta)$,\nthen express the integrand in terms of the classes $B_k(x)$ and then use\nMarkman's Theorem~\ref{thm:Markman} to reduce to an integral over the Hilbert scheme, which is known by the results of \cite{COT1}.\n\nBy Eqn.~\eqref{equ on pri ins on prod} and Theorem \ref{thm on vir clas} (choose the inverse orientation there), we have \n$$P_{0,\beta}(\gamma)=e(T)\, \int_{S\times{\mathsf{p}}}\gamma\cdot\int_{P_{0}(S,\beta)}f^*\left(c_{\beta^2}(T_{M_0(S,\beta)})\cdot \pi_{M*}\left(\pi_S^*({\mathsf{p}})\cdot \mathop{\rm ch}\nolimits_1(\mathbb{F}_S)\right)\right).$$\nUsing Proposition \ref{deg loci}, we find\n\[ P_{0,\beta}(\gamma)=e(T)\,\n\int_{S\times{\mathsf{p}}}\gamma\cdot\int_{M_{0}(S,\beta)}c_{\beta^2}(T_{M_0(S,\beta)})\cdot c_{1}(-\mathbf{R} \pi_{M\ast}(\mathbb{F}_S))\cdot \n\pi_{M*}\left(\mathop{\rm ch}\nolimits_1(\mathbb{F}_S)\cdot \pi_S^*({\mathsf{p}})\right). \]\nA calculation shows that we have\n\[ B_1({\mathsf{p}}) = \pi_{\ast}( \mathop{\rm ch}\nolimits_1({\mathbb{F}}_S)\,\pi_S^{\ast}({\mathsf{p}}) ).
\\]\nMoreover, the expressions \\eqref{Pn expressions} are\ninvariant under replacing $\\mathop{\\rm ch}\\nolimits({\\mathbb{F}}_S)$ by $\\mathop{\\rm ch}\\nolimits({\\mathbb{F}}_S) \\exp( \\ell )$ for any line bundle $\\ell \\in H^2(M_n(S,\\beta))$.\nHence we can use $\\mathop{\\rm ch}\\nolimits({\\mathbb{F}}_S') := \\mathop{\\rm ch}\\nolimits({\\mathbb{F}}_S) \\exp( \\theta_{{\\mathbb{F}}_S}(v) \/ \\langle v,v \\rangle )$ which shows that\n\\begin{align*}\nc_{1}(-\\mathbf{R} \\pi_{M\\ast}(\\mathbb{F}_S))\n& = -\\pi_{M\\ast}(\\mathop{\\rm ch}\\nolimits_3(\\mathbb{F}_S'))-2\\pi_{M\\ast}(\\mathop{\\rm ch}\\nolimits_1(\\mathbb{F}_S')\\,\\pi_S^*{\\mathsf{p}}) \\\\\n& = - B_1\\left( \\sqrt{\\mathop{\\rm td}\\nolimits_S}^{-1} \\right) - 2 B_1( {\\mathsf{p}} ) \\\\\n& = - B_1( 1 + {\\mathsf{p}} ).\n\\end{align*}\nWe obtain that:\n\\begin{align*}\n& \\int_{M_{0}(S,\\beta)}c_{2d-2}(T_{M_0(S,\\beta)})\\cdot c_{1}(-\\mathbf{R} \\pi_{M\\ast}(\\mathbb{F}_S))\\cdot \n\\pi_{M*}\\left(\\mathop{\\rm ch}\\nolimits_1(\\mathbb{F}_S)\\cdot \\pi_S^*{\\mathsf{p}}\\right) \\\\\n= & - \\int_{M_{0}(S,\\beta)}c_{2d-2}(T_{M_0(S,\\beta)}) B_1(1 + {\\mathsf{p}}) B_1({\\mathsf{p}}) \\\\\n= & - \\int_{S^{[d]}}c_{2d-2}(T_{S^{[d]}}) B_1(-e+f) B_1(f) \\\\\n= & - \\int_{S^{[d]}}c_{2d-2}(T_{S^{[d]}}) {\\mathfrak{G}}_2(-e+f) {\\mathfrak{G}}_2(f) \\\\\n= & - ((-e+f) \\cdot f) C( c_{2d-2}(T_{S^{[d]}}) ) \\\\\n= & N_1(d-1),\n\\end{align*}\nwhere we used the $k=1$ case of \\cite[Thm.~4.2]{COT1} in the last step. \n\\end{proof}\n\n\n\n\\subsection{Genus 2 in irreducible classes}\nLet $\\beta_d \\in H_2(S,{\\mathbb{Z}}) \\subseteq H_2(X,{\\mathbb{Z}})$ be an irreducible curve class of square $\\beta_d^2 = 2d-2$.\nBelow, we use similar method to compute stable pair invariants $P_{-1,\\beta_d}$ on $X$ for all $d$.\n\\begin{thm}\\label{thm on P_-1} For certain choice of orientation, we have \n\\begin{align*} \\sum_{d \\in\\mathbb{Z}} P_{-1,\\beta_d}\\, q^d &= \n\\left(\\prod_{n \\geqslant 1} (1-q^n)^{-24}\\right) \\left(24q \\frac{d}{dq} G_2(q) - 24G_2(q) - 1 \\right) \\\\\n&= 72 q^2 + 1920 q^3 + 28440 q^4 + 305280 q^5 + 2639760 q^6 + \n 19450368 q^7 + \\cdots .\n\\end{align*}\nIn particular, Conjecture \\ref{conj on DT4\/GV} (4) holds in this case. 
\n\\end{thm}\n\n\n\\begin{proof}\nAs in the genus $1$ case, by Theorem \\ref{thm on vir clas} and Proposition \\ref{deg loci} we have:\n\\[\nP_{-1,\\beta}=-e(T)\\int_{M_{-1}(S,\\beta)}c_{2d-2}(T_{M_{-1}(S,\\beta)})\\cdot c_{2}(-\\mathbf{R} \\pi_{M\\ast}(\\mathbb{F}_S)).\n\\]\nWith the same discussion as before one gets:\n\\[\nc_{2}(-\\mathbf{R} \\pi_{M\\ast}(\\mathbb{F}_S)) = \\frac{1}{2} B_1(1 + {\\mathsf{p}})^2 + B_2(1 + {\\mathsf{p}}).\n\\]\nHence applying Markman's Theorem~\\ref{thm:Markman}, we conclude\n\\begin{align*}\n& \\int_{M_{-1}(S,\\beta)}c_{2d-2}(T_{M_{-1}(S,\\beta)})\\cdot c_{2}(-\\mathbf{R} \\pi_{M\\ast}(\\mathbb{F}_S)) \\\\\n= & \\int_{M_{-1}(S,\\beta)}c_{2d-2}(T_{M_{-1}(S,\\beta)})\\cdot \\left( \\frac{1}{2} B_1(1 + {\\mathsf{p}})^2 + B_2(1 + {\\mathsf{p}}) \\right) \\\\\n= & \\int_{S^{[d]}} c_{2d-2}( T_{S^{[d]}} ) \\left( \\frac{1}{2} B_1(-e+f - {\\mathsf{p}})^2 + B_2(-e+f-{\\mathsf{p}} ) \\right) \\\\\n= & \\int_{S^{[d]}} c_{2d-2}( T_{S^{[d]}} ) \n\\frac{1}{2} \\left[ {\\mathfrak{G}}_2(-e+f) + \\frac{\\delta}{2d-2} \\right]^2 \\\\\n& \\ + \\int_{S^{[d]}} c_{2d-2}( T_{S^{[d]}} ) \\left( {\\mathfrak{G}}_3(-e+f) - \\frac{\\delta}{2d-2} {\\mathfrak{G}}_2(-e+f) - \\frac{1}{2} \\frac{\\delta^2}{(2d-2)^2} + {\\mathfrak{G}}_2({\\mathsf{p}}) \\right) \\\\\n= & \\frac{1}{2} \\left( (-e+f)^2 + \\frac{ \\delta \\cdot \\delta}{(2d-2)^2} \\right) N_1(d-1)\n- \\frac{1}{2} \\frac{ \\delta \\cdot \\delta }{(2d-2)^2} N_1(d-1) + \\int_{S^{[d]}} c_{2d-2}( T_{S^{[d]}} ) {\\mathfrak{G}}_2({\\mathsf{p}}) \\\\\n= & - N_1(d-1) + \\int_{S^{[d]}} c_{2d-2}( T_{S^{[d]}} ) {\\mathfrak{G}}_2({\\mathsf{p}}).\n\\end{align*}\nThus we conclude that\n\\begin{align*}\\label{equ on P-1 pro}\nP_{-1,\\beta}=e(T) \\left( N_1(d-1) - \\int_{S^{[d]}} c_{2d-2}( T_{S^{[d]}} ) {\\mathfrak{G}}_2({\\mathsf{p}}) \\right).\n\\end{align*}\nThe desired formula now follows by the evaluation given in \\cite[Prop.~4.6]{COT1}:\n\\[ \\sum_{d \\geqslant 0} q^d \\int_{S^{[d]}} c_{2d-2}( T_{S^{[d]}} ) {\\mathfrak{G}}_2({\\mathsf{p}}) = \\prod_{n = 1} (1-q^n)^{-24} \\left( G_2(q) + \\frac{1}{24} \\right). \\]\nFinally, comparing with Proposition \\ref{prop on gw on prod}, we are done. \n\\end{proof}\n\\begin{rmk}\\label{rmk on pri g=0}\nBy the global Torelli theorem, primitive curve classes on $K3$ surfaces can be deformed to irreducible curve classes. Combining Theorem \\ref{thm on g=0 conj on prod}, Theorem \\ref{thm on g=1 conj on prod}, Theorem \\ref{thm on P_-1}, we know \nConjecture \\ref{conj on DT4\/GV} also holds for primitive curve classes $\\beta\\in H_2(S)\\subseteq H_2(X)$. \\end{rmk}\n\n\n\\subsection{Genus 1:~multiple fiber classes of elliptic fibrations}\nLet $X=E\\times E\\times T$ be the product two copies of an elliptic curve $E$ and a $K3$ surface $T$. It gives the trivial elliptic fibration \n\\begin{align}\\label{equ on trivial ell fib}\\pi: X\\to Y:=E\\times T. \\end{align} \nFor multiple fiber classes of $\\pi$ \\eqref{equ on trivial ell fib}, we have the following closed evaluation:\n\\begin{thm}\\label{thm1 on g=1 of multiple fiber}\nLet $t > 0$ and $\\gamma \\in H^4(X)$. For certain choice of orientation, we have \n\\begin{align}\\label{equ on P0 multiple fiber}\n\\sum_{r\\geqslant 0}P^t_{0,r[E]}(\\gamma)\\,q^r=24\\,\\left(\\int_{E \\times E \\times {\\mathsf{p}}} \\gamma\\right)\\cdot\\sum_{m\\geqslant 1} \\sum_{n | m}n^2q^m. 
\\end{align}\n\\end{thm}\n\\begin{proof}\nBy \\cite[Prop.~5.3]{CT1}, we know $P^t_0(X,n[E])$ is independent of the choice of $t>0$, so we may set $t\\to \\infty$ and work with PT stability. \nAs in \\cite[Lem.~3.5]{CMT2}, there is an isomorphism \n$$\\pi^*: \\mathop{\\rm Hilb}\\nolimits^n(Y) \\cong P_0(X,n[E]), \\quad I_Z\\mapsto \\pi^*I_Z. $$\nFor $I=\\pi^*I_Z\\in P_0(X,n[E])$, by projection formula and \n$$\\pi_*\\mathcal{O}_X\\cong \\mathcal{O}_{Y}\\oplus K_{Y}[-1], $$\nwe obtain \n$$\\mathop{\\dR\\mathrm{Hom}}\\nolimits_X(I,I)\\cong \\mathop{\\dR\\mathrm{Hom}}\\nolimits_{Y}(I_Z,I_Z)\\oplus \\mathop{\\dR\\mathrm{Hom}}\\nolimits_{Y}(I_Z,I_Z\\otimes K_{Y})[-1]. $$\nBy taking the traceless part, we get \n\\begin{align}\\label{equ1 on ell fib}\\mathop{\\rm Ext}\\nolimits^2_X(I,I)_0&\\cong \\mathop{\\rm Ext}\\nolimits^2_{Y}(I_Z,I_Z)_0\\oplus \\mathop{\\rm Ext}\\nolimits^1_{Y}(I_Z,I_Z)_0 \\\\ \\nonumber \n&\\cong \\mathop{\\rm Ext}\\nolimits^2_{Y}(I_Z,I_Z)_0\\oplus \\mathop{\\rm Ext}\\nolimits^2_{Y}(I_Z,I_Z)_0^{\\vee}, \n\\end{align}\nwhere we use Serre duality in the second isomorphism.\n\nNext we compare cosections on these obstruction spaces. By \\cite[Lem.~9.4]{KiP}, we have a surjective isotropic cosection\n\\begin{align*}\\phi_X: \\mathop{\\rm Ext}\\nolimits^2_X(I,I)_0 \\stackrel{\\mathrm{At}(I)}{\\longrightarrow} \\mathop{\\rm Ext}\\nolimits^3_X(I,I\\otimes T^*X)\\stackrel{\\mathrm{tr}}{\\longrightarrow}\nH^3(X,T^*X)\\stackrel{H\\sigma_X }{\\longrightarrow} H^4(X,\\wedge^4T^*X) \\stackrel{\\int}{\\longrightarrow}\\mathbb{C}, \\end{align*}\nwhere $\\mathrm{At}(I)\\in \\mathop{\\rm Ext}\\nolimits^1_X(I,I\\otimes T^*X)$ denotes the Atiyah class of $I$, $H\\in H^1(X,T^*X)$ is an ample divisor \nand $\\sigma_X\\in H^0(X,\\wedge^2T^*X)$ is a holomorphic symplectic form of $X$. \n\nBy the compatibility of Atiyah classes with map $\\pi: X\\to Y$ (ref.~\\cite[Prop.~3.14]{BFl}), we have a commutative diagram \n\\begin{align}\\label{com diag on atiyah class}\\xymatrix{\n\\mathop{\\rm Ext}\\nolimits^2_X(I,I)_0 \\ar[r]^{\\mathrm{At}(I) \\,\\,\\, \\quad } & \\mathop{\\rm Ext}\\nolimits^3_X(I,I\\otimes T^*X) \\ar[r]^{ \\quad \\mathrm{tr} } & H^3(X,T^*X) \\ar[r]^{\\mathrm{pr}\\quad \\,\\,\\,} &\nH^{1,1}(S)\\otimes H^{0,2}(T) \\\\\n\\mathop{\\rm Ext}\\nolimits^2_Y(I_Z,I_Z)_0 \\ar[u]_{i} \\ar[r]^{\\mathrm{At}(I_Z) \\,\\,\\, \\quad } & \\mathop{\\rm Ext}\\nolimits^3_Y(I_Z,I_Z\\otimes T^*Y) \\ar[u]_{ } \\ar[r]^{\\quad \\,\\, \\mathrm{tr} } & H^3(Y,T^*Y) \\ar[r]^{\\cong\\quad \\quad } & H^{1,1}(E)\\otimes H^{0,2}(T) \\ar[u]_{\\pi^*}, } \\end{align}\nwhere $i$ is the embedding in \\eqref{equ1 on ell fib}, $tr$ denotes the trace map and \n$pr$ is the projection with respect to K\\\"unneth decomposition.\nWe define a cosection \n\\begin{align*}\\phi_Y: \\mathop{\\rm Ext}\\nolimits^2_Y(I_Z,I_Z)_0 \\stackrel{\\mathrm{At}(I_Z)}{\\longrightarrow} \\mathop{\\rm Ext}\\nolimits^3_Y(I_Z,I_Z\\otimes T^*Y)\\stackrel{\\mathrm{tr}}{\\longrightarrow}\nH^3(Y,T^*Y)\\cong H^{1,1}(E)\\otimes H^{0,2}(T)\\stackrel{\\epsilon}{\\longrightarrow} \\mathbb{C}, \\end{align*}\n$$\\mathrm{where} \\quad \\epsilon(\\alpha)=\\int_{X} H \\sigma_X\\cdot\\pi^*\\alpha, \\quad \\alpha\\in H^{1,1}(E)\\otimes H^{0,2}(T). 
$$\nIt is easy to see $\\phi_Y$ is a positive multiple of the standard cosection of $\\mathop{\\rm Hilb}\\nolimits^n(Y)$ (see~e.g.~\\cite[Eqn.~(6)]{O2}), \nhence its reduced virtual class keeps the same.\n\nBy diagram \\eqref{com diag on atiyah class}, we have a commutative diagram: \n\\begin{align*}\\xymatrix{\n\\mathop{\\rm Ext}\\nolimits^2_X(I,I)_0 \\ar[r]^{\\quad \\quad \\phi_X} & \\mathbb{C} \\\\\n\\mathop{\\rm Ext}\\nolimits^2_Y(I_Z,I_Z)_0 \\ar[u]_{i} \\ar[ur]_{\\quad \\phi_Y}. & } \\end{align*}\nWe claim that $\\mathop{\\rm Ker}\\nolimits(\\phi_Y)$ is a maximal isotropic subspace of \n$\\mathop{\\rm Ker}\\nolimits(\\phi_X)\/\\mathrm{Im}(\\phi^{\\vee}_X)$. In fact, by taking dual, we have a commutative diagram \n\\begin{align*}\\xymatrix{\n\\mathbb{C} \\ar[d]^{=} \\ar[r]^{ \\phi^{\\vee}_X \\quad \\quad \\,\\, } & \\mathop{\\rm Ext}\\nolimits^2_X(I,I)_0^{\\vee} \\ar[d]^{i^{\\vee}} \\ar[r]^{Q_{\\mathrm{Serre}} \\quad\\quad\\quad\\quad\\quad\\quad\\quad\\quad}_{\\cong \\quad\\quad\\quad\\quad\\quad\\quad\\quad\\quad } & \\mathop{\\rm Ext}\\nolimits^2_X(I,I)_0\\cong \\mathop{\\rm Ext}\\nolimits^2_Y(I_Z,I_Z)_0\\oplus \n\\mathop{\\rm Ext}\\nolimits^2_Y(I_Z,I_Z)_0^{\\vee} \\ar[dl]^{\\pi_2} \\\\\n\\mathbb{C}\\ar[r]^{ \\phi^{\\vee}_Y \\quad \\quad \\quad } & \\mathop{\\rm Ext}\\nolimits^2_Y(I_Z,I_Z)_0^{\\vee}. & } \\end{align*}\nSince $\\phi_Y$ is surjective, so $\\phi^{\\vee}_Y$ is injective, therefore \n$$\\mathrm{Im}(\\phi^{\\vee}_X)\\cap\\mathop{\\rm Ker}\\nolimits(\\phi_Y)\\subseteq \\mathrm{Im}(\\phi^{\\vee}_X)\\cap \\mathop{\\rm Ext}\\nolimits^2_Y(I_Z,I_Z)_0=0, $$\nand $\\mathop{\\rm Ker}\\nolimits(\\phi_Y)$ defines a subspace of $\\mathop{\\rm Ker}\\nolimits(\\phi_X)\/\\mathrm{Im}(\\phi^{\\vee}_X)$. \nThis is a maximal isotropic subspace as $i: \\mathop{\\rm Ext}\\nolimits^2_Y(I_Z,I_Z)_0\\to\\mathop{\\rm Ext}\\nolimits^2_X(I,I)_0$ is so. \n\nThe above construction works in family and therefore we have \n$$[P_0(X,n[E])]^{\\mathrm{vir}}=[\\mathop{\\rm Hilb}\\nolimits^n(Y)]^{\\mathrm{vir}}\\in A_1(P_0(X,n[E])), $$\nfor certain choice of orientation in the LHS. Consider a commutative diagram:\n\\begin{align*} \\xymatrix{\nX \\ar[d]_{\\pi} & X\\times P_0(X,n[E]) \\ar[d]_{\\bar{\\pi}=(\\pi,(\\pi^*)^{-1})} \\ar[l]_{\\pi_X \\quad\\quad} \\ar[r]^{\\quad \\pi_P } & P_0(X,n[E]) \\ar[d]_{(\\pi^*)^{-1}}^{\\cong} \\\\\nY & Y\\times \\mathop{\\rm Hilb}\\nolimits^n(Y) \\ar[l]_{\\pi_{Y} \\quad\\quad } \\ar[r]^{\\,\\, \\pi_M } & \\mathop{\\rm Hilb}\\nolimits^n(Y), } \\quad \\quad\n\\end{align*}\nand denote $\\mathcal{Z}\\hookrightarrow Y\\times \\mathop{\\rm Hilb}\\nolimits^n(Y)$ to be the universal 0-dimensional subscheme. 
Then \n\\begin{align*}\nP_{0,n[E]}(\\gamma)&=\\int_{[P_0(X,n[E])]^{\\mathrm{vir}}}\\pi_{P*}(\\pi_X^*\\gamma\\cdot \\bar{\\pi}^*\\mathop{\\rm ch}\\nolimits_3(\\mathcal{O}_{\\mathcal{Z}})) \\\\\n&=\\int_{[\\mathop{\\rm Hilb}\\nolimits^n(Y)]^{\\mathrm{vir}}}\\pi_{M*}\\bar{\\pi}_*(\\pi_X^*\\gamma\\cdot \\bar{\\pi}^*\\mathop{\\rm ch}\\nolimits_3(\\mathcal{O}_{\\mathcal{Z}})) \\\\\n&=\\int_{[\\mathop{\\rm Hilb}\\nolimits^n(Y)]^{\\mathrm{vir}}} \\pi_{M*}(\\mathop{\\rm ch}\\nolimits_3(\\mathcal{O}_{\\mathcal{Z}})\\cdot\\bar{\\pi}_*(\\pi_X^*\\gamma) ) \\\\\n&=\\int_{[\\mathop{\\rm Hilb}\\nolimits^n(Y)]^{\\mathrm{vir}}} \\pi_{M*}(\\mathop{\\rm ch}\\nolimits_3(\\mathcal{O}_{\\mathcal{Z}})\\cdot\\pi_{Y}^*\\pi_*\\gamma ).\n\\end{align*}\nThe statement now follows from Proposition~\\ref{prop:K3xE calculation} below.\n\\end{proof}\n\n\\begin{prop}\n\\label{prop:K3xE calculation}\nLet $\\omega \\in H^2(E,{\\mathbb{Z}})$ be the class of point and $D \\in H^2(T,{\\mathbb{Q}})$ any class. Then for any $n \\geqslant 1$ we have:\n\\begin{align*}\n\\int_{[ \\mathop{\\rm Hilb}\\nolimits^n(T \\times E) ]^{\\text{vir}} } \\pi_{M \\ast}\\big( \\mathop{\\rm ch}\\nolimits_3({\\mathcal O}_{{\\mathcal Z}}) \\pi_Y^{\\ast}( \\omega \\otimes 1 ) \\big) & = (-1)^{n+1} e(T) \\sum_{d|n} d^2, \\\\\n\\int_{[ \\mathop{\\rm Hilb}\\nolimits^n(T \\times E) ]^{\\text{vir}} } \\pi_{M \\ast}\\big( \\mathop{\\rm ch}\\nolimits_3({\\mathcal O}_{{\\mathcal Z}}) \\pi_Y^{\\ast}( 1 \\otimes D ) \\big) & = 0.\n\\end{align*}\n\\end{prop}\n\\begin{proof}\nWrite $\\mathop{\\rm Hilb}\\nolimits = \\mathop{\\rm Hilb}\\nolimits^n(T \\times E)$ and consider the diagram\n\\[\n\\begin{tikzcd}\nT \\times E & \\mathop{\\rm Hilb}\\nolimits \\times T \\times E \\ar[swap]{l}{\\pi_{T \\times E}} \\ar{r}{\\pi_M} \\ar{d}{\\tilde{p}} & \\mathop{\\rm Hilb}\\nolimits \\ar{d}{p} \\\\\n& \\frac{ \\mathop{\\rm Hilb}\\nolimits \\times T \\times E }{E} \\ar{r}{\\pi_{M\/E}} & \\mathop{\\rm Hilb}\\nolimits\/E,\n\\end{tikzcd}\n\\]\nwhere the quotient by $E$ is taken in the stacky sense.\nThe universal subscheme ${\\mathcal Z} \\subset \\mathop{\\rm Hilb}\\nolimits \\times T \\times E$ has a natural $E$-linearization and hence arises from the pullback of\na subscheme ${\\mathcal Z}\/E \\subset (\\mathop{\\rm Hilb}\\nolimits \\times T \\times E)\/E$. Moreover, as in \\cite{O2},\nthere exists a natural (0-dimensional) virtual class $[ \\mathop{\\rm Hilb}\\nolimits\/E ]^{\\text{vir}}$ such that\n\\[ [ \\mathop{\\rm Hilb}\\nolimits ]^{\\text{vir}} = p^{\\ast} [\\mathop{\\rm Hilb}\\nolimits\/E]^{\\text{vir}}. \\]\nSince the virtual class of $\\mathop{\\rm Hilb}\\nolimits\/E$ arises from a symmetric obstruction theory (on an \\'etale cover of $\\mathop{\\rm Hilb}\\nolimits\/E$), its degree \ncan be computed by as an Behrend weighted Euler characteristic \\cite{B}:\n\\[ \\int_{ [\\mathop{\\rm Hilb}\\nolimits\/E]^{\\text{vir}} } 1 = e\\left( \\mathop{\\rm Hilb}\\nolimits\/E, \\nu \\right). 
\\]\nWe argue now as follows: Applying the pushpull formula and using $p \\circ \\pi_M = \\pi_{M\/E} \\circ \\tilde{p}$ we have\n\\begin{align*}\nN_n & := \\int_{[ \\mathop{\\rm Hilb}\\nolimits ]^{\\text{vir}} } \\pi_{M \\ast}\\big( \\mathop{\\rm ch}\\nolimits_3({\\mathcal O}_{{\\mathcal Z}}) \\pi_{T \\times E}^{\\ast}( \\omega \\otimes 1 ) \\big) \\\\\n& = \\int_{[ \\mathop{\\rm Hilb}\\nolimits\/E ]^{\\text{vir}} } \\pi_{M\/E \\ast} \\tilde{p}_{\\ast} \\big( \\mathop{\\rm ch}\\nolimits_3({\\mathcal O}_{{\\mathcal Z}}) \\pi_{T \\times E}^{\\ast}( \\omega \\otimes 1 ) \\big) \\\\\n& = \\int_{[ \\mathop{\\rm Hilb}\\nolimits\/E ]^{\\text{vir}} } \\pi_{M\/E \\ast} \\big( \\mathop{\\rm ch}\\nolimits_3({\\mathcal O}_{{\\mathcal Z}\/E}) \\tilde{p}_{\\ast}( \\pi_{T \\times E}^{\\ast}( \\omega \\otimes 1 )) \\big).\n\\end{align*}\nThen by checking on fibers we have $\\tilde{p}_{\\ast}( \\pi_{T \\times E}^{\\ast}( \\omega \\otimes 1 )) = 1$ \nas well as $\\pi_{M\/E \\ast} \\mathop{\\rm ch}\\nolimits_3({\\mathcal O}_{{\\mathcal Z}\/E}) = n$. This implies that\n\\[ N_n = n \\int_{ [\\mathop{\\rm Hilb}\\nolimits\/E]^{\\text{vir}} } 1 = n\\cdot e\\left( \\mathop{\\rm Hilb}\\nolimits\/E, \\nu \\right) = 24 (-1)^{n-1} \\sum_{d|n} d^2, \\]\nwhere for the last equality we have used \\cite[Cor.~1]{OS}.\n\nFor the second integral we argue identically, but observe that we have\n\\[ \\mathop{\\rm ch}\\nolimits_3({\\mathcal O}_{{\\mathcal Z}}) \\pi_{T \\times E}^{\\ast}( 1 \\otimes D ) = \\tilde{p}^{\\ast}( \\mathop{\\rm ch}\\nolimits_3({\\mathcal O}_{{\\mathcal Z}\/E}) \\pi_T^{\\ast}(D)), \\]\nso when pushing forward by $\\tilde{p}$ the integral vanishes.\n\\end{proof}\nSimilarly, we can consider a nontrivial elliptic fibration: \n$$\\bar{p}=(p,\\textrm{id}_T): X=S\\times T\\to \\mathbb{P}^{1}\\times T, $$\nwhere $p: S\\rightarrow\\mathbb{P}^{1}$ is an elliptic $K3$ surface with a section $i$. Let $f$ be a generic fiber of $\\bar{p}$.\n\\begin{thm}\\label{thm2 on g=1 of multiple fiber}\nLet $t>0$ and $\\gamma \\in H^4(X)$. Then for certain choice of orientation, we have \n\\begin{align}\\label{equ on P0 multiple fiber}\n\\sum_{r\\geqslant 0}P^t_{0,r[f]}(\\gamma)\\,q^r=24\\,\\left(\\int_{S \\times {\\mathsf{p}}} \\gamma\\right)\\cdot \\sum_{m\\geqslant 1}\\sum_{n | m}n^2q^m. \\end{align}\n\\end{thm}\n\\begin{proof}\nThe first proof is parallel to the proof of Theorem \\ref{thm1 on g=1 of multiple fiber}. 
\nFor the second part, we need to evaluate\n\\begin{equation} \\int_{[ \\mathop{\\rm Hilb}\\nolimits^n(T \\times {\\mathbb{P}}^1) ]^{\\text{vir}} } \\pi_{M \\ast}\\big( \\mathop{\\rm ch}\\nolimits_3({\\mathcal O}_{{\\mathcal Z}})\\,\\pi_Y^{\\ast}( \\omega \\otimes 1 ) \\big), \\label{aaas} \\end{equation}\nwhere $\\omega \\in H^2({\\mathbb{P}}^1)$ is the class of a point.\nWe consider the degeneration of $T \\times {\\mathbb{P}}^1$ given by the product of $T$ with the degeneration of ${\\mathbb{P}}^1$ into a chain of three ${\\mathbb{P}}^1$'s.\nBy specializing the insertion $\\omega$ to the middle factor, we\nare reduced to an integral of the relative Hilbert schemes $\\mathop{\\rm Hilb}\\nolimits^n(T \\times {\\mathbb{P}}^1 \/ T_0 \\cup T_{\\infty} )$ with the same integrand.\nBut this integral is also the outcome of applying the degeneration formula to the integrals considered in Proposition~\\ref{prop:K3xE calculation}\n(under the degeneration of $E$ to a nodal ${\\mathbb{P}}^1$).\nHence \\eqref{aaas} is given by $(-1)^{n+1} e(T) \\sum_{d|n} d^2$ as well.\nFor the analogue of the second integral in Proposition~\\ref{prop:K3xE calculation}, the localization formula applied to the scaling action of ${\\mathbb{C}}^{\\ast}$ on ${\\mathbb{P}}^1$ shows that it vanishes.\n\\end{proof}\n\\begin{rmk}\\label{rmk on impr}\nOn the product of two $K3$ surfaces, \ngenus 1 Gopakumar-Vafa invariants in imprimitive classes are defined in \\cite[Def.~A.1]{COT1}. In particular, for multiple fiber classes \n$\\beta=r[f]$ above, by using \\cite[Eqn.~(5.7)]{COT1}, we know $n_{1,r[f]}(\\gamma)=0$ if $r>1$. \n\\end{rmk}\n\n\n\n\n\\section{Hilbert schemes of two points on $K3$}\n\n\\subsection{Rational curves on exceptional locus}\nLet $S$ be a $K3$ surface. \nConsider the Hilbert-Chow map \n$$\\pi: \\mathop{\\rm Hilb}\\nolimits^2(S)\\to \\mathop{\\rm Sym}\\nolimits^2(S) $$\nto the symmetric product of $S$. Let $D$ be the exceptional divisor fitting into Cartesian diagram: \n\\begin{align*} \\xymatrix{\nD \\ar[d]_{\\pi} \\ar[r]^{i \\quad \\,\\,\\, } & \\mathop{\\rm Hilb}\\nolimits^2(S) \\ar[d]^{\\pi} \\\\\nS \\ar[r]^{\\Delta \\quad \\,\\,\\, } & \\mathop{\\rm Sym}\\nolimits^2(S), } \\quad \\quad\n\\end{align*}\nwhere $\\Delta$ is the diagonal embedding. Note that $\\pi: D\\to S$ is a $\\mathbb{P}^1$-bundle and any fiber of it has normal bundle \n$\\mathcal{O}_{\\mathbb{P}^1}(-2,0,0)$. \n\\begin{thm}\\label{thm on hilbS}\nWhen $t=\\frac{n}{\\omega\\cdot\\beta}+0^+$ $($i.e.~in JS chamber$)$,\nConjecture \\ref{conj on DT4\/GV} (1),~(2) hold for multiple fiber classes $\\beta=r[\\mathbb{P}^1]$ $($$r\\geqslant 1$$)$ of $\\pi$ as above. \n\\end{thm} \n\\begin{proof}\nBy Jordan-H\\\"older filtration, the JS moduli space $P_n^{\\mathrm{JS}}(X,d[\\mathbb{P}^1])$ is nonempty only if \n$$d\\,|\\,n, \\,\\,\\, n>0. $$\nso we may assume $n=m\\cdot d$ for $m\\in \\mathbb{Z}_{\\geqslant 1}$. Consider the map \n\\begin{align}f: P_{md}^{\\mathrm{JS}}(X,d[\\mathbb{P}^1])\\to \\mathop{\\rm Sym}\\nolimits^d(S), \\quad (F,s)\\mapsto \\pi_*[F]. 
\\end{align}\nAs the insertion \\eqref{equ on pri ins} only involves fundamental cycle of the universal one dimensional sheaf $\\mathbb{F}$, we have \n$$[\\mathbb{F}]=\\bar{f}^*[\\mathcal{Z}], $$\nwhere $[\\mathcal{Z}]\\hookrightarrow \\mathop{\\rm Sym}\\nolimits^d(S)\\times S$ is the class of incident subvariety and $\\bar{f}=(f,\\textrm{id}_S)$.\nTherefore \n$$\\int_{[P_{md}^{\\mathrm{JS}}(X,d[\\mathbb{P}^1])]^{\\rm{vir}}}\\prod_{i=1}^l\\tau(\\gamma_i)\n=\\int_{[P_{md}^{\\mathrm{JS}}(X,d[\\mathbb{P}^1])]^{\\rm{vir}}}f^*\\Phi, $$\nfor some $\\Phi\\in H^{2(md+1)}(\\mathop{\\rm Sym}\\nolimits^d(S))$. When $m>1$, we have $md+1>2d$, therefore $\\Phi=0$ and \n$$P_{md,d[\\mathbb{P}^1]}^{\\mathrm{JS}}(\\gamma_1,\\ldots,\\gamma_l)=0, \\quad \\mathrm{if}\\,\\,m>1. $$\nFor $m=1$, we have an isomorphism \n\\begin{align*}\n \\mathop{\\rm Hilb}\\nolimits^d(S) &\\stackrel{\\pi^*}{\\cong} P_{d}^{\\mathrm{JS}}(D,d[\\mathbb{P}^1]) \\cong P_{d}^{\\mathrm{JS}}(X,d[\\mathbb{P}^1]), \\\\\nI_Z &\\mapsto \\pi^*I_Z\\mapsto (\\mathcal{O}_X\\to i_*\\pi^*\\mathcal{O}_Z). \n\\end{align*}\nFor $I_X=(\\mathcal{O}_X\\to i_*\\pi^*\\mathcal{O}_Z)$, we write $I_D=(\\mathcal{O}_D\\to\\pi^*\\mathcal{O}_Z)$. As in \\cite[Prop.~4.3]{CMT2}, \\cite[Prop.~4.2]{CKM2}, \nwe have a canonical isomorphism \n$$\\mathop{\\rm Ext}\\nolimits^0_D(I_D,\\pi^*\\mathcal{O}_Z)\\cong \\mathop{\\rm Ext}\\nolimits^1_X(I_X,I_X)_0, $$\nand an inclusion of maximal isotropic subspace \n\\begin{align}\\label{equ1 on excep curve}\\mathop{\\rm Ext}\\nolimits^1_D(I_D,\\pi^*\\mathcal{O}_Z)\\hookrightarrow \\mathop{\\rm Ext}\\nolimits^2_X(I_X,I_X)_0. \\end{align}\nFrom distinguished triangle \n$$I_D\\to \\mathcal{O}_D\\to \\pi^*\\mathcal{O}_Z, $$\nwe obtain a distinguished triangle \n$$\\mathop{\\dR\\mathrm{Hom}}\\nolimits_D(\\pi^*\\mathcal{O}_Z,\\pi^*\\mathcal{O}_Z)\\to \\mathop{\\dR\\mathrm{Hom}}\\nolimits_D(\\mathcal{O}_D,\\pi^*\\mathcal{O}_Z)\\to\\mathop{\\dR\\mathrm{Hom}}\\nolimits_D(I_D,\\pi^*\\mathcal{O}_Z). $$\nBy projection formula, we have \n$$\\mathop{\\dR\\mathrm{Hom}}\\nolimits_D(\\pi^*\\mathcal{O}_Z,\\pi^*\\mathcal{O}_Z)\\cong \\mathop{\\dR\\mathrm{Hom}}\\nolimits_S(\\mathcal{O}_Z,\\mathcal{O}_Z), \\,\\,\\, \\mathop{\\dR\\mathrm{Hom}}\\nolimits_D(\\mathcal{O}_D,\\pi^*\\mathcal{O}_Z)\\cong \\mathop{\\dR\\mathrm{Hom}}\\nolimits_S(\\mathcal{O}_S,\\mathcal{O}_Z). $$ \nTherefore we get an exact sequence \n\\begin{align}\\label{equ2 on excep curve}0=H^1(S,\\mathcal{O}_Z)\\cong H^1(D,\\pi^*\\mathcal{O}_Z)\\to \\mathop{\\rm Ext}\\nolimits^1_D(I_D,\\pi^*\\mathcal{O}_Z) \\to \\mathop{\\rm Ext}\\nolimits^2_S(\\mathcal{O}_Z,\\mathcal{O}_Z)\\to 0. \\end{align}\nBy Serre duality, we have \n\\begin{align}\\label{equ3 on excep curve} \\mathop{\\rm Ext}\\nolimits^2_S(\\mathcal{O}_Z,\\mathcal{O}_Z)\\cong \\mathop{\\rm Ext}\\nolimits^0_S(\\mathcal{O}_Z,\\mathcal{O}_Z)^{\\vee}\\cong H^0(S,\\mathcal{O}_Z)^{\\vee}. \\end{align}\nCombining Eqns.~\\eqref{equ1 on excep curve}, \\eqref{equ2 on excep curve}, \\eqref{equ3 on excep curve}, we obtain a maximal isotropic subspace\n$$H^0(S,\\mathcal{O}_Z)^{\\vee}\\hookrightarrow\\mathop{\\rm Ext}\\nolimits^2_X(I_X,I_X)_0. $$\nWorking in family, we see that the dual of tautological bundle $\\mathcal{O}_S^{[d]}$ on $\\mathop{\\rm Hilb}\\nolimits^d(S)$ is a maximal isotropic subbundle \nof the obstruction bundle of $P_{d}^{\\mathrm{JS}}(X,d[\\mathbb{P}^1])$. 
By Lemma \\ref{lem on indep of cosec}, we obtain\n$$[P_{d}^{\\mathrm{JS}}(X,d[\\mathbb{P}^1])]^{\\mathrm{vir}}=[\\mathop{\\rm Hilb}\\nolimits^d(S)]\\cap c_{d-1}\\left(\\mathcal{O}_S^{[d]}\\right), $$\nfor certain choice of orientation. As for insertions, consider the following diagram \n\\begin{align*} \\xymatrix{\nS & D \\ar[l]_{\\pi} \\ar[r]^{i} & X \\\\\nS\\times \\mathop{\\rm Hilb}\\nolimits^d(S) \\ar[d]^{\\pi_M} \\ar[u]_{\\pi_S} & D\\times \\mathop{\\rm Hilb}\\nolimits^d(S) \\ar[u]_{\\pi_D} \\ar[d]^{\\pi_M} \\ar[l]_{\\bar{\\pi}=(\\pi,\\textrm{id})} \\ar[r]^{\\bar{i}=(i,\\textrm{id})} & X\\times \\mathop{\\rm Hilb}\\nolimits^d(S) \\ar[d]^{\\pi_M} \\ar[u]_{\\pi_X} \\\\\n\\mathop{\\rm Hilb}\\nolimits^d(S) &\\mathop{\\rm Hilb}\\nolimits^d(S) & \\mathop{\\rm Hilb}\\nolimits^d(S), } \n\\end{align*}\nlet $\\mathcal{Z}\\hookrightarrow\\mathop{\\rm Hilb}\\nolimits^d(S)\\times S$ denote the universal zero dimensional subscheme, then \n\\begin{align*}\n\\tau(\\gamma)&=\\pi_{M*}\\left(\\pi_X^*\\gamma\\cdot\\mathop{\\rm ch}\\nolimits_3(\\bar{i}_*\\bar{\\pi}^*\\mathcal{O}_{\\mathcal{Z}})\\right) \\\\\n&=\\pi_{M*}\\left(\\pi_X^*\\gamma\\cdot\\bar{i}_*\\bar{\\pi}^*[\\mathcal{Z}]\\right) \\\\\n&=\\pi_{M*}\\bar{i}_*\\left(\\bar{i}^*\\pi_X^*\\gamma\\cdot\\bar{\\pi}^*[\\mathcal{Z}]\\right) \\\\\n&=\\pi_{M*}\\bar{\\pi}_{*}\\left(\\pi_D^*i^*\\gamma\\cdot\\bar{\\pi}^*[\\mathcal{Z}]\\right) \\\\\n&=\\pi_{M*}\\left(\\bar{\\pi}_{*}\\pi_D^*i^*\\gamma\\cdot [\\mathcal{Z}]\\right)\\\\\n&=\\pi_{M*}\\left(\\pi_S^*\\pi_*i^*\\gamma\\cdot [\\mathcal{Z}]\\right)\\in H^2(\\mathop{\\rm Hilb}\\nolimits^d(S)),\n\\end{align*}\nwhich depends only on $[\\mathcal{Z}]$ and hence it is a pullback from $\\mathop{\\rm Sym}\\nolimits^d(S)$ by the Hilbert-Chow map\n$$\\mathrm{HC} \\colon \\mathop{\\rm Hilb}\\nolimits^d(S)\\to \\mathop{\\rm Sym}\\nolimits^d(S). $$ \nTo sum up, we have \n\\begin{align}\\label{equ on exc cur} P_{d,d[\\mathbb{P}^1]}^{\\mathrm{JS}}(\\gamma_1,\\ldots,\\gamma_l)=\\int_{\\mathop{\\rm Hilb}\\nolimits^d(S)}c_{d-1}\\left(\\mathcal{O}_S^{[d]}\\right)\\cdot \n\\prod_{i=1}^l\\pi_{M*}\\left(\\pi_S^*\\pi_*i^*\\gamma_i\\cdot [\\mathcal{Z}]\\right). \\end{align}\nWhen $d=1$, this reduces to \\cite[Lem.~3.7]{COT1}. When $d>1$, we claim the above integral is zero. \nIn fact, by \\cite[Thm.~4.6]{Lehn}, \nwe have the formula \n\\begin{align*}\n\t\\sum_{m\\geqslant 0} c\\left(\\mathcal{O}_S^{[m]}\\right)z^m =\n\t\\mathrm{exp}\\left(\\sum_{m\\geqslant 1} \\frac{(-1)^{m-1}}{m} q_m(1) z^m \\right) \\cdot 1\n\t\\end{align*}\nwhere $q_m(1)$ are linear maps (called Nakajima operators)\n\\begin{align*}\n\tq_m(1) \\in \\mathop{\\rm End}\\nolimits(\\mathbb{H}), \\quad \n\t\\mathbb{H}=\\bigoplus_{m\\geqslant 0} H^{\\ast}(\\mathrm{Hilb}^m(S), \\mathbb{Q}),\n\t\\end{align*}\nwhich is of bidegree $(m, 2m-2)$. \nBy looking at the bidegree $(d, 2d-2)$-part, \nwe have \n\\begin{align*}\n\tc_{d-1}\\left(\\mathcal{O}_S^{[d]}\\right)=q_d(1)(1), \\quad \\mathrm{where} \\,\\, 1\\in H^0(\\mathop{\\rm Hilb}\\nolimits^0(S)). \n\t\\end{align*}\nBy the definition of $q_d(1)$ in~\\cite[Def.~2.3]{Lehn}, we have $q_d(1)(1)=p_{1\\ast}[\\mathcal{Q}]$\nwhere $\\mathcal{Q}$ is the cycle on \n$\\mathop{\\rm Hilb}\\nolimits^d(S) \\times S$ supported on \n$(x, \\xi)$ with $\\mathrm{Supp}(\\xi)=x$. 
\nTherefore \nwe know $c_{d-1}\\left(\\mathcal{O}_S^{[d]}\\right)$ is supported on $\\mathrm{HC}^{-1}(\\Delta)$, where\n$$\\Delta=\\big\\{(x,\\cdots,x)\\in\\mathop{\\rm Sym}\\nolimits^d(S) \\big\\}\\subseteq \\mathop{\\rm Sym}\\nolimits^d(S)$$ is the small diagonal. \nOur insertion is a pullback from $\\mathop{\\rm Sym}\\nolimits^d(S)$ and gives $(d+1)$-dimensional constrain on $\\mathop{\\rm Sym}\\nolimits^d(S)$. If $d>1$, \n$d+1>2=\\dim_{\\mathbb{C}}\\Delta$, therefore the integral \\eqref{equ on exc cur} is zero. \n\\end{proof}\n\n\\subsection{Small degree curve classes on $X=T^*\\mathbb{P}^2$}\nWhen the $K3$ surface $S$ has a\n$(-2)$-curve $C \\subset S$, \nthe Hilbert scheme $\\mathop{\\rm Hilb}\\nolimits^2(S)$ contains \n$\\mathrm{Sym}^2(C) \\subset \\mathop{\\rm Hilb}\\nolimits^2(S)$\nas a Lagrangian subvariety.\nFor curve classes coming from $\\mathrm{Sym}^2(C) \\cong \n\\mathbb{P}^2$, our invariants can be studied on the local model $X=T^*\\mathbb{P}^2$. \n\nWe have an identification of curve classes:\n\\[ H_2(X,{\\mathbb{Z}}) = H_2({\\mathbb{P}}^2, {\\mathbb{Z}}) = {\\mathbb{Z}} [ \\ell ], \\]\nwhere $\\ell \\subset {\\mathbb{P}}^2$ is a line.\nLet $H \\in H^2(T^{\\ast} {\\mathbb{P}}^2)$ be the pullback of hyperplane class\nand identify $H_2(T^{\\ast} {\\mathbb{P}}^2, {\\mathbb{Z}}) \\equiv {\\mathbb{Z}}$ by its degree against $H$.\nGopakumar-Vafa invariants are given as follows: \n\\begin{prop}\\emph{(\\cite[Cor.~6.2]{COT1})}\\label{cor on inte on local p2}\n\\begin{align*}\nn_{0,d}(H^2,H^2)&=\n\\left\\{\\begin{array}{rcl} 1 &\\mathrm{if} \\,\\, d=1, \\\\ \n -1 &\\mathrm{if} \\,\\, d=2, \\\\\n 0 & \\,\\, \\mathrm{otherwise}. \n\\end{array} \\right. \\\\\nn_{1,1}(H^2)&=0, \\quad n_{2,1}=0. \n\\end{align*}\n\\end{prop}\nIn the stable pair side, we compute invariants for small degree curve classes.\n\\begin{prop}\\label{prop on tp2}\nFor certain choice of orientation, we have \n$$P_{1,1}(H^2,H^2)=1, \\quad P_{1,2}(H^2,H^2)=-1, \\quad P_{1,3}(H^2,H^2)=0, $$\n$$P_{0,1}(H^2)=P_{0,2}(H^2)=0, \\quad P_{0,3}(H^2)=1,\\quad P_{-1,1}=P_{-1,2}=P_{-1,3}=0. $$\nMoreover, $P^t_{n}(X,d)$ is independent of the choice of $t>n\/d$ in the listed cases above. \n\nIn particular, for $X=T^*\\mathbb{P}^2$, we have\n\\begin{itemize}\n\\item Conjecture \\ref{conj on DT4\/GV} (2) holds when $d\\leqslant 3$. \n\\item Conjecture \\ref{conj on DT4\/GV} (3), (4) hold. \n\\end{itemize}\n\\end{prop}\n\\begin{proof}\nAs noted in \\cite[Proof~of~Lem.~6.3]{COT1}, we have a diagram \n\\begin{align*} \\xymatrix{\nX=T^{\\ast}\\mathbb{P}^2 \\ar@{^{(}->}[r]^{i\\quad } & \\mathcal{O}_{\\mathbb{P}^2}(-1)^{\\oplus 3} \\ar[d]^{\\pi} \\\\\n & T, } \n\\end{align*} \nwhere $i$ is a closed imbedding and $\\pi$ contracts $\\mathbb{P}^2$ to a point in \nan affine scheme $T$. \nIt is easy to see that any one dimensional closed subscheme $C\\subset X$ with $[C]=d$ ($d=1,2$) satisfies $\\chi(\\mathcal{O}_C)\\geqslant 1$.\nTherefore by \\cite[Prop.~1,12]{CT1}, we know for $n=-1,0,1$ and $d\\leqslant 3$, the moduli space \n$P_n^t(X,d)$ is independent of the choice of $t>n\/d$. So we may take $t\\to \\infty$ and work with PT stability. \nUsing similar analysis as \\cite[Prop.~3.9]{CKM2}, we know all stable pairs $(\\mathcal{O}_X\\stackrel{s}{\\to} F)$ in the above cases are scheme theoretically supported on \nthe zero section $\\mathbb{P}^2\\subset X$ and $F$ are stable. \nThen obviously $P_{-1}(X,d)=\\emptyset$ if $d\\leqslant 3$ and corresponding invariants vanish. 
\n\nWhen $n=1$, $d\\leqslant 3$, the isomorphism \n$$P_1(X,d)\\cong M_1(X,d), \\quad (\\mathcal{O}_X\\to F)\\mapsto F, $$\nto the moduli space of one dimensional stable sheaves $F$ with $[F]=d[\\ell]$ and $\\chi(F)=1$ will reduce the computation\nto the corresponding one on $M_1(X,d)$ \\cite[Prop.~6.5]{COT1}.\n\nWhen $d=1,2$, we have $P_0(X,d)=\\emptyset$, so invariants are zero. \nFor $d=3$, the support map \n$$P_0(X,3)\\cong P_0(\\mathbb{P}^2,3)\\stackrel{\\cong}{\\to} |\\mathcal{O}_{\\mathbb{P}^2}(3)|\\cong\\mathbb{P}^9, \\quad F\\mapsto \\mathrm{supp}(F) $$\nis an isomorphism. The universal one dimensional sheaf satisfies $\\mathbb{F}=\\mathcal{O}_{\\mathcal{C}}$ for the universal $(1,3)$-divisor\n$\\mathcal{C}\\hookrightarrow \\mathbb{P}^9\\times \\mathbb{P}^2$. \nLet $\\pi_M \\colon\nP_0(X,3)\\times \\mathbb{P}^2\\to P_0(X,3)$ be the projection. Bott's formula implies that \n\\begin{align*}\n \\mathbf{R}\\mathcal{H}om_{\\pi_M}(\\mathcal{O},\\mathcal{O}(-\\mathcal{C})\\boxtimes T^*\\mathbb{P}^2) &\\cong \\mathcal{O}_{\\mathbb{P}^9}(-1)[-2]^{\\oplus 8}, \\\\\n\\mathbf{R}\\mathcal{H}om_{\\pi_M}(\\mathcal{O},\\mathcal{O}(\\mathcal{C})\\boxtimes T^*\\mathbb{P}^2) &\\cong \\mathcal{O}_{\\mathbb{P}^9}(-1)^{\\oplus 8}, \\\\\n\\mathbf{R}\\mathcal{H}om_{\\pi_M}(\\mathcal{O},\\mathcal{O}\\boxtimes T^*\\mathbb{P}^2)&\\cong \\mathcal{O}_{\\mathbb{P}^9}[-1].\n\\end{align*}\nTherefore, we have \n\\begin{align*}\n&\\quad \\, \\mathbf{R}\\mathcal{H}om_{\\pi_M}(\\mathcal{O}_{\\mathcal{C}},\\mathcal{O}_{\\mathcal{C}}\\boxtimes T^*\\mathbb{P}^2)[1]\\\\\n&\\cong \\mathbf{R}\\mathcal{H}om_{\\pi_M}(\\mathcal{O}(-\\mathcal{C})\\to \\mathcal{O},(\\mathcal{O}(-\\mathcal{C})\\to \\mathcal{O}) \\boxtimes T^*\\mathbb{P}^2)[1] \\\\\n&\\cong \\mathcal{O}_{\\mathbb{P}^9}(-1)^{\\oplus 8}\\oplus \\mathcal{O}_{\\mathbb{P}^9}(1)^{\\oplus 8} \\oplus \\mathcal{O}_{\\mathbb{P}^9} \\oplus \\mathcal{O}_{\\mathbb{P}^9}. \n\\end{align*}\nBy Grothendieck-Verdier duality, it is easy to see \n$$\\mathcal{O}_{\\mathbb{P}^9}(-1)^{\\oplus 8}\\oplus \\mathcal{O}_{\\mathbb{P}^9}$$ \nis a maximal isotropic subbundle of $\\mathbf{R}\\mathcal{H}om_{\\pi_M}(\\mathcal{O}_{\\mathcal{C}},\\mathcal{O}_{\\mathcal{C}}\\boxtimes T^*\\mathbb{P}^2)[1]$. \nThe reduced virtual class satisfies \n$$[P_0(X,3)]^{\\mathrm{vir}}=\\pm e\\left(\\mathcal{O}_{\\mathbb{P}^9}(-1)^{\\oplus 8}\\right)\\cap [\\mathbb{P}^9] \\in H_2(\\mathbb{P}^9). $$\nLet $h\\in H^2(\\mathbb{P}^9)$ denote the hyperplane class. It is straightforward to check \n$$\\tau_0(H^2)=[h]. $$ \nBy integration again the virtual class, we have the desired result.\n\\end{proof}\n\n\n \n ","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzigce b/data_all_eng_slimpj/shuffled/split2/finalzzigce new file mode 100644 index 0000000000000000000000000000000000000000..26c23eb5df04e9754721ab010396598aab134d0a --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzigce @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\n\tCounting stars is a powerful method for probing galactic structure,\nbut until recently it has been limited to stars $I<19$. At fainter magnitudes\ngalaxies vastly outnumbers stars. Although galaxies are typically resolved \neven in ground-based images and therefore can usually be distinguished from \nthe point-like stars, some galaxies with steep surface-brightness profiles\navoid detection and pollute the sample. 
The problem grows worse rapidly at \nlower flux levels since the galaxies become smaller, fainter, and more \nnumerous. Heretofore, intrinsically faint stars could therefore be studied\nonly when they were found nearby. For the faintest stars, the volume probed \nwas so small that measurements of the luminosity function (LF) were both\nhighly uncertain and highly controversial. One result of this is that most\npeople have assumed that the mass function (MF), which is derived from the LF\nusing a mass-luminosity relation, continued with its Salpeter slope\n\\begin{equation} \n{d N\\over d\\log M}\\propto M^\\alpha\\qquad (\\alpha=-1.35,\\rm Salpeter)\n\\label{eq:salp}\n\\end{equation}\nas measured for relatively massive stars. This then led to the assumption\nthat there was a large quantity of stellar matter which was not observed\nbut must ``certainly'' be there if only our instruments were powerful enough\nto see them. Hence, people would routinely quote high mass-to-light\nratios $(M\/L\\sim 10)$ for the luminous components of galaxies believing that\n``dark matter'' was needed only to account for the rest. In the case of the \nMilky Way disk, at least, we now have the powerful instrument at our disposal,\nbut we do not see the stars. In the case of the bulge, we are able to see\nmuch fainter than before, although we still do not probe directly the region\nof the MF corresponding to the place where the disk MF turns over. \nNevertheless, we must begin to suspect that the disk and bulge MFs are similar\nand that the large mass which is dynamically determined to be associated with \nthe luminous components of galaxies is not in the form of low-luminosity\nstars.\n\n\\section{The Disk Mass and Luminosity Functions}\n\n\tGould, Bahcall, \\& Flynn~\\cite{gbfI} identified 192 M dwarf stars in \n22 fields\nimaged by the Wide Field Camera (WFC2) on HST to an average limiting\nmagnitude of $I=23.7$, about 100 times fainter than the limit of typical of \nground-based surveys. We combined these with a brighter sample of 65 M dwarfs\nidentified in 162 fields imaged with the pre-repair Planetary Camera. We \nfound that the LF clearly peaks at about $M_V\\sim 12$ ($M_I\\sim 9$). The\ntransformation from an LF to an MF requires some care because the \nmass-luminosity relation is non-linear. However, using the empirically \nmeasured relation of Henry \\& McCarthy~\\cite{hm}, we found that the MF peaks\nat about $M\\sim 0.6\\,M_\\odot$. The detailed structure of the faint end of \nthe LF remained poorly determined because there were only a total of 23 stars\nwith $M_V>13.5$. However, we have now analyzed an additional 31 WFC2 fields\nwhich contain a total of 24 stars in this faint region.~\\cite{gbfII} \nWe now find a clear\nbreak in the MF at $M\\sim 0.6\\,M_\\odot$. 
In contrast to Eq.~\\ref{eq:salp},\n\\begin{equation}\n\\alpha\\sim -1.2 \\quad (M>0.6 M_\\odot);\\qquad \\alpha\\sim 0.4\\quad\n(M<0.6\\,M_\\odot)\\label{eq:realmf}\n\\end{equation}\nEven after correcting for binaries (to which HST is almost completely \ninsensitive) the slope at the low-mass end is only $\\alpha\\sim 0.1$.\nThere are perhaps hints of a rise in the MF at the very last bin, but the\nstatistics are too poor to resolve this issue.\n\n\\section{Bulge Luminosity Function and Mass Function}\n\n\tLight, Baum, \\& Holtzman~\\cite{lbh} have used the WFC2 to measure\nthe LF of the galactic bulge in Baade's Window to an apparent magnitude \n$V\\sim 26$.\nThis is not as deep as the images used to measure the disk LF primarily\nbecause the bulge fields are limited by crowding. Moreover, since the bulge \nis 8 kpc away, while the disk stars can be seen as close as 0.5 kpc, \n(corresponding to an additional factor of 250 in apparent brightness), the\nbulge LF measurement is cutoff about 10 magnitudes (factor 10,000 in \nluminosity) brighter than the disk LF. Even so, this is a factor $\\sim 100$\nimprovement on pre-HST efforts. The results are noteworthy: to the limit\nto which it can be measured, $M_V\\sim 10$, the bulge LF coincides with the\ndisk LF. Since the heavy-element abundance of bulge stars is similar to those\nin the solar neighborhood, the mass-luminosity relation should be similar.\nHence the MFs of the two populations should also be similar. This suggests\nthat perhaps the MFs are also the same at the low mass end. If so, this\nleads to some rather dramatic conclusions.\n\n\tThe dynamically-measured mass of the bulge is \n$\\sim 2\\times 10^{10}\\,M_\\odot$. Han finds that the stars observed by\nLight et al.\\ account for half of this mass, but can account for no more than\n1\/10 of the observed microlensing events.~\\cite{han} \nIf the bulge LF is extended using the disk LF and similarly converted into an \nMF, this would account for 70\\% of the bulge mass, but less than 1\/2 the\nmicrolensing events and essentially none of the short events. Only when\nHan adds in the remaining 30\\% of the mass in brown dwarfs \n($M\\sim 0.08\\,M_\\odot$) can he account for these short events. In brief,\nstar count work on the luminous populations seems to suggest that much of\nthe mass in these components is composed of brown dwarfs or other dark objects\nof similar mass.\n\n\\section{Hubble Deep Field Search For Halo Stars}\n\n\tThe Hubble Deep Field (HDF) with a total of 10 days of integration\nprovides a unique opportunity to probe for extreme halo objects. \nFlynn, Gould, \\& Bahcall~\\cite{fgb} found that stars could be separated from\ngalaxies to a limiting magnitude $I=26.3$, about 10 times fainter than\ntypical WFC2 fields used to measure the disk LF. Most known populations of\nstars in the Galaxy will not generate counts near this faint limit simply\nbecause to do so they would have to be so far away that they would be outside\nthe Galaxy! Since the faintest magnitudes reached by HDF are essentially free\nof known populations, it can be used to search for objects that are so \nintrinsically faint that they would have escaped notice in earlier studies.\nThe only ``expected'' candidate of this type are the white dwarfs, for which\nHDF give us the first meaningful limits:\n\\begin{equation} \nf < 0.31 \\times 10^{0.72[(V-I)-1.8]},\\label{eq:wdlimit}\n\\end{equation}\nwhere $f$ is the halo fraction of $0.5\\,M_\\odot$ white dwarfs and $(V-I)$ is\ntheir color. 
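(For illustration, Eq.~\\ref{eq:wdlimit} evaluates to $f<0.31$ at $(V-I)=1.8$ and to $f<0.43$ at $(V-I)=2.0$; these are simply the formula evaluated at two representative colors.)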
Thus, HDF tells us white dwarfs in the expected color range\nmake up no more than 1\/2 to 1\/3 of the halo. More generally, HDF constrains\nall classes of objects with absolute magnitude $M_I$, mass $M$, and halo \nfraction $f$ by,\n\\begin{equation}\nM_I > 17.2 + {5\\over 3}\\log\\biggl(f{0.08\\,M_\\odot\\over M}\\biggr)\\qquad\n(V-I>1.8),\\label{eq:alllimit}\n\\end{equation}\nwhere I have scaled the mass to the maximum of the brown dwarf regime. This\nlimit is 10 times fainter than the faintest star ever observed and 100 times\nfainter than the faintest halo star ever observed. In brief, a significant\npopulation (but not the whole halo) of white dwarfs is still permitted, but\nordinary halo stars simply do not contribute to the mass of the Galaxy.\n\n\\section{HDF Limits on Intergalactic Stars}\n\n\tIntergalactic stars are not often regarded as candidates for dark \nmatter, but many cosmological scenarios produce stars at very early times\nand these must be distributed approximately as the dark matter. Thus, it\nis of interest to determine their density. HDF can be used to search\nfor K giant stars over a volume of about 70 cubic kiloparsecs outside the\nGalaxy (but inside the Local Group). The density is at least a factor\n3000 times lower than the local density of giant stars and so more than\n300,000 times below the local dark matter density (assuming a locally \nmeasured MF). Of course, the Local Group dark matter density is\nabout 10,000 times lower than the nearby density, so intergalactic\nstars make up less than 1\/30 of the dark matter in the Local Group.\n\n\\section*{Acknowledgments}\nThis work was supported in part by NSF grant AST 9420746.\n\n\\section*{References}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:introduction:1}\n\nLow-density parity-check (LDPC) codes offer excellent tradeoffs between performance and complexity for error correction in communication systems. Quasi-cyclic (QC) LDPC codes in particular have proved extremely attractive due to their implementation advantages, both in encoding and decoding \\cite{Li_QC_encoders:06, Dai:08, Mansour:07}. Many analyses of QC-LDPC codes have been carried out based on optimization of parameters such as the minimum Hamming distance of the code or the girth of the Tanner graph. However, it has been shown that an excellent first-order measure of performance over the AWGNC is the minimum \\emph{pseudo-weight} of the code \\cite{Wiberg}. So far, few results exist in the literature on the minimum pseudo-weight of QC-LDPC and related codes. \n\nSpectral graph analysis was used in~\\cite{Tanner:01:1}, and more\nrecently, in~\\cite{Vontobel:Koetter:04:1}, to obtain bounds on the\nminimum Hamming weight, and minimum AWGNC pseudo-weight, respectively, of a length-$n$\n$(c,d)$-regular code $\\mathcal{C}$ over the binary field $\\Ftwo$:\n $$d_{\\mathrm{min}}\\geqw_{\\mathrm{p}}^{\\mathrm{min}}(\\matr{H}) \\geq n \\frac{2c -\n \\lambda_2}{\\lambda_1 - \\lambda_2}; d_{\\mathrm{min}} \\geq n\\frac{2}{d}\n \\frac{2c +d - 2 - \\lambda_2}{\\lambda_1 - \\lambda_2},$$ with\n $\\lambda_1 > \\cdots > \\lambda_s$ being the distinct ordered\n eigenvalues of $\\matr{H}^{\\mathsf{T}} \\matr{H}\\in \\mathbb{R}^{n\\times n}$ (where $\\matr{H}$ is viewed as\n a matrix in $ \\mathbb{R}^{m\\times n}$). These bounds are, for most codes,\n loose. 
However, in particular cases, like the projective geometry\n codes \\cite{Vontobel:Smarandache:05:1, Smarandache:Vontobel:07,Kou:Lin:Fossorier:01:1}, they are attained.\n A current problem with these bounds is that for most LDPC codes, it is not practical to evaluate the eigenvalues $\\lambda_1,\\lambda_2$ due to the size of the matrix $\\matr{H}^{\\mathsf{T}} \\matr{H}$. \n \n In this paper we show how to compute the AWGN pseudo-weight lower bound for quasi-cyclic\n (QC) and related codes by utilizing the $\\mathcal A$-submodule\n structure of quasi-cyclic codes, ${\\cal A}= \\mathbb{R}[X]\/(X^r-1)$ \\cite{Lally:Fitzpatrick:01, Ling:Sole:01, Ling:Sole:03}. \n In particular, we begin by showing how the\n polynomial parity-check matrix that describes a cyclic code can be used\n to compute the required eigenvalues, and then generalize this approach to compute the required eigenvalues for QC codes. \n We also define the class of ``nested circulant'' matrices, and show that these have eigenvalues\n which are given by evaluating a multivariate associated\n polynomial at points whose coordinates are particular roots of\n unity. Finally, we give a necessary \n condition for the pseudo-weight lower bound to be attained when $\\matr{H}$ is circulant\n and show a few classes of cyclic codes satisfying this criterion.\n\n\n\n\n\\section{Basic Notation and Definitions}\n\\label{sec:notation:1}\n\nAll codes in this paper will be binary linear codes of a certain\nlength $n$ specified through a (scalar) parity-check matrix\n$\\matr{H}=(h_{j,i}) \\in \\GF{2}^{m \\times n}$ as the set of all\nvectors $\\vect{c} \\in \\Ftwo^n$ \\ such that $\\matr{H} \\cdot \\vect{c}^\\mathsf{T} =\n\\vect{0}^\\mathsf{T}$, where ${}^\\mathsf{T}$ denotes transposition. The minimum\nHamming distance of a code $\\code{C}$ will be denoted by $d_{\\mathrm{min}}(\\code{C})$. The\nfundamental cone $\\fch{K}{H}$ of $\\matr{H}$ is the set of all vectors\n$\\boldsymbol{\\omega} \\in \\mathbb{R}^n$ that satisfy\n \\begin{alignat}{2}\n \\omega_i\n &\\geq 0 \n \\ \n &&\\text{for all $i \\in \\set{I}(\\matr{H})$} \\; , \n \\label{eq:fund:cone:def:1} \\\\\n \\omega_i\n &\\leq\n \\sum_{i' \\in \\set{I}_j(\\matr{H}) \\setminus i} \\!\\!\n \\omega_{i'}\n \\ \n &&\\text{for all $j \\in \\set{J}(\\matr{H})$, \\ \n $i \\in \\set{I}_j(\\matr{H})$} \\; ,\n \\label{eq:fund:cone:def:2}\n \\end{alignat}\n where $\\set{J}(\\matr{H})$ and $\\set{I}(\\matr{H})$ denote the sets of row\n and column indices of $\\matr{H}$ respectively, and $\\set{I}_j(\\matr{H}) \\triangleq \\{ i \\in\n \\set{I} \\ | \\ h_{j,i} = 1 \\}$ for each $j \\in \\set{J}(\\matr{H})$. A vector $\\boldsymbol{\\omega} \\in \\fch{K}{H}$ is called a \\emph{pseudo-codeword}. The AWGNC \\emph{pseudo-weight} of a\n pseudo-codeword $\\boldsymbol{\\omega}$ is defined to be $w_{\\mathrm{p}}(\\boldsymbol{\\omega}) =\n w_{\\mathrm{p}}^{\\mathrm{AWGNC}}(\\boldsymbol{\\omega}) \\triangleq \\lVert \\boldsymbol{\\omega} \\rVert_1^2 \/ \\lVert \\boldsymbol{\\omega}\n \\rVert_2^2$. (For a motivation of these definitions,\n see~\\cite{Vontobel:Koetter:05:1:subm,\n Koetter:Li:Vontobel:Walker:07:1}). The minimum of the AWGNC\n pseudo-weight over all nonzero pseudo-codewords is called the minimum\n AWGNC pseudo-weight and is denoted by $w_{\\mathrm{p}}^{\\mathrm{min}}(\\matr{H})$.\n\nFor any integer $s \\ge 1$, let $R_s = \\{ \\exp(\\imath 2 \\pi r \/ s) \\; : \\; 0 \\le r < s \\}$\ndenote the set of complex $s$-th roots of unity, and let $R_s^{-} =\nR_s\\backslash \\{ 1 \\}$. 
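For example, $R_4 = \\{1, \\imath, -1, -\\imath\\}$ and $R_4^{-} = \\{\\imath, -1, -\\imath\\}$.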
The symbol ${}^*$ denotes complex conjugation. Also, an $r \\times r$ circulant matrix $\\matr{B}$, whose entries are square $L \\times L$ matrices, will be called an \\emph{$L$-block circulant} matrix; we shall denote this by\n\\[\n\\matr{B} = \\mathrm{circ}(\\matr{b}_0, \\matr{b}_1, \\cdots, \\matr{b}_{r-1})\n\\] \nwhere the (square $L \\times L$ matrix) entries in the first column of $\\matr{B}$ are $\\matr{b}_0$, $\\matr{b}_1$, ... , $\\matr{b}_{r-1}$ respectively. \n\n Finally, $\\mathbb{Z}$, $\\mathbb{R}$, ${\\mathbb C}$, and $\\Ftwo$ will be the ring of integers,\n the field of real numbers, the complex field, and the finite field\n of size $2$, respectively. For a positive integer $L$, $[L]$ will\n denote the set of nonnegative integers smaller than $L$:\n $[L]=\\{0,1,\\ldots, L-1\\}$.\n\n \\section{Computing the Eigenvalues of $\\matr{H}^{\\mathsf{T}}\\matr{H}$ \nfor a QC Code}\n\\label{sec:eigenvalues}\n\nIn this section we will show that the polynomial representation of a\nQC code will prove very helpful in computing the eigenvalues of the\nlarge matrix $\\matr{H}^{\\mathsf{T}}\\matr{H}$, easing in this way the computation\nof the lower bound\n\\begin{align}\n\\label{lowerbound} d_{\\mathrm{min}}&\\geqw_{\\mathrm{p}}^{\\mathrm{min}}(\\matr{H}) \\geq n \\frac{2c -\n \\lambda_2}{\\lambda_1 - \\lambda_2}.\n\\end{align}\nThis section is organized in three subsections. In Sec.\n\\ref{subsec:circulantmatrix} and \\ref{subsec:QCcodes} we provide\nsome background on circulant matrices and QC codes. Section\n\\ref{subsec:eigenvaluesQC} will contain the main result on the\neigenvalues of $\\matr{H}^{\\mathsf{T}}\\matr{H}$, where $\\matr{H}$ is the\nparity-check matrix of a QC code.\n\\subsection{ Eigenvalues of a Circulant Matrix} \n\\label{subsec:circulantmatrix}\n\nThe eigenvalues of a square circulant matrix are well known\n\\cite{MacWilliams:Sloane:98}. If $\\matr{B}\\in {\\mathbb C}^{n\\times n}$ is a\ncirculant matrix and $w(X)= b_0+b_1X+\\ldots +b_{n-1}X^{n-1}$ its\n(column) associated polynomial, then the eigenvalues of $\\matr{B}$ are\ngiven by this polynomial's evaluation at the complex $n$-th roots of unity, i.e. $w(x)$ for all $x \\in R_n$. \n\nThe following gives a proof of this result based on the polynomial representation of a circulant\nmatrix. It may be seen as a special case of the method we present later for QC codes.\n\nLet $\\lambda$ be an eigenvalue of $\\matr{B}$. Then there exists a nonzero\nvector $\\vect{v}=(v_0, \\ldots,\n v_{n-1})^\\mathsf{T} \\in {\\mathbb C}^{n}$ such that\n\\begin{align*}\n&\\matr{B}\\vect{v}=\\lambda\\vect{v}.\n \\end{align*}\n In polynomial form, this equation is equivalent to (here $v(X) = v_0+v_1X+\\ldots +v_{n-1}X^{n-1}$):\n \\begin{align*}\n &w(X)v(X)=\\lambda v(X) \\mod (X^n-1) {~\\rm iff}\n \\\\& X^n-1~|~w(X)v(X)-\\lambda v(X) {~\\rm in~} {\\mathbb C}{~\\rm iff}\\\\\n &w(x)v(x)=\\lambda v(x), \\forall x \\in R_n{~\\rm iff}\\\\\n &(w(x)-\\lambda)v(x)=0, \\forall x\\in R_n \\; .\\\\\n\\end{align*}\nFor each $x\\in R_n$, $\\lambda= w(x)$ is a solution of the above\nequation, and therefore it is an eigenvalue for the matrix $\\matr{B}$.\nThere are $n$ such solutions, therefore, these are all possible\neigenvalues of $\\matr{B}$.\n\nIn the next theorem we will consider an \\emph{$L$-block circulant} matrix instead of a circulant matrix. 
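To fix ideas, here is a small illustration (an example we add for concreteness): for $r=L=2$, take\n\\[\n\\matr{B} = \\mathrm{circ}(\\matr{b}_0, \\matr{b}_1) =\n\\begin{bmatrix} \\matr{b}_0 & \\matr{b}_1 \\\\\n\\matr{b}_1 & \\matr{b}_0 \\end{bmatrix},\n\\qquad\n\\matr{b}_0 = \\begin{bmatrix} 1 & 0 \\\\\n1 & 1 \\end{bmatrix},\n\\quad\n\\matr{b}_1 = \\begin{bmatrix} 0 & 1 \\\\\n1 & 0 \\end{bmatrix};\n\\]\nthe associated matrix polynomial is $\\matr{W}(X)=\\matr{b}_0+\\matr{b}_1X$, and the theorem below gives the spectrum of $\\matr{B}$ as the union of the spectra of $\\matr{W}(1)=\\matr{b}_0+\\matr{b}_1$ and $\\matr{W}(-1)=\\matr{b}_0-\\matr{b}_1$, namely $\\{1+\\sqrt{2},\\,1-\\sqrt{2},\\,1,\\,1\\}$, as one can check directly.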
This theorem may be found in \\cite{Tee:05}; \nwe provide here an alternative proof based on the polynomial representation.\n\\begin{theorem}\\label{theorem2} \n Let $\\matr{B}=\\mathrm{circ}(\\matr{b}_0, \\matr{b}_1, \\cdots, \\matr{b}_{r-1})\\in {\\mathbb C}^{rL\\times rL}$ be an\n $L$-block circulant matrix. Let $\\matr{W}(X)= \\matr{b}_0+\\matr{b}_1X+\\ldots\n +\\matr{b}_{r-1}X^{r-1}$ be its (column) associated matrix polynomial.\n Then the eigenvalues of $\\matr{B}$ are given by the union of the\n eigenvalues of the $L\\times L$ matrices $\\matr{W}(x)$, for all $x\\in\n R_r$.\n\\end{theorem} \n \n\\IEEEproof The proof follows the reasoning used above in the scalar case.\n\nLet $\\lambda$ be an eigenvalue of $\\matr{B}$. Then there exists a nonzero\nvector $\\vect{v}\\triangleq(v_0, \\ldots, v_{rL-1})^\\mathsf{T}\\in {\\mathbb C}^{rL}$ such\nthat\n\\begin{align}\\label{eigenvalue}\n &\\matr{B}\\vect{v}=\\lambda\\vect{v}.\n \\end{align}\n Let $\\vect{p}(X)\\in {\\mathbb C}^{L}[X]$ be given by $\\vect{p}(X)=(v_0, \\ldots,\n v_{L-1})^\\mathsf{T}+(v_L, \\ldots, v_{2L-1})^\\mathsf{T} X+\\ldots+ (v_{(r-1)L},\n \\ldots, v_{rL-1})^\\mathsf{T} X^{r-1}.$ In polynomial form,\n equation~\\eqref{eigenvalue} is equivalent to:\n \\begin{align*}\n &\\matr{W}(X) \\vect{p}(X)=\\lambda \\vect{p}(X) \\mod (X^r-1) {~\\rm iff}\n \\\\& X^r-1~|~\\matr{W}(X)\\vect{p}(X)-\\lambda \\vect{p}(X) {~\\rm in~} {\\mathbb C} {~\\rm iff}\\\\\n &\\matr{W}(x)\\vect{p}(x)=\\lambda \\vect{p}(x), \\forall x\\in R_r.\n\\end{align*}\nThe last equation is the equation for the eigenvalues of the matrix\n$\\matr{W}(x)$. Each such matrix has $L$ eigenvalues, counting\nmultiplicities, and there are $r$ distinct complex numbers in $R_r$; this accounts for the total number $rL$ of eigenvalues of\n$\\matr{B}$. 
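(Note also that since $\\vect{v}\\neq\\vect{0}$, the vector polynomial $\\vect{p}(X)$ is nonzero, and a nonzero polynomial of degree at most $r-1$ cannot vanish at all $r$ points of $R_r$; hence $\\vect{p}(x)\\neq\\vect{0}$ for at least one $x\\in R_r$, so every eigenvalue of $\\matr{B}$ indeed occurs as an eigenvalue of some $\\matr{W}(x)$.)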
The eigenvectors can also be deduced from the above.\n\n\n\\subsection{Definition and Properties of QC Codes} \n\\label{subsec:QCcodes}\nA linear QC-LDPC code $\\code{C}_{\\rm QC}\\triangleq\\codeCQC{r}$ of length\n$n = rL$ can be described by an $rJ \\times rL$ (scalar) parity-check\nmatrix $\\matr{\\bar H}_{\\rm QC}^{(r)}\\triangleq\\matr{\\bar H}$ that is formed by a $J\n\\times L$ array of $r \\times r$ circulant matrices.\n\\begin{align}\n\\matr{\\bar H}=\\left[\n\\begin{array}{cccc}\n\\matr{P}_{1,1} & \\matr{P}_{1,2} & \\ldots & \\matr{P}_{1,L} \\\\\n\\matr{P}_{2,1} & \\matr{P}_{2,2} & \\ldots & \\matr{P}_{2,L} \\\\\n\\vdots &\\vdots &\\ldots &\\vdots \\\\\n\\matr{P}_{J,1} & \\matr{P}_{J, 2} & \\ldots & \\matr{P}_{J, L} \n\\end{array}\n \\right], \n\\end{align}\nwhere the entries $\\matr{P}_{i,j}$ are $r\\times r$ circulant matrices.\nClearly, by choosing these circulant matrices to be low-density, the\nparity-check matrix will also be low-density.\n\nWith the help of the well-known isomorphism between the ring of\n$r\\times r$ circulant matrices and the ring of polynomials modulo $X^r\n- 1$, to each matrix $\\matr{P}_{i,j}$ we can associate a polynomial\n$p_{i,j}(X)$, and thus a QC-LDPC code can equivalently be described by\na polynomial parity-check matrix $\\matr{P}(X)$ of size $J \\times L$,\nwith polynomial operations performed modulo $X^r-1$:\n\\begin{align}\n\\matr{P}(X)=\\left[\n\\begin{array}{cccc}\np_{1,1}(X) & p_{1,2}(X) & \\ldots & p_{1,L}(X) \\\\\np_{2,1}(X) & p_{2,2}(X) & \\ldots & p_{2,L}(X) \\\\\n\\vdots &\\vdots &\\ldots &\\vdots \\\\\np_{J,1}(X) & p_{J, 2}(X) & \\ldots & p_{J, L}(X) \n\\end{array}\n \\right].\n\\end{align}\n \nBy permuting the rows and columns of the scalar parity-check matrix\n$\\matr{\\bar H}$,\\footnote{i.e., by taking the first row in the first block\n of $r$ rows, the first row in the second block of $r$ rows, etc.,\n then the second row in the first block, the second row in the second\n block, etc., and similarly for the columns.} we obtain an equivalent\nparity-check matrix representation $\\matr{H}$ for the QC code\n$\\codeCQC{r}$,\n\n\\begin{align}\\matr{H}\n &\\triangleq\n \\begin{bmatrix}\n \\matr{H}_0 & \\matr{H}_{r-1} & \\cdots \n & \\matr{H}_1 \\\\ \n \\matr{H}_1 & \\matr{H}_0 & \\cdots \n & \\matr{H}_2 \\\\ \n \\vdots & \\vdots & \\ddots \n & \\vdots \\\\ \n \\matr{H}_{r-1} & \\matr{H}_{r-2} & \\cdots \n & \\matr{H}_0\n \\end{bmatrix}.\n\\label{eq:matrix_1_bijection}\n\\end{align}\nwhere $\\matr{H}_0, \\matr{H}_1, \\ldots, \\matr{H}_{r-1}$ are scalar $J\n\\times L$ matrices. The connection between the two representations is\n\\begin{align}\n\\matr{H}_0 + \\matr{H}_1 X + \\cdots + \\matr{H}_{r-1} X^{r-1}=\\matr{P}(X).\n\\label{eq:matrix_2_bijection}\n\\end{align}\n\n\\subsection{The Eigenvalues of the Matrix $\\matr{H}^\\mathsf{T}\\cdot \\matr{H}$ of a QC Code}\n\\label{subsec:eigenvaluesQC}\n\nNote that for a fixed value of $r \\ge 1$, (\\ref{eq:matrix_2_bijection}) provides a simple bijective correspondence between the set of polynomial matrices $\\matr{P}(X) \\in (\\mathbb{R}[X]\/(X^r-1))^{J \\times L}$ and the set of parity-check matrices of the form (\\ref{eq:matrix_1_bijection}). Furthermore, the product of two such polynomial matrices, where defined, yields another which corresponds via this bijection with the product of the corresponding parity-check matrices in the form (\\ref{eq:matrix_1_bijection}). 
Also note that transposition of a polynomial matrix in the form (\\ref{eq:matrix_2_bijection}) corresponds to transposition of the corresponding parity-check matrix in the form (\\ref{eq:matrix_1_bijection}), under this bijection.\n\nIt follows that $\\matr{H}^\\mathsf{T}\\cdot\\matr{H}$ is an $L$-block circulant matrix; applying Theorem~\\ref{theorem2} to this matrix yields the following corollary.\n\n\\begin{corollary}\n \n\n The eigenvalues of $\\matr{H}^\\mathsf{T}\\cdot\\matr{H}$ are given\n by the union of the eigenvalues of the $L\\times L$ matrices\n $\\matr{P}^\\mathsf{T}(x^*)\\cdot \\matr{P}(x),$ for $x\\in R_r$.\n\\label{cor:QC_codes}\n\\end{corollary} \n\\IEEEproof We apply Theorem~\\ref{theorem2} to the $L$-block circulant\nmatrix $\\matr{H}^\\mathsf{T}\\cdot\\matr{H}\\triangleq \\mathrm{circ}(\\matr{b}_0, \\matr{b}_1, \\cdots, \\matr{b}_{r-1})\\in {\\mathbb C}^{rL\\times rL}$ and form the matrix $\\matr{W}(X)= \\matr{b}_0+\\matr{b}_1X+\\ldots\n+\\matr{b}_{r-1}X^{r-1}$. This is equal to the product of the two\nmatrix polynomials of $\\matr{H}^\\mathsf{T}$ and $\\matr{H}$,\nwhich are $\\matr{H}_0^{\\mathsf{T}} + \\matr{H}_{r-1}^{\\mathsf{T}} X +\n\\cdots + \\matr{H}_{1}^{\\mathsf{T}} X^{r-1} = X^r\\matr{P}^\\mathsf{T}(1\/X)$ and $\\matr{H}_0 + \\matr{H}_1 X +\n\\cdots + \\matr{H}_{r-1} X^{r-1} = \\matr{P}(X)$, respectively. Therefore\n$\\matr{W}(X)=(X^r\\matr{P}^\\mathsf{T}(1\/X))\\cdot\\matr{P}(X)$ and so the eigenvalues of $\\matr{H}^\\mathsf{T}\\cdot\\matr{H}$ are the eigenvalues of $\\matr{P}^\\mathsf{T}(1\/x)\\cdot\n\\matr{P}(x)$, for all $x\\in R_r$; these are then equal to the eigenvalues of $\\matr{P}^\\mathsf{T}(x^*)\\cdot\n\\matr{P}(x)$, for all $x\\in R_r$ (as $x^*=1\/x$ for all such $x$).\n\n\\begin{example}\\label{Tannerexample}\n Let $r=31$ and consider the $(3,5)$-regular QC-LDPC code given by the\n scalar $93 \\times 155$ matrix\\footnote{Here $\\matr{I}_\\ell$ denotes the $31 \\times 31$ identity matrix with rows\nshifted cyclically to the left by $\\ell$ positions.}\n\\begin{align*}\n {\\matr{\\bar H}} &= \\begin{bmatrix}\n \\matr{I}_1 & \\matr{I}_2 & \\matr{I}_4 & \\matr{I}_8 & \\matr{I}_{16}\\\\\n \\matr{I}_5 & \\matr{I}_{10} & \\matr{I}_{20} & \\matr{I}_9 & \\matr{I}_{18}\\\\\n \\matr{I}_{25}& \\matr{I}_{19} & \\matr{I}_7 & \\matr{I}_{14}&\n \\matr{I}_{28}\n \\end{bmatrix}.\n \\end{align*}\n The polynomial parity-check matrix $\\matr{P}(X)\\in\n (\\mathbb{R}[X]\/(X^r-1))^{3 \\times 5}$ is\n \\begin{align*}\n \\matr{P}(X)\n &= \\begin{bmatrix}\n X & X^2 & X^4 & X^8 & X^{16}\\\\\n X^5 & X^{10} & X^{20} & X^9 & X^{18}\\\\\n X^{25} & X^{19} & X^7 & X^{14}& X^{28}\n \\end{bmatrix} \\; .\n \\end{align*}\n This code is the famous $(3,5)$-regular QC-LDPC code of length $155$\n presented in~\\cite{Tanner:Sridhara:Fuja:01:1}. Note that the code\n parameters are $[155, 64, 20]$. The corresponding matrix\n $\\matr{H}$ in the form (\\ref{eq:matrix_1_bijection}) is a $31\\times 31 $ matrix with block entries\n $\\matr{H}_i$, $i\\in [31]$ obtained by decomposing $\\matr{P}(X)$\n according to the powers of $X$:\n \\begin{align}\n\\matr{P}(X)=\\matr{H}_0 + \\matr{H}_1 X + \\cdots + \\matr{H}_{30} X^{30}.\n\\end{align}\nObviously only $15$ matrices among the $\\matr{H}_i$ are nonzero, and all\nof these contain only one $1$, the other entries being zero.\n\nThe matrix $\\matr{H}^\\mathsf{T}\\cdot\\matr{H}$ is a $5$-block circulant matrix. 
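(Concretely, $\\matr{H}^\\mathsf{T}\\cdot\\matr{H}$ is a $155\\times 155$ real symmetric matrix, so the reduction to $31$ Hermitian $5\\times 5$ matrices carried out next is a substantial computational saving.)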
Corollary \\ref{cor:QC_codes} above tells us\nthat in order to compute its eigenvalues, we need to form the matrices\n$\\matr{P}^\\mathsf{T}(\\rho^{-i})\\cdot \\matr{P}(\\rho^i)$, for all $i\\in [31]$ (here $\\rho$ denotes a primitive complex $31$-th root of unity). We\nhave that\n\\begin{align*}\n \\matr{P}^\\mathsf{T}(1\/x) &= \\begin{bmatrix}\n x^{30} & x^{29} & x^{27} & x^{23} & x^{15}\\\\\n x^{26} & x^{21} & x^{11} & x^{22} & x^{13}\\\\\n x^{6} & x^{12} & x^{24} & x^{17}& x^{3}\n \\end{bmatrix}^\\mathsf{T} \\;\n \\end{align*}\nand\n\\begin{align*}\n & \\matr{P}^\\mathsf{T}(1\/x)\\cdot \\matr{P}(x)=\n \\begin{bmatrix}\n 3 & a & e^* & c & e^* \\\\\n a^* & 3 & b & a^* & d \\\\\n e & b^* & 3 & c & b^* \\\\\n c^* & a & c^* & 3 & d \\\\\n e & d^* & b & d^* & 3 \\end{bmatrix} \\;,\n\\end{align*}\nfor all $x \\in R_{31}$, where \n\\begin{align*}\n a&=x + x^5 + x^{25}; b = x^2 + x^{10} + x^{19};\n c = x^4 + x^7 + x^{20};\\\\\n d& =x^8 + x^9 + x^{14}; e = x^{16} + x^{18} +\n x^{28}.\n\\end{align*}\nObviously for $i\\in [31]$, each matrix $\\matr{P}^\\mathsf{T}(\\rho^{-i})\\cdot\n\\matr{P}(\\rho^i)$ is Hermitian (in fact nonnegative definite), hence each has $5$ real nonnegative eigenvalues,\ngiving a total of $31\\cdot 5=155$ nonnegative eigenvalues for $\\matr{H}^\\mathsf{T}\\cdot\\matr{H}$.\n\nWe obtain that for each $i \\in [31], i\\neq 0$,\nthe associated polynomial of $\\matr{P}^\\mathsf{T}(\\rho^{-i})\\cdot\n\\matr{P}(\\rho^i)$ may be written as (using $\\rho^{31} = 1$)\n\\begin{eqnarray*}\nu(\\lambda) & = & \\lambda^2(\\lambda^3 - 15 \\lambda^2 + 62 \\lambda - 62) \\\\\n& = & \\lambda^2(\\lambda-\\lambda_2)(\\lambda-\\lambda_3)(\\lambda-\\lambda_4)\n\\end{eqnarray*}\nwhere $\\lambda_2 = 8.6801$, $\\lambda_3 = 4.8459$ and $\\lambda_4 =\n1.4740$. Also, for $i=0$ the associated polynomial of\n$\\matr{P}^\\mathsf{T}(\\rho^{-i})\\cdot \\matr{P}(\\rho^i)$ may be written as\n$u(\\lambda) = \\lambda^4(\\lambda-\\lambda_1)$ where $\\lambda_1 = 15$.\nThis yields the nonzero eigenvalues of $\\matr{H}^\\mathsf{T}\\cdot\\matr{H}$ as $\\{ \\lambda_1, \\lambda_2, \\lambda_3,\n\\lambda_4 \\}$ with multiplicities $1$, $30$, $30$ and $30$\nrespectively.\n\\end{example} \n \n\n\\section{Eigenvalues of Nested Circulant Matrices}\n\\label{sec:nestedcirculanteigenvalues}\nIn this section we define the class of \\emph{nested circulant} matrices,\nand show that they have eigenvalues which are given by evaluating a\nmultivariate associated polynomial at points whose coordinates are\nparticular roots of unity. \n\n\\begin{theorem}\\label{theorem:nested_2} \n Let $\\matr{B}=\\mathrm{circ}(\\matr{b}_0, \\matr{b}_1, \\cdots, \\matr{b}_{r-1})\\in {\\mathbb C}^{rL\\times rL}$ be an $L$-block\n circulant matrix. Suppose that each subblock $\\matr{b}_i$,\n $i \\in [r]$, is also circulant, with associated polynomial\n $p^{(i)}(X) = \\sum_{j=0}^{L-1} b_{i,j} X^j$. Define the\n associated polynomial of $\\matr{B}$ by\n\\[ \nq(X,Y) = \\sum_{i=0}^{r-1} \\sum_{j=0}^{L-1} b_{i,j} X^i Y^j \\; .\n\\] \nThen the set of eigenvalues of $\\matr{B}$ is given by \n\\[\n\\{ q(x,y) \\; : \\; x \\in R_r, y \\in R_L \\} \\; .\n\\]\n\\end{theorem} \n\n\\IEEEproof For each $j \\in [L]$ define $u^{(j)}(X) =\n\\sum_{i=0}^{r-1} b_{i,j} X^i$. 
By Theorem~\\ref{theorem2}, the\neigenvalues of $\\matr{B}$ are equal to those of the matrices given by\n$\\matr{W}(x)$ for $x \\in R_r$; each of these is circulant with\nassociated polynomial (in $Y$) given by\n\\[\n\\sum_{j=0}^{L-1} u^{(j)}(x) Y^j = q(x,Y) \\; .\n\\]\nThus the eigenvalues of each $\\matr{W}(x)$ are equal to $q(x,y)$ for\n$y \\in R_L$, and the result follows.\n\nWe next define what is meant by a \\emph{nested circulant} matrix.\n\\begin{definition}\\label{m_nested_circulant}\nLet $m \\ge 1$ and let $i_t$ be a positive integer for each $t=1,2,\\cdots,m$. Also let $\\matr{B} =\n \\mathrm{circ}(\\matr{b}_0, \\matr{b}_1, \\cdots, \\matr{b}_{i_1-1})$ be\n a block-circulant matrix such that for every $t=1,2,\\cdots,m-1$,\n $j_t \\in [i_t]$ \n\\begin{align*} &\\matr{b}_{j_1,j_2,\\cdots,j_{t}}=\\\\ &\n \\mathrm{circ}(\\matr{b}_{j_1,j_2,\\cdots,j_{t},0},\n \\matr{b}_{j_1,j_2,\\cdots,j_{t},1}, \\cdots,\n \\matr{b}_{j_1,j_2,\\cdots,j_{t},i_{t+1}-1})\\end{align*}\nis also block-circulant,\n and that $\\matr{b}_{j_1,j_2,\\cdots,j_{m}} = b_{j_1,j_2,\\cdots,j_{m}}$ are\n scalars. Then $\\matr{B}$ is said to be an $m$-nested circulant\n matrix (with dimension $n = \\prod_{t=1}^{m} i_t$). The associated polynomial of $\\matr{B}$ is defined by\n\\begin{equation}\n q(X_1,X_2,\\cdots,X_{m}) = \\sum_{j_1=0}^{i_1-1} \\sum_{j_2=0}^{i_2-1} \n\\cdots \\sum_{j_{m}=0}^{i_{m}-1} b_{j_1,j_2,\\cdots,j_{m}} \\prod_{t=1}^{m} X_{t}^{j_t} \n\\label{eq:definition_char_poly} \n\\end{equation}\n\\end{definition}\nNote that the $1$-nested circulants are precisely the circulant\nmatrices, and that the $2$-nested circulants are precisely the \n$i_2$-block-circulant matrices with circulant subblocks. Also note that the\nassociated polynomial $q(X_1,X_2,\\cdots,X_{m})$ provides a succinct\ndescription of the matrix $\\matr{B}$.\n\nA straightforward generalization of Theorem \\ref{theorem:nested_2} is\nas follows.\n\\begin{theorem}\\label{theorem:nested_m} \n Let $\\matr{B}$ be an $m$-nested circulant matrix with associated\n polynomial $q(X_1,X_2,\\cdots,X_{m})$ given by\n (\\ref{eq:definition_char_poly}) above. Then the set of eigenvalues\n of $\\matr{B}$ is given by\n\\[\n\\{ q(x_1,x_2,\\cdots,x_{m}) \\; : \\; x_t \\in R_{i_t} \\quad \\forall t = 1,2,\\cdots,m \\}\n\\]\n\\end{theorem} \n\n\\IEEEproof The proof uses induction, and follows the lines of the\nproof of Theorem \\ref{theorem:nested_2} in a rather straightforward\nmanner.\n\n\\begin{example}\\label{fully_nested_circulant}\n Here we take an example of an $3$-nested circulant (i.e. $m=3$),\n where $i_t=2$ for $t=1,2,3$. The eigenvalues of\n\\[\n\\matr{B} = \\begin{bmatrix} \n0 & 1 & 0 & 0 & 0 & 1 & 1 & 1 \\\\\n1 & 0 & 0 & 0 & 1 & 0 & 1 & 1 \\\\\n0 & 0 & 0 & 1 & 1 & 1 & 0 & 1\\\\\n0 & 0 & 1 & 0 & 1 & 1 & 1 & 0 \\\\\n0 & 1 & 1 & 1 & 0 & 1 & 0 & 0 \\\\\n1 & 0 & 1 & 1 & 1 & 0 & 0 & 0 \\\\\n1 & 1 & 0 & 1 & 0 & 0 & 0 & 1\\\\\n1 & 1 & 1 & 0 & 0 & 0 & 1 & 0\\end{bmatrix}\n\\]\nare equal to the eigenvalues of \n\\[\n\\matr{B'} = \\begin{bmatrix} \n0 & 1+x & x & x \\\\\n1+x & 0 & x & x \\\\\nx & x & 0 & 1+x \\\\\nx & x & 1+x & 0\\end{bmatrix}\n\\]\nfor $x \\in \\{-1,1\\}$, which are equal to the eigenvalues of \n\\[\n\\matr{B''} = \\begin{bmatrix} \nxy & 1+x+xy \\\\\n1+x+xy & xy\\end{bmatrix}\n\\]\nfor $x \\in \\{-1,1\\}$ and $y \\in \\{-1,1\\}$. Finally, these are equal to the set \n\\[\n\\{ q(x,y,z) \\; : \\; x,y,z \\in \\{-1,1\\} \\}\n\\]\nwhere the associated polynomial of $\\matr{B}$ is $q(x,y,z) = xy +\nz(1+x+xy)$. 
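Before unpacking the coefficients $b_{j_1,j_2,j_3}$ below, the statement of Theorem \\ref{theorem:nested_m} can be verified numerically for this example. A minimal Python/numpy sketch (an illustration added here, assuming numpy is available) compares the spectrum of the $8\\times 8$ matrix $\\matr{B}$ with the values of $q$ on $\\{-1,1\\}^3$:

\\begin{verbatim}
import numpy as np
from itertools import product

# The 8 x 8 3-nested circulant B from the example above (it happens to be symmetric).
B = np.array([[0,1,0,0,0,1,1,1],
              [1,0,0,0,1,0,1,1],
              [0,0,0,1,1,1,0,1],
              [0,0,1,0,1,1,1,0],
              [0,1,1,1,0,1,0,0],
              [1,0,1,1,1,0,0,0],
              [1,1,0,1,0,0,0,1],
              [1,1,1,0,0,0,1,0]], dtype=float)

q = lambda x, y, z: x*y + z*(1 + x + x*y)        # associated polynomial of B

eig_direct = np.sort(np.linalg.eigvalsh(B))
eig_poly = np.sort([q(x, y, z) for x, y, z in product([1, -1], repeat=3)])   # R_2 = {1,-1}

print(np.allclose(eig_direct, eig_poly))          # expected: True
\\end{verbatim}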
In this example $b_{0,0,0} = 0$, $b_{0,0,1} = 1$,\n$b_{0,1,0} = 0$, $b_{0,1,1} = 0$, $b_{1,0,0} = 0$, $b_{1,0,1} = 1$,\n$b_{1,1,0} = 1$, $b_{1,1,1} = 1$; these may be easily obtained by\nmatching the elements of the first column of $\\matr{B}$ with the binary\nexpansion of the corresponding row position.\n\nThis example may be generalized to the case where $n = 2^m$ and the\ncirculant is $m$-nested; the eigenvalues are real.\nNote that the choice of the first column in $\\matr{B}$ determines which\nterms in $\\{ 1,x,y,z,xy,yz,zx,xyz \\}$ are included in the\nassociated polynomial, and hence controls the eigenvalues of\n$\\matr{B}$.\n\\end{example}\n\n\\begin{theorem}\\label{theorem:nested_H_nested_B}\n If $\\matr{H}$ is an $m$-nested circulant matrix, then $\\matr{B} =\n \\matr{H}^{\\mathsf{T}} \\matr{H}$ is an $m$-nested circulant matrix. \n\\end{theorem}\n\n\\IEEEproof It is straightforward to prove the stronger result that if $\\matr{A}$ and $\\matr{B}$ are $m$-nested circulants with specified nested dimensions, then $\\matr{A}^{\\mathsf{T}} \\matr{B}$ is also $m$-nested circulant, with the same nested dimensions. The proof proceeds by induction on $m$. The base case $m=1$ is straightforward. Next, let $\\matr{A}$ be block-circulant with block entries in the first column equal to some $(m-1)$-nested circulants $\\matr{A}_i$, and let $\\matr{B}$ be block-circulant with block entries in the first column equal to some $(m-1)$-nested circulants\n$\\matr{B}_j$. The matrix $\\matr{A}^{\\mathsf{T}} \\matr{B}$ is then block-circulant, and each block entry is a sum of matrices of the form $\\matr{A}_i^{\\mathsf{T}} \\matr{B}_j$. By the principle of induction, each\nof these matrices is an $(m-1)$-nested circulant, and it is easy to show that a sum of $t$-nested circulants (of the same nested dimensions) is another $t$-nested circulant (with these nested dimensions).\n\n\\section{Conditions for the Pseudo-Weight Lower Bound to Hold with Equality}\n\\label{sec:conditions} \n\nIt is straightforward to show that a necessary condition for the bound of \\cite{KV-lower-bounds}\nto hold with equality is that the eigenvalues of $\\matr{B} =\n\\matr{H}^{\\mathsf{T}} \\matr{H} \\in \\mathbb{R}^{n\\times n}$ are $\\lambda_1$ with multiplicity $1$ and\n$\\lambda_2 < \\lambda_1$ with multiplicity $n-1$.\n\nIf $\\matr{H}$ is circulant with (row) associated polynomial $w(X)$ of degree $k \\le n$, the eigenvalues of $\\matr{B}$\nare precisely $\\{ \\left| w(x) \\right|^2 \\; : \\; x \\in R_n \\}$;\ntherefore the largest eigenvalue of $\\matr{B}$ is $\\lambda_1 = \\left|\n w(1) \\right|^2 = d^2$ where $d$ is the number of nonzero\ncoefficients in $w(X)$ (noting that $\\left| w(1) \\right|^2 > \\left|\n w(x) \\right|^2$ for all $x \\in R_n^{-}$). Let $\\tilde{w}(X) = X^k w(1\/X)$ denote the \\emph{reciprocal\npolynomial} of $w(X)$ which is obtained by reversing the order of\ncoefficients in $w(X)$. Now assume that the bound of\n\\cite{KV-lower-bounds} holds with equality. 
Then we must have\n\\[\n\\left| w(x) \\right|^2 = w(x)w^{*}(x) = \\lambda_2 \\quad \\forall \\: x \\in R_n^{-}\n\\] \nfor some positive real number $\\lambda_2$, i.e.\n\\[\nw(x)w(1\/x) = \\lambda_2 \\quad \\forall \\: x \\in R_n^{-} \\; .\n\\]\nThis is equivalent to\n\\[\nw(x)\\tilde{w}(x) = \\lambda_2 x^k \\quad \\forall \\: x \\in R_n^{-}\n\\]\nThus $R_n^{-}$ is a subset of the roots of the polynomial\n$w(X)\\tilde{w}(X) - \\lambda_2 X^k$, and so\n\\begin{equation}\nw(X)\\tilde{w}(X) - \\lambda_2 X^k = (1+X+X^2+\\cdots +X^{n-1}) r(X)\n\\label{eq:equality_in_bound_circ}\n\\end{equation}\nwhere $r(X)$ is a polynomial of degree $2k-n+1 \\ge 0$ with integer\ncoefficients. In the following we give details of this condition for\nsome codes which attain the bound of \\cite{KV-lower-bounds} with\nequality.\n\\vspace{-2mm}\n\\begin{example}\\label{EG22example}\n The $\\mathrm{EG}(2,2)$ code with $q=2$, $n=3$, $k=1$, $d=2$ has\n $w(X) = 1+X$. Here $\\lambda_1 = d^2 = 4$ and\n (\\ref{eq:equality_in_bound_circ}) holds in the form\n\\[\n(1+X)^2 - X = 1+X+X^2\n\\]\nso in this case $\\lambda_2 = 1$ and $r(X) = 1$. Here\n\\[\nd_{\\mathrm{min}} = w_{\\mathrm{p}}^{\\mathrm{min}}(\\matr{H}) = n \\left(\n \\frac{2d-\\lambda_2}{d^2-\\lambda_2} \\right) = 3 = q+1 \\; .\n\\]\n\\end{example}\n\\vspace{-5mm}\n\\begin{example}\\label{PG22example}\n The $\\mathrm{PG}(2,2)$ code with $q=2$, $n=7$, $k=3$, $d=3$ has\n $w(X) = 1+X+X^3$. Here $\\lambda_1 = d^2 = 9$ and\n (\\ref{eq:equality_in_bound_circ}) holds in the form\n\\[\n(1+X+X^3)(1+X^2+X^3) - 2X^3 = 1+X+\\cdots+X^6\n\\]\nso in this case $\\lambda_2 = 2$ and $r(X) = 1$. Here\n\\[\nd_{\\mathrm{min}} = w_{\\mathrm{p}}^{\\mathrm{min}}(\\matr{H}) = n \\left(\n \\frac{2d-\\lambda_2}{d^2-\\lambda_2} \\right) = 4 = q+2 \\; .\n\\]\n\\end{example}\n\\vspace{-5mm}\n\\begin{example}\\label{PG24example}\n The $\\mathrm{PG}(2,4)$ code with $q=2$, $n=21$, $k=11$, $d=5$ has\n $w(X) = 1+X^2+X^7+X^8+X^{11}$. Here $\\lambda_1 = d^2 = 25$ and\n (\\ref{eq:equality_in_bound_circ}) holds in the form\n\\begin{eqnarray*}\n& (1+X^2+X^7+X^8+X^{11})(1+X^3+X^4+X^9+X^{11}) \\\\ \n& - 4X^{11} = (1+X+X^2+\\cdots+X^{20})(1-X+X^2)\n\\end{eqnarray*}\nso in this case $\\lambda_2 = 4$ and $r(X) = 1-X+X^2$. Here\n\\[\nd_{\\mathrm{min}} = w_{\\mathrm{p}}^{\\mathrm{min}}(\\matr{H})= n \\left(\n \\frac{2d-\\lambda_2}{d^2-\\lambda_2} \\right) = 6 = q+2 \\; .\n\\]\n\\end{example}\n\\vspace{-2mm}\nNote that for a general $\\mathrm{PG}(2,q)$ code, for the bound to hold\nwith equality we require\n\\begin{eqnarray*}\nw_{\\mathrm{p}}^{\\mathrm{min}}(\\matr{H}) = q+1 & = & n \\left( \\frac{2d-\\lambda_2}{d^2-\\lambda_2}\n \\right) \\\\ \n & = & (q^2+q+1) \\left( \\frac{2(q+1)-\\lambda_2}{(q+1)^2-\\lambda_2} \\right) \\; .\n\\end{eqnarray*}\nand therefore we must have $\\lambda_2 = q$. Also, for a general\n$\\mathrm{EG}(2,q)$ code, for the bound to hold with equality we\nrequire\n\\begin{eqnarray*}\nw_{\\mathrm{p}}^{\\mathrm{min}}(\\matr{H}) = q+1 & = & n \\left( \\frac{2d-\\lambda_2}{d^2-\\lambda_2} \n\\right) \\\\\n& = & (q^2-1) \\left( \\frac{2q-\\lambda_2}{q^2-\\lambda_2} \\right) \\; .\n\\end{eqnarray*}\nand therefore we must have $\\lambda_2 = q$ if $q>2$, whereas for\n$q=2$, any $\\lambda_2$ will achieve the bound.\n\n\n\\section{Conclusions and Future Work}\n\\label{sec:conclusions:1}\n\nA method has been presented for evaluation of the eigenvalue-based lower bound on the AWGNC pseudo-weight based on spectral\nanalysis, for QC and related codes. 
It was shown that the relevant eigenvalues may be found by computing the\neigenvalues of a certain number of small matrices. We also presented a \nnecessary condition for the bound to be attained with\nequality and gave a few examples of codes for which this happens.\nFuture work involves optimization of QC code designs based on these bounds. \n\n\\section{Acknowledgment}\nThe first author was supported by NSF Grant DMS-0708033 and TF-0830608.\n \n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec1}\n\nThe success of Deep Neural Networks (DNNs) \\cite{He2016Deep, wen2021zigan, hu2018squeeze}, \\cite{liu2016ssd, 9546634} largely owes to the completeness of training data, which means that the collected data should be carefully annotated, low level noised, and sufficiently large. But in many real scenarios that need professional labeling knowledge, such as fine-grained image classification for different kinds of birds, it is generally hard to access a large number of label-rich training samples.\n\nOne solution to alleviate the over-dependence of DNNs on label-rich training data is Few-Shot Learning (FSL)~\\cite{sun2019meta,chen2019closer,ryu2020metaperturb,yang2020restoring,baik2020meta,yang2021free,liu2020negative}, which aims to mine the transferable meta-knowledge from the base classes so that DNNs can utilize such knowledge to easily recognize new classes given only a few training examples. However, the main challenge for the FSL \\cite{sun2019meta,chen2019closer,ryu2020metaperturb,yang2020restoring,baik2020meta,yang2021free} is that learning to classify from few examples having limited representative capability inevitably brings the overfitting issue. Therefore, researchers mainly focus on leveraging meta-learning technology~\\cite{vinyals2016matching, snell2017prototypical,ryu2020metaperturb, sung2018learning, hao2019collect, wu2019parn} to deal with the FSL problem. However, the above-mentioned FSL methods focus to classify coarse-grained generic object categories, which are less suitable to address the few-shot fine-grained classification task~\\cite{wu2021object, zhu2020multi, wertheimer2021few, li2020bsnet}, that requires to emphasize the local feature variations or subtle feature differences.\n\n\n\nInspired by the meta-learning success for generic object classification, some researchers~\\cite{tang2020revisiting, wang2021few, li2020bsnet, zhu2020multi, wu2021object, dong2020learning, huang2020low, li2019revisiting, li2019distribution, garcia2017few, wertheimer2021few, tang2020blockmix, tang2022learning, huang2021toan} start to extend the study of FSL from generic object classes to fine-grained classes, where the main challenge is that fine-grained recognition is more dependent on \\textbf{mining spatially local discriminative parts} of an input image, rather than global features extracted by generic meta-learning models such as prototypical networks~\\cite{snell2017prototypical}. As illustrated in Fig. \\ref{fig1}(a), many few-shot fine-grained works mine discriminative parts of the whole image based on the attention mechanism~\\cite{zhu2020multi}, feature map reconstruction~\\cite{wertheimer2021few}, and feature-level spatial alignment~\\cite{wu2021object}. 
However, these methods fail to leverage cross-image object semantic relations between the training examples (denoting the support images) and test examples (denoting the query images).\n\n\\begin{figure}[t]\n\\vspace{-6pt}\n\\setlength{\\abovecaptionskip}{0.1cm}\n\\setlength{\\belowcaptionskip}{-0.2cm}\n\\centering\n\\includegraphics[height=4.5cm, width=8.3cm]{ACM_motivation.png}\n\\caption{(a) Previous fine-grained works attempt to learn discriminative local image parts but have no interaction between support-query images, which may cause relation matching confusion between different fine-grained objects and yield misclassification. (b) In contrast, HelixFormer consisting of RMP and REP modules (details will be given in Sec. \\ref{sec3}) can mine image pair-level semantically related parts, \\textit{e.g.} birds' wings, and further learn how to distinguish these subtle feature differences to avoid misclassification.}\n\\label{fig1}\n\\end{figure}\n\nAs a matter of fact, \\textit{to recognize a novel class's samples, humans tend to compare the local semantic parts' differences between the ever-seen and newly-seen objects, so that the subtle feature differences can be identified using only a few examples.} Thus, benefiting from such a fact, we are encouraged to model the cross-image object semantic relation to find the discriminative local regions in the Transformer, as illustrated in Fig. \\ref{fig1}(b). We further give an in-depth analysis on how to design such a cross-attention mechanism under both few-shot and fine-grained scenarios via the following two aspects.\n\nFirstly, under the FSL setting, we consider that the cross-image object semantic relation calculated from the support to query image, should be consistent with that calculated from the query to support image, namely, \\textbf{cross-image object semantic relation consistency}, which motivates us to use a symmetrical structure to model the cross-image object semantic relations. Moreover, similar to humans that need to distinguish from multiple relations between a target object and its various neighbors to determine its belonged category, cross-image object semantic relation should be modeled as \\zb{a parts-to-parts (many-to-many) matching process, that involves several attention heads of a Transformer running in parallel with each head focusing on a certain part.}\n\nSecondly, such cross-image semantic relation learning should not only consider the few-shot scenario, but also consider the fine-grained setting which requires to strengthen subtle local features' discriminability. Different fine-grained objects in real world may present arbitrary poses, scales and appearances, causing different similarity in both global appearance and various local regions, hurting the classification performance. This challenge may be alleviated if the learned cross-image semantic relation can serve to emphasize those discriminative local features by integrating into a well-performed fine-grained baseline model, \\zb{and enhance the representation learning process for those discovered semantically-similar regions.}\n\nIn view of the above, we propose a Transformer-based double-helix model, namely HelixFormer to solve the few-shot fine-grained image classification task. Benefiting from the self-attention mechanism of Transformer, the HelixFormer exploits the multi-head key-query-value outputs to do interaction between different images, and predicts their object-level semantic relation. 
HelixFormer is mainly composed of a cross-image object Relation Mining Process (RMP) across the support-query branches, and a feature Representation Enhancement Process (REP) within each single branch. Specifically, we feed the input image pairs into the support and query branches respectively. First, by treating the input feature patches (extracted by the Convolution Neural Networks (CNNs) based backbone such as Conv-4~\\cite{sung2018learning}) as tokens and considering the cross-branch token information interaction, RMP produces \\textbf{Cross-image Semantic Relation Maps (CSRMs)} for each branch in a bidirectional and symmetrical manner, which ensures the cross-image relation consistency. Meanwhile, we formulate the multi-head cross-attention mechanism as modeling many-to-many inter-object semantic relations, where one object would interact with multiple other objects. Second, the above CSRMs encode image patch-level semantic relation between different fine-grained objects, and further help the subsequent REP \\zb{to enhance the learning of the feature representations within either the support or query feature encoding branch, which boosts the baseline model's ability to distinguish subtle feature differences of fine-grained objects.}\n\nThe main contributions of this work can be summarized as follows:\n\n\\begin{enumerate}[1)]\n\\item We propose a novel HelixFormer architecture, leveraging on the cross-image object semantic relation learning at patch level, to perform the few-shot fine-grained learning. To our knowledge, this is the first work to introduce semantic relation mining in the Transformer model for few-shot fine-grained task.\n\n\\item To ensure the semantic relation consistency between a pair of support-query images in FSL, we design a double-helix RMP structure that can generate consistent patch-level CSRMs in different branches. Furthermore, with the aid of CSRMs, we develop a REP to enhance the feature learning for those semantically-similar regions presenting subtle feature differences.\n\n\\item We have conducted extensive experiments on five few-shot fine-grained benchmarks. Both qualitative and quantitative results validate that the HelixFormer can effectively learn the cross-image object semantic relations, and further utilize such relations to enhance the model's generalization ability.\n\\end{enumerate}\n\n\\section{Related Works}\n\\label{sec2}\n\n\\noindent\\textbf{Few-Shot Fine-Grained Learning (FSFGL).} Recent FSL works \\cite{alet2019neural, yin2019meta, 9729102, ren2018learning, ye2022makes, lee2019learning, kirsch2019improving} can be roughly categorized into three types: 1) Optimization-based methods \\cite{finn2017model,rusu2018meta} that focus on learning good initialization parameters in order to quickly adapt the few-shot model to novel classes; 2) Metric-based methods \\cite{vinyals2016matching, snell2017prototypical, oreshkin2018tadam, sung2018learning} that aim to design a distance metric, so that the few-shot model can learn the semantic relation between different input images; 3) Data augmentation-based methods \\cite{reed2017few, chen2019image, zhang2018metagan, hariharan2017low, wang2018low, li2020adversarial} that produce new samples to enlarge the training set for model training. 
Recently, inspired by the rapid development of meta-learning, researchers~\\cite{tang2020revisiting, wang2021few, li2020bsnet, zhu2020multi, wu2021object, dong2020learning, huang2020low, li2019revisiting, li2019distribution, garcia2017few, wertheimer2021few} start to explore the generalization ability of FSL model on novel fine-grained sub-classes where only a few training examples are given. For example, a multi-attention meta-learning method~\\cite{zhu2020multi} is proposed to learn diverse and informative parts of fine-grained images. Besides, the work~\\cite{wertheimer2021few} tackles the FSFGL problem from the perspective of reconstructing the query image to learn a classifier. More recently, the work~\\cite{wu2021object} tries to increase the fine-grained classification accuracy via long-shot-range spatial alignment between support and query features. Motivated by these works in the FSFGL community, we further extend the study of FSFGL to a Transformer-based structure, and investigate its effectiveness in strengthening the support-query relation matching process only given a few samples.\n\n\\noindent\\textbf{Cross-attention Models.} In this part, we review existing cross-attention works \\cite{chen2021crossvit, lin2021cat, wang2021crossformer, wei2020multi, Yang3690, ke2021prototypical, huang2019ccnet} and find that they are mainly based on the attention modeling of cross-scale features \\cite{chen2021crossvit, lin2021cat, wang2021crossformer}, cross-modality relationships \\cite{wei2020multi, Yang3690}, joint spatio-temporal information \\cite{ke2021prototypical}, and inner-image multi-patches \\cite{huang2019ccnet} to capture the intra-object relations and contextual information. Different from the above methods, our work aims to exploit the cross-image object semantic relations (\\textit{i.e.} finding the discriminative local spatial regions between objects that are helpful for fine-grained recognition) to address the FSFGL issue. On the other hand, there are only two works \\cite{hou2019cross, zhuang2020learning} employing cross-attention mechanism to perform the FSL task. Our work differs from the two works as follows: 1) We study a symmetrical Transformer-based structure, which fully considers the symmetry property between support and query images in FSL, and imposes a cross-image object semantic relation consistency between support-query and query-support matching process; 2) We develop a two-step relation matching process (a two-branch relation mining process and a representation enhancement process), which has the merit of improving the baseline model's ability to distinguish the subtle local feature differences.\n\n\n\\noindent\\textbf{Transformer for Vision Tasks.} Vision Transformer (ViT) \\cite{dosovitskiy2020image} shows promising performance on a variety of tasks including image classification \\cite{dosovitskiy2020image}, object detection \\cite{chu2021twins, wang2021pyramid}, segmentation \\cite{chu2021twins, wang2021pyramid}, and pose estimation \\cite{yuan2021hrformer}. The goal of ViT is to model long-range dependencies across different input sequence elements (or input tokens), by splitting an input image into a sequence of image patches with size of 16$\\times$16 pixels. More recently, to reduce the computational cost, many researchers incorporate the multi-scale branch into Transformer, via inserting depth-wise convolutions into the self-attention \\cite{yuan2021hrformer} or exploiting the multi-resolution parallel structure \\cite{liu2021swin}. 
The above works attempt to model the global self-attention within an input image through the single-branch structure, while our HelixFormer tries to identify the local patch-level cross-attention between different input images. Besides, CrossTransformer \\cite{doersch2020crosstransformers} and CrossViT \\cite{chen2021crossvit} are recently developed Transformer-based dual-branch structures, where the CrossTransformer utilizes the dual-branch structure to achieve a coarse-grained spatial correspondence, and CrossViT feeds the image patches of different sizes into two separate branches to extract multi-scale information. We would like to emphasize that, compared with the above dual-branch network structure, our HelixFormer is actually a double-helix dual-branch structure, which means that the cross-object semantic relations from the support to query branch and vice versa are symmetric and complementary, ensuring the semantic consistency assumption of relation pairs.\n\n\n\\section{The Proposed Method}\n\\label{sec3}\n\nThe overall framework of the proposed HelixFormer is illustrated in Fig. \\ref{fig2}. For easy understanding, we first give the problem formulation and the episodic training strategy for Few-Shot Learning (FSL) task. Then we introduce the proposed HelixFormer and discuss its superiority over several variants of existing cross-attention models. Finally, we give the overall objectives and cross-attention learning strategy of our model.\n\n\\subsection{Preliminaries}\n\n\\noindent\\textbf{FSL Setting.} Given a set of base classes {\\small $D_{base}$} and a CNN-based feature embedding network (or backbone) $F$, the purpose of few-shot learning is to learn a task-agnostic $F$ on {\\small $D_{base}$} via an episodic task sampling strategy, so that the $F$ can be generalized to novel classes {\\small $D_{novel}$}, where {\\small $D_{base} \\cap D_{novel} = \\emptyset$}.\nFor a typical FSL setting, each episodic task represents an $N$-way $K$-shot classification task, where both support set $S$ and query set $Q$ are sampled from the same $N$ classes. During the meta-training stage, each episodic task is sampled from the base classes {\\small $D_{base}$}.\n\n\\begin{figure*}\n\\vspace{-6pt}\n\\setlength{\\abovecaptionskip}{0.1cm}\n\\setlength{\\belowcaptionskip}{-0.5cm}\n\\centering\n\\includegraphics[height=6.4cm]{ACM_framework.png}\n\\caption{The overview of the proposed HelixFormer, which takes a pair of support-query features {\\small $(f_S, f_Q)$} extracted by a CNN backbone as its input, and outputs a pair of {\\small $(\\hat{f}_S, \\hat{f}_Q)$} for doing the subsequent object classification. Note that for simplicity, we omit the multi-head attention in this figure.}\n\\label{fig2}\n\\end{figure*}\n\n\\noindent\\textbf{Two-branch Baseline Introduction.} Inspired by the two-branch network structure to learn semantic representations in the relation network (RN)~\\cite{sung2018learning}, we employ the RN as our baseline model. As illustrated in Fig. \\ref{fig2}, given an input image pair $(x_S, x_Q)$, the RN first produces the high-level semantic feature pairs $(f_S, f_Q)$ via a convolution-based backbone $F$. Then, a classification network $H$ is used to predict whether the query image $x_Q$ has the same class label with the $n$-th class support image $x_{n,S}$. 
Thus, the loss function of our baseline model $L_{cls}(F, H;x_S)$ can be formulated as follows:\n\n\\begin{small}\n\\begin{equation}\n\\begin{aligned}\n\\label{eq1}\nL_{cls}(F,H;x_S) = &\\sum\\nolimits_{(x_Q,y_Q)\\in Q} log \\ P(y_Q=n|x_Q;x_S) \\\\& = \\frac{exp(H([F(x_{n,S}), F(x_Q)]))}{\\sum_{n^{\\prime} \\in N} exp(H([F(x_{n^{\\prime},S)}, F(x_Q)]))}\n\\end{aligned}\n\\end{equation}\n\\end{small}\n\n\\noindent where $[F(x_{n,S}), F(x_Q)]$ denotes the concat operation between $F(x_{n,S})$ and $F(x_Q)$, and $y_Q$ is the label of query image. During the meta-test or model evaluation stage, each episodic task is sampled from the novel classes {\\small $D_{novel}$}, given the prototype of $K$ labeled support images of the $n$-class $x_{n,S}$. Note that the label $y_Q$ is available for model training on {\\small $D_{base}$}, but it can only be used for model evaluation on {\\small $D_{novel}$}.\n\n\\subsection{HelixFormer}\n\\label{sec3}\n\nThe purpose of this work is to capture the cross-image object semantic relations for improving the generalization ability of fine-grained model. By means of the multi-head attention module in Transformer, a many-to-many matching process of semantically related regions can be established. \\textbf{Note that} \\textit{just a single HelixFormer} is sufficient in capturing the cross-attention, and please refer to our supplementary material for the study of stacking HelixFormer.\n\n\\noindent\\textbf{Bidirectional Relation Mining Process.} Given a pair of images $(x_S, x_Q)$ sampled from the $S$ and $Q$ respectively, the backbone $F$ is first used to extract a pair of high-level features $(f_S, f_Q)$ where $f_S = F(x_S) \\in \\mathbb{R}^{C\\times H\\times W}$, and $C$ denotes the channel number, $H$ and $W$ are the height and width of the features, respectively. Although the feature pairs $(f_S, f_Q)$ contain rich semantics, they lack interaction from each other, and do not consider the cross-object semantic relations between support-query images.\n\nTo fully encode the relations between the two branches, we treat the feature maps $f \\in \\mathbb{R}^{C\\times H\\times W}$ as a sequence of $HW$ tokens, with each token having $C$ channels, which can be formulated as $f = [f^1, f^2, ..., f^{HW}]$, where $f^i \\in \\mathbb{R}^{C}$.\n\nIn detail, given the token embedding with weight parameters $W_S^e$, $W_S^k$, $W_S^v$ for support branch $S$, and parameters $W_Q^e$, $W_Q^k$, $W_Q^v$ for query branch $Q$, the query vector $e^i$, key vector $k^i$, and value vector $v^i$ can be calculated as follows:\n\n\\begin{small}\n\\begin{equation}\n\\label{eq2.0}\n\\left\\{ \\begin{array}{l}\ne_S^i = W_S^e \\ f_S^i\\\\\nk_S^i = W_S^k \\ f_S^i\\\\\nv_S^i = W_S^v \\ f_S^i\n\\end{array} \\right.\\quad \\quad \\left\\{ \\begin{array}{l}\ne_Q^i = W_Q^e \\ f_Q^i\\\\\nk_Q^i = W_Q^k \\ f_Q^i\\\\\nv_Q^i = W_Q^v \\ f_Q^i\n\\end{array} \\right.\n\\end{equation}\n\\end{small}\n\n\\noindent where for avoiding the confusion of symbol definition, $e^i$ denotes the query vector in the RMP, and $Q$ represents the query branch for the whole pipeline. Besides, according to the ablation studies in Sec. \\ref{sec4.4}, HelixFormer employs the convolution-based token embedding and removes the position embedding, since local spatial information has been encoded in the convolution feature maps. 
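As an illustration of the token-embedding step just described (the two directional attention computations follow next), a minimal PyTorch-style sketch is given below. The $1\\times 1$ convolutional projection and the channel width are illustrative assumptions rather than the exact configuration used in the experiments:

\\begin{verbatim}
import torch
import torch.nn as nn

class TokenEmbed(nn.Module):
    # Convolution-based token embedding for one branch (support or query).
    def __init__(self, channels):
        super().__init__()
        self.to_e = nn.Conv2d(channels, channels, 1, bias=False)   # query vectors e^i
        self.to_k = nn.Conv2d(channels, channels, 1, bias=False)   # key vectors   k^i
        self.to_v = nn.Conv2d(channels, channels, 1, bias=False)   # value vectors v^i

    def forward(self, f):                        # f: (B, C, H, W) backbone feature map
        flat = lambda t: t.flatten(2).transpose(1, 2)   # -> (B, HW, C), one token per location
        return flat(self.to_e(f)), flat(self.to_k(f)), flat(self.to_v(f))

C = 64                                           # illustrative channel width
embed_S, embed_Q = TokenEmbed(C), TokenEmbed(C)  # separate parameters per branch, as in the
                                                 # embedding equations above
f_S, f_Q = torch.randn(1, C, 10, 10), torch.randn(1, C, 10, 10)
e_S, k_S, v_S = embed_S(f_S)
e_Q, k_Q, v_Q = embed_Q(f_Q)
print(e_S.shape)   # torch.Size([1, 100, 64]); these tokens feed the two attention directions
\\end{verbatim}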
Further, we achieve the RMP by a symmetrical cross-attention from two directions: 1) We obtain support branch-related features using the value vector $v_Q^i$ from another branch $Q$, formulated as Q$\\rightarrow$S; 2) We also obtain query branch-related features using the value vector $v_S^i$ of the support branch $S$, formulated as S$\\rightarrow$Q.\n\n\\textbf{For Q$\\rightarrow$S direction}, let $\\mathbf{A}_{Q,S} \\in \\mathbb{R}^{HW \\times HW}$ denote the matrix of attention scores obtained via the matrix multiplication as follows:\n\n\\begin{equation}\n\\label{eq2}\n\\mathbf{A}_{Q,S} = K_Q \\ E_S^\\mathrm{T}\n\\end{equation}\n\n\\noindent where $K_Q = [k_Q^1, ..., k_Q^{HW}] \\in \\mathbb{R}^{HW \\times C}$ and $E_S = [e_S^1, ..., e_S^{HW}] \\in \\mathbb{R}^{HW \\times C}$, which can be obtained using Eq. \\ref{eq2.0}. Note that the designed token embedding way does not change the channel number for each input token $f^i \\in \\mathbb{R}^C$. Moreover, to perform normalization for attention scores in Transformer and find the semantically related regions according to clues from another branch $Q$, a softmax layer with a scaling factor is employed as follows:\n\n\\begin{small}\n\\begin{equation}\n\\label{eq3}\nR_{Q,S} = Softmax(\\mathbf{A}_{Q,S} \/ \\sqrt{C}) \\ V_Q\n\\end{equation}\n\\end{small}\n\n\\noindent where $V_Q = [v_Q^1, v_Q^2, ..., v_Q^{HW}] \\in \\mathbb{R}^{HW \\times C}$, and $R_{Q,S} \\in \\mathbb{R}^{HW \\times C}$ represents the Cross-image Semantic Relation Maps (CSRMs), encoding the patch-level semantic relations from query to support branch. Then, the CSRMs are reshaped to the same dimension as the backbone features $f_S \\in \\mathbb{R}^{C \\times H \\times W}$, in order to enhance the semantically similar backbone features in the REP operation.\n\n\\textbf{For S$\\rightarrow$Q direction}, the CSRMs $R_{S,Q} \\in \\mathbb{R}^{HW \\times C}$ also can be easily obtained by performing a symmetrical process described in Eqs. \\ref{eq2} and \\ref{eq3}, which can be written as follows:\n\n\\vspace{-0.10cm}\n\\begin{small}\n\\begin{equation}\n\\begin{array}{l}\n\\mathbf{A}_{S,Q} = K_S \\ E_Q^\\mathrm{T}, \\\\\nR_{S,Q} = Softmax(\\mathbf{A}_{S,Q} \/ \\sqrt{C}) \\ V_S.\n\\end{array}\n\\end{equation}\n\\end{small}\n\n\\noindent\\textbf{Representation Enhancement Process.} \\zb{Based on the above RMP, bidirectional semantic relations of support-to-query features and query-to-support features have been symmetrically encoded into the matrix of attention scores, from both directions $\\mathbf{A}_{Q,S}$ and $\\mathbf{A}_{S,Q}$, so that these cross-image object semantically-similar parts can be first found}. In this part, we design a REP that can further guide the classification network to learn how to distinguish these semantically similar features obtained by the RMP.\n\nGiven the high-level features $(f_S, f_Q)$ learned from the CNNs-based backbone, and the CSRMs $(R_{Q,S}, R_{S,Q})$ calculated from the Q$\\rightarrow$S and S$\\rightarrow$Q, the REP can be formulated as follows:\n\n\\begin{small}\n\\begin{equation}\n\\label{eq5}\n\\left\\{ \\begin{array}{l}\n\\hat{f}_S = MLP(Norm(f_S \\odot R_{Q,S})) \\\\\n\\hat{f}_Q = MLP(Norm(f_Q \\odot R_{S,Q}))\n\\end{array} \\right.\n\\end{equation}\n\\end{small}\n\n\\noindent where $\\odot$ denotes an element-wise multiplication operation, and CSRMs are defined as a soft relation mask that can strengthen the features $f$ extracted by CNNs-based backbone. Besides, $MLP$ is the Feed Forward module as illustrated in Fig. 
\\ref{fig2}, which allows the backbone to focus on the subtle feature differences of the predicted semantically similar regions. The experimental analyses of the REP are shown in Sec. \\ref{sec4.4}. Overall, \\zb{$\\hat{f}_S$ and $\\hat{f}_Q$ are defined as the output features of the REP, and then will be fed into the classification head as illustrated in Fig. \\ref{fig2}.}\n\n\n\n\n\\begin{figure*}\n\\vspace{-6pt}\n\\setlength{\\abovecaptionskip}{0.1cm}\n\\setlength{\\belowcaptionskip}{-0.3cm}\n\\centering\n\\includegraphics[height=5.8cm]{ACM_fig3.png}\n\\caption{Other Transformer-based cross-attention model designs. (a) Q$\\rightarrow$S: Cross-attention from query to support; (b) S$\\rightarrow$Q: Cross-attention from support to query; (c) S$\\rightleftharpoons$Q: Bidirectional asymmetric cross-attention, which is a sequential stack of the above S$\\rightarrow$Q and Q$\\rightarrow$S variants; (d) Q$\\rightleftharpoons$S: Bidirectional asymmetric cross-attention by stacking the Q$\\rightarrow$S and S$\\rightarrow$Q variants.}\n\\label{fig3}\n\\end{figure*}\n\n\\noindent\\textbf{Differences among Transformer-based Cross-attention Variants.}\n\\label{sec3.3}\nGiven high-level feature pairs $(f_S, f_Q)$ extracted from CNN-based backbone $F$, there are many Transformer-based alternatives to model support-query patch-level relations of the extracted high-level features pairs $(f_S, f_Q)$. They can be categorized into three classes as follows.\n\n\n\n\n\\textbf{1)} \\zb{Unidirectional cross-attention structure: As shown in Figs. \\ref{fig3}(a) and \\ref{fig3}(b), only the features from a single branch (support or query branch) are enhanced by means of cross-attention from another branch. For such a case, enhanced features $\\hat{f}$ and original backbone features $f$ are used as the input of classification head. Such a unidirectional cross-attention way fails to achieve high classification accuracy, since this way only considers the semantic enhancement of only a single branch. }\n\n\\textbf{2)} \\zb{Bidirectional but asymmetric structure: As illustrated in Figs. \\ref{fig3}(c) and \\ref{fig3}(d), S$\\rightleftharpoons$Q (or Q$\\rightleftharpoons$S) is a sequential stack of the above S$\\rightarrow$Q and Q$\\rightarrow$S variants (or Q$\\rightarrow$S and S$\\rightarrow$Q variants), where both the query features and support features are enhanced. But this approach does not consider support-query and query-support feature matching processes in parallel, which is detrimental to the cross-image object relation consistency assumption.}\n\n\\textbf{3)} \\zb{Bidirectional and symmetrical structure: S$\\leftrightarrow$Q refers to HelixFormer, which is shown in Fig.2. Compared with the above structures, HelixFormer is a more general form of cross-attention models, and thus has much less inductive bias. For unidirectional or bidirectional asymmetric structure, a kind of \\textit{uneven\/biased} cross-attention learning between support and query patch-level features is injected into the whole network. But for HelixFormer, the learned cross-image patch-level attention relations are symmetric and complementary. The detailed experimental and visual analyses are shown in Sec. 
\\ref{sec4.4}}\n\n\n\\vspace{-0.20cm}\n\\subsection{Overall Objectives and Cross-attention Learning Strategy}\n\n\\noindent\\textbf{Overall Objectives.} For finding a many-to-many semantic relation matching between the support and query branches, we utilize the multi-head attention mechanism, which consists of multiple attention layers with different token embedding parameters. The overall loss function of the $n$-th class on base classes {\\small $D_{base}$} of HelixFormer can be written as follows:\n\n\\begin{small}\n\\begin{equation}\n\\begin{aligned}\n\\label{eq6}\nL_{cls}(F, W, \\phi, H; x_S) = &\\sum\\nolimits_{(x_Q,y_Q)\\in Q} log \\ P(y_Q=n|x_Q; x_S) \\\\& = \\frac{exp(H([\\hat{f}_{n,S},\\hat{f}_Q]))}{\\sum_{n^{\\prime} \\in N} exp(H([\\hat{f}_{n^{\\prime},S}, \\hat{f}_Q]))}\n\\end{aligned}\n\\end{equation}\n\\end{small}\n\n\\noindent where $W$ and $\\phi$ denote learnable parameters in RMP and REP, respectively.\n\n\n\\begin{table*}[]\n\\centering\n\\setlength{\\tabcolsep}{1.25mm}{\n\\begin{tabular}{c|c|c|c|c|c|c|c|c}\n\\hline\n\\multirow{2}{*}{Method} & \\multirow{2}{*}{Setting} & \\multirow{2}{*}{Backbone} & \\multicolumn{2}{c}{Stanford Dogs} & \\multicolumn{2}{c}{Stanford Cars} & \\multicolumn{2}{c}{NABirds} \\\\\n & & & 1-shot & 5-shot & 1-shot & 5-shot & 1-shot & 5-shot \\\\ \\hline\nRelationNet~(CVPR-18)~\\cite{sung2018learning} & In. & Conv-4 & \\, 43.29\u00b10.46$^\\diamond$ & \\, 55.15\u00b10.39$^\\diamond$ & \\, 47.79\u00b10.49$^\\diamond$ & \\, 60.60\u00b10.41$^\\diamond$ & 64.34\u00b10.81* & 77.52\u00b10.60* \\\\\nGNN$^\\dagger$~(ICLR-18)~\\cite{garcia2017few} & In. & Conv-4 & 46.98\u00b10.98 & 62.27\u00b10.95 & 55.85\u00b10.97 & 71.25\u00b10.89 & - & - \\\\\nCovaMNet~(AAAI-19)~\\cite{li2019distribution} & In. & Conv-4 & 49.10\u00b10.76 & 63.04\u00b10.65 & 56.65\u00b10.86 & 71.33\u00b10.62 & 60.03\u00b10.98* & 75.63\u00b10.79* \\\\\nDN4~(CVPR-19)~\\cite{li2019revisiting} & In. & Conv-4 & 45.73\u00b10.76 & 66.33\u00b10.66 & 61.51\u00b10.85 & 89.60\u00b10.44 & 51.81\u00b10.91* & 83.38\u00b10.60* \\\\\nLRPABN~(TMM-20)~\\cite{huang2020low} & In. & Conv-4 & 45.72\u00b10.75 & 60.94\u00b10.66 & 60.28\u00b10.76 & 73.29\u00b10.58 & 67.73\u00b10.81* & 81.62\u00b10.58* \\\\\nMattML~(IJCAI-20)~\\cite{zhu2020multi} & In. & Conv-4 & 54.84\u00b10.53 & 71.34\u00b10.38 & 66.11\u00b10.54 & 82.80\u00b10.28 & - & - \\\\\nATL-Net~(IJCAI-20)~\\cite{dong2020learning} & In. & Conv-4 & 54.49\u00b10.92 & 73.20\u00b10.69 & 67.95\u00b10.84 & 89.16\u00b10.48 & - & - \\\\\nFRN~(CVPR-21)~\\cite{wertheimer2021few} & In. & Conv-4 & 49.37\u00b10.20 & 67.13\u00b10.17 & 58.90\u00b10.22 & 79.65\u00b10.15 & - & - \\\\\nLSC+SSM~(ACM MM-21)~\\cite{wu2021object} & In. & Conv-4 & 55.53\u00b10.45 & 71.68\u00b10.36 & 70.13\u00b10.48 & 84.29\u00b10.31 & 75.60\u00b10.49 & 87.21\u00b10.29 \\\\\nOurs & In. & Conv-4 & \\textbf{59.81}\u00b10.50 & \\textbf{73.40}\u00b10.36 & \\textbf{75.46}\u00b10.37 & \\textbf{89.68}\u00b10.25 & \\textbf{78.63}\u00b10.48 & \\textbf{90.06}\u00b10.26 \\\\ \\hline\nLSC+SSM~(ACM MM-21)~\\cite{wu2021object} & In. & ResNet-12 & 64.15\u00b10.49 & 78.28\u00b10.32 & 77.03\u00b10.46 & 88.85\u00b10.46 & 83.76\u00b10.44 & 92.61\u00b10.23 \\\\\nOurs & In. 
& ResNet-12 & \\textbf{65.92}\u00b10.49 & \\textbf{80.65}\u00b10.36 & \\textbf{79.40}\u00b10.43 & \\textbf{92.26}\u00b10.15 & \\textbf{84.51}\u00b10.41 & \\textbf{93.11}\u00b10.19 \\\\ \\hline\n\\end{tabular}}\n\\caption{\\label{tab1}5-way classification accuracy ($\\%$) on the Stanford Dogs, Stanford Cars and NABirds datasets respectively, $^\\diamond$, and * represent that the corresponding results are reported in~\\cite{zhu2020multi}, and~\\cite{huang2020low}, respectively. Other results are reported in their original papers. ``\\;In.\\;'' denotes the inductive few-shot learning.}\n\\vspace{-0.4cm}\n\\end{table*}\n\n\\begin{table}[]\n\\centering\n\\setlength{\\tabcolsep}{0.8mm}{\n\\begin{tabular}{c|c|c|c}\n\\hline\n\\multirow{2}{*}{Method} & \\multirow{2}{*}{Backbone} & \\multicolumn{2}{c}{CUB} \\\\\n & & 1-shot & 5-shot \\\\ \\hline\nFEAT~(CVPR-20)~\\cite{ye2020few} & Conv-4 & 68.87\u00b10.22 & 82.90\u00b10.15 \\\\\nCTX~(NIPS-20)~\\cite{doersch2020crosstransformers} & Conv-4 & 69.64 & 87.31 \\\\\nFRN~(CVPR-21)~\\cite{wertheimer2021few} & Conv-4 & 73.48 & 88.43 \\\\\nLSC+SSM~(ACM MM-21)~\\cite{wu2021object} & Conv-4 & 73.07\u00b10.46 & 86.24\u00b10.29 \\\\\nOurs & Conv-4 & \\textbf{79.34}\u00b10.45 & \\textbf{91.01}\u00b10.24 \\\\ \\hline\nRelationNet*~(CVPR-18)~\\cite{sung2018learning} & ResNet-34 & 66.20\u00b10.99 & 82.30\u00b10.58 \\\\\nDeepEMD~(CVPR-20)~\\cite{zhang2020deepemd} & ResNet-12 & 75.65\u00b10.83 & 88.69\u00b10.50 \\\\\nICI~(CVPR-20)~\\cite{wang2020instance} & ResNet-12 & 76.16 & 90.32 \\\\\nCTX~(NIPS-20)~\\cite{doersch2020crosstransformers} & ResNet-12 & 78.47 & 90.90 \\\\\nFRN (Baseline)~\\cite{wertheimer2021few} & ResNet-12 & 80.80\u00b10.20 & - \\\\\nFRN~(CVPR-21)~\\cite{wertheimer2021few} & ResNet-12 & \\textbf{83.16} & \\textbf{92.59} \\\\\nLSC+SSM~(ACM MM-21)~\\cite{wu2021object} & ResNet-12 & 77.77\u00b10.44 & 89.87\u00b10.24 \\\\\nOurs (Baseline) & ResNet-12 & 72.61\u00b10.47 & 85.60\u00b10.29 \\\\\nOurs & ResNet-12 & 81.66\u00b10.30 & 91.83\u00b10.17 \\\\ \\hline\n\\end{tabular}}\n\\caption{\\label{tab2}5-way classification accuracy ($\\%$) on the CUB (using bounding-box cropped images). 
``\\;FRN (Baseline)\\;'' represents the classification results achieved by \\textbf{their baseline model}.}\n\\vspace{-0.4cm}\n\\end{table}\n\n\\begin{table}[]\n\\centering\n\\setlength{\\tabcolsep}{1.5mm}{\n\\begin{tabular}{c|c|c|c}\n\\hline\n\\multirow{2}{*}{Method} & \\multirow{2}{*}{Backbone} & \\multicolumn{2}{c}{Aircraft} \\\\\n & & 1-shot & 5-shot \\\\ \\hline\nProtoNet~(NIPS-17)~\\cite{snell2017prototypical} & Conv-4 & 47.72 & 69.42 \\\\\nDSN~(CVPR-20)~\\cite{simon2020adaptive} & Conv-4 & 49.63 & 66.36 \\\\\nCTX~(NIPS-20)~\\cite{doersch2020crosstransformers} & Conv-4 & 49.67 & 69.06 \\\\\nFRN~(CVPR-21)~\\cite{wertheimer2021few} & Conv-4 & 53.20 & 71.17 \\\\\nOurs & Conv-4 & \\textbf{70.37}\u00b10.57 & \\textbf{79.80}\u00b10.42 \\\\ \\hline\nProtoNet~(NIPS-17)~\\cite{snell2017prototypical} & ResNet-12 & 66.57 & 82.37 \\\\\nDSN~(CVPR-20)~\\cite{simon2020adaptive} & ResNet-12 & 68.16 & 81.85 \\\\\nCTX~(NIPS-20)~\\cite{doersch2020crosstransformers} & ResNet-12 & 65.60 & 80.20 \\\\\nFRN~(CVPR-21)~\\cite{wertheimer2021few} & ResNet-12 & 70.17 & \\textbf{83.81} \\\\\nOurs & ResNet-12 & \\textbf{74.01}\u00b10.54 & 83.11\u00b10.41 \\\\ \\hline\n\\end{tabular}}\n\\caption{\\label{tab3}5-way classification accuracy ($\\%$) on the Aircraft dataset.}\n\\vspace{-0.2cm}\n\\end{table}\n\n\\noindent\\textbf{Cross-attention Learning Strategy.} To transfer the semantic relations from base classes {\\small $D_{base}$} to novel classes {\\small $D_{novel}$}, we employ a two-stage training strategy. Firstly, to ensure that the backbone $F$ has sufficient high-level semantic knowledge for subsequent cross-attention matching process, the $F$ is trained on base classes {\\small $D_{base}$} by optimizing Eq. \\ref{eq1}. Secondly, we insert the HelixFormer at the end of backbone $F$, and finetune the entire framework to compare the support and query images by optimizing Eq. \\ref{eq6} on {\\small $D_{base}$}.\n\n\\section{Experiments}\nWe evaluate the proposed method on five FSFGL benchmarks including Stanford Dogs, Stanford Cars, NABirds, CUB, and Aircraft. Additionally, we also perform cross-domain few-shot experiments to further show the transferability and adaptability of the HelixFormer. The following experiments are implemented by Pytorch, and all images are resized to 84$\\times$84 pixels for fair comparison.\n\n\\vspace{-0.10cm}\n\\subsection{Dataset}\n\\label{sec4.1}\n\\vspace{-0.10cm}\n\\noindent\\textbf{Stanford Dogs} \\cite{khosla2011novel} contains a total of 20580 images and 120 sub-classes of dogs. Following~\\cite{zhu2020multi}, we adopt 70, 20, 30 classes for meta-train, meta-validation and meta-test, respectively. \\textbf{Stanford Cars} \\cite{krause20133d} consists of 16,185 images from 196 sub-classes, and we adopt 130, 17, 49 classes for meta-train, meta-validation and meta-test, respectively. \\textbf{NABirds} \\cite{van2015building} provides 555 sub-classes of birds from North American. We use 350, 66, 139 categories for meta-train, meta-validation and meta-test, respectively, which is consistent with~\\cite{huang2020low}. \\textbf{CUB} \\cite{wah2011caltech} has 11,788 bird images containing 200 classes. We follow the commonly used split way \\cite{tang2020revisiting, wertheimer2021few}, which employs 100, 50, and 50 classes for meta-train, meta-validation and meta-test, respectively. \\textbf{Aircraft} \\cite{maji2013fine} includes 100 classes and 10000 aircraft images. 
Following the split way in \\cite{wertheimer2021few}, we use 50, 25, and 25 classes for meta-train, meta-validation and meta-test, respectively.\n\n\\vspace{-0.25cm}\n\\subsection{Experimental Setup}\n\\label{sec4.2}\n\\vspace{-0.10cm}\n\\noindent\\textbf{Stage One: Pre-training Backbone.} Following the setting in \\cite{chen2020new, snell2017prototypical, rusu2018meta, ye2020few}, we insert a fully-connected layer at the end of the selected backbone such as Conv-4 or ResNet-12, and train the backbone on base classes {\\small $D_{base}$}. In this stage, the backbone is trained from scratch using SGD optimizer with a batch size of 128, a momentum of 0.9, a weight decay of 0.0005, and an initial learning rate of 0.1. To keep consistent with the setting in \\cite{wu2021object}, the learning rate decays at 85 and 170 epochs. We remove the fully-connected layer for performing the next meta-training stage.\n\n\\noindent\\textbf{Stage Two: Meta-training HelixFormer.} In this stage, we first insert the proposed HelixFormer at the end of the pre-trained backbone, and then finetune the whole model to perform cross-attention for each input image pair. A learning rate of 0.001 is employed for all modules. SGD with a weight decay of 0.001 and Adam are used to optimize the backbone and HelixFormer, respectively. The whole training process lasts 130 epochs, and the learning rate decays to 0.0001 and 0.00001 at 70 and 110 epochs, respectively. \\zb{The number of multi-head attention in the single-layer HelixFormer is set to $2$, and the corresponding experimental analysis is shown in Sec. \\ref{sec4.4}}. For model evaluation, we report the results with 95$\\%$ confidence intervals over 2000 test episodes, and the best model is chosen according to its accuracy on the validation set.\n\n\n\\vspace{-0.20cm}\n\\subsection{Experimental Results}\n\\label{sec4.3}\n\\vspace{-0.10cm}\n\n\\noindent\\textbf{Few-shot Fine-grained Image Classification Results.}\nIt is generally recognized that spatially local feature relations across different fine-grained objects are particularly important for fine-grained classification. Thus, we first validate the effectiveness of HelixFormer on a wide range of fine-grained benchmark datasets, which is shown in Tables \\ref{tab1}-\\ref{tab3}.\n\n\nFirst, Table \\ref{tab1} reports the classification accuracies on the Standard Dogs, Standard Cars, and NABirds. It can be seen from Table \\ref{tab1} that the proposed HelixFormer can be applied on different backbones such as Conv-4 \\cite{sung2018learning} and ResNet-12 \\cite{chen2019closer}. Moreover, we compare the proposed HelixFormer with state-of-the-art general few-shot learning methods (including DN4 \\cite{li2019revisiting} and FRN \\cite{wertheimer2021few}, \\textit{etc.}) and few-shot fine-grained image classification methods (including MattML \\cite{zhu2020multi}, LSC+SSM \\cite{wu2021object}, \\textit{etc.}). FRN considers the few-shot learning as a feature reconstruction problem by reducing the reconstruction errors of images belonging to the same classes, while LSC+SSM attempts to align spatial distribution of feature maps between support and query images, both of which are state-of-the-art general and fine-grained few-shot classification methods, respectively. 
The experimental results show that the proposed HelixFormer outperforms these methods with a considerable margin, further demonstrating that learning the cross-attention using the proposed double-helix structure can boost the accuracy of fine-grained object recognition.\n\nFurthermore, we also conduct experiments on two more challenging datasets (CUB and Aircraft), as shown in Table \\ref{tab2} and \\ref{tab3}. We also observe a consistent accuracy increase using our method. Besides, we would like to emphasize that the HelixFormer significantly boosts the accuracy of Baseline from $72.61\\%$ to $81.66\\%$ on CUB dataset and has been verified on \\textit{\\textbf{five} commonly used fine-grained datasets}, comprehensively showing the effectiveness of HelixFormer on fine-grained recognition.\n\n\n\\noindent\\textbf{Few-shot Fine-grained Cross-domain Image Classification Results.}\nConsidering that the distribution differences between the training and test data often exist, we conduct few-shot fine-grained recognition experiments under cross-domain setting, to validate the effectiveness of the HelixFormer in alleviating the impact of the domain differences, and the results are reported in Table \\ref{tab4}.\n\nWe carry out the cross-domain adaptation from generic bird categories (widely collected from Internet) to a particular country (America). It can be seen from Table \\ref{tab4} that our method has higher accuracy than both the Baseline model and LSC+SSM model \\cite{wu2021object}, demonstrating that HelixFormer also can improve the transferability and domain adaptability of the existing models.\n\n\n\\begin{table}[t]\n\\centering\n\\setlength{\\tabcolsep}{0.8mm}{\n\\begin{tabular}{c|c|c|c}\n\\hline\n\\multirow{2}{*}{Method} & \\multirow{2}{*}{Backbone} & \\multicolumn{2}{c}{CUB $\\to$ NABirds} \\\\\n & & 1-shot & 5-shot \\\\ \\hline\nLSC+SSM~(Baseline)~\\cite{wu2021object} & ResNet-12 & 45.70\u00b10.45 & 63.84\u00b10.40 \\\\\nLSC+SSM~(ACM MM-21)~\\cite{wu2021object} & ResNet-12 & 48.50\u00b10.48 & 66.35\u00b10.41 \\\\ \\hline\nBaseline (RN~\\cite{sung2018learning}) & Conv-4 & 43.55\u00b10.45 & 55.53\u00b10.42 \\\\\nOurs & Conv-4 & \\textbf{47.87}\u00b10.47 & \\textbf{61.47}\u00b10.41 \\\\\\hline\nBaseline (RN~\\cite{sung2018learning}) & ResNet-12 & 46.22\u00b10.45 & 63.23\u00b10.42 \\\\\nOurs & ResNet-12 & \\textbf{50.56}\u00b10.48 & \\textbf{66.13}\u00b10.41 \\\\ \\hline\n\\end{tabular}\n\\caption{\\label{tab4} 5-way few-shot fine-grained classification results by adapting from the CUB-trained model to NABirds dataset using different backbones.}\n}\n\\vspace{-0.6cm}\n\\end{table}\n\n\\begin{figure}\n \\centering\n \\includegraphics[height=4.8cm, width=6.0cm]{acm_fig4.png}\n \\vspace{-6pt}\n \\caption{Visualization results of the backbone features and output features using different cross-attention model designs, respectively.}\n \\label{fig4}\n\\end{figure}\n\n\n\\vspace{-0.10cm}\n\\subsection{Insight Analyses}\n\\label{sec4.4}\n\\vspace{-0.10cm}\n\n\n\\begin{table}[]\n\\centering\n\\setlength{\\tabcolsep}{0.2mm}{\n\\begin{tabular}{c|c|c|c|c|c}\n\\hline\n\\multirow{2}{*}{Method} & \\multirow{2}{*}{Token} & \\multicolumn{2}{c}{CUB} & \\multicolumn{2}{c}{Stanford Cars} \\\\\n & & 1-shot & 5-shot & 1-shot & 5-shot \\\\ \\hline\nB\/L (RN~\\cite{sung2018learning}) & - & 66.08\u00b10.50 & 79.04\u00b10.35 & 56.55\u00b10.52 & 70.52\u00b10.39 \\\\\nRN with S$\\leftrightarrow$Q & Fc. & 73.80\u00b10.47 & 89.34\u00b10.27 & 72.94\u00b10.50 & 88.85\u00b10.26 \\\\\nRN with S$\\leftrightarrow$Q & Cv. 
& \\textbf{79.34}\u00b10.45 & \\textbf{91.01}\u00b10.24 & \\textbf{75.46}\u00b10.37 & \\textbf{89.68}\u00b10.25 \\\\ \\hline\n\\end{tabular}}\n\\caption{\\label{tab5} The study of token embedding by employing fully-connected embedding (\\textit{i.e.} Fc.) or convolutional embedding (\\textit{i.e.} Cv.), respectively. B\/L denotes the baseline model~\\cite{sung2018learning}, and S$\\leftrightarrow$Q denotes the proposed HelixFormer.}\n\\end{table}\n\n\n\n\\noindent\\textbf{Results of using Fully-connected or Convolutional-based Token Embedding.}\nThe choice on feature embedding way (in a fully-connected or convolutional way) of the input tokens is essential to guarantee the good performance of the proposed HelixFormer. Table \\ref{tab5} reports the results of using different embedding ways, showing that the accuracy using convolution projection outperforms that using fully-connected token embedding. The reason is that few-shot fine-grained recognition needs to capture more local detail features, which are exactly provided by the local convolution projection.\n\n\\begin{figure*}\n\\vspace{-6pt}\n \\centering\n \\includegraphics[height=4.0cm, width=17.3cm]{ACM_vis_1.png}\n \\vspace{-6pt}\n \\caption{Visualization results of features extracted by \\zb{backbone, the RMP, and the REP, respectively}. Due to the global feature variations (\\textit{e.g.,} the color of an object) and local feature changes (\\textit{e.g.,} headlight translation of a car and beak rotation of a bird), it is hard to find the cross-image patch-level matching of semantic features, as shown in the heatmaps from the backbone. By HelixFormer, the key cross-image object semantic relations, such as birds' wings or cars' headlights, can be effectively matched. \\textit{Please refer to our supplementary material for more visualization results.}}\n \\label{fig5}\n\\end{figure*}\n\n\\begin{table}[]\n\\centering\n\\setlength{\\tabcolsep}{0.8mm}{\n\\begin{tabular}{c|c|c|c|c}\n\\hline\n\\multirow{2}{*}{Method} & \\multicolumn{2}{c}{CUB} & \\multicolumn{2}{c}{Stanford Cars} \\\\\n & 1-shot & 5-shot & 1-shot & 5-shot \\\\ \\hline\nRN~\\cite{sung2018learning} & 66.08\u00b10.50 & 79.04\u00b10.35 & 56.55\u00b10.52 & 70.52\u00b10.39 \\\\\nRN with Q$\\rightarrow$S & 75.37\u00b10.46 & 88.78\u00b10.26 & 72.12\u00b10.50 & 88.23\u00b10.26 \\\\\nRN with S$\\rightarrow$Q & 77.24\u00b10.49 & 90.06\u00b10.27 & 73.34\u00b10.51 & 89.44\u00b10.26 \\\\\nRN with S$\\leftrightarrow$Q & \\textbf{79.34}\u00b10.45 & \\textbf{91.01}\u00b10.24 & \\textbf{75.46}\u00b10.37 & \\textbf{89.68}\u00b10.25 \\\\ \\hline\n\\end{tabular}}\n\\caption{\\label{tab6} Results using different cross-attention structures: the unidirectional cross-attention (including Q$\\rightarrow$S and S$\\rightarrow$Q) and the bidirectional structure. Q$\\rightarrow$S denotes that support features are reconstructed only using query images, according to the semantic relations between support-query features, and vice versa. 
The definition of S$\\leftrightarrow$Q follows Table~\\ref{tab5}.}\n\\vspace{-0.2cm}\n\\end{table}\n\n\n\\begin{table}[]\n\\centering\n\\setlength{\\tabcolsep}{0.38mm}{\n\\begin{tabular}{c|c|c|c|c|c}\n\\hline\n\\multirow{2}{*}{Method} & \\multirow{2}{*}{SY?} & \\multicolumn{2}{c}{CUB} & \\multicolumn{2}{c}{Stanford Cars} \\\\\n & & 1-shot & 5-shot & 1-shot & 5-shot \\\\ \\hline\nRN~\\cite{sung2018learning} & - & 66.08\u00b10.50 & 79.04\u00b10.35 & 56.55\u00b10.52 & 70.52\u00b10.39 \\\\\nRN with S$\\rightleftharpoons$Q & \\scriptsize{\\XSolid} & 77.69\u00b10.48 & 89.39\u00b10.27 & 74.56\u00b10.50 & \\textbf{89.89}\u00b10.22 \\\\\nRN with Q$\\rightleftharpoons$S & \\scriptsize{\\XSolid} & 77.46\u00b10.47 & 89.86\u00b10.25 & 74.42\u00b10.50 & 88.63\u00b10.24 \\\\\nRN with S$\\leftrightarrow$Q & \\Checkmark & \\textbf{79.34}\u00b10.45 & \\textbf{91.01}\u00b10.24 & \\textbf{75.46}\u00b10.37 & 89.68\u00b10.25 \\\\ \\hline\n\\end{tabular}}\n\\caption{\\label{tab7} The study of bidirectional cross-attention using asymmetric or symmetric structure. SY is short for symmetric, and S$\\rightleftharpoons$Q denotes the asymmetric but bidirectional structure and the definition of S$\\leftrightarrow$Q follows Table~\\ref{tab5}.}\n\\vspace{-12pt}\n\\end{table}\n\n\n\\begin{table}[]\n\\centering\n\\small\n\\setlength{\\tabcolsep}{0.65mm}{\n\\begin{tabular}{c|c|c|c|c|c}\n\\hline\n\\multirow{2}{*}{Method} & \\multirow{2}{*}{REP?} & \\multicolumn{2}{c}{CUB} & \\multicolumn{2}{c}{Stanford Cars} \\\\\n & & 1-shot & 5-shot & 1-shot & 5-shot \\\\ \\hline\nRN~\\cite{sung2018learning} & - & 66.08\u00b10.50 & 79.04\u00b10.35 & 56.55\u00b10.52 & 70.52\u00b10.39 \\\\\nRN with S$\\leftrightarrow$Q & w\/o & 77.90\u00b10.45 & 89.39\u00b10.27 & 73.52\u00b10.49 & 88.48\u00b10.25 \\\\\nRN with S$\\leftrightarrow$Q & with & \\textbf{79.34}\u00b10.45 & \\textbf{91.01}\u00b10.24 & \\textbf{75.46}\u00b10.37 & \\textbf{89.68}\u00b10.25 \\\\ \\hline\n\\end{tabular}}\n\\caption{\\label{tab8} The results of removing the REP in HelixFormer.}\n\\vspace{-12pt}\n\\end{table}\n\n\n\\noindent\\textbf{Unidirectional or Bidirectional RMP.}\nIn this part, we study the impact of using unidirectional or bidirectional cross-attention structure (as introduced in Fig. \\ref{fig3} of Sec. \\ref{sec3.3}) on the classification results. Experimental results in Table \\ref{tab6} show that compared with the unidirectional structure (Q$\\rightarrow$S or S$\\rightarrow$Q), the bidirectional cross-attention structure (S$\\leftrightarrow$Q) can well improve the generalization ability of few-shot learning models.\n\n\\noindent\\textbf{Asymmetric and Symmetric RMP.} For bidirectional attention, we observe from Table \\ref{tab7} that a symmetrical structure has an advantage in improving the model accuracy. We further visualize their heatmaps in Fig. \\ref{fig4}, and find that the symmetrical structure captures relatively more accurate inter-object semantic relations.\n\n\\noindent\\textbf{The Role of REP.} We further show the effectiveness of the REP in the proposed HelixFormer, by directly feeding the CSRMs pairs $(R_{Q,S}, R_{S,Q})$ as the input of the classification head. 
We observe the performance deterioration by comparing the last two rows in Table \\ref{tab8}, showing the importance of learning to distinguish subtle feature differences by the REP.\n\n\\noindent\\textbf{Multi-head Attention in HelixFormer.} Table \\ref{tab9} shows the 5-way 1-shot classification accuracy by changing the number of multi-head attention using Conv-4 backbone.\n\n\\begin{table}[]\n\\centering\n\\setlength{\\tabcolsep}{1.2mm}{\n\\begin{tabular}{c|c|c|c}\n\\hline\nMethod & \\# Multi-head & CUB & Stanford Cars \\\\ \\hline\nRN with S$\\leftrightarrow$Q & 1 & 77.52\u00b10.49 & 74.62\u00b10.50 \\\\\nRN with S$\\leftrightarrow$Q & 2 & \\textbf{79.34}\u00b10.45 & \\textbf{75.46}\u00b10.37 \\\\\nRN with S$\\leftrightarrow$Q & 4 & 78.68\u00b10.47 & 73.62\u00b10.48 \\\\ \\hline\n\\end{tabular}}\n\\caption{\\label{tab9} Results of changing the number of multi-head attention.}\n\\vspace{-0.2cm}\n\\end{table}\n\n\n\\begin{table}[]\n\\small\n\\centering\n\\setlength{\\tabcolsep}{1.0mm}{\n\\begin{tabular}{c|c|c|c|c}\n\\hline\nMethod & Backbone & \\#FLOPs. & \\#Params. & CUB \\\\ \\hline\nRN~\\cite{sung2018learning} & ResNet-12 & 2.48G & 9.1M & 72.61\u00b10.47 \\\\\nRN~\\cite{sung2018learning} & ResNet-50 (\\textbf{Deeper}) & 3.69G & 24.6M & 69.00\u00b10.52 \\\\\nRN~\\cite{sung2018learning} & ResNet-101 (\\textbf{Deeper}) & 5.98G & 43.2M & 68.71\u00b10.54 \\\\\nRN with S$\\leftrightarrow$Q & ResNet-12 & 2.53G & 9.5M & \\textbf{81.66}\u00b10.30 \\\\ \\hline\n\\end{tabular}}\n\\caption{\\label{tab10} 5-way 1-shot classification accuracy (\\%) for relation network baseline~\\cite{sung2018learning} with different backbones.}\n\\vspace{-0.6cm}\n\\end{table}\n\n\n\\noindent\\textbf{Study of Parameter Size and Feature Visualization.}\nIn the few-shot learning community, models with smaller parameter sizes are usually adopted to avoid over-fitting for the limited number of training samples. In other words, increasing the number of model parameters may not improve its generalization performance. Table \\ref{tab10} shows that the HelixFormer more effectively boosts the model accuracy, compared with other models with more parameters and FLOPs. Furthermore, we visualize the support and query features which are extracted from the backbone, the proposed RMP, and REP of HelixFormer respectively. The visualized results are illustrated in Fig. \\ref{fig5}, and the heatmaps are obtained via a max-pooling operation along the channel dimension of feature maps to preserve the spatial information. Besides, we also visualize the support-query features using asymmetric or symmetric cross-attention structure in Fig. \\ref{fig5}. These visualization results illustrate that the semantically similar regions between support and query images can be well matched via HelixFormer.\n\n\\section{Conclusion}\nIn this work, we proposed a Transformer-based double-helix cross-attention model, namely HelixFormer, which is composed of a Relation Mining Process (RMP) to discover the cross-image object semantic relation and produce patch-level cross-relation maps, and a Representation Enhancement Process (REP) to enhance the identified discriminative local features for final recognition. 
Experimental results demonstrate that such a bidirectional and symmetrical structure has the merit of ensuring the cross-object semantic relation consistency, improving the model generalization ability on few-shot fine-grained learning.\n\n\n\\clearpage\n\\section{Acknowledgement}\nThis work is supported by National Natural Science Foundation of China (No. 62071127 and No. 62101137), Zhejiang Lab Project (No. 2021KH0AB05).\n\n\n\n\n\n{\\small\n\\bibliographystyle{ACM-Reference-Format}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n \n\n\n Real-time detection of various categories of objects in images is one of the\n key tasks in computer vision.\n This topic has been extensively studied in the past a few years due to its\n important applications in surveillance, intelligent video analysis {\\em etc}. \n %\n \n \n \n %\n Viola and Jones proffered the first real-time face detector\n \\cite{Viola2004Robust,Viola2002Fast}.\n To date, it is still considered one of the state-of-the-art, and their framework \n is the basis of many incremental work afterwards. \n Object detection is a highly asymmetric classification\n problem with the exhaustive scanning-window search being used to locate the target in an\n image. Only a few are true target objects among the millions of scanned patches.\n Cascade classifiers \n \n \n \n have been proposed for efficient detection, which takes the asymmetric\n structure into consideration. \n Under the assumption of each node of the cascade classifier makes independent classification\n errors, the detection rate and false positive rate of the entire cascade are:\n $ F_{\\rm dr} = \\prod_{ t =1}^N d_t $ and\n $ F_{\\rm fp} = \\prod_{ t =1}^N f_t $, respectively.\n As pointed out in \\cite{Viola2004Robust,Wu2005Linear}, these two equations \n suggest a {\\em node learning objective}: \n Each node should have an extremely high detection rate $d_t $ \n ({\\em e.g.}, $99.7\\%$) and \n a moderate false positive rate $ f_t $ ({\\em e.g.}, $50\\%$). \n With the above values of $ d_t $ and $ f_t $, assume that \n the cascade has $ N = 20 $ nodes, then $ F_{\\rm dr} \\approx 94\\%$\n and $ F_{\\rm fp} \\approx 10^ {-6} $, which is usually the design goal. \n \n\n\n A drawback of standard boosting like AdaBoost is that it does not \n take advantage of\n the cascade classifier. AdaBoost only minimizes the overall classification error and does\n not minimize the number of false negatives. \n In this sense, the features selected are not optimal for \n the purpose of rejecting negative examples.\n %\n %\n %\n At the feature selection and classifier training level, Viola and Jones\n leveraged the asymmetry property, to some extend, by\n replacing AdaBoost with AsymBoost \\cite{Viola2002Fast}.\n AsymBoost incurs more loss for misclassifying a positive example by simply\n modifying AdaBoost's exponential loss. \n Better detection rates were observed over the standard AdaBoost. Nevertheless, \n AsymBoost addresses the node learning goal {\\em indirectly}\n and still may not be the optimal solution.\n Wu {\\em et al.} explicitly studied the node learning goal and they proposed \n to use linear asymmetric classifier (LAC) and Fisher linear discriminant analysis (LDA)\n to adjust the linear coefficients of the selected weak classifiers\n \\cite{Wu2005Linear,Wu2008Fast}.\n Their experiments indicated that with this post-processing technique, \n the node learning objective can be better met, which is translated into improved \n detection rates. 
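As a quick sanity check of the cascade arithmetic behind this node learning objective, the overall rates quoted above follow directly from the per-node values; a toy snippet (illustrative only, using $d_t = 99.7\\%$, $f_t = 50\\%$ and $N = 20$) is:
\\begin{verbatim}
d_t, f_t, N = 0.997, 0.5, 20
print(d_t ** N)   # overall detection rate, about 0.94
print(f_t ** N)   # overall false positive rate, about 1e-6
\\end{verbatim}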
\n In Viola and Jones' framework, boosting is used to select \n features and at the same time to\n train a strong classifier. Wu {\\em et al.}'s work separates these two tasks:\n they still use AdaBoost or AsymBoost to select features; and at the second step, \n they build a strong classifier using LAC or LDA. \n Since there are two steps here, in Wu {\\em et al.}'s work \n \\cite{Wu2005Linear,Wu2008Fast}, \n the node learning objective is only considered at the \n second step. At the first step---feature selection---the node learning objective is\n not explicitly considered. We conjecture that\n {\\em further improvement may be gained\n if the node learning objective is explicitly \n taken into account at both steps}. \n We design new boosting algorithms to implement this idea and verify\n this conjecture. \n %\n %\n Our major contributions are as follows. \n \\begin{enumerate}\n \\item\n We develop new boosting-like algorithms via directly\n minimizing the objective function of \n linear asymmetric classifier, which is termed as LACBoost (and\n FisherBoost from Fisher LDA). Both of them can be used to\n select features that is optimal for achieving the node learning goal in training a \n cascade classifier. To our knowledge, this is the first attempt to design such a \n feature selection method. \n \\item\n LACBoost and FisherBoost share similarities with LPBoost \n \\cite{Demiriz2002LPBoost} in the sense that both use\n column generation---a technique originally proposed\n for large-scale linear programming (LP). \n Typically, the Lagrange dual problem \n is solved at each iteration in column generation. We instead solve\n the primal quadratic programming (QP) problem, which has a special structure\n and entropic gradient (EG)\n can be used to solve the problem very efficiently. \n Compared with general interior-point based QP solvers, EG is much faster. \n Considering one needs to solve QP problems a few thousand times for training\n a complete cascade detector, the efficiency improvement is enormous. \n Compared with training an AdaBoost based cascade detector, the time needed \n for LACBoost (or FisherBoost) is comparable.\n \n \n \n \n \n This is because for both cases, the majority of the time is spent on weak\n classifier training and bootstrapping. \n \\item\n We apply LACBoost and FisherBoost to face detection and better performances are observed over\n the state-of-the-art methods \\cite{Wu2005Linear,Wu2008Fast}. The results\n confirm our conjecture and show the effectiveness of LACBoost and FisherBoost. \n LACBoost can be immediately applied to other asymmetric classification problems.\n \\item\n We also analyze the condition that makes the validity of LAC,\n and show that the multi-exit cascade might be more suitable\n for applying LAC learning of \\cite{Wu2005Linear,Wu2008Fast} (and our LACBoost)\n rather than Viola-Jones standard cascade.\n \\end{enumerate}\n Besides these, the LACBoost\/FisherBoost algorithm differs from traditional boosting algorithms\n in that LACBoost\/FisherBoost does not minimize a loss function. This opens new possibilities\n for designing new boosting algorithms for special purposes. \n We have also extended column generation for optimizing nonlinear \n optimization problems. \n %\n %\n %\n Next we review some related work that is closest to ours. 
\n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n \n\n\n \\textbf{Related work}\n %\n %\n %\n There is a large body of previous work in object detection\n \\cite{pham07,Wu2003Rare};\n of particular relevance to our work is boosting object detection originated\n from Viola and Jones'\n framework.\n There are three important components that make Viola and Jones' framework\n tremendously successful:\n (1) The cascade classifier that efficiently filters out most negative patches\n in early nodes; and also contributes to enable the final classifier to have\n a very high detection rate;\n (2) AdaBoost that selects informative features and at the same time trains \n a strong classifier;\n (3) The use of integral images, which makes the computation of Haar features \n extremely fast. \n Most of the work later improves one or more of these three components. \n In terms of the cascade classifier, a few different approaches such as \n soft cascade \\cite{Bourdev05SoftCascade}, dynamic cascade \n \\cite{Rong2007}, and multi-exit cascade \\cite{pham08multi}. \n We have used the multi-exit cascade in this work.\n The multi-exit cascade tries to improve the classification performance by \n using all the selected weak classifiers for each node. \n \n \n \n So for the $ n $-th strong classifier (node), it uses all the weak classifiers\n in this node as well as those in the previous $ n - 1 $ nodes. \n We show that the LAC post-processing can enhance the multi-exit cascade.\n More importantly, we show that the multi-exit cascade better meets LAC's \n requirement of data being Gaussian distributions. \n \n The second research topic is the learning algorithm \n for constructing a classifier. \n Wu {\\em et al.} use fast forward feature selection to \n accelerate the training procedure\n \\cite{Wu2003Rare}. They have also proposed LAC to learn a better strong\n classifier \\cite{Wu2005Linear}. \n Pham and Cham recently proposed online asymmetric boosting \n with considerable improvement in training time \\cite{pham07}.\n By exploiting the feature\n statistics, they have also designed a fast method to train weak classifiers\n \\cite{Pham2007Fast}. \n %\n %\n %\n Li {\\em et al.} advocated FloatBoost to discard\n some redundant weak classifiers during AdaBoost's \n greedy selection procedure \\cite{Li2004Float}. \n Liu and Shum proposed KLBoost to select features and train a strong \n classifier \\cite{Liu2003KL}. \n Other variants of boosting have been applied to detection. \n %\n \\comment{\n For example,\n Promising results were reported with\n LogitBoost \\cite{Tuzel2008PAMI} that employs the logistic regression \n loss, and GentleBoost \\cite{Torralba2007} that uses adaptive \n Newton steps to fit the additive model.\n \n\n \n New features have also been designed for improving the detection\n performance. Viola and Jones' \n Haar features are not sufficiently discriminative for detecting\n more complex objects like pedestrians, or multi-view faces. \n Covariance features \\cite{Tuzel2008PAMI} and histogram of oriented gradients \n \\cite{Dalal2005HOG} have been proposed in this context. Both of them are\n possible to the use the idea of integral images\/histograms to reduce the\n computation complexity. \n \n }\n\n \n\n\n\n\n \\textbf{Notation} \n The following notation is used. \n A matrix is denoted by a bold upper-case\n letter ($\\mathbf X$); a column vector is denoted by a bold lower-case\n letter ($ \\mathbf x $). 
\n The $ i$th row of $\\mathbf X $ is denoted by $ \\mathbf X_{i:} $ \n and the $ i $-th column $ \\mathbf X_{:i}$.\n The identity matrix is $ \\bf I $ and its size should be clear\n from the context. $ \\bf 1 $ and \n $ \\bf 0 $ are column vectors of $ 1$'s and $ 0$'s,\n respectively. \n We use $ \\psd, \\nsd $ to denote component-wise inequalities. \n\n\n\n Let $ \\{ (\\mathbf x_i, y_i ) \\}_{i = 1, \\cdots, m}$ be the set of\n training data, where $ \\mathbf x_i \\in \\mathcal X$ and $ y_i \\in \\{-1,+1\\}\n $, $ \\forall i$. \n The training set consists of $ m_1 $ positive training points\n and $ m_2 $ negative ones; $ m_1 + m_2 = m $. \n %\n Let $ h ( \\cdot ) \\in \\mathcal H $ be a weak\n classifier that projects an input vector $ \\mathbf x $ into \n $\\{-1, +1 \\}$. Here we only consider discrete classifier\n outputs.\n We assume that the set $ \\mathcal H $ is finite and we\n have $ n $ possible weak classifiers. Let the matrix $ \\H \\in\n \\mathbb{R}^{ m \\times n }$ where the $ (i,j)$ entry of $ \\H $ is\n $ \\H_{ij} = h_j ( \\mathbf x_i ) $. \n $ \\H_{ij} $ is the label predicted by weak classifier $\n h_j(\\cdot) $ on the training datum $ \\mathbf x_i $. \n \\comment{\n \n Therefore each\n column $ \\H_{ :j } $ of the matrix $ \\H $ consists of the\n output of weak classifier $ h_j(\\cdot) $ on all the training\n data; while each row $ \\H_{ i: } $ contains the outputs of all\n weak classifiers on the training datum $ \\mathbf x_i $. \n \n }\n We define a matrix $ \\mathbf A \\in \\mathbb{R}^{ m \\times n }$ such that its\n $( i, j )$ entry is\n $ \\mathbf A_{ij} = y_i h_j ( \\mathbf x_i ) $.\n \\comment{\n \n Boosting algorithms entirely depend on the matrix $ \\mathbf A $ and\n do not directly interact with the training examples. \n Our following discussion will largely focus on the matrix $\\mathbf A$.\n We write the vector obtained by multiplying a matrix $ \\mathbf A $ \n with a vector \n $ \\mathbf w $\n as $ \\mathbf A \\mathbf w $ and its $i$-th entry as $ (\\mathbf A \\mathbf w)_i$, which is the margin\n of the training datum $ \\mathbf x_i $: $ \\rho_i = \\mathbf A_{ i :} \\mathbf w = (\\mathbf A \\mathbf w)_i $.\n \n }\n\n\n \\comment{\n \n The paper is organized as follows. We briefly review the concept of LAC\n in Section \\ref{sec:LAC} before we propose our LACBoost and FisherBoost\n in Section \\ref{sec:LACBoost}. We present the experiments in Section\n \\ref{sec:exp} and conclude the paper in Section \\ref{sec:con}.\n \n }\n\n\n\n\\section{Linear Asymmetric Classification}\n\\label{sec:LAC}\n\n\n\n\n Before we propose our LACBoost and FisherBoost, we \n briefly overview the concept of LAC. \n %\n %\n Wu {\\em et al.} \\cite{Wu2008Fast} have proposed linear asymmetric classification\n (LAC) as a post-processing step \n for training nodes in the cascade framework. 
\n LAC is guaranteed to get an optimal solution\n under the assumption of Gaussian data distributions.\n\n\n Suppose that we have a linear classifier \n $ f(\\mathbf x) = {\\bf sign}(\\mathbf w^{\\!\\top} \\mathbf x - b)$,\n if we want to find a pair of $\\{ \\mathbf w , b \\}$ with a very\n high accuracy on the positive data $\\mathbf x_1$ and a moderate accuracy on\n the negative $\\mathbf x_2$, which is expressed as the following problem:\n \\begin{align}\n \\begin{split}\n \\max_{\\mathbf w \\neq {\\bf 0}, b} \\, \\Pr_{\\mathbf x_1 \\sim ( \\mu_1, {\\bf \\Sigma}_1) }\n \\{ \\mathbf w ^{\\!\\top} \\mathbf x_1 \\geq b \\}, \\,\\,\n {\\rm s.t.} \\, \\Pr_{\\mathbf x_2 \\sim (\\mu_2,{\\bf \\Sigma}_2)} \n \\{ \\mathbf w^{\\!\\top} \\mathbf x_2\n \\leq b \\} = \\lambda,\n \\label{EQ:LAC}\n \\end{split}\n \\end{align}\n where $\\mathbf x \\sim (\\mu,{\\bf \\Sigma})$ denotes \n a symmetric distribution with mean $\\mu$ and covariance ${\\bf \\Sigma}$.\n %\n %\n If we prescribe $\\lambda$ to $0.5$ and assume that for any $\\mathbf w$, \n $\\mathbf w^{\\!\\top} \\mathbf x_1$\n is Gaussian and $\\mathbf w^{\\!\\top} \\mathbf x_2$ is symmetric, then \n \\eqref{EQ:LAC} can be approximated by \n \\begin{equation}\n \\label{EQ:LAC1}\n \\max_{\\mathbf w \\neq \\bf 0} \\;\\;\n \\frac{ \\mathbf w^{\\!\\top} ( \\mu_1 - \\mu_2 ) } \n { \\sqrt{ \\mathbf w^{\\!\\top} {\\bf \\Sigma}_1 \\mathbf w } }.\n \\end{equation}\n %\n %\n %\n \\eqref{EQ:LAC1} is similar to LDA's optimization problem\n \\begin{equation}\n \\label{EQ:LDA1}\n \\max_{\\mathbf w \\neq \\bf 0} \\;\\;\n \\frac{ \\mathbf w^{\\!\\top} ( \\mu_1 - \\mu_2 ) } \n { \\sqrt{ \\mathbf w^{\\!\\top} ( {\\bf \\Sigma}_1 + {\\bf \\Sigma}_2 ) \\mathbf w } }.\n \\end{equation}\n \\eqref{EQ:LAC1} can be solved by eigen-decomposition and a close-formed \n solution can be derived:\n %\n %\n %\n\\begin{equation}\n \\mathbf w^\\star = {\\bf \\Sigma}_1^{-1} ( \\mu_1 - \\mu_2 ),\n \\quad \n b^{\\star} = { \\mathbf w^{\\star} } ^{{\\!\\top}} \\mu_2.\n\\label{EQ:LAC_SOL}\n\\end{equation}\nOn the other hand, each node in cascaded boosting classifiers has the following form:\n\\begin{equation}\n \\label{EQ:nodeclassifier}\n f(\\mathbf x) = {\\bf sign}(\\mathbf w^{\\!\\top} \\H (\\mathbf x) - b),\n\\end{equation}\nWe override the symbol $ \\H (\\mathbf x)$ here, \nwhich denotes the output vector of all weak classifiers over the datum $ \\mathbf x $. \nWe can cast each node as a linear classifier over the feature space\nconstructed by the binary outputs of all weak classifiers.\nFor each node in cascade classifier, we wish to maximize the detection\nrate as high as possible, and \nmeanwhile keep the false positive rate to an\nmoderate level ({\\em e.g.}, $50.0\\%$). \nThat is to say, the problem\n\\eqref{EQ:LAC} expresses the node learning goal. \nTherefore, we can use boosting algorithms ({\\em e.g.}, AdaBoost) as feature\nselection methods, and then use LAC to learn a linear classifier over\nthose binary features chosen by boosting.\nThe advantage is that LAC considers the asymmetric node learning explicitly.\n\nHowever, there is a precondition of LAC's validity. That \nis, for any $\\mathbf w$, $\\mathbf w^{\\!\\top} \\mathbf x_1$ is a Gaussian and $\\mathbf w^{\\!\\top} \\mathbf x_2$\nis symmetric. 
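Before examining how well this assumption holds for boosted classifiers, we note that the post-processing step itself, based on the closed-form solution \\eqref{EQ:LAC_SOL}, is straightforward to implement. The following is a minimal sketch and not the implementation of \\cite{Wu2008Fast}; the function names, the sample estimates of the means and covariances, and the small ridge term added for numerical stability are ours:
\\begin{verbatim}
import numpy as np

def lac_postprocess(H_pos, H_neg, reg=1e-8):
    # H_pos, H_neg: (#positives, n) and (#negatives, n) arrays holding the
    # +/-1 outputs of the n selected weak classifiers on the training data.
    mu1, mu2 = H_pos.mean(axis=0), H_neg.mean(axis=0)
    sigma1 = np.cov(H_pos, rowvar=False)     # covariance of the positive class only
    w = np.linalg.solve(sigma1 + reg * np.eye(len(mu1)), mu1 - mu2)
    b = w @ mu2    # with lambda = 0.5, about half of the negatives are rejected
    return w, b    # node classifier: sign(w @ h(x) - b)

def lda_postprocess(H_pos, H_neg, reg=1e-8):
    # Fisher LDA variant: the pooled covariance replaces sigma1.
    mu1, mu2 = H_pos.mean(axis=0), H_neg.mean(axis=0)
    sigma = np.cov(H_pos, rowvar=False) + np.cov(H_neg, rowvar=False)
    w = np.linalg.solve(sigma + reg * np.eye(len(mu1)), mu1 - mu2)
    return w, w @ mu2
\\end{verbatim}
In practice the threshold $b$ is then re-tuned (for example by a simple line search on validation data) so that each node reaches its target detection rate.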
\nIn the case of boosting classifiers, $\\mathbf w^{\\!\\top} \\mathbf x_1$ and $\\mathbf w^{\\!\\top} \\mathbf x_2$ can be \nexpressed as the margin of positive data and negative data.\nEmpirically Wu {\\em et al.} \\cite{Wu2008Fast}\nverified that $\\mathbf w^{\\!\\top} \\mathbf x$ is Gaussian approximately for a cascade face detector.\nWe discuss this issue in the experiment part in more detail.\n\n\n\n\\section{Constructing Boosting Algorithms from LDA and LAC}\n\\label{sec:LACBoost} \n \n \n In kernel methods, the original data are non\\-linear\\-ly \n mapped to a feature space and \n usually the mapping\n function $ {\\phi } ( \\cdot ) $ is not explicitly available.\n It works through the inner product of \n $ {\\phi } ( \\mathbf x_i ) ^{\\!\\top} {\\phi } ( \\mathbf x_j ) $. \n In boosting \\cite{Ratsch2002BoostSVM},\n the mapping function can be seen as explicitly known\n through:\n $\n {\\phi } ( \\mathbf x ) : \\mathbf x \\mapsto [ h_1(\\mathbf x),\\dots,h_n(\\mathbf x) ]. \n $\n Let us consider the Fisher LDA case first because the solution to LDA\n will generalize to LAC straightforwardly, by looking at\n the similarity between \\eqref{EQ:LAC1} and \\eqref{EQ:LDA1}.\n\n Fisher LDA \n maximizes the between-class variance and minimizes the within-class\n variance. In the binary-class case, we can equivalently\n rewrite \\eqref{EQ:LDA1} into\n \\begin{equation}\n \\label{EQ:100}\n \\max_\\mathbf w \\;\\; \\frac{ ( \\mu_1 - \\mu_2 ) ^ 2 }\n { \\sigma_1 + \\sigma_2 } \n = \n \\frac{ \\mathbf w ^{\\!\\top} \\mathbf C_b \\mathbf w }\n { \\mathbf w ^{\\!\\top} \\mathbf C_w \\mathbf w },\n \\end{equation}\n where $ \\mathbf C_b $ and $ \\mathbf C_w $ are the between-class and within-class\n scatter matrices; $ \\mu_1 $ and $ \\mu_2 $ are\n the projected centers of the two classes.\n The above problem can be equivalently reformulated as \n \\begin{equation}\n \\label{EQ:101}\n \\min_\\mathbf w \\;\\; \\mathbf w ^{\\!\\top} \\mathbf C_w \\mathbf w - \\theta ( \\mu_1 - \\mu_2 )\n \\end{equation}\n for some certain constant $ \\theta $ and under the assumption that\n $ \\mu_1 - \\mu_2 \\geq 0 $.\\footnote{In our face detection experiment,\n we found that this assumption could always be satisfied.}\n Now in the feature space, our data are \n $ {\\phi }( \\mathbf x_i ) $, $ i=1\\dots m$.\n We have\n \\begin{align}\n \\mu_1\n & = \\frac{ 1 } { m_1 } \\mathbf w^{\\!\\top} \\sum_{y_i = 1} {\\phi }(\\mathbf x_i) \n \n = \\frac{ 1 } { m_1 } \\sum_{y_i = 1} \\mathbf A_{ i: } \\mathbf w\n \n \n \n = \\frac{ 1 } { m_1 } \\sum_{y_i = 1} (\\mathbf A \\mathbf w)_i\n = \\boldsymbol e_1 ^{\\!\\top} \\mathbf A \\mathbf w ,\n \\end{align}\n where $ \\mathbf A_{ i: } $ is the $ i $-th row of $ \\mathbf A$.\n %\n %\n %\n \\begin{align}\n \\mu_2\n & = \n \\frac{ 1 } { m_2 } \\mathbf w^{\\!\\top} \\sum_{y_i = -1} {\\phi }(\\mathbf x_i)\n = \\frac{ 1 } { m_2 } \\sum_{y_i = -1} \\H_{ i: } \\mathbf w\n \n \n = - \\boldsymbol e_2 ^{\\!\\top} \\mathbf A \\mathbf w,\n \\end{align}\n %\n %\n Here the $ i $-th entry of $ \\boldsymbol e_1 $ is defined as \n $ \\boldsymbol e_{1i} = 1\/m_1 $ if $ y_i = +1 $, otherwise \n $ \\boldsymbol e_{1i} = 0$. Similarly \n $ \\boldsymbol e_{2i} = 1\/m_2 $ if $ y_i = -1 $, otherwise \n $ \\boldsymbol e_{2i} = 0$. We also define $ \\boldsymbol e = \\boldsymbol e_1 + \\boldsymbol e_2 $.\n %\n %\n %\n %\n For ease of exposition, we order the training data according to their\n labels. 
So\n the vector $ \\boldsymbol e \\in \\mathbb{R}^{m}$:\n \\begin{equation}\n \\boldsymbol e = [ 1\/m_1,\\cdots, 1\/m_2,\\cdots ]^{\\!\\top}, \n \\label{EQ:e}\n \\end{equation}\n and the first $ m_1$ components of $ \\boldsymbol \\rho $ correspond to the\n positive training data and the remaining ones\n correspond to the $ m_2$\n negative data. \n So we have $ \\mu_1 - \\mu_2 = \\boldsymbol e^{\\!\\top} \\boldsymbol \\rho $, \n $ \\mathbf C_w = {m_1 }\/{ m } \\cdot {\\bf \\Sigma}_1 + {m_2 }\/{ m } \\cdot {\\bf \\Sigma}_2 $\n with\n $ {\\bf \\Sigma}_{1,2} $ the covariance matrices. \n By noticing that\n \\[\n \\mathbf w^{\\!\\top} {\\bf \\Sigma}_{1,2} \\mathbf w = \\frac{1}{m_{1,2} ( m_{1,2} - 1 ) }\n \\sum_{i>k, y_i=y_k = \\pm 1}\n (\\rho_i - \\rho_k )^2,\n \\]\n %\n %\n we can easily rewrite the original problem into:\n \\begin{align}\n \\min_{\\mathbf w,\\boldsymbol \\rho}\n \n \n \n \\tfrac{1}{2} \\boldsymbol \\rho ^{\\!\\top} \\mathbf Q \\boldsymbol \\rho - \\theta \\boldsymbol e^{\\!\\top}\n \\boldsymbol \\rho,\n \n \\quad {\\rm s.t.} ~&\\mathbf w \\psd {\\bf 0},\n {\\bf 1}^{\\!\\top} \\mathbf w = 1,\n \n \n {\\rho}_i = ( \\mathbf A \\mathbf w )_i,\n i = 1,\\cdots, m.\n \\label{EQ:QP1}\n \\end{align}\n Here\n $ \\mathbf Q = \\begin{bmatrix} \\mathbf Q_1 & {\\bf 0} \\\\ {\\bf 0} & \\mathbf Q_2 \\end{bmatrix} $\n is a block matrix with\n \\[\n \\mathbf Q_1 = \n \\begin{bmatrix}\n \\tfrac{1}{m} & -\\tfrac{1}{ m (m_1-1)} & \\ldots & -\\tfrac{1}{m(m_1-1)} \\\\\n -\\tfrac{1}{m(m_1-1)} & \\tfrac{1}{ m } & \\ldots & -\\tfrac{1}{m(m_1-1)} \\\\\n \\vdots & \\vdots & \\ddots & \\vdots \\\\\n -\\tfrac{1}{m(m_1-1)} & -\\tfrac{1}{m (m_1-1)} & \\ldots &\\tfrac{1}{m } \n \\end{bmatrix},\n \\]\n and $ \\mathbf Q_2 $ is similarly defined by replacing $ m_1$ with $ m_2 $ in $ \\mathbf Q_1$.\n \\comment{\n \n \\[\n \\mathbf Q_2 = \n \\begin{bmatrix}\n \\tfrac{1}{m} & -\\tfrac{1}{m(m_2-1)} & \\ldots &\n -\\tfrac{1}{m(m_2-1)} \\\\\n -\\tfrac{1}{m(m_2-1)} & \\tfrac{1}{m} & \\ldots &\n -\\tfrac{1}{m(m_2-1)} \\\\\n \\vdots & \\vdots & \\ddots & \\vdots \\\\\n -\\tfrac{1}{m(m_2-1)} & -\\tfrac{1}{m(m_2-1)} & \\ldots\n &\\tfrac{1}{m} \n \\end{bmatrix}. \n \\]\n \n }\n Also note that we have introduced a constant $ \\frac{1}{2} $ before the quadratic term\n for convenience. The normalization \n constraint $ { \\bf 1 } ^{\\!\\top} \\mathbf w = 1$\n removes the scale ambiguity of $ \\mathbf w $. Otherwise the problem is\n ill-posed. \n\n In the case of LAC, the covariance matrix of the negative data is not involved,\n which corresponds to the matrix $ \\mathbf Q_2 $ is zero. So we can simply set \n $ \\mathbf Q = \\begin{bmatrix} \\mathbf Q_1 & {\\bf 0} \\\\ {\\bf 0} & \\bf 0 \\end{bmatrix} $ and\n \\eqref{EQ:QP1} becomes the optimization problem of LAC. \n \n At this stage, it remains unclear about how to solve the problem \\eqref{EQ:QP1}\n because we do not know all the weak classifiers. \n The number of possible weak classifiers\n could be infinite---the dimension of the \n optimization variable $ \\mathbf w $ is infinite.\n So \\eqref{EQ:QP1} is a semi-infinite quadratic program (SIQP).\n We show how column generation can be used to solve this problem.\n To make column generation applicable, we need to derive a\n specific Lagrange dual of the primal\n problem.\n\n\n\n\\textbf{The Lagrange dual problem}\n We now derive the Lagrange dual of the quadratic problem \\eqref{EQ:QP1}. 
\n Although we are only interested in the variable $ \\mathbf w $, we need to\n keep the auxiliary variable $ \\boldsymbol \\rho $ in order to obtain\n a meaningful dual problem. The Lagrangian of \\eqref{EQ:QP1}\n is\n $ L ( \n \\underbrace{ \\mathbf w, \\boldsymbol \\rho}_{\\rm primal}, \\underbrace{ \\u, r }_{\\rm dual}\n ) \n = \\tfrac{1}{2} \\boldsymbol \\rho ^{\\!\\top} \\mathbf Q \\boldsymbol \\rho - \\theta \\boldsymbol e^{\\!\\top} \\boldsymbol \\rho \n + \\u ^{\\!\\top} ( \\boldsymbol \\rho - \\mathbf A \\mathbf w ) - \\mathbf q ^{\\!\\top} \\mathbf w + r ( {\\bf 1} ^{\\!\\top} \\mathbf w - 1 )\n $ with $ \\mathbf q \\psd \\bf 0 $. \n $ \\sup_{\\u, r} \\inf_{ \\mathbf w, \\boldsymbol \\rho } L ( \\mathbf w, {\\boldsymbol \\rho}, \\u, r ) $\n gives the following Lagrange dual:\n \\begin{align}\n \\max_{\\u, r} ~& -r - \\overbrace{\n \\tfrac{1}{2} \n (\\u - \\theta \\boldsymbol e)^{\\!\\top} \\mathbf Q^{-1} (\\u - \\theta \\boldsymbol e)\n }^{\\rm regularization}, \n %\n \n %\n {\\rm \\;\\; s.t.} \n ~\n %\n \n %\n \\sum_{i=1}^m u_i \\mathbf A_{i:} \\nsd r {\\bf 1 } ^{\\!\\top}. \n \\label{EQ:dual}\n \\end{align}\n In our case, $ \\mathbf Q $ is rank-deficient and its inverse does not exist\n (for both LDA and LAC).\n We can simply regularize $ \\mathbf Q $ with $ \\mathbf Q + \\delta {\\bf I} $ with \n $ \\delta $ a very small constant. \n One of the KKT optimality conditions between the dual and primal\n is\n $ \\boldsymbol \\rho^\\star = - \\mathbf Q^{-1} ( \\u ^ \\star - \\theta \\boldsymbol e )$,\n which can be used to establish the connection between the dual optimum and\n the primal optimum. \n This is obtained by the fact that \n the gradient of $ L $ w.r.t. $ \\boldsymbol \\rho $ must vanish at \n the optimum, $ { \\partial L } \/ { \\partial \\rho_i } = 0 $,\n $ \\forall i = 1\\cdots n $.\n\n Problem \\eqref{EQ:dual} can be viewed as a regularized LPBoost problem.\n Compared with the hard-margin LPBoost \\cite{Demiriz2002LPBoost},\n the only difference is the regularization term in the cost function.\n The duality gap between the primal \\eqref{EQ:QP1} and the \n dual \\eqref{EQ:dual} is zero. In other words, the solutions of\n \\eqref{EQ:QP1} and \\eqref{EQ:dual} coincide. \n Instead of solving \\eqref{EQ:QP1} directly, one calculates the\n most violated constraint in \\eqref{EQ:dual} iteratively for\n the current solution and adds this constraint to the\n optimization problem. In theory, any column that violates\n dual feasibility can be added. To speed up the convergence,\n we add the most violated constraint by solving the following\n problem:\n %\n %\n %\n \\begin{equation}\n h' ( \\cdot ) = {\\rm argmax}_{h( \\cdot ) } ~ \n \n \\sum_{i=1}^m u_i y_i h ( \\mathbf x_i).\n \\label{EQ:pickweak}\n \\end{equation}\n %\n %\n %\n This is exactly the same as the one that standard AdaBoost\n and LPBoost use for producing the best weak classifier. That\n is to say, to find the weak classifier that has minimum weighted\n training error. We summarize the LACBoost\/FisherBoost\n algorithm in\n Algorithm~\\ref{alg:QPCG}.\n By simply changing $ \\mathbf Q_2 $, Algorithm~\\ref{alg:QPCG} can be used to\n train either LACBoost or FisherBoost.\n Note that to obtain an actual strong classifier,\n one may need to include an offset $ b $, {\\em i.e.} the final classifier\n is $ \\sum_{j=1}^n h_j (\\mathbf x) - b $ because from the cost function\n of our algorithm \\eqref{EQ:101}, we can see that the cost function itself\n does not minimize any classification error. 
It only finds a projection\n direction in which the data can be maximally separated. A simple line\n search can find an optimal $ b $. \n Moreover, when training a cascade, we need to tune this offset anyway\n as shown in \\eqref{EQ:nodeclassifier}.\n \n %\n %\n \n %\n\n\n The convergence of Algorithm~\\ref{alg:QPCG} is guaranteed by\n general column generation or cutting-plane algorithms, which\n is easy\n to establish. When a new $ h'(\\cdot) $ that violates dual\n feasibility is added, the new optimal value of the dual\n problem (maximization) would decrease. Accordingly, the\n optimal value of its primal problem decreases too because they\n have the same optimal value due to zero duality gap. Moreover\n the primal cost function is convex, therefore in the end it\n converges to the global minimum. \n\n\n\n\n \n\n \n \\linesnumbered\\SetVline\n \\begin{algorithm}[t]\n \\caption{Column generation for QP.} \n %\n %\n %\n \\centering\n \\begin{minipage}[]{0.91\\linewidth}\n %\n \\KwIn{Labeled training data $(\\mathbf x_i, y_i), i = 1\\cdots m$;\n termination threshold $ \\varepsilon > 0$;\n regularization\n parameter $ \\theta $; maximum number of iterations\n $ n_{\\rm max}$.\n }\n %\n %\n %\n { {\\bf Initialization}:\n $ m = 0 $;\n $ \\mathbf w = {\\bf 0} $;\n and $ u_i = \\frac{1}{ m }$, $ i = 1$$\\cdots$$m$. \n }\n\n \\For{ $ \\mathrm{iteration} = 1 : n_\\mathrm{max}$}\n {\n \n \n %\n %\n \\ensuremath{ - \\,} \n Check for the optimality: \\\\\n {\\bf if}{ $ \\mathrm{iteration} > 1 $ \\text{ and } $\n \\sum_{ i=1 }^m u_i y_i h' ( \\mathbf x_i ) \n < r + \\varepsilon $},\n \\\\\n { \\bf then}\n \\\\\n $~ ~ ~$ break; and the problem is solved; \n \n \\ensuremath{ - \\,} \n Add $ h'(\\cdot) $ to the restricted master problem, which\n corresponds to a new constraint in the dual;\n %\n %\n \n \n \n\n \\ensuremath{ - \\,} \n Solve the dual problem \\eqref{EQ:dual}\n (or the primal problem \\eqref{EQ:QP1}) \n and update $ r $ and\n $ u_i$ ($ i = 1\\cdots m$). \n\n\n \\ensuremath{ - \\,} \n Increment the number of weak classifiers\n $n = n + 1$. \n }\n \\KwOut{\n \n \n \n \n %\n The selected features are $ h_1, h_2, \\dots, h_n $.\n The final strong classifier is:\n $ F ( \\mathbf x ) = \\textstyle \\sum_{j=1}^{ n } w_j h_j( \\mathbf x ) - b $.\n Here the offset $ b $ can be learned by a simple search. \n \n }\n \\end{minipage}\n \\label{alg:QPCG}\n \\end{algorithm}\n \n \n\n\n At each iteration of column generation,\n in theory, we can solve either the dual \\eqref{EQ:dual} \n or the primal problem \\eqref{EQ:QP1}. \n However, \n in practice, it could be much faster to solve the primal problem because\n \n \n (i) Generally,\n the primal problem has a smaller size, hence faster to solve.\n The number of variables of \\eqref{EQ:dual} is $ m $ at each iteration,\n while the number of variables is the number of iterations \n for the primal problem. \n For example, in Viola-Jones' face detection framework, \n the number of training data $ m = \n 10,000 $ and $ n_{\\rm max} = 200 $. In other words, the \n primal problem has at most $ 200 $ variables in this case;\n \n (ii)\n The dual problem is a standard QP problem. It has no special structure\n to exploit. As we will show, the primal problem belongs to\n a special class of problems and\n can be efficiently \n solved using entropic\/exponentiated \n gradient descent (EG) \\cite{Beck03Mirror,Globerson07Exp}. \n A fast QP solver is extremely important for training a \n object detector because we need to the solve a few thousand \n QP problems. 
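To make the overall procedure concrete, a minimal sketch of Algorithm~\\ref{alg:QPCG} is given below. This is not our optimized implementation: the function and variable names are illustrative, decision stumps are assumed to be available as callables returning $\\pm 1$ responses, the matrix $\\mathbf Q$ is taken in the large-sample form discussed later in this section, and the inner solver uses the entropic gradient update and the dual-variable recovery described in the remainder of this section.
\\begin{verbatim}
import numpy as np

def eg_simplex_qp(P, q, n_iter=500, tol=1e-7):
    # Minimize 0.5*w'Pw + q'w over the unit simplex by exponentiated gradient.
    n = len(q)
    w = np.full(n, 1.0 / n)                    # start in the interior of the simplex
    L = np.abs(P).max() + np.abs(q).max()      # crude bound on the gradient's sup-norm
    for k in range(1, n_iter + 1):
        grad = P @ w + q
        tau = np.sqrt(2.0 * np.log(max(n, 2))) / (L * np.sqrt(k))   # step size
        w_new = w * np.exp(-tau * grad)
        w_new /= w_new.sum()                   # multiplicative update stays on the simplex
        if np.max(np.abs(w_new - w)) < tol:
            return w_new
        w = w_new
    return w

def train_node(X, y, stumps, theta, lac=True, n_max=200, eps=1e-5):
    # Column generation: FisherBoost (lac=False) or LACBoost (lac=True).
    m = len(y)
    m1 = int(np.sum(y == 1))
    e = np.where(y == 1, 1.0 / m1, 1.0 / (m - m1))
    q_diag = np.where(y == 1, 1.0 / m, 0.0 if lac else 1.0 / m)  # large-sample Q
    u = np.full(m, 1.0 / m)                    # dual variables (example weights)
    selected, cols, r = [], [], None
    for _ in range(n_max):
        h = max(stumps, key=lambda s: float(np.sum(u * y * s(X))))  # best weak classifier
        edge = float(np.sum(u * y * h(X)))
        if r is not None and edge < r + eps:
            break                              # no violated dual constraint: optimum reached
        selected.append(h)
        cols.append(y * h(X))                  # new column of A, A_ij = y_i h_j(x_i)
        A = np.column_stack(cols)
        P = A.T @ (q_diag[:, None] * A)        # A'QA with diagonal Q
        w = eg_simplex_qp(P, -theta * (e @ A)) # solve the primal QP over the simplex
        u = -q_diag * (A @ w) + theta * e      # recover the dual variables from the primal
        r = float(np.max(u @ A))               # largest edge over the selected weak classifiers
    return selected, w
\\end{verbatim}
The offset $b$ of the final strong classifier is then determined by a simple line search, as discussed above.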
\n \n \n\n\n We can recover both of the dual variables \n $ \\u^\\star, r^\\star $ easily from \n the primal variable $ \\mathbf w^\\star $:\n \\begin{align}\n \\u^\\star &= - \\mathbf Q\\boldsymbol \\rho^\\star + \\theta \\boldsymbol e; \\label{EQ:KA}\\\\\n r^\\star &= \\max_{ j = 1 \\dots n } \n \\bigl\\{ \\textstyle \\sum_{i=1}^m u_i^\\star \\mathbf A_{ij} \\bigr\\}.\n \\label{EQ:KAb}\n \\end{align}\n The second equation is obtained by the fact that \n in the dual problem's constraints, at optimum,\n there must exist at least one $ u_i^\\star$ \n such that the equality holds. That is to say,\n $ r^\\star $ is the largest {\\em edge}\n over all weak classifiers. \n \n\n\n\n We give a brief introduction to the EG algorithm before we proceed. \n Let us first define the unit simplex \n $ \\Delta_n = \\{ \n \\mathbf w \\in \\mathbb{R}^n : {\\bf 1 } ^ {\\!\\top} \\mathbf w = 1, \\mathbf w \\psd {\\bf 0 }\n \\} $. \n EG efficiently solves the convex optimization problem\n \\begin{equation}\n \\label{EQ:EG1}\n \\min_\\mathbf w \\,\\,\\, f(\\mathbf w), \\,\n {\\rm s.t.} \\,\\, \\mathbf w \\in \\Delta_n, \n \\end{equation}\n under the assumption that the objective function $ f(\\cdot) $\n is a convex Lipschitz continuous function with Lipschitz\n constant $ L_f $ w.r.t. a fixed given norm $ \\lVert \\cdot \\rVert$.\n The mathematical definition of $ L_f $ is that\n $ | f(\\mathbf w) -f (\\mathbf z) | \\leq L_f \\lVert \\mathbf x - \\mathbf z \\rVert$ holds\n for any $ \\mathbf x, \\mathbf z $ in the domain of $ f(\\cdot)$.\n The EG algorithm is very simple:\n %\n %\n \\begin{enumerate}\n \\item\n Initialize with $\\mathbf w^0 \\in \\text{the interior of } \\Delta_n$;\n \\item\n Generate the sequence $ \\{ \\mathbf w^k \\} $, $ k=1,2,\\cdots$\n with:\n \\begin{equation}\n \\label{EQ:EQ2}\n \\mathbf w^k_j = \\frac{ \\mathbf w^{k-1}_j \\exp [ - \\tau_k f'_j ( \\mathbf w^{k-1} ) ] } \n { \\sum_{j=1}^n \\mathbf w^{k-1}_j \\exp [ - \\tau_k f'_j ( \\mathbf w^{k-1} ) ] }. \n \\end{equation}\n Here $ \\tau_k $ is the step-size. \n $ f'( \\mathbf w ) = [ f_1'(\\mathbf w), \\dots, f_n'(\\mathbf w) ] ^{\\!\\top} $\n is the gradient of $ f(\\cdot) $;\n \\item\n Stop if some stopping criteria are met.\n \\end{enumerate}\n The learning step-size can be determined by \n \n $ \\tau_k = \\frac{ \\sqrt{ 2\\log n } } { L_f }\n \\frac{1}{ \\sqrt{ k } },\n $\n \n following \\cite{Beck03Mirror}.\n In \\cite{Globerson07Exp}, the authors have \n used a simpler strategy to set the learning rate. \n\n EG is a very useful tool for solving large-scale \n convex minimization problems over the unit simplex. \n Compared with standard QP solvers like Mosek \n \\cite{Mosek}, EG is much faster. EG makes it possible \n to train a detector using almost the same amount of time\n as using standard AdaBoost as the majority of time is\n spent on weak classifier training and bootstrapping. \n\n\n In the case that $ m_1 \\gg 1 $, \n \\[\n \\mathbf Q_1 =\n \\frac{1}{m}\n \\begin{bmatrix}\n 1 & -\\tfrac{1}{ m_1-1} & \\ldots & -\\tfrac{1}{ m_1-1 } \\\\\n -\\tfrac{1}{ m_1-1 } & 1 & \\ldots & -\\tfrac{1}{ m_1-1} \\\\\n \\vdots & \\vdots & \\ddots & \\vdots \\\\\n -\\tfrac{1}{ m_1-1 } & -\\tfrac{1}{ m_1-1 } & \\ldots & 1 \n \\end{bmatrix} \\approx \\frac{1}{m} \\bf I.\n \\]\n Similarly, for LDA, $ \\mathbf Q_2 \\approx \\frac{1}{m} \\bf I$\n when $ m_2 \\gg 1 $. 
Hence,\n \\begin{equation}\n \\label{EQ:Q10} \n \\mathbf Q \\approx \n \\begin{cases}\n \\frac{1}{m} \\bf I; & \\text{for Fisher LDA}, \\\\\n \\frac{1}{m} \\begin{bmatrix}\n {\\bf I} & {\\bf 0} \\\\\n {\\bf 0} & {\\bf 0}\n \\end{bmatrix},\n & \\text{for LAC}.\n \\end{cases}\n \\end{equation}\n Therefore, the problems involved can be simplified when $ m_1 \\gg 1 $ and\n $ m_2 \\gg 1 $ hold.\n The primal problem \\eqref{EQ:QP1} equals\n \\begin{align}\n \\min_{\\mathbf w,\\boldsymbol \\rho} ~& \\tfrac{1}{2} \\mathbf w ^{\\!\\top} ( \\mathbf A^{\\!\\top} \\mathbf Q \\mathbf A) \\mathbf w \n - ( \\theta \\boldsymbol e ^{\\!\\top}\n \\mathbf A ) \\mathbf w,\n %\n \n %\n \\;\\;\n {\\rm s.t.}\n \\;\n %\n \n %\n \\mathbf w \\in \\Delta_n.\n \\label{EQ:QP2}\n \\end{align}\n \n \n \n \n \n We can efficiently solve \\eqref{EQ:QP2} \n using the EG method. \n In EG there is an important parameter $ L_f $, which is\n used to determine the step-size. \n $ L_f $ can be determined by the $\\ell_\\infty $-norm of $ | f' (\\mathbf w) | $.\n In our case $ f' (\\mathbf w) $ is a linear function, which is trivial to compute.\n The convergence of EG is guaranteed; see \\cite{Beck03Mirror} for details.\n \n In summary, when using EG to solve the primal problem, \n Line $ 5 $ of Algorithm~\\ref{alg:QPCG} is: \n \n \\ensuremath{ - \\,} \n {\\em Solve the primal problem \\eqref{EQ:QP2} using EG, and update \n the dual variables $ \\u $ with \\eqref{EQ:KA}, and $ r $ with \\eqref{EQ:KAb}.\n }\n\n\n\n \\comment{\n \n %\n \n FIXME for the journal version\n \n %\n In our case \\eqref{EQ:QP2},\n $ L_f $ can be set to $ | \\mathbf A ^{\\!\\top} \\mathbf A |_\\infty + \\theta m \\norm[1]{ \\boldsymbol e ^{\\!\\top} \\mathbf A }$\n with $ | \\mathbf A ^{\\!\\top} \\mathbf A |_\\infty $\n the maximum magnitude of the matrix $ \\mathbf A ^{\\!\\top} \\mathbf A $.\n $ L_f $ is the upper bound of the $ \\ell_1$-norm of the cost function's gradient:\n $ \\norm [1]{ \\mathbf w^{\\!\\top} (\\mathbf A^{\\!\\top} \\mathbf A ) + \\theta m \\boldsymbol e^{\\!\\top} \\mathbf A } \\leq L_f $. \n %\n %\n With the triangle inequality, we have\n $ \\norm [1]{ \\mathbf w^{\\!\\top} (\\mathbf A^{\\!\\top} \\mathbf A ) - \\theta m \\boldsymbol e^{\\!\\top} \\mathbf A } $ $\n \\leq \\norm[1]{ \\mathbf w^{\\!\\top} (\\mathbf A^{\\!\\top} \\mathbf A ) } $ $ + \\theta m \\norm[1]{ \\boldsymbol e^{\\!\\top} \\mathbf A } $ $\n \\leq | \\mathbf A ^{\\!\\top} \\mathbf A |_\\infty $ $ + \\theta m \\norm[1]{ \\boldsymbol e ^{\\!\\top} \\mathbf A }$.\n\n\n\n The following theorem ensures the convergence of the EG algorithm \n for our problem.\n \\begin{theorem}\n Under the assumption that the learning step-size\n satisfies $ 0 < \\tau_k < \\frac{1} { n | \\mathbf A^{\\!\\top} \\mathbf A |_\\infty } $,\n then we have\n $ f( \\mathbf w^\\star ) \\leq f (\\mathbf w ^{ k } ) \\leq f (\\mathbf w ^\\star ) \n + \\frac{1} { \\tau_k ( k -1 ) } {\\rm KL} [ \\mathbf w^\\star \\Vert \\mathbf w^0 ] $,\n where $ f(\\cdot) $ denotes the objective function in \\eqref{EQ:QP2};\n and $ | \\mathbf X |_\\infty $ denotes the maximum magnitude element of \n the matrix $ \\mathbf X $; $ {\\rm KL} ( \\u \\Vert \\v )$ computes the \n Kullback\u2013Leibler divergence of $ \\u, \\v \\in \\Delta_n $.\n \\end{theorem}\n The proof follows the proof of Theorem 1 in \\cite{Globerson07Exp}. 
}

\\section{Applications to Face Detection}

\\begin{figure}[t!]
 \\centering
 \\includegraphics[width=.45\\textwidth]{adaboost_toy}
 \\includegraphics[width=.45\\textwidth]{fisherboost_toy}
 \\caption{Decision boundaries of AdaBoost (left) and FisherBoost (right) on $2$D artificial data (positive data represented by $ \\square $'s and negative data by $\\times$'s). Weak classifiers are decision stumps. In this case, FisherBoost tends to classify more of the positive data correctly.
 }
 \\label{fig:toy}
\\end{figure}

 First, let us show a simple example on a synthetic dataset (more negative data than positive data) to illustrate the difference between FisherBoost and AdaBoost. Fig. \\ref{fig:toy} demonstrates the subtle difference between the classification boundaries obtained by AdaBoost and FisherBoost. We can see that FisherBoost seems to focus more on correctly classifying positive data points. This might be due to the fact that AdaBoost only optimizes the overall classification accuracy. This finding is consistent with the result in \\cite{Paisitkriangkrai2009CVPR}.

\\begin{figure}[t]
 \\centering
 \\includegraphics[width=.32\\textwidth]{p1}
 \\includegraphics[width=.32\\textwidth]{p2}
 \\includegraphics[width=.32\\textwidth]{p3}
 \\caption{Normality test (normal probability plot) for the face data's margin distribution of nodes $1$, $2$, $3$. The $3$ nodes contain $7$, $22$, and $52$ weak classifiers, respectively. A curve close to a straight line indicates a distribution close to a Gaussian.
 }
 \\label{fig:normplot}
\\end{figure}

\\textbf{Face detection}
 In this section, we compare our algorithm with other state-of-the-art face detectors. We first show some results on the validity of LAC (or Fisher LDA) post-processing for improving node learning in object detection. Fig. \\ref{fig:normplot} illustrates the normal probability plot of the margins of the positive training data for the first three nodes in the multi-exit cascade with LAC. Clearly, the more weak classifiers that are used, the more closely the margin distribution follows a Gaussian. In other words, LAC may achieve a better performance if a larger number of weak classifiers are used, while the performance could be poor with too few weak classifiers. The same statement applies to Fisher LDA, and to LACBoost and FisherBoost. Therefore, we do not apply LAC\/LDA in the first eight nodes because the margin distribution could be far from a Gaussian distribution. Because the late nodes of a multi-exit cascade contain more weak classifiers, we conjecture that the multi-exit cascade might meet the Gaussianity requirement better. We have compared multi-exit cascades with LDA\/LAC post-processing against standard cascades with LDA\/LAC post-processing in \\cite{Wu2008Fast} and slightly improved performances were obtained.

 Six methods are evaluated within the multi-exit cascade framework \\cite{pham08multi}: AdaBoost with LAC or LDA post-processing, AsymBoost with LAC or LDA post-processing \\cite{Wu2008Fast}, and our FisherBoost and LACBoost.
We have also implemented Viola-Jones'\n face detector as the baseline \\cite{Viola2004Robust}.\n As in \\cite{Viola2004Robust}, five basic types of Haar-like features \n are calculated, which makes up of a $162, 336$ dimensional over-complete\n feature set on an image of $24 \\times 24$ pixels.\n To speed up the weak classifier training, as in \\cite{Wu2008Fast},\n we uniformly sample $10\\%$ of features for\n training weak classifiers (decision stumps). \n The training data are $9,832$ mirrored $24 \\times 24$ face images \n ($5,000$ for training and $4,832$ for validation) and $7,323$ large background images,\n which are the same as in \\cite{Wu2008Fast}.\n\n Multi-exit cascades with $22$ exits and $2,923$ weak classifiers are\n trained with various methods. \n For fair comparisons, we have used the same cascade structure and \n same number of weak classifiers for all the compared learning methods.\n The indexes of exits are pre-set to simplify the training\n procedure.\n For our FisherBoost and LACBoost, we have an important parameter \n $\\theta$, which is chosen from \n $\\{\n \\frac{1}{10}, \n \\frac{1}{12},\n \\frac{1}{15},\n $\n $\n \\frac{1}{20}, \n \\frac{1}{25},\n \\frac{1}{30},\n $\n $\n \\frac{1}{40},\n \\frac{1}{50} \n \\}$.\n We have not carefully tuned this parameter using cross-validation.\n Instead, we train a $10$-node cascade for each candidate $ \\theta$, \n and choose the one with the best {\\em training} \n accuracy.\\footnote{ To train a complete $22$-node cascade\n and choose the best $ \\theta $\n on cross-validation data may give better detection rates.} \n At each exit, negative examples misclassified by current cascade are\n discarded, and new negative examples are bootstrapped from the background\n images pool. \n Totally, billions of negative examples are extracted from the pool.\n The positive training data and validation data keep unchanged during the\n training process.\n\nOur experiments are performed on a workstation with $8$ Intel Xeon\nE$5520$ CPUs and $32$GB RAM.\nIt takes about $3$ hours to train the multi-exit cascade with AdaBoost or AsymBoost.\nFor FisherBoost and LACBoost, it takes less than $ 4 $ hours to train\na complete multi-exit cascade.\\footnote{Our implementation is in C++ and \n only the weak classifier\n training part is parallelized using OpenMP. \n}\nIn other words, \n our EG algorithm takes less than $ 1 $ hour for \n solving the primal QP problem (we need to solve a QP at each iteration).\n A rough estimation of the computational complexity is as follows. \n Suppose that the number of training\n examples is $ m $, number of weak classifiers is $ n $,\n At each iteration of the cascade training,\n the complexity for solving the primal QP using EG is\n $ O( m n + k n^2) $ with $ k $ the iterations \n needed for EQ's convergence.\n The complexity for training the weak classifier is\n $ O( m d ) $ with $d$ the number of all Haar-feature patterns.\n In our experiment, $ m = 10,000 $,\n $ n \\approx 2900 $,\n $d = 160,000$,\n $ k < 500 $.\nSo the majority of the training computation is on the weak classifier training.\n\n\n\n\n We have also experimentally observed the speedup of EG against standard QP solvers. \n We solve the primal QP defined by \\eqref{EQ:QP2} using EG and Mosek \\cite{Mosek}.\n The QP's size is $ 1,000 $ variables. \n With the same accuracy tolerance (Mosek's primal-dual gap is set to $ 10^{-7}$\n and EG's convergence tolerance is also set to $ 10^{-7}$), \n Mosek takes $1.22 $ seconds and EG is\n $ 0.0541 $ seconds. 
So EG is about $20$ times faster. Moreover, at iteration $n+1$ of training the cascade, EG can take advantage of the solution of the last iteration by starting from a small perturbation of the previous solution. Such a warm start gains a $5$ to $10\\times$ speedup in our experiment, while no off-the-shelf warm-start QP solvers are available yet.

 We evaluate the detection performance on the MIT+CMU frontal face test set. Two performance metrics are used here: one for each node and one for the entire cascade. The node metric measures how well the classifiers meet the node learning objective, and thus provides useful information about the capability of each method to achieve the node learning goal. The cascade metric uses the receiver operating characteristic (ROC) to compare the performance of the entire cascade. Several factors affect the cascade's performance: the classifiers, the cascade structure, bootstrapping, {\\em etc}.

We show the node comparison results in Fig. \\ref{fig:node1}. The node performances of FisherBoost and LACBoost are very similar. From Fig.~\\ref{fig:node1}, as reported in \\cite{Wu2008Fast}, LDA or LAC post-processing can considerably reduce the false negative rates. As expected, our proposed FisherBoost and LACBoost can further reduce the false negative rates significantly. This verifies the advantage of selecting features with the node learning goal being considered.

From the ROC curves in Fig.~\\ref{fig:ROC1}, we can see that FisherBoost and LACBoost outperform all the other methods. In contrast to the results of the detection rate for each node, LACBoost is slightly worse than FisherBoost in some cases. That might be because many factors affect the final detection result. LAC makes the assumption of Gaussian and symmetric data distributions, which may not hold well in the early nodes. This could explain why LACBoost does not always perform the best. Wu {\\em et al.} have observed the same phenomenon, namely that LAC post-processing does not outperform LDA post-processing in a few cases. However, we believe that for harder detection tasks, the benefits of LACBoost would be more impressive.

 The error reduction results of FisherBoost and LACBoost in Fig.~\\ref{fig:ROC1} are not as great as those in Fig. \\ref{fig:node1}. This might be explained by the fact that the cascade structure and negative data bootstrapping remove some of the error-reducing effects, to some extent. We have also compared our methods with the boosted greedy sparse LDA (BGSLDA) in \\cite{Paisitkriangkrai2009CVPR}, which is considered one of the state-of-the-art. We provide the ROC curves in the supplementary package. Both of our methods outperform BGSLDA with AdaBoost\/AsymBoost by about $2\\%$ in the detection rate. Note that BGSLDA uses the standard cascade. So besides the benefits of our FisherBoost\/LACBoost, the multi-exit cascade also contributes to the improvement.

\\begin{figure}[t]
 \\begin{center}
 \\includegraphics[width=.45\\textwidth]{Fisher_node}
 \\includegraphics[width=.45\\textwidth]{LACBoost_node}
 \\includegraphics[width=.45\\textwidth]{FisherBoost_vs_AsymBoost_noderates}
 \\includegraphics[width=.45\\textwidth]{LACBoost_vs_AsymBoost_noderates}
 \\end{center}
 \\caption{Node performances on the validation data.
\n ``Ada'' means that features are selected using AdaBoost;\n ``Asym'' means that features are selected using AsymBoost.\n }\n \\label{fig:node1}\n\\end{figure}\n\n\n\n\\begin{figure}[t]\n \\begin{center}\n \\includegraphics[width=.45\\textwidth]{Fisher_ROC}\n \\includegraphics[width=.45\\textwidth]{LACBoost_ROC}\n \\includegraphics[width=.45\\textwidth]{FisherBoost_vs_AsymBoost}\n \\includegraphics[width=.45\\textwidth]{LACBoost_vs_AsymBoost}\n \\end{center}\n \\caption{Cascade performances using ROC curves \n (number of false positives versus detection rate) on the MIT+CMU test data.\n ``Ada'' means that features are selected using AdaBoost. Viola-Jones cascade\n is the method in\n \\cite{Viola2004Robust}.\n ``Asym'' means that features are selected using AsymBoost.\n }\n \\label{fig:ROC1}\n\\end{figure}\n\n\n\n\n\n\n\\comment{\n\\begin{figure}[t]\n \\begin{center}\n \\includegraphics[width=.45\\textwidth]{FisherBoost_vs_BGSLDA}\n \\end{center}\n \\caption{Cascade performances on the MIT+CMU test data. We compare our methods with BGSLDA in\n \\cite{Paisitkriangkrai2009CVPR}.}\n \\label{fig:ROC_BGSLDA}\n\\end{figure}\n}\n\n\\section{Conclusion}\n\n\n By explicitly taking into account the node learning goal in cascade classifiers,\n we have designed \n new boosting algorithms for more effective object detection. \n Experiments validate the superiority of our FisherBoost and LACBoost. \n We have also proposed to use entropic gradient to efficiently \n implement FisherBoost and LACBoost. The proposed algorithms are easy to implement\n and can be applied other asymmetric classification tasks in computer vision.\n We are also trying to design new asymmetric boosting algorithms\n by looking at those asymmetric kernel classification methods. \n\n\n\\bibliographystyle{splncs}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe light pseudoscalar particle named {\\it axion} is an important element of the\nStandard Model and its generalizations. Axion arises \\cite{1,2} due to breaking\nof the Peccei-Quinn symmetry which was introduced \\cite{3} in quantum\nchromodynamics (QCD) in order to avoid strong {\\it CP} violation and large\nelectric dipole moment of a neutron (numerous experiments exclude both these\neffects to a high level of precision).\nWhat is more, axions provide an elegant solution for the problem of dark matter in\nastrophysics and cosmology \\cite{4,5}. This is the reason why a lot of experiments for\nsearching axions has been performed in different countries \\cite{6}.\nSpecifically, strong constraints on the coupling constants of an axion and other axion-like\nparticles with photons, electrons and nucleons were obtained from astrophysical\nobservations. Up to the present, however, there is the so-called {\\it window} in the\nvalues of an axion mass, where these constraints are either missing or not sufficiently\nstrong.\n\nThere are also massless and light scalar particles predicted in many extensions of the\nStandard Model \\cite{7}. Exchange of such particles between atoms of two macrobodies\nleads to corrections to the Newton law of gravitation at separations below a micrometer.\nBy coincidence, at so small separations Newton's gravitational law is not verified experimentally\nwith sufficient precision. Within a submicrometer interaction range experiment does not\nexclude corrections which exceed the Newton gravitational force by many orders of\nmagnitude \\cite{8}. 
Similar corrections are predicted in extra-dimensional models with a\nlow-energy compactification scale \\cite{9,10}. Many experiments of E\\\"{o}tvos- and Cavendish-type have\nbeen performed during the last few years searching for possible corrections to the Newton\nlaw of gravitation \\cite{11}.\n\nRecently it was found \\cite{12,13,14,15} that strong model-independent constraints on the coupling\nconstants of axions with nucleons follow from measurements of the Casimir-Polder and Casimir force.\nSome of these constraints overlap with an axion window and, thus, are complementary to\nastrophysical limits. As to corrections to Newton's law of gravitation, measurements of the van der Waals\nand Casimir forces have long been used to constrain their parameters \\cite{16,17}.\nNew, more precise measurements of the Casimir force allowed significant strengthening of previously\nobtained constraints on non-Newtonian gravity over the region of separations below $1\\,\\mu$m\n\\cite{18,19,20,21,22}.\n\nIn this paper, we review constraints on the coupling constants of an axion to a proton and a neutron,\nand corrections to Newton's law of gravitation which follow from the most precise measurements of the\nCasimir interaction \\cite{23,24}. We compare the obtained constraints on an axion with the alternative\nconstraints following from some other laboratory experiments. The constraints on the coupling constants\nof an axion and on non-Newtonian gravity, following from measurements of the Casimir interaction, are\nmutually compared and some conclusions inherent to both of them are obtained.\n\nThe paper is organized as follows. In Section 2 we consider the types of effective potentials which arise\ndue to one- and two-axion exchange. These are compared with the effective potentials originating from\nthe exchange of massless and massive scalar particles. Section 3 is devoted to the constraints on\naxion-nucleon coupling constants which follow from measurements of the Casimir-Polder force acting\nbetween the condensate of ${}^{87}$Rb atoms and a glass silica plate. In Section 4 the constraints on\naxion to nucleon coupling constants are presented obtained from measurements of the gradient of the\nCasimir force between a microsphere and a plate coated with a nonmagnetic metal Au or a magnetic\nmetal Ni. These experiments were performed by means of a dynamic atomic force microscope (AFM).\nSection 5 contains similar constraints obtained from measurements of the gradient of the Casimir force\nbetween Au-coated surfaces of a sphere and a plate using a micromachined oscillator.\nIn Section 6 the constraints on the coupling constants of an axion are provided which follow from\nmeasurements of the Casimir force between corrugated surfaces. In Section 7 we compare the\nconstraints on an axion found from measurements of the Casimir interaction with those obtained from\nsome other laboratory experiments. Section 8 is devoted to the constraints on non-Newtonian gravity\nderived from the Casimir effect. In Section 9 the reader will find our conclusions and discussion.\n\nThroughout the paper we use units in which $\\hbar=c=1$.\n\n\\section{Types of effective potentials}\n\nBelow we consider effective potentials arising from the interaction of nucleons (protons and neutrons) with\nan axion and other axion-like particles predicted in different variants of the Grand Unification Theories.\nAxions also interact with electrons and photons. 
These interactions are, however, much weaker than\naxion-nucleon interaction \\cite{25} and for our purposes can be neglected. In any case, their account would\nlead to only a minor strengthening of the constraints on axion-nucleon coupling constants obtained from the\nforce measurements between macroscopic bodies.\n\nWe assume that the interaction of axion-like particles $a$ with nucleons $\\psi$ is described by the\nLagrangian \\cite{4}\n\\begin{equation}\n{\\cal L}=-i g_{ak}\\bar{\\psi}\\gamma_5\\psi a,\n\\label{eq1}\n\\end{equation}\n\\noindent\nwhere $g_{ak}$ is the coupling constant of an axion to a proton ($k=p$) or to a neutron ($k=n$).\nIn doing so the pseudoscalar coupling of axions and other axion-like particles to nucleons is assumed\n(note that the pseudovector coupling introduced for the\noriginal QCD axions results in the nonrenormalizable\ntheory \\cite{15}). The exchange of one axion between two nucleons of spins\n$\\mbox{\\boldmath$\\sigma$}_{1,2}\/2$ situated at the points\n$\\mbox{\\boldmath$r$}_1\\neq\\mbox{\\boldmath$r$}_2$ with coupling (\\ref{eq1}) results in the following\neffective potential \\cite{25,26}\n\\begin{eqnarray}\n&&\nV(\\mbox{\\boldmath$r$}_1-\\mbox{\\boldmath$r$}_2;\\mbox{\\boldmath$\\sigma$}_{1},\\mbox{\\boldmath$\\sigma$}_{2})\n=\\frac{g_{ak}g_{al}}{16\\pi m_km_l}\\left[\n\\vphantom{\\left(\n\\frac{m_a}{|\\mbox{\\boldmath$r$}_1-\\mbox{\\boldmath$r$}_2|^2}\n\\frac{1}{|\\mbox{\\boldmath$r$}_1-\\mbox{\\boldmath$r$}_2|^3}\\right)}\n(\\mbox{\\boldmath$\\sigma$}_{1}\\cdot\\mbox{\\boldmath$n$})\n(\\mbox{\\boldmath$\\sigma$}_{2}\\cdot\\mbox{\\boldmath$n$})\\right.\n\\nonumber\\\\\n&&~~~~~~~~~\n\\times\\left(\n\\frac{m_a^2}{|\\mbox{\\boldmath$r$}_1-\\mbox{\\boldmath$r$}_2|}+\n\\frac{3m_a}{|\\mbox{\\boldmath$r$}_1-\\mbox{\\boldmath$r$}_2|^2}+\n\\frac{3}{|\\mbox{\\boldmath$r$}_1-\\mbox{\\boldmath$r$}_2|^3}\\right)\n\\nonumber\\\\\n&&~~~~\n\\left.-\n(\\mbox{\\boldmath$\\sigma$}_{1}\\cdot\\mbox{\\boldmath$\\sigma$}_{2})\n\\left(\n\\frac{m_a}{|\\mbox{\\boldmath$r$}_1-\\mbox{\\boldmath$r$}_2|^2}+\n\\frac{1}{|\\mbox{\\boldmath$r$}_1-\\mbox{\\boldmath$r$}_2|^3}\\right)\n\\right]\\,e^{-m_a|\\mbox{\\boldmath$r$}_1-\\mbox{\\boldmath$r$}_2|}.\n\\label{eq2}\n\\end{eqnarray}\n\\noindent\nHere, $g_{ak}$ and $g_{al}$ are the axion-proton ($k,l=p$) or \naxion-neutron ($k,l=n$) interaction constants, $m_k,\\,m_l$ are the nucleon masses,\n$m_a$ is the axion mass, and the unit vector\n$\\mbox{\\boldmath$n$}=\n(\\mbox{\\boldmath$r$}_1-\\mbox{\\boldmath$r$}_2)\n\/|\\mbox{\\boldmath$r$}_1-\\mbox{\\boldmath$r$}_2|$.\n\nAs is seen in (\\ref{eq2}), the effective potential depends on the nucleon spins. Because of this,\nthe resulting interaction between two unpolarized test bodies averages to zero.\nTaking into account that already performed experiments on measuring the Casimir interaction\n\\cite{23,24} deal with unpolarized test bodies, it seems impossible to use them for constraining\nthe axion to nucleon coupling constants basing on the simplest process of one-axion exchange.\n\nThe situation changes when we consider the process of two-axion exchange between the two\nnucleons. 
In this case the Lagrangian (\\ref{eq1}) leads to the following effective\npotential \\cite{25,27,28}\n\\begin{equation}\nV_{kl}(|\\mbox{\\boldmath$r$}_1-\\mbox{\\boldmath$r$}_2|)=-\n\\frac{g_{ak}^2g_{al}^2}{32\\pi^3m_km_l}\\,\n\\frac{m_a}{(\\mbox{\\boldmath$r$}_1-\\mbox{\\boldmath$r$}_2)^2}\\,\nK_1(2m_a|\\mbox{\\boldmath$r$}_1-\\mbox{\\boldmath$r$}_2|),\n\\label{eq3}\n\\end{equation}\n\\noindent\nwhere $K_1(z)$ is the modified Bessel function of the second kind. Note that (\\ref{eq3}) is\nderived under the condition\n$|\\mbox{\\boldmath$r$}_1-\\mbox{\\boldmath$r$}_2|\\gg 1\/m_{k,l}$\nwhich is satisfied with a large safety margin in all the experiments considered below.\nEquation (\\ref{eq3}) does not depend on the nucleon spins. Thus, after the integration over the\nvolumes of test bodies, it leads to some additional force of the axionic origin which can be\nconstrained from the measurement results.\n\nNow we address to exchange of massless and light scalar particles between the atoms of two\nmacroscopic bodies. The exchange of one light scalar particle of mass $M$ between two pointlike\nparticles with masses $m_1$ and $m_2$ spaced at the points\n$\\mbox{\\boldmath$r$}_1$ and $\\mbox{\\boldmath$r$}_2$ results in the spin-independent\nYukawa-type effective potential \\cite{8}. It is convenient to parametrize this potential as a\ncorrection to Newton's law of gravitation:\n\\begin{equation}\nV(|\\mbox{\\boldmath$r$}_1-\\mbox{\\boldmath$r$}_2|)=-\n\\frac{Gm_1m_2}{|\\mbox{\\boldmath$r$}_1-\\mbox{\\boldmath$r$}_2|}\\,\\left(1+\n\\alpha e^{-|\\mbox{\\boldmath$r$}_1-\\mbox{\\boldmath$r$}_2|\/\\lambda}\\right).\n\\label{eq4}\n\\end{equation}\n\\noindent\nHere, $\\alpha$ is a dimensionless constant characterizing the strength of Yukawa interaction,\n$\\lambda=1\/M$\n is the Compton wavelength of light scalar particle characterizing the interaction range, and $G$ is the\nNewtonian gravitational constant. As was noted in Section 1, the effective potential (\\ref{eq4}) arises\nalso in extradimensional models with a low-energy compactification scale \\cite{9,10}.\nIn this case the quantity $\\lambda$ has the meaning of the characteristic size of a multidimensional\ncompact manifold.\n\nThe exchange of one massless scalar particle leads to an effective\n potential which is inversely proportional to the separation\n distance.\nThe exchange of an even number of massless pseudoscalar particles\n(for instance, by the arions) results in the effective potentials\ninversely proportional to higher powers of the separation.\nSimilar potentials arise also due to the exchange of two\nneutrinos,\ntwo goldstinos, or other massless fermions \\cite{29,30}.\nThe power-type effective potentials are also usually\nparametrized as corrections to Newton's law of gravitation\n\\begin{equation}\nV_n(|\\mbox{\\boldmath$r$}_1-\\mbox{\\boldmath$r$}_2|)=-\n\\frac{Gm_1m_2}{|\\mbox{\\boldmath$r$}_1-\\mbox{\\boldmath$r$}_2|}\\,\\left[1+\n\\Lambda_n\n\\left(\\frac{r_0}{|\\mbox{\\boldmath$r$}_1-\\mbox{\\boldmath$r$}_2|}\\right)^{n-1}\n\\right].\n\\label{eq5}\n\\end{equation}\n\\noindent\nHere, $\\Lambda_n$ is a dimensionless constant, $n$ is a positive\ninteger, and $r_0=10^{-15}\\,$m is chosen to preserve the correct\ndimension of energy at different $n$. 
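For a quick numerical orientation, the spin-independent potentials (\ref{eq3}) and (\ref{eq4}) are straightforward to evaluate once all separations are converted to natural units. The following Python fragment is only an illustrative sketch; the coupling constant, the axion mass, the Yukawa parameters and the chosen separation are placeholder values, and $\hbar c\simeq 1.973\times 10^{-7}\,$eV$\,$m is used for the unit conversion.
\begin{verbatim}
# Illustrative evaluation of the two-axion potential (3) and of the
# Yukawa correction (4); all parameter values below are placeholders.
import numpy as np
from scipy.special import kn           # modified Bessel function K_n

hbar_c = 1.9732705e-7                  # eV * m, converts metres to 1/eV
m_p    = 938.272e6                     # proton mass, eV
G_N    = 6.674e-11                     # m^3 kg^-1 s^-2, for the Yukawa part

def V_two_axion(r_m, g=1e-7, m_a=1.0):
    """Eq. (3) for two protons; r_m in metres, m_a in eV, result in eV."""
    r = r_m / hbar_c                   # separation in 1/eV
    return -g**4/(32*np.pi**3*m_p**2) * m_a/r**2 * kn(1, 2*m_a*r)

def V_yukawa(r_m, m1_kg, m2_kg, alpha=1.0, lam_m=1e-7):
    """Eq. (4): Newtonian potential with a Yukawa correction, in joules."""
    return -G_N*m1_kg*m2_kg/r_m * (1.0 + alpha*np.exp(-r_m/lam_m))

print(V_two_axion(1e-9))               # two protons 1 nm apart
\end{verbatim}
Since $K_1(x)\to 1/x$ as $x\to 0$, this fragment also lets one check numerically that, for separations small compared with $1/m_a$, the potential (\ref{eq3}) approaches the $1/r^3$ behaviour quoted below.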
Note that the exchange by\ntwo axion-like particles in the limiting case $m_a\\to 0$\nin accordance to (\\ref{eq3}) results in the potential \\cite{31}\n\\begin{equation}\nV_{kl}(|\\mbox{\\boldmath$r$}_1-\\mbox{\\boldmath$r$}_2|)=-\n\\frac{g_{ak}^2g_{al}^2}{64\\pi^3m_km_l}\\,\n\\frac{1}{|\\mbox{\\boldmath$r$}_1-\\mbox{\\boldmath$r$}_2|^3}.\n\\label{eq6}\n\\end{equation}\n\\noindent\nThis can be represented as a correction to Newton's law of\ngravitation in (\\ref{eq5}) with $n=3$ (the same power-type\ninteraction is obtained from the exchange of two arions).\nThe effective potential (\\ref{eq5}) with $n=3$ is also obtained\nfrom extra-dimensional models with noncompact (but warped)\nextra dimensions \\cite{32,33}.\n\n\\section{Constraints on an axion from measurements of the Casimir-Polder force}\n\nThe Casimir-Polder force acting between ${}^{87}$Rb atoms\nbelonging to a Bose-Einstein condensate cloud and a SiO${}_2$\nplate was measured by means of the following dynamic\nexperiment \\cite{34}. The condensate cloud was placed in a\nmagnetic trap with frequencies $\\omega_{0z}=1438.85\\,$rad\/s\nin the perpendicular direction to the plate and\n$\\omega_{0t}=40.21\\,$rad\/s in the lateral direction.\nThe Thomas-Fermi radii of the condensate cloud of ${}^{87}$Rb\natoms in the perpendicular and lateral directions were\n$R_z=2.69\\,\\mu$m and $R_l=97.1\\,\\mu$m, respectively.\nThe dipole oscillations of the condensate in the $z$ direction\nwith a constant amplitude $A_z=2.5\\,\\mu$m were excited.\nThe separation distance $a$ between the center of mass of a\ncondensate and a plate was varied from 6.88 to $11\\,\\mu$m,\ni.e., in the region where the thermal effects in the\nCasimir-Polder force contribute essentially.\nThe temperature of the plate was equal to either $T=310\\,$K\n(as in an environment) or $T=479\\,$K and $T=605\\,$K (which\ncorresponds to out of equilibrium situations). However, for\nconstraining the parameters of an axion, the strongest result\nfollows from the measurements in thermal equilibrium.\n\nUnder the influence of the Casimir-Polder force between\n${}^{87}$Rb atoms and a plate, the oscillation frequency\n$\\omega_{0z}$ slightly shifts to some other value $\\omega_z$.\nThe relative frequency shift is given by\n\\begin{equation}\n\\gamma_z=\\frac{|\\omega_{0z}-\\omega_z|}{\\omega_{0z}}\\approx\n\\frac{|\\omega_{0z}^2-\\omega_z^2|}{2\\omega_{0z}^2}.\n\\label{eq7}\n\\end{equation}\n\\noindent\nThis frequency shift was measured \\cite{34} as a function of\n$a$ with some measurement errors determined at a 67\\% confidence\nlevel. For example, at the shortest separation $a_1=6.88\\,\\mu$m\nthis absolute error was $\\Delta_1\\gamma_z=3.06\\times 10^{-5}$.\nThe quantity $\\gamma_z$ was also calculated using the Lifshitz\ntheory of atom-wall interaction and subsequent averaging over the\ncondensate cloud. Under the assumption that SiO${}_2$ is an ideal\ninsulator, i.e., by disregarding the influence of its dc\nconductivity,\nit was found \\cite{34} that the measurement results are in\nagreement with theory in the limits of the experimental error\n$\\Delta\\gamma_z$ (the importance of this assumption was\ndemonstrated later \\cite{23,24,35}).\n\nDue to the interaction potential (\\ref{eq3}), there may be also\nsome additional force between a condensate cloud and a plate\ncaused by the two-axion exchange between protons and neutrons\nbelonging to them. 
The respective additional frequency shift can\nbe calculated by the additive summation of (\\ref{eq3}) over all\nnucleons of a ${}^{87}$Rb atom and a plate with subsequent\naveraging over the condensate cloud (see \\cite{12} for details).\nUnder an assumption that the plate has an infinitely large area\n(it was shown \\cite{12} that relative corrections to the result\ndue to a finite plate area are of order $10^{-6}$) the additional\nfrequency shift due to two-axion exchange is given by \\cite{12}\n\\begin{equation}\n\\gamma_{z}^{\\rm add}(a)=\n\\frac{15A(g_{ap},g_{an})}{2\\pi A_z m_{\\rm Rb}\\omega_{0z}^2}\n\\Phi(a,m_a),\n\\label{eq8}\n\\end{equation}\n\\noindent\nwhere $m_{\\rm Rb}$ is the mass of ${}^{87}$Rb atom and the\nfunction $\\Phi(a,m_a)$ is defined as\n\\begin{eqnarray}\n&&\n\\Phi(a,m_a)=\n\\int_{1}^{\\infty}\\!\\!\\!du\\frac{\\sqrt{u^2-1}}{u}\ne^{-2m_aau}\n\\nonumber \\\\\n&&~~~~~~\\times\n\\left(1-e^{-2m_aDu}\\right)\\,\nI_1(2m_aA_zu)\\Theta(2m_aR_zu).\n\\label{eq9}\n\\end{eqnarray}\nHere, $D=7\\,$mm is the thickness of SiO${}_2$ plate and\n\\begin{equation}\n\\Theta(t)\\equiv\\frac{1}{t^3}(t^2\\sinh t-3t\\cosh t+3\\sinh t).\n\\label{eq10}\n\\end{equation}\n\\noindent\nThe constant $A(g_{ap}g_{an})$ in (\\ref{eq8}) depends on the\nmaterial properties as follows \\cite{12}\n\\begin{eqnarray}\n&&\nA(g_{ap},g_{an})=\n\\frac{\\rho_{{\\rm SiO}_2}m_a}{16\\pi^2m^2m_{\\rm H}}\n(37g_{ap}^2+50g_{an}^2)\n\\nonumber\\\\\n&&~~~~~~~\n\\times\\left(\\frac{Z_{{\\rm SiO}_2}}{\\mu_{{\\rm SiO}_2}}g_{ap}^2\n+\\frac{N_{{\\rm SiO}_2}}{\\mu_{{\\rm SiO}_2}}g_{an}^2\\right),\n\\label{eq11}\n\\end{eqnarray}\n\\noindent\nwhere $\\rho_{{\\rm SiO}_2}$ is the plate density,\n$m=(m_p+m_n)\/2$ is the mean nucleon mass,\n$Z_{{\\rm SiO}_2}$ and $N_{{\\rm SiO}_2}$ are the number of protons\nand the mean number of neutrons in a SiO${}_2$ molecule,\nrespectively. The quantity\n$\\mu_{{\\rm SiO}_2}=m_{{\\rm SiO}_2}\/m_{\\rm H}$, where\n$m_{{\\rm SiO}_2}$ is the mean mass of a SiO${}_2$ molecule and\n$m_{\\rm H}$ is the mass of atomic hydrogen.\n\nTaking into account that the observed frequency shift was in\nagreement with that originating from the Casimir-Polder force,\nthe additional frequency shift (\\ref{eq8}) due to two-axion\nexchange should be constrained by the magnitude of the experimental\nerror\n\\begin{equation}\n\\gamma_{z}^{\\rm add}(a_1)\\leq\\Delta_1\\gamma_z.\n\\label{eq12}\n\\end{equation}\n\\noindent\n{}From the numerical analysis of this equation, the constraints\non axion-nucleon coupling constants were obtained \\cite{12} under\ndifferent assumptions about a relationship between $g_{an}$ and\n$g_{ap}$. For example, under a natural assumption that\n$g_{an}=g_{ap}$ \\cite{25}, the resulting constraints are shown\nin Fig.~1, where the region of the plane above the line is\nexcluded and the region below the line is allowed.\nThese constraints cover the wide region of axion masses from\n$m_a=10^{-4}$ to 0.3\\,eV. 
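The numerical content of (\ref{eq8})--(\ref{eq12}) can be reproduced at the level of a rough estimate by direct quadrature. The Python sketch below is only illustrative and is not the computation of \cite{12}; it uses the experimental numbers quoted above, an illustrative axion mass, and the conversions $\hbar c\simeq 1.973\times 10^{-7}\,$eV$\,$m and $\hbar\simeq 6.582\times 10^{-16}\,$eV$\,$s. The finite integration cutoff is chosen so that the integrand has decayed for the sample mass used here; it has to be adjusted (with some care about overflow of $I_1$ and $\sinh$) when other masses are explored.
\begin{verbatim}
# Rough re-evaluation of Phi(a, m_a) of Eq. (9) and of the bound (12).
import numpy as np
from scipy.integrate import quad
from scipy.special import iv            # modified Bessel function I_n

hbar_c = 1.9732705e-7                   # eV * m
hbar_s = 6.582119e-16                   # eV * s
to_inv_eV = lambda L_m: L_m / hbar_c    # metres -> 1/eV

a_1   = to_inv_eV(6.88e-6)              # shortest separation
D     = to_inv_eV(7.0e-3)               # plate thickness
A_z   = to_inv_eV(2.5e-6)               # oscillation amplitude
R_z   = to_inv_eV(2.69e-6)              # Thomas-Fermi radius
m_Rb  = 86.909*931.494e6                # mass of a 87Rb atom, eV
omega = 1438.85*hbar_s                  # trap frequency, eV
dgam  = 3.06e-5                         # error Delta_1 gamma_z

def Theta(t):                           # Eq. (10)
    return (t**2*np.sinh(t) - 3*t*np.cosh(t) + 3*np.sinh(t))/t**3

def Phi(a, m_a, u_max=40.0):            # Eq. (9) with a finite cutoff
    f = lambda u: (np.sqrt(u**2 - 1)/u*np.exp(-2*m_a*a*u)
                   *(1 - np.exp(-2*m_a*D*u))
                   *iv(1, 2*m_a*A_z*u)*Theta(2*m_a*R_z*u))
    return quad(f, 1.0, u_max)[0]

m_a   = 0.1                             # illustrative axion mass, eV
A_max = 2*np.pi*A_z*m_Rb*omega**2*dgam/(15*Phi(a_1, m_a))
print(A_max)                            # bound on A(g_ap, g_an) from Eq. (12)
\end{verbatim}
Scanning over $m_a$ and inverting (\ref{eq11}) for $g_{ap}=g_{an}$ then turns such bounds on $A(g_{ap},g_{an})$ into exclusion curves of the type shown in Fig.~1.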
As is seen in Fig.~1, the strength\nof constraints decreases with increasing axion mass.\nIn Section 7 we compare the constraints of Fig.~1 with those\nobtained from other measurements of the Casimir force and\ndifferent laboratory experiments.\n\n\\section{Constraints on an axion from measurements of the gradient of the Casimir force\nby means of AFM}\n\nIn the sequence of three experiments, the gradient of the Casimir\nforce was measured\nbetween the surfaces of a hollow sphere and a plate both coated\nwith Au films \\cite{36,37},\nwith Au and Ni films, respectively \\cite{38},\nand with Ni films \\cite{39,40}.\nFor technological purposes, there were also various material\nlayers\nbelow Au and Ni coatings on both a hollow sphere made of fused\nsilica (SiO${}_2$) and a sapphire (Al${}_2$O${}_3$) plate.\nThe radii of spheres were of about $50\\,\\mu$m and the plates (disks)\nwere of approximately 5\\,mm radius, i.e., by a factor of 100\nlarger than the spheres.\nMeasurements of the gradient of the Casimir force,\n$\\partial F_C(a)\/\\partial a$,\nas a function of separation $a$ between the plate and the sphere,\nwere performed by means of dynamic AFM (see \\cite{36,37} for\ndetails). In all three experiments the measurement results were\nfound in agreement with theoretical predictions of the Lifshitz\ntheory in the limits of the experimental errors\n$\\Delta F_C^{\\prime}(a)$. Calculations of the theoretical force\ngradients were performed with omitted relaxation properties of\nconduction electrons in metals (an account of the\nrelaxation properties of\nconduction electrons in computations using the Lifshitz theory\nleads to disagreement with the measurement data of many\nexperiments \\cite{22,23,24,36,37,39,40}).\n\nThe two-axion exchange between nucleons belonging to a sphere and\na plate leads to some attraction in addition to the Casimir force.\n The gradient of this additional force acting between a spherical\nenvelope (layer) of thickness $\\Delta_s$ and external radius $R$,\nand a plate of thickness $D$ can be calculated by the additive\nsummation of the interaction potentials (\\ref{eq3}) \\cite{13}\n\\begin{eqnarray}\n&&\n\\frac{\\partial F_{\\rm add}(a)}{\\partial a}=\n\\frac{\\pi}{m^2m_H^2}C_pC_s\n\\int_{1}^{\\infty}\\!du\\frac{\\sqrt{u^2-1}}{u^2}\n\\left(1-e^{-2m_auD}\\right)\n\\nonumber \\\\\n&&~~~~\n\\times\ne^{-2m_aau}\\,\\left[\\Phi(R,m_au)-\ne^{-2m_au\\Delta_s}\\Phi(R-\\Delta_s,m_au)\\right],\n\\label{eq13}\n\\end{eqnarray}\n\\noindent\nwhere the function $\\Phi(r,z)$ is defined as\n\\begin{equation}\n\\Phi(r,z)=r-\\frac{1}{2z}+e^{-2rz}\\left(r+\n\\frac{1}{2z}\\right),\n\\label{eq14}\n\\end{equation}\nthe coefficients $C_{p(s)}$ for a plate (spherical layer)\nmaterials are given by\n\\begin{equation}\nC_{p(s)}=\\rho_{p(s)}\\left(\\frac{g_{ap}^2}{4\\pi}\\,\n\\frac{Z_{p(s)}}{\\mu_{p(s)}}+\\frac{g_{an}^2}{4\\pi}\\,\n\\frac{N_{p(s)}}{\\mu_{p(s)}}\\right),\n\\label{eq15}\n\\end{equation}\n\\noindent\n$\\rho_{p(s)}$ are the plate (spherical layer)\ndensities, and the quantities $Z_{p(s)}$, $N_{p(s)}$ and\n$\\mu_{p(s)}$ have the same meaning, as explained below\n(\\ref{eq11}), but in application to the molecules (atoms)\nof a plate and a spherical layer, respectively.\n\nNow we concentrate our attention on the experiment using\nAu-coated surfaces of a spherical envelope of thickness\n$\\Delta_s^{\\! g}=5\\,\\mu$m, of radius $R=41.3\\,\\mu$m\nand a plate \\cite{36,37}. 
The thicknesses of the Au coating\non the sphere and the plate were\n$\\Delta_s^{\\!\\rm Au}=\\Delta_p^{\\!\\rm Au}=280\\,$nm.\nThis allows to calculate the Casimir force (but not the\nadditive force due to two-axion exchange) as between entirely\nAu bodies. In calculation of the additional force it should\nbe taken into account that in the experiment \\cite{36,37}\nthe Au layers on both the spherical envelope and the plate\nwere deposited on the layers of Al of equal thicknesses\n$\\Delta_s^{\\!\\rm Al}=\\Delta_p^{\\!\\rm Al}=20\\,$nm.\nNow the gradient of the additional force can be calculated\nby applying (\\ref{eq13}) to each pair of material layers\nforming the spherical envelope and the plate taking into\naccount the separation distances between each pair of\nmaterial layers\n\\begin{eqnarray}\n&&\n\\frac{\\partial F_{\\rm add}(a)}{\\partial a}=\n\\frac{\\pi}{m^2m_H^2}\n\\int_{1}^{\\infty}\\!du\\frac{\\sqrt{u^2-1}}{u^2}\ne^{-2m_aau}\n\\nonumber \\\\\n&&~~~~~~~~~~\n\\times\nX_p(m_au)X_s(m_au),\n\\label{eq16}\n\\end{eqnarray}\n\\noindent\nwhere\n\\begin{eqnarray}\n&&\nX_p(z)\\equiv C_{\\rm Au}\\left(1-e^{-2z\\Delta_p^{\\!\\rm Au}}\\right)\n\\nonumber \\\\\n&&~~~\n+C_{\\rm Al}e^{-2z\\Delta_p^{\\!\\rm Au}}\\left(1-e^{-2z\\Delta_p^{\\!\\rm Al}}\\right)\n+C_{\\rm sa}e^{-2z(\\Delta_p^{\\!\\rm Au}+\\Delta_p^{\\!\\rm Al})},\n\\nonumber \\\\[-2mm]\n&&\n\\label{eq17} \\\\[-2mm]\n&&\nX_s(z)\\equiv C_{\\rm Au}\\left[\\Phi(R,z)-e^{-2z\\Delta_s^{\\!\\rm Au}}\n\\Phi(R-\\Delta_s^{\\!\\rm Au},z)\\right]\n\\nonumber \\\\\n&&~~~~~~\n+C_{\\rm Al}e^{-2z\\Delta_s^{\\!\\rm Au}}\n\\left[\\vphantom{e^{-2z\\Delta_s^{\\!\\rm Al}}\n\\Phi(R-\\Delta_s^{\\!\\rm Au})}\n\\Phi(R-\\Delta_s^{\\!\\rm Au},z)\\right.\n\\nonumber \\\\\n&&~~~~~~~~~~\\left.\n-e^{-2z\\Delta_s^{\\!\\rm Al}}\n\\Phi(R-\\Delta_s^{\\!\\rm Au}-\\Delta_s^{\\!\\rm Al},z)\\right]\n\\nonumber \\\\\n&&~~~~~~\n+C_{g}e^{-2z(\\Delta_s^{\\!\\rm Au}+\\Delta_s^{\\!\\rm Al})}\n\\left[\\vphantom{e^{-2z\\Delta_s^{\\!\\rm Al}}\n\\Phi(R-\\Delta_s^{\\!\\rm Au})}\n\\Phi(R-\\Delta_s^{\\!\\rm Au}-\\Delta_s^{\\!\\rm Al},z)\\right.\n\\nonumber \\\\\n&&~~~~~~~~~~\\left.\n-\ne^{-2z\\Delta_s^{\\! g}}\n\\Phi(R-\\Delta_s^{\\!\\rm Au}-\\Delta_s^{\\!\\rm Al}\n-\\Delta_s^{\\!g},z)\\right].\n\\nonumber\n\\end{eqnarray}\n\\noindent\nIn these equations, the thickness of the sapphire plate\nwas put equal to\ninfinity, as it does not influence the result.\nThe coefficients $C_{\\rm Au}$, $C_{\\rm Al}$, $C_{g}$\nand $C_{sa}$ are defined in Eq.~(\\ref{eq15}) which\nshould be applied to the\natoms Au and Al and to the molecules of glass and sapphire\n[the densities of these materials entering (\\ref{eq15}) are\n$\\rho_{\\rm Au}$, $\\rho_{\\rm Al}$, $\\rho_g$ and $\\rho_{sa}$;\nthey can be found in the tables].\n\nTaking into account that no additional force was observed in\nthe experiment \\cite{36,37} within the measurement\nerror, one can write\n\\begin{equation}\n\\frac{\\partial F_{\\rm add}(a)}{\\partial a}\\leq\n\\Delta F_C^{\\prime}(a).\n\\label{eq18}\n\\end{equation}\n\\noindent\nNumerical analysis of this equation leads to new constraints\non the interaction constants $g_{ap}$ and $g_{an}$.\nThe strongest constraints are obtained at the shortest\nexperimental separation $a_1=235\\,$nm. 
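The layer bookkeeping entering (\ref{eq16}) and (\ref{eq17}) is perhaps easiest to see in code. The fragment below is an illustrative sketch rather than the actual computation used in \cite{13}: it encodes the coefficient (\ref{eq15}), the function (\ref{eq14}) and the plate factor $X_p(z)$ of (\ref{eq17}) for the Au-Al-sapphire stack just described, while the material data $\rho$, $Z/\mu$ and $N/\mu$ are left as inputs to be taken from standard tables.
\begin{verbatim}
# Layer bookkeeping for Eqs. (14), (15) and (17); material data are inputs.
import numpy as np

hbar_c = 1.9732705e-7                       # eV * m

def C_coeff(rho, Z_over_mu, N_over_mu, g_ap, g_an):
    """Eq. (15) for a given material; rho in natural units (eV^4)."""
    return rho*(g_ap**2/(4*np.pi)*Z_over_mu + g_an**2/(4*np.pi)*N_over_mu)

def Phi_sphere(r, z):
    """Eq. (14); r is a radius in 1/eV, z stands for m_a*u."""
    return r - 1/(2*z) + np.exp(-2*r*z)*(r + 1/(2*z))

def X_plate(z, C_Au, C_Al, C_sa,
            d_Au=280e-9/hbar_c, d_Al=20e-9/hbar_c):
    """Eq. (17): Au film on an Al film on a thick sapphire plate."""
    return (C_Au*(1 - np.exp(-2*z*d_Au))
            + C_Al*np.exp(-2*z*d_Au)*(1 - np.exp(-2*z*d_Al))
            + C_sa*np.exp(-2*z*(d_Au + d_Al)))
\end{verbatim}
The sphere factor $X_s(z)$ of (\ref{eq17}) is assembled from \verb|Phi_sphere| in the same layered fashion, and the gradient (\ref{eq16}) then follows by a single quadrature over $u$.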
At this separation\ndistance the experimental error determined at a 67\\% confidence\nlevel is\n$\\Delta F_C^{\\prime}(a_1)\\equiv\\Delta_1F_C^{\\prime}=\n0.5\\,\\mu$N\/m \\cite{36}.\nIn Fig.~2 we show these constraints by the solid line under\nthe assumption $g_{ap}=g_{an}$ (see \\cite{13} for the alternative\nassumptions). The region of the plane above the line is\nexcluded, and the region below the line is allowed.\nThe comparison of the solid line in Fig.~2 with the line in\nFig.~1 shows that the constraints following from measurements\nof the gradient of the Casimir force are stronger than those\nobtained from measurements of the Casimir-Polder force.\nThe largest strengthening by a factor of 170 is achieved for the\naxion mass $m_a=0.3\\,$eV.\n\nSimilar results can be obtained \\cite{13} from the measurement\ndata of experiment with a Au-coated spherical envelope of\n$R=64.1\\,\\mu$m radius and a Ni-coated plate \\cite{38}.\nThe gradient of the additional force due to two-axion exchange\nis again given by (\\ref{eq16}), where $X_s(z)$ is presented in\n(\\ref{eq17}) and $X_p(z)$ takes a more simple form due to the\nabsence of an Al layer below a Ni coating\n\\begin{equation}\nX_p(z)= C_{\\rm Ni}\\left(1-e^{-2z\\Delta_p^{\\!\\rm Ni}}\\right)\n+C_{\\rm Si}e^{-2z\\Delta_p^{\\!\\rm Ni}}.\n\\label{eq19}\n\\end{equation}\nHere, $\\Delta_p^{\\!\\rm Ni}=154\\,$nm and $C_{\\rm Ni}$ can be\ncalculated using (\\ref{eq15}).\n\nThe constraints on the coupling constants of axions to nucleons\ncan be again obtained from (\\ref{eq18}).\nThe strongest constraints follow at the shortest separation\nequal to $a_1=220\\,$nm in this experiment. The respective total\nexperimental error determined at a 67\\% confidence level is\n$\\Delta_1F_C^{\\prime}=0.79\\,\\mu$N\/m \\cite{38}.\nThe constraints obtained under the condition $g_{ap}=g_{an}$\nare shown by the long-dashed line in Fig.~2. As can be seen in\nFig.~2, the constraints following from the experiment with Au-Ni\ntest bodies are up to a factor 1.5 weaker\nthan those obtained\nfrom the experiment with Au-Au test bodies. The main reason is\nthe smaller density of Ni, as compared with Au.\n\nIn the third experiment, a Ni-coated spherical envelope of\n$R=61.71\\,\\mu$m radius and a Ni-coated plate were used \\cite{39,40}.\nThe additional force can be again expressed by (\\ref{eq16}).\nIn this case, however, the functions $X_p(z)$ and $X_s(z)$ are\nmore complicated than in the previously considered experiments\nbecause for technological purposes there were two additional\nlayers (Al and Cr)\nbelow the Ni coating on both a spherical envelope and on a plate\n(see \\cite{13} for explicit expressions).\n\nThe constraints on $g_{ap}=g_{an}$ were again obtained from (\\ref{eq18}).\nThe strongest constraints follow at the shortest separation\ndistance ($a_1=223\\,$nm in this case). The total\nexperimental error determined at a 67\\% confidence level\nat the shortest separation is\n$\\Delta_1F_C^{\\prime}=1.2\\,\\mu$N\/m \\cite{38}.\nThe obtained constraints\nare shown by the short-dashed line in Fig.~2. They are slightly\nweaker than those following from the experiments with Au-Au\nand Au-Ni test bodies. 
This is again explained by\nthe smaller density of Ni in comparison with that of Au (see\nSection 7 for comparison with other laboratory constraints).\n\n\\section{Constraints on an axion from measurements of the Casimir pressure\nby means of micromachined oscillator}\n\nThe Casimir pressure $P_C(a)$ between two parallel Au-coated\nplates\nwas determined from dynamic measurements performed in sphere-plate\n geometry using a micromechanical torsional oscillator\n\\cite{41,42}.\nA sapphire sphere and a Si plate of thickness $D=5\\,\\mu$m were\ncoated with the layers of Cr of equal thickness\n$\\Delta_s^{\\!\\rm Cr}=\\Delta_p^{\\!\\rm Cr}=10\\,$nm.\nThe outer layers of Au were of thicknesses\n$\\Delta_s^{\\!\\rm Au}=180\\,$nm on the sphere and\n$\\Delta_p^{\\!\\rm Au}=210\\,$nm on the plate.\nThe resulting radius of the sphere was measured to be\n$R=151.3\\,\\mu$m. The experimental results for the Casimir pressure\n between two parallel plates spaced $a$ apart were found to be\nin agreement with the predictions of the Lifshitz theory in the\nlimits of the total experimental error in the pressure\nmeasurements $\\Delta P_C(a)$ determined at a 95\\% confidence\nlevel. Here, we recalculate this error to a 67\\% confidence\nlevel in order to obtain constraints comparable with those\nfollowing from other experiments. The theoretical results were\nobtained with omitted contribution of the relaxation properties\nof free electrons (taking these properties into account leads\nto theoretical predictions excluded by the measurement data\n\\cite{23,24,41,42}).\n\nThe additional effective pressure between two parallel plates\ndue to two-axion exchange between nucleons of a sphere and a\nplate can be calculated by the additive summation using the\ninteraction potential (\\ref{eq3}) (see \\cite{14} for details).\nThe result is the following \\cite{14}:\n\\begin{eqnarray}\n&&\nP_{\\rm add}(a)=\n-\\frac{1}{2m^2m_{\\rm H}^2R}\\int_{1}^{\\infty}\\!\\!\\!du\n\\frac{\\sqrt{u^2-1}}{u^2}\n\\nonumber \\\\\n&&~~~~~~~~~\n\\times\ne^{-2m_aau}\\tilde{X}_p(m_au)\\tilde{X}_s(m_au),\n\\label{eq20}\n\\end{eqnarray}\n\\noindent\nwhere\n\\begin{eqnarray}\n&&\n\\tilde{X}_p(z)\\equiv C_{\\rm Au}\\left(1-e^{-2z\\Delta_p^{\\!\\rm Au}}\n\\right)\n\\nonumber \\\\\n&&~~~\n+C_{\\rm Cr}e^{-2z\\Delta_p^{\\!\\rm Au}}\n\\left(1-e^{-2z\\Delta_p^{\\!\\rm Cr}}\n\\right)\n\\nonumber \\\\\n&&~~~\n+C_{\\rm Si}e^{-2z(\\Delta_p^{\\!\\rm Au}+\\Delta_p^{\\!\\rm Cr})}\n\\left(1-e^{-2zD}\n\\right),\n\\label{eq21} \\\\[1mm]\n&&\n\\tilde{X}_s(z)\\equiv C_{\\rm Au}\\left[\n\\vphantom{e^{-2z\\Delta_s^{\\!\\rm Au}}}\n\\Phi(R,z)\n-e^{-2z\\Delta_s^{\\!\\rm Au}}\n\\Phi(R-\\Delta_s^{\\!\\rm Au},z)\\right]\n\\nonumber \\\\\n&&~~~\n+C_{\\rm Cr}e^{-2z\\Delta_s^{\\!\\rm Au}}\n\\left[\n\\vphantom{e^{-2m_au\\Delta_s^{\\!\\rm Au}}}\n\\Phi(R-\\Delta_s^{\\!\\rm Au},z)\n\\right.\n\\nonumber \\\\\n&&~~~~~~~~~~~~\n\\left.\n-e^{-2z\\Delta_s^{\\!\\rm Cr}}\n\\Phi(R-\\Delta_s^{\\!\\rm Au}-\\Delta_s^{\\!\\rm Cr},z)\n\\right]\n\\nonumber \\\\\n&&~~~\n+C_{sa}\ne^{-2z(\\Delta_s^{\\!\\rm Au}+\\Delta_s^{\\!\\rm Cr})}\n\\Phi(R-\\Delta_s^{\\!\\rm Au}-\\Delta_s^{\\!\\rm Cr},z).\n\\nonumber\n\\end{eqnarray}\n\\noindent\nThe function $\\Phi(r,z)$ used here is given in (\\ref{eq14}).\nThe coefficients $C_{\\rm Au}$, $C_{\\rm Cr}$, $C_{\\rm Si}$, and\n$C_{sa}$ are the same as used above. 
All of them are expressed\nby (\\ref{eq15}), as applied to respective materials.\n\nThe constraints on the axion-nucleon interaction constants were\nfound from the inequality\n\\begin{equation}\n|P_{\\rm add}(a)|\\leq\\Delta P_C(a).\n\\label{eq22}\n\\end{equation}\n\\noindent\nFor different regions of axion masses the strongest constraints\nfollow from (\\ref{eq22}) at different separation distances.\nThus, within the regions\n$m_a<0.1\\,$eV, $0.1\\,\\mbox{eV}\\leq m_a<0.5\\,$eV and\n$0.5\\,\\mbox{eV}\\leq m_a<15\\,$eV the strongest constraints were\nobtained at $a=300$, 200 and 162\\,nm, respectively.\nAt these separations the total experimental errors in\nmeasurements of the Casimir pressure recalculated to a 67\\%\nconfidence level were equal to 0.22, 0.38, and 0.55\\,mPa,\nrespectively. In Fig.~3 the obtained constraints are shown\nby the solid line under the condition $g_{ap}=g_{an}$.\nThey are stronger than the constraints following from\nmeasurements of the Casimir-Polder force (see Fig.~1) and from\nmeasurements of the gradient of the Casimir force between\nAu-Au surfaces (see the solid line in Fig.~2). Thus, at\n$m_a=1\\,$eV\nthe constraints of Fig.~3 are stronger by a factor of 3.2 than\nthe strongest constraints of Fig.~2 shown by the solid line\n(a more detailed comparison is contained in Section 7).\n\n\\section{Constraints on an axion from measurements of the Casimir force\nbetween corrugated surfaces}\n\nSeveral measurements of the Casimir interaction between a sphere\nand a plate were performed in the case when the surface of at\n least one test body is not smooth, but covered with the\n longitudinal corrugations \\cite{43,44,45,46,47,48,49,50}.\n The shape of the corrugations was either sinusoidal\n \\cite{43,44,47,48,49,50} or rectangular \\cite{45,46}\n(in the latter case the sphere was smooth, and only the plate was\ncorrugated). If both the test bodies are corrugated and some\nnonzero phase shift between corrugations is present, there is\nnot only the normal Casimir force acting perpendicular to the\nsurfaces, but the lateral Casimir force as well\n\\cite{43,44,47,48}.\nHere we consider the constraints on axion-nucleon coupling\nconstants obtained \\cite{15} from measurements of the normal\n\\cite{49,50} and lateral \\cite{47,48} Casimir force between\nsinusoidally corrugated Au-coated surfaces (experiments\n\\cite{43,44} are less precise, and experiments \\cite{45,46}\nuse the rectangular corrugated Si plates and lead to weaker\nconstraints due to a smaller density of Si).\n\nWe begin with an experiment on measuring the lateral Casimir\nforce between\nsinusoidally corrugated surfaces of a sphere and a plate\n\\cite{47,48}. The corrugation axes of the longitudinal\ncorrugations on both bodies were kept parallel, and there was\nsome phase shift $\\varphi_0$ between corrugations.\nThe period of corrugations was $\\Lambda=574.4\\,$nm.\nMeasurements of the lateral Casimir force as a function of\nthe phase shift were performed over the region of separations\nbetween the mean levels of corrugations from 120 to 190\\,nm.\nThe corrugation amplitudes were\n$A_1=85.4\\,$nm and $A_2=13.7\\,$nm on the plate and on the\nsphere, respectively. 
The plate was made of a hard epoxy and\ncoated with a layer of Au of thickness\n$\\Delta_p^{\\!\\rm Au}=300\\,$nm.\nThe sphere was made of polystyrene and coated with a layer of\nCr of $\\Delta_s^{\\!\\rm Cr}=10\\,$nm thickness and then with a layer of\nAu of $\\Delta_s^{\\!\\rm Au}=50\\,$nm thickness.\nThe outer radius of the sphere was measured to be $R=97.0\\,\\mu$m.\nThe measurement results were compared with theoretical predictions\n of the scattering theory (which generalizes the Lifshitz theory\n for the case of arbitrary shaped bodies) and demonstrated good\n agreement in the limits of the experimental error\n $\\Delta F_C^{\\rm lat}(a)$ \\cite{47,48}.\n\n The additional lateral force due to two-axion exchange between\n sinusoidally corrugated surfaces of a sphere and a plate can be\n calculated using (\\ref{eq3}). The maximum amplitude of this\n force, which is obtained at the phase shift $\\varphi_0=\\pi\/2$,\n takes the form \\cite{15}\n\\begin{eqnarray}\n&&\n\\max|F_{\\rm add}^{\\rm lat}(a)|=\n\\frac{\\pi^2 RC_{\\rm Au}}{m_am^2m_{\\rm H}^2}\\,\n\\frac{A_1A_2}{\\Lambda\\sqrt{A_1^2+A_2^2}}\n\\nonumber \\\\[1mm]\n&&~~\n\\times\n\\int_{1}^{\\infty}\\!\\!\\!du\\frac{\\sqrt{u^2-1}}{u^3}\ne^{-2m_aua} I_1\\left(2m_au\\sqrt{A_1^2+A_2^2}\\right)\n\\nonumber \\\\[1mm]\n&&~~~~~\n\\times\n(1-e^{-2m_au\\Delta_p^{\\!\\rm Au}})\n\\left[\n\\vphantom{e^{-2m_au\\Delta_{\\rm Au}^{\\!(1)}}}\nC_{\\rm Au}+(C_{\\rm Cr}-C_{\\rm Au})\n\\right.\n\\nonumber \\\\[1mm]\n&&~~~~\\left.\n\\times e^{-2m_au\\Delta_s^{\\!\\rm Au}}\n-C_{\\rm Cr}\ne^{-2m_au(\\Delta_s^{\\!\\rm Au}+\\Delta_s^{\\!\\rm Cr})}\n\\right].\n\\label{eq23}\n\\end{eqnarray}\n\\noindent\nHere, the hard epoxy and polystyrene would lead to negligibly\nsmall contributions to the force due to two-axion exchange.\nBecause of this, only metallic coatings were taken into account\nin (\\ref{eq23}).\n\nThe constraints on an axion can be obtained from the inequality\n\\begin{equation}\n\\max|F_{\\rm add}^{\\rm lat}(a)|\\leq\n\\Delta F_{C}^{\\rm lat}(a),\n\\label{eq24}\n\\end{equation}\n\\noindent\nwhere the left-hand side is given by (\\ref{eq23}).\nFor axion-like particles with masses $m_a<20\\,$eV, the strongest\nconstraints are obtained from the measure of agreement between\nexperiment and theory at $a=124.7\\,$nm. At this separation the\ntotal experimental error recalculated to a 67\\% confidence level\nfor convenience in comparison with other experiments is\n$\\Delta F_C^{\\rm lat}=2.4\\,$pN (note that according to a\nconservative estimation, the total experimental error\ncalculated in \\cite{47,48} at a 95\\% confidence level is by a\nfactor of 2 larger than the same error found at a 67\\% confidence\nlevel). The constraints on $g_{ap}=g_{an}$ obtained from\n(\\ref{eq24}) at $a=124.7\\,$nm\nare shown by the solid line in Fig.~4, where the region of the\nplane above the line is excluded and the region below the line is\nallowed. Note that this line is slightly different from the\nrespective lines in Fig.~2(a,b) in \\cite{15} because it was\nplotted there at the 95\\% confidence level.\n\nWe now turn our attention to the experiment on measuring the\nnormal Casimir force between a sinusoidally corrugated Au-coated\npolystyrene sphere of $R=99.6\\,\\mu$m radius and a sinusoidally\ncorrugated Au-coated plate made of hard epoxy \\cite{49,50}.\nThis experiment was performed at different angles between the\nlongitudinal corrugations on the sphere and on the plate varying\nfrom 0 to 2.4${}^{\\circ}$. 
There was no phase shift between\ncorrugations on both bodies. Below we obtain constraints on the\naxion-nucleon coupling constants from the measurement data for\nthe case of parallel corrugation axes on the sphere and the\nplate. The thicknesses of Au coatings on the sphere and on the\nplate were $\\Delta_s^{\\!\\rm Au}=110\\,$nm and\n$\\Delta_p^{\\!\\rm Au}=300\\,$nm, respectively.\nFor technological purposes, before depositing the Au coatings,\nthe sphere was first coated with a layer of Cr of thickness\n$\\Delta_s^{\\!\\rm Cr}=10\\,$nm and then\nwith a layer of Al of thickness\n$\\Delta_s^{\\!\\rm Al}=20\\,$nm.\nThe period of uniaxial sinusoidal corrugations on both bodies\nwas $\\Lambda=570.5\\,$nm, and the corrugations amplitudes were\n$A_1=40.2\\,$nm and $A_2=14.6\\,$nm on the plate and on the sphere,\nrespectively. The measurement results were compared with\ntheoretical predictions of the scattering theory and found in\ngood agreement within the limits of the total experimental error.\n\nThe additional normal force acting between a sphere and a plate\ndue to two-axion exchange was again calculated \\cite{15} using\n(\\ref{eq3})\n\\begin{eqnarray}\n&&\nF_{\\rm add}^{\\rm nor}(a)=-\\frac{\\pi RC_{\\rm Au}}{2m_am^2m_{\\rm H}^2}\n\\int_{1}^{\\infty}\\!\\!\\!du\\frac{\\sqrt{u^2-1}}{u^3}e^{-2m_aua}\n\\nonumber \\\\[1mm]\n&&~~~~~\n\\times I_0\\left(2m_au(A_1-A_2)\\right)(1-e^{-2m_au\\Delta_p^{\\,\\rm Au}})\n\\nonumber \\\\[1mm]\n&&~~~~~\n\\times\\left[C_{\\rm Au}+(C_{\\rm Al}-C_{\\rm Au})\ne^{-2m_au\\Delta_s^{\\!\\rm Au}}\\right.\n\\nonumber \\\\[1mm]\n&&~~~~~~~\n+(C_{\\rm Cr}-C_{\\rm Al})\ne^{-2m_au(\\Delta_s^{\\!\\rm Au}+\\Delta_s^{\\!\\rm Al})}\n\\nonumber \\\\[1mm]\n&&~~~~~~~\n\\left.\n-C_{\\rm Cr}\ne^{-2m_au(\\Delta_s^{\\!\\rm Au}+\\Delta_s^{\\!\\rm Al}\n+\\Delta_s^{\\!\\rm Cr})}\\right].\n\\label{eq25}\n\\end{eqnarray}\n\\noindent\nThe constraints on the axion-nucleon coupling constants\n$g_{an}=g_{ap}$ were found from the inequality\n\\begin{equation}\n|F_{\\rm add}^{\\rm nor}(a)|\\leq\\Delta F_C^{\\rm nor}(a).\n\\label{eq26}\n\\end{equation}\n\\noindent\nThe strongest constraints follow from (\\ref{eq26}) at the\nshortest separation distance $a_1=127\\,$nm where the total\nexperimental error determined at a 67\\% confidence level is\nequal to $\\Delta F_C^{\\rm nor}(a_1)=0.94\\,$pN \\cite{49,50}.\n\nIn Fig.~4 the obtained constraints under a condition\n$g_{an}=g_{ap}$ are shown by the dashed line. It can be seen\nthat for $m_a<5.3\\,$eV they are stronger than those following\nfrom measurements of the lateral Casimir force (the solid line),\nbut become weaker than the latter for larger axion masses.\n\n\\section{Comparison between different laboratory constraints}\n\nIt is interesting to compare all discussed above constraints,\nobtained from measurements of the Casimir interaction,\nbetween themselves and with other laboratory constraints on\naxion to nucleon coupling constants. Such a comparison is\nperformed in Fig.~5 over the wide range of axion masses\nfrom $10^{-10}$ to 20\\,eV. The constraints on $g_{an}$\nobtained \\cite{51} by means of a magnetometer using\nspin-polarized K and ${}^3$He atoms are shown by the solid\nline 1. 
These constraints are applicable in the region of\n$m_a$ from $10^{-10}$ to $6\\times 10^{-6}\\,$eV.\nThe solid line 2 indicates the constraints obtained \\cite{52}\nfrom the recent Cavendish-type experiment \\cite{53} in the\nregion from $m_a=10^{-6}$ to $6\\times 10^{-2}\\,$eV.\nThe weaker constraints found \\cite{25} from the older\nCavendish-type experiments \\cite{54,55} and from the\nE\\\"{o}tvos-type experiment \\cite{56}, respectively, are\nshown by the dashed lines 3 and 4 (these and the following\nconstraints are obtained under a condition $g_{an}=g_{ap}$).\nThese constraints cover the region of $m_a$ from $10^{-8}\\,$eV\nto $4\\times 10^{-5}\\,$eV (line 3) and to $10^{-5}\\,$eV (line 4).\nThe lines 5--8 are obtained \\cite{12,13,14,15} from measurements\nof the Casimir interaction. They are discussed in this paper.\nThe line 5 reproduces the line in Fig.~3 obtained for $m_a$\nfrom $10^{-3}$ to 15\\,eV from measurements of the Casimir\npressure (see Section 5). The dashed lines 6 and 7 reproduce\nthe solid line in Fig.~2 and the line in Fig.~1 found in the\nregion from $3\\times 10^{-5}$ to 1\\,eV from measurements of\nthe gradient of the Casimir force between Au-Au surfaces and\nin the region from $10^{-4}$ to 0.3\\,eV from measurements of\nthe Casimir-Polder force, respectively (see Sections 4 and 3).\nFinally, the line 8 reproduces the solid line in Fig.~4\nfound in the region of $m_a$ from 1 to 20\\,eV.\nIt follows from measurements of the lateral Casimir force\nbetween corrugated surfaces discussed in Section 6\n(measurements of the normal Casimir force between sinusoidally\ncorrugated surfaces lead to weaker constraints than those\nshown in Fig.~5).\n\nThe strength of almost all laboratory constraints shown in\nFig~5 (with exception of that shown by line 1) monotonically\ndecreases with increase of the axion mass $m_a$.\nIf one introduces the Compton wavelength of an axion\n$\\lambda_a=1\/m_a$, it is correct to say that the strength\nof almost all constraints (and all of those\nfollowing from measurements\nof the gravitational and Casimir interactions) decreases with\ndecreasing $\\lambda_a$. The same is true for the\nYukawa-type corrections to Newton's law of gravitation\n(\\ref{eq4}) whose strength decreases with decreasing\ninteraction range $\\lambda$ (see the next section).\nThis property likens the interaction potentials (\\ref{eq3})\nand (\\ref{eq4}) and specifies the interaction range where\nthe most strong constraints on respective hypothetical\nforces can be obtained from experiments on measuring the\nCasimir interaction.\n\nThe vertical lines in Fig.~5 indicate the region from\n$m_a=10^{-5}$ to $10^{-2}\\,$eV, which is often called an\naxion window \\cite{57}. 
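To put these masses into perspective (the conversion uses $\hbar c\simeq 197.3\,$eV$\,$nm), the borders of the window correspond to axion Compton wavelengths
\[
\lambda_a=\frac{1}{m_a}\simeq\frac{1.97\times 10^{-7}\,\mbox{m}}{m_a\,[\mbox{eV}]},
\qquad
\lambda_a(10^{-2}\,\mbox{eV})\approx 20\,\mu\mbox{m},
\qquad
\lambda_a(10^{-5}\,\mbox{eV})\approx 2\,\mbox{cm},
\]
i.e., to length scales between roughly twenty micrometers and a few centimeters.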
As can be seen in Fig.~5,\nexperiments measuring the Casimir interaction lead to\nstrengthening of the laboratory constraints on axion to\nnucleon coupling constants near the upper border of the\naxion window and also for larger axion masses.\n\n\\section{Constraints on corrections to Newton's law of gravitation}\n\nThe constraints on corrections to the Newton law of gravitation\ndescribed by the potentials (\\ref{eq4}) and (\\ref{eq5}) can be\nobtained from the gravitational experiments of E\\\"{o}tvos- and\nCavendish-type and from measurements of the Casimir interaction.\nAs explained in Section 1, measurements of the Casimir force\nhave long been used for constraining hypothetical interactions\nof both Yukawa and power type.\nBecause of this, here we only briefly present the obtained\nresults and indicate regions where measurements of the Casimir\nforce lead to the most strong constraints, as compared to\ngravitational experiments.\n\nThe Yukawa-type interaction potential between the test bodies\nused in experiments on measuring the Casimir force is obtained\nby the integration of (\\ref{eq4}) over the volumes of bodies.\nIn so doing, at submicrometer separations the Newton\ngravitational force turns out to be negligibly small, as\ncompared to the error of force measurements. Similar to the\ncase of axion considered above, the constraints on the constants\nof Yukawa-type interaction $\\alpha$ and $\\lambda$ are obtained\nfrom a condition that this interaction was not experimentally\nobserved in the limits of the experimental error in measurements\nof the Casimir interaction.\n\nIn Fig.~6 we present the strongest constraints on the\nYukawa interaction constant $\\alpha$ in the micrometer and\nsubmicrometer interaction range $\\lambda$ obtained\nfrom measurements of the Casimir interaction.\nThe line 1 in Fig.~6 was obtained \\cite{18} from measurements\nof the lateral Casimir force between sinusoidally corrugated\nsurfaces of a sphere and a plate \\cite{47,48} (see Section 6).\nIt presents the strongest constraints on the Yukawa-type\ncorrections to Newton's law of gravitation within the\ninteraction range from $\\lambda=1.6$ to 11.6\\,nm.\nThe line 2 shows constraints found \\cite{21} from measuring\nthe normal Casimir force between sinusoidally corrugated\nsurfaces at the angle between corrugations equal to\n2.4${}^{\\circ}$ \\cite{49,50} (see Section 6). These constraints\nare the strongest ones in the interaction range from 11.6 to\n17.2\\,nm.\nThe constraints obtained from measurements of the Casimir\npressure by means of a micromachined torsional oscillator\n(see Section 5) are\nindicated by the line 3. They are the strongest ones for\n$17.2\\,\\mbox{nm}<\\lambda<89\\,$nm.\nAt larger $\\lambda$ the most strong constraints shown by the\nline 4 follow from the so-called Casimir-less experiment\n\\cite{58}, where the Casimir force was nullified by using the\ndifference force measurement scheme. These constraints are\nthe strongest ones up to $\\lambda=891\\,$nm.\nThe constraints of the line 5 are found \\cite{59} from\nmeasurements of the Casimir force between Au-coated surfaces\nof a plate and a spherical lens of large radius. They are\nthe strongest ones up to $\\lambda=3.16\\,\\mu$m.\nFor larger $\\lambda$ the strongest constraints on the\nYukawa-type corrections to Newton's gravitational law\nfollow from the Cavendish-type experiments. The first\nconstraints of such kind are indicated by the line 6\n\\cite{60,61}. 
Thus, measurements of the Casimir interaction\nlead to the most strong constraints on non-Newtonian\ngravity over a wide interaction range from 1.6\\,nm to a\nfew micrometers. As can be seen in Fig.~6, the strength of\nall constraints decreases with decreasing $\\lambda$, i.e.,\nwith increasing mass of a hypothetical particle which\ninitiates the additional interaction of Yukawa-type.\nThis is similar to the case of an axion considered in\nSections 3--6.\n\nConstraints on the power-type corrections to Newton's law\nof gravitation (\\ref{eq5}) follow from the gravitational\nexperiments of E\\\"{o}tvos and Cavendish type \\cite{8} and\nfrom measurements of the Casimir force \\cite{17,30}.\nAt the present time the most strong constraints follow from\nthe E\\\"{o}tvos-type experiments\n($|\\Lambda_1|\\leq 1\\times 10^{-9}$ \\cite{62} and\n$|\\Lambda_2|\\leq 4\\times 10^{8}$ \\cite{56}) and from\nthe Cavendish-type experiments\n($|\\Lambda_3|\\leq 1.3\\times 10^{20}$ \\cite{52},\n$|\\Lambda_4|\\leq 4.9\\times 10^{31}$ \\cite{52}, and\n$|\\Lambda_5|\\leq 1.5\\times 10^{43}$ \\cite{52}).\nNote that \\cite{52} uses another parametrization for the\npower-type corrections to Newtonian gravitation.\n\n\\section{Conclusions and discussion}\nIn the foregoing, we have considered the constraints on axion\nto nucleon couplings following from laboratory experiments on\nmeasuring the Casimir interaction. The obtained constraints\nare quite competitive in the region of axion masses from\n$10^{-3}$ to 20\\,eV. The most strong of them follow from a\ndynamic determination of the Casimir pressure between two\nparallel plates and from measurement of the lateral Casimir\nforce between sinusoidally corrugated surfaces.\nAll these constraints were derived by considering the process\nof two-axion exchange between two nucleons. This process is of\nthe lowest order contributing to the force acting between\nunpolarized test bodies. The obtained constraints were\ncompared with those following from other laboratory experiments.\n\nWe have also compared the constraints on an axion with previously\nobtained constraints on corrections to the Newton law of\ngravitation of Yukawa and power type. The most strong constraints\nof this kind following from measurements of the Casimir\ninteraction are collected.\nIn the interaction range below a few micrometers they are\nstronger than the constraints on Yukawa-type corrections to\nNewton's law following from the gravitational experiments of\nE\\\"{o}tvos and Cavendish type.\n\nIn future it would be interesting to perform measurements of\nthe Casimir interaction between two polarized test bodies.\nThis would lead to an additional force due to exchange of one\naxion between protons and neutrons and, as a consequence, to\nmuch stronger constraints on the axion to nucleon coupling\nconstants.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzlmub b/data_all_eng_slimpj/shuffled/split2/finalzzlmub new file mode 100644 index 0000000000000000000000000000000000000000..1a0cb80cbacbdc94e6287de3bb7351147226d2e0 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzlmub @@ -0,0 +1,5 @@ +{"text":"\\section{introduction}\nAlignment of surfaces plays a role in a wide range of scientific\ndisciplines. It is a standard problem in comparing different scans\nof manufactured objects; various algorithms have been proposed for\nthis purpose in the computer graphics literature. 
It is often also\na crucial step in a variety of problems in medicine and biology; in\nthese cases the surfaces tend to be more complex, and the alignment\nproblem may be harder. For instance, neuroscientists studying brain\nfunction through functional Magnetic Resonance Imaging (fMRI)\ntypically observe several people performing identical tasks,\nobtaining readings for the corresponding activity in the brain\ncortex of each subject. In a first approximation, the cortex can be\nviewed as a highly convoluted 2-dimensional surface. Because\ndifferent cortices are folded in very different ways, a synthesis of\nthe observations from different subjects must be based on\nappropriate mappings between pairs of brain cortex surfaces, which\nreduces to a family of surface alignment problems\n\\cite{Fischl99,Haxby09}. In another example, paleontologists\nstudying molar teeth of mammals rely on detailed comparisons of the\ngeometrical features of the tooth surfaces to distinguish species or\nto determine similarities or differences in diet \\cite{Jukka07}.\n\nMathematically, the problem of surface alignment can be described as\nfollows: given two 2-surfaces $\\mathcal{M}$ and $\\mathcal{N}$, find a mapping $f:\\mathcal{M}\n\\rightarrow \\mathcal{N}$ that preserves, as best possible, ``important\nproperties'' of the surfaces. The nature of the ``important\nproperties'' depends on the problem at hand. In this paper, we\nconcentrate on preserving the geometry, i.e., we would like the map\n$f$ to preserve intrinsic distances, to the extent possible. In\nterms of the examples listed above, this is the criterion\ntraditionally selected in the computer graphics literature; it also\ncorresponds to the point of view of the paleontologists studying\ntooth surfaces. To align cortical surfaces, one typically uses the\nTalairach method \\cite{Lancaster00} (which relies on geometrically\ndefined landmarks and is thus geometric in nature as well), although\nalignment based on functional correspondences has been proposed more\nrecently \\cite{Haxby09}.\n\nIn this paper we propose a procedure to ``geometrically'' align\nsurfaces, based on uniformization theory and optimal mass\ntransportation. This approach is related to the computer graphics\nconstructions in \\cite{Lipman:2009:MVF}, which rely on the\nrepresentation of isometries between topologically equivalent\nsimply-connected surfaces by M\\\"{o}bius transformations between\ntheir uniformization spaces, and which exploit that 1) the\nM\\\"{o}bius group has small dimensionality (e.g. 3 for disk-type\nsurfaces and 6 for sphere-type) and 2) changing the metric in one\npiece of a surface has little influence on the uniformization of\ndistant parts. These two observations lead, in\n\\cite{Lipman:2009:MVF}, to fast and particularly effective\nalgorithms to identify near-isometries between differently deformed\nversions of a surface. In our present context, these same\nobservations lead to a simple algorithm for surface alignment,\nreducing it to a linear programming problem.\n\nWe shall restrict ourselves to (sufficiently smooth) disk-type\nsurfaces; we map them to metric densities defined on the hyperbolic\ndisk, their canonical uniformization space. (Apart from simplifying\nthe description of the surface, this also removes any effect of\nglobal translations and rotations on the description of each\nindividual surface.) 
The alignment problem can then be studied in\nthe framework of Kantorovich mass-transportation\n\\cite{Kantorovich1942} between these metric densities, as follows.\nMass-transportation seeks to minimize the ``average distance'' over\nwhich mass needs to be ``moved'' (in the most efficient such moving\nprocedure) to transform one mass density $\\mu$ into another, $\\nu$.\nIn our case the uniformizing metric density (or conformal factor)\ncorresponding to an initial surface is not unique, but is defined\nonly up to a M\\\"{o}bius transformation. Because a na\\\"{\\i}ve\napplication of mass-transportation on the hyperbolic disk would not\npossess the requisite invariance under M\\\"{o}bius transformations,\nwe generalize the mass-transportation framework, and replace the\nmetric $d(x,y)$ traditionally used in defining the ``average\ndisplacement distance'' by a metric that depends on $\\mu$ and $\\nu$,\nmeasuring the dissimilarity between the two metric densities on\nneighborhoods of $x$ and $y$. Introducing neighborhoods also makes\nthe definition less sensitive to noise in practical applications.\nThe optimal way of transporting mass in this generalized framework,\nin which the orientation in space of the original surfaces is\n``factored away'', automatically defines a corresponding optimal way\nof aligning the surfaces.\n\nOur approach also allows us to define a new distance between\nsurfaces. The average distance over which mass needs transporting\n(to transform one metric density into the other) quantifies the\nextent to which the two surfaces differ; we prove that it defines a\ndistance metric between surfaces.\n\nOther distances between surfaces have been used recently for several\napplications \\cite{memoli07}. A prominent mathematical approach to\ndefine distances between surfaces considers the surfaces as special\ncases of \\emph{metric spaces}, and uses then the Gromov-Hausdorff\n(GH) distance between metric spaces \\cite{Gromov06}. The GH distance\nbetween metric spaces $X$ and $Y$ is defined through examining all\nthe isometric embedding of $X$ and $Y$ into (other) metric spaces;\nalthough this distance possesses many attractive mathematical\nproperties, it is inherently hard computationally\n\\cite{memoli05,BBK06}. For instance, computing the GH distance is\nequivalent to a non-convex quadratic programming problem; solving\nthis directly for correspondences is equivalent to integer quadratic\nassignment, and is thus NP-hard \\cite{Cela98}. In addition, the\nnon-convexity implies that the solution found in practice may be a\nlocal instead of a global minimum, and is therefore not guaranteed\nto give the correct answer for the GH distance. The distance metric\nbetween surfaces that we define in this paper does not have these\nshortcomings: because the computation of the distance between\nsurfaces in our approach can be recast as a linear program, it can\nbe implemented using efficient polynomial algorithms that are\nmoreover guaranteed to converge to the correct solution.\n\nIt should be noted that in \\cite{memoli07}, Memoli generalizes the\nGH distance of \\cite{memoli05} by introducing a quadratic mass\ntransportation scheme to be applied to metric spaces already\nequipped with a measure (mm spaces); he notes that the computation\nof this Gromov-Wasserstein distance for mm spaces is somewhat easier\nand more stable to implement than the original GH distance. 
In our\napproach we do not need to equip the surfaces we compare with a\nmeasure: after uniformization reduces the problem to comparing two\ndisks, we naturally \"inherit\" two corresponding conformal factors\nthat we interpret as measure densities, for which we then apply an\napproach similar to the one proposed in \\cite{memoli07}. Another\ncrucial aspect in which our work differs from \\cite{memoli07} is\nthat, in contrast to the (continuous) quadratic programming method\nproposed in \\cite{memoli07} to compute the Gromov-Wasserstein\ndistance between mm spaces, our conformal approach leads to a convex\n(even linear) problem, solvable via a linear programming method.\n\n\nIt is worth mentioning that optimal mass transportation has been\nused as well, in the engineering literature to define interesting\nmetrics between images; in this context metric is often called the\nWasserstein distance. The seminal work for this image analysis\napproach is the paper by Rubner et al.~\\cite{Rubner2000-TEM}, in\nwhich images are viewed as discrete measures, and the distance is\ncalled appropriately the ``Earth Mover's Distance''.\n\nAnother related method is presented in the papers of Zeng et al.\n\\cite{Gu2008_a,Gu2008_b}, which also use the uniformization space to\nmatch surfaces. Our work differs from that of Zeng et al. in that\nthey use prescribed feature points (defined either by the user or by\nextra texture information) to calculate an interpolating harmonic\nmap between the uniformization spaces, and then define the final\ncorrespondence as a composition of the uniformization maps and this\nharmonic interpolant. This procedure is highly dependent on the\nprescribed feature points, provided as extra data or obtained from\nnon-geometric information. In contrast, our work does not use any\nprescribed feature points, or external data, and makes use of only\nthe geometry of the surface; in particular we make use of the\nconformal structure itself to define deviation from (local)\nisometry.\n\nOur paper is organized as follows: in Section \\ref{s:prelim} we\nbriefly recall some facts about uniformization and optimal mass\ntransportation that we shall use, at the same time introducing our\nnotation. Section \\ref{s:optimal_vol_trans_for_surfaces} contains\nthe main results of this paper, constructing the distance metric\nbetween disk-type surfaces, in several steps. Section\n\\ref{s:the_discrete_case_implementation} discusses various issues\nthat concern the numerical implementation of the framework we\npropose; Section \\ref{s:examples} illustrates our results with a few\nexamples.\n\n\n\\section{Background and Notations}\n\\label{s:prelim}\n\n\nAs described in the introduction, our framework makes use of two\nmathematical theories: uniformization theory, to represent the\nsurfaces as measures defined on a canonical domain, and optimal mass\ntransportation, to align the measures. In this section we recall\nsome of their basic properties, and we introduce our notations.\n\n\\subsection{Uniformization}\n\nBy the celebrated uniformization theory for Riemann surfaces (see\nfor example \\cite{Springer57,Farkas92}), any simply-connected\nRiemann surface is conformally equivalent to one of three canonical\ndomains: the sphere, the complex plane, or the unit disk. Since\nevery 2-manifold surface $\\mathcal{M}$ equipped with a smooth Riemannian\nmetric $g$ has an induced conformal structure and is thus a Riemann\nsurface, uniformization applies to such surfaces. 
Therefore, every\nsimply- connected surface with a Riemannian metric can be mapped\nconformally to one of the three canonical domains listed above. We\nshall consider surfaces $\\mathcal{M}$ that are topologically equivalent to\ndisks and that come equipped with a Riemannian metric tensor $g$\n(possibly inherited from the standard 3D metric if the surface is\nembedded in $\\mathbb{R}^3$). For each such $\\mathcal{M}$ there exists a conformal\nmap $\\phi:\\mathcal{M} \\rightarrow \\D$, where $\\D =\\{z \\ | \\ |z|<1\\}$ is the\nopen unit disk. The map $\\phi$ pushes $g$ to a metric on $\\D$;\ndenoting the coordinates in $\\D$ by $z=x^1+\\bfi x^2$, we can write\nthis metric as\n$$\n\\wt{g} = \\phi_* g = \\widetilde{\\mu}(z)\\, \\delta_{ij}\\, dx^i \\otimes\ndx^j,\n$$\nwhere $\\widetilde{\\mu}(z)>0$, Einstein summation convention is used,\nand the subscript $*$ denotes the ``push-forward'' action. The\nfunction $\\widetilde{\\mu}$ can also be viewed as the \\emph{density\nfunction} of the measure $\\Vol_\\mathcal{M}$ induced by the Riemann volume\nelement: indeed, for (measurable) $A \\subset \\mathcal{M}$,\n\\begin{equation}\\label{e:volume_element}\n \\Vol_\\mathcal{M}(A) = \\int_{\\phi(A)} \\widetilde{\\mu}(z) \\, dx^1\\wedge dx^2.\n\\end{equation}\n\nIt will be convenient to use the hyperbolic metric on the unit disk\n$(1-|z|^2)^{-2}\\delta_{ij} dx^i \\otimes dx^j$ as a reference metric,\nrather than the standard Euclidean $\\delta_{ij} dx^i \\otimes dx^j$;\nnote that they are conformally equivalent (with conformal factor\n$(1-|z|^2)^{-2}$). Instead of the density $\\widetilde{\\mu}(z)$, we\nshall therefore use the {\\em hyperbolic density function}\n\\begin{equation}\\label{e:relation_hyperbolic_euclidean_density}\n\\mu^H(z):=(1-|z|^2)^{2}\\,\\widetilde{\\mu}(z)\\,,\n\\end{equation}\nwhere the superscript $H$ stands for hyperbolic. We shall often drop\nthis superscript: unless otherwise stated $\\mu=\\mu^H$, and\n$\\nu=\\nu^H$ in what follows. The density function $\\mu=\\mu^H$\nsatisfies\n$$\\Vol_\\mathcal{M}(A) = \\int_{\\phi(A)} \\mu(z)\\, d\\vol_H(z)\\,,$$\nwhere $d\\vol_H(z)=(1-|z|^2)^{-2}\\, dx^1\\wedge dx^2$.\n\nThe conformal mappings of $\\D$ to itself are the disk-preserving\nM\\\"{o}bius transformations $m \\in \\Md$, a family with three real\nparameters, defined by\n\\begin{equation}\\label{e:disk_mobius}\n m(z) = e^{\\bfi \\theta}\\frac{z-a}{1-\\bar{a}z}, \\ a\\in \\D, \\ \\theta \\in [0,2\\pi).\n\\end{equation}\nSince these M\\\"{o}bius transformations satisfy\n\\begin{equation}\\label{e:disk_mobius_is_isometry_of_hyperbolic_geom}\n (1-|m(z)|^2)^{-2}|m'(z)|^2 = (1-|z|^2)^{-2} \\,,\n\\end{equation}\nwhere $m'$ stands for the derivatives of $m$, the pull-back of $\\mu$\nunder a mapping $m\\in \\Md$ takes on a particularly simple\nexpression. 
Setting $w=m(z)$, with $w=y^1+\\bfi y^2$, and\n$\\widetilde{g}(w)=\\widetilde{\\mu}(w)\\delta_{ij}dy^i\\otimes dy^j =\n\\mu(w) (1-|w|^2)^{-2}\\delta_{ij}dy^i\\otimes dy^j$, the definition\n\\[\n(m^*\\widetilde{g})(z)_{kl}\\,dx^{k}\\otimes dx^{\\ell} :=\n\\mu(w)\\,(1-|w|^2)^{-2}\\,\\delta_{ij} \\, dy^{i}\\otimes dy^{j}\n\\]\nimplies\n\\begin{align*}\n(m^*\\widetilde{g})_{k\\ell}(z)\\,dx^{k}\\otimes dx^{\\ell} &=\\mu(m(z))(1-|m(z)|^2)^{-2}\\,\\delta_{ij}\\,\\frac{\\partial y^i}{\\partial x^k}\\,\\frac{\\partial y^j}{\\partial x^\\ell} \\,dx^{k}\\otimes dx^{\\ell}\\\\\n&= \\mu(m(z))\\,(1-|m(z)|^2)^{-2}\\,|m'(z)|^2\\,\\delta_{k\\ell} \\,dx^{k}\\otimes dx^{\\ell}\\\\\n&=\\mu(m(z))\\,(1-|z|^2)^{-2}\\,\\delta_{k\\ell}\\,dx^{k}\\otimes dx^{\\ell}.\\\\\n\\end{align*}\nIn other words, $(m^*\\widetilde{g})(z)_{kl}\\,dx^{k}\\otimes\ndx^{\\ell}$ takes on the simple form\n$m^*\\mu(z)\\,(1-|z|^2)^{-2}\\,\\delta_{kl}\\,dx^{k}\\otimes dx^{\\ell}$,\nwith\n\\begin{equation}\\label{e:pullback_of_metric_density_mu_by_mobius}\n m^*\\mu(z) = \\mu(m(z)).\n\\end{equation}\nLikewise, the push-forward, under a disk M\\\"{o}bius transform\n$m(z)=w$, of the diagonal Riemannian metric defined by the density\nfunction $\\mu=\\mu^H$, is again a diagonal metric, with (hyperbolic)\ndensity function $m_{*}\\mu (w)=\\left(m_{*}\\mu \\right)^H(w)$ given by\n\\begin{equation}\\label{e:push_forward_of_metric_density}\n m_* \\mu(w) = \\mu(m^{-1}(w)).\n\\end{equation}\n\nIt follows that checking whether or not two surfaces $\\mathcal{M}$ and $\\mathcal{N}$\nare isometric, or searching for (near-)\\ isometries between $\\mathcal{M}$ and\n$\\mathcal{N}$, is greatly simplified by considering the conformal mappings\nfrom $\\mathcal{M}$, $\\mathcal{N}$ to $\\D$: once the (hyperbolic) density functions\n$\\mu$ and $\\nu$ are known, it suffices to identify $m \\in \\Md$ such\nthat $\\nu(m(z))$ equals $\\mu(z)$ (or ``nearly'' equals, in a sense\nto be made precise). This was exploited in \\cite{Lipman:2009:MVF} to\nconstruct fast algorithms to find corresponding points between two\ngiven surfaces.\n\n\\subsection{Optimal mass transportation}\n\nOptimal mass transportation was introduced by G. Monge\n\\cite{Monge1781}, and L. Kantorovich \\cite{Kantorovich1942}. It\nconcerns the transformation of one mass distribution into another\nwhile minimizing a cost function that can be viewed as the amount of\nwork required for the task. In the Kantorovich formulation, to which\nwe shall stick in this paper, one considers two measure spaces\n$X,Y$, a probability measure on each, $\\mu \\in P(X)$, $\\nu \\in\nP(Y)$ (where $P(X),P(Y)$ are the respective probability measure\nspaces on $X$ and $Y$), and the space $\\Pi(\\mu,\\nu)$ of probability\nmeasures on $X \\times Y$ with marginals $ \\mu$ and $\\nu$ (resp.),\nthat is, for $A\\subset X$, $B\\subset Y$, $\\pi(A\\times Y) = \\mu(A)$\nand $\\pi(X \\times B) = \\nu(B)$. The \\emph{optimal} mass\ntransportation is the element of $\\Pi(\\mu,\\nu)$ that minimizes\n$\\int_{X \\times Y}d(x,y)d\\pi(x,y)$, where $d(x,y)$ is a cost\nfunction. (In general, one should consider an infimum rather than a\nminimum; in our case, $X$ and $Y$ are compact, $d(\\cdot,\\cdot)$ is\ncontinuous, and the infimum is achieved.) 
The corresponding minimum,\n\\begin{equation}\\label{e:basic_Kantorovich_transporation}\n T^R_d(\\mu,\\nu) = \\mathop{\\inf}_{\\pi \\in \\Pi(\\mu,\\nu)}\\int_{X \\times Y}d(x,y)d\\pi(x,y),\n\\end{equation}\nis the optimal mass transportation distance between $\\mu$ and $\\nu$,\nwith respect to the cost function $d(x,y)$.\n\nIntuitively, one can interpret this as follows: imagine being\nconfronted with a pile of sand on the one hand ($\\mu$), and a hole\nin the ground on the other hand ($-\\nu$), and assume that the volume\nof the sand pile equals exactly the volume of the hole (suitably\nnormalized, $\\mu,\\nu$ are probability measures). You wish to fill\nthe hole with the sand from the pile ($\\pi \\in \\Pi(\\mu,\\nu)$), in a\nway that minimizes the amount of work (represented by $\\int\nd(x,y)d\\pi(x,y)$, where $d(\\cdot,\\cdot)$ can be thought of as a\ndistance function). In the engineering literature, the distance\n$T^R_d(\\mu,\\nu)$ is often called the ``earth mover's distance''\n\\cite{Rubner2000-TEM}, a name that echoes this intuition.\n\nIn what follows, we shall apply this framework to the density\nfunctions $\\mu$ and $\\nu$ on the hyperbolic disk $\\D$ obtained by\nconformal mappings from two surfaces $\\mathcal{M}$, $\\mathcal{N}$, as described in the\nprevious subsection.\n\nThe main obstacle to applying the Kantorovich transportation\nframework directly is that the density $\\mu$, characterizing the\nRiemannian metric on $\\D$ obtained by pushing forward the metric on\n$\\mathcal{M}$ via the uniformizing map $\\phi:\\mathcal{M} \\rightarrow \\D$, is not\nuniquely defined: another uniformizing map $\\phi':\\mathcal{M} \\rightarrow \\D$\nmay well produce a different $\\mu'$. Because the two representations\nare necessarily isometric ($\\phi^{-1} \\circ \\phi'$ maps $\\mathcal{M}$\nisometrically to itself), we must have $\\mu'(m(z))=\\mu(z)$ for some\n$m \\in \\Md$. (In fact, $m=\\phi' \\circ \\phi^{-1}$.) In a sense, the\nrepresentation of (disk-type) surfaces $\\mathcal{M}$ as measures over $\\D$\nshould be considered ``modulo'' the disk M\\\"{o}bius transformations.\n\nWe thus need to address how to adapt the optimal transportation\nframework to factor out this M\\\"{o}bius transformation ambiguity.\nThis is done by designing a special distance (or cost) functional\n$d^R_{\\mu,\\nu}(z,w)$ that {\\em depends} on the conformal densities\n$\\mu$ and $\\nu$ representing the two surfaces. (A fairly simple\nargument shows that a cost function that does not depend on $\\mu$\nand $\\nu$ allows only trivial answers, such as $d(z,w)=0$ for all\n$z,w$.) As we shall see in the next section,\nthis cost function will have an intuitive explanation:\n$d^R_{\\mu,\\nu}(z,w)$ will measure how well an $R$-sized neighborhood\nof $z$ with density $\\mu$ can be matched isometrically to an\n$R$-sized neighborhood of $w$ with density $\\nu$ by means of a disk\nM\\\"{o}bius transformation.\n\n\n\\section{Optimal volume transportation for surfaces}\n\\label{s:optimal_vol_trans_for_surfaces} We want to measure\ndistances between surfaces by using the Kantorovich transportation\nframework to measure the transportation between the metric densities\non $\\D$ obtained by uniformization applied to the surfaces. The main\nobstacle is that these metric densities are not uniquely defined;\nthey are defined up to a M\\\"{o}bius transformation. 
In particular,\nif two densities $\\mu$ and $\\nu$ are related by $\\nu=m_*\\mu$ (i.e.\n$\\mu(z)=\\nu(m(z))$), where $m \\in \\Md$, then we want our putative\ndistance between $\\mu$ and $\\nu$ to be zero, since they describe\nisometric surfaces, and could have been obtained by different\nuniformization maps of the same surface. A standard approach to\nobtain quantities that are invariant under the operation of some\ngroup (in our case, the disk M\\\"{o}bius transformations) is by\nminimizing over the possible group operations. For instance, we\ncould set\n\\[\n\\mbox{Distance}(\\mu,\\nu)=\\inf_{m \\in \\Md} \\left( \\inf_{\\pi \\in\n\\Pi(m_*\\mu, \\nu)}\\,\\int_{\\D \\times \\D} d(z,w)\\,d\\pi(z,w)\\,\\right)\\,,\n\\]\nwhere $\\Pi(\\mu, \\nu)$ is the set of probability measures on $\\D\n\\times \\D$ with marginals $\\mu \\,\\vol_H$ and $\\nu \\,\\vol_H$. In\norder for this to be computationally feasible, we would want the\nminimum to be achieved in some $m$, which would depend on $\\mu$ and\n$\\nu$ of course; let's denote this special minimizing $m \\in \\Md$ by\n$m_{\\mu,\\nu}$. This would mean\n\\begin{align}\n\\mbox{Distance}(\\mu,\\nu)&= \\inf_{\\pi \\in \\Pi([m_{\\mu,\\nu}]_*\\mu,\n\\nu)}\\,\\int_{\\D \\times \\D}\nd(z,w)\\,d\\pi(z,w)\\nonumber\\\\\n&=\\inf_{\\pi \\in \\Pi(\\mu, \\nu)}\\,\\int_{\\D \\times \\D}\nd(m_{\\mu,\\nu}(z),w)\\,d\\pi(z,w)\\,.\\label{id_Dist}\n\\end{align}\nIf $\\nu$ were itself already equal to $m'_*\\mu$, for some $m' \\in\n\\Md$, then we would expect the minimizing M\\\"{o}bius transformation\nto be $m_{\\mu,\\nu}=m'$; for $\\pi$ supported on the diagonal\n$\\verb\"d\"=\\{(z,z)\\,;\\,z \\in \\D\\}\\subset \\D \\times \\D$, defined by\n$\\pi(A)=\\int_{A_2} \\nu(w)\\, d\\vol_H(w)$, with $A_2= \\{w; (w,w) \\in\nA\\}$, one would then indeed have $\\int_{\\D \\times\n\\D}d(m_{\\mu,\\nu}(z),w)\\,d\\pi(z,w)=0$, leading to\n$\\mbox{Distance}(\\mu,m'_*\\mu)=0$. From (\\ref{id_Dist}) one sees that\nthis amounts to using the same formula as for the standard\nKantorovich approach with just one change: {\\em the cost function\ndepends on} $\\mu$ and $\\nu$.\n\nWe shall use a variant on this construction, retaining the principle\nof using cost functions $d(\\cdot,\\cdot)$ in the integrand that\ndepend on $\\mu$ and $\\nu$, without picking them necessarily of the\nform $d(m_{\\mu,\\nu}(z),w)$. In addition to introducing such a\ndependence, we also wish to incorporate some robustness into the\nevaluation of the distance between (or dissimilarity of) $\\mu$ and\n$\\nu$. We shall do this by using a cost function\n$d^R_{\\mu,\\nu}(z,w)$ that depends on a comparison of the behavior\n$\\mu$ and $\\nu$ on {\\em neighborhoods} of $z$ and $w$, mapped by $m$\nranging over $\\Md$. The next subsection shows precisely how this is\ndone.\n\n\\subsection{Construction of $d^R_{\\mu,\\nu}(z,w)$}\n\nWe construct $d^R_{\\mu,\\nu}(z,w)$ so that it indicates the extent to\nwhich a neighborhood of the point $z$ in $(\\D,\\mu)$, the (conformal\nrepresentation of the) first surface, is isometric with a\nneighborhood of the point $w$ in $(\\D,\\nu)$, the (conformal\nrepresentation of the) second surface. We will need to define two\ningredients for this: the neighborhoods we will use, and how we\nshall characterize the (dis)similarity of two neighborhoods,\nequipped with different metrics.\n\nWe start with the neighborhoods.\n\nFor a fixed radius $R>0$, we define $\\Omega_{z_0,R}$ to be the\nhyperbolic geodesic disk of radius $R$ centered at $z_0$. 
The\nfollowing gives an easy procedure to construct these disks. If\n$z_0=0$, then the hyperbolic geodesic disks centered at $z_0=0$ are\nalso ``standard'' (i.e. Euclidean) disks centered at 0:\n$\\Omega_{0,R} = \\{z \\,;\\, |z|\\leq r_R \\}$, where\n$r_R$ is determined by $\\mbox{arctanh}(r_R)=R$, i.e. $r_R=\\tanh(R)$. The hyperbolic disks around other centers\nare images of these central disks under M\\\"{o}bius transformations\n(= hyperbolic isometries): setting\n$m(z)=(z-z_0)(1-z\\bar{z_0})^{-1}$, we have\n\\begin{equation}\\label{e:neighborhood_def}\n \\Omega_{z_0,R} = m^{-1}(\\Omega_{0,R})\\,.\n\\end{equation}\nIf $m'$, $m''$ are two maps in $\\Md$ that both map $z_0$ to 0, then\n$m'' \\circ (m')^{-1}$ simply rotates $\\Omega_{0,R}$ around its\ncenter, over some angle $\\theta$ determined by $m'$ and $m''$. From\nthis observation one easily checks that (\\ref{e:neighborhood_def})\nholds for {\\em any} $m \\in \\Md$ that maps $z_0$ to $0$. In fact, we\nhave the following more general result.\n\\begin{lem}\\label{lem:m(omega_z)=omega_w}\nFor arbitrary $z,w \\in \\D$ and any $R>0$, every disk M\\\"{o}bius\ntransformation $m\\in \\Md$ that maps $z$ to $w$ (i.e. $w=m(z)$) also\nmaps $\\Omega_{z,R}$ to $\\Omega_{w,R}$.\n\\end{lem}\n\nNext we define how to quantify the (dis)similarity of the pairs\n$\\left(\\Omega_{z_0,R}\\,,\\, \\mu\\,\\right)$ and\n$\\left(\\Omega_{w_0,R}\\,,\\, \\nu\\,\\right)$. Since (global) isometries\nare given by the elements of the disk-preserving M\\\"{o}bius group\n$\\Md$, we will test the extent to which the two patches are\nisometric by comparing $\\left(\\Omega_{w_0,R}\\,,\\, \\nu\\,\\right)$ with\nall the images of $\\left(\\Omega_{z_0,R}\\,,\\, \\mu\\,\\right)$ under\nM\\\"{o}bius transformations in $\\Md$ that take $z_0$ to $w_0$.\n\nTo carry out this comparison, we need a norm. Any metric $g_{ij}(z)\ndx^i \\otimes dx^j$ induces an inner product on the space of\n2-covariant tensors, as follows: if $\\mathbf{a}(z) = a_{ij}(z)\n\\,dx^i \\otimes dx^j$ and $\\mathbf{b}(z) = b_{ij}(z) \\,dx^i \\otimes\ndx^j$ are two 2-covariant tensors in our parameter space $\\D$, then\ntheir inner product is defined by\n\\begin{equation}\\label{e:inner_product_2-covariant_tensor}\n \\langle \\mathbf{a}(z), \\mathbf{b}(z)\\rangle =\na_{ij}(z)\\,b_{k\\ell}(z)\\,g^{ik}(z)\\,g^{j\\ell}(z)~;\n\\end{equation}\nas always, this inner product defines a norm, $\\|\\mathbf{a}\\|_z^2 =\na_{ij}(z)\\,a_{k\\ell}(z)\\,g^{ik}(z)\\,g^{j\\ell}(z)$.\n\nNow, let us apply this to the computation of the norm of the\ndifference between the local metric on one surface,\n$g_{ij}(z)=\\mu(z)(1-|z|^2)^{-2}\\delta_{ij}$, and\n$h_{ij}(w)=\\nu(w)(1-|w|^2)^{-2}\\delta_{ij}$, the pull-back metric\nfrom the other surface by a M\\\"{o}bius transformation $m$.
Using\n(\\ref{e:inner_product_2-covariant_tensor}),(\\ref{e:pullback_of_metric_density_mu_by_mobius}),\nand writing $\\mbox{\\boldmath{$\\delta$}}$ for the tensor with entries $\\delta_{ij}$, we\nhave:\n\\begin{align*}\n\\|\\mu - m^*\\nu\\|_{z}^2 & = \\|\\,\\mu(z) (1-|z|^2)^{-2}\\mbox{\\boldmath{$\\delta$}} -\n\\nu(m(z))\n(1-|z|^2)^{-2} \\mbox{\\boldmath{$\\delta$}}\\,\\|_{z}^2 \\\\\n& = \\Big(\\mu(z) -\n\\nu(m(z))\\Big)^2(1-|z|^2)^{-4}\\,\\delta_{ij}\\,\\delta_{k\\ell}\n\\,g^{ik}(z)\\,g^{j\\ell}(z)=\\left(1 -\n\\frac{\\nu(m(z))}{\\mu(z)}\\right)^2.\n\\end{align*}\n\nWe are now ready to define the distance function\n$d^R_{\\mu,\\nu}(z,w)$:\n\\begin{equation}\\label{e:d_mu,nu(z,w)_def}\n d^R_{\\mu,\\nu}(z_0,w_0) :=\n\\mathop{\\inf}_{m \\in \\Md\\,,\\,m(z_0)=w_0}\\int_{\\Omega_{z_0,R}}\n\\,|\\,\\mu(z) - (m^*\\nu)(z)\\,|\\, d\\vol_H(z),\n\\end{equation}\nwhere $d\\vol_H(z)=(1-|z|^2)^{-2} \\,dx\\wedge dy$ is the volume form\nfor the hyperbolic disk. The integral in (\\ref{e:d_mu,nu(z,w)_def})\ncan also be written in the following form, which makes its\ninvariance more readily apparent:\n\\begin{equation}\\label{e:d_invariant_form}\n \\int_{\\Omega_{z_0,R}}\\left|\\,1 - \\frac{\\nu(m(z))}{\\mu(z)}\\right| \\,d\\vol_\\mathcal{M}(z) = \\int_{\\Omega_{z_0,R}} \\|\\mu - m^*\\nu \\|_z \\, d\\vol_\\mathcal{M}(z),\n\\end{equation}\nwhere $d\\vol_\\mathcal{M}(z)=\\mu(z)(1-|z|^2)^{-2}\\,dx^1\\wedge dx^2\n=\\sqrt{|g_{ij}|}\\,dx^1\\wedge dx^2$ is the volume form of the first\nsurface $\\mathcal{M}$.\n\nThe next Lemma shows that although the integration in\n(\\ref{e:d_invariant_form}) is carried out w.r.t. the volume of the\nfirst surface, this measure of distance is nevertheless symmetric:\n\\begin{lem}\\label{lem:symmetry_of_d_integral}\nIf $m\\in \\Md$ maps $z_0$ to $w_0$, $m(z_0)=w_0$, then\n$$\n\\int_{\\Omega_{z_0,R}}\\Big|\\,\\mu(z) - m^*\\nu(z) \\,\\Big|\\, d\\vol_H(z)\n= \\int_{\\Omega_{w_0,R}}\\Big|\\,m_*\\mu(w) - \\nu(w) \\, \\Big|\\,\nd\\vol_H(w).\n$$\n\\end{lem}\n\\begin{proof}\nBy the pull-back formula\n(\\ref{e:pullback_of_metric_density_mu_by_mobius}), we have\n$$\n\\int_{\\Omega_{z_0,R}}\\Big|\\,\\mu(z) - m^*\\nu(z) \\,\\Big|\\, d\\vol_H(z)\n= \\int_{\\Omega_{z_0}}\\Big|\\,\\mu(z) - \\nu(m(z)) \\,\\Big|\\, d\\vol_H(z).\n$$\nPerforming the change of coordinates $z=m^{-1}(w)$ in the integral\non the right hand side, we obtain\n$$\n\\int_{m(\\Omega_{z_0,R})}\\, \\Big|\\,\\mu(m^{-1}(w)) - \\nu(w) \\Big|\\,\nd\\vol_H(w),\n$$\nwhere we have used that $m^{-1}$ is an isometry and therefore\npreserves the volume element $d\\vol_H(w)=(1-|w|^2)^{-2} \\,dy^1\n\\wedge dy^2$. 
By Lemma \\ref{lem:m(omega_z)=omega_w},\n$\\,m(\\Omega_{z_0,R})=\\Omega_{w_0,R}\\,$; using the push-forward\nformula (\\ref{e:push_forward_of_metric_density}) then allows to\nconclude.\n\\end{proof}\n\nNote that our point of view in defining our ``distance'' between $z$\nand $w$ differs from the classical point of view in mass\ntransportation: Traditionally, $d(z,w)$ is some sort of\n\\emph{physical distance} between the points $z$ and $w$; in our case\n$d^R_{\\mu,\\nu}(z,w)$ measures the dissimilarity of (neighborhoods\nof) $z$ and $w$.\n\nThe next Theorem lists some important properties of $d^R_{\\mu,\\nu}$;\nits proof is given in Appendix A.\n\\begin{thm}\\label{thm:properties_of_d}\nThe distance function $d^R_{\\mu,\\nu}(z,w)$ satisfies the following\nproperties\n\\begin{table}[ht]\n\\begin{tabular}{c l l}\n{\\rm (1)} & $~d^R_{m^*_1\\mu,m^*_2\\nu}(m^{-1}_1(\\z),m^{-1}_2({w_0})) = d^R_{\\mu,\\nu}(\\z,{w_0})~$ & {\\rm Invariance under (well-defined)}\\\\\n& & {\\rm M\\\"{o}bius changes of coordinates} \\\\\n&&\\\\\n{\\rm (2)} & $~d^R_{\\mu,\\nu}(\\z,{w_0}) = d^R_{\\nu,\\mu}({w_0},\\z)~$ & {\\rm Symmetry} \\\\\n&&\\\\\n{\\rm (3)} & $~d^R_{\\mu,\\nu}(\\z,{w_0}) \\geq 0~$ & {\\rm Non-negativity} \\\\\n&&\\\\\n{\\rm (4)} &\\multicolumn{2}{c}{$\\!\\!\\!\\!\\!\\!d^R_{\\mu,\\nu}(\\z,{w_0}) = 0 \\,\\Longrightarrow \\, \\Omega_{z_0,R}$ {\\rm in} $(\\D,\\mu)$ {\\rm and} $\\Omega_{w_0,R}$ {\\rm in} $(\\D,\\nu)$ {\\rm are isometric} }\\\\\n&&\\\\\n{\\rm (5)} & $~d^R_{m^*\\nu, \\nu}(m^{-1}(\\z),\\z)=0~$ & {\\rm Reflexivity} \\\\\n&&\\\\\n{\\rm (6)} & $~d^R_{\\mu_1,\\mu_3}(z_1,z_3) \\leq\nd^R_{\\mu_1,\\mu_2}(z_1,z_2) + d^R_{\\mu_2,\\mu_3}(z_2,z_3)~$ & {\\rm\nTriangle inequality}\n\\end{tabular}\n\\end{table}\n\n\\end{thm}\n\n\nIn addition, the function\n$d^R_{\\mu,\\nu}:\\D\\times\\D\\,\\rightarrow\\,\\mathbb{R}$ is continuous. To show\nthis, we first look a little more closely at the family of disk\nM\\\"{o}bius transformations that map one pre-assigned point $z_0 \\in\n\\D$ to another pre-assigned point $w_0 \\in \\D$, over which one\nminimizes to define $d_{\\mu}^R(z_0,w_0)$.\n\n\\begin{defn}\\label{def:M_D,z_0,w_0}\nFor any pair of points $z_0,\\,w_0 \\in \\D$, we denote by\n$M_{D,z_0,w_0}$ the set of M\\\"{o}bius transformations that map $z_0$\nto $w_0$.\n\\end{defn}\n\nThis family of M\\\"{o}bius transformations is completely\ncharacterized by the following lemma:\n\n\\begin{lem}\\label{lem:a_and_tet_formula_in_mobius_interpolation}\nFor any $z_0,w_0 \\in \\D$, the set $M_{D,z_0,w_0}$ constitutes a\n$1$-parameter family of disk M\\\"{o}bius transformations,\nparametrized continuously over $S^1$ (the unit circle). 
More\nprecisely, every $m \\in M_{D,z_0,w_0}$ is of the form\n\\begin{equation}\\label{e:a_of_mobius}\nm(z)= \\tau\\,\\frac{z-a}{1-\\overline{a}z}~,~~\\mbox{ {\\rm{with} }}~~ a\n= a(z_0,w_0,\\sigma) :=\\frac{z_0-w_0\n\\,\\overline{\\sigma}}{1-\\overline{z_0}\\,w_0\\,\\overline{\\sigma}}\n~~~\\mbox{{\\rm and }}~~ \\tau = \\tau(z_0,w_0,\\sigma) := \\sigma\n\\frac{1- \\overline{z_0} \\,w_0 \\,\\overline{\\sigma}}\n{1-z_0\\,\\overline{w_0}\\, \\sigma},\n\\end{equation}\nwhere $\\sigma \\in S_1:=\\{z \\in \\C\\,;\\,|z|=1\\}$ can be chosen freely.\n\\end{lem}\n\\begin{proof}\nBy (\\ref{e:disk_mobius}), the disk M\\\"{o}bius transformations that\nmap $z_0$ to $0$ all have the form\n\\[\nm_{\\psi,z_0}(z)=e^{\\bfi \\psi}\\,\\frac{z-z_0}{1-\\overline{z_0}\\,z}\\,,\n~\\mbox{ the inverse of which is }~~ m_{\\psi,z_0}^{-1}(w)=e^{-\\bfi\n\\psi}\\, \\frac{w+e^{\\bfi \\psi}z_0}{1+ e^{-\\bfi\n\\psi}\\,\\overline{z_0}w}~,\n\\]\nwhere $\\psi \\in \\mathbb{R}$ can be set arbitrarily. It follows that the\nelements of $M_{D,z_0,w_0}$ are given by the family\n$m_{\\gamma,w_0}^{-1}\\circ m_{\\psi,z_0}$, with $\\psi,\\,\\gamma \\in\n\\mathbb{R}$. Working this out, one finds that these combinations of\nM\\\"{o}bius transformations take the form (\\ref{e:a_of_mobius}), with\n$\\sigma=e^{\\bfi (\\psi-\\gamma)}$.\n\\end{proof}\nWe shall denote by $m_{z_0,w_0,\\sigma}$ the special disk M\\\"{obius}\ntransformation defined by (\\ref{e:a_of_mobius}). In view of our\ninterest in $d^R_{\\mu,\\nu}$, we also define the auxiliary function\n\\[\n\\Phi: \\D \\times \\D \\times S_1 \\longrightarrow \\C\n\\]\nby $\\Phi(z_0,w_0,\\sigma) =\n\\int_{\\Omega(z_0,R)}\\,|\\,\\mu(z)-\\nu(m_{z_0,w_0,\\sigma}(z))\\,|\\,d\\vol_H(z)$.\nThis function has the following continuity properties, inherited\nfrom $\\mu$ and $\\nu$:\n\n\\begin{lem}\\label{lem:auxiliary} $~$\\\\\n$\\bullet$ For each fixed $(z_0,w_0)$, the function $\\Phi(z_0,w_0,\\cdot)$ is continuous on $S_1$.\\\\\n$\\bullet$ For each fixed $\\sigma \\in S_1$, $\n\\Phi(\\cdot,\\cdot,\\sigma) $ is continuous on $\\D \\times \\D$.\nMoreover, the family $ \\Big(\\Phi(\\cdot,\\cdot,\\sigma)\\Big)_{\\sigma\n\\in S_1} $ is equicontinuous.\n\\end{lem}\n\\begin{proof}\nThe proof of this Lemma is given in Appendix A.\n\\end{proof}\n\nNote that since $S^1$ is compact, Lemma \\ref{lem:auxiliary} implies\nthat the infimum in the definition of $d^R_{\\mu,\\nu}$ can be\nreplaced by a minimum:\n\\[\nd^R_{\\mu,\\nu}(z_0,w_0)=\\mathop{\\min}_{m(z_0)=w_0}\\,\n\\int_{\\Omega_{z_0,R}}\\,|\\,\\mu(z)-\\nu(m(z))\\,|\\,d\\vol_H(z)~.\n\\]\n\nWe have now all the building blocks to prove\n\\begin{thm}\\label{thm:continuity_of_d^R_mu,nu}\nIf $\\mu$ and $\\nu$ are continuous from $\\D$ to $\\mathbb{R}$, then\n$d^R_{\\mu,\\nu}(z,w)$ is a continuous function on\n$\\D\\times\\D$.\n\\end{thm}\n\\begin{proof}\nPick an arbitrary point $(z_0,w_0) \\in \\D \\times \\D$, and pick\n$\\varepsilon>0$ arbitrarily small.\n\nBy Lemma \\ref{lem:auxiliary}, there exists a $\\delta>0$ such that,\nfor $|z'_0-z_0|<\\delta$, $|w'_0-w_0|<\\delta$, we have\n\\[\n\\left|\\,\\Phi(z_0,w_0,\\sigma)-\\Phi(z'_0,w'_0,\\sigma)\\,\\right|\\,\\leq\\,\n\\varepsilon~,\n\\]\nuniformly in $\\sigma$. Pick now arbitrary $z'_0,w'_0$ so that\n$|z_0-z'_0|,|w_0-w'_0|<\\delta$.\n\nLet $m_{z_0,w_0,\\sigma}$, resp. $m_{z'_0,w'_0,\\sigma'}$, be the\nminimizing M\\\"{o}bius transform in the definition of\n$d_{\\mu,\\nu}^R(z_0,w_0)$, resp. 
$d_{\\mu,\\nu}^R(z'_0,w'_0)$, i.e.\n\\[\nd^R_{\\mu,\\nu}(z_0,w_0)= \\Phi(z_0,w_0,\\sigma) \\ \\ \\textrm{and} \\ \\\nd^R_{\\mu,\\nu}(z'_0,w'_0)= \\Phi(z_0,w_0,\\sigma')~.\n\\]\n\nIt then follows that\n\\begin{align*}\nd^R_{\\mu,\\nu}(z_0,w_0)&=\\min_{\\tau}\\Phi(z_0,w_0,\\tau)\n\\leq \\Phi(z_0,w_0,\\sigma')\\\\\n&\\leq \\Phi(z'_0,w'_0,\\sigma')+ |\\Phi(z_0,w_0,\\sigma') -\n\\Phi(z'_0,w'_0,\\sigma')|= d^R_{\\mu,\\nu}(z'_0,w'_0)+\n|\\Phi(z_0,w_0,\\sigma') - \\Phi(z'_0,w'_0,\\sigma')|\\\\\n&\\leq d^R_{\\mu,\\nu}(z'_0,w'_0)+ \\mathop{\\sup}_{\\omega \\in\nS_1}|\\Phi(z_0,w_0,\\omega) - \\Phi(z'_0,w'_0,\\omega)| \\leq\nd^R_{\\mu,\\nu}(z'_0,w'_0)+ \\varepsilon~.\n\\end{align*}\nLikewise $d^R_{\\mu,\\nu}(z'_0,w'_0) \\leq d^R_{\\mu,\\nu}(z_0,w_0) +\n\\varepsilon$, so that $\\abs{d^R_{\\mu,\\nu}(z_0,w_0) -\nd^R_{\\mu,\\nu}(z'_0,w'_0)}<\\varepsilon$.\n\\end{proof}\n\n\\subsection{Incorporating $d^R_{\\mu,\\nu}(z,w)$ into the transportation framework}\nThe next step in constructing the distance operator between surfaces\nis to incorporate the distance $d^R_{\\mu,\\nu}(z,w)$ defined in the\nprevious subsection into the (generalized) Kantorovich\ntransportation model:\n\\begin{equation}\nT^R_d(\\mu,\\nu)=\\inf_{\\pi\\in \\Pi(\\mu,\\nu)}\\int_{\\D\\times\n\\D}d^R_{\\mu,\\nu}(z,w)d\\pi(z,w).\n\\label{e:generalized_Kantorovich_transportation}\n\\end{equation}\nThe main result is that this procedure (under some extra conditions)\nfurnishes a \\emph{metric} between (disk-type) surfaces.\n\n\\begin{thm\nThere exists $\\pi^* \\in \\Pi(\\mu,\\nu)$ such that\n$$\\int_{\\D\\times \\D}d^R_{\\mu,\\nu}(z,w)d\\pi^*(z,w)=\\inf_{\\pi\\in \\Pi(\\mu,\\nu)}\\int_{\\D\\times \\D}d^R_{\\mu,\\nu}(z,w)d\\pi(z,w).$$\n\\end{thm}\n\n\\begin{proof}\nThis proof follows the same argument as in \\cite{Villani:2003},\nadapted here to our generalized setting. It uses the continuity of\nthe distance function to derive the existence of a global minimum of\n(\\ref{e:generalized_Kantorovich_transportation}). Let\n$\\Big(\\pi_k\\Big)_{k\\in \\mathbb{N}} \\in \\Pi(\\mu,\\nu)$ be a minimizer\nsequence of (\\ref{e:generalized_Kantorovich_transportation}), for\nexample by taking\n$$\n\\int_{\\D\\times \\D}d^R_{\\mu,\\nu}(z,w)d\\pi_k(z,w) < T^R_d(\\mu,\\nu) +\n\\frac{1}{k}.\n$$\nThen this sequence of measures is tight, that is, for every $\\varepsilon\n>0$, there exists a compact set $C\\subset \\D\\times\\D$ such that\n$\\pi_k(C)>1-\\varepsilon$, for all $k \\in \\mathbb{N}$. To see this, note\nthat since $\\D$ is separable and complete, the measures $\\mu$, $\\nu$\nare \\emph{tight} measures (see \\cite{Billingsley68}). This means\nthat for arbitrary $\\varepsilon >0$, there exist compact sets $A,B \\subset\n\\D$ so that $\\mu(A)>1-\\varepsilon\/2$ and $\\nu(B)>1-\\varepsilon\/2$. It then follows\nthat, for all $k \\in \\mathbb{N}$,\n$$\n\\pi_k(A\\times B) = \\pi_k(A\\times \\D) - \\pi_k(A \\times (\\D \\setminus\nB)\\,) \\geq \\mu(A) - \\nu(\\D\\setminus B) = \\mu(A) - (1 - \\nu(B)) > 1\n- \\varepsilon.\n$$\nSince the set $C = A \\times B \\subset \\D\\times \\D$ is compact, this\nproves the claimed tightness of the family $\\Big(\\pi_k\\Big)_{k\\in\n\\mathbb{N}}$. 
By Prohorov's Theorem \\cite{Billingsley68}, a tight\nfamily of measures is sequentially weakly compact; in our case this\nmeans that $\\Big(\\pi_k\\Big)_{k\\in \\mathbb{N}}$ has a weakly\nconvergent subsequence $\\Big(\\pi_{k_n}\\Big)_{n\\in \\mathbb{N}}$; by\ndefinition, its weak limit $\\pi^*$ satisfies, for every bounded\ncontinuous function $f$ on $\\D\\times\\D$,\n$$\n\\int_{\\D\\times \\D}f(z,w)d\\pi_{k_n}(z,w) \\rightarrow \\int_{\\D\\times\n\\D}f(z,w)d\\pi^*(z,w).\n$$\nTherefore, taking in particular the continuous function\n$f(z,w)=d^R_{\\mu,\\nu}(z,w)$, we obtain\n$$\nT^R_d(\\mu,\\nu) = \\lim_{n\\rightarrow \\infty} \\int_{\\D\\times\n\\D}f(z,w)d\\pi_{k_n}(z,w) = \\int_{\\D\\times \\D}f(z,w)d\\pi^*(z,w).\n$$\n\n\\end{proof}\n\nUnder rather mild conditions, the ``standard'' Kantorovich\ntransportation (\\ref{e:basic_Kantorovich_transporation}) on a metric\nspaces $(X,d)$ defines a metric on the space of probability\nmeasures on $X$ . We will prove that our generalization defines a\ndistance metric as well. More precisely, we shall prove first that\n$$\n\\mbox{\\bf{d}}^R(\\mathcal{M},\\mathcal{N})=T^R_d(\\mu,\\nu)\n$$\ndefines a semi-metric in the set of all disk-type surfaces. We shall\nrestrict ourselves to surfaces that are sufficiently smooth to allow\nuniformization, so that they can be globally and conformally\nparameterized over the hyperbolic disk. Under some extra\nassumptions, we will prove that $\\mbox{\\bf{d}}^R$ is a metric, in the sense\nthat $\\mbox{\\bf{d}}^R(\\mathcal{M},\\mathcal{N})=0$ implies that $\\mathcal{M}$ and $\\mathcal{N}$ are isometric.\n\nFor the semi-metric part we will again adapt a proof given in\n\\cite{Villani:2003} to our framework. In particular, we shall make\nuse of the following ``gluing lemma'':\n\\begin{lem}\n\\label{lem:gluing_lemma} Let $\\mu_1,\\mu_2,\\mu_3$ be three\nprobability measures on $\\D$, and let $\\pi_{12} \\in\n\\Pi(\\mu_1,\\mu_2)$, $\\pi_{23} \\in \\Pi(\\mu_2,\\mu_3)$ be two\ntransportation plans. Then there exist a probability measure $\\pi$\non $\\D \\times \\D \\times \\D$ that has $\\pi_{12},\\pi_{23}$ as\nmarginals, that is $\\int_{z_3\\in\\D} d\\pi(z_1,z_2,z_3) =\nd\\pi_{12}(z_1,z_2) $, and $\\int_{z_1\\in\\D} d\\pi(z_1,z_2,z_3) =\nd\\pi_{23}(z_2,z_3)$.\n\\end{lem}\nThis lemma will be used in the proof of the following:\n\n\\begin{thm}\nFor two disk-type surfaces $\\mathcal{M}=(\\D,\\mu)$, $\\mathcal{N}=(\\D,\\nu)$, let\n$\\mbox{\\bf{d}}^R(\\mathcal{M},\\mathcal{N})$ be defined by\n$$\n\\mbox{\\bf{d}}^R(\\mathcal{M},\\mathcal{N})=T^R_d(\\mu,\\nu).\n$$\nThen $\\mbox{\\bf{d}}^R$ defines a semi-metric on the space of disk-type\nsurfaces.\n\\end{thm}\n\\begin{proof}\n\nThe symmetry of $d^R_{\\mu,\\nu}$ implies symmetry for $T^R_d$, by the\nfollowing argument:\n\\begin{align*}\nT^R_d(\\mu,\\nu) &=\n\\mathop{\\inf}_{\\pi \\in \\Pi(\\mu,\\nu)}\\int_{\\D \\times \\D}d^R_{\\mu,\\nu}(z,w)d\\pi(z,w) = \\mathop{\\inf}_{\\pi \\in \\Pi(\\mu,\\nu)}\\int_{\\D \\times \\D}d^R_{\\nu,\\mu}(w,z)d\\pi(z,w) \\\\\n&=\n\\mathop{\\inf}_{\\pi \\in \\Pi(\\mu,\\nu)}\\int_{\\D \\times \\D}d^R_{\\nu,\\mu}(w,z)d\\widetilde{\\pi}(w,z), ~~~~~~~\\mbox{ where we have set }~\\widetilde{\\pi}(w,z)=\\pi(z,w)\\\\\n&= T^R_d(\\nu,\\mu)~. ~~~~~~~~~(\\mbox{ use that }\\pi\\in\\Pi(\\mu,\\nu)\n\\Leftrightarrow \\widetilde{\\pi}\\in\\Pi(\\nu,\\mu))\n\\end{align*}\n\n\nThe non-negativity of $d^R_{\\mu,\\nu}(\\cdot,\\cdot)$ automatically\nimplies $T^R_d(\\mu,\\nu) \\geq 0$.\n\n\nNext we show that, for any M\\\"{o}bius transformation $m$,\n$T^R_d(\\mu,m_*\\mu)=0$. 
To see this, pick the transportation plan\n$\\pi \\in \\Pi(\\mu,m_*\\mu)$ defined by\n$$\n\\int_{\\D\\times \\D} f(z,w) d\\pi(z,w) = \\int_{\\D} f(z,m(z))\n\\mu(z)\\,d\\vol_H(z).\n$$\nOn the one hand $\\pi \\in \\Pi(\\mu,m_*\\mu)$, since\n$$\\int_{A \\times \\D} d\\pi(z,w) = \\int_{A} \\mu(z)d\\vol_H(z),$$\nand\n\\begin{align*}\n\\int_{\\D\\times B} d\\pi(z,w) &=\n\\int_{\\D\\times \\D} \\chi_B(w) d\\pi(z,w) \\\\\n&=\\int_{\\D} \\chi_B(m(z)) \\mu(z)d\\vol_H(z) = \\int_{\\D} \\chi_B(w)\n\\mu_*(w)d\\vol_H(w),\n\\end{align*}\nwhere we used the change of variables $w=m(z)$ in the last step.\nFurthermore, $\\pi(z,w)$ is concentrated on the graph of $m$, i.e. on\n$\\set{(z,m(z)) \\ ; \\ z\\in \\D} \\subset \\D \\times \\D$. Since\n$d^R_{\\mu,m_*\\mu}(z,m(z)) = 0$ for all $z \\in \\D$ we obtain\ntherefore $T_d(\\mu,m_*\\mu) \\leq \\int_{\\D\\times \\D}\nd^R_{\\mu,m_*\\mu}(z,w)d\\pi(z,w) = 0$.\n\n\nFinally, we prove the triangle inequality $T^R_d(\\mu_1,\\mu_3) \\leq\nT^R_d(\\mu_1,\\mu_2) + T^R_d(\\mu_2,\\mu_3)$ . To this end we follow the\nargument in the proof given in \\cite{Villani:2003} (page 208). This\nis where we invoke the gluing Lemma stated above.\n\nWe start by picking arbitrary transportation plans $\\pi_{12} \\in\n\\Pi(\\mu_1,\\mu_2)$ and $\\pi_{23} \\in \\Pi(\\mu_2,\\mu_3)$. By Lemma\n\\ref{lem:gluing_lemma} there exists a probability measure $\\pi$ on\n$\\D\\times \\D \\times \\D$ with marginals $\\pi_{12}$ and $\\pi_{23}$.\nDenote by $\\pi_{13}$ its third marginal, that is\n$$\\int_{z_2\\in \\D}d\\pi(z_1,z_2,z_3) = d\\pi_{13}(z_1,z_3).$$\nThen\n\\begin{align*}\nT^R_d(\\mu_1,\\mu_3) &\\leq \\int_{\\D \\times \\D} d^R_{\\mu_1,\\mu_3}(z_1,z_3)d\\pi_{13}(z_1,z_3) = \\int_{\\D \\times \\D \\times \\D} d^R_{\\mu_1,\\mu_3}(z_1,z_3)d\\pi(z_1,z_2,z_3) \\\\\n&\\leq \\int_{\\D \\times \\D \\times \\D} \\Big( d^R_{\\mu_1,\\mu_2}(z_1,z_2) + d^R_{\\mu_2,\\mu_3}(z_2,z_3) \\Big )d\\pi(z_1,z_2,z_3) \\\\\n&\\leq \\int_{\\D \\times \\D \\times \\D} d^R_{\\mu_1,\\mu_2}(z_1,z_2)\nd\\pi(z_1,z_2,z_3) +\n\\int_{\\D \\times \\D \\times \\D} d^R_{\\mu_2,\\mu_3}(z_2,z_3) d\\pi(z_1,z_2,z_3) \\\\\n&\\leq \\int_{\\D \\times \\D } d^R_{\\mu_1,\\mu_2}(z_1,z_2)\nd\\pi_{12}(z_1,z_2) + \\int_{\\D \\times \\D } d^R_{\\mu_2,\\mu_3}(z_2,z_3)\nd\\pi_{23}(z_2,z_3),\n\\end{align*}\nwhere we used the triangle-inequality for $d^R_{\\mu,\\nu}$ listed in\n(Theorem \\ref{thm:properties_of_d}). Since we can choose $\\pi_{12}$\nand $\\pi_{23}$ to achieve arbitrary close values to the infimum in\neq.~(\\ref{e:generalized_Kantorovich_transportation}) the triangle\ninequality follows.\n\\end{proof}\n\nTo qualify as a metric rather than a semi-metric, $\\mbox{\\bf{d}}^R$ (or\n$T^R_d$) should be able to distinguish from each other any two\nsurfaces (or measures) that are not ``identical'', that is\nisometric. To prove that they can do so, we need an extra\nassumption: we shall require that the surfaces we consider have no\nself-isometries. 
More precisely, we require that each surface $\\mathcal{M}$\nthat we consider satisfies the following definition:\n\n\\begin{defn}\nA surface $\\mathcal{M}$ is said to be a singly\n$\\varrho\\mbox{-}_{\\mbox{\\tiny{H}}}\\mbox{fittable}$ (where $\\varrho\n\\in \\mathbb{R},$ $\\varrho \\neq 0$) if, for all $R > \\varrho$, and all\n$z\\in\\D$, there is no other M\\\"{o}bius transformation $m$ other than\nthe identity for which $$\\int_{\\Omega_{z,R}} \\,\n|\\mu(z)-\\mu(m(z))|\\,d\\vol_H(z)=0.$$\n\\end{defn}\n\n\\begin{rem}\nThis definition can also be read as follows: $\\mathcal{M}$ is singly\n$\\varrho\\mbox{-}_{\\mbox{\\tiny{H}}}\\mbox{fittable}$ if and only if,\nfor all $R>\\varrho$, any two conformal factors $\\mu_1$ and $\\mu_2$\nfor $\\mathcal{M}$ satisfy:\n\\begin{enumerate}\n\\item\nFor all $z\\in \\D$ there exists a unique minimum to the function $w\n\\mapsto d^R_{\\mu_1,\\mu_2}(z,w)$.\n\\item\nFor all pairs $(z,w)\\in \\D \\times \\D$ that achieve this minimum\nthere exists a unique M\\\"{o}bius transformation for which the\nintegral in {\\rm(\\ref{e:d_mu,nu(z,w)_def})} vanishes (with $\\mu_1$\nin the role of $\\mu$, and $\\mu_2$ in that of $\\nu$).\n\\end{enumerate}\n\\end{rem}\nEssentially, this definition requires that, from some sufficiently\nlarge (hyperbolic) scale onwards, there are no isometric pieces\nwithin $(\\D,\\mu)$ (or $(\\D,\\nu)$).\n\n\n\\begin{figure}[t]\n \n\\hspace*{.3 in}\n\\begin{minipage}{2.8 in}\n \\caption{Illustration of the proof of Theorem \\ref{thm:identity_of_indiscernibles}}\\label{fig:for_proof_identity_of_indiscernibles}\n\\end{minipage}\n\\begin{minipage}{3 in}\n\\hspace*{.7 in}\n\\includegraphics[width=0.8\\columnwidth]{figures\/ingrid_illu.png}\n\\end{minipage}\n\\\n\\end{figure}\n\nWe start with a lemma, and then prove the main result of this\nsubsection.\n\\begin{lem}\\label{lem:in_every_disk_z_w_d(z,w)=0}\nLet $\\pi \\in \\Pi(\\mu,\\nu)$ be such that $\\int_{\\D \\times\n\\D}\\,d^R_{\\mu,\\nu}(z,w)\\,d\\pi(z,w)=0$. Then, for all $z_0 \\in \\D$\nand $\\delta >0$, there exists at least one point $z \\in\n\\Omega_{z_0,\\delta}$ such that $d^R_{\\mu,\\nu}(z,w)=0$ for some $w\n\\in \\D$.\n\\end{lem}\n\\begin{proof}\nBy contradiction: assume that there exists a disk\n$\\Omega_{z_0,\\delta}$ such that $d^R_{\\mu,\\nu}(z,w) >0$ for all\n$z\\in \\Omega_{z_0,\\delta}$ and all $w \\in \\D$. Since\n$$\n\\int_{\\Omega(z_0,\\delta) \\times \\D}\\,d\\pi(z,w) =\n\\int_{\\Omega(z_0,\\delta)}\\mu(z)\\, d\\vol_H(z)>0~,\n$$\nthe set $\\Omega(z_0,\\delta) \\times \\D$ contains some of the support\nof $\\pi$. It follows that\n$$\n\\int_{\\Omega(z_0,\\delta)\\times \\D} d^R_{\\mu,\\nu}(z,w) d\\pi(z,w)>0~,\n$$\nwhich contradicts\n$$\n \\int_{\\Omega(z_0,\\delta) \\times \\D}d^R_{\\mu,\\nu}(z,w)d\\pi(z,w)\n\\le \\int_{\\D \\times \\D}d^R_{\\mu,\\nu}(z,w)d\\pi(z,w) = 0~.\n$$\n\\end{proof}\n\\begin{thm}\\label{thm:identity_of_indiscernibles}\nSuppose that $\\mathcal{M}$ and $\\mathcal{N}$ are two surfaces that are singly\n$\\varrho\\mbox{-}_{\\mbox{\\tiny{H}}}$fittable. 
If $\\mbox{\\bf{d}}^R(\\mathcal{M},\\mathcal{N})=0$ for\nsome $R > \\varrho$, then there exists a M\\\"{o}bius transformation $m \\in\n\\Md$ that is a global isometry between $\\mathcal{M}=(\\D,\\mu)$ and\n$\\mathcal{N}=(\\D,\\nu)$ (where $\\mu$ and $\\nu$ are conformal factors of $\\mathcal{M}$\nand $\\mathcal{N}$, respectively).\n\\end{thm}\n\\begin{proof}\nWhen $\\mbox{\\bf{d}}^R(\\mathcal{M},\\mathcal{N})=0$, there exists (see \\cite{Villani:2003}) $\\pi\n\\in \\Pi(\\mu,\\nu)$ such that\n$$\n\\int_{\\D \\times \\D}d^R_{\\mu,\\nu}(z,w)d\\pi(z,w)=0.\n$$\n\nNext, pick an arbitrary point $z_0 \\in \\D$ such that, for some $w_0\n\\in \\D$, we have $d^R_{\\mu,\\nu}(z_0,w_0)=0$. (The existence of such\na pair is guaranteed by Lemma \\ref{lem:in_every_disk_z_w_d(z,w)=0}.)\nThis implies that there exists a unique M\\\"{o}bius transformation\n$m_0 \\in \\Md$ that takes $z_0$ to $w_0$ and that satisfies\n$\\nu(m_0(z))=\\mu(z)$ for all $z \\in \\Omega_{z_0,R}$. We define\n$$\n\\rho^* = \\sup \\{\\rho \\,;\\, d^\\rho_{\\mu,\\nu}(z_0,w_0)=0 \\};\n$$\nclearly $\\rho^* \\geq R$. The theorem will be proved if we show that\n$\\rho^*= \\infty$. We shall do this by contradiction, i.e. we assume\n$\\rho^* < \\infty$, and then derive a contradiction.\n\nSo let's assume $\\rho^* < \\infty$. Consider $\\Omega_{z_0,\\rho^*}$,\nthe hyperbolic disk around $z_0$ of radius $\\rho^*$. (See Figure\n\\ref{fig:for_proof_identity_of_indiscernibles} for illustration.)\nSet $\\varepsilon = (R - \\varrho)\/2$, and consider the points on the\nhyperbolic circle $C=\\partial \\Omega_{z_0, \\rho^*-\\varrho-\\varepsilon}$.\nFor every $z_1 \\in C$, consider the hyperbolic disk\n$\\Omega_{z_1,\\varepsilon\/2}$; by Lemma\n\\ref{lem:in_every_disk_z_w_d(z,w)=0} there exists a point $z_2$ in\nthis disk and a corresponding point $w_2 \\in \\D$ such that\n$d^R_{\\mu,\\nu}(z_2,w_2)=0$, i.e. such that\n\\[\n\\int_{\\Omega_{z_2,R}}\\,|\\mu(z)-m'^*\\nu(z)|^2\\,d\\vol_H(z) \\,=\\,0~\n\\]\nfor some M\\\"{o}bius transformation $m'$ that maps $z_2$ to $w_2$; in\nparticular, we have that\n\\begin{equation}\n\\mu(z)=\\nu(m'(z))~~\\mbox{ for all }~ z \\in \\Omega_{z_2,R}~.\n\\label{e:ingrid:mprime}\n\\end{equation}\nThe hyperbolic distance from $z_2$ to $\\partial \\Omega_{z_0,\\rho^*}$\nis at least $\\varrho+\\varepsilon\/2$.\nIt follows that the hyperbolic disk $\\Omega_{z_2,\\varrho+\\varepsilon\/4}$ is\ncompletely contained in $\\Omega_{z_0,\\rho^*}$; since\n$\\mu(z)=\\nu(m_0(z))$ for all $z \\in \\Omega_{z_0,\\rho^*}$, this must\ntherefore hold, in particular, for all $z \\in\n\\Omega_{z_2,\\varrho+\\varepsilon\/4}$. Since\n$\\Omega_{z_2,\\varrho+\\varepsilon\/4}\\subset \\Omega_{z_2,R}$, we also have\n$\\mu(z)=\\nu(m'(z))$ for all $z \\in \\Omega_{z_2,\\varrho+\\varepsilon\/4}$, by\n(\\ref{e:ingrid:mprime}). This implies $\\nu(w)=\\nu(m_0\\circ\n(m')^{-1}(w))$ for all $w \\in \\Omega_{w_2,\\varrho+\\varepsilon\/4}$. Because\n$\\mathcal{N}$ is singly $\\varrho\\mbox{-}_{\\mbox{\\tiny{H}}}$fittable, it\nfollows that $m_0\\circ (m')^{-1}$ must be the identity, or $m_0=\nm'$. Combining this with (\\ref{e:ingrid:mprime}), we have thus shown\nthat $\\mu(z)=\\nu(m_0(z))$ for all $z \\in \\Omega_{z_2,R}$.\n\nSince the distance between $z_2$ and $z_1$ is at most $\\varepsilon\/2$, we\nalso have\n\\begin{equation*}\n\\Omega_{z_2,R} \\supset \\Omega_{z_1,R - \\varepsilon\/2} = \\Omega_{z_1,\\varrho\n+ 3\\varepsilon\/2}~. 
\\label{e:ingrid:supset}\n\\end{equation*}\n\nThis implies that if we select such a point $z_2(z_1)$ for each\n$z_1\\in C$, then $\\Omega_{z_0, \\rho^* - \\varrho-\\varepsilon}\\cup\n\\left(\\,\\cup_{z_1 \\in C}\\,\\Omega_{z_2(z_1),R}\\right)$ covers the\nopen disk $\\Omega_{z_0,\\rho^*+\\varepsilon\/2}$. By our earlier argument,\n$\\mu(z)=\\nu(m_0(z))$ for all $z$ in each of the\n$\\Omega_{z_2(z_1),R}$; since the same is true on $\\Omega_{z_0,\n\\rho^* - \\varrho-\\varepsilon}$, it follows that $\\mu(z)=\\nu(m_0(z))$ for\nall $z$ in $\\Omega_{z_0,\\rho^*+\\varepsilon\/2}$. This contradicts the\ndefinition of $\\rho^*$ as the supremum of all radii for which this\nwas true; it follows that our initial assumption, that $\\rho^*$ is\nfinite, cannot be true, completing the proof.\n\\end{proof}\n\nFor $(\\D,\\mu)$ to be singly\n$\\varrho\\mbox{-}_{\\mbox{\\tiny{H}}}$fittable, no two hyperbolic disks\n$\\Omega_{z,R}$, $\\Omega_{w,R}$ (where $w$ can equal $z$) can be\nisometric via a M\\\"{o}bius transformation $m$, if $R > \\varrho$,\nexcept if $m=Id$. However, if $z$ is close (in the Euclidean sense)\nto the boundary of $\\D$, the hyperbolic disk $\\Omega_{z,R}$ is very\nsmall in the Euclidean sense, and corresponds to a very small piece\n(near the boundary) of $\\mathcal{M}$. This means that single\n$\\varrho\\mbox{-}_{\\mbox{\\tiny{H}}}$fittability imposes restrictions\nin increasingly small scales near the boundary of $\\mathcal{M}$; from a\npractical point of view, this is hard to check, and in many\napplications, the behavior of $\\mathcal{M}$ close to its boundary is\nirrelevant. For this reason, we also formulate the following\nrelaxation of the results above.\n\n\\begin{defn}\nA surface $\\mathcal{M}$ is said to be a singly $A\\mbox{-}_{\\!_{\\mathcal{M}}}$fittable\n(where $A> 0$) if there are no patches (i.e. open, path-connected\nsets) in $\\mathcal{M}$ of area larger than $A$ that are isometric, with\nrespect to the metric on $\\mathcal{M}$.\n\\end{defn}\n\nIf a surface is singly $A\\mbox{-}_{\\!_{\\mathcal{M}}}$fittable, then it is\nobviously also $A'\\mbox{-}_{\\!_{\\mathcal{M}}}$fittable for all $A' \\geq A$;\nthe condition of being $A\\mbox{-}_{\\!_{\\mathcal{M}}}$fittable becomes more\nrestrictive as $A$ decreases. The following theorem states that two\nsingly $A\\mbox{-}_{\\!_{\\mathcal{M}}}$fittable surfaces at zero\n$\\mbox{\\bf{d}}^R$-distance from each other must necessarily be isometric, up to\nsome small boundary layer.\n\n\\begin{thm}\nConsider two surfaces $\\mathcal{M}$ and $\\mathcal{N}$, with corresponding conformal\nfactors $\\mu$ and $\\nu$ on $\\D$, and suppose $\\mbox{\\bf{d}}^R(\\mathcal{M},\\mathcal{N})=0$ for\nsome $R>0$. Then the following holds: for arbitrarily large\n$\\rho>0$, there exist a M\\\"{o}bius transformation $m \\in M_D$ and a value\n$A>0$ such that if $\\mathcal{M}$ and $\\mathcal{N}$ are singly\n$A\\mbox{-}_{\\!_{\\mathcal{M}}}$fittable then $\\mu(m(z))=\\nu(z)$, for all $z\n\\in \\Omega_{0,\\rho}$.\n\\end{thm}\n\\begin{proof}\nPart of the proof follows the same lines as for Theorem\n\\ref{thm:identity_of_indiscernibles}. 
We highlight here only the new\nelements needed for this proof.\n\nFirst, note that, for arbitrary $r>0$ and $z_0 \\in \\D$,\n\\begin{equation}\\label{e:lower_bound_for_patch_area}\n \\Vol_\\mathcal{M}(\\Omega_{z_0,r}) = \\int_{\\Omega_{z_0,r}}\\mu(z)d\\vol_H(z)\n \\geq \\Vol_H(\\Omega_{z_0,r})\\left[\\min_{z\\in\\Omega_{z_0,r}}\\mu(z)\\right] =\n \\Vol_H(\\Omega_{0,r})\\left[\\min_{z\\in\\Omega_{z_0,r}}\\mu(z)\\right].\n\\end{equation}\nThis motivates the definition of the sets $\\mathcal{O}_{A,r}$,\n\\begin{equation}\\label{e:O_big_area_set}\n \\mathcal{O}_{A,r} = \\set{ z\\in \\D \\ \\mid \\ \\min_{z'\\in\\Omega_{z,r}}\\mu(z') >\n \\frac{A}{\\Vol_H(\\Omega_{0,r})}\n };\n\\end{equation}\n$A>0$ is still arbitrary at this point; its value will be set below.\n\nNow pick $r<R$ with $\\varepsilon:=(R-r)\/2<r$; in view of\n(\\ref{e:lower_bound_for_patch_area}), every $z\\in\\mathcal{O}_{A,r}$ then\nsatisfies $\\Vol_\\mathcal{M}(\\Omega_{z,R})\\geq\\Vol_\\mathcal{M}(\\Omega_{z,r})>A$.\n\nSince $\\mu$ is bounded below by a strictly positive constant on each\ndisk $\\Omega_{0,\\rho'}$, $\\rho'<\\infty$, we can pick, for arbitrarily large $\\rho$, $A>0$\nsuch that $\\Omega_{0,\\rho} \\subset \\mathcal{O}_{A,r}$; for this it suffices\nthat $A$ not exceed a threshold depending on $\\rho$ and $r$. (Since\n$\\mu(z)\\rightarrow 0$ as $z$ approaches the boundary of $\\D$ in\nEuclidean norm, we expect this threshold to tend towards $0$ as\n$\\rho \\rightarrow \\infty$.) We assume that $\\Omega_{0,\\rho} \\subset\n\\mathcal{O}_{A,r}$ in what follows.\n\nAs in the proof of Theorem\n\\ref{thm:identity_of_indiscernibles}, we invoke Lemma\n\\ref{lem:in_every_disk_z_w_d(z,w)=0} to infer the existence of\n$z_0,w_0$ such that $z_0\\in\\Omega_{0,\\varepsilon\/2}$ and\n$d^R_{\\mu,\\nu}(z_0,w_0)=0$. We denote\n$$\n\\rho^* = \\sup \\{r' \\,;\\, d^{r'}_{\\mu,\\nu}(z_0,w_0)=0 \\};\n$$\nas before, there exists a M\\\"{o}bius transformation $m$ such that\n$\\nu(m(z))=\\mu(z)$ for all $z$ in $\\Omega_{z_0,\\rho^*}$. To complete\nour proof it therefore suffices to show that $\\rho^* \\geq \\rho +\n\\varepsilon\/2$, since $\\Omega_{0,\\rho}\\subset \\Omega_{z_0,\\rho+\\varepsilon\/2}$.\n\nSuppose the opposite is true, i.e. $\\rho^* < \\rho+\\varepsilon\/2$. By the\nsame arguments as in the proof of Theorem\n\\ref{thm:identity_of_indiscernibles}, there exists, for each\n$z_1\\in \\partial \\Omega_{z_0,\\rho^*-r-\\varepsilon}$, a point $z_2 \\in\n\\Omega_{z_1,\\varepsilon\/2}$ such that $d^R_{\\mu,\\nu}(z_2,w_2)=0$ for some\n$w_2$. Since the hyperbolic distance between $z_2$ and $0$ is\nbounded above by $\\varepsilon\/2+\\rho^*-r-\\varepsilon+\\varepsilon\/2<\\rho-r+\\varepsilon\/2<\\rho$,\n$z_2 \\in \\Omega_{0,\\rho} \\subset \\mathcal{O}_{A,r}$, so that\n$\\Vol_{\\mathcal{M}}(\\Omega_{z_2,R})>A$. It then follows from the conditions\non $\\mathcal{M}$ and $\\mathcal{N}$ that $\\nu(m(z))=\\mu(z)$ for all $z$ in\n$\\Omega_{z_0,\\rho^*} \\cup \\Omega_{z_2,R} \\supset\n\\Omega_{z_0,\\rho^*}\\cup \\Omega_{z_1,r+3\\varepsilon\/2}$. Repeating the\nargument for all $z_1\\in \\partial \\Omega_{z_0,\\rho^*-r-\\varepsilon}$ shows\nthat $\\nu(m(z))=\\mu(z)$ can be extended to all $z \\in\n\\Omega_{z_0,\\rho^*+\\varepsilon\/2}$, leading to a contradiction that\ncompletes the proof.\n\\end{proof}\n\n\n\\section{Discretization and implementation}\n\\label{s:the_discrete_case_implementation} To transform the\ntheoretical framework constructed in the preceding sections into an\nalgorithm, we need to discretize the relevant continuous objects.\nOur general plan is to recast the transportation\neq.~(\\ref{e:generalized_Kantorovich_transportation}) as a linear\nprogramming problem between discrete measures; the discrete problem\nwe shall eventually arrive at is sketched below.
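Concretely, once each surface has been reduced to finitely many sample
points $z_i$, $w_j$ with weights $\\xi_i$, $\\zeta_j$ (as described in
subsection \\ref{ss:discretizing_discrete_measures} below), and once a
cost matrix approximating $d^R_{\\mu,\\nu}(z_i,w_j)$ has been computed,
eq.~(\\ref{e:generalized_Kantorovich_transportation}) becomes the
standard transportation linear program: minimize
$\\sum_{i,j}d^R_{\\mu,\\nu}(z_i,w_j)\\,\\pi_{ij}$ over $\\pi_{ij}\\geq 0$
subject to $\\sum_j \\pi_{ij}=\\xi_i$ and $\\sum_i \\pi_{ij}=\\zeta_j$. The
following sketch (in Python, using \\texttt{scipy.optimize.linprog}) is
purely illustrative; the names \\texttt{cost}, \\texttt{xi} and
\\texttt{zeta} are placeholders for quantities constructed in the
remainder of this section.
\\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

def discrete_transport(cost, xi, zeta):
    # Discrete Kantorovich problem: minimize sum_ij cost[i,j]*pi[i,j]
    # over pi >= 0 with row sums equal to xi and column sums equal to zeta.
    n, p = cost.shape
    A_eq = np.zeros((n + p, n * p))
    for i in range(n):                  # marginal on the first surface
        A_eq[i, i * p:(i + 1) * p] = 1.0
    for j in range(p):                  # marginal on the second surface
        A_eq[n + j, j::p] = 1.0
    b_eq = np.concatenate([xi, zeta])
    res = linprog(cost.ravel(), A_eq=A_eq, b_eq=b_eq,
                  bounds=(0, None), method='highs')
    return res.fun, res.x.reshape(n, p)   # optimal value and optimal plan
\\end{verbatim}
Producing the ingredients of this sketch from two given surfaces is
the subject of the rest of this section.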
This requires two approximation steps: \\\\\n1) approximating the surface's Uniformization, and \\\\\n2) discretizing the resulting continuous measures and finding the\noptimal transport between the discrete measures.\n\nTo show how we do this, we first review a few basic notions such as\nthe representation of (approximations to) surfaces by faceted,\npiecewise flat approximations, called {\\em meshes}, and discrete\nconformal mappings; the conventions we describe here are the same as\nadopted in \\cite{Lipman:2009:MVF}.\n\n\\subsection{Meshes, mid-edge meshes, and discrete conformal mapping}\nTriangular (piecewise-linear) meshes are a popular choice for the\ndefinition of discrete versions of smooth surfaces. We shall denote\na triangular mesh by the triple $M = (V, E, F)$, where\n$V=\\{v_i\\}_{i=1}^m \\subset \\mathbb R^3$ is the set of vertices,\n$E=\\{e_{i,j}\\}$ the set of edges, and $F=\\{f_{i,j,k}\\}$ the set\nof faces (oriented $i\\rightarrow j \\rightarrow k$). When dealing\nwith a second surface, we shall denote its mesh by $N$. We assume\nour mesh is homeomorphic to a disk.\n\nNext, we introduce ``conformal mappings'' of a mesh to the unit\ndisk. Natural candidates for discrete conformal mappings are not\nimmediately obvious. Since we are dealing with piecewise linear\nsurfaces, it might seem natural to select a continuous linear maps\nthat is piecewise affine, such that its restriction to each triangle\nis a similarity transformation. A priori, a similarity map from a\ntriangular face to the disk has 4 degrees of freedom; requiring that\nthe image of each edge remain a shared part of the boundary of the\nimages of the faces abutting the edge, and that the map be\ncontinuous when crossing this boundary, imposes 4 constraints for\neach edge. This quick back of the envelope calculation thus allows\n$4|F|$ degrees of freedom for such a construction, with $4|E|$\nconstraints. Since $3|F|\/2 \\approx |E|$ this problem is over\nconstrained, and a construction along these lines is not possible. A\ndifferent approach uses the notion of discrete harmonic and discrete\nconjugate harmonic functions due to Pinkall and Polthier\n\\cite{Pinkall93,Polthier05} to define a discrete conformal mapping\non the mid-edge mesh (to be defined shortly). This relaxes the\nproblem to define a map via a similarity on each triangle that is\ncontinuous through only \\emph{one} point in each edge, namely the\nmid point. This procedure was employed in \\cite{Lipman:2009:MVF}; we\nwill summarize it here; for additional implementation details we\nrefer the interested reader (or programmer) to that paper, which\nincludes a pseudo-code.\n\nThe mid-edge mesh $\\textbf{\\textsf{M}} = (\\textbf{\\textsf{V}}, \\textbf{\\textsf{E}}, \\textbf{\\textsf{F}})$ of a given mesh $M=(V,\nE, F)$ is defined as follows. For the vertices $\\textbf{\\textsf{v}}_r \\in \\textbf{\\textsf{V}}$, we\npick the mid-points of the edges of the mesh $M$; we call these the\nmid-edge points of $M$. There is thus a $\\textbf{\\textsf{v}}_r \\in \\textbf{\\textsf{V}}$\ncorresponding to each edge $e_{i,j} \\in E$. If $\\textbf{\\textsf{v}}_s$ and $\\textbf{\\textsf{v}}_r$\nare the mid-points of edges in $E$ that share a vertex in $M$, then\nthere is an edge $\\textbf{\\textsf{e}}_{s,r} \\in \\textbf{\\textsf{E}}$ that connects them. 
It follows\nthat for each face $f_{i,j,k} \\in F$ we can define a corresponding\nface $\\mf_{r,s,t} \\in \\textbf{\\textsf{F}}$, the vertices of which are the mid-edge\npoints of (the edges of) $f_{i,j,k}$; this face has the same\norientation as $f_{i,j,k}$. Note that the mid-edge mesh is not a\nmanifold mesh, as illustrated by the mid-edge mesh in Figure\n\\ref{f:discrete_type_1}, shown together with its ``parent'' mesh: in\n$\\textbf{\\textsf{M}}$ each edge ``belongs'' to only one face $\\textbf{\\textsf{F}}$, as opposed to a\nmanifold mesh, in which most edges (the edges on the boundary are\nexceptions) function as a hinge between two faces. This ``lace''\nstructure makes a mid-edge mesh more flexible: it turns out that it\nis possible to define a piecewise linear map that makes each face in\n$\\textbf{\\textsf{F}}$ undergo a pure scaling (i.e. all its edges are shrunk or\nextended by the same factor) and that simultaneously flattens the\nwhole mid-edge mesh. By extending this back to the original mesh, we\nthus obtain a map from each triangular face to a similar triangle in\nthe plane; these individual similarities can be ``knitted together''\nthrough the mid-edge points, which continue to coincide (unlike most\nof the vertices of the original triangles).\n\n\\begin{figure}[ht]\n\\centering \\setlength{\\tabcolsep}{0.4cm}\n\\begin{tabular}{@{\\hspace{0.0cm}}c@{\\hspace{0.2cm}}c@{\\hspace{0.0cm}}}\n\\includegraphics[width=0.4\\columnwidth]{figures\/discrete_msh.png}\n&\n\\includegraphics[width=0.4\\columnwidth]{figures\/discrete_me.png} \\\\\nDiscrete mesh& Mid-edge mesh\\\\\n\\includegraphics[width=0.4\\columnwidth]{figures\/discrete_msh_zoom.png}\n&\n\\includegraphics[width=0.4\\columnwidth]{figures\/discrete_me_zoom.png} \\\\\nSurface mesh zoom-in & Mid-edge mesh zoom-in\n\\end{tabular}\n\\caption{A mammalian tooth surface mesh, with the corresponding\nmid-edge mesh.}\\label{f:discrete_type_1}\n\\end{figure}\n\nTo determine the flattening map, we use the framework of discrete\nharmonic and conjugate harmonic functions, first defined and studied\nby Pinkall and Polthier \\cite{Pinkall93,Polthier05} in the context\nof discrete minimal surfaces. This framework was first adapted to\nthe present context in \\cite{Lipman:2009:MVF}; this adaptation is\nexplained in some detail in Appendix B. The flattening map is\nwell-defined at the mid-edges $\\textbf{\\textsf{v}}_s$.\nAs shown in \\cite{Lipman:2009:MVF} (see also Appendix B) the\nboundary of the mesh gets mapped onto a region with a straight\nhorizontal slit (see Figure \\ref{f:discrete_type_2}, where the\nboundary points are marked in red). We can assume, without loss of\ngenerality, that this slit coincides with the interval $[-2, 2]\n\\subset \\C$, since it would suffice to shift and scale the whole\nfigure to make this happen. The holomorphic map $z=w+\\frac{1}{w}$\nmaps the unit disk conformally to $\\mathbb{C}\\setminus [-2,2]$, with\nthe boundary of the disk mapped to the slit at $[-2,2]$; when the\ninverse of this map is applied to our flattened mid-edge mesh, its\nimage will thus be a mid-edge mesh in the unit disk, with the\nboundary of the disk corresponding to the boundary of our\n(disk-like) surface. (See Figure \\ref{f:discrete_type_2}.) 
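In practice, inverting $z=w+\\frac{1}{w}$ amounts to selecting, for
each flattened mid-edge vertex, the root of $w^2-zw+1=0$ that lies in
the closed unit disk (the two roots are reciprocal, and both lie on
the unit circle precisely when $z$ belongs to the slit $[-2,2]$). A
minimal sketch of this step, assuming the complex coordinates of the
flattened mid-edge vertices are collected in an array
\\texttt{z\\_flat}, could read:
\\begin{verbatim}
import numpy as np

def slit_plane_to_disk(z_flat):
    # Invert the map z = w + w**(-1): solve w**2 - z*w + 1 = 0 and keep
    # the root lying inside the closed unit disk.
    z = np.asarray(z_flat, dtype=complex)
    s = np.sqrt(z * z - 4.0)            # principal branch of the square root
    w_plus, w_minus = 0.5 * (z + s), 0.5 * (z - s)
    return np.where(np.abs(w_plus) <= 1.0, w_plus, w_minus)
\\end{verbatim}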
We shall\ndenote by $\\Phi:\\textbf{\\textsf{V}} \\rightarrow \\C$ the concatenation of these\ndifferent conformal and discrete-conformal maps, from the original\nmid-edge mesh to the corresponding mid-edge mesh in the unit disk.\n\n\\begin{figure}[ht]\n\\centering \\setlength{\\tabcolsep}{0.4cm}\n\\begin{tabular}{@{\\hspace{0.0cm}}c@{\\hspace{0.0cm}}c@{\\hspace{0.0cm}}c@{\\hspace{0.0cm}}}\n\\includegraphics[width=0.2\\columnwidth]{figures\/discrete_slit.png} &\n\\includegraphics[width=0.4\\columnwidth]{figures\/discrete_slit_zoom.png}\\\\\nMid-edge uniformization & Uniformization Zoom-in \\\\\n\\includegraphics[width=0.4\\columnwidth]{figures\/discrete_disc_uni.png}& \\includegraphics[width=0.5\\columnwidth]{figures\/discrete_conf_factors.png}\\\\\nAfter mapping to the disk & Interpolated conformal factor\n\\end{tabular}\\\\\n\\caption{The discrete conformal transform to the unit disk for the\nsurface of Figure \\ref{f:discrete_type_1}, and the interpolation of\nthe corresponding discrete conformal factors (plotted with the JET\ncolor map in Matlab). The red points in the top row's images show\nthe boundary points of the disk.} \\label{f:discrete_type_2}\n\\end{figure}\n\nNext, we define the Euclidean discrete conformal factors, defined as\nthe density, w.r.t. the Euclidean metric, of the mid-edge triangles\n(faces), i.e.\n\\[\n\\mu^E_{\\mf_{r,s,t}} =\n\\frac{\\vol_{\\mathds{R}^3}(\\mf_{r,s,t})}{\\vol(\\Phi(\\mf_{r,s,t}))}.\n\\]\nNote that according to this definition, we have\n\\[\n\\int_{\\Phi(\\mf_{r,s,t})}\\,\\mu^E_{\\mf_{r,s,t}}\\,d\\vol_E\\, =\n\\,\\frac{\\vol_{\\mathds{R}^3}(\\mf_{r,s,t})}{\\vol_E\\left(\\Phi(\\mf_{r,s,t})\\right)}\\,\n\\vol_E\\left(\\Phi(\\mf_{r,s,t})\\right)\\,=\\,\\vol_{\\mathds{R}^3}(\\mf_{r,s,t}),\n\\]\nwhere $\\vol_E$ denotes the standard Euclidean volume element\n$dx^1\\wedge dx^2$ in $\\C$, and $\\vol_{\\mathbb{R}^3}(\\mf)$ stands for the\narea of $\\mf$ as induced by the standard Euclidean volume element in\n$\\mathbb{R}^3$. The discrete Euclidean conformal factor at a mid-edge vertex\n$\\textbf{\\textsf{v}}_r$ is then defined as the average of the conformal factors for\nthe two faces $\\mf_{r,s,t}$ and $\\mf_{r,s',t'}$ that touch in\n$\\textbf{\\textsf{v}}_r$, i.e.\n\\[\n\\mu^E_{\\textbf{\\textsf{v}}_r} \\,=\\,\n\\frac{1}{2}\\,\\left(\\mu^E_{\\mf_{r,s,t}}\\,+\\,\\mu^E_{\\mf_{r,s',t'}}\\right).\n\\]\nFigure \\ref{f:discrete_type_2} illustrates the values of the\nEuclidean conformal factor for the mammalian tooth surface of\nearlier figures. The discrete hyperbolic conformal factors are\ndefined according to the following equation, consistent with the\nconvention adopted in section \\ref{s:prelim},\n\\begin{equation}\\label{e:discrete_hyperbolic_conformal_density}\n \\mu^H_{\\textbf{\\textsf{v}}_r} \\,= \\,\\mu^E_{\\textbf{\\textsf{v}}_r} \\,\\left(1 - |\\Phi(\\textbf{\\textsf{v}}_r)|^2\\right)^2.\n\\end{equation}\nAs before, we shall often drop the superscript: unless otherwise\nstated, $\\mu=\\mu^H$, and $\\nu=\\nu^H$.\n\nThe (approximately) conformal mapping of the original mesh to the\ndisk is completed by constructing a smooth interpolant $\\Gamma_\\mu\n:\\D \\rightarrow \\mathbb{R}$, which interpolates the discrete conformal\nfactor so far defined only at the vertices in $\\Phi(\\textbf{\\textsf{V}})$;\n$\\Gamma_\\nu$ is constructed in the same way. In practice we use\nThin-Plate Splines, i.e. 
functions of the type\n\\[\n\\Gamma_\\mu(z) = p_1(z)\\, +\\,\\sum_i\\, b_i \\,\\psi(|z-z_i|)\\,,\n\\]\nwhere $\\psi(r)=r^2\\log(r^2)$, $p_1(z)$ is a linear polynomial in\n$x^1,x^2$, and $b_i \\in \\C$; $p_1$ and the $b_i$ are determined by\nthe data that need to be interpolated. Similarly\n$\\Gamma_\\nu(w)\\,=\\,q_1(w)\\,+\\,\\sum_j \\,c_j\\, \\psi(|w-w_j|)$ for some\nconstants $c_j \\in \\C$ and a linear polynomial $q_1(w)$ in\n$y^1,y^2$. We use as interpolation centers two point sets\n$Z=\\set{z_i}_{i=1}^n,$ and $W=\\set{w_j}_{j=1}^p$ defined in the next\nsubsection for the discretization of measures. See Figure\n\\ref{f:discrete_type_2} (bottom-right) the interpolated conformal\nfactor based on the black point set.\n\nWe also note that for practical purposes it is sometimes\nadvantageous to use Smoothing Thin-Plate Splines:\n$$\\Gamma_\\mu(z) = \\mathop{\\mathrm{argmin}}_{\\gamma} \\set{ \\lambda \\sum_{r} \\abs{\\mu_{\\textbf{\\textsf{v}}_r} - \\gamma(\\Phi(\\textbf{\\textsf{v}}_r))}^2 + (1-\\lambda) \\int_{\\D} \\parr{\\frac{\\partial^2 \\gamma }{\\partial (x^1)^2}}^2 + \\parr{\\frac{\\partial^2 \\gamma }{\\partial x^1 x^2}}^2 + \\parr{\\frac{\\partial^2 \\gamma }{\\partial (x^2)^2}}^2 dx^1 \\wedge dx^2 }.$$\nwhen using these, we picked the value $0.99$ for the smoothing\nfactor $\\lambda$.\n\n\\subsection{Discretizing continuous measures and their transport}\n\\label{ss:discretizing_discrete_measures}\n\nIn this subsection we indicate how to construct discrete\napproximations ${T_{\\mbox{\\footnotesize{discr.}},d}^R}(\\xi,\\zeta)$\nfor the distance $\\mbox{\\bf{d}}^R(\\mathcal{X},\\mathcal{Y})=T^R_d(\\xi,\\zeta)$ between two surfaces\n$\\mathcal{X}$ and $\\mathcal{Y}$, each characterized by a corresponding smooth density\non the unit disk $\\D$ ($\\xi$ for $\\mathcal{X}$, $\\zeta$ for $\\mathcal{Y}$). (In\npractice, we will use the smooth functions $\\Gamma_{\\mu}$ and\n$\\Gamma_{\\nu}$ for $\\xi$ and $\\zeta$. ) We shall use discrete\noptimal transport to construct our approximation\n${T_{\\mbox{\\footnotesize{discr.}},d}^R}(\\xi,\\zeta)$, based on\nsampling sets for the surfaces, with convergence to the continuous\ndistance as the sampling is refined.\n\nTo quantify how fine a sampling set $Z$ is, we use the notion of\n\\emph{fill distance} $\\varphi(Z)$:\n\\[\n\\varphi(Z)\\,:=\\,\\sup \\set{r>0\\ \\big| \\ z \\in \\mathcal{M}:B_g(z,r)\\cap Z_h =\n\\emptyset}~,\n\\]\nwhere $B_g(z,r)$ is the geodesic open ball of radius $r$ centered at\n$z$. That is, $\\varphi(Z)$ is the radius of the largest geodesic\nball that can be fitted on the surface $\\mathcal{X}$ without including any\npoint of $Z$. The smaller $\\varphi(Z)$, the finer the sampling set.\n\nGiven the smooth density $\\xi$ (on $\\D$), we discretize it by first\ndistributing $n$ points $Z=\\{z_i\\}_{i=1}^n$ on $\\mathcal{X}$ with\n$\\varphi(Z)=h>0$. For $i=1,\\ldots,n$, we define the sets $\\Xi_i$ to\nbe the Voronoi cells corresponding to $z_i \\in Z$; this gives a\npartition of the surface $\\mathcal{X}$ into disjoint convex sets,\n$\\mathcal{X}=\\cup_{i=1}^n \\Xi_i$. We next define the discrete measure $\\xi_Z$\nas a superposition of point measures localized in the points of $Z$,\nwith weights given by the areas of $\\Xi_i$, i.e. $\\xi_Z\n\\,=\\,\\sum_{i=1}^n\\,\\xi_i \\delta_{z_i}$, with\n$\\xi_i:=\\xi(\\Xi_i)=\\int_{\\Xi_i}d\\vol_\\mathcal{X}$. Similarly we denote by\n$W=\\{w_j\\}_{j=1}^p$, $\\Upsilon_j$, and $\\zeta_j:=\\zeta(\\Upsilon_j)$\nthe corresponding quantities for surface $\\mathcal{Y}$. 
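In a mesh-based implementation the geodesic Voronoi cells $\\Xi_i$ are
usually not computed exactly; one simple, admittedly crude, surrogate
is to distribute the area of each triangle to the sample point nearest
to its barycenter. The sketch below illustrates this idea; the names
\\texttt{sample\\_pts}, \\texttt{face\\_barycenters} and
\\texttt{face\\_areas} are hypothetical inputs (3D sample positions,
face barycenters and face areas of the mesh), and any more careful
approximation of the geodesic Voronoi partition can be substituted.
\\begin{verbatim}
import numpy as np

def voronoi_weights(sample_pts, face_barycenters, face_areas):
    # Approximate xi_i = Vol(Xi_i) by assigning each face area to the
    # sample nearest (in Euclidean distance) to the face barycenter.
    diff = face_barycenters[:, None, :] - sample_pts[None, :, :]
    nearest = np.argmin(np.einsum('fsd,fsd->fs', diff, diff), axis=1)
    weights = np.zeros(len(sample_pts))
    np.add.at(weights, nearest, face_areas)
    return weights * weights.sum() ** -1.0   # normalize the total area to 1
\\end{verbatim}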
We shall always\nassume that the surfaces $\\mathcal{X}$ and $\\mathcal{Y}$ have the same area, which,\nfor convenience, we can take to be 1. It then follows that the\ndiscrete measures $\\xi_Z$ and $\\zeta_W$ have equal total mass\n(regardless of whether $n=p$ or not). The approximation algorithm\nwill compute optimal transport for the discrete measures $\\xi_Z$ and\n$\\zeta_W$; the corresponding discrete approximation to the distance\nbetween $\\xi$ and $\\zeta$ is then given by $T^R_d(\\xi_Z,\\zeta_W)$.\n\nConvergence of the discrete approximations $T^R_d(\\xi_Z,\\zeta_W)$ to\n$T^R_d(\\xi,\\zeta)=\\mbox{\\bf{d}}^R(\\mathcal{X},\\mathcal{Y})$ as $\\varphi(Z)$, $\\varphi(W) \\rightarrow 0$\nthen follows from the results proved in\n\\cite{Lipman:TR2009:approxOptimal}. Corollary 3.3 in\n\\cite{Lipman:TR2009:approxOptimal} requires that the distance\nfunction $d^R_{\\xi,\\zeta}(\\cdot,\\cdot)$ used to define\n$T^R_d(\\xi,\\zeta)$ be uniformly continuous in its two arguments. We\ncan establish this in our present case by invoking the continuity\nproperties of $d^R_{\\xi,\\zeta}$ proved in Theorem\n\\ref{thm:continuity_of_d^R_mu,nu}, extended by the following lemma,\nproved in Appendix A.\n\\begin{lem}\\label{lem:extension_of_d}\nLet $\\set{(z_k,w_k)}_{k\\geq 1} \\subset \\D\\times\\D$ be a sequence\nthat converges, in the Euclidean norm, to some point in $(z',w') \\in\n\\bbar{\\D}\\times\\bbar{\\D} \\setminus \\D\\times\\D$, that is\n$|z_k-z'|+|w_k-w'| \\rightarrow 0$, as $k \\rightarrow \\infty$. Then, $\\lim_{k\\rightarrow\n\\infty}d^R_{\\xi,\\zeta}(z_k,w_k)$ exists and depends only on the\nlimit point $(z',w')$.\n\\end{lem}\n\nWe shall denote this continuous extension of\n$d^R_{\\mu,\\nu}(\\cdot,\\cdot)$ to $\\bbar{\\D}\\times \\bbar{\\D}$ by the\nsame symbol $d^R_{\\mu,\\nu}$.\n\nSince $\\bbar{\\D} \\times \\bbar{D}$ is compact, (this extension of)\n$d^R_{\\xi,\\zeta}(\\cdot,\\cdot)$ is uniformly continuous: for all\n$\\varepsilon >0$, there exists a $\\delta=\\delta(\\varepsilon)$ such that, for all\n$z,z' \\in \\mathcal{X}$, $w,w' \\in \\mathcal{Y}$,\n\\[\nd_\\mathcal{X}(z,z') < \\delta(\\varepsilon) \\ , d_\\mathcal{Y}(w,w') < \\delta(\\varepsilon) \\\n\\Rightarrow \\ \\abs{d^R_{\\xi,\\zeta}(z,w) - d^R_{\\xi,\\zeta}(z',w') } <\n\\varepsilon,\n\\]\nwhere $d_\\mathcal{X}(\\cdot,\\cdot)$ is the geodesic distance on $\\mathcal{X}$, and\n$d_\\mathcal{Y}(\\cdot,\\cdot)$ is the geodesic distance on $\\mathcal{Y}$.\n\nThe results in \\cite{Lipman:TR2009:approxOptimal} then imply that\n$\\xi_Z \\rightarrow \\xi$ in the \\emph{weak} sense, as $\\varphi(Z) \\rightarrow 0$,\ni.e. that for all bounded continuous functions $f:\\bbar{\\D}\\rightarrow\n\\mathbb R$, the convergence $\\int_\\D f\\ d\\xi_Z \\rightarrow \\int_\\D f\\ d\\xi$\nholds \\cite{Billingsley68}. Similarly and $\\zeta_W \\rightarrow \\zeta$ in\nthe weak sense as $\\varphi(W) \\rightarrow 0$. 
Furthermore,\n\\cite{Lipman:TR2009:approxOptimal} also proves that for\n$\\max(\\varphi(Z),\\varphi(W)) < \\frac{\\delta(\\varepsilon)}{2}$\n\\[\n\\abs{T^R_d(\\xi_Z,\\zeta_W) - T^R_d(\\xi,\\zeta)} < \\varepsilon.\n\\]\nMore generally, it is shown that\n\\begin{equation}\\label{e:optimal_trans_discrete_approx_with_mod_cont}\n \\abs{T^R_d(\\xi_Z,\\zeta_W) - T^R_d(\\xi,\\zeta)} <\n\\omega_{d^R_{\\xi,\\zeta}}\\parr{\\max(\\varphi(Z),\\varphi(W))},\n\\end{equation}\nwhere $\\omega_{d^R_{\\xi,\\zeta}}$ is the modulus of continuity of\n$d^R_{\\xi,\\zeta}$, that is\n$$\\omega_{d^R_{\\xi,\\zeta}}(t)=\\sup_{d_\\mathcal{X}(z,z') + d_\\mathcal{Y}(w,w') <\nt}\\abs{d^R_{\\xi,\\zeta}(z,w)-d^R_{\\xi,\\zeta}(z',w')}.$$\n\nWe shall see below that it will be particularly useful to choose the\ncenters in $Z=\\set{z_i}_{i=1}^n$, $W=\\set{w_j}_{j=1}^p$ such that\nthe corresponding Voronoi cells are (approximately) of equal area,\ni.e. $n=N=p$ and $\\xi_i=\\xi(\\Xi_i)\\approx\\frac{1}{N}$,\n$\\zeta_j=\\zeta(\\Upsilon_j)\\approx\\frac{1}{N}$, where we have used\nthat the total area of each surface is normalized to $1$. An\neffective way to calculate such sample sets $Z$ and $W$ is to start\nfrom an initial random seed (which will not be included in the set),\nand take the geodesic point furthest from the seed as the initial\npoint of the sample set. One then keeps repeating this procedure,\nselecting at each iteration the point that lies at the furthest\ngeodesic distance from the set of points already selected. This\nalgorithm is known as the Farthest Point Algorithm (FPS)\n\\cite{eldar97farthest}. An example of the output of this algorithm,\nusing geodesic distances on a disk-type surface, is shown in Figure\n\\ref{f:discrete_type_3}. Further discussion of practical aspects of\nVoronoi sampling of a surface can be found in\n\\cite{BBK2007non_rigid_book}.\n\n\\begin{figure}[h]\n\\centering \\setlength{\\tabcolsep}{0.4cm} \\hspace*{.5 in}\n\\begin{minipage}{3 in}\n\\includegraphics[width=0.8\\columnwidth]{figures\/discrete_sam_points.png}\\hspace*{-.5 in}\n\\end{minipage}\n\\hspace*{-.5 in}\n\\begin{minipage}{3 in}\n\\caption{Sampling of the surface of Figure \\ref{f:discrete_type_1}\nobtained by the Farthest Point Algorithm.}\n\\label{f:discrete_type_3}\n\\end{minipage}\n\\end{figure}\n\n\n\\subsection{Approximating the local distance function $d^R_{\\mu,\\nu}$.}\nWe are now ready to construct our discrete version of the optimal\nvolume transportation for surfaces\n(\\ref{e:generalized_Kantorovich_transportation}). The previous\nsubsection describes how to derive the discrete measures\n$\\mu_Z,\\nu_W$ from the approximate conformal densities\n$\\Gamma_\\mu,\\Gamma_\\nu$ and the sampling sets $Z$ and $W$. For\nsimplicity, we will, with some abuse of notation, identify the\napproximations $\\Gamma_\\mu,\\Gamma_\\nu$ with $\\mu,\\nu$. The\napproximation error made here is typically much smaller than the\nerrors made in further steps (see below) and we shall neglect it.\nThe final component is approximating $d^R_{\\mu,\\nu}(z_i,w_j)$ for\nall pairs $(z_i,w_j) \\in Z\\times W$. 
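Returning briefly to the construction of the sample sets, the farthest-point procedure
just described can be sketched in a few lines. The sketch assumes a precomputed matrix
\texttt{D} of pairwise geodesic distances between the mesh vertices (obtained, e.g., by a
fast-marching solver); this matrix and the variable names are assumptions of the
illustration rather than part of the algorithm itself.
\begin{verbatim}
# Sketch of farthest-point sampling.  The random seed is discarded; the first
# sample is the vertex farthest from it, and every further sample is the vertex
# farthest (in geodesic distance) from the set of samples chosen so far.
import numpy as np

def farthest_point_sampling(D, n_samples, seed_idx=0):
    first = int(np.argmax(D[seed_idx]))
    samples = [first]
    d_to_set = D[first].copy()           # distance of every vertex to the set
    for _ in range(n_samples - 1):
        nxt = int(np.argmax(d_to_set))
        samples.append(nxt)
        d_to_set = np.minimum(d_to_set, D[nxt])
    return samples
\end{verbatim}
With the sample sets $Z$, $W$ and their weights fixed, we can turn to the evaluation of
$d^R_{\mu,\nu}(z_i,w_j)$.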
Applying\n(\\ref{e:d_mu,nu(z,w)_def}) to the points $z_i$, $w_j$ we have:\n\\begin{equation}\\label{e:d_mu,nu(z_i,w_j)}\n d^R_{\\mu,\\nu}(z_i,w_j) = \\min_{m(z_i)=w_j}\\int_{\\Omega_{z_i, R}}\\Big|\\,\\mu(z)-\\nu(m(z))\\,\\Big |\\,d\\vol_H.\n\\end{equation}\n\n\\begin{figure}\n \n \\includegraphics[width=0.7\\columnwidth]{figures\/int_rules_100_300.png}\\\\\n \\caption{The integration centers and their corresponding Voronoi cells used\nfor calculating the integration weights for the discrete quadrature.\nLeft: $100$ centers; Right: $300$.}\\label{fig:integration_pnts}\n\\end{figure}\n\nTo obtain $d^R_{\\mu,\\nu}(z_i,w_j)$ we will thus need to compute\nintegrals over hyperbolic disks of radius $R$, which is done via a\nseparate approximation procedure, set up once and for all in a\npreprocessing step at the start of the algorithm.\n\nBy using a M\\\"{o}bius transformation $\\widetilde{m}$ such that\n$\\widetilde{m}(0)=z_0$, and the identity\n\\[\n\\int_{\\Omega_{0,R}}\\Big|\\,\\mu(\\widetilde{m}(u)) - \\nu(m \\circ\n\\widetilde{m} (u))\\,\\Big| \\,d\\vol_H(u) =\n\\int_{\\Omega_{z_0,R}}\\Big|\\,\\mu(z)) - \\nu(m (z))\\,\\Big|\n\\,d\\vol_H(z)~,\n\\]\nwe can reduce the integrals over the hyperbolic disks\n$\\Omega_{z_i,R}$ to integrals over a hyperbolic disk centered around\nzero.\n\nIn order to (approximately) compute integrals over $\\Omega_0\n=\\Omega_{0,R}=\\{z |\\ |z|\\leq r_R\\}$, we first pick a positive\ninteger $K$ and distribute centers $p_k,\\,k=1,...,K $ in $\\Omega_0$.\nWe then decompose $\\Omega_0$ into Voronoi cells $\\Delta_k$\ncorresponding to the $p_k$, obtaining $\\Omega_0 = \\cup_{k=1}^K\n\\Delta_k$; see Figure \\ref{fig:integration_pnts} (note that these\nVoronoi cells are completely independent of those used in\n\\ref{ss:discretizing_discrete_measures}.)\n\nTo approximate the integral of a continuous function $f$ over\n$\\Omega_0$ we then use\n\\[\n\\int_{\\Omega_0}\\, f(z)\\, d\\vol_H(z) \\approx \\sum_k \\,\n\\brac{\\int_{\\Delta_k}d\\vol_H(z)}\\,f(p_k) = \\sum_k \\,\\alpha_k f(p_k)\n\\]\nwhere $\\alpha_k = \\int_{\\Delta_k}d\\vol_H(z)$.\n\nWe thus have the following approximation:\n\\begin{eqnarray}\nd^R_{\\mu,\\nu}(z_i,w_j)&=&\n\\min_{m(z_i)=w_j}\\int_{\\Omega_{z_i,R}}\\Big|\\,\\mu(z) - \\nu(m\n(z))\\,\\Big|\n\\,d\\vol_H(z)\\nonumber\\\\\n&=& \\min_{m(z_i)=w_j}\\int_{\\Omega_{0,R}}\\Big|\\,\\mu(\\widetilde{m}_i\n(z)) -\n\\nu(m(\\widetilde{m}_i (z)))\\,\\Big|\\,d\\vol_H(z)\\nonumber\\\\\n&\\approx& \\min_{m(z_i)=w_j} \\sum_k \\,\\alpha_k\n\\,\\Big|\\,\\mu(\\widetilde{m}_i (p_k)) - \\nu(m(\\widetilde{m}_i\n(p_k)))\\,\\Big|~,\\label{e:discrete_approx__d_mu_nu(z_i,w_j)}\n\\end{eqnarray}\nwhere the M\\\"{o}bius transformations $\\widetilde{m}_i$, mapping 0 to\n$z_i$, are selected as soon as the $z_i$ themselves have been\npicked, and remain the same throughout the remainder of the\nalgorithm.\n\nIt can be shown that picking a set of centers $\\set{p_k}$ with\nfill-distance $h>0$ leads to an $O(h)$ approximation; in\nAppendix~\\ref{a:appendix A} we prove:\n\\begin{thm}\\label{thm:convergence_of_numerical_quadrature}\nFor continuously differentiable $\\mu,\\nu$,\n\\[\n\\left|\\,d^R_{\\mu,\\nu}(z_i,w_j)- \\min_{m(z_i)=w_j}\\sum_k \\,\\alpha_k\n\\,\\left|\\,\\mu(\\widetilde{m}_i (p_k)) - \\nu(m(\\widetilde{m}_i\n(p_k)))\\,\\right|\\, \\right| \\leq C\\,\\varphi\\left( \\set{p_k} \\right)~,\n\\]\nwhere the constant $C$ depends only on $\\mu,\\nu,R$.\n\\end{thm}\n\nLet us denote this approximation by $$\\wh{d}^R_{\\mu,\\nu}(z_i,w_j) =\n\\min_{m(z_i)=w_j}\\sum_k \\,\\alpha_k 
\\,\\left|\\,\\mu(\\widetilde{m}_i\n(p_k)) - \\nu(m(\\widetilde{m}_i (p_k)))\\,\\right|.$$ Since the above\ntheorem guarantees that the approximation error\n$\\abs{\\wh{d}^R_{\\mu,\\nu}(z_i,w_j) - d^R_{\\mu,\\nu}(z_i,w_j)}$ can be\nuniformly bounded independently of $z_i,w_j$, it can be shown that\n$$\\babs{T^R_d(\\mu_Z,\\nu_W) - T^R_{\\wh{d}}(\\mu_Z,\\nu_W)}\n\\leq C \\varphi \\parr{\\set{p_k}},$$ where again $C$ is dependent only\nupon $\\mu,\\nu,R$. Combining this with\neq.(\\ref{e:optimal_trans_discrete_approx_with_mod_cont}) we get that\n\\begin{equation}\\label{e:final_approx_error}\n \\babs{T^R_d(\\mu,\\nu) - T^R_{\\wh{d}}(\\mu_Z,\\nu_W)}\n\\leq \\omega_{d^R_{\\mu,\\nu}}\\parr{\\max \\parr{\\varphi(Z),\\varphi(W)}}\n+ C\\varphi \\parr{\\set{p_k}}.\n\\end{equation}\n\nIn practice, for calculating $\\wh{d}^R_{\\mu,\\nu}$, the minimization\nover $M_{D,z_i,w_j}$, the set of all M\\\"{o}bius transformations that map\n$z_i$ to $w_k$, is discretized as well: instead of minimizing over\nall $m_{z_i,w_j,\\sigma}$ (see subsection 3.1), we minimize over only\nthe M\\\"{o}bius transformations\n$\\left(m_{z_i,w_j,2\\pi\\ell\/L}\\right)_{\\ell=0,1,..,L-1}$. Taking this\ninto account as well, we have thus\n\\begin{eqnarray}\nd^R_{\\mu,\\nu}(z_i,w_j)\\approx \\min_{\\ell=1,\\ldots L} \\sum_k\n\\,\\alpha_k \\,\\Big|\\,\\mu(\\widetilde{m}_i (p_k)) - \\nu(m_{z_i,w_j,2\\pi\n\\ell\/L}(\\widetilde{m}_i\n(p_k)))\\,\\Big|~;\\label{e:discrete_approx__d_mu_nu(z_i,w_j)_bis}\n\\end{eqnarray}\nthe error made in approximation\n(\\ref{e:discrete_approx__d_mu_nu(z_i,w_j)_bis}) is therefore\nproportional to $L^{-1}+C\\varphi\\left( \\set{p_k} \\right)$.\n\n\nTo summarize, our approximation $T^R_{\\wh{d}}(\\mu_Z,\\nu_W)$ to the\nuniformly continuous $T^R_d(\\mu,\\nu)$ is based on two\napproximations: on the one hand, we compute the transportation cost\nbetween the discrete measures $\\mu_Z,\\nu_W$, approximating\n$\\mu,\\nu$; on the other hand, this transportation cost involves a\nlocal distance $\\wh{d}^R_{\\mu,\\nu}$ which is itself an\napproximation.\nThe transportation between the discrete measures will be computed by\nsolving a linear programming optimization, as explained in detail in\nthe next subsection.\nThe final approximation error (\\ref{e:final_approx_error}) depends\non two factors: 1) the fill distances $\\varphi(Z),\\varphi(W)$ of the\nsample sets $Z,W$, and 2) the approximation of the local distance\nfunction $d^R_{\\mu,\\nu}(z_i,w_j)$ between the sample points.\nCombining the discretization of the M\\\"{o}bius search with\n(\\ref{e:discrete_approx__d_mu_nu(z_i,w_j)_bis}), the total\napproximation error is thus proportional to\n$\\omega_{d^R_{\\Gamma_\\mu,\\Gamma_\\nu}}\\parr{\\varphi\\left( \\set{p_k}\n\\right)} + L^{-1} + \\varphi\\left( \\set{p_k} \\right)$.\n\nRecall that we are in fact using $\\Gamma_\\mu,\\Gamma_\\nu$ in the role\nof of $\\mu,\\nu$ (see above), which entails an additional\napproximation error. This error relates to the accuracy with which\ndiscrete meshes approximate smooth manifolds, as well as the method\nused to approximate uniformization. We come back to this question in\nAppendix B. 
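For concreteness, the discretized local distance of
eq.~(\ref{e:discrete_approx__d_mu_nu(z_i,w_j)_bis}) can be sketched as follows. The
sketch assumes that the interpolants $\Gamma_\mu,\Gamma_\nu$ are available as callables
on complex points of $\D$ and that the quadrature nodes \texttt{p} and weights
\texttt{alpha} have been precomputed as described above; it uses one standard
parameterization of the disk M\"{o}bius transformations taking $z_i$ to $w_j$, which
need not coincide with the parameterization $m_{z_i,w_j,\sigma}$ of subsection 3.1. All
names are illustrative.
\begin{verbatim}
# Sketch of d_hat(z_i, w_j) =
#   min_l  sum_k alpha_k |mu(mt_i(p_k)) - nu(m_l(mt_i(p_k)))| .
import numpy as np

def mob_from_zero(z, u):                    # a disk Moebius map with 0 -> z
    return (u + z) / (1.0 + np.conj(z) * u)

def mob_z_to_w(z, w, sigma, u):             # a disk Moebius map with z -> w
    a = (u - z) / (1.0 - np.conj(z) * u)    # send z to 0 ...
    a = np.exp(1j * sigma) * a              # ... rotate by sigma ...
    return (a + w) / (1.0 + np.conj(w) * a) # ... and send 0 to w

def d_hat(z_i, w_j, Gamma_mu, Gamma_nu, p, alpha, L=32):
    q = mob_from_zero(z_i, p)               # quadrature nodes pushed to Omega_{z_i,R}
    mu_vals = Gamma_mu(q)
    best = np.inf
    for l in range(L):
        nu_vals = Gamma_nu(mob_z_to_w(z_i, w_j, 2 * np.pi * l / L, q))
        best = min(best, float(np.sum(alpha * np.abs(mu_vals - nu_vals))))
    return best
\end{verbatim}
We now return to the error incurred by replacing $\mu,\nu$ with $\Gamma_\mu,\Gamma_\nu$.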
As far as we are aware, a full convergence result for\n(any) discrete uniformization is still unknown; in any case, we\nexpect this error to be negligible (and approximately of the order\nof the largest edge in the full mesh) compared to the others.\n\n\n\\subsection{Optimization via linear programming}\n\nThe discrete formulation of\neq.~(\\ref{e:generalized_Kantorovich_transportation}) is commonly\nformulated as follows:\n\\begin{equation}\\label{e:discrete_kantorovich}\n \\sum_{i,j}d_{ij}\\pi_{ij} \\rightarrow \\min\n\\end{equation}\n\\begin{equation}\\label{e:discrete_kantorovich_CONSTRAINTS}\n\\begin{array}{l}\n \\left \\{\n \\begin{array}{l}\n \\sum_i \\pi_{ij} = \\nu_j \\\\\n \\sum_j \\pi_{ij} = \\mu_i \\\\\n \\pi_{ij} \\geq 0\n \\end{array}\n \\right . ,\n \n\\end{array}\n\\end{equation}\nwhere $\\mu_i=\\mu(\\Xi_i)$ and $\\nu_j=\\nu(\\Upsilon_j)$, and $d_{ij} =\nd^R_{\\mu,\\nu}(z_i,w_j)$.\n\nIn practice, surfaces are often only partially isometric (with a\nlarge overlapping part), or the sampled points may not have a good\none-to-one and onto correspondence (i.e. there are points both in\n$Z$ and in $W$ that do not correspond well to any point in the other\nset). In these cases it is desirable to allow the algorithm to\nconsider transportation plans $\\pi$ with marginals \\emph{smaller or\nequal} to $\\mu$ and $\\nu$. Intuitively this means that we allow that\nonly some fraction of the mass is transported and that the remainder\ncan be ``thrown away''. This leads to the following formulation:\n\\begin{equation}\\label{e:discrete_PARTIAL_kantorovich}\n\\sum_{i,j}d_{ij}\\pi_{ij} \\rightarrow \\min\n\\end{equation}\n\\begin{equation}\\label{e:discrete_PARTIAL_kantorovich_CONSTRAINTS}\n\\begin{array}{l}\n \\left \\{ \\begin{array}{c}\n \\sum_i \\pi_{ij} \\leq \\nu_j \\\\\n \\sum_j \\pi_{ij} \\leq \\mu_i \\\\\n \\sum_{i,j} \\pi_{ij} = Q \\\\\n \\pi_{ij} \\geq 0\n \\end{array}\n \\right .\n\\end{array}\n\\end{equation}\nwhere $0 < Q \\leq 1$ is a parameter set by the user that indicates\nhow much mass \\emph{must} be transported, in total.\n\nThe corresponding transportation distance is defined by\n\\begin{equation}\\label{e:discrete_trans_dist}\n T_d(\\nu,\\nu) = \\sum_{ij}d_{ij}\\pi_{ij},\n\\end{equation}\nwhere $\\pi_{ij}$ are the entries in the matrix $\\pi$ for the optimal\n(discrete) transportation plan.\n\nSince these equations and constraints are all linear, we have the\nfollowing theorem:\n\\begin{thm}\nThe equations {\\rm\n(\\ref{e:discrete_kantorovich})-(\\ref{e:discrete_kantorovich_CONSTRAINTS})}\nand {\\rm (\\ref{e:discrete_PARTIAL_kantorovich})-\n(\\ref{e:discrete_PARTIAL_kantorovich_CONSTRAINTS})} admit a global\nminimizer that can be computed in polynomial time, using standard\nlinear-programming techniques.\n\\end{thm}\n\nWhen correspondences between surfaces are sought, i.e. when one\nimagines one surface as being transformed into the other, one is\ninterested in restricting $\\pi$ to the class of permutation matrices\ninstead of allowing all bistochastic matrices. (This means that each\nentry $\\pi_{ij}$ is either 0 or 1.) In this case the number of\ncenters $z_i$ must equal that of $w_j$, i.e. $n=N=p$, and it is best\nto pick the centers so that $\\mu_i=\\frac{1}{N}=\\nu_j$, for all $i,\\\nj$. 
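As an aside, for the values of $N$ used in practice the partial-transport program
(\ref{e:discrete_PARTIAL_kantorovich})--(\ref{e:discrete_PARTIAL_kantorovich_CONSTRAINTS})
is small enough to be handed to a generic LP solver. The sketch below illustrates the
formulation with SciPy's \texttt{linprog}; it is not the solver used in our experiments,
and the names are ours.
\begin{verbatim}
# Sketch: solve  min sum_ij d_ij pi_ij  subject to
#   sum_j pi_ij <= mu_i,  sum_i pi_ij <= nu_j,  sum_ij pi_ij = Q,  pi >= 0.
import numpy as np
from scipy.optimize import linprog

def partial_transport(d, mu, nu, Q):
    n, p = d.shape
    A_row = np.kron(np.eye(n), np.ones((1, p)))   # sum_j pi_ij <= mu_i
    A_col = np.kron(np.ones((1, n)), np.eye(p))   # sum_i pi_ij <= nu_j
    res = linprog(d.ravel(),
                  A_ub=np.vstack([A_row, A_col]),
                  b_ub=np.concatenate([mu, nu]),
                  A_eq=np.ones((1, n * p)), b_eq=[Q],
                  bounds=(0, None), method="highs")
    return res.x.reshape(n, p), float(res.fun)    # optimal plan and cost
\end{verbatim}
We now return to the equal-weight choice $\mu_i=\frac{1}{N}=\nu_j$ made above.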
It turns out that this is sufficient to {\\em guarantee} (without\nrestricting the choice of $\\pi$ in any way) that the minimizing\n$\\pi$ is a permutation:\n\n\\begin{thm}\nIf $n=N=p$ and $\\mu_i=\\frac{1}{N}=\\nu_j$, then\n\\begin{enumerate}\n\\item\nThere exists a global minimizer of\n{\\rm(\\ref{e:discrete_kantorovich})} that is a permutation matrix.\n\\item\nIf furthermore $Q = \\frac{M}{N}$, where $M< N$ is an integer, then\nthere exists a global minimizer of {\\rm\n(\\ref{e:discrete_PARTIAL_kantorovich})} $\\pi$ such that $\\pi_{ij}\n\\in \\{0,1\\}$ for each $i,\\,j$.\n\\end{enumerate}\n\\label{t:relaxation}\n\\end{thm}\n\\begin{rem}\nIn the second case, where $\\pi_{ij} \\in \\{0,1\\}$ for each $i,\\,j$\nand $\\sum_{i,j=1}^N \\pi_{ij}=M$, $\\pi$ can still be viewed as a\npermutation of $M$ objects, ``filled up with zeros''. That is, if\nthe zero rows and columns of $\\pi$ (which must exist, by the pigeon\nhole principle) are removed, then the remaining $M \\times M$ matrix\nis a permutation.\n\\end{rem}\n\\begin{proof}\nWe first note that in both cases, we can simply renormalize each\n$\\mu_i$ and $\\nu_j$ by $N$, leading to the rescaled systems\n\\begin{equation}\n \\left \\{ \\begin{array}{c}\n \\sum_i \\pi_{ij} = 1 \\\\\n \\sum_j \\pi_{ij} = 1 \\\\\n \\pi_{ij} \\geq 0\n \\end{array}\n \\right . \\mbox{\\hspace{1 in}}\n \\left \\{\\begin{array}{c}\n \\sum_i \\pi_{ij} \\leq 1 \\\\\n \\sum_j \\pi_{ij} \\leq 1 \\\\\n \\sum_{i,j} \\pi_{ij} = M \\\\\n \\pi_{ij} \\geq 0\n\\end{array}\n \\right .\n\\label{e:discrete_kantorovich_CONSTRAINTS_disk}\n\\end{equation}\nTo prove the first part, we note that the left system in\n(\\ref{e:discrete_kantorovich_CONSTRAINTS_disk}) defines a convex\npolytope in the vector space of matrices that is exactly the\nBirkhoff polytope of bistochastic matrices. By the Birkhoff-Von\nNeumann Theorem \\cite{Lovasz86} every bistochastic matrix is a\nconvex combination of the permutation matrices, i.e. each $\\pi$\nsatisfying the left system in\n(\\ref{e:discrete_kantorovich_CONSTRAINTS_disk}) must be of the form\n$\\sum_k c_k\\tau^k$, where the $\\tau^k$ are the $N!$ permutation\nmatrices for $N$ objects, and $\\sum_k c_k = 1$, with $c_k \\geq 0$.\nThe minimizing $\\pi$ in this polytope for the linear functional\n(\\ref{e:discrete_kantorovich}) must thus be of this form as well. It\nfollows that at least one $\\tau^k$ must also minimize\n(\\ref{e:discrete_kantorovich}), since otherwise we would obtain the\ncontradiction\n\\begin{equation}\\label{e:linear_extrermas_at_vertices}\n \\sum_{ij}d_{ij}\\pi_{ij} = \\sum_k c_k \\Big (\\sum_{ij}d_{ij} \\tau^k_{ij} \\Big ) \\geq \\min_k \\Big \\{ \\sum_{ij}d_{ij} \\tau^k_{ij} \\Big \\} > \\sum_{i,j}\\,d_{ij}\\,\\pi_{ij} ~.\n\\end{equation}\n\nThe second part can be proved along similar steps: the right system\nin (\\ref{e:discrete_kantorovich_CONSTRAINTS_disk}) defines a convex\npolytope in the vector space of matrices; it follows that every\nmatrix that satisfies the system of constraints is a convex\ncombination of the extremal points of this polytope. 
It suffices to\nprove that these extreme points are exactly those matrices that\nsatisfy the constraints and have entries that are either 0 or 1\n(this is the analog of the Birkhoff-von Neumann theorem for this\ncase; we prove this generalization in a lemma in Appendix C); the\nsame argument as above then shows that there must be at least one\nextremal point where the linear functional\n(\\ref{e:discrete_kantorovich}) attains its minimum.\n\\end{proof}\n\nThis means that, when we seek correspondences between two surfaces,\nthere is no need to {\\em impose} the (very nonlinear) constraint on\n$\\pi$ that it be a permutation matrix; one can simply use a linear\nprogram to solve either , with Theorem \\ref{t:relaxation}\nguaranteeing that the minimizer for the ``relaxed'' problem {\\rm\n(\\ref{e:discrete_kantorovich})-(\\ref{e:discrete_kantorovich_CONSTRAINTS})}\nor {\\rm (\\ref{e:discrete_PARTIAL_kantorovich})-\n(\\ref{e:discrete_PARTIAL_kantorovich_CONSTRAINTS})} is of the\ndesired type if $n=N=p$ and $\\mu_i=\\frac{1}{N}=\\nu_j$.\n\n\\subsection{Consistency}\nIn our schemes to compute the surface transportation distance, for\nexample by solving (\\ref{e:discrete_PARTIAL_kantorovich}), we have\nso far not included any constraints on the regularity of the\nresulting optimal transportation plan $\\pi^*$. When computing the\ndistance between a surface and a reasonable deformation of the same\nsurface, one does indeed find, in practice, that the minimizing\n$\\pi^*$ is fairly smooth, because neighboring points have similar\nneighborhoods. There is no guarantee, however, that this has to\nhappen. Moreover, we will be interested in comparing surfaces that\nare far from (almost) isometric, given by noisy datasets. Under such\ncircumstances, the minimizing $\\pi^*$ may well ``jump around''. In\nthis subsection we propose a regularization procedure to avoid such\nbehavior.\n\nComputing how two surfaces best correspond makes use of the values\nof the ``distances in similarity'' $d^R_{\\mu,\\nu}(z_i,w_j)$ between\npairs of points that ``start'' on one surface and ``end'' on the\nother; computing these values relies on finding a minimizing M\\\"{o}bius \ntransformation for the functional (\\ref{e:d_mu,nu(z,w)_def}). We can\nkeep track of these minimizing M\\\"{o}bius transformations $m_{ij}$ for the\npairs of points $(z_i,w_j)$ proposed for optimal correspondence by\nthe optimal transport algorithm described above. Correspondence\npairs $(i,j)$ that truly participate in some close-to-isometry map\nwill typically have M\\\"{o}bius transformations $m_{ij}$ that are very\nsimilar. This suggests a method of filtering out possibly mismatched\npairs, by retaining only the set of correspondences $(i,j)$ that\ncluster together within the M\\\"{o}bius group.\n\nThere exist many ways to find clusters. In our applications, we\ngauge how far each M\\\"{o}bius transformation $m_{ij}$ is from the others\nby computing a type of $\\ell_1$ variance:\n\\begin{equation}\\label{e:variance_function}\n E_V(i,j) = \\sum_{(k,\\ell)}\\norm{m_{ij} - m_{k\\ell}},\n\\end{equation}\nwhere the norm is the Frobenius norm (also called the\nHilbert-Schmidt norm) of the $2\\times 2$ complex matrices\nrepresenting the M\\\"{o}bius transformations, after normalizing them\nto have determinant one. 
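Once the minimizing M\"{o}bius transformations have been stored, computing $E_V$ is
straightforward; a minimal sketch is given below, where \texttt{mobius} is an assumed
array holding the $2\times 2$ complex matrix of $m_{ij}$ for each corresponded pair, in
some fixed order (the sign ambiguity of the determinant-one normalization is ignored in
this illustration).
\begin{verbatim}
# Sketch of E_V(i,j) = sum_{(k,l)} ||m_ij - m_kl||_F over det-1 representatives.
import numpy as np

def consistency_scores(mobius):                    # mobius: (M, 2, 2) complex
    det = np.linalg.det(mobius)
    normed = mobius / np.sqrt(det)[:, None, None]  # normalize to determinant 1
    diff = normed[:, None] - normed[None, :]       # (M, M, 2, 2)
    dists = np.linalg.norm(diff, axis=(2, 3))      # pairwise Frobenius norms
    return dists.sum(axis=1)                       # E_V for each pair
\end{verbatim}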
We then use $E_V(i,j)$ as a consistency\nmeasure of the corresponding pair $(i,j)$.\n\n\\section{Examples and comments}\n\\label{s:examples}\n\\begin{figure}[h]\n\\centering\n\\begin{tabular}{rcccc}\n\\includegraphics[width=0.2\\columnwidth]{figures\/nei_1.png} &\n\\includegraphics[width=0.2\\columnwidth]{figures\/nei_2.png} &\n\\includegraphics[width=0.2\\columnwidth]{figures\/nei_1_conf.png} &\n\\includegraphics[width=0.2\\columnwidth]{figures\/nei_2_conf.png} &\n\\includegraphics[width=0.2\\columnwidth]{figures\/nei_polar.png} \\\\\n(a) &&&& Good pair (a)\\\\\n\\includegraphics[width=0.2\\columnwidth]{figures\/nei2_1.png} &\n\\includegraphics[width=0.2\\columnwidth]{figures\/nei2_2.png} &\n\\includegraphics[width=0.2\\columnwidth]{figures\/nei2_1_conf.png} &\n\\includegraphics[width=0.2\\columnwidth]{figures\/nei2_2_conf.png} &\n\\includegraphics[width=0.2\\columnwidth]{figures\/nei2_polar.png} \\\\\n(b) & & & & Erroneous pair (b)\n\\end{tabular}\n\\caption{Calculation of the local distance\n$d^R_{\\mu,\\nu}(\\cdot,\\cdot)$ between pairs of points on two\ndifferent surfaces (each row shows a different pair of points; the\ntwo surfaces are the same in the top and bottom rows). The first row\nshows a ``good'' pair of points together with the alignment of the\nconformal densities $\\mu,m^*\\nu$ based on the best M\\\"{o}bius\ntransformation $m$ minimizing $\\int_\\D \\norm{\\mu-m^*\\nu}d\\vol_\\mathcal{M}$.\nThe plot of this latter integral as a function of $m$ (parameterized\nby $\\sigma \\in [0,2\\pi)$, see (\\ref{e:disk_mobius})) is shown in the\nright-most column. The second row shows a ``bad'' correspondence\nwhich indeed leads to a higher local distance\n$d^R_{\\mu,\\nu}$.}\\label{fig:good_bad_pair_correspondence}\n\n\\end{figure}\n\nIn this section we present a few experimental results using our new\nsurface comparison operator. These concern an application to\nbiology; in a case study of the use of our approach to the\ncharacterization of mammals by the surfaces of their molars, we\ncompare high resolution scans of the masticating surfaces of molars\nof several lemurs, which are small primates living in Madagascar.\nTraditionally, biologists specializing in this area carefully\ndetermine landmarks on the tooth surfaces, and measure\ncharacteristic distances and angles involving these landmarks. A\nfirst stage of comparing different tooth surfaces is to identify\ncorrespondences between landmarks. Figure\n\\ref{fig:good_bad_pair_correspondence} illustrates how\n$d^R_{\\mu,\\nu}(z,w)$ can be used to find corresponding pairs of\npoints on two surfaces by showing both a ``good'' and a ``bad''\ncorresponding pair. The left two columns of the figure show the pair\nof points in each case; the two middle columns show the best fit\nafter applying the minimizing M\\\"{o}bius on the corresponding disk\nrepresentations; the rightmost column plots $ \\int_{\\Omega_{z_0,R}}\n\\,|\\,\\mu(z) - (m_{z_0,w_0,\\sigma}^*\\nu)(z)\\,|\\, d\\vol_H(z)$, the\nvalue of the ``error'', as a function of parameter $\\sigma$,\nparametrizing the M\\\"{o}bius transformations that map a give point $z_0$\nto another given point $w_0$ (see Lemma\n\\ref{lem:a_and_tet_formula_in_mobius_interpolation}). The ``best''\ncorresponding point $w_0$ for a given $z_0$ is the one that produces\nthe lowest minimal value for the error, i.e. 
the lowest\n$d^R_{\\mu,\\nu}(z_0,w_0)$.\n\nFigure \\ref{fig:120_corrs} show the top 120 most consistent\ncorresponding pairs (in groups of 20) for two molars belonging to\nlemurs of different species. Corresponding pairs are indicated by\nhighlighted points of the same color. These correspondences have\nsurprised the biologists from whom we obtained the data sets; their\nexperimental measuring work, which incorporates finely balanced\njudgment calls, had defied earlier automatizing attempts.\n\nOnce the differences and similarities between molars from different\nanimals have been quantified, they can be used (as part of an\napproach) to classify the different individuals. Figure\n\\ref{fig:distance_graph_embedded} illustrates a preliminary result\nfrom \\cite{Daubechies10} that illustrates the possibility of such\nclassifications based on the distance operator between surfaces\nintroduced in this paper. The figure illustrates the pairwise\ndistance matrix for eight molars, coming from individuals in four\ndifferent species (indicated by color). The clustering was based on\nonly the distances between the molar surfaces; it clearly agrees\nwith the clustering by species, as communicated to us by the\nbiologists from whom we obtained the data sets.\n\nOne final comment regarding the computational complexity of our\nmethod. There are two main parts: the preparation of the distance\nmatrix $d_{ij}$ and the linear programming optimization. For the\nlinear programming part we used a Matlab interior point\nimplementation with $N^2$ unknowns, where $N$ is the number of\npoints spread on the surfaces. In our experiments, the optimization\ntypically terminated after $15-20$ iterations for $N=150-200$\npoints, which took about 2-3 seconds. The computation of the\nsimilarity distance $d_{ij}$ took longer, and was the bottleneck in\nour experiments. If we spread $N$ points on each surface, and use\nthem all (which was usually not necessary) to interpolate the\nconformal factors $\\Gamma_\\mu, \\Gamma_\\nu$, if we use $P$ points in\nthe integration rule, and take $L$ points in the M\\\"{o}bius\ndiscretization (see Section \\ref{s:the_discrete_case_implementation}\nfor details) then each approximation of $d^R_{\\mu,\\nu}(z_i,w_j)$ by\n(\\ref{e:discrete_approx__d_mu_nu(z_i,w_j)_bis}) requires $O(L \\cdot\nP \\cdot N)$ calculations, as each evaluation of\n$\\Gamma_\\mu,\\Gamma_\\nu$ takes $O(N)$ and we need $L\\cdot P$ of\nthose. Since we have $O(N^2)$ distances to compute, the computation\ncomplexity for calculating the similarity distance matrix $d_{ij}$\nis $O(L\\cdot P \\cdot N^3)$. In practice this step was the most time\nconsuming and took around two hours for $N=300$. 
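Schematically, the assembly of $d_{ij}$ is a double loop over the sample sets around the
local-distance routine sketched earlier; the snippet below (reusing the same hypothetical
names) is meant only to make the operation count above concrete.
\begin{verbatim}
# Sketch: each call to d_hat costs O(L * P * N) when every evaluation of the
# interpolants costs O(N); the double loop then gives the O(L * P * N^3) total.
import numpy as np

def distance_matrix(Z, W, Gamma_mu, Gamma_nu, p, alpha, L=32):
    D = np.zeros((len(Z), len(W)))
    for i, z in enumerate(Z):
        for j, w in enumerate(W):
            D[i, j] = d_hat(z, w, Gamma_mu, Gamma_nu, p, alpha, L=L)
    return D
\end{verbatim}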
However, we have\nnot used any code optimization and we believe these times can be\nreduced significantly.\n\n\n\n\n\\begin{figure}[h]\n\\centering\n\\begin{tabular}{cccc}\n\\includegraphics[width=0.3\\columnwidth]{figures\/i16_k06_1.png} &\n\\includegraphics[width=0.3\\columnwidth]{figures\/i16_k06_3.png} &\n\\includegraphics[width=0.3\\columnwidth]{figures\/i16_k06_5.png} \\\\\n\\includegraphics[width=0.3\\columnwidth]{figures\/i16_k06_2.png} &\n\\includegraphics[width=0.3\\columnwidth]{figures\/i16_k06_4.png} &\n\\includegraphics[width=0.3\\columnwidth]{figures\/i16_k06_6.png} \\\\\n\\hline\n\\includegraphics[width=0.3\\columnwidth]{figures\/i16_k06_7.png} &\n\\includegraphics[width=0.3\\columnwidth]{figures\/i16_k06_9.png} &\n\\includegraphics[width=0.3\\columnwidth]{figures\/i16_k06_11.png} \\\\\n\\includegraphics[width=0.3\\columnwidth]{figures\/i16_k06_8.png} &\n\\includegraphics[width=0.3\\columnwidth]{figures\/i16_k06_10.png} &\n\\includegraphics[width=0.3\\columnwidth]{figures\/i16_k06_12.png} \\\\\n\\end{tabular}\n\\caption{The top 120 most consistent corresponding pairs between two\nmolar teeth models.} \\label{fig:120_corrs}\n\\end{figure}\n\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=0.9\\columnwidth]{figures\/embeded_graph3.png}\n\\caption{Embedding of the distance graph of eight teeth models using\nmulti-dimensional scaling. Different colors represent different\nlemur species. The graph suggests that the geometry of the teeth\nmight suffice to classify species.}\n\\label{fig:distance_graph_embedded}\n\\end{figure}\n\n\n\n\\section{Acknowledgments}\nThe authors would like to thank C\\'{e}dric Villani and Thomas\nFunkhouser for valuable discussions, and Jesus Puente for helping\nwith the implementation. We are grateful to Jukka Jernvall, Stephen\nKing, and Doug Boyer for providing us with the tooth data sets, and\nfor many interesting comments. ID gratefully acknowledges (partial)\nsupport for this work by NSF grant DMS-0914892, and by an AFOSR\nComplex Networks grant; YL thanks the Rothschild foundation for\npostdoctoral fellowship support.\n\n\\bibliographystyle{amsplain}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{Introduction}\\label{sec:intro}\n\nThe Transformer architecture \\cite{vaswani2017attention} has been widely successful in a wide range of natural language processing tasks, including machine translation \\cite{edunov2018understanding}, language modeling \\cite{roy2020efficient}, question-answering \\cite{karpukhin2020dense}, and many more. Transformers pre-trained on large amounts of text with a language modeling (LM) objective, have become the standard in NLP, exhibiting surprising amounts of linguistic and world knowledge \\cite{peters2018elmo, devlin2018bert, petroni2019language, hewitt2019structural,Roberts2020t5kb}.\n\nThe contextualizing component of the Transformer is the attention layer where all positions in an input sequence of length $L$ aggregate information from the entire sequence in parallel. At its core, given $L$ query, key and value vectors, the \\textit{dot-product attention} function outputs\\footnote{Usually, the term is $\\mathrm{softmax}(QK^\\top \/ \\sqrt{d})V$ but $\\sqrt{d}$ can be dropped via scaling of queries and keys.} $\\mathrm{softmax}(QK^\\top)V$ where the \\softmax{} function is applied row-wise on the matrix $QK^\\top \\in \\mathbb{R}^{L \\times L}$,\nconsisting of similarity scores of the query-key pairs. 
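For reference, a minimal single-head sketch of this computation in plain \texttt{numpy}
is given below; following the footnote above, the $1/\sqrt{d}$ scaling is assumed to be
folded into the queries.
\begin{verbatim}
# Sketch of full dot-product attention, H = softmax(Q K^T) V.
# Both the score matrix and the softmax are L x L.
import numpy as np

def full_attention(Q, K, V):                  # Q, K, V: (L, d)
    S = Q @ K.T                               # (L, L) similarity scores
    A = np.exp(S - S.max(axis=1, keepdims=True))
    A = A / A.sum(axis=1, keepdims=True)      # row-wise softmax
    return A @ V                              # (L, d) attention output
\end{verbatim}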
Unfortunately, computing $\\Omega(L\\cdot L)$ similarity scores is prohibitive for long sequences. \n\nTo alleviate this, past work proposed to compute an approximation\nof $\\mathrm{softmax}(QK^\\top)$. One major line of research focused on \\textit{sparse attention} variants, where only a few similarity scores are computed per position, and the rest are ignored. Methods differ by which query-key pairs are selected \\cite{child2019generating, ye2019bp, qiu2019blockwise, roy2020efficient, kitaev2020reformer, beltagy2020longformer,gupta2020gmat}. \nA second line of research explored \\textit{dense} variants \\cite{katharopoulos2020transformers,Wang2020LinformerSW,tay2020sparse} (cf.\\ \\cite{tay2020efficient} for a survey). E.g., instead of computing the attention scores exactly for only a few query-key pairs, \\cite{Choromanski2020RethinkingAW} compute an approximation of scores for all pairs.\n\nIn this work, we point to a lacuna in current research on efficient Transformers. While recent work focused on approximating the attention scores $\\mathrm{softmax}(QK^\\top)$, the true target of approximation should be the output of the attention sub-layer, namely $H = \\mathrm{softmax}(QK^\\top)V$, which also includes the value vectors, $V$. We show that ignoring value vectors leads to unwarranted consequences both theoretically and empirically.\n\nTo demonstrate the importance of value-aware approximation, we analyze \\emph{optimal sparse attention}, that is, the case where, in hindsight, the model computes dot product similarity only with the most similar key vectors, while still ignoring the value vectors.\nWe show that in the popular masked language modeling (MLM) setup, optimal sparse attention dramatically \\emph{under-performs} compared to an optimal approximation of the true output of the attention sub-layer, $H$, leading to an error increase of $8$-$20$ points. Next, by theoretically focusing on the case where queries compute similarity to the \\emph{single} most similar key vector, we show that approximating $\\mathrm{softmax}(QK^\\top)$ is equivalent to approximating $H$ when the value vectors $V$ satisfy strong orthogonality and norm constraints. Conversely, when they do not, ignoring $V$ can lead unbounded approximation error. \n\nSecond, we discuss the kernel-based view of attention, where efficiency is gained by replacing the exponential kernel (corresponding to $\\mathrm{softmax}$) with other kernel functions \\cite{katharopoulos2020transformers}. We theoretically show that while in the exponential kernel case (corresponding to $\\mathrm{softmax}$), the effect of the norm of the value vectors is potentially small, switching to other kernels can dramatically increase the importance of the value vectors. We empirically test this by comparing optimal sparse attention\ngiven different kernel functions, and see that indeed approximation quality decreases when replacing the exponential kernel, \n\nTo conclude, we theoretically and empirically show that approximating the attention score matrix alone is insufficient, and propose that the research community should instead approximate the true output of the sub-attention layer, which importantly includes value vectors. Our code and trained models are available at \\url{https:\/\/github.com\/ag1988\/value_aware_attn}.\n\n\\comment{\n\\jb{if ankit agrees, delete everything from here.}\n\n\nIn this work, working in a LM set-up, we do a comparative study of various approximation methods and new baselines. 
We train LMs using the original attention function and then evaluate the trained model after replacing the original attention function with a given approximation. This methodology saves us from training a large number of models from scratch and allows us to include new oracle baselines. An overview of our contributions is as follows:\n\\begin{itemize}[leftmargin=*,topsep=0pt,itemsep=0pt,parsep=0pt]\n \\item Sparse methods aim towards restricting the attention of a given query only to its most similar keys irrespective of the associated value vectors. We compare the approximation quality of sparse methods such as LSH attention \\cite{kitaev2020reformer}, sliding-window attention \\cite{beltagy2020longformer} and dense methods such as ORF attention \\cite{Choromanski2020RethinkingAW} with that of an oracle baseline \\textit{top-keys-r} where each query attends only only to its $r$ most similar keys. We find that the current methods do not perform on par with this oracle.\n \\item Working with a \\textit{kernel} view of attention \\cite{tsai2019transformer}, we experiment with various similarity metrics besides \\softmax{} and show that success of the \\textit{top-keys-r} heuristic is tied to the similarity metric (kernel) used. E.g. we find that it does not perform well in case of the polynomial kernel.\n \\item Most importantly, we point out the above methods which do not utilize the value vectors $V$ can be sub-optimal as the final output of the attention layer also depends on the value vectors $V$. We include a stronger oracle baseline \\textit{optimal-r} where, for each query, the true attention output is approximated by the optimal convex combination of at most $r$ value vectors. For instance, \\textit{optimal-$1$} approximates the true output $o = \\sum_i \\alpha_i\\cdot v_i$ corresponding to a query by the value vector closest to $o$. On the other hand, the \\textit{top-keys-$1$} oracle instead outputs the value vector corresponding to the most similar key (i.e. $v_i$ with highest $\\alpha_i$) which might not be optimal. We show that simple facts from convex geometry guarantee that the \\textit{optimal-r} oracle gives zero approximation error for $r\\geq d$ where $d$ is the dimension of the vectors. I.e. $o$ can always be expressed as a convex combination of some $d \\ll L$ value vectors. \\ag{emperical and theoretical evidence that people should work on achieving this and not that}\n\\end{itemize}\n\n\n\\jb{Right now there is an important part missing, which is what is our contribution: 'In this work, we...' with an explanation of what is it that you do: show that values can be important, have some theory on it, and some empirical experiments, and what are the main findings}\n}\n\n\\section{Background}\\label{sec:method}\n\n\n\nWe review the kernel-based view of attention \\cite{tsai2019transformer}, which will be instructive in \\S\\ref{sec:optimal}.\n\n\\paragraph{Generalized Attention} \nLet $\\kappa(x,y) = \\inner{\\Phi(x)}{\\Phi(y)} \\geq 0$ be a kernel function with feature map $\\Phi:\\mathbb{R}^d \\mapsto \\mathcal{H}$ for some implicit reproducing kernel Hilbert space (RKHS) $\\mathcal{H}$. 
Given a query vector $q$, keys $k_1,\\ldots,k_L$, values $v_1,\\ldots,v_L$, all in $\\mathbb{R}^d$:\n\\begin{equation}\\label{eqn:attention}\n\\mathrm{att}_\\kappa(q,k_1,\\ldots,v_1,\\ldots) = \\frac{\\sum_{i=1}^{L} \\kappa(q,k_i)v_i}{\\sum_{i=1}^{L} \\kappa(q,k_i)},\n\\end{equation}\nwhere the normalization induces a probability distribution $\\balpha$ over the value vectors with $\\alpha_i = \\kappa(q,k_i) \/ \\sum_i \\kappa(q,k_i)$. The most popular use case is the exponential kernel $\\kappa(x,y) = \\exp(\\inner{x}{y})$, referred to as dot-product attention in Transformers. \nSome other examples include the degree-$2$ \\textit{polynomial} kernel $\\kappa(x,y) = \\inner{x}{y}^2$ and the recently proposed \\textit{elu} kernel $\\inner{\\Phi(x)}{\\Phi(y)}$ with $\\Phi(\\cdot) = 1 + \\mathrm{ELU}(\\cdot)$ \\cite{katharopoulos2020transformers}.\n\nGiven $L \\gg d$ queries, the attention function (Eq. \\ref{eqn:attention}) requires computing $L\\cdot L$ similarity scores for the query-key pairs, which is prohibitive for long sequences. \n\\textit{Sparse attention} variants relax this requirement and compute only a few similarity scores, ignoring the rest:\n\\begin{equation}\\label{eqn:sparse_attention}\n\\mathrm{att}_{\\kappa, S} = \\frac{\\sum_{i\\in S} \\kappa(q,k_i)v_i}{\\sum_{i\\in S} \\kappa(q,k_i)},\n\\end{equation}\nfor some $S \\subseteq \\{1,\\ldots,L\\}, |S|\\ll L$.\nMethods differ in how $S$ is determined given the queries and keys, and include use of locality bias \\cite{beltagy2020longformer}, global memory \\cite{gupta2020gmat}, and LSH hashing \\cite{kitaev2020reformer}, among others.\nConversely, instead of exactly computing the attention scores only on a few query-key pairs, \\textit{dense} variants compute an approximation of the true kernel values for all pairs. Such methods output $\\sum_i \\beta_i\\cdot v_i$ for some approximation $\\bbeta$ of the the true attention distribution $\\balpha$ \\cite{Choromanski2020RethinkingAW,peng2021random}.\n\n\n\\comment{\n\\jb{Again, I think prob. it's good to have the intro at a slightly higher level that is clear to everyone and then have a section about the kernel-based view where people will learn about this}\n\nThe majority \\jb{why majority and not all?} of the above variants (LSH, Routing, ORF, etc \\jb{you did not define these things, so either explain or delete}) do not utilize the value vectors while producing the approximation. Given a query, methods such as LSH, Routing attention, etc \\jb{these methods were not mentioned so are unclear} aim to restrict $S$ only to the most similar keys irrespective of the associated value vectors. \\jb{you should say something that 'what we really care about is not $QK^\\top$, but the output of the layer, which is also affected by $V$}.\nGiven this, it is not immediately clear if this is a reasonable strategy or whether value-aware approximations can be significantly more accurate \\jb{what do you mean by value-aware? be specific that you mean to approximate with $V$}.\n}\n\n\n\n\n\n\n\n\n\n\\section{Optimal Sparse Attention}\\label{sec:optimal}\n\n\nPrior methods for approximating attention, ignored the contribution of the values vectors $V$. As the true output of the attention sub-layer also depends on $V$, a natural question is whether it is possible to design better approximation methods by incorporating $V$, and if so, how much improvement is even possible? 
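As a concrete reference point for this analysis, the generalized attention of
Eq.~\ref{eqn:attention} and its sparse restriction of Eq.~\ref{eqn:sparse_attention} can
be sketched as follows for the three kernels discussed above (a plain-\texttt{numpy}
illustration for a single query; the function names are ours).
\begin{verbatim}
import numpy as np

def phi_elu(x):                              # feature map 1 + ELU(x)
    return np.where(x > 0, x + 1.0, np.exp(x))

def kernel(q, k, kind="exp"):
    if kind == "exp":                        # exponential (softmax) kernel
        return np.exp(q @ k)
    if kind == "poly2":                      # degree-2 polynomial kernel
        return (q @ k) ** 2
    if kind == "elu":                        # elu (linear-attention) kernel
        return phi_elu(q) @ phi_elu(k)
    raise ValueError(kind)

def sparse_attention(q, K, V, S, kind="exp"):
    w = np.array([kernel(q, K[i], kind) for i in S])
    w = w / w.sum()                          # normalized attention weights
    return w @ V[S]                          # convex combination of values in S
\end{verbatim}
With this notation in place, we return to the question of how much can be gained by
choosing the attended set $S$ in a value-aware manner.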
\n\nTo answer this, we focus on sparse attention, and analyze the difference between an oracle sparse approximation that considers the value vectors, and an oracle approximation that does not. That is, we look at the difference between the two approximations from the perspective of \\emph{expressivity}, ignoring any memory and computational constraints. We denote an optimal value-aware approximation that uses $r$ key vectors per query by \\emph{optimal-v-aware-r}, and an optimal approximation that ignores value vectors by \\emph{optimal-v-oblivious-r}. \nWe define \\emph{optimal-v-oblivious-r} as the output of Eq.~\\ref{eqn:sparse_attention} in which $S$ is selected to be the $r$ indices with the highest attention scores $\\alpha_i$'s. This is a natural baseline since this is what current sparse methods are trying to emulate.\nWe now explicitly derive and analyze the value-aware objective.\n\n\n \n\\paragraph{Value-aware objective} Let $o = \\sum_{i=1}^L \\alpha_iv_i$ be a convex combination of $v_1,\\ldots,v_L \\in \\mathbb{R}^d$, corresponding to the true output of the attention sub-layer.\nLet $C_r = \\{\\sum_{i=1}^L \\beta_iv_i: \\forall i\\ \\beta_i \\geq 0, \\sum_i \\beta_i=1, |\\{\\beta_i: \\beta_i > 0\\}| \\leq r\\}$ denote the set of points in the polytope of $v_i$'s that can be expressed as a convex combination of at most $r$ value vectors $v_i$.\nThe goal of value-aware approximation is to solve for the point in the constrained region $C_r$ closest to the true output $o$, i.e. $\\mathrm{argmin}_{\\tilde{o} \\in C_r} ||o-\\tilde{o}||^2$. As mentioned, this solution is termed \\textit{optimal-v-aware-r}.\n\nWe consider two extreme cases of $r$: $r=1$ and $r\\geq d+1$. For $r\\geq d+1$, the Carath{\\'e}odory Theorem \\cite{Caratheodory} states that $o = \\sum_i \\alpha_iv_i$ can be expressed as a convex combination of at most $d+1$ $v_i$'s. Hence, if $r \\geq d+1$ then $o \\in C_r$ and the optimal approximation error is $0$.\nIn most popular architectures, such as BERT \\cite{devlin2018bert}, $d=64 \\ll L$. This means that from the point of expressivity, \\emph{optimal-v-aware-65} can obtain a perfect approximation. Conversely, we will show in \\S\\ref{sec:experiments} that the performance of \\emph{optimal-v-oblivious-65} is substantially lower.\n\n\nAt the other extreme, when $r=1$ (a single value vector), the above objective is equivalent to $\\mathrm{argmin}_{i \\in (1,\\dots,L)} ||o-v_i||^2$ and can be simplified as\n\\setlength{\\abovedisplayskip}{2pt}\n\\setlength{\\belowdisplayskip}{2pt}\n\\begin{equation}\\label{eqn:top_1}\n\\begin{split}\n &\\ \\mathrm{argmin}_{i} ||o||^2 + ||v_i||^2 -2\\inner{v_i}{o} \\\\\n= &\\ \\mathrm{argmin}_{i} ||v_i||^2 -2\\inner{v_i}{\\sum_{j} \\alpha_jv_j} \\\\ \n= &\\ \\mathrm{argmin}_{i} ||v_i||^2(0.5-\\alpha_i) -\\sum_{j\\neq i} \\alpha_j\\inner{v_i}{v_j}.\n\\end{split}\n\\end{equation}\n\nThis equation induces a ranking over value vectors that \\emph{depends} on the value vectors themselves, in contrast to a value-oblivious ranking induced solely by attention weights $\\balpha$. \n\\setlength{\\abovedisplayskip}{6pt}\n\\setlength{\\belowdisplayskip}{6pt}\n\nIf $v_1,\\ldots,v_L$ are orthogonal, the above equation further simplifies to $\\mathrm{argmin}_{i} ||v_i||^2(0.5-\\alpha_i) - \\sum_{j\\neq i} \\alpha_j\\cdot0 = \\mathrm{argmin}_{i} ||v_i||^2(0.5-\\alpha_i)$. 
In this case, if some $\\alpha_i \\geq 0.5$ or if $v_1,\\ldots,v_L$ have equal norms, this would further simplify to $\\mathrm{argmax}_{i} \\alpha_i$, and would therefore be independent of the value-vectors $v_i$'s, implying that a value-oblivious approximation would work well.\n\nBut such assumptions on $v_1, \\ldots, v_L$ do not hold in general and thus an approximation that only depends on $\\alpha_i$'s can be sub-optimal. E.g., let $v_1, v_2, v_3$ be orthogonal vectors $(1,0,0)$, $(0,2,0)$, $(0,0,3)$ respectively and let $\\alpha_1, \\alpha_2, \\alpha_3$ be $0.25, 0.35, 0.4$. Then $v_3$ with the highest attention weight $\\alpha_3$ has a squared distance of $3.79$ from the true output $\\sum_i \\alpha_iv_i$ whereas $v_1$ with the least attention weight $\\alpha_1$ has only $2.49$. In this case, \\emph{optimal-v-aware-1} induces exactly the opposite ranking of value vectors compared to \\emph{optimal-v-oblivious-1}. Moreover, if we increase the value $3$ in $v_3$ to infinity, the approximation error will also infinitely grow. This example and, in general, Eq.~\\ref{eqn:top_1} also show that the optimal ranking can be significantly different from the one induced by $\\alpha_i ||v_i||$ proposed recently by \\cite{kobayashi-etal-2020-attention} for obtaining better interpretability of attention models.\n\n\n\n\n\\paragraph{Effect of kernel function} Recently, Linear Transformer \\cite{katharopoulos2020transformers} proposed to replace the existing exponential kernel with more efficient kernels. We now show that replacing the exponential kernel with a polynomial kernel can lead to a drop in quality for current sparse approximation methods.\n\nIntuitively, because the kernel function affects the skewness of $\\balpha$, it also affects the difference between the ranking induced by the optimal-value-aware approximation and the optimal-value-oblivious one. For simplicity, consider the case of orthogonal value vectors in which Eq.~\\ref{eqn:top_1} simplifies to $\\mathrm{argmin}_{i} ||v_i||^2(0.5-\\alpha_i)$. From Eq.~\\ref{eqn:attention}, we have $\\alpha_i = \\kappa(q,k_i) \/ \\sum_j \\kappa(q,k_j)$ which is $\\inner{q}{k_i}^C \/ \\sum_j \\inner{q}{k_j}^C$ for the degree-$C$ polynomial kernel. For $C = 0$, we have $\\alpha_i = 1\/L$, which gives $\\mathrm{argmin}_{i} ||v_i||^2$. In this case, the value vectors become crucial when $\\balpha$ is uniform. On the other hand, assuming distinct inner products, for $C \\gg 0$ we will obtain $\\max_i \\alpha_i \\geq 0.5$, thereby reducing us to $\\mathrm{argmax}_{i} \\alpha_i$, where value vectors do not affect the approximation. The complexity of the Transformer grows exponentially with the degree $C$ and thus in practice a low $C$ must be used (e.g., degree-$2$ polynomial).\nIn such case, $\\balpha$ is likely to be less skewed compared to the exponential kernel and more likely to induce a sub-optimal ranking.\n\nIn the next section, we empirically verify the above observations and show a significant performance gap between value-oblivious approximations and value-aware ones.\n\n\n\n\\section{Experiments}\\label{sec:experiments}\n\nWe empirically verify our observations in the context of training causal and masked language models, which are known to strongly correlate with performance on downstream applications \\cite{radford2019language,devlin2018bert}. \n\n\\paragraph{Masked LM task} We form examples by sampling sequences and replacing sub-words with \\texttt{} following the procedure in \\cite{devlin2018bert}. 
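The recipe of \cite{devlin2018bert} selects $15\%$ of the positions; of these, $80\%$
are replaced by the mask token, $10\%$ by a random token, and $10\%$ are left unchanged.
A minimal sketch (ignoring special-token handling and any whole-word masking details,
with names of our own choosing) is:
\begin{verbatim}
import numpy as np

def mask_tokens(token_ids, mask_id, vocab_size, rng, p=0.15):
    ids = np.array(token_ids)
    labels = np.full_like(ids, -100)       # -100 marks positions not predicted
    sel = rng.random(len(ids)) < p         # positions the model must predict
    labels[sel] = ids[sel]
    r = rng.random(len(ids))
    ids[sel & (r < 0.8)] = mask_id                       # 80%: mask token
    rand = sel & (r >= 0.8) & (r < 0.9)
    ids[rand] = rng.integers(0, vocab_size, rand.sum())  # 10%: random token
    return ids, labels                                   # remaining 10%: unchanged
\end{verbatim}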
The model is trained to maximize the log probability of the masked out tokens and we evaluate the \\emph{error} of the model as the percentage of masked tokens predicted incorrectly. As approximate attention becomes increasingly relevant for long sequences, we train \\robertal{} on sequences of length $4096$ (Fig.~\\ref{figure:mlm_training}). Training was warm-started using \\roberta{}-base \\cite{liu2019roberta}. Full details on the experimental setup are in \\S\\ref{sec:mlm_data}. After training the model for $\\sim2.5$M steps, the error of the model (that is, proportion of incorrect predictions) on the evaluation set was $24.2$ (compared to $26.6$ for an analogous training on $512$-long sequences), ensuring that tokens in \\robertal{} indeed attend over longer distances and result in higher quality representations. We then replace the attention function of the trained model with various approximation schemes and evaluate the resulting model on the evaluation set.\n\nWe first compare \\emph{optimal-v-oblivious-r} to \\emph{optimal-v-aware-r}. We know that the approximation error of value-aware approximation is $0$ for $r > 64$. For $r=1$, we exhaustively go through all possible values and choose the one that minimizes the value-aware objective. As seen in Fig.~\\ref{figure:mlm_eval_top_r} and Table~\\ref{table:mlm_error}, \nthere is substantial gap between the two approximations. For instance, \\emph{optimal-v-oblivious-65} gives an MLM error of $43.5$ whereas the error of \\emph{optimal-v-aware-65} is $24.2$, since it can perfectly approximate full attention. Moreover, we compare \\emph{optimal-v-oblivious-r} to existing approximations: (a) \\emph{sliding-window-r}, where a position attends to $r\/2$ positions to its left and right), (b) LSH attention \\cite{kitaev2020reformer} and (c) ORF attention \\cite{Choromanski2020RethinkingAW}. Fig.~\\ref{figure:mlm_eval_top_r} shows that \\emph{sliding-window-r} trails behind \\emph{optimal-v-oblivious-r}.\nLSH attention, which tries to emulate \\emph{optimal-v-oblivious-r}, either requires a large number of hash rounds or a large chunk size. Similarly, the dense approximation, ORF, provides an unbiased approximation of the exponential kernel but suffers from high variance in practice.\n\n\n\\begin{table}[h]\\setlength{\\tabcolsep}{3.6pt}\n \\scriptsize\n \\centering\n \\begin{tabular}{c|c|c|c|c|c|c|c}\\hline\n exact & \\begin{tabular}[c]{@{}c@{}} OVO\\\\ $1$\\end{tabular} & \\begin{tabular}[c]{@{}c@{}} OVO\\\\ $65$\\end{tabular} & \\begin{tabular}[c]{@{}c@{}} OVA\\\\ $1$\\end{tabular} & \\begin{tabular}[c]{@{}c@{}} OVA\\\\ $65$\\end{tabular} & \\begin{tabular}[c]{@{}c@{}} ORF\\\\ $256$ features\\end{tabular} & \\begin{tabular}[c]{@{}c@{}} LSH\\\\ $r=64$ \\\\ $4$-rounds \\end{tabular} & \\begin{tabular}[c]{@{}c@{}} LSH\\\\ $r=512$ \\\\ $16$-rounds \\end{tabular} \\\\\\hline\n 24.2 & 96.6 & 43.5 & 88.6 & 24.2 & 89.79 & 90.39 & 26.11 \\\\\\hline\n \\end{tabular}\n \\caption{MLM error of \\robertal{} on the evaluation set using approximate attention described in \\S\\ref{sec:experiments}. OVO: \\emph{optimal-v-oblivious}, OVA: \\emph{optimal-v-aware}. 
In LSH, each query attends to a total of $r$ keys per hash round.}\n \\label{table:mlm_error}\n\\end{table}\n\n\n\n\n\n\\begin{table}[h]\\setlength{\\tabcolsep}{5pt}\n \\scriptsize\n \\centering\n \\begin{tabular}{c|c|c|c|c|c}\\hline\n & exact & OVO-1 & OVO-65 & OVA-1 & OVA-65 \\\\\\hline\n exponential & 30.5 & 1031.1 & 33.5 & 280.3 & 30.5 \\\\\\hline\n polynomial (deg 2) & 34.2 & 6700.2 & 310.2 & 1005.4 & 34.2 \\\\\\hline\n elu & 35.3 & 1770.6 & 62.7 & 837.4 & 35.3 \\\\\\hline\n \\end{tabular}\n \\caption{Evaluation perplexity of models using approximate attention. OVO: \\emph{optimal-v-oblivious}, OVA: \\emph{optimal-v-aware}.}\n \\label{table:lm_loss}\n\\end{table}\n\n\\begin{figure}[h!]\n \\centering\n \\includegraphics[scale=0.45]{figures\/rbt_4096_top_r.pdf}\n \\caption{Evaluation of MLM error of \\robertal{} after replacing vanilla attention with approximation schemes. Dashed line denotes error using vanilla attention.}\n\\label{figure:mlm_eval_top_r}\n\\end{figure}\n\n\\begin{figure}[h!]\n \\centering\n \\includegraphics[scale=0.45]{figures\/kernels_512_top_r.pdf}\n \\caption{Evaluation loss (base e) of \\emph{optimal-v-oblivious-r} oracle on the causal LM task for distinct kernel functions.}\n\\label{figure:kernels_top_r}\n\\end{figure}\n\n\\paragraph{Causal LM task} To investigate the effect of the kernel function on the quality of value-oblivious methods, we train a $6$-layer Transformer LM over 512 tokens on WikiText-103 \\cite{Merity2017PointerSM} (details in \\S\\ref{sec:wikitext}). We train $3$ models with identical hyperparameters using the exponential, degree-$2$ polynomial, and elu kernels respectively and evaluate the trained models with value-aware and value-oblivious approximations. \nAgain, \\emph{optimal-v-aware-r} substantially outperforms \\emph{optimal-v-oblivious-r} (Table~\\ref{table:lm_loss}), pointing to the potential of working on approximating the value-aware objective. \nMore importantly, comparing the approximation quality across different kernel functions (Fig.~\\ref{figure:kernels_top_r}), we see that the gap between the three kernels is small when using full attention (512 keys) vectors. However, convergence is much slower for the elu kernel, and especially the degree-$2$ polynomial, demonstrating that the approximation based on the top-$r$ key vectors is sub-optimal when switching to a less skewed kernel, which is more affected by the value vectors.\n\n\n\\section{Conclusions}\nIn this work, we provide theoretical and empirical evidence against current practice of focusing on approximating the attention matrix in Transformers, while ignoring the value vectors. 
We propose a value-aware objective and argue that the efforts to develop more efficient Transformers should consider this objective function as a research target.\n\n\\section{}{8pt plus 4pt minus 2pt}{0pt plus 2pt minus 2pt}\n\n\\usepackage{float} \n\n\n\\title{Value-aware Approximate Attention}\n\n\\author{\n Ankit Gupta \\\\\n Tel Aviv University \\\\\n {\\tt {\\normalsize ankitgupta.iitkanpur@gmail.com}} \\\\\\And\n Jonathan Berant \\\\\n Tel Aviv University, \\\\\n Allen Institute for AI \\\\\n {\\tt {\\normalsize joberant@cs.tau.ac.il}} \\\\}\n\n\n\\begin{document}\n\\maketitle\n\n\\setlength{\\abovedisplayskip}{6.5pt}\n\\setlength{\\belowdisplayskip}{6.5pt}\n\n\n\\input{0_abstract}\n\\input{1_introduction}\n\\input{2_method}\n\\input{3_optimal}\n\\input{4_experiments}\n\n\\section*{Acknowledgments}\nThis research was partially supported by \nThe Yandex Initiative for Machine Learning, and the European Research Council (ERC) under the European Union Horizons 2020 research and innovation programme (grant ERC DELPHI 802800).\n\n\n\\section{Supplemental Material}\n\\label{sec:supplemental}\n\n\\subsection{Masked LM task}\\label{sec:mlm_data}\nThe instances for the MLM task (\\S\\ref{sec:experiments}) were formed separately using the corpora listed in Table ~\\ref{table:mlm_data}. For each dataset, after appending \\texttt{<\/s>} token at the end of each document, the documents were arranged in a random order and concatenated into a single long text which was then tokenized into a list of sub-words. Depending upon the final input sequence length $L$ of the experiment ($512$\/$4096$) this list was chunked into full length $L-2$ sequences which were then masked randomly following \\cite{devlin2018bert} and enclosed within \\texttt{} and \\texttt{<\/s>} tokens. To handle sequences longer than $512$ tokens, the positional embeddings were used following \\cite{gupta2020gmat}. The learning curves of \\robertas{} and \\robertal{} are in Fig.~\\ref{figure:mlm_training}.\n\n\\begin{figure}[ht]\n \\centering\n \\includegraphics[scale=0.45]{figures\/rbt_full_4096_mix.pdf}\n \\caption{Evaluation error on the MLM task using vanilla attention (computing the full attention matrix).}\n\\label{figure:mlm_training}\n\\end{figure}\n\n\\begin{table}[h!]\\setlength{\\tabcolsep}{6pt} \n \\scriptsize\n \\centering\n \\begin{tabular}{c|c|c|c}\\hline\n corpus & all & training & evaluation \\\\\\hline\n Wikipedia ($10$\/$2017$) & $2.67$B & $1.53$B & $1.02$M\\\\\\hline\n BookCorpus \\cite{Zhu_2015_ICCV} & $1.06$B & $1.02$B & $1.02$M\\\\\\hline\n ArXiv \\cite{Cohan2018ADA} & $1.78$B & $1.53$B & $1.02$M\\\\\\hline\n PubMed \\cite{Cohan2018ADA} & $0.47$B & $510$M & $1.02$M\\\\\\hline\n PG19 \\cite{raecompressive2019} & $3.06$B & $510$M & $1.02$M\\\\\\hline\n \\end{tabular}\n \\caption{Number of tokens in the datasets used for MLM training.}\n \\label{table:mlm_data}\n\\end{table}\n\n\\paragraph{Hyperparameters} For convenience, we denote the training hyperparameters using the following abbreviations, INS: number of training instances, BSZ: number of instances in a batch, ISZ: instance size, SQL: final input sequence length after rearranging BSZ instances each of length ISZ, LR: learning rate, WRM: linear LR warm-up proportion, EP: number of epochs, STP: number of optimizer steps, GAC: gradient accumulation steps, POSq: whether (y\/n) $q$ part is included in positional embeddings. 
The hyperparameters are listed in Table~\\ref{table:hyperparams_mlm}.\n\n\\begin{table}[h!]\\setlength{\\tabcolsep}{2pt}\n \\scriptsize\n \\centering\n \\begin{tabular}{c|c|c|c|c|c|c|c|c|c} \\hline\n \n model & init & BSZ & ISZ & SQL & LR & WRM & EP & STP & POSq \\\\\\hline\n \\robertas{} & \\roberta{} & $8$ & $512$ & $512$ & $5$e-$6$ & $0.1$ & $2$ & $2.476$M & n \\\\\\hline\n \\robertal{} & \\roberta{} & 8 & $512$ & $4096$ & $5$e-$6$ & $0.1$ & $2$ & $2.476$M & y \\\\\\hline\n \\end{tabular}\n \\caption{Training hyperparameters. Common parameters: INS=$10$M, dropout-rate=$0.0$, optimizer=Bert-Adam, $\\beta_1$=$0.9$, $\\beta_2$=$0.98$, weight-decay=$0.01$, max-grad-norm=$5.0$, seed=$42$, GAC=$1$.}\n \\label{table:hyperparams_mlm}\n\\end{table}\n\n\\paragraph{Details of LSH attention} Given $L$ queries and $L$ keys in $\\mathbb{R}^d$, in each hash round, we sample a new matrix $R \\in \\mathbb{R}^{\\frac{C}{2} \\times d}$ of standard gaussians and hash the queries and keys as $H_R(x) = \\mathrm{argmax}([-Rx;Rx]) \\in \\{1,\\ldots,C\\}$. We rearrange the queries (and similarly keys) according to their hash value, breaking ties using the original position, and then chunk them into $L\/B$ chunks of $B$ vectors each. Denoting these chunks as $Q_1,\\ldots,Q_{L\/B}$ and $K_1,\\ldots,K_{L\/B}$, for each query in $Q_i$ we compute its similarity scores with respect to all keys in $K_{i-1},K_i$. I.e.~in each hash round a query attends to $r=2B$ keys. For each query, these similarity scores are accumulated over different hash rounds, and at the end normalized by their sum to get normalized attention scores over the keys. As recommended in the original paper \\cite{kitaev2020reformer}, we use $C=2L\/B=4L\/r$ which in practice can be sub-optimal as rearrangement destroys the original locality structure. \n\n\\paragraph{Details of ORF attention} Given $L$ queries and $L$ keys in $\\mathbb{R}^d$ we divide each vector by $d^{\\frac{1}{4}}$ to account for the temperature term in dot-product attention. For a given number $F$ of features, we sample a random orthogonal matrix $R \\in \\mathbb{R}^{F \\times d}$ as described in \\cite{saxe2013exact} and provided as a tensor initialization option in PyTorch. We then map each vector to the feature space as $\\Phi(x) = \\frac{1}{\\sqrt{F}}\\exp\\left(Rx - \\frac{||x||^2}{2}\\right)\\in \\mathbb{R}^F$ where ($-$) and $\\exp$ operations are applied element-wise. Similarity score of a query-key pair $(q, k)$ is computed as $\\inner{\\Phi(q)}{\\Phi(k)}$ and and is normalized by the sum of the similarity scores of $q$ with all the keys. Computing this directly leads to numerical instability so we instead compute $\\Phi(q) = \\frac{1}{\\sqrt{F}}\\exp\\left(Rq - \\frac{||q||^2}{2} - \\max(Rq)\\right)$ for queries and $\\Phi(k) = \\frac{1}{\\sqrt{F}}\\exp\\left(Rk - \\frac{||k||^2}{2} - \\max(RK)\\right)$ where $K$ is the matrix of all keys and $\\max$ is over all elements of input. \n\nThe main idea behind ORF attention is that, for a vector $w$ of standard gaussians, $\\inner{w}{x} \\sim \\mathcal{N}(0,||x||^2)$ and from the properties of log-normal distributions, $\\mathbb{E}_{w}[\\exp(\\inner{w}{x})] = \\exp(\\frac{||x||^2}{2})$. So, $\\mathbb{E}_{w}[\\exp(\\inner{w}{q})\\cdot\\exp(\\inner{w}{k})] = \\mathbb{E}_{w}[\\exp(\\inner{w}{q+k})] = \\exp(\\frac{||q+k||^2}{2}) = \\exp(\\inner{q}{k}+\\frac{||q||^2}{2}+\\frac{||k||^2}{2})$. 
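The last identity above is easy to check numerically; the following is a minimal Monte Carlo sketch (not from the original text -- the dimension, sample size and the vectors $q,k$ are arbitrary toy choices):
\\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d, n = 4, 2_000_000

# two fixed vectors with modest norms (keeps the Monte Carlo variance small)
q = rng.normal(size=d); q *= 0.5 / np.linalg.norm(q)
k = rng.normal(size=d); k *= 0.5 / np.linalg.norm(k)

w = rng.normal(size=(n, d))                   # rows: i.i.d. standard gaussian vectors
lhs = np.mean(np.exp(w @ q) * np.exp(w @ k))  # E_w[exp(<w,q>) * exp(<w,k>)]
rhs = np.exp(q @ k + 0.5 * q @ q + 0.5 * k @ k)
print(lhs, rhs)  # the two numbers agree up to Monte Carlo error
\\end{verbatim}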
Appropriately scaling both sides gives, $\\mathbb{E}_{w}[\\exp(\\inner{w}{q} - \\frac{||q||^2}{2})\\cdot\\exp(\\inner{w}{k}-\\frac{||k||^2}{2})] = \\exp(\\inner{q}{k})$, which is exactly the term for the exponential kernel.\n\n\n\\subsection{Causal LM task}\\label{sec:wikitext}\nFor this task, we used the language modeling framework provided by Faiseq\\footnote{\\url{https:\/\/github.com\/pytorch\/fairseq}}.\n\\paragraph{Model and training details} number of decoder layers: $6$, hidden size: $512$, head size: $64$, number of model parameters: $156$M, dataset: WikiText-$103$, training examples: $1801350$, input sequence length: $512$, $\\beta_1$=$0.9$, $\\beta_2$=$0.98$, weight-decay: $0.01$, gradient clip-norm: none, learning rate: $0.0005$, learning rate schedule: inverse square root, number of warmup updates: $4000$, batch size: $128$, epochs: $20$, number of steps: $31520$, minimum context-window during evaluation on test-set: $400$.\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n\\noindent\n{\\it Introduction.}\nBesides the mass, electric charge and other quantum numbers, the calculable electric and magnetic dipole moments are among the basic attributes of elementary particles.\nTogether with the Yukawa coupling inferred from the mass, they provide excellent opportunities to tests the standard model (SM) and probe new physics. \n\nThe 4.2$\\sigma$ deviation of the measured value of the muon anomalous magnetic moment from the SM prediction, $\\Delta a_\\mu = (2.51 \\pm 0.59)\\times 10^{-9}$~\\cite{Abi:2021gix, Aoyama:2020ynm}, can be comfortably explained even by very heavy new particles as a result of the chiral enhancement from the Higgs coupling to new particles, see Fig.~\\ref{fig:diags} (left). With order one couplings, the scale of new physics up to $\\sim 10$ TeV is expected, and it further extends to $\\gtrsim 50$ TeV for couplings close to the perturbativity limit~\\cite{Dermisek:2020cod,Capdevilla:2021rwo,Dermisek:2021ajd,Allwicher:2021jkr}. Such heavy particles are far beyond the reach of the Large Hadron Collider (LHC) and currently envisioned future experiments.\n\n\nIn this letter we show that any new interaction resulting in a chirally enhanced contribution to the muon magnetic moment necessarily modifies the muon Yukawa coupling, and thus the decay of the Higgs boson to muon pairs, or, if $h\\to \\mu^+\\mu^-$ is not modified, the muon electric dipole moment ($\\mu$EDM) of certain size must be generated (with one exception noted later). These three observables are highly correlated, and near future measurements of $h\\to \\mu^+\\mu^-$ will carve an ellipse in the plane of dipole moments for any such model. Together with the improved measurement of the electric dipole moment many models able to explain $\\Delta a_\\mu$ can be efficiently tested.\nFurthermore, in some scenarios the heaviest possible spectrum will be tested the most efficiently. \n\nThe main results can be intuitively understood from Fig.~\\ref{fig:diags}. No matter what the quantum numbers of $X$, $Y$ and $Z$ particles are, as long as they can form the diagram on the left, the photon can be removed and the $Y-Z-H$ coupling and its conjugate can be used again to generate the diagram on the right top. This effectively generates dimension 6 mass operator, $\\bar{\\mu}_{L}\\mu_{R}H\\left(H^{\\dagger}H\\right)$. 
In addition, for models where $X$ is a scalar participating in electroweak symmetry breaking, for example the SM Higgs boson, the same operator could be generated at tree level as in the diagram on the right bottom. We refer to these cases hereafter as loop models and tree models, respectively. The tree models have been studied in connection with $\\Delta a_{\\mu}$ in~\\cite{Kannike:2011ng,Dermisek:2013gta,Dermisek:2014cia,Poh:2017tfo,Crivellin:2018qmi,Dermisek:2020cod,Dermisek:2021ajd}, whereas examples of loop models,~\\cite{Moroi:1995yh,Huang:2001zx,Cheung:2009fc,Endo:2013lva,Freitas:2014pua,Thalapillil:2014kya,Omura:2015nja,Calibbi:2018rzv,Crivellin:2018qmi,Crivellin:2020tsz,Babu:2020hun,Capdevilla:2021rwo,Crivellin:2021rbq,Babu:2021jnu,Bigaran:2021kmn,MuonCollider:2022xlm}, include scenarios with familiar particles in the loop: superpartners, top quark, or the $\\tau$ lepton; and particles solely introduced to explain $\\Delta a_\\mu$. The generated operator contributes differently to the muon mass and Yukawa coupling as a result of different combinatorial factors. This necessarily modifies the rate for $h\\to \\mu^+\\mu^-$, unless the modified Yukawa coupling has the same magnitude as that in the SM, which is possible with complex couplings that in turn predict a certain value of $\\mu$EDM. \n\n\n\n\n\n\n\\begin{figure}[t]\n\\includegraphics[width=0.8\\linewidth]{loop_tree_diags_v2.pdf}\n\\caption{A generic diagram with chiral enhancement contributing to muon dipole moments (left), a corresponding diagram contributing to the dimension 6 mass operator at 1-loop level (right top) and at tree level, if possible, (right bottom).}\n\\label{fig:diags}\n\\end{figure}\n\nPossible correlations between $\\Delta a_\\mu$ and $h\\to \\mu^+\\mu^-$ were pointed out before~\\cite{Kannike:2011ng,Dermisek:2013gta,Dermisek:2014cia,Thalapillil:2014kya,Poh:2017tfo,Crivellin:2020tsz,Babu:2020hun,Dermisek:2020cod,Dermisek:2021ajd,Crivellin:2021rbq}. Similarly $\\mu$EDM was also studied in connection with $\\Delta a_{\\mu}$ but only as a possible effect if couplings are complex~\\cite{Cheung:2009fc,Crivellin:2018qmi,Babu:2020hun,Bigaran:2021kmn,MuonCollider:2022xlm}. The sharp correlation between all three observables has not been noticed. As we will see, with complex couplings, predictions for $h\\to \\mu^+\\mu^-$ cannot be made based only on $\\Delta a_\\mu$. Rather a given $h\\to \\mu^+\\mu^-$ translates into a prediction for $\\mu$EDM and vice versa. \n\n\n\n\\noindent\n{\\it Effective lagrangian.} For our discussion, the relevant terms of the effective Lagrangian are:\n\\begin{eqnarray}\n\\mathcal{L}\\supset &-&y_{\\mu}\\bar{l}_{L}\\mu_{R}H \\;-\\; C_{\\mu H}\\bar{l}_{L}\\mu_{R}H\\left(H^{\\dagger}H\\right) \\nonumber \\\\\n&-& C_{\\mu \\gamma} \\bar{l}_{L}\\sigma^{\\rho\\sigma}\\mu_{R} H F_{\\rho\\sigma} + h.c.,\n\\label{eq:eff_lagrangian}\n\\end{eqnarray}\nwhere the components of the lepton doublet are $l_{L}=(\\nu_{\\mu}, \\mu_{L})^{T}$, $\\sigma^{\\rho\\sigma}=\\frac{i}{2}[\\gamma^{\\rho},\\gamma^{\\sigma}]$, and all the parameters can be complex. The first term is the usual muon Yukawa coupling in the SM. When the Higgs field develops a vacuum expectation value, $H=(0,v+h\/\\sqrt{2})^T$ with $v=174$ GeV, the dimension 6 operator in the second term generates additional contributions to the muon mass and muon coupling to the Higgs boson, while the dimension 6 operator in the third term corresponds to muon dipole moments. 
Defining the muon Yukawa coupling and the electric and magnetic dipole moments in terms of Dirac spinors in the basis where the muon mass, $m_\\mu$, is real and positive,\n\\begin{eqnarray}\n\\mathcal{L}\\supset &&- m_{\\mu} \\bar{\\mu}\\mu - \\frac{1}{\\sqrt{2}} \\left(\\lambda^{h}_{\\mu\\mu}\\bar{\\mu}P_{R}\\mu h + h.c.\\right)\\nonumber \\\\\n&&\\;\\;+ \\frac{\\Delta a_{\\mu}e}{4m_{\\mu}}\\bar{\\mu}\\sigma^{\\rho\\sigma}\\mu F_{\\rho\\sigma} - \\frac{i}{2}d_{\\mu}\\bar{\\mu}\\sigma^{\\rho\\sigma}\\gamma^{5}\\mu F_{\\rho\\sigma},\n\\label{eq:eff_lagrangian_2}\n\\end{eqnarray}\nwe have \n\\begin{eqnarray}\nm_{\\mu}&=&\\left(y_{\\mu}v + C_{\\mu H}v^{3}\\right)e^{-i\\phi_{m_{\\mu}}}, \\label{eq:mmu}\\\\\n\\lambda_{\\mu\\mu}^{h}&=&\\left(y_{\\mu} + 3C_{\\mu H}v^{2}\\right)e^{-i\\phi_{m_{\\mu}}}, \\label{eq:mmu_lamhmu} \\\\\n\\Delta a_{\\mu} &=&- \\frac{4m_{\\mu}v}{e}\\textrm{Re}[C_{\\mu \\gamma}e^{-i\\phi_{m_{\\mu}}}], \\label{eq:mdipole}\\\\\nd_{\\mu} &=& 2v\\textrm{Im}[C_{\\mu \\gamma}e^{-i\\phi_{m_{\\mu}}}],\n\\label{eq:edipole}\n\\end{eqnarray}\nwhere $e$ is positive and $\\phi_{m_{\\mu}}$ is the phase of the rotation required to make the mass term real and positive. All the parameters are real except for $\\lambda_{\\mu\\mu}^{h}$ which can be complex. $\\lambda_{\\mu\\mu}^{h}$ and $m_{\\mu}$ do not follow the expected scaling in the SM and \n\\begin{equation}\nR_{h\\to \\mu^+\\mu^-} \\equiv \\frac{BR(h\\to \\mu^+\\mu^-)}{BR(h\\to \\mu^+\\mu^-)_{SM}} = \\left(\\frac{v}{m_{\\mu}}\\right)^{2}\\big|\\lambda_{\\mu\\mu}^{h}\\big|^{2}\n\\end{equation}\nin general deviates from 1.\n\n\n\n\n\n\n\n\n\n\n\\noindent\n{\\it The muon ellipse.} The crucial observation is that the couplings which generate chirally-enhanced contributions to $C_{\\mu\\gamma}$, also necessarily generate $C_{\\mu H}$ with the same phase. Although all the couplings in diagrams in Fig.~\\ref{fig:diags} can be complex, the same combination of couplings enter both $C_{\\mu\\gamma}$ and $C_{\\mu H}$ for tree models, with an additional factor of $\\lambda_{YZ} \\lambda_{YZ}^*$ for loop models. Thus the two Wilson coefficients are related by a {\\it real factor}, $k$, define as\n\\begin{equation}\nC_{\\mu H} = \\frac{k}{e} C_{\\mu\\gamma}.\n\\label{eq:WC_relation}\n\\end{equation}\nThis allows us to write $\\lambda_{\\mu\\mu}^{h}$ and thus $R_{h\\to \\mu^+\\mu^-}$ in terms of electric and magnetic dipole moments:\n\\begin{flalign}\nR_{h\\to \\mu^+\\mu^-}=\\left(\\frac{\\Delta a_{\\mu}}{2\\omega} - 1\\right)^{2} + \\left(\\frac{m_{\\mu}d_{\\mu}}{e\\omega}\\right)^{2},\n\\label{eq:ellipse}\n\\end{flalign}\nwhere $\\omega = m_{\\mu}^{2}\/kv^{2}$. Note that $\\Delta a_{\\mu}$ can both increase or decrease $R_{h\\to \\mu^+\\mu^-}$ depending on its sign and the sign of $k$, while $d_{\\mu}$ can only increase $R_{h\\to \\mu^+\\mu^-}$.\n\n\\begin{figure}[t]\n\\includegraphics[width=0.65\\linewidth]{contour_generick.pdf} \n\\includegraphics[width=0.65\\linewidth]{contour_SMVLL_R1c.pdf} \n\\includegraphics[width=0.65\\linewidth]{contour_genericks2351025_final.pdf} \n\\caption{Contours of constant $R_{h\\to\\mu^+\\mu^-}$ in the $\\Delta a_{\\mu}$ -- $d_{\\mu}$ plane in models with $k=64\\pi^2$ (e.g. SM+VL, $\\mathcal{Q}=1$) (top); $k=64\\pi^2, \\;64\\pi^2\/3$, and $64\\pi^2\/5$ (e.g. SM+VL, $\\mathcal{Q}=1$, 3, and 5) (middle); and for smaller values of $k$ relevant for models with extended Higgs sectors and loop models (bottom). 
The light and dark green shaded regions show the $\\pm 1\\sigma$ and $\\pm 2\\sigma$ ranges of $\\Delta a_{\\mu}$, respectively.}\n\\label{fig:SM_ellipse}\n\\end{figure}\n\n\nConcretely, in five possible extensions of the SM with vectorlike leptons that can generate chirally enhanced contributions, $k$ is completely determined by the quantum numbers of new leptons,\n\\begin{equation}\nk=\\frac{64\\pi^{2}}{\\mathcal{Q}},\n\\label{eq:tree_X_SM}\n\\end{equation}\nwhere $\\mathcal{Q}= 1$ for $X$ and $Y$ leptons in $\\mathbf{2}_{-1\/2}\\oplus\\mathbf{1}_{-1}$ or $\\mathbf{2}_{-1\/2}\\oplus\\;\\mathbf{3}_{0}$ representations of $SU(2)\\times U(1)_Y$; $\\mathcal{Q}=3$ for $\\mathbf{2}_{-3\/2}\\oplus\\mathbf{1}_{-1}$ or $\\mathbf{2}_{-3\/2}\\oplus\\mathbf{3}_{-1}$; and $\\mathcal{Q}=5$ for $\\mathbf{2}_{-1\/2}\\oplus\\mathbf{3}_{-1}$~\\cite{Kannike:2011ng}. \n\n\nIn Fig.~\\ref{fig:SM_ellipse} (top), we show contours of constant $R_{h\\to \\mu^+\\mu^-}$ in the plane of the muon dipole moments for $k=64\\pi^2$ which corresponds, for example, to the SM extended by a vectorlike doublet and singlet leptons whose quantum numbers mirror their respective SM counterparts, i.e. $\\mathcal{Q}=1$. The region where $h\\to \\mu^{+}\\mu^{-}$ is found to be within $10\\%$ of the SM value, indicating the ultimate LHC precision, is shaded red. The region outside $R_{h\\to \\mu^+\\mu^-}=2.2$ (shaded gray) is already ruled out by measurements of $h\\to \\mu^{+}\\mu^{-}$~\\cite{ATLAS:2020fzp}. This model (range of $k$) is somewhat special as, in spite of the large contribution from $C_{\\mu H}$ to the muon mass and Yukawa coupling, the SM-like $R_{h\\to \\mu^+\\mu^-}$ and $d_{\\mu} = 0$ are consistent with $\\Delta a_{\\mu}$ within $1\\sigma$.\\footnote{ This illustrates the only exception to generating non-zero $\\mu$EDM when $R_{h\\to \\mu^+\\mu^-}=1$, advertised in the introduction. It corresponds to the case when $\\lambda_{\\mu\\mu}^{h} = -m_\\mu\/v$.}\nNote however, that the current central value of $\\Delta a_{\\mu}$ requires $R_{h\\to \\mu^+\\mu^-} = 1.32$ for $d_{\\mu}=0$ and it can be as large as the current upper limit when $|d_{\\mu}|\\sim 1\\times 10^{-22}\\, {\\rm e\\cdot cm}$. \n\nThe situation is dramatically different for the other two scenarios, $\\mathcal{Q} = 3$ and 5, with the comparison of all three scenarios shown in Fig.~\\ref{fig:SM_ellipse} (middle). As $k$ decreases, the center of the ellipse moves to larger values of $\\Delta a_{\\mu}$. Contours of $R_{\\mu}= 1\\pm 10\\%$ are now consistent with the whole $2\\sigma$ range of $\\Delta a_{\\mu}^{exp}$. \n However, in contrast to the case with $\\mathcal{Q}=1$, consistency of $R_{\\mu}= 1\\pm 10\\%$ with $\\Delta a_{\\mu}$ necessarily implies a nonzero value of $d_{\\mu}$. In fact, the consistency sharply requires values of $d_{\\mu}\\simeq 2.7-3.4\\times10^{-22} \\,{\\rm e\\cdot cm}$ ($\\mathcal{Q}=3$) and $d_{\\mu}\\simeq 3.6-5.1\\times10^{-22} \\,{\\rm e\\cdot cm}$ ($\\mathcal{Q}=5$), which are within the expected sensitivity of future measurements. Thus, for these scenarios, the correlation of three observables requires deviations from SM predictions either in $R_{\\mu}$ or $d_{\\mu}$ that are observable in near future. \n\n\n\nModels where the SM Higgs acts as only a single component of an extended Higgs sector participating in EWSB, such as in a 2HDM, also fall into the class of tree models. However, the mixing in the extended Higgs sector will generically introduce an additional free parameter. 
In the case of a 2HDM type-II, it is the ratio of vacuum expectation values of the two Higgs doublets, $\\tan\\beta$. From the results of Refs.~\\cite{Dermisek:2021ajd,Dermisek:2021mhi} we can find that the modification to Eq.~\\ref{eq:tree_X_SM}, assuming a common mass for all new particles, becomes\n\\begin{equation}\nk=\\frac{64\\pi^{2}}{\\mathcal{Q}(1+\\tan^{2}\\beta)},\n\\label{eq:tree_X_2HDM}\n\\end{equation}\nwhich remains a very good approximation for arbitrary splitting between the masses of new leptons and similar or smaller Higgs masses compared to masses of new leptons.\n For low $\\tan\\beta$ the results are similar as for the SM extensions with VLs discussed above. However, as $\\tan\\beta$ increases, much smaller $k$ values are possible. Contours of constant $R_{h\\to \\mu^+\\mu^-} = 1$ for a few representative choices of smaller $k$ are shown in Fig.~\\ref{fig:SM_ellipse} (bottom). We also show corresponding $\\pm 10\\%$ and $\\pm 1\\%$ regions for cases when the region does not extend all the way to $d_{\\mu} = 0$ in the $1\\sigma$ range of $\\Delta a_{\\mu}$. For the 2HDM type-II extended with vectorlike leptons with the same quantum numbers as SM leptons, $\\mathcal{Q}=1$, the plotted values of $k$ correspond to $ \\tan \\beta \\simeq 5, 8, 14, 18$ and 25; while the 3 cases in the middle plot correspond to $ \\tan \\beta \\simeq 0, 1.4$ and 2 (with the first case not being physical).\n \n \n %\n\n\nThe loop models with two new fermions and 1 scalar (FFS) or one new fermion and 2 scalars (SSF) represent infinite classes of models as the required couplings alone do not completely determine the quantum numbers of new particles. In this case the $k$ factor is directly linked to the coupling responsible for the chiral enhancement, $\\lambda_{YZ}$, see Fig.~\\ref{fig:diags}. For the FFS models and, in the limit of a common mass of all new particles, we find\n\\begin{equation}\nk=\\frac{4}{\\mathcal{Q}}|\\lambda_{YZ}|^{2}.\n\\label{eq:loop_Q}\n\\end{equation}\nFor SSF models the $Y-Z-H$ coupling $A_{YZ}$ is dimensionful and in the above formula $|\\lambda_{YZ}|^2$ should be replaced by $|A_{YZ}|^{2}\/M^{2}$ where $M$ is the mass of new particles. The $\\mathcal{Q}$ factor is determined by the charges of new particles, and it can be obtained, for example, from the entries in Tables~1 and 2 of Ref.~\\cite{Crivellin:2021rbq}. We find that for hypercharge choices $\\pm 1\/2$ and $\\pm 1$ of the scalar in FFS models or fermion in SSF models, $|\\mathcal{Q}|$ varies between 1\/5 and 6. For small $\\mathcal{Q}$ and large $\\lambda_{YZ}$, $k$ can be as large as in the tree models, the SM+VL or 2HDM type-II at small $ \\tan \\beta$, while for larger $\\mathcal{Q}$ and\/or small $\\lambda_{YZ}$ the range of $k$ coincides with predictions of 2HDM type-II at larger $ \\tan \\beta$. For example, for $\\mathcal{Q}= 1$, the range of $\\lambda_{YZ}$ from $0.5$ to $\\sqrt{4\\pi}$ corresponds to $k = 1$ to $\\simeq 50$.\n\n\nFrom Fig.~\\ref{fig:SM_ellipse} we see that as $k$ is decreasing from the values typical for SM+VLs, the consistency of $\\Delta a_{\\mu}$ with $R_{h\\to \\mu^+\\mu^-} = 1\\pm 10\\%$ requires larger $|d_{\\mu}|$. However, the range of predicted $|d_{\\mu}|$ is also growing, and at some point, for $k \\lesssim 20$, it extends to $d_{\\mu} = 0$. Further decreasing $k$ to about 2, even the $R_{h\\to \\mu^+\\mu^-} = 1 \\pm 1\\%$ range extends to $d_{\\mu} = 0$. 
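To make these numbers concrete, Eq.~\\ref{eq:ellipse} can be evaluated directly; the following minimal numerical sketch (not from the original text) uses the central value $\\Delta a_{\\mu}=2.51\\times 10^{-9}$, $v=174$ GeV and the muon mass $m_{\\mu}\\simeq 0.106$ GeV, with three illustrative choices of $k$:
\\begin{verbatim}
import math

HBARC_CM = 1.9733e-14            # GeV*cm: converts 1/GeV to cm
M_MU, V, DA_MU = 0.1057, 174.0, 2.51e-9

def omega(k):
    return M_MU**2 / (k * V**2)

def R_hmumu(k, d_mu_ecm):
    # Eq. (eq:ellipse): R as a function of k and d_mu (in e*cm), at Delta a_mu = DA_MU
    w = omega(k)
    d_term = (M_MU * d_mu_ecm / HBARC_CM) / w   # m_mu*d_mu/(e*omega), dimensionless
    return (DA_MU / (2 * w) - 1.0) ** 2 + d_term ** 2

def d_mu_for_R1(k):
    # |d_mu| (in e*cm) giving R = 1; None when R(d_mu = 0) already exceeds 1
    w = omega(k)
    rest = 1.0 - (DA_MU / (2 * w) - 1.0) ** 2
    return None if rest < 0 else math.sqrt(rest) * w * HBARC_CM / M_MU

for k in (64 * math.pi ** 2, 64 * math.pi ** 2 / 3, 5.0):
    print(k, R_hmumu(k, 0.0), d_mu_for_R1(k))
# k ~ 632:  R(d_mu=0) ~ 1.3,  no real d_mu with R = 1
# k ~ 211:  R(d_mu=0) ~ 0.08, |d_mu| ~ 3.1e-22 e*cm for R = 1
# k = 5:    R(d_mu=0) ~ 0.97, |d_mu| ~ 2.5e-21 e*cm for R = 1
\\end{verbatim}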
These findings are also clearly visible in Fig.~\\ref{fig:edm_vs_k}, where we plot these contours of $R_{h\\to \\mu^+\\mu^-}$ in the $k$ -- $|d_{\\mu}|$ plane. \n\n\\begin{figure}[t]\n\\includegraphics[width=0.75\\linewidth]{edm_vs_k.pdf} \n\\caption{Future exclusion regions in the $k$ -- $|d_{\\mu}|$ plane assuming $R_{h\\to \\mu^+\\mu^-}$ is measured to be SM-like, $R_{h\\to \\mu^+\\mu^-} = 1\\pm0.1$ (red) and $R_{h\\to \\mu^+\\mu^-} = 1\\pm0.01$ (blue) assuming $\\Delta a_{\\mu}$ within $1\\sigma$ of the measured value. The regions would extend to the solid lines with the same color if the central value of $\\Delta a_{\\mu}$ was assumed. The green line (shaded region) corresponds to $R_{h\\to \\mu^+\\mu^-} = 1$ assuming the central value ($1\\sigma$ range) of $\\Delta a_{\\mu}$.}\n\\label{fig:edm_vs_k}\n\\end{figure}\n\nFinally, allowing for any $R_{h\\to \\mu^+\\mu^-}$ as a future possible measured value (up to the current limit), in Fig.~\\ref{fig:k} we plot contours of $k$ in the $R_{h\\to \\mu^+\\mu^-}$ -- $|d_{\\mu}|$ plane assuming the central value $\\Delta a_{\\mu}$.\nNote that there are two values of $k$ resulting in the same $R_{h\\to \\mu^+\\mu^-}$ and $|d_{\\mu}|$ except for the boundary of the shaded region. The boundary corresponds to the upper limit on possible $\\mu$EDM if $R_{h\\to \\mu^+\\mu^-} < 1$,\n\\begin{equation}\n|d_{\\mu}| \\leq \\frac{e \\,\\Delta a_{\\mu}}{2m_\\mu}\\sqrt{\\frac{R_{h\\to \\mu^+\\mu^-}}{1-R_{h\\to \\mu^+\\mu^-}}}.\n\\end{equation}\nNote that models with small $k$ can generate $d_{\\mu}$ up to the current experimental upper limit, $|d_{\\mu}|=1.8\\times 10^{-19}\\;{\\rm e\\cdot cm}$~\\cite{Muong-2:2008ebm}.\n\\begin{figure}[t]\n\\includegraphics[width=0.6\\linewidth]{k_contours_combined_v5.pdf} \n\\caption{Contours of constant $k$ in the $R_{h\\to\\mu\\mu}$ -- $|d_{\\mu}|$ plane assuming the central value $\\Delta a_{\\mu}$.}\n\\label{fig:k}\n\\end{figure}\n\n\n\n\\noindent\n{\\it Discussion and conclusions.} \nWe have seen that every model with chirally-enhanced contributions to $\\Delta a_{\\mu}$ can be parametrized by the $k$ factor that relates the dipole operator to the contribution to the muon mass and thus specifies the correlation between $\\Delta a_{\\mu}$, $d_{\\mu}$ and $R_{h\\to \\mu^+\\mu^-}$. In the SM with VLs the $k$ factor is fully fixed by quantum numbers; in similar models with extended Higgs sectors a mixing parameter will enter $k$, for example $\\tan\\beta$ in the 2HDM; and in loop models $k$ is directly related to the coupling responsible for chiral enhancement. Through this correlation large classes of models or vast ranges of model parameters can be efficiently tested.\n\nThe $R_{h\\to \\mu^+\\mu^-}$ is expected to be measured with $\\sim10\\%$ precision at the LHC and $\\sim 1\\%$ at the hadron version of the Future Circular Collider~\\cite{Abada:2019lih}. The limits on $\\mu$EDM are expected to reach $|d_{\\mu}|\\sim 1\\times 10^{-21}\\,{\\rm e\\cdot cm}$ at the Muon g-2 experiment at Fermilab~\\cite{Chislett:2016jau}, and could reach $6\\times 10^{-23}\\,{\\rm e\\cdot cm}$ at the Paul Scherrer Institute~\\cite{Adelmann:2021udj}. 
Thus near future measurements have the potential to reduce the number of SM extensions with vectorlike leptons to one specific $\\mathcal{Q}$, or rule out all of them, irrespectively of the scale of new physics or the size of couplings.\nFor 2HDM type-II, already the LHC measurement of $R_{h\\to \\mu^+\\mu^-}$ will limit $\\tan\\beta$ to $\\gtrsim 6$, and for loop models, it will limit the size of the coupling resulting in chiral enhancement, again irrespectively on other details of the model. This immediately sets the upper bound for the scale of new physics to $\\sim 18$ TeV for the 2HDM~\\cite{Dermisek:2020cod, Dermisek:2021ajd} and, for example, $\\sim14$ TeV for FFS loop models with $SU(2)$ doublets and singlets and $\\mathcal{Q} =1$. Improving the measurement of $R_{h\\to \\mu^+\\mu^-}$ to within $1\\%$ will further reduce these upper limits to $\\sim 10$ TeV and $\\sim 8$ TeV respectively. Similar reasoning can be used to obtain an upper limit on the lightest new particle in any given scenario.\nThus the correlation between $\\Delta a_{\\mu}$, $d_{\\mu}$ and $R_{h\\to \\mu^+\\mu^-}$ can test most efficiently the high end of the spectrum that is far beyond the reach of currently envisioned future colliders.\n\n\nThe discussion in this letter has been limited to couplings necessary to generate a chirally-enhanced contribution to $\\Delta a_{\\mu}$. Effects of other possible dimension 6 operators involving muon fields, SM Higgs doublet and derivatives can be absorbed into the definition of the muon Yukawa coupling, $y_\\mu$, or are parametrically suppressed by $m_\\mu\/v$, similar as in the discussion in Ref.~\\cite{Dermisek:2021mhi}. However additional couplings in a given model, not contributing to $\\Delta a_{\\mu}$ might in principle enter the formula for $k$ (or even generate a complex $k$ parameter), for example scalar quartic couplings involving new scalars and the SM Higgs doublet~\\cite{Thalapillil:2014kya,Crivellin:2021rbq}. In such cases $R_{h\\to \\mu^+\\mu^-}$ still carves an ellipse in the plane of dipole moments but with the center shifted (to non-zero $d_{\\mu}$ for complex $k$) depending on the size of additional couplings. Furthermore, in certain models there can be sizable contributions to $\\Delta a_{\\mu}$ from other operators due to renormalization group mixing, for example from four-fermion operators in models with leptoquarks~\\cite{Gherardi:2020det,Aebischer:2021uvt}. However, the couplings required also necessarily generates $C_{\\mu H}$ with the same phase resulting in a shift of $k$ by a real number. These effects, together with the general study of a complete set of dimension 6 operators, will be discussed elsewhere~\\cite{Dermisek}.\n\n\n\n\n\\vspace{0.3cm}\n\\acknowledgments\nThe work of R.D. was supported in part by the U.S. Department of Energy under Award No. {DE}-SC0010120. TRIUMF receives federal funding via a contribution agreement with the National Research Council of Canada.\n\n\n\n\n\n\n\n\n\n\n\n\\vspace{0.05cm}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction.}\n\n${\\rm CAT}(0)$ cube complexes provide an ideal setting to study non-positive curvature in a discrete context. On the one hand, their geometry is sufficiently rich to ensure that large classes of groups admit interesting actions on them. 
Right-angled Artin groups, hyperbolic or right-angled Coxeter groups, hyperbolic $3$-manifold groups, random groups at sufficiently low density all act geometrically on ${\\rm CAT}(0)$ cube complexes \\cite{Niblo-Reeves, Bergeron-Wise, Ollivier-Wise}, to name just a few examples. Moreover, every finitely generated group with a codimension one subgroup admits an action on a ${\\rm CAT}(0)$ cube complex with unbounded orbits \\cite{Sageev,Gerasimov,Niblo-Roller}.\n\nOn the other hand, the geometry of finite dimensional ${\\rm CAT}(0)$ cube complexes is much better understood than general ${\\rm CAT}(0)$ geometry, even with no local compactness assumption. For instance, groups acting properly on finite dimensional ${\\rm CAT}(0)$ cube complexes are known to have finite asymptotic dimension \\cite{Wright}, to satisfy the Von Neumann-Day dichotomy and, if torsion-free, even the Tits alternative \\cite{Sageev-Wise, CS}. It is not known whether the same are true for general ${\\rm CAT}(0)$ groups.\n\nThree features are particularly relevant in the study of ${\\rm CAT}(0)$ cube complexes. First of all, they are endowed with a metric of non-positive curvature. Secondly, the $1$-skeleton becomes a median graph when endowed with its intrinsic path-metric; this means that, for any three vertices, there exists a unique vertex that lies between any two of them. This property is closely related to the existence of hyperplanes and allows for a combinatorial approach that is not available in general ${\\rm CAT}(0)$ spaces. Finally, cube complexes are essentially discrete objects, in that their geometry is fully encoded by the $0$-skeleton. In particular, the automorphism group of a cube complex is totally disconnected.\n\nIt is natural to wonder how much in the theory of ${\\rm CAT}(0)$ cube complexes can be extended to those spaces that share the second feature: \\emph{median spaces}. These provide a simultaneous generalisation of ${\\rm CAT}(0)$ cube complexes and real trees; for an introduction, see e.g.~\\cite{Nica-thesis, CDH, Bow4} and references therein. The class of median spaces is closed under ultralimits and also includes all $L^1$ spaces, so certain pathologies are bound to arise in this context.\n\nThe notion of \\emph{rank} of a median space was introduced in \\cite{Bow1}; for ${\\rm CAT}(0)$ cube complexes it coincides with the usual concept of dimension. It was recently shown that connected, finite rank median spaces are also endowed with a \\emph{canonical} ${\\rm CAT}(0)$ metric \\cite{Bow4}. Thus it seems that, in finite rank, the only true difference between ${\\rm CAT}(0)$ cube complexes and general median spaces lies in the discreteness of the former. Most of the results of the present paper support this analogy.\n\nWe will restrict our attention to \\emph{finite rank} median spaces; this is essential when considering the Tits alternative, as every amenable group admits a proper action on an infinite rank median space \\cite{Cherix-Martin-Valette,CDH}. Nontrivial examples of finite rank median spaces arise, for instance, from asymptotic cones of coarse median spaces \\cite{Bow1,Zeidler}. 
The Cayley graphs of many interesting groups are coarse median: hyperbolic groups, cubulated groups \\cite{Hagen-Susse}, fundamental groups of closed irreducible 3-manifolds not modelled on Nil or Sol, mapping class groups \\cite{Behrstock-Minsky,Behrstock-Drutu-Sapir, Behrstock-Drutu-Sapir2} and, more generally, all groups that are HHS \\cite{Bow3,HHS,HHS2}.\n\nOur main result is the following version of the Tits alternative:\n\n\\begin{thmA}\nLet $X$ be a complete, finite rank median space with an isometric action $\\Gamma\\curvearrowright X$. Suppose that $\\Gamma$ has no nonabelian free subgroups.\n\\begin{itemize}\n\\item If the action is free, $\\Gamma$ is virtually finite-by-abelian. If moreover $X$ is connected or $\\Gamma$ is finitely generated, then $\\Gamma$ is virtually abelian.\n\\item If the action is (metrically) proper, $\\Gamma$ is virtually (locally finite)-by-abelian.\n\\item If all point stabilisers are amenable, $\\Gamma$ is amenable.\n\\end{itemize}\n\\end{thmA}\n\nWe remark that every countable, locally finite group admits a proper action on a simplicial tree, see Example~II.7.11 in \\cite{BH}. \n\nA discrete group $\\Gamma$ has the Haagerup property if and only if it admits a proper action on a median space \\cite{Cherix-Martin-Valette,CDH}. However, Theorem~A shows that $\\Gamma$ needs not have a proper (or free) action on a \\emph{finite rank} median space; for instance, it suffices to consider any torsion-free amenable group that is not virtually abelian. \n\nIn fact, there even exist groups $\\Gamma$ with the Haagerup property, such that every action of $\\Gamma$ on a complete, finite rank median space has a global fixed point; a simple example is provided by irreducible lattices in $SL_2\\mathbb{R}\\times SL_2\\mathbb{R}$ \\cite{Fioravanti3}.\n\nOur proof of Theorem~A follows the same broad outline as the corresponding result for ${\\rm CAT}(0)$ cube complexes, as it appears in \\cite{CS,CFI}. Given an action $\\Gamma\\curvearrowright X$, either $\\Gamma$ has a finite orbit within a suitable compactification of $X$, or the space $X$ exhibits a certain tree-like behaviour. In the former case, one obtains a ``big'' abelian quotient of $\\Gamma$; in the latter, one can construct many nonabelian free subgroups with a ping-pong argument.\n\nWe remark that the first proof of the Tits alternative for ${\\rm CAT}(0)$ cube complexes was due to M.~Sageev and D.~T.~Wise \\cite{Sageev-Wise} and follows a completely different strategy. It relies on the Algebraic Torus Theorem \\cite{Dunwoody-Swenson} and a key fact is that, when a group $\\Gamma$ acts nontrivially on a ${\\rm CAT}(0)$ cube complex, some hyperplane stabiliser is a codimension one subgroup of $\\Gamma$. Unfortunately, this approach is bound to fail when dealing with median spaces: there is an analogous notion of ``hyperplane'' (see Section~\\ref{prelims}), but all hyperplane stabilisers could be trivial, even if the group $\\Gamma$ is one-ended. This happens for instance when a surface group acts freely on a real tree \\cite{Morgan-Shalen}.\n\nMany of the techniques of \\cite{CS} have proven extremely useful in the study of ${\\rm CAT}(0)$ cube complexes, for instance in \\cite{Nevo-Sageev,Fernos,CFI,Kar-Sageev,Kar-Sageev2} to name just a few examples. We extend some of this machinery to median spaces, in particular what goes by the name of ``flipping'', ``skewering'' and ``strong separation''; see Theorems~B and~D below. 
We will exploit these results in \\cite{Fioravanti3} to obtain a superrigidity result analogous to the one in \\cite{CFI}.\n\nTo state the rest of the results of the present paper, we need to introduce some terminology. In \\cite{Fioravanti1} we defined a compactification $\\overline X$ of a complete, finite rank median space $X$. We refer to it as the \\emph{Roller compactification} of $X$; for ${\\rm CAT}(0)$ cube complexes, it consists precisely of the union of $X$ and its Roller boundary $\\partial X$, in the usual sense (see \\cite{BCGNW, Nevo-Sageev} for a definition). \n\nRoller compactifications of general median spaces strongly resemble Roller compactifications of cube complexes. For instance, the space $\\overline X$ has a natural structure of median algebra and $\\partial X$ is partitioned as a union of median spaces (``components'' in our terminology), whose rank is strictly lower than that of $X$. These properties will be essential when extending the machinery of \\cite{CS} to finite rank median spaces. For ${\\rm CAT}(0)$ cube complexes, our approach is slightly different from that of \\cite{CS}, in that we work with Roller boundaries rather than visual boundaries.\n\nWe say that an isometric action $\\Gamma\\curvearrowright X$ is \\emph{Roller elementary} if $\\Gamma$ has at least one finite orbit in $\\overline X$. An action $\\Gamma\\curvearrowright X$ is \\emph{Roller minimal} if $\\Gamma$ leaves invariant no proper, closed, convex subset of $\\overline X$ and $X$ is not a single point; for the notion of convexity in the median algebra $\\overline X$, see Section~\\ref{prelims}. In ${\\rm CAT}(0)$ cube complexes, Roller minimal actions are precisely essential actions (in the terminology of \\cite{CS}) with no fixed point in the visual boundary.\n\nLike ${\\rm CAT}(0)$ cube complexes, median spaces are endowed with a canonical collection of halfspaces $\\mathscr{H}$. We say that $\\Gamma\\curvearrowright X$ is \\emph{without wall inversions} if there exists no $g\\in\\Gamma$ such that $g\\mathfrak{h}=\\mathfrak{h}^*$ for some $\\mathfrak{h}\\in\\mathscr{H}$. Here $\\mathfrak{h}^*$ denotes the complement $X\\setminus\\mathfrak{h}$ of the halfspace $\\mathfrak{h}$. Actions without wall inversions are a generalisation of actions on $0$-skeleta of simplicial trees without edge inversions. We remark that, perhaps counterintuitively, every action on a \\emph{connected} median space is automatically without wall inversions (see Proposition~\\ref{all about halfspaces} below); this includes all actions on real trees.\n\nWe have the following analogue of the Flipping and Double Skewering Lemmata from \\cite{CS}.\n\n\\begin{thmB}\nLet $X$ be a complete, finite rank median space with a Roller minimal action $\\Gamma\\curvearrowright X$ without wall inversions.\n\\begin{itemize}\n\\item For ``almost every'' halfspace $\\mathfrak{h}$, there exists $g\\in\\Gamma$ with $\\mathfrak{h}^*\\subsetneq g\\mathfrak{h}$ and $d(g\\mathfrak{h}^*,\\mathfrak{h}^*)>0$.\n\\item For ``almost all'' halfspaces $\\mathfrak{h},\\mathfrak{k}$ with $\\mathfrak{h}\\subseteq\\mathfrak{k}$ there exists $g\\in\\Gamma$ with $g\\mathfrak{k}\\subsetneq\\mathfrak{h}\\subseteq\\mathfrak{k}$ and $d(g\\mathfrak{k}, \\mathfrak{h}^*)>0$.\n\\end{itemize}\n\\end{thmB}\n\nPositivity of distances is automatic in cube complexes, but not in general median spaces. Theorem~B cannot be proved for \\emph{all} halfspaces; counterexamples already appear in real trees, see Section~\\ref{CS-like machinery}. 
The notion of ``almost every'' should be understood with respect to a certain measure on $\\mathscr{H}$; see Section~\\ref{prelims} below for a definition.\n\nFor every complete, finite rank median space $X$, we can define a \\emph{barycentric subdivision} $X'$; we study these in Section~\\ref{splitting the atom}. The space $X'$ is again complete and median of the same rank. There is a canonical isometric embedding $X\\hookrightarrow X'$ and every action on $X$ extends to an action without wall inversions on $X'$. Thus, the assumption that $\\Gamma\\curvearrowright X$ be without wall inversions in Theorem~B is not restrictive. \n\nRoller minimality may seem a strong requirement, but the structure of Roller compactifications makes Roller minimal actions easy to come by:\n\n\\begin{propC}\nLet $X$ be a complete, finite rank median space with an isometric action $\\Gamma\\curvearrowright X$. If $\\Gamma$ fixes no point of $\\overline X$, there exists a $\\Gamma$-invariant component $Z\\subseteq\\overline X$ and a closed, convex, $\\Gamma$-invariant subset $C\\subseteq Z$ such that the action $\\Gamma\\curvearrowright C$ is Roller minimal.\n\\end{propC}\n\nWe remark that actions with a global fixed point in $\\overline X$ have a very specific structure, see Theorem~F below.\n\nTheorem~B allows us to construct free groups of isometries, as soon as we have ``tree-like'' configurations of halfspaces inside the median space $X$. More precisely, we need \\emph{strongly separated} pairs of halfspaces; in the case of ${\\rm CAT}(0)$ cube complexes, these were introduced in \\cite{Behrstock-Charney} and used in \\cite{CS} to characterise irreducibility. We prove:\n\n\\begin{thmD}\nLet $X$ be a complete, finite rank median space admitting a Roller minimal action $\\Gamma\\curvearrowright X$ without wall inversions. The median space splits as a nontrivial product if and only if no two halfspaces are strongly separated.\n\\end{thmD}\n\nFollowing well-established techniques \\cite{CS}, Theorems~B and~D yield:\n\n\\begin{thmE}\nLet $X$ be a complete, finite rank median space with an isometric action $\\Gamma\\curvearrowright X$. Either $\\Gamma$ contains a nonabelian free subgroup or the action is Roller elementary.\n\\end{thmE}\n\nThe last step in the proof of Theorem~A consists in the study of Roller elementary actions; we employ the same strategy as the appendix to \\cite{CFI}. To this end, we need to extend the notion of \\emph{unidirectional boundary set (UBS)} to median spaces. \n\nIn ${\\rm CAT}(0)$ cube complexes, UBS's were introduced in \\cite{Hagen}. Up to a certain equivalence relation, they define the simplices in Hagen's simplicial boundary and provide a useful tool to understand Tits boundaries, splittings and divergence \\cite{Hagen, Behrstock-Hagen}. They can be thought of as a generalisation of embedded cubical orthants. \n\nWe introduce UBS's in median spaces and use them to prove the following result. A more careful study of UBS's will be carried out in \\cite{Fioravanti3}, where we show that Roller elementarity is equivalent to the vanishing of a certain cohomology class.\n\n\\begin{thmF}\nLet $X$ be complete and finite rank. Suppose that $\\Gamma\\curvearrowright X$ fixes a point in the Roller boundary of $X$. A finite-index subgroup $\\Gamma_0\\leq\\Gamma$ fits in an exact sequence\n\\[1\\longrightarrow N\\longrightarrow \\Gamma_0 \\longrightarrow\\mathbb{R}^r,\\]\nwhere $r=\\text{rank}(X)$ and every finitely generated subgroup of $N$ has an orbit in $X$ with at most $2^r$ elements. 
If $X$ is connected, every finitely generated subgroup of $N$ fixes a point.\n\\end{thmF}\n\n{\\bf Structure of the paper.} In Section~\\ref{preliminaries} we review the basic theory of median spaces and median algebras, with a special focus on the results of \\cite{Fioravanti1}. We use halfspaces to characterise when a median space splits as a nontrivial product. We study barycentric subdivisions and a similar construction that allows us to canonically embed general median spaces into connected ones. Section~\\ref{elliptic isometries} is concerned with groups of elliptic isometries. In Section~\\ref{stab xi} we study UBS's and prove Theorem~F. In Section~\\ref{CS-like machinery} we introduce Roller elementarity and Roller minimality; we prove Proposition~C and Theorems~A,~B and~D. Finally, Section~\\ref{facing triples} is devoted to constructing free subgroups of isometry groups; we prove Theorem~E there.\n\n{\\bf Acknowledgements.} The author warmly thanks Brian Bowditch, Pier\\-re-Emmanuel Caprace, Indira Chatterji, Thomas Delzant, Cornelia Dru\\c tu, Talia Fern\\'os, Mark Hagen for many helpful conversations and Anthony Genevois for his comments on an earlier version. The author expresses special gratitude to Cornelia Dru\\c tu for her excellent supervision and to Talia Fern\\'os for her encouragement to pursue this project. \n\nThis work was undertaken at the Mathematical Sciences Research Institute in Berkeley during the Fall 2016 program in Geometric Group Theory, where the author was supported by the National Science Foundation under Grant no.~DMS-1440140 and by the GEAR Network. Part of this work was also carried out at the Isaac Newton Institute for Mathematical Sciences, Cambridge, during the programme ``Non-positive curvature, group actions and cohomology'' and was supported by EPSRC grant no.~EP\/K032208\/1. The author was also supported by the Clarendon Fund and the Merton Moussouris Scholarship.\n\n\n\\section{Preliminaries.}\\label{preliminaries}\n\n\\subsection{Median spaces and median algebras.}\\label{prelims}\n\nLet $X$ be a metric space. A finite sequence of points $(x_k)_{1\\leq k\\leq n}$ is a \\emph{geodesic} if $d(x_1,x_n)=d(x_1,x_2)+...+d(x_{n-1},x_n)$. The interval $I(x,y)$ between $x,y\\in X$ is the set of points lying on a geodesic from $x$ to $y$. We say that $X$ is a \\emph{median space} if, for all $x,y,z\\in X$, the intersection $I(x,y)\\cap I(y,z)\\cap I(z,x)$ consists of a single point, which we denote by $m(x,y,z)$. In this case, the map $m\\colon X^3\\rightarrow X$ endows $X$ with a structure of \\emph{median algebra}, see e.g.~\\cite{CDH, Bow1, Roller} for a definition and various results.\n\nIn a median algebra $(M,m)$, the \\emph{interval} $I(x,y)$ between $x,y\\in M$ is the set of points $z\\in M$ with $m(x,y,z)=z$; this is equivalent to the definition above if $M$ arises from a median space. A subset $C\\subseteq M$ is \\emph{convex} if $I(x,y)\\subseteq C$ whenever $x,y\\in C$. Every collection of pairwise-intersecting convex subsets of a median algebra has the finite intersection property; this is Helly's Theorem, see e.g.~Theorem~2.2 in \\cite{Roller}.\n\nA \\emph{halfspace} is a convex subset $\\mathfrak{h}\\subseteq M$ whose complement $\\mathfrak{h}^*:=M\\setminus\\mathfrak{h}$ is also convex. We will refer to the unordered pair $\\mathfrak{w}:=\\{\\mathfrak{h},\\mathfrak{h}^*\\}$ as a \\emph{wall}; we say that $\\mathfrak{h}$ and $\\mathfrak{h}^*$ are the \\emph{sides} of $\\mathfrak{w}$. 
The wall $\\mathfrak{w}$ separates subsets $A\\subseteq M$ and $B\\subseteq M$ if $A\\subseteq\\mathfrak{h}$ and $B\\subseteq\\mathfrak{h}^*$ or vice versa. The wall $\\mathfrak{w}$ is \\emph{contained} in a halfspace $\\mathfrak{k}$ if $\\mathfrak{h}\\subseteq\\mathfrak{k}$ or $\\mathfrak{h}^*\\subseteq\\mathfrak{k}$; we say, with a slight abuse of terminology, that $\\mathfrak{w}$ is contained in $\\mathfrak{k}_1\\cap...\\cap\\mathfrak{k}_k$ if $\\mathfrak{w}$ is contained in $\\mathfrak{k}_i$ for each $i$.\n\nThe sets of halfspaces and walls of $M$ are denoted $\\mathscr{H}(M)$ and $\\mathscr{W}(M)$ respectively, or simply $\\mathscr{H}$ and $\\mathscr{W}$. Given subsets $A,B\\subseteq M$, we write $\\mathscr{H}(A|B)$ for the set of halfspaces with $B\\subseteq\\mathfrak{h}$ and $A\\subseteq\\mathfrak{h}^*$ and we set $\\sigma_A:=\\mathscr{H}(\\emptyset|A)$; we will not distinguish between $x\\in M$ and the singleton $\\{x\\}$. If $A,B$ are convex and disjoint, we have $\\mathscr{H}(A|B)\\neq\\emptyset$, see e.g.~Theorem~2.7 in \\cite{Roller}; we will refer to the sets $\\mathscr{H}(x|y)$, $x,y\\in M$, as \\emph{halfspace-intervals}.\n\nA \\emph{pocset} is a poset equipped with an order-reversing involution $*$ such that every element $a$ is incomparable with $a^*$. Ordering $\\mathscr{H}$ by inclusion, we obtain a pocset where the involution is given by taking complements. If $E\\subseteq\\mathscr{H}$, we write $E^*:=\\{\\mathfrak{h}^*\\mid\\mathfrak{h}\\in E\\}$. We say that $\\mathfrak{h},\\mathfrak{k}\\in\\mathscr{H}$ are \\emph{transverse} if any two elements in the set $\\{\\mathfrak{h},\\mathfrak{h}^*,\\mathfrak{k},\\mathfrak{k}^*\\}$ are incomparable; equivalently, the intersections $\\mathfrak{h}\\cap\\mathfrak{k}$, $\\mathfrak{h}\\cap\\mathfrak{k}^*$, $\\mathfrak{h}^*\\cap\\mathfrak{k}$, $\\mathfrak{h}^*\\cap\\mathfrak{k}^*$ are all nonempty subsets of $M$. Two walls are transverse if they arise from transverse halfspaces. \n\nMaximal subsets of $\\mathscr{H}$ consisting of pairwise-intersecting halfspaces are termed \\emph{ultrafilters}; a set of pairwise-intersecting halfspaces is an ultrafilter if and only if it contains a side of every wall of $M$. For every $x\\in M$, the subset $\\sigma_x\\subseteq\\mathscr{H}$ is an ultrafilter. A subset $\\Omega\\subseteq\\mathscr{H}$ is said to be \\emph{inseparable} if it contains all $\\mathfrak{j}\\in\\mathscr{H}$ such that $\\mathfrak{h}\\subseteq\\mathfrak{j}\\subseteq\\mathfrak{k}$, for $\\mathfrak{h},\\mathfrak{k}\\in\\Omega$. The \\emph{inseparable closure} of a subset $\\Omega\\subseteq\\mathscr{H}$ is the smallest inseparable set containing $\\Omega$; it coincides the union of the sets $\\mathscr{H}(\\mathfrak{k}^*|\\mathfrak{h})$ as $\\mathfrak{h},\\mathfrak{k}$ vary in $\\Omega$.\n\nThe set $\\{-1,1\\}$ has a unique structure of median algebra. Considering its median map separately in all coordinates, we endow $\\{-1,1\\}^k$ with a median-algebra structure for each $k\\in\\mathbb{N}$; we will refer to it as a \\emph{$k$-hypercube}. The \\emph{rank} of $M$ is the maximal $k\\in\\mathbb{N}$ such that we can embed a $k$-hypercube into $M$. By Proposition~6.2 in \\cite{Bow1} this is the same as the maximal cardinality of a set of pairwise-transverse halfspaces. Note that $M$ has rank zero if and only if it consists of a single point. 
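As a toy illustration of the last two notions (a small sketch, not from the original text): in the $k$-hypercube, the halfspaces attached to distinct coordinates are pairwise transverse, and by the criterion quoted above this family realises the rank $k$. This can be checked directly:
\\begin{verbatim}
from itertools import product

k = 3  # work in the k-hypercube {-1, 1}^k

def median(x, y, z):
    # the median map of the hypercube: coordinatewise majority vote
    return tuple(1 if a + b + c > 0 else -1 for a, b, c in zip(x, y, z))

cube = [v for v in product((-1, 1), repeat=k)]

# for each coordinate i, the halfspace {x_i = +1}; its complement is {x_i = -1}
halfspace = [frozenset(v for v in cube if v[i] == 1) for i in range(k)]

def transverse(h, g):
    # h, g transverse <=> the intersections h&g, h&g*, h*&g, h*&g* are all nonempty
    hc, gc = set(cube) - h, set(cube) - g
    return all(a & b for a in (h, hc) for b in (g, gc))

print(all(transverse(halfspace[i], halfspace[j])
          for i in range(k) for j in range(i + 1, k)))               # True

# sanity check: m(x, y, z) always lies in the interval I(x, y)
x, y, z = (-1, -1, -1), (-1, 1, 1), (1, -1, 1)
m = median(x, y, z)
print(all(min(a, b) <= c <= max(a, b) for a, b, c in zip(x, y, m)))  # True
\\end{verbatim}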
The following is immediate from Ramsey's Theorem \\cite{Ramsey}:\n\n\\begin{lem}\\label{Ramsey}\nIf $M$ has finite rank and $\\sigma_1,\\sigma_2\\subseteq\\mathscr{H}$ are two ultrafilters, every infinite subset of $\\sigma_1\\setminus\\sigma_2$ contains an infinite subset that is totally ordered by inclusion.\n\\end{lem}\n\nIf $C\\subseteq M$ and $x\\in M$, we say that $y\\in C$ is a \\emph{gate} for $(x,C)$ if $y\\in I(x,z)$ for all $z\\in C$. The set $C$ is \\emph{gate-convex} if a gate exists for every point of $M$; in this case, gates are unique and define a \\emph{gate-projection} $\\pi_C\\colon M\\rightarrow C$. If $C$ is gate-convex, we have $\\mathscr{H}(x|\\pi_C(x))=\\mathscr{H}(x|C)$ for every $x\\in M$. Every interval $I(x,y)$ is gate-convex with projection $z\\mapsto m(x,y,z)$. Gate-convex sets are always convex, but the converse does not hold in general. We record a few more properties of gate-convex subsets in the following result; see \\cite{Fioravanti1} for proofs.\n\n\\begin{prop}\\label{all about gates}\nLet $C,C'\\subseteq M$ be gate-convex.\n\\begin{enumerate}\n\\item The sets $\\{\\mathfrak{h}\\in\\mathscr{H}(M)\\mid\\mathfrak{h}\\cap C\\neq\\emptyset,~\\mathfrak{h}^*\\cap C\\neq\\emptyset\\}$, ${\\{\\pi_C^{-1}(\\mathfrak{h})\\mid\\mathfrak{h}\\in\\mathscr{H}(C)\\}}$ and $\\mathscr{H}(C)$ are all naturally in bijection.\n\\item There exists a \\emph{pair of gates}, i.e.~a pair $(x,x')$ of points $x\\in C$ and $x'\\in C'$ such that $\\pi_C(x')=x$ and $\\pi_{C'}(x)=x'$. In particular, we have ${\\mathscr{H}(x|x')=\\mathscr{H}(C|C')}$.\n\\item If $C\\cap C'\\neq\\emptyset$, we have $\\pi_C(C')=C\\cap C'$ and $\\pi_C\\o\\pi_{C'}=\\pi_{C'}\\o\\pi_C$. In particular, if $C'\\subseteq C$, we have $\\pi_{C'}=\\pi_{C'}\\o\\pi_C$.\n\\end{enumerate}\n\\end{prop}\n\nA \\emph{topological median algebra} is a median algebra endowed with a Hausdorff topology so that $m$ is continuous. Every median space $X$ is a topological median algebra, since $m\\colon X^3\\rightarrow X$ is $1$-Lipschitz if we consider the $\\ell^1$ metric on $X^3$. Every gate-convex subset of a median space is closed and convex; the converse holds if $X$ is complete. Gate-projections are $1$-Lipschitz. If ${C,C'\\subseteq X}$ are gate-convex and $X$ is complete, points $x\\in C$ and $x'\\in C'$ form a pair of gates if and only if $d(x,x')=d(C,C')$, see Lemma~2.9 in \\cite{Fioravanti1}; this holds in particular when $C'$ is a singleton. \n\nWe will only consider complete, finite rank median space in the rest of the paper. The following is Proposition~B in \\cite{Fioravanti1}.\n\n\\begin{prop}\\label{all about halfspaces}\nEvery halfspace is either open or closed (possibly both). If $\\mathfrak{h}_1\\supsetneq ... \\supsetneq\\mathfrak{h}_k$ is a chain of halfspaces with $\\overline{\\mathfrak{h}_1^*}\\cap\\overline{\\mathfrak{h}_k}\\neq\\emptyset$, we have $k\\leq 2\\cdot\\text{rank}(X)$.\n\\end{prop}\n\n\\begin{lem}\\label{intersections of halfspaces}\nLet $\\mathcal{C}\\subseteq\\mathscr{H}$ be totally ordered by inclusion and suppose that the halfspaces in $\\mathcal{C}$ are at uniformly bounded distance from a point $x\\in X$. The intersection of all halfspaces in $\\mathcal{C}$ is nonempty. \n\\end{lem}\n\\begin{proof}\nIt suffices to consider the case when $\\mathcal{C}$ does not have a minimum; by Lemma~2.27 in \\cite{Fioravanti1}, we can find a cofinal subset $\\{\\mathfrak{h}_n\\}_{n\\geq 0}$ with $\\mathfrak{h}_{n+1}\\subsetneq\\mathfrak{h}_n$. 
Let $x_n$ be the gate-projection of $x$ to $\\overline{\\mathfrak{h}_n}$; by Proposition~\\ref{all about gates}, the sequence $(x_n)_{n\\geq 0}$ is Cauchy, hence it converges to a point $\\overline x\\in X$ that lies in $\\overline{\\mathfrak{h}_n}$ for all $n\\geq 0$. If $\\overline x$ did not lie in every $\\mathfrak{h}_n$, there would exist $N\\geq 0$ with $\\overline x\\in\\overline{\\mathfrak{h}_n}\\setminus\\mathfrak{h}_n$ for all $n\\geq N$. In particular, $\\overline{\\mathfrak{h}_n^*}\\cap\\overline{\\mathfrak{h}_m}\\neq\\emptyset$ for all $m,n\\geq N$ and this would violate Proposition~\\ref{all about halfspaces}.\n\\end{proof}\n\nEndowing $\\mathbb{R}^n$ with the $\\ell^1$ metric, we obtain a median space. Like $\\mathbb{R}^n$, a rich class of median spaces also has an analogue of the $\\ell^{\\infty}$ metric (see \\cite{Bow5}) and of the $\\ell^2$ metric (see \\cite{Bow4}). We record the following result for later use.\n\n\\begin{thm}[\\cite{Bow4}]\\label{CAT(0) metric}\nIf $X$ is connected, it admits a bi-Lipschitz-equivalent ${\\rm CAT}(0)$ metric that is canonical in the sense that:\n\\begin{itemize}\n\\item every isometry of the median metric of $X$ is also an isometry for the ${\\rm CAT}(0)$ metric;\n\\item the ${\\rm CAT}(0)$ geodesic between $x$ and $y$ is contained in $I(x,y)$; in particular, subsets that are convex for the median metric are also convex for the ${\\rm CAT}(0)$ metric.\n\\end{itemize}\n\\end{thm}\n\nA \\emph{pointed measured pocset (PMP)} is a 4-tuple $\\left(\\mathscr{P},\\mathscr{D},\\eta,\\sigma\\right)$, where $\\mathscr{P}$ is a pocset, $\\sigma\\subseteq\\mathscr{P}$ is an ultrafilter, $\\mathscr{D}$ is a $\\sigma$-algebra of subsets of $\\mathscr{P}$ and $\\eta$ is a measure defined on $\\mathscr{D}$. Let $\\overline{\\mathcal{M}}\\left(\\mathscr{P},\\mathscr{D},\\eta\\right)$ be the set of all ultrafilters ${\\sigma'\\subseteq\\mathscr{P}}$ with $\\sigma'\\triangle\\sigma\\in\\mathscr{D}$, where we identify sets with $\\eta$-null symmetric difference. We endow this space with the extended metric $d(\\sigma_1,\\sigma_2):=\\eta(\\sigma_1\\triangle\\sigma_2)$. The set of points at finite distance from $\\sigma$ is a median space, which we denote $\\mathcal{M}\\left(\\mathscr{P},\\mathscr{D},\\eta,\\sigma\\right)$; see Section~2.2 in \\cite{Fioravanti1}.\n\nLet $X$ be a complete, finite rank median space. In \\cite{Fioravanti1} we constructed a semifinite measure $\\widehat{\\nu}$ defined on a $\\sigma$-algebra $\\widehat{\\mathscr{B}}\\subseteq 2^{\\mathscr{H}}$ such that $\\widehat{\\nu}(\\mathscr{H}(x|y))=d(x,y)$, for all $x,y\\in X$. There, we referred to elements of $\\widehat{\\mathscr{B}}$ as \\emph{morally measurable sets}, but, for the sake of simplicity, we will just call them \\emph{measurable sets} here. Note that this measure space is different from the ones considered in \\cite{CDH}. \n\nEvery inseparable subset of $\\mathscr{H}$ lies in $\\widehat{\\mathscr{B}}$; in particular, all ultrafilters on $\\mathscr{H}$ are measurable. If $C,C'\\subseteq X$ are convex (or empty), the set $\\mathscr{H}(C|C')$ is measurable and $\\widehat{\\nu}(\\mathscr{H}(C|C'))=d(C,C')$. A halfspace is an atom for $\\widehat{\\nu}$ if and only if it is clopen. The space $X$ is connected if and only if $\\widehat{\\nu}$ has no atoms, in which case $X$ is geodesic. We say that a halfspace $\\mathfrak{h}$ is \\emph{thick} if both $\\mathfrak{h}$ and $\\mathfrak{h}^*$ have nonempty interior; $\\widehat{\\nu}$-almost every halfspace is thick. 
We denote by $\\mathscr{H}^{\\times}$ the set of non-thick halfspaces. See \\cite{Fioravanti1} for proofs.\n\nPicking a basepoint $x\\in X$, we can identify $X\\simeq\\mathcal{M}(\\mathscr{H},\\widehat{\\mathscr{B}},\\widehat{\\nu},\\sigma_x)$ isometrically by mapping each $y\\in X$ to the ultrafilter $\\sigma_y\\subseteq\\mathscr{H}$, see Corollary~3.12 in \\cite{Fioravanti1}. In particular, $X$ sits inside the space $\\overline{\\mathcal{M}}(\\mathscr{H},\\widehat{\\mathscr{B}},\\widehat{\\nu})$, which we denote by $\\overline X$. If $I\\subseteq X$ is an interval, we have a projection $\\pi_I\\colon\\overline X\\rightarrow I$ that associates to each ultrafilter $\\sigma\\subseteq\\mathscr{H}$ the only point of $I$ that is represented by the ultrafilter $\\sigma\\cap\\mathscr{H}(I)$. We give $\\overline X$ the coarsest topology for which all the projections $\\pi_I$ are continuous. Defining\n\\[m(\\sigma_1,\\sigma_2,\\sigma_3):=(\\sigma_1\\cap\\sigma_2)\\cup(\\sigma_2\\cap\\sigma_3)\\cup(\\sigma_3\\cap\\sigma_1),\\]\nwe endow $\\overline X$ with a structure of topological median algebra. We have:\n\n\\begin{prop}[\\cite{Fioravanti1}]\nThe topological median algebra $\\overline X$ is compact. The inclusion $X\\hookrightarrow\\overline X$ is a continuous morphism of median algebras with dense, convex image.\n\\end{prop}\n\nWe call $\\overline X$ the \\emph{Roller compactification} of $X$ and $\\partial X:=\\overline X\\setminus X$ the \\emph{Roller boundary}. We remark that, in general, $X\\hookrightarrow\\overline X$ is not an embedding and $\\partial X$ is not closed in $\\overline X$. \n\nIf $C\\subseteq X$ is convex, the closure of $C$ in $X$ coincides with the intersection of $X$ and the closure of $C$ in $\\overline X$. If $C\\subseteq X$ is closed and convex, the closure of $C$ inside $\\overline X$ is canonically identified with the Roller compactification $\\overline C$. The median map of $\\overline X$ and the projections $\\pi_I\\colon\\overline X\\rightarrow I$ are $1$-Lipschitz with respect to the extended metric on $\\overline X$. \n\nLooking at pairs of points of $\\overline X$ at finite distance, we obtain a partition of $\\overline X$ into \\emph{components}; each component is a median space with the restriction of the extended metric of $\\overline X$. The subset $X\\subseteq\\overline X$ always forms an entire component of $\\overline X$.\n\n\\begin{prop}[\\cite{Fioravanti1}]\\label{components}\nEach component $Z\\subseteq\\partial X$ is a complete median space with $\\text{rank}(Z)\\leq\\text{rank}(X)-1$. Moreover, $Z$ is convex in $\\overline X$ and the inclusion $Z\\hookrightarrow\\overline X$ is continuous. The closure of $Z$ in $\\overline X$ is canonically identified with the Roller compactification $\\overline Z$ and there is a gate-projection $\\pi_Z\\colon\\overline X\\rightarrow\\overline Z$ that maps $X$ into $Z$.\n\\end{prop}\n\nEvery halfspace $\\mathfrak{h}\\in\\mathscr{H}$ induces a halfspace $\\widetilde{\\mathfrak{h}}$ of $\\overline X$ with $\\widetilde{\\mathfrak{h}}\\cap X=\\mathfrak{h}$; thus, we can identify $\\mathscr{H}$ with a subset of $\\mathscr{H}(\\overline X)$. \n\n\\begin{prop}[\\cite{Fioravanti1}]\\label{halfspaces of components}\nEvery thick halfspace of a component $Z\\subseteq\\partial X$ is of the form $\\widetilde{\\mathfrak{h}}\\cap Z$, for some $\\mathfrak{h}\\in\\mathscr{H}$.\n\\end{prop}\n\nAny two points of $\\overline X$ are separated by a halfspace of the form $\\widetilde{\\mathfrak{h}}$. 
To every $\\xi\\in\\overline X$, we can associate a \\emph{canonical ultrafilter} $\\sigma_{\\xi}\\subseteq\\mathscr{H}$ representing $\\xi$; it satisfies $\\mathfrak{h}\\in\\sigma_{\\xi}\\Leftrightarrow\\xi\\in\\widetilde{\\mathfrak{h}}$.\n\n\n\\subsection{Products.}\n\nGiven median spaces $X_1,X_2$, we can consider the product $X_1\\times X_2$, which is itself a median space with the $\\ell^1$ metric, i.e.\n\\[d_{X_1\\times X_2}\\left((x_1,x_2),(x'_1,x'_2)\\right):=d_{X_1}(x_1,x'_1)+d_{X_2}(x_2,x'_2).\\]\nThe space $X_1\\times X_2$ is complete if and only if $X_1$ and $X_2$ are. We say that subsets $A,B\\subseteq\\mathscr{H}$ are \\emph{transverse} if $\\mathfrak{h}$ and $\\mathfrak{k}$ are transverse whenever $\\mathfrak{h}\\in A$ and $\\mathfrak{k}\\in B$. We have the following analogue of Lemma~2.5 in \\cite{CS} (recall that we are only considering complete, finite rank median spaces).\n\n\\begin{cor}\\label{products}\nThe following are equivalent:\n\\begin{enumerate}\n\\item $X$ splits as a product $X_1\\times X_2$, where each $X_i$ has at least two points;\n\\item there is a measurable, $*$-invariant partition $\\mathscr{H}=\\mathscr{H}_1\\sqcup\\mathscr{H}_2$, where the $\\mathscr{H}_i$ are nonempty and transverse;\n\\item there is a measurable, $*$-invariant partition $\\mathscr{H}=\\mathscr{H}_1\\sqcup\\mathscr{H}_2\\sqcup\\mathscr{K}$, where the $\\mathscr{H}_i$ are nonempty and transverse, while $\\mathscr{K}$ is null.\n\\end{enumerate}\n\\end{cor}\n\\begin{proof}\nIf $X=X_1\\times X_2$, part~1 of Proposition~\\ref{all about gates} provides subsets $\\mathscr{H}(X_1)$ and $\\mathscr{H}(X_2)$ of $\\mathscr{H}(X)$; they are nonempty, disjoint, $*$-invariant, transverse and measurable. Every $\\mathfrak{h}\\in\\mathscr{H}(X)$ must split a fibre $X_1\\times\\{*\\}$ or $\\{*\\}\\times X_2$ nontrivially and thus lies either in $\\mathscr{H}(X_1)$ or in $\\mathscr{H}(X_2)$.\n\nWe conclude by proving that 3 implies 1, since 2 trivially implies 3. Let $\\widehat{\\mathscr{B}}_i$ be the $\\sigma$-algebra of subsets of $\\mathscr{H}_i$ that lie in $\\widehat{\\mathscr{B}}$; fixing $x\\in X$, we simply write $\\mathcal{M}_i$ for $\\mathcal{M}(\\mathscr{H}_i,\\widehat{\\mathscr{B}}_i,\\widehat{\\nu},\\sigma_x\\cap\\mathscr{H}_i)$. Define a map $\\iota\\colon X\\rightarrow\\mathcal{M}_1\\times\\mathcal{M}_2$ by intersecting ultrafilters on $\\mathscr{H}$ with each $\\mathscr{H}_i$. Since $\\mathscr{K}$ is null, this is an isometric embedding. Given ultrafilters $\\sigma_i\\subseteq\\mathscr{H}_i$, the set $\\sigma_1\\sqcup\\sigma_2$ consists of pairwise-intersecting halfspaces and Zorn's Lemma provides an ultrafilter $\\sigma\\subseteq\\mathscr{H}$ containing $\\sigma_1\\sqcup \\sigma_2$. Thus, $\\iota$ is surjective and $X\\simeq\\mathcal{M}_1\\times\\mathcal{M}_2$.\n\nWe are left to show that each $\\mathcal{M}_i$ contains at least two points; we construct points $x_i,x_i'\\in X$ such that $\\widehat{\\nu}(\\mathscr{H}(x_i|x_i')\\cap\\mathscr{H}_i)>0$. Pick any $\\mathfrak{h}_i\\in\\mathscr{H}_i$; by Proposition~\\ref{all about halfspaces}, replacing $\\mathfrak{h}_i$ with its complement if necessary, we can assume that there exists $x_i\\not\\in\\overline{\\mathfrak{h}_i}$. Let $x_i'$ be the gate-projection of $x_i$ to $\\overline{\\mathfrak{h}_i}$. 
None of the halfspaces in $\\mathscr{H}(x_i|x_i')$ is transverse to $\\mathfrak{h}_i$, thus $\\mathscr{H}(x_i|x_i')\\setminus\\mathscr{K}$ is contained in $\\mathscr{H}_i$ and has positive measure.\n\\end{proof}\n\nIn particular, $\\text{rank}(X_1\\times X_2)=\\text{rank}(X_1)+\\text{rank}(X_2)$. We say that $X$ is \\emph{irreducible} if it cannot be split nontrivially as a product $X_1\\times X_2$. We remark that in parts~2 and~3 of Corollary~\\ref{products}, the sets $\\mathscr{H}_i$ are \\emph{not} required to have positive measure, but simply to be nonempty. The following is immediate from Corollary~\\ref{products}:\n\n\\begin{lem}\\label{Roller for products}\nIf $X=X_1\\times X_2$, we have $\\overline X=\\overline X_1\\times\\overline X_2$.\n\\end{lem}\n\nWe can also use Corollary~\\ref{products} to characterise isometries of products; the following is an analogue of Proposition~2.6 in \\cite{CS} (also compare with \\cite{Foertsch-Lytchak}, when $X$ is connected and locally compact).\n\n\\begin{prop}\\label{isometries of products}\nThere is a canonical decomposition $X= X_1\\times ... \\times X_k$, where each $X_i$ is irreducible. Every isometry of $X$ permutes the factors $X_i$; in particular, the product of the isometry groups of the factors has finite index in $\\text{Isom}~X$.\n\\end{prop}\n\\begin{proof}\nThe existence of such a splitting follows from the observation that, in any nontrivial product, factors have strictly lower rank. By Corollary~\\ref{products}, this corresponds to a transverse decomposition $\\mathscr{H}=\\mathscr{H}_1\\sqcup ...\\sqcup \\mathscr{H}_k$, where we can identify $\\mathscr{H}_i=\\mathscr{H}(X_i)$. Given $g\\in\\text{Isom}~X$, the decompositions\n\\[\\mathscr{H}_i=\\bigsqcup_{j=1}^k\\mathscr{H}_i\\cap g\\mathscr{H}_j\\]\nare transverse and each piece is measurable and $*$-invariant. Since $X_i$ is irreducible, we must have $\\mathscr{H}_i\\cap g\\mathscr{H}_j=\\emptyset$ for all but one $j$, again by Corollary~\\ref{products}, and the result follows.\n\\end{proof}\n\n\n\\subsection{Splitting the atom.}\\label{splitting the atom}\n\nIn this section we describe two constructions that allow us to embed median spaces into ``more connected'' ones. We will only consider complete, finite rank spaces.\n\nGiven a median space $X$, let $\\mathscr{A}\\subseteq\\mathscr{H}$ be the set of atoms of $\\widehat{\\nu}$. The idea is to split every atom into two ``hemiatoms'' of half the size. We thus obtain a new measured pocset $(\\mathscr{H}',\\mathscr{B}',\\nu')$, whose associated median space generalises barycentric subdivisions of cube complexes. We now describe this construction in more detail.\n\nAs a set, $\\mathscr{H}'$ consists of $\\mathscr{H}\\setminus\\mathscr{A}$, to which we add two copies $\\mathfrak{a}_+,\\mathfrak{a}_-$ of every $\\mathfrak{a}\\in\\mathscr{A}$; we have a projection $p\\colon\\mathscr{H}'\\rightarrow\\mathscr{H}$ with fibres of cardinality one or two. We give $\\mathscr{H}'$ a structure of poset by declaring that $\\mathfrak{j}\\subsetneq\\mathfrak{j}'$ if $p(\\mathfrak{j})\\subsetneq p(\\mathfrak{j}')$, or $\\mathfrak{j}=\\mathfrak{a}_-$ and $\\mathfrak{j}'=\\mathfrak{a}_+$ for some $\\mathfrak{a}\\in\\mathscr{A}$. We promote this to a structure of pocset by setting $\\mathfrak{j}^*=\\mathfrak{j}'$ if $p(\\mathfrak{j})^*=p(\\mathfrak{j}')\\not\\in\\mathscr{A}$ and, in addition, $(\\mathfrak{a}_-)^*=(\\mathfrak{a}^*)_+$, $(\\mathfrak{a}_+)^*=(\\mathfrak{a}^*)_-$ if $\\mathfrak{a}\\in\\mathscr{A}$. 
Observe that each intersection between $\\mathscr{A}$ and a halfspace-interval is at most countable; thus $\\mathscr{A}$ and all its subsets are measurable. In particular $\\mathscr{B}':=\\{E\\subseteq\\mathscr{H}'\\mid p(E)\\setminus\\mathscr{A}\\in\\widehat{\\mathscr{B}}\\}$ is a $\\sigma$-algebra of subsets of $\\mathscr{H}'$, on which we can define the measure\n\\[\\nu'(E):=\\widehat{\\nu}\\left(p(E)\\setminus\\mathscr{A}\\right)+\\frac{1}{2}\\cdot\\sum_{\\substack{\\mathfrak{a}\\in\\mathscr{A} \\\\ \\mathfrak{a}_+\\in E}}\\widehat{\\nu}\\left(\\{\\mathfrak{a}\\}\\right)+\\frac{1}{2}\\cdot\\sum_{\\substack{\\mathfrak{a}\\in\\mathscr{A} \\\\ \\mathfrak{a}_-\\in E}}\\widehat{\\nu}\\left(\\{\\mathfrak{a}\\}\\right).\\]\nIf $F\\subseteq\\mathscr{H}$ is measurable, we have $\\nu'(p^{-1}(F))=\\widehat{\\nu}(F)$. Given $z\\in X$, we set $X':=\\mathcal{M}(\\mathscr{H}',\\mathscr{B}',\\nu',p^{-1}(\\sigma_z))$; note that this does not depend on the choice of $z$. Taking preimages under $p$ of ultrafilters on $\\mathscr{H}$, we obtain an isometric embedding $X\\hookrightarrow X'$.\n\n\\begin{lem}\\label{cubes in X'}\nFor each $x\\in X'\\setminus X$ there exist canonical subsets $C(x)\\subseteq X$, $\\widehat{C}(x)\\subseteq X'$ and isomorphisms of median algebras\n\\[\\iota_x\\colon\\{-1,1\\}^k\\rightarrow C(x),\\]\n\\[\\hat{\\iota}_x\\colon\\{-1,0,1\\}^k\\rightarrow\\widehat{C}(x).\\]\nHere $1\\leq k\\leq r:=\\text{rank}(X)$ and the map $\\hat{\\iota}_x$ extends $\\iota_x$ taking $(0,...,0)$ to $x$. Moreover, $C(x)$ is gate-convex in $X$ and $\\widehat{C}(x)$ is gate-convex in $X'$.\n\\end{lem}\n\\begin{proof}\nLet $\\sigma\\subseteq\\mathscr{H}'$ be an ultrafilter representing $x$; since $x\\not\\in X$ the set \n\\[ W(x):=\\left\\{\\{\\mathfrak{a},\\mathfrak{a}^*\\}\\in\\mathscr{W}(X)\\mid \\mathfrak{a}\\in\\mathscr{A} ~~\\text{and}~~ \\mathfrak{a}_+\\in\\sigma ~~\\text{and}~~ (\\mathfrak{a}^*)_+\\in\\sigma\\right\\}\\] \nis nonempty. Any two walls in $W(x)$ are transverse, so $k:=\\# W(x)\\leq r$; choose halfspaces $\\mathfrak{a}_1,...,\\mathfrak{a}_k\\in\\mathscr{H}$ representing all walls in $W(x)$. The set $p(\\sigma)\\setminus\\{\\mathfrak{a}_1^*,...,\\mathfrak{a}_k^*\\}$ is an ultrafilter on $\\mathscr{H}$ and it represents a point $q\\in X$. This will be the point $\\iota_x(1,...,1)$ in $C(x)$. To construct $q'\\in C(x)$, simply replace $\\mathfrak{a}_i\\in\\sigma_q\\subseteq\\mathscr{H}$ with $\\mathfrak{a}_i^*$ whenever the $i$-th coordinate of $q'$ is $-1$; the result is an ultrafilter on $\\mathscr{H}$ representing $q'\\in X$.\n\nTo construct a point $u\\in\\widehat{C}(x)\\subseteq X'$, consider the point $u'\\in C(x)$ obtained by replacing all the zero coordinates of $u$ with $1$'s. Whenever the $i$-th coordinate of $u$ is $0$, we replace $(\\mathfrak{a}_i)_-\\in p^{-1}(\\sigma_{u'})$ with $(\\mathfrak{a}_i^*)_+$, obtaining an ultrafilter on $\\mathscr{H}'$ that represents the point $u$. \n\nWe are left to check that $C(x)$ and $\\widehat{C}(x)$ are gate-convex. Let $H(x)\\subseteq\\mathscr{H}$ be the set of halfspaces corresponding to the walls of $W(x)$. We define a map $\\pi\\colon X'\\rightarrow\\widehat{C}(x)$ as follows: given an ultrafilter $\\sigma'\\subseteq\\mathscr{H}'$, the intersection $\\sigma'\\cap p^{-1}(H(x))$ determines a unique point of $\\widehat{C}(x)$ and we call this $\\pi(\\sigma')$. Note that the restriction of $\\pi$ to $X$ takes values in $C(x)$. 
It is straightforward to check that $\\pi$ and $\\pi|_X$ are gate-projections.\n\\end{proof}\n\n\\begin{lem}\\label{halfspaces of X'}\nEvery halfspace of $X'$ arises from an element of $\\mathscr{H}'$.\n\\end{lem}\n\\begin{proof}\nObserve that $\\text{Hull}_{X'}(X)=X'$ since, for every $x\\in X'$, the hull of $C(x)$ in $X'$ is $\\widehat{C}(x)$. Thus, every halfspace of $X'$ intersects $X$ in a halfspace of $X$. Given $\\mathfrak{h}\\in\\mathscr{H}$, we consider $\\mathscr{F}(\\mathfrak{h}):=\\left\\{\\mathfrak{k}\\in\\mathscr{H}(X')\\mid \\mathfrak{k}\\cap X=\\mathfrak{h}\\right\\}$; note that $\\mathscr{F}(\\mathfrak{h})\\neq\\emptyset$ by Lemma~6.5 in \\cite{Bow1}. If $\\mathfrak{h}\\in\\mathscr{A}$, we can construct halfspaces of $X'$ corresponding to $\\mathfrak{h}_+,\\mathfrak{h}_-\\in\\mathscr{H}'$. For instance, $\\mathfrak{h}_+$ corresponds to the set of ultrafilters on $\\mathscr{H}'$ that contain $\\mathfrak{h}_+$; this is well-defined as $\\mathfrak{h}_+$ has positive measure. Thus, we only need to show that $\\#\\mathscr{F}(\\mathfrak{h})=1$ if $\\mathfrak{h}\\in\\mathscr{H}\\setminus\\mathscr{A}$ and $\\mathscr{F}(\\mathfrak{h})=\\{\\mathfrak{h}_-,\\mathfrak{h}_+\\}$ if $\\mathfrak{h}\\in\\mathscr{A}$.\n\nIf, for some $\\mathfrak{k}\\in\\mathscr{H}(X')$ and $x\\in X'$, both $\\mathfrak{k}\\cap\\widehat{C}(x)$ and $\\mathfrak{k}^*\\cap\\widehat{C}(x)$ are nonempty, there exists $\\mathfrak{h}\\in\\mathscr{A}$ such that $\\mathfrak{k}\\in\\{\\mathfrak{h}_-,\\mathfrak{h}_+\\}$, by part~1 of Proposition~\\ref{all about gates}. Thus, we can suppose that, for every $x\\in X'$, we either have $\\widehat{C}(x)\\subseteq\\mathfrak{k}$ or $\\widehat{C}(x)\\subseteq\\mathfrak{k}^*$. Suppose, for the sake of contradiction, that the same is true of $\\mathfrak{k}'\\in\\mathscr{H}(X')$, with $\\mathfrak{k}'\\neq\\mathfrak{k}$ and $\\mathfrak{k}'\\cap X=\\mathfrak{k}\\cap X$. Let $z\\in\\mathfrak{k}\\triangle\\mathfrak{k}'$ be a point; observe that $z\\not\\in X$ and, by our assumptions, the hypercube $\\widehat{C}(z)$ is entirely contained in $\\mathfrak{k}\\triangle\\mathfrak{k}'$. This implies that $C(z)\\subseteq\\mathfrak{k}\\triangle\\mathfrak{k}'$, violating the fact that $\\mathfrak{k}'\\cap X=\\mathfrak{k}\\cap X$.\n\\end{proof}\n\nGiven a subgroup $\\Gamma\\leq\\text{Isom}~X$, we say that the action $\\Gamma\\curvearrowright X$ is \\emph{without wall inversions} if $g\\mathfrak{h}\\neq\\mathfrak{h}^*$ for every $g\\in\\Gamma$ and $\\mathfrak{h}\\in\\mathscr{H}$. We denote by $a(X)$ the supremum of the $\\widehat{\\nu}$-masses of the elements of $\\mathscr{A}$.\n\n\\begin{prop}[Properties of $X'$]\\label{properties of X'}\n\\begin{enumerate}\n\\item The median space $X'$ is complete and $\\text{rank}(X')=\\text{rank}(X)$.\n\\item There is an isometric embedding $X\\hookrightarrow X'$ and $\\text{Hull}_{X'}(X)=X'$.\n\\item Every isometry of $X$ extends canonically to an isometry of $X'$ yielding $\\text{Isom}~X\\hookrightarrow\\text{Isom}~X'$. Moreover, the induced action $\\text{Isom}~X\\curvearrowright X'$ is without wall inversions. \n\\item We have $a(X')\\leq\\frac{1}{2}\\cdot a(X)$.\n\\item The inclusion $X\\hookrightarrow X'$ canonically extends to a monomorphism of median algebras $\\overline X\\hookrightarrow\\overline{X'}$. 
For every $\\xi\\in\\overline{X'}\\setminus\\overline X$ there exists a canonical cube $\\{-1,0,1\\}^k\\hookrightarrow\\overline{X'}$ centred at $\\xi$, with $\\{-1,1\\}^k\\hookrightarrow\\overline{X}$ and $1\\leq k\\leq\\text{rank}(X)$.\n\\end{enumerate}\n\\end{prop}\n\\begin{proof}\nWe have already shown part~2 and part~4 is immediate. Part~5 can be proved like Lemma~\\ref{cubes in X'} since we now have Lemma~\\ref{halfspaces of X'}. If $g\\in\\text{Isom}~X$ and $\\mathfrak{h}\\in\\mathscr{H}$ satisfy $g\\mathfrak{h}=\\mathfrak{h}^*$, Proposition~\\ref{all about halfspaces} implies that the halfspace $\\mathfrak{h}$ is an atom; part~3 follows easily. By Lemma~\\ref{halfspaces of X'}, the only nontrivial statement in part~1 is the completeness of $X'$.\n\nLet $\\sigma_n\\subseteq\\mathscr{H}'$ be ultrafilters corresponding to a Cauchy sequence in $X'$. The sets $\\underline{\\sigma}:=\\liminf\\sigma_n$ and $\\overline{\\sigma}:=\\limsup\\sigma_n$ lie in $\\mathscr{B}'$. Any two halfspaces in $\\underline{\\sigma}$ intersect, thus, by Zorn's Lemma, it is contained in an ultrafilter $\\sigma\\subseteq\\mathscr{H}'$ with $\\sigma\\subseteq\\overline{\\sigma}$. If we show that $\\nu'(\\overline{\\sigma}\\setminus\\underline{\\sigma})=0$, it follows that $\\sigma\\in\\mathscr{B}'$ and the points of $X'$ represented by $\\sigma_n$ converge to the point of $X'$ represented by $\\sigma$. Note that it suffices to show that a subsequence of $(\\sigma_n)_{n\\geq 0}$ converges; in particular, we can assume that $\\nu'(\\sigma_n\\triangle\\sigma_{n+1})\\leq\\frac{1}{2^n}$ for all $n\\geq 0$. In this case, $\\overline{\\sigma}\\setminus\\underline{\\sigma}=\\limsup\\left(\\sigma_n\\triangle\\sigma_{n+1}\\right)$ has measure zero by the Borel-Cantelli Lemma. \n\\end{proof}\n\nWe will refer to $X'$ as the \\emph{barycentric subdivision} of $X$. Indeed, if $X$ is the $0$-skeleton of a ${\\rm CAT}(0)$ cube complex, $X'$ is the $0$-skeleton of the usual barycentric subdivision. \n\nA variation on this construction allows us to embed median spaces into connected ones. We define a sequence $(X_n)_{n\\geq 0}$ by setting $X_0:=X$ and $X_{n+1}:=X_n'$. These come equipped with compatible isometric embeddings $X_m\\hookrightarrow X_n$, for $mr\\cdot\\sum_{i=1}^kd(x,s_ix).\\] \nWe write $g=s_{i_1}...s_{i_n}$ and set $g_j:=s_{i_1}...s_{i_j}$ and $x_j:=g_jx$. We define inductively the points $y_j$, starting with $y_0=x_0=x$ and declaring $y_{j+1}$ to be the projection of $x_{j+1}$ to $I(y_j,gx)$. In particular $(y_j)_{0\\leq j\\leq n}$ is a geodesic from $x$ to $gx$ and $\\mathscr{H}(y_j|y_{j+1})\\subseteq\\mathscr{H}(x_j|x_{j+1})=g_j\\mathscr{H}(x|s_{i_{j+1}}x)$. The sets $U_j:=g_j^{-1}\\mathscr{H}(y_j|y_{j+1})$ all lie in $\\bigcup_i\\mathscr{H}(x|s_ix)$ and, since\n\\[\\sum_{j=0}^{n-1}\\widehat{\\nu}(U_j)=\\sum_{j=0}^{n-1}d(y_j,y_{j+1})=d(x,gx)>r\\cdot\\widehat{\\nu}\\left(\\bigcup_{i=1}^k\\mathscr{H}(x|s_ix)\\right),\\]\nthere exist $\\tau\\in\\{1,...,k\\}$ and a measurable subset $\\Omega\\subseteq\\mathscr{H}(x|s_{\\tau}x)$ such that $\\widehat{\\nu}(\\Omega)>0$ and $\\Omega\\subseteq U_j$ for $r+1$ indices $j_10$. If $x\\in g\\mathfrak{h}^*\\cap\\mathfrak{h}$, we have for $n\\geq 1$, \n\\[\\bigcup_{k=1}^{n-1}\\mathscr{H}(g^{kr}\\mathfrak{h}^*|g^{(k+1)r}\\mathfrak{h})\\subseteq\\mathscr{H}(x|g^{nr}x),\\]\nthus $d(x,g^{nr}x)\\geq (n-1)D$. 
Hence, the $\\langle g\\rangle$-orbit of $x$ is unbounded.\n\\end{proof}\n\n\\begin{proof}[Proof of Theorem~\\ref{Sageev's theorem}]\nThe action $\\Gamma\\curvearrowright X$ must have bounded orbits, for otherwise Proposition~\\ref{key point in Sageev} and Lemma~\\ref{nesting vs fixed points} would provide a non-elliptic isometry in $\\Gamma$. The conclusion follows from Corollary~\\ref{finite orbits}.\n\\end{proof}\n\nIn ${\\rm CAT}(0)$ cube complexes, Proposition~\\ref{key point in Sageev} above implies that the stabiliser $\\Gamma_{\\mathfrak{h}}$ of $\\mathfrak{h}$ is a codimension one subgroup of $\\Gamma$. This fails in general for actions on median spaces. \n\nFor instance, surface groups have free actions on real trees \\cite{Morgan-Shalen}; in these, all halfspace-stabilisers are trivial. One can still conclude that $\\Gamma_{\\mathfrak{h}}$ is codimension one as in \\cite{Sageev} if, in addition, for every $x,y\\in X$ we have $g\\mathfrak{h}\\in\\mathscr{H}(x|y)$ only for finitely many left cosets $g\\Gamma_{\\mathfrak{h}}$.\n\n\n\n\\section{Stabilisers of points in the Roller boundary.}\\label{stab xi}\n\nLet $X$ be a complete median space of finite rank $r$. Let $\\xi\\in\\partial X$ be a point in the Roller boundary; we denote by $\\text{Isom}_{\\xi}X$ the subgroup of $\\text{Isom}~X$ fixing the point $\\xi$. The main result of this section will be the following analogue of a result of P.-E.~Caprace (see the appendix to \\cite{CFI}).\n\n\\begin{thm}\\label{stabiliser of xi}\nThe group $\\text{Isom}_{\\xi}X$ contains a subgroup $K_{\\xi}$ of index at most $r!$ that fits in an exact sequence\n\\[1\\longrightarrow N_{\\xi}\\longrightarrow K_{\\xi} \\longrightarrow\\mathbb{R}^r.\\]\nEvery finitely generated subgroup of $N_{\\xi}$ has an orbit with at most $2^r$ elements.\n\\end{thm}\n\nIn order to prove this, we will have to extend to median spaces part of the machinery developed in \\cite{Hagen}.\n\n\\begin{defn}\\label{UBS}\nGiven $\\xi\\in\\partial X$, a \\emph{chain of halfspaces diverging to $\\xi$} is a sequence $(\\mathfrak{h}_n)_{n\\geq 0}$ of halfspaces of $X$ with $\\mathfrak{h}_n\\supsetneq\\mathfrak{h}_{n+1}$ and $\\xi\\in\\widetilde{\\mathfrak{h}}_n$ for all $n\\geq 0$ and such that $d(x,\\mathfrak{h}_n)\\rightarrow +\\infty$ for $x\\in X$. A \\emph{UBS} for $\\xi\\in\\partial X$ and $x\\in X$ is an inseparable subset $\\Omega\\subseteq\\sigma_{\\xi}\\setminus\\sigma_x$ that contains a chain of halfspaces diverging to $\\xi$.\n\\end{defn}\n\nThis is an analogue of Definition~3.4 in \\cite{Hagen}, except that we consider sets of halfspaces instead of sets of walls. For cube complexes, our definition is a bit more restrictive than Hagen's, since we assume by default that $\\Omega$ lies in some $\\sigma_{\\xi}\\setminus\\sigma_x$. This is enough for our purposes and avoids annoying technicalities.\n\nWe denote by $\\mathcal{U}(\\xi,x)$ the set of all UBS's for $\\xi$ and $x$ and we define a relation $\\preceq$:\n\\begin{align*}\n\\Omega_1\\preceq\\Omega_2 & \\xLeftrightarrow{\\text{def}^{\\underline{\\text{n}}}} \\sup_{\\mathfrak{h}\\in\\Omega_1\\setminus\\Omega_2} d(x,\\mathfrak{h})<+\\infty,\n\\end{align*}\nwhich we read as ``$\\Omega_1$ is \\emph{almost contained} in $\\Omega_2$''. If $\\Omega_1\\preceq\\Omega_2$ and $\\Omega_2\\preceq\\Omega_1$, we write $\\Omega_1\\sim\\Omega_2$ and say that $\\Omega_1$ and $\\Omega_2$ are \\emph{equivalent}. 
The relation $\\preceq$ descends to a partial order, also denoted $\\preceq$, on the set $\\overline{\\mathcal{U}}(\\xi,x)$ of $\\sim$-e\\-quiv\\-a\\-lence classes. We denote the equivalence class associated to the UBS $\\Omega$ by $[\\Omega]$. A UBS is said to be \\emph{minimal} if it projects to a minimal element of $\\overline{\\mathcal{U}}(\\xi,x)$. Two UBS's are \\emph{almost disjoint} if their intersection is not a UBS.\n\nWe will generally forget about the basepoint $x$ and simply write $\\left(\\overline{\\mathcal{U}}(\\xi),\\preceq\\right)$; indeed, if $x,y\\in X$, we have a canonical isomorphism $\\overline{\\mathcal{U}}(\\xi,x)\\simeq\\overline{\\mathcal{U}}(\\xi,y)$ given by intersecting UBS's with $\\sigma_{\\xi}\\setminus\\sigma_x$ or $\\sigma_{\\xi}\\setminus\\sigma_y$. \n\n\\begin{lem}\\label{almost disjoint} \nLet $\\Omega_1,\\Omega_2\\subseteq\\sigma_{\\xi}\\setminus\\sigma_x$ be UBS's. \n\\begin{enumerate}\n\\item If $\\Omega_1\\preceq\\Omega_2$, we have $\\widehat{\\nu}(\\Omega_1\\setminus\\Omega_2)<+\\infty$. In particular, if $\\Omega_1$ and $\\Omega_2$ are equivalent, we have $\\widehat{\\nu}\\left(\\Omega_1\\triangle\\Omega_2\\right)<+\\infty$.\n\\item The UBS's $\\Omega_1,\\Omega_2$ are almost disjoint if and only if $\\Omega_1\\cap\\Omega_2$ consists of halfspaces at uniformly bounded distance from $x$.\n\\end{enumerate}\n\\end{lem}\n\\begin{proof}\nBy Dilworth's Theorem \\cite{Dilworth}, we can decompose $\\Omega_1\\setminus\\Omega_2$ as a disjoint union $\\mathcal{C}_1\\sqcup ...\\sqcup\\mathcal{C}_k$, where each $\\mathcal{C}_i$ is totally ordered by inclusion and $k\\leq r$. If $\\Omega_1\\preceq\\Omega_2$, Lemma~\\ref{intersections of halfspaces} implies that the intersection and union of all halfspaces in $\\mathcal{C}_i$ are halfspaces $\\mathfrak{h}_i$ and $\\mathfrak{k}_i$, respectively. Thus, $\\Omega_1\\setminus\\Omega_2$ is contained in the union of the sets $\\mathscr{H}(\\mathfrak{k}_i^*|\\mathfrak{h}_i)$, which all have finite measure. \n\nSince $\\Omega_1\\cap\\Omega_2$ is inseparable, $\\Omega_1$ and $\\Omega_2$ are almost disjoint if and only if $\\Omega_1\\cap\\Omega_2$ does not contain a chain of halfspaces diverging to $\\xi$. By Lemma~\\ref{Ramsey}, this is equivalent to $\\Omega_1\\cap\\Omega_2$ being at uniformly bounded distance from $x$.\n\\end{proof}\n\nPart~1 of Lemma~\\ref{almost disjoint} is in general not an ``if and only if'' since UBS's can have finite measure. An example appears in the staircase in Figure~3 of \\cite{Fioravanti1}, where the UBS is given by the set of vertical halfspaces containing the bottom of the staircase.\n\n\\begin{figure}\n\\centering\n\\includegraphics[height=2.5in]{img3.png}\n\\caption{}\n\\label{CSC with a flap}\n\\end{figure}\nThe inseparable closure of a chain of halfspaces diverging to $\\xi$ is always a UBS and every minimal UBS is equivalent to a UBS of this form. However, not all UBS's of this form are minimal. For instance, consider the ${\\rm CAT}(0)$ square complex in Figure~\\ref{CSC with a flap}, which is a variation of the usual staircase with a one-dimensional flap. 
The inseparable closure of $\\{\\mathfrak{h}_n\\}_{n\\geq 0}$ is not a minimal UBS, since it also contains all $\\mathfrak{k}_n$, while the inseparable closures of $\\{\\mathfrak{h}_n\\}_{n\\geq 1}$ and $\\{\\mathfrak{k}_n\\}_{n\\geq 1}$ are minimal.\n\n\\begin{lem}\\label{symmetric almost-transversality}\nLet $(\\mathfrak{h}_m)_{m\\geq 0}$ and $(\\mathfrak{k}_n)_{n\\geq 0}$ be chains of halfspaces in $\\sigma_{\\xi}\\setminus\\sigma_x$ that diverge to $\\xi$. Suppose that no $\\mathfrak{h}_m$ lies in the inseparable closure of $\\{\\mathfrak{k}_n\\}_{n\\geq 0}$. Either almost every $\\mathfrak{k}_n$ is transverse to almost every $\\mathfrak{h}_m$, or almost every $\\mathfrak{h}_m$ is transverse to almost every $\\mathfrak{k}_n$.\n\\end{lem}\n\\begin{proof} \nWe first suppose that there exist $\\overline n,\\overline m\\geq 0$ such that $\\mathfrak{h}_{\\overline m}\\subseteq\\mathfrak{k}_{\\overline n}$; without loss of generality, $\\overline m=\\overline n=0$. An inclusion of the type $\\mathfrak{k}_n\\subseteq\\mathfrak{h}_m$ can never happen, or we would have $\\mathfrak{k}_n\\subseteq\\mathfrak{h}_m\\subseteq\\mathfrak{h}_0\\subseteq\\mathfrak{k}_0$ and $\\mathfrak{h}_0$ would lie in the inseparable closure of $\\{\\mathfrak{k}_n\\}_{n\\geq 0}$. Since the sequence $(\\mathfrak{k}_n)_{n\\geq 0}$ diverges, for every $m$ there exists $n(m)$ such that $\\mathfrak{k}_n$ does not contain $\\mathfrak{h}_m$ for $n\\geq n(m)$; thus, for $n\\geq n(m)$, the halfspaces $\\mathfrak{k}_n$ and $\\mathfrak{h}_m$ are transverse. \n\nNow suppose instead that an inclusion of the form $\\mathfrak{h}_m\\subseteq\\mathfrak{k}_n$ never happens. For every $n$, there exists $m(n)$ such that $\\mathfrak{k}_n$ is not contained in $\\mathfrak{h}_m$ for ${m\\geq m(n)}$; thus, for $m\\geq m(n)$, the halfspaces $\\mathfrak{k}_n$ and $\\mathfrak{h}_m$ are transverse.\n\\end{proof}\n\nFollowing \\cite{corrigendum}, we construct a directed graph $\\mathcal{G}(\\xi)$ as follows. Vertices of $\\mathcal{G}(\\xi)$ correspond to minimal elements of $\\left(\\overline{\\mathcal{U}}(\\xi),\\preceq\\right)$. Given diverging chains $(\\mathfrak{h}_m)_{m\\geq 0}$ and $(\\mathfrak{k}_n)_{n\\geq 0}$ in minimal UBS's $\\Omega$ and $\\Omega'$, respectively, we draw an oriented edge from $[\\Omega]$ to $[\\Omega']$ if almost every $\\mathfrak{h}_m$ is transverse to almost every $\\mathfrak{k}_n$, but the same does not happen if we exchange $(\\mathfrak{h}_m)_{m\\geq 0}$ and $(\\mathfrak{k}_n)_{n\\geq 0}$. This does not depend on which diverging chains we pick, as $\\Omega$, $\\Omega'$ are minimal.\n\nBy Lemma~\\ref{symmetric almost-transversality}, the vertices corresponding to $\\Omega$ and $\\Omega'$ are not joined by any edge if and only if almost every $\\mathfrak{h}_m$ is transverse to almost every $\\mathfrak{k}_n$ and almost every $\\mathfrak{k}_n$ is transverse to almost every $\\mathfrak{h}_m$. It is clear that there are no directed cycles of length $2$ in $\\mathcal{G}(\\xi)$.\n\n\\begin{lem}\\label{no directed cycles}\nIf there is a directed path from $[\\Omega]$ to $[\\Xi]$, there is an oriented edge from $[\\Omega]$ to $[\\Xi]$. In particular, $\\mathcal{G}(\\xi)$ contains no directed cycles.\n\\end{lem}\n\\begin{proof}\nIt suffices to prove that, if there is an edge from $[\\Omega]$ to $[\\Omega']$ and from $[\\Omega']$ to $[\\Omega'']$, there is also an edge from $[\\Omega]$ to $[\\Omega'']$. 
Pick diverging chains $(\\mathfrak{h}_n)_{n\\geq 0}$, $(\\mathfrak{h}'_n)_{n\\geq 0}$, $(\\mathfrak{h}''_n)_{n\\geq 0}$ in $\\Omega$, $\\Omega'$, $\\Omega''$, respectively. By hypothesis, there are infinitely many $\\mathfrak{h}_k''$ that are not transverse to almost every $\\mathfrak{h}'_j$; thus, for every $k$ there exists $j$ such that $\\mathfrak{h}''_k\\supseteq\\mathfrak{h}'_j$. A similar argument works for $\\Omega$ and $\\Omega'$; hence, for every $k$ there exist $i,j$ such that $\\mathfrak{h}''_k\\supseteq\\mathfrak{h}'_j\\supseteq\\mathfrak{h}_i$. If no oriented edge from $[\\Omega]$ to $[\\Omega'']$ existed, almost every $\\mathfrak{h}''_k$ would be transverse to almost every $\\mathfrak{h}_i$ and this would contradict the previous statement.\n\\end{proof}\n\n\\begin{rmk}\\label{graph for any chains}\nA graph like $\\mathcal{G}(\\xi)$ above can be constructed whenever we have a family of diverging chains $(\\mathfrak{h}_n^i)_{n\\geq 0}$, $i\\in I$, with the property that, if $i\\neq j$, either no $\\mathfrak{h}_m^i$ lies in the inseparable closure of $\\{\\mathfrak{h}_n^j\\}_{n\\geq 0}$ or vice versa. Lemma~\\ref{no directed cycles} and part~1 of Proposition~\\ref{main prop on UBS's} below still hold in this context.\n\\end{rmk}\n\nWe say that a collection of vertices $\\mathcal{V}\\subseteq\\mathcal{G}(\\xi)^{(0)}$ is \\emph{inseparable} if, for every $v,w\\in\\mathcal{V}$, all the vertices on the directed paths from $v$ to $w$ also lie in $\\mathcal{V}$. The following extends Lemma~3.7 and Theorem~3.10 in \\cite{Hagen}.\n\n\\begin{prop}\\label{main prop on UBS's}\n\\begin{enumerate}\n\\item The graph $\\mathcal{G}(\\xi)$ has at most $r$ vertices.\n\\item For every UBS $\\Omega$ there exists a minimal UBS $\\Omega'\\preceq\\Omega$. If $\\Omega$ is the inseparable closure of a diverging chain $(\\mathfrak{h}_n)_{n\\geq 0}$, we can take $\\Omega'$ to be the inseparable closure of $(\\mathfrak{h}_n)_{n\\geq N}$, for some $N\\geq 0$.\n\\item Given a UBS $\\Omega$ and a set $\\{\\Omega_1,...,\\Omega_k\\}$ of representatives of all equivalence classes of minimal UBS's almost contained in $\\Omega$, we have\n\\[\\sup_{\\mathfrak{h}\\in\\Omega\\triangle\\left(\\Omega_1\\cup ... \\cup\\Omega_k\\right)} d(x,\\mathfrak{h})<+\\infty.\\]\n\\item There is an isomorphism of posets between $\\left(\\overline{\\mathcal{U}}(\\xi),\\preceq\\right)$ and the collection of inseparable subsets of $\\mathcal{G}(\\xi)^{(0)}$, ordered by inclusion. It is given by associating to $[\\Omega]$ the set $\\{[\\Omega_1],...,[\\Omega_k]\\}$ of minimal equivalence classes of UBS's almost contained in $\\Omega$.\n\\end{enumerate}\n\\end{prop}\n\\begin{proof}\nTo prove part~1, we show that every finite subset $\\mathcal{V}\\subseteq\\mathcal{G}(\\xi)^{(0)}$ satisfies ${\\#\\mathcal{V}\\leq r}$; more precisely, we prove by induction on $k$ that, if $\\Omega_1,...,\\Omega_k$ are UBS's representing the elements of $\\mathcal{V}$, we can find pairwise-transverse halfspaces $\\mathfrak{h}_i\\in\\Omega_i$. The case $k=1$ is trivial; suppose $k\\geq 2$. By Lemma~\\ref{no directed cycles} we can assume, up to reordering the $\\Omega_i$, that there is no edge from $[\\Omega_i]$, $i\\leq k-1$, to $[\\Omega_k]$. 
There exist $\\mathfrak{h}\\in\\Omega_k$ and diverging chains $\\{\\mathfrak{k}_n^i\\}_{n\\geq 0}\\subseteq\\Omega_i$, $i\\leq k-1$, that are transverse to $\\mathfrak{h}$; in particular, $\\mathfrak{h}$ is transverse to every element in the inseparable closure of $\\{\\mathfrak{k}_n^i\\}_{n\\geq 0}$, for $i\\leq k-1$. By the inductive hypothesis, we can find $\\mathfrak{h}_i$ in the inseparable closure of $\\{\\mathfrak{k}_n^i\\}_{n\\geq 0}$ so that $\\mathfrak{h}_1,...,\\mathfrak{h}_{k-1}$ are pairwise transverse. Hence $\\mathfrak{h},\\mathfrak{h}_1,...,\\mathfrak{h}_{k-1}$ are pairwise transverse.\n\nIf $\\Omega_1\\prec...\\prec\\Omega_k$ is a chain of non-equivalent UBS's, we have $k\\leq r$. Indeed, we can consider diverging chains in $\\Omega_1$ and in $\\Omega_i\\setminus\\Omega_{i-1}$ for $2\\leq i\\leq k$, which exist by Lemma~\\ref{Ramsey}, and appeal to Remark~\\ref{graph for any chains}. This implies the existence of minimal UBS's almost contained in any UBS. It also shows that, for every diverging chain $(\\mathfrak{h}_n)_{n\\geq 0}$, there exists $N\\geq 0$ such that the inseparable closures $\\Omega_M$ of $(\\mathfrak{h}_n)_{n\\geq M}$ are all equivalent for $M\\geq N$. In particular, every diverging chain in $\\Omega_N$ has a cofinite subchain that is contained in $\\Omega_M$, if ${M\\geq N}$. By Lemma~\\ref{symmetric almost-transversality}, the UBS $\\Omega_N$ is equivalent to the inseparable closure of any diverging chain it contains, i.e.~$\\Omega_N$ is minimal. This proves part~2.\n\nRegarding part~3, it is clear that the supremum over $\\left(\\Omega_1\\cup ... \\cup\\Omega_k\\right)\\setminus\\Omega$ is finite. If the supremum over $\\Omega\\setminus\\left(\\Omega_1\\cup ... \\cup\\Omega_k\\right)$ were infinite, Lemma~\\ref{Ramsey} and part~2 would provide a diverging chain in $\\Omega\\setminus\\left(\\Omega_1\\cup ... \\cup\\Omega_k\\right)$ whose inseparable closure $\\Omega'$ is a minimal UBS. Thus $\\Omega'\\preceq\\Omega$, but $\\Omega'\\not\\sim\\Omega_i$ for all $i$, a contradiction.\n\nFinally, we prove part~4. The map $[\\Omega]\\mapsto\\{[\\Omega_1],...,[\\Omega_k]\\}$ is an injective morphism of posets by part~3. The collection $\\{[\\Omega_1],...,[\\Omega_k]\\}$ is inseparable since the inseparable closure of $(\\Omega_i\\cap\\Omega)\\cup(\\Omega_j\\cap\\Omega)$ contains all minimal UBS's corresponding to vertices on directed paths from $[\\Omega_i]$ to $[\\Omega_j]$ and vice versa; this follows for instance from the proof of Lemma~\\ref{no directed cycles}.\n\nGiven an inseparable collection $\\{[\\Omega_1],...,[\\Omega_k]\\}$, we construct a UBS $\\Omega$ such that these are precisely the equivalence classes of minimal UBS's almost contained in $\\Omega$. Let $(\\mathfrak{h}_n^i)_{n\\geq 0}$ be a diverging chain in $\\Omega_i$, for every $i$, and denote by $\\Omega^N$ the inseparable closure of ${\\{\\mathfrak{h}_n^1\\}_{n\\geq N}\\cup...\\cup\\{\\mathfrak{h}_n^k\\}_{n\\geq N}}$. If $N$ is large enough, every minimal UBS almost contained in $\\Omega^N$ is equivalent to one of the $\\Omega_i$. Otherwise, by part~1, we would be able to find a diverging chain $\\{\\mathfrak{h}_n\\}_{n\\geq 0}$ such that its inseparable closure $\\Xi$ is not equivalent to any of the $\\Omega_i$ and $\\mathfrak{h}_{a_n}^j\\subseteq\\mathfrak{h}_n\\subseteq\\mathfrak{h}_{b_n}^k$, for some $j,k$, with $b_n\\rightarrow+\\infty$. 
\nThis implies that $[\\Xi]$ lies on a directed path from $[\\Omega_j]$ to $[\\Omega_k]$ and contradicts inseparability of the collection $\\{[\\Omega_1],...,[\\Omega_k]\\}$.\n\\end{proof}\n\nLet $\\Omega\\subseteq\\sigma_{\\xi}\\setminus\\sigma_x$ be a UBS and let $K_{\\Omega}\\leq\\text{Isom}_{\\xi}X$ be the subgroup of isometries that preserve the equivalence class $[\\Omega]$. We can construct a homomorphism analogous to the one in Proposition~4.H.1 in \\cite{Cor}:\n\\begin{align*}\n\\chi_{\\Omega}\\colon K_{\\Omega} \\longrightarrow & \\mathbb{R} \\nonumber \\\\\ng\\longmapsto & \\widehat{\\nu}\\left(g^{-1}\\Omega\\setminus\\Omega\\right)- \\widehat{\\nu}\\left(\\Omega\\setminus g^{-1}\\Omega \\right) .\n\\end{align*}\nWe will refer to $\\chi_{\\Omega}$ as the \\emph{transfer character} associated to $\\Omega$. We remark that the definition makes sense due to part~1 of Lemma~\\ref{almost disjoint}. The arguments in \\cite{Cor} show that $\\chi_{\\Omega}$ does not change if we replace $\\Omega$ with a measurable set $\\Omega'\\subseteq\\mathscr{H}$ such that $\\widehat{\\nu}\\left(\\Omega\\triangle\\Omega'\\right)<+\\infty$. Thus transfer characters only depend on the equivalence class of the UBS $\\Omega$. Moreover, if $\\{\\Omega_1,...,\\Omega_k\\}$ is a set of representatives of all equivalence classes of minimal UBS's almost contained in $\\Omega$, we have $\\chi_{\\Omega}=\\chi_{\\Omega_1}+...+\\chi_{\\Omega_k}$ by part~3 of Proposition~\\ref{main prop on UBS's}.\n\nNow, let $\\Omega_1,...,\\Omega_k$ be UBS's representing all minimal elements of $\\overline{\\mathcal{U}}(\\xi)$. The group $\\text{Isom}_{\\xi}X$ permutes the equivalence classes of the $\\Omega_i$ and a subgroup $K_{\\xi}\\leq\\text{Isom}_{\\xi}X$ of index at most $k!\\leq r!$ preserves them all. Note that, by part~4 of Proposition~\\ref{main prop on UBS's}, this is precisely the kernel of the action of $\\text{Isom}_{\\xi}X$ on $\\overline{\\mathcal{U}}(\\xi)$. We define a homomorphism $\\chi_{\\xi}:=(\\chi_{\\Omega_1},...,\\chi_{\\Omega_k})\\colon K_{\\xi}\\rightarrow \\mathbb{R}^k$.\n\n\\begin{prop}\\label{kernel of chi}\nEvery finitely generated subgroup $\\Gamma\\leq\\ker\\chi_{\\xi}$ has an orbit in $X$ with at most $2^r$ elements.\n\\end{prop}\n\\begin{proof}\nIf $\\Gamma$ did not have an orbit with at most $2^r$ elements, all orbits would be unbounded by Corollary~\\ref{finite orbits}, and Proposition~\\ref{key point in Sageev} would provide ${\\mathfrak{h}\\in\\mathscr{H}}$ and $g\\in\\Gamma$ with $g\\mathfrak{h}\\subsetneq\\mathfrak{h}$; hence ${d(g^r\\mathfrak{h},\\mathfrak{h}^*)>0}$ by Proposition~\\ref{all about halfspaces}. If $\\xi\\in\\widetilde{\\mathfrak{h}}^*$, we replace $\\mathfrak{h}$ with $\\mathfrak{h}^*$ and $g$ with $g^{-1}$. Now $(g^{nr}\\mathfrak{h})_{n\\geq 0}$ is a sequence of halfspaces diverging to $\\xi$ and, by part~2 of Proposition~\\ref{main prop on UBS's}, the inseparable closure $\\Omega^N$ of $\\{g^{nr}\\mathfrak{h}\\}_{n\\geq N}$ is a minimal UBS if $N$ is large enough. Thus, $\\Omega^N\\sim\\Omega_i$ for some $i$ and we have ${0=\\chi_{\\Omega_i}(g)=\\chi_{\\Omega^N}(g)}$. We obtain a contradiction by observing that\n\\[r\\cdot\\chi_{\\Omega^N}(g)=\\chi_{\\Omega^N}(g^r)\\geq\\widehat{\\nu}\\left(\\mathscr{H}(\\mathfrak{h}^*|g^r\\mathfrak{h})\\setminus\\{g^r\\mathfrak{h}\\}\\right)> 0.\\]\n\\end{proof}\n\nTheorem~\\ref{stabiliser of xi} immediately follows from Proposition~\\ref{kernel of chi}. 
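\n\nTo illustrate the transfer character in the simplest possible case, take $X=\\mathbb{R}$, $\\xi=+\\infty\\in\\partial X$ and $\\Omega=\\sigma_{\\xi}\\setminus\\sigma_0$, the UBS consisting (up to a null set) of the halfspaces $[a,+\\infty)$ with $a>0$. If $g$ is the translation by $t$, a direct computation gives $\\widehat{\\nu}\\left(g^{-1}\\Omega\\setminus\\Omega\\right)=\\max(t,0)$ and $\\widehat{\\nu}\\left(\\Omega\\setminus g^{-1}\\Omega\\right)=\\max(-t,0)$, so that $\\chi_{\\Omega}(g)=t$: in this example the transfer character simply records the signed translation length towards $\\xi$.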
\n\n\n\n\\section{Caprace-Sageev machinery.}\\label{CS-like machinery}\n\nLet $X$ be a complete median space of finite rank $r$. The goal of this section is to extend Theorem~4.1 and Proposition~5.1 from \\cite{CS} to median spaces. \n\nOur techniques also provide a different approach in the case of ${\\rm CAT}(0)$ cube complexes, as we use Roller boundaries instead of visual boundaries. This strategy of proof was suggested to us by T. Fern\\'os.\n\nLet $\\Gamma$ be a group of isometries of $X$. We say that $g\\in \\Gamma$ \\emph{flips} $\\mathfrak{h}\\in\\mathscr{H}$ if ${d\\left(g\\mathfrak{h}^*,\\mathfrak{h}^*\\right)>0}$ and $g\\mathfrak{h}^*\\neq\\mathfrak{h}$. The halfspace $\\mathfrak{h}$ is \\emph{$\\Gamma$-flippable} if some $g\\in \\Gamma$ flips it.\n\n\\begin{thm}\\label{flipping}\nSuppose $\\Gamma$ acts without wall inversions. For every thick halfspace $\\mathfrak{h}$, exactly one of the following happens:\n\\begin{enumerate}\n\\item $\\mathfrak{h}$ is $\\Gamma$-flippable;\n\\item the closure of $\\widetilde{\\mathfrak{h}}^*$ in $\\overline X$ contains a proper, closed, convex, $\\Gamma$-invariant subset $C\\subseteq\\overline X$.\n\\end{enumerate}\n\\end{thm}\n\\begin{proof}\nIf $\\mathfrak{h}$ is $\\Gamma$-flippable and $g\\in\\Gamma$ flips it, $\\overline{\\mathfrak{h}^*}$ and $g\\overline{\\mathfrak{h}^*}$ are disjoint subsets of $X$; let $(x,x')$ be a pair of gates and $I:=I(x,x')$. Observe that $\\pi_I$ maps the closure of $\\widetilde{\\mathfrak{h}}^*$ to $x$ and the closure of $g\\widetilde{\\mathfrak{h}}^*$ to $x'$; hence, any wall of $X$ separating $x$ and $x'$ induces a wall of $\\overline X$ separating the closures of $\\widetilde{\\mathfrak{h}}^*$ and $g\\widetilde{\\mathfrak{h}}^*$. Thus, options 1 and 2 are mutually exclusive. If 1 does not hold, we have $g\\overline{\\mathfrak{h}^*}\\cap\\overline{\\mathfrak{h}^*}\\neq\\emptyset$ for every $g\\in\\Gamma$, since the action has no wall inversions. Helly's Theorem implies that the closures of the sets $g\\widetilde{\\mathfrak{h}}^*$, $g\\in\\Gamma$, have the finite intersection property and, since $\\overline X$ is compact, their intersection $C$ is nonempty. It is closed, convex and $\\Gamma$-invariant; since $\\mathfrak{h}$ is thick, we have $C\\neq\\overline X$. \n\\end{proof}\n\nThe thickness assumption in Theorem~\\ref{flipping} is necessary. Consider the real tree obtained from the ray $[0,+\\infty)$ by attaching a real line $\\ell_n$ to the point $\\frac{1}{n}$ for every $n\\geq 1$. Complete this to a real tree $T$ so that there exist isometries $g_n$ with axes $\\ell_n$; let $\\Gamma$ be the group generated by these. The minimal subtree for $\\Gamma$ contains all the lines $\\ell_n$; let $X$ be its closure in $T$. The action $\\Gamma\\curvearrowright X$ does not preserve any proper, closed, convex subset of $\\overline X$, but the singleton $\\{0\\}$ inside the original ray is a halfspace that is not flipped by $\\Gamma$.\n\nWe remark that any action on a connected median space is automatically without wall inversions by Proposition~\\ref{all about halfspaces}. When $X$ is connected, we denote by $\\partial_{\\infty}X$ the visual boundary of the ${\\rm CAT}(0)$ space arising from Theorem~\\ref{CAT(0) metric}. 
If no proper, closed, convex subset of $X$ is $\\Gamma$-invariant, the following describes the only obstruction to flippability of halfspaces.\n\n\\begin{prop}\\label{visual boundary vs Roller boundary}\nIf $X$ is connected, there exists a closed, convex, $\\Gamma$-invariant subset $C\\subseteq\\partial X$ if and only if $\\Gamma$ fixes a point of $\\partial_{\\infty}X$.\n\\end{prop}\n\\begin{proof}\nSuppose $C\\subseteq\\partial X$ is closed, convex and $\\Gamma$-invariant; Lemma~2.6 in \\cite{Fioravanti1} implies that $C$ is gate-convex, hence the set $\\sigma_C:=\\{\\mathfrak{h}\\in\\mathscr{H}\\mid C\\subseteq\\widetilde{\\mathfrak{h}}\\}$ is nonempty as it contains all halfspaces separating $x\\in X$ and $\\pi_C(x)$. By Theorem~\\ref{flipping}, any $\\mathfrak{h}\\in\\sigma_C^*$ is not $\\Gamma$-flippable; thus $\\{g\\overline{\\mathfrak{h}}\\mid\\mathfrak{h}\\in\\sigma_C,~g\\in\\Gamma\\}$ is a collection of subsets of $X$ with the finite intersection property. These subsets are convex also with respect to the ${\\rm CAT}(0)$ metric and their intersection is empty. The topological dimension of every compact subset of $X$ is bounded above by the rank of $X$, see Lemma~2.10 in \\cite{Fioravanti1} and Theorem~2.2, Lemma~7.6 in \\cite{Bow1}; thus, the geometric and telescopic dimensions (see \\cite{CL}) of the ${\\rm CAT}(0)$ metric are at most $r$. The existence of a fixed point in $\\partial_{\\infty}X$ now follows from Proposition~3.6 in \\cite{CS}.\n\nConversely, suppose $\\zeta\\in\\partial_{\\infty}X$ is fixed by $\\Gamma$. The intersection of a halfspace of $X$ and a ray for the ${\\rm CAT}(0)$ metric is either empty or a subray. Hence, given $x\\in X$, the subset $\\sigma(x,\\zeta)\\subseteq\\mathscr{H}$ of halfspaces intersecting the ray $x\\zeta$ in a subray is an ultrafilter; it represents a point $\\xi(x,\\zeta)\\in\\overline X$. If $y\\in X$ is another point and $x_n,y_n$ are points diverging along the rays $x\\zeta$ and $y\\zeta$, we have $\\sigma(x,\\zeta)\\triangle\\sigma(y,\\zeta)\\subseteq\\liminf\\left(\\mathscr{H}(x_n|y_n)\\cup\\mathscr{H}(y_n|x_n)\\right)$. For every $n$, the points $x_n$ and $y_n$ are at most as far apart as $x$ and $y$ in the ${\\rm CAT}(0)$ metric; since the latter is bi-Lipschitz equivalent to the median metric on $X$, we conclude that $\\widehat{\\nu}\\left(\\sigma(x,\\zeta)\\triangle\\sigma(y,\\zeta)\\right)<+\\infty$. Thus, $\\xi(x,\\zeta)$ and $\\xi(y,\\zeta)$ lie in the same component $Z\\subseteq\\overline X$, which is $\\Gamma$-invariant. Moreover, $\\xi(x,\\zeta)\\not\\in X$ as $\\mathscr{H}(x|z)\\subseteq\\sigma(x,\\zeta)$ for every $z$ on the ray $x\\zeta$; hence $Z\\subseteq\\partial X$. Finally, it is easy to show that $\\overline Z\\subseteq\\partial X$.\n\\end{proof}\n\nWe are interested in studying actions where every thick halfspace is flippable, see Corollary~\\ref{double skewering} below. 
To this end, we introduce the following notions of non-elementarity.\n\n\\begin{defn}\\label{elementarity notion}\nWe say that the action $\\Gamma\\curvearrowright X$ is:\n\\begin{itemize} \n\\item \\emph{Roller nonelementary} if $\\Gamma$ has no finite orbit in $\\overline X$; \n\\item \\emph{Roller minimal} if $X$ is not a single point and $\\Gamma$ does not preserve any proper, closed, convex subset of the Roller compactification $\\overline X$;\n\\item \\emph{essential} if $\\Gamma$ does not preserve any proper, closed, convex subset of the median space $X$.\n\\end{itemize}\n\\end{defn}\n\nThe action of $\\Gamma$ is Roller elementary if and only if a finite-index subgroup of $\\Gamma$ fixes a point of $\\overline X$; thus, Roller nonelementarity passes to finite index subgroups. This fails for Roller minimality. For instance, consider the action of $\\Gamma=\\mathbb{Z}^2\\rtimes\\mathbb{Z}\/4\\mathbb{Z}$ on the standard cubulation of $\\mathbb{R}^2$; the action of $H:=\\mathbb{Z}^2$ is by translations, whereas $\\mathbb{Z}\/4\\mathbb{Z}$ rotates around the origin. The action of $\\Gamma$ is Roller minimal, but $H$ has four fixed points in the Roller compactification.\n\nThe same example shows that Roller minimal actions might not be Roller nonelementary. Roller nonelementary actions need not be Roller minimal either: Let $T$ be the Cayley graph of a nonabelian free group $F$ and consider the product action of $F\\times\\mathbb{Z}$ on $T\\times\\mathbb{R}$. It is Roller nonelementary but leaves invariant two components of the Roller boundary, both isomorphic to $T$.\n\nBy Proposition~\\ref{visual boundary vs Roller boundary}, an essential action $\\Gamma\\curvearrowright X$ is Roller minimal if and only if no point of the visual boundary $\\partial_{\\infty}X$ is fixed by $\\Gamma$. In particular, an essential action with no finite orbits in $\\partial_{\\infty}X$ is always Roller minimal and Roller nonelementary. \n\nThe following is immediate from Theorem~\\ref{flipping} and the proof of the Double Skewering Lemma in the introduction of \\cite{CS}.\n\n\\begin{cor}\\label{double skewering}\nIf $\\Gamma\\curvearrowright X$ is Roller minimal and without wall inversions, every thick halfspace is $\\Gamma$-flippable. Moreover, if $\\mathfrak{h}\\subseteq\\mathfrak{k}$ are thick halfspaces, there exists $g\\in\\Gamma$ such that $g\\mathfrak{k}\\subsetneq\\mathfrak{h}\\subseteq\\mathfrak{k}$ and $d(g\\mathfrak{k},\\mathfrak{h}^*)>0$.\n\\end{cor}\n\nOne can usually reduce to studying a Roller minimal action by appealing to the following result. \n\n\\begin{prop}\\label{Roller elementary vs strongly so}\nEither $\\Gamma\\curvearrowright\\overline X$ fixes a point or there exist a $\\Gamma$-invariant component $Z\\subseteq\\overline X$ and a $\\Gamma$-invariant, closed, convex subset ${C\\subseteq Z}$ such that $\\Gamma\\curvearrowright C$ is Roller minimal.\n\\end{prop}\n\\begin{proof}\nLet $K\\subseteq\\overline X$ be a minimal, nonempty, closed, $\\Gamma$-invariant, convex subset; it exists by Zorn's Lemma. Corollary~4.31 in \\cite{Fioravanti1} provides a component $Z\\subseteq\\overline X$ of maximal rank among those that intersect $K$. Since $Z$ must be $\\Gamma$-invariant, we have $\\overline Z\\cap K= K$ by the minimality of $K$, i.e.~$K\\subseteq\\overline Z$. The set $C:=K\\cap Z$ is nonempty, convex, $\\Gamma$-invariant and closed in $Z$, since the inclusion $Z\\hookrightarrow\\overline X$ is continuous. 
By minimality of $K$, we have $K=\\overline C$ and the latter can be identified with the Roller compactification of $C$ (see Lemma~4.8 in \\cite{Fioravanti1}). We conclude that either $\\Gamma\\curvearrowright C$ is Roller minimal or $C$ is a single point.\n\\end{proof}\n\n\\begin{cor}\\label{Roller elementary vs strongly so 2}\nIf $\\Gamma\\curvearrowright X$ is Roller nonelementary, there exist a $\\Gamma$-in\\-variant component $Z\\subseteq\\overline X$ and a $\\Gamma$-invariant, closed, convex subset $C\\subseteq Z$ such that $\\Gamma\\curvearrowright C$ is Roller minimal and Roller nonelementary.\n\\end{cor}\n\nWe remark that the following is immediate from part~5 of Proposition~\\ref{properties of X'}:\n\n\\begin{lem}\\label{RNE for X'}\nThe action $\\Gamma\\curvearrowright X$ is Roller elementary if and only if the action $\\Gamma\\curvearrowright X'$ is.\n\\end{lem}\n\nRelying on Theorem~E, we can already provide a proof of Theorem~A.\n\n\\begin{proof}[Proof of Theorem~A]\nSince $\\Gamma$ contains no nonabelian free subgroups, the action $\\Gamma\\curvearrowright X$ is Roller elementary by Theorem~E. Theorem~F yields a finite-index subgroup $\\Gamma_0\\leq\\Gamma$ and $N\\lhd\\Gamma_0$ such that $\\Gamma_0\/N$ is abelian and every finitely generated subgroup of $N$ has an orbit with at most $2^r$ elements. \n\nIf all point stabilisers are amenable, $N$ is amenable, as it is the direct limit of its finitely generated subgroups; hence, $\\Gamma$ is amenable. If $\\Gamma\\curvearrowright X$ is\nproper, every finitely generated subgroup of $N$ is finite. If $\\Gamma\\curvearrowright X$ is free, $N$ is finite; if $X$ is connected, $N$ must be trivial by Theorem~\\ref{CAT(0) metric} and Cartan's fixed point theorem. Finally, finitely generated finite-by-abelian groups are virtually abelian, for instance by Lemma~II.7.9 in \\cite{BH}.\n\\end{proof}\n\nWe now proceed to obtain an analogue of Proposition~5.1 from \\cite{CS}, namely Theorem~\\ref{strong separation} below. We say that $\\mathfrak{h},\\mathfrak{k}\\in\\mathscr{H}$ are \\emph{strongly separated} if $\\overline{\\mathfrak{h}}\\cap\\overline{\\mathfrak{k}}=\\emptyset$ and no $\\mathfrak{j}\\in\\mathscr{H}$ is transverse to both $\\mathfrak{h}$ and $\\mathfrak{k}$.\n\n\\begin{lem}\\label{slight reformulation of SS}\nHalfspaces with disjoint closures are strongly separated if and only if no thick halfspace is transverse to both.\n\\end{lem}\n\\begin{proof}\nSuppose that $\\overline{\\mathfrak{h}_1}\\cap\\overline{\\mathfrak{h}_2}=\\emptyset$ and a nowhere-dense halfspace $\\mathfrak{k}$ is transverse to both $\\mathfrak{h}_i$. Pick points $y_i\\in\\mathfrak{h}_i\\cap\\mathfrak{k}^*$ and observe that $I:=I(y_1,y_2)\\subseteq\\mathfrak{k}^*$; since $\\mathfrak{k}$ is closed by Proposition~\\ref{all about halfspaces}, we have $d(I,\\mathfrak{k})>0$. Thus, if $(x_1,x_2)$ is a pair of gates for $(I,\\mathfrak{k})$, the set $\\mathscr{H}(x_1|x_2)$ has positive measure and it contains a thick halfspace $\\mathfrak{k}'$. 
It is easy to see that $\\mathfrak{k}'$ is transverse to $\\mathfrak{h}_1$ and $\\mathfrak{h}_2$.\n\\end{proof}\n\n\\begin{thm}\\label{strong separation}\nIf $\\Gamma\\curvearrowright X$ is Roller minimal and without wall inversions, the following are equivalent:\n\\begin{enumerate}\n\\item $X$ is irreducible;\n\\item there exists a pair of strongly separated halfspaces;\n\\item for every $\\mathfrak{h}\\in\\mathscr{H}\\setminus\\mathscr{H}^{\\times}$, there exist halfspaces $\\mathfrak{h}'\\subseteq\\mathfrak{h}\\subseteq\\mathfrak{h}''$ so that $\\mathfrak{h}'$ and $\\mathfrak{h}''^*$ are thick and strongly separated.\n\\end{enumerate}\n\\end{thm}\n\\begin{proof}\n3 clearly implies 2, and 1 follows from 2 using Corollary~\\ref{products}. We are left to prove that 1 implies 3. Suppose for the sake of contradiction that, for some $\\mathfrak{h}\\in\\mathscr{H}\\setminus\\mathscr{H}^{\\times}$, we cannot find $\\mathfrak{h}'$ and $\\mathfrak{h}''$. We reach a contradiction as in the proof of Proposition~5.1 in \\cite{CS} once we construct sequences $(\\mathfrak{h}_n')_{n\\geq 0}$, $(\\mathfrak{h}_n'')_{n\\geq 0}$ and $(\\mathfrak{k}_n)_{n\\geq 0}$ of thick halfspaces such that\n\\begin{enumerate}\n\\item $\\mathfrak{k}_n$ is transverse to $\\mathfrak{h}_{n-1}'$ and $\\mathfrak{h}_{n-1}''$ for $n\\geq 1$;\n\\item $\\mathfrak{k}_n\\in\\mathscr{H}(\\mathfrak{h}_n''^*|\\mathfrak{h}_n')$ for $n\\geq 0$;\n\\item $\\mathfrak{h}_n'\\subsetneq\\mathfrak{h}_{n-1}'\\subsetneq\\mathfrak{h}\\subsetneq\\mathfrak{h}_{n-1}''\\subsetneq\\mathfrak{h}_n''$ for $n\\geq 1$.\n\\end{enumerate}\nBy Corollary~\\ref{double skewering}, we can find $g\\in\\Gamma$ such that $g^{-1}\\mathfrak{h}\\subsetneq\\mathfrak{h}\\subsetneq g\\mathfrak{h}$ and we set $\\mathfrak{h}_0':=g^{-1}\\mathfrak{h}$ and $\\mathfrak{h}_0'':=g\\mathfrak{h}$. Now suppose that we have defined $\\mathfrak{h}_n'$, $\\mathfrak{h}_n''$ and $\\mathfrak{k}_{n-1}$. Corollary~\\ref{double skewering} yields $g'\\in\\Gamma$ with $\\mathfrak{h}_n'\\subsetneq\\mathfrak{h}\\subsetneq\\mathfrak{h}_n''\\subsetneq g'\\mathfrak{h}_n'\\subsetneq g'\\mathfrak{h}_n''$ and $\\overline{\\mathfrak{h}_n''}\\cap g'\\overline{\\mathfrak{h}_n'^*}=\\emptyset$. By hypothesis, $\\mathfrak{h}_n'$ and $g'\\mathfrak{h}_n''^*$ are not strongly separated, but they have disjoint closures; thus there exists $\\mathfrak{k}$ transverse to $\\mathfrak{h}_n'$ and $g'\\mathfrak{h}_n''$. By Lemma~\\ref{slight reformulation of SS}, we can assume that $\\mathfrak{k}$ is thick. \n\nThe construction of the sequences can be concluded as in \\cite{CS} once we obtain analogues of their Lemmata~5.2,~5.3 and~5.4. Lemmata~5.3 and~5.4 can be proved using Corollary~\\ref{double skewering} as in \\cite{CS}, with the additional requirement that all input and output halfspaces be thick. We prove the following version of Lemma~5.2:\n\\begin{center}\n\\emph{``If $\\mathfrak{h},\\mathfrak{k}$ are thick transverse halfspaces, one of the four sectors determined by $\\mathfrak{h}$ and $\\mathfrak{k}$ contains a thick halfspace.''}\n\\end{center}\nLet $\\mathcal{H}$ be the set of thick halfspaces that are not transverse to $\\mathfrak{h}$ and $\\mathcal{K}$ the set of thick halfspaces that are not transverse to $\\mathfrak{k}$. As in \\cite{CS}, we can assume that every halfspace in $\\mathcal{H}$ is transverse to every halfspace in $\\mathcal{K}$. 
Let $\\mathcal{H}'$ be the collection of thick halfspaces that either contain or are contained in some halfspace of $\\mathcal{H}$; we define $\\mathcal{K}'$ similarly. \n\nObserve that $\\mathfrak{a}\\in\\mathscr{H}$ lies in $\\mathcal{H}'$ if and only if there exist $\\mathfrak{b},\\mathfrak{b}'\\in\\mathcal{H}$ such that $\\mathfrak{b}\\subseteq\\mathfrak{a}\\subseteq\\mathfrak{b}'$; again, this is proved as in \\cite{CS}. Thus, halfspaces in $(\\mathscr{H}\\setminus\\mathscr{H}^{\\times})\\setminus\\mathcal{H}'$ must be transverse to all halfspaces in $\\mathcal{H}'$. We conclude that we have a $*$-invariant partition\n\\[\\mathscr{H}=\\mathcal{H}'\\sqcup\\left(\\mathscr{H}\\setminus(\\mathcal{H}'\\cup\\mathscr{H}^{\\times})\\right)\\sqcup\\mathscr{H}^{\\times},\\]\nwhere the first two pieces are transverse and the third is null. Since $\\mathfrak{h}\\in\\mathcal{H}'$ and $\\mathfrak{k}\\in\\mathscr{H}\\setminus(\\mathcal{H}'\\cup\\mathscr{H}^{\\times})$ this partition is nontrivial. Finally, observe that $\\mathcal{H}'$ is inseparable and, thus, measurable. Corollary~\\ref{products} now violates the irreducibility of $X$.\n\\end{proof}\n\n\n\n\\section{Facing triples.}\\label{facing triples}\n\nLet $X$ be a complete median space of finite rank $r$ and $\\Gamma\\curvearrowright X$ an isometric action without wall inversions. In this section we study certain tree-like behaviours displayed by all median spaces that admit a Roller nonelementary action. These will allow us to construct nonabelian free subgroups of their isometry groups. \n\nWe say that the median space $X$ is \\emph{lineal} with endpoints $\\xi,\\eta\\in\\overline X$ if $X\\subseteq I(\\xi,\\eta)$. \n\n\\begin{lem}\\label{intervals vs elementarity}\nEvery action on a lineal median space is Roller elementary.\n\\end{lem}\n\\begin{proof}\nThe elements of $\\mathscr{F}:=\\left\\{\\{\\xi,\\eta\\}\\subseteq\\overline X\\mid X\\subseteq I(\\xi,\\eta)\\right\\}\\neq\\emptyset$ are permuted by each isometry of $X$. If $\\{\\xi_1,\\eta_1\\}$, $\\{\\xi_2,\\eta_2\\}$ are distinct elements of $\\mathscr{F}$, the sets $\\mathscr{H}(\\xi_1,\\eta_2|\\eta_1,\\xi_2)$ and $\\mathscr{H}(\\xi_1,\\xi_2|\\eta_1,\\eta_2)$ are transverse and their union is $\\mathscr{H}(\\xi_1|\\eta_1)$, which contains a side of every wall of $X$. By Corollary~\\ref{products}, $X$ splits as a product $X_1\\times X_2$. Thus, if $X$ is irreducible, we have $\\#\\mathscr{F}=1$ and an index-two subgroup of $\\Gamma$ fixes two points of $\\overline X$.\n\nIn general, let $X=X_1\\times ... \\times X_k$ be the decomposition of $X$ into irreducible factors and $\\Gamma$ a group of isometries of $X$; by Proposition~\\ref{isometries of products}, a finite-index subgroup $\\Gamma_0\\leq\\Gamma$ leaves this decomposition invariant. Since $\\overline X=\\overline{X_1}\\times ... \\times\\overline{X_k}$ by Lemma~\\ref{Roller for products}, if $X$ is lineal so is each $X_i$. The previous discussion shows that a finite-index subgroup of $\\Gamma_0$ fixes points $\\xi_i\\in\\overline{X_i}$, for all $i$; in particular, it fixes the point $(\\xi_1,...,\\xi_k)\\in\\overline X$, hence $\\Gamma\\curvearrowright X$ is Roller elementary.\n\\end{proof}\n\nHalfspaces $\\mathfrak{h}_1,\\mathfrak{h}_2,\\mathfrak{h}_3$ are said to form a \\emph{facing triple} if they are pairwise disjoint; if each $\\mathfrak{h}_i$ is thick, we speak of a thick facing triple. If $X$ is lineal, $\\mathscr{H}$ does not contain facing triples. 
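\n\nFor example, $\\mathbb{R}^n$ with the $\\ell^1$ metric is lineal, with endpoints the two corners $(-\\infty,...,-\\infty)$ and $(+\\infty,...,+\\infty)$ of its Roller compactification: every halfspace bounds a single coordinate, so halfspaces bounding different coordinates are transverse, while among halfspaces bounding the same coordinate no three are pairwise disjoint; accordingly, $\\mathscr{H}$ contains no facing triple.\n\n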
On the other hand, we have the following result; compare with Corollary~2.34 in \\cite{CFI} and Theorem~7.2 in \\cite{CS}.\n\n\\begin{prop}\\label{facing 3-ples exist}\n\\begin{enumerate}\n\\item If $\\Gamma\\curvearrowright X$ is Roller nonelementary, there exists a thick facing triple.\n\\item If $X$ is irreducible and $\\Gamma\\curvearrowright X$ is Roller nonelementary and Roller minimal, every thick halfspace is part of a thick facing triple.\n\\end{enumerate}\n\\end{prop}\n\\begin{proof}\nWe prove part~1 by induction on the rank; the rank-zero case is trivial. In general, let $C\\subseteq Z$ be as provided by Corollary~\\ref{Roller elementary vs strongly so 2}. If ${Z\\subseteq\\partial X}$, we have $\\text{rank}(C)\\leq\\text{rank}(X)-1$ by Proposition~\\ref{components}; in this case, we conclude by the inductive hypothesis and Proposition~\\ref{halfspaces of components}. Otherwise, we have $C\\subseteq X$; let $C=C_1\\times ...\\times C_k$ be its decomposition into irreducible factors. By Proposition~\\ref{isometries of products}, a finite-index subgroup $\\Gamma_0\\leq\\Gamma$ preserves this decomposition and, since $\\overline C=\\overline{C_1}\\times...\\times\\overline{C_k}$, there exists $i\\leq k$ such that $\\Gamma_0\\curvearrowright C_i$ is Roller nonelementary. If $k\\geq 2$, we have $\\text{rank}(C_i)\\leq\\text{rank}(X)-1$ and we conclude again by the inductive hypothesis. If $C$ is irreducible, Corollary~\\ref{double skewering} and Theorem~\\ref{strong separation} provide $\\mathfrak{h}\\in\\mathscr{H}(C)\\setminus\\mathscr{H}^{\\times}(C)$ and $g\\in\\Gamma$ such that $g\\mathfrak{h}$ and $\\mathfrak{h}^*$ are strongly separated. Since $\\overline C$ is compact, there exists a point $\\xi\\in\\overline C$ that lies in the closure of every $g^n\\widetilde{\\mathfrak{h}}$, $n\\in\\mathbb{Z}$; similarly, we can find $\\eta\\in\\overline C$ lying in the closure of every $g^n\\widetilde{\\mathfrak{h}}^*$, $n\\in\\mathbb{Z}$.\n\nBy Lemma~\\ref{intervals vs elementarity}, there exists $x\\in C$ with $m:=m(x,\\xi,\\eta)\\neq x$. Picking $\\mathfrak{j}\\in\\mathscr{H}(m|x)\\setminus\\mathscr{H}^{\\times}(C)$, neither $\\xi$ nor $\\eta$ can lie in $\\widetilde{\\mathfrak{j}}$. Since $g^n\\mathfrak{h}$ and $g^m\\mathfrak{h}^*$ are strongly separated for $n0$, the trap frequency is increased and the two molecules are pushed apart slightly so that their mean separation is larger than in the absence of the dipole-dipole interaction. For $q r^3 < 0$, these effects are reversed.\n\nFigure~\\ref{figSupp:eigs1D} shows the energies of the first 5 eigenstates of Eq.~\\eqref{eqSupp:ham-rel} calculated numerically as a function of $\\tilde{\\delta x}$ for $q=1$ and $r=3.5$. At large separations, the energy shifts agree well with those expected for point dipoles fixed at the potential minima (dashed lines). For intermediate separations, the shifts from the full 1D calculation are larger because the finite extent of the wavefunction means that $\\langle 1\/x_{\\rm rel}^3\\rangle > 1\/\\langle x_{\\rm rel}\\rangle^3$. This effect is larger for excited motional states where the extent of the wavefunction is larger. At small separations, the dipoles are pushed apart by their interaction and so the energy shift is reduced from the value expected from fixed point dipoles. \n\n\\begin{figure}\n \\centering\n \\includegraphics{eigs1D.pdf}\n \\caption{Energies of the first 5 relative motional states in presence of the dipole-dipole interaction for $q=1$, $r=3.5$. 
Dashed lines: calculations for point dipoles fixed at potential minima; solid lines: full 1D calculation.}\n \\label{figSupp:eigs1D}\n\\end{figure}\n\n\\section{Hyperfine interaction and shifted potentials}\n\nHere, we consider in more detail the complications introduced by the hyperfine interaction. As in the main text, we take the handedness of the light to be along $z$.\n\nThe hyperfine interaction couples the nuclear spin $\\vec{I}$ and the total electronic angular momentum $\\vec{J}$. Their sum is $\\vec{F}$. Let us first consider states with well defined $F$, $J$ and $I$, and a vector Stark shift which is small compared to the hyperfine interaction so that we need only consider the diagonal matrix elements of the effective Stark shift operator. In this case, the vector Stark shift is $W_1 = \\frac{1}{2 \\epsilon_0 c}\\alpha^{(1)} g_F m_F (\\vec{C}\\cdot\\hat{z}) I$ where\n\\begin{equation}\n g_F=\\frac{F(F+1)+J(J+1)-I(I+1)}{4J(J+1)F(F+1)}.\n\\label{eqSupp:gF}\n\\end{equation}\n\nThis is a useful result for molecules where the spin-rotation interaction is large compared to the hyperfine interaction. States from neighboring rotational manifolds that have the same values of $J$, $F$ and $m_F$ will have the same vector Stark shift, and the potentials for these states will be identical. However, for many molecules of interest, the hyperfine and spin-rotation interactions are similar in size so the hyperfine coupling mixes states with the same $F$ and $m_F$ but different $J$. The vector Stark shift of these mixed states is not given by Eq.~(\\ref{eqSupp:gF}), but instead depends on the relative size of the hyperfine and spin-rotation coupling. As a result, in general, states in different rotational levels will have different vector Stark shifts, and potentials that are shifted relative to one another. Here we consider the effect of this shift on the dipole-dipole interaction. \n\nWe assume the $\\ket{0_\\pm}$ states have potential minima at $\\pm \\delta x_0\/2$ and the $\\ket{1_\\pm}$ states at $\\pm \\delta x_1\/2$. The full Hamiltonian is now\n\\begin{equation}\n \\begin{split}\n H =& \\frac{p_A^2}{2M} + \\frac{p_B^2}{2M} + \\frac{1}{2}M\\omega_{\\rm t}^2\\left(x_A+\\frac{\\delta x_0}{2}\\right)^2\\sket{0_-}{A}\\sbra{0_-}{A}\\\\\n &\\quad + \\frac{1}{2}M\\omega_{\\rm t}^2\\left(x_A+\\frac{\\delta x_1}{2}\\right)^2\\sket{1_-}{A}\\sbra{1_-}{A}\\\\\n &\\quad + \\frac{1}{2}M\\omega_{\\rm t}^2\\left(x_B-\\frac{\\delta x_0}{2}\\right)^2\\sket{0_+}{B}\\sbra{0_+}{B}\\\\\n &\\quad + \\frac{1}{2}M\\omega_{\\rm t}^2\\left(x_B-\\frac{\\delta x_1}{2}\\right)^2\\sket{1_+}{B}\\sbra{1_+}{B}\\\\\n &\\quad + \\frac{\\vec{d}_A\\cdot\\vec{d}_B-3(\\vec{d}_A\\cdot\\hat{x})(\\vec{d}_B\\cdot\\hat{x})}{4\\pi\\epsilon_0|x_B-x_A|^3}.\n \\end{split}\n\\end{equation}\nThe Hamiltonian is no longer separable into motional and internal parts. We apply a unitary transformation\n\\begin{equation}\n \\begin{split}\n U(\\eta) &= \\Big[T_A(-\\tfrac{\\eta}{2})\\sket{0_-}{A}\\sbra{0_-}{A} + T_A(\\tfrac{\\eta}{2})\\sket{1_-}{A}\\sbra{1_-}{A}\\Big]\\\\\n & \\otimes \\Big[T_B(\\tfrac{\\eta}{2})\\sket{0_+}{B}\\sbra{0_+}{B} + T_B(-\\tfrac{\\eta}{2})\\sket{1_+}{B}\\sbra{1_+}{B}\\Big]\n \\end{split}\n\\end{equation}\nwhere $T_{A\/B}$ are the single particle translation operators such that $T_{A\/B}(\\eta)\\ket{x_{A\/B}}=\\ket{x_{A\/B}+\\eta}$. 
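\nFor later reference (an elementary identity that follows from this sign convention), conjugation by a translation shifts the corresponding position operator,\n\\begin{equation}\n T_{A\/B}(\\eta)\\, x_{A\/B}\\, T_{A\/B}^\\dagger(\\eta) = x_{A\/B}-\\eta,\n\\end{equation}\nand hence shifts the argument of any function of that position operator; this is applied separately to the two molecules in the manipulation below.\n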
We find\n\\begin{equation}\n \\begin{split}\n H' &= U(\\delta x_{10}) H U^\\dagger(\\delta x_{10})\\\\\n &= \\frac{p_A^2}{2M} + \\frac{p_B^2}{2M} + \\frac{1}{2}M\\omega_{\\rm t}^2\\left(x_A+\\frac{\\delta x_{\\rm av}}{2}\\right)^2\\\\\n &\\quad+ \\frac{1}{2}M\\omega_{\\rm t}^2\\left(x_B-\\frac{\\delta x_{\\rm av}}{2}\\right)^2\\\\\n &\\quad + \\frac{\\Lambda_{10}}{4\\pi\\epsilon_0}\\big[D_A D_B^\\dagger K(\\delta x_{10}) + D_A^\\dagger D_B K^\\dagger(\\delta x_{10})\\\\\n &\\qquad\\qquad+D_A D_B F(\\delta x_{10}) + D_A^\\dagger D_B^\\dagger F^\\dagger(\\delta x_{10})\\big],\n \\end{split}\n\\end{equation}\nwhere $\\delta x_{\\rm av}=\\frac{1}{2}(\\delta x_0+ \\delta x_1)$, $\\delta x_{10}=\\frac{1}{2}(\\delta x_0-\\delta x_1)$, $D_{A} = \\sket{1_-}{A}\\sbra{0_-}{A}$, $D_B = \\sket{1_+}{B}\\sbra{0_+}{B}$ and we have defined the operators\n\\begin{equation}\n \\begin{split}\n K(\\eta) &= T_A\\left(\\tfrac{\\eta}{2}\\right)T_B\\left(\\tfrac{\\eta}{2}\\right)\\frac{1}{|x_B-x_A|^3}T_A^\\dagger\\left(-\\tfrac{\\eta}{2}\\right)T_B^\\dagger\\left(-\\tfrac{\\eta}{2}\\right),\\\\\n F(\\eta) &= T_A\\left(\\tfrac{\\eta}{2}\\right)T_B\\left(-\\tfrac{\\eta}{2}\\right)\\frac{1}{|x_B-x_A|^3}T_A^\\dagger\\left(-\\tfrac{\\eta}{2}\\right)T_B^\\dagger\\left(\\tfrac{\\eta}{2}\\right).\n \\end{split}\n\\end{equation}\nWe can immediately neglect, as before, the off-resonant terms in $D_A D_B$ and $D_A^\\dagger D_B^\\dagger$ because they couple internal states which are separated by the rotational energy.\nTransforming again to the dimensionless relative position operators we have\n\\begin{equation}\n \\begin{split}\n \\tilde{H}' &= \\frac{1}{\\hbar\\omega_{\\rm t}} H' = \\frac{p_{\\rm cm}^2}{2} + \\frac{p_{\\rm rel}^2}{2} + \\frac{1}{2} x_{\\rm cm}^2 + \\frac{1}{2} (x_{\\rm rel}-\\tilde{\\delta x}_\\mathrm{av})^2 \\\\&\\quad +\\frac{r^3}{|x_{\\rm rel}|^3}\\big[T_{\\rm cm}(2\\xi)D_A D_B^\\dagger + T_{\\rm cm}(-2\\xi)D_A^\\dagger D_B\\big],\n \\end{split}\n \\label{eqSupp:red-mot-ham-shift}\n\\end{equation}\nwhere $\\xi = \\sqrt{\\frac{M\\omega_{\\rm t}}{2\\hbar}}\\delta x_{10}$ and $\\tilde{\\delta x}_\\mathrm{av}=\\sqrt{\\frac{M\\omega_{\\rm t}}{2\\hbar}}\\delta x_\\mathrm{av}$. $T_{\\rm cm}$ and $T_{\\rm rel}$ are the translation operators for the dimensionless center-of-mass and relative coordinates respectively and we have used $T_A(\\frac{\\delta x_{10}}{2}) = T_{\\rm rel}(-\\frac{\\xi}{2})T_{\\rm cm}(\\frac{\\xi}{2})$, $T_B(\\frac{\\delta x_{10}}{2}) = T_{\\rm rel}(\\frac{\\xi}{2})T_{\\rm cm}(\\frac{\\xi}{2})$. In Eq.~\\eqref{eqSupp:red-mot-ham-shift}, the dipole-dipole interaction couples the center-of-mass and relative motions, and has an off-diagonal matrix element between the states $\\ket{\\Psi^+}$ and $\\ket{\\Psi^-}$.\n\nWe note that $\\tilde{H}' = \\tfrac{1}{\\hbar \\omega_{\\rm t}}H_{\\rm m} +\\delta H$ where $H_{\\rm m}$ is given by Eq.~(\\ref{eqSupp:ham-mot-red}) and\n\\begin{equation}\n\\begin{split}\n \\delta H =& \\frac{r^3}{|x_{\\rm rel}|^3}\\big[T_{\\rm cm}(2\\xi)D_A D_B^\\dagger+ T_{\\rm cm}(-2\\xi)D_A^\\dagger D_B\\big]\\\\ &- \\frac{r^3}{|x_{\\rm rel}|^3}\\big[D_A D_B^\\dagger + D_A^\\dagger D_B\\big].\n \\end{split}\n\\end{equation}\nTaking zeroth-order eigenstates to be those of $H_{\\rm m}$, and using first-order perturbation theory, we find that the dipole-dipole energies are just multiplied by the factor $e^{-\\xi^2}$. This factor is used in the calculation of the dipole-dipole interaction in the main text. 
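\nExplicitly (a brief check, using the dimensionless center-of-mass ground state $\\pi^{-1\/4}e^{-x_{\\rm cm}^2\/2}$), the ground-state matrix element of the translation operator $T_{\\rm cm}(2\\xi)$ appearing in $\\delta H$ is the Gaussian overlap\n\\begin{equation}\n \\frac{1}{\\sqrt{\\pi}}\\int e^{-x^2\/2}\\, e^{-(x-2\\xi)^2\/2}\\,\\mathrm{d} x = e^{-\\xi^2}.\n\\end{equation}\n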
This factor accounts for the shifted potentials irrespective of the size of the dipole-dipole interaction because the center-of-mass eigenstates of Eq.~\\eqref{eqSupp:ham-cm} are unchanged by the dipole-dipole interaction.\n\n\\section{Tunneling}\n\nStates that have total angular momentum $F>1\/2$ will be subject to a tensor ac Stark shift. This has several effects. One is to make the trap frequencies for different states differ from each other. For the parameters considered in this work, this effect is small. The most important effect is to give an off-diagonal term that couples states with $|\\Delta m_F| \\le 2$. For pairs of states that are degenerate at $x=0$, this introduces an avoided crossing and provides a mechanism for molecules to tunnel from one potential to the other. \n\nLet $a$ be the matrix element of the tensor part of the Stark shift operator between the pair of states, and assume that this is constant across the region of interest. The single-molecule tunneling rate is\n\\begin{equation}\n \\gamma_\\mathrm{t} \\simeq 2 |a| \\cusbraket{\\phi_-}{\\phi_+},\n\\end{equation}\nwhere $\\ket{\\phi_-}$ and $\\ket{\\phi_+}$ are the motional states corresponding to the pair of internal states. If we assume harmonic oscillator ground states, this is $2 |a| e^{-\\tilde{\\delta x}^2\/2}$. \n\nIn the realistic example described in the main text, there is no tunneling because we choose orthogonal tweezer polarizations so that $a=0$ for all pairs of states. If instead we choose parallel polarizations, there can be tunneling between the $\\ket{1_\\pm}$ states with $|a|=\\SI{1.4}{\\mega\\hertz}$. Taking the same tweezer parameters used in the main text, $\\delta x$ for this pair of states is \\SI{175}{\\nano\\meter}, and $\\gamma_{\\rm t}$ is \\SI{2.6}{\\kilo\\hertz}.\n\n\\section{Photon scattering rate}\n\nThe scattering rate is dominated by the vector tweezer since it is tuned much closer to resonance than the scalar tweezer. The scattering rate from light tuned to the midpoint of the fine structure interval is dominated by scattering from the two fine-structure levels and is well approximated by\n\\begin{equation}\n R_\\mathrm{ph} = \\frac{2\\Gamma\\Omega^2}{3\\delta_{\\rm fs}^2},\n\\end{equation}\nwhere $\\Omega=d_{AX}\\sqrt{2 I\/\\epsilon_0 c}\/\\hbar$ is the Rabi frequency and $\\Gamma$ is the linewidth of the $^2\\Pi$ state. For CaF we have $\\Gamma=2\\pi\\times \\SI{8.3}{\\mega\\hertz}$, $\\delta_{\\rm fs}=2\\pi\\times\\SI{2.14}{\\tera\\hertz}$ and $d_{AX}=0.97\\times\\SI{5.95}{\\debye}$, where the first factor is the Franck-Condon factor and the second is the transition dipole moment between electronic states. \n\n\\section{Collisional loss rate}\n\nThe collisional loss rate for two molecules with wavefunctions $\\psi_A(x)$ and $\\psi_B(x)$ is\n\\begin{equation}\n R_{\\rm col} = \\beta \\int |\\psi_A(x)|^2|\\psi_B(x)|^2 \\mathrm{d}^3 x,\n\\end{equation}\nwhere $\\beta$ is the two-body loss rate constant, recently measured for CaF in \\SI{780}{\\nano\\meter} tweezer traps \\cite{Cheuk2020}.
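\nThe overlap integral factorizes over the three trap axes; as a brief check (a sketch only, assuming harmonic-oscillator ground-state densities of frequency $\\omega$ whose centers are displaced by $\\delta x$ along one axis, and writing $m$ for the molecular mass as in the expression below),\n\\begin{equation}\n \\int \\frac{m\\omega}{\\pi\\hbar}\\, e^{-m\\omega x^2\/\\hbar}\\, e^{-m\\omega (x-\\delta x)^2\/\\hbar}\\,\\mathrm{d} x = \\sqrt{\\frac{m\\omega}{2\\pi\\hbar}}\\, e^{-m\\omega\\,\\delta x^2\/(2\\hbar)}.\n\\end{equation}\n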
For two molecules in the motional ground states of two displaced but otherwise identical potential wells, this is\n\\begin{equation}\n R_{\\rm col} = \\beta \\left(\\frac{m}{2 \\pi\\hbar}\\right)^\\frac{3}{2} \\omega_{\\rm r} \\omega_{\\rm a}^\\frac{1}{2} e^{-\\tilde{\\delta x}^2}\n\\end{equation}\nwhere $\\omega_{\\rm r}$ is the trap frequency in the radial direction and $\\omega_{\\rm a}$ is the trap frequency in the axial direction.\n\nWe find that, for CaF in a trap that has $\\omega_{\\rm r}=2\\pi\\times\\SI{200}{\\kilo\\hertz}$, $\\omega_{\\rm a}=2\\pi\\times\\SI{35}{\\kilo\\hertz}$, keeping the collisional loss rate below \\SI{1}{\\hertz} requires a separation of 4.7 oscillator lengths, or \\SI{140}{\\nano\\meter}. The rate is very sensitive to separation in this region -- decreasing the separation to 3.6 harmonic oscillator lengths (or \\SI{106}{\\nano\\meter}) increases the loss rate to \\SI{100}{\\hertz}.\n\n\\section{Transport}\n\n\\begin{figure}\n \\centering\n \\includegraphics{transport-c2.pdf}\n \\caption{Trap merge sequence. (a) Intensity ramps of the different tweezers used in the sequence. (b) Separation of the potential minima as a function of time. Top row shows snapshots of potentials at three different times in the sequence.}\n \\label{figSupp:transport}\n\\end{figure}\n\nFigure~\\ref{figSupp:transport} illustrates a simple sequence to bring a pair of CaF molecules from two separated potentials into a single, state-dependent trap ready for the fast two-qubit gate. The adiabatic transport is achieved with intensity ramps of four spatially-fixed tweezer traps as shown in Fig.~\\mref{figSupp:transport}{(a)}. The top row of the figure shows the potentials at three points during the merger. The molecules begin in two \\SI{780}{\\nano\\meter} tweezers focused at $x=\\pm\\SI{0.6}{\\micro\\meter}$, each having power $P_{\\rm sep}$. A \\SI{604.966}{\\nano\\meter} tweezer and another \\SI{780}{\\nano\\meter} tweezer are focused at $x=0$ and have powers $P_{\\rm vec}$ and $P_{\\rm sc}$ respectively. $P_{\\rm sep}$ is ramped down from its initial value of $\\SI{10}{\\milli\\watt}$ while $P_{\\rm vec}$ is ramped up (note that $P_{\\rm vec}$ is shown multiplied by 20 to appear on same scale). Finally $P_{\\rm sc}$ is ramped up to squeeze the two molecules together. Figure~\\mref{figSupp:transport}{(b)} shows the separation of the trap minima as a function of time. The chosen power ramps keep the trap frequency fixed throughout. The mean number of photons scattered during this transport sequence is \\num{1.6e-2} and the probability of motional excitation is smaller still. 
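\nReturning to the collisional loss estimate, the quoted numbers can be cross-checked with a short script (a sketch only, not part of the original analysis; it assumes the oscillator length $\\sqrt{\\hbar\/(m\\omega_{\\rm r})}$, the convention $\\tilde{\\delta x}=\\sqrt{m\\omega_{\\rm r}\/(2\\hbar)}\\,\\delta x$, and an approximate CaF mass of 59~u):\n\\begin{verbatim}\nimport numpy as np\n\nhbar = 1.054571817e-34        # J s\namu = 1.66053906660e-27       # kg\nm = 59 * amu                  # approximate CaF mass\nomega_r = 2 * np.pi * 200e3   # radial trap frequency, rad\/s\n\nl_osc = np.sqrt(hbar \/ (m * omega_r))  # oscillator length, about 29 nm\n\nfor s in (4.7, 3.6):          # separations in units of l_osc\n    overlap = np.exp(-s**2 \/ 2)        # suppression factor exp(-dx_tilde**2)\n    print(s, s * l_osc * 1e9, overlap)\n\n# ratio of the two loss rates, about 96 -- consistent with the increase\n# from below 1 Hz to about 100 Hz quoted above\nprint(np.exp((4.7**2 - 3.6**2) \/ 2))\n\\end{verbatim}\nThe printed separations are roughly \\SI{138}{\\nano\\meter} and \\SI{105}{\\nano\\meter}, close to the values quoted above.\n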
More sophisticated non-adiabatic sequences will allow for faster transport and fewer scattered photons.\n\n\n\\end{document}","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzmmtg b/data_all_eng_slimpj/shuffled/split2/finalzzmmtg new file mode 100644 index 0000000000000000000000000000000000000000..ee39416db09843c9f65e937096c5c8ca49cf598f --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzmmtg @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\n\\subsection{Polyfold regularization}\n\nConsider a compactified moduli space arising from the study of $J$-holomorphic curves in symplectic geometry.\nA foundational problem is finding some way to give this moduli space enough structure to define invariants.\nPolyfold theory, as developed by Hofer, Wysocki, and Zehnder, has been successful in providing a general abstract framework in which it is possible to ``regularize'' such a moduli space, yielding a perturbed moduli space which has sufficient additional structure.\n\n\\begin{theorem}[Polyfold regularization theorem, {\\cite[Thm.~15.4, Cor.~15.1]{HWZbook}}]\n\tIn some established cases, we can construct a polyfold ${\\mathcal Z}$ such that the compactified moduli space $\\overline{\\mathcal{M}}$ is equal to the zero set of a $\\text{sc}$-smooth Fredholm section $\\overline{\\partial}$ of a strong polyfold bundle ${\\mathcal W} \\to {\\mathcal Z}$, i.e., $\\overline{\\mathcal{M}} = \\smash{\\overline{\\partial}}\\vphantom{\\partial}^{-1} (0) \\subset {\\mathcal Z}$.\n\t\n\tWe may then ``regularize'' the moduli space $\\overline{\\mathcal{M}}$ by means of an ``abstract perturbation.''\n\tThe perturbed moduli space $\\overline{\\mathcal{M}}(p):=(\\overline{\\partial}+p)^{-1}(0)$ then has the structure of a compact oriented ``weighted branched orbifold.''\n\\end{theorem}\n\nIn the boundaryless case, such an approach has been successful in regularizing the Gromov--Witten moduli spaces (see \\cite{HWZGW}).\nA specialized approach has yielded a proof of the Arnold conjecture (see \\cite{filippenko2018arnold}).\nThis approach is also being used in the pursuit of a well-defined symplectic field theory (see \\cite{fish2018sft}).\n\nFor a suitably constructed abstract perturbation, the perturbed moduli space $\\overline{\\mathcal{M}}(p)$ has the structure of a compact oriented weighted branched orbifold, and therefore possesses sufficient structure to define the ``branched integration'' of differential forms.\n\n\\begin{theorem}[Polyfold invariants, {\\cite[Cor.~15.2]{HWZbook}}]\n\tLet ${\\mathcal O}$ be an orbifold and consider a $\\text{sc}$-smooth map $f: {\\mathcal Z} \\to {\\mathcal O}$.\n\tWe may define the \\textbf{polyfold invariant} as the homomorphism obtained by pulling back a de Rahm cohomology class from the orbifold and taking the ``branched integral'' over a perturbed moduli space:\n\t\\[\n\tH^*_{\\dR} ({\\mathcal O}) \\to \\mathbb{R}, \\qquad \t\\omega \t\t\t\\mapsto \\int_{\\overline{\\mathcal{M}}(p)} f^*\\omega.\n\t\\]\n\tThis homomorphism does not depend on the choice of abstract perturbation.\n\\end{theorem}\n\nIn particular, this is precisely the form for the polyfold Gromov--Witten invariants defined in \\cite[Thm.~1.12]{HWZbook}.\n\n\\subsection{Naturality of polyfold invariants}\n\nGiven a compactified moduli space $\\overline{\\mathcal{M}}$, it is possible to model $\\overline{\\mathcal{M}}$ in subtly different ways.\nThat is, it is possible to construct distinct polyfolds ${\\mathcal Z}$ and ${\\mathcal Z}'$ which 
contain $\\overline{\\mathcal{M}}$ as a compact subset, $\\overline{\\mathcal{M}} \\subset {\\mathcal Z}$ and $\\overline{\\mathcal{M}} \\subset {\\mathcal Z}'$.\nAfter regularization of the moduli space $\\overline{\\mathcal{M}}$ we obtain perturbed moduli spaces $\\overline{\\mathcal{M}}(p) \\subset {\\mathcal Z}$ and $\\overline{\\mathcal{M}}(p') \\subset {\\mathcal Z}'$ which have the structure of compact oriented weighted branched suborbifolds.\nWe obtain distinct polyfold invariants by taking the branched integral over these perturbed moduli spaces.\nThus, we find ourselves in the following situation: given a moduli space $\\overline{\\mathcal{M}}$ we can define polyfold invariants associated to the distinct polyfolds ${\\mathcal Z}$ and ${\\mathcal Z}'$ which, a priori, we cannot assume are equivalent.\n\\emph{Therefore the polyfold invariants, which aspire to be agnostic of all possible choices, may depend on the subtle choices made in modeling a given moduli space.}\n\nIn this paper, we provide a general framework for studying and resolving this problem.\nThe first step is to find a third polyfold ${\\mathcal Y}$ which models $\\overline{\\mathcal{M}}$ and which refines the different structures or choices made, in the sense that there are inclusion maps\n\t\\[\n\t{\\mathcal Z}' \\hookleftarrow {\\mathcal Y} \\hookrightarrow {\\mathcal Z}.\n\t\\]\nThe problem then reduces to showing that the polyfold invariants for ${\\mathcal Y}$ and ${\\mathcal Z}$ are equal.\nWe consider a commutative diagram of inclusion maps between polyfolds and between strong polyfold bundles of the form:\n\t\\[\\begin{tikzcd}\n\t{\\mathcal V} \\arrow[r, hook] \\arrow[d, \"\\overline{\\partial}_{\\mathcal Y} \\quad\"'] & {\\mathcal W} \\arrow[d, \"\\quad \\overline{\\partial}_{\\mathcal Z}\"] & \\\\\n\t{\\mathcal Y} \\arrow[r, hook] \\arrow[u, bend left] & {\\mathcal Z} \\arrow[u, bend right] & \n\t\\end{tikzcd}\\]\nin addition to a commutative diagram with target space the orbifold ${\\mathcal O}$:\n\t\\[\\begin{tikzcd}\n\t& & {\\mathcal O} \\\\\n\t{\\mathcal Y} \\arrow[r, hook] \\arrow[rru, \"f_{\\mathcal Y}\"] & {\\mathcal Z} \\arrow[ru, \"f_{\\mathcal Z}\"'] & \n\t\\end{tikzcd}\n\t\\]\nAs outlined at the start of \\S~\\ref{subsec:intermediary-subbundles-naturality}, we assume that these inclusion maps satisfy a number of properties.\nAlthough these hypotheses are somewhat lengthy at a glance, they will be natural from the point of view of our applications, and moreover reflect some commonalities of the practical construction of distinct polyfolds which model the same moduli space.\nIn these applications, we furthermore note that the bundle ${\\mathcal V}$ is not the same as the pullback bundle of ${\\mathcal W}$, hence we cannot use the method of pulling back abstract perturbations of Theorem~\\ref{thm:pullback-regular-perturbation}.\n\nThe most substantial hypothesis is the existence of an ``intermediary subbundle,'' a subset of the target strong polyfold bundle ${\\mathcal R}\\subset {\\mathcal W}$ whose object space is fiberwise a (not necessarily complete) vector space and which satisfies some additional properties (see Definition~\\ref{def:intermediate-subbundle}).\n\n\\begin{theorem}[Naturality of polyfold invariants]\n\t\\label{thm:naturality-polyfold-invariants}\n\tConsider a compactified moduli space $\\overline{\\mathcal{M}}$ which is modeled by two polyfolds ${\\mathcal Y}$ and ${\\mathcal Z}$, i.e., $\\overline{\\mathcal{M}} \\subset {\\mathcal Y}$ and $\\overline{\\mathcal{M}} \\subset
{\\mathcal Z}$.\n\tSuppose there is an inclusion map ${\\mathcal Y} \\hookrightarrow {\\mathcal Z}$.\n\tMoreover, assume we satisfy the hypothesis of the general framework described in \\S~\\ref{subsec:intermediary-subbundles-naturality}.\n\t\n\tSuppose there exists an intermediary subbundle ${\\mathcal R} \\subset {\\mathcal W}$. Then the polyfold invariants for ${\\mathcal Y}$ and ${\\mathcal Z}$ defined via the branched integral are equal.\n\tThis means that, given a de Rahm cohomology class $\\omega\\in H^*_{\\dR} ({\\mathcal O})$ the branched integrals over the perturbed moduli spaces are equal,\n\t\\[\n\t\\int_{\\overline{\\mathcal{M}}(p)} f_{\\mathcal Y}^* \\omega = \\int_{\\overline{\\mathcal{M}}(p')} f_{\\mathcal Z}^* \\omega,\n\t\\]\n\tfor any regular abstract perturbations.\n\\end{theorem}\n\nThe proof ends up being somewhat involved as we encounter some substantial technical difficulties, which we sketch briefly.\nRoughly, the existence of an intermediary subbundle allows the construction of abstract perturbations $p'$ of the strong polyfold bundle ${\\mathcal W} \\to {\\mathcal Z}$ whose restrictions induce a well-defined abstract perturbation $p$ of the strong polyfold bundle ${\\mathcal V} \\to {\\mathcal Y}$.\nThis allows us to consider a well-defined restriction between perturbed moduli spaces, \n\t\\[\\overline{\\mathcal{M}}(p) \\hookrightarrow \\overline{\\mathcal{M}}(p').\\]\nOn the level of topological spaces, this restriction is a continuous bijection.\nWhile we can achieve transversality for both perturbations, the abstract polyfold machinery is only able to ``control the compactness'' of the target perturbed moduli space, hence via usual methods we can only assume that $\\overline{\\mathcal{M}}(p')$ is a compact topological space.\n\nUsing only knowledge of the underlying topologies of both of these spaces, it is impossible to say anything more. 
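\nIndeed, a continuous bijection onto a compact Hausdorff space need not be a homeomorphism when compactness of the source is unknown; the standard example is\n\t\\[\n\t[0,1) \\to S^1, \\qquad t \\mapsto e^{2\\pi i t},\n\t\\]\na continuous bijection whose inverse is not continuous.\n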
\nThe key to resolving this problem is understanding the additional structure that these spaces possess---the branched orbifold structure---and using this structure to prove an invariance of domain result for weighted branched orbifolds (see Lemma~\\ref{lem:invariance-of-domain-branched-orbifolds}).\nThis result will allow us to assert that the above map is a homeomorphism---and therefore, $\\overline{\\mathcal{M}}(p)$ is also compact.\n\nThe second major difficulty comes from the fact that the restricted perturbation $p$ on the source space is not a ``regular'' perturbation (see Definition~\\ref{def:regular-perturbation}).\nThis is problematic due to the fact that the present theory only guarantees the existence of a compact cobordism between abstract perturbations which are both assumed to be ``regular'' (see Theorem~\\ref{thm:cobordism-between-regular-perturbations}).\nIn order to resolve this problem, we must generalize the abstract perturbation theory to allow for perturbation of $\\text{sc}$-smooth proper ``Fredholm multisections'' (see \\S~\\ref{subsec:fredholm-multisections}).\nThis generalization enables us to construct a compact cobordism from the restricted perturbation $p$ to a regular perturbation (see Proposition~\\ref{prop:cobordism-multisection-regular}).\n\n\\begin{comment}\nExplain the problem in simple terms...\nAnd explain the main theorem in simple terms.\nAnd then explain some basic steps and ideas required in the proof of the main theorem.\n\nGiven a compact moduli space $\\overline{\\mathcal{M}}$, it is possible to model the problem with distinct polyfolds which contain $\\overline{\\mathcal{M}}$ as a compact subset.\nAfter regularization, we then have perturbed moduli spaces, which have the structure of compact weighted branched orbifolds.\nThe polyfold invariants are defined by integration over these compact weighted branched orbifolds.\nAlthough they are supposed to model the same moduli space, a priori we cannot assume that these polyfold invariants are equal.\n\nWe provide a general framework for studying this problem.\nFirst step, is to find a polyfold which refines the different structures \/ choices of the two polyfolds xxx, in the sense that there should be inclusion maps\n\nBy symmetry, it is equivalent to show the polyfold invariants for ${\\mathcal Y}$ and ${\\mathcal X}$ are equal.\nAs part of the general framework, we consider commutative diagrams of the form:\n\t\\[\\begin{tikzcd}\n\tDIAGRAM\n\t\\end{tikzcd}\\]\nThere are a long list of expected properties of the smooth inclusion ${\\mathcal Y} \\hookrightarrow {\\mathcal X}$ should naturally satisfy, see the introduction of the section XXX.\n(We should note at the outset that the bundle W is not the same as the pullback bundle)\nThese expected properties don't really help us in solving this problem, but they do give us a foothold in understanding the difficulties.\n\nThe key object of study is an ``intermediary subbundle,'' a subset $R\\subset W$ which is fiberwise a bundle, and satsfies some additional properties.\n\n\\begin{theorem}\n\tSuppose we are in the general framework as outlined in section XXX.\n\tSuppose there exists an intermediary subbundle.\n\tThen the polyfold invariants are equal.\n\\end{theorem}\n\n\nWe give a sketch of some of the ideas involved in the proof of the theorem.\nRoughly, the existence of an intermediary subbundle allows us to construct abstract perturbations which have a well defined restriction.\nThis allows us to get a well-defined restriction between perturbed 
solution spaces:\n\nHowever, while we can control the compactness of the target space, we have no hope of controlling the compactness of the source space.\nAnd from the point of view of point-set topology, we are totally stuck.\n\nETC continue the argument...\n\\end{comment}\n\n\\subsection{Application: Naturality of the polyfold Gromov--Witten invariants}\n\nThe construction of a Gromov--Witten polyfold structure requires choices, such as the choice of a cut-off function in the gluing constructions, choices of good uniformizing families of stable maps, choice of a locally finite refinement of a cover of M-polyfold charts, as well as the exponential gluing profile.\n\nIn addition to these choices, one must also choose a strictly increasing sequence $(\\delta_i)_{i\\geq 0} \\subset (0,2\\pi)$, i.e.,\n\t\\[\n\t0<\\delta_0 < \\delta_1 < \\cdots < 2\\pi.\n\t\\]\nThis sequence is used to define $\\text{sc}$-Banach spaces which are then used to define the M-polyfold models of the Gromov--Witten polyfold ${\\mathcal Z}_{A,g,k}^{3,\\delta_0}$ (see \\cite[\\S~2.4]{HWZGW}).\n\nThe following theorem states that, having fixed the exponential gluing profile and a strictly increasing sequence $(\\delta_i)_{i\\geq 0}\\subset (0,2\\pi)$, different choices lead to Morita equivalent polyfold structures. Hence the Gromov--Witten polyfold invariants are independent of such choices.\n\n\\begin{theorem}[{\\cite[Thm.~3.37]{HWZGW}}]\n\tHaving fixed the exponential gluing profile and a strictly increasing sequence $(\\delta_i)_{i\\geq 0}\\subset (0,2\\pi)$, the underlying topological space ${\\mathcal Z}_{A,g,k}^{3,\\delta_0}$ possesses a natural equivalence class of polyfold structures.\n\\end{theorem}\n\nWe can use Theorem~\\ref{thm:naturality-polyfold-invariants} to show that the polyfold Gromov--Witten invariants are also independent of the choice of increasing sequence, and hence are natural in the sense that they do not depend on any choice made in the construction of the Gromov--Witten polyfolds.\n\n\\begin{corollary}[Naturality of the polyfold Gromov--Witten invariants]\n\t\\label{cor:naturality-polyfold-gw-invariants}\n\tThe polyfold Gromov--Witten invariants do not depend on the choice of an increasing sequence $(\\delta_i)_{i\\geq 0} \\allowbreak \\subset (0,2\\pi)$.\n\\end{corollary}\n\nWe now consider the choice of puncture at the marked points\nThe underlying set of the Gromov--Witten polyfolds consist of stables curves.\nAs constructed in \\cite{HWZbook}, these stable curves are required to satisfy exponential decay estimates on punctured neighborhoods of the nodal pairs.\nIn contrast, for these Gromov--Witten polyfolds no such decay is required at the marked points.\n\nHowever, in some situations we would like to treat the marked points in the same way as the nodal points.\nFor example, this is true in the context of the splitting and genus reduction axioms, where we will wish to identify a pair of marked points with the same image with a nodal pair.\nAllowing a puncture with exponential decay at a specified marked point is a global condition on a Gromov--Witten polyfold, and hence different choices of puncture at the marked points yield distinct Gromov--Witten polyfolds.\n\nWe again use Theorem~\\ref{thm:naturality-polyfold-invariants} to show that the polyfold Gromov--Witten invariants are independent of such choice of puncture at the marked points.\n\n\\begin{corollary}\n\t\\label{cor:punctures-equal}\n\tThe polyfold Gromov--Witten invariants do not depend on the choice of puncture at the marked 
points.\n\\end{corollary}\n\n\\subsection{Pulling back abstract perturbations in polyfold theory}\n\nConsider distinct moduli spaces $\\overline{\\mathcal{M}}$ and $\\overline{\\mathcal{M}}'$ which are modeled by polyfolds ${\\mathcal Y}$ and ${\\mathcal Z}$, respectively.\nConsider a naturally defined $\\text{sc}$-smooth map between polyfolds $f: {\\mathcal Y} \\to {\\mathcal Z}$ which restricts to a map between moduli spaces $f|_{\\overline{\\mathcal{M}}} : \\overline{\\mathcal{M}} \\to \\overline{\\mathcal{M}}'$.\nIn many situations we would like to study the geometry of this map and in order to establish algebraic relationships between the respective polyfold invariants.\n\nHowever, without work, we cannot assume that this map will \\emph{persist} after abstract perturbation.\nAbstract perturbations are constructed using bump functions and choices of vectors in a strong polyfold bundle, which in general we cannot assume will be preserved by the $\\text{sc}$-smooth map $f$.\n\nTo solve this problem, consider a pullback diagram of strong polyfold bundles as follows:\n\t\\[\\begin{tikzcd}\n\tf^* {\\mathcal W} \\arrow[d, \"f^*\\overline{\\partial} \\quad\"'] \\arrow[r, \"\\text{proj}_2\"'] & {\\mathcal W} \\arrow[d, \"\\quad \\overline{\\partial}\"] & \\\\\n\t{\\mathcal Y} \\arrow[r, \"f\"'] \\arrow[u, bend left] & {\\mathcal Z}. \\arrow[u, bend right] & \n\t\\end{tikzcd}\\]\nThe natural approach for obtaining a well-defined map between the perturbed moduli spaces is to take the pullback an abstract perturbation.\nThe main technical point is ensuring that we can control the compactness of the pullback perturbation.\nThis is achieved by a mild topological hypothesis on the map $f$, called the ``topological pullback condition'' (see Definition~\\ref{topological-pullback-condition}).\n\n\\begin{theorem}\n\t\\label{thm:pullback-regular-perturbation}\n\tConsider a $\\text{sc}$-smooth map between polyfolds, $f: {\\mathcal Y} \\to {\\mathcal Z}$, and consider a pullback diagram of strong polyfold bundles as above.\n\tIf $f$ satisfies the topological pullback condition then there exists a regular perturbation $p$ which pulls back to a regular perturbation $f^*p$.\n\t\n\tIt follows that we can consider a well-defined restriction between perturbed moduli spaces,\n\t\t\\[\n\t\tf|_{\\overline{\\mathcal{M}}(f^*p)} : \\overline{\\mathcal{M}}(f^*p) \\to \\overline{\\mathcal{M}} (p).\n\t\t\\]\n\\end{theorem}\n\nThis theorem follows from the more technically stated Theorem~\\ref{thm:compatible-pullbacks}.\n\n\\subsection{Application: Permutation maps between perturbed Gromov--Witten moduli spaces}\n\nLet $(Q,\\omega)$ be a closed symplectic manifold, and fix a homology class $A \\in H_2 (Q;\\mathbb{Z})$ and integers $g,\\ k\\geq 0$ such that $2g+k \\geq 3$.\nFix a permutation $\\sigma: \\{1,\\ldots, k\\} \\to \\{ 1,\\ldots, k \\}$.\nConsider the natural $\\text{sc}$-diffeomorpism between Gromov--Witten polyfold defined by permuting the marked points,\n\t\\[\n\t\\sigma: {\\mathcal Z}_{A,g,k}\\to {\\mathcal Z}_{A,g,k}.\n\t\\]\nFor a fixed compatible almost complex structure $J$, this map has a well-defined restriction to the unperturbed Gromov--Witten moduli spaces\n\t\\[\n\t\\sigma|_{\\overline{\\mathcal{M}}_{A,g,k}(J)} : \\overline{\\mathcal{M}}_{A,g,k}(J) \\to \\overline{\\mathcal{M}}_{A,g,k}(J).\n\t\\]\n\nAs we have mentioned, abstract perturbations are constructed using bump functions and choic\\-es of vectors in a strong polyfold bundle, which in general will not exhibit symmetry with regards to the 
labelings of the marked points.\nAs a result, given a stable curve $x\\in {\\mathcal Z}_{A,g,k}$ which satisfies a perturbed equation $(\\overline{\\partial}_J+p)(x)=0$ we cannot expect that $(\\overline{\\partial}_J+p)(\\sigma(x))=0$, as the perturbations are not symmetric with regards to the permutation $\\sigma$.\nTherefore, naively there does not exist a permutation map between perturbed Gromov--Witten moduli spaces.\n\nHowever, since $\\sigma: {\\mathcal Z}_{A,g,k}\\to {\\mathcal Z}_{A,g,k}$ is a homeomorphism on the level of the underlying topological spaces, it is immediate that it satisfies the topological pullback condition, hence we immediately obtain the following corollary.\n\n\\begin{corollary}\n\t\\label{cor:pullback-via-permutation}\n\tThere exists a regular perturbation which pulls back to a regular perturbation via the permutation map $\\sigma:{\\mathcal Z}_{A,g,k}\\to {\\mathcal Z}_{A,g,k}$.\n\tTherefore, we can consider a well-defined permutation map between the perturbed Gromov--Witten moduli spaces,\n\t\t\\[\n\t\t\\sigma|_{\\overline{\\mathcal{M}}_{A,g,k}(\\sigma^*p)} : \\overline{\\mathcal{M}}_{A,g,k}(\\sigma^*p) \\to \\overline{\\mathcal{M}}_{A,g,k} (p).\n\t\t\\]\n\\end{corollary}\n\n\\subsection{Organization of the paper}\n\nWe give a self contained introduction to the basic abstract perturbation machinery of polyfold theory in \\S~\\ref{sec:abstract-perturbations-polyfold-theory}.\nIn \\S~\\ref{subsec:polyfolds-ep-groupoids} we review scale calculus, the definition of a polyfold as an ep-groupoid, and discuss the induced topology on subgroupoids and on branched suborbifolds.\nIn \\S~\\ref{subsec:abstract-perturbations} we discuss strong polyfold bundles, $\\text{sc}$-smooth Fredholm sections and $\\text{sc}^+$-multisection perturbations. 
In addition, we also discuss transverse perturbations, how to control the compactness of a perturbation, and questions of orientation.\nIn \\S~\\ref{subsec:branched-integral-polyfold-invariants} we consider $\\text{sc}$-smooth differential forms, the definition of the branched integral on a weighted branched suborbifold, and how to define the polyfold invariants.\n\nWe provide a general framework for proving that the polyfold invariants are natural, and do not depend on the construction of a polyfold model for a given moduli space in \\S~\\ref{sec:naturality-polyfold-invariants}.\nIn \\S~\\ref{subsec:invariance-of-domain} we prove an invariance of domain result for branched suborbifolds, Lemma~\\ref{lem:invariance-of-domain-branched-orbifolds}.\nIn \\S~\\ref{subsec:fredholm-multisections} we generalize the polyfold abstract perturbation theory to the case of a $\\text{sc}$-smooth proper Fredholm multisection.\nIn \\S~\\ref{subsec:intermediary-subbundles-naturality} we provide the general framework, introduce the definition of an intermediary subbundle, and prove that the equality of polyfolds invariants in Theorem~\\ref{thm:naturality-polyfold-invariants}.\nIn \\S~\\ref{subsec:independence-sequence} we apply Theorem~\\ref{thm:naturality-polyfold-invariants} to show that the polyfold Gromov--Witten invariants are independent of the choice of increasing sequence.\nIn \\S~\\ref{subsec:independence-punctures} we apply Theorem~\\ref{thm:naturality-polyfold-invariants} to show that the polyfold Gromov--Witten invariants are independent of the choice of puncture at the marked points.\n\nWe discuss how to pull back regular perturbations in \\S~\\ref{sec:pulling-back-abstract-perturbations}.\nIn \\S~\\ref{subsec:pullbacks-strong-polyfold-bundles} we define the pullback of a strong polyfold bundle and of a $\\text{sc}^+$-multisection.\nIn \\S~\\ref{subsec:topological-pullback-condition-controlling-compactness} we introduce the topological pullback condition and show how it allows us to pullback a pair which controls compactness.\nIn \\S~\\ref{subsec:construction-regular-perturbation-which-pullback} we construct regular perturbations which pullback to regular perturbations, proving Theorem~\\ref{thm:compatible-pullbacks}.\nIn \\S~\\ref{subsec:permutation-map} we apply Theorem~\\ref{thm:compatible-pullbacks} to obtain a well-defined permutation map between the perturbed Gromov--Witten moduli spaces.\n\nIn Appendix~\\ref{appx:local-surjectivity} we consider some basic properties of the linearized Cauchy--Riemann operator, which allow us to assert the simple fact that cokernel vectors can be chosen so that they vanish on small neighborhoods of the marked or nodal points.\n\n\n\\section{Abstract perturbations in polyfold theory}\n\t\\label{sec:abstract-perturbations-polyfold-theory}\n\nIn this section we recall and summarize the construction of abstract perturbations in polyfold theory, as developed by Hofer, Wysocki, and Zehnder.\n\n\\subsection{Polyfolds and ep-groupoids}\n\t\\label{subsec:polyfolds-ep-groupoids}\n\nWe use the modern language of \\'etale proper Lie groupoids to define polyfolds. \nThe notion of orbifold was first introduced by Satake \\cite{satake1956generalization}, with further descriptions in terms of groupoids and categories by Haefliger \\cites{haefliger1971homotopy,haefliger1984groupoide,haefliger2001groupoids}, and Moerdijk \\cites{moerdijk2002orbifolds,moerdijk2003introduction}. 
\nWith this perspective, a polyfold may be viewed as a generalization of a (usually infinite-dimensional) orbifold, with additional structure. This generalization of the \\'etale proper Lie groupoid language to the polyfold context is due to Hofer, Wysocki, and Zehnder \\cite{HWZ3}.\nFor full details, we refer the reader to \\cite{HWZbook} for the abstract definitions of ep-groupoids in the polyfold context.\n\n\\subsubsection[sc-Structures, M-polyfolds, and polyfold structures]{$\\text{sc}$-Structures, M-polyfolds, and polyfold structures}\n\n\nWe begin by discussing the basic definitions of ``scale calculus'' in polyfold theory.\nScale calculus is a generalization of classical functional analytic concepts, designed to address the classical failure of reparametrization actions to be differentiable (see \\cite[Ex.~2.1.4]{ffgw2016polyfoldsfirstandsecondlook}).\nThus, scale calculus begins by generalizing notions of Banach spaces and of Fr\\'echet differentiability in order to obtain scale structures where reparametrization will be a smooth action.\n\n\n\\begin{definition}[{\\cite[Def.~1.1]{HWZbook}}]\n\tA \\textbf{$\\text{sc}$-Banach space} consists of a Banach space $E$ together with a decreasing sequence of linear subspaces\n\t\\[\n\tE=E_0\\supset E_1 \\supset \\cdots \\supset E_\\infty := \\cap_{i\\geq 0} E_i\n\t\\]\n\tsuch that the following two conditions are satisfied.\n\t\\begin{enumerate}\n\t\t\\item The inclusion operators $E_{i+1} \\to E_i$ are compact.\n\t\t\\item $E_\\infty$ is dense in every $E_i$.\n\t\\end{enumerate}\n\\end{definition}\n\n\\begin{definition}[{\\cite[Def.~1.9]{HWZbook}}]\n\t\\label{def:ssc-differentiability-ssc-Banach-spaces}\n\tA map $f:U\\rightarrow U'$ between two open subsets of $\\text{sc}$-Banach spaces $E$ and $E'$ is called a \\textbf{$\\text{sc}^0$-map} if $f(U_i)\\subset U'_i$ for all $i\\geq 0$ and if the induced maps $f:U_i\\rightarrow U'_i$ are continuous. \n\tFurthermore, $f$ is called a \\textbf{$\\text{sc}^1$-map}, or of \\textbf{class $\\text{sc}^1$}, if the following conditions are satisfied.\n\t\\begin{itemize}\n\t\t\\item For every $x\\in U_1$ there exists a bounded \n\t\tlinear map $Df(x)\\in \\mathcal{L}(E_0, E'_0)$ satisfying for $h\\in\n\t\tE_1$, with $x+h\\in U_1$,\n\t\t\\[\\frac{1}{\\norm{h}_1}\\norm{f(x+h)-f(x)-Df(x)h}_0\\to 0\\quad\n\t\t\\text{as\\ $\\norm{h}_1\\to 0$.}\\mbox{}\\\\[4pt]\\]\n\t\t\\item The tangent map $Tf:TU\\to TU'$,\n\t\tdefined by\n\t\t\\[Tf(x, h)=(f(x), Df(x)h),\n\t\t\\]\n\t\tis a $\\text{sc}^0$-map between the tangent spaces.\n\t\\end{itemize}\n\\end{definition}\n\nIf $Tf:TU\\to TU'$ is of class $\\text{sc}^1$, then $f:U\\to U'$ is called of class $\\text{sc}^2$; inductively, the map $f:U\\to E'$ is called of class $\\text{sc}^k$ if the $\\text{sc}^0$-map $T^{k-1}f:T^{k-1}U\\to T^{k-1}E'$ is of class $\\text{sc}^1$. A map which is of class $\\text{sc}^k$ for every $k$ is called \\textbf{$\\text{sc}$-smooth} or of \\textbf{class $\\text{sc}^{\\infty}$}. The basic building block which allows us to check the $\\text{sc}$-differentiability of maps is the chain rule.\n\n\\begin{proposition}[Chain rule, {\\cite[Thm.~1.1]{HWZbook}}]\n\tAssume that $E$, $F$, and $G$ are $\\text{sc}$-Banach spaces and $U\\subset E$ and $V\\subset F$ are open sets. Assume that $f:U\\to F$, $g:V\\to G$ are of class $\\text{sc}^1$ and $f(U)\\subset V$.
Then the composition $g\\circ f :U\\to G$ is of class $\\text{sc}^1$ and the tangent maps satisfy\n\t\\[\n\tT(g\\circ f) = Tg \\circ Tf.\n\t\\]\n\\end{proposition}\n\n\n\n\\begin{definition}[{\\cite[Defs.~2.1,~2.2]{HWZbook}}]\n\tConsider a $\\text{sc}$-Banach space $E$ and consider an open subset $U\\subset E$. A $\\text{sc}$-smooth map $r:U\\to U$ is called a \\textbf{$\\text{sc}$-smooth retraction} on $U$ if $r\\circ r = r$.\n\tA \\textbf{local M-polyfold model (without boundary)} is a pair $(O,E)$ consisting of a $\\text{sc}$-Banach space $E$ and a subset $O\\subset E$ such that there exists a $\\text{sc}$-smooth retraction $r:U \\to U$ defined on an open subset $U\\subset E$ such that $r(U)= O$. We call $O$, equipped with the subspace topology $O\\subset E$, a \\textbf{$\\text{sc}$-retract}.\n\\end{definition}\n\nThese definitions of $\\text{sc}$-differentiability extend to local M-polyfolds models in the following way.\n\\begin{definition}[{\\cite[Def.~2.4]{HWZbook}}]\n\tA map $f:O \\to O'$ between two local M-polyfold models is of \\textbf{class $\\text{sc}^k$} if the composition $f\\circ r:U\\to E'$ is of class $\\text{sc}^k$ where $U\\subset E$ is an open subset of the $\\text{sc}$-Banach space $E$ and where $r:U\\to U$ is a $\\text{sc}$-smooth retraction onto $r(U)=O$. \n\\end{definition}\n\n\\begin{comment}\n{only include this if i want to elaborate more about reparametrization.\nIn Gromov--Witten theory the compactification phenomena consist of nodal curves; hence our local M-polyfold models are without boundary and the $i$th-noded curves appear as interior points in the local M-polyfold models of codimension $2i$ (see \\cite[Rem.~5.3.2]{ffgw2016polyfoldsfirstandsecondlook}).\nOther cases such as Hamiltonian Floer theory (as first introduced in \\cite{floer1988unregularized}) and Symplectic Field Theory (as first introduced in \\cite{egh2000introduction}) would require the inclusion of partial quadrants in order to deal with boundaries and corners (see \\cite[Defs.~1.6,~2.2]{HWZbook} for details on local M-polyfold models with partial quadrants).}\n\\end{comment}\n\nIn the absence of isotropy, we may consider the following definition of an ``M-polyfold,'' short for a ``polyfold of manifold type.''\n\n\\begin{definition}[{\\cite[Def.~2.8]{HWZbook}}]\n\tWe say that a paracompact Hausdorff topological space $Z$ is an \\textbf{M-polyfold} if every point $z\\in Z$ has an open neighborhood\n\twhich is homeomorphic to a $\\text{sc}$-retract $O$, and such that the induced transition maps between any two $\\text{sc}$-retracts are $\\text{sc}$-smooth.\n\\end{definition}\n\nHowever, in almost all situations that arise isotropy is inevitable, and must be dealt with.\nIn this sense, polyfold behave like infinite-dimensional orbifolds, and so we introduce the language of ep-groupoids.\n\n\\begin{definition}[{\\cite[Defs.~7.1,~7.3]{HWZbook}}]\n\tA \\textbf{groupoid} $(Z,{\\bar\\m}{Z})$ is a small category consisting of a set of objects $Z$, a set of morphisms ${\\bar\\m}{Z}$ which are all invertible, and the five structure maps $(s,t,m,u,i)$ (the source, target, multiplication, unit, and inverse maps).\n\tAn \\textbf{ep-groupoid} is a groupoid $(Z,{\\bar\\m}{Z})$ such that the object set $Z$ and the morphism set ${\\bar\\m}{Z}$ are both M-polyfolds, and such that all the structure maps are $\\text{sc}$-smooth maps which satisfy the following properties.\n\t\\begin{itemize}\n\t\t\\item \\textbf{(\\'etale).} The source and target maps\n\t\t$s:{\\bar\\m}{Z}\\to Z$ and $t:{\\bar\\m}{Z}\\to Z$ are 
surjective local $\\text{sc}$-diffeomorphisms.\n\t\t\\item \\textbf{(proper).} For every point $z\\in Z$, there exists an\n\t\topen neighborhood $V(z)$ so that the map\n\t\t$t:s^{-1}(\\overline{V(z)})\\rightarrow Z$ is a proper mapping.\n\t\\end{itemize}\n\\end{definition}\n\nFor a fixed object $z\\in Z$ we denote the \\textbf{isotropy group of $z$} by\n\t\\[\n\t{\\bar\\m}{G}(z) := \\{\\phi \\in {\\bar\\m}{Z} \\mid s(\\phi)=t(\\phi)=z\\}.\n\t\\]\nBy \\cite[Prop.~7.4]{HWZbook}, the properness condition ensures that this is a finite group.\nThe \\textbf{orbit space} of the ep-groupoid $(Z,{\\bar\\m}{Z})$,\n\t\\[\n\t\\abs{Z} := Z \/ \\sim,\n\t\\]\nis the quotient of the set of objects $Z$ by the equivalence relation given by $z\\sim z'$ if there exists a morphism $\\phi\\in {\\bar\\m}{Z}$ with $s(\\phi)=z$ and $t(\\phi)=z'$. It is equipped with the quotient topology defined via the map \n\t\\begin{equation}\\label{eq:quotient-map}\n\t\\pi: Z\\to\\abs{Z}, \\qquad z\\mapsto \\abs{z}.\n\t\\end{equation}\n\n\n\n\n\\begin{definition}[{\\cite[Def.~16.1]{HWZbook}}]\n\tLet ${\\mathcal Z}$ be a second countable, paracompact, Hausdorff topological space. A \\textbf{polyfold structure} on ${\\mathcal Z}$ consists of an ep-groupoid $(Z,{\\bar\\m}{Z})$ and a homeomorphism $\\abs{Z}\\simeq {\\mathcal Z}$.\n\\end{definition}\n\nDefining an ep-groupoid involves making a choice of local structures. Taking an equivalence class of ep-groupoids makes the resulting differentiable structure independent of these choices. The appropriate notion of equivalence in this category-theoretic context is a ``Morita equivalence class'' (see \\cite[Def.~3.2]{HWZ3}).\n\n\\begin{definition}[{\\cite[Def.~16.3]{HWZbook}}]\n\tA \\textbf{polyfold} consists of a second countable, paracompact, Hausdorff topological space ${\\mathcal Z}$ together with a Morita equivalence class of polyfold structures $[(Z,{\\bar\\m}{Z})]$ on ${\\mathcal Z}$.\n\\end{definition}\n\nTaking a Morita equivalence class of a given polyfold structure (in the case of polyfolds) is analogous to taking a maximal atlas for a given atlas (in the usual definition of manifolds).\nGiven distinct polyfold structures which define an orbifold or a polyfold, the standard method of proving that they define the same Morita equivalence class is to demonstrate that both polyfold structures possess a common refinement.\n\nThe scales of a $\\text{sc}$-Banach space induce a filtration on the local M-polyfold models, which is moreover preserved by the structure maps $s,t$. Consequently, there is a well-defined filtration on the orbit space, which in turn induces a filtration\n\t\\[\n\t{\\mathcal Z} = {\\mathcal Z}_0 \\supset {\\mathcal Z}_1 \\supset \\cdots \\supset {\\mathcal Z}_\\infty = \\cap_{k\\geq 0} {\\mathcal Z}_k\n\t\\]\non the underlying topological space ${\\mathcal Z}$.\n\n\\begin{notation}\n\tIt is common to denote both the ep-groupoid ``$(Z,{\\bar\\m}{Z})$,'' and its object set ``$Z$,'' by the same letter ``$Z$.''\n\tWe will refer to the underlying set, the underlying topological space, or the polyfold by the letter ``${\\mathcal Z}$.''\n\tWe will always assume that a topological space ${\\mathcal Z}$ with a polyfold structure is necessarily second countable, paracompact, and Hausdorff.\n\tFurthermore, we will write objects as ``$x\\in Z$,'' morphisms as ``$\\phi \\in {\\bar\\m}{Z}$,'' and points as ``$[x]\\in {\\mathcal Z}$'' (due to the identification $\\abs{Z} \\simeq {\\mathcal Z}$).
We will write ``$\\phi: x\\to y$'' for a morphism $\\phi \\in {\\bar\\m}{Z}$ with $s(\\phi)=x$ and $t(\\phi)=y$.\n\\end{notation}\n\nThe local topology of a polyfold is related to the local isotropy groups, as demonstrated by the following proposition.\n\n\\begin{proposition}[Natural representation of ${\\bar\\m}{G}(x)$, {\\cite[Thm.~7.1, Prop.~7.6]{HWZbook}}]\n\t\\label{prop:natural-representation}\n\tLet $(Z,{\\bar\\m}{Z})$ be an ep-groupoid. Let $x\\in Z$ with isotropy group ${\\bar\\m}{G}(x)$. Then for every open neighborhood $V$ of $x$ there exists an open neighborhood $U\\subset V$ of $x$, a group homomorphism $\\Phi : {\\bar\\m}{G}(x)\\rightarrow \\text{Diff}_{\\text{sc}}(U)$, $g\\mapsto \\Phi (g)$, and a $\\text{sc}$-smooth map\n\t$\\Gamma: {\\bar\\m}{G}(x)\\times U\\rightarrow {\\bar\\m}{Z}$ such that the following holds.\n\t\\begin{enumerate}\n\t\t\\item $\\Gamma(g,x)=g$.\n\t\t\\item $s(\\Gamma(g,y))=y$ and $t(\\Gamma(g,y))=\\Phi (g)(y)$ for all $y\\in U$ and $g\\in {\\bar\\m}{G}(x)$.\n\t\t\\item If $h: y\\rightarrow z$ is a morphism between points in $U$, then there exists a unique element $g\\in {\\bar\\m}{G}(x)$ satisfying $\\Gamma(g,y)=h$, i.e., \n\t\t\\[\n\t\t\\Gamma: {\\bar\\m}{G}(x)\\times U\\rightarrow \\{\\phi\\in {\\bar\\m}{Z} \\mid \\text{$s(\\phi)$ and $t(\\phi)\\in U$}\\}\n\t\t\\]\n\t\tis a bijection.\n\t\\end{enumerate}\n\tThe data $(\\Phi,\\Gamma)$ is called the \\textbf{natural representation} of ${\\bar\\m}{G}(x)$.\n\tMoreover, consider the following topological spaces:\n\t\\begin{itemize}\n\t\t\\item ${\\bar\\m}{G}(x) \\backslash U$, equipped with the quotient topology defined by the projection $U \\to {\\bar\\m}{G}(x) \\backslash U$,\n\t\t\\item $U \/ \\sim$, where $x \\sim x'$ for $x, x' \\in U$ if there exists a morphism $\\phi \\in {\\bar\\m}{Z}$ with $s(\\phi)=x$ and $t(\\phi)=x'$, equipped with the quotient topology defined by the projection $U \\to U \/ \\sim$,\n\t\t\\item $\\abs{U}$, the image of $U$ under the map $Z\\to \\abs{Z}$, equipped with the subspace topology defined by the inclusion $\\abs{U}\\subset \\abs{Z}$.\n\t\\end{itemize}\n\tThen these spaces are all naturally homeomorphic.\n\\end{proposition}\n\n\\subsubsection{Maps between polyfolds}\n\nUsing category-theoretic language, we discuss the definition of a map between polyfolds.\n\n\\begin{definition}\n\tA \\textbf{$\\text{sc}^k$ functor} between two polyfold structures\n\t\\[\n\t\\hat{f}:(Z_1,{\\bar\\m}{Z}_1) \\to (Z_2,{\\bar\\m}{Z}_2)\n\t\\]\n\tis a functor on groupoidal categories which moreover is a $\\text{sc}^k$ map when considered on the object and morphism sets.\n\\end{definition}\n\nA $\\text{sc}^k$ functor between two polyfold structures $(Z_1,{\\bar\\m}{Z}_1)$, $(Z_2,{\\bar\\m}{Z}_2)$ with underlying topological spaces ${\\mathcal Z}_1$, ${\\mathcal Z}_2$ induces a continuous map on the orbit spaces $\\abs{\\hat{f}}:\\abs{Z_1} \\to \\abs{Z_2}$, and hence also induces a continuous map $f : {\\mathcal Z}_1 \\to {\\mathcal Z}_2$, as illustrated in the following commutative diagram.\n\t\\[\n\t\\begin{tikzcd}[row sep=small]\n\t\\abs{Z_1} \\arrow[d,phantom,\"\\rotatebox{90}{\\(\\simeq\\)}\"] \\arrow[r,\"\\abs{\\hat{f}}\"] & \\abs{Z_2} \\arrow[d,phantom,\"\\rotatebox{90}{\\(\\simeq\\)}\"] \\\\\n\t{\\mathcal Z}_1 \\arrow[r,\"f\"] & {\\mathcal Z}_2\n\t\\end{tikzcd}\n\t\\]\n\n\\begin{definition}\n\tConsider two topological spaces ${\\mathcal Z}_1$, ${\\mathcal Z}_2$ with polyfold structures $(Z_1,{\\bar\\m}{Z}_1)$, $(Z_2,{\\bar\\m}{Z}_2)$.
We define a \\textbf{$\\text{sc}^k$ map between polyfolds} as a continuous map \n\t\\[\n\tf: {\\mathcal Z}_1 \\to {\\mathcal Z}_2\n\t\\]\n\tbetween the underlying topological spaces of the polyfolds, for which there exists an associated $\\text{sc}^k$ functor\n\t\\[\n\t\\hat{f}: (Z_1,{\\bar\\m}{Z}_1) \\to (Z_2,{\\bar\\m}{Z}_2).\n\t\\]\n\tsuch that $\\abs{\\hat{f}}$ induces $f$.\n\\end{definition}\n\n\\begin{remark}\n\tFrom an abstract point of view a stronger notion of map is needed. This leads to the definition of \\textit{generalized maps} between orbifold structures, following a category-theoretic localization procedure \\cite[\\S~2.3]{HWZ3}. Following this, a precise notion of map between two polyfolds is defined using an appropriate equivalence class of a given generalized map between two given polyfold structures \\cite[Def.~16.5]{HWZbook}.\t\n\tWith this in mind, taking an appropriate equivalence class of a given $\\text{sc}^k$-functor between two given polyfold structures is sufficient for giving a well-defined map between two polyfolds.\n\\end{remark}\n\n\\subsubsection{Subgroupoids}\n\nWe state some essential facts about the topology of subgroupoids.\n\n\\begin{definition}\t\n\tLet $(Z,{\\bar\\m}{Z})$ be an ep-groupoid.\n\tWe say that a subset of the object set, $S\\subset Z$, is \\textbf{saturated} if $S = \\pi^{-1} (\\pi(S))$, where $\\pi$ is the quotient map \\eqref{eq:quotient-map}.\n\tWe define a \\textbf{subgroupoid} as the full subcategory $(S,{\\bar\\m}{S})$ associated to a saturated subset of the object set.\n\n\n\\end{definition}\n\nA subgroupoid $(S,{\\bar\\m}{S})$ comes equipped with the subspace topology induced from the ep-groupoid $(Z,{\\bar\\m}{Z})$, in addition to the induced grading. It does not come with a $\\text{sc}$-smooth structure in general, so the \\'etale condition no longer makes sense. However, one may observe it inherits the following directly analogous properties.\n\\begin{itemize}\n\t\\item The source and target maps are surjective local homeomorphisms which moreover respect the induced grading. We say that the source and target maps are \\textbf{$\\text{sc}^0$-homeomorphisms} and the subgroupoid $(S,{\\bar\\m}{S})$ is automatically \\textbf{$\\text{sc}^0$-\\'etale}.\n\t\\item For every point $x\\in S$, there exists an open neighborhood $V(x)$ so that the map $t: s^{-1}(\\overline{V(x)}) \\to S$ is a proper mapping. (This can be shown from the definitions, using in addition that if $f:X\\to Y$ is proper, then for any subset $V\\subset Y$ the restriction $f|_{f^{-1}(V)}: f^{-1}(V)\\to V$ is proper.)\n\\end{itemize}\nThus, a subgroupoid is automatically $\\text{sc}^0$-\\'etale in the above sense, as well as proper.\n\n\\begin{remark}\n\t\\label{rmk:local-topology-subgroupoid}\n\tLet $U$ be an open subset of $S$.\n\tWe may consider two topologies on $U$:\n\t\\begin{itemize}\n\t\t\\item $(U, \\tau_S)$, where $\\tau_S$ is the subspace topology induced from the inclusion $\\cup_{i\\in I} M_i \\allowbreak \\hookrightarrow S$,\n\t\t\\item $(U, \\tau_Z)$, where $\\tau_Z$ is the subspace topology induced from the inclusion $\\cup_{i\\in I} M_i \\allowbreak \\hookrightarrow Z$.\n\t\\end{itemize}\n\tThen these two topologies are identical. Moreover, $U\\hookrightarrow S$ is a local homeomorphism.\n\\end{remark}\n\n\\begin{comment}\n\tLet $Y$ be a topological space. Let $X$ be a subset of $Y$, and equip $X$ with the subspace topology. 
Consider an open subset $A\\subset X$, hence $A= X\\cap U$ for some open subset $U$ of $Y$.\n\tWe can equip $A$ with the subspace topology induced from the inclusion $A\\hookrightarrow X$, or the subspace topology induced from the inclusion $A\\hookrightarrow Y$.\n\tThen these two topologies are identical. Moreover, $A\\hookrightarrow X$ is a local homeomorphism.\n\\end{comment}\n\n\\begin{proposition}\n\t\\label{prop:topology-subgroupoid}\n\tConsider the orbit space of a subgroupoid, $\\abs{S}$. There are two topologies on this space we may consider:\n\t\\begin{itemize}\n\t\t\\item the subspace topology $\\tau_s$, induced from the inclusion $\\abs{S}\\subset \\abs{Z}$,\n\t\t\\item the quotient topology $\\tau_q$, induced from the projection $S\\to \\abs{S}$.\n\t\\end{itemize}\n\tThese two topologies are identical.\n\\end{proposition}\n\\begin{proof}\nWe show that $\\tau_s = \\tau_q$.\n\t\\begin{itemize}\n\t\t\\item $\\tau_s \\subset \\tau_q$\n\t\\end{itemize}\n\tSuppose $U\\subset \\abs{S}$ and $U\\in \\tau_s$. Then $U = V\\cap \\abs{S}$ for $V\\subset \\abs{Z}$ open. By definition, $\\pi^{-1} (V) \\subset Z$ is open. Moreover, $\\pi^{-1} (U) = \\pi^{-1} (V) \\cap \\pi^{-1}(S) = \\pi^{-1}(V)\\cap S$. Hence $\\pi^{-1}(U)$ is open in $S$. It follows from the definition of the quotient topology that $U\\in \\tau_q$.\n\t\\begin{itemize}\n\t\t\\item $\\tau_q \\subset \\tau_s$\n\t\\end{itemize}\n\tSuppose $U\\subset \\abs{S}$ and $U\\in \\tau_q$. We will show for every $[x]\\in U$ there exists a subset $B \\subset \\abs{S}$ such that $B \\in \\tau_s$ and $[x] \\in B \\subset U$. It will then follow that $U\\in\\tau_s$, as desired.\n\t\n\tLet $x\\in \\pi^{-1}(U)$ be a representative of $[x]$. There exists an open neighborhood $V(x) \\subset Z$ equipped with the natural action by ${\\bar\\m}{G}(x)$ and such that $V(x) \\cap S \\subset \\pi^{-1}(U)$.\n\tObserve that $\\abs{V(x)\\cap S} = \\abs{V(x)} \\cap \\abs{S}$; this follows since $S$ is saturated.\t\n\t\n\tLet $B:= \\abs{V(x)} \\cap \\abs{S} \\subset U$. Then observe that $\\abs{V(x)} \\subset \\abs{Z}$ is open,\n\tsince the quotient map $\\pi : Z \\to \\abs{Z}$ is an open map (see \\cite[Prop.~7.1]{HWZbook}).\n\tHence $B:=\\abs{V(x)} \\cap \\abs{S}\\subset \\abs{S}$ is open in the subspace topology. It follows that $B \\in \\tau_s$ and $[x] \\in B \\subset U$, as desired.\n\\end{proof}\n\nThe following proposition is an analog of Proposition~\\ref{prop:natural-representation} for subgroupoids.\n\n\\begin{proposition}[Induced representation of ${\\bar\\m}{G}(x)$ for a subgroupoid]\n\t\\label{prop:natural-representation-subgroupoid}\n\tLet $(S,{\\bar\\m}{S})$ be a subgroupoid of an ep-groupoid $(Z,{\\bar\\m}{Z})$. Let $x\\in S$ with isotropy group ${\\bar\\m}{G}(x)$. 
Then for every open neighborhood $V$ of $x$ there exists an open neighborhood $U\\subset V$ of $x$, a group homomorphism $\\Phi : {\\bar\\m}{G}(x)\\rightarrow \\text{Homeo}_{\\text{sc}^0}(U)$, $g\\mapsto \\Phi (g)$, and a $\\text{sc}^0$-map\n\t$\\Gamma: {\\bar\\m}{G}(x)\\times U\\rightarrow {\\bar\\m}{S}$ such that the following holds.\n\t\\begin{enumerate}\n\t\t\\item $\\Gamma(g,x)=g$,\n\t\t\\item $s(\\Gamma(g,y))=y$ and $t(\\Gamma(g,y))=\\Phi (g)(y)$ for all $y\\in U$ and $g\\in {\\bar\\m}{G}(x)$,\n\t\t\\item if $h: y\\rightarrow z$ is a morphism between points in $U$, then there exists a unique element $g\\in {\\bar\\m}{G}(x)$ satisfying $\\Gamma(g,y)=h$, i.e., \n\t\t\\[\n\t\t\\Gamma: {\\bar\\m}{G}(x)\\times U\\rightarrow \\{\\phi\\in {\\bar\\m}{Z} \\mid \\text{$s(\\phi)$ and $t(\\phi)\\in U$}\\}\n\t\t\\]\n\t\tis a bijection.\n\t\\end{enumerate}\n\tMoreover, consider the following topological spaces:\n\t\\begin{itemize}\n\t\t\\item ${\\bar\\m}{G}(x) \\backslash U$, equipped with quotient topology via the projection $U \\to {\\bar\\m}{G}(x) \\backslash U$,\n\t\t\\item $U \/ \\sim$, where $x \\sim x'$ for $x, x' \\in U$ if there exists a morphism $\\phi : x \\to x'$, equipped with the quotient topology via the projection $U \\to U \/ \\sim$,\n\t\t\\item $\\abs{U}$, the image of $U$ under the map $S\\to \\abs{S}$, equipped with the subspace topology,\n\t\t\\item $\\abs{U}$, the image of $U$ under the map $Z \\to \\abs{Z}$, equipped with the subspace topology.\n\t\\end{itemize}\n\tThen these spaces are all naturally homeomorphic.\n\\end{proposition}\n\n\\subsubsection{Weighted branched suborbifolds}\n\nView $\\mathbb{Q}^+:= \\mathbb{Q} \\cap [0,\\infty)$ as an ep-groupoid, having only the identities as morphisms. \nConsider a polyfold, consisting of a polyfold structure $(Z,{\\bar\\m}{Z})$ and an underlying topological space ${\\mathcal Z}$.\nConsider a functor $\\hat{\\theta}: (Z,{\\bar\\m}{Z}) \\to \\mathbb{Q}^+$ which induces the function $\\theta:=\\abs{\\hat{\\theta}} :{\\mathcal Z} \\to \\mathbb{Q}^+$.\nObserve that $\\hat{\\theta}$ defines a subgroupoid $(S,{\\bar\\m}{S})\\subset (Z,{\\bar\\m}{Z})$ with object set\n\t\\[\n\tS:= \\operatorname{supp} (\\hat{\\theta}) = \\{x\\in Z\\mid \\hat{\\theta}(x)>0 \\}\n\t\\]\nand with underlying topological space\n\t\\[\n\t{\\mathcal S} := \\operatorname{supp} (\\theta) = \\{[x]\\in {\\mathcal Z} \\mid \\theta([x])>0\\}.\n\t\\]\nMoreover, $(S,{\\bar\\m}{S})$ is a full subcategory of $(Z,{\\bar\\m}{Z})$ whose object set is saturated, i.e., $S= \\pi^{-1} (\\pi(S))$ where $\\pi : Z \\to \\abs{Z}, x\\mapsto [x]$.\n\n\\begin{definition}[{\\cite[Def.~9.1]{HWZbook}}]\n\t\\label{def:weighed-branched-suborbifold}\n\tA \\textbf{weighted branched suborbifold structure} consists of a subgroupoid $(S,{\\bar\\m}{S}) \\subset (Z,{\\bar\\m}{Z})$ defined by a functor $\\hat{\\theta} : (Z,{\\bar\\m}{Z}) \\to \\mathbb{Q}^+$ as above which satisfies the following properties.\n\t\\begin{enumerate}\n\t\t\\item ${\\mathcal S} \\subset {\\mathcal Z}_\\infty$.\n\t\t\\item Given an object $x\\in S$, there exists an open neighborhood $U\\subset Z$ of $x$ and a finite collection $M_i$, $i\\in I$ of finite-dimensional submanifolds of $Z$ (in the sense of \\cite[Def.~4.19]{HWZ2}) such that\n\t\t\\[\n\t\tS \\cap U= \\bigcup_{i \\in I}M_i.\n\t\t\\]\n\t\tWe require that the inclusion maps $\\phi_i: M_i\\hookrightarrow U$ are proper and are {topological embeddings,} and in addition we require that the submanifolds $M_i$ all have the same dimension. 
\n\t\tThe submanifolds $M_i$ are called \\textbf{local branches} in $U$. \\label{def:local-branches}\n\t\t\\item There exist positive rational numbers $w_i$, $i\\in I$ (called \\textbf{weights}) such that if $y\\in S \\cap U$, then\n\t\t\\[\\hat{\\theta}(y)=\\sum_{\\{i \\in I \\mid y\\in M_i\\}} w_i.\\]\n\t\\end{enumerate}\n\n\tWe call ${(M_i)}_{i\\in I}$ and ${(w_i)}_{i\\in I}$ a \\textbf{local branching structure}.\n\\end{definition}\n\nBy shrinking the open set $U$ we may assume that the local branches $M_i$ (equipped with the subspace topology induced from $U$) are homeomorphic to open subsets of $\\mathbb{R}^n$. Hence we may assume that a local branch is given by a subset $M_i\\subset\\mathbb{R}^n$ and an inclusion map $\\phi_i : M_i\\hookrightarrow U$ where $\\phi_i$ is proper and a homeomorphism onto its image.\n\n\n\\begin{definition}\n\t\\label{def:local-orientation}\n\tLet $(S,{\\bar\\m}{S})$ be a weighted branched suborbifold structure. Consider an object $x\\in S$ and a local branching structure $(M_i)_{i\\in I}$, $(w_i)_{i\\in I}$ at $x$.\n\tSuppose moreover that each local branch has an \\textit{orientation}, denoted as $(M_i,o_i)$.\n\t\n\tWe define a \\textbf{local orientation} at $x$ with respect to the local branching structure $(M_i)_{i\\in I}$, $(w_i)_{i\\in I}$ as the following finite formal sum of weighted oriented tangent planes:\n\t\\[\n\t\\sum_{\\{i\\in I \\mid x\\in M_i\\}} w_i \\cdot T_x (M_i,o_i).\n\t\\]\n\\end{definition}\n\n\\begin{definition}\n\t\\label{def:orientation}\n\tLet $(S,{\\bar\\m}{S})$ be a weighted branched suborbifold structure.\n\tWe define an \\textbf{orientation} on $(S,{\\bar\\m}{S})$ as a local orientation at every object $x\\in S$ and local branching structure $(M_i)_{i\\in I}$, $(w_i)_{i\\in I}$ at $x$ such that the following hold.\n\t\\begin{enumerate}\n\t\t\\item We require that the local orientation is well-defined and does not depend on the choice of local branching structure. Given an object $x\\in S$, suppose we have:\n\t\t\\begin{itemize}\n\t\t\t\\item a local orientation at $x$ with respect to a local branching structure $(M_i)_{i\\in I}$, $(w_i)_{i\\in I}$,\n\t\t\t\\item a local orientation at $x$ with respect to a local branching structure $(M'_j)_{j\\in I'}$, $(w'_j)_{j\\in I'}$.\n\t\t\\end{itemize}\n\t\tWe require the finite formal sums of weighted oriented tangent planes to be identical, i.e.,\n\t\t\\[\n\t\t\\sum_{\\{i\\in I \\mid x\\in M_i\\}} w_i \\cdot T_x (M_i,o_i)= \\sum_{\\{j\\in I' \\mid x\\in M'_j\\}} w'_j \\cdot T_x (M'_j,o'_j).\n\t\t\\]\n\t\t\\item We require morphism invariance of the local orientations. Given a morphism $\\phi : x \\to y$, there exists a well-defined tangent map $T\\phi : T_xZ \\to T_yZ$. 
\n\t\tSuppose we have:\n\t\t\\begin{itemize}\n\t\t\t\\item a local orientation at $x$ with respect to a local branching structure $(M_i)_{i\\in I}$, $(w_i)_{i\\in I}$,\n\t\t\t\\item a local orientation at $y$ with respect to a local branching structure $(M'_j)_{j\\in I'}$, $(w'_j)_{j\\in I'}$.\n\t\t\\end{itemize}\t\t\n\t\tThe image of a finite formal sum of weighted oriented tangent planes under this map is again a finite formal sum of weighted oriented tangent planes.\n\t\tWe require invariance of the local orientations under this map, i.e.,\n\t\t\\[\n\t\t\\sum_{\\{j\\in I' \\mid y\\in M'_j\\}} w'_j \\cdot T_y (M'_j,o'_j) = \\sum_{\\{i\\in I \\mid x\\in M_i\\}} w_i \\cdot T\\phi_* (T_x (M_i,o_i)).\n\t\t\\]\n\t\\end{enumerate}\n\\end{definition}\n\nA \\textbf{weighted branched suborbifold structure with boundary} consists of a subgroupoid $(S,{\\bar\\m}{S})\\subset (Z,{\\bar\\m}{Z})$ defined identically to Definition~\\ref{def:weighed-branched-suborbifold} except we allow the possibility that the local branches are manifolds with boundary.\nA \\textbf{local orientation} at an object $x\\in S$ is again defined as in Definition~\\ref{def:local-orientation} as a finite formal sum determined by orientations of the local branches, and again an \\textbf{orientation} is also defined similarly to Definition~\\ref{def:orientation}.\n\n\\subsection{Abstract perturbations in polyfold theory}\n\t\\label{subsec:abstract-perturbations}\n\nAbstract perturbations in polyfold theory are a mixture of two different technologies:\n\\begin{enumerate}\n\t\\item scale calculus generalizations of classical Fredholm theory, involving the development of analogs of Fredholm maps, compact perturbations, and the implicit function theorem for surjective Fredholm operators (originally developed in \\cite{HWZ2});\n\t\\item equivariant transversality through the use of ``multisections;'' due to the presence of nontrivial isotropy, it is generally impossible to obtain transversality through the use of single valued sections, and thus it is necessary to work with multisections (developed in \\cite{cieliebak2003equivariant} and generalized to polyfold theory in \\cite{HWZ3}).\n\\end{enumerate}\n\n\\subsubsection[Strong polyfold bundles and sc+-multisections]{Strong polyfold bundles and $\\text{sc}^+$-multisections}\n\t\\label{subsubsec:polyfold-abstract-perturbations}\n\nIn order to develop a Fredholm theory for polyfolds, it is necessary to formulate the notion of a ``strong polyfold bundle'' over a polyfold.\nLet $P:W\\to Z$ be a strong M-polyfold bundle (see \\cite[Def.~2.26]{HWZbook}).\nRecall that a fiber $P^{-1} (y) = W_y$ over a point $y \\in Z$ carries the structure of a $\\text{sc}$-Banach space. Furthermore, $W$ is equipped with a double filtration $W_{m,k}$ for $0\\leq m$ and $0\\leq k\\leq m+1$, and the filtered spaces\n\t\\begin{gather*}\n\tW[0]:= W_{0,0} \\supset W_{1,1}\\supset \\cdots \\supset W_{i,i} \\supset \\cdots,\t\\\\\n\tW[1]:= W_{0,1} \\supset W_{1,2}\\supset \\cdots \\supset W_{i,i+1}\\supset \\cdots \n\t\\end{gather*}\nare both M-polyfolds in their own right.\nWith respect to these filtrations, the maps $P[0]: W[0]\\to Z$ and $P[1]:W[1]\\to Z$ are both $\\text{sc}$-smooth.\n\n\\begin{proposition}[{\\cite[Prop.~2.16]{HWZbook}}]\n\t\\label{prop:pullback-bundle}\n\tLet $P: W \\to Z$ be a strong M-polyfold bundle, and let $f: Y \\to Z$ be a $\\text{sc}$-smooth map between M-polyfolds. 
The pullback $f^* W := \\{(y,w_x) \\in Y \\times W \\mid f(y)=x=P(w_x) \\}$ carries a natural structure of a strong M-polyfold bundle over the M-polyfold $Y$.\n\\end{proposition}\n\nLet $(Z,{\\bar\\m}{Z})$ be a polyfold structure, and consider a strong M-polyfold bundle over the object space, $P:W\\to Z.$\nThe source map $s:{\\bar\\m}{Z}\\to Z$ is a local $\\text{sc}$-diffeomorphism, and hence we may consider the fiber product\n\t\\[\n\t{\\bar\\m}{Z} _s\\times_P W = \\{(\\phi,w)\\in {\\bar\\m}{Z}\\times W\t\\mid\ts(\\phi)=P(w)\t\\}.\n\t\\]\nVia the above proposition, we can also view as ${\\bar\\m}{Z} _s\\times_P W$ as the pullback bundle via $s$ over the morphism space ${\\bar\\m}{Z}$,\n\t\\[\\begin{tikzcd}\n\t{\\bar\\m}{Z} _s\\times_P W \\arrow[r] \\arrow[d] & W \\arrow[d] \\\\\n\t{\\bar\\m}{Z} \\arrow[r, \"s\"] & Z.\n\t\\end{tikzcd}\\]\n\n\\begin{definition}[{\\cite[Def.~8.4]{HWZbook}}]\n\t\\label{def:strong-polyfold-bundle}\n\tA \\textbf{strong polyfold bundle structure} $(W,{\\bar\\m}{W})$ over a polyfold structure $(Z,{\\bar\\m}{Z})$ consists of a strong M-polyfold bundle over the object M-polyfold $P:W\\to Z$ together with a strong bundle map\n\t\\[\n\t\\mu : {\\bar\\m}{Z} _s\\times_P W \\to W\n\t\\]\n\twhich covers the target map $t:{\\bar\\m}{Z} \\to Z$, such that the diagram\n\t\\begin{center}\n\t\t\\begin{tikzcd}\n\t\t{\\bar\\m}{Z} _s\\times_P W \\arrow[r, \"\\mu\"] \\arrow[d] & W \\arrow[d] \\\\\n\t\t{\\bar\\m}{Z} \\arrow[r, \"t\"] & Z\n\t\t\\end{tikzcd}\n\t\\end{center}\n\tcommutes.\n\tFurthermore we require the following:\n\t\\begin{enumerate}\n\t\t\\item $\\mu$ is a surjective local diffeomorphism and linear on fibers,\n\t\t\\item $\\mu(\\operatorname{id}_x,w)= w$ for all $x\\in Z$ and $w\\in W_x$,\n\t\t\\item $\\mu(\\phi \\circ \\gamma ,w)= \\mu (\\phi,\\mu(\\gamma,w))$ for all $\\phi,\\gamma\\in{\\bar\\m}{Z}$ and $w\\in W$ which satisfy\n\t\t\\[\n\t\ts(\\gamma) = P(w),\\qquad t(\\gamma) = s(\\phi) = P(\\mu(\\gamma,w)).\n\t\t\\]\n\t\\end{enumerate}\n\\end{definition}\n\nA strong polyfold bundle structure $(W,{\\bar\\m}{W})$ has polyfold structures in its own right: we may take $W$ as the object set with the grading $W_{i,i}$ or $W_{i,i+1}$, and define the morphism set by ${\\bar\\m}{W}:= {\\bar\\m}{Z} _s\\times_P W$. Moreover, we have source and target maps $s,t: {\\bar\\m}{W} \\to W$ defined as follows:\n\t\\[\n\ts(\\phi,w) := w, \\qquad t(\\phi,w) := \\mu(\\phi,w).\n\t\\]\nWe have a natural smooth projection functor $\\hat{P}: (W,{\\bar\\m}{W}) \\to (Z,{\\bar\\m}{Z})$.\n\n\\begin{definition}\n\tA \\textbf{strong polyfold bundle} consists of a topological space ${\\mathcal W}$ together with a Morita equivalence class of strong polyfold bundle structures $(W,{\\bar\\m}{W})$.\n\\end{definition}\n\nThe double filtration of the fibers is preserved by the structure maps, and hence the orbit space $\\abs{W}$ is equipped with a double filtration\n\t\\[\n\t\\abs{W}_{m,k}, \\quad \\text{for } 0\\leq m \\ \\text{and}\\ 0\\leq k\\leq m+1.\n\t\\]\nWe moreover obtain polyfolds ${\\mathcal W}[0]$ and ${\\mathcal W}[1]$ with the filtrations ${\\mathcal W}[0]_i := {\\mathcal W}_{i,i}$ and ${\\mathcal W}[1]_i := {\\mathcal W}_{i,i+1}$. 
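\n\\begin{remark}\n\tInformally (this gloss is ours and is meant only for orientation, not as a definition from \\cite{HWZbook}): a point of ${\\mathcal W}[0]_i$ has fiber regularity equal to its base regularity $i$, while a point of ${\\mathcal W}[1]_i$ carries one additional degree of fiber regularity. In particular, a $\\text{sc}$-smooth section $s$ of $W[1]$ preserves levels with respect to this filtration, i.e.,\n\t\\[\n\ts(Z_m) \\subset W_{m,m+1} \\quad \\text{for all } m\\geq 0,\n\t\\]\n\tso such a section gains one degree of fiber regularity; this gain is the mechanism behind the analogy with compact perturbations recalled below.\n\\end{remark}\n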
Unless specified, ``${\\mathcal W}$'' refers to the first filtration, i.e., ${\\mathcal W}[0]$, and ``$P$'' refers to the projection map with respect to this filtration.\n\n\\begin{definition}[{\\cite[Def.~12.1]{HWZbook}}]\n\tWe define a \\textbf{$\\text{sc}$-smooth Fredholm section} of the strong polyfold bundle $P:{\\mathcal W}\\to{\\mathcal Z}$ as a $\\text{sc}$-smooth map between polyfolds $\\overline{\\partial}: {\\mathcal Z}\\to {\\mathcal W}$ which satisfies $P\\circ \\overline{\\partial} = \\operatorname{id}_{\\mathcal Z}$ (where $\\operatorname{id}_{\\mathcal Z}$ is the identity map on ${\\mathcal Z}$).\n\n\tWe require that $\\overline{\\partial}$ is \\textbf{regularizing}, meaning that if $[x] \\in {\\mathcal Z}_m$ and $\\overline{\\partial} ([x]) \\in {\\mathcal W}_{m,m+1}$, then $[x]\\in {\\mathcal Z}_{m+1}$.\n\tFinally, we require that at every smooth object $x\\in Z$ the germ $(\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$},x)$ is a ``Fredholm germ'' (see \\cite[Def.~3.7]{HWZbook}).\n\\end{definition}\n\n\\begin{definition}\n\t\\label{def:unperturbed-solution-space}\n\tWe define the \\textbf{unperturbed solution space} of $\\overline{\\partial}$ as the set \n\t\t\\[\n\t\t{\\mathcal S}(\\overline{\\partial}) :=\\{ [x]\\in {\\mathcal Z}\\mid \\overline{\\partial}([x]) = 0\\} \\subset {\\mathcal Z},\n\t\t\\]\n\twith topology given by the subspace topology induced from ${\\mathcal Z}$.\n\tThe space ${\\mathcal S}(\\overline{\\partial})$ has an associated subgroupoid structure $(S(\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$}), {\\bar\\m}{S}(\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$}))$ defined as follows:\n\\begin{itemize}\n\t\\item (saturated) object set: $S(\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$}) := \\{\tx\\in Z \\mid \\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$}(x)=0\t\\} \\subset Z$,\n\t\\item morphism set: ${\\bar\\m}{S} (\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$}) := \\{ \\phi \\in {\\bar\\m}{Z}\t\\mid s(\\phi) \\in S(\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$}) \\ (\\text{equivalently } t(\\phi)\\in S(\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$}))\t\\} \\subset\t{\\bar\\m}{Z}$.\n\\end{itemize}\nBoth the object and morphism sets carry the subspace topology induced from the topologies on the object space $Z$ and morphism space ${\\bar\\m}{Z}$.\n\\end{definition}\n\nWe say that the Fredholm section $\\overline{\\partial}$ is \\textbf{proper} if the unperturbed solution space ${\\mathcal S}(\\overline{\\partial})$ is a compact topological space.\n\n\\begin{definition}[{\\cite[Def.~2.24]{HWZbook}}]\n\tA \\textbf{$\\text{sc}^+$-section} is a $\\text{sc}$-smooth map $s: Z \\to W[1]$ which satisfies $P\\circ s = \\operatorname{id}_Z$.\n\\end{definition}\n\nThe significance of this definition is captured in the fact that if $(\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$},x)$ is a Fredholm germ and $s$ is a germ of a $\\text{sc}^+$-section around $x$, then $(\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$}+s,x)$ remains a Fredholm germ.\nThis follows tautologically from the definition of a Fredholm germ (see the comment following \\cite[Def.~2.44]{HWZGW}).\nWe may view the relationship between Fredholm sections and $\\text{sc}^+$-sections in the current theory as the analog of the relationship between Fredholm and compact operators in classical functional analysis.\n\nOne can view a ``multisection'' as the rationally weighted characteristic function of an equivariant collection of locally defined single valued sections.\nThis is made precise in the 
following definition.\n\n\\begin{definition}\n\t\\label{def:sc-multisection}\n\tWe view $\\mathbb{Q}^+:= \\mathbb{Q} \\cap [0,\\infty)$ as an ep-groupoid, having only the identities as morphisms.\n\tA \\textbf{$\\text{sc}^+$-multisection} of a strong polyfold bundle $P:{\\mathcal W}\\to {\\mathcal Z}$ consists of the following:\n\t\t\\begin{itemize}\n\t\t\t\\item a function $\\Lambda:{\\mathcal W} \\to \\mathbb{Q}^+$,\n\t\t\t\\item an associated functor $\\hat{\\Lambda}: W \\to \\mathbb{Q}^+$ where $\\abs{\\hat{\\Lambda}}$ induces $\\Lambda$,\n\t\t\\end{itemize}\n\tsuch that at every $[x]\\in {\\mathcal Z}$ there exists a \\textbf{local section structure} defined as follows.\n\tLet $x\\in Z$ be a representative of $[x]$ and let $U\\subset Z$ be a ${\\bar\\m}{G}(x)$-invariant open neighborhood of $x$, and consider the restricted strong M-polyfold bundle $P: W|_U \\to U$.\n\tThen there exist finitely many $\\text{sc}^+$-sections $s_1,\\ldots,s_k : U \\to W_U$ (called \\textbf{local sections}) with associated positive rational numbers $\\sigma_1,\\ldots ,\\sigma_k \\in \\mathbb{Q}^+$ (called \\textbf{weights}) which satisfy the following:\n\t\\begin{enumerate}\n\t\t\\item $\\sum_{i=1}^k \\sigma_i =1.$\n\t\t\\item The restriction $\\hat{\\Lambda}|_{W|_{U}}: W_U \\to \\mathbb{Q}^+$ is related to the local sections and weights via the equation \n\t\t\t\\[\n\t\t\t\\hat{\\Lambda}|_{W|_{U}}(w)=\\sum_{i\\in \\{1,\\ldots, k \\mid w=s_i(P(w))\\}} \\sigma_i\n\t\t\t\\]\n\t\twhere the empty sum has by definition the value $0$.\n\t\\end{enumerate}\t\n\\end{definition}\n\nWe define the \\textbf{domain support} of $\\Lambda$ as the subset of ${\\mathcal Z}$ given by\n\t\\[\n\t\\text{dom-supp}(\\Lambda) := \\text{cl}_{\\mathcal Z} (\t\\{\t[x]\\in{\\mathcal Z} \\mid \\exists [w]\\in {\\mathcal W}_{[x]}\\setminus\\{0\\} \\text{ such that } \\Lambda([w])>0\t\\}\t).\n\t\\]\n\\begin{comment}\nThe \\textbf{domain support} of $\\Lambda$ is the subset of $Z$, defined by \n\\[\n\\text{dom-supp}(\\Lambda) = \\text{cl}_Z (\\{x\\in Z \\mid \\text{there exists $w\\in W_x\\setminus\\{0\\}$ for which $\\Lambda(w)>0$}\\}).\n\\]\n\nThe \\textbf{support of $\\hat{\\Lambda}$} is the subset of $\\operatorname{supp}(\\hat{\\Lambda})$ of ${\\mathcal W}$ defined by\n\\[\n\\operatorname{supp}(\\hat{\\Lambda}) = \\{\t[w]\\in{\\mathcal W} \\mid \\hat{\\Lambda}(w)>0\t\\}\n\\]\nThe \\textbf{support of $\\Lambda$} is the subset $\\operatorname{supp}(\\Lambda)$ of $W$ defined by\n\\[\n\\operatorname{supp}(\\Lambda)=\\{w\\in W\t\\mid\t\\Lambda(w)>0\\}.\n\\]\n\\end{comment}\n\n\\begin{definition}\n\t\\label{def:perturbed-solution-space}\n\tAssociated to a $\\text{sc}$-smooth Fredholm section $\\overline{\\partial}$ and a $\\text{sc}^+$-multisection $\\Lambda$, \n\twe define the \\textbf{perturbed solution space} as the set\n\t\t\\[\n\t\t{\\mathcal S}(\\overline{\\partial},\\Lambda) :=\\{[x]\\in{\\mathcal Z} \\mid \\Lambda(\\overline{\\partial}([x]))\t>0\t\\}\\subset {\\mathcal Z}\n\t\t\\]\n\twith topology given by the subspace topology induced from ${\\mathcal Z}$. 
It is equipped with a \\textbf{weight function} $\\Lambda\\circ \\overline{\\partial}:{\\mathcal S}(\\overline{\\partial},\\Lambda) \\to \\mathbb{Q}^+$.\n\tThe space ${\\mathcal S}(\\overline{\\partial},\\Lambda)$ has an associated subgroupoid structure $({\\mathcal S}(\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$},\\hat{\\Lambda}), {\\bar\\m}{{\\mathcal S}}(\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$},\\hat{\\Lambda}))$ with (saturated) object set\n\t\t\\[\n\t\tS(\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$},\\hat{\\Lambda}) :=\\{x\\in Z \\mid \\hat{\\Lambda} ( \\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$}(x))>0\\} \\subset Z\n\t\t\\]\n\tand with morphism set given by \n\t\t\\[\n\t\t{\\bar\\m}{S} (\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$}, \\hat{\\Lambda}) := \\{ \\phi \\in {\\bar\\m}{Z}\t\\mid s(\\phi) \\in S(\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$},\\hat{\\Lambda})\t\\} \\subset\t{\\bar\\m}{Z}\n\t\t\\]\n\t(we could equivalently require that $t(\\phi)\\in S(\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$}, \\hat{\\Lambda})$).\n\tIt is equipped with a \\textbf{weight functor} $\\hat{\\Lambda}\\circ \\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$}:({\\mathcal S}(\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$},\\hat{\\Lambda}), {\\bar\\m}{{\\mathcal S}}(\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$},\\hat{\\Lambda})) \\to \\mathbb{Q}^+$.\n\\end{definition}\n\nNote that the space ${\\mathcal S}(\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$},\\hat{\\Lambda})$ or the subgroupoid $({\\mathcal S}(\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$},\\hat{\\Lambda}), {\\bar\\m}{{\\mathcal S}}(\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$},\\hat{\\Lambda}))$ can be respectively encoded entirely via the weight function or the weight functor; such a description is closer to the language used in \\cite{HWZ3} and \\cite{HWZbook}.\n\n\\subsubsection{Transverse perturbations}\n\nAt a local level, it is easy to adapt the functional analytic construction of compact perturbations of Fredholm operators to M-polyfolds; the implicit function theorem for M-polyfolds \\cite[Thm.~3.13]{HWZbook} then guarantees that the zero set of a transversal $\\text{sc}$-Fredholm section has the structure of a finite-dimensional manifold.\nIt is somewhat more involved to adapt these constructions to the global level, as this requires using multisections to obtain equivariance.\n\n\\begin{definition}[{\\cite[Def.~15.2]{HWZbook}}]\n\t\\label{def:transversal-pair}\n\tLet $P:{\\mathcal W}\\to {\\mathcal Z}$ be a strong polyfold bundle, $\\overline{\\partial}$ a $\\text{sc}$-smooth Fredholm section, and $\\Lambda$ a $\\text{sc}^+$-multisection.\n\t\n\tConsider a point $[x]\\in {\\mathcal Z}$. We say $(\\overline{\\partial},\\Lambda)$ is \\textbf{transversal at $[x]$} if, given a local $\\text{sc}^+$-section structure for $\\Lambda$ at a representative $x$, the linearized local expression \n\t\t\\[D(\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$}-s_i)(x):T_x Z \\to W_x\\]\n\tis surjective for all $i\\in I$ with $\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$}(x)=s_i(x)$. 
We say that $(\\overline{\\partial},\\Lambda)$ is \\textbf{transversal} if it is transversal at every $[x] \\in {\\mathcal S}(\\overline{\\partial},\\Lambda)$.\n\n\t\n\n\t\n\n\\end{definition}\n\nGiven a $\\text{sc}$-Fredholm section it is relatively easy to construct a transversal multisection (see the general position argument of \\cite[Thm.~15.4]{HWZbook}); a key ingredient is \\cite[Lem.~5.3]{HWZbook} which guarantees the existence of locally defined $\\text{sc}^+$-sections which take on a prescribed value at a point.\n\n\\begin{theorem}[{\\cite[Thm.~4.13]{HWZ3}}\\footnote{The original statement of this theorem carries the additional requirement that the perturbed solution set ${\\mathcal S}(\\overline{\\partial},\\Lambda)$ is a compact set. This requirement is unnecessary, and is not used in the proof.}]\n\t\\label{thm:transversal-pairs-weighted-branched-suborbifolds} \\label{thm:transversality}\n\n\tIf the pair $(\\overline{\\partial},\\Lambda)$ is transversal, then the perturbed solution set ${\\mathcal S}(\\overline{\\partial},\\Lambda)$ carries in a natural way the structure of a weighted branched suborbifold.\n\\end{theorem}\n\n\\begin{remark}[Relationship between local section structures and local branching structures]\n\t\\label{rmk:relationship-local-section-structures-local-branching-structures}\n\tConsider a weighted branched suborbifold ${\\mathcal S}(\\overline{\\partial},\\Lambda)$ defined by a transversal $\\text{sc}$-smooth Fredholm section $\\overline{\\partial}$ and a $\\text{sc}^+$-multisection $\\Lambda$.\n\tThe relationship between the local section structure for $\\Lambda$ and the local branching structure can be described as follows.\n\tConsider a point $[x]\\in{\\mathcal S}(\\overline{\\partial},\\Lambda)$, and let $U\\subset Z$ be a ${\\bar\\m}{G}(x)$-invariant open neighborhood of a representative $x$.\t\n\tConsider a local section structure for $\\Lambda$ at $[x]$ consisting of $\\text{sc}^+$-sections $s_i : U \\to W|_U$ and weights $w_i$ for $i\\in I$.\n\tThe implicit function theorem for M-polyfolds then implies that the sets\n\t\t\\[\n\t\tM_i = (\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$} -s_i)^{-1}(0)\n\t\t\\]\n\tdefine finite dimensional submanifolds which together with the weights $w_i$ give a local branching structure in $U$.\n\\end{remark}\n\n\\subsubsection{Pairs which control compactness}\n\nGiven a proper $\\text{sc}$-smooth Fredholm section $\\overline{\\partial}$ and a $\\text{sc}^+$-multisection $\\Lambda$, we need some way to control the compactness of the resulting perturbed solution space ${\\mathcal S}(\\overline{\\partial},\\Lambda)$. This can be achieved by requiring that the perturbation $\\Lambda$ is ``small'' in a suitable sense.\n\n\\begin{definition}[{\\cite[Def.~12.2]{HWZbook}}]\n\t\\label{def:auxiliary-norm}\n\tLet $P:{\\mathcal W}\\to {\\mathcal Z}$ be a strong polyfold bundle.\n\tWe define an \\textbf{auxiliary norm} as a $\\text{sc}^0$-map \n\t\\[N:{\\mathcal W}[1] \\to [0,\\infty)\\]\n\twhere we regard $[0,\\infty)$ as a smooth manifold with the trivial ep-groupoid structure (i.e., a polyfold with finite-dimensional local models and trivial isotropy).\n\tIt has an associated $\\text{sc}^0$-functor $\\hat{N}:W[1]\\to [0,\\infty)$ where as usual $\\abs{\\hat{N}}$ induces $N$.\n\tWe require that $\\hat{N}$ satisfies the following conditions.\n\t\\begin{enumerate}\n\t\t\\item The restriction of $\\hat{N}$ to each fiber $W_x[1]$ is a complete norm. 
(Recall that for each $x\\in Z$, the fiber $W_x[1]$ is a $\\text{sc}$-Banach space.)\n\t\t\\item \\label{property-2-auxiliary-norm} If $\\{h_k\\}$ is a sequence in $W[1]$ such that $\\{\\hat{P}(h_k)\\}$ converges in $Z$ to an object $x$, and if $\\lim_{k\\to \\infty} \\hat{N}(h_k) = 0$, then $\\{h_k\\}$ converges to $0_x \\in W_x[1]$.\n\t\\end{enumerate}\n\\end{definition}\n\n\nGiven a point $[x]\\in{\\mathcal Z}$ we define the \\textbf{pointwise norm} of $\\Lambda$ at $[x]$ with respect to the auxiliary norm $N$ by\n\t\\[\n\tN[\\Lambda] ([x]) := \\max \\{\tN([w])\t\\mid\t[w]\\in {\\mathcal W}[1], \\Lambda ([w])>0, P([w])=[x]\t\\}\n\t\\]\nand moreover define the \\textbf{norm} of $\\Lambda$ with respect to $N$ by\n\t\\[\n\tN[\\Lambda] := \\sup_{[x]\\in{\\mathcal Z}} N[\\Lambda] ([x]).\n\t\\]\n\n\\begin{definition}[{\\cite[Def.~15.4]{HWZbook}}]\n\t\\label{def:pair-which-controls-compactness}\n\tLet $P:{\\mathcal W}\\to {\\mathcal Z}$ be a strong polyfold bundle, let $\\overline{\\partial}$ be a $\\text{sc}$-smooth proper Fredholm section, and let $N :{\\mathcal W}[1]\\to [0,\\infty)$ be an auxiliary norm.\t\n\tConsider an open neighborhood ${\\mathcal U}$ of the unperturbed solution set ${\\mathcal S}(\\overline{\\partial})\\subset {\\mathcal Z}$.\n\tWe say that the pair $(N,{\\mathcal U})$ \\textbf{controls the compactness} of $\\overline{\\partial}$ if the set \n\t\\[\n\tcl_{\\mathcal Z} \\{[x]\\in {\\mathcal U} \\mid \\overline{\\partial} ([x]) \\in {\\mathcal W}[1], N(\\overline{\\partial}([x]))\\leq 1\\} \\subset {\\mathcal Z}\n\t\\]\n\tis compact.\n\\end{definition}\n\n\\begin{remark}\n\t\\label{rmk:shrink-neighborhood}\n\tWe may always shrink the controlling neighborhood ${\\mathcal U}$ of the unperturbed solution set. To be precise, suppose that $(N, {\\mathcal U})$ is a pair which controls compactness, and let ${\\mathcal U}'$ be an open set such that ${\\mathcal S}(\\overline{\\partial})\\subset{\\mathcal U}'\\subset {\\mathcal U}$. It is immediate from the above definition that the pair $(N, {\\mathcal U}')$ also controls compactness.\n\\end{remark}\n\nGiven a $\\text{sc}$-smooth proper Fredholm section of a strong polyfold bundle, \\cite[Prop.~2.27]{HWZ3} guarantees the existence of auxiliary norms. 
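\nUnwinding the definitions (an observation we record here for convenience, not a statement from \\cite{HWZ3}): if $\\Lambda$ has a local section structure near a representative $x$ of $[x]$ given by local sections $s_1,\\ldots,s_k$ and weights $\\sigma_1,\\ldots,\\sigma_k$, then the only classes $[w]$ over $[x]$ with $\\Lambda([w])>0$ are those represented by $s_1(x),\\ldots,s_k(x)$, and hence\n\t\\[\n\tN[\\Lambda]([x]) = \\max_{1\\leq i\\leq k} N([s_i(x)]).\n\t\\]\nIn particular, the smallness condition $N[\\Lambda]\\leq 1$ appearing in Theorem~\\ref{thm:compactness} below bounds the pointwise size of every local section of the perturbation.\n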
The existence of a pair which controls compactness follows from \\cite[Thm.~4.5]{HWZ3}, which states that given an auxiliary norm $N$ there always exists an associated neighborhood ${\\mathcal U}$ such that the pair $(N, {\\mathcal U})$ controls compactness.\n\n\\begin{theorem}[{\\cite[Lem.~4.16]{HWZ3}}]\n\t\\label{thm:compactness}\n\tLet $P:{\\mathcal W}\\to {\\mathcal Z}$ be a strong polyfold bundle, let $\\overline{\\partial}$ be a $\\text{sc}$-smooth proper Fredholm section, and let $(N, {\\mathcal U})$ be a pair which controls compactness.\n\t\n\tConsider a $\\text{sc}^+$-multisection $\\Lambda$ and suppose it satisfies the following:\n\t\t\\begin{itemize}\n\t\t\t\\item $N[\\Lambda] \\leq 1$,\n\t\t\t\\item $\\text{dom-supp} (\\Lambda) \\subset {\\mathcal U}$.\n\t\t\\end{itemize}\n\tThen the perturbed solution set ${\\mathcal S}(\\overline{\\partial},\\Lambda)$ is compact.\n\tWe call such a $\\text{sc}^+$-multisection \\textbf{$(N,{\\mathcal U})$-admissible} (compare with \\cite[Def.~15.5]{HWZbook}).\n\\end{theorem}\n\n\\subsubsection{Determinant line bundles and orientations}\n\nWe do not try to give a full account of the polyfold theory on orientations (for that, we refer to \\cite[\\S~6]{HWZbook}).\nHowever, to talk precisely about orientations in our main theorems, it is necessary to give a brief summary of the main ideas and definitions.\n\n\\begin{definition}[{\\cite[Defs.~6.3,~6.4]{HWZbook}}]\n\tLet $T: E \\to F$ be a bounded linear Fredholm operator between real Banach spaces. The \\textbf{determinant} of $T$ is the 1-dimensional real vector space\n\t\t\\[\\det T = \\Lambda^{\\max} (\\ker T) \\otimes \\left(\\Lambda^{\\max} (\\operatorname{coker} T)\t\\right)^*.\\]\n\tAn \\textbf{orientation} of $T$ is a choice of orientation of the real line $\\det T$.\n\\end{definition}\n\nLet $P: W\\to Z$ be a strong M-polyfold bundle, and let $\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$}: Z \\to W$ be a $\\text{sc}$-smooth Fredholm section.\nIn general, there is no intrinsic notion of a linearization of the section $\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$}$ at smooth points $x \\in Z_\\infty$ if $\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$}(x) \\neq 0$.\nTo deal with this, one chooses a locally defined $\\text{sc}^+$-section $s$ such that $s(x)=\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$}(x)$; one may then consider the well-defined linearization $D(\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$} - s)(x) :T_xZ \\to W_x$.\nThe \\textbf{space of linearizations} of $\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$}$ at $x$ is then defined as the following subset of the linear Fredholm operators from $T_x Z$ to $W_x$:\n\t\\[\\operatorname{Lin}(\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$},x) := \\{\tD(\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$} - s)(x) + a \\mid a: T_x Z \\to W_x \\text{ is a }\\text{sc}^+\\text{-operator}\t\\}.\\]\nIt may be observed that $\\operatorname{Lin}(\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$},x)$ is a convex subset, and hence is contractible.\n\nTo each linearization we may associate its determinant; in doing so, we may consider the disjoint union\n\t\\[\\operatorname{DET}(\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$},x) := \\bigsqcup_{L\\in \\operatorname{Lin}(\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$},x)} \\{L\\}\\times \\det(L).\\]\nA priori, this set does not have much structure, as although each determinant is a real line, locally the kernel and cokernel of the linearizations may vary in 
dimension.\nHowever, with some work it is possible to prove the following proposition.\n\n\\begin{proposition}[{\\cite[Prop.~6.11]{HWZbook}}]\n\tThe set $\\operatorname{DET}(\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$},x)$ has the structure of a topological line bundle over $\\operatorname{Lin}(\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$},x)$.\n\tThe base space $\\operatorname{Lin}(\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$},x)$ is contractible and hence $\\operatorname{DET}(\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$},x)$ has two possible orientations.\n\\end{proposition}\n\nWe may therefore define an \\textbf{orientation of $\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$}$ at a smooth point $x\\in Z_\\infty$} as a choice of one of the two possible orientations for $\\operatorname{DET}(\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$},x)$. We denote such an orientation by $o_{(\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$},x)}$.\n\nAs we vary the smooth points, we need some way to compare the orientations at each point.\nIntuitively, the choice of an orientation at a point should automatically determine an orientation at all nearby points.\nThis intuition is made precise in the theory as a sort of ``local orientation propagation.''\n\n\\begin{theorem}[{\\cite[Thm.~6.1]{HWZbook}}]\n\tConsider a smooth point $x\\in Z_\\infty$. There exists an open neighborhood $U\\subset Z$ such that for any smooth point $y\\in U$ and for any $\\text{sc}$-smooth path $\\phi: [0,1] \\to Z$ with $\\phi(0)=x,$ $\\phi(1)=y$ there exists a well-defined ``local orientation propagation.'' This means that, given an orientation $o_{(\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$},x)}$ of $\\operatorname{DET}(\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$},x)$ we can associate an orientation $\\phi_* o_{(\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$},x)}$ of $\\operatorname{DET}(\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$},y)$, and moreover this association does not depend on the choice of $\\text{sc}$-smooth path.\n\\end{theorem}\n\nWe may therefore define an orientation of a Fredholm section as a fixed choice of orientation at all smooth points which is consistent with the local orientation propagation.\n\n\\begin{definition}[{\\cite[Def.~6.11]{HWZbook}}]\n\t\\label{def:oriented-Fredholm}\n\tLet $P: W\\to Z$ be a strong M-polyfold bundle, and let $\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$}: Z \\to W$ be a $\\text{sc}$-smooth Fredholm section.\n\tWe define an \\textbf{orientation} of $\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$}$ as an association for every smooth point $x\\in Z_\\infty$ with an orientation $o_{(\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$},x)}$ of the determinant $\\operatorname{DET}(\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$},x)$ and which is consistent with the local orientation propagation in the following sense.\n\t\n\tFor any two smooth points $x, y \\in Z_\\infty$ and for any $\\text{sc}$-smooth path $\\phi:[0,1] \\to Z$ with $\\phi(0)=x,$ $\\phi(1)=y$ the orientation $o_{(\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$},y)}$ is the same as the pushforward orientation $\\phi_* o_{(\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$},x)}$ determined by the local orientation propagation. 
(Compare with \\cite[Defs.~6.12,~6.13]{HWZbook}.)\n\\end{definition}\n\nWe end with an observation regarding how the above abstract discussion induces orientations on the perturbed solution spaces.\nConsider an oriented $\\text{sc}$-smooth Fredholm section $\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$}:Z\\to W$ and consider a $\\text{sc}^+$-section locally defined on a neighborhood $U$ of a point $x\\in Z$, $s: U \\to W|_U$.\nSuppose that $\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$}(x) = s(x)$ and suppose that the linearization \n\t\\[D(\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$} - s)(x) :T_x Z \\to W_x\\]\nis surjective. The implicit function theorem for M-polyfolds implies that $M:= (\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$} - s)^{-1}(0)$ has the structure of a finite-dimensional manifold.\n\nA choice of orientation $o_{(\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$},x)}$ of $\\operatorname{DET}(\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$},x)$ determines for any linearization $T\\in \\operatorname{Lin} (\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$},x)$ a choice of orientation of $\\det T$.\nThen simply observe that $D(\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$} - s)(x) \\in \\operatorname{Lin}(\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$},x)$, and since\n\t\\[\\det (D(\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$} - s)(x)) = \\Lambda^{\\max} (\\ker (D(\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$} - s)(x))) = \\Lambda^{\\max} (T_x M),\\]\na choice of orientation for $\\det (D(\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$} - s)(x))$ automatically induces an orientation for $M$ at $x$.\n\n\\subsubsection{Regular perturbations and compact cobordism}\n\nIn order to define invariants, a perturbed solution set needs to be both transversally cut out and compact.\nWe therefore introduce the following definition, given also in \\cite[Cor.~15.1]{HWZbook}.\n\n\\begin{definition}\n\t\\label{def:regular-perturbation}\n\tLet $P:{\\mathcal W}\\to {\\mathcal Z}$ be a strong polyfold bundle, let $\\overline{\\partial}$ be a $\\text{sc}$-smooth proper Fredholm section, and let $(N, {\\mathcal U})$ be a pair which controls compactness.\n\t\n\tSuppose a $\\text{sc}^+$-multisection $\\Lambda$ satisfies both the requirements of Theorem~\\ref{thm:transversality} and Theorem~\\ref{thm:compactness}, i.e.,\n\t\\begin{itemize}\n\t\t\\item $(\\overline{\\partial}, \\Lambda)$ is a transversal pair,\n\t\t\\item $N[\\Lambda] \\leq 1$ and $\\text{dom-supp} (\\Lambda) \\subset {\\mathcal U}$.\n\t\\end{itemize}\n\tWe then say $\\Lambda$ is a \\textbf{regular perturbation} of $\\overline{\\partial}$ with respect to the pair $(N,{\\mathcal U})$.\n\\end{definition}\n\n\\begin{corollary}[{\\cite[Cor.~15.1]{HWZbook}}]\n\t\\label{prop:existence-regular-perturbations}\n\n\t\n\tThere exist regular perturbations $\\Lambda$ of $\\overline{\\partial}$ with respect to the pair $(N,{\\mathcal U})$.\t\n\tTheorems~\\ref{thm:transversality} and \\ref{thm:compactness} immediately imply that the perturbed solution space ${\\mathcal S}(\\overline{\\partial},\\Lambda)$ has the structure of a compact weighted branched suborbifold, with weight function given by $\\Lambda\\circ \\overline{\\partial} : {\\mathcal S}(\\overline{\\partial},\\Lambda) \\to \\mathbb{Q}^+$.\n\\end{corollary}\n\nCompact weighted branched suborbifolds are suitable geometric spaces for defining invariants.\nHowever, it remains to show that such invariants are independent of the choices used to define 
such a compact weighted branched suborbifold, in particular, that they are independent of:\n\t\\begin{itemize}\n\t\t\\item the choice of regular perturbation,\n\t\t\\item the choice of pair which controls compactness.\n\t\\end{itemize}\n\n\\begin{theorem}[{\\cite[Cor.~15.1]{HWZbook}}]\n\t\\label{thm:cobordism-between-regular-perturbations}\n\tLet $P:{\\mathcal W}\\to {\\mathcal Z}$ be a strong polyfold bundle, let $\\overline{\\partial}$ be a $\\text{sc}$-smooth proper oriented Fredholm section, and let $(N_0, {\\mathcal U}_0)$, $(N_1,{\\mathcal U}_1)$ be two pairs which control compactness. Suppose that $\\Lambda_0$ is a regular perturbation of $\\overline{\\partial}$ with respect to the pair $(N_0,{\\mathcal U}_0)$, and likewise $\\Lambda_1$ is a regular perturbation of $\\overline{\\partial}$ with respect to the pair $(N_1,{\\mathcal U}_1)$. Consider the strong polyfold bundle $[0,1]\\times {\\mathcal W} \\to [0,1]\\times {\\mathcal Z}$ and the $\\text{sc}$-smooth proper oriented Fredholm section $\\scalerel*{\\tilde{\\overline{\\partial}}}{\\hat{M}}$ defined by $(t,[z]) \\mapsto (t,\\overline{\\partial}([z]))$.\n\t\n\tThen there exists a pair $(N,{\\mathcal U})$ which controls the compactness of $\\scalerel*{\\tilde{\\overline{\\partial}}}{\\hat{M}}$ and which satisfies the following:\n\t\\begin{enumerate}\n\t\t\\item the auxiliary norm $N:[0,1]\\times {\\mathcal W}[1] \\to [0,\\infty)$ restricts to $N_0$ on $\\{0\\}\\times {\\mathcal W}$ and restricts to $N_1$ on $\\{1\\}\\times {\\mathcal W}$,\n\t\t\\item the open neighborhood ${\\mathcal U}$ of ${\\mathcal S} (\\scalerel*{\\tilde{\\overline{\\partial}}}{\\hat{M}})$ satisfies ${\\mathcal U} \\cap (\\{0\\}\\times {\\mathcal Z} )= {\\mathcal U}_0$ and ${\\mathcal U} \\cap (\\{1\\}\\times {\\mathcal Z} )= {\\mathcal U}_1$.\n\t\\end{enumerate}\n\t\n\tIn addition, there exists a regular perturbation $\\tilde{\\Lambda}$ of $\\scalerel*{\\tilde{\\overline{\\partial}}}{\\hat{M}}$ with respect to the pair $(N,{\\mathcal U})$, such that $\\tilde{\\Lambda}|_{\\{0\\}\\times {\\mathcal W}}$ can be identified with $\\Lambda_0$ and likewise $\\tilde{\\Lambda}|_{\\{1\\}\\times {\\mathcal W}}$ can be identified with $\\Lambda_1$.\n\t\n\tIt follows that the perturbed solution set ${\\mathcal S} (\\scalerel*{\\tilde{\\overline{\\partial}}}{\\hat{M}}, \\tilde{\\Lambda})$ has the structure of a compact weighted branched suborbifold, and is a cobordism between perturbed solution sets, in the sense that\n\t\t\\[\n\t\t\\partial {\\mathcal S} (\\scalerel*{\\tilde{\\overline{\\partial}}}{\\hat{M}}, \\tilde{\\Lambda}) = -{\\mathcal S} (\\overline{\\partial},\\Lambda_0) \\sqcup {\\mathcal S} (\\overline{\\partial},\\Lambda_1).\n\t\t\\]\n\\end{theorem}\n\n\\subsection{The branched integral and polyfold invariants}\n\t\\label{subsec:branched-integral-polyfold-invariants}\n\nWe now describe how to define the polyfold invariants through the use of the branched integral. 
The definition of the branched integral theory on compact oriented weighted branched suborbifolds was originally developed in \\cite{HWZint}.\n\n\\begin{definition}[{\\cite[Def.~4.9]{HWZbook}}]\n\tLet ${\\mathcal Z}$ be a polyfold with an associated polyfold structure $(Z,{\\bar\\m}{Z})$.\n\tThe vector space of $\\text{sc}$-differential $k$-forms $\\Omega^k (Z)$ is the set of $\\text{sc}$-smooth maps \n\t\\[\\omega:\\bigoplus^k_{n=1} TZ\\rightarrow \\mathbb{R}\\]\n\tdefined on the Whitney sum of the tangent of the object space, which are linear in each argument and skew-symmetric.\n\tMoreover, we require that the maps $\\omega$ are morphism invariant in the following sense: for every morphism $\\phi: x\\to y$ in ${\\bar\\m}{Z}_1$ with tangent map $T\\phi:T_xZ\\rightarrow T_yZ$ we require that\n\t\\[\n\t(T\\phi)^*\\omega_y=\\omega_x.\n\t\\]\n\\end{definition}\n\nRecall the definition of ${\\mathcal Z}^i$ as the shifted polyfold with shifted polyfold structure $(Z^i,{\\bar\\m}{Z}^i)$.\nVia the inclusion maps ${\\mathcal Z}^i \\hookrightarrow {\\mathcal Z}$ we may pullback a $\\text{sc}$-differential $k$-form $\\omega$ in $\\Omega^k(Z)$ to $\\Omega^k(Z^i)$, obtaining a directed system\n\\[\n\\Omega^k(Z) \\to \\cdots \\to \\Omega^k(Z^i) \\to \\Omega^k(Z^{i+1}) \\to \\cdots,\n\\]\nwe denote by $\\Omega^k_\\infty (Z)$ the direct limit of this system.\nAs defined in \\cite[p.~149]{HWZbook} there exists an \\textbf{exterior derivative}\n\\[\nd:\\Omega^*(Z^{i+1}) \\to \\Omega^{* +1}(Z^i)\n\\]\nsuch that the composition $d\\circ d = 0$.\nThe exterior derivative commutes with the inclusion maps $Z^i \\hookrightarrow Z^{i+1}$ and hence induces a map\n\\[\nd:\\Omega^*_\\infty(Z) \\to \\Omega^{* +1}_\\infty(Z)\n\\]\nwhich also satisfies $d\\circ d =0$.\n\n\\begin{theorem}[{\\cite[Thm.~9.2]{HWZbook}}]\n\t\\label{def:branched-integral}\n\tLet ${\\mathcal Z}$ be a polyfold with polyfold structure $(Z,{\\bar\\m}{Z})$ which admits $\\text{sc}$-smooth partitions of unity.\n\tGiven a $\\text{sc}$-smooth differential form $\\omega\\in \\Omega^n_\\infty (Z)$ and an $n$-dimensional compact oriented weighted branched suborbifold ${\\mathcal S}\\subset {\\mathcal Z}$.\n\t\n\tThen there exists a well-defined \\textbf{branched integral}, denoted as\n\t\\[\\int_{{\\mathcal S}} \\omega,\\]\n\twhich is partially characterized by the following property. \n\tConsider a point $[x]\\in {\\mathcal S}$ and a representative $x\\in S$ with isotropy group ${\\bar\\m}{G}(x)$. 
Let $(M_i)_{i\\in I}$, $(w_i)_{i\\in I}$, $(o_i)_{i\\in I}$ be a local branching structure at $x$ contained in a ${\\bar\\m}{G}(x)$-invariant open neighborhood $U\\subset Z$ of $x$.\n\tConsider a $\\text{sc}$-smooth differential form $\\omega\\in \\Omega^n_\\infty (Z)$ and suppose that $\\abs{\\operatorname{supp} \\omega} \\subset \\abs{U}$.\n\tThen\n\t\\[\n\t\\int_{{\\mathcal S}} \\omega = \\frac{1}{\\sharp {\\bar\\m}{G}^\\text{eff}(x)} \\left( \\sum_{i\\in I} w_i \\cdot \\int_{(M_i,o_i)} \\omega\\right)\n\t\\]\n\twhere $\\sharp {\\bar\\m}{G}^\\text{eff}(x)$ is the order of the effective isotropy group and $\\int_{(M_i,o_i)} \\omega$ is the usual integration of the differential $n$-form $\\omega$ on the oriented $n$-dimensional manifold $M_i$.\n\\end{theorem}\n\n\\begin{theorem}[Stokes' theorem, {\\cite[Thm.~9.4]{HWZbook}}]\n\t\\label{thm:stokes}\n\tLet ${\\mathcal Z}$ be a polyfold with polyfold structure $(Z,{\\bar\\m}{Z})$ which admits $\\text{sc}$-smooth partitions of unity.\n\tLet ${\\mathcal S}$ be an $n$-dimensional compact oriented weighted branched suborbifold, and let $\\partial {\\mathcal S}$ be its boundary with induced weights and orientation. Consider a $\\text{sc}$-differential form $\\omega\\in \\Omega^{n-1}_\\infty (Z)$.\n\tThen \n\t\\[\n\t\\int_{{\\mathcal S}} d\\omega = \\int_{\\partial {\\mathcal S}} \\omega.\n\t\\]\n\\end{theorem}\n\nThe next theorem follows the same reasoning used to prove \\cite[Thm.~11.8]{HWZbook}.\n\\begin{theorem}[Change of variables]\n\t\\label{thm:change-of-variables}\n\n\n\tLet ${\\mathcal S}_i\\subset {\\mathcal Z}_i$ be $n$-dimensional compact oriented weighted branched suborbifolds with weight functions $\\vartheta_i:{\\mathcal S}_i \\to \\mathbb{Q}^+$ for $i=1,2$. \n\tLet $(S_i,{\\bar\\m}{S_i})$ be the associated branched suborbifold structures with associated weight functors $\\hat{\\vartheta}_i: (S_i,{\\bar\\m}{S_i}) \\to \\mathbb{Q}^+$ for $i=1,2$.\n\t\n\tLet $g:{\\mathcal Z}_1 \\to {\\mathcal Z}_2$ be a $\\text{sc}$-smooth map between polyfolds, which has a well-defined restriction $g|_{{\\mathcal S}_1}\t: {\\mathcal S}_1 \\to {\\mathcal S}_2$ between the branched suborbifolds. 
In addition, assume the following:\n\t\\begin{itemize}\n\t\t\\item $g: {\\mathcal S}_1 \\to {\\mathcal S}_2$ is a homeomorphism between the underlying topological spaces,\n\t\t\\item $\\hat{g}: S_1\\to S_2$ is injective and an orientation preserving local homeomorphism,\n\t\t\\item $g$ is weight preserving, i.e., $\\vartheta_2\\circ g=\\vartheta_1$ and $\\hat{\\vartheta}_2 \\circ \\hat{g}=\\hat{\\vartheta}_1$.\n\t\\end{itemize}\n\t\n\tThen given a $\\text{sc}$-smooth differential form $\\omega \\in \\Omega^n_\\infty (Z_2)$,\n\t\\[\n\t\\int_{{\\mathcal S}_2} \\omega = \\int_{{\\mathcal S}_1} g^* \\omega.\n\t\\]\n\\end{theorem}\n\n\\begin{theorem}[Polyfold invariants as branched integrals, {\\cite[Cor.~15.2]{HWZbook}}]\n\tConsider a $\\text{sc}$-smooth map \n\t\\[\n\tf:{\\mathcal Z} \\to {\\mathcal O}\n\t\\]\n\tfrom a polyfold ${\\mathcal Z}$ to an orbifold ${\\mathcal O}$.\n\tWe may define the \\textbf{polyfold invariant} as the homomorphism obtained by pulling back a de Rham cohomology class from the orbifold and taking the branched integral over a perturbed zero set:\n\t\\[\n\tH^*_{\\dR} ({\\mathcal O}) \t\\to \\mathbb{R}, \\qquad \\omega \\mapsto \\int_{{\\mathcal S}(p)} f^*\\omega.\n\t\\]\n\tBy Theorem~\\ref{thm:cobordism-between-regular-perturbations} and by Stokes' theorem~\\ref{thm:stokes}, this homomorphism does not depend on the choice of abstract perturbation used to obtain the compact oriented weighted branched suborbifold ${\\mathcal S}(p)$.\n\\end{theorem}\n\n\n\\section{Naturality of polyfold invariants}\n\t\\label{sec:naturality-polyfold-invariants}\n\nIn this section we establish the necessary theory for proving the naturality of polyfold invariants, culminating in Theorem~\\ref{thm:naturality-polyfold-invariants} and in Corollaries~\\ref{cor:naturality-polyfold-gw-invariants} and \\ref{cor:punctures-equal}.\n\n\\subsection{Invariance of domain and branched suborbifolds}\n\t\\label{subsec:invariance-of-domain}\n\nIn the process of considering the naturality of the polyfold invariants, we will encounter a smooth bijection between weighted branched suborbifolds,\n\t\\[\n\tf: {\\mathcal S}_1 \\to {\\mathcal S}_2,\n\t\\]\nwhere $\\dim {\\mathcal S}_1 = \\dim {\\mathcal S}_2$ and ${\\mathcal S}_2$ is a compact topological space.\nWe would like to show that this map is a homeomorphism.\n\nHowever, using only knowledge of the topologies of these spaces, it is impossible to show this.\nThe key to resolving this problem is understanding the branched suborbifold structure and how to use this additional structure to prove an invariance of domain result.\nThis result will allow us to assert that the above map is a homeomorphism.\n\nInvariance of domain is a classical theorem of algebraic topology due to Brouwer, and was originally published in 1911.\n\\begin{theorem}[Invariance of domain, {\\cite{brouwer1911beweis}}]\n\t\\label{thm:invariance-of-domain}\n\tLet $U$ be an open subset of $\\mathbb{R}^n$, and let $f: U\\to \\mathbb{R}^n$ be an injective continuous map. Then $f$ is a homeomorphism between $U$ and $f(U)$.\n\\end{theorem}\n\nThis result can immediately be generalized to manifolds; let $M$ and $N$ be $n$-dimensional manifolds and let $f: M\\to N$ be an injective continuous map. Then $f$ is a homeomorphism onto its image. Moreover, if $f$ is bijective, it is a homeomorphism. 
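\nWe emphasize that some such result is genuinely needed (a standard point-set illustration, recalled only for motivation and not specific to the polyfold literature): continuity and bijectivity alone do not imply that a map is a homeomorphism, even when the target is compact. For instance, the map\n\t\\[\n\t[0,1) \\to S^1, \\qquad t \\mapsto (\\cos 2\\pi t, \\sin 2\\pi t),\n\t\\]\nis a continuous bijection whose inverse is discontinuous at the image of $0$; this does not contradict the statement above, since $[0,1)$ is a manifold with boundary. It is therefore the locally Euclidean structure of the local branches, via invariance of domain, and not the compactness of the target, which must be exploited.\n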
We seek to generalize this result to the branched suborbifolds of our current situation.\n\n\\subsubsection{Local topology of branched submanifolds}\n\n\nAs a starting definition, a \\emph{branched manifold} is a topological space which is locally homeomorphic to a finite union of open subsets of $\\mathbb{R}^n$.\nHowever, such a broad definition of a branched manifold immediately raises the possibility of non-desirable topological properties. Consider the classic example of the \\emph{line with two origins}---although this is a locally Euclidean and second-countable topological space, it is not Hausdorff.\nIn contrast, the branched submanifolds we study are embedded into open subsets of ambient M-polyfolds and have better behaved topologies.\n\n\\begin{lemma}\n\t\\label{lem:topology-of-local-branching-structures}\n\tLet $U$ be a metrizable topological space.\n\tLet $M_i$, $i\\in I$ be a finite collection of finite-dimensional manifolds together with inclusion maps $\\phi_i: M_i \\hookrightarrow U$.\n\tAssume moreover that each $\\phi_i$ is proper and a topological embedding.\n\t\n\tConsider the set defined by the image of the inclusions, $\\cup_{i \\in I} \\phi_i (M_i)$.\n\tThere are two topologies we may consider on this set:\n\t\\begin{itemize}\n\t\t\\item $(\\cup_{i \\in I} \\phi_i(M_i), \\tau_s)$, where $\\tau_s$ is the subspace topology induced from $U$\n\t\t\\item $(\\cup_{i \\in I} \\phi_i(M_i), \\tau_q)$, where $\\tau_q$ is the quotient topology induced by the map $\\sqcup_{i\\in I} \\phi_i : \\sqcup_{i\\in I} M_i \\to \\cup_{i \\in I} \\phi(M_i)$.\n\t\\end{itemize}\n\tThese two topologies are identical.\n\\end{lemma}\n\\begin{proof}\n\tWe show that $\\tau_s = \\tau_q$.\n\t\\begin{itemize}\n\t\t\\item $\\tau_s \\subset \\tau_q$\n\t\\end{itemize}\n\tConsider the following commutative diagram where $q$ is the quotient map, $\\phi_i$ are the continuous inclusion maps $\\phi_i: M_i \\to U$, and $i$ is inclusion map.\n\t\\begin{center}\n\t\t\\begin{tikzcd}\n\t\t\\bigsqcup_{i\\in I} M_i \\arrow[r, \"\\sqcup_{i\\in I}\\phi_i\"] \\arrow[d, \"q\"'] & U \\\\\n\t\t(\\bigcup_{i\\in I} \\phi_i(M_i),\\tau_q) \\arrow[ru, \"i\",hook] \\arrow[r, \"\\operatorname{id}\"] & (\\bigcup_{i \\in I} \\phi_i(M_i), \\tau_s) \\arrow[u, \"i\"', hook]\n\t\t\\end{tikzcd}\n\t\\end{center}\n\tThen by the characteristic property of the quotient topology, $\\sqcup_{i\\in I} \\phi_i$ continuous implies $i:(\\cup_{i \\in I} \\phi_i(M_i), \\tau_q) \\allowbreak \\hookrightarrow \\allowbreak U $ continuous.\n\tBy the definition of the subspace topology, $i:(\\cup_{i \\in I} \\phi_i(M_i), \\tau_q) \\hookrightarrow U $ is continuous.\n\tBy the characteristic property of the subspace topology, $i:(\\cup_{i \\in I} \\phi_i(M_i), \\tau_q) \\hookrightarrow U $ continuous implies\n\t$\\operatorname{id} : (\\cup_{i \\in I} \\phi_i(M_i), \\tau_q) \\to (\\cup_{i \\in I} \\phi_i(M_i), \\tau_s)$\n\tis continuous.\n\t\n\t\\begin{itemize}\n\t\t\\item $\\tau_q \\subset \\tau_s$\n\t\\end{itemize}\n\tBy assumption $U$ is a metrizable space; hence it is also a regular topological space.\n\tThe assumption that each $\\phi_i$ is a topological embedding and is proper implies moreover that the images $\\phi_i(M_i)\\subset U$ are closed in the subspace topology; to see this note that in metric spaces, sequential compactness is equivalent to compactness, and then use properness.\n\t\n\tSuppose $V\\subset \\cup_{i \\in I} \\phi_i(M_i)$ and $V\\in \\tau_q$. 
\n\tWe will show for every $x\\in V$ there exists a subset $B \\subset \\cup_{i \\in I} \\phi_i(M_i)$ such that $B \\in \\tau_s$ and $x \\in B \\subset V$. This implies that $V\\in\\tau_s$, as desired.\n\t\n\tBy the definition of the quotient topology, the set $q^{-1} (V) \\subset \\sqcup_{i\\in I} M_i$ is open and hence $q^{-1} (V) \\cap M_i$ is open in the topology on $M_i$.\tConsider $x$ as a point in $U$ via the set inclusion $\\cup_{i\\in I} \\phi_i(M_i) \\subset U$, since $\\phi_i : M_i\\to U$ is an injection it follows that $q^{-1} (x) = \\{x_{i_1}, \\ldots, x_{i_k} \\}$ where $x_{i_l} \\in M_{i_l}$ for a nonempty subset $\\{i_1,\\ldots , i_k \\} \\subset I$.\n\t\n\tLet $B_\\epsilon (x)\\subset U$ be an $\\epsilon$-ball at $x$.\n\tSince $\\phi_{i_l}$ is a topological embedding it follows that the sets $\\phi^{-1}_{i_l} (B_\\epsilon(x))$ give a neighborhood basis for $M_{i_l}$ at the point $x_{i_l}$.\n\tTherefore, we may take $\\epsilon$ small enough that $\\phi^{-1}_{i_l} (B_\\epsilon(x)) \\subset q^{-1} (V) \\cap M_{i_l}$ for all $i_l \\in \\{i_1,\\ldots , i_k \\}$.\t\n\tSince $U$ is a regular topological space, and since $x$ and $\\phi_j(M_j)$ are disjoint closed subsets of $U$ for $j \\in I \\setminus \\{i_1,\\ldots , i_k \\}$, we can find disjoint open neighborhoods that separate $x$ and $\\phi_j(M_j)$.\n\tThis moreover implies that we may take $\\epsilon$ small enough that $\\phi^{-1}_j (B_\\epsilon (x)) = \\emptyset$ for all $j \\in I \\setminus \\{i_1,\\ldots , i_k \\}$.\n\tFor such an $\\epsilon$, it follows that $\\phi_i^{-1} (B_\\varepsilon(x)) \\subset q^{-1}(V) \\cap M_i$ for all $i\\in I$.\n\t\t\n\tThe desired set is then given by\n\t\t\\[\n\t\tB:= B_\\epsilon (x) \\cap \\bigcup_{i \\in I} \\phi_i (M_i);\n\t\t\\]\n\tit is an open set in the subspace topology on $\\cup_{i \\in I} \\phi_i(M_i)$.\n\tBy construction, $q^{-1}(B) = \\sqcup_{i\\in I} \\phi_i^{-1} (B_\\varepsilon(x)) \\subset \\sqcup_{i\\in I} q^{-1}(V) \\cap M_i = q^{-1} (V)$, therefore $B\\subset V$ as desired.\n\n\t\n\n\t\n\n\\end{proof}\n\nAn open subset of an M-polyfold with the subspace topology is a metrizable topological space, and hence the above lemma applies to the branched suborbifolds of Definition~\\ref{def:weighed-branched-suborbifold}\n\n\\begin{lemma}\n\t\\label{lem:local-homeo-m-polyfold}\n\tLet $S_i$ be $n$-dimensional branched submanifolds of M-polyfolds $Z_i$ for $i=1,2$.\n\tConsider an injective continuous map between these two M-polyfolds,\n\t$\\hat{f}: Z_1 \\hookrightarrow Z_2,$\n\tand suppose that there is a well-defined restriction to the branched submanifolds,\n\t$\\hat{f}|_{S_1}:S_1 \\hookrightarrow S_2.$\n\t\n\tFor every $x\\in S_1$ with $y:= \\hat{f}(x) \\in S_2$, suppose that there exist local branching structures $(M_i)_{i\\in I}$ at $x$ and $(M'_j)_{j\\in I}$ at $y$ which have the same index set $I$. \n\tMoreover, assume that $\\hat{f}$ has a well-defined restriction to the individual local branches for each index $i\\in I$ as follows:\n\t\\[\n\t\\hat{f}|_{M_i} : M_i \\hookrightarrow M'_i.\n\t\\]\n\tThen $\\hat{f}|_{S_1}$ is a local homeomorphism between $S_1$ and $S_2$. Since we have assumed that $\\hat{f}$ is injective, it follows that $\\hat{f}|_{S_1}$ is also a homeomorphism onto its image.\n\\end{lemma}\n\\begin{proof}\n\tLet $x\\in S_1$ which maps to $\\hat{f}(x)\\in S_2$. 
By assumption, there exists a local branching structure $(M_i)_{i\\in I}$ in a neighborhood $O_x$ of $x$, and there exists a local branching structure $(M'_j)_{j\\in J}$ in a neighborhood $O_{\\hat{f}(x)}$ of $\\hat{f}(x)$ such that the index sets are the same, $I=J$, and $\\hat{f}$ restricts to a injective continuous map between each branch, i.e.,\n\t\\[\\hat{f}|_{M_i} : M_i \\to M'_i.\\]\n\t\n\tWe may invoke invariance of domain \\ref{thm:invariance-of-domain} to see that the restricted maps $\\hat{f}|_{M_i}$ are homeomorphisms onto their images.\n\tObserve that the open balls $B_\\epsilon(\\hat{f}(x))\\subset O_{\\hat{f}(x)}$ give a neighborhood basis for $M_i'$ at $\\hat{f}(x)$ for all $i\\in I$. It follows that $\\hat{f}^{-1} (B_\\epsilon (\\hat{f}(x))) \\subset O_x$ give a neighborhood basis for $M_i$ at $x$ for all $i\\in I$. For $\\epsilon$ small enough, the restricted maps \n\t\\begin{equation}\\label{eq:restriction-to-local-branches}\n\t\\hat{f}|_{M_i \\cap \\hat{f}^{-1} (B_\\epsilon (\\hat{f}(x)))} : M_i \\cap \\hat{f}^{-1} (B_\\epsilon (\\hat{f}(x))) \\to M_i' \\cap B_\\epsilon(\\hat{f}(x))\n\t\\end{equation}\n\tare homeomorphisms for all $i\\in I$.\n\t\n\tDefine a neighborhood of $x$ by $U_x = \\hat{f}^{-1} (B_\\epsilon (\\hat{f}(x)))$; then $N_i := M_i \\cap \\hat{f}^{-1} (B_\\epsilon(\\hat{f}(x)))$ give local branches in $U_x$. Define a neighborhood of $\\hat{f}(x)$ by $U_{\\hat{f}(x)}:=B_\\epsilon(\\hat{f}(x))$; then $N_i' := M_i' \\cap B_\\epsilon(\\hat{f}(x))$ give local branches in $U_{\\hat{f}(x)}$. We can now rewrite \\eqref{eq:restriction-to-local-branches} more simply as\n\t\\[\n\t\\hat{f}|_{N_i} : N_i \\to N'_i.\n\t\\]\n\tand note again that the maps $\\hat{f}|_{N_i}$ are homeomorphisms for all $i\\in I$. Hence the map $\\sqcup_{i\\in I} (\\hat{f}|_{N_i}): \\sqcup_{i\\in I} N_i \\to \\sqcup_{i\\in I} N'_i$ is also a homeomorphism.\n\t\n\tConsider the following commutative diagram of maps.\n\t\\begin{center}\n\t\t\\begin{tikzcd}\n\t\t\\bigsqcup_{i\\in I} N_i \\arrow[d, \"q\"] \\arrow[r, \"\\sqcup (\\hat{f}|_{N_i})\"] & \\bigsqcup_{i\\in I} N'_i \\arrow[d, \"q'\"] \\\\\n\t\t(\\cup_{i\\in I} N_i, \\tau_q) \\arrow[r, \"\\hat{f}|_{\\cup N_i}\"', hook] & (\\cup_{i\\in I} N'_i, \\tau_{q'})\n\t\t\\end{tikzcd}\n\t\\end{center}\n\tWe assert that the map $\\hat{f}|_{\\cup N_i} : (\\cup_{i \\in I} N_i,\\tau_q) \\hookrightarrow (\\cup_{i\\in I} N'_i,\\tau_{q'})$ is a homeomorphism.\n\tIndeed, by assumption $\\hat{f}|_{\\cup N_i}$ is injective. We can use the fact that $\\sqcup (\\hat{f}|_{N_i})$ is a bijection to see that $\\hat{f}|_{\\cup N_i}$ must also be surjective.\n\tIt is easy to check that $\\hat{f}|_{\\cup N_i}$ is continuous with respect to the quotient topologies $\\tau_q$ and $\\tau_{q'}$.\n\tFurthermore, $\\hat{f}|_{\\cup N_i}$ is an open map.\n\tTo see this, let $U\\subset (\\cup_{i \\in I} N_i,\\tau_q)$ be an open set. 
Then $q^{-1} (U) \\subset \\sqcup_{i\\in I} N_i$ is open by the definition of the quotient topology.\n\tSince $\\sqcup (\\hat{f}|_{N_i})$ is a homeomorphism, $(\\sqcup (\\hat{f}|_{N_i})) (q^{-1}(U))$ is open.\n\tCommutativity of the diagram and the fact that both $\\sqcup (\\hat{f}|_{N_i})$ and $\\hat{f}|_{\\cup N_i}$ are bijections implies that $(\\sqcup \\hat{f}|_{N_i}) (q^{-1}(U)) = q'^{-1} (\\hat{f}|_{\\cup N_i} (U))$.\n\tIt therefore follows that $\\hat{f}|_{\\cup N_i} (U)$ is open by the definition of the quotient topology.\n\t\n\tBy Lemma~\\ref{lem:topology-of-local-branching-structures}, the fact that $\\hat{f}|_{\\cup N_i} : (\\cup_{i \\in I} N_i,\\tau_q) \\hookrightarrow (\\cup_{i\\in I} N'_i,\\tau_{q'})$ is a homeomorphism implies that $\\hat{f}|_{\\cup N_i} : (\\cup_{i \\in I} N_i,\\tau_s) \\hookrightarrow (\\cup_{i\\in I} N'_i,\\tau_s)$ is a homeomorphism.\n\tNote that $\\cup_{i \\in I} N_i \\subset S_1$ and $\\cup_{i \\in I} N'_i \\subset S_2$ are both open subsets.\n\tBy Remark~\\ref{rmk:local-topology-subgroupoid}, the inclusion maps $(\\cup_{i \\in I} N_i,\\tau_s)\\hookrightarrow S_1$ and $(\\cup_{i\\in I} N'_i,\\tau_s)\\hookrightarrow S_2$ are both local homeomorphisms. We now see that the map $\\hat{f} : S_1 \\to S_2$ is a local homeomorphism on an open neighborhood of the point $x\\in S_1$. Since $x\\in S_1$ was arbitrary, and since $\\hat{f}$ is injective, we can conclude that $\\hat{f}$, considered on the object sets, is a local homeomorphism. It then follows from the \\'etale property that $\\hat{f}$, considered on the morphism sets, is a local homeomorphism. This proves the claim.\n\\end{proof}\n\n\\begin{lemma}\n\t\\label{lem:invariance-of-domain-branched-orbifolds}\n\tLet ${\\mathcal S}_i$ be an $n$-dimensional branched suborbifold of a polyfold ${\\mathcal Z}_i$ for $i=1,2$.\n\tConsider an injective continuous map between these two polyfolds, $f: {\\mathcal Z}_1 \\hookrightarrow {\\mathcal Z}_2$,\n\twhich has an associated functor $\\hat{f}: (Z_1,{\\bar\\m}{Z_1}) \\hookrightarrow (Z_2,{\\bar\\m}{Z_2})$, which is injective and continuous with respect to the object and morphism sets.\n\tIn addition, assume that the functor $\\hat{f}$ is fully faithful.\n\tSuppose that $f$ has a well-defined restriction to the branched suborbifolds $f|_{{\\mathcal S}_1}:{\\mathcal S}_1 \\hookrightarrow {\\mathcal S}_2$; it follows that $\\hat{f}$ restricts to a well-defined functor between the subgroupoids $\\hat{f}|_{S_1} : (S_1,{\\bar\\m}{S}_1) \\to (S_2,{\\bar\\m}{S}_2)$.\n\t\n\tAssume that for every $x\\in S_1$ with $y:= \\hat{f}(x) \\in S_2$, there exist local branching structures $M_i$, $i\\in I$ at $x$ and $M'_j$, $j\\in I$ at $y$ which have the same index set $I$. \n\tMoreover, assume that $\\hat{f}$ has a well-defined restriction to the individual local branches for each index $i\\in I$ as follows:\n\t\\[\n\t\\hat{f}|_{M_i} : M_i \\hookrightarrow M'_i.\n\t\\]\n\t\n\tThen the restriction $f|_{{\\mathcal S}_1}:{\\mathcal S}_1 \\hookrightarrow {\\mathcal S}_2$ is a local homeomorphism. In particular, if $f|_{{\\mathcal S}_1}$ is a bijection, then it is a homeomorphism.\n\\end{lemma}\n\\begin{proof}\t\n\tLet $[x] \\in {\\mathcal S}_1$ and consider its image $f([x]) \\in {\\mathcal S}_2$. Let $x$ be a representative of $[x]$, hence $\\hat{f}(x)$ is a representative of $f([x])$. 
\n\tFrom the proof of Lemma~\\ref{lem:local-homeo-m-polyfold}, we have seen that there exists a local branching structure $(N_i)_{i\\in I}$ at $x$ and a local branching structure $(N'_i)_{i\\in I}$ at $\\hat{f}(x)$ such that $\\hat{f}|_{\\cup N_i} : (\\cup_{i \\in I} N_i,\\tau_s) \\hookrightarrow (\\cup_{i\\in I} N'_i,\\tau_s)$ is a homeomorphism.\n\t\n\tThe proof now follows the same reasoning as Lemma~\\ref{lem:local-homeo-m-polyfold}.\n\tConsider the following commutative diagram of maps.\n\t\\begin{center}\n\t\t\\begin{tikzcd}\n\t\t\\bigcup_{i\\in I} N_i \\arrow[d, \"q\"] \\arrow[r, \"\\hat{f}|_{\\cup N_i}\"] & \\bigcup_{i\\in I} N'_i \\arrow[d, \"q'\"] \\\\\n\t\t(\\abs{\\cup_{i\\in I} N_i},\\tau_q) \\arrow[r, \"f|_{\\abs{\\cup N_i}}\"', hook] & (\\abs{\\cup_{i\\in I} N'_i}, \\tau_{q'})\n\t\t\\end{tikzcd}\n\t\\end{center}\n\tWe assert that the map $f|_{\\abs{\\cup N_i}}$ is a homeomorphism.\n\tIndeed, by assumption $f|_{\\abs{\\cup N_i}}$ is injective. We can use the fact that $\\hat{f}|_{\\cup N_i}$ is a bijection to see that $f|_{\\abs{\\cup N_i}}$ must also be surjective.\n\tBy assumption, $f$ is continuous and therefore the restriction $f|_{\\abs{\\cup N_i}}$ is continuous.\n\tFurthermore, $f|_{\\abs{\\cup N_i}}$ is an open map.\n\tTo see this, let $U\\subset \\abs{\\cup_{i\\in I} N_i}$ be an open set.\n\tThen $q^{-1}(U)\\subset \\cup_{i \\in I} N_i$ is open by the definition of the quotient topology.\n\tSince $\\hat{f}|_{\\cup N_i}$ is a homeomorphism, $(\\hat{f}|_{\\cup N_i})(q^{-1}(U)) \\subset \\cup_{i \\in I} N'_i$ is open. Commutativity of the diagram and the fact that both $\\hat{f}|_{\\cup N_i}$ and $f|_{\\abs{\\cup N_i}}$ are bijections implies that $(\\hat{f}|_{\\cup N_i}) (q^{-1}(U)) = q'^{-1} (f|_{\\abs{\\cup N_i}}(U))$.\n\tIt therefore follows that $f|_{\\abs{\\cup N_i}}(U)$ is open by the definition of the quotient topology.\n\t\n\tProposition~\\ref{prop:natural-representation-subgroupoid} implies that the inclusion maps $\\abs{\\cup_{i\\in I} N_i} \\hookrightarrow {\\mathcal S}_1$ and $\\abs{\\cup_{i\\in I} N'_i} \\hookrightarrow {\\mathcal S}_2$ are local homeomorphisms.\n\tWe now see that the map $f|_{{\\mathcal S}_1} : {\\mathcal S}_1 \\to {\\mathcal S}_2$ is a local homeomorphism on an open neighborhood of the point $[x]\\in {\\mathcal S}_1$. Since $[x]\\in {\\mathcal S}_1$ was arbitrary it follows that $f|_{{\\mathcal S}_1}$ is a local homeomorphism.\n\tIt moreover follows that if $f|_{{\\mathcal S}_1}$ is bijective, it is a homeomorphism. This proves the claim.\n\\end{proof}\n\n\\subsection{Fredholm multisections and abstract perturbations}\n\t\\label{subsec:fredholm-multisections}\n\nIn this subsection we generalize the polyfold abstract perturbation theory from Fredholm sections to Fredholm multisections.\nThis involves minor modifications to the definitions and theorems originally developed in \\cite{HWZ3} and which we recalled in \\S~\\ref{subsec:abstract-perturbations}.\nThis generalization is developed with a specific goal in mind, which is the proof of Theorem~\\ref{thm:naturality}.\n\n\\begin{definition}\n\tLet ${\\mathcal W}\\to {\\mathcal Z}$ be a strong polyfold bundle. 
We define a \\textbf{$\\text{sc}$-smooth Fredholm multisection} as \n\t\\begin{enumerate}\n\t\t\\item a function $F:{\\mathcal W} \\to \\mathbb{Q}^+$,\n\t\t\\item an associated functor $\\hat{F}: W \\to \\mathbb{Q}^+$ where $\\abs{\\hat{F}}$ induces $F$,\n\t\\end{enumerate}\n\tsuch that at every $[x]\\in {\\mathcal Z}$ there exists a \\textbf{local Fredholm section structure} defined as follows.\n\tLet $x\\in Z$ be a representative of $[x]$ and let $U\\subset Z$ be a ${\\bar\\m}{G}(x)$-invariant open neighborhood of $x$, and consider the restricted strong M-polyfold bundle $P: W|_U \\to U$.\n\tThen there exist finitely many $\\text{sc}$-Fredholm sections $f_1,\\ldots,f_k : U \\to W|_U$\n\twith associated positive rational numbers $\\sigma_1,\\ldots ,\\sigma_k \\in \\mathbb{Q}^+$ which satisfy the following:\n\t\\begin{enumerate}\n\t\t\\item $\\sum_{i=1}^k \\sigma_i =1.$\n\t\t\\item The restriction $\\hat{F}|_{W|_U}: W|_U \\to \\mathbb{Q}^+$ is related to the local sections and weights via the equation \n\t\t\t\\[\n\t\t\t\\hat{F}|_{W|_U}(w)=\\sum_{\\{i\\in \\{1,\\ldots, k\\} \\mid w=f_i(p(w))\\}} \\sigma_i\n\t\t\t\\]\n\t\twhere the empty sum has by definition the value $0$.\n\t\\end{enumerate}\n\\end{definition}\n\nWe say that the Fredholm multisection $F$ is \\textbf{proper} if the unperturbed solution set\n\t\\[\n\t{\\mathcal S} (F) := \\{ [z]\\in {\\mathcal Z} \\mid F(0_{[z]})\t>0\t\\} \\subset {\\mathcal Z}\n\t\\]\nis a compact topological space.\n(Notice that the condition $F(0_{[z]})>0$ is equivalent to the condition that $f_i(z) = 0$ for some $i \\in I$ for a given representative $z$ and a local Fredholm section structure $(f_i)_{i\\in I}$, $(\\sigma_i)_{i\\in I}$ at $z$.)\nFurthermore, we can define a weight function on the unperturbed solution set, ${\\mathcal S}(F) \\to \\mathbb{Q}^+$, by $[z] \\mapsto F(0_{[z]})$.\n\n\\begin{example}\n\tFor the applications we have in mind, the $\\text{sc}$-smooth Fredholm multisections are obtained as a pair $(\\overline{\\partial},\\Lambda)$ consisting of:\n\t\\begin{itemize}\n\t\t\\item a $\\text{sc}$-smooth Fredholm section $\\overline{\\partial}:{\\mathcal Z}\\to{\\mathcal W}$,\n\t\t\\item a $\\text{sc}^+$-multisection $\\Lambda :{\\mathcal W} \\to \\mathbb{Q}^+$.\n\t\\end{itemize}\n\tGiven a point $[x]\\in{\\mathcal Z}$, we define a local Fredholm section structure for $(\\overline{\\partial},\\Lambda)$ at $[x]$ as follows.\n\tLet $x\\in Z$ be a representative of $[x]$ and let $U\\subset Z$ be a ${\\bar\\m}{G}(x)$-invariant open neighborhood of $x$, and consider the restricted strong M-polyfold bundle $P: W|_U \\to U$.\n\tConsider the $\\text{sc}$-smooth Fredholm section $\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$}: U \\to W|_U$, and let $(s_i)_{i\\in I}$, $(\\sigma_i)_{i\\in I}$ be a local section structure for $\\Lambda$ at $x$.\n\t\n\tThen the local Fredholm section structure is given by $f_i := \\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$} - s_i$ with associated weight $\\sigma_i$. It follows from \\cite[Thm.~3.2]{HWZbook} that such an $f_i$ is in fact a $\\text{sc}$-smooth Fredholm section.\n\tWe may then define the functor $\\hat{F}$ locally via the equation\n\t\t\\[\n\t\t\\hat{F}|_{W|_U}(w)=\\sum_{\\{i\\in \\{1,\\ldots, k\\} \\mid w=f_i(p(w))\\}} \\sigma_i\n\t\t\\]\n\twhere the empty sum has by definition the value $0$. 
It is evident this extends to a well-defined functor $\\hat{F}: (W,{\\bar\\m}{W}) \\to \\mathbb{Q}^+$.\n\tFinally, observe that the perturbed solution set ${\\mathcal S} (\\overline{\\partial},\\Lambda)$ associated to the pair $(\\overline{\\partial},\\Lambda)$ is the same as the unperturbed solution set ${\\mathcal S} (F)$ associated to the Fredholm multisection $F$, i.e.,\n\t\t\\[\n\t\t\\{ [z]\\in {\\mathcal Z} \\mid \\Lambda(\\overline{\\partial}([z]))\t>0\t\\} = \\{ [z]\\in {\\mathcal Z} \\mid F(0_{[z]})\t>0\t\\}.\n\t\t\\]\n\\end{example}\n\n\\subsubsection{Transverse perturbations of Fredholm multisections}\n\nWe can immediately adapt the main definitions and results of \\S~\\ref{subsec:abstract-perturbations}; there is no difficulty in generalizing the construction of transverse perturbations to Fredholm multisections.\n\n\\begin{definition}\n\tAssociated to a $\\text{sc}$-smooth Fredholm multisection $F$ and a $\\text{sc}^+$-mul\\-ti\\-sec\\-tion $\\Gamma$, \n\twe define the \\textbf{perturbed solution space} as the set\n\t\\[\n\t{\\mathcal S}(F,\\Gamma) :=\\{[z]\\in{\\mathcal Z} \\mid (F\\oplus\\Gamma) (0_{[z]})\t>0\t\\}\\subset {\\mathcal Z}\n\t\\]\n\twith topology given by the subspace topology induced from ${\\mathcal Z}$. It is equipped with the weight function ${\\mathcal S}(F,\\Gamma) \\to \\mathbb{Q}^+,$ $[z]\\mapsto (F\\oplus\\Gamma) (0_{[z]}).$\n\\end{definition}\n\nAlong the same lines as Definition~\\ref{def:transversal-pair}, we can formulate what it means for a Fredholm multisection and a $\\text{sc}^+$-multisection to be transversal.\n\n\\begin{definition}\n\t\\label{def:fredholm-multisection-transversal}\n\tLet $P:{\\mathcal W}\\to {\\mathcal Z}$ be a strong polyfold bundle, $F$ a $\\text{sc}$-smooth Fredholm multisection, and $\\Gamma$ a $\\text{sc}^+$-multisection.\n\t\n\tConsider a point $[x]\\in {\\mathcal Z}$. We say $(F,\\Gamma)$ is \\textbf{transversal at $[x]$} if, given a local Fredholm section structure for $F$ at $[x]$ and given a local $\\text{sc}^+$-section structure for $\\Gamma$ at $[x]$, then the linearized local expression \n\t\t\\[\n\t\tD(f_i-s_j)(x):T_x Z \\to W_x\n\t\t\\]\n\tis surjective for all $i\\in I$, $j\\in J$ with $f_i(x)=s_j(x)$. We say that $(F,\\Gamma)$ is \\textbf{transversal} if it is transversal at every $[x] \\in {\\mathcal S}(F,\\Gamma)$.\n\\end{definition}\n\nConsider our example of a Fredholm multisection $(\\overline{\\partial},\\Lambda)$ consisting of a Fredholm section $\\overline{\\partial}$ and a $\\text{sc}^+$-multisection $\\Lambda$, and let $\\Gamma$ be an additional $\\text{sc}^+$-multisection. 
Then the sum $\\Lambda \\oplus \\Gamma: {\\mathcal W} \\to \\mathbb{Q}^+$ is a $\\text{sc}^+$-multisection, with local section structure given by $s_i +r_j$ where $(s_i)_{i\\in I}$ is a local section structure for $\\Lambda$ and $(r_j)_{j\\in J}$ is a local section structure for $\\Gamma$.\nWe may now observe that the pair $(\\overline{\\partial}, \\Lambda\\oplus \\Gamma)$ consisting of the Fredholm section $\\overline{\\partial}$ and the $\\text{sc}^+$-multisection $\\Lambda\\oplus \\Gamma$ is transversal in the sense of Definition~\\ref{def:transversal-pair} if and only if the pair $((\\overline{\\partial},\\Lambda), \\Gamma)$ consisting of the Fredholm multisection $(\\overline{\\partial},\\Lambda)$ and the $\\text{sc}^+$-multisection $\\Gamma$ is transversal in the sense of the above Definition~\\ref{def:fredholm-multisection-transversal}.\n\nWe have an analog of Theorem~\\ref{thm:transversal-pairs-weighted-branched-suborbifolds}.\n\n\\begin{proposition}\n\t\\label{prop:fredholm-multisection-transversal-pairs-weighted-branched-suborbifolds}\n\tLet $P:{\\mathcal W}\\rightarrow {\\mathcal Z}$ be a strong polyfold bundle, $F$ a $\\text{sc}$-smooth Fredholm multisection, and $\\Gamma$ a $\\text{sc}^+$-multisection.\n\tIf the pair $(F,\\Gamma)$ is transversal, then the perturbed solution set ${\\mathcal S}(F,\\Gamma)$ carries in a natural way the structure of a weighted branched suborbifold.\n\\end{proposition}\n\n\\subsubsection{Controlling compactness of Fredholm multisections}\n\nIn contrast to the construction of transverse perturbations of Fredholm multisections, where no modification of the underlying definitions or ideas was required, it is somewhat more involved to show how to control the compactness of Fredholm multisections.\nIt is necessary to refer to the earlier work contained in \\cite[\\S~4.2]{HWZ3} in order to obtain complete results in our current situation.\n\n\\begin{definition}\n\tConsider a Fredholm multisection $F$ and a point $[z] \\in {\\mathcal Z}$.\n\tLet $(f_i)_{i\\in I}$ be a local section structure for $F$ at a representative $z$.\n\tLet $N:{\\mathcal W}[1]\\to [0,\\infty)$ be an auxiliary norm with associated $\\text{sc}^0$-functor $\\hat{N}:W[1]\\to [0,\\infty)$; as in \\cite[p.~434]{HWZbook}, we may extend $N$ to all of ${\\mathcal W}$ by defining $N([w]):= +\\infty$ for $[w]\\in {\\mathcal W}[0] \\setminus {\\mathcal W}[1]$, and likewise extend $\\hat{N}$ to all of $W$.\n\tWe define the \\textbf{min norm of the Fredholm multisection} $F$ at $[z]$ by the equation\n\t\\[\n\tN_{\\min} (F) [z] := \\min_{i\\in I} \\{\t\\hat{N} (\tf_i(z)\t)\t\\}.\n\t\\]\n\n\\end{definition}\n\n\\begin{definition}\n\tLet $P:{\\mathcal W}\\to {\\mathcal Z}$ be a strong polyfold bundle, let $F$ be a $\\text{sc}$-smooth proper Fredholm multisection, and let $N :{\\mathcal W}\\to [0,\\infty)$ be an extended auxiliary norm.\n\t\n\tConsider an open neighborhood ${\\mathcal U}$ of the unperturbed solution set $\\mathcal{S}(F)\\subset {\\mathcal Z}$.\n\n\tWe say that the pair $(N,{\\mathcal U})$ \\textbf{controls the compactness} of $F$ provided the set \n\t\\[\n\t\\operatorname{cl}_{\\mathcal Z} \\{[x]\\in {\\mathcal U} \\mid N_{\\min} (F) [x]\\leq 1\\} \\subset {\\mathcal Z}\n\t\\]\n\tis compact.\n\n\\end{definition}\n\n\\begin{proposition}[Analog of {\\cite[Thm.~4.5]{HWZ3}}]\n\tLet $P:{\\mathcal W}\\to {\\mathcal Z}$ be a strong polyfold bundle, let $F$ be a $\\text{sc}$-smooth proper Fredholm multisection, and let $N :{\\mathcal W}[1]\\to [0,\\infty)$ be an auxiliary norm. 
Then there exists an open neighborhood ${\\mathcal U}$ of the unperturbed solution set $\\mathcal{S}(F)$ such that the pair $(N,{\\mathcal U})$ controls the compactness of $F$.\n\\end{proposition}\n\n\\begin{proposition}[Analog of {\\cite[Lem.~4.16]{HWZ3}}]\n\t\\label{prop:fredholm-multisection-compactness}\n\tLet $P:{\\mathcal W}\\to {\\mathcal Z}$ be a strong polyfold bundle, let $F$ be a $\\text{sc}$-smooth proper Fredholm multisection, and let $(N,{\\mathcal U})$ be a pair which controls compactness.\n\tIf a $\\text{sc}^+$-multisection $\\Gamma$ satisfies $N[\\Gamma] \\leq 1$ and $\\text{dom-supp} (\\Gamma) \\subset {\\mathcal U}$, then the perturbed solution set $\\mathcal{S}(F,\\Gamma)$ is compact.\n\\end{proposition}\n\n\\subsubsection{Regular perturbations and compact cobordism}\n\nLet $P:{\\mathcal W}\\to {\\mathcal Z}$ be a strong polyfold bundle, let $F$ be a $\\text{sc}$-smooth proper Fredholm multisection, and let $(N, {\\mathcal U})$ be a pair which controls compactness.\nWe say a $\\text{sc}^+$-multisection $\\Gamma$ is a \\textbf{regular perturbation} of $F$ with respect to the pair $(N,{\\mathcal U})$ if it satisfies the following:\n\t\\begin{itemize}\n\t\t\\item $(F, \\Gamma)$ is a transversal pair,\n\t\t\\item $N[\\Gamma] \\leq 1$ and $\\text{dom-supp} (\\Gamma) \\subset {\\mathcal U}$.\n\t\\end{itemize}\nAs in \\cite[Cor.~15.1]{HWZbook}, one can prove that there exist regular perturbations $\\Gamma$ of $F$ with respect to the pair $(N,{\\mathcal U})$.\nIt follows from Proposition~\\ref{prop:fredholm-multisection-transversal-pairs-weighted-branched-suborbifolds} and Proposition~\\ref{prop:fredholm-multisection-compactness} that the perturbed solution space ${\\mathcal S}(F,\\Gamma)$ has the structure of a compact weighted branched suborbifold, with weight function given by ${\\mathcal S}(F,\\Gamma) \\to \\mathbb{Q}^+,$ $[z]\\mapsto (F\\oplus \\Gamma)(0_{[z]})$.\n\nFurthermore, as in \\cite[Cor.~15.1]{HWZbook} one can prove the existence of a compact cobordism between perturbed solution sets of regular perturbations.\n\n\\subsubsection{Cobordism from a transversal Fredholm multisection to a regular perturbation}\n\t\\label{subsubsec:cobordism-multisection-regular}\n\nHaving developed the above generalization to Fredholm multisections, we are finally in a position to state the desired specialized result, Proposition~\\ref{prop:cobordism-multisection-regular}.\n\nConsider a strong polyfold bundle $P:{\\mathcal W} \\to {\\mathcal Z}$ and a $\\text{sc}$-smooth proper Fredholm section $\\overline{\\partial}$.\nSuppose that $(N_0,{\\mathcal U}_0)$ is a pair which controls the compactness of $\\overline{\\partial}$.\nConsider a $\\text{sc}^+$-multisection $\\Lambda$ and suppose that $(\\overline{\\partial},\\Lambda)$ is a transversal pair.\n(Note that we do not assume that $\\Lambda$ is admissible with respect to a pair which controls compactness.)\nNow, consider the strong polyfold bundle ${\\mathcal W}\\times[0,1] \\to {\\mathcal Z}\\times[0,1]$, and consider a $\\text{sc}$-smooth Fredholm multisection $(\\scalerel*{\\tilde{\\overline{\\partial}}}{\\hat{M}},\\tilde{\\Lambda})$ defined as follows:\n\t\\begin{itemize}\n\t\t\\item $\\scalerel*{\\tilde{\\overline{\\partial}}}{\\hat{M}}$ is the $\\text{sc}$-smooth Fredholm section defined by $([z],s)\\mapsto (\\overline{\\partial}([z]),s)$,\n\t\t\\item $\\tilde{\\Lambda}$ is the $\\text{sc}^+$-multisection defined for $s\\neq 0$ by\n\t\t$([w],s)\\mapsto \\Lambda(1\/s \\cdot [w])$ and for $s=0$ by\n\t\t\t\\[\n\t\t\t([w],0)\t\\mapsto 
\n\t\t\t\\begin{cases}\n\t\t\t\t1, &\\text{if } [w]=[0], \\\\\n\t\t\t\t0, &\\text{if } [w]\\neq [0],\n\t\t\t\\end{cases}\n\t\t\t\\]\n\t\tand\n\t\twhose local section structure at an object $(x,s)$ is defined by $O_x\\times [0,1] \\to W \\times [0,1]; (x,s) \\mapsto (s\\cdot s_i(x), s)$ (where $(s_i)$ is the original local section structure for $\\Lambda$ at the object $x\\in Z$).\n\t\\end{itemize}\nMoreover, let us assume that the Fredholm multisection $(\\scalerel*{\\tilde{\\overline{\\partial}}}{\\hat{M}},\\tilde{\\Lambda})$ is proper, i.e., the solution set ${\\mathcal S}(\\scalerel*{\\tilde{\\overline{\\partial}}}{\\hat{M}},\\tilde{\\Lambda})$ is compact.\n\nObserve that the topological boundary of ${\\mathcal S}(\\scalerel*{\\tilde{\\overline{\\partial}}}{\\hat{M}},\\tilde{\\Lambda})$ is given by the following set:\n\t\\[\n\t\\partial {\\mathcal S}(\\scalerel*{\\tilde{\\overline{\\partial}}}{\\hat{M}},\\tilde{\\Lambda}) = {\\mathcal S}(\\overline{\\partial}) \\sqcup {\\mathcal S}(\\overline{\\partial},\\Lambda).\n\t\\]\nBy the assumption that $(\\overline{\\partial}, \\Lambda)$ is a transversal pair, ${\\mathcal S}(\\overline{\\partial},\\Lambda)$ is a weighted branched orbifold; moreover it is a closed subset of ${\\mathcal S}(\\scalerel*{\\tilde{\\overline{\\partial}}}{\\hat{M}}, \\tilde{\\Lambda})$ and is therefore compact.\nWe emphasize that since $\\Lambda$ is not admissible with respect to a pair which controls compactness, it is not a regular perturbation (see Definition~\\ref{def:regular-perturbation}) and hence cannot be used to define polyfold invariants.\nWe can almost consider ${\\mathcal S}(\\scalerel*{\\tilde{\\overline{\\partial}}}{\\hat{M}},\\tilde{\\Lambda})$ as a compact cobordism, except $\\overline{\\partial}$ is not assumed to be transverse and hence ${\\mathcal S}(\\overline{\\partial})$ is not assumed to have the structure of a weighted branched suborbifold.\n\nThe following proposition demonstrates how to perturb the solution space ${\\mathcal S}(\\scalerel*{\\tilde{\\overline{\\partial}}}{\\hat{M}},\\tilde{\\Lambda})$ in order to obtain a compact cobordism between ${\\mathcal S}(\\overline{\\partial},\\Lambda)$ and a perturbed solution space ${\\mathcal S}(\\overline{\\partial},\\Gamma_0)$ where $\\Gamma_0$ is a \\emph{regular} perturbation.\n\n\\begin{proposition}\n\t\\label{prop:cobordism-multisection-regular}\n Suppose that $\\Gamma_0$ is a regular perturbation of $\\overline{\\partial}$ with respect to the pair $(N_0,{\\mathcal U}_0)$.\n\tThen there exists a pair $(N,{\\mathcal U})$ which controls the compactness of the Fredholm multisection $(\\scalerel*{\\tilde{\\overline{\\partial}}}{\\hat{M}},\\tilde{\\Lambda})$ and which satisfies the following:\n\t\\begin{itemize}\n\t\t\\item the auxiliary norm $N: {\\mathcal W}\\times [0,1] \\to [0,\\infty)$ restricts to $N_0$ on ${\\mathcal W}\\times \\{0\\}$,\n\t\t\\item the open neighborhood ${\\mathcal U}$ of ${\\mathcal S}(\\scalerel*{\\tilde{\\overline{\\partial}}}{\\hat{M}},\\tilde{\\Lambda})$ satisfies ${\\mathcal U} \\cap ({\\mathcal Z}\\times \\{0\\}) = {\\mathcal U}_0$.\n\t\\end{itemize}\n\tMoreover, there exists a regular perturbation $\\Gamma$ of $(\\scalerel*{\\tilde{\\overline{\\partial}}}{\\hat{M}},\\tilde{\\Lambda})$ with respect to the pair $(N,{\\mathcal U})$ such that $\\Gamma|_{{\\mathcal W}\\times \\{0\\}}$ can be identified with $\\Gamma_0$ and such that $\\Gamma|_{{\\mathcal W}\\times\\{1\\}} \\equiv 0$.\n\\end{proposition}\n\nThe proof of this proposition follows the same reasoning used to prove 
Theorem~\\ref{thm:cobordism-between-regular-perturbations}, noting in addition that we do not need to perturb in a neighborhood of ${\\mathcal Z}\\times\\{1\\}$, as by assumption $(\\overline{\\partial},\\Lambda)$ is a transversal pair.\n\n\\subsection{Intermediary subbundles and naturality of polyfold invariants}\n\t\\label{subsec:intermediary-subbundles-naturality}\n\nConsider a commutative diagram as follows,\n\\begin{equation}\\label{eq:commutative-diagram-naturality}\n\t\\begin{tikzcd}\n\t{\\mathcal W}_1 \\arrow[r, \"\\iota_{\\mathcal W}\"', hook] \\arrow[d, \"\\overline{\\partial}_1\\quad \"'] & {\\mathcal W}_2 \\arrow[d, \"\\quad \\overline{\\partial}_2\"] & \\\\\n\t{\\mathcal Z}_1 \\arrow[r, \"\\iota_{\\mathcal Z}\"', hook] \\arrow[u, bend left] & {\\mathcal Z}_2 \\arrow[u, bend right] & \n\t\\end{tikzcd}\n\\end{equation}\nwhere:\n\\begin{itemize}\n\t\\item ${\\mathcal W}_i \\to {\\mathcal Z}_i$ are strong polyfold bundles for $i=1,2$.\n\t\\item $\\overline{\\partial}_i$ are $\\text{sc}$-smooth proper oriented Fredholm sections of the same index for $i=1,2$.\n\t\\item $\\iota_{\\mathcal Z} :{\\mathcal Z}_1 \\hookrightarrow {\\mathcal Z}_2$ is a $\\text{sc}$-smooth injective map, and the associated functor between polyfold structures $\\hat{\\iota}_{\\mathcal Z} : (Z_1,{\\bar\\m}{Z}_1) \\hookrightarrow (Z_2,{\\bar\\m}{Z}_2)$ is fully faithful and is also an injection on both the object and the morphism sets.\n\t\\item $\\iota_{\\mathcal W}:{\\mathcal W}_1\\hookrightarrow{\\mathcal W}_2$ is a $\\text{sc}$-smooth injective map, and the associated functor between polyfold strong bundle structures $\\hat{\\iota}_{\\mathcal W} :(W_1,{\\bar\\m}{W}_1) \\hookrightarrow (W_2,{\\bar\\m}{W}_2)$ is fully faithful, and is also an injection on both the object and the morphism sets. 
Moreover, $\\hat{\\iota}_{\\mathcal W}$ is a bundle map (i.e., restricts to a linear map on the fibers).\n\t\\item ${\\mathcal S}(\\overline{\\partial}_2) \\subset \\text{Im} (\\iota_{\\mathcal Z})$.\n\\end{itemize}\n\nIn order to deal with orientations, consider the following.\nConsider a smooth object $x\\in (Z_1)_\\infty$ which maps to $y:= \\hat{\\iota}_{\\mathcal Z}(x) \\in (Z_2)_\\infty$.\nConsider a locally defined $\\text{sc}^+$-section $s': U \\to W_2$ defined on an open neighborhood $U\\subset Z_2$ of $y$, which satisfies $s'(y) = \\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$}_2 (y)$.\nAssume that this $\\text{sc}^+$-section has a well-defined restriction $s'|_{U \\cap \\hat{\\iota}_{\\mathcal Z} (Z_1)} : U \\cap \\hat{\\iota}_{\\mathcal Z} (Z_1) \\to \\hat{\\iota}_{\\mathcal W} (W_1)$, which induces a $\\text{sc}^+$-section $s: \\hat{\\iota}_{\\mathcal Z}^{-1}(U) \\to W_1$ which moreover satisfies $s(x) = \\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$}_1 (x)$.\nWe therefore have a commutative diagram.\n\t\\[\n\t\\begin{tikzcd}\n\tT_x W_1 \\arrow[r, \"D\\hat{\\iota}_{\\mathcal W}\"'] & T_y W_2 & \\\\\n\tT_x Z_1 \\arrow[r, \"D\\hat{\\iota}_{\\mathcal Z}\"'] \\arrow[u, \"D(\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$}_1-s)(x)\"] & T_y Z_2 \\arrow[u, \"D(\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$}_2-s')(y)\"'] & \n\t\\end{tikzcd}\n\t\\]\nConsider the following maps: $D\\hat{\\iota}_{\\mathcal Z}: \\ker (D(\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$}_1-s)(x)) \\to \\ker (D(\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$}_2-s')(y))$, and $D\\hat{\\iota}_{\\mathcal W}: \\text{Im} (D(\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$}_1-s)(x)) \\to \\text{Im} (D(\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$}_2-s')(y))$, which therefore induces a map $\\operatorname{coker} (D(\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$}_1-s)(x)) \\to \\operatorname{coker} (D(\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$}_2-s')(y))$.\nThese maps induce a map between the determinant real lines\n\t\\begin{gather*}\n\t\\det (D(\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$}_1-s)(x)) = \\Lambda^{\\max} (\\ker (D(\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$}_1-s)(x))) \\otimes (\\Lambda^{\\max} (\\operatorname{coker} (D(\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$}_1-s)(x))))^*,\t\\\\\n\t\\det (D(\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$}_2-s')(y)) = \\Lambda^{\\max} (\\ker (D(\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$}_2-s')(y))) \\otimes (\\Lambda^{\\max} (\\operatorname{coker} (D(\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$}_2-s')(y))))^*\t.\n\t\\end{gather*}\n\n\\begin{itemize}\n\t\\item\tAssume that the induced map between the determinants\n\t\t\t\t\\[\\hat{\\iota}_* : \\det (D(\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$}_1-s)(x)) \\to \\det (D(\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$}_2-s')(y))\\]\n\t\t\tis an isomorphism. 
Moreover, assume that this isomorphism is orientation preserving, with respect to the chosen orientations of $\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$}_1$ at the point $x$ and $\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$}_2$ at the point $y$ (see Definition~\\ref{def:oriented-Fredholm}).\n\\end{itemize}\n\nReturning to the main discussion, it follows from commutativity of \\eqref{eq:commutative-diagram-naturality} that $\\iota_{\\mathcal Z}$ restricts to a continuous bijection between the unperturbed solution sets,\n\t\\[\\iota_{\\mathcal Z} |_{{\\mathcal S}(\\overline{\\partial}_1)} : {\\mathcal S}(\\overline{\\partial}_1)\\to {\\mathcal S}(\\overline{\\partial}_2).\\]\nIn fact, this map is a homeomorphism as can be shown via point-set topology, noting that ${\\mathcal S}(\\overline{\\partial}_1)$ is compact and ${\\mathcal S}(\\overline{\\partial}_2)$ is Hausdorff (see \\cite[Rmk.~3.1.15]{MWtopology}).\n\nIn order to compare the polyfold invariants, suppose we also have a commutative diagram\n\t\\begin{equation}\\label{eq:gw-invariant-pair-of-polyfolds}\n\t\\begin{tikzcd}\n\t& & {\\mathcal O} \\\\\n\t{\\mathcal Z}_1 \\arrow[r, \"\\iota_{\\mathcal Z}\"', hook] \\arrow[rru, \"f_1\"] & {\\mathcal Z}_2 \\arrow[ru, \"f_2\"'] & \n\t\\end{tikzcd}\n\t\\end{equation}\nwhere:\n\\begin{itemize}\n\t\\item ${\\mathcal O}$ is a finite-dimensional orbifold.\n\t\\item $f_i$ are $\\text{sc}$-smooth maps for $i=1,2$.\n\\end{itemize}\n\n\\begin{definition}\\label{def:intermediate-subbundle}\n\tWe define an \\textbf{intermediary subbundle} as a subset ${\\mathcal R} \\subset {\\mathcal W}_2$ which satisfies the following properties.\n\t\\begin{enumerate}\n\t\t\\item Let $(R,{\\bar\\m}{R})$ be the associated subgroupoid of ${\\mathcal R}$. Then for every object $x\\in Z_2$ we require that the fiber $R_x : = R \\cap (W_2)_x$ is a vector subspace of $(W_2)_x$. (Note that we do not require that $R_x$ is complete.)\n\t\t\\item \\label{property-2-intermediary-subbundle} For any point $[x] \\in {\\mathcal Z}_2$, if $\\overline{\\partial}_2 ([x]) \\in {\\mathcal R}$ then $[x] \\in \\iota_{\\mathcal Z}({\\mathcal Z}_1)$. (Equivalently, for any object $x\\in Z_2$, if $\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$}_2 (x) \\in R$ then $x \\in \\hat{\\iota}_{\\mathcal Z} (Z_1)$.)\n\t\t\\item \\label{property-3-intermediary-subbundle}\n\t\tGiven $[x_0] \\in {\\mathcal S}(\\overline{\\partial}_1) \\simeq {\\mathcal S}(\\overline{\\partial}_2)$, let $V\\subset U\\subset Z_2$ be ${\\bar\\m}{G}(x_0)$-invariant open neighborhoods of a representative $x_0\\in Z_2$ such that $\\overline{V} \\subset U$.\n\t\tWe require that there exist $\\text{sc}^+$-sections\n\t\t\t\\[\n\t\t\ts'_i :U \\to W_2, \\qquad 1\\leq i \\leq k\n\t\t\t\\]\n\t\twhich have well-defined restrictions $s'_i|_{U\\cap \\hat{\\iota}_{\\mathcal Z} (Z_1)} : U\\cap \\hat{\\iota}_{\\mathcal Z} (Z_1) \\to \\hat{\\iota}_{\\mathcal W} (W_1)$. 
These restrictions induce sections $s_i : \\hat{\\iota}_{\\mathcal Z}^{-1} (U) \\to W_1$ which we require to be $\\text{sc}^+$ with respect to the M-polyfold structures on $Z_1$ and $W_1$.\n\t\tWe require that:\n\t\t\t\\begin{itemize}\n\t\t\t\\item $s'_i (U) \\subset R$,\n\t\t\t\\item $s'_i= 0$ on $U\\setminus V$,\n\t\t\t\\item $\\text{span}\\{s'_1(x_0),\\ldots , s'_k(x_0)\\} \\oplus \\text{Im}(D\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$}_2(x_0)) = (W_2)_{x_0},$\n\t\t\t\\item $\\text{span}\\{s_1(x_0),\\ldots , s_k(x_0)\\} \\oplus \\text{Im}(D\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$}_1(x_0)) = (W_1)_{x_0}.$\n\t\t\t\\end{itemize}\n\t\t\\item In addition, given a pair $(N_2,{\\mathcal U}_2)$ which controls the compactness of $\\overline{\\partial}_2$, we require that these $\\text{sc}^+$-sections satisfy the following:\n\t\t\t\\begin{itemize}\n\t\t\t\\item $\\hat{N}_2[s'_i] \\leq 1,$\n\t\t\t\\item $\\abs {\\operatorname{supp} (s'_i)}\\subset {\\mathcal U}_2$.\n\t\t\t\\end{itemize}\n\t\t\\begin{comment}\n\t\t\\item \\red{Alternative.}\n\t\tGiven $[x] \\in {\\mathcal S}(\\overline{\\partial}_2)$, let $U\\subset Z_2$ be a ${\\bar\\m}{G}(x)$-invariant open neighborhood of a representative $x$.\n\t\tWe require that there exists a parametrized $\\text{sc}^+$-multisection\n\t\t\t\\[\n\t\t\t\\Lambda : {\\mathcal W}_2\\times B^k_\\varepsilon \\to \\mathbb{Q}^+\n\t\t\t\\]\n\t\tsuch that the composition $\\Lambda \\circ \\iota_{\\mathcal W} : {\\mathcal W}_1 \\times B^k_\\varepsilon \\to \\mathbb{Q}^+$ is also a well-defined $\\text{sc}^+$-multisection.\n\t\t\\end{comment}\n\t\\end{enumerate}\n\\end{definition}\n\nDespite the lengthy properties that an intermediary subbundle must satisfy, in practice such subbundles are easy to construct, as we demonstrate in \\S~\\ref{subsec:independence-sequence} and \\S~\\ref{subsec:independence-punctures}.\n\nWe may now prove Theorem~\\ref{thm:naturality-polyfold-invariants}, which we restate in order to be consistent with our current notation.\n\n\\begin{theorem}\n\t\\label{thm:naturality}\n\tSuppose there exists an intermediary subbundle ${\\mathcal R} \\subset {\\mathcal W}_2$. Then the polyfold invariants for ${\\mathcal Z}_1$ and ${\\mathcal Z}_2$ defined via the branched integral are equal.\n\tThis means that, given a de Rham cohomology class $\\omega\\in H^*_{\\dR} ({\\mathcal O})$, the branched integrals over the perturbed solution spaces are equal,\n\t\t\\[\n\t\t\\int_{{\\mathcal S} (\\overline{\\partial}_1, p_1)} f_1^* \\omega = \\int_{{\\mathcal S}(\\overline{\\partial}_2,p_2)} f_2^* \\omega,\n\t\t\\]\n\tfor any choices of regular perturbations.\n\\end{theorem}\n\\begin{proof}\nWe prove the theorem in six steps.\n\n\\begin{itemize}[leftmargin=0em]\n\t\\item[]\\textbf{Step 1:} \\emph{We use property \\ref{property-3-intermediary-subbundle} of the intermediary subbundle to construct a transversal $\\text{sc}^+$-multisection with a well-defined transversal restriction.}\n\\end{itemize}\n\nAt the outset, fix pairs $(N_i,{\\mathcal U}_i)$ which control the compactness of $\\overline{\\partial}_i$ for $i=1,2$. Consider a point $[x_0]\\in {\\mathcal S}(\\overline{\\partial}_1)\\simeq {\\mathcal S}(\\overline{\\partial}_2)$ and let $x_0\\in Z_1$ be a representative with isotropy group ${\\bar\\m}{G}(x_0)$. 
Via the inclusion map $\\hat{\\iota}_{\\mathcal Z}$, we may identify $x_0$ with its image in $Z_2$ and note that we may also identify the isotropy groups.\n\nWe may use property \\ref{property-3-intermediary-subbundle} of the intermediary subbundle to construct an $\\text{sc}^+$-mul\\-ti\\-sec\\-tion functor $\\hat{\\Lambda}'_0:W_2 \\times B_\\varepsilon^k \\to \\mathbb{Q}^+$ \nwith local section structure given by $\\left\\lbrace g * \\left(\\sum_{i=1}^k t_i \\cdot s'_i\\right)\\right\\rbrace_{g\\in {\\bar\\m}{G}(x_0)}$ which satisfies the following.\nThere exists a ${\\bar\\m}{G}(x_0)$-invariant open neighborhood $x_0 \\subset U'_0 \\subset Z_2$ such that at any object $x\\in U'_0$ and for any $g\\in {\\bar\\m}{G}(x_0)$ the linearization of the function \n\t\\begin{align*}\n\tU'_0 \\times B_\\varepsilon^k \t&\\to W_2\\\\\n\t(x, t_1,\\ldots,t_k)\t\t&\\mapsto \\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$}_2 (x) - g * \\left(\\sum_{i=1}^k t_i \\cdot s'_i(x)\\right)\n\t\\end{align*}\nprojected to the fiber $(W_2)_x$ is surjective.\n\nFurthermore, property \\ref{property-3-intermediary-subbundle} ensures that the functor\n$\\hat{\\Lambda}_0 :=\\hat{\\Lambda}_0' ( \\hat{\\iota}_{\\mathcal W} (\\cdot), \\cdot ) :W_1 \\times B_\\varepsilon^k \\to \\mathbb{Q}^+$ is also a $\\text{sc}^+$-multisection functor, with local section structure $\\left\\lbrace g * \\left(\\sum_{i=1}^k t_i \\cdot s_i\\right)\\right\\rbrace_{g\\in {\\bar\\m}{G}(x_0)}$ where the $\\text{sc}^+$-sections $s_i$ are induced by the well-defined restrictions of the sections $s'_i$. Likewise, there exists a ${\\bar\\m}{G}(x_0)$-invariant open neighborhood $x_0 \\subset U_0 \\subset Z_1$ such that at any object $x\\in U_0$ and for any $g\\in {\\bar\\m}{G}(x_0)$ the linearization of the function $\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$}_1(x) - g * \\left(\\sum_{i=1}^k t_i \\cdot s_i(x)\\right)$ projected to the fiber $(W_1)_x$ is surjective.\n\nWe may cover the compact topological space ${\\mathcal S}(\\overline{\\partial}_2)$ by a finite collection of such neighborhoods $\\abs{U'_i}$ of points $[x_i]\\in {\\mathcal S}(\\overline{\\partial}_2)$; we may also cover ${\\mathcal S}(\\overline{\\partial}_1)$ by a finite collection of such neighborhoods $\\abs{U_i}$ of points $[x_i]\\in {\\mathcal S}(\\overline{\\partial}_1)$.\nIt follows that the finite sum of $\\text{sc}^+$-multisections\n\t\\[\n\t\\Lambda_2:= \\bigoplus_i \\Lambda'_i : {\\mathcal W}_2 \\times B_\\varepsilon^N \\to \\mathbb{Q}^+\n\t\\]\nhas the property that: for any point $[x] \\in {\\mathcal Z}_2$ with $\\Lambda_2 \\circ \\overline{\\partial}_2 ([x])>0$, and for any parametrized local section structure $\\{s'_i\\}_{i\\in I}$ at a representative $x$, the linearization of the function $\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$}_2 (x) - s'_i(x,t)$ projected to the fiber $(W_2)_x$ is surjective.\nLikewise, the finite sum of $\\text{sc}^+$-multisections\n\t\\[\n\t\\Lambda_1 := \\bigoplus_i \\Lambda_i = \\bigoplus_i \\Lambda'_i (\\iota_{\\mathcal W}(\\cdot),\\cdot) : {\\mathcal W}_1 \\times B_\\varepsilon^N \\to \\mathbb{Q}^+\n\t\\]\nhas the property that for any point $[x] \\in {\\mathcal Z}_1$ which satisfies $\\Lambda_1 \\circ \\overline{\\partial}_1 ([x])>0$ and for any parametrized local section structure $\\{s_i\\}_{i\\in I}$ at a representative $x$, the linearization of the function $\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$}_1 (x) - s_i(x,t)$ projected to the fiber $(W_1)_x$ is surjective. 
Observe moreover that the multisection sum commutes with composition and thus $\\Lambda_1(\\cdot,\\cdot) = \\Lambda_2 (\\iota_{\\mathcal W}(\\cdot),\\cdot)$.\n\nFurthermore for $\\varepsilon$ sufficiently small, for any fixed $t_0 \\in B_\\varepsilon^N$ the $\\text{sc}^+$-multisection $\\Lambda_2 (\\cdot, t_0)$ is controlled by the pair $(N_2,{\\mathcal U}_2)$, i.e.,\n\\begin{itemize}\n\t\\item $N_2[\\Lambda_2(\\cdot,t_0)] \\leq 1$,\n\t\\item $\\text{dom-supp}(\\Lambda_2(\\cdot,t_0)) \\subset {\\mathcal U}_2$.\n\\end{itemize}\n\nIn contrast, $\\Lambda_1(\\cdot,t_0)$ will generally not be controlled by the pair $(N_1,{\\mathcal U}_1)$, as in general, \t\n\t\\[\n\t\\text{dom-supp}(\\Lambda_1(\\cdot,t_0)) = \\iota_{\\mathcal Z}^{-1} (\\text{dom-supp}(\\Lambda_2(\\cdot,t_0))) \\nsubseteq {\\mathcal U}_1.\n\t\\]\n\n\\begin{itemize}[leftmargin=0em]\n\t\\item[]\\textbf{Step 2:} \\emph{We show the thickened solution sets satisfy the hypotheses of Lemma~\\ref{lem:invariance-of-domain-branched-orbifolds}, and are therefore homeomorphic.}\n\\end{itemize}\n\nConsider the strong polyfold bundle ${\\mathcal W}_i \\times B_\\varepsilon^N \\to {\\mathcal Z}_i \\times B_\\varepsilon^N$ for $i=1,2$, and let $\\scalerel*{\\tilde{\\overline{\\partial}}}{\\hat{M}}_i : {\\mathcal Z}_i \\times B_\\varepsilon^N \\to {\\mathcal W}_i \\times B_\\varepsilon^N$ denote the $\\text{sc}$-smooth proper Fredholm section defined by $([z],t)\\mapsto (\\overline{\\partial}_i([z]),t)$.\nBy construction, $(\\scalerel*{\\tilde{\\overline{\\partial}}}{\\hat{M}}_i, \\Lambda_i)$ are transversal pairs; hence by Theorem~\\ref{thm:transversal-pairs-weighted-branched-suborbifolds} the thickened solution sets\n\t\\[\n\t{\\mathcal S}(\\scalerel*{\\tilde{\\overline{\\partial}}}{\\hat{M}}_i, \\Lambda_i) = \\{ ([z],t) \\in {\\mathcal Z}_i \\times B_\\varepsilon^N \\mid \\Lambda_i (\\scalerel*{\\tilde{\\overline{\\partial}}}{\\hat{M}}_i ([z],t)) >0 \\} \\subset {\\mathcal Z}_i\\times B_\\varepsilon^N\n\t\\]\nhave the structure of weighted branched orbifolds.\n\nWe now claim that these thickened solution sets satisfy the hypotheses of Lemma~\\ref{lem:invariance-of-domain-branched-orbifolds}.\nIndeed, commutativity of the diagram \\eqref{eq:commutative-diagram-naturality} together with the equation $\\Lambda_1(\\cdot,\\cdot) = \\Lambda_2 (\\iota_{\\mathcal W}(\\cdot),\\cdot)$ imply that the injective continuous map $\\tilde{\\iota}_{\\mathcal Z}: {\\mathcal Z}_1 \\times B_\\varepsilon^N \\to {\\mathcal Z}_2 \\times B_\\varepsilon^N; ([z],t)\\mapsto (\\iota_{\\mathcal Z}([z]),t)$ has a well-defined restriction to the thickened solution sets,\n\t\\begin{equation}\\label{eq:restriction-to-thickening}\n\t\\tilde{\\iota}_{\\mathcal Z} |_{{\\mathcal S}(\\scalerel*{\\tilde{\\overline{\\partial}}}{\\hat{M}}_1, \\Lambda_1)} : {\\mathcal S}(\\scalerel*{\\tilde{\\overline{\\partial}}}{\\hat{M}}_1, \\Lambda_1) \\hookrightarrow {\\mathcal S}(\\scalerel*{\\tilde{\\overline{\\partial}}}{\\hat{M}}_2, \\Lambda_2).\n\t\\end{equation}\nMoreover, at any $(x,t)\\in S_1(\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$}_1,\\hat{\\Lambda}_1)$ which maps to $(y,t)\\in S_2(\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$}_2,\\hat{\\Lambda}_2)$, the local section structure $(s_i)$ for $\\hat{\\Lambda}_1$ at $(x,t)$ is induced by the restrictions of the local section structure $(s'_i)$ for $\\hat{\\Lambda}_2$ at $(y,t)$. 
In particular, we have the following commutative diagram.\n\t\\[\\begin{tikzcd}[column sep = large]\n\tW_1\\times B_\\varepsilon^N \\arrow[r, \"{(\\hat{\\iota}_{\\mathcal W}(\\cdot),\\cdot)}\"', hook] \\arrow[d, \"\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$}_1 - s_i \\quad \"'] & W_2\\times B_\\varepsilon^N \\arrow[d, \"\\quad \\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$}_2 - s'_i\"] & \\\\\n\tO_x\\times B_\\varepsilon^N \\arrow[r, \"{(\\hat{\\iota}_{\\mathcal Z}(\\cdot),\\cdot)}\"', hook] \\arrow[u, bend left] & O_y\\times B_\\varepsilon^N \\arrow[u, bend right] & \n\t\\end{tikzcd}\\]\nAs noted in Remark~\\ref{rmk:relationship-local-section-structures-local-branching-structures}, the local section structures and the local branching structures are related via the equations $M_i= (\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$}_1 - s_i)^{-1}(0)$, $M'_i = (\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$}_2 - s'_i)^{-1}(0)$. Thus it follows from commutativity that we have the required well-defined restriction to the individual local branches.\nNow Lemma~\\ref{lem:invariance-of-domain-branched-orbifolds} implies that the map \\eqref{eq:restriction-to-thickening} is a local homeomorphism.\n\nFurthermore, we may observe that by our orientation assumptions the natural induced map $\\tilde{\\iota}_* : \\det (D(\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$}_1-s_i)(x)) \\to \\det (D(\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$}_2-s'_i)(y))$ is an orientation preserving isomorphism; hence the restriction $\\tilde{\\iota}_{\\mathcal Z} |_{M_i} : M_i \\to M'_i$ is orientation preserving.\n\nWe now show that \\eqref{eq:restriction-to-thickening} is a bijection.\nLet $([y],t)\\in {\\mathcal S}(\\scalerel*{\\tilde{\\overline{\\partial}}}{\\hat{M}}_2,\\Lambda_2)$, let $(y,t)$ be a representative of $([y],t)$, and consider a local section structure $(s'_i)$ for $\\Lambda_2$ at $(y,t)$.\nIt follows that $\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$}_2(y) - s'_i (y,t) = 0$ for some index $i$.\nObserve that, by construction, $s'_i$ is a finite sum of $\\text{sc}^+$-sections with image contained in the intermediary subbundle $R$, and hence $s'_i(y,t)\\in R$.\nIt follows that $\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$}_2 (y)\\in R$, hence property \\ref{property-2-intermediary-subbundle} of the intermediary subbundle implies that $y\\in \\hat{\\iota}_{\\mathcal Z} (Z_1)$. Therefore, there exists a point $[x]\\in {\\mathcal Z}_1$ such that $\\tilde{\\iota}_{\\mathcal Z} ([x],t) = ([y],t)$. 
Commutativity of \\eqref{eq:commutative-diagram-naturality} implies that $\\Lambda_1(\\scalerel*{\\tilde{\\overline{\\partial}}}{\\hat{M}}_1([x],t)) = \\Lambda_2(\\scalerel*{\\tilde{\\overline{\\partial}}}{\\hat{M}}_2([y],t)) >0$, and therefore $([x],t) \\in {\\mathcal S}(\\scalerel*{\\tilde{\\overline{\\partial}}}{\\hat{M}}_1, \\Lambda_1)$.\nThus, \\eqref{eq:restriction-to-thickening} is a homeomorphism.\n\n\\begin{itemize}[leftmargin=0em]\n\t\\item[]\\textbf{Step 3:} \\emph{For a common regular value $t_0$, the branched integrals of the perturbed solution spaces of $\\overline{\\partial}_1$ and $\\overline{\\partial}_2$ are equal.}\n\\end{itemize}\n\nBy Sard's theorem, we can find a common regular value $t_0 \\in B_\\varepsilon^N$ of the projections ${\\mathcal S} (\\scalerel*{\\tilde{\\overline{\\partial}}}{\\hat{M}}_1, \\Lambda_1) \\to B_\\varepsilon^N$ and ${\\mathcal S} (\\scalerel*{\\tilde{\\overline{\\partial}}}{\\hat{M}}_2,\\Lambda_2) \\to B_\\varepsilon^N$.\nFor this common regular value, the perturbed solution sets\n\t\\[\n\t{\\mathcal S}(\\overline{\\partial}_i, \\Lambda_i (\\cdot,t_0)) := \\{\t[z]\\in {\\mathcal Z}_i \\mid \\Lambda_i(\\overline{\\partial}_i([z]),t_0)>0\t\\} \\subset {\\mathcal Z}_i\n\t\\]\nhave the structure of weighted branched suborbifolds.\n\nAs we have already noted, $\\Lambda_2(\\cdot,t_0)$ is controlled by the pair $(N_2,{\\mathcal U}_2)$ and hence ${\\mathcal S}(\\overline{\\partial}_2, \\Lambda_2 (\\cdot,t_0))$ is a compact topological space.\nFor such a common regular value, the homeomorphism \\eqref{eq:restriction-to-thickening} has a well-defined restriction to these perturbed solution sets. This restriction is a homeomorphism, and hence ${\\mathcal S}(\\overline{\\partial}_1, \\Lambda_1 (\\cdot,t_0))$ is also a compact topological space (even though in general $\\Lambda_1(\\cdot,t_0)$ will not be controlled by the pair $(N_1,{\\mathcal U}_1)$).\n\nThe restriction $\\tilde{\\iota}_{\\mathcal Z} |_{{\\mathcal S}(\\overline{\\partial}_1, \\Lambda_1 (\\cdot,t_0))}$ satisfies the necessary hypotheses for the change of variables theorem~\\ref{thm:change-of-variables}. 
\nTherefore for a given $\\text{sc}$-smooth differential form $\\omega\\in \\Omega_\\infty^* ({\\mathcal Z}_2)$ we have\n\t\\begin{equation}\n\t\\label{eq:change-variables}\n\t\\int_{{\\mathcal S}(\\overline{\\partial}_2, \\Lambda_2 (\\cdot,t_0))} \\omega\n\t= \\int_{{\\mathcal S}(\\overline{\\partial}_1, \\Lambda_1 (\\cdot,t_0))} \\tilde{\\iota}_{\\mathcal Z}^* \\omega.\n\t\\end{equation}\nHowever, since in general $\\Lambda_1(\\cdot,t_0)$ is not controlled by a pair, we cannot assume that it is a regular perturbation in the sense of Definition~\\ref{def:regular-perturbation}.\nThis is problematic since Theorem~\\ref{thm:cobordism-between-regular-perturbations} only implies the existence of a compact cobordism between the perturbed solution spaces of two perturbations which are both assumed to be regular perturbations (see Figure~\\ref{fig:cobordism}).\n\n\\begin{figure}[ht]\n\t\\centering\n\t\\includegraphics{cobordism.eps}\n\t\\caption{Compact cobordism between regular perturbations}\\label{fig:cobordism}\n\\end{figure}\n\n\\begin{itemize}[leftmargin=0em]\n\t\\item[]\\textbf{Step 4:} \\emph{We show that the set \n\t\t\\[\n\t\t{\\mathcal S}(\\scalerel*{\\tilde{\\overline{\\partial}}}{\\hat{M}}_1, \\Lambda_1 (\\cdot,st_0)) = \\{\t([z],s) \\in {\\mathcal Z}_1\\times [0,1]\t\\mid\t\\Lambda_1(\\overline{\\partial}_1(z),s t_0)>0\t\\}\n\t\t\\]\t\t\n\t\tis compact.}\n\\end{itemize}\n\nLet $\\delta = \\abs{t_0}$.\nThe auxiliary norm $\\tilde{N}_2 : {\\mathcal W}_2[1] \\times \\overline{B_{\\delta}^N} \\to [0,\\infty)$ defined by $\\tilde{N}_2 ([w],t) := N_2([w])$ together with the open set $\\tilde{{\\mathcal U}}_2 := {\\mathcal U}_2 \\times \\overline{B_{\\delta}^N}$ together control the compactness of the extended $\\text{sc}$-smooth Fredholm section $\\scalerel*{\\tilde{\\overline{\\partial}}}{\\hat{M}}_2$.\nBy construction, $\\Lambda_2$ is controlled by this pair and hence by Theorem~\\ref{thm:compactness} the thickened solution set ${\\mathcal S}(\\scalerel*{\\tilde{\\overline{\\partial}}}{\\hat{M}}_2, \\Lambda_2(\\cdot, t); t\\in \\overline{B_\\delta ^N})$ is a compact topological space.\nTherefore, the closed subset ${\\mathcal S}(\\scalerel*{\\tilde{\\overline{\\partial}}}{\\hat{M}}_2, \\Lambda_2(\\cdot, t) ; t = s \\cdot t_0, s\\in[0,1])$ is also compact.\n\nThe restriction of \\eqref{eq:restriction-to-thickening} yields a homeomorphism \n\t\\[\n\t{\\mathcal S}(\\scalerel*{\\tilde{\\overline{\\partial}}}{\\hat{M}}_1, \\Lambda_1(\\cdot, t) ; t = s \\cdot t_0, s\\in[0,1]) \\to {\\mathcal S}(\\scalerel*{\\tilde{\\overline{\\partial}}}{\\hat{M}}_2, \\Lambda_2(\\cdot, t) ; t = s \\cdot t_0, s\\in[0,1]).\n\t\\]\nFrom this it is now clear that \n\t\\[\n\t{\\mathcal S}(\\scalerel*{\\tilde{\\overline{\\partial}}}{\\hat{M}}_1, \\Lambda_1 (\\cdot,st_0)) \\simeq {\\mathcal S}(\\scalerel*{\\tilde{\\overline{\\partial}}}{\\hat{M}}_1, \\Lambda_1(\\cdot, t) ; t = s \\cdot t_0, s\\in[0,1])\n\t\\]\nis a compact topological space.\n\n\\begin{itemize}[leftmargin=0em]\n\t\\item[]\\textbf{Step 5:} \\emph{We interpret the pair $(\\overline{\\partial}_1,\\Lambda_1(\\cdot,t_0))$ as a transversal Fredholm multisection, and use Proposition~\\ref{prop:cobordism-multisection-regular} to obtain a compact cobordism to a regular perturbation.}\n\\end{itemize}\n\nWe claim that the hypotheses described in \\S~\\ref{subsubsec:cobordism-multisection-regular} are satisfied.\nIn particular, we must show that the extended Fredholm multisection $(\\scalerel*{\\tilde{\\overline{\\partial}}}{\\hat{M}}_1, \\tilde{\\Lambda}_1 
(\\cdot,t_0))$ is proper.\nThis can be seen using Step 4; indeed, the solution set ${\\mathcal S}(\\scalerel*{\\tilde{\\overline{\\partial}}}{\\hat{M}}_1, \\tilde{\\Lambda}_1 (\\cdot,t_0))$ described in \\S~\\ref{subsubsec:cobordism-multisection-regular} can be identified with the compact set ${\\mathcal S}(\\scalerel*{\\tilde{\\overline{\\partial}}}{\\hat{M}}_1, \\Lambda_1 (\\cdot,st_0))$.\n\nWe may therefore use Proposition~\\ref{prop:cobordism-multisection-regular} to obtain a cobordism from $(\\overline{\\partial}_1,\\Lambda_1(\\cdot,t_0))$ to a regular perturbation $\\Gamma_0: {\\mathcal W}_1 \\to \\mathbb{Q}^+$ of $\\overline{\\partial}_1$.\nGiven a closed $\\text{sc}$-smooth differential form $\\omega\\in \\Omega_\\infty^* ({\\mathcal Z}_1)$, Stokes' theorem~\\ref{thm:stokes} then implies\n\t\\begin{equation}\n\t\\label{eq:step-5-cobordism-stokes}\n\t\\int_{{\\mathcal S}(\\overline{\\partial}_1,\\Gamma_0)}\t\\omega =\\int_{{\\mathcal S}(\\overline{\\partial}_1,\\Lambda_1(\\cdot,t_0))} \\omega.\n\t\\end{equation}\n\t\n\\begin{itemize}[leftmargin=0em]\n\t\\item[]\\textbf{Step 6:} \\emph{We show that the polyfold invariants are equal.}\n\\end{itemize}\n\nLet $\\omega\\in H^*_{\\dR} ({\\mathcal O})$ be the de Rham cohomology class fixed in the statement of the theorem, and used to define the polyfold invariants.\nWe can now relate the branched integrals as follows:\n\\begin{align*}\n\t \\int_{{\\mathcal S}(\\overline{\\partial}_2,\\Lambda_2(\\cdot,t_0))} f_2^* \\omega\n\t \t\t& =\t\\int_{{\\mathcal S}(\\overline{\\partial}_1, \\Lambda_1 (\\cdot,t_0))} \\tilde{\\iota}_{\\mathcal Z}^* f_2^*\\omega\t\\\\\n\t \t\t& =\t\\int_{{\\mathcal S}(\\overline{\\partial}_1, \\Lambda_1 (\\cdot,t_0))} f_1^*\\omega\t\\\\\n\t \t\t& =\t\\int_{{\\mathcal S}(\\overline{\\partial}_1,\\Gamma_0)}\tf_1^*\\omega,\n\\end{align*}\nwhere the first equality follows from equation \\eqref{eq:change-variables}, the second equality follows from the commutativity of \\eqref{eq:gw-invariant-pair-of-polyfolds}, and the third equality follows from equation \\eqref{eq:step-5-cobordism-stokes}.\nBy construction, $\\Lambda_2(\\cdot,t_0)$ is a regular perturbation of $\\overline{\\partial}_2$, while $\\Gamma_0$ is a regular perturbation of $\\overline{\\partial}_1$.\nThis proves the theorem.\n\\end{proof}\n\n\\subsection{Gromov--Witten invariants are independent of choice of sequence \\texorpdfstring{$\\delta_i$}{\u03b4i}}\n\t\\label{subsec:independence-sequence}\n\nWe now use Theorem~\\ref{thm:naturality} to show that the Gromov--Witten polyfold invariants are independent of the choice of increasing sequence $(\\delta_i)_{i\\geq 0}\\subset (0,2\\pi)$.\nGiven two sequences $(\\delta_i)\\subset (0,2\\pi)$ and $(\\delta_i')\\subset (0,2\\pi)$, we can always find a third sequence $(\\delta_i'')\\subset (0,2\\pi)$ which satisfies\n\t\\[\n\t\\delta_i \\leq \\delta_i'', \\qquad \\delta_i' \\leq \\delta_i''\n\t\\]\nfor all $i$. 
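(For instance, one may simply take $\\delta_i'' := \\max\\{\\delta_i,\\delta_i'\\}$, which again defines an increasing sequence contained in $(0,2\\pi)$ and satisfies both inequalities.) 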
The GW-polyfold associated to the sequence $(\\delta_i'')$ gives a refinement of the GW-polyfolds associated to $(\\delta_i)$ and $(\\delta_i')$, in the sense that there are inclusion maps\n\t\\[\n\t{\\mathcal Z}_{A,g,k}^{3,\\delta'_0} \\hookleftarrow {\\mathcal Z}_{A,g,k}^{3,\\delta''_0} \\hookrightarrow {\\mathcal Z}_{A,g,k}^{3,\\delta_0}.\n\t\\]\nIt is therefore sufficient to consider inclusion maps of the form\n${\\mathcal Z}^{3,\\delta_0}_{A,g,k} \\hookrightarrow {\\mathcal Z}^{3,\\delta_0'}_{A,g,k}$\nwith $\\delta_i' \\leq \\delta_i$ for all $i$ and demonstrate that the associated GW-invariants are equal.\n\nTo this end, consider the commutative diagram:\n\\[\\begin{tikzcd}\n{\\mathcal W}^{2,\\delta_0}_{A,g,k} \\arrow[r, \"\\iota_{\\mathcal W}\"', hook] \\arrow[d, \"\\overline{\\partial}_J\\quad \"'] & {\\mathcal W}^{2,\\delta_0'}_{A,g,k} \\arrow[d, \"\\quad \\overline{\\partial}_J'\"] & \\\\\n{\\mathcal Z}^{3,\\delta_0}_{A,g,k} \\arrow[r, \"\\iota_{\\mathcal Z}\"', hook] \\arrow[u, bend left] & {\\mathcal Z}^{3,\\delta_0'}_{A,g,k} \\arrow[u, bend right] & \n\\end{tikzcd}\\]\nand observe that it satisfies the same properties as \\eqref{eq:commutative-diagram-naturality}.\nIn addition, consider the commutative diagram:\n\\[\\begin{tikzcd}\n& & Q^k\\times \\smash{\\overline{\\mathcal{M}}}\\vphantom{\\mathcal{M}}^{\\text{log}}_{g,k} \\\\\n{\\mathcal Z}^{3,\\delta_0}_{A,g,k} \\arrow[r, \"\\iota_{\\mathcal Z}\"', hook] \\arrow[rru, \"ev_i \\times \\pi\"] & {\\mathcal Z}^{3,\\delta_0'}_{A,g,k} \\arrow[ru, \"ev_i\\times \\pi\"'] & \n\\end{tikzcd}\\]\nwhich satisfies the same properties as \\eqref{eq:gw-invariant-pair-of-polyfolds}.\n\nNote that if $\\delta_0' < \\delta_0$, the inclusion map $\\iota_{\\mathcal Z}$ is not proper.\nTo see this, exploit the difference in exponential weights to produce a sequence which converges in a local M-polyfold model for ${\\mathcal Z}^{3,\\delta_0}_{A,g,k}$ but diverges in a local M-polyfold model for ${\\mathcal Z}^{3,\\delta_0'}_{A,g,k}$.\nNote also that the pullback strong polyfold bundle is not the same as the standard strong polyfold bundle on ${\\mathcal Z}^{3,\\delta_0}_{A,g,k}$.\n\n\\begin{proposition}\n\t\\label{prop:existence-subbundle-naturality}\n\tThe set\t\n\t\\[\n\t{\\mathcal R} := \\{\t[\\Sigma,j,M,D,u,\\xi] \\in {\\mathcal W}^{2,\\delta_0'}_{A,g,k} \\mid \\operatorname{supp} \\xi \\subset K \\subset \\Sigma\\setminus\\abs{D} \\text{ for some compact } K\t\\}\n\t\\]\n\tis an intermediary subbundle of the strong polyfold bundle ${\\mathcal W}^{2,\\delta_0'}_{A,g,k}$.\n\\end{proposition}\n\\begin{proof}\n\tWe must show that the set ${\\mathcal R}$ satisfies the properties of Definition~\\ref{def:intermediate-subbundle}.\n\tThe first two properties can be easily checked.\n\t\n\tWe show how to construct the $\\text{sc}^+$-sections required by property \\ref{property-3-intermediary-subbundle}.\n\tConsider a stable curve $[\\alpha]=[\\Sigma,j,M,D,u] \\in {\\mathcal S}(\\overline{\\partial}_J') \\subset {\\mathcal Z}^{3,\\delta_0'}_{A,g,k}$ and let $\\alpha = (\\Sigma,j,M,D,u)$ be a stable map representative.\n\tLet $V_\\alpha \\subset U_\\alpha$ be ${\\bar\\m}{G}(\\alpha)$-invariant M-polyfold charts centered at $\\alpha$ such that $\\overline{V_\\alpha}\\subset U_\\alpha$.\n\tThis means we have a good uniformizing family \n\t\\[\n\t(a,v,\\eta ) \\mapsto (\\Sigma_{a},j(a,v),M_{a},D_{a}, \\oplus_{a} \\exp_u (\\eta)),\\qquad (a,v,\\eta) \\in O_\\alpha.\n\t\\]\n\tLet $K\\to O_\\alpha$ be a local strong bundle model, with 
$\\text{sc}$-coordinates given by $(a,v,\\eta,\\xi)$ where $\\xi \\in H^{2,\\delta_0'}(\\Sigma,\\Lambda^{0,1} \\otimes_J u^* TQ)$.\n\t\n\tUse Corollary~\\ref{cor:vectors-which-span-cokernel} to choose vectors $v_1,\\ldots, v_k$ which vanish on disk-like regions of the nodal points and such that\n\t\\begin{itemize}\n\t\t\\item $\\text{span}\\{v_1,\\ldots , v_k\\} \\oplus \\text{Im}(D\\scalerel*{\\hat{\\overline{\\partial}}_J}{\\hat{M}_J}'(\\alpha)) = H^{2,\\delta_0'}(\\Sigma,\\Lambda^{0,1} \\otimes_J u^* TQ)$,\n\t\t\\item $\\text{span}\\{v_1,\\ldots , v_k\\} \\oplus \\text{Im}(D\\scalerel*{\\hat{\\overline{\\partial}}_J}{\\hat{M}_J}(\\alpha)) = H^{2,\\delta_0}(\\Sigma,\\Lambda^{0,1} \\otimes_J u^* TQ)$.\n\t\\end{itemize}\n\t\n\tLet $\\beta: U_\\alpha \\to [0,1]$ be an $\\text{sc}$-smooth cutoff function which satisfies $\\beta\\equiv 1$ near $(0,0,0)$, and $\\beta \\equiv 0$ on $U_\\alpha\\setminus V_\\alpha$.\n\tIn these local $\\text{sc}$-coordinates, the desired $\\text{sc}^+$-sections are defined as follows:\n\t\\[\n\ts_i': U_\\alpha\\to K,\\qquad (a,v,\\eta) \\mapsto (a,v,\\eta, \\beta(a,v,\\eta) \\cdot ( \\rho_{a} (v_i)\t\t)),\n\t\\]\n\twhere $\\rho_{a}$ is the strong bundle projection defined using the hat gluings; see \\cite[p.~117]{HWZbook} and \\cite[pp.~65--67]{HWZGW}.\n\t\n\tLet $(N,{\\mathcal U})$ be a pair which controls the compactness of $\\overline{\\partial}_J'$.\n\tBy construction, these $\\text{sc}^+$-sections satisfy $N[s_i'] \\leq C$ for some constant $C< \\infty$, and hence by rescaling the vectors $v_i$ we may assume that $N[s_i'] \\leq 1$. Moreover, by shrinking the support of the cutoff function we may assume that $\\abs{\\operatorname{supp} s_i'} = \\abs{\\operatorname{supp} \\beta } \\subset {\\mathcal U}$.\n\t\n\tBy construction, the $\\text{sc}^+$-section $s_i'$ induces a well-defined restriction $s_i|_{\\hat{\\iota}_{\\mathcal Z}^{-1} (U_\\alpha)}$. Locally this restriction is given by multiplying the $\\text{sc}$-smooth cutoff $\\beta \\circ \\hat{\\iota}_{\\mathcal Z}$ and the locally constant vector $v_i$, hence it is $\\text{sc}^+$.\n\\end{proof}\n\nHaving shown the previous proposition, we may immediately apply Theorem~\\ref{thm:naturality} to see that \nthe polyfold Gromov--Witten invariants do not depend on the choice of an increasing sequence $(\\delta_i)_{i\\geq 0} \\allowbreak \\subset (0,2\\pi)$.\nThis proves Corollary~\\ref{cor:naturality-polyfold-gw-invariants}.\n\n\\subsection[Gromov-Witten invariants are independent of punctures at marked points]{Gromov--Witten invariants are independent of punctures at marked points}\n\\label{subsec:independence-punctures}\n\nWe now recall the regularity conditions that the stable curves of the Gromov--Witten polyfolds as constructed in \\cite{HWZGW} are required to satisfy.\nLet $u: \\Sigma \\to Q$ be a continuous map, and fix a point $z\\in \\Sigma$.\nWe consider a local expression for $u$ as follows. Choose a small disk-like neighborhood $D_z\\subset \\Sigma$ of $z$ such that there exists a biholomorphism $ \\sigma:[0,\\infty)\\times S^1\\rightarrow D_z \\setminus\\{z\\}$.\nLet $\\varphi:U\\rightarrow \\mathbb{R}^{2n}$ be a smooth chart on a neighborhood $U \\subset Q$ of $u(z)$ such that $\\varphi (u(z))=0$.\nThe local expression \n\\[\n\\tilde{u}: [s_0,\\infty)\\times S^1 \\to \\mathbb{R}^{2n}, \\qquad (s,t) \\mapsto \\varphi\\circ u\\circ \\sigma(s,t)\n\\]\nis defined for $s_0$ large.\nLet $m\\geq 3$ be an integer, and let $\\delta >0$. 
We say that $u$ is of \\textbf{class $H^{m,\\delta}$ around the point $z\\in \\Sigma$}\nif $e^{\\delta s} \\tilde{u}$ belongs to the space $L^2([s_0,\\infty)\\times S^1,\\mathbb{R}^{2n})$.\nWe say that $u$ is of \\textbf{class} $H^m_{\\text{loc}}$ \\textbf{around the point $z\\in \\Sigma$} if $u$ belongs to the space $H^m_{\\text{loc}}(D_z)$.\nIf $u$ is of class $H^{m,\\delta}$ at a point $z\\in \\Sigma$ we will refer to that point as a \\textbf{puncture}.\n\nBy definition, any stable map representative $(\\Sigma,j,M,D,u)$ of a stable curve in the Gromov--Witten polyfold ${\\mathcal Z}_{A,g,k}$ is required to be of class $H^{3,\\text{deg}_0}$ at all nodal points.\nThis is required in order to carry out the gluing construction at the nodes of \\cite[\\S~2.4]{HWZGW}.\n\nHowever, in some situations we would like to treat the marked points in the same way as the nodal points.\nNote that allowing a puncture with exponential decay at a specified marked point is a global condition on a Gromov--Witten polyfold.\nHence, we will need to require that the map $u$ is of class $H^{3,\\delta_0}$ at a fixed subset of the marked points (in addition to the nodal points).\n\nGiven a subset $I \\subset \\{1,\\ldots,k\\}$ we can define a GW-polyfold ${\\mathcal Z}^I_{A,g,k}$ where we require that all stable map representatives are of class $H^{3,\\delta_0}$ at the marked points $z_i$ for all $i\\in I$ and of class $H^3_\\text{loc}$ at the remaining marked points.\nGiven another subset $J \\subset\\{1,\\ldots,k\\}$ we can define a GW-polyfold ${\\mathcal Z}^J_{A,g,k}$ in the same manner.\nOn the other hand, we can consider a GW-polyfold ${\\mathcal Y}_{A,g,k}$ where we require that all stable map representatives are of class $H^{3,\\delta_0}$ and also of class $H^3_\\text{loc}$ at all the marked points.\nSuch a GW-polyfold with strict regularity at all marked points gives a refinement of the GW-polyfolds with different choices of punctures $I,J\\subset \\{1,\\ldots,k\\}$ at the marked points, in the sense that there are inclusion maps\n\\[\n{\\mathcal Z}^I_{A,g,k} \\hookleftarrow {\\mathcal Y}_{A,g,k} \\hookrightarrow {\\mathcal Z}^J_{A,g,k}.\n\\]\nIt is sufficient to consider inclusion maps of the form ${\\mathcal Y}_{A,g,k} \\hookrightarrow {\\mathcal Z}^I_{A,g,k}$ and demonstrate that the associated GW-invariants are equal.\n\nTo this end, consider the commutative diagram:\n\\[\\begin{tikzcd}\n{\\mathcal V}_{A,g,k} \\arrow[r, \"\\iota_{\\mathcal W}\"', hook] \\arrow[d, \"\\overline{\\partial}_J\\quad \"'] & {\\mathcal W}^I_{A,g,k} \\arrow[d, \"\\quad \\overline{\\partial}_J\"] & \\\\\n{\\mathcal Y}_{A,g,k} \\arrow[r, \"\\iota_{\\mathcal Z}\"', hook] \\arrow[u, bend left] & {\\mathcal Z}^I_{A,g,k} \\arrow[u, bend right] & \n\\end{tikzcd}\\]\nand observe that it satisfies the same properties as \\eqref{eq:commutative-diagram-naturality}.\nIn addition, consider the commutative diagram:\n\\[\\begin{tikzcd}\n& & Q^k\\times \\smash{\\overline{\\mathcal{M}}}\\vphantom{\\mathcal{M}}^{\\text{log}}_{g,k} \\\\\n{\\mathcal Y}_{A,g,k} \\arrow[r, \"\\iota_{\\mathcal Z}\"', hook] \\arrow[rru, \"ev_i \\times \\pi\"] & {\\mathcal Z}^I_{A,g,k} \\arrow[ru, \"ev_i\\times \\pi\"'] & \n\\end{tikzcd}\\]\nwhich satisfies the same properties as \\eqref{eq:gw-invariant-pair-of-polyfolds}.\n\nThere exist sequences of maps which converge in $H^3_\\text{loc}$ but do not converge in $H^{3,\\delta_0}$. Consequently, in general the above inclusion map is not proper. 
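For instance (a heuristic model computation on a single half-cylinder, ignoring the full stable-map structure and the M-polyfold charts): fix a smooth bump function $\\beta \\not\\equiv 0$ supported in $(0,1)\\times S^1$ and set $u_n(s,t) := \\beta(s-n,t)$.\nOn every compact subset of $[0,\\infty)\\times S^1$ we have $u_n \\equiv 0$ for all large $n$, so $u_n \\to 0$ in $H^3_\\text{loc}$, whereas\n\\[\n\\| e^{\\delta_0 s} u_n \\|_{L^2([0,\\infty)\\times S^1)}^2 = e^{2\\delta_0 n}\\, \\| e^{\\delta_0 s} \\beta \\|_{L^2}^2 \\longrightarrow \\infty,\n\\]\nso no subsequence of $(u_n)$ converges with respect to the $\\delta_0$-weighted norm.\n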
Furthermore, the pullback strong polyfold bundle is not the same as the standard strong polyfold bundle on ${\\mathcal Y}_{A,g,k}$.\n\n\\begin{proposition}\n\tThe set\t\n\t\\[\n\t{\\mathcal R} := \\{\t[\\Sigma,j,M,D,u,\\xi] \\in {\\mathcal W}^I_{A,g,k} \\mid \\operatorname{supp} \\xi \\subset K \\subset \\Sigma\\setminus M \\text{ for some compact } K\t\\}\n\t\\]\n\tis an intermediary subbundle of the strong polyfold bundle ${\\mathcal W}^I_{A,g,k}$.\n\\end{proposition}\n\\begin{proof}\n\tThe proof is identical to the proof of Proposition~\\ref{prop:existence-subbundle-naturality}, except here we use Corollary~\\ref{cor:vectors-which-span-cokernel} to choose vectors $v_i$ which vanish on disk-like regions of the marked points instead of the nodal points.\n\\end{proof}\n\nAgain, combining the previous proposition and Theorem~\\ref{thm:naturality} we see that the polyfold Gromov--Witten invariants do not depend on the choice of puncture at the marked points.\nThis proves Corollary~\\ref{cor:punctures-equal}.\n\n\n\\section{Pulling back abstract perturbations}\n\t\\label{sec:pulling-back-abstract-perturbations}\n\nIn this section we show how to construct a regular perturbation which pulls back to a regular perturbation, culminating in Theorem~\\ref{thm:compatible-pullbacks} and in Corollary~\\ref{cor:pullback-via-permutation}.\n\n\\subsection{Pullbacks of strong polyfold bundles}\n\t\\label{subsec:pullbacks-strong-polyfold-bundles}\n\nLet $P: {\\mathcal W} \\to {\\mathcal Z}$ be a strong polyfold bundle, and let $f: {\\mathcal Y} \\to {\\mathcal Z}$ be a $\\text{sc}$-smooth map between polyfolds.\nConsider the topological pullback\n\t\\[f^* {\\mathcal W} = \\{([x],[w]_{[y]})\t\\mid f([x])=[y]=P([w]_{[y]})\t\\} \\subset {\\mathcal Y} \\times {\\mathcal W}\\]\nequipped with the subspace topology. Since ${\\mathcal Y}$ and ${\\mathcal W}$ are second countable, paracompact, Hausdorff topological spaces, so too is the product ${\\mathcal Y} \\times {\\mathcal W}$ and hence $f^*{\\mathcal W}$ is also a second countable, paracompact, Hausdorff topological spaces.\n\nWe can take the pullback $\\hat{f}^* W$ of the object strong M-polyfold bundle; by Proposition~\\ref{prop:pullback-bundle} this has the structure of a strong M-polyfold bundle over the object space $Y$.\nThe fiber product \n\t\\[{\\bar\\m}{Y} _s \\times_{\\text{proj}_1} \\hat{f}^* W = \\{\t(\\phi,y,w_x) \\mid s(\\phi) = y, \\hat{f}(y)=x\t\\}\\]\nmay be viewed as the strong M-polyfold bundle via the source map $s$ over the morphism space ${\\bar\\m}{Y}$,\n\t\\[\\begin{tikzcd}\n\t{\\bar\\m}{Y} _s \\times_{\\text{proj}_1} \\hat{f}^* W \\arrow[d] \\arrow[r] & \\hat{f}^* W \\arrow[d] & \\\\\n\t{\\bar\\m}{Y} \\arrow[r, \"s\"'] & Y & \n\t\\end{tikzcd}\\]\n\nWe may define a \\textbf{pullback strong polyfold bundle structure} over the polyfold structure $(Y,{\\bar\\m}{Y})$ as the strong M-polyfold bundle $\\text{proj}_1 : \\hat{f}^*W \\to Y$ together with the bundle map defined as follows:\n\t\\begin{align*}\n\t\\lambda: {\\bar\\m}{Y} _s \\times_{\\text{proj}_1} \\hat{f}^* W & \\to \\hat{f}^*W \\\\\n\t\t\t\t\t\t\t(\\phi,y,w_x)\t& \\mapsto (t(\\phi), \\mu(f(\\phi), w_x))\n\t\\end{align*}\nIt is straightforward to check that this map satisfies the requirements of Definition~\\ref{def:strong-polyfold-bundle}.\n\nGiven a $\\text{sc}$-smooth section $\\overline{\\partial}:{\\mathcal Z}\\to {\\mathcal W}$ there exists a well-defined \\textbf{pullback section} $f^*\\overline{\\partial}:{\\mathcal Y} \\to f^*{\\mathcal W}$. 
Between the underlying sets, it is defined by\n\t\\[\n\tf^*\\overline{\\partial} ([x]) = ([x], \\overline{\\partial} \\circ f ([x]) ).\n\t\\]\nIt is automatically regularizing if $\\overline{\\partial}$ is regularizing.\nWe may define the pullback of a $\\text{sc}^+$-multisection as follows.\n\n\\begin{definition}\n\t\\label{def:pullback-multisection}\n\tGiven a $\\text{sc}^+$-multisection $\\Lambda:{\\mathcal W}\\to \\mathbb{Q}^+$ there exists a well-defined \\textbf{pullback $\\text{sc}^+$-multisection} $\\text{proj}_2^*\\Lambda :f^*{\\mathcal W} \\to \\mathbb{Q}^+$.\n\tIt consists of the following:\n\t\\begin{enumerate}\n\t\t\\item the function $\\Lambda \\circ \\text{proj}_2 :f^*{\\mathcal W}\\to\\mathbb{Q}^+$\n\t\t\\item the functor $\\hat{\\Lambda} \\circ \\text{proj}_2 :\\hat{f}^* W\\to \\mathbb{Q}^+$\n\t\t\\item at each $[x]\\in {\\mathcal Z}_1$ there exists a ``pullback local section structure'' for $\\text{proj}_2^*\\Lambda$, defined below.\n\t\\end{enumerate}\n\\end{definition}\n\nGiven $[x]\\in{\\mathcal Z}_1$, the local section structure for $\\text{proj}_2^*\\Lambda$ at $[x]$ is described as follows.\nLet $x\\in Z_1$ be a representative of $[x]$.\nLet $y:=\\hat{f}(x)\\in Z_2$, and let $U_y\\subset Z_2$ be a ${\\bar\\m}{G}(y)$-invariant open neighborhood of $y$, and let \n$s_1,\\ldots,s_k : U_y \\to W$ be a local section structure for $\\hat{\\Lambda}$ at $y$ with associated weights $\\sigma_1,\\ldots ,\\sigma_k$.\nLet $U_x\\subset Z_1$ be a ${\\bar\\m}{G}(x)$-invariant open neighborhood of $x$ such that $\\hat{f}(U_x)\\subset U_y$, and consider the restricted strong M-polyfold bundle $(\\hat{f}^*W)|_{U_x} \\to U_x$.\nThen the pullback of the local sections $\\hat{f}^*s_1,\\ldots,\\hat{f}^*s_k :U_x \\to \\hat{f}^* W$ with the associated weights $\\sigma_1,\\ldots ,\\sigma_k$ gives the local section structure for $\\text{proj}_2^*\\Lambda$ at $[x]$.\n\nIndeed, it tautologically follows from the original assumption that $s_1,\\ldots,s_k$, $\\sigma_1,\\ldots,\\sigma_k$ is a local section structure for $\\Lambda$ at $[y]$ that\n\\begin{enumerate}\n\t\\item $\\sum_{i=1}^k \\sigma_i =1$ \n\t\\item the local expression $\\text{proj}_2^* \\hat{\\Lambda}:\\hat{f}^*W|_{U_x}\\to \\mathbb{Q}^+$ is related to the local sections and weights via the equation \n\t\\[\n\t\\text{proj}_2^* \\hat{\\Lambda} (x',w_{y'}) = \\sum_{\\{i\\in \\{1,\\ldots, k \\} \\mid (x',w_{y'}) = \\hat{f}^*s_i (\\text{proj}_1 (x',w_{y'} ) )\\}} \\sigma_i\n\t\\]\n\tfor all $(x',w_{y'}) \\in (\\hat{f}^* W)|_{U_x}$ (which necessarily satisfy $\\hat{f}(x')= y' = P(w_{y'})$).\n\\end{enumerate}\n\n\\subsection{The topological pullback condition and pullbacks of pairs which control compactness}\n\t\\label{subsec:topological-pullback-condition-controlling-compactness}\n\nSuppose at the outset that we have a $\\text{sc}$-smooth map $f: {\\mathcal Y} \\to {\\mathcal Z}$ and a strong polyfold bundle $P:{\\mathcal W} \\to {\\mathcal Z}$ with a $\\text{sc}$-smooth proper Fredholm section $\\overline{\\partial}$.\nConsider the pullback of this bundle and of this section via the map $f$ yielding the following commutative diagram:\n\t\\[\\begin{tikzcd}\n\tf^* {\\mathcal W} \\arrow[d, \"f^* \\overline{\\partial}\\quad\"'] \\arrow[r, \"\\text{proj}_2\"'] & {\\mathcal W} \\arrow[d, \"\\quad \\overline{\\partial}\"] & \\\\\n\t{\\mathcal Y} \\arrow[r, \"f\"'] \\arrow[u, bend left] & {\\mathcal Z}. 
\\arrow[u, bend right] & \n\t\\end{tikzcd}\\]\nAssume that the $\\text{sc}$-smooth section $f^*\\overline{\\partial}$ is a proper Fredholm section; such an assumption is not automatic from the above setup, however it is natural in the context of polyfold maps one might encounter.\\footnote{An alternative to outright assuming that $f^*\\overline{\\partial}$ is a Fredholm section would be to formulate a precise notion of a ``Fredholm map'' for a map between polyfolds, and then require that $f$ is such a map. This would also be natural in the context of polyfold maps one might encounter.}\n\nGiven a pair $(N,{\\mathcal U})$ which controls the compactness of $\\overline{\\partial}$ we show in this subsection how to obtain a pullback of this pair, which will control the compactness of $f^*\\overline{\\partial}$.\n\n\\begin{proposition}\n\tLet $N :{\\mathcal W}[1] \\to [0,\\infty)$ be an auxiliary norm. The pullback of $N$, given by\n\t\\[\n\t\\text{proj}_2^* N : f^* {\\mathcal W}[1] \\to [0,\\infty)\n\t\\]\n\tdefines an auxiliary norm on the pullback strong polyfold bundle $f^* {\\mathcal W}\\to {\\mathcal Y}$.\n\\end{proposition}\n\\begin{proof}\n\tThis is immediate from the definitions. In particular, property \\ref{property-2-auxiliary-norm} of Definition~\\ref{def:auxiliary-norm} can be checked as follows. Let $(x_k, w_k)$ be a sequence in $\\hat{f}^* W[1]$, such that $x_k$ converges to $x$ in $Y$, and suppose $\\text{proj}_2^* \\hat{N} (x_k, w_k) \\to 0$. Then $w_k$ is a sequence in $W[1]$ such that $\\hat{f}(x_k)$ converges to $\\hat{f}(x)$ in $Z$, and $\\hat{N} (w_k) = \\text{proj}_2^* \\hat{N} (x_k, w_k) \\to 0$, and hence $w_k\\to 0_{\\hat{f}(x)}$. Thus $(x_k,w_k)\\to (x,0_{\\hat{f}(x)})$, as required.\n\\end{proof}\n\n\\begin{definition}\\label{topological-pullback-condition}\n\tWe say that $f$ satisfies the \\textbf{topological pullback condition} if for all $[y] \\in {\\mathcal S}(\\overline{\\partial})\\subset {\\mathcal Z}$ and for any open neighborhood ${\\mathcal V} \\subset {\\mathcal Y}$ of the fiber $f^{-1} ([y])$ there exists an open neighborhood ${\\mathcal U}_{[y]}\\subset {\\mathcal Z}$ of $[y]$ such that $f^{-1} ({\\mathcal U}_{[y]}) \\subset {\\mathcal V}$.\n\tNote that if $f^{-1} ([y])=\\emptyset$, this implies that there exists an open neighborhood ${\\mathcal U}_{[y]}$ of $[y]$ such that $f^{-1} ({\\mathcal U}_{[y]})=\\emptyset$.\n\\end{definition}\n\n\\begin{proposition}\n\t\\label{prop:simultaneous-compactness}\n\tSuppose that the map $f:{\\mathcal Y} \\to {\\mathcal Z}$ satisfies the topological pullback condition. 
\n\tThen there exists a pair $(N,{\\mathcal U})$ which controls the compactness of $\\overline{\\partial}$ such that the pair $(\\text{proj}_2^* N, f^{-1} ({\\mathcal U}) )$ controls the compactness of $f^* \\overline{\\partial}$.\n\t\n\tFurthermore, if a $\\text{sc}^+$-multisection $\\Lambda: {\\mathcal W}\\to \\mathbb{Q}^+$ satisfies $N [\\Lambda ] \\leq 1$ and $\\text{dom-supp} (\\Lambda) \\subset {\\mathcal U}$, then its pullback $\\text{proj}_2^* \\Lambda :f^*{\\mathcal W} \\to \\mathbb{Q}^+$ satisfies $\\text{proj}_2^* N [\\text{proj}_2^* \\Lambda ] \\leq 1$ and $\\text{dom-supp} (\\text{proj}_2^* \\Lambda) \\allowbreak \\subset \\allowbreak f^{-1} ({\\mathcal U})$.\n\\end{proposition}\n\\begin{proof}\n\tLet $(N, {\\mathcal V})$ be a pair which controls the compactness of $\\overline{\\partial}$.\n\tBy the previous proposition we know that the pullback $\\text{proj}_2^* N :f^* {\\mathcal W}[1]\\to [0,\\infty)$ is an auxiliary norm.\n\tWe may then apply \\cite[Prop.~4.5]{HWZ3} to assert the existence of a neighborhood ${\\mathcal U}'\\subset {\\mathcal Y}$ of ${\\mathcal S}(f^*\\overline{\\partial})$ such that the pair $(\\text{proj}_2^* N, {\\mathcal U}')$ controls the compactness of $f^*\\overline{\\partial}$.\n\t\n\tAt every $[y] \\in {\\mathcal S}(\\overline{\\partial})$, observe that $f^{-1} ([y]) \\subset {\\mathcal S}(f^*\\overline{\\partial}) \\subset {\\mathcal U}'$. We can use the topological pullback condition to choose a neighborhood ${\\mathcal U}_{[y]}$ of $[y]$ such that $f^{-1} ({\\mathcal U}_{[y]}) \\subset {\\mathcal U}'\\subset {\\mathcal Y}$ and moreover such that ${\\mathcal U}_{[y]} \\subset {\\mathcal V}\\subset {\\mathcal Z}$. \n\tWe define an open neighborhood by ${\\mathcal U} := \\cup_i \\ {\\mathcal U}_{[y]_i}$ for every $[y]_i\\in {\\mathcal S}(\\overline{\\partial})$.\n\t\n\tThen $(N,{\\mathcal U})$ is the desired pair.\n\tIndeed, ${\\mathcal U}$ is an open neighborhood of the unperturbed solution set ${\\mathcal S}(\\overline{\\partial})$.\n\tAnd ${\\mathcal U}\\subset {\\mathcal V}$ since for every $[y]_i$ we have ${\\mathcal U}_{[y]_i}\\subset {\\mathcal V}$. Hence we have ${\\mathcal S}(\\overline{\\partial}) \\subset {\\mathcal U}\\subset {\\mathcal V}$ therefore it follows from Remark~\\ref{rmk:shrink-neighborhood} that $(N,{\\mathcal U})$ controls the compactness of $\\overline{\\partial}$.\n\t\n\tObserve that ${\\mathcal S}(f^*\\overline{\\partial}) = f^{-1} ({\\mathcal S}(\\overline{\\partial})) \\subset f^{-1}({\\mathcal U})$. By the construction of ${\\mathcal U}$ we have $f^{-1} ({\\mathcal U}) \\subset {\\mathcal U}'$. 
Hence we have ${\\mathcal S}(f^*\\overline{\\partial}) \\subset f^{-1}({\\mathcal U}) \\subset {\\mathcal U}'$ therefore it follows from Remark~\\ref{rmk:shrink-neighborhood} that $(\\text{proj}_2^* N, f^{-1} ({\\mathcal U}))$ controls the compactness of $f^* \\overline{\\partial}$.\n\t\n\tFinally, the claim regarding the pullback of a $\\text{sc}^+$-multisection is immediate from the construction.\n\\end{proof}\n\n\\subsection{Construction of regular perturbations which pullback to regular perturbations}\n\t\\label{subsec:construction-regular-perturbation-which-pullback}\n\nWith the same assumptions and setup as in the previous subsection (i.e., our map satisfies the topological pullback condition), we show in this subsection how to construct regular perturbations which will pullback to regular perturbations\n\n\\begin{theorem}\n\t\\label{thm:compatible-pullbacks}\n\tWe can construct a regular perturbation $\\Lambda: {\\mathcal W} \\to \\mathbb{Q}^+$ which pulls back to a regular perturbation $\\text{proj}_2^* \\Lambda$. This means that the perturbations satisfy the following conditions.\n\t\\begin{enumerate}\n\t\t\\item $(\\overline{\\partial}, \\Lambda)$ and $(f^* \\overline{\\partial}, \\text{proj}_2^* \\Lambda)$ are both transversal pairs.\n\t\t\\item There exists a pair $(N,{\\mathcal U})$ which controls the compactness of $\\overline{\\partial}$ such that the pair $(\\text{proj}_2^* N, f^{-1} ({\\mathcal U}) )$ controls the compactness of $f^* \\overline{\\partial}$. Then:\n\t\t\\begin{itemize}\n\t\t\t\\item $N [\\Lambda ] \\leq 1$ and $\\text{dom-supp} (\\Lambda) \\subset {\\mathcal U}$\n\t\t\t\\item $\\text{proj}_2^* N [\\text{proj}_2^* \\Lambda ] \\leq 1$ and $\\text{dom-supp} (\\text{proj}_2^* \\Lambda) \\subset f^{-1} ({\\mathcal U})$.\n\t\t\\end{itemize}\n\t\\end{enumerate}\n\n\\end{theorem}\n\\begin{proof}\n\nWe now give an explicit construction of a $\\text{sc}^+$-multisection $\\Lambda : {\\mathcal W} \\to \\mathbb{Q}^+$ such that $(\\overline{\\partial}, \\Lambda)$ and $(f^* \\overline{\\partial}, \\text{proj}_2^* \\Lambda)$ are both transversal pairs. 
Our approach is based on the general position argument of \\cite[Thm.~15.4]{HWZbook}.\n\n\\noindent\\emph{Local construction.}\nWe construct a $\\text{sc}^+$-multisection $\\Lambda_0 : {\\mathcal W} \\to \\mathbb{Q}^+$ which will be transversal at a point $[y_0] \\in {\\mathcal S}(\\overline{\\partial}) \\subset {\\mathcal Z}$.\nLet $U(y_0) \\subset Z$ be a ${\\bar\\m}{G}(y_0)$-invariant open neighborhood of $y_0$ and moreover let $V(y_0)\\subset U(y_0)$ be a ${\\bar\\m}{G}(y_0)$-invariant open neighborhood of $y_0$ such that $\\overline{V(y_0)} \\subset U(y_0)$.\n\nChoose smooth vectors $v_1,\\ldots, v_k \\in W_{y_0}$ such that\n\t\\[\n\t\\text{span} \\{v_1,\\ldots,v_k \\} \\oplus D\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$}_{y_0} (T_{y_0} Z) = W_{y_0}.\n\t\\]\n\nFor each smooth vector $v_1,\\ldots, v_k$ we can use \\cite[Lem.~5.3]{HWZbook} to define $\\text{sc}^+$-sections $s_i : U(y_0) \\to W$ such that\n\\begin{itemize}\n\t\\item $s_i= 0$ on $U(y_0)\\setminus V(y_0)$,\n\t\\item $s_i(y_0) = v_i$.\n\\end{itemize}\nFurthermore, to ensure that the resulting multisection will be controlled by the pair $(N,{\\mathcal U})$ we require that\n\\begin{itemize}\n\t\\item $N[s_i] \\leq 1,$\n\t\\item $\\text{supp}(s_i)\\subset {\\mathcal U}$.\n\\end{itemize}\n\nWe may use these locally constructed $\\text{sc}^+$-sections to define a $\\text{sc}^+$-multisection functor \n\t\\[\\hat{\\Lambda}'_0:W \\times B_\\varepsilon^k \\to \\mathbb{Q}^+\\]\nwith local section structure given by \n$\\left\\lbrace g * \\left(\\sum_{i=1}^k t_i \\cdot s_i\\right)\\right\\rbrace_{g\\in {\\bar\\m}{G}(y_0)}$\nwhich satisfies the following.\nThere exists a ${\\bar\\m}{G}(y_0)$-invariant open neighborhood $y_0 \\subset U'_0 \\subset Z$ such that at any object $y\\in U'_0$ and for any $g\\in {\\bar\\m}{G}(y_0)$ the linearization of the function \n\t\\begin{align*}\n\tU'_0 \\times B_\\varepsilon^k \t&\\to W\\\\\n\t(y, t_1,\\ldots,t_k)\t\t&\\mapsto \\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$}(y) - g * \\left(\\sum_{i=1}^k t_i \\cdot s_i(y)\\right)\n\t\\end{align*}\nprojected to the fiber $W_y$ is surjective.\n\nWe now construct a $\\text{sc}^+$-multisection whose pullback $\\text{proj}_2^* \\Lambda_0 : f^*{\\mathcal W} \\to \\mathbb{Q}^+$ will be transversal at a point $[x_0] \\in {\\mathcal S}(f^*\\overline{\\partial}) \\subset {\\mathcal Y}$.\nConsider a point $[x_0]\\in {\\mathcal S}(f^*\\overline{\\partial}) \\subset {\\mathcal Y}$ which maps to $[y_0]:=f([x_0])\\in {\\mathcal S}(\\overline{\\partial})\\subset {\\mathcal Z}$.\nLet $x_0\\in S(\\hat{f}^*\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$})$ be a representative of $[x_0]$, and let $U(x_0) \\subset Y$ be a ${\\bar\\m}{G}(x_0)$-invariant open neighborhood of $x_0$.\nThen $y_0:= \\hat{f}(x_0) \\in S(\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$})$ is a representative of $[y_0]$.\nLet $U(y_0) \\subset Z$ be a ${\\bar\\m}{G}(y_0)$-invariant open neighborhood of $y_0$ and moreover let $V(y_0)\\subset U(y_0)$ be a ${\\bar\\m}{G}(y_0)$-invariant open neighborhood of $y_0$ such that $\\overline{V(y_0)} \\subset U(y_0)$.\nBy shrinking the open set $U(x_0)$, we may assume that the local expression $\\hat{f}: U(x_0)\\to U(y_0)$ is well-defined.\n\nThe fibers $(\\hat{f}^*W)_{x_0}$ and $W_{y_0}$ may be identified via $\\text{proj}_2$.\nChoose smooth vectors $v_1,\\ldots, v_k \\in W_{y_0}$ such that\n\t\\[\\text{span} \\{\\text{proj}_2^{-1} (v_1),\\ldots,\\text{proj}_2^{-1} (v_k) \\} \\oplus D\\hat{f}^*\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$}_{x_0} 
(T_{x_0} Y) = (\\hat{f}^*W)_{x_0}.\\]\n\nFor each smooth vector $v_1,\\ldots, v_k$ we may use \\cite[Lem.~5.3]{HWZbook} to define $\\text{sc}^+$-sections $s_i : U(y_0) \\to W$ such that\n\\begin{itemize}\n\t\\item $s_i\\equiv 0$ on $U(y_0)\\setminus V(y_0)$,\n\t\\item $s_i(y_0) = v_i$.\n\\end{itemize}\nFurthermore, to ensure that the resulting multisection will be controlled by the pair $(N,{\\mathcal U})$ we require that\n\\begin{itemize}\n\t\\item $N[s_i] \\leq 1,$\n\t\\item $\\text{supp}(s_i)\\subset {\\mathcal U}$.\n\\end{itemize}\n\nWe may use these locally defined $\\text{sc}^+$-sections to define a $\\text{sc}^+$-multisection functor $\\hat{\\Lambda}_0:\tW \\times B_\\varepsilon^k \\to \\mathbb{Q}^+$ \nwith local section structure given as follows. \n\t\\[\\left\\lbrace g * \\left(\\sum_{i=1}^k t_i \\cdot s_i\\right)\\right\\rbrace_{g\\in {\\bar\\m}{G}(y_0)}\\]\nBy construction, the pullback $\\text{sc}^+$-multisection functor\n$\\text{proj}_2^* \\hat{\\Lambda}_0:\t\\hat{f}^* W \\times B_\\varepsilon^k \\to \\mathbb{Q}^+$ \nhas local section structure given as follows. \n\t\\[\\left\\lbrace \\hat{f}^* \\left(g * \\left(\\sum_{i=1}^k t_i \\cdot s_i\\right)\\right) \\right\\rbrace_{g\\in {\\bar\\m}{G}(y_0)}\\]\nIt may be observed that there exists a ${\\bar\\m}{G}(x_0)$-invariant open neighborhood $x_0 \\subset U_0\\subset Y$ such that at any object $x\\in U_0$ and for any $g\\in {\\bar\\m}{G}(y_0)$ the linearization of the function \n\t\\begin{align*}\n\tU_0 \\times B_\\varepsilon^k \t&\\to \\hat{f}^*W\\\\\n\t(x, t_1,\\ldots,t_k)\t&\\mapsto \\hat{f}^* \\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$} (x) - \\hat{f}^* \\left( g * \\left(\\sum_{i=1}^k t_i \\cdot s_i\\right)\\right) (x) \\\\\n\t\t\t\t\t\t&\\phantom{\\mapsto} = \\left(x, \\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$} (\\hat{f}(x)) - g * \\left(\\sum_{i=1}^k t_i \\cdot s_i(\\hat{f}(x)) \\right) \\right)\n\t\\end{align*}\nprojected to the fiber $(\\hat{f}^* W)_x$ is surjective.\n\n\\noindent\\emph{Global construction.}\nWe may cover the compact topological space ${\\mathcal S}(\\overline{\\partial})$ by a finite collection of such neighborhoods $\\abs{U'_i}$ of points $[y_i]\\in {\\mathcal S}(\\overline{\\partial})$; we may also cover ${\\mathcal S}(f^*\\overline{\\partial})$ by a finite collection of such neighborhoods $\\abs{U_i}$ of points $[x_i]\\in {\\mathcal S}(f^*\\overline{\\partial})$.\nIt follows that the finite sum of $\\text{sc}^+$-multisections\n\t\\[\n\t\\Lambda:= \\bigoplus_i \\Lambda_i : {\\mathcal W} \\times B_\\varepsilon^N \\to \\mathbb{Q}^+\n\t\\]\nhas the property that: for any point $[y] \\in {\\mathcal Z}$ with $\\Lambda \\circ \\overline{\\partial} ([y])>0$, and for any parametrized local section structure $\\{s_i\\}_{i\\in I}$ at a representative $y$, the linearization of the function $\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$} (y) - s_i(y,t)$ projected to the fiber $W_y$ is surjective.\n\nLikewise, the pullback $\\text{sc}^+$-multisection $\\text{proj}_2^* \\Lambda : f^*{\\mathcal W} \\times B_\\varepsilon^N \\to \\mathbb{Q}^+$\nhas the property that for any point $[x] \\in {\\mathcal Y}$ which satisfies $\\text{proj}_2^* \\Lambda \\circ f^*\\overline{\\partial} ([x])>0$ and for any parametrized local section structure $\\{s_i\\}_{i\\in I}$ at a representative $x$, the linearization of the function $\\stackon[3pt]{$\\overline{\\partial}$}{$\\hat{}$} (x) - s_i(x,t)$ projected to the fiber $(\\hat{f}^*W)_x$ is surjective. 
\n\nFurthermore for $\\varepsilon$ sufficiently small, for any fixed $t_0 \\in B_\\varepsilon^N$ the $\\text{sc}^+$-multisection $\\Lambda (\\cdot, t_0)$ is controlled by the pair $(N,{\\mathcal U})$, i.e.,\n\\begin{itemize}\n\t\\item $N[\\Lambda(\\cdot,t_0)] \\leq 1$,\n\t\\item $\\text{dom-supp}(\\Lambda(\\cdot,t_0)) \\subset {\\mathcal U}$.\n\\end{itemize}\nIt follows from Proposition~\\ref{prop:simultaneous-compactness} that the pullback $\\text{sc}^+$-multisection $\\text{proj}_2^* \\Lambda (\\cdot, t_0)$ is controlled by the pullback $(\\text{proj}_2^*N,f^{-1}({\\mathcal U}))$.\n\n\\noindent\\emph{Common regular value.}\nConsider the strong polyfold bundle ${\\mathcal W} \\times B_\\varepsilon^N \\to {\\mathcal Z} \\times B_\\varepsilon^N$ and the pullback strong polyfold bundle $f^*{\\mathcal W} \\times B_\\varepsilon^N \\to {\\mathcal Y} \\times B_\\varepsilon^N$.\nLet $\\scalerel*{\\tilde{\\overline{\\partial}}}{\\hat{M}} : {\\mathcal Z} \\times B_\\varepsilon^N \\to {\\mathcal W} \\times B_\\varepsilon^N$ denote the $\\text{sc}$-smooth proper Fredholm section defined by $([y],t)\\mapsto (\\overline{\\partial}([y]),t)$, and let $\\tilde{f}^*\\scalerel*{\\tilde{\\overline{\\partial}}}{\\hat{M}} :{\\mathcal Y} \\times B_\\varepsilon^N \\to f^*{\\mathcal W}\\times B_\\varepsilon^N$ denote the $\\text{sc}$-smooth proper Fredholm section defined by $([x],t) \\mapsto ([x],\\overline{\\partial}(f([x])),t)$.\nBy construction, $(\\scalerel*{\\tilde{\\overline{\\partial}}}{\\hat{M}}, \\Lambda)$ and $(\\tilde{f}^*\\scalerel*{\\tilde{\\overline{\\partial}}}{\\hat{M}}, \\text{proj}_2^* \\Lambda)$ are transversal pairs; hence by Theorem~\\ref{thm:transversal-pairs-weighted-branched-suborbifolds} the thickened solution sets\n\t\\begin{align*}\n\t{\\mathcal S}(\\scalerel*{\\tilde{\\overline{\\partial}}}{\\hat{M}}, \\Lambda) &= \\{ ([y],t) \\in {\\mathcal Z} \\times B_\\varepsilon^N \\mid \\Lambda (\\scalerel*{\\tilde{\\overline{\\partial}}}{\\hat{M}} ([y],t)) >0 \\} \\subset {\\mathcal Z} \\times B_\\varepsilon^N,\t\\\\\n\t{\\mathcal S}(\\tilde{f}^*\\scalerel*{\\tilde{\\overline{\\partial}}}{\\hat{M}}, \\text{proj}_2^* \\Lambda) &= \\{ ([x],t) \\in {\\mathcal Y} \\times B_\\varepsilon^N \\mid \\text{proj}_2^*\\Lambda (\\tilde{f}^*\\scalerel*{\\tilde{\\overline{\\partial}}}{\\hat{M}} ([x],t)) >0 \\} \\subset {\\mathcal Y} \\times B_\\varepsilon^N\n\t\\end{align*}\nhave the structure of weighted branched orbifolds.\n\nBy Sard's theorem, we can find a common regular value $t_0 \\in B_\\varepsilon^N$ of the projections ${\\mathcal S}(\\scalerel*{\\tilde{\\overline{\\partial}}}{\\hat{M}}, \\Lambda) \\to B_\\varepsilon^N$ and ${\\mathcal S}(\\tilde{f}^*\\scalerel*{\\tilde{\\overline{\\partial}}}{\\hat{M}}, \\text{proj}_2^* \\Lambda)\\to B_\\varepsilon^N$.\nThen the $\\text{sc}^+$-multisection $\\Lambda(\\cdot,t_0) : {\\mathcal W} \\to \\mathbb{Q}^+$ and its pullback $\\text{proj}_2^*\\Lambda(\\cdot,t_0) :f^*{\\mathcal W} \\to \\mathbb{Q}^+$ are the desired regular perturbations.\n\\end{proof}\n\nThe significance of this theorem is the following.\nBoth perturbed solution sets ${\\mathcal S}(f^*\\overline{\\partial}, \\text{proj}_2^* \\Lambda )$ and ${\\mathcal S}(\\overline{\\partial},\\Lambda)$ have the structure of compact oriented weighted branched suborbifolds. 
\nMoreover, the restriction of $f$ gives a well-defined continuous function between these perturbed solution spaces, i.e.,\n\t\\[\n\tf|_{{\\mathcal S}(f^*\\overline{\\partial}, \\text{proj}_2^* \\Lambda )}: {\\mathcal S}(f^*\\overline{\\partial}, \\text{proj}_2^* \\Lambda ) \\to {\\mathcal S}(\\overline{\\partial},\\Lambda).\n\t\\]\nFurthermore, $f$ is weight preserving in the sense that the weight functions are related via pullback by the following equation $ ( \\Lambda \\circ \\overline{\\partial}) \\circ f= \\text{proj}_2^* \\Lambda \\circ f^* \\overline{\\partial}$.\n\n\\subsection{The permutation maps between perturbed Gromov--Witten moduli spaces}\n\t\\label{subsec:permutation-map}\n{As we have explained in the introduction, naively there does not exist a permutation map for an arbitrary choice of abstract perturbation.}\n\nFix a permutation $\\sigma \\in S_k,\\ \\sigma: \\{1,\\ldots, k\\}\\to \\{1,\\ldots,k\\}$.\nAssociated to this permutation we can define a $\\text{sc}$-smooth permutation map between the Gromov--Witten polyfold\n\\[\n\\sigma: {\\mathcal Z}_{A,g,k} \\to {\\mathcal Z}_{A,g,k}, \\qquad [\\Sigma,j,M,D,u] \\mapsto [\\Sigma,j,M^\\sigma,D,u]\n\\]\nwhere $M = \\{z_1,\\ldots,z_k\\}$ and where $M^\\sigma := \\{z'_1,\\ldots,z'_k\\}$, $z'_i:= z_{\\sigma(i)}$.\n\nConsider the pullback via $\\sigma$ of the strong bundle ${\\mathcal W}_{A,g,k} \\to {\\mathcal Z}_{A,g,k}$ and the Cauchy--Riemann section $\\overline{\\partial}_J$, as illustrated in the below commutative diagram.\n\t\\[\n\t\\begin{tikzcd}\n\t\\sigma^* {\\mathcal W}_{A,g,k} \\arrow[d, \"\\sigma^* \\overline{\\partial}_J \\quad\"'] \\arrow[r, \"\\text{proj}_2\"'] & {\\mathcal W}_{A,g,k} \\arrow[d, \"\\quad \\overline{\\partial}_J\"] & \\\\\n\t{\\mathcal Z}_{A,g,k} \\arrow[r, \"\\sigma\"'] \\arrow[u, bend left] & {\\mathcal Z}_{A,g,k} \\arrow[u, bend right] & \n\t\\end{tikzcd}\n\t\\]\nThe map $\\sigma$ is a homeomorphism when considered on the underlying topological spaces, and hence satisfies the topological pullback condition.\nBy applying Theorem~\\ref{thm:compatible-pullbacks} we immediately obtain Corollary~\\ref{cor:pullback-via-permutation}.\n\nIt follows that the permutation map restricts to a well-defined map between the perturbed Gromov--Witten moduli spaces, \n\\[\n\\sigma|_{{\\mathcal S}_{A,g,k} (\\overline{\\partial}_J, \\text{proj}_2^* \\Lambda)} : {\\mathcal S}_{A,g,k} (\\overline{\\partial}_J, \\text{proj}_2^* \\Lambda) \\to {\\mathcal S}_{A,g,k} (\\overline{\\partial}_J,\\Lambda).\n\\]\nConsidered on the underlying topological spaces, this map is a homeomorphism. 
Considered on the branched ep-subgroupoid structures, the associated functor\n\\[\n\\hat{\\sigma}|_{S_{A,g,k} (\\scalerel*{\\hat{\\overline{\\partial}}_J}{\\hat{M}_J}, \\text{proj}_2^* \\hat{\\Lambda})}: S_{A,g,k} (\\scalerel*{\\hat{\\overline{\\partial}}_J}{\\hat{M}_J}, \\text{proj}_2^* \\hat{\\Lambda}) \\to S_{A,g,k} (\\scalerel*{\\hat{\\overline{\\partial}}_J}{\\hat{M}_J},\\hat{\\Lambda})\n\\]\nis a local diffeomorphism, and moreover is injective.\nThe restricted permutation map $\\sigma$ and its associated functor $\\hat{\\sigma}$ are both weight preserving, i.e.,\n$(\\Lambda\\circ \\overline{\\partial}_J) \\circ \\sigma = \\text{proj}_2^*\\Lambda \\circ \\overline{\\partial}_J$ and $(\\hat{\\Lambda}\\circ \\scalerel*{\\hat{\\overline{\\partial}}_J}{\\hat{M}_J}) \\circ \\hat{\\sigma} = \\text{proj}_2^* \\hat{\\Lambda} \\circ \\scalerel*{\\hat{\\overline{\\partial}}_J}{\\hat{M}_J}$.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section*{Document History}\n\\begin{itemize}\n \\item[\\textbf{v1.0}] Initial release\n \\item[\\textbf{v1.1}] Added discussion of conflicting report to the introduction\\\\ Added discussion of cost pitfalls to Section~\\ref{sec:results:requirements:motivation}\n \\item[\\textbf{v1.2}] Added new data for the application type (Section~\\ref{subsec:application_type})\n\\end{itemize}\n\n\n\\newpage\n\\pagenumbering{roman}\n\\setcounter{tocdepth}{4}\n\\begin{spacing}{1.3}\n\\tableofcontents\n\\end{spacing}\n\n\n\\newpage\n\\thispagestyle{plain}\n\\section*{Executive Summary}\nThe serverless computing paradigm promises many desirable properties for cloud applications---low-cost, fine-grained deployment, and management-free operation.\nConsequently, the paradigm has underwent rapid growth: there currently exist tens of serverless platforms and all global cloud providers host serverless operations.\nTo help tune existing platforms, guide the design of new serverless approaches, and overall contribute to understanding this paradigm, in this work we present a long-term, comprehensive effort to identify, collect, and characterize serverless use cases.\nWe survey 89 use cases, sourced from white and grey literature, and from consultations with experts in areas such as scientific computing.\nWe study each use case using 24 characteristics, including general aspects, but also workload, application, and requirements. When the use cases employ workflows, we further analyze their characteristics.\nOverall, we hope our study will be useful for both academia and industry, and encourage the community to further share and communicate their use cases.\n\\\\\n\n\\noindent\\textbf{Keywords}\\\\\n\n\\noindent serverless use cases, cloud computing, serverless computing, serverless applications, workflows, requirements analysis, empirical research\\\\\n\n\\noindent\\textbf{Disclaimer}\\\\\n\n\\noindent SPEC, the SPEC logo and the names SPEC CPU2017, SERT, SPEC Cloud IaaS 2018 are trademarks of the Standard Performance Evaluation Corporation (SPEC). \nSPEC Research and SPEC RG Cloud are service marks of SPEC. Additional product and service names mentioned herein may be the trademarks of their respective owners. \nCopyright \u00a9 1988-2020 Standard Performance Evaluation Corporation (SPEC). 
\nAll rights reserved.\n\n\n\\mainmatter\n\\setlength{\\parindent}{0cm}\n\n\\input{sections\/Introduction.tex}\n\n\\input{sections\/Methodology.tex}\n\n\\input{sections\/Results.tex}\n\n\\input{sections\/Threats.tex}\n\n\\input{sections\/Conclusion.tex}\n\n\n\\cleardoublepage\n\\pagenumbering{gobble}\n\\pagestyle{empty}\n\\renewcommand\\bibname{References}\n\n\\bibliographystyle{IEEEtranSA}\n\n\\section{Conclusion and Future Work}\n\\label{sec:conclusion}\\label{sec:futurework}\n\nThe emergence of serverless computing has already led to a diverse design space, with tens of serverless platforms and the participation of all major cloud providers.\nWe identify in this work the need for a systematic, comprehensive study of serverless use cases, which could help the development of serverless techniques and solutions in the fields of software engineering, distributed systems, and performance engineering. \\\\\n\nWe have proposed a systematic process to identify, collect, and characterize serverless use cases.\nTo identify use cases, the process considers open-source software projects, peer-reviewed literature, self-published material, and domain knowledge.\nTo collect the use cases, the process proposes a structured repository, from which reviewers take and characterize each use case alongside 24 features of interest.\nEach use case is covered by the following types of features:\n(a) \\textit{general} characteristics, such as platform, application type and domain, whether the use case was observed in production, and whether the use case provides open-source software;\n(b) \\textit{workload} characteristics, such as the execution pattern, burstiness, types of triggers, and data volume;\n(c) \\textit{application} characteristics, such as programming language(s) used to develop it, the resource bounds, whether the application depends on external services, etc.;\n(d) the \\textit{requirements} posed by the use case, such as locality and latency, or the performance-cost trade-off; and\n(e) \\textit{workflow} characteristics, including structure, size, and internal parallelism. \\\\\n\nUsing this process, we have collected and characterized a total of 89~ serverless use cases from four different sources. Our systematic and comprehensive study reveals that:\n\\begin{enumerate}\n\n \\item We find a dominating portion of serverless use cases already being in production with AWS as the most popular platform and web services being the most common application domain. \n\n \\item Serverless workloads tend to exhibit on-demand execution patterns exemplified by 81\\% bursty workloads, which makes their load hard to predict.\n \n \\item Most cloud functions (67\\%) are short-running, with running times in the order of milliseconds or seconds, thus requiring serverless frameworks that impose small overheads when running functions.\n \n \\item Cost savings (both in terms of infrastructure and operation costs) are a bigger driver for the adoption of cost than the offered performance and scalability gains.\n \n \\item \n We observe an increasing trend toward ever-larger, ever more complex workflows, indicating the need to move toward (cloud-native) workflow engines.\n\\end{enumerate}\n\n\nLast, but not least, we see this study as a step toward a community-wide policy of sharing and discussing about use cases. 
Persisting beyond our effort, such use cases could stimulate a new wave of serverless designs, facilitate meaningful tuning and benchmarking, and overall prove useful for both academia and industry.\nWe therefore extend an open invitation to prospective new collaborators in the SPEC-RG Cloud group.\n\n\n\n\\section{Introduction}\\label{sec:intro}\\label{sec:introduction}\n\nServerless computing is an emerging technology with increasing impact on our modern society, and increasing adoption by both academia and industry~\\cite{FutureScape, Markets, 10.5555\/3027041.3027047}.\nThe key promise of serverless computing is to make computing services more accessible, fine-grained, and affordable~\\cite{DBLP:journals\/internet\/EykTTVUI18,DBLP:journals\/corr\/abs-1902-03383} by managing operational concerns~\\cite{10.1145\/3368454}.\nMajor cloud providers, such as Amazon, Microsoft, Google, and IBM already offer capable serverless platforms.\nHowever, serverless computing, and its common Function-as-a-Service~(FaaS) realization, still raises many important challenges that may reduce adoption. These challenges have been recognized and discussed in fields such as software engineering, distributed systems, performance engineering~\\cite{DBLP:conf\/middleware\/EykIST17,DBLP:conf\/wosp\/EykIAGE18,DBLP:conf\/cidr\/HellersteinFGSS19}.\nThis work focuses on a first step to alleviate these challenges:\n{\\it understanding serverless applications through a variety of use cases}.\n\\newline\n\nServerless computing enables developers to focus on implementing business logic, leaving the operational concerns to cloud providers. In turn, the providers turn to automation, which they achieve through \ncapable serverless platforms, such as AWS Lambda, Azure Functions, or Google Cloud Functions, and IBM Cloud Functions (based on Apache OpenWhisk).\nServerless platforms already support fine-grained function deployment, detailed resource allocation, and to some extent also autoscaling~\\cite{10.1145\/3368454}.\nHowever, more sophisticated operational features have started to emerge such as:\n(a) complex function composition and even full {\\it workflows}, \n(b) eventing and provider-managed messaging, \n(c) low-latency scheduling, \n(d) file storage and database setup, \n(e) streaming and locality-aware deployment, and \n(f) versioning and logging solutions. \nThese features facilitate the serverless {\\it application lifecyle}, and help further decreases the time-to-market for serverless applications~\\cite{leitner2019MixedMethod}.\n\\newline\n\nResearchers and industry practitioners have an urgent need for serverless use cases.\nThe variety of already existing platforms and support from the major cloud providers indicate the presence of many serverless applications.\nHowever, relatively little is known about their characteristics or behavior.\nFor an emerging technology such as serverless computing, researchers, \nengineers, and \nplatform providers \ncould use descriptions of {\\it use cases}---{\\it which applications?}, {\\it where and how was this technology already successfully applied?}, and {\\it what are the characteristics of these use cases?}---to guide their drive for discovery and improvement {\\it in the right direction}.\n\\textit{Researchers} can study different use cases related to the same application to extract meaningful patterns and trigger new designs. They can also identify representative use cases, which can later be used for the evaluation of novel approaches and in empirical studies. 
\n\\textit{Engineers} require descriptions in which areas serverless computing was already successfully applied, which helps to decide whether to adopt serverless computing for other projects. Additionally, existing solutions can serve as blueprints for similar use cases. \\textit{Platform providers} require knowledge of how their products are used, to optimize them and gaps in adoption can point out deficits in their current offerings.\n\\newline\n\nThere are only a few, and sometimes conflicting, reports addressing important questions such as why developers build serverless applications, when serverless applications are well suited, or how serverless applications are implemented in practice. For example, there are reports of significant cost savings by switching to serverless applications~\\cite{adzic2017serverless, Levinson2020}, but also articles suggesting higher cost in some scenarios compared to traditional hosting~\\cite{eivy2017wary}. There are also reports of successfully serverless applications for data-intensive applications~\\cite{witte2019serverless, 10.1145\/3307339.3343462}, despite other articles claiming that serverless is not well suited for data-intensive applications~\\cite{DBLP:conf\/cidr\/HellersteinFGSS19}. Some people suggest that containers are superior to serverless for latency-critical tasks~\\cite{Thorn}, but there are also reports of people successfully applying serverless for latency-critical user-facing traffic~\\cite{Droplr}. Having concrete information on these topics would be valuable for managers to guide decisions on whether a serverless application can be a suitable solution for a specific use case.\n\\newline\n\nHowever much needed, serverless use cases have not been studied systematically so far.\nFor serverless computing, existing research has focused on serverless platforms and their performance properties~\\cite{yussupov:19}.\nSeveral studies currently exist about the features, architecture, and performance properties of these platforms~\\cite{DBLP:journals\/internet\/EykIGEBVTSHA19,10.1007\/978-3-319-99819-0_11, figiela2018performance, lee2018evaluation, lloyd2018serverless, wang2018peeking}.\nShahrad et al.~\\cite{shahrad2020serverless} characterize the aggregated performance properties of the entire production FaaS workload from Microsoft Azure Functions, but do not provide details on individual use cases.\nA recent mixed-method empirical study investigates how developers use serverless computing, focusing on the issues (pain points) they experienced~\\cite{leitner2019MixedMethod}.\nAnother multivocal literature review discusses common patterns in the architecture of serverless applications~\\cite{taibi2020serverless}.\nTo the best of our knowledge, the only existing collection of serverless use cases is an article by Castro et al.~\\cite{10.1145\/3368454}, which introduces ten use cases collected from non-peer-reviewed (\\textit{grey}) literature.\n\\newline\n\nIn this technical report, we collect a total of 89~ serverless use cases from four different sources. 32~ use cases are from open-source projects, 23~ from white literature, 28~ from grey literature, and 6~ from the area of scientific computing. Each use case is reviewed by a pair of reviewers in regard to 24 characteristics, such as execution pattern, workflow coordination, use of external services, and motivation for adopting serverless.\nThe full dataset containing all use cases and their characteristics is publicly available as a persistent Zenodo repository~\\cite{dataset}. 
\n\\newline\n\n\nIn the next section, we discuss our process for use-case collection and characterization. In Section~\\ref{sec:results}, we describe the 24~ characteristics we reviewed for each use case and the results of this review. Section~\\ref{sec:validity} discusses threats to validity and mitigation strategies.\nFinally, Section~\\ref{sec:conclusion} concludes this technical report, and discusses promising future research directions based on the finding of this study. \n\n\\section{Study Design}\n\\label{sec:method}\n\nThis section summarizes our overall study process, describes the data sources to identify primary studies, the selection strategy with inclusion and exclusion criteria, the characteristics review protocol, and the discussion and consolidation phase covering inter-reviewer agreement.\n\n\\subsection{Process Overview}\n\\label{sec:process}\n\n\\Cref{fig:overview} summarizes the use case analysis process.\nFirstly, we compiled an extensive list of potentially relevant use cases from four different data sources (see \\Cref{sec:process:source}), namely open source projects, white literature, grey literature, and scientific computing.\nSecondly, we applied our selection criteria (see \\Cref{subsec:selection}) to classify and filter only relevant use cases in the context of this study.\nThis resulted in 83 use cases from publicly available sources and 6~ scientific use cases from internal sources, where we had access to domain experts.\nThirdly, we defined a list of interesting characteristics including potential values and perform reviews to extract the actual values from available documentation (see \\Cref{subsec:data_extraction}).\nFor all public sources, 2 randomly assigned researchers out of a pool of 7 available authors conducted two redundant reviews for each use case.\nEach scientific use case was reviewed by a single domain expert.\nSubsequently, we calculated the inter-reviewer agreements for all redundant reviews and resolved any conflicting values during discussion and consolidation (see \\Cref{subsec:data_synthesis}).\nThis resulted in a total of 89~ analyzed use cases.\n\n\\begin{figure}[ht]\n \\centering\n \\includegraphics[width=\\linewidth]{figures\/Schema.pdf}\n \\caption{Process overview for use case analysis.}\n \\label{fig:overview}\n\\end{figure}\n\n\\subsection{Data Sources}\n\\label{sec:process:identification}\\label{sec:process:source}\\label{sec:process:sourcing}\n\nReports on use cases for serverless applications appear in many different forms ranging from peer-reviewed academic papers, open-source projects, blog posts, podcasts, talks, provider-reported success stories to direct exchange with application developers. Therefore, we collect use cases from a variety of different sources. We also aim to not have a dominant source that contributes the lions share of the use cases and therefore introduces a strong selection bias. Based on this, we do not aim for an exhaustive collection of use cases, but collect use cases from the following different sources with the goal of obtaining a large varied sample:\n\n\\begin{itemize}\n \\item \\textbf{Open-source projects:} Many serverless open-source projects are currently available on GitHub.\n As a starting point for the open-source projects, we used an existing data set from \\cite{pavlov:19}. This data set was scraped from GitHub using GHTorrent~\\cite{Gousi13}, an offline mirror of the GitHub public event time line. 
It excludes unrelated or insignificant projects, based on a keyword search\\footnote{Keywords: \\textit{aws, aws lambda, amazon lambda, lambda functions, azure, openwhisk, serverless, google cloud functions, microsoft azure, azure functions, ibm blue mix, bluemix, oracle fn, oracle cloud fn, kubeless, ibm cloud functions, fn project}} and also excludes any projects that started prior to the launch of AWS Lambda (the first major serverless platform). From this data set, we removed small and inactive projects based on the number of files, commits, contributors, and watchers. As this still left us with many projects that only mention one of the keywords (e.g., ''In the future, we are looking to use AWS lambda for the image resizing.''), we manually filtered the resulting data set to include only projects that are deployed as serverless applications. This resulted in a total of 32~ use cases from open-source projects.\n \n \\item \\textbf{White literature:} There is also a growing interest in serverless applications from academia, which results in a number of scientific publications, i.e., journal papers, conference papers and workshop papers describing serverless use cases. For white literature, we based our search on an existing community-curated dataset on literature for serverless computing consisting of over 180 articles from 2016 to 2019~\\cite{sldataset}. First, we filtered the articles based on title and abstract. In a second iteration, we filtered out any articles that implement only a single function for evaluation purposes or do not include sufficient detail to enable a review. As the authors were familiar with a few additional publications describing serverless applications, we contributed these articles to the community-curated dataset and included them in this study. This resulted in a total of 23~ use cases from white literature.\n \n \\item \\textbf{Grey literature:} In software engineering, the discourse in not limited to scientific articles but extends to grey literature, such as blog posts, forum discussions and podcasts~\\cite{10.1145\/2915970.2916008, 10.1145\/2786805.2803200}. For serverless computing, there are a number of blog posts by companies or individuals, talks at industry conferences and provider-reported success stories, as the development of serverless computing was initially mostly industry driven. We filtered the case studies reported by the major serverless providers (AWS\\footnote{\\url{https:\/\/aws.amazon.com\/solutions\/case-studies\/}}, Azure\\footnote{\\url{https:\/\/azure.microsoft.com\/en-in\/case-studies\/}}, Google\\footnote{\\url{https:\/\/cloud.google.com\/customers}} and IBM\\footnote{\\url{https:\/\/www.ibm.com\/case-studies\/}}) and selected those that used mostly serverless solutions. We also included the ten use cases reported in a recent article on the rise of serverless computing~\\cite{10.1145\/3368454}, which to the best of our knowledge is the largest collection of grey literature on serverless use cases. We further extended this collection with grey literature articles describing serverless use cases that the authors were already familiar with. This process resulted in a total of 28~ use cases from grey literature.\n \n \\item \\textbf{Scientific computing:} There is also an increasing interest in serverless solutions from the scientific computing community (e.g., by NASA~\\cite{nasa}). However, most of these use cases are still at an early stage and therefore there is little public data available for them. 
One of the authors of this paper is currently employed at the German Aerospace Center~(DLR), which allowed us to collect information about several projects at DLR that are either currently moving to serverless solutions or are planning to do so. Additionally, a use case from the German Electron Syncrotron~(DESY) could be included. This resulted in a total of 6~ use cases from the area of scientific computing.\n\\end{itemize}\nSome use cases are contained in multiple sources, e.g., a use case might have a GitHub repository that matches our keywords and is also used in the evaluation of an academic paper. For these use cases, we assign them only to a single source using the following ranking: open-source projects > grey literature > white literature. For the scientific use cases, there are no overlaps with the other use case sources.\n\n\\subsection{Use Case Selection}\n\\label{subsec:selection}\n\nWe defined the following inclusion (I) and exclusion (E) criteria for our study:\n\\begin{itemize}\n \\item[I1] Concrete serverless use cases, as we are interested in real-world example applications.\n \\item[I2] Use cases described in sufficient detail to conduct a meaningful review (i.e., excluding vague high-level case studies mainly focusing on a specific serverless platform or solution, but lacking technical detail).\n \\item[E1] Serverless platforms (e.g., Apache OpenWhisk) and frameworks (e.g., Serverless Framework\\footnote{\\url{https:\/\/www.serverless.com\/}}), as these are not concrete workloads.\n \\item[E2] Boilerplate code and simple technology demonstrations as often found in official serverless provider documentation, as these do not constitute full-fledged use cases.\n \\item[E3] Academic papers on the same use case. For example, there are a number of academic papers that discuss serverless neural networks serving~\\cite{ishakian2018serving, bhattacharjee2019barista, tu2018pay}. In this case we only include a single representative paper.\n\\end{itemize}\n\n\\subsection{Characteristics Review}\n\\label{subsec:data_extraction}\nWe first determined and formalized the set of investigated characteristics. In an initial round, all authors individually suggested characteristics they consider interesting. In a next round, we merged similar characteristics and kept all characteristics that at least two authors considered relevant. This process resulted in 24 characteristics, which can be divided into five groups: general characteristics, workload characteristics, application characteristics, requirement characteristics, and workflow characteristics. \\emph{General characteristics} aim to quantify the structure of our data set and include characteristics such as ``Is the use case open-source?'', ``Is the use case currently deployed in production?'', and ``What domain is this use case from?''. \\emph{Workload characteristics} aim to describe the traffic pattern and request properties of the use case, e.g., ``Is the workload bursty?'', ``What is the data volume per request?'', and ``Is the application workload triggered by HTTP requests, cloud events, or regularly scheduled?''. \\emph{Application characteristics} describe the structure and properties of the serverless application itself and focuses on characteristics such as ``How many functions does the application consist of?'', ``What programming languages are used?'', and ``Which managed cloud services does the application use?''. 
\\emph{Requirement characteristics} describe the requirements from the stakeholders, such as ``Is latency relevant?'', ``Does the application have to run in a specific region, replicated in multiple regions or even on edge devices?'', and ``What is the reason for the adoption of serverless computing?''. Finally, \\emph{workflow characteristics} describe the properties of workflows within the serverless applications, e.g., ``Is the use case a workflow?'', ``How many functions does the workflow consist of?'', and ``How is the workflow execution coordinated?''. \\\\\n\nBased on a group discussion, we defined an exhaustive set of potential values for each characteristics. For example, for the characteristic ``How are executions triggered?'' we defined the potential values ``HTTP request'', ``Cloud event', ``Scheduled'' and ``Manual''. Additionally, for every characteristic we introduced the values ``Unknown'' and ``Not applicable''. ``Unknown'' indicates that the documentation of the use case does not contain enough information to determine this characteristic. ``Not applicable'' is used when a characteristic does not make sense for a use cases, for example all workflow characteristics are only applicable to use cases that contain a workflow. For some characteristics, we were not able to define a set of potential values prior to reviewing the use cases. For these characteristics, we used text fragments during the review. Using thematic coding~\\cite{coffey1996making, guest2011applied}, we extracted codes and treated those as the values for these characteristics. For example, for the characteristic ``What is the reason for the adoption of serverless computing?'' thematic coding resulted in the codes ``NoOps'', ``Scalability'', ``Performance'', ``Maintainability'' and ``Simplify Development''. This process enabled us to extract quantifiable results from the textual descriptions. \\\\\n\nWe randomly assigned each use case to 2 reviewers out of a pool of 7 available reviewers from the authors.\nWe manually adjusted a few reviewer assignments to minimize the number of coinciding reviewer pairs (i.e., avoid that many use cases are reviewed by the same two reviewers).\nSubsequently, each reviewer individually assigned values to all characteristics of its assigned use cases.\n\n\\subsection{Discussion and Consolidation}\n\\label{subsec:data_synthesis}\nAfter completing the initial round of reviews, we calculate the fleiss kappa to quantify the level of agreement between the reviewers~\\cite{gwet2014handbook}. Due to the nature of the fleiss kappa, we excluded all characteristic assignments, where at least one reviewer assigned multiple values for a characteristic (e.g., if a use case execution is triggered both via HTTP requests and cloud events), the characteristics using thematic coding as well as the numeric characteristic ``How many functions does the application consist of?''. As characteristics have a different number of possible values, we calculated an individual fleiss kappa value for each characteristic and then the weighted average across these individual kappa fleiss values. This results in a kappa fleiss value of 0.48, which can be interpreted as ``moderate agreement''~\\cite{landis1977measurement}.\n\nIn the following discussion and consolidation phase, the reviewers compared their notes and tried to reach a consensus for the characteristics with conflicting assignments. In a few cases, the two reviewers had different interpretations of a characteristics. 
In the following discussion and consolidation phase, the reviewers compared their notes and tried to reach a consensus for the characteristics with conflicting assignments. In a few cases, the two reviewers had different interpretations of a characteristic. These conflicts were discussed among all authors to ensure that characteristic interpretations were consistent. However, for most conflicts, the consolidation was a quick process, as the most frequent type of conflict was that one reviewer found additional documentation that the other reviewer did not find.\nFollowing this process, we were able to resolve all conflicts, resulting in a collection of 89~use cases described by 24 characteristics. \\\\\n\nFor the scientific use cases, a different approach was necessary, as many of them were not publicly available yet. Therefore, these use cases were reviewed by a single domain expert, who is either involved in the development of the use case or in direct contact with the developers. For each of the scientific use cases there is also a textual description (see Appendix Section~\\ref{appendix}).\n\\subsection{Application Characteristics}\n\\label{sec:results:application}\nThis section characterizes how the applications use cloud functions.\nIn the following, we analyze the applications regarding: the number of distinct functions in them, the function run times, the resource bounds of the functions, the programming languages used to implement the functions, the upgrade frequency of the cloud functions, and their interactions with external cloud services.\n\n\\subsubsection{Number of Distinct Functions}\n\\para{Description}\nThe business logic of serverless applications is contained within serverless functions and connects to a variety of managed cloud services. Similarly to microservices, the appropriate granularity of serverless functions is a controversial topic. Opinions range from wrapping each programming function as a serverless function, to mapping each API endpoint to a serverless function, to deploying a full microservice as a single serverless function. In this characteristic we investigate the number of distinct functions within the use cases. As this characteristic targets the development perspective, we count a function that is executed multiple times within an application as a single function. For some use cases, the only information available is along the lines of ``more than X functions'', which we count as X for this analysis.\\\\\n\n\\begin{figure}[ht]\n \\centering\n \\includegraphics[width=0.8\\linewidth]{figures\/results\/number_of_functions.pdf}\n \\caption{Histogram of the number of functions per use case, single outlier at 170 not shown.}\n \\label{fig:number_of_functions}\n\\end{figure}\n\n\\para{Results} About a third (32\\%) of the analyzed use cases consist of only a single function, as shown in\n\\autoref{fig:number_of_functions}.\nFurther, about one-fifth (21\\%) consist of two functions, a tenth (12\\%) of three functions, another tenth (11\\%) of four functions, and 5\\% of five functions. \nLarger sizes are very rare and do not cause a mode in the empirical distribution: there exists in our analysis only one use case for each of the sizes 6, 7, 13, 15, 16, and, notably, 170; there exist only two use cases for each of the sizes 8, 9, and 10. The use case with 170 functions is the back-end for the mobile app of a now defunct start-up. \nOverall, 82\\% of all use cases consist of five functions or fewer. \nFurthermore, 93\\% of the use cases consist of ten functions or fewer. \\\\\n\n\\para{Discussion}\nOur results show that serverless applications use a low number of serverless functions, with 82\\% of all use cases consisting of five or fewer functions and 93\\% consisting of ten or fewer functions. 
There are two potential reasons for this. First, the serverless application models reduces the amount of code developers have to write, as it allows them to focus on business logic while all other concerns are taken care of by the cloud provider and managed cloud services. Secondly, this seems to indicate that developers are currently choosing a rather large granularity for the size of serverless functions. However, determining the optimal granularity for serverless functions is still an open research challenge.\n\n\\subsubsection{Function Runtime}\n\\para{Description}\nThe run time of the cloud functions may have important impact on optimization choices of the serverless frameworks running these functions. We classified the run time of the functions in the use cases as: \\emph{short} (order of milliseconds or seconds) and \\emph{long} (order of minutes).\\\\\n\n\\begin{figure}[ht]\n \\centering\n \\includegraphics[width=0.8\\linewidth]{figures\/results\/function_runtime.pdf}\n \\caption{Function runtime distribution among the surveyed use cases.}\n \\label{fig:function_runtime}\n\\end{figure}\n\n\\para{Results}\nThe majority of the functions in our survey are short (ms, s), 67\\%; only 22\\% of them have a run time in the order of minutes (see Figure~\\ref{fig:function_runtime}). We could not assess this characteristic in 10\\% of the use cases we studied. \nAll but one of the long-running functions is triggered on demand (as opposed to scheduled), with half of them falling into the scientific computing domain.\nThe long-running functions that did not fall within the scientific computing domain, are mostly operations or side-tasks, not business critical.\nFinally, these long functions are not high-volume on demand APIs (only one was classified as such).\\\\\n\n\\para{Discussion}\nThe overhead associated with running a function is larger, in proportion, for the case of functions with short run times.\nThis supports the large number of efforts concentrated in reducing this overhead. \nA limitation of our results is that, as the platforms impose a run time limit in the order of minutes, there may be a bias towards short running functions that would not exist if there were no time limits to the function run times. \\\\\n\n\\subsubsection{Resource Bounds}\n\\para{Description}\nWe wanted to know if the functions' run time were limited by \\emph{I\/O}, \\emph{CPU}, both (\\emph{hybrid}), the \\emph{network}, or an \\emph{external service} (e.g., cloud database).\nInformation on the workload mix can be useful for studies regarding the scheduling of functions and the routing of execution requests. \\\\\n\n\\begin{figure}[ht]\n \\centering\n \\includegraphics[width=0.8\\linewidth]{figures\/results\/resource_bounds.pdf}\n \\caption{Distribution of the resource bounds among the surveyed use cases.}\n \\label{fig:resource_bounds}\n\\end{figure}\n\n\\para{Results}\nMost use cases did not explicitly state this information (\\emph{unknown} is 72\\%, see Figure~\\ref{fig:resource_bounds}). For the use cases that did report this information, the I\/O-bound functions and CPU-bound functions are equally represented in our survey (9\\% I\/O, 8\\% CPU, 6\\% hybrid, 4\\% external service and 1\\% network). \\\\\n\n\\para{Discussion}\nThe percentage of use cases reporting this information is too small for us to derive any statistically significant analysis of the results. 
\\\\\n\n\\subsubsection{Programming Languages}\n\\para{Description}\nThis characteristic refers to the main programming languages used to write code for FaaS functions in a given application.\nFaaS providers typically offer a set of officially supported runtimes (e.g., Node.js for JavaScript).\nThese execution environments of FaaS functions determine the operating system and pre-installed software libraries.\nSome providers support further languages through custom runtimes, often in the form of Docker images following a documented interface.\nNotice that the programming language might differ from the technical function runtime as so called \\emph{shims} can be used to invoke a target language through a wrapper runtime (e.g., invoking C++ through Node.js via system calls). \\\\\n\n\\para{Results}\nJavaScript (32\\%) is the most common programming language for FaaS functions tied with Python (32\\%).\nLess common languages include Java (9\\%), C and C++ (8\\%), C\\# (6\\%), Go (3\\%), and Ruby (1\\%).\nWe were unable to determine the language for 25\\% of the use cases due to lacking technical descriptions. \\\\\n\n\\begin{figure}[ht]\n \\centering\n \\includegraphics[width=0.8\\linewidth]{figures\/results\/languages.pdf}\n \\caption{Programming language distribution among the surveyed use cases. Some use cases use multiple programming languages.}\n \\label{fig:languages}\n\\end{figure}\n\n\\para{Discussion}\nThe ranking of languages in our results follows a general trend also observed in other surveys. \nA study on FaaS industrial practices (N=161) \\cite{leitner2019MixedMethod} and initial results of the latest Serverless Community survey (N=109)\\footnote{Question 25 in \\url{https:\/\/www.nuweba.com\/blog\/serverless-community-survey-2020-results}} indicate that JavaScript is 20\\% more popular than Python on used languages in FaaS applications.\nHowever, they largely follow the same ranking but suggest higher popularity of Java over C\\#.\nOur results are plausible and confirm that JavaScript and Python are the most widely supported programming languages in FaaS.\nFurther, the remaining languages (i.e., Java, C, etc.) all belong to the world's most popular languages according to the TIOBE index\\footnote{\\url{https:\/\/www.tiobe.com\/tiobe-index\/}}, although they are often not the primary choice for the new FaaS paradigm.\n\n\n\\subsubsection{Function Upgrade Frequency} \n\\para{Description}\nHow frequently the code of the functions is updated has implications to software engineering and to the mechanisms used by the framework to upgrade the code in the functions that are already deployed.\nWe used two classification levels for this property: \\emph{rarely} and \\emph{often}; \\emph{unknown} indicates that this information cannot be obtained from the use case. \\\\\n\n\\begin{figure}[ht]\n \\centering\n \\includegraphics[width=0.8\\linewidth]{figures\/results\/upgrade.pdf}\n \\caption{Function upgrade frequency distribution among the surveyed use cases.}\n \\label{fig:upgrade}\n\\end{figure}\n\n\\para{Results}\nMost use cases did not explicitly state this information (unknown is 70\\%, see Figure~\\ref{fig:upgrade}). For the use cases that did report this information, the functions are updated rarely (26\\% rarely, 3\\% often).\\\\\n\n\\para{Discussion}\nThe percentage of use cases reporting this information is too small for us to derive any statistically significant analysis of the results. 
\\\\\n\n\\subsubsection{Use of External Services}\n\\label{subsec:external_services}\n\\para{Description}\nFaaS functions are often integrated into an ecosystem of serverless external services.\nPersistency services include cloud storage for blob data (e.g., Amazon S3 for images) and cloud database for structured data storage and querying (e.g., Amazon DynomoDB or Google Cloud SQL).\nA cloud API gateway exposes HTTP endpoints and can trigger FaaS functions upon incoming HTTP requests.\nMessaging services include cloud pub\/sub for durable asynchronous messaging (e.g., Amazon SNS), cloud queue for reliable FIFO-ordered messaging (e.g., Amazon SQS), and cloud streaming for real-time data ingestion and processing (e.g., Amazon Kinesis).\nCloud logging and monitoring refers to applications that explicitly process log data because we implicitly assume some essential logging infrastructure for FaaS functions (e.g., Amazon CloudWatch for AWS Lambda).\nCloud ML covers machine learning services, such as Amazon Rekognition for image or video analysis.\nNotice that we abstracted from vendor-specific services to cross-platform terminology (e.g., AWS S3 becomes Cloud Storage). \\\\\n\\begin{figure}[ht]\n \\centering\n \\includegraphics[width=0.8\\linewidth]{figures\/results\/external_services.pdf}\n \\caption{Distribution of used external services among the surveyed use cases. Many use cases use multiple external services.}\n \\label{fig:external_services}\n\\end{figure}\n\n\\para{Results}\nFigure~\\ref{fig:external_services} shows that cloud storage (61\\%) and cloud database (47\\%) are the most popular external services, followed by the cloud API gateway (18\\%) and messaging services (10-17\\%).\nFor 12\\% of the use cases, we could not identify any external service integration. \\\\\n\n\\para{Discussion}\nGiven the ephemeral nature of FaaS functions, it is unsurprising that persistency services are the most popular external services, which is consistent with other survey results~\\cite{leitner2019MixedMethod}.\nHowever, the API gateway receives surprisingly little attention, especially when compared to the 46\\% of the use cases using HTTP triggers (see \\Cref{subsec:trigger_types}).\nWe suspect the use of an API gateway is often implicitly assumed, thus not explicitly mentioned, and therefore not comprehensively captured here.\nOverall, we conclude that currently used external services almost exclusively focus on technical aspects (e.g., storage or messaging) and more specialized services (e.g., cloud ML) are very uncommon among our surveyed use cases.\n\n\\subsection{General Characteristics}\n\\label{sec:results:general}\n\nIn this section, we analyse general characteristics of serverless use cases: the supported platform(s) and application types. Furthermore, we check if a serverless use case is in production yet and its availability as open source. Last, we report on the distribution across application domains for the analysed use cases.\n\n\\subsubsection{Platform}\n\\para{Description}\nIn November 2014, Amazon released the first commercial Function-as-a-Service platform with AWS Lambda and started the serverless trend. Two years later in 2016, Microsoft Azure, Google Cloud, and IBM Cloud released their own Function-as-a-Service platforms. There are also a number of open-source Function-as-a-Service platforms, such as Knative, OpenWhisk and OpenLambda. 
Selecting a deployment platform is a major decision for serverless applications, as there is a strong vendor lock-in that makes changing the deployment platform at a later point in time difficult. In this study, we grouped the deployment platforms into \\emph{AWS}, \\emph{Azure}, \\emph{IBM Cloud}, \\emph{Google Cloud}, and \\emph{Private Cloud}. \\\\\n\n\\begin{figure}[h]\n \\centering\n \\includegraphics[width=0.8\\linewidth]{figures\/results\/platform.pdf}\n \\caption{Distribution of deployment platform among the surveyed use cases. Some use cases support multiple deployment platforms.}\n \\label{fig:platform}\n\\end{figure}\n\n\\para{Results}\nAmong the use cases we surveyed, AWS is the clear choice-leader, with 80\\% of the use cases choosing AWS as their deployment platform. The other cloud vendors are far behind, with Azure at 10\\%, IBM at 7\\% and Google Cloud with 3\\%. 8\\% of the use cases use a private cloud, with the majority of them being scientific use cases. A total of five use cases can be deployed across multiple cloud platforms.\\\\\n\n\\para{Discussion}\nThat AWS is by far the most popular choice for serverless deployment among the cases we surveyed can probably be attributed to AWS having a two year head-start with offering this technology as a commercial service.\nA consequence of the earlier head-start is that \nthere was more time to develop and report about serverless applications using AWS serverless technology. Additionally, AWS has the largest market share when it comes to general cloud computing~\\cite{gartner}, which gives it a larger existing user base that can move applications to serverless.\\\\\n\nThe very low adoption of private clouds outside of the scientific workflows is in strong contrast to the large number of open-source Function-as-a-Service frameworks that have been developed. A large appeal of the serverless application model is the reduction of operational concerns, so we hypothesize that the increase in operational concerns that comes with maintaining a fleet of servers and an open-source Function-as-a-Service frameworks is deterring the adoption of these frameworks. Additionally, most serverless applications make use of many managed services (storage, databases, messaging, logging, streaming, etc.) which are not available directly in a private cloud environment.\\\\\n\nIt is interesting that five of the use cases we studied can be deployed across multiple cloud platforms. This goes in contrast to the commonly reported vendor lock-in of serverless computing~\\cite{adzic2017serverless, eivy2017wary}. However, upon closer inspection, three of these use cases are computation frameworks that provide additional layers of abstraction on top of commercial Function-as-a-Service and another one is a monitoring framework that utilizes serverless technologies. Therefore, we conclude that while there are some frameworks that can operate across multiple cloud platforms, most serverless applications can only be deployed on the cloud platform they were initially developed for. This provides further evidence that vendor lock-in exists for serverless applications.\n\n\\subsubsection{Application Type} \n\\label{subsec:application_type}\n\\para{Description}\nWe categorized each use case according to their type of serverless application. 
\nThe motivation behind this was to explore for what kind of tasks a serverless approach is typically employed---whether there are certain application types that are dominant or application types that are notably missing in current use cases.\nTo evaluate this usage aspect, we added the \\textit{Application Type} metric, for which we provided a set of typical application types to group the use cases into:\n\n\\begin{enumerate}\n \\item \\emph{Operations \\& Monitoring:} Consists of the use cases that deploy serverless applications to assist in operating software systems. Examples of such applications include automation of the test or deployment pipelines, failure mitigation or remediation, or controlling the state of running systems. This label supersedes all other labels.\n \\item \\emph{Stream\/async Processing:} Groups the serverless applications that perform an asynchronous task, which also includes any processing of events from an event bus or stream.\n \\item \\emph{Batch Task:} This is a special case of the \\texttt{stream\/async processing} category, which encompasses tasks that are executed in large batches. This label supersedes the \\texttt{stream\/async processing} label.\n \\item \\emph{API:} Contains use cases that employ serverless to implement an API, such as a REST API or a GraphQL API. The exact nature of this API is not relevant here; what matters is that it is called synchronously, so the caller waits for a response.\n \\item \\emph{Unknown:} Denotes the use cases that did not provide enough information about which type of serverless application was employed.\n\\end{enumerate}\n\n\\begin{figure}[ht]\n \\centering\n \\includegraphics[width=0.8\\linewidth]{figures\/results\/application_type.pdf}\n \\caption{Application type distribution among the surveyed use cases.}\n \\label{fig:application_type}\n\\end{figure}\n\\para{Results}\nFigure~\\ref{fig:application_type} provides an overview of the collected results for the \\textit{application type} metric. We find that serverless is commonly used to implement \\texttt{APIs} (28\\%), \\texttt{stream\/async processing} (27\\%), \\texttt{batch tasks} (23\\%), and \\texttt{operations\/monitoring tasks} (20\\%). We were able to determine this characteristic for all but 2\\% of the surveyed applications.\n\n\\para{Discussion}\nServerless is commonly recommended for operations tasks, which is also reflected in our results, as 20\\% of the use cases implement operations and monitoring applications. However, we also find large shares of APIs, asynchronous processing, and batch tasks. 
This shows that serverless is not seen as a niche technology that fits a special use case, but rather as a broadly applicable solution.\\\\\n\n\\subsubsection{In Production}\n\\label{subsec:production}\n\\para{Description}\nAnother characteristic that we analyzed is whether or not a specific use case was actually deployed in production environments.\nOur motivation behind this is to evaluate how reliable the given characteristics actually are, or how representative they are for the real applications running in practice.\nTherefore, this metric can be seen as a kind of validation of our results.\n\nFor this characteristic, we work with three possible values: \n\\begin{itemize}\n \\item \\emph{Yes}: There is clear evidence or statements claiming that the specific use case is already deployed in production.\n \\item \\emph{No}: There is strong evidence that the specific use case is not deployed in production.\n \\item \\emph{Unknown}: There can be no information found to support either ``Yes\" or ``No\".\n\\end{itemize}\nHowever, in our analysis an ``Unknown\" has almost certainly to be seen as a ``No\", as if we can not find evidence supporting that a use case was not applied in production, we have to assume that it is not.\\\\\n\n\\begin{figure}[ht]\n \\centering\n \\includegraphics[width=0.8\\linewidth]{figures\/results\/production.pdf}\n \\caption{Percentage of the survey use cases that are deployed in production.}\n \\label{fig:production}\n\\end{figure}\n\n\\para{Results}\nThe results of this characteristic are shown in Figure~\\ref{fig:production}.\nWe observe that more that half (55\\%) of analyzed use cases are actually designed for and deployed in production.\nRoughly a quarter (29\\%) is not used in production, and for the remaining 16\\% of use cases no clear answer could be found. \nHowever, as discussed above, we should assume that all ``Unknown'' use cases belong to the ``No'' field, leaving us with a total of 45\\% that do not have strong claims supporting their application in production.\\\\\n\n\\para{Discussion}\nAs more than half of the use cases included in this study are actually used in production, this can be seen as a strong indicator that the results of our analysis are representative and that the developed best practices have been applied in practice.\nFurthermore, many of the approaches not actively used in production actually originate from white literature. \nAs most of the white literature papers just present prototypical studies and evaluation use cases, their share of non-production use-cases is significantly higher. \nHowever, this in turn increases the respective share of in-production use cases for grey literature and GitHub Projects.\n\n\\subsubsection{Open Source}\n\n\\para{Description}\nOpen source indicates whether the source code of the FaaS function or application is publicly available.\nOpen source software is a valuable contribution for education, reuse, and testing.\nThis is exemplified through FaaS providers sharing their own vision for high-level reference architectures\\footnote{\\url{https:\/\/aws.amazon.com\/lambda\/resources\/reference-architectures\/}} and fostering cataloging of example applications\\footnote{\\url{https:\/\/aws.amazon.com\/serverless\/serverlessrepo\/}}. 
\\emph{Yes} indicates that a use case is open-source and \\emph{No} indicates that no source code is available.\nNotice that we barely check the availability of open source artifacts and cannot make any claims about completeness, maintenance levels, or use of appropriate licenses for this characteristic.\\\\\n\n\\para{Results} Figure~\\ref{fig:open_source} shows that 53\\% of the use cases are open source and 47\\% are closed source. \\\\\n\n\\para{Discussion}\nOpen source software is typically hosted on GitHub and similarly common for use cases deployed in production.\nInterestingly, open source software is comparably widespread among the 49 use cases deployed in production with 49\\%.\nWe expected a clearer tendency of use cases deployed in production to remain closed source, which is probably due to our selection strategy favoring open source use cases.\n\n\\begin{figure}[ht]\n \\centering\n \\includegraphics[width=0.8\\linewidth]{figures\/results\/open_source.pdf}\n \\caption{Percentage of the survey use cases that are open-source.}\n \\label{fig:open_source}\n\\end{figure}\n\n\\subsubsection{Domain} \n\\para{Description}\nFor each use case, we classified their application domain based as one of the following: \\emph{IoT}, \\emph{entertainment}, \\emph{scientific computing}, \\emph{WebServices}, \\emph{public authority}, \\emph{university}, \\emph{FinTech}, \\emph{cross-domain}, or \\emph{other}.\nCross-domain are those use cases that are generic and could be useful across more than one domain; for example, a generic image identification service which could be used in IoT, scientific computing, WebServices, public authority, and university domains.\n\\\\\n\n\\begin{figure}[ht]\n \\centering\n \\includegraphics[width=0.8\\linewidth]{figures\/results\/domain.pdf}\n \\caption{Distribution of the domain of the surveyed use cases.}\n \\label{fig:domain}\n\\end{figure}\n\n\\para{Results}\nUnsurprisingly, WebServices is the most common application domain in our survey (33\\%, see Figure~\\ref{fig:domain}).\nThis is followed by cross-domain use cases (24\\%), which we mostly found via our GitHub search.\nThe other groups with high representation in our review were scientific computing (16\\%) and IoT (10\\%).\nThe significant presence of scientific computing cases is a result of our conscientious efforts in including scientific computing use cases in our survey.\n\\\\\n\n\\para{Discussion}\nWe find that there is a wide variety of application domains represented in our survey.\nThis is a strength of our study, as others can use our insights to make decisions about the design and implementation of serverless frameworks that are applicable to a broad variety of domains.\n\\subsection{Requirements Characteristics}\n\\label{sec:results:requirements}\nIn the following section, we analyze the different requirements and expectations that users have towards the serverless platforms when moving towards those.\nWe discuss the main motivation drivers, as well as the trade-off between cost and performance, and requirements towards latency and locality of the invocations.\n\n\\subsubsection{Motivation}\n\\label{sec:results:requirements:motivation}\n\\para{Description}\nThis characteristic aims at capturing the motivation of the respective engineers and therefore quantifies why they decided to host their application in a serverless environment.\nFor this, we developed six main motivation fields and grouped each use case into one or multiple of those fields, depending on the motivation the authors gave in the 
description.\nIf no conclusive motivation could be found, we assigned \\emph{Unknown}.\nThe main motivations we found are:\n\\begin{itemize}\n \\item \\emph{Cost}: Running the application in a serverless platform significantly reduces operation cost in comparison to traditional cloud hosting.\n \\item \\emph{NoOps}: Deploying a serverless application has the advantage of saving operation effort.\n \\item \\emph{Scalability}: The increased scalability of serverless platforms is advantageous for the application.\n \\item \\emph{Performance}: The performance of the application, i.e., throughputs and response times, is better when running on a serverless platform.\n \\item \\emph{Simplify Development}: The development cycle as well as the release structure is easier using serverless applications.\n \\item \\emph{Maintainability}: Deploying an application in a serverless cloud saves maintenance effort.\n\\end{itemize}\n\n\\para{Results}\nThe results of our study can be observed in Figure~\\ref{fig:motivation}. \nThe biggest drivers for the adoption of serverless in our use cases are cost (33\\%), the reduced operation effort (24\\%), and the offered scalability (24\\%).\nTwo further significant motivations behind the adoption seem to be the performance benefits (13\\%) and the simplified development (9\\%). \nMaintainability (2\\%), however, only plays a minor role. \nFor 30\\% of the use cases, no specific motivation could be determined.\\\\\n\n\n\\para{Discussion} As the time savings by employing the NoOps paradigm of the serverless platforms can be converted to personnel costs, we observe that saving effort and costs seem to be a bigger contributor to the adoption of serverless than the offered performance and scalability improvements (although these follow closely in second place).\n\nIt is important to note here that there are many common pitfalls which can make serverless functions cost-inefficient. First, right now most providers bill by rounding up the execution time to the nearest 100ms. While this is negligible for most functions, it can be quite inefficient for very short-running functions. For example, if a function runs for 10ms, it is billed for 100ms, which increases the billed duration tenfold. Secondly, most providers offer different function memory sizes and scale the other allocated resources such as CPU, I\/O capacity and network bandwidth accordingly. A recent survey reports that about 50\\% of serverless functions use the minimum size of 128MB~\\cite{datadog}, which is reported to be inefficient for most serverless functions. Thirdly, at a very large scale, the raw infrastructure costs are significantly larger than for a traditional VM-based solution~\\cite{eivy2017wary}. However, one might argue that the total cost of ownership could still be lower for the serverless solution due to the reduced operational overhead. In general, the economic benefits of serverless computing heavily depend on the execution behavior and volumes of the application workload~\\cite{eivy2017wary}.\n\n\n\\begin{figure}[ht]\n \\centering\n \\includegraphics[width=0.8\\linewidth]{figures\/results\/motivation.pdf}\n \\caption{Distribution of the motivation behind adopting serverless among the surveyed use cases. 
Some use cases have multiple motivations.}\n \\label{fig:motivation}\n\\end{figure}\n\n\\subsubsection{Cost\/Performance Trade-off}\n\\label{sec:results:requirements:tradeoff}\n\\para{Description}\nThe cost\/performance trade-off describes whether a use case tends to focus rather on cost optimization (i.e., \\emph{cost-focused}) or rather on performance optimization (i.e., \\emph{performance-focused}).\nThe trade-off is \\emph{undefined} if cost and performance are equally important and \\emph{unknown} if we could find no evidence towards any previously mentioned value in the provided use case description.\\\\\n\n\\begin{figure}[ht]\n \\centering\n \\includegraphics[width=0.8\\linewidth]{figures\/results\/tradeoff.pdf}\n \\caption{Distribution of the cost\/performance trade-off among the surveyed use cases.}\n \\label{fig:tradeoff}\n\\end{figure}\n\n\\para{Results}\nFigure~\\ref{fig:tradeoff} shows that cost is generally more important than performance for 41\\% of the use cases.\nCost-focused use cases are also twice as common compared to performance-focused use case (23\\%).\nFor 15\\% of the use cases, cost and performance are equally important.\nFinally, the trade-off remains unknown for 22\\% of the use cases.\\\\\n\n\\para{Discussion}\nThe clear focus on cost optimization is plausible given that cost is a strong motivation for adopting serverless (see \\Cref{sec:results:requirements:motivation}).\nServerless solutions, such as FaaS, were also associated with lower perceived total cost in another study~\\cite{leitner2019MixedMethod}.\n\n\\subsubsection{Is Latency Relevant?}\n\\para{Description}\nA diversity of use cases comes with a broad spectrum of expectations or even requirements on latency as a central performance metric. So we posed ourselves the question about the relevance of latency across the analysed serverless use cases. For this characteristic, we distinguish between four levels plus \\emph{Unknown}. The levels are:\n\\begin{itemize}\n \\item \\emph{Not important}: For these use cases we found evidence that latency does not play a central role. Delays and variations in latency are acceptable without disturbing the mode of operation. \n \\item \\emph{For complete use case}: Latency plays a relevant role for the whole use case on a level of mostly unspecified expectations on latency and its variations over time, e.g., expected to exhibit a latency for convenient human user interaction.\n \\item \\emph{For parts of the use case}: The use case includes parts where latency is irrelevant and other parts where latency is of concern following the understanding of the level above as mostly unspecified expectation. \n \\item \\emph{Real-time}: We select the level \\emph{real-time} if evidence was found that there are soft latency requirements specified. Replies that take longer than a given upper time limit are becoming useless and are not further processed. This interpretation of \\emph{real-time} is not implying safety critical states when latency requirements are violated. \n\\end{itemize}\n\n\\begin{figure}[ht]\n \\centering\n \\includegraphics[width=0.8\\linewidth]{figures\/results\/latency_relevant.pdf}\n \\caption{Distribution of latency importance among the surveyed use cases.}\n \\label{fig:latency_relevant}\n\\end{figure}\n\n\\para{Results}\nFigure~\\ref{fig:latency_relevant} shows that in more than one third (36\\%) of the use cases, latency does not play a role. 
On the other side, for 58\\% of the analysed use cases, latency is of relevance (joining the respective levels). In 27\\% this is only for parts of the use cases. \nThe portion of use cases with real-time requirements on latency is small with only 2\\%.\nNo clear assignment of was reasonable for 3\\% of use cases. \\\\\n\n\\para{Discussion}\nServerless computing is especially convenient for triggered or scheduled background tasks that need to run from time to time without any latency requirements. But issues around function cold-starts and limited function life time have not shown to be a showstopper for serverless uses cases that expect a certain degree of stable latency, e.g. for a smooth interaction with human users. Also, over time, we would expect more examples for serverless use cases that come with stricter soft-real-time requirements as the platforms continue to mature. We doubt that serverless computing will accommodate use cases in production with hard real-time requirements and safety critical implications in case of violations. \n\n\n\\subsubsection{Locality Requirements}\n\n\\para{Description}\nMigration of any application from dedicated servers in a possibly self-owned data center to a compute infrastructure managed by a cloud provider reduces the control over the locality where the code runs and data is persisted. From the early days of cloud computing on this remains still a possible issue or even show-stopper. \nWe analyse the serverless use cases if requirements on locality are imposed. The reasons for locality requirements can differ broadly from regulatory to performance related ones.\nHere, we distinguish between four levels of locality requirements plus the case ``unknown'': \n\n\\begin{itemize}\n \\item \\emph{None}: There is evidence that for the given use case, no locality requirements are imposed.\n \\item \\emph{Multi-region}: The serverless application is or should be deployed in multiple regions, e.g. for improved latency or in tailored variations for specific geographic regions.\n \\item \\emph{Specific-region}: The serverless applications is required to run in a specific region.\n \\item \\emph{Edge}: The application or parts of it should run closer to a user or IoT device in an edge infrastructure\n\\end{itemize}\n\n\\begin{figure}[ht]\n \\centering\n \\includegraphics[width=0.8\\linewidth]{figures\/results\/locality_requirement.pdf}\n \\caption{Locality requirement distribution among the surveyed use cases.}\n \\label{fig:locality_requirement}\n\\end{figure}\n\n\\para{Results}\nWhile we have unclear or unspecified locality requirements for 34\\% of the use cases, Figure~\\ref{fig:locality_requirement} shows that the biggest portion (44\\%) comes with no locality requirements. For 21\\% of the use cases, we found locality requirements. Out of those, 8\\% are deployed across regions, while 10\\% are run in specific regions. The remaining 3\\% are tailored serverless solutions for edge computing. \\\\\n\n\n\\para{Discussion}\nAs serverless technologies and applications are maturing, we expect to see more business-critical elements of serverless applications in daily operation. The portion of serverless use-cases that comes with region-specific requirements will grow respectively. Furthermore, the use of serverless technologies for Edge computing can be seen as a trend of growing importance. Thus, we think it is likely to see a growing importance of the locality requirement ``Edge''. 
At the current time, for the dominating part of use cases, locality requirements are apparently not specified or not given yet. \n\\subsection{Workflow Characteristics}\n\\label{sec:results:workflow}\\label{sec:results:structural}\n\nMany serverless use cases cannot use a single serverless function to meet their functional and non-functional requirements. \nInstead, such use cases require the execution of multiple functions, \nexpressed and orchestrated as \\textit{serverless workflows}. In this section, we investigate the characteristics of serverless workflows. However, not all use cases include workflows. Thus, we first investigate in Section~\\ref{sec:results:wf:is} which use cases are based on serverless workflows, and from then on we only report results for the use cases that do (the \\textit{workflow use cases}).\n\\newline\n\n\\subsubsection{Is it a Workflow?}\\label{sec:results:wf:is}\n\\para{Description}\nWe evaluate here the prevalence of serverless workflows among the surveyed use cases. A use case is categorized as a workflow (bar \\textit{Yes} in Figure~\\ref{fig:is_workflow}) if for a part or all of its functionality multiple serverless functions are needed. If not, the use case is not based on a workflow (\\textit{No}). Use cases where this could not be determined are assigned \\textit{Unknown}. \\\\\n\n\\begin{figure}[ht]\n \\centering\n \\includegraphics[width=0.8\\linewidth]{figures\/results\/is_workflow.pdf}\n \\caption{Percentage of use cases including workflows, among the surveyed use cases.}\n \\label{fig:is_workflow}\n\\end{figure}\n\n\\para{Results}\nAs depicted in Figure~\\ref{fig:is_workflow}, we observe that nearly a third (31\\%) of the use cases include serverless workflows. The other use cases (69\\%) are simple enough that one or a couple of independent serverless functions can fully provide the desired functionality. No use case was labeled Unknown.\\\\\n\n\\para{Discussion}\nThe relative prevalence of workflows in use cases is important, as it hints that serverless use cases are getting more and more complex. The evolution of use cases in fields such as grid computing and more recently cloud computing is indicative that, once workflows become acceptable practice, they become increasingly more prevalent~\\cite{hey2009fourth,isom2012your,deelman2018future}. Interesting too is the lack of use cases categorized as Unknown, which indicates that the presence or absence of workflows for any relevant use case is one of the clearest questions to answer. \n\n\\subsubsection{Workflow Coordination}\\label{sec:results:wf:coord}\n\n\n\\para{Description}\nThe use of workflows currently does not constrain the method used for orchestration. There are various approaches to ensure that tasks---or serverless functions---are executed in a coordinated way, e.g., functions can use events to trigger the start of a new task or for other purposes, a task can act as a coordinator for a specific workflow structure, or a workflow engine can orchestrate arbitrary workflows. To evaluate which of these approaches is prevalent, we surveyed their use across workflow use cases. More formally, for this part we categorized orchestration techniques as follows: \n\n\\begin{enumerate}\n \\item \\emph{Event} groups all use cases that rely on event-driven orchestration. In this approach, workflows are constructed by configuring functions to be triggered to execute on the arrival of the completion (or failure) events of other functions. 
Typically, this method requires functions to explicitly listen for and publish their results or errors to a message queue---though in some cases platforms this functionality is built-in and no explicit interaction with an external message queue is needed. \n \n \\item \\emph{Local coordinator} groups all the applications which rely on programmed, user-side logic to take care of the orchestration. An application running on the user's machine, such as a GUI or the client-side JavaScript running on a web page, invokes the functions in the appropriate order, and ensures that each function is executed with the correct configuration and input data.\n \n \\item \\emph{Workflow engine} contains the use cases that delegate the coordination to a dedicated workflow management system. This workflow engine has functionality to ensure the correct orchestration, along with higher-order concerns, such as (data) provenance, monitorability, and task scheduling optimizations. The workflows typically need to be specified in a consistent format using a set of workflow primitives that are supported by the workflow engine. Compared to the local coordinator, a workflow engine can also be seen as an external, persistent coordinator.\n \n \\item \\emph{Unknown} captures the use cases where we could not determine the coordination approach, for example because of lacking or lack of documentation.\n\\end{enumerate}\n\n\\begin{figure}[ht]\n \\centering\n \\includegraphics[width=0.8\\linewidth]{figures\/results\/coordination.pdf}\n \\caption{Distribution of workflow coordination approach among the surveyed use cases.}\n \\label{fig:coordination}\n\\end{figure}\n\n\\para{Results}\nAs depicted in Figure~\\ref{fig:coordination}, half of the serverless workflows rely on events for the coordination (50\\%). Slightly less prevalent, about one-third (32\\%) of the use cases rely on a dedicated workflow engine to ensure correct coordination. Only a few (2\\%) defer the coordination to a local coordinator. For 16\\% of the workflow-based use cases the approach could not be determined.\\\\\n\n\\para{Discussion}\nOur results indicate that event-driven workflows are currently most prevalent. Inspecting the individual use cases, we find that this is in part caused by implicit workflows; use cases that do not explicitly construct workflows, but instead configure the function triggers in such a way that these form simple pipelines. While this approach can address simple workflows and especially chains of a few tasks,\nexperience from the fields of grid and cloud computing indicates using this approach will not scale to future workflows. We further find that, as the workflows grow in complexity, workflow engines are more often used to coordinate the workflows. In contrast to cloud-side coordination techniques, the use of local coordinators is unpopular because it distributes to the user more complex logic and, in part, because it is difficult to maintain. Furthermore, such an approach could be less reliable in operation -- cloud-side coordination techniques and workflow engines are carefully engineered for fault-tolerance, which significantly exceeds the typical development effort of local coordinators.\n\n\n\\subsubsection{Workflow Structure}\\label{sec:results:wf:structure}\n\n\\para{Description}\nThe complexity of a workflow is mostly determined by its structure. A {\\it bag of tasks} is a simple workflow (in mathematical terms: a degenerate workflow), which consists of a set of tasks that can be executed in any arbitrary order. 
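As a minimal illustration (hypothetical code and names, not taken from any surveyed use case), such a bag of tasks maps naturally onto independent asynchronous function invocations, sketched here with an AWS-style client:\n\\begin{verbatim}\n# Hypothetical sketch of a bag of tasks: the same cloud function is\n# invoked asynchronously once per work item; the invocations are\n# independent and may complete in any order.\nimport json\nimport boto3  # assumes AWS credentials and a region are configured\n\nlambda_client = boto3.client('lambda')\n\ndef submit_bag_of_tasks(image_keys):\n    for key in image_keys:\n        lambda_client.invoke(\n            FunctionName='resize-image',   # placeholder function name\n            InvocationType='Event',        # asynchronous invocation\n            Payload=json.dumps({'key': key}),\n        )\n\nsubmit_bag_of_tasks(['cat.jpg', 'dog.jpg', 'bird.jpg'])\n\\end{verbatim}\n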
\nAnother common workflow structure is the {\\it sequential workflow}, where all tasks need to be executed sequentially. \nWe further define {\\it complex workflows} as workflows that include significantly more complex structure than the previous types, including (multi-stage) gather and scatter operations, workflows with conditional execution of (some) tasks, workflows with loops, and fully dynamic workflows.\\\\\n\n\\begin{figure}[ht]\n \\centering\n \\includegraphics[width=0.8\\linewidth]{figures\/results\/workflow_structure.pdf}\n \\caption{Distribution of the workflow structure among the surveyed use cases.}\n \\label{fig:workflow_structure}\n\\end{figure}\n\n\\para{Results}\nFigure~\\ref{fig:workflow_structure} depicts the results.\nSequential workflows are the most popular workflow structure for serverless applications, with 40\\% of the workflow use cases including them. \nIncluding sequential workflows and bags of tasks (14\\%), over half (54\\%) of the workflow use cases only include non-complex workflows. \nAbout one-quarter (26\\%) of the serverless applications are complex workflows. \nLast, for over one-fifth (21\\%) of the workflow use cases we were not able to determine the workflow structure.\\\\\n\n\\para{Discussion}\nFor serverless applications, simple workflow structures (bag of tasks and sequential workflows) are more than twice as common as more complex workflow structures. We hypothesize that this is because serverless applications are currently mostly used for comparatively simple tasks and rarely for complex data analysis.\nAnother possible explanation is the lack of workflow engines (Section~\\ref{sec:results:wf:coord}); it is difficult to orchestrate arbitrarily complex workflows without such an engine. \\\\\n\n\nThe lack of bags of tasks can probably attributed to the fact that serverless applications come with built-in scalability when the functions can be conveniently executed in parallel. For example, resizing a collection of images can be conveniently implemented as a bag of many tasks, where each task invokes the same image-resizing function. In this case, the serverless platform has the capability to execute this case, without further need for orchestration from the user.\n\n\\subsubsection{Workflow Size} \\label{sec:results:wf:size}\n\n\\para{Description} Next, we study workflow size, expressed as the number of tasks in the workflow. \nWe aggregate all use cases into three groups:\n\\begin{enumerate}\n \\item \\textit{Small workflows}, containing 2--10 functions,\n \\item \\textit{Medium-size workflows}, invoking 10--1000 functions, and\n \\item \\textit{Large workflows}, comprised of more than 1000 function invocations. \n\\end{enumerate}\n\n\n\\begin{figure}[ht]\n \\centering\n \\includegraphics[width=0.8\\linewidth]{figures\/results\/workflow_size.pdf}\n \\caption{Workflow size distribution among the surveyed use cases.}\n \\label{fig:workflow_size}\n\\end{figure}\n\n\\para{Results} Figure~\\ref{fig:workflow_size} depicts the results of our analysis. \nThe majority (59\\%) of analyzed workflows are small workflows.\nAround one fifth of use cases (19\\%) are medium-size, and only few (3\\%) qualify as large workflows.\nNearly one-fifth of the workflows (19\\%) could not be assigned to one of the groups. \\\\\n\n\\para{Discussion} Our results suggest that a majority of workflow executions are small; because these workflows are only composed of ten or less individual function executions, they are also relatively short-lived. 
This is consistent with the characteristics of early workflows in engineering and in scientific prototypes~\\cite{conf\/cgiw\/OstermannIPFE08}. Only about one-fifth of the workflows are medium or large-sized. Similarly to the previous section, we hypothesize that orchestrating workflows of this size is dependent on the presence of an automated facility, such as a workflow engine. (The other hypothesis introduced in Section, that serverless workflows are currently used for relatively simpler tasks, does not limit the size of the workflow -- in the earlier example, the image-resizing workflow can run 10,000s or even 100,000s of functions~\\cite{bharathi2008characterization, deelman2018future}.)\n\n\n\\subsubsection{Workflow Internal Parallelism}\n\n\\para{Description}\nFor those use cases in which a workflow of serverless functions was present, we further analyzed whether they present \\emph{internal parallelism}---at least an instance of multiple functions running in parallel---or not. \\\\\n\n\\begin{figure}[ht]\n \\centering\n \\includegraphics[width=0.8\\linewidth]{figures\/results\/parallelism.pdf}\n \\caption{Percentage of workflows with internal parallelism among the surveyed use cases.}\n \\label{fig:parallelism}\n\\end{figure}\n\n\\para{Results}\nFrom Figure~\\ref{fig:parallelism}, we observe that most workflows (52\\%) exhibit internal parallelism; about one-third (31\\%) of the workflows are simpler, exhibiting no internal parallelism.\nWe could not obtain this information (\\emph{unknown}) for 16\\% of the workflows. \\\\\n\n\\para{Discussion}\nThe high prevalence of workflows with at least some level of internal parallelism calls for workflow managers that are native to---or well integrated with---the serverless framework, to facilitate workflow composition and management yet deliver parallelism with low overhead.\n\\\\\n\\subsection{Workload Characteristics}\n\\label{sec:results:workload}\n\nThis section characterizes the nature of workloads imposed on serverless applications through human users or technical invokers.\nIn the following, we discuss execution patterns, burstiness, trigger types, and common data volumes of our reviewed use cases.\n\n\\subsubsection{Execution Pattern}\n\n\\para{Description}\nFunctions or workflows of functions can be triggered \\emph{on-demand} as a direct result of a user interacting with the application, or they can be \\emph{scheduled} to be run at specific times.\nFor the on-demand workflows, we further classify them as regular \\emph{on-demand} or \\emph{high-volume on-demand}.\\\\\n\n\\begin{figure}[ht]\n \\centering\n \\includegraphics[width=0.8\\linewidth]{figures\/results\/execution_pattern.pdf}\n \\caption{Execution pattern distribution among the surveyed use cases. Some use cases are executed both on a schedule and on-demand.}\n \\label{fig:execution_pattern}\n\\end{figure}\n\n\\para{Results}\nMost (workflows of) functions are triggered on-demand, with scheduled triggers being used in only 17\\% of the use cases we analyzed (see Figure~\\ref{fig:execution_pattern}).\nOut of the on-demand execution patterns, close to half are high-volume, business-critical invocations. 
\\\\\n\n\n\\para{Discussion}\nThe high prevalence of high-volume on-demand triggers calls for special study in minimizing function start-up times, and auto-scaling mechanisms.\nIn addition, half of the workflows that are scheduled fall into the application type category \\emph{operations} (see \\cref{subsec:application_type}), highlighting how the serverless model has been adopted---in many cases---to automate operations, software management, and DevOps pipelines.\n\\\\\n\n\\subsubsection{Burstiness}\n\\para{Description}\nThe workload of a function can be either bursty or non-bursty.\nA bursty workload follows a workload pattern that includes certain sudden and unexpected load spikes, or alternatively a significant amount of sustained noise and variation in intensity.\nWe classify a use case as bursty (i.e., \\emph{yes} for burstiness) if its workload typically includes or can include burst patterns in some situations and as non-bursty (i.e, \\emph{no}) if the workload is almost guaranteed to never receive burst patterns (e.g., if all executions are scheduled and known in advance).\nIf the burstiness of a use case is \\emph{unknown}, then the use case was either under-specified, or can be both bursty or non-bursty depending on the specific area of application.\nNote that in any scenario that involves a set of human users, we consider the workload pattern to be bursty, as user behavior can almost never the scheduled or reliably controlled, leading to a possibly bursty behavior.\\\\\n\n\\begin{figure}[ht]\n \\centering\n \\includegraphics[width=0.8\\linewidth]{figures\/results\/bursty.pdf}\n \\caption{Percentage of the survey use cases that have a bursty workload.}\n \\label{fig:bursty}\n\\end{figure}\n\n\\para{Results}\nFigure~\\ref{fig:bursty} depicts that a large majority (81\\%) of the analyzed workload patterns are classified as bursty.\nAdditionally, a 16\\% share have a clear non-bursty workload pattern, while a small minority of 3\\% could not be attributed to be either bursty or non-bursty.\\\\\n\n\\para{Discussion}\nAs one of the strengths of serverless computing is its seamless and almost infinite scalability, together with the general ease of operations it comes as no surprise that most of the use cases indeed experience bursty workload patterns.\nEarly adopters that want to test out the new emerging paradigm are more likely to choose a use case that is optimized for the serverless offering. 
\nAt the same time, engineers that have been struggling with bursty workloads and face regular performance issues are also more likely to migrate or adopt their application towards the a serverless clouds than applications that run smoothly.\n\nAn interesting comparison here would be to compare the share of bursty workloads executed on serverless platforms versus the share of bursty workloads of conventional applications.\nHowever, we can still conclude that a large majority of use cases designed for or applied to serverless platforms are experiencing bursty workloads and hence make use of the seamless elasticity that these services offer.\n\n\n\\subsubsection{Trigger Types}\n\\label{subsec:trigger_types}\n\n\\para{Description}\nTrigger types refer to alternative ways of invoking a FaaS function and are closely related to external services (see \\Cref{subsec:external_services}).\nAn \\emph{HTTP request} can trigger a FaaS function, which then processes the request and generates an HTTP response.\nThis HTTP routing is often implemented through API gateways.\nA \\emph{cloud event} describes a state change happening in a connected cloud service, such as a file upload to cloud storage or a modified value in a cloud database.\nSuch cloud events can be configured to trigger new function executions.\nA \\emph{scheduled} trigger invokes a FaaS function at a defined and potentially recurring time.\nThe category of \\emph{manually} triggered functions refers to human-initiated executions typically executed on-demand.\nNotice that some use cases combine multiple trigger types and thus the sum of proportions exceeds 100\\%. \\\\\n\n\\para{Results}\nFigure~\\ref{fig:trigger} reveals that the most common trigger types are HTTP request (46\\%) and cloud event (39\\%).\nFar less common are scheduled (12\\%) and manual (9\\%) execution triggers.\nWe were unable to derive the trigger type for 3\\% of the use cases from their insufficient descriptions.\\\\\n\n\\begin{figure}[ht]\n \\centering\n \\includegraphics[width=0.8\\linewidth]{figures\/results\/trigger.pdf}\n \\caption{Trigger type distribution among the surveyed use cases. Some use cases have multiple trigger types. }\n \\label{fig:trigger}\n\\end{figure}\n\n\\para{Discussion}\nWe compare our results to the trigger types reported for the production workload of Microsoft Azure functions~\\cite{shahrad2020serverless}.\nScheduled triggers and HTTP triggers are both 16-20\\% more common among Azure FaaS applications in comparison to our analyzed use cases focusing on AWS (85\\%).\nThe results for the remaining categories very closely (<=2\\%) match (after mapping some cumulative categories).\nWe conclude that the order of categories is in line with current production workloads reported for Azure but note that some values might be higher in practice.\nSuch an underestimation is plausible given that we derive our results from potentially incomplete sources.\nHowever, the explicit grouping of functions into applications in the Azure FaaS implementation possibly leads to different function groupings compared to our AWS-focused use cases.\n\n\\subsubsection{Data Volume}\n\n\\para{Description}\nThe data volume defines what load will be on network and storage devices. 
\nThe motivation here is to analyze whether there are any clusterings of data usage or certain patterns that are generally avoided.\nWe categorized the different use cases into five different categories: Volumes of \\emph{less than 1 MB} per execution, \\emph{less than 10 MB}, \\emph{less than 100 MB}, \\emph{less than 1 GB}, and \\emph{more than 1 GB}.\nAdditionally, there is also the \\emph{unknown} category, if the data volume could not be assessed.\nNote that the data volume refers to executions of the entire workflow.\nFurthermore, as exact numbers were seldom found in the sources, this categorization is often based on the estimate of our reviewers.\\\\ \n\n\\begin{figure}[ht]\n \\centering\n \\includegraphics[width=0.8\\linewidth]{figures\/results\/data_volume.pdf}\n \\caption{Data volume distribution among the surveyed use cases.}\n \\label{fig:data_volume}\n\\end{figure}\n\n\\para{Results}\nFigure~\\ref{fig:data_volume} depicts the distribution of use cases among the different classes. \nAlmost half of the use cases (44\\%) fall in the smallest category of data volumes of less than 1 MB. \nThe second category, transmitting more than 1 MB but less than 10 MB of data, accounts for 13\\% of the use cases.\nEven fewer use cases (3\\%) have a data volume between 10 and 100 MB. \nHowever, the following group, between 100 MB and 1 GB, is again more popular, and finally the share of use cases transmitting 1 GB or more to the serverless platform rises to 13\\%, tying with the 1--10 MB category as the second-largest group.\nAdditionally, 18\\% of the use cases could not be assigned a specific data volume and therefore do not count towards any of the enumerated groups.\\\\\n\n\\para{Discussion}\nGenerally, the data volumes are relatively evenly distributed and do not cast a clear picture.\nThere are use cases for every data volume category; therefore, platforms should not strive to optimize themselves towards any specific limitation here.\nThat said, most of the use cases transmit less than 1 MB of data per workflow execution. \nNote that this group also includes all use cases that might not send any data at all. \nTherefore, the large majority of serverless use cases that we surveyed does not work with large amounts of data. \nHowever, there is also the exact opposite group of use cases working with vast amounts of data of 1 GB and more per workflow execution.\n\\section{Analysis Results: On the Characteristics of Serverless Use Cases}\n\\label{sec:results}\n\nWe describe in this section the results of our characterization and analysis of serverless use cases. Overall, we cover a diverse set of characteristics, identifying the values commonly used in practice and further analyzing their impact on serverless practice.\n\n\\subsection{Main Findings}\n\\label{sec:results:main}\n\n\nOur main findings are:\n\\begin{enumerate}\n\n \\item \\emph{General Characteristics:} We find AWS to be the currently dominating platform for serverless applications (80\\%). The dominating application domain is web services (33\\%), with 40\\% of the analysed workloads being business-critical and at least 55\\% of them in production already.\n \n \\item \\emph{Application Characteristics:}\n 82\\% of all use cases consist of applications that use five or fewer different functions. Most (67\\%) of these functions are short-running, with running times in the order of milliseconds or seconds. 
JavaScript and Python are the most used programming languages for cloud functions (each used by 32\\% of the cases we studied).\n These applications depend on a wide variety of cloud services, with the three most used ones being cloud storage (used by 61\\% of the applications) and cloud database (47\\%); cloud API gateway (18\\%) and cloud pub-sub (17\\%) are also widely used.\n \n \\item \\emph{Requirements Characteristics:} The reduced operation cost of serverless platforms (33\\%), the reduced operation effort (24\\%), the scalability (24\\%), and performance gains (13\\%) are the main drivers of serverless adoption. \n In comparison, cost savings seems to be a stronger motivator than the performance benefits. \n At the same time, 58\\% of use cases have latency requirements, 2\\% even have real-time demands, while only 36\\% are latency insensitive.\n Locality requirements are only relevant for 21\\% of the total use cases.\n \n \\item \\emph{Workload Characteristics:}\n 81\\% of the analyzed use cases exhibit bursty workloads.\n This highlights the overall trend of serverless workloads to feature unpredictable on-demand workloads, typically triggered through lightweight (<1MB) HTTP requests.\n\n \\item \\emph{Workflow Characteristics:}\n Although the presence of workflows is already sizable (31\\% of the use cases), most workflows are of simple structure, small, and short-lived. \n This is likely to change, as demand follows natural trends and orchestration methods move toward (cloud-native) workflow engines.\n \n\\end{enumerate}\n\n\\input{sections\/results\/General.tex}\n\n\\input{sections\/results\/Workload.tex}\n\n\\input{sections\/results\/Application.tex}\n\n\\input{sections\/results\/Requirements.tex}\n\n\\input{sections\/results\/Workflow.tex}\n\\section{Threats to Validity}\n\\label{sec:validity}\\label{sec:threats}\nWe discuss potential threats to validity and mitigation strategies for internal validity, construct validity, and external validity.\n\n\\subsection{Internal Validity}\nManual data extraction can lead to inaccurate or incomplete data.\nTo mitigate this threat, we established and discussed a review protocol prior to reviewing, continuously discussed upcoming questions during the review process, and performed redundant reviews through multiple reviewers.\nIn our review protocol, we established an exhaustive list of potential values for each characteristic and configured automated validation, which immediately highlighted deviations from these values.\nFor characteristics with thematic coding, we continuously refined their values in regular meetings during the review process.\nTo address potential individual bias, we performed two independent reviews for each use case, quantified the inter-rater agreement after an initial review round through Fleiss' Kappa, and resolved each disagreement in an extended discussion and consolidation phase.\n\n\\subsection{Construct Validity}\nTo align the goal of this study (i.e., comprehensive understanding of existing serverless use cases) with the data extraction, we compiled a list of 24 characteristics covering 5 different aspect groups.\nWe conducted and discussed this selection process together in an international working group with authors from 5 different institutions but other researchers might consider different characteristics as relevant.\n\n\\subsection{External Validity}\nOur study was designed to cover use cases from open source projects, white literature, and grey literature but we cannot claim generalizability to 
all serverless use cases.\nFor open source projects, we filtered non-trivial projects from the most popular open source repository (i.e., GitHub) but might have missed projects published in other repositories.\nHowever, we are unaware of such other repositories and also did not discover any among our other use cases from white and grey literature.\nOur white literature collection is based on a curated dataset on serverless literature and complemented with articles known to the authors but we might have missed more recent articles uncovered in the dataset and unknown to all authors.\nGrey literature use cases mostly focus on provider-reported case studies, an existing collection of grey literature use cases, and sources known to the authors.\nWe only partially cover corporate use cases as many of them remain unpublished and others provide insufficient details to conduct a meaningful review, which is similar to FaaS platforms~\\cite{DBLP:journals\/internet\/EykIGEBVTSHA19}.\nOur scientific computing use cases are limited to the aerospace domain originating from a national aerospace institution.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{INTRODUCTION}\n\nHumans' unique abilities such as adaptive behavior in dynamic environments, and social interaction and moral judgment capabilities, make them essential elements of many control loops. On the other hand, compared to humans, automation provides higher computational performance and multi-tasking capabilities without any fatigue, stress, or boredom \\cite{Noth16,Kor15}. Although they have their own individual strengths, humans and automation also demonstrate several weaknesses. Humans may have anxiety, fear and may become unconscious during an operation. Furthermore, in the tasks that require increased attention and focus, humans tend to provide high gain control inputs that can cause undesired oscillations. One example of this phenomenon, for example, is the occurrence of pilot induced oscillations (PIO), where undesired and sustained oscillations are observed due to an abnormal coupling between the aircraft and the pilot \\cite{YilKol10, Yil11a,AcoYil14,TohYil18}. Similarly, automation may fail due to an uncertainty, fault or cyber-attack \\cite{Li14}. Thus, it is more preferable to design systems where humans and automation work in harmony, complementing each other, resulting in a structure that benefits from the advantages of both.\n\nTo achieve a reliable human-automation harmony, a mathematically rigorous human operator model is paramount. A human operator model helps develop safe control systems, and provide a better prediction of human actions and limitations \\cite{Hul13,Yuc18,Emre19, Zha19}. \nQuasi-linear model \\cite{Mcr57} is one of the first human operator models, which consists of a describing function and a remnant signal accounting for nonlinear behavior. An overview of this model is provided in \\cite{Mcr74}. In some applications, where the linear behavior may be dominant, the nonlinear part of this model can be ignored, and the resulting lead-lag-type compensator is used in closed loop stability analysis \\cite{Neal71}. The crossover model, proposed in \\cite{Mcr63}, is another important human operator model in the aerospace domain. It is motivated from the empirical observations that human pilots adapt their responses in such a way that the overall system dynamics resembles that of a well designed feedback system \\cite{Bee08}. 
A generalized crossover model which mimics human behavior when controlling a fractional order plant is proposed in \\cite{Mar17}. In \\cite{War16}, crossover model is employed to provide information about the human intent for the controller. In \\cite{Gil09}, the dynamics of the operator is represented as a spring-damper-mass system. \n\n\nControl theoretical operator models drawing from the optimal and adaptive control theories are also proposed by several authors. \nOptimal human models are based on the idea that a well trained human operator behaves in an optimal manner \\cite{Wier69, Kle70, Na12, Lon13, Hu19}. On the other hand, adaptive models, such as the ones proposed in \\cite{Hess09, Hess15} and \\cite{TohYil19Human}, aim to replicate the adaptation capability of humans in uncertain and dynamics environments. In \\cite{Hess09} and \\cite{Hess15}, adaptation rules are proposed based on expert knowledge. The adaptive model proposed in \\cite{Hess09} is applied to change the parameters of the pilot model based on force feedback from a smart inceptor \\cite{Xu19}. \nA survey on various pilot models can be found in \\cite{Lon14} and \\cite{Xu17}.\n\n\n\n\n\n Several approaches are also developed for human model parameter identification. In \\cite{Zaal11}, a two-step method using wavelets and a windowed maximum likelihood estimation is exploited for the estimation of time-varying pilot model parameters. In \\cite{Duar17}, a linear parameter varying model identification framework is incorporated to estimate time-varying human state space representation matrices. Subsystem identification is used in \\cite{Zha15} to model human control strategies. In \\cite{Van15}, a human operator model for preview tracking tasks is derived from measurement data.\n\n\nIn this paper, we build upon the earlier successful pilot models and propose an adaptive human pilot model that modifies its behavior based on plant uncertainties. This model distinguishes itself from earlier adaptive models by having mathematically derived laws to achieve a cross-over-model-like behavior, instead of employing expert knowledge. This allows a rigorous stability proof, using the Lyapunov-Krasovskii stability criteria, of the overall closed loop system. To validate the model, a setup, including a joystick and a monitor, is used. The participant data collected through this experimental setup is subjected to visual and statistical analyses to evaluate the accuracy of the proposed model. Initial research results of this study were presented in \\cite{TohYil19Human}, where the details of the mathematical proof and human experimental validation studies were missing.\n\n\nThis paper is organized as follows. In Section \\ref{sec:prob}, the problem statement is given. Obtaining reference model parameters, which determine the properties of the cross-over model, is discussed in Section \\ref{sec:reference}. Section \\ref{sec:human} presents the human control strategy together with a stability analysis. Experimental set-up, results, and a statistical analysis are provided in Section \\ref{sec:experiment}. 
Finally, a summary is given in Section \\ref{sec:conclusion}.\n\n\n\\section{PROBLEM STATEMENT}\\label{sec:prob}\n\nAccording to McRuer's crossover model \\cite{Mcr67}, human pilots in the control loop behave in a way that results in an open loop transfer function\n\\begin{equation}\\label{eq:e1x}\nY_{OL}(s)=Y_{h}(s)Y_{p}(s)=\\frac{\\omega_ce^{-\\tau s}}{s},\n\\end{equation}\nnear the crossover frequency, $ \\omega_c $, where $ Y_{h} $ is the transfer function of the human pilot and $ Y_{p} $ is the transfer function of the plant. $ \\tau $ is the effective time delay, including transport delays and high frequency neuromuscular lags.\n\nConsider the following plant dynamics\n\\begin{equation}\\label{eq:e2x}\n\\dot{x}_p(t)=A_px_p(t)+B_p u_p(t),\n\\end{equation}\nwhere $ x_p\\in \\mathbb{R}^{n_p} $ is the plant state vector, $ u_p\\in \\mathbb{R} $ is the input vector, $ A_p\\in \\mathbb{R}^{n_p\\times n_p} $ is an unknown state matrix and $ B_p\\in \\mathbb{R}^{n_p} $ is an unknown input matrix.\n\nThe human \\textit{neuromuscular model} \\cite{Mag71, Van04} is represented in state space form as\n\\begin{equation}\\label{eq:e3x}\n\\begin{aligned}\n\\dot{x}_h(t)&=A_hx_h(t)+B_hu(t-\\tau) \\\\\ny_h(t)&=C_hx_h(t)+D_hu(t-\\tau),\n\\end{aligned} \n\\end{equation}\nwhere $ x_h\\in \\mathbb{R}^{n_h} $ is the neuromuscular state vector, $ A_h\\in \\mathbb{R}^{n_h\\times n_h} $ is the state matrix, $ B_h\\in \\mathbb{R}^{n_h} $ is the input matrix, $ C_h\\in \\mathbb{R}^{1\\times n_h} $ is the output matrix and $ D_h\\in \\mathbb{R} $ is the control output matrix. $ u\\in \\mathbb{R} $ is the neuromuscular input vector, which represents the control decisions taken by the human and fed to the neuromuscular system, $ y_h\\in \\mathbb{R} $ is the output vector, and $ \\tau\\in \\mathbb{R}^+ $ is a known, constant delay. The neuromuscular model parameters are assumed to be known and the output of the model, $ y_h $, is used as the plant input $ u_p $ in (\\ref{eq:e2x}), that is $ y_h=u_p $ (see Fig. \\ref{fig:f1-2}). \n\\begin{figure}\n\t\\vspace{0.3cm}\n\t\\begin{center}\n\t\t\\includegraphics[width=9.0cm]{Blockdiagxx.jpg} %\n\t\t\\caption{The block diagram of the human adaptive behavior and decision making in a closed loop system.} \n\t\t\\label{fig:f1-2}\n\t\\end{center}\n\\end{figure}\n\n\nBy aggregating the human pilot and plant states, we obtain the combined open loop human neuromuscular and plant dynamics as\n\\begin{equation}\\label{eq:e4x}\n\\begin{aligned}\n\\underbrace{\\begin{bmatrix}\n\t\\dot{x}_h(t) \\\\ \\dot{x}_p(t)\n\t\\end{bmatrix}}_{\\dot{x}_{hp}(t)}&=\n\\underbrace{\\begin{bmatrix}\n\tA_h & 0_{n_h\\times n_p} \\\\ B_p C_h & A_p\n\t\\end{bmatrix}}_{A_{hp}}\\underbrace{\\begin{bmatrix}\n\tx_h(t) \\\\ x_p(t)\n\t\\end{bmatrix}}_{x_{hp(t)}}\\\\ &+\n\\underbrace{\\begin{bmatrix}\n\tB_h \\\\ B_pD_h\n\t\\end{bmatrix}}_{B_{hp}}u(t-\\tau),\n\\end{aligned} \n\\end{equation}\nwhich can be written in the following compact form\n\\begin{equation}\\label{eq:e5x}\n\\begin{aligned}\n\\dot{x}_{hp}(t)&=A_{hp}x_{hp}(t)+B_{hp}u(t-\\tau),\n\\end{aligned} \n\\end{equation}\nwhere $ x_{hp}=[x_h^T\\ x_p^T]^T\\in \\mathbb{R}^{(n_p+n_h)} $, $ A_{hp}\\in \\mathbb{R}^{(n_p+n_h)\\times (n_p+n_h)} $, $ B_{hp}\\in \\mathbb{R}^{(n_p+n_h)} $. 
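\nAs a concrete illustration of how (\\ref{eq:e4x}) and (\\ref{eq:e5x}) are assembled, consider a deliberately simple case (added here only for orientation; the first-order lag below is an assumption made for exposition, not the neuromuscular model used in the experiments): let the neuromuscular dynamics be a first-order lag with time constant $ T_N $, i.e. $ A_h=-1\/T_N $, $ B_h=1\/T_N $, $ C_h=1 $, $ D_h=0 $, and let the plant be $ Y_p(s)=K\/s $, i.e. $ A_p=0 $, $ B_p=K $. Then\n\\begin{equation*}\nA_{hp}=\\begin{bmatrix} -1\/T_N & 0 \\\\ K & 0 \\end{bmatrix}, \\qquad B_{hp}=\\begin{bmatrix} 1\/T_N \\\\ 0 \\end{bmatrix},\n\\end{equation*}\nand this pair is controllable whenever $ K\\neq 0 $, since $ \\det\\begin{bmatrix} B_{hp} & A_{hp}B_{hp} \\end{bmatrix}=K\/T_N^2\\neq 0 $.\n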
\n\\begin{assumption}\n\tThe pair $ (A_{hp}, B_{hp}) $ is controllable.\n\\end{assumption}\n\nThe goal is to obtain the input $ u(t) $ in (\\ref{eq:e3x}), which is the human pilot control decision variable, such that the closed loop system consisting of the adaptive human pilot model and the plant follow the output of a unity feedback reference model with an open loop crossover model transfer function. The closed loop transfer function of the reference model is therefore calculated as\n\\begin{equation}\\label{eq:e21xqx}\n\\begin{aligned}\nG_{cl}(s)=\\frac{\\frac{\\omega_c}{s}e^{-\\tau s}}{1+\\frac{\\omega_c}{s}e^{-\\tau s}}=\\frac{\\omega_c e^{-\\tau s}}{s+\\omega_c e^{-\\tau s}}.\n\\end{aligned} \n\\end{equation}\nAn approximation of (\\ref{eq:e21xqx}) can be given as \n\\begin{equation}\\label{eq:e21xqqx}\n\\begin{aligned}\n\\hat{G}_{cl}(s)=\\frac{b_ms^m+b_{m-1}s^{m-1}+...+b_0}{s^n+a_{n-1}s^{n-1}+...+a_0}e^{-\\tau s},\n\\end{aligned} \n\\end{equation}\nwhere $ n=n_h+n_p $ and $ m\\leq n $ are positive real constants, and $ a_{i} $ and $ b_j $ for $ i=0, ..., n-1 $ and $ j=0, ..., m-1 $, are real constants to be estimated. \nThe reference model then can be obtained as the state space representation of (\\ref{eq:e21xqqx}) as\n\\begin{equation}\\label{eq:e7x}\n\\begin{aligned}\n\\dot{x}_{m}(t)&=A_{m}x_{m}(t)+B_{m}r(t-\\tau),\\\\\n\\end{aligned} \n\\end{equation}\nwhere $ x_m\\in \\mathbb{R}^{(n_h+n_p)} $ is the reference model state vector, $ A_m\\in \\mathbb{R}^{(n_h+n_p)\\times (n_h+n_p)} $ is the state matrix, $ B_m\\in \\mathbb{R}^{(n_h+n_p)\\times m_h} $ is the input matrix, and $ r\\in \\mathbb{R}^{m_h} $ is the reference input.\n\n\\section{REFERENCE MODEL PARAMETERS}\\label{sec:reference}\nThe crossover transfer function (\\ref{eq:e1x}) contains the crossover frequency, $ \\omega_c $, which is not known a priori. Experimental data, showing the reference input ($ r(t) $) frequency bandwidth, $ \\omega_i $, versus crossover frequency $ \\omega_c $, is provided in \\cite{Bee08} and \\cite{Mcr67}, for plant transfer functions $ K $, $ K\/s $ and $ K\/s^2 $. We fit polynomials to these experimental results to obtain the crossover frequency of the open loop transfer function given a reference input frequency bandwidth. These polynomials are given in Table I. It is noted that when the reference input has multiple frequency components, the highest frequency is used to calculate the crossover frequency.\n\n\n\\begin{remark}\n\tIn this work, we use the polynomial relationships provided in Table I for zero, first and second order plant dynamics with nonzero poles and zeros. 
Further experimental work can be conducted to obtain a more precise relationship between the crossover and reference input frequencies, but this is currently out of the scope of this work.\n\\end{remark}\n \n\n\n\n\\begin{table}\n\t\\vspace{0.3cm}\n\t\\caption{}\n\t\\centering\n\t\\begin{tabular}{ | c | c | }\n\t\t\\hline \n\t\tPlant transfer & Crossover frequency of the \\\\\n\t\tfunction & open loop transfer function (rad\/s)\\\\\n\t\t\\hline\n\t\t$ K $ & $ \\omega_c=0.067\\omega_i^2+0.099\\omega_i+4.8 $ \\\\ \\hline\n\t\t$ K\/s $ & $ \\omega_c=0.14\\omega_i+4.3 $ \\\\ \\hline\n\t\t$ K\/s^2 $ & $ \\omega_c=-0.0031\\omega_i^4-0.072\\omega_i^3+0.29\\omega_i^2$ \\\\ & $-0.13\\omega_i+3 $ \\\\\n\t\t\\hline\n\t\\end{tabular}\n\\end{table} \n\n\n\\section{HUMAN PILOT CONTROL DECISION COMMAND}\\label{sec:human}\n\n\nThe adaptive human pilot control decision command, $ u(t) $, is determined as\n\\begin{equation}\\label{eq:e8x}\nu(t)=K_rK_xx_{hp}(t+\\tau)+K_rr(t)\n\\end{equation}\nwhere $ K_x\\in \\mathbb{R}^{1\\times (n_h+n_p)} $, and $ K_r\\in \\mathbb{R}^{m_h\\times m_h} $. Using (\\ref{eq:e8x}) and (\\ref{eq:e5x}), the closed loop dynamics can be obtained as\n\\begin{equation}\\label{eq:e9x}\n\\begin{aligned}\n\\dot{x}_{hp}(t)&=(A_{hp}+B_{hp}K_rK_x)x_{hp}(t)+B_{hp}K_rr(t-\\tau).\n\\end{aligned} \n\\end{equation} \n\nEquation (\\ref{eq:e8x}) describes a non-causal decision command which requires future values of the states. This problem can be eliminated by solving the differential equation (\\ref{eq:e5x}) as a $ \\tau $-seconds ahead predictor as\n\\begin{equation}\\label{eq:e9xx}\n\\begin{aligned}\nx_{hp}(t+\\tau)&=e^{A_{hp}\\tau}x_{hp}(t)+\\int_{-\\tau}^{0}e^{-A_{hp}\\eta}B_{hp}u(t+\\eta)d\\eta. \n\\end{aligned} \n\\end{equation}\n\n\\begin{assumption}\nThere exist ideal parameters $ K_r^* $ and $ K_x^* $ satisfying the following matching conditions\n\\begin{equation}\\label{eq:e10x}\n\\begin{aligned}\n&A_{hp}+B_{hp}K_r^*K_x^*=A_m\\\\\n&B_{hp}K_r^*=B_m.\n\\end{aligned} \n\\end{equation}\n\\end{assumption} \n\n\n\nBy substituting (\\ref{eq:e9xx}) into (\\ref{eq:e8x}), the human pilot control decision input can be written as \n\\begin{equation}\\label{eq:e11xxxy}\n\\begin{aligned}\nu(t)&=K_rK_xe^{A_{hp}\\tau}x_{hp}(t)\\\\\n&+K_rK_x\\int_{-\\tau}^{0}e^{-A_{hp}\\eta}B_{hp}u(t+\\eta)d\\eta+K_rr(t).\n\\end{aligned} \n\\end{equation}\nBy defining $ \\theta_x(t) $ and $ \\lambda(t,\\eta) $ as\n\\begin{equation}\\label{eq:e11xxxq}\n\\begin{aligned}\n\\theta_x(t)&=K_r(t)K_x(t)e^{A_{hp}\\tau},\\\\\n\\lambda(t,\\eta)&=K_r(t)K_x(t)e^{-A_{hp}\\eta}B_{hp},\n\\end{aligned} \n\\end{equation}\n(\\ref{eq:e11xxxy}) can be rewritten as (see fig. \\ref{fig:f1-2})\n\\begin{equation}\\label{eq:e11xxx}\n\\begin{aligned}\nu(t)=\\theta_x(t)x_{hp}(t)+\\int_{-\\tau}^{0}\\lambda(t,\\eta)u(t+\\eta)d\\eta+K_r(t)r(t).\n\\end{aligned} \n\\end{equation}\nThe ideal values of $ \\theta_x $ and $ \\lambda $ can be obtained as\n\\begin{equation}\\label{eq:e11xxxxx}\n\\begin{aligned}\n\\theta_x^*&=K_r^*K_x^*e^{A_{hp}\\tau}\\\\ \n\\lambda^*(\\eta)&=K_r^*K_x^*e^{-A_{hp}\\eta}B_{hp}.\n\\end{aligned} \n\\end{equation}\nSince $ A_{hp} $ and $ B_{hp} $ are unknown, $ \\theta_x $ and $ \\lambda $ need to be estimated. 
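\nFor the illustrative first-order-lag\/$K\/s$ example introduced after (\\ref{eq:e5x}) (again, an assumption made only for exposition), the matrix exponential appearing in (\\ref{eq:e9xx}) and (\\ref{eq:e11xxxq}) is available in closed form: writing $ a=1\/T_N $,\n\\begin{equation*}\ne^{A_{hp}\\tau}=\\begin{bmatrix} e^{-a\\tau} & 0 \\\\ \\frac{K}{a}\\left(1-e^{-a\\tau}\\right) & 1 \\end{bmatrix},\n\\end{equation*}\nso the $ \\tau $-seconds ahead predictor (\\ref{eq:e9xx}), and hence the control decision (\\ref{eq:e11xxx}), only requires the current state $ x_{hp}(t) $ and the control history $ u(t+\\eta) $, $ \\eta\\in[-\\tau,0] $, stored over one delay interval.\n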
The closed loop dynamics can be obtained using (\\ref{eq:e5x}) and (\\ref{eq:e11xxx}) as\n\\begin{equation}\\label{eq:e11xxxx}\n\\begin{aligned}\n\\dot{x}_{hp}(t)&=A_{hp}x_{hp}(t)+B_{hp}\\theta_x(t-\\tau)x_{hp}(t-\\tau)\\\\ &+\\int_{-\\tau}^{0}B_{hp}\\lambda(t-\\tau,\\eta)u(t+\\eta-\\tau)d\\eta\\\\ &+B_{hp}K_rr(t-\\tau),\n\\end{aligned} \n\\end{equation}\n\nDefining the deviations of the adaptive parameters from their ideal values as $ \\tilde{\\theta}_x=\\theta_x-\\theta_x^* $ and $ \\tilde{\\lambda}=\\lambda-\\lambda^* $, and adding and subtracting $ A_mx_{hp}(t) $ to (\\ref{eq:e11xxxx}), and using (\\ref{eq:e10x}), we obtain that\n\\begin{equation}\\label{eq:e12xxz}\n\\begin{aligned}\n\\dot{x}_{hp}(t)&=A_mx_{hp}(t)-B_{hp}K_r^*K_x^*x_{hp}(t)\\\\ &+B_{hp}K_r(t-\\tau)K_x(t-\\tau)\\Big(e^{A_{hp}\\tau}x_{hp}(t-\\tau)\\\\ &+\\int_{-\\tau}^{0}e^{-A_{hp}\\eta}B_{hp}u(t+\\eta-\\tau)d\\eta \\Big)\\\\ &+B_{hp}K_r(t-\\tau)r(t-\\tau).\\\\\n\\end{aligned} \n\\end{equation}\n Using (\\ref{eq:e9xx}), (\\ref{eq:e12xxz}) is rewritten as\n\\begin{equation}\\label{eq:e12xx}\n\\begin{aligned}\n\\dot{x}_{hp}(t)&=A_mx_{hp}(t)-B_{hp}K_r^*K_x^*x_{hp}(t)\\\\ &+B_{hp}K_r(t-\\tau)K_x(t-\\tau)x_{hp}(t)\\\\ &+B_{hp}K_r(t-\\tau)r(t-\\tau).\n\\end{aligned} \n\\end{equation}\nDefining the tracking error as $ e(t)=x_{hp}-x_{m} $, and subtracting (\\ref{eq:e7x}) from (\\ref{eq:e12xx}), and using (\\ref{eq:e10x}), and following a similar procedure given in \\cite{NarAnn12}, it is obtained that\n\\begin{equation}\\label{eq:e12xxxz}\n\\begin{aligned}\n\\dot{e}(t)&=\\dot{x}_{hp}-\\dot{x}_{m}\\\\\n&=A_me(t)-B_{hp}K_r^*K_x^*x_{hp}(t)\\\\ &+B_{hp}K_r(t-\\tau)K_x(t-\\tau)x_{hp}(t)\\\\ &+B_{hp}(K_r(t-\\tau)-K_r^*)r(t-\\tau)\\\\\n&=A_me(t)+\\big( -B_{hp}K_r^*K_x^*\\\\\n&+B_{hp}(K_r^*-K_r^*+K_r(t-\\tau))K_x(t-\\tau)\\big)x_{hp}(t) \n\\\\&+B_{hp}(K_r(t-\\tau)-K_r^*)r(t-\\tau)\\\\\n&=A_me(t)+B_{m}(K_x(t-\\tau)-K_x^*)x_{hp}(t)\\\\\n&+B_{m}({K_r^*}^{-1}K_r(t-\\tau)-1)K_x(t-\\tau)x_{hp}(t)\\\\\n&+B_{m}({K_r^*}^{-1}K_r(t-\\tau)-1)r(t-\\tau)\\\\\n&=A_me(t)+B_{m}(\\tilde{K}_x(t-\\tau)x_{hp}(t)\\\\\n&\\hspace{-0.1cm}+B_{m}({K_r^*}^{-1}-K_r^{-1}(t-\\tau))K_r(t-\\tau)K_x(t-\\tau)x_{hp}(t)\\\\\n&+B_{m}({K_r^*}^{-1}-K_r^{-1}(t-\\tau))K_r(t-\\tau)r(t-\\tau).\n\\end{aligned} \n\\end{equation}\nUsing (\\ref{eq:e9xx}) and defining $ \\Phi={K_r^*}^{-1}-K_r^{-1} $, we can rewrite (\\ref{eq:e12xxxz}) as\n\\begin{equation}\\label{eq:e12xxxzz}\n\\begin{aligned}\n\\dot{e}(t)&=A_me(t)+{B_mK_r^*}^{-1}(K_r^*K_x(t-\\tau)-K_r^*K_x^*)\\\\\n&\\hspace{-0.1cm}\\times\\Big( e^{A_{hp}\\tau}x_{hp}(t-\\tau)+\\int_{-\\tau}^{0}e^{-A_{hp}\\eta}B_{hp}u(t+\\eta-\\tau)d\\eta \\Big)\\\\\n&+B_m\\Phi(t-\\tau)\\Big( K_r(t-\\tau)K_x(t-\\tau)\\Big( e^{A_{hp}\\tau}x_{hp}(t-\\tau)\\\\\n&+\\int_{-\\tau}^{0}e^{-A_{hp}\\eta}B_{hp}u(t+\\eta-\\tau)d\\eta \\Big)\\\\\n&+K_r(t-\\tau)r(t-\\tau) \\Big).\n\\end{aligned} \n\\end{equation}\nUsing (\\ref{eq:e11xxxxx}) and (\\ref{eq:e12xxxzz}), we obtain that\n\\begin{equation}\\label{eq:e12xxxzzz}\n\\begin{aligned}\n\\dot{e}(t)&=A_me(t)+B_mK_x(t-\\tau)\\Big( e^{A_{hp}\\tau}x_{hp}(t-\\tau)\\\\\n&+\\int_{-\\tau}^{0}e^{-A_{hp}\\eta}B_{hp}u(t+\\eta-\\tau)d\\eta \\Big)\\\\\n&-B_m{K_r^*}^{-1}\\Big( \\theta_x^*x_{hp}(t-\\tau)\\\\\n&+\\int_{-\\tau}^{0}\\lambda^*(\\eta)u(t+\\eta-\\tau)d\\eta \\Big)\\\\\n&+B_{m}\\Phi(t-\\tau)u(t-\\tau).\\\\\n\\end{aligned} \n\\end{equation}\nUsing (\\ref{eq:e11xxxq}), (\\ref{eq:e12xxxzzz}) can be rewritten as\n\\begin{equation}\\label{eq:e12xxxq}\n\\begin{aligned}\n\\dot{e}(t)&=A_me(t)+B_m\\Big( \\big( 
K_r^{-1}(t-\\tau)\\theta_x(t-\\tau)-{K_r^*}^{-1}\\theta_x^*\\big)\\\\\n&\\times x_{hp}(t-\\tau)+\\int_{-\\tau}^{0}\\big( K_r^{-1}(t-\\tau)\\lambda(t-\\tau,\\eta)\\\\\n&-{K_r^*}^{-1}\\lambda^*(\\eta) \\big) u(t+\\eta-\\tau)d\\eta \\Big)\\\\\n&+B_{m}\\Phi(t-\\tau)u(t-\\tau).\n\\end{aligned} \n\\end{equation}\nDefining $ \\theta_1=K_r^{-1}\\theta_x $ and $ \\lambda_1=K_r^{-1}\\lambda $, and using their deviations from their ideal values, $ \\tilde{\\theta}_1=\\theta_1-\\theta_1^* $ and $ \\tilde{\\lambda}_1=\\lambda_1-\\lambda_1^* $, where \n$ \\theta_1^*={K_r^*}^{-1}\\theta_x^* $ and $ \\lambda_1^*={K_r^*}^{-1}\\lambda^* $, (\\ref{eq:e12xxxq}) can be rewritten as\n\\begin{equation}\\label{eq:e12xxx}\n\\begin{aligned}\n\\dot{e}(t)&=A_me(t)+B_{m}\\tilde{\\theta}_1(t-\\tau)x_{hp}(t-\\tau)\\\\ &+B_m\\int_{-\\tau}^{0}\\tilde{\\lambda}_1(t-\\tau,\\eta)u(t+\\eta-\\tau)d\\eta \\\\ \n&+B_{m}\\Phi(t-\\tau)u(t-\\tau).\n\\end{aligned} \n\\end{equation}\nThe following lemma will be necessary to prove the main theorem of this article.\n\n\\begin{lemma}\\label{lem1}\nSuppose that the continuous function $ u(t) $ is given as\n\\begin{equation}\\label{eq:e21xx}\n\\begin{aligned}\nu(t)=f(t)+\\int_{-\\tau}^{0}\\lambda(t,\\eta)u(t+\\eta)d\\eta,\n\\end{aligned} \n\\end{equation}\nwhere $ u,f: [t_0-\\tau,\\infty]\\rightarrow R $, and $ \\lambda:[t_0, \\infty)\\times [-\\tau, 0]\\rightarrow R $. Then\n\\begin{equation}\\label{eq:e22xx}\n\\begin{aligned}\n|u(t)|\\leq 2(\\bar{f}+c_0c_1)e^{c_0^2(t-t')},\\ \\forall t_j'\\geq t_i',\n\\end{aligned} \n\\end{equation}\nif constants $ t_i',\\bar{f}, c_0, c_1\\in R^+ $ exist such that $ |f(t)|\\leq \\bar{f} $, \n\\begin{equation}\\label{eq:e23xx}\n\\begin{aligned}\n\\int_{-\\tau}^{0}\\lambda^2(t,\\eta)d\\eta\\leq c_0^2 \\ \\ for\\ t\\in [t_i',t_j'),\n\\end{aligned} \n\\end{equation}\nand\n\\begin{equation}\\label{eq:e24xx}\n\\begin{aligned}\n\\int_{-\\tau}^{0}u^2(t+\\eta)d\\eta\\leq c_1^2 \\ \\ \\forall\nt\\leq t_i'.\n\\end{aligned} \n\\end{equation}\n\\end{lemma}\n\\begin{proof}\nThe proof of Lemma \\ref{lem1} can be found in \\cite{YilAnn10}.\n\\end{proof}\n\n\n\n\n\\begin{theorem}\\label{thm1}\nGiven the initial conditions $ \\tilde{\\theta}_1(\\xi)$, $ \\tilde{\\lambda}_1(\\xi,\\eta) $, $ \\Phi(\\xi) $ and $ x_{hp}(\\xi) $ for $ \\xi\\in [-\\tau, 0] $, and $ u(\\zeta) $ for $ \\zeta\\in[-2\\tau,0] $, there exists a $ \\tau^* $ such that for all $ \\tau\\in[0,\\tau^*] $, the controller (\\ref{eq:e11xxx}) with the adaptive laws\n\\begin{equation}\\label{eq:14thm}\n\\dot{{\\theta}}_1^T(t)=-x_{hp}(t-\\tau)e(t)^TPB_{m},\n\\end{equation} \n\\begin{equation}\\label{eq:15thm}\n\\dot{{\\Phi}}^T(t)=-u(t-\\tau)e(t)^TPB_{m},\\\\\n\\end{equation} \n\\begin{equation}\\label{eq:16thm}\n\\dot{\\lambda}_1^T(t,\\eta)=-u(t+\\eta-\\tau)e(t)^TPB_{m},\n\\end{equation} \nwhere $ P $ is the symmetric positive definite matrix satisfying\nthe Lyapunov equation $ A_m^TP +P A_m = -Q $ for a symmetric\npositive definite matrix $ Q $, which can be employed to obtain controller parameters using $ \\dot{K}_r=\\text{Proj}(K_r\\dot{\\Phi}K_r) $, $\\theta_x(t)=K_r(t)\\theta_1(t) $ and $ \\lambda(t)=K_r(t)\\lambda_1(t) $, make the pilot neuromuscular and plant aggregate system (\\ref{eq:e5x}) follow the crossover reference model (\\ref{eq:e7x}) asymptotically, i.e, $ lim_{t\\to \\infty} x_{hp}(t)=x_m(t) $, while keeping all the signals bounded.\n\\end{theorem}\n\\begin{proof}\nConsider a Lyapunov-Krasovskii functional 
\\cite{YilAnn10}\n\\begin{equation}\\label{eq:e12x}\n\\begin{aligned}\nV(t)&=e^TPe+\\text{tr}({\\Phi}^T(t){\\Phi}(t))+\\text{tr}(\\tilde{\\theta}_1^T(t)\\tilde{\\theta}_1(t))\\\\ &+\\int_{-\\tau}^{0}\\int_{t+v}^{t}\\text{tr}(\\dot{\\tilde{\\theta}}_1^T(\\xi)\\dot{\\tilde{\\theta}}_1(\\xi))d\\xi dv\\\\\n&+\\int_{-\\tau}^{0}\\int_{t+v}^{t}\\text{tr}(\\dot{{\\Phi}}^T(\\xi)\\dot{{\\Phi}}(\\xi))d\\xi dv\\\\\n&+\\int_{-\\tau}^{0}\\text{tr}(\\tilde{\\lambda}_1^T(t,\\eta)\\tilde{\\lambda}_1(t,\\eta))d\\eta\\\\ &+\\int_{-\\tau}^{0}\\int_{t+v}^{t}\\int_{-\\tau}^{0}\\text{tr}(\\dot{\\tilde{\\lambda}}_1^T(\\xi,\\eta)\\dot{\\tilde{\\lambda}}_1(\\xi,\\eta))d\\eta d\\xi dv.\n\\end{aligned} \n\\end{equation}\nThe derivative of $ V(t) $ can be calculated as\n\\begin{equation}\\label{eq:e14x}\n\\begin{aligned}\n\\dot{V}(t)&=\\dot{e}^T(t)^TPe(t)+e^T(t)P\\dot{e}(t)+2\\text{tr}(\\dot{\\tilde{\\theta}}_1^T(t)\\tilde{\\theta}_1(t))\\\\ &+2\\text{tr}(\\dot{{\\Phi}}^T(t){\\Phi}(t))+\\int_{-\\tau}^{0}2\\text{tr}(\\dot{\\tilde{\\lambda}}_1^T(t,\\eta)\\tilde{\\lambda}_1(t,\\eta))d\\eta\\\\ \n&+\\tau \\text{tr}(\\dot{\\tilde{\\theta}}_1^T(t)\\dot{\\tilde{\\theta}}_1(t))-\\int_{-\\tau}^{0}\\text{tr}(\\dot{\\tilde{\\theta}}_1^T(t+v)\\dot{\\tilde{\\theta}}_1(t+v))dv\\\\ \n&+\\tau\\text{tr}(\\dot{{\\Phi}}^T(t)\\dot{{\\Phi}}(t))-\\int_{-\\tau}^{0}\\text{tr}(\\dot{{\\Phi}}^T(t+v)\\dot{{\\Phi}}(t+v))dv\\\\\n&+\\tau\\int_{-\\tau}^{0}\\text{tr}(\\dot{\\tilde{\\lambda}}_1^T(t,\\eta)\\dot{\\tilde{\\lambda}}_1(t,\\eta))d\\eta\\\\\n&-\\int_{-\\tau}^{0}\\int_{-\\tau}^{0}\\text{tr}(\\dot{\\tilde{\\lambda}}_1^T(t+v,\\eta)\\dot{\\tilde{\\lambda}}_1(t+v,\\eta))d\\eta dv.\n\\end{aligned} \n\\end{equation}\nSubstituting (\\ref{eq:e12xxx}) into (\\ref{eq:e14x}) and using the Lyapunov equation $ A_m^TP +P A_m = -Q $, it is obtained that\n\\begin{equation\n\\begin{aligned}\n\\dot{V}(t)&=-{e}^T(t)Qe(t)+2e^T(t)PB_m\\tilde{\\theta}_1(t-\\tau)x_{hp}(t-\\tau)\\\\\n&+2e^T(t)PB_m\\int_{-\\tau}^{0}\\tilde{\\lambda}_1(t-\\tau,\\eta)u(t+\\eta-\\tau)d\\eta\\\\\n&+2e^T(t)PB_m\\Phi(t-\\tau)u(t-\\tau)\\\\\n&+2\\text{tr}(\\dot{\\tilde{\\theta}}_1^T(t)\\tilde{\\theta}_1(t))+2\\text{tr}(\\dot{{\\Phi}}^T(t){\\Phi}(t)) \\\\\n&+\\int_{-\\tau}^{0}2\\text{tr}(\\dot{\\tilde{\\lambda}}_1^T(t,\\eta)\\tilde{\\lambda}_1(t,\\eta))d\\eta\\\\ \n&+\\tau \\text{tr}(\\dot{\\tilde{\\theta}}_1^T(t)\\dot{\\tilde{\\theta}}_1(t))-\\int_{-\\tau}^{0}\\text{tr}(\\dot{\\tilde{\\theta}}_1^T(t+v)\\dot{\\tilde{\\theta}}_1(t+v))dv\\\\ \n&+\\tau\\text{tr}(\\dot{{\\Phi}}^T(t)\\dot{{\\Phi}}(t))-\\int_{-\\tau}^{0}\\text{tr}(\\dot{{\\Phi}}^T(t+v)\\dot{{\\Phi}}(t+v))dv\\notag\n\\end{aligned}\n\\end{equation}\n\\begin{equation}\\label{eq:e14xx}\n\\begin{aligned}\n&+\\tau\\int_{-\\tau}^{0}\\text{tr}(\\dot{\\tilde{\\lambda}}_1^T(t,\\eta)\\dot{\\tilde{\\lambda}}_1(t,\\eta))d\\eta\\\\\n&-\\int_{-\\tau}^{0}\\int_{-\\tau}^{0}\\text{tr}(\\dot{\\tilde{\\lambda}}_1^T(t+v,\\eta)\\dot{\\tilde{\\lambda}}_1(t+v,\\eta))d\\eta dv.\n\\end{aligned} \n\\end{equation}\nUsing $ g(t)-g(t-\\tau)=\\int_{-\\tau}^{0}\\dot{g}(t+v)dv $, (\\ref{eq:e14xx}) can be rewritten as\n\\begin{equation}\\label{eq:e14xxxx}\n\\begin{aligned}\n\\dot{V}(t)&=-{e}^T(t)Qe(t)\\\\\n&+2\\text{tr}\\Big(x_{hp}(t-\\tau)e^T(t)PB_m\\tilde{\\theta}_1(t)+\\dot{\\tilde{\\theta}}_1^T(t)\\tilde{\\theta}_1(t)\\Big)\\\\\n&+2\\text{tr}\\Big(u(t-\\tau)e^T(t)PB_m\\Phi(t)+\\dot{{\\Phi}}^T(t){\\Phi}(t)\\Big)\\\\\n&+\\int_{-\\tau}^{0}2\\text{tr}\\Big(u(t+\\eta-\\tau)e^T(t)PB_m\\tilde{\\lambda}_1(t,\\eta)\\\\ 
&+\\dot{\\tilde{\\lambda}}_1^T(t,\\eta)\\tilde{\\lambda}_1(t,\\eta)\\Big)d\\eta\\\\\n&-2e^T(t)PB_m(\\int_{-\\tau}^{0}\\dot{\\tilde{\\theta}}_1(t+v)dv)x_{hp}(t-\\tau)\\\\\n&-2e^T(t)PB_m(\\int_{-\\tau}^{0}\\dot{{\\Phi}}(t+v)dv)u(t-\\tau)\\\\\n&-2e^T(t)PB_m\\Big(\\int_{-\\tau}^{0}(\\int_{-\\tau}^{0}\\dot{\\tilde{\\lambda}}_1(t+v,\\eta)dv)\\\\\n&\\times u(t+\\eta-\\tau)d\\eta\\Big)\\\\\n&+\\tau \\text{tr}(\\dot{\\tilde{\\theta}}_1^T(t)\\dot{\\tilde{\\theta}}_1(t))-\\int_{-\\tau}^{0}\\text{tr}(\\dot{\\tilde{\\theta}}_1^T(t+v)\\dot{\\tilde{\\theta}}_1(t+v))dv\\\\ \n&+\\tau\\text{tr}(\\dot{{\\Phi}}^T(t)\\dot{{\\Phi}}(t))-\\int_{-\\tau}^{0}\\text{tr}(\\dot{{\\Phi}}^T(t+v)\\dot{{\\Phi}}(t+v))dv\\\\\n&+\\tau\\int_{-\\tau}^{0}\\text{tr}(\\dot{\\tilde{\\lambda}}_1^T(t,\\eta)\\dot{\\tilde{\\lambda}}_1(t,\\eta))d\\eta\\\\\n&-\\int_{-\\tau}^{0}\\int_{-\\tau}^{0}\\text{tr}(\\dot{\\tilde{\\lambda}}_1^T(t+v,\\eta)\\dot{\\tilde{\\lambda}}_1(t+v,\\eta))d\\eta dv.\n\\end{aligned} \n\\end{equation}\nBy substituting (\\ref{eq:14thm})-(\\ref{eq:16thm}) into (\\ref{eq:e14xxxx}), it is obtained that\n\\begin{equation\n\\begin{aligned}\n\\dot{V}(t)&=-{e}^T(t)Qe(t)\\\\\n&-2\\int_{-\\tau}^{0}\\text{tr}(x_{hp}(t-\\tau)e(t)^TPB_{m}\\dot{\\tilde{\\theta}}_1(t+v))dv\\\\ &-2\\int_{-\\tau}^{0}\\text{tr}(u(t-\\tau)e(t)^TPB_{m}\\dot{{\\Phi}}(t+v))dv\\\\\n&-2\\int_{-\\tau}^{0}\\int_{-\\tau}^{0}\\text{tr}(u(t+\\eta-\\tau)e(t)^TPB_{m}\\dot{\\tilde{\\lambda}}_1(t+v,\\eta))dvd\\eta\\\\\n&+\\tau \\text{tr}(\\dot{\\tilde{\\theta}}_1^T(t)\\dot{\\tilde{\\theta}}_1(t))-\\int_{-\\tau}^{0}\\text{tr}(\\dot{\\tilde{\\theta}}_1^T(t+v)\\dot{\\tilde{\\theta}}_1(t+v))dv\\\\ \n&+\\tau\\text{tr}(\\dot{{\\Phi}}^T(t)\\dot{{\\Phi}}(t))-\\int_{-\\tau}^{0}\\text{tr}(\\dot{{\\Phi}}^T(t+v)\\dot{{\\Phi}}(t+v))dv\\\\\n&+\\tau\\int_{-\\tau}^{0}\\text{tr}(\\dot{\\tilde{\\lambda}}_1^T(t,\\eta)\\dot{\\tilde{\\lambda}}_1(t,\\eta))d\\eta\\\\\n&-\\int_{-\\tau}^{0}\\int_{-\\tau}^{0}\\text{tr}(\\dot{\\tilde{\\lambda}}_1^T(t+v,\\eta)\\dot{\\tilde{\\lambda}}_1(t+v,\\eta))d\\eta dv\\notag \n\\end{aligned} \n\\end{equation}\n\\begin{equation}\\label{eq:e17x}\n\\begin{aligned}\n&=-{e}^T(t)Qe(t)+\\int_{-\\tau}^{0}\\text{tr}\\Big(2\\dot{\\tilde{\\theta}}_1^T(t)\\dot{\\tilde{\\theta}}_1(t+v)\\\\\n&+\\dot{\\tilde{\\theta}}_1^T(t)\\dot{\\tilde{\\theta}}_1(t) -\\dot{\\tilde{\\theta}}_1^T(t+v)\\dot{\\tilde{\\theta}}_1(t+v)\\Big)dv\\\\\n&+\\int_{-\\tau}^{0}\\text{tr}\\Big(2\\dot{{\\Phi}}^T(t)\\dot{{\\Phi}}(t+v)+\\dot{{\\Phi}}^T(t)\\dot{{\\Phi}}(t)\\\\\n&-\\dot{{\\Phi}}^T(t+v)\\dot{{\\Phi}}(t+v)\\Big)dv\\\\\n&+\\int_{-\\tau}^{0}\\int_{-\\tau}^{0}\\text{tr}\\Big(2\\dot{\\tilde{\\lambda}}_1^T(t,\\eta)\\dot{\\tilde{\\lambda}}_1(t+v,\\eta)\\\\\n&+\\dot{\\tilde{\\lambda}}_1^T(t,\\eta)\\dot{\\tilde{\\lambda}}_1(t,\\eta)-\\dot{\\tilde{\\lambda}}_1^T(t+v,\\eta)\\dot{\\tilde{\\lambda}}_1(t+v,\\eta)\\Big)d\\eta dv.\n\\end{aligned} \n\\end{equation}\nUsing the trace property $ \\text{tr}(A+B)=\\text{tr}(A)+\\text{tr}(B) $, and the algebraic inequality $ a^2\\geq 2ab-b^2 $ for two scalars $ a $ and $ b $, it can be shown that $ \\text{tr}(2A^TB+A^TA-B^TB)\\leq 2\\text{tr}(A^TA) $. 
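\nFor completeness, the trace inequality invoked above can be verified directly for real matrices $ A $ and $ B $ of identical dimensions:\n\\begin{equation*}\n0\\leq \\text{tr}\\big((A-B)^T(A-B)\\big)=\\text{tr}(A^TA)-2\\text{tr}(A^TB)+\\text{tr}(B^TB),\n\\end{equation*}\nso that $ 2\\text{tr}(A^TB)\\leq \\text{tr}(A^TA)+\\text{tr}(B^TB) $ and therefore $ \\text{tr}(2A^TB+A^TA-B^TB)\\leq 2\\text{tr}(A^TA) $.\n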
Using these inequalities, (\\ref{eq:e17x}) can be rewritten as\n\\begin{equation}\\label{eq:e17xx}\n\\begin{aligned}\n\\dot{V}(t)&\\leq-{e}^T(t)Qe(t)+\\int_{-\\tau}^{0}2\\text{tr}(\\dot{\\tilde{\\theta}}_1^T(t)\\dot{\\tilde{\\theta}}_1(t))dv\\\\\n&+\\int_{-\\tau}^{0}2\\text{tr}(\\dot{{\\Phi}}^T(t)\\dot{{\\Phi}}(t))dv\\\\\n&+\\int_{-\\tau}^{0}\\int_{-\\tau}^{0}2\\text{tr}(\\dot{\\tilde{\\lambda}}_1^T(t,\\eta)\\dot{\\tilde{\\lambda}}_1(t,\\eta))d\\eta dv.\n\\end{aligned} \n\\end{equation}\nBy substituting (\\ref{eq:14thm})-(\\ref{eq:16thm}) into (\\ref{eq:e17xx}), and using the trace operator property $ \\text{tr}(AB)=\\text{tr}(BA) $ for two square matrices $ A $ and $ B $, (\\ref{eq:e17xx}) can be rewritten as\n\\begin{equation}\\label{eq:e18x}\n\\begin{aligned}\n\\dot{V}(t)&\\leq-{e}^T(t)Qe(t)\\\\\n&+2\\tau \\text{tr}\\big(e(t)x_{hp}^T(t-\\tau)x_{hp}(t-\\tau)e(t)^TPB_{m}B_{m}^TP\\big)\\\\ \n&+2\\tau \\text{tr}\\big(e(t)u^T(t-\\tau)u(t-\\tau)e(t)^TPB_{m}B_{m}^TP\\big)\\\\\n&+2\\tau\\int_{-\\tau}^{0} \\text{tr}\\big(e(t)u^T(t+\\eta-\\tau)u(t+\\eta-\\tau)e(t)^T\\\\\n&\\times PB_{m}B_{m}^TP\\big)d\\eta.\n\\end{aligned} \n\\end{equation}\nUsing $ \\text{tr}(AB)\\leq \\text{tr}(A)\\text{tr}(B) $ for two positive semidefinite matrices $ A $ and $ B $, and $ \\text{tr}(X^TX)=||X||_F^2 $ for a matrix $ X $, an upper bound for (\\ref{eq:e18x}) can be derived as \n\\begin{equation\n\\begin{aligned}\n\\dot{V}(t)&\\leq-{e}^T(t)Qe(t)\\\\\n&+2\\tau \\text{tr}(e(t)x_{hp}^T(t-\\tau)x_{hp}(t-\\tau)e(t)^T)\\text{tr}(PB_{m}B_{m}^TP)\\\\ \n&+2\\tau \\text{tr}\\big(e(t)u^T(t-\\tau)u(t-\\tau)e(t)^T\\big)\\text{tr}\\big(PB_{m}B_{m}^TP\\big)\\\\\n&+2\\tau\\int_{-\\tau}^{0} \\text{tr}\\big(e(t) u^T(t-\\tau+\\eta)u(t-\\tau+\\eta)e(t)^T\\big)\\\\\n&\\times \\text{tr}\\big(PB_{m}B_{m}^TP\\big)d\\eta\\\\\n&\\leq -\\lambda_{min}(Q)||e(t)||^2\\\\\n&+2\\tau||x_{hp}(t-\\tau)e(t)^T||_F^2||B_{m}^TP||_F^2\\\\\n&+2\\tau ||u(t-\\tau)e(t)^T||_F^2||B_{m}^TP||_F^2\\\\\n&+2\\tau\\int_{-\\tau}^{0}||u(t+\\eta-\\tau)e(t)^T||_F^2||B_{m}^TP||_F^2d\\eta \\notag\n\\end{aligned}\n\\end{equation}\n\\begin{equation}\\label{eq:e19x}\n\\begin{aligned}\n&\\leq -\\lambda_{min}(Q)||e(t)||^2\\\\\n&+2\\tau||x_{hp}(t-\\tau)||^2||e(t)||^2||B_{m}^TP||_F^2\\\\\n&+2\\tau ||u(t-\\tau)||^2||e(t)||^2||B_{m}^TP||_F^2\\\\\n&+2\\tau\\int_{-\\tau}^{0}||u(t+\\eta-\\tau)||^2||e(t)||^2||B_{m}^TP||_F^2d\\eta\\\\\n&=||B_{m}^TP||_F^2||e(t)||^2\\Big( -\\frac{\\lambda_{min}(Q)}{||B_{m}^TP||_F^2}\\\\\n&+2\\tau\\big( ||x_{hp}(t-\\tau)||^2+||u(t-\\tau)||^2\\\\\n&+\\int_{-\\tau}^{0}||u(t+\\eta-\\tau)||^2d\\eta \\big) \\Big).\n\\end{aligned} \n\\end{equation}\nDefining $ q\\equiv \\frac{\\lambda_{min}(Q)}{||B_{m}^TP||_F^2} $, the inequality\n\\begin{equation}\\label{eq:e20x}\n\\begin{aligned}\nq&-2\\tau\\big( ||x_{hp}(t-\\tau)||^2+||u(t-\\tau)||^2+\\\\\n&+\\int_{-\\tau}^{0}||u(t+\\eta-\\tau)||^2d\\eta \\big)> 0.\n\\end{aligned} \n\\end{equation}\nneeds to be satisfied for the non-positiveness of $ \\dot{V} $.\nAssuming that $ x_{hp} $ and $ u $ are bounded in the interval $ [t_0-2\\tau,t_0) $, the rest of the proof is divided into the following four steps:\n\n\\noindent\n\\textbf{Step 1} In this step, the negative semi-definiteness of the Lyapunov-Krasovskii functional's (\\ref{eq:e12x}) time derivative in the interval $ [t_0-\\tau,t_0) $ is shown which leads to the boundedness of the the signals in this interval. 
In addition, an upper bound for $ u $ in the interval $ [t_0-2\\tau,t_0) $ is given.\n\nSuppose that\n\\begin{equation}\n\\begin{aligned}\n\\sup_{\\xi\\in[t_0-\\tau, t_0)}&||x_{hp}(\\xi)||^2\\leq \\gamma_1\\\\\n\\sup_{\\xi\\in[t_0-2\\tau, t_0)}&||u(\\xi)||^2\\leq \\gamma_2\n\\end{aligned} \n\\end{equation} \nfor some positive $ \\gamma_1, \\gamma_2$, and a $ \\tau_1 $ is given such that\n\\begin{equation}\n\\begin{aligned}\n2\\tau_1(\\gamma_1+\\gamma_2+\\tau_1\\gamma_2) 0,\\\\ &\\forall \\xi\\in[t_0,t_0+\\tau), \\forall\\tau\\in[0, \\tau_1].\n\\end{aligned} \n\\end{equation}\nIt follows that $ V(t) $, defined in (\\ref{eq:e12x}), is non-increasing for $ t\\in[t_0, t_0+\\tau) $. Thus, we have\n\\begin{equation}\n\\begin{aligned}\n\\lambda_{min}(P)||e(\\xi)||^2\\leq e(\\xi)^TPe(\\xi)\\leq V(t_0),\n\\end{aligned} \n\\end{equation}\nwhich leads to\n\\begin{equation}\n\\begin{aligned}\n||x_{hp}(\\xi)||-||x_m(\\xi)||\\leq ||e(\\xi)||\\leq \\sqrt{\\frac{V(t_0)}{\\lambda_{min}(P)}}.\n\\end{aligned} \n\\end{equation}\nThen, we have\n\\begin{equation}\n\\begin{aligned}\n||x_{hp}(\\xi)||\\leq\\sqrt{\\frac{V(t_0)}{\\lambda_{min}(P)}}+||x_{m}(\\xi)||,\n\\end{aligned} \n\\end{equation}\nfor $ \\forall \\xi\\in[t_0,t_0+\\tau) $.\nWe also have the inequality\n\\begin{align}\\label{eq:x47}\n&||{\\Phi}(\\xi)||^2\\leq V(t_0)\\implies ||{K_r^*}^{-1}-K_r^{-1}(\\xi)||^2\\leq V(t_0)\\notag\\\\\n&\\implies ||K_r^{-1}(\\xi)||\\leq \\sqrt{V(t_0)}+||{K_r^*}^{-1}||.\n\\end{align}\nfor $ \\forall \\xi\\in[t_0,t_0+\\tau) $.\nIt is noted that the boundedness of $ \\Phi={K_r^*}^{-1}-K_r^{-1} $ does not guarantee the boundedness of $ \\tilde{K}_r $. In order to guarantee the boundedness of $ \\tilde{K}_r $ independent of the boundedness of $ \\Phi $, the projection algorithm \\cite{Eug13} is employed as \n\\begin{equation}\\label{eq:e19xzzz}\n\\begin{aligned}\n\\dot{K}_r=\\text{Proj}\\big(K_r,-K_rB_m^TPe(t)u^T(t-\\tau)K_r\\big),\n\\end{aligned} \n\\end{equation}\nwith an upper bound $ K_{max} $, that is $ ||{K}_r||\\leq K_{max} $. \nThus, a lower bound for $ ||K_r^{-1}(\\xi)|| $ can be calculated using the following algebraic manipulations: \n \\begin{align}\\label{eq:47x}\n &K_r(\\xi)K_r^{-1}(\\xi)=I\\Rightarrow ||K_r(\\xi)K_r^{-1}(\\xi)||=1\\notag \\\\\n &\\Rightarrow 1\\leq ||K_r(\\xi)||||K_r^{-1}(\\xi)||\\leq K_{max}||K_r^{-1}(\\xi)||\\notag \\\\\n & \\Rightarrow \\frac{1}{K_{max}}\\leq ||K_r^{-1}(\\xi)||.\n \\end{align}\nDefining $ k_1=\\sqrt{V(t_0)}+||{K_r^*}^{-1}|| $, and using (\\ref{eq:x47}), it is obtained that\n\\begin{align}\\label{eq:47}\n\\frac{1}{K_{max}}\\leq ||K_r^{-1}(\\xi)||\\leq k_1,\\ \\ \\ \\xi\\in[t_0,t_0+\\tau).\n\\end{align}\nTherefore, $ K_r $ is always bounded and $ K_r^{-1}(\\xi) $ is bounded for $ \\forall \\xi\\in[t_0,t_0+\\tau) $.\n\nFurthermore, using the definitions of $ \\theta_x, \\theta_1, \\lambda,\\lambda_1 $ given in Theorem \\ref{thm1}, and the non-increasing Lyapunov functional (\\ref{eq:e12x}), it can be concluded that\n\\begin{align}\\label{eq:48}\n||\\tilde{\\theta}_1(\\xi)||_F^2\\leq V(t_0)&\\implies ||\\tilde{K}_r^{-1}(\\xi)\\tilde{\\theta}_x(\\xi)||_F^2\\leq V(t_0),\n\\end{align}\n\\begin{align}\\label{eq:49}\n\\int_{-\\tau}^{0}||\\tilde{\\lambda}_1(\\xi,\\eta)||_F^2 &d\\eta \\leq V(t_0)\\\\\n&\\implies \\int_{-\\tau}^{0}||K_r^{-1}(\\xi)\\tilde{\\lambda}(\\xi,\\eta)||_F^2d\\eta\\leq V(t_0),\\notag\n\\end{align} \nfor $ \\forall \\xi\\in[t_0,t_0+\\tau) $. 
Using (\\ref{eq:48}) and (\\ref{eq:49}), it can be obtained that\n\\begin{equation}\n\\begin{aligned}\n||\\tilde{\\theta}_x(\\xi)||_F^2&\\leq K_{max}^2V(t_0),\\\\\n\\int_{-\\tau}^{0}||\\tilde{\\lambda}(\\xi,\\eta)||_F^2d\\eta&\\leq K_{max}^2V(t_0).\n\\end{aligned} \n\\end{equation}\nfor $ \\forall \\xi\\in[t_0,t_0+\\tau) $.\n\nTo simplify the notation, we define\n\\begin{equation}\n\\begin{aligned}\nI_0\\equiv \\text{max}&\\Big( \\sqrt{\\frac{V(t_0)}{\\lambda_{min}(P)}}+\\sup_{[t_0,t_0+\\tau)}||x_{m}(\\xi)||\\\\ &, K_{max}\\sqrt{V(t_0)}, K_{max}^2V(t_0) \\Big),\n\\end{aligned} \n\\end{equation}\nwhere $ R_{max} $ is the upper bound of the reference input $ r(t) $.\n\nAn upper bound on the control signal $ u(t) $ for $ t\\in[t_0, t_0+\\tau) $ can be derived by using Lemma \\ref{lem1} and (\\ref{eq:e11xxx}). In particular, setting $ t_i'=t_0 $, $ t_j'=t_0+\\tau $, $ c_0^2=V(t_0) $, we obtain that\n\\begin{equation}\\label{eq:e50}\n\\begin{aligned}\n|u(\\xi)|\\leq 2\\Big( \\bar{f}+\\big( \\int_{-\\tau}^{0}u^2(t_0+\\eta)d\\eta \\big)^{1\/2}I_0 \\Big)e^{I_0\\tau},\n\\end{aligned}\n\\end{equation}\nfor $ \\forall \\xi\\in[t_0,t_0+\\tau) $, where $ \\bar{f} $, which is the upper bound of $ \\theta_x(t)x_{hp}(t)+K_r(t)r(t) $, depends only on $ I_0 $. Defining $ g(\\gamma_2, I_0, \\tau)\\equiv 2(\\bar{f}+\\gamma_2I_0\\sqrt{\\tau})e^{I_0\\tau} $, (\\ref{eq:e50}) can be rewritten as\n\\begin{equation}\n\\begin{aligned}\n|u(\\xi)|\\leq g(\\gamma_2, I_0, \\tau),\\ \\forall \\xi\\in[t_0,t_0+\\tau).\n\\end{aligned}\n\\end{equation}\n\n\nThe rest of the proof is similar to the one given in \\cite{YilAnn10}. Below, a summary of the next steps is given.\n\n\\noindent\n\\textbf{Step 2} A delay range $ [0, \\tau_2] $ is found that satisfies the condition (\\ref{eq:e20x}) over the interval $ [t_0, t_0+2\\tau) $ as\n\\begin{align}\\label{eq:step2}\n\t2\\tau_2\\left( I_0^2+\\left( max\\left( \\gamma_2,g\\left( \\gamma_2,I_0,\\tau_2 \\right) \\right) \\right)^2 (1+\\tau_2) \\right) < q,\n\\end{align} \nwhich leads to the boundedness of $ ||x_{hp}(\\xi)|| $ and $ |u(\\xi)| $ for $ \\xi\\in[t_0, t_0+2\\tau) $.
\n\nFor frustrated couplings ($J_2 > 0$) the MPSR can be destroyed\n\\cite{Richter:zpb93,Kitatani:92,Richter:epl94}. However, in a recent\nwork \\cite{Richter:zpb93,Richter:epl94} we have presented general\narguments that the MPSR may survive for relatively large $J_2$. These\narguments are based on exact diagonalization results for small clusters\n(number of sites $N \\le 24$), as well as on the spin wave approximation.\nFollowing these arguments Zeng and Parkinson \\cite{Zeng:prb95} use the\nMPSR as a new way of investigating the spatial periodicity in the ground\nstate of frustrated spin chains. Furthermore, they studied the breakdown\nof the MPSR as a function of the chain length and the frustrating $J_2$.\nBy finite size extrapolation they estimated a finite critical value for\n$J_2$ for an infinite chain of about $0.03J_1$ where the MPSR is\nviolated in the ground state.\n\nBecause of the exponential growth of the number of basis states the\ndirect numerical calculation of the singlet ground state is limited to\nsmall clusters and the conclusions obtained from small systems seem not\nto be quite reliable.\n\nIn this paper we exploit the observation that the MPSR holds not only\nfor the singlet ground state but also for every lowest eigenstate in any\nsubspace with higher quantum number $M \\le {N\\over2}$ of the z-component\nof the total spin. In these subspaces the number of basis states is\nreduced and it is possible to diagonalize much larger systems.
With this\ndata an approximation to the thermodynamic limit is more reliable. Below\nwe present data up to $N\\le144$ and we can conclude that the MPSR\nsurvives indeed a finite frustration $J_2$ at least for states with\nhigher quantum numbers $M$.\n\n\\section{Marshall Peierls sign rule}\nIn the unfrustrated limit $J_2=0$ the lowest eigenstate of the\nHamiltonian (\\ref{H12}) in each subspace of fixed eigenvalue $M$ of the\nspin operator $S_{total}^z$ reads\n\\begin{equation}\n\\label{GZ:Phi}\n\\Psi_{M} = \\sum_{m}{c_{m}^{(M)}|m \\rangle} \\hspace{0.2cm},\n\\hspace{0.5cm} c_{m}^{(M)}>0 \\hspace{0.2cm}.\n\\end{equation}\nHere the Ising states $|m\\rangle$ are defined by\n\\begin{equation}\n\\label{m}\n|m\\rangle \\equiv (-1)^{S_A-M_A}|m_1\\rangle \\otimes |m_2 \\rangle \\otimes\n\\cdots \\otimes |m_N\\rangle \\hspace{0.2cm},\n\\end{equation}\nwhere $|m_i\\rangle,\\hspace{0.2cm}i=1,\\cdots,N$, are the eigenstates of\nthe site spin operator $S_{i}^{z}$ ($ -s_{i} \\leq m_{i} \\leq s_i$),\n$S_A= \\sum_{i\\in A}s_i$, $M_{A(B)}=\\sum_{i\\in A(B)}m_i$, $M=M_A+M_B$.\nThe lattice consists of two equivalent sublattices $A$ and $B$.\n$s_i\\equiv s$, $i=1,\\cdots,N$, are the site spins. The summations in\nEq.(\\ref{GZ:Phi}) are restricted by the condition $\\sum_{i=1}^N m_i=M$.\nThe wave function (\\ref{GZ:Phi}) is not only an eigenstate of the\nunfrustrated Hamiltonian ($J_2=0$) and $S_{total}^z$ but simultaneously\nof the square of the total spin ${\\bf S}_{total}^2$ with quantum number\n$S=\\mid \\!M \\! \\mid$. Because $c_m^{(M)}>0$ is valid for each $m$ from\nthe basis set (\\ref{m}) it is impossible to build up other orthonormal\nstates without using negative amplitudes $c_m^{(M)}$. Hence the ground\nstate wave function $\\Psi_M$ is nondegenerated.\n\nFor the lowest energy eigenvalues $E(S)$ belonging to the subspace $M$\nwe have the Lieb-Mattis level-ordering\n\\begin{equation}\n\\label{e}\nE(S)0$). In the subspace with maximum\n$M=N\/2$ the MPSR is never violated in any dimension. Here the only\npossible state is the fully polarized ferromagnetic state which does not\nchange with increasing $J_2$.\n\nIn the next subspace $M=(N\/2)-1$ analytic solutions are found for linear\nchains and square lattices. In this subspace we deal with the so-called\none-magnon state, where the wave function can be expressed as a\nBloch-wave with a given $\\vec k$.\n\n{\\it Linear Chain} - In this case the solution can be found by comparing\nthe energies as a function of the wave vector $\\vec k$.\n\\begin{equation}\nE(k) = J_1 ( {N\\over 4} -1 + \\cos(k) ) + J_2 ( {N\\over 4} -1 + \\cos(2k) )\n\\end{equation}\nwith $\\vec k = {2 \\pi\\over N} i , i=0,\\pm1,\\pm2,\\ldots,+{N\\over2} $.\nThe comparison of $E(\\pi)$ and $E({2 \\pi\\over N}({N\\over2}-1))$ yields the\nequation for the critical $J_2$\n\\begin{equation}\nJ_2^c = J_1 {1 + \\cos \\left[ \\pi (1- {2 \\over N}) \\right] \\over\n 1 - \\cos \\left[ 2\\pi (1- {2 \\over N}) \\right] } .\n\\end{equation}\nIn the limit $N \\rightarrow \\infty$ one obtains $J_2^c={1\\over4}J_1$.\n\n{\\it Square lattice} -\nFor small $J_2$ in the considered subspace the lowest energy is\nobtained for $\\vec k=(\\pi,\\pi)$ and reads $E_1=J_1(N-8)+J_2N$. The\ncorresponding eigenfunction fulfills the MPSR. 
For larger $J_2$ we have\nto distinguish between two cases: (a) If the number of spins in the\nsublattice $N\/2$ is even we find a transition at $J_2=(J_1\/2)$ to a\ntwofold degenerated ground state with $\\vec k =(\\pi,0)$ or $\\vec k\n=(0,\\pi)$ with an energy $E_2=J_1(N-4)+J_2(N-8)$. This state violates\nthe MPSR, i.e. we have $J_2^c={1\\over2}J_1$. Notice, that the\neigenfunctions with $\\vec k =(\\pi,0)$ or $\\vec k =(0,\\pi)$ fulfill the\nso-called product-MPSR \\cite{Richter:zpb93}. (b) If the number of spins\nin the sublattice is odd (e.g. $N=26$), the situation is more\ncomplicated. The energy levels cross each other for $J_2^{c}$ slightly\ngreater than ${1\\over2}J_1$.\n\n\\section{Numerical Results}\nIn subspaces with lower quantum numbers $M < (N\/2)-1$ we cannot find\nsimple expressions for the eigenvalues and eigenfunctions. Hence, we\npresent exact diagonalization data for $M\\le(N\/2)-2$. Using a modified\nLanczos procedure we calculate in every subspace $M$ the state with the\nlowest energy $E_0(M)$. Since the number of Ising basis states increases\nexponentially as ${N \\choose N-M}$, one can calculate $E_0(M)$ for {\\bf\nall} $M={N\\over2},\\ldots,0$ only for relatively small systems (in our\ncase $N \\le 26$). However, in subspaces with larger $M$ we are able to\npresent data for $N$ up to $144$. In all cases we use periodic boundary\nconditions. Analyzing the eigenfunction with respect to the MPSR we can\ndetermine numerically the critical $J_2^{c}$ where the MPSR is violated.\n\n{\\it Linear Chain} - In Fig.\\ref{fig1} we show $J_2^{c}$ as a function\nof $1\/N$. For $M(N) = (N\/2)-1$ the analytic result is drawn. For the\nnext $M(N) = (N\/2)-2$ the data show a similar behavior with the same\ncritical value of $J_2={1\\over4}$ for $N\\rightarrow \\infty$. By\nconsidering the numerical data an analytic solution can be predicted\n\\begin{equation}\nJ_2^c = J_1{1 + \\cos \\left[ \\pi (1- {2 \\over (N-1)}) \\right] \\over\n 1 - \\cos \\left[ 2\\pi (1- {2 \\over (N-1)}) \\right] } .\n\\end{equation}\nThe following subspaces $M(N)=(N\/2)-p$, $p>2$ show a different behavior\nwith different critical values for $J_2$ if $N\\rightarrow \\infty$. These\ncritical values decrease for increasing $p$ but evidently a finite\nregion with a non-violated MPSR is preserved.\n\nIn Fig.\\ref{fig2} the critical $J^c_2$ is shown as a function of a\nrenormalized \\linebreak $M_r = M(N)\/(N\/2) $ for small systems\n($N=8,\\ldots,26$) over the full range of $M_r$. $M_r=1$ is the ground\nstate subspace for a ferromagnet and $M_r=0$ for an antiferromagnet. The\nfinite size extrapolation for the ground state with $M_r=0$ yields a\nsmall but finite critical value $J^c_2 \\approx 0.03J_1$ which\ncorresponds to the value estimated by Zeng and Parkinson in\n\\cite{Zeng:prb95}. The monotonic decrease of $J_2^{c}$ with decreasing\n$M_r$ indicates a finite region of a validity of the MPSR for all $M_r$.\n\n{\\it Square lattice} - In Fig.\\ref{fig3} we show $J_2^{c}$ as a function\nof $1\/N$. Here the $N$ dependence is less regular and a finite size\nextrapolation is much more difficult. This is mainly connected with the\nshape of the periodic boundaries. For some of the finite lattices the\nboundaries are parallel to the x- and y-axis (e.g. for $N=$4x4,\n6x6,...,12x12) whereas for other lattices the boundaries are inclined\n(e.g. $N=18,20,32$, see e.g \\cite{Oitmaa79}). 
The impression of an\noscillating behavior of $J_2^c$ versus $1\/N$ stems just from the\nalternation of parallel and inclined finite lattices. Nevertheless, it\nis evident that the critical $J^c_2$ goes to a finite value for\n$N\\rightarrow \\infty$. An extrapolation to the thermodynamic limit for\nthe antiferromagnetic ground state, i.e. subspace $M=0$, is almost\nimpossible for the square lattice. However, if we assume for $M=0$ that\nthe $J_2^c$ is almost independent of $1\/N$ for $N>16$ (as it is\nsuggested by Fig.\\ref{fig3} and by spin wave theory) we could estimate\nfrom our data for $N=10,16,18,20$ a critical value of about $0.2 \\ldots\n0.3$.\n\nFig.\\ref{fig4} supports this estimation. Here the critical $J_2^c$ is\nshown for different small lattices $N \\le 34$ as a function of $M_r$. It\nis seen that for $M_r \\le 0.6$ the critical $J_2^c$ does not strongly\ndepend on $M_r$ (in contrast to the linear chain, where we have a\nmonotonic decrease) and gives for all the lattices a value of about\n$0.3J_1$ for the antiferromagnetic ground state ($M_r=0$).\n\n\\section{Conclusion}\nFor linear chains and square lattices we have shown, that in subspaces\nwith large quantum number $M$ of the spin operator $S_{total}^z$, the\nMarshall-Peierls sign rule is preserved up to a fairly large frustration\nparameter $J_2^{c}$.\n\nMoreover, for linear chains the finite size extrapolation is quite\nreliable even for the singlet ground state and yields for {\\bf all} $M$\na finite parameter region for $J_2$ where the MPSR is valid.\n\nFor square lattices we observe in general higher critical values $J_2^c$\ncompared to linear chains. From this observation and from the\nextrapolation for subspaces with higher $M$ we argue that for square\nlattices the MPSR is stable against a finite frustration in all\nsubspaces, too.\n\n\n\\section{Acknowledgments}\nThis work has been supported by the Deutsche Forschungsgemeinschaft\n(Project No. Ri 615\/1-2) and the Bulgarian Science Foundation (Grant\nF412\/94).\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{sIntroduction}\n\nA complex $b$-manifold is a smooth manifold with boundary together with a complex $b$-structure. The latter is a smooth involutive subbundle ${}^b{\\!}T^{0,1}\\mathcal M$ of the complexification $\\mathbb C{}^b{\\!}T\\mathcal M$ of Melrose's $b$-tangent bundle \\cite{RBM1,RBM2} with the property that\n\\begin{equation*}\n\\mathbb C{}^b{\\!}T\\mathcal M = {}^b{\\!}T^{0,1}\\mathcal M + \\overline{{}^b{\\!}T^{0,1}\\mathcal M}\n\\end{equation*}\nas a direct sum. Manifolds with complex $b$-structures generalize the situation that arises as a result of spherical and certain anisotropic (not complex) blowups of complex manifolds at a discrete set of points or along a complex submanifold, cf. \\cite[Section 2]{Me3}, \\cite{Me5}, as well as (real) blow-ups of complex analytic varieties with only point singularities.\n\nThe interior of $\\mathcal M$ is a complex manifold. Its $\\overline \\partial$-complex determines a $b$-elliptic complex, the ${}^b\\!\\overline \\partial$-complex, on sections of the exterior powers of the dual of ${}^b{\\!}T^{0,1}\\mathcal M$, see Section~\\ref{sPreliminaries}. The indicial families $\\overline\\D(\\sigma)$ of the ${}^b\\!\\overline \\partial$-operators at a connected component $\\mathcal N$ of $\\partial \\mathcal M$ give, for each $\\sigma$, an elliptic complex, see Section~\\ref{sIndicialComplex}. 
Their cohomology at the various values of $\\sigma$ determine the asymptotics at $\\mathcal N$ of tempered representatives of cohomology classes of the ${}^b\\!\\overline \\partial$-complex, in particular of tempered holomorphic functions. \n\nEach boundary component $\\mathcal N$ of $\\mathcal M$ inherits from ${}^b{\\!}T^{0,1}\\mathcal M$ the following objects in the $C^\\infty$ category:\n\\begin{enumerate}\n\\item an involutive vector subbundle $\\overline\\Vee\\subset \\mathbb C T\\mathcal N$ such that $\\mathcal V+\\overline\\Vee=\\mathbb C T\\mathcal N$;\n\\item a real nowhere vanishing vector field $\\mathcal T$ such that $\\mathcal V\\cap\\overline\\Vee=\\Span_\\mathbb C\\mathcal T$;\n\\item a class $\\pmb \\beta$ of sections of $\\smash[t]{\\overline\\Vee}^*$,\n\\end{enumerate}\nwhere the elements of $\\pmb \\beta$ have additional properties, described in (4) below. The vector bundle $\\overline\\Vee$, being involutive, determines a complex of first order differential operators $\\overline\\Dee$ on sections of the exterior powers of $\\smash[t]{\\overline\\Vee}^*$, elliptic because of the second property in (1) above. To that list add\n\\begin{enumerate}\n\\item [(4)] If $\\beta\\in \\pmb \\beta$ then $\\overline\\Dee\\beta=0$ and $\\Im\\langle \\beta,\\mathcal T\\rangle=-1$, and if $\\beta$, $\\beta'\\in\\pmb\\beta$, then $\\beta'-\\beta=\\overline\\Dee u$ with $u$ real-valued.\n\\end{enumerate}\nThese properties, together with the existence of a Hermitian metric on $\\overline\\Vee$ invariant under $\\mathcal T$ make $\\mathcal N$ behave in many ways as the circle bundle of a holomorphic line bundle over a compact complex manifold. These analogies are investigated in \\cite{Me6,Me7,Me8,Me9}. The last of these papers contains a detailed account of circle bundles from the perspective of these boundary structures. The paper \\cite{Me4}, a predecessor of the present one, contains some facts studied here in more detail.\n\n\n\\medskip\nThe paper is organized as follows. Section~\\ref{sPreliminaries} deals with the definition of complex $b$-structure and Section~\\ref{sHolomorphicVectorBundles} with holomorphic vector bundles over complex $b$-manifolds (the latter term just means that the $b$-tangent bundle takes on a primary role over that of the usual tangent bundle). The associated Dolbeault complexes are defined in these sections accordingly.\n\nSection~\\ref{sBoundaryStructure} is a careful account of the structure inherited by the boundary.\n\nIn Section~\\ref{sLocalInvariants} we show that complex $b$-structures have no formal local invariants at boundary points. The issue here is that we do not have a Newlander-Nirenberg theorem that is valid in a neighborhoods of a point of the boundary, so no explicit local model for $b$-manifolds.\n\nSection~\\ref{sIndicialComplex} is devoted to general aspects of $b$-elliptic first order complexes $A$. We introduce here the set $\\spec_{b,\\mathcal N}^q(A)$, the boundary spectrum of the complex in degree $q$ at the component $\\mathcal N$ of $\\mathcal M$, and prove basic properties of the boundary spectrum (assuming that the boundary component $\\mathcal N$ is compact), including some aspects concerning Mellin transforms of $A$-closed forms. Some of these ideas are illustrated using the $b$-de Rham complex.\n\nSection~\\ref{sUnderlyingCRcomplexes} is a systematic study of the $\\overline \\partial_b$-complex of CR structures on $\\mathcal N$ associated with elements of the class $\\pmb \\beta$. 
Each $\\beta\\in \\pmb\\beta$ defines a CR structure, $\\overline \\K_\\beta=\\ker \\beta$. Assuming that $\\overline\\Vee$ admits a $\\mathcal T$-invariant Hermitian metric, we show that there is $\\beta\\in \\pmb \\beta$ such that the CR structure $\\overline \\K_\\beta$ is $\\mathcal T$-invariant.\n\nIn Section~\\ref{sSpectrum} we assume that $\\overline\\Vee$ is $\\mathcal T$-invariant and show that for $\\mathcal T$-invariant CR structures, a theorem proved in \\cite{Me9} gives that the cohomology spaces of the associated $\\overline \\partial_b$-complex, viewed as the kernel of the Kohn Laplacian at the various degrees, split into eigenspaces of $-i\\mathcal {L}_\\mathcal T$. The eigenvalues of the latter operator are related to the indicial spectrum of the ${}^b\\!\\overline \\partial$-complex.\n\nIn Section~\\ref{sIndicialCohomology} we prove a precise theorem on the indicial cohomology and spectrum for the ${}^b\\!\\overline \\partial$-complex under the assumption that $\\overline\\Vee$ admits a $\\mathcal T$-invariant Hermitian metric.\n\nFinally, we have included a very short appendix listing a number of basic definitions in connection with $b$-operators. \n\n\n\\section{Complex $b$-structures}\\label{sPreliminaries}\n\nLet $\\mathcal M$ be a smooth manifold with smooth boundary. An almost CR $b$-structure on $\\mathcal M$ is a subbundle $\\overline\\W$ of the complexification, $\\mathbb C{}^b{\\!}T\\mathcal M\\to\\mathcal M$ of the $b$-tangent bundle of $\\mathcal M$ (Melrose \\cite{RBM1,RBM2}) such that\n\\begin{equation}\\label{bCRstructure}\n\\mathcal W\\cap \\overline\\W=0\n\\end{equation}\nwith $\\mathcal W=\\overline\\W$. If in addition \n\\begin{equation}\\label{bElliptic}\n\\mathcal W + \\overline\\W=\\mathbb C{}^b{\\!}T\\mathcal M\n\\end{equation}\nthen we say that $\\overline\\W$ is an almost complex $b$-structure and write ${}^b{\\!}T^{0,1}\\mathcal M$ instead of $\\overline\\W$ and ${}^b{\\!}T^{1,0}\\mathcal M$ for its conjugate. As is customary, the adverb ``almost'' is dropped if $\\mathcal W$ is involutive. Note that since $C^\\infty(\\mathcal M;{}^b{\\!}T\\mathcal M)$ is a Lie algebra, it makes sense to speak of involutive subbundles of ${}^b{\\!}T\\mathcal M$ (or its complexification). \n\n\\begin{definition}\nA complex $b$-manifold is a manifold together with a complex $b$-structure.\n\\end{definition}\n\nBy the Newlander-Nirenberg Theorem~\\cite{NeNi57}, the interior of complex $b$-manifold is a complex manifold. However, its boundary is not a CR manifold; rather, as we shall see, it naturally carries a family of CR structures parametrized by the defining functions of $\\partial\\mathcal M$ in $\\mathcal M$ which are positive in $\\open \\mathcal M$.\n\n\\medskip\nThat $C^\\infty(\\mathcal M;{}^b{\\!}T\\mathcal M)$ is a Lie algebra is an immediate consequence of the definition of the $b$-tangent bundle, which indeed can be characterized as being a vector bundle ${}^b{\\!}T\\mathcal M\\to\\mathcal M$ together with a vector bundle homomorphism \n\\begin{equation*}\n\\mathrm{ev}:{}^b{\\!}T\\mathcal M\\to T\\mathcal M\n\\end{equation*}\ncovering the identity such that the induced map\n\\begin{equation*}\n\\mathrm{ev}_*:C^\\infty(\\mathcal M;{}^b{\\!}T\\mathcal M)\\to C^\\infty(\\mathcal M;T\\mathcal M)\n\\end{equation*}\nis a $C^\\infty(\\mathcal M;\\mathbb R)$-module isomorphism onto the submodule $C^\\infty_{\\tan}(\\mathcal M;T\\mathcal M)$ of smooth vector fields on $\\mathcal M$ which are tangential to the boundary of $\\mathcal M$. 
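To fix ideas, we recall the standard local picture, which is not needed in what follows: if $\\mathfrak r,y^1,\\dotsc,y^{2n+1}$ are coordinates near a boundary point (so $\\dim \\mathcal M=2n+2$), with $\\mathfrak r$ a defining function for $\\partial\\mathcal M$, then $C^\\infty_{\\tan}(\\mathcal M;T\\mathcal M)$ consists locally of the vector fields\n\\begin{equation*}\na\\,\\mathfrak r\\partial_\\mathfrak r+\\sum_{j=1}^{2n+1}b^j\\partial_{y^j},\\qquad a,b^j\\in C^\\infty,\n\\end{equation*}\nand correspondingly $\\mathfrak r\\partial_\\mathfrak r,\\partial_{y^1},\\dotsc,\\partial_{y^{2n+1}}$ is a local frame for ${}^b{\\!}T\\mathcal M$.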
Since $C^\\infty_{\\tan}(\\mathcal M,T\\mathcal M)$ is closed under Lie brackets, there is an induced Lie bracket on $C^\\infty(\\mathcal M;{}^b{\\!}T\\mathcal M)$. The homomorphism $\\mathrm{ev}$ is an isomorphism over the interior of $\\mathcal M$, and its restriction to the boundary,\n\\begin{equation}\\label{evbM}\n\\mathrm{ev}_{\\partial\\mathcal M}:{}^b{\\!}T_{\\partial\\mathcal M}\\mathcal M\\to T\\partial\\mathcal M\n\\end{equation}\nis surjective. Its kernel, a fortiori a rank-one bundle, is spanned by a canonical section denoted $\\mathfrak r\\partial_\\mathfrak r$. Here and elsewhere, $\\mathfrak r$ refers to any smooth defining function for $\\partial \\mathcal M$ in $\\mathcal M$, by convention positive in the interior of $\\mathcal M$.\n\nAssociated with a complex $b$-structure on $\\mathcal M$ there is a Dolbeault complex. Let ${}^b{\\!}\\raise2ex\\hbox{$\\mathchar\"0356$}^{0,q}\\mathcal M$ denote the $q$-th exterior power of the dual of ${}^b{\\!}T^{0,1}\\mathcal M$. Then the operator\n\\begin{equation*}\n\\cdots\\to C^\\infty(\\mathcal M;{}^b{\\!}\\raise2ex\\hbox{$\\mathchar\"0356$}^{0,q}\\mathcal M)\\xrightarrow{{}^b\\!\\overline \\partial} C^\\infty(\\mathcal M;{}^b{\\!}\\raise2ex\\hbox{$\\mathchar\"0356$}^{0,q+1}\\mathcal M) \\to \\cdots\n\\end{equation*}\nis defined by\n\\begin{multline}\\label{CartanFormula}\n(q+1)\\,{}^b\\!\\overline \\partial\\phi(V_0,\\dotsc,V_q)=\\sum_{j=0}^q (-1)^j V_j\\phi(V_0,\\dotsc,\\hat V_j,\\dotsc,V_q)\\\\\n +\\sum_{j<k}(-1)^{j+k}\\phi([V_j,V_k],V_0,\\dotsc,\\hat V_j,\\dotsc,\\hat V_k,\\dotsc,V_q)\n\\end{multline}\nfor sections $V_0,\\dotsc,V_q$ of ${}^b{\\!}T^{0,1}\\mathcal M$; the involutivity of ${}^b{\\!}T^{0,1}\\mathcal M$ guarantees that $[V_j,V_k]$ is again a section of ${}^b{\\!}T^{0,1}\\mathcal M$, so the right-hand side is well defined. For $p>0$, we define ${}^b\\!\\overline \\partial$ on forms of type $(p,q)$ with the aid of the $b$-de Rham complex, exactly as in Folland and Kohn~\\cite{FK} for standard complex structures and the de Rham complex. The $b$-de Rham complex, we recall from Melrose~\\cite{RBM2}, is the complex associated with the dual, $\\mathbb C{}^b{\\!}T^*\\mathcal M$, of $\\mathbb C{}^b{\\!}T\\mathcal M$,\n\\begin{equation*}\n\\cdots\\to C^\\infty(\\mathcal M;{}^b{\\!}\\raise2ex\\hbox{$\\mathchar\"0356$}^r\\mathcal M)\\xrightarrow{{}^b\\!d} C^\\infty(\\mathcal M;{}^b{\\!}\\raise2ex\\hbox{$\\mathchar\"0356$}^{r+1}\\mathcal M) \\to \\cdots\n\\end{equation*}\nwhere ${}^b{\\!}\\raise2ex\\hbox{$\\mathchar\"0356$}^r\\mathcal M$ denotes the $r$-th exterior power of $\\mathbb C{}^b{\\!}T^*\\mathcal M$. The operators ${}^b\\!d$ are defined by the same formula as \\eqref{CartanFormula}, now however with the $V_j\\in C^\\infty(\\mathcal M;\\mathbb C{}^b{\\!}T\\mathcal M)$. On functions $f$ we have \n\\begin{equation*}\n{}^b\\!d f=\\mathrm{ev}^*df.\n\\end{equation*}\nMore generally,\n\\begin{equation*}\n\\mathrm{ev}^* \\circ d = {}^b\\!d\\circ \\mathrm{ev}^*\n\\end{equation*}\nin any degree. 
Also,\n\\begin{equation}\\label{FirstOrder}\n{}^b\\!d(f\\phi)=f\\,{}^b\\!d\\phi+{}^b\\!d f\\wedge \\phi\\text{ for } \\phi\\in C^\\infty(\\mathcal M;{}^b{\\!}\\raise2ex\\hbox{$\\mathchar\"0356$}^r\\mathcal M) \\text{ and } f\\in C^\\infty(\\mathcal M).\n\\end{equation}\nIt is convenient to note here that for $f\\in C^\\infty(\\mathcal M)$,\n\\begin{equation}\\label{VanishingOnBdy}\n\\text{${}^b\\!d f$ vanishes on $\\partial \\mathcal M$ if $f$ does.}\n\\end{equation}\n\nNow, with the obvious definition,\n\\begin{equation}\\label{SpittingOfDeRham}\n{}^b{\\!}\\raise2ex\\hbox{$\\mathchar\"0356$}^r\\mathcal M=\\bigoplus_{p+q=r}{}^b{\\!}\\raise2ex\\hbox{$\\mathchar\"0356$}^{p,q}\\mathcal M.\n\\end{equation}\nUsing the special cases \n\\begin{gather*}\n {}^b\\!d:C^\\infty(\\mathcal M;{}^b{\\!}\\raise2ex\\hbox{$\\mathchar\"0356$}^{0,1})\\to C^\\infty(\\mathcal M;{}^b{\\!}\\raise2ex\\hbox{$\\mathchar\"0356$}^{1,1})+C^\\infty(\\mathcal M;{}^b{\\!}\\raise2ex\\hbox{$\\mathchar\"0356$}^{0,2}),\\\\\n {}^b\\!d:C^\\infty(\\mathcal M;{}^b{\\!}\\raise2ex\\hbox{$\\mathchar\"0356$}^{1,0})\\to C^\\infty(\\mathcal M;{}^b{\\!}\\raise2ex\\hbox{$\\mathchar\"0356$}^{2,0})+C^\\infty(\\mathcal M;{}^b{\\!}\\raise2ex\\hbox{$\\mathchar\"0356$}^{1,1}),\n\\end{gather*}\nconsequences of the involutivity of ${}^b{\\!}T^{0,1}\\mathcal M$ and its conjugate, one gets \n\\begin{equation*}\n{}^b\\!d\\phi\\in C^\\infty(\\mathcal M;{}^b{\\!}\\raise2ex\\hbox{$\\mathchar\"0356$}^{p+1,q}\\mathcal M)\\oplus C^\\infty(\\mathcal M;{}^b{\\!}\\raise2ex\\hbox{$\\mathchar\"0356$}^{p,q+1}\\mathcal M)\\quad \\text{if } \\phi\\in C^\\infty(\\mathcal M;{}^b{\\!}\\raise2ex\\hbox{$\\mathchar\"0356$}^{p,q}\\mathcal M)\n\\end{equation*}\nfor general $(p,q)$. Let $\\pi_{p,q}:{}^b{\\!}\\raise2ex\\hbox{$\\mathchar\"0356$}^k\\mathcal M\\to{}^b{\\!}\\raise2ex\\hbox{$\\mathchar\"0356$}^{p,q}\\mathcal M$ be the projection according to the decomposition \\eqref{SpittingOfDeRham}, and define\n\\begin{equation*}\n{}^b\\!\\dee=\\pi_{p+1,q}{}^b\\!d,\\quad {}^b\\!\\overline \\partial=\\pi_{q,p+1}{}^b\\!d,\n\\end{equation*}\nso ${}^b\\!d={}^b\\!\\dee+{}^b\\!\\overline \\partial$. The operators ${}^b\\!\\overline \\partial$ are identical to the $\\overline \\partial$-operators over the interior of $\\mathcal M$ and with the previously defined ${}^b\\!\\overline \\partial$ operators on $(0,q)$-forms, and give a complex\n\\begin{equation}\\label{bdeebarComplex}\n\\cdots\n\\to C^\\infty(\\mathcal M;{}^b{\\!}\\raise2ex\\hbox{$\\mathchar\"0356$}^{p,q}\\mathcal M) \\xrightarrow{{}^b\\!\\overline \\partial}\nC^\\infty(\\mathcal M;{}^b{\\!}\\raise2ex\\hbox{$\\mathchar\"0356$}^{p,q+1}\\mathcal M)\n\\to\\cdots\n\\end{equation}\nfor each $p$. On functions $f:\\mathcal M\\to\\mathbb C$,\n\\begin{equation}\\label{bdeebarOnFunctions}\n{}^b\\!\\overline \\partial f = \\pi_{0,1}\\, {}^b\\!d f.\n\\end{equation}\nThe formula\n\\begin{equation}\\label{bdeebarFirstOrderPrime}\\tag{\\ref{FirstOrder}$'$}\n{}^b\\!\\overline \\partial f\\phi={}^b\\!\\overline \\partial f\\wedge \\phi+f{}^b\\!\\overline \\partial\\phi ,\\quad f\\in C^\\infty(\\mathcal M),\\ \\phi\\in C^\\infty(\\mathcal M;{}^b{\\!}\\raise2ex\\hbox{$\\mathchar\"0356$}^{p,q}\\mathcal M),\n\\end{equation}\na consequence of \\eqref{FirstOrder}, implies that ${}^b\\!\\overline \\partial$ is a first order operator. 
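Indeed, since ${}^b\\!\\dee f\\wedge\\phi$ is of type $(p+1,q)$ while $f\\,{}^b\\!\\overline \\partial\\phi$ and ${}^b\\!\\overline \\partial f\\wedge\\phi$ are of type $(p,q+1)$, applying the projection $\\pi_{p,q+1}$ to \\eqref{FirstOrder} gives\n\\begin{equation*}\n{}^b\\!\\overline \\partial(f\\phi)=\\pi_{p,q+1}\\big(f\\,{}^b\\!d\\phi+{}^b\\!d f\\wedge \\phi\\big)=f\\,{}^b\\!\\overline \\partial\\phi+{}^b\\!\\overline \\partial f\\wedge \\phi,\n\\end{equation*}\nwhich is \\eqref{bdeebarFirstOrderPrime}.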
As a consequence of \\eqref{VanishingOnBdy}, \n\\begin{equation}\\label{bdeebarVanishingOnBdy}\\tag{\\ref{VanishingOnBdy}$'$}\n\\text{${}^b\\!\\overline \\partial f$ vanishes on $\\partial \\mathcal M$ if $f$ does.}\n\\end{equation}\n\n\n\n\\medskip\nThe operators of the $b$-de Rham complex are first order operators because of \\eqref{FirstOrder}, and \\eqref{VanishingOnBdy} implies that these are $b$-operators, see \\eqref{TotallyChar}. Likewise, \\eqref{bdeebarFirstOrderPrime} and \\eqref{bdeebarVanishingOnBdy} imply that in any bidegree, the operator $\\phi\\mapsto \\mathfrak r^{-1}\\,{}^b\\!\\overline \\partial \\,\\mathfrak r\\phi$ has coefficients smooth up to the boundary, so\n\\begin{equation}\\label{bdeebarOnpq}\n{}^b\\!\\overline \\partial\\in \\Diff^1_b(\\mathcal M;{}^b{\\!}\\raise2ex\\hbox{$\\mathchar\"0356$}^{p,q}\\mathcal M,{}^b{\\!}\\raise2ex\\hbox{$\\mathchar\"0356$}^{p,q+1}\\mathcal M),\n\\end{equation}\nsee \\eqref{TotallyChar}. We also get from these formulas that the $b$-symbol of ${}^b\\!\\overline \\partial$ is\n\\begin{equation}\\label{bsymbdeebar}\n\\bsym({}^b\\!\\overline \\partial)(\\xi)(\\phi)=i \\pi_{0,1}(\\xi)\\wedge \\phi, \\quad x\\in\\mathcal M,\\ \\xi\\in {}^b{\\!}T^*_x\\mathcal M,\\ \\phi\\in {}^b{\\!}\\raise2ex\\hbox{$\\mathchar\"0356$}^{p,q}_x\\mathcal M,\n\\end{equation}\nsee \\eqref{TheBSymbol}. Since $\\pi_{0,1}$ is injective on the real $b$-cotangent bundle (this follows from \\eqref{bElliptic}), the complex \\eqref{bdeebarComplex} is $b$-elliptic.\n\n\n\\section{Holomorphic vector bundles}\\label{sHolomorphicVectorBundles}\nThe notion of holomorphic vector bundle in the $b$-category is a translation of the standard one using connections. Let $\\rho:F\\to\\mathcal M$ be a complex vector bundle. Recall from \\cite{RBM2} that a $b$-connection on $F$ is a linear operator\n\\begin{equation*}\n{}^b\\!\\nabla:C^\\infty(\\mathcal M;F)\\to C^\\infty(\\mathcal M;{}^b{\\!}\\raise2ex\\hbox{$\\mathchar\"0356$}^1\\mathcal M \\otimes F)\n\\end{equation*}\nsuch that\n\\begin{equation}\\label{Connection}\n{}^b\\!\\nabla f\\phi=f\\, {}^b\\!\\nabla\\phi+ {}^b\\!d f \\otimes \\phi\n\\end{equation}\nfor each $\\phi \\in C^\\infty(\\mathcal M;F)$ and $f\\in C^\\infty(\\mathcal M)$. This property automatically makes ${}^b\\!\\nabla$ a $b$-operator.\n\nA standard connection $\\nabla:C^\\infty(\\mathcal M;F)\\to C^\\infty(\\mathcal M;\\raise2ex\\hbox{$\\mathchar\"0356$}^1\\mathcal M \\otimes F)$ determines a $b$-connection by composition with\n\\begin{equation*}\n\\mathrm{ev}^*\\otimesI:\\raise2ex\\hbox{$\\mathchar\"0356$}^1\\mathcal M \\otimes F\\to {}^b{\\!}\\raise2ex\\hbox{$\\mathchar\"0356$}^1\\mathcal M \\otimes F,\n\\end{equation*}\nbut $b$-connections are more general than standard connections. Indeed, the difference between the latter and the former can be any smooth section of the bundle $\\Hom(F,{}^b{\\!}\\raise2ex\\hbox{$\\mathchar\"0356$}^1\\mathcal M\\otimes F)$. 
A $b$-connection ${}^b\\!\\nabla$ on $F$ arises from a standard connection if and only if ${}^b\\!\\nabla_{\\mathfrak r\\partial_\\mathfrak r}=0$ along $\\partial\\mathcal M$.\n\nAs in the standard situation, the $b$-connection ${}^b\\!\\nabla$ determines operators\n\\begin{equation}\\label{ExtConnection}\n{}^b\\!\\nabla:C^\\infty(\\mathcal M;{}^b{\\!}\\raise2ex\\hbox{$\\mathchar\"0356$}^k\\mathcal M \\otimes F)\\to C^\\infty(\\mathcal M;{}^b{\\!}\\raise2ex\\hbox{$\\mathchar\"0356$}^{k+1}\\mathcal M \\otimes F)\n\\end{equation}\nby way of the usual formula translated to the $b$ setting:\n\\begin{equation}\\label{ExtConnectionBis}\n{}^b\\!\\nabla(\\alpha\\otimes \\phi) = (-1)^k \\alpha \\wedge {}^b\\!\\nabla \\phi + {}^b\\!d\\alpha \\wedge \\phi,\\quad \\phi \\in C^\\infty(\\mathcal M;F),\\ \\alpha\\in {}^b{\\!}\\raise2ex\\hbox{$\\mathchar\"0356$}^k\\mathcal M.\n\\end{equation}\nSince\n\\begin{equation*}\n{}^b\\!\\nabla \\mathfrak r\\alpha\\otimes \\phi=\\mathfrak r\\, {}^b\\!\\nabla(\\alpha\\otimes \\phi) + {}^b\\!d \\mathfrak r \\wedge \\alpha\\otimes \\phi\n\\end{equation*}\nis smooth and vanishes on $\\partial\\mathcal M$, also\n\\begin{equation*}\n{}^b\\!\\nabla\\in \\Diff^1_b(\\mathcal M;{}^b{\\!}\\raise2ex\\hbox{$\\mathchar\"0356$}^k\\mathcal M\\otimes F,{}^b{\\!}\\raise2ex\\hbox{$\\mathchar\"0356$}^{k+1}\\mathcal M\\otimes F).\n\\end{equation*}\nThe principal $b$-symbol of \\eqref{ExtConnection}, easily computed using \\eqref{ExtConnectionBis} and\n\\begin{equation*}\n\\bsym({}^b\\!\\nabla)({}^b\\!d f)(\\phi) = \\lim_{\\tau\\to\\infty} \\frac{e^{-i\\tau f}}{\\tau} {}^b\\!\\nabla e^{i\\tau f}\\phi\n\\end{equation*}\nfor $f\\in C^\\infty(\\mathcal M;\\mathbb R)$ and $\\phi \\in C^\\infty(\\mathcal M;{}^b{\\!}\\raise2ex\\hbox{$\\mathchar\"0356$}^k\\mathcal M \\otimes F)$, is\n\\begin{equation*}\n\\bsym({}^b\\!\\nabla)(\\xi)(\\phi) = i\\xi\\wedge \\phi,\\quad \\xi\\in {}^b{\\!}T_x^*\\mathcal M,\\ \\phi \\in {}^b{\\!}\\raise2ex\\hbox{$\\mathchar\"0356$}^k_x\\mathcal M \\otimes F_x,\\ x\\in \\mathcal M.\n\\end{equation*}\n\nAs expected, the connection is called holomorphic if the component in ${}^b{\\!}\\raise2ex\\hbox{$\\mathchar\"0356$}^{0,2}\\mathcal M\\otimes F$ of the curvature operator\n\\begin{equation*}\n\\Omega={}^b\\!\\nabla^2:C^\\infty(\\mathcal M;F)\\to C^\\infty(\\mathcal M;{}^b{\\!}\\raise2ex\\hbox{$\\mathchar\"0356$}^2\\mathcal M\\otimes F),\n\\end{equation*}\nvanishes. Such a connection gives $F$ the structure of a complex $b$-manifold. Its complex $b$-structure can be described locally as in the standard situation, as follows. Fix a frame $\\eta_\\mu$ for $F$ and let the $\\omega^\\nu_\\mu$ be the local sections of ${}^b{\\!}\\raise2ex\\hbox{$\\mathchar\"0356$}^{0,1}\\mathcal M$ such that\n\\begin{equation*}\n{}^b\\!\\overline \\partial \\eta_\\mu = \\sum_\\nu \\omega^\\nu_\\mu\\otimes \\eta_\\nu.\n\\end{equation*}\nDenote by $\\zeta^\\mu$ the fiber coordinates determined by the frame $\\eta_\\mu$. Let $V_1,\\dotsc, V_{n+1}$ be a frame of ${}^b{\\!}T^{0,1}\\mathcal M$ over $U$, denote by $\\tilde V_j$ the sections of $\\mathbb C{}^b{\\!}T F$ over $\\rho^{-1}(U)$ which project on the $V_j$ and satisfy $\\tilde V_j\\zeta^\\mu=\\tilde V_j\\overline \\zeta^\\mu=0$ for all $\\mu$, and by $\\partial_{\\zeta^\\mu}$ the vertical vector fields such that $\\partial_{\\zeta^\\mu}\\zeta^\\nu=\\delta^\\nu_\\mu$ and $\\partial_{\\zeta^\\mu}\\overline \\zeta^\\nu=0$. 
Then the sections\n\\begin{equation}\\label{LocalbT01E}\n\\tilde V_j-\\sum_{\\mu,\\nu}\\zeta^\\mu \\langle \\omega^\\nu_\\mu, V_j\\rangle \\partial_{\\zeta^\\nu},\\ j=1,\\dotsc,n+1,\\quad \\partial_{\\overline \\zeta^\\nu},\\ \\nu=1,\\dotsc,k\n\\end{equation}\nof $\\mathbb C{}^b{\\!}T F$ over $\\rho^{-1}(U)$ form a frame of ${}^b{\\!}T^{0,1}F$. As in the standard situation, the involutivity of this subbundle of $\\mathbb C{}^b{\\!}T F$ is equivalent to the condition on the vanishing of the $(0,2)$ component of the curvature of ${}^b\\!\\nabla$. A vector bundle $F\\to\\mathcal M$ together with the complex $b$-structure determined by a choice of holomorphic $b$-connection (if one exists at all) is a holomorphic vector bundle.\n\n\nThe $\\overline \\partial$ operator of a holomorphic vector bundle is\n\\begin{equation*}\n{}^b\\!\\overline \\partial = (\\pi_{0,q+1}\\otimes I)\\circ {}^b\\!\\nabla : C^\\infty(\\mathcal M;{}^b{\\!}\\raise2ex\\hbox{$\\mathchar\"0356$}^{0,q}\\mathcal M\\otimes F)\\to C^\\infty(\\mathcal M;{}^b{\\!}\\raise2ex\\hbox{$\\mathchar\"0356$}^{0,q+1}\\mathcal M \\otimes F).\n\\end{equation*}\nAs is the case for standard complex structures, the condition on the curvature of ${}^b\\!\\nabla$ implies that these operators form a complex, $b$-elliptic since\n\\begin{equation*}\n\\bsym({}^b\\!\\overline \\partial)(\\xi)(\\phi) = i\\pi_{0,1}(\\xi)\\wedge \\phi,\\quad \\xi\\in {}^b{\\!}T_x^*\\mathcal M,\\ \\phi \\in {}^b{\\!}\\raise2ex\\hbox{$\\mathchar\"0356$}^k_x\\mathcal M \\otimes F_x,\\ x\\in \\mathcal M\n\\end{equation*}\nand $\\pi_{0,1}(\\xi)=0$ for $\\xi\\in {}^b{\\!}T^*\\mathcal M$ if and only if $\\xi=0$.\n\nAlso as usual, a $b$-connection ${}^b\\!\\nabla$ on a Hermitian vector bundle $F\\to\\mathcal M$ with Hermitian form $h$ is Hermitian if\n\\begin{equation*}\n{}^b\\!d h(\\phi,\\psi)= h({}^b\\!\\nabla \\phi,\\psi)+h(\\phi,{}^b\\!\\nabla\\psi)\n\\end{equation*}\nfor every pair of smooth sections $\\phi$, $\\psi$ of $F$. In view of the definition of ${}^b\\!d$ this means that for every $v\\in \\mathbb C{}^b{\\!}T\\mathcal M$ and sections as above,\n\\begin{equation*}\n\\mathrm{ev}(v) h(\\phi,\\psi)= h({}^b\\!\\nabla_{\\!v} \\phi,\\psi)+h(\\phi,{}^b\\!\\nabla_{\\!\\overline v}\\psi)\n\\end{equation*}\nOn a complex $b$-manifold $\\mathcal M$, if an arbitrary connection ${}^b\\!\\nabla'$ and the Hermitian form $h$ are given for a vector bundle $F$, holomorphic or not, then there is a unique \\emph{Hermitian} $b$-connection ${}^b\\!\\nabla$ such that $\\pi_{0,1}{}^b\\!\\nabla = \\pi_{0,1}{}^b\\!\\nabla'$. 
Namely, let $\\eta_\\mu$ be a local orthonormal frame of $F$, let\n\\begin{equation*}\n(\\pi_{0,1}\\otimesI)\\circ {}^b\\!\\nabla'\\eta_\\mu = \\sum_\\nu\\omega^\\nu_\\mu \\otimes \\eta_\\nu,\n\\end{equation*}\nand let ${}^b\\!\\nabla$ be the connection defined in the domain of the frame by\n\\begin{equation}\\label{HermitianConnectionForms}\n{}^b\\!\\nabla\\eta_\\mu =(\\omega^\\nu_\\mu-\\overline \\omega^\\mu_\\nu)\\otimes \\eta_\\nu.\n\\end{equation}\nIf the matrix of functions $Q=[q^\\mu_\\lambda]$ is unitary and $\\tilde \\eta_\\lambda=\\sum_\\mu q^\\mu_\\lambda \\eta_\\mu$, then \n\\begin{equation*}\n(\\pi_{0,1}\\otimesI)\\circ {}^b\\!\\nabla'\\tilde \\eta_\\lambda = \\sum_\\nu\\tilde \\omega^\\sigma_\\lambda \\otimes \\tilde \\eta_\\sigma\n\\end{equation*}\nwith\n\\begin{equation*}\n\\tilde \\omega^\\sigma_\\lambda = \\sum_\\mu \\overline q^\\mu_\\sigma\\,{}^b\\!\\overline \\partial q^\\mu_\\lambda + \\sum_{\\mu,\\nu} \\overline q^\\mu_\\sigma q^\\nu_\\lambda\\omega^\\mu_\\nu,\n\\end{equation*}\nusing \\eqref{Connection}, that $Q^{-1}=[\\overline q^\\mu_\\lambda]$, and that $\\pi_{0,1}{}^b\\!d f={}^b\\!\\overline \\partial f$. Thus\n\\begin{align*}\n\\tilde \\omega^\\sigma_\\lambda-\\overline {\\tilde \\omega}^\\lambda_\\sigma\n&= \\sum_\\mu (\\overline q^\\mu_\\sigma\\,{}^b\\!\\overline \\partial q^\\mu_\\lambda - q^\\mu_\\lambda\\,{}^b\\!\\dee \\overline q^\\mu_\\sigma) + \\sum_{\\mu,\\nu} (\\overline q^\\mu_\\sigma q^\\nu_\\lambda\\omega^\\mu_\\nu - q^\\mu_\\lambda \\overline q^\\nu_\\sigma\\overline \\omega^\\mu_\\nu)\\\\\n&=\\sum_\\mu ({}^b\\!\\overline \\partial q^\\mu_\\lambda + {}^b\\!\\dee q^\\mu_\\lambda)\\overline q^\\mu_\\sigma + \\sum_{\\mu,\\nu} q^\\nu_\\lambda ( \\omega^\\mu_\\nu - \\overline \\omega^\\nu_\\mu)\\overline q^\\mu_\\sigma\\\\\n&=\\sum_\\mu {}^b\\!d q^\\mu_\\lambda + \\overline q^\\mu_\\sigma + \\sum_{\\mu,\\nu} q^\\nu_\\lambda ( \\omega^\\mu_\\nu - \\overline \\omega^\\nu_\\mu)\\overline q^\\mu_\\sigma\n\\end{align*}\nusing that $\\overline{{}^b\\!\\overline \\partial f}={}^b\\!\\dee \\overline f$ and that $\\sum_\\mu q^\\mu_\\lambda\\,{}^b\\!\\dee \\overline q^\\mu_\\sigma = - \\sum_\\mu {}^b\\!\\dee q^\\mu_\\lambda\\, \\overline q^\\mu_\\sigma$ because $\\sum_\\mu q^\\mu_\\lambda \\overline q^\\mu_\\sigma$ is constant, and that ${}^b\\!\\overline \\partial q^\\mu_\\lambda + {}^b\\!\\dee q^\\mu_\\lambda={}^b\\!d q^\\mu_\\lambda$. Thus there is a globally defined Hermitian connection locally given by \\eqref{HermitianConnectionForms}. We leave to the reader to verify that this connection is Hermitian. Clearly ${}^b\\!\\nabla$ is the unique Hermitian connection such that $\\pi_{0,1}{}^b\\!\\nabla = \\pi_{0,1}{}^b\\!\\nabla'$. When ${}^b\\!\\nabla'$ is a holomorphic connection, ${}^b\\!\\nabla$ is the unique Hermitian holomorphic connection.\n\n\\begin{lemma}\nThe vector bundles ${}^b{\\!}\\raise2ex\\hbox{$\\mathchar\"0356$}^{p,0}\\mathcal M$ are holomorphic.\n\\end{lemma}\n\nWe prove this by exhibiting a holomorphic $b$-connection. Fix an auxiliary Hermitian metric on ${}^b{\\!}\\raise2ex\\hbox{$\\mathchar\"0356$}^{p,0}\\mathcal M$ and pick an orthonormal frame $(\\eta_\\mu)$ of ${}^b{\\!}\\raise2ex\\hbox{$\\mathchar\"0356$}^{p,0}\\mathcal M$ over some open set $U$. 
Let $\\omega^\\nu_\\mu$ be the unique sections of ${}^b{\\!}\\raise2ex\\hbox{$\\mathchar\"0356$}^{0,1}\\mathcal M$ such that\n\\begin{equation*}\n{}^b\\!\\overline \\partial\\eta_\\mu =\\sum_\\nu \\omega^\\nu_\\mu\\wedge\\eta_\\nu,\n\\end{equation*}\nand let ${}^b\\!\\nabla$ be the $b$-connection defined on $U$ by the formula \\eqref{HermitianConnectionForms}. As in the previous paragraph, this gives a globally defined $b$-connection. That it is holomorphic follows from\n\\begin{equation*}\n{}^b\\!\\overline \\partial \\omega^\\nu_\\mu + \\sum_\\lambda \\omega^\\nu_\\lambda\\wedge \\omega^\\lambda_\\mu = 0,\n\\end{equation*}\na consequence of ${}^b\\!\\overline \\partial^2=0$. Evidently, with the identifications ${}^b{\\!}\\raise2ex\\hbox{$\\mathchar\"0356$}^{0,q}\\mathcal M\\otimes {}^b{\\!}\\raise2ex\\hbox{$\\mathchar\"0356$}^{p,0}\\mathcal M={}^b{\\!}\\raise2ex\\hbox{$\\mathchar\"0356$}^{p,q}\\mathcal M$, $\\pi_{p,q+1}{}^b\\!\\nabla$ is the ${}^b\\!\\overline \\partial$ operator in \\eqref{bdeebarOnpq}.\n\n\n\\section{The boundary of a complex $b$-manifold}\\label{sBoundaryStructure}\n\nSuppose that $\\mathcal M$ is a complex $b$-manifold and $\\mathcal N$ is a component of its boundary. We shall assume $\\mathcal N$ compact, although for the most part this is not necessary.\n\nThe homomorphism\n\\begin{equation*}\n\\mathrm{ev}:\\mathbb C{}^b{\\!}T\\mathcal M\\to \\mathbb C T\\mathcal M\n\\end{equation*}\nis an isomorphism over the interior of $\\mathcal M$, and its restriction to $\\mathcal N$ maps onto $\\mathbb C T\\mathcal N$ with kernel spanned by $\\mathfrak r\\partial_\\mathfrak r$. Write\n\\begin{equation*}\n\\mathrm{ev}_\\mathcal N:\\mathbb C{}^b{\\!}T_{\\mathcal N}\\mathcal M\\to \\mathbb C T\\mathcal N\n\\end{equation*}\nfor this restriction and \n\\begin{equation}\\label{bdyIso}\n\\Phi:{}^b{\\!}T^{0,1}_{\\mathcal N}\\mathcal M\\to\\overline\\Vee\n\\end{equation}\nfor the restriction of $\\mathrm{ev}_\\mathcal N$ to ${}^b{\\!}T^{0,1}_{\\mathcal N}\\mathcal M$. From \\eqref{bCRstructure} and the fact that the kernel of $\\mathrm{ev}_\\mathcal N$ is spanned by the real section $\\mathfrak r\\partial_{\\mathfrak r}$ one obtains that $\\Phi$ is injective, so its image,\n\\begin{equation*}\n\\overline\\Vee=\\Phi({}^b{\\!}T^{0,1}_{\\mathcal N}\\mathcal M)\n\\end{equation*}\nis a subbundle of $\\mathbb C T\\mathcal N$.\n\nSince ${}^b{\\!}T^{0,1}\\mathcal M$ is involutive, so is $\\overline\\Vee$, see \\cite[Proposition 3.12]{Me3}. From \\eqref{bElliptic} and the fact that $\\mathrm{ev}_\\mathcal N$ maps onto $\\mathbb C T\\mathcal N$, one obtains that\n\\begin{equation}\\label{VbarIsElliptic}\n\\mathcal V + \\overline\\Vee=\\mathbb C T\\mathcal N, \n\\end{equation}\nsee \\cite[Lemma 3.13]{Me3}. Thus\n\n\\begin{lemma}\n$\\overline\\Vee$ is an elliptic structure.\n\\end{lemma}\n\nThis simply restates what was just said: $\\overline\\Vee$ is involutive and \\eqref{VbarIsElliptic} holds, see Treves~\\cite{Tr81,Tr92}; the sum need not be direct. All elliptic structures are locally of the same kind, depending only on the dimension of $\\mathcal V\\cap \\overline\\Vee$. This is a result of Nirenberg~\\cite{Ni57} (see also H\\\"ormander~\\cite{Ho65}) extending the Newlander-Nirenberg theorem. In the case at hand, $\\overline\\Vee\\cap \\mathcal V$ has rank $1$ because of the relation\n\\begin{equation*}\n\\rk_\\mathbb C(\\mathcal V\\cap \\overline\\Vee)=2\\rk_\\mathbb C\\overline\\Vee-\\dim\\mathcal N\n\\end{equation*}\nwhich holds whenever \\eqref{VbarIsElliptic} holds. 
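Concretely, if $\\dim_{\\mathbb C}\\mathcal M=n+1$, then $\\dim\\mathcal N=2n+1$ and $\\rk_\\mathbb C\\overline\\Vee=\\rk_\\mathbb C{}^b{\\!}T^{0,1}_{\\mathcal N}\\mathcal M=n+1$, so the relation gives\n\\begin{equation*}\n\\rk_\\mathbb C(\\mathcal V\\cap \\overline\\Vee)=2(n+1)-(2n+1)=1.\n\\end{equation*}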
\n\\begin{equation}\\label{HypoanalyticChart}\n\\display{300pt}{Every $p_0\\in \\mathcal N$ has a neighborhood in which there are coordinates $x^1,\\dotsc,x^{2n},t$ such that with $z^j=x^j+i x^{j+n}$, the vector fields \n\\begin{equation*}\n\\hspace*{-60pt}\\frac{\\partial}{\\partial \\overline z^1},\\dotsc,\\frac{\\partial}{\\partial \\overline z^n},\\ \\frac{\\partial }{\\partial t}\n\\end{equation*}\nspan $\\overline\\Vee$ near $p_0$. The map $(z^1,\\dotsc,z^n,t)$ is called a hypoanalytic chart (Baouendi, Chang, and Treves~\\cite{BCT83}, Treves~\\cite{Tr92}).}\n\\end{equation}\n\nThe intersection $\\overline\\Vee\\cap \\mathcal V$ is, in the case we are discussing, spanned by a canonical globally defined real vector field.\nNamely, let $\\mathfrak r\\partial_\\mathfrak r$ be the canonical section of ${}^b{\\!}T\\mathcal M$ along $\\mathcal N$. There is a unique section $J\\mathfrak r\\partial_\\mathfrak r$ of ${}^b{\\!}T\\mathcal M$ along $\\mathcal N$ such that $\\mathfrak r\\partial_\\mathfrak r+i J\\mathfrak r \\partial_\\mathfrak r$ is a section of ${}^b{\\!}T^{0,1}\\mathcal M$ along $\\mathcal N$. Then\n\\begin{equation*}\n\\mathcal T=\\mathrm{ev}_\\mathcal N(J\\mathfrak r\\partial_\\mathfrak r)\n\\end{equation*}\nis a nonvanishing real vector field in $\\mathcal V\\cap \\overline\\Vee$ (see \\cite[Lemma 2.1]{Me4}). Using the isomorphism \\eqref{bdyIso} we have\n\\begin{equation*}\n\\mathcal T=\\Phi(J(\\mathfrak r\\partial_\\mathfrak r)-i\\mathfrak r\\partial_\\mathfrak r).\n\\end{equation*}\n\nBecause $\\overline\\Vee$ is involutive, there is yet another complex, this time associated with the exterior powers of the dual of $\\overline\\Vee$:\n\\begin{equation}\\label{bdyComplex}\n\\cdots\n\\to C^\\infty(\\mathcal N;\\raise2ex\\hbox{$\\mathchar\"0356$}^q \\smash[t]{\\overline\\Vee}^*)\\xrightarrow{\\overline\\Dee}\nC^\\infty(\\mathcal N;\\raise2ex\\hbox{$\\mathchar\"0356$}^{q+1}\\smash[t]{\\overline\\Vee}^*)\n\\to\\cdots,\n\\end{equation}\nwhere $\\overline\\Dee$ is defined by the formula \\eqref{CartanFormula}, now however with the $V_j$ sections of $\\overline\\Vee$. The complex \\eqref{bdyComplex} is elliptic because of \\eqref{VbarIsElliptic}. For a function $f$ we have $\\overline\\Dee f=\\iota^*df$, where $\\iota^*:\\mathbb C T^*\\mathcal N\\to \\smash[t]{\\overline\\Vee}^*$ is the dual of the inclusion homomorphism $\\iota:\\overline\\Vee\\to\\mathbb C T\\mathcal N$.\n\nFor later use we show:\n\\begin{lemma}\\label{ConstantSolutions}\nSuppose that $\\mathcal N$ is compact and connected. If $\\zeta:\\mathcal N\\to\\mathbb C$ solves $\\overline\\Dee\\zeta=0$, then $\\zeta$ is constant.\n\\end{lemma}\n\n\\begin{proof}\nLet $p_0$ be a point at which $|\\zeta|$ attains its maximum. Fix a hypoanalytic chart $(z,t)$ for $\\overline\\Vee$ centered at $p_0$. Since $\\overline\\Dee \\zeta=0$, $\\zeta(z,t)$ is independent of $t$ and $\\partial_{\\overline z^\\nu}\\zeta=0$. So there is a holomorphic function $Z$ defined in a neighborhood of $0$ in $\\mathbb C^n$ such that $\\zeta=Z\\circ z$. Then $|Z|$ has a maximum at $0$, so $Z$ is constant near $0$. Therefore $\\zeta$ is constant, say $\\zeta(p)=c$, near $p_0$. Let $C=\\set{p:\\zeta(p)=c}$, a closed set. Let $p_1\\in C$. Since $|\\zeta|$ also attains its maximum at $p_1$, the above argument gives that $\\zeta$ is constant near $p_1$, therefore equal to $c$. 
Thus $C$ is open, and consequently $\\zeta$ is constant on $\\mathcal N$.\n\\end{proof}\n\n\n\nSince the operators ${}^b\\!\\overline \\partial:C^\\infty(\\mathcal M,{}^b{\\!}\\raise2ex\\hbox{$\\mathchar\"0356$}^{0,q}\\mathcal M)\\to C^\\infty(\\mathcal M,{}^b{\\!}\\raise2ex\\hbox{$\\mathchar\"0356$}^{0,q+1}\\mathcal M)$ are totally characteristic, they induce operators\n\\begin{equation*}\n{}^b\\!\\overline \\partial_b:C^\\infty(\\mathcal N,{}^b{\\!}\\raise2ex\\hbox{$\\mathchar\"0356$}^{0,q}_\\mathcal N\\mathcal M)\\to C^\\infty(\\mathcal M,{}^b{\\!}\\raise2ex\\hbox{$\\mathchar\"0356$}^{0,q+1}_\\mathcal N\\mathcal M),\n\\end{equation*}\nsee \\eqref{Pb}; these boundary operators define a complex because of \\eqref{PQb}. By way of the dual\n\\begin{equation}\\label{DualbdyIso}\n\\Phi^*:\\smash[t]{\\overline\\Vee}^*\\to {}^b{\\!}\\raise2ex\\hbox{$\\mathchar\"0356$}^{0,1}_\\mathcal N\\mathcal M\n\\end{equation}\nof the isomorphism \\eqref{bdyIso} the operators ${}^b\\!\\overline \\partial_b$ become identical to the operators of the $\\overline\\Dee$-complex \\eqref{bdyComplex}: The diagram\n\\begin{equation*}\n\\begin{CD}\n\\cdots &@>>> C^\\infty(\\mathcal N;\\raise2ex\\hbox{$\\mathchar\"0356$}^q \\smash[t]{\\overline\\Vee}^*)&@>\\overline\\Dee>>C^\\infty(\\mathcal N;\\raise2ex\\hbox{$\\mathchar\"0356$}^{q+1}\\smash[t]{\\overline\\Vee}^*)&@>>>&\\cdots\\\\\n&& &@V{\\Phi^*}VV&@VV{\\Phi^*}V&\\\\\n\\cdots &@>>> C^\\infty(\\mathcal N,{}^b{\\!}\\raise2ex\\hbox{$\\mathchar\"0356$}^{0,q}_\\mathcal N\\mathcal M)&@>{}^b\\!\\overline \\partial_b>>C^\\infty(\\mathcal M,{}^b{\\!}\\raise2ex\\hbox{$\\mathchar\"0356$}^{0,q+1}_\\mathcal N\\mathcal M)&@>>>&\\cdots\n\\end{CD}\n\\end{equation*}\nis commutative and the vertical arrows are isomorphisms. This can be proved by writing the ${}^b\\!\\overline \\partial$ operators using Cartan's formula \\eqref{CartanFormula} for ${}^b\\!\\overline \\partial$ and $\\overline\\Dee$ and comparing the resulting expressions.\n\nLet $\\mathfrak r:\\mathcal M\\to\\mathbb R$ be a smooth defining function for $\\partial \\mathcal M$, $\\mathfrak r>0$ in the interior of $\\mathcal M$. Then ${}^b\\!\\overline \\partial \\mathfrak r$ is smooth and vanishes on $\\partial \\mathcal M$, so $\\frac{{}^b\\!\\overline \\partial \\mathfrak r}{\\mathfrak r}$ is also a smooth ${}^b\\!\\overline \\partial$-closed section of ${}^b{\\!}\\raise2ex\\hbox{$\\mathchar\"0356$}^{0,1}\\mathcal M$. Thus we get a $\\overline\\Dee$-closed element\n\\begin{equation}\\label{DefinitionBeta}\n\\beta_\\mathfrak r=[\\Phi^*]^{-1}\\frac{{}^b\\!\\overline \\partial \\mathfrak r}{\\mathfrak r} \\in C^\\infty(\\partial \\mathcal M;\\smash[t]{\\overline\\Vee}^*).\n\\end{equation}\nBy definition,\n\\begin{equation*}\n\\langle \\beta_\\mathfrak r,\\mathcal T\\rangle=\\langle \\frac{{}^b\\!\\overline \\partial \\mathfrak r}{\\mathfrak r},J(\\mathfrak r\\partial_\\mathfrak r)-i\\mathfrak r\\partial_\\mathfrak r\\rangle.\n\\end{equation*}\nExtend the section $\\mathfrak r\\partial_\\mathfrak r$ to a section of ${}^b{\\!}T\\mathcal M$ over a neighborhood $U$ of $\\mathcal N$ in $\\mathcal M$ with the property that $\\mathfrak r\\partial_\\mathfrak r\\rr=\\mathfrak r$. 
In $U$ we have\n\\begin{equation*}\n\\langle {}^b\\!\\overline \\partial \\mathfrak r,J(\\mathfrak r\\partial_\\mathfrak r)-i\\mathfrak r\\partial_\\mathfrak r\\rangle = (J(\\mathfrak r\\partial_\\mathfrak r)-i\\mathfrak r\\partial_\\mathfrak r)\\mathfrak r = J(\\mathfrak r\\partial_\\mathfrak r)\\mathfrak r -i\\mathfrak r.\n\\end{equation*}\nThe function $J(\\mathfrak r\\partial_\\mathfrak r)\\mathfrak r$ is smooth, real-valued, and vanishes along the boundary. So $\\mathfrak r^{-1}J(\\mathfrak r\\partial_\\mathfrak r)\\mathfrak r$ is smooth, real-valued. Thus\n\\begin{equation*}\n\\langle\\beta_\\mathfrak r,\\mathcal T\\rangle = a_\\mathfrak r-i\n\\end{equation*}\non $\\mathcal N$ for some smooth function $a_\\mathfrak r:\\mathcal N\\to\\mathbb R$, see \\cite[Lemma 2.5]{Me4}.\n\nIf $\\mathfrak r'$ is another defining function for $\\partial \\mathcal M$, then $\\mathfrak r'=\\mathfrak r e^u$ for some smooth function $u:\\mathcal M\\to\\mathbb R$. Then\n\\begin{equation*}\n{}^b\\!\\overline \\partial \\mathfrak r'=e^u\\,{}^b\\!\\overline \\partial \\mathfrak r+ e^u\\mathfrak r\\,{}^b\\!\\overline \\partial u\n\\end{equation*}\nand it follows that\n\\begin{equation*}\n\\beta_{\\mathfrak r'}=\\beta_\\mathfrak r+ \\overline\\Dee u.\n\\end{equation*}\nIn particular,\n\\begin{equation*}\na_{\\mathfrak r'}=a_\\mathfrak r+ \\mathcal T u.\n\\end{equation*}\nLet $\\mathfrak a_t$ denote the one-parameter group of diffeomorphisms generated by $\\mathcal T$.\n\n\\begin{proposition}\\label{Averages}\nThe functions $a^{\\sup}_\\mathrm{av}$, $a^{\\inf}_\\mathrm{av}:\\mathcal N\\to \\mathbb R$ defined by\n\\begin{equation*}\na^{\\sup}_\\mathrm{av}(p) =\\limsup_{t\\to\\infty}\\frac{1}{2t}\\int_{-t}^t a_{\\mathfrak r}(\\mathfrak a_s(p))\\,ds,\\quad a^{\\inf}_\\mathrm{av}(p) =\\liminf_{t\\to\\infty}\\frac{1}{2t}\\int_{-t}^t a_{\\mathfrak r}(\\mathfrak a_s(p))\\,ds\n\\end{equation*}\nare invariants of the complex $b$-structure, that is, they are independent of the defining function $\\mathfrak r$. The equality $a^{\\sup}_\\mathrm{av} = a^{\\inf}_\\mathrm{av}$ holds for some $\\mathfrak r$ if and only if it holds for all $\\mathfrak r$. \n\\end{proposition}\n\nIndeed,\n\\begin{equation*}\n\\lim_{t\\to\\infty}\\Big( \\frac{1}{2t}\\int_{-t}^t a_{\\mathfrak r'}(\\mathfrak a_s(p))\\,ds - \\frac{1}{2t}\\int_{-t}^t a_{\\mathfrak r}(\\mathfrak a_s(p))\\,ds\\Big) = \\lim_{t\\to\\infty}\\frac{1}{2t}\\int_{-t}^t \\frac{d}{ds}u(\\mathfrak a_s(p))\\,ds = 0\n\\end{equation*}\nbecause $u$ is bounded (since $\\mathcal N$ is compact).\n\nThe functions $a^{\\sup}_\\mathrm{av}$, $a^{\\inf}_\\mathrm{av}$ are constant on orbits of $\\mathcal T$, but they may not be smooth. \n\n\n\n\\begin{example}\\label{AnisotropicSphere}\nLet $\\mathcal N$ be the unit sphere in $\\mathbb C^{n+1}$ centered at the origin. Write $(z^1,\\dotsc,z^{n+1})$ for the standard coordinates in $\\mathbb C^{n+1}$. Fix $\\tau_1,\\dotsc,\\tau_{n+1}\\in \\mathbb R\\backslash 0$, all of the same sign, and let\n\\begin{equation*}\n\\mathcal T= i\\sum_{j=1}^{n+1} \\tau_j(z^j\\partial_{z^j}-\\overline z^j\\partial_{\\overline z^j}).\n\\end{equation*}\nThis vector field is real and tangent to $\\mathcal N$. Let $\\overline \\K$ be the standard CR structure of $\\mathcal N$ as a submanifold of $\\mathbb C^{n+1}$ (the part of $T^{0,1}\\mathbb C^{n+1}$ tangential to $\\mathcal N$). The condition that the $\\tau_j$ are different from $0$ and have the same sign ensures that $\\mathcal T$ is never in $\\mathcal K\\oplus \\overline \\K$. 
Indeed, the latter subbundle of $\\mathbb C T\\mathcal N$ is the annihilator of the pullback to $\\mathcal N$ of $i \\overline \\partial \\sum_{\\ell=1}^{n+1}|z^\\ell|^2$. The pairing of this form with $\\mathcal T$ is\n\\begin{equation*}\n\\langle i \\sum_{\\ell=1}^{n+1} z^\\ell d\\overline z^\\ell,i\\sum_{j=1}^{n+1} \\tau_j(z^j\\partial_{z^j}-\\overline z^j\\partial_{\\overline z^j})\\rangle = \\sum_{j=1}^{n+1} \\tau_j|z^j|^2,\n\\end{equation*}\na function that vanishes nowhere if and only if all $\\tau_j$ are different from zero and have the same sign. Thus $\\overline\\Vee=\\overline \\K\\oplus \\Span_\\mathbb C\\mathcal T$ is a subbundle of $\\mathbb C T\\mathcal N$ of rank $n+1$ with the property that $\\mathcal V+\\overline\\Vee=\\mathbb C T\\mathcal N$. To show that $\\overline\\Vee$ is involutive we first note that $\\overline \\K$ is the annihilator of the pullback to $\\mathcal N$ of the span of the differentials $dz^1,\\dotsc,dz^{n+1}$. Let $\\mathcal {L}_\\mathcal T$ denote the Lie derivative with respect to $\\mathcal T$. Then $\\mathcal {L}_\\mathcal T dz^j =i \\tau_j dz^j$, so if $L$ is a CR vector field, then so is $[L,\\mathcal T]$. Since in addition $\\overline \\K$ and $\\Span_\\mathbb C \\mathcal T$ are themselves involutive, $\\overline\\Vee$ is involutive. Thus $\\overline\\Vee$ is an elliptic structure with $\\mathcal V\\cap \\overline\\Vee=\\Span_\\mathbb C\\mathcal T$. Let $\\beta$ be the section of $\\smash[t]{\\overline\\Vee}^*$ which vanishes on $\\overline \\K$ and satisfies $\\langle \\beta,\\mathcal T\\rangle=-i$. Let $\\overline\\Dee$ denote the operators of the associated differential complex. Then $\\overline\\Dee\\beta=0$, since $\\beta$ vanishes on commutators of sections of $\\overline \\K$ (since $\\overline \\K$ is involutive) and on commutators of $\\mathcal T$ with sections of $\\overline \\K$ (since such commutators are in $\\overline \\K$).\n\nIf the $\\tau_j$ are positive (negative), this example may be viewed as the boundary of a blowup (compactification) of $\\mathbb C^{n+1}$, see \\cite{Me5}.\n\\end{example}\n\n\\medskip\nLet now $\\rho:F\\to\\mathcal M$ be a holomorphic vector bundle. Its ${}^b\\!\\overline \\partial$-complex also determines a complex along $\\mathcal N$,\n\\begin{equation}\\label{defDeebarBundle}\n\\cdots \\to C^\\infty(\\mathcal N;\\raise2ex\\hbox{$\\mathchar\"0356$}^q\\smash[t]{\\overline\\Vee}^*\\otimes F_\\mathcal N)\\xrightarrow{\\overline\\Dee} C^\\infty(\\mathcal N;\\raise2ex\\hbox{$\\mathchar\"0356$}^{q+1}\\smash[t]{\\overline\\Vee}^*\\otimes F_\\mathcal N)\\to\\cdots,\n\\end{equation}\nwhere $\\overline\\Dee$ is defined using the boundary operators ${}^b\\!\\overline \\partial_b$ and the isomorphism \\eqref{DualbdyIso}:\n\\begin{equation}\\label{defDeebarE}\n\\overline\\Dee (\\phi\\otimes \\eta) = (\\Phi^*)^{-1}{}^b\\!\\overline \\partial_b [\\Phi^* (\\phi\\otimes \\eta)]\n\\end{equation}\nwhere $\\Phi^*$ means $\\Phi^*\\otimesI$. These operators can be expressed locally in terms of the operators of the complex \\eqref{bdyComplex}. 
Fix a smooth frame $\\eta_\\mu$, $\\mu=1,\\dotsc,k$, of $F$ in a neighborhood $U\\subset \\mathcal M$ of $p_0\\in \\mathcal N$, and suppose\n\\begin{equation*}\n{}^b\\!\\overline \\partial \\eta_\\mu = \\sum_\\nu\\omega^\\nu_\\mu\\otimes \\eta_\\nu.\n\\end{equation*}\nThe $\\omega^\\nu_\\mu$ are local sections of ${}^b{\\!}\\raise2ex\\hbox{$\\mathchar\"0356$}^{0,1}\\mathcal M$, and if $\\sum_\\mu \\phi^\\mu\\otimes \\eta_\\mu$ is a section of ${}^b{\\!}\\raise2ex\\hbox{$\\mathchar\"0356$}^{0,q}\\mathcal M\\otimes F$ over $U$, then\n\\begin{equation*}\n{}^b\\!\\overline \\partial\\sum\\phi^\\mu\\otimes\\eta_\\mu = \\sum_\\nu ({}^b\\!\\overline \\partial \\phi^\\nu +\\sum_\\mu \\omega^\\nu_\\mu\\wedge \\phi^\\mu)\\otimes \\eta_\\nu.\n\\end{equation*}\nTherefore, using the identification \\eqref{DualbdyIso}, the boundary operator ${}^b\\!\\overline \\partial_b$ is the operator given locally by\n\\begin{equation}\\label{DeebarE}\n\\overline\\Dee\\sum\\phi^\\mu\\otimes\\eta_\\mu = \\sum_\\nu (\\overline\\Dee \\phi^\\nu +\\sum_\\mu \\omega^\\nu_\\mu\\wedge \\phi^\\mu)\\otimes \\eta_\\nu\n\\end{equation}\nwhere now the $\\phi^\\mu$ are sections of ${}^b{\\!}\\raise2ex\\hbox{$\\mathchar\"0356$}^q\\smash[t]{\\overline\\Vee}^*$, the $\\omega^\\nu_\\mu$ are the sections of $\\smash[t]{\\overline\\Vee}^*$ corresponding to the original $\\omega^\\nu_\\mu$ via $\\Phi^*$, and $\\overline\\Dee$ on the right hand side of the formula is the operator associated with $\\overline\\Vee$.\n\nThe structure bundle ${}^b{\\!}T^{0,1} F$ is locally given as the span of the sections \\eqref{LocalbT01E}. Applying the evaluation homomorphism $\\mathbb C{}^b{\\!}T_{\\partial F} F\\to \\mathbb C T\\partial F$ (over $\\mathcal N$) to these sections gives vector fields on $F_\\mathcal N$ forming a frame for the elliptic structure $\\overline\\Vee_F$ inherited by $F_\\mathcal N$. Writing $V_j^0=\\mathrm{ev} V_j$, this frame is just\n\\begin{equation}\\label{LocalVbarE}\n\\tilde V_j^0-\\sum_{\\mu,\\nu}\\zeta^\\mu \\langle \\omega^\\nu_\\mu, V_j^0\\rangle \\partial_{\\zeta^\\nu},\\ j=1,\\dotsc,n+1,\\quad \\partial_{\\overline \\zeta^\\nu},\\ \\nu=1,\\dotsc,k,\n\\end{equation}\nwhere now the $\\omega^\\nu_\\mu$ are the forms associated to the $\\overline\\Dee$ operator of $F_\\mathcal N$. Alternatively, one may take the $\\overline\\Dee$ operators of $F_\\mathcal N$ and use the formula \\eqref{DeebarE} to define a subbundle of $\\mathbb C TF$ locally as the span of the vector fields \\eqref{LocalVbarE}, a fortiori an elliptic structure on $F_\\mathcal N$, involutive because\n\\begin{equation*}\n\\overline\\Dee\\omega^\\nu + \\sum_\\lambda \\omega^\\nu_\\lambda\\wedge \\omega^\\lambda_\\mu = 0.\n\\end{equation*}\n\nTo obtain a formula for the canonical real vector field $\\mathcal T_F$ in $\\overline\\Vee_F$, let $J_F$ be the almost complex $b$-structure of ${}^b{\\!}T F$ and consider again the sections \\eqref{LocalbT01E}; they are defined in an open set $\\rho^{-1}(U)$, $U$ a neighborhood in $\\mathcal M$ of a point of $\\mathcal N$. Since the elements $\\partial_{\\overline\\zeta^\\nu}$ are sections of ${}^b{\\!}T^{0,1}F$,\n\\begin{equation}\\label{JEVert}\nJ_F\\Re\\partial_{\\overline\\zeta^\\nu}=\\Im\\partial_{\\overline\\zeta^\\nu}.\n\\end{equation}\nPick a defining function $\\mathfrak r$ for $\\mathcal N$. Then $\\tilde\\mathfrak r=\\rho^*\\mathfrak r$ is a defining function for $F_\\mathcal N$. We may take $V_{n+1}=\\mathfrak r\\partial_\\mathfrak r+i J\\mathfrak r\\partial_\\mathfrak r$ along $U\\cap \\mathcal N$. 
Then $\\tilde V_{n+1}=\\tilde \\mathfrak r\\partial_{\\tilde \\mathfrak r}+i \\widetilde {J\\mathfrak r \\partial_\\mathfrak r}$ along $\\rho^{-1}(U)\\cap F_\\mathcal N$ and so\n\\begin{multline*}\nJ_F\\Re\\big(\\tilde \\mathfrak r\\partial_{\\tilde \\mathfrak r} + i \\widetilde{J\\mathfrak r\\partial_\\mathfrak r}-\\sum_{\\mu,\\nu} \\zeta^\\mu\\langle\\omega^\\nu_\\mu,\\mathfrak r\\partial_\\mathfrak r+i J\\mathfrak r\\partial_\\mathfrak r\\rangle\\partial_{\\zeta^\\nu}\\big) =\\\\ \\Im\\big(\\tilde \\mathfrak r\\partial_{\\tilde \\mathfrak r} + i \\widetilde{J\\mathfrak r\\partial_\\mathfrak r}-\\sum_{\\mu,\\nu} \\zeta^\\mu\\langle\\omega^\\nu_\\mu,\\mathfrak r\\partial_\\mathfrak r+i J\\mathfrak r\\partial_\\mathfrak r\\rangle\\partial_{\\zeta^\\nu}\\big)\n\\end{multline*}\nalong $\\rho^{-1}(U)\\cap F_\\mathcal N$. Using \\eqref{JEVert} this gives\n\\begin{equation*}\nJ_F\\tilde \\mathfrak r\\partial_{\\tilde \\mathfrak r} =\n\\widetilde{J\\mathfrak r\\partial_\\mathfrak r} - 2\\Im\\sum _{\\mu,\\nu} \\zeta^\\mu\\langle\\omega^\\nu_\\mu,\\mathfrak r\\partial_\\mathfrak r+i J\\mathfrak r\\partial_\\mathfrak r\\rangle\\partial_{\\zeta^\\nu}.\n\\end{equation*}\nApplying the evaluation homomorphism gives\n\\begin{equation}\\label{TE}\n\\mathcal T_F =\n\\tilde\\mathcal T - 2\\Im\\sum _{\\mu,\\nu} \\zeta^\\mu\\langle\\omega^\\nu_\\mu,\\mathfrak r\\partial_\\mathfrak r+i J\\mathfrak r\\partial_\\mathfrak r\\rangle\\partial_{\\zeta^\\nu}\n\\end{equation}\nwhere $\\tilde \\mathcal T$ is the real vector field on $\\rho^{-1}(U\\cap \\mathcal N)=\\rho^{-1}(U)\\cap F_\\mathcal N$ which projects on $\\mathcal T$ and satisfies $\\tilde \\mathcal T\\zeta^\\mu=0$ for all $\\mu$.\n\nLet $h$ be a Hermitian metric on $F$, and suppose that the frame $\\eta_\\mu$ is orthonormal. Applying $\\mathcal T_E$ as given in \\eqref{TE} to the function $|\\zeta|^2=\\sum|\\zeta^\\mu|^2$ we get that $\\mathcal T_F$ is tangent to the unit sphere bundle of $F$ if and only if\n\\begin{equation*}\n\\langle\\omega^\\nu_\\mu,\\mathfrak r\\partial_\\mathfrak r+i J\\mathfrak r\\partial_\\mathfrak r\\rangle - \\overline{\\langle\\omega^\\mu_\\nu,\\mathfrak r\\partial_\\mathfrak r+i J\\mathfrak r\\partial_\\mathfrak r\\rangle} = 0\n\\end{equation*}\nfor all $\\mu, \\nu$. Equivalently, in terms of the isomorphism \\eqref{DualbdyIso},\n\\begin{equation}\\label{ExactMetricCondition}\n\\langle(\\Phi^*)^{-1}\\omega^\\nu_\\mu,\\mathcal T \\rangle + \\overline{\\langle(\\Phi^*)^{-1}\\omega^\\mu_\\nu,\\mathcal T\\rangle} = 0\\quad \\text{ for all }\\mu,\\nu.\n\\end{equation}\n\\begin{definition}\\label{ExactMetric}\nThe Hermitian metric $h$ will be called exact if \\eqref{ExactMetricCondition} holds.\n\\end{definition}\n\nThe terminology in Definition \\ref{ExactMetric} is taken from the notion of exact Riemannian $b$-metric of Melrose~\\cite[pg. 31]{RBM2}. For such metrics, the Levi-Civita $b$-connection has the property that ${}^b\\!\\nabla_{\\mathfrak r\\partial_\\mathfrak r}=0$ [op. cit., pg. 58]. We proceed to show that the Hermitian holomorphic connection of an exact Hermitian metric on $F$ also has this property. Namely, suppose that $h$ is an exact Hermitian metric, and let $\\eta_\\mu$ be an orthonormal frame of $F$. 
Then for the Hermitian holomorphic connection we have\n\\begin{equation*}\n\\langle\\omega^\\nu_\\mu-\\overline \\omega^\\mu_\\nu,\\mathfrak r\\partial_\\mathfrak r\\rangle = \\langle\\omega^\\nu_\\mu,\\mathfrak r\\partial_\\mathfrak r\\rangle - \\overline{\\langle \\omega^\\mu_\\nu,\\mathfrak r\\partial_\\mathfrak r\\rangle}\n=\\frac{1}{2}\\big( \\langle\\omega^\\nu_\\mu,\\mathfrak r\\partial_\\mathfrak r+i J\\mathfrak r\\partial_\\mathfrak r\\rangle - \\overline{\\langle \\omega^\\mu_\\nu,\\mathfrak r\\partial_\\mathfrak r+i J\\mathfrak r\\partial_\\mathfrak r\\rangle} \\big)\n\\end{equation*}\nusing that the $\\omega^\\nu_\\mu$ are of type $(0,1)$. Thus ${}^b\\!\\nabla_{\\mathfrak r\\partial_\\mathfrak r}=0$.\n\n\\section{Local invariants}\\label{sLocalInvariants}\n\nComplex structures have no local invariants: every point of a complex $n$-manifold has a neighborhood biholomorphic to a ball in $\\mathbb C^n$ It is natural to ask the same question about complex $b$-structures, namely,\n\\begin{equation*}\n\\display{300pt}{is there a local model depending only on dimension for every complex $b$-stucture? }\n\\end{equation*}\nIn lieu of a Newlander-Nirenberg theorem, we show that complex $b$-structures have no local formal invariants at the boundary. More precisely:\n\n\\begin{proposition}\\label{NoLocalFormalInvariants}\nEvery $p_0\\in \\mathcal N$ has a neighborhood $V$ in $\\mathcal M$ on which there are smooth coordinates $x^j$, $j=1,\\dotsc,2n+2$ centered at $p_0$ with $x^{n+1}$ vanishing on $V\\cap\\mathcal N$ such that with\n\\begin{equation}\\label{NoLocalFormalInvariants1}\n\\overline L^0_j=\\frac{1}{2}(\\partial_{x^j}+i\\partial_{x^{j+n+1}}),\\ j\\leq n,\\quad\n\\overline L^0_{n+1} = \\frac{1}{2}(x^{n+1}\\partial_{x^{n+1}}+i\\partial_{x^{2n+2}})\n\\end{equation}\nthere are smooth functions $\\gamma^j_k$ vanishing to infinite order on $V\\cap\\mathcal N$ such that\n\\begin{equation*}\n\\overline L_j=\\overline L_j^0+\\sum_{k=1}^{n+1}\\gamma^k_j L_k^0\n\\end{equation*}\nis a frame for ${}^b{\\!}T^{0,1}\\mathcal M$ over $V$.\n\\end{proposition}\n\nThe proof will require some preparation. Let $\\mathfrak r:\\mathcal M\\to\\mathbb R$ be a defining function for $\\partial\\mathcal M$. Let $p_0\\in \\mathcal N$, pick a hypoanalytic chart $(z,t)$ (cf. \\eqref{HypoanalyticChart}) centered at $p_0$ with $\\mathcal T t=1$. Let $U\\subset \\mathcal N$ be a neighborhood of $p_0$ contained in the domain of the chart, mapped by it to $B\\times(-\\delta,\\delta)\\subset \\mathbb C^n\\times\\mathbb R$, where $B$ is a ball with center $0$ and $\\delta$ is some small positive number. For reference purposes we state\n\n\\begin{lemma}\\label{LocalExactBStructure} On such $U$, the problem\n\\begin{equation*}\n\\overline\\Dee \\phi = \\psi, \\quad \\psi\\in C^\\infty(U;\\raise2ex\\hbox{$\\mathchar\"0356$}^{q+1}\\smash[t]{\\overline\\Vee}^*|_U)\\text{ and }\\overline\\Dee \\psi=0\n\\end{equation*}\nhas a solution in $C^\\infty(U;\\raise2ex\\hbox{$\\mathchar\"0356$}^q\\smash[t]{\\overline\\Vee}^*|_U)$.\n\\end{lemma}\n\nExtend the functions $z^j$ and $t$ to a neighborhood of $p_0$ in $\\mathcal M$. Shrinking $U$ if necessary, we may assume that in some neighborhood $V$ of $p_0$ in $\\mathcal M$ with $V\\cap\\partial\\mathcal M=U$, $(z,t,\\mathfrak r)$ maps $V$ diffeomorphically onto $B\\times(-\\delta,\\delta)\\times [\\neutral{]}0,\\varepsilon\\neutral{(})$ for some $\\delta$, $\\varepsilon>0$. 
Since the form $\\beta_\\mathfrak r$ defined in \\eqref{DefinitionBeta} is $\\overline\\Dee$-closed, there is $\\alpha\\in C^\\infty(U)$ such that\n\\begin{equation*}\n-i\\overline\\Dee \\alpha=\\beta_\\mathfrak r.\n\\end{equation*}\nExtend $\\alpha$ to $V$ as a smooth function. The section\n\\begin{equation}\\label{StartNoInv}\n{}^b\\!\\overline \\partial(\\log \\mathfrak r+i\\alpha)=\\frac{{}^b\\!\\overline \\partial \\mathfrak r}{\\mathfrak r}+i{}^b\\!\\overline \\partial\\alpha\n\\end{equation}\nof ${}^b{\\!}\\raise2ex\\hbox{$\\mathchar\"0356$}^{0,1}\\mathcal M$ over $V$ vanishes on $U$, since $\\beta_\\mathfrak r+i \\overline\\Dee \\alpha=0$. So there is a smooth section $\\phi$ of ${}^b{\\!}\\raise2ex\\hbox{$\\mathchar\"0356$}^{0,1}\\mathcal M$ over $V$ such that\n\\begin{equation*}\n{}^b\\!\\overline \\partial(\\log \\mathfrak r +i\\alpha)=\\mathfrak r e^{i\\alpha}\\phi.\n\\end{equation*}\nSuppose $\\zeta:U\\to\\mathbb C$ is a solution of $\\overline\\Dee \\zeta=0$ on $U$, and extend it to $V$. Then ${}^b\\!\\overline \\partial\\zeta$ vanishes on $U$, so again we have\n\\begin{equation*}\n{}^b\\!\\overline \\partial\\zeta=\\mathfrak r e^{i\\alpha}\\psi.\n\\end{equation*}\nfor some smooth section $\\psi$ of ${}^b{\\!}\\raise2ex\\hbox{$\\mathchar\"0356$}^{0,1}\\mathcal M$ over $V$. The following lemma will be applied for $f_0$ equal to $\\log\\mathfrak r+i\\alpha$ or each of the functions $z^j$.\n\n\\begin{lemma}\nLet $f_0$ be smooth in $V\\backslash U$ and suppose that ${}^b\\!\\overline \\partial f_0=\\mathfrak r e^{i\\alpha}\\psi_1$ with $\\psi_1$ smooth on $V$. Then there is $f:V\\to\\mathbb C$ smooth vanishing at $U$ such that ${}^b\\!\\overline \\partial(f_0+f)$ vanishes to infinite order on $U$.\n\\end{lemma}\n\n\\begin{proof} Suppose that $f_1,\\dotsc,f_{N-1}$ are defined on $V$ and that\n\\begin{equation}\\label{StepN}\n{}^b\\!\\overline \\partial \\sum_{k=0}^{N-1}(\\mathfrak r e^{i\\alpha})^k f_k = (\\mathfrak r e^{i\\alpha})^N\\psi_N\n\\end{equation}\nholds with $\\psi_N$ smooth in $V$; by the hypothesis, \\eqref{StepN} holds when $N=1$. Using \\eqref{StartNoInv} we get that ${}^b\\!\\overline \\partial(\\mathfrak r e^{i\\alpha})=(\\mathfrak r e^{i\\alpha})^2\\phi$, therefore\n\\begin{equation*}\n0={}^b\\!\\overline \\partial\\big((\\mathfrak r e^{i\\alpha})^N\\psi_N) = (\\mathfrak r e^{i\\alpha})^N[{}^b\\!\\overline \\partial\\psi_N + N\\mathfrak r e^{i\\alpha}\\phi\\wedge \\psi_N],\n\\end{equation*}\nwhich implies that ${}^b\\!\\overline \\partial\\psi_N=0$ on $U$. With arbitrary $f_N$ we have\n\\begin{equation*}\n{}^b\\!\\overline \\partial\\sum_{k=0}^{N}(\\mathfrak r e^{i\\alpha})^k f_k = (\\mathfrak r e^{i\\alpha})^N(\\psi_N + {}^b\\!\\overline \\partial f_N + N \\mathfrak r e^{i\\alpha}f_N \\phi).\n\\end{equation*}\nSince $\\overline\\Dee\\psi_N=0$ and $H_{\\overline\\Dee}^1(U)=0$ by Lemma~\\ref{LocalExactBStructure}, there is a smooth function $f_N$ defined in $U$ such that $\\overline\\Dee f_N=-\\psi_N$ in $U$. So there is $\\chi_N$ such that $\\psi_N + {}^b\\!\\overline \\partial f_N = \\mathfrak r e^{i\\alpha}\\chi_N$. With such $f_N$, \\eqref{StepN} holds with $N+1$ in place of $N$ and some $\\psi_{N+1}$. Thus there is a sequence $\\{f_j\\}_{j=1}^\\infty$ such that \\eqref{StepN} holds for each $N$. 
Borel's lemma then gives $f$ smooth with\n\\begin{equation*}\nf\\sim \\sum_{k=1}^\\infty (\\mathfrak r e^{i\\alpha})^k f_k\\quad\\text{ on }U\n\\end{equation*}\nsuch that $\\overline\\Dee(f_0+f)$ vanishes to infinite order on $U$.\n\\end{proof}\n\n\\begin{proof}[Proof of Proposition~\\ref{NoLocalFormalInvariants}]\nApply the lemma with $f_0=\\log\\mathfrak r+i\\alpha$ to get a function $f$ such that ${}^b\\!\\overline \\partial (f_0+f)$ vanishes to infinite order at $U$. Let\n\\begin{equation*}\nx^{n+1}=\\mathfrak r e^{-\\Im\\alpha+\\Re f},\\quad x^{2n+2}=\\Re\\alpha+\\Im f.\n\\end{equation*}\nThese functions are smooth up to $U$.\n\nApplying the lemma to each of the functions $f_0=z^j$, $j=1,\\dotsc,n$ gives smooth functions $\\zeta^j$ such that $\\zeta^j=z^j$ on $U$ and ${}^b\\!\\overline \\partial \\zeta^j=0$ to infinite order at $U$. Define\n\\begin{equation*}\nx^j=\\Re \\zeta^j,\\quad x^{j+n+1}=\\Im\\zeta^j,\\quad j=1,\\dotsc,n.\n\\end{equation*}\nThe functions $x^j$, $j=1\\dots,2n+2$ are independent, and the forms\n\\begin{equation*}\n\\eta^j={}^b\\!d\\zeta^j,j=1\\dots,n,\\quad \\eta^{n+1}=\\frac{1}{x^{n+1}e^{i x^{2n+2}}}{}^b\\!d[x^{n+1}e^{i x^{2n+2}}]\n\\end{equation*}\ntogether with their conjugates form a frame for $\\mathbb C{}^b{\\!}T\\mathcal M$ near $p_0$. Let $\\eta^j_{1,0}$ and $\\eta^j_{0,1}$ be the $(1,0)$ and $(0,1)$ components of $\\eta^j$ according to the complex $b$-structure of $\\mathcal M$. Then\n\\begin{equation*}\n\\eta^j_{0,1}=\\sum_k p^j_k \\eta^k +q^j_k \\overline {\\eta}^k.\n\\end{equation*}\nSince $\\eta^j_{0,1}={}^b\\!\\overline \\partial\\zeta^j$ vanishes to infinite order at $U$,\nthe coefficients $p^j_k$ and $q^j_k$ vanish to infinite order at $U$. Replacing this formula for $\\eta^j_{0,1}$ in $\\eta^j=\\eta^j_{1,0}+\\eta^j_{0,1}$\nget\n\\begin{equation*}\n\\sum_k(\\delta^j_k-p^j_k )\\eta^k -\\sum_k q^j_k \\overline \\eta^k= \\eta^j_{1,0}.\n\\end{equation*}\nThe matrix $I-[p^j_k]$ is invertible with inverse of the form $I+[P^j_k]$ with $P^j_k$ vanishing to infinite order at $U$. So\n\\begin{equation}\\label{TheForms}\n\\eta^j -\\sum_k \\gamma^j_k \\overline \\eta^k=\\sum_k(\\delta^j_k+P^j_k) \\eta^k_{1,0}\n\\end{equation}\nwith suitable $\\gamma^j_k$ vanishing to infinite order on $U$. Define the vector fields $\\overline L_j^0$ as in \\eqref{NoLocalFormalInvariants1}. The vector fields\n\\begin{equation*}\n\\overline L_j=\\overline L_j^0+\\sum_k \\gamma^k_j L_k^0, \\quad j=1,\\dotsc,n+1\n\\end{equation*}\nare independent and since $\\langle \\overline L_j^0,\\eta^k\\rangle=0$ and $\\langle L_j^0,\\eta^k\\rangle=\\delta^k_j$, they annihilate each of the forms on the left hand side of \\eqref{TheForms}. So they annihilate the forms $\\eta^k_{1,0}$, which proves that the $\\overline L_j$ form a frame of ${}^b{\\!}T^{0,1}\\mathcal M$.\n\\end{proof}\n\n\\section{Indicial complexes}\\label{sIndicialComplex}\n\nThroughout this section we assume that $\\mathcal N$ is a connected component of the boundary of a compact manifold $\\mathcal M$. 
Let\n\\begin{equation}\\label{GenericComplex}\n\\cdots\\to C^\\infty(\\mathcal M;E^q)\\xrightarrow{A_q} C^\\infty(\\mathcal M;E^{q+1})\\to\\cdots\n\\end{equation}\nbe a $b$-elliptic complex of operators $A_q\\in \\Diff^1_b(\\mathcal M;E^q,E^{q+1})$; the $E^q$, $q=0,\\dotsc,r$, are vector bundles over $\\mathcal M$.\n\nNote that since $A_q$ is a first order operator,\n\\begin{equation}\\label{FirstOrderChar}\nA_q(f\\phi)=f A_q\\phi-i\\, \\bsym(A_q)({}^b\\!d f)(\\phi).\n\\end{equation}\nThis formula follows from the analogous formula for the standard principal symbol and the definition of principal $b$-symbol. It follows from \\eqref{FirstOrderChar} and \\eqref{VanishingOnBdy} that $A_q$ defines an operator\n\\begin{equation*}\nA_{b,q}\\in\\Diff^1(\\mathcal N;E^q_{\\mathcal N},E^{q+1}_{\\mathcal N}).\n\\end{equation*}\nFix a smooth defining function $\\mathfrak r:\\mathcal M\\to\\mathbb R$ for $\\partial \\mathcal M$ with $\\mathfrak r>0$ in the interior of $\\mathcal M$, and let\n\\begin{equation*}\n\\mathcal A_q(\\sigma)\\in \\Diff^1_b(\\mathcal N;E^q_{\\mathcal N},E^{q+1}_{\\mathcal N}),\\quad \\sigma\\in \\mathbb C\n\\end{equation*}\ndenote the indicial family of $A_q$ with respect to $\\mathfrak r$, see \\eqref{IndicialFamily}. Using \\eqref{FirstOrderChar} and defining\n\\begin{equation*}\n\\Lambda_{\\mathfrak r,q}=\\bsym(A_q)\\Big(\\frac{{}^b\\!d \\mathfrak r}{\\mathfrak r}\\Big),\n\\end{equation*}\nthe indicial family of $A_q$ with respect to $\\mathfrak r$ is\n\\begin{equation}\\label{DefGenericIndOp}\n\\mathcal A_q(\\sigma) = A_{b,q}+\\sigma\\Lambda_{\\mathfrak r,q}:C^\\infty(\\mathcal N;E^q_\\mathcal N)\\to C^\\infty(\\mathcal N;E^{q+1}_\\mathcal N).\n\\end{equation}\nBecause of \\eqref{PQb}, these operators form an elliptic complex\n\\begin{equation}\\label{IndicialComplex}\n\\cdots\n\\to C^\\infty(\\mathcal N;E^q_\\mathcal N) \\xrightarrow{\\mathcal A_q(\\sigma)}\nC^\\infty(\\mathcal N;E^{q+1}_\\mathcal N)\n\\to\\cdots\n\\end{equation}\nfor each $\\sigma$ and each connected component $\\mathcal N$ of $\\partial \\mathcal M$. The operators depend on $\\mathfrak r$, but the cohomology groups at a given $\\sigma$ for different defining functions $\\mathfrak r$ are isomorphic. Indeed, if $\\mathfrak r'$ is another defining function for $\\partial \\mathcal M$, then $\\mathfrak r'=e^u\\mathfrak r$ for some smooth real-valued function $u$, and a simple calculation gives\n\\begin{equation*}\n(A_{b,q}+\\sigma \\Lambda_{\\mathfrak r,q})(e^{i\\sigma u}\\phi)=e^{i\\sigma u}(A_{b,q}+\\sigma \\Lambda_{\\mathfrak r',q})\\phi.\n\\end{equation*}\nIn analogy with the definition of the boundary spectrum of a $b$-elliptic operator $A\\in \\Diff^m_b(\\mathcal M;E,F)$, we have\n\n\\begin{definition}\nLet $\\mathcal N$ be a connected component of $\\partial \\mathcal M$. The family of complexes \\eqref{IndicialComplex}, $\\sigma\\in \\mathbb C$, is the indicial complex of \\eqref{GenericComplex} at $\\mathcal N$. For each $\\sigma\\in \\mathbb C$ let $H^q_{\\mathcal A(\\sigma)}(\\mathcal N)$ denote the $q$-th cohomology group of \\eqref{IndicialComplex} on $\\mathcal N$.
The $q$-th boundary spectrum of the complex \\eqref{GenericComplex} at $\\mathcal N$ is the set\n\\begin{equation*}\n\\spec_{b,\\mathcal N}^q(A)=\\set{\\sigma\\in \\mathbb C: H^q_{\\mathcal A(\\sigma)}(\\mathcal N)\\ne 0}.\n\\end{equation*}\nThe $q$-th boundary spectrum of $A$ is $\\spec_b^q(A) = \\bigcup_{\\mathcal N}\\spec_{b,\\mathcal N}^q(A)$.\n\\end{definition}\n\nThe spaces $H^q_{\\mathcal A(\\sigma)}(\\mathcal N)$ are finite-dimensional because \\eqref{IndicialComplex} is an elliptic complex and $\\mathcal N$ is compact. It is convenient to isolate the behavior of the indicial complex according to the components of the boundary, since the sets $\\spec_{b,\\mathcal N}^q(A)$ can vary drastically from component to component.\n\n\\medskip\nSuppose that $\\mathcal M$ is a complex $b$-manifold. Recall that since\n\\begin{equation*}\n{}^b\\!\\overline \\partial \\in \\Diff^1_b(\\mathcal M;{}^b{\\!}\\raise2ex\\hbox{$\\mathchar\"0356$}^{0,q}\\mathcal M,{}^b{\\!}\\raise2ex\\hbox{$\\mathchar\"0356$}^{0,q+1}\\mathcal M),\n\\end{equation*}\nthere are induced boundary operators\n\\begin{equation*}\n{}^b\\!\\overline \\partial_b\\in \\Diff^1(\\mathcal N;{}^b{\\!}\\raise2ex\\hbox{$\\mathchar\"0356$}^{0,q}_\\mathcal N\\mathcal M,{}^b{\\!}\\raise2ex\\hbox{$\\mathchar\"0356$}^{0,q+1}_\\mathcal N\\mathcal M)\n\\end{equation*}\nwhich via the isomorphism \\eqref{bdyIso} become the operators of the $\\overline\\Dee$-complex \\eqref{bdyComplex}. Combining \\eqref{bdeebarOnFunctions} and \\eqref{bsymbdeebar} we get\n\\begin{equation*}\n\\bsym({}^b\\!\\overline \\partial)(\\frac{{}^b\\!d \\mathfrak r}{\\mathfrak r})(\\phi) = i \\frac{{}^b\\!\\overline \\partial\\mathfrak r}{\\mathfrak r}\\wedge\\phi\n\\end{equation*}\nand using \\eqref{DefinitionBeta} we may identify $\\widehat{{}^b\\!\\overline \\partial_b}(\\sigma)$, given by \\eqref{DefGenericIndOp}, with the operator\n\\begin{equation}\\label{DefIndOp}\n\\overline\\D(\\sigma)\\phi = \\overline\\Dee \\phi + i \\sigma\\beta_\\mathfrak r\\wedge \\phi.\n\\end{equation}\nIf $E\\to\\mathcal M$ is a holomorphic vector bundle, then the indicial family of\n\\begin{equation*}\n{}^b\\!\\overline \\partial\\in \\Diff^1_b(\\mathcal M;{}^b{\\!}\\raise2ex\\hbox{$\\mathchar\"0356$}^{0,q}\\mathcal M\\otimes E,{}^b{\\!}\\raise2ex\\hbox{$\\mathchar\"0356$}^{0,q+1}\\mathcal M\\otimes E)\n\\end{equation*}\nis again given by \\eqref{DefIndOp}, but using the operator $\\overline\\Dee$ of the complex \\eqref{defDeebarBundle}.\n\n\\medskip\nReturning to the general complex \\eqref{GenericComplex}, fix a smooth positive $b$-density $\\mathfrak m$ on $\\mathcal M$ and a Hermitian metric on each $E^q$. Let $\\mathcal A_q^\\star(\\sigma)$ be the indicial operator of the formal adjoint, $A^\\star_q$, of $A_q$. 
The Laplacian $\\square_q$ of the complex \\eqref{GenericComplex} in degree $q$ belongs to $\\Diff^2_b(\\mathcal M;E^q)$, is $b$-elliptic, and its indicial operator is\n\\begin{equation*}\n\\widehat \\square_q(\\sigma)=\n\\mathcal A_q^\\star(\\sigma) \\mathcal A_q(\\sigma)+\\mathcal A_{q-1}(\\sigma) \\mathcal A_{q-1}^\\star(\\sigma).\n\\end{equation*}\nThe $b$-spectrum of $\\square_q$ at $\\mathcal N$, see Melrose~\\cite{RBM2}, is the set\n\\begin{equation*}\n\\spec_{b,\\mathcal N}(\\square_q) = \\set{\\sigma\\in \\mathbb C:\\widehat\\square_q(\\sigma):C^\\infty(\\mathcal N;E^q_\\mathcal N)\\to C^\\infty(\\mathcal N;E^q_\\mathcal N)\\text{ is not invertible}}.\n\\end{equation*}\nNote that unless $\\sigma$ is real, $\\widehat \\square_q(\\sigma)$ is not the Laplacian of the complex \\eqref{IndicialComplex}.\n\n\\begin{proposition}\\label{DiscretenesOfBSpectrum}\nFor each $q$, $\\spec_{b,\\mathcal N}^q(A)\\subset \\spec_{b,\\mathcal N}(\\square_q)$. \\end{proposition}\n\nNote that the set $\\spec_{b,\\mathcal N}(\\square_q)$ depends on the choice of Hermitian metrics and $b$-density used to construct the Laplacian, but that the subset $\\spec_{b,\\mathcal N}^q(A)$ is independent of such choices. For a general $b$-elliptic complex \\eqref{GenericComplex} it may occur that $\\spec_{b,\\mathcal N}^q(A)\\ne \\spec_{b,\\mathcal N}(\\square_q)$. In Example \\ref{IndComplexbd} we show that $\\spec_{b,\\mathcal N}^q({}^b\\!d)\\subset \\set{0}$. As is well known, $\\spec_{b,\\mathcal N}(\\Delta_q)$ is an infinite set if $\\dim\\mathcal M>1$. At the end of this section we will give an example where $\\spec_{b,\\mathcal N}^0({}^b\\!\\overline \\partial)$ is an infinite set. A full discussion of $\\spec_{b,\\mathcal N}^q({}^b\\!\\overline \\partial)$ for any $q$ and other aspects of the indicial complex of complex $b$-structures is given in Section~\\ref{sIndicialCohomology}.\n\n\\begin{proof}[Proof of Proposition~\\ref{DiscretenesOfBSpectrum}]\nSince $\\square_q$ is $b$-elliptic, the set $\\spec_{b,\\mathcal N}(\\square_q)$ is closed and discrete. Let $H^2(\\mathcal N;E^q_\\mathcal N)$ be the $L^2$-based Sobolev space of order $2$. For $\\sigma\\notin \\spec_{b,\\mathcal N}(\\square_q)$ let\n\\begin{equation*}\n\\mathcal G_q(\\sigma):L^2(\\mathcal N;E^q_\\mathcal N)\\to H^2(\\mathcal N;E^q_\\mathcal N)\n\\end{equation*}\nbe the inverse of $\\widehat \\square_q(\\sigma)$. The map $\\sigma\\mapsto \\mathcal G_q(\\sigma)$ is meromorphic with poles in $\\spec_{b,\\mathcal N}(\\square_q)$. Since\n\\begin{equation*}\n\\mathcal A_q^\\star(\\sigma)= [\\mathcal A_q(\\overline \\sigma)]^\\star,\n\\end{equation*}\nthe operators $\\widehat \\square_q(\\sigma)$ are the Laplacians of the complex \\eqref{IndicialComplex} when $\\sigma$ is real. Thus for $\\sigma\\in \\mathbb R\\backslash (\\spec_{b,\\mathcal N}(\\square_q)\\cup\\spec_{b,\\mathcal N}(\\square_{q+1}))$ we have\n\\begin{equation*}\n\\mathcal A_{q}(\\sigma) \\mathcal G_{q}(\\sigma)=\\mathcal G_{q+1}(\\sigma)\\mathcal A_q(\\sigma),\\quad\n\\mathcal A_q(\\sigma)^\\star \\mathcal G_{q+1}(\\sigma) = \\mathcal G_q(\\sigma)\\mathcal A_q^\\star(\\sigma)\n\\end{equation*}\nby standard Hodge theory. Since all operators depend holomorphically on $\\sigma$, the same equalities hold for $\\sigma\\in \\mathfrak R=\\mathbb C\\backslash(\\spec_{b,\\mathcal N}(\\square_q)\\cup \\spec_{b,\\mathcal N}(\\square_{q+1}))$.
It follows that\n\\begin{equation*}\n\\mathcal A_q^\\star(\\sigma)\\mathcal A_q(\\sigma) \\mathcal G_q(\\sigma) =\n\\mathcal G_q(\\sigma) \\mathcal A_q^\\star(\\sigma)\\mathcal A_q(\\sigma)\n\\end{equation*}\nin $\\mathfrak R$. By analytic continuation the equality holds on all of $\\mathbb C\\backslash \\spec_{b,\\mathcal N}(\\square_q)$. Thus if $\\sigma_0\\notin \\spec_{b,\\mathcal N}(\\square_q)$ and $\\phi$ is a $\\mathcal A_q(\\sigma_0)$-closed section, $\\mathcal A_q(\\sigma_0)\\phi=0$, then the formula\n\\begin{equation*}\n\\phi=[\\mathcal A_q^\\star(\\sigma_0) \\mathcal A_q(\\sigma_0)+\\mathcal A_{q-1}(\\sigma_0) \\mathcal A_{q-1}^\\star(\\sigma_0)]\\mathcal G_q(\\sigma_0) \\phi\n\\end{equation*}\nleads to\n\\begin{equation*}\n\\phi=\\mathcal A_{q-1}(\\sigma_0)[\\mathcal A_{q-1}^\\star(\\sigma_0) \\mathcal G_{q}(\\sigma_0)\\phi].\n\\end{equation*}\nTherefore $\\sigma_0\\notin \\spec_{b,\\mathcal N}^q(A)$.\n\\end{proof}\n\nSince $\\square_q$ is $b$-elliptic, the set $\\spec _{b,\\mathcal N}(\\square_q)$ is discrete and intersects each horizontal strip $a\\leq\\Im\\sigma\\leq b$ in a finite set (Melrose~\\cite{RBM2}). Consequently:\n\n\\begin{corollary}\nThe sets $\\spec_{b,\\mathcal N}^q(A)$, $q=0,1\\dotsc$, are closed, discrete, and intersect each horizontal strip $a\\leq\\Im\\sigma\\leq b$ in a finite set.\n\\end{corollary}\n\nWe note in passing that the Euler characteristic of the complex \\eqref{IndicialComplex} vanishes for each $\\sigma$. Indeed, let $\\sigma_0\\in \\mathbb C$. The Euler characteristic of the $\\mathcal A(\\sigma_0)$-complex is the index of\n\\begin{equation*}\n\\mathcal A(\\sigma_0) + \\mathcal A(\\sigma_0)^\\star:\\bigoplus_{q\\text{ even}}C^\\infty(\\mathcal N;E^q)\\to \\bigoplus_{q\\text{ odd}}C^\\infty(\\mathcal N;E^q).\n\\end{equation*}\nThe operator $\\mathcal A_q(\\sigma)$ is equal to $A_{b,q} + \\sigma \\Lambda_{\\mathfrak r,q}$, see \\eqref{DefGenericIndOp}. Thus $\\mathcal A_q(\\sigma)^\\star = A_{b,q}^\\star+\\overline \\sigma \\Lambda_{\\mathfrak r,q}^\\star$, and it follows that for any $\\sigma$,\n\\begin{equation*}\n\\mathcal A(\\sigma) + \\mathcal A(\\sigma)^\\star=\n\\mathcal A(\\sigma_0) + \\mathcal A(\\sigma_0)^\\star\n+(\\sigma-\\sigma_0)\\Lambda_\\mathfrak r+(\\overline \\sigma-\\overline \\sigma_0)\\Lambda_\\mathfrak r^\\star\n\\end{equation*}\nis a compact perturbation of $\\mathcal A(\\sigma_0) + \\mathcal A(\\sigma_0)^\\star$. Therefore, since the index is invariant under compact perturbations, the index of $\\mathcal A(\\sigma) + \\mathcal A(\\sigma)^\\star$ is independent of $\\sigma$. Then it vanishes, since it vanishes when $\\sigma\\notin \\bigcup_q \\spec_{b,\\mathcal N}^q(A)$.\n\n\\medskip\nLet $\\mathfrak {Mero}^q(\\mathcal N)$ be the sheaf of germs of $C^\\infty(\\mathcal N;E^q)$-valued meromorphic functions on $\\mathbb C$ and let $\\mathfrak {Hol}^q(\\mathcal N)$ be the subsheaf of germs of holomorphic functions. Let $\\mathfrak S^q(\\mathcal N)=\\mathfrak {Mero}^q(\\mathcal N)\/\\mathfrak {Hol}^q(\\mathcal N)$. 
The holomorphic family $\\sigma\\mapsto \\mathcal A_q(\\sigma)$ gives a sheaf homomorphism $\\mathcal A_q:\\mathfrak {Mero}^q(\\mathcal N)\\to\\mathfrak {Mero}^{q+1}(\\mathcal N)$ such that $\\mathcal A_q(\\mathfrak {Hol}^q(\\mathcal N))\\subset \\mathfrak {Hol}^{q+1}(\\mathcal N)$ and $\\mathcal A_{q+1}\\circ \\mathcal A_{q}=0$, so we have a complex\n\\begin{equation}\\label{BdyMeroCohomologySheaf}\n\\cdots\\to \\mathfrak S^q(\\mathcal N)\\xrightarrow{\\mathcal A_q}\\mathfrak S^{q+1}(\\mathcal N)\\to\\cdots.\n\\end{equation}\n\nThe cohomology sheaves $\\mathfrak H^q_A(\\mathcal N)$ of this complex contain more refined information about the cohomology of the complex $A$.\n\n\\begin{proposition}\nThe sheaf $\\mathfrak H^q_{A}(\\mathcal N)$ is supported on $\\spec_{b,\\mathcal N}^q(A)$.\n\\end{proposition}\n\n\\begin{proof}\nLet $\\sigma_0\\in \\mathbb C$ be such that $H^q_{\\mathcal A(\\sigma_0)}(\\mathcal N)=0$ and let\n\\begin{equation}\\label{SingPart}\n\\phi(\\sigma)=\\sum_{k=1}^{\\mu}\\frac{\\phi_k}{(\\sigma-\\sigma_0)^{k}},\n\\end{equation}\n$\\mu>0$, $\\phi_k\\in C^\\infty(\\mathcal N;E^q_\\mathcal N)$, represent the $\\mathcal A$-closed element $[\\phi]$ of the stalk of $\\mathfrak S^q(\\mathcal N)$ over $\\sigma_0$. The condition that $\\mathcal A_q[\\phi]=0$ means that $\\mathcal A_q(\\sigma)\\phi(\\sigma)$ is holomorphic, that is,\n\\begin{equation*}\n\\frac{\\mathcal A_q(\\sigma_0)\\phi_\\mu}{(\\sigma-\\sigma_0)^\\mu} +\\sum_{k=1}^{\\mu-1}\\frac{\\mathcal A_q(\\sigma_0)\\phi_k+\\Lambda_{\\mathfrak r,q}\\phi_{k+1}}{(\\sigma-\\sigma_0)^{k}}=0.\n\\end{equation*}\nIn particular $\\mathcal A_q(\\sigma_0)\\phi_\\mu=0$. Since $H^q_{\\mathcal A(\\sigma_0)}(\\mathcal N)=0$, there is $\\psi_\\mu\\in C^\\infty(\\mathcal N;E^{q-1}_\\mathcal N)$ such that $\\mathcal A_{q-1}(\\sigma_0)\\psi_\\mu=\\phi_\\mu$. This shows that if $\\mu=1$, then $[\\phi]$ is exact, while if $\\mu>1$, then, letting $\\phi'(\\sigma)=\\phi(\\sigma)-\\mathcal A_{q-1}(\\sigma)\\psi_\\mu\/(\\sigma-\\sigma_0)^\\mu$, $[\\phi]$ is cohomologous to an element $[\\phi']$ represented by a sum as in \\eqref{SingPart} with $\\mu-1$ in place of $\\mu$. By induction, $[\\phi]$ is exact.\n\\end{proof}\n\n\\begin{definition}\\label{CohomologySheafs}\nThe cohomology sheaves $\\mathfrak H^q_{A}(\\mathcal N)$ of the complex \\eqref{BdyMeroCohomologySheaf} will be referred to as the indicial cohomology sheaves of the complex $A$. If $[\\phi]\\in \\mathfrak H^q_A(\\mathcal N)$ is a nonzero element of the stalk over $\\sigma_0$, the smallest $\\mu$ such that there is a meromorphic function \\eqref{SingPart} representing $[\\phi]$ will be called the order of the pole of $[\\phi]$.\n\\end{definition}\n\nThe relevance of this notion of pole is that it predicts, for any given cohomology class of the complex $A$, the existence of a representative with the most regular leading term (the smallest power of $\\log\\mathfrak r$ that must appear in the expansion at the boundary).
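\nTo make the connection with logarithmic terms explicit, the following heuristic computation may be helpful; it is only an illustration, with the Mellin convention implicit in \\eqref{IndicialFamily} understood schematically, and is not used elsewhere. Shifting a contour of integration across $\\sigma_0$ in an inverse Mellin transform, a single term of \\eqref{SingPart} picks up a multiple of the residue\n\\begin{equation*}\n\\operatorname{Res}_{\\sigma=\\sigma_0}\\Big(\\mathfrak r^{i\\sigma}\\frac{\\phi_k}{(\\sigma-\\sigma_0)^{k}}\\Big)=\\frac{(i\\log\\mathfrak r)^{k-1}}{(k-1)!}\\,\\mathfrak r^{i\\sigma_0}\\phi_k,\n\\end{equation*}\nso a representative with a pole of order $\\mu$ produces powers $(\\log\\mathfrak r)^{k}$ only with $k\\leq\\mu-1$; in particular, a representative with a simple pole produces no logarithm at all.\n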
We will see later (Proposition~\\ref{bdeebarCohomologySheaf}) that for the $b$-Dolbeault complex, under a certain geometric assumption, the order of the pole of $[\\phi]\\in \\mathfrak H^q_{{}^b\\!\\overline \\partial}(\\mathcal N)\\backslash 0$ is $1$.\n\n\\begin{example}\\label{IndComplexbd}\nFor the $b$-de Rham complex one has $\\spec_{b,\\mathcal N}^q({}^b\\!d)\\subset \\set{0}$ and\n\\begin{equation*}\nH^q_{\\mathcal D(0)}(\\mathcal N)=H^q_{\\mathrm {dR}}(\\mathcal N)\\oplus H^{q-1}_{\\mathrm {dR}}(\\mathcal N)\n\\end{equation*}\nfor each component $\\mathcal N$ of $\\partial\\mathcal M$; moreover, every element of the stalk of $\\mathfrak H^q_{{}^b\\!d}(\\mathcal N)$ over $0$ has a representative with a simple pole. By way of the residue we get an isomorphism from the stalk over $0$ onto $H^q_{\\mathrm {dR}}(\\mathcal N)$.\n\nSince the map \\eqref{evbM} is surjective with kernel spanned by $\\mathfrak r\\partial_\\mathfrak r$, the dual map\n\\begin{equation}\\label{evbMDual}\n\\mathrm{ev}_{\\mathcal N}^*: T^*{\\mathcal N}\\to {}^b{\\!}T_{\\mathcal N}^*\\mathcal M\n\\end{equation}\nis injective with image the annihilator, $\\mathcal H$, of $\\mathfrak r\\partial_\\mathfrak r$. Let $\\mathbf i_{\\mathfrak r\\partial_\\mathfrak r}:{}^b{\\!}\\raise2ex\\hbox{$\\mathchar\"0356$}_{\\mathcal N}^q\\mathcal M\\to {}^b{\\!}\\raise2ex\\hbox{$\\mathchar\"0356$}_{\\mathcal N}^{q-1}\\mathcal M$ denote interior multiplication by $\\mathfrak r\\partial_\\mathfrak r$. Then $\\mathcal H^q=\\raise2ex\\hbox{$\\mathchar\"0356$}^q\\mathcal H=\\ker(\\mathbf i_{\\mathfrak r\\partial_\\mathfrak r}:{}^b{\\!}\\raise2ex\\hbox{$\\mathchar\"0356$}_{\\mathcal N}^q\\mathcal M\\to {}^b{\\!}\\raise2ex\\hbox{$\\mathchar\"0356$}_{\\mathcal N}^{q-1}\\mathcal M)$. The map \\eqref{evbMDual} gives isomorphisms\n\\begin{equation*}\n\\mathrm{ev}_{\\mathcal N}^*: \\raise2ex\\hbox{$\\mathchar\"0356$}^q{\\mathcal N}\\to \\mathcal H^q\n\\end{equation*}\nfor each $q$.
Fix a defining function $\\mathfrak r$ for $\\mathcal N$ and let $\\Pi:{}^b{\\!}\\raise2ex\\hbox{$\\mathchar\"0356$}_{\\mathcal N}^q\\mathcal M \\to {}^b{\\!}\\raise2ex\\hbox{$\\mathchar\"0356$}_{\\mathcal N}^q\\mathcal M$ be the projection on $\\mathcal H^q$ according to the decomposition\n\\begin{equation*}\n{}^b{\\!}\\raise2ex\\hbox{$\\mathchar\"0356$}_{\\mathcal N}^q\\mathcal M=\\mathcal H^q\\oplus \\frac{{}^b\\!d\\mathfrak r}{\\mathfrak r}\\wedge\\mathcal H^{q-1},\n\\end{equation*}\nthat is,\n\\begin{equation*}\n\\Pi\\phi = \\phi-\\frac{{}^b\\!d\\mathfrak r}{\\mathfrak r}\\wedge \\mathbf i_{\\mathfrak r\\partial_\\mathfrak r}\\phi.\n\\end{equation*}\nIf $\\phi^0\\in C^\\infty(\\mathcal N,\\mathcal H^q)$ and $\\phi^1\\in C^\\infty(\\mathcal N,\\mathcal H^{q-1})$, then\n\\begin{equation*}\n{}^b\\!d(\\phi^0+\\frac{{}^b\\!d\\mathfrak r}{\\mathfrak r}\\wedge\\phi^1)=\\Pi\\,{}^b\\!d\\phi^0+\\frac{{}^b\\!d\\mathfrak r}{\\mathfrak r}\\wedge(-\\Pi\\,{}^b\\!d\\phi^1).\n\\end{equation*}\nSince\n\\begin{equation*}\n\\mathfrak r^{-i\\sigma}{}^b\\!d\\mathfrak r^{i\\sigma}\\phi = {}^b\\!d \\phi +i \\sigma\\frac{{}^b\\!d\\mathfrak r}{\\mathfrak r}\\wedge\\phi,\n\\end{equation*}\nthe indicial operator $\\mathcal D(\\sigma)$ of ${}^b\\!d$ is\n\\begin{equation*}\n\\mathcal D(\\sigma)(\\phi^0+\\frac{{}^b\\!d\\mathfrak r}{\\mathfrak r}\\wedge\\phi^1) = \\Pi\\,{}^b\\!d\\phi^0+\\frac{{}^b\\!d\\mathfrak r}{\\mathfrak r}\\wedge(i \\sigma\\phi^0-\\Pi\\, {}^b\\!d\\phi^1).\n\\end{equation*}\nIf $\\mathcal D(\\sigma)(\\phi^0+\\frac{{}^b\\!d\\mathfrak r}{\\mathfrak r}\\wedge\\phi^1)=0$, then of course $\\Pi{}^b\\!d\\phi^0=0$ and $i \\sigma\\phi^0=\\Pi{}^b\\!d\\phi^1$, and it follows that if $\\sigma\\ne 0$, then\n\\begin{equation*}\n\\phi^0+\\frac{{}^b\\!d\\mathfrak r}{\\mathfrak r}\\wedge\\phi^1 = \\mathcal D(\\sigma)\\frac{1}{i\\sigma}\\phi^1.\n\\end{equation*}\nThus all cohomology groups of the complex $\\mathcal D(\\sigma)$ vanish if $\\sigma\\ne 0$, i.e., $\\spec_{b,\\mathcal N}^q({}^b\\!d)\\subset \\set{0}$.\n\nIt is not hard to verify that\n\\begin{equation*}\n\\Pi{}^b\\!d\\,\\mathrm{ev}_{\\mathcal N}^*=\\mathrm{ev}_{\\mathcal N}^*d.\n\\end{equation*}\nUsing this identity and the isomorphisms $\\mathrm{ev}_{\\mathcal N}^*$, the indicial operator of ${}^b\\!d$ at $\\sigma=0$ can be viewed as the operator\n\\begin{equation*}\n\\begin{bmatrix}\nd & 0\\\\\n0 & -d\n\\end{bmatrix}:\n\\begin{matrix}\n\\raise2ex\\hbox{$\\mathchar\"0356$}^q\\mathcal N\\\\\n\\oplus\\\\\n\\raise2ex\\hbox{$\\mathchar\"0356$}^{q-1}\\mathcal N\n\\end{matrix} \\to\n\\begin{matrix}\n\\raise2ex\\hbox{$\\mathchar\"0356$}^q\\mathcal N\\\\\n\\oplus\\\\\n\\raise2ex\\hbox{$\\mathchar\"0356$}^{q-1}\\mathcal N\n\\end{matrix}.\n\\end{equation*}\nFrom this we get the cohomology groups of $\\mathcal D(0)$ in terms of the de Rham cohomology of $\\mathcal N$:\n\\begin{equation*}\nH^q_{\\mathcal D(0)}(\\mathcal N)=H^q_{\\mathrm {dR}}(\\mathcal N)\\oplus H^{q-1}_{\\mathrm {dR}}(\\mathcal N).\n\\end{equation*}\nThus the groups $H^q_{\\mathcal D(0)}(\\mathcal N)$ do not vanish for $q=0$, $1$, $\\dim\\mathcal M-1$, $\\dim\\mathcal M$ but may vanish for other values of $q$.\n\nWe now show that every element of the stalk of $\\mathfrak H^q_{{}^b\\!d}(\\mathcal N)$ over $0$ has a representative with a simple pole at $0$.
Suppose that\n\\begin{equation}\\label{Representative}\n\\phi(\\sigma)=\\sum_{k=1}^\\mu \\frac{1}{\\sigma^k}\\left(\\phi^0_k+\\frac{{}^b\\!d \\mathfrak r}{\\mathfrak r}\\wedge \\phi^1_k\\right)\n\\end{equation}\nis such that $\\mathcal D(\\sigma)\\phi(\\sigma)$ is holomorphic. Then\n\\begin{equation*}\n\\sum_{k=1}^\\mu \\frac{1}{\\sigma^k}\\left(d\\phi^0_k - \\frac{{}^b\\!d \\mathfrak r}{\\mathfrak r}\\wedge d\\phi^1_k\\right) + \\frac{{}^b\\!d \\mathfrak r}{\\mathfrak r}\\wedge\\left(\\sum_{k=1}^{\\mu-1} \\frac{i}{\\sigma^k}\\phi^0_{k+1}\\right) = 0,\n\\end{equation*}\nhence $d\\phi^0_1=0$, $d\\phi^1_\\mu=0$ and $\\phi^0_k=-i d\\phi^1_{k-1}$, $k=2,\\dotsc,\\mu$. Let\n\\begin{equation*}\n\\psi(\\sigma)=-i \\sum_{k=2}^{\\mu+1} \\frac{1}{\\sigma^k} \\phi^1_{k-1}.\n\\end{equation*}\nThen\n\\begin{align*}\n\\mathcal D(\\sigma)\\psi(\\sigma)\n&= -i \\sum_{k=2}^{\\mu+1} \\frac{1}{\\sigma^k} d\\phi^1_{k-1} + \\frac{{}^b\\!d\\mathfrak r}{\\mathfrak r}\\wedge\\sum_{k=2}^{\\mu+1}\\frac{1}{\\sigma^{k-1}}\\phi^1_{k-1}\\\\\n&= \\sum_{k=2}^{\\mu} \\frac{1}{\\sigma^k} \\phi^0_k + \\frac{{}^b\\!d\\mathfrak r}{\\mathfrak r}\\wedge\\sum_{k=1}^{\\mu}\\frac{1}{\\sigma^k}\\phi^1_k\n\\end{align*}\nso\n\\begin{equation*}\n\\phi(\\sigma)-\\mathcal D(\\sigma)\\psi(\\sigma)=\\frac{1}{\\sigma}\\phi^0_1.\n\\end{equation*}\nThe map that sends the class of the $\\mathcal D(\\sigma)$-closed element \\eqref{Representative} to the class of $\\phi^0_1$ in $H^q_{\\mathrm {dR}}(\\mathcal N)$ is an isomorphism.\n\\end{example}\n\n\\begin{example}\nAs we just saw, the boundary spectrum of the ${}^b\\!d$ complex in degree $0$ is just $\\set{0}$. In contrast, $\\spec_{b,\\mathcal N}^0({}^b\\!\\overline \\partial)$ may be an infinite set. We illustrate this in the context of Example \\ref{AnisotropicSphere}. The functions\n\\begin{equation*}\nz^\\alpha=(z^1)^{\\alpha_1}\\dotsm (z^{n+1})^{\\alpha_{n+1}},\n\\end{equation*}\nwhere the $\\alpha_j$ are nonnegative integers, are CR functions that satisfy\n\\begin{equation*}\n\\mathcal T z^\\alpha=i (\\sum \\tau_j\\alpha_j) z^\\alpha.\n\\end{equation*}\nThis implies that\n\\begin{equation*}\n\\overline\\Dee z^\\alpha + i (-i \\sum \\tau_j\\alpha_j) \\beta z^\\alpha=0\n\\end{equation*}\nwith $\\beta$ as in Example \\ref{AnisotropicSphere}, so the numbers $\\sigma_\\alpha=(-i \\sum \\tau_j\\alpha_j)$ belong to $\\spec_{b,\\mathcal N}^0({}^b\\!\\overline \\partial)$. \n\nFor the sake of completeness we also show that if $\\sigma\\in \\spec_{b,\\mathcal N}^0({}^b\\!\\overline \\partial)$, then $\\sigma=\\sigma_\\alpha$ for some $\\alpha$ as above. To see this, suppose that $\\zeta:S^{2n+1}\\to\\mathbb C$ is not identically zero and satisfies\n\\begin{equation*}\n\\overline\\Dee \\zeta+i \\sigma\\zeta\\beta=0\n\\end{equation*}\nfor some $\\sigma\\ne 0$. Then $\\zeta$ is smooth, because the principal symbol of $\\overline\\Dee$ on functions is injective. Since $\\langle\\beta,\\mathcal T\\rangle=-i$,\n\\begin{equation*}\nT\\zeta+ \\sigma\\zeta=0.\n\\end{equation*}\nThus $\\zeta(\\mathfrak a_t(p))=e^{-\\sigma t}\\zeta(p)$ for any $p$. Since $|\\zeta(\\mathfrak a_t(p))|$ is bounded as a function of $t$ and $\\zeta$ is not identically $0$, $\\sigma$ must be purely imaginary. Since $\\zeta$ is a CR function, it extends uniquely to a holomorphic function $\\tilde\\zeta$ on $B=\\set{z\\in \\mathbb C^{n+1}:\\|z\\|<1}$, necessarily smooth up to the boundary. Let $\\zeta_t=\\zeta\\circ \\mathfrak a_t$. 
This is also a smooth CR function, so it has a unique holomorphic extension $\\tilde \\zeta_t$ to $B$. The integral curve through $z_0=(z^1_0,\\dotsc,z^{n+1}_0)$ of the vector field $\\mathcal T$ is\n\\begin{equation*}\nt\\mapsto \\mathfrak a_t(z_0)=(e^{i \\tau_1 t} z^1_0,\\dotsc,e^{i \\tau_{n+1} t} z^{n+1}_0)\n\\end{equation*}\nExtending the definition of $\\mathfrak a_t$ to allow arbitrary $z\\in \\mathbb C^{n+1}$ as argument we then have that $\\tilde \\zeta_t=\\tilde \\zeta\\circ \\mathfrak a_t$. Then\n\\begin{equation*}\n\\partial_t\\tilde \\zeta_t+ \\sigma\\tilde \\zeta_t=0\n\\end{equation*}\ngives\n\\begin{equation*}\n\\tilde\\zeta(z)=\\sum_{\\set{\\alpha:\\pmb \\tau\\cdot\\alpha=i \\sigma}} c_\\alpha z^\\alpha\n\\end{equation*}\nfor $|z|<1$, where $\\pmb\\tau=(\\tau_1,\\dotsc,\\tau_{n+1})$. Thus $\\sigma=-i\\sum\\tau_j\\alpha_j$ as claimed. Note that $\\Im \\sigma$ is negative (positive) if the $\\tau_j$ are positive (negative) and $\\alpha\\ne0$.\n\\end{example}\n\n\\section{Underlying CR complexes}\\label{sUnderlyingCRcomplexes}\n\nAgain let $\\mathfrak a:\\mathbb R\\times\\mathcal N\\to\\mathcal N$ be the flow of $\\mathcal T$. Let $\\mathcal {L}_\\mathcal T$ denote the Lie derivative with respect to $\\mathcal T$ on de Rham $q$-forms or vector fields and let $\\mathbf i_\\mathcal T$ denote interior multiplication by $\\mathcal T$ of de Rham $q$-forms or of elements of $\\raise2ex\\hbox{$\\mathchar\"0356$}^q\\smash[t]{\\overline\\Vee}^*$.\n\nThe proofs of the following two lemmas are elementary.\n\n\\begin{lemma}\nIf $\\alpha$ is a smooth section of the annihilator of $\\overline\\Vee$ in $\\mathbb C T^*\\mathcal N$, then $(\\mathcal {L}_{\\mathcal T}\\alpha)|_{\\overline\\Vee}=0$. Consequently, for each $p\\in \\mathcal N$ and $t\\in \\mathbb R$, $d\\mathfrak a_t:\\mathbb C T_p\\mathcal N\\to \\mathbb C T_{\\mathfrak a_t(p)}\\mathcal N$ maps $\\overline\\Vee_p$ onto $\\overline\\Vee_{\\mathfrak a_t(p)}$.\n\\end{lemma}\n\nIt follows that there is a well defined smooth bundle homomorphism $\\mathfrak a_t^*: \\raise2ex\\hbox{$\\mathchar\"0356$}^q\\smash[t]{\\overline\\Vee}^* \\to \\raise2ex\\hbox{$\\mathchar\"0356$}^q\\smash[t]{\\overline\\Vee}^*$ covering $\\mathfrak a_{-t}$. In particular, one can define the Lie derivative $\\mathcal {L}_{\\mathcal T}\\phi$ with respect to $\\mathcal T$ of an element in $\\phi \\in C^\\infty(\\mathcal N;\\raise2ex\\hbox{$\\mathchar\"0356$}^q\\smash[t]{\\overline\\Vee}^*)$. The usual formula holds:\n\n\\begin{lemma}\nIf $\\phi\\in C^\\infty(\\mathcal N;\\raise2ex\\hbox{$\\mathchar\"0356$}^q\\smash[t]{\\overline\\Vee}^*)$, then $\\mathcal {L}_{\\mathcal T}\\phi=\\mathbf i_{\\mathcal T}\\overline\\Dee\\phi+\\overline\\Dee\\mathbf i_{\\mathcal T}\\phi$. Consequently, for each $t$ and $\\phi\\in C^\\infty(\\mathcal N;\\raise2ex\\hbox{$\\mathchar\"0356$}^q\\smash[t]{\\overline\\Vee}^*)$, $\\overline\\Dee \\mathfrak a_t^*\\phi = \\mathfrak a_t^*\\overline\\Dee\\phi$.\n\\end{lemma}\n\n\\medskip\nFor any defining function $\\mathfrak r$ of $\\mathcal N$ in $\\mathcal M$, $\\overline \\K_{\\mathfrak r}=\\ker\\beta_\\mathfrak r$ is a CR structure of CR codimension $1$: indeed, $\\mathcal K_\\mathfrak r\\cap \\overline \\K_\\mathfrak r\\subset \\Span_\\mathbb C\\mathcal T$ but since $\\langle\\beta_\\mathfrak r,\\mathcal T\\rangle$ vanishes nowhere, we must have $\\overline \\K\\cap \\mathcal K=0$. Since $\\mathcal K\\oplus \\overline \\K\\oplus\\Span_\\mathbb C\\mathcal T=\\mathbb C T\\mathcal N$, the CR codimension is $1$. 
Finally, if $V,W\\in C^\\infty(\\mathcal N;\\overline \\K_{\\mathfrak r})$, then \n\\begin{equation*}\n\\langle\\beta_\\mathfrak r,[V,W]\\rangle=V\\langle\\beta_\\mathfrak r,W\\rangle-W\\langle\\beta_\\mathfrak r,V\\rangle-2\\overline\\Dee\\beta_\\mathfrak r(V,W).\n\\end{equation*}\nSince the right hand side vanishes, $[V,W]$ is again a section of $\\overline \\K_\\mathfrak r$.\n\nSince $\\overline\\Vee=\\overline \\K_\\mathfrak r\\oplus \\Span_\\mathbb C \\mathcal T$, the dual of $\\overline \\K_\\mathfrak r$ is canonically isomorphic to the kernel of $\\mathbf i_\\mathcal T:\\smash[t]{\\overline\\Vee}^*\\to\\mathbb C$. We will write $\\overline \\K^*$ for this kernel. More generally, $\\raise2ex\\hbox{$\\mathchar\"0356$}^q\\overline \\K_\\mathfrak r^*$ and the kernel, $\\raise2ex\\hbox{$\\mathchar\"0356$}^q\\overline \\K^*$, of $\\mathbf i_\\mathcal T:\\raise2ex\\hbox{$\\mathchar\"0356$}^q\\smash[t]{\\overline\\Vee}^*\\to\\raise2ex\\hbox{$\\mathchar\"0356$}^{q-1}\\smash[t]{\\overline\\Vee}^*$ are canonically isomorphic. The vector bundles $\\raise2ex\\hbox{$\\mathchar\"0356$}^q\\overline \\K^*$ are independent of the defining function $\\mathfrak r$. We regard the $\\overline \\partial_b$-operators of the CR structure as operators\n\\begin{equation*}\nC^\\infty(\\mathcal N;\\raise2ex\\hbox{$\\mathchar\"0356$}^q\\overline \\K^*)\\to C^\\infty(\\mathcal N;\\raise2ex\\hbox{$\\mathchar\"0356$}^{q+1}\\overline \\K^*).\n\\end{equation*}\nThey do depend on $\\mathfrak r$ but we will not indicate this in the notation. \n\nTo get a formula for $\\overline \\partial_b$, let\n\\begin{equation*}\n\\tilde \\beta_\\mathfrak r=\\frac{i }{i -a_\\mathfrak r}\\beta_\\mathfrak r\n\\end{equation*}\n(so that $\\langle i \\tilde \\beta_\\mathfrak r,\\mathcal T\\rangle=1$). The projection $\\Pi_\\mathfrak r:\\raise2ex\\hbox{$\\mathchar\"0356$}^q \\smash[t]{\\overline\\Vee}^*\\to\\raise2ex\\hbox{$\\mathchar\"0356$}^q\\smash[t]{\\overline\\Vee}^*$ on $\\raise2ex\\hbox{$\\mathchar\"0356$}^q\\overline \\K^*$ according to the decomposition\n\\begin{equation}\\label{DecompositionOfVeebar}\n\\raise2ex\\hbox{$\\mathchar\"0356$}^q\\smash[t]{\\overline\\Vee}^*=\\raise2ex\\hbox{$\\mathchar\"0356$}^q \\overline \\K^* \\oplus i \\tilde \\beta_\\mathfrak r\\wedge \\raise2ex\\hbox{$\\mathchar\"0356$}^{q-1}\\overline \\K^*\n\\end{equation}\nis\n\\begin{equation}\\label{DefinitionOfPi}\n\\Pi_\\mathfrak r\\phi=\\phi-i\\tilde \\beta_\\mathfrak r\\wedge \\mathbf i_\\mathcal T\\phi.\n\\end{equation}\n\n\\begin{lemma}\nWith the identification of $\\raise2ex\\hbox{$\\mathchar\"0356$}^q\\overline \\K_\\mathfrak r^*$ with $\\raise2ex\\hbox{$\\mathchar\"0356$}^q\\overline \\K^*$ described above, the $\\overline \\partial_b$-operators of the CR structure $\\overline \\K_\\mathfrak r$ are given by\n\\begin{equation}\\label{DefinitionOfdeebarb}\n\\overline \\partial_b\\phi=\\Pi_\\mathfrak r\\overline\\Dee\\phi\\quad \\text{ if }\\phi\\in C^\\infty(\\mathcal N,\\raise2ex\\hbox{$\\mathchar\"0356$}^q\\overline \\K^*).\n\\end{equation}\n\\end{lemma}\n\n\\begin{proof} Suppose that $(z,t)$ is a hypoanalytic chart for $\\overline\\Vee$ on some open set $U$, with $\\mathcal T t=1$. So $\\partial_{\\overline z^\\mu}$, $\\mu=1,\\dotsc,n$, $\\mathcal T=\\partial_t$ is a frame for $\\overline\\Vee$ over $U$ with dual frame $\\overline\\Dee\\overline z^\\mu$, $\\overline\\Dee t$.
If\n\\begin{equation*}\n\\beta_\\mathfrak r=\\sum_{\\mu=1}^n\\beta_\\mu\\overline\\Dee \\overline z^\\mu+\\beta_0\\overline\\Dee t,\n\\end{equation*}\nthen\n\\begin{equation*}\n\\overline L_\\mu=\\partial_{\\overline z^\\mu}-\\frac{\\beta_\\mu}{\\beta_0}\\partial_t,\\quad\\mu=1,\\dotsc,n\n\\end{equation*}\nis a frame for $\\overline \\K_\\mathfrak r$ over $U$. Let $\\overline \\eta^\\mu$ denote the dual frame (for $\\overline \\K_\\mathfrak r^*$). Since the $\\overline L_\\mu$ commute, $\\overline \\partial_b\\overline \\eta^\\mu=0$, so if $\\phi=\\sum'_{|I|=q}\\phi_I\\,\\overline \\eta^I$, then (with the notation as in, e.g., Folland and Kohn~\\cite{FK})\n\\begin{equation*}\n\\overline \\partial_b \\phi=\\sideset{}{'}\\sum_{|J|=q+1}\\,\\sideset{}{'}\\sum_{|I|=q}\\sum_\\mu \\sign {\\mu I}{J}\\overline L_\\mu\\phi_I\\,\\overline \\eta^J.\n\\end{equation*}\nOn the other hand, the frame of $\\smash[t]{\\overline\\Vee}^*$ dual to the frame $\\overline L_\\mu$, $\\mu=1,\\dotsc,n$, $\\mathcal T$ of $\\overline\\Vee$ is $\\overline\\Dee \\overline z^\\mu$, $i\\tilde \\beta_\\mathfrak r$, and the identification of $\\overline \\K_\\mathfrak r^*$ with $\\overline \\K^*$ maps the $\\overline \\eta^\\mu$ to the $\\overline\\Dee \\overline z^\\mu$. So, as a section of $\\raise2ex\\hbox{$\\mathchar\"0356$}^q\\smash[t]{\\overline\\Vee}^*$,\n\\begin{equation*}\n\\phi=\\sideset{}{'}\\sum_{|I|=q} \\phi_I\\, \\overline\\Dee\\overline z^I\n\\end{equation*}\nand\n\\begin{equation*}\n\\overline\\Dee\\phi=\\sideset{}{'}\\sum_{|J|=q+1}\\,\\sideset{}{'}\\sum_{|I|=q}\\sum_\\mu \\sign{\\mu I}{J} \\overline L_\\mu\\phi_I \\,\\overline\\Dee\\overline z^J+\ni\\tilde \\beta_\\mathfrak r\\wedge\\sideset{}{'}\\sum_{|I|=q} \\mathcal T\\phi_I\\,\\overline\\Dee\\overline z^I.\n\\end{equation*}\nThus $\\Pi_\\mathfrak r\\overline\\Dee\\phi$ is the section of $\\raise2ex\\hbox{$\\mathchar\"0356$}^{q+1}\\overline \\K^*$ associated with $\\overline \\partial_b\\phi$ by the identifying map.\n\\end{proof}\n\nUsing \\eqref{DefinitionOfPi} in \\eqref{DefinitionOfdeebarb} and the fact that $\\mathbf i_\\mathcal T\\overline\\Dee\\phi=\\mathcal {L}_\\mathcal T\\phi$ if $\\phi\\in C^\\infty(\\mathcal N;\\raise2ex\\hbox{$\\mathchar\"0356$}^q\\overline \\K^*)$ we get\n\\begin{equation}\\label{FormulaForDeeOnK*}\n\\overline \\partial_b\\phi=\\overline\\Dee\\phi-i \\tilde \\beta_\\mathfrak r\\wedge\\mathcal {L}_\\mathcal T\\phi \\quad \\text{ if }\\phi\\in C^\\infty(\\mathcal N,\\raise2ex\\hbox{$\\mathchar\"0356$}^q\\overline \\K^*).\n\\end{equation}\n\nThe $\\overline\\Dee$ operators can be expressed in terms of the $\\overline \\partial_b$ operators. Suppose $\\phi\\in C^\\infty(\\mathcal N;\\raise2ex\\hbox{$\\mathchar\"0356$}^q\\smash[t]{\\overline\\Vee}^*)$. Then $\\phi=\\phi^0+i \\tilde \\beta_\\mathfrak r\\wedge \\phi^1$ with unique $\\phi^0 \\in C^\\infty(\\mathcal N;\\raise2ex\\hbox{$\\mathchar\"0356$}^q \\overline \\K^*)$ and $\\phi^1 \\in C^\\infty(\\mathcal N;\\raise2ex\\hbox{$\\mathchar\"0356$}^{q-1} \\overline \\K^*)$, and\n\\begin{equation*}\n\\overline\\Dee\\phi^0=\\overline \\partial_b\\phi^0+i\\tilde \\beta_\\mathfrak r\\wedge\\mathcal {L}_\\mathcal T\\phi^0,\n\\end{equation*}\nsee \\eqref{FormulaForDeeOnK*}.
Using\n\\begin{equation*}\n\\overline\\Dee\\tilde \\beta_\\mathfrak r=\\frac{\\overline\\Dee a_\\mathfrak r}{i - a_\\mathfrak r}\\wedge\\tilde \\beta_\\mathfrak r\n\\end{equation*}\nand \\eqref{FormulaForDeeOnK*} again we get\n\\begin{equation*}\n\\overline\\Dee(i \\tilde \\beta_\\mathfrak r\\wedge \\phi^1)=\ni\\tilde \\beta_\\mathfrak r\\wedge \\big(-\\frac{\\overline\\Dee a_\\mathfrak r}{i - a_\\mathfrak r}\\wedge\\phi^1 - \\overline\\Dee\\phi^1 \\big)=\ni\\tilde \\beta_\\mathfrak r\\wedge \\big(-\\frac{\\overline \\partial_b a_\\mathfrak r}{i - a_\\mathfrak r}\\wedge\\phi^1 - \\overline \\partial_b\\phi^1 \\big).\n\\end{equation*}\nThis gives\n\\begin{equation}\\label{DeeAsMatrix}\n\\overline\\Dee=\n\\begin{bmatrix}\n\\overline \\partial_b & 0\\\\\n\\mathcal {L}_\\mathcal T & -\\overline \\partial_b -\\dfrac{\\overline \\partial_b a_\\mathfrak r}{i -a_\\mathfrak r}\n\\end{bmatrix}\n:\n\\begin{matrix}\nC^\\infty(\\mathcal N;\\raise2ex\\hbox{$\\mathchar\"0356$}^q\\overline \\K^*)\\\\ \\oplus \\\\ C^\\infty(\\mathcal N;\\raise2ex\\hbox{$\\mathchar\"0356$}^{q-1}\\overline \\K^*)\n\\end{matrix}\n\\to\n\\begin{matrix}\nC^\\infty(\\mathcal N;\\raise2ex\\hbox{$\\mathchar\"0356$}^{q+1}\\overline \\K^*)\\\\ \\oplus \\\\ C^\\infty(\\mathcal N;\\raise2ex\\hbox{$\\mathchar\"0356$}^q\\overline \\K^*)\n\\end{matrix}.\n\\end{equation}\n\nSince $\\mathcal T$ itself is $\\mathcal T$-invariant, $\\mathbf i_\\mathcal T\\mathfrak a_t^*=\\mathfrak a_t^*\\mathbf i_\\mathcal T$, so the subbundle $\\overline \\K^*$ of $\\smash[t]{\\overline\\Vee}^*$ is invariant under $\\mathfrak a_t^*$ for each $t$. This need not be true of $\\overline \\K_\\mathfrak r$, i.e., the statement that for all $t$, $d\\mathfrak a_t(\\overline \\K_\\mathfrak r)\\subset \\overline \\K_\\mathfrak r$, equivalently,\n\\begin{equation*}\nL\\in C^\\infty(\\mathcal N;\\overline \\K_\\mathfrak r)\\implies [\\mathcal T,L]\\in C^\\infty(\\mathcal N;\\overline \\K_\\mathfrak r),\n\\end{equation*}\nmay fail to hold. Since $\\overline\\Dee\\beta_\\mathfrak r=0$, the formula\n\\begin{equation*}\n0=\\mathcal T\\langle \\beta_\\mathfrak r,L\\rangle - L\\langle \\beta_\\mathfrak r,\\mathcal T\\rangle - \\langle \\beta_\\mathfrak r,[\\mathcal T,L]\\rangle\n\\end{equation*}\nwith $L\\in C^\\infty(\\mathcal N;\\overline \\K_\\mathfrak r)$ gives that $\\overline \\K_\\mathfrak r$ is invariant under $d\\mathfrak a_t$ if and only if $La_\\mathfrak r=0$ for each CR vector field $L$, that is, if and only if $a_\\mathfrak r$ is a CR function. This proves the equivalence between the first and last statements in the following lemma. The third statement is the most useful.\n\n\\begin{lemma}\\label{Invariances}\nLet $\\mathfrak r$ be a defining function for $\\mathcal N$ in $\\mathcal M$ and let $\\overline \\partial_b$ denote the operators of the associated CR complex.
The following are equivalent:\n\\begin{enumerate}\n\\item The function $a_\\mathfrak r$ is CR;\n\\item $\\mathcal {L}_\\mathcal T\\tilde \\beta_\\mathfrak r=0$;\n\\item $\\mathcal {L}_\\mathcal T\\overline \\partial_b-\\overline \\partial_b\\mathcal {L}_\\mathcal T=0$;\n\\item $\\overline \\K_\\mathfrak r$ is $\\mathcal T$-invariant.\n\\end{enumerate}\n\\end{lemma}\n\n\\begin{proof}\nFrom $\\beta_\\mathfrak r=(a_\\mathfrak r-i)i\\tilde \\beta_\\mathfrak r$ and $\\mathcal {L}_\\mathcal T\\beta_\\mathfrak r=\\overline\\Dee a_\\mathfrak r$ we obtain\n\\begin{equation*}\n\\overline\\Dee a_\\mathfrak r=(\\mathcal {L}_\\mathcal T a_\\mathfrak r)i \\tilde \\beta_\\mathfrak r+(a_\\mathfrak r-i)i \\mathcal {L}_\\mathcal T\\tilde \\beta_\\mathfrak r,\n\\end{equation*}\nso\n\\begin{equation*}\n\\overline \\partial_b a_\\mathfrak r=\\overline\\Dee a_\\mathfrak r-(\\mathcal {L}_\\mathcal T a_\\mathfrak r) i \\tilde \\beta_\\mathfrak r =(a_\\mathfrak r-i)i \\mathcal {L}_\\mathcal T\\tilde \\beta_\\mathfrak r.\n\\end{equation*}\nThus $a_\\mathfrak r$ is CR if and only if $\\mathcal {L}_\\mathcal T\\tilde \\beta_\\mathfrak r=0$.\n\nUsing $\\mathcal {L}_\\mathcal T\\overline\\Dee=\\overline\\Dee\\mathcal {L}_\\mathcal T$ and the definition of $\\overline \\partial_b$ we get\n\\begin{equation*}\n\\mathcal {L}_\\mathcal T \\overline \\partial_b\\phi\n=\\mathcal {L}_\\mathcal T(\\overline\\Dee\\phi-i\\tilde \\beta_\\mathfrak r\\wedge \\mathcal {L}_\\mathcal T\\phi)\n=\\overline \\partial_b\\mathcal {L}_\\mathcal T\\phi-i(\\mathcal {L}_\\mathcal T\\tilde \\beta_\\mathfrak r)\\wedge \\mathcal {L}_\\mathcal T\\phi\n\\end{equation*}\nfor $\\phi\\in C^\\infty(\\mathcal N;\\raise2ex\\hbox{$\\mathchar\"0356$}^q\\overline \\K^*)$. Thus $\\mathcal {L}_\\mathcal T\\overline \\partial_b-\\overline \\partial_b\\mathcal {L}_\\mathcal T=0$ if and only if $\\mathcal {L}_\\mathcal T\\tilde \\beta_\\mathfrak r=0$.\n\\end{proof}\n\n\n\\begin{lemma}\\label{WithInvariantMetric}\nSuppose that $\\overline\\Vee$ admits a $\\mathcal T$-invariant metric. Then there is a defining function $\\mathfrak r$ for $\\mathcal N$ in $\\mathcal M$ such that $a_\\mathfrak r$ is constant. If $\\mathfrak r$ and $\\mathfrak r'$ are defining functions such that $a_\\mathfrak r$ and $a_{\\mathfrak r'}$ are constant, then $a_\\mathfrak r = a_{\\mathfrak r'}$. This constant will be denoted $\\mathfrak a_\\mathrm{av}$.\n\\end{lemma}\n\n\\begin{proof}\nLet $h$ be a metric as stated. Let $\\mathcal H^{0,1}$ be the subbundle of $\\overline\\Vee$ orthogonal to $\\mathcal T$. This is $\\mathcal T$-invariant, and since the metric is $\\mathcal T$-invariant, $\\mathcal H^{0,1}$ has a $\\mathcal T$-invariant metric. This metric gives canonically a metric on $\\mathcal H^{1,0}=\\overline {\\mathcal H^{0,1}}$. Using the decomposition $\\mathbb C T\\mathcal N=\\mathcal H^{1,0}\\oplus\\mathcal H^{0,1}\\oplus \\Span_\\mathbb C \\mathcal T$ we get a $\\mathcal T$-invariant metric on $\\mathbb C T\\mathcal N$ for which the decomposition is orthogonal. This metric is induced by a Riemannian metric $g$. Let $\\mathfrak m_0$ be the corresponding Riemannian density, which is $\\mathcal T$-invariant because $g$ is. 
Since $\\overline\\Dee$, $h$, and $\\mathfrak m_0$ are $\\mathcal T$-invariant, so are the formal adjoint $\\overline\\Dee^\\star$ of $\\overline\\Dee$ and the Laplacians of the $\\overline\\Dee$-complex, and if $G$ denotes the Green's operators for these Laplacians, then $G$ is also $\\mathcal T$-invariant, as is the orthogonal projection $\\Pi$ on the space of $\\overline\\Dee$-harmonic forms. Arbitrarily pick a defining function $\\mathfrak r$ for $\\mathcal N$ in $\\mathcal M$. Then\n\\begin{equation*}\na_\\mathfrak r-G\\overline\\Dee^\\star\\overline\\Dee a_\\mathfrak r=\\Pi a_\\mathfrak r\n\\end{equation*}\nwhere $\\Pi a_\\mathfrak r$ is a constant function by Lemma~\\ref{ConstantSolutions}.\nSince $\\beta_\\mathfrak r$ is $\\overline\\Dee$-closed, $\\overline\\Dee a_\\mathfrak r=\\mathcal {L}_\\mathcal T\\beta_\\mathfrak r$. Thus $G\\overline\\Dee^\\star\\overline\\Dee a_\\mathfrak r= \\mathcal T G\\overline\\Dee^\\star \\beta_\\mathfrak r$, and since $a_\\mathfrak r$ is real valued and $\\mathcal T$ is a real vector field,\n\\begin{equation*}\na_\\mathfrak r- \\mathcal T \\Re G\\overline\\Dee^\\star\\beta_\\mathfrak r=\\Re\\Pi a_\\mathfrak r.\n\\end{equation*}\nExtend the function $u=\\Re G\\overline\\Dee^\\star\\beta_\\mathfrak r$ to $\\mathcal M$ as a smooth real-valued function. Then $\\mathfrak r'=e^{-u}\\mathfrak r$ has the required property.\n\nSuppose that $\\mathfrak r$, $\\mathfrak r'$ are defining functions for $\\mathcal N$ in $\\mathcal M$ such that $a_\\mathfrak r$ and $a_{\\mathfrak r'}$ are constant. Then these functions are equal by Proposition \\ref{Averages}. \n\\end{proof}\n\nNote that if for some $\\mathfrak r$, the subbundle $\\overline \\K_\\mathfrak r$ is $\\mathcal T$-invariant and admits a $\\mathcal T$ invariant Hermitian metric, then there is a $\\mathcal T$-invariant metric on $\\overline\\Vee$.\n\n\\medskip\nSuppose now that $\\rho:F\\to\\mathcal M$ is a holomorphic vector bundle over $\\mathcal M$. Using the operators\n\\begin{equation*}\n\\overline\\Dee:C^\\infty(\\mathcal N;\\raise2ex\\hbox{$\\mathchar\"0356$}^q\\smash[t]{\\overline\\Vee}^*\\otimes F_\\mathcal N)\\to C^\\infty(\\mathcal N;\\raise2ex\\hbox{$\\mathchar\"0356$}^{q+1}\\smash[t]{\\overline\\Vee}^*\\otimes F_\\mathcal N),\n\\end{equation*}\nsee \\eqref{defDeebarE}, define operators\n\\begin{equation}\\label{deebarbE}\n\\cdots\\to C^\\infty(\\mathcal N;\\raise2ex\\hbox{$\\mathchar\"0356$}^q\\overline \\K^*\\otimes F_\\mathcal N)\\xrightarrow{\\overline \\partial_b} C^\\infty(\\mathcal N;\\raise2ex\\hbox{$\\mathchar\"0356$}^{q+1}\\overline \\K^*\\otimes F_\\mathcal N)\\to\\cdots\n\\end{equation}\nby\n\\begin{equation*}\n\\overline \\partial_b \\phi =\\Pi_\\mathfrak r\\overline\\Dee\\phi,\\quad\\phi\\in C^\\infty(\\mathcal N;\\raise2ex\\hbox{$\\mathchar\"0356$}^q\\overline \\K^*\\otimes F_\\mathcal N)\n\\end{equation*}\nwhere $\\Pi_\\mathfrak r$ means $\\Pi_\\mathfrak r\\otimes I$ with $\\Pi_\\mathfrak r$ defined by \\eqref{DefinitionOfPi}. The operators \\eqref{deebarbE} form a complex. Define also\n\\begin{equation*}\n\\mathcal {L}_\\mathcal T = \\mathbf i_\\mathcal T\\overline\\Dee + \\overline\\Dee\\mathbf i_\\mathcal T\n\\end{equation*}\nwhere $\\mathbf i_\\mathcal T$ stands for $\\mathbf i_\\mathcal T\\otimes I$. 
Then\n\\begin{equation*}\n\\mathbf i_\\mathcal T \\mathcal {L}_\\mathcal T = \\mathcal {L}_\\mathcal T\\mathbf i_\\mathcal T,\\quad \\mathcal {L}_\\mathcal T\\overline\\Dee=\\overline\\Dee\\mathcal {L}_\\mathcal T.\n\\end{equation*}\nThe first of these identities implies that the image of $C^\\infty(\\mathcal N;\\raise2ex\\hbox{$\\mathchar\"0356$}^q\\overline \\K^*\\otimes F_\\mathcal N)$ by $\\mathcal {L}_\\mathcal T$ is contained in $C^\\infty(\\mathcal N;\\raise2ex\\hbox{$\\mathchar\"0356$}^q\\overline \\K^*\\otimes F_\\mathcal N)$.\nWith these definitions, $\\overline\\Dee$ as an operator\n\\begin{equation*}\n\\overline\\Dee :\n\\begin{matrix}\nC^\\infty(\\mathcal N;\\raise2ex\\hbox{$\\mathchar\"0356$}^q\\overline \\K^*\\otimes F_\\mathcal N)\\\\ \\oplus \\\\ C^\\infty(\\mathcal N;\\raise2ex\\hbox{$\\mathchar\"0356$}^{q-1}\\overline \\K^*\\otimes F_\\mathcal N)\n\\end{matrix}\n\\to\n\\begin{matrix}\nC^\\infty(\\mathcal N;\\raise2ex\\hbox{$\\mathchar\"0356$}^{q+1}\\overline \\K^*\\otimes F_\\mathcal N)\\\\ \\oplus \\\\ C^\\infty(\\mathcal N;\\raise2ex\\hbox{$\\mathchar\"0356$}^q\\overline \\K^*\\otimes F_\\mathcal N)\n\\end{matrix}\n\\end{equation*}\nis given by the matrix in \\eqref{DeeAsMatrix} with the new meanings for $\\overline \\partial_b$ and $\\mathcal {L}_\\mathcal T$.\n\nAssume that there is a $\\mathcal T$-invariant Riemannian metric on $\\mathcal N$, that $\\mathfrak r$ has been chosen so that $a_\\mathfrak r$ is constant, that $\\overline \\K_\\mathfrak r$ is orthogonal to $\\mathcal T$, and that $\\mathcal T$ has unit length. Then the term involving $\\overline \\partial_b a_\\mathfrak r$ in the matrix \\eqref{DeeAsMatrix} is absent, and since $\\overline\\Dee^2=0$,\n\\begin{equation*}\n\\mathcal {L}_\\mathcal T\\overline \\partial_b=\\overline \\partial_b\\mathcal {L}_\\mathcal T.\n\\end{equation*}\nWrite $h_{\\smash[t]{\\overline\\Vee}^*}$ for the metric induced on the bundles $\\raise2ex\\hbox{$\\mathchar\"0356$}^q\\smash[t]{\\overline\\Vee}^*$ or $\\raise2ex\\hbox{$\\mathchar\"0356$}^q\\overline \\K^*$.\n\nIf $\\eta_\\mu$, $\\mu=1,\\dotsc,k$, is a local frame of $F_\\mathcal N$ over an open set $U\\subset \\mathcal N$ and $\\phi$ is a local section of $\\raise2ex\\hbox{$\\mathchar\"0356$}^q\\smash[t]{\\overline\\Vee}^*\\otimes F_\\mathcal N$ over $U$, then for some smooth sections $\\phi^\\mu$ of $\\raise2ex\\hbox{$\\mathchar\"0356$}^q\\smash[t]{\\overline\\Vee}^*$ and $\\omega^\\nu_\\mu$ of $\\smash[t]{\\overline\\Vee}^*$ over $U$,\n\\begin{equation*}\n\\phi= \\sum_\\mu \\phi^\\mu \\otimes \\eta_\\mu,\\quad \\overline\\Dee \\sum_\\mu \\phi^\\mu \\otimes \\eta_\\mu = \\sum_\\nu (\\overline\\Dee\\phi^\\nu + \\sum_\\mu \\omega^\\nu_\\mu \\wedge \\phi^\\mu)\\otimes \\eta_\\nu.\n\\end{equation*}\nThis gives\n\\begin{equation*}\n\\overline \\partial_b \\sum_\\mu \\phi^\\mu \\otimes \\eta_\\mu = \\sum_\\nu (\\overline \\partial_b\\phi^\\nu + \\sum_\\mu \\Pi_\\mathfrak r\\omega^\\nu_\\mu \\wedge \\phi^\\mu)\\otimes \\eta_\\nu\n\\end{equation*}\nand\n\\begin{equation*}\n\\mathcal {L}_\\mathcal T \\sum_\\mu \\phi^\\mu \\otimes \\eta_\\mu = \\sum_\\nu (\\mathcal {L}_\\mathcal T\\phi^\\nu + \\sum_\\mu \\langle\\omega^\\nu_\\mu,\\mathcal T\\rangle \\phi^\\mu)\\otimes \\eta_\\nu.\n\\end{equation*}\n\nSuppose now that $h_F$ is a Hermitian metric on $F$. With this metric and the metric $h_{\\smash[t]{\\overline\\Vee}^*}$ we get Hermitian metrics $h$ on each of the bundles $\\raise2ex\\hbox{$\\mathchar\"0356$}^q\\smash[t]{\\overline\\Vee}^*\\otimes F_\\mathcal N$.
If $\\eta_\\mu$ is an orthonormal frame of $F_\\mathcal N$ and $\\phi=\\sum\\phi^\\mu\\otimes\\eta_\\mu$, $\\psi=\\sum\\psi^\\mu\\otimes\\eta_\\mu$ are sections of $\\raise2ex\\hbox{$\\mathchar\"0356$}^q\\smash[t]{\\overline\\Vee}^*\\otimes F_\\mathcal N$, then\n\\begin{equation*}\nh(\\phi,\\psi) = \\sum_\\mu h_{\\smash[t]{\\overline\\Vee}^*}(\\phi^\\mu,\\psi^\\mu).\n\\end{equation*}\nTherefore\n\\begin{align*}\nh(\\mathcal {L}_\\mathcal T\\phi,\\psi)&+h(\\phi,\\mathcal {L}_\\mathcal T\\psi)\\\\\n&= \\sum_\\nu h_{\\smash[t]{\\overline\\Vee}^*}(\\mathcal {L}_\\mathcal T\\phi^\\nu + \\sum_\\mu \\langle\\omega^\\nu_\\mu,\\mathcal T\\rangle \\phi^\\mu,\\psi^\\nu)\n+ \\sum_\\mu h_{\\smash[t]{\\overline\\Vee}^*}(\\phi^\\mu, \\mathcal {L}_\\mathcal T\\psi^\\mu + \\sum_\\nu\\langle\\omega^\\mu_\\nu,\\mathcal T\\rangle \\psi^\\nu)\\\\\n&= \\sum_\\nu \\mathcal T h_{\\smash[t]{\\overline\\Vee}^*}(\\phi^\\nu,\\psi^\\nu) + \\sum_{\\mu,\\nu} ( \\langle\\omega^\\nu_\\mu,\\mathcal T\\rangle + \\overline{\\langle\\omega^\\mu_\\nu,\\mathcal T\\rangle}) h_{\\smash[t]{\\overline\\Vee}^*}(\\phi^\\mu,\\psi^\\nu)\\\\\n&= \\mathcal T h(\\phi,\\psi) + \\sum_{\\mu,\\nu} ( \\langle\\omega^\\nu_\\mu,\\mathcal T\\rangle + \\overline{\\langle\\omega^\\mu_\\nu,\\mathcal T\\rangle}) h_{\\smash[t]{\\overline\\Vee}^*}(\\phi^\\mu,\\psi^\\nu).\n\\end{align*}\nThus $\\mathcal T h(\\phi,\\psi) = h(\\mathcal {L}_\\mathcal T\\phi,\\psi) +h(\\phi,\\mathcal {L}_\\mathcal T\\psi)$ if and only if\n\\begin{equation}\\label{TangencyOfTE}\n\\langle\\omega^\\nu_\\mu,\\mathcal T\\rangle + \\overline{\\langle\\omega^\\mu_\\nu,\\mathcal T\\rangle}=0\\text{ for all }\\mu,\\nu.\n\\end{equation}\nThis condition is \\eqref{ExactMetricCondition}; just note that by the definition of $\\overline\\Dee$, the forms $(\\Phi^*)^{-1}\\omega^\\nu_\\mu$ in \\eqref{ExactMetricCondition} are the forms that we are denoting $\\omega^\\nu_\\mu$ here. Thus \\eqref{TangencyOfTE} holds if and only if $h_F$ is an exact Hermitian metric, see Definition~\\ref{ExactMetric}.\n\nConsequently,\n\n\\begin{lemma} The statement\n\\begin{equation}\\label{LieTbis}\n\\mathcal T h(\\phi,\\psi) = h(\\mathcal {L}_\\mathcal T\\phi,\\psi)+h(\\phi,\\mathcal {L}_\\mathcal T\\psi)\\quad \\forall \\phi,\\psi\\in C^\\infty(\\mathcal N;\\raise2ex\\hbox{$\\mathchar\"0356$}^q\\smash[t]{\\overline\\Vee}^*\\otimes F_\\mathcal N)\n\\end{equation}\nholds if and only if the Hermitian metric $h_F$ is exact.\n\\end{lemma}\n\n\n\n\\section{Spectrum}\\label{sSpectrum}\n\nSuppose that $\\overline\\Vee$ admits a $\\mathcal T$-invariant Hermitian metric. Let $\\mathfrak r$ be a defining function for $\\mathcal N$ in $\\mathcal M$ such that $a_\\mathfrak r$ is constant. By Lemma~\\ref{Invariances}, $\\overline \\K_\\mathfrak r$ is $\\mathcal T$-invariant, so the restriction of the metric to this subbundle gives a $\\mathcal T$-invariant metric; we use the induced metric on the bundles $\\raise2ex\\hbox{$\\mathchar\"0356$}^q\\overline \\K^*$ in the following. As in the proof of Lemma~\\ref{WithInvariantMetric}, there is a $\\mathcal T$-invariant density $\\mathfrak m_0$ on $\\mathcal N$. \n\nLet $\\rho:F\\to\\mathcal M$ be a Hermitian holomorphic vector bundle and assume that the Hermitian metric of $F$ is exact, so that with the induced metric $h$ on the vector bundles $\\raise2ex\\hbox{$\\mathchar\"0356$}^q\\smash[t]{\\overline\\Vee}^*\\otimes F_\\mathcal N$, \\eqref{LieTbis} holds. We will write $F$ in place of $F_\\mathcal N$.
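\nThe following minimal example, which is ours and is meant only to illustrate the exactness hypothesis (we do not claim it reproduces Definition~\\ref{ExactMetric} verbatim, only the condition \\eqref{TangencyOfTE} derived above), may help fix ideas. Suppose $F$ is the trivial holomorphic line bundle, trivialized by a section $s$ with $\\overline\\Dee s=0$ along $\\mathcal N$, and let $h_F(s,s)=e^{-2u}$ with $u$ real-valued and smooth. Then $\\eta=e^{u}s$ is a unit frame of $F_\\mathcal N$, and\n\\begin{equation*}\n\\overline\\Dee(\\phi\\otimes\\eta)=\\overline\\Dee(e^{u}\\phi\\otimes s)=(\\overline\\Dee\\phi+\\overline\\Dee u\\wedge\\phi)\\otimes\\eta,\n\\end{equation*}\nso the only connection coefficient is $\\omega=\\overline\\Dee u$, and \\eqref{TangencyOfTE} reads $\\langle\\overline\\Dee u,\\mathcal T\\rangle+\\overline{\\langle\\overline\\Dee u,\\mathcal T\\rangle}=2\\,\\mathcal T u=0$; that is, in this example \\eqref{LieTbis} holds precisely when the weight $u$ is constant along the orbits of $\\mathcal T$.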
\n\nLet $\\overline \\partial_b^\\star$ be the formal adjoint of the operator $\\overline \\partial_b$ in \\eqref{deebarbE} with respect to the inner product on the bundles $\\raise2ex\\hbox{$\\mathchar\"0356$}^q\\overline \\K^*\\otimes F$ and the density $\\mathfrak m_0$, and let $\\square_{b,q} = \\overline \\partial_b\\overline \\partial_b^\\star + \\overline \\partial_b^\\star\\overline \\partial_b$ be the formal $\\overline \\partial_b$-Laplacian. Since $-i\\mathcal {L}_\\mathcal T$ is formally selfadjoint and commutes with $\\overline \\partial_b$, $\\mathcal {L}_\\mathcal T$ commutes with $\\square_{b,q}$. Let\n\\begin{equation*}\n\\mathscr H^q_{\\overline \\partial_b}(\\mathcal N;F)=\\ker\\square_{b,q}=\\set{\\phi\\in L^2(\\mathcal N;\\raise2ex\\hbox{$\\mathchar\"0356$}^q\\overline \\K^*\\otimes F):\\square_{b,q}\\phi=0}\n\\end{equation*}\nand let\n\\begin{equation*}\n\\Dom_q(\\mathcal {L}_\\mathcal T)=\\set{\\phi\\in \\mathscr H^q_{\\overline \\partial_b}(\\mathcal N;F):\\mathcal {L}_\\mathcal T\\phi\\in \\mathscr H^q_{\\overline \\partial_b}(\\mathcal N;F)}.\n\\end{equation*}\nThe spaces $\\mathscr H^q_{\\overline \\partial_b}(\\mathcal N;F)$ may be of infinite dimension, but in any case they are closed subspaces of $L^2(\\mathcal N;\\raise2ex\\hbox{$\\mathchar\"0356$}^q\\overline \\K^*\\otimes F)$, so they may be regarded as Hilbert spaces in their own right. If $\\phi\\in \\mathscr H^q_{\\overline \\partial_b}(\\mathcal N;F)$, the condition $\\mathcal {L}_\\mathcal T\\phi\\in \\mathscr H^q_{\\overline \\partial_b}(\\mathcal N;F)$ is equivalent to the condition\n\\begin{equation*}\n\\mathcal {L}_\\mathcal T\\phi\\in L^2(\\mathcal N;\\raise2ex\\hbox{$\\mathchar\"0356$}^q\\overline \\K^*\\otimes F).\n\\end{equation*}\nSo we have a closed operator\n\\begin{equation}\\label{LieOnB-Harmonic}\n-i \\mathcal {L}_\\mathcal T:\\Dom_q(\\mathcal {L}_\\mathcal T)\\subset \\mathscr H^q_{\\overline \\partial_b}(\\mathcal N;F)\\to \\mathscr H^q_{\\overline \\partial_b}(\\mathcal N;F).\n\\end{equation}\nThe fact that $\\square_{b,q}-\\mathcal {L}_\\mathcal T^2$ is elliptic, symmetric, and commutes with $\\mathcal {L}_\\mathcal T$ implies that \\eqref{LieOnB-Harmonic} is a selfadjoint Fredholm operator with discrete spectrum (see \\cite[Theorem 2.5]{Me9}).\n\n\\begin{definition}\\label{BCohomologyWithCoeffs}\nLet $\\spec^q_0(-i \\mathcal {L}_\\mathcal T)$ be the spectrum of the operator \\eqref{LieOnB-Harmonic}, and let $\\mathscr H^q_{\\overline \\partial_b,\\tau}(\\mathcal N;F)$ be the eigenspace of $-i\\mathcal {L}_\\mathcal T$ in $\\mathscr H^q_{\\overline \\partial_b}(\\mathcal N;F)$ corresponding to the eigenvalue $\\tau$.\n\\end{definition}\n\nLet $\\pmb\\tau$ denote the principal symbol of $-i\\mathcal T$. Then the principal symbol of $-i\\mathcal {L}_\\mathcal T$ acting on sections of $\\raise2ex\\hbox{$\\mathchar\"0356$}^q\\overline \\K^*$ is $\\pmb\\tau I$. Because $\\square_{b,q}-\\mathcal {L}_\\mathcal T^2$ is elliptic, $\\Char(\\square_{b,q})$, the characteristic variety of $\\square_{b,q}$, is contained in the set where $\\pmb\\tau\\ne 0$. Let\n\\begin{equation*}\n\\Char^\\pm(\\square_{b,q})=\\set{\\nu\\in\\Char (\\square_{b,q}):\\pmb \\tau(\\nu)\\gtrless 0}.\n\\end{equation*}\nBy \\cite[Theorem 4.1]{Me9}, if $\\square_{b,q}$ is microlocally hypoelliptic on $\\Char^\\pm(\\square_{b,q})$, then \n\\begin{equation*}\n\\set{\\tau\\in \\spec^q_0(-i \\mathcal {L}_\\mathcal T):\\tau\\gtrless0}\n\\end{equation*}\nis finite.
We should perhaps point out that $\\Char(\\square_{b,q})$ is equal to the characteristic variety, $\\Char(\\overline \\K_\\mathfrak r)$, of the CR structure. \n\nAs a special case consider the situation where $F$ is the trivial line bundle. Let $\\theta_\\mathfrak r$ be the real $1$-form on $\\mathcal N$ which vanishes on $\\overline \\K_\\mathfrak r$ and satisfies $\\langle\\theta_\\mathfrak r,\\mathcal T\\rangle =1$; thus $\\theta_\\mathfrak r$ is smooth, spans $\\Char(\\overline \\K_\\mathfrak r)$, and has values in $\\Char^+(\\overline \\K_\\mathfrak r)$. The Levi form of the structure is\n\\begin{equation*}\n\\Levi_{\\theta_\\mathfrak r}(v,w)=-i d\\theta_\\mathfrak r(v,\\overline w), \\quad v,\\ w\\in \\mathcal K_{\\mathfrak r,p},\\ p\\in \\mathcal N.\n\\end{equation*}\nSuppose that $\\Levi_{\\theta_\\mathfrak r}$ is nondegenerate, with $k$ positive and $n-k$ negative eigenvalues. It is well known that then $\\square_{b,q}$ is microlocally hypoelliptic at $\\nu\\in\\Char(\\overline \\K_\\mathfrak r)$ for all $q$ except if $q=k$ and $\\pmb \\tau(\\nu)<0$ or if $q=n-k$ and $\\pmb \\tau(\\nu)>0$.\n\nThen the already mentioned Theorem~4.1 of \\cite{Me9} gives:\n\n\\begin{theorem}[{\\cite[Theorem 6.1]{Me9}}]\\label{WeakVanishing}\nSuppose that $\\overline\\Vee$ admits a $\\mathcal T$-invariant Hermitian metric and that for some defining function $\\mathfrak r$ such that $a_\\mathfrak r$ is constant, $\\Levi_{\\theta_\\mathfrak r}$ is nondegenerate with $k$ positive and $n-k$ negative eigenvalues. Then\n\\begin{enumerate}\n\\item $\\spec_0^q(-i \\mathcal {L}_\\mathcal T)$ is finite if $q\\ne k,\\ n-k$;\n\\item $\\spec_0^k(-i\\mathcal {L}_\\mathcal T)$ contains only finitely many positive elements, and \\item $\\spec_0^{n-k}(-i\\mathcal {L}_\\mathcal T)$ contains only finitely many negative elements.\n\\end{enumerate}\n\\end{theorem}\n\n\\section{Indicial cohomology}\\label{sIndicialCohomology}\n\nSuppose that there is a $\\mathcal T$-invariant Hermitian metric $\\tilde h$ on $\\overline\\Vee$. By Lemma~\\ref{WithInvariantMetric} there is a defining function $\\mathfrak r$ such that $\\langle\\beta_\\mathfrak r,\\mathcal T\\rangle$ is constant, equal to $a_\\mathrm{av}-i$. Therefore $\\overline \\K_\\mathfrak r$ is $\\mathcal T$-invariant. Let $h$ be the metric on $\\overline\\Vee$ which coincides with $\\tilde h$ on $\\overline \\K_\\mathfrak r$, makes the decomposition $\\overline\\Vee=\\overline \\K_\\mathfrak r\\oplus \\Span_\\mathbb C\\mathcal T$ orthogonal, and for which $\\mathcal T$ has unit length. The metric $h$ is $\\mathcal T$-invariant. We fix $\\mathfrak r$ and such a metric, and let $\\mathfrak m_0$ be the Riemannian measure associated with $h$. The decomposition \\eqref{DecompositionOfVeebar} of $\\raise2ex\\hbox{$\\mathchar\"0356$}^q\\smash[t]{\\overline\\Vee}^*$ is an orthogonal decomposition.\n\nRecall that $\\overline\\D(\\sigma)\\phi=\\overline\\Dee \\phi+i\\sigma\\beta_\\mathfrak r\\wedge \\phi$. Since $a_\\mathfrak r=a_\\mathrm{av}$ is constant (in particular CR),\n\\begin{equation*}\n\\overline\\D(\\sigma)(\\phi^0 + i \\tilde \\beta_\\mathfrak r\\wedge \\phi^1) =\n\\overline \\partial_b\\phi^0+i\\tilde \\beta_\\mathfrak r\\wedge \\big[\\big(\\mathcal {L}_\\mathcal T+(1+i a_\\mathrm{av})\\sigma\\big)\\phi^0 -\\overline \\partial_b\\phi^1\\big]\n\\end{equation*}\nif $\\phi^0\\in C^\\infty(\\mathcal N;\\raise2ex\\hbox{$\\mathchar\"0356$}^q\\overline \\K_\\mathfrak r^*)$ and $\\phi^1\\in C^\\infty(\\mathcal N;\\raise2ex\\hbox{$\\mathchar\"0356$}^{q-1}\\overline \\K_\\mathfrak r^*)$.
So $\\overline\\D(\\sigma)$ can be regarded as the operator\n\\begin{equation}\\label{CalDasMarix}\n\\overline\\D(\\sigma)=\\begin{bmatrix}\n\\overline \\partial_b & 0\\\\\n\\mathcal {L}_\\mathcal T+(1+i a_\\mathrm{av})\\sigma & -\\overline \\partial_b\n\\end{bmatrix}:\n\\begin{matrix}\nC^\\infty(\\mathcal N;\\raise2ex\\hbox{$\\mathchar\"0356$}^q\\overline \\K_\\mathfrak r^*) \\\\ \\oplus\\\\ C^\\infty(\\mathcal N;\\raise2ex\\hbox{$\\mathchar\"0356$}^{q-1}\\overline \\K_\\mathfrak r^*)\n\\end{matrix}\n\\to\n\\begin{matrix}\nC^\\infty(\\mathcal N;\\raise2ex\\hbox{$\\mathchar\"0356$}^{q+1}\\overline \\K_\\mathfrak r^*) \\\\ \\oplus\\\\ C^\\infty(\\mathcal N;\\raise2ex\\hbox{$\\mathchar\"0356$}^{q}\\overline \\K_\\mathfrak r^*).\n\\end{matrix}\n\\end{equation}\nSince the subbundles $\\raise2ex\\hbox{$\\mathchar\"0356$}^q\\overline \\K_\\mathfrak r$ and $\\tilde \\beta \\wedge \\raise2ex\\hbox{$\\mathchar\"0356$}^{q-1} \\overline \\K_\\mathfrak r$ are orthogonal with respect to the metric induced by $h$ on $\\raise2ex\\hbox{$\\mathchar\"0356$}^q\\overline\\Vee$, the formal adjoint of $\\overline\\D(\\sigma)$ with respect to this metric and the density $\\mathfrak m_0$ is\n\\begin{equation*}\n\\overline\\D(\\sigma)^\\star=\n\\begin{bmatrix}\n\\overline \\partial_b^\\star & -\\mathcal {L}_\\mathcal T+(1-i a_\\mathrm{av})\\overline \\sigma \\\\\n0& -\\overline \\partial_b^\\star\n\\end{bmatrix}:\n\\begin{matrix}\nC^\\infty(\\mathcal N;\\raise2ex\\hbox{$\\mathchar\"0356$}^{q+1}\\overline \\K_\\mathfrak r^*) \\\\ \\oplus\\\\ C^\\infty(\\mathcal N;\\raise2ex\\hbox{$\\mathchar\"0356$}^{q}\\overline \\K_\\mathfrak r^*)\n\\end{matrix}\n\\to\n\\begin{matrix}\nC^\\infty(\\mathcal N;\\raise2ex\\hbox{$\\mathchar\"0356$}^q\\overline \\K_\\mathfrak r^*) \\\\ \\oplus\\\\ C^\\infty(\\mathcal N;\\raise2ex\\hbox{$\\mathchar\"0356$}^{q-1}\\overline \\K_\\mathfrak r^*)\n\\end{matrix}\n\\end{equation*}\nwhere $\\overline \\partial_b^\\star$ is the formal adjoint of $\\overline \\partial_b$. So the Laplacian, $\\square_{\\overline\\D(\\sigma),q}$, of the $\\overline\\D(\\sigma)$-complex is the diagonal operator with diagonal entries $P_q(\\sigma)$, $P_{q-1}(\\sigma)$ where\n\\begin{equation*}\nP_q(\\sigma)=\\square_{b,q}+(\\mathcal {L}_\\mathcal T+(1+i a_\\mathrm{av})\\sigma)(-\\mathcal {L}_\\mathcal T+(1-i a_\\mathrm{av})\\overline \\sigma)\n\\end{equation*}\nacting on $C^\\infty(\\mathcal N;\\raise2ex\\hbox{$\\mathchar\"0356$}^q\\overline \\K_\\mathfrak r^*)$ and $P_{q-1}(\\sigma)$ is the ``same'' operator, acting on sections of $\\raise2ex\\hbox{$\\mathchar\"0356$}^{q-1}\\overline \\K_\\mathfrak r^*$; recall that $\\mathcal {L}_\\mathcal T$ commutes with $\\overline \\partial_b$ and since $\\mathcal {L}_\\mathcal T^\\star=-\\mathcal {L}_\\mathcal T$, also with $\\overline \\partial_b^\\star$, and that $a_\\mathrm{av}$ is constant. Note that $P_q(\\sigma)$ is an elliptic operator.\n\nSuppose that $\\phi\\in C^\\infty(\\mathcal N;\\raise2ex\\hbox{$\\mathchar\"0356$}^q\\overline \\K_\\mathfrak r^*)$ is a nonzero element of $\\ker P_q(\\sigma)$; the complex number $\\sigma$ is fixed. Since $P_q(\\sigma)$ is elliptic, $\\ker P_q(\\sigma)$ is a finite dimensional space, invariant under $-i \\mathcal {L}_\\mathcal T$ since the latter operator commutes with $P_q(\\sigma)$. As an operator on $\\ker P_q(\\sigma)$, $-i \\mathcal {L}_\\mathcal T$ is selfadjoint, so there is a decomposition of $\\ker P_q(\\sigma)$ into eigenspaces of $-i\\mathcal {L}_\\mathcal T$. 
Thus\n\\begin{equation*}\n\\phi=\\sum_{j=1}^N \\phi_j, \\quad -i \\mathcal {L}_\\mathcal T\\phi_j=\\tau_j\\phi_j\n\\end{equation*}\nwhere the $\\tau_j$ are distinct real numbers and $\\phi_j\\in \\ker P_q(\\sigma)$, $\\phi_j\\ne 0$. In particular,\n\\begin{equation*}\n\\square_{b,q}\\phi_j + (\\mathcal {L}_\\mathcal T+(1+i a_\\mathrm{av})\\sigma)(-\\mathcal {L}_\\mathcal T+(1-i a_\\mathrm{av})\\overline \\sigma)\\phi_j = 0,\n\\end{equation*}\nfor each $j$, that is,\n\\begin{equation*}\n\\square_{b,q}\\phi_j+ |i\\tau_j+(1+i a_\\mathrm{av})\\sigma|^2\\phi_j = 0.\n\\end{equation*}\nSince $\\square_{b,q}$ is a nonnegative operator and $\\phi_j\\ne 0$, $i\\tau_j+(1+i a_\\mathrm{av})\\sigma=0$ and $\\phi_j\\in \\ker\\square_{b,q}$. Since $\\sigma$ is fixed, all $\\tau_j$ are equal, which means that $N=1$. Conversely, if $\\phi\\in C^\\infty(\\mathcal N;\\raise2ex\\hbox{$\\mathchar\"0356$}^q\\overline \\K_\\mathfrak r^*)$ belongs to $\\ker \\square_{b,q}$ and $-i\\mathcal {L}_\\mathcal T\\phi=\\tau \\phi$, then $P_q(\\sigma)\\phi=0$ with $\\sigma$ such that $\\tau=(i-a_\\mathrm{av})\\sigma$.\n\nLet $\\mathscr H^q_{\\overline\\D(\\sigma)}(\\mathcal N)$ be the kernel of $\\square_{\\overline\\D(\\sigma),q}$.\n\n\\begin{theorem}\\label{TheCohomology}\nSuppose that $\\overline\\Vee$ admits a $\\mathcal T$-invariant metric and let $\\mathfrak r$ be a defining function for $\\mathcal N$ in $\\mathcal M$ such that $\\langle \\beta_\\mathfrak r,\\mathcal T\\rangle=a_\\mathrm{av}-i$ is constant. Then\n\\begin{equation*}\n\\spec_{b,\\mathcal N}^q({}^b\\!\\overline \\partial) = (i-a_\\mathrm{av})^{-1}\\spec_0^q(-i \\mathcal {L}_\\mathcal T)\\cup (i-a_\\mathrm{av})^{-1}\\spec_0^{q-1}(-i \\mathcal {L}_\\mathcal T),\n\\end{equation*}\nand if $\\sigma\\in \\spec_{b,\\mathcal N}^q({}^b\\!\\overline \\partial)$, then, with the notation in Definition \\ref{BCohomologyWithCoeffs}\n\\begin{equation*}\n\\mathscr H^q_{\\overline\\D(\\sigma)}(\\mathcal N)=\\mathscr H^q_{\\overline \\partial_b,\\tau(\\sigma)}(\\mathcal N)\\oplus \\mathscr H^{q-1}_{\\overline \\partial_b,\\tau(\\sigma)}(\\mathcal N)\n\\end{equation*}\nwith $\\tau(\\sigma)=(i-a_\\mathrm{av})\\sigma$.\n\\end{theorem}\n\nIf the CR structure $\\overline \\mathcal K_\\mathfrak r$ is nondegenerate, Proposition~\\ref{WeakVanishing} gives more specific information on $\\spec_{b,\\mathcal N}^q({}^b\\!\\overline \\partial)$. In particular,\n\\begin{proposition}\nWith the hypotheses of Theorem~\\ref{TheCohomology}, suppose that $\\Levi_{\\theta_\\mathfrak r}$ is nondegenerate with $k$ positive and $n-k$ negative eigenvalues. If $k>0$, then $\\spec_{b,\\mathcal N}^0 \\subset \\set{\\sigma\\in \\mathbb C:\\Im\\sigma\\leq 0}$, and if $n-k>0$, then $\\spec_{b,\\mathcal N}^0({}^b\\!\\overline \\partial) \\subset \\set{\\sigma\\in \\mathbb C:\\Im\\sigma\\geq 0}$.\n\\end{proposition}\n\n\\begin{remark}\nThe $b$-spectrum of the Laplacian of the ${}^b\\!\\overline \\partial$-complex in any degree can be described explicitly in terms of the joint spectra $\\spec(-i\\mathcal {L}_\\mathcal T,\\square_{b,q})$. We briefly indicate how. With the metric $h$ and defining function $\\mathfrak r$ as in the first paragraph of this section, suppose that $h$ is extended to a metric on ${}^b{\\!}T^{0,1}\\mathcal M$. This gives a Riemannian $b$-metric on $\\mathcal M$ that in turn gives a $b$-density $\\mathfrak m$ on $\\mathcal M$. 
With these we get formal adjoints ${}^b\\!\\overline \\partial^\\star$ whose indicial families $\\overline\\D^\\star(\\sigma)$ are related to those of ${}^b\\!\\overline \\partial$ by\n\\begin{equation*}\n\\overline\\D^\\star(\\sigma) = \\widehat{{}^b\\!\\overline \\partial^\\star}(\\sigma)= [\\widehat{{}^b\\!\\overline \\partial}(\\overline \\sigma)]^\\star = \\overline\\D(\\overline \\sigma)^\\star.\n\\end{equation*}\nBy \\eqref{CalDasMarix},\n\\begin{equation*}\n\\overline\\D^\\star(\\sigma)=\n\\begin{bmatrix}\n\\overline \\partial_b^\\star & -\\mathcal {L}_\\mathcal T+(1-i a_\\mathrm{av}) \\sigma \\\\\n0& -\\overline \\partial_b^\\star\n\\end{bmatrix}.\n\\end{equation*}\nUsing this one obtains that the indicial family of the Laplacian $\\square_q$ of the ${}^b\\!\\overline \\partial$-complex in degree $q$ is a diagonal operator with diagonal entries $P'_q(\\sigma)$, $P'_{q-1}(\\sigma)$ with\n\\begin{equation*}\nP'_q(\\sigma)=\\square_{b,q}+(\\mathcal {L}_\\mathcal T+(1+i a_\\mathrm{av})\\sigma)(-\\mathcal {L}_\\mathcal T+(1-i a_\\mathrm{av})\\sigma)\n\\end{equation*}\nand the analogous operator in degree $q-1$. The set $\\spec_b(\\square_q)$ is the set of values of $\\sigma$ for which either $P'_q(\\sigma)$ or $P'_{q-1}(\\sigma)$ is not injective. These points can written in terms of the points $\\spec(-i\\mathcal {L}_\\mathcal T,\\square_{b})$ as asserted. In particular one gets\n\\begin{equation*}\n\\spec_b(\\square_q)\\subset \\set{\\sigma: |\\Re \\sigma|\\leq |a_\\mathrm{av}||\\Im\\sigma|}\n\\end{equation*}\nwith $\\spec_{b,\\mathcal N}^q({}^b\\!\\overline \\partial)$ being a subset of the boundary of the set on the right.\n\\end{remark}\n\n\nWe now discuss the indicial cohomology sheaf of ${}^b\\!\\overline \\partial$, see Definition \\ref{CohomologySheafs}. We will show:\n\n\\begin{proposition}\\label{bdeebarCohomologySheaf}\nLet $\\sigma_0\\in \\spec_{b,\\mathcal N}^q({}^b\\!\\overline \\partial)$. Every element of the stalk of $\\mathfrak H^q_{{}^b\\!\\overline \\partial}(\\mathcal N)$ at $\\sigma_0$ has a representative of the form\n\\begin{equation*}\n\\frac{1}{\\sigma-\\sigma_0}\n\\begin{bmatrix}\n\\phi^0 \\\\ 0\n\\end{bmatrix}\n\\end{equation*}\nwhere $\\phi^0\\in \\mathscr H^q_{\\overline \\partial_b,\\tau_0}(\\mathcal N)$, $\\tau_0= (i-a_\\mathrm{av})\\sigma_0$.\n\\end{proposition}\n\n\\begin{proof}\nLet\n\\begin{equation}\\label{RepresentativeA}\n\\phi(\\sigma)=\\sum_{k=1}^\\mu \\frac{1}{(\\sigma-\\sigma_0)^k}\n\\begin{bmatrix}\n\\phi^0_k \\\\ \\phi^1_k\n\\end{bmatrix}\n\\end{equation}\nrepresent an element in the stalk at $\\sigma_0$ of the sheaf of germs of $C^\\infty(\\mathcal N;\\raise2ex\\hbox{$\\mathchar\"0356$}^q\\smash[t]{\\overline\\Vee}^*\\otimes F)$-valued meromorphic functions on $\\mathbb C$ modulo the subsheaf of holomorphic elements. 
Letting $\\alpha=1+i a_\\mathrm{av}$ we have\n\\begin{equation*}\n\\overline\\D(\\sigma)\\phi(\\sigma) =\n\\sum_{k=1}^\\mu \\frac{1}{(\\sigma-\\sigma_0)^k}\n\\begin{bmatrix}\n\\overline \\partial_b \\phi^0_k \\\\\n\\big(\\mathcal {L}_\\mathcal T + \\alpha\\sigma_0\\big) \\phi^0_k-\\overline \\partial_b \\phi^1_k\n\\end{bmatrix}\n+ \\sum_{k=0}^{\\mu-1} \\frac{\\alpha}{(\\sigma-\\sigma_0)^k}\n\\begin{bmatrix}\n0 \\\\\n\\phi^0_{k+1}\n\\end{bmatrix},\n\\end{equation*}\nso the condition that $\\overline\\D(\\sigma)\\phi(\\sigma)$ is holomorphic is equivalent to\n\\begin{equation}\\label{TopEqs}\n\\overline \\partial_b\\phi^0_k=0,\\ k=1,\\dotsc,\\mu\n\\end{equation}\nand\n\\begin{equation}\\label{BottomEqs}\n\\begin{gathered}\n(\\mathcal {L}_\\mathcal T + \\alpha\\sigma_0) \\phi^0_\\mu-\\overline \\partial_b \\phi^1_\\mu = 0,\\\\ (\\mathcal {L}_\\mathcal T + \\alpha\\sigma_0) \\phi^0_k-\\overline \\partial_b \\phi^1_k + \\alpha\\phi^0_{k+1}=0,\\ k=1,\\dotsc,\\mu-1.\n\\end{gathered}\n\\end{equation}\n\nLet $P_{q'}=\\square_{b,q'}-\\mathcal {L}_\\mathcal T^2$ in any degree $q'$. For any $(\\tau,\\lambda)\\in \\mathbb R^2$ and $q'$ let\n\\begin{equation*}\n\\mathcal E^{q'}_{\\tau,\\lambda}=\\set{\\psi\\in C^\\infty(\\mathcal N;\\raise2ex\\hbox{$\\mathchar\"0356$}^{q'}\\smash[t]{\\overline\\Vee}^*\\otimes F):P_{q'}\\psi=\\lambda\\psi,\\ -i \\mathcal {L}_\\mathcal T \\psi=\\tau\\psi}.\n\\end{equation*}\nThis space is zero if $(\\tau,\\lambda)$ is not in the joint spectrum $\\Sigma^{q'} = \\spec^{q'}(-i\\mathcal {L}_\\mathcal T,P_{q'})$. Each $\\phi^i_k$ decomposes as a sum of elements in the spaces $\\mathcal E^{q-i}_{\\tau,\\lambda}$, $(\\tau,\\lambda)\\in \\Sigma^{q-i}$. Suppose that already $\\phi^i_k\\in \\mathcal E^{q-i}_{\\tau,\\lambda}$:\n\\begin{equation*}\nP_{q-i}\\phi^i_k=\\lambda \\phi^i_k,\\quad -i\\mathcal {L}_\\mathcal T \\phi^i_k=\\tau\\phi^i_k,\\quad i=0,1,\\ k=1,\\dotsc,\\mu.\n\\end{equation*}\nThen \\eqref{BottomEqs} becomes\n\\begin{equation}\\label{BottomEqsEigen}\n\\begin{gathered}\n(i\\tau + \\alpha\\sigma_0) \\phi^0_\\mu-\\overline \\partial_b \\phi^1_\\mu = 0,\\\\ (i\\tau + \\alpha\\sigma_0) \\phi^0_k-\\overline \\partial_b \\phi^1_k + \\alpha\\phi^0_{k+1}=0,\\ k=1,\\dotsc,\\mu-1.\n\\end{gathered}\n\\end{equation}\nIf $\\tau\\ne \\tau_0$, then $i\\tau + \\alpha\\sigma_0\\ne 0$, and we get $\\phi^0_k=\\overline \\partial_b \\psi^0_k$ for all $k$ with\n\\begin{equation*}\n\\psi^0_k = \\sum_{j=0}^{\\mu-k} \\frac{(-\\alpha)^j}{(i\\tau + \\alpha\\sigma_0)^{j+1}}\\phi^1_{k+j}.\n\\end{equation*}\nTrivially\n\\begin{equation*}\n\\big(\\mathcal {L}_\\mathcal T + \\alpha\\sigma_0\\big) \\psi^0_\\mu = \\phi^1_\\mu\n\\end{equation*}\nand also\n\\begin{equation*}\n\\big(\\mathcal {L}_\\mathcal T + \\alpha\\sigma_0\\big) \\psi^0_k + \\alpha\\psi^0_{k+1} = \\phi^1_k,\\quad k=1,\\dotsc,\\mu-1,\n\\end{equation*}\nso\n\\begin{equation*}\n\\phi(\\sigma) - \\overline\\D(\\sigma)\\sum_{k=1}^\\mu \\frac{1}{(\\sigma-\\sigma_0)^k}\n\\begin{bmatrix} \\psi^0_k\\\\\n0\n\\end{bmatrix}=0\n\\end{equation*}\nmodulo an entire element.\n\nSuppose now that the $\\phi^i_k$ are arbitrary and satisfy \\eqref{TopEqs}-\\eqref{BottomEqs}. 
The sum\n\\begin{equation}\\label{FourierSeries}\n\\phi^i_k=\\sum_{(\\tau,\\lambda)\\in \\Sigma^{q-i}} \\phi^i_{k,\\tau,\\lambda},\\quad \\phi^i_{k,\\tau,\\lambda}\\in \\mathcal E^{q-i}_{\\tau,\\lambda}\n\\end{equation}\nconverges in $C^\\infty$, indeed for each $N$ there is $C_{i,k,N}$ such that\n\\begin{equation}\\label{RapidDecay}\n\\sup_{p\\in \\mathcal N}\\|\\phi^i_{k,\\tau,\\lambda}(p)\\|\\leq C_{i,k,N}(1+\\lambda)^{-N}\\quad\\text{ for all }\\tau,\\lambda.\n\\end{equation}\nSince $\\overline\\D(\\sigma)$ preserves the spaces $\\mathcal E^q_{\\tau,\\lambda}\\oplus \\mathcal E^{q-1}_{\\tau,\\lambda}$, the relations \\eqref{BottomEqsEigen} hold for the $\\phi^i_{k,\\tau,\\lambda}$ for each $(\\tau,\\lambda)$. Therefore,\nwith\n\\begin{equation}\\label{PsiFourierSeries}\n\\psi^0_k = \\sum_{\\substack{(\\tau,\\lambda)\\in\\Sigma^{q-1}\\\\\\tau\\ne \\tau_0}}\\sum_{j=0}^{\\mu-k} \\frac{(-\\alpha)^j}{(i\\tau + \\alpha\\sigma_0)^{j+1}}\\phi^1_{k+j,\\tau,\\lambda}\n\\end{equation}\nwe have formally that\n\\begin{equation*}\n\\phi(\\sigma)-\\overline\\D(\\sigma) \\sum_{k=1}^\\mu \\frac{1}{(\\sigma-\\sigma_0)^k}\n\\begin{bmatrix} \\psi_k\\\\\n0\n\\end{bmatrix} = \\sum_{k=1}^\\mu \\frac{1}{(\\sigma-\\sigma_0)^k}\n\\begin{bmatrix} \\tilde \\phi^0_k\\\\\n\\tilde \\phi^1_k\n\\end{bmatrix}\n\\end{equation*}\nwith\n\\begin{equation}\\label{AtTau0}\n\\tilde \\phi^i_k=\\sum_{\\substack{(\\tau,\\lambda)\\in \\Sigma^{q-1}\\\\\\tau=\\tau_0}} \\phi^i_{k,\\tau,\\lambda},\\quad \\phi^i_{k,\\tau,\\lambda}\\in \\mathcal E^{q-i}_{\\tau,\\lambda}.\n\\end{equation}\nHowever, the convergence in $C^\\infty$ of the series \\eqref{PsiFourierSeries} is questionable since there may be a sequence $\\set{(\\tau_\\ell,\\lambda_\\ell)}_{\\ell=1}^\\infty \\subset \\spec(-i\\mathcal {L}_\\mathcal T,P_{q-1})$ of distinct points such that $\\tau_\\ell\\to\\tau_0$ as $\\ell\\to\\infty$, so that the denominators $i \\tau_\\ell+\\alpha\\sigma_0$ in the formula for $\\psi^0_k$ tend to zero so fast that for some nonnegative $N$, $\\lambda_\\ell^{-N}\/(i \\tau_\\ell+\\alpha\\sigma_0)$ is unbounded. To resolve this difficulty we will first show that $\\phi(\\sigma)$ is $\\overline\\D(\\sigma)$-cohomologous (modulo holomorphic terms) to an element of the same form as $\\phi(\\sigma)$ for which in the series \\eqref{FourierSeries} the terms $\\phi^i_{k,\\tau,\\lambda}$ vanish if $\\lambda-\\tau^2>\\varepsilon$; the number $\\varepsilon>0$ is chosen so that\n\\begin{equation}\\label{ChoiceOfEps}\n(\\tau_0,\\lambda)\\in \\Sigma^q\\cup\\Sigma^{q-1} \\implies \\lambda=\\tau_0^2\\text{ or }\\lambda\\geq\\tau_0^2+\\varepsilon.\n\\end{equation}\nRecall that $\\spec^{q'}(-i\\mathcal {L}_\\mathcal T,P_{q'})\\subset \\set{(\\tau,\\lambda):\\lambda\\geq \\tau^2}$.\n\nFor any $V\\subset \\bigcup_{q'}\\Sigma^{q'}$ let\n\\begin{equation*}\n\\Pi^{q'}_V:L^2(\\mathcal N;\\raise2ex\\hbox{$\\mathchar\"0356$}^{q'}\\smash[t]{\\overline\\Vee}^*\\otimes F)\\to L^2(\\mathcal N;\\raise2ex\\hbox{$\\mathchar\"0356$}^{q'}\\smash[t]{\\overline\\Vee}^*\\otimes F)\n\\end{equation*}\nbe the orthogonal projection on $\\bigoplus_{(\\tau,\\lambda)\\in V} \\mathcal E^{q'}_{\\tau,\\lambda}$. If $\\psi\\in C^\\infty(\\mathcal N;\\raise2ex\\hbox{$\\mathchar\"0356$}^{q'}\\smash[t]{\\overline\\Vee}^*\\otimes F)$, then the series\n\\begin{equation*}\n\\Pi^{q'}_V \\psi=\\sum_{(\\tau,\\lambda)\\in V}\\psi_{\\tau,\\lambda}, \\quad\\psi_{\\tau,\\lambda}\\in \\mathcal E^{q'}_{\\tau,\\lambda}\n\\end{equation*}\nconverges in $C^\\infty$. 
It follows that $\\square_{b,q'}$ and $\\mathcal {L}_\\mathcal T$ commute with $\\Pi^{q'}_V$ and that $\\overline \\partial_b\\Pi^{q'}_V=\\Pi^{q'+1}_V\\overline \\partial_b$. Since the $\\Pi^{q'}_V$ are selfadjoint, also $\\overline \\partial_b^\\star\\Pi^{q'+1}_V = \\Pi^{q'}_V\\overline \\partial_b^\\star$.\n\nLet\n\\begin{equation*}\nU=\\set{(\\tau,\\lambda)\\in \\Sigma^q\\cup \\Sigma^{q-1}:\\lambda <\\tau^2+\\varepsilon},\\quad U^c=\\Sigma^q\\cup \\Sigma^{q-1}\\backslash U.\n\\end{equation*}\nThen, for any sequence $\\set{(\\tau_\\ell,\\lambda_\\ell)}\\subset U$ of distinct points we have $|\\tau_\\ell|\\to\\infty$ as $\\ell\\to\\infty$. Define\n\\begin{equation*}\nG^{q'}_{U^c}\\psi =\\sum_{(\\tau,\\lambda)\\in U^c}\\frac{1}{\\lambda-\\tau^2}\\psi_{\\tau,\\lambda}\n\\end{equation*}\nIn this definition the denominators $\\lambda-\\tau^2$ are bounded from below by $\\varepsilon$, so $G^{q'}_{U^c}$ is a bounded operator in $L^2$ and maps smooth sections to smooth sections because the components of such sections satisfy estimates as in \\eqref{RapidDecay}. The operators are analogous to Green operators: we have\n\\begin{equation}\\label{PseudoGreen1}\n\\square_{b,q'}G^{q'}_{U^c} = G^{q'}_{U^c} \\square_{b,q'}=I -\\Pi^{q'}_U\n\\end{equation}\nso if $\\overline \\partial_b\\psi=0$, then\n\\begin{equation}\\label{PseudoGreen2}\n\\square_{b,q'}G^{q'}_{U^c}\\psi = \\overline \\partial_b\\deebarb^\\star G^{q'}_{U^c}\\psi\n\\end{equation}\nsince $\\overline \\partial_b G^{q'}_{U^c}= G^{q'+1}_{U^c}\\overline \\partial_b$.\n\nWrite $\\phi(\\sigma)$ in \\eqref{RepresentativeA} as\n\\begin{equation*}\n\\phi(\\sigma) =\\Pi_{U^c}\\phi(\\sigma)+ \\Pi_{U}\\phi(\\sigma)\n\\end{equation*}\nwhere\n\\begin{equation*}\n\\Pi_{U^c}\\phi(\\sigma) =\\sum_{k=1}^\\mu \\frac{1}{(\\sigma-\\sigma_0)^k}\n\\begin{bmatrix}\n\\Pi^q_{U^c}\\phi^0_k \\\\ \\Pi^{q-1}_{U^c}\\phi^1_k\n\\end{bmatrix},\\quad\n\\Pi_U\\phi(\\sigma)=\n\\sum_{k=1}^\\mu \\frac{1}{(\\sigma-\\sigma_0)^k}\n\\begin{bmatrix}\n\\Pi^q_U\\phi^0_k \\\\ \\Pi^{q-1}_U\\phi^1_k\n\\end{bmatrix}.\n\\end{equation*}\nSince $\\overline\\D(\\sigma)\\phi(\\sigma)$ is holomorphic, so are $\\overline\\D(\\sigma)\\Pi_{U^c}\\phi(\\sigma)$ and $\\overline\\D(\\sigma)\\Pi_U\\phi(\\sigma)$.\n\nWe show that $\\Pi_{U^c}\\phi(\\sigma)$ is exact modulo holomorphic functions. Using \\eqref{TopEqs}, \\eqref{PseudoGreen1}, and \\eqref{PseudoGreen2}, $\\Pi^q_{U^c}\\phi^0_k = \\overline \\partial_b^\\star \\overline \\partial_b \\Pi^q_{U^c}\\phi^0_k$. Then\n\\begin{equation*}\n\\Pi_{U^c}\\phi(\\sigma)-\\overline\\D(\\sigma)\\sum_{k=1}^\\mu \\frac{1}{(\\sigma-\\sigma_0)^k}\n\\begin{bmatrix}\n\\overline \\partial_b^\\star G^q_{U^c}\\Pi^q_U\\phi^0_k \\\\ 0\n\\end{bmatrix}\n=\\sum_{k=1}^\\mu \\frac{1}{(\\sigma-\\sigma_0)^k}\n\\begin{bmatrix}\n0 \\\\ \\hat\\phi^1_k\n\\end{bmatrix}\n\\end{equation*}\nmodulo a holomorphic term for some $\\hat\\phi^1_k$ with $\\Pi^{q-1}_{U^c}\\hat\\phi^1_k=\\hat\\phi^1_k$. The element on the right is $\\overline\\D(\\sigma)$-closed modulo a holomorphic function, so its components satisfy \\eqref{TopEqs}, \\eqref{BottomEqs}, which give that the $\\tilde\\phi^1_k$ are $\\overline \\partial_b$-closed. Using again \\eqref{PseudoGreen1} and \\eqref{PseudoGreen2} we see that $\\Pi_{U^c}\\phi(\\sigma)$ represent an exact element.\n\nWe may thus assume that $\\Pi^q_{U^c}\\phi(\\sigma)=0$. 
If this is the case, then the series \\eqref{PsiFourierSeries} converges in $C^\\infty$, so $\\phi(\\sigma)$ is cohomologous to the element\n\\begin{equation*}\n\\tilde\\phi(\\sigma)=\\sum_{k=1}^\\mu \\frac{1}{(\\sigma-\\sigma_0)^k}\n\\begin{bmatrix}\n\\tilde \\phi^0_k \\\\ \\tilde \\phi^1_k\n\\end{bmatrix}\n\\end{equation*}\nwhere the $\\tilde \\phi^i_k$ are given by \\eqref{AtTau0} and satisfy $\\Pi^{q-i}_{U^c}\\tilde \\phi^i_k=0$. By \\eqref{ChoiceOfEps}, $\\tilde \\phi^i_k\\in \\mathcal E^{q-i}_{\\tau_0,\\tau_0^2}$. In particular, $\\square_{b,q-i}\\phi^i_k=0$.\n\nAssuming now that already $\\phi^i_k\\in \\mathcal E^{q-i}_{\\tau_0,\\tau_0^2}$, the formulas \\eqref{BottomEqsEigen} give (since $\\tau=\\tau_0$ and $i\\tau_0+\\alpha\\sigma_0=0$)\n\\begin{equation*}\n\\overline \\partial_b\\phi^1_\\mu=0,\\quad \\phi^0_k = \\overline \\partial_b \\frac{1}{\\alpha}\\phi^1_{k-1},\\ k=2,\\dotsc,\\mu.\n\\end{equation*}\nThen\n\\begin{equation*}\n\\phi(\\sigma)-\\frac {1}{\\alpha}\\overline\\D(\\sigma)\\sum_{k=2}^{\\mu+1} \\frac{1}{(\\sigma-\\sigma_0)^k}\n\\begin{bmatrix}\n\\phi^1_{k-1} \\\\ 0\n\\end{bmatrix}\n=\\frac{1}{\\sigma-\\sigma_0}\n\\begin{bmatrix}\n\\phi^0_1 \\\\ 0\n\\end{bmatrix}\n\\end{equation*}\nwith $\\square_{b,q}\\phi^0_1=0$.\n\\end{proof}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzznieb b/data_all_eng_slimpj/shuffled/split2/finalzznieb new file mode 100644 index 0000000000000000000000000000000000000000..1d097076cd65e68a7f783cc8fd072c9d42e21cec --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzznieb @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\nDecay estimate of solutions is a fundamental problem in the qualitative analysis of evolution equations. In most cases this problem can be reduced to differential or integral inequalities. For non-retarded evolution equations, numerous inequalities are available to make the performance of decay estimate fruitful (see e.g. \\cite{BS,LL,B.G.Pach,Qin}), among which is the remarkable Gronwall-Bellman inequality which was first proposed in Gronwall \\cite{Gron} and later extended to a more general form in Bellman \\cite{Bell1}. In contrast, the situation in the case of retarded equations seems to be more complicated. Although there have appeared many nice retarded differential and integral inequalities in the literature (see e.g. {\\cite{Dee,FT,Halanay,LL,Lip,WangGT,MP,B.G.Pach,YG}} and references cited therein), the existing ones are far from being adequate to provide easy-to-handle and efficient tools for studying the dynamics of this type of equations, and it is still a challenging task to derive decay estimates for their solutions, even if for the scalar functional differential equation $\\dot x=f(t,x,x_t)$. 
In fact, it is often the case that one has to fall his back on differential\/integral inequalities without delay when dealing with retarded differential or integral equations, which makes the calculations in the argument much involved and restrictive.\n\nIn this paper we investigate the following type of retarded integral inequalities:\n\\begin{equation}\\label{e1.1}\\begin{array}{ll}\ny(t)\\leq &E(t,\\tau)\\|y_\\tau\\|+\\int_\\tau^t K_1(t,s)\\|y_s\\|ds\\\\[2ex]\n&+\\int_t^\\infty K_2(t,s)\\|y_s\\|ds+\\rho,\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm}\\forall\\,t\\geq\\tau\\geq 0,\n\\end{array}\n\\end{equation}\nwhere $E$, $K_1$ and $K_2$ are nonnegative measurable functions on $Q:=(\\mathbb{R}^+)^2$, $\\rho\\geq 0$ is a constant,\n$\\|\\.\\|$ denotes the usual sup-norm of the space ${{\\mathcal C}}:=C([-r,0])$ for some given $r\\geq0$, $y(t)$ is a nonnegative continuous function on $[-r,\\infty)$ (called a {\\em solution} of \\eqref{e1.1}), and $y_t$ ($t\\geq 0$) denotes the {\\em lift} of $y$ in ${{\\mathcal C}}$,\n\\begin{equation}\\label{e1.15}\ny_t(s)=y(t+s),\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm} s\\in [-r,0].\n\\end{equation}\nOur main purpose is to establish some uniform decay estimates for its solutions.\nSpecifically, let $E$ be a function on $Q$ satisfying that\n\\begin{equation}\\label{e1.13}\\begin{array}{ll}\n\\lim_{t\\rightarrow \\infty}E(t+s,s)=0\\mbox{ uniformly w.r.t. $s\\in\\mathbb{R}^+$}, \\end{array}\n\\end{equation} and suppose\n\\begin{equation}\\label{e1.14E}\n\\vartheta(E):=\\sup_{t\\geq s\\geq0}E(t,s)\\leq \\vartheta<\\infty,\n\\end{equation}\n\\begin{equation}\\label{e1.14}\\begin{array}{ll}\nI(K_1,K_2):=\\sup_{t\\geq 0}\\(\\int_0^t K_1(t,s)ds+\\int_t^\\8K_2(t,s)ds\\)\\leq \\kappa<\\infty.\\end{array}\n\\end{equation}\nDenote ${\\mathscr L}_r(E;K_1,K_2;\\rho)$ the solution set of \\eqref{e1.1}, i.e.,\n\\begin{equation}\\label{e1.5}{\\mathscr L}_r(E;K_1,K_2;\\rho)=\\{y\\in C([-r,\\infty)):\\,\\,\\,y\\geq0\\mbox{ and satisfies } \\eqref{e1.1}\\}. \\end{equation}\nWe show that the following theorem holds true.\n\\begin{theorem}\\label{t:3.1}\nLet $\\vartheta$ and $\\kappa$ be the positive constants in \\eqref{e1.14E} and \\eqref{e1.14}.\n \\begin{enumerate}\n \\item[$(1)$] If $\\kappa<1$ then for any $R,\\varepsilon>0$, there exists $T>0$ such that\n \\begin{equation}\\label{e:t2.2}\n \\|y_t\\|<\\mu \\rho+\\varepsilon,\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm} t>T\n \\end{equation}\n for all bounded functions $y\\in {\\mathscr L}_r(E;K_1,K_2;\\rho)$ with $\\|y_0\\|\\leq R$, where\n\\begin{equation}\\label{emu}\n \\mu =1\/(1-\\kappa).\\end{equation}\n\\vs\\item[$(2)$] If $\\kappa<1\/(1+\\vartheta)$ then there exist $M,\\lambda>0$ $($independent of $\\rho$$)$ such that\n\\begin{equation}\\label{e:gi}\n{\\|y_t\\|}\\leq M\\|y_0\\|e^{-\\lambda t}+\\gamma\\rho,\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm} t\\geq0\n\\end{equation}\nfor all bounded functions $y\\in {\\mathscr L}_r(E;K_1,K_2;\\rho)$, where\n\\begin{equation}\\label{ec}\n\\gamma=({\\mu+1})\/{(1-\\kappa c)},\\hs c =\\max\\(\\vartheta \/(1-\\kappa),\\,1\\).\n\\end{equation}\n\\end{enumerate}\n\\end{theorem}\n\\begin{remark}\\label{r1.1}If $\\kappa<1\/(1+\\vartheta)$ then one trivially verifies that\n$\n\\kappa c <1.\n$\n\n\\end{remark}\n\n\nThe particular case where $K_2= 0$ is of crucial importance in applications. In such a case we show that if $I(K_1,0)\\leq\\kappa<1$ then any function $y\\in {\\mathscr L}_r(E;K_1,0;\\rho)$ is automatically bounded. 
Hence the boundedness requirement on $y$ in Theorem \\ref{t:3.1} can be removed. Consequently we have\n\n\\begin{theorem}\\label{t:3.2} Let $(K_1,K_2)=(K,0)$, and let $\\vartheta$, $\\kappa$, $\\mu$ and $\\gamma$ be the same constants as in Theorem \\ref{t:3.1}. Then the following assertions hold.\n\\begin{enumerate}\n\\item[$(1)$] If $\\kappa<1$ then for any $R,\\varepsilon>0$, there exists $T>0$ such that\n \\begin{equation}\\label{e:gia}\n \\|y_t\\|<\\mu \\rho+\\varepsilon,\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm} t>T\n \\end{equation}\nfor all $y\\in {\\mathscr L}_r(E;K,0;\\rho)$ with $\\|y_0\\|\\leq R$.\n\\vs\n\\item[$(2)$] If $\\kappa<1\/(1+\\vartheta)$ then there exist $M,\\lambda>0$ such that for all $y\\in {\\mathscr L}_r(E;K,0;\\rho)$,\n\\begin{equation}\\label{e:gi1}\n{\\|y_t\\|}\\leq M\\|y_0\\|e^{-\\lambda t}+\\gamma\\rho,\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm} t\\geq0.\n\\end{equation}\n\\end{enumerate}\n\\end{theorem}\n\nTheorem \\ref{t:3.1} can be seen as an extension of the following result in Hale \\cite{Hale1} (see \\cite[pp. 110, Lemma 6.2]{Hale1}) which plays a fundamental role in constructing invariant manifolds of differential equations.\n\\begin{proposition}\\label{p1.3}\\cite{Hale1}\n Suppose $\\alpha>0$, $\\gamma>0$, K, L, M are nonnegative constants and $u$ is a nonnegative bounded continuous solution of the inequality\n\\begin{equation}\\label{e1.8}\nu(t)\\le Ke^{-\\alpha t}+L\\int_{0}^{t}e^{-\\alpha(t-s)}u(s)ds+M\\int_{0}^{\\infty}e^{-\\gamma s}u(t+s)ds,\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm} t\\ge0.\n\\end{equation}\nIf\n$\n\\beta:=L\/\\alpha+M\/\\gamma<1\n$\nthen\n\\begin{equation}\\label{e1.7}\nu(t)\\le(1-\\beta)^{-1}Ke^{-[\\alpha-(1-\\beta)^{-1}L]t},\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm} t\\geq 0.\n\\end{equation}\n\\end{proposition}\nNote that there is in fact an additional requirement in \\eqref{e1.7} to guarantee the exponential decay of $u$, that is, $\\alpha-(1-\\beta)^{-1}L>0$, or equivalently,\n\\begin{equation}\\label{e1.9}\nL\/\\alpha+M\/\\gamma<1-L\/\\alpha.\n\\end{equation}\n\nLet us say a little more about the special case $M=0$, in which \\eqref{e1.9} reads as $L\/\\alpha<1\/2$. In such a case $L\/\\alpha$ coincides with the constant $\\kappa$ in Theorem \\ref{t:3.2}.\n Setting $K=u_0$ in \\eqref{e1.8}, we see that the upper bound $\\vartheta$ of the decay functor $E$ in \\eqref{e1.8} (corresponding to \\eqref{e1.1}) equals $1$. Consequently the smallness requirement on $\\kappa$ in assertion (2) of Theorem \\ref{t:3.2} reduces to that $\\kappa=L\/\\alpha<1\/2$.\n\n On the other hand, if $1\/2\\leq \\kappa=L\/\\alpha<1$ then we can only infer from \\eqref{e1.7} that $u$ has at most an exponential growth. However, Theorem \\ref{t:3.2} still assures that a function satisfying the corresponding integral inequality must approach $0$ in a uniform manner with respect to initial data in bounded sets.\n\nWe also mention that our proof for Theorem \\ref{t:3.1} is significantly different not only from the one for Proposition \\ref{p1.3} given in \\cite{Hale1}, but also from those in the literature for other types of differential or integral inequalities.\n\n\\begin{remark}\n The smallness requirement $\\kappa<1$ in the above theorems is optimal in some sense. 
This can be seen from the simple example of the scalar equation:\n\\begin{equation}\\label{e1.6}\n\\dot x=-a x+bx(t-1),\n\\end{equation}\nwhere $a,b>0$ are constants, for which the assumption $\\kappa<1$ in Theorem \\ref{t:3.1}, applied to the corresponding integral inequality to guarantee the global asymptotic stability of the $0$ solution of the equation, amounts to requiring that $b<a$. On the other hand, if $b>a$ then simple calculations show that \\eqref{e1.6} has a positive eigenvalue and hence $0$ is unstable; see e.g. Kuang \\cite[Chap. 3, Sect. 2]{Kuang}.\\end{remark}\n\n\\begin{remark}\\label{r:1.6}\nIt remains open whether the assumption $\\kappa<1\/(1+\\vartheta)$ in Theorem \\ref{t:3.2} to guarantee global exponential decay for \\eqref{e1.1} can be further relaxed in the full generality of the theorem.\n\\end{remark}\n\nAs a simple example of applications, we consider the asymptotic stability of the scalar functional differential equation:\n\\begin{equation}\\label{e1.10}\n\\dot x=-a(t)x+B(t,x_t),\n\\end{equation}\nwhere $a\\in C(\\mathbb{R})$, and $B$ is a continuous function on $\\mathbb{R}\\times C([-r,0])$ for some fixed $r\\geq0$ with\n$\n|B(t,\\phi)|\\leq b(t)\\|\\phi\\|$.\nSpecial cases of the equation were studied in the literature by many authors. For instance, in an earlier work of Winston \\cite{Wins}, the author considered the case where $a(t)$ is nonnegative and $b(t)\\leq \\theta a(t)$ for some $\\theta<1$. Using Razumikhin's method\nthe author proved the exponential asymptotic stability and the asymptotic stability of the equation under the assumption $a(t)\\geq\\alpha>0$ and that $a(t)\\geq0$ with $\\int_0^\\8a(t)dt=\\infty$, respectively.\nHere we revisit this problem and allow $a(t)$ to be a function which may change sign. Assume\n $\\lim_{t\\rightarrow \\infty}\\int_s^{s+t} a(\\tau)d\\tau =\\infty\\,\\, \\mbox{uniformly w.r.t. $s\\in\\mathbb{R}$}.$\nWe show that the null solution of \\eqref{e1.10} is globally asymptotically stable provided that\n$$\\begin{array}{ll}\n\\kappa_\\tau:= \\sup_{t\\geq \\tau}\\int_\\tau^tE(t,s)b(s)ds<1,\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm} \\forall\\,\\tau\\in \\mathbb{R},\\end{array}\n$$\nwhere $\nE(t,s)=\\exp\\(-\\int_s^t a(\\sigma)d\\sigma\\).$ Some results on global exponential asymptotic stability will also be presented. It is not difficult to check that if $a(t)$ is a nonnegative function with $\\int_0^\\8a(s)ds=\\infty$ and $b(t)\\leq \\theta a(t)$ ($t\\in\\mathbb{R}$) for some $\\theta<1$, then $\\kappa_\\tau\\leq \\theta<1$ for all $\\tau\\in\\mathbb{R}$.\n\nAs another example of applications for our integral inequalities, we discuss the existence of a pullback attractor for the ODE system\n\\begin{equation}\\label{e1.16}\n\\dot x=F_0(t,x)+\\sum_{i=1}^mF_i(t,x(t-r_i)),\\hs x=x(t)\\in\\mathbb{R}^n,\n\\end{equation}\nwhere $F_i(t,x)$ ($0\\leq i\\leq m$) are continuous mappings from $\\mathbb{R}\\times \\mathbb{R}^n$ to $\\mathbb{R}^n$ which are locally Lipschitz in $x$ in a uniform manner with respect to $t$ on bounded intervals, and $r_i:\\mathbb{R}\\rightarrow [0,r]$ ($1\\leq i\\leq m$) are measurable functions. The investigation of the dynamics of delayed differential equations in the framework of pullback attractor theory developed in \\cite{Crau, Kloeden2,Kloeden1} etc. was first initiated by Caraballo et al. \\cite{Carab}. In recent years there has been an increasing interest in this topic for both retarded ODEs and PDEs; see e.g. \\cite{CMR,CMV, C,Chue, KL, MR,SC, WK, ZS}. 
However, we find that the existing works mainly focus on the case where the terms involving time lags have at most sublinear nonlinearities.\nHere we allow the nonlinearities $F_i(t,x)$ ($0\\leq i\\leq m$) in \\eqref{e1.16} to be superlinear in space variable $x$.\nSuppose\n\\begin{enumerate}\\item[{\\bf (F)}] there exist positive constants $p>q\\geq 1$, $\\alpha_i>0$ ($0\\leq i\\leq m$), and nonnegative measurable functions $\\beta_i(t)$ ($0\\leq i\\leq m$) on $\\mathbb{R}$ such that\n$$\n(F_0(t,x),x)\\leq -\\alpha_0 |x|^{p+1}+\\beta_0(t),\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm} \\forall\\,x\\in\\mathbb{R}^n,\\,\\,t\\in\\mathbb{R},\n$$$$\n |F_i(t,x)|\\leq \\alpha_i |x|^{q}+\\beta_i(t),\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm} \\forall\\,x\\in\\mathbb{R}^n,\\,\\,t\\in\\mathbb{R}.\n $$\n \\end{enumerate}\nWe show under some additional assumptions on $\\beta_i(t)$ ($0\\leq i\\leq m$) that system \\eqref{e1.16} is dissipative and has a global pullback attractor.\n\\vs\n\nAs our third example to illustrate applications of Theorems \\ref{t:3.1} and \\ref{t:3.2}, we finally consider the dynamics of retarded nonlinear evolution equations with sublinear nonlinearities in the general setting of the cocycle system:\n\\begin{equation}\\label{e1.18}\\frac{du}{dt}+Au=F(\\theta_tp,u_t),\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm} p\\in {\\mathcal H}\\end{equation}\n in a Banach space $X$, where $A$ is a sectorial operator in $X$ with compact resolvent, ${\\mathcal H}$ is a compact metric space, and $\\theta_t$ is a dynamical system on ${\\mathcal H}$.\nWe will show under a hyperbolicity assumption on $A$ and some smallness requirements on the growth rate and the Lipschitz constant of $F(p,u)$ in $u$ that the system has a unique nonautonomous equilibrium solution $\\Gamma$. The global asymptotic stability and exponential stability of $\\Gamma$ will also be addressed.\n\n\n\\vs\nThis paper is organized as follows. Section 2 is devoted to the proofs of the main results, namely, Theorems \\ref{t:3.1} and \\ref{t:3.2}; and Section 3 consists of the two examples of ODE systems mentioned above. Section 4 is concerned with the dynamics of system \\eqref{e1.18}. 
We will also talk about in this section how to put a differential equation with multiple variable delays and external forces into the general setting of \\eqref{e1.18}.\n\n\n\n\n\\section{Proofs of Theorems \\ref{t:3.1} and \\ref{t:3.2}}\nFor convenience in statement, let us first introduce several classes of functions.\n\nDenote ${\\mathscr E}$ the family of {\\em bounded} nonnegative measurable functions on $Q:=(\\mathbb{R}^+)^2$ satisfying \\eqref{e1.13}, and let\n$$\\begin{array}{ll}\n{\\mathscr K}_1=\\{K\\in {\\mathscr M}^+(Q):\\,\\,\\,\\int_0^t K(t,s)ds<\\infty\\mbox{ for all }t\\geq 0\\},\n\\end{array}\n$$\n$$\\begin{array}{ll}\n{\\mathscr K}_2=\\{K\\in {\\mathscr M}^+(Q):\\,\\,\\,\\int_t^\\infty K(t,s)ds<\\infty\\mbox{ for all }t\\geq 0\\},\n\\end{array}\n$$\nwhere ${\\mathscr M}^+(Q)$ is the family of {nonnegative measurable functions} on $Q$.\nDenote $I(K_1,K_2)$ the constant defined in \\eqref{e1.14} for any $(K_1,K_2)\\in {\\mathscr K}_1\\times {\\mathscr K}_2$.\n\nLet ${\\mathcal C}$ be the space $C([-r,0])$ equipped with the usual sup-norm\n $$\n \\|\\phi\\|=\\sup_{s\\in[-r,0]}|\\phi(s)|,\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm}\\phi\\in{\\mathcal C}.\n $$\nGiven $y\\in C([-r,T))$ ($T>0$), one can assign a function $y_t$ from $[0,T)$ to ${\\mathcal C}$ as\nfollows: for each $t\\in[0,T)$, $y_t$ is the {element} in ${{\\mathcal C}}$ defined by \\eqref{e1.15}. For convenience, $y_t$ will be referred to as the {\\em lift} of $y$ in ${\\mathcal C}$.\n\\subsection{Proof of Theorem \\ref{t:3.1}}\nWe begin with the following lemma:\n\\begin{lemma}\\label{l:2.1} Assume that $\\kappa<1$. Then for any bounded function $y\\in {\\mathscr L}_r(E;K_1,K_2;\\rho)$,\n\\begin{equation}\\label{e:t2.1}\n \\|y_{t}\\|\\leq c \\|y_0\\|+\\mu \\rho,\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm} t\\geq 0,\n\\end{equation}\n where $c,\\mu$ are the constants defined in Theorem \\ref{t:3.1}.\n \\end{lemma}\n{\\bf Proof.} It can be assumed that there is $t>0$ such that $y(t)>\\|y_0\\|+\\mu \\rho$; otherwise \\eqref{e:t2.1} readily holds true.\nWrite $$\\sup_{t\\in{ \\mathbb{R}^+}}\\|y_t\\|=N_\\varepsilon(\\|y_0\\|+\\varepsilon)+\\mu \\rho$$ for $\\varepsilon>0$. We show that $N_\\varepsilon\\leq c $ for all $\\varepsilon>0$, and the conclusion follows.\n\n For each $\\delta>0$ sufficiently small, pick an $\\eta>0$ with\n$$y(\\eta)>\\sup_{t\\in{ \\mathbb{R}^+}}\\|y_t\\|-\\delta.$$\nThen by \\eqref{e1.1} we have\n$$\\begin{array}{ll}\nN_\\varepsilon (\\|y_0\\|+\\varepsilon)+\\mu \\rho-\\delta&=\\sup_{t\\in{ \\mathbb{R}^+}}\\|y_t\\|-\\delta0$, if we set $\\~y(t)=y(\\sigma+t)$ and define\n$$\\~E(t,s)=E(t+\\sigma,s+\\sigma),\\hs \\~K_i(t,s)=K_i(t+\\sigma,s+\\sigma)\\,\\,(i=1,2)\n $$\n for $t,s\\geq 0$, then one trivially checks that $\\~y\\in{\\mathscr L}_r(\\~E;\\~K_1,\\~K_2;\\rho)$ with\n $$I(\\~K_1,\\~K_2)\\leq I(K_1,K_2)\\leq\\kappa<1.$$\n Thus if $y$ is bounded, then by Lemma \\ref{l:2.1} one also concludes that\n \\begin{equation}\\label{e:3.4}\n \\|y_{t+\\sigma}\\|\\leq c \\|y_\\sigma\\|+\\mu \\rho,\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm} t,\\sigma\\geq 0.\n \\end{equation}\n\\vs\n\n\\noindent{\\bf Proof of Theorem \\ref{t:3.1}.} (1) \\,Assume $\\kappa<1$. 
To verify assertion (1), we first show that if $y\\in {\\mathscr L}_r(E;K_1,K_2;\\rho)$ is a bounded function, then\n\\begin{equation}\\label{e:3.6}{\\limsup_{t\\rightarrow\\infty} \\|y_t\\|}\\leq \\mu\\rho.\n\\end{equation}\nLet us argue by contradiction and suppose\n$${\\limsup_{t\\rightarrow\\infty} \\|y_t\\|}=\\mu\\rho+\\delta\n$$\nfor some $\\delta>0$.\nTake a monotone sequence $\\tau_n\\rightarrow\\infty$ such that $\\lim_{n\\rightarrow\\infty}{ y({\\tau_n}})=\\mu\\rho+\\delta$.\nFor any $\\varepsilon>0$, take a $\\tau>0$ sufficiently large so that\n$$\n\\|y_t\\|<\\mu\\rho+\\delta+\\varepsilon,\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm} t\\geq\\tau.\n$$\nThen for $\\tau_n>\\tau$, by \\eqref{e1.1} we deduce that\n$$\\begin{array}{ll}\ny(\\tau_n)&\\leq E(\\tau_n,\\tau)\\|y_{\\tau}\\|+\\int_{\\tau}^{\\tau_n} K_1(\\tau_n,s)\\|y_s\\|ds+\\int_{\\tau_n}^\\8K_2(\\tau_n,s)\\|y_{s}\\|ds+\\rho\\\\[2ex]\n&\\leq E(\\tau_n,\\tau)\\|y_{\\tau}\\|+ \\kappa\\(\\mu \\rho+\\delta+\\varepsilon\\)+\\rho.\n\\end{array}\n$$\nSetting $n\\rightarrow\\infty$ in the above inequality, it yields\n$$\n\\mu \\rho+\\delta\\leq \\kappa\\(\\mu \\rho+\\delta+\\varepsilon\\)+\\rho.\n$$\nSince $\\varepsilon$ is arbitrary, we conclude that\n$$\n\\mu \\rho+\\delta\\leq (\\kappa\\mu +1)\\rho+\\kappa\\delta.\n$$\nTherefore by \\eqref{e:2.3} one has $\\delta{ \\le}\\kappa\\delta$, which leads to a contradiction and verifies \\eqref{e:3.6}.\n\n\\vs\nNow we complete the proof of assertion (1).\nLet $R>0$. Denote\n$$\n{\\mathscr B}_R=\\{y\\in{\\mathscr L}_r(E;K_1,K_2;\\rho):\\,\\,y\\mbox{ is bounded with }\\|y_0\\|\\leq R\\}.\n$$\nBy \\eqref{e:t2.1} we see that ${\\mathscr B}_R$ is uniformly bounded. Hence the envelope\n$$\ny^*(t)=\\sup_{y\\in{\\mathscr B}_R}y(t)\n$$\nof the family ${\\mathscr B}_R$ is a bounded nonnegative measurable function on $[-r,\\infty)$.\n(The measurability of $y^*$ follows from the simple observation that\n$$\\begin{array}{ll}\\{t\\in(-r,\\infty):\\,\\,y^*(t)>a\\}=\\Cup_{y\\in {\\mathscr B}_R}\\{t\\in(-r,\\infty):\\,\\,y(t)>a\\}\\end{array}$$ is an open subset of $\\mathbb{R}$ for any $a\\in\\mathbb{R}$.) As in the case of a continuous function, we use the notation $y^*_t$ ($t\\geq0$) to denote the lift of $y^*$ in the space of measurable functions on $[-r,0]$ ($y^*_t(\\.)=y^*(t+\\.)$) and write $\\|y^*_t\\|=\\sup_{s\\in[-r,0]}y^*_t(s)$.\n(One should distinguish $\\|y^*_t\\|$ with the $L^\\infty$-norm $\\|y^*_t\\|_{L^\\infty(-r,0)}$ of $y^*_t$, although it can be shown by using the definition of $y^*$ and the continuity of the functions $y\\in{\\mathscr B}_R$ that the two quantities coincide for $y^*_t$.) We claim that $\\varphi(t):=\\|y^*_t\\|$ is a measurable function on $[0,\\infty)$. Indeed, one trivially verifies that\n$$\n\\|y^*_t\\|=\\sup_{y\\in{\\mathscr B}_R}\\|y_t\\|,\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm} t\\geq 0.\n$$\nSince $\\|y_t\\|$ is continuous in $t$ for every $y$, the conclusion immediately follows.\n\nWe infer from \\eqref{e1.1} that\n$$\\begin{array}{ll}\ny(t)\\leq &E(t,\\tau)\\|y^*_\\tau\\|+\\int_\\tau^t K_1(t,s)\\|y^*_s\\|ds\\\\[2ex]\n&\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm}\\hs+\\int_t^\\infty K_2(t,s)\\|y^*_s\\|ds+\\rho,\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm}\\forall\\,t\\geq\\tau\\geq 0\n\\end{array}\n$$\nfor any $y\\in {\\mathscr B}_R$. 
Further taking supremum in the { lefthand} side of the above inequality with respect to $y\\in {\\mathscr B}_R$ it yields\n\\begin{equation}\\label{e:2.a1}\\begin{array}{ll}\ny^*(t)\\leq &E(t,\\tau)\\|y^*_\\tau\\|+\\int_\\tau^t K_1(t,s)\\|y^*_s\\|ds\\\\[2ex]\n&\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm}\\hs+\\int_t^\\infty K_2(t,s)\\|y^*_s\\|ds+\\rho,\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm}\\forall\\,t\\geq\\tau\\geq 0.\n\\end{array}\n\\end{equation}\nThe only difference between \\eqref{e1.1} and the above inequality \\eqref{e:2.a1} is that the function $y^*$ in \\eqref{e:2.a1} may not be continuous.\nNote that we do not make use of any continuity requirement on $y$ {in the proof of Lemma \\ref{l:2.1} and \\eqref{e:3.6}}. Therefore all the arguments {therein} can be directly carried over to $y^*$ without any modifications except that the function $y$ is replaced by $y^*$. As a result, we deduce that $\\limsup_{t\\rightarrow\\infty}\\|y^*_t\\|\\leq \\mu\\rho$. Hence for any $\\varepsilon>0$ there is a $T>0$ such that\n$$\n\\|y^*_t\\|<\\mu\\rho+\\varepsilon,\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm} t>T,\n$$\nfrom which assertion (1) immediately follows.\n\n\n\\vskip8pt}\\def\\vs{\\vskip4pt\n(2) \\,Now we assume $\\kappa<1\/(1+\\vartheta)$. To obtain the exponential decay estimate in \\eqref{e:gi}, we first prove a temporary result:\n\n\\vs There exist $T,\\lambda>0$ such that if $\\|y_0\\|\\leq N_0+\\gamma\\rho$ with $N_0> 0$, then\n\\begin{equation}\\label{e:gi'}\n{\\|y_t\\|}\\leq N_0 e^{-\\lambda t}+\\gamma\\rho,\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm} t\\geq T.\n\\end{equation}\n\n For this purpose, we take\n \\begin{equation}\\label{sig}\\sigma= (1+\\kappa c)\/2.\\end{equation} Since $\\kappa c<1$ (see Remark \\ref{r1.1}), it is clear that $\\sigma<1$. Define\n$$\\eta=\\min\\{s{ \\ge 0}:\\,\\,\\|y_t\\|\\leq \\sigma N_0+\\gamma\\rho\\mbox{ for all }t\\geq s\\}.$$\nThe key point is to estimate the upper bound of $\\eta$.\n\nBecause $\\gamma> \\mu$ and $N_0>0$, by \\eqref{e:3.6} it is clear that $\\eta<\\infty$. We may assume $\\eta> r$ (otherwise we are done). Then by continuity of $y$ one necessarily has $$\\|y_\\eta\\|=\\sigma N_0+\\gamma\\rho.$$\nFor simplicity, write $E(t,0):=b(t)$. Given $t\\in[\\eta-r,\\eta]$, by \\eqref{e1.1} we have\n$$\n\\begin{array}{ll}\ny(t)&\\leq { b(t)}\\|y_{0}\\|+\\int_0^{t} K_1(t,s)\\|y_s\\|ds +\\int_{t}^\\8K_2(t,s)\\|y_s\\|ds+\\rho\\\\[2ex]\n&\\leq (\\mbox{by }\\eqref{e:t2.1})\\leq\\|b_{\\eta}\\|\\|y_0\\|+\\kappa (c \\|y_0\\|+\\mu\\rho)+\\rho\\\\[2ex]\n&\\leq \\({ \\|b_{\\eta}\\|}+\\kappa c \\)\\|y_0\\|+(\\kappa\\mu+1)\\rho \\\\[2ex]\n&\\leq \\({ \\|b_{\\eta}\\|}+\\kappa c \\)(N_0+\\gamma\\rho)+\\mu\\rho.\n\\end{array}\n$$\nHere we have used the fact that $\\kappa\\mu+1=\\mu$ (see \\eqref{e:2.3}). Therefore\n\\begin{equation}\\label{e:3.25}\n\\begin{array}{ll}\n\\sigma N_0+\\gamma\\rho&=\\|y_\\eta\\|=\\max_{t\\in[\\eta-r,\\eta]}y(t)\\\\[2ex]\n&\\leq \\({ \\|b_{\\eta}\\|}+\\kappa c \\)N_0+\\(\\({ \\|b_{\\eta}\\|}+\\kappa c \\)\\gamma+\\mu\\)\\rho.\n\\end{array}\n\\end{equation}\n\nTake a number $t_0>0$ such that\n\\begin{equation}\\label{et0}\nE(t+s,s)\\gamma\\leq 1,\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm} \\forall\\,t\\geq t_0,\\,\\,s\\in \\mathbb{R}^+.\n\\end{equation}\nIf $\\eta\\leq t_0+r$ then we are done. Thus we assume that $\\eta>t_0+r$. 
Then by the definition of $\\gamma$ and \\eqref{et0} one deduces that\n$$\n\\gamma=\\kappa c\\gamma+\\mu+1\\geq\\({ \\|b_{\\eta}\\|}+\\kappa c \\)\\gamma+\\mu.\n$$\nIt follows by \\eqref{e:3.25} that $\\sigma N_0\\leq \\({ \\|b_{\\eta}\\|}+\\kappa c \\)N_0$. Hence\n\\begin{equation}\\label{e:sig}\n\\|b_\\eta\\|\\geq \\sigma-\\kappa c =(1-\\kappa c)\/2>0.\\end{equation}\nTake a number $t_1>0$ such that\n\\begin{equation}\\label{e:3.27}\nE(t+s,s)< (1-\\kappa c )\/2,\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm} t> t_1,\\,\\,s\\in\\mathbb{R}^+.\n\\end{equation}\n \\eqref{e:sig} then implies that\n$\\eta\\leq { t_1}+r.$\nHence we conclude that\n\\begin{equation}\\label{eT}\\eta\\leq T:=\\max\\(t_0,t_1\\)+r.\\end{equation}\n\n\\vs By far we have proved that if $\\|y_0\\|\\leq N_0+\\gamma\\rho$ ($N_0>0$) then\n$$\n \\|y_t\\|\\leq \\sigma N_0+\\gamma\\rho,\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm} t\\geq T.\n $$\n\n\\vs Let $\\~y(t)=y(t+T)$, and set\n $$\n \\~E(t,s)=E(t+T,s+T),\\hs \\~K_i(t,s)=K_i(t+T,s+T)\n $$\nfor $t,s\\geq 0$, $i=1,2$. Then $\\~y\\in{\\mathscr L}_r(\\~E;\\~K_1,\\~K_2;\\rho)$ with $$I(\\~K_1,\\~K_2)\\leq I(K_1,K_2)\\leq\\kappa<1\/(1+\\vartheta).$$\n Since $\\|\\~y_0\\|\\leq \\sigma N_0+\\gamma\\rho$, the same argument as above applies to show that\n $$\n \\|\\~y_t\\|\\leq \\sigma(\\sigma N_0)+\\gamma\\rho,\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm} t\\geq T,\n $$\n that is,\n $$\n \\|y_t\\|\\leq \\sigma^2 N_0+\\gamma\\rho,\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm} t\\geq 2T.\n $$\n (We emphasize that the numbers $t_0$ and $t_1$ in \\eqref{et0} and \\eqref{e:3.27} can be chosen independent of $s\\in\\mathbb{R}^+$. This plays a crucial role in the above argument.)\n Repeating the above procedure we finally obtain that\n \\begin{equation}\\label{e:3.16}\n \\|y_t\\|\\leq \\sigma^n N_0+\\gamma\\rho,\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm} t\\geq nT,\\,\\,n=1,2,\\cdots.\n \\end{equation}\n Setting $\\lambda=-(\\mbox{ln\\,} \\sigma)\/{2T}$, one trivially verifies that\n $$\\sigma^n\\leq e^{-\\lambda t},\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm} t\\in[nT,(n+1)T]$$ for all $n\\geq1$. \\eqref{e:gi'} then follows from \\eqref{e:3.16}.\n\n\\vs\nWe are now in a position to complete the proof of the theorem.\\vs\n\nNote that \\eqref{e:t2.1} implies that if $\\|y_0\\|=0$ then\n$$\n\\|y_t\\|\\leq\\mu\\rho\\leq\\gamma\\rho,\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm} t\\geq 0,\n$$\nand hence the conclusion readily holds true. Thus we assume that $\\|y_0\\|>0$.\nTake $N_0=\\|y_0\\|$. Clearly $\\|y_0\\|= N_0\\leq N_0+\\gamma\\rho$. Therefore by \\eqref{e:gi'} we have\n\\begin{equation}\\label{e:3.11b}\n\\|y_t\\|\\leq\\|y_0\\|e^{-\\lambda t}+\\gamma\\rho,\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm} t\\geq T.\n\\end{equation}\nOn the other hand, by \\eqref{e:3.4} we deduce that\n$$\n\\|y_t\\|\\leq c \\|y_0\\|+\\mu\\rho\\leq c \\|y_0\\|+\\gamma\\rho,\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm} t\\in[0,T].\n$$\nSet\n$M=c e^{\\lambda T}$. Then\n$$\n\\|y_t\\|\\leq c \\|y_0\\|+\\gamma\\rho\\leq Me^{-\\lambda t}\\|y_0\\|+\\gamma\\rho,\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm} t\\in[0,T].\n$$\nCombining this with \\eqref{e:3.11b} we finally arrive at the estimate\n$$\n\\|y_t\\|\\leq M \\|y_0\\|e^{-\\lambda t}+\\gamma\\rho,\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm} t\\geq 0.\n$$\nThe proof of the theorem is complete. $\\Box$\n\n\n \\begin{remark}\\label{r:2.2}\nIn many examples from applications, the function $E(t,s)$ in \\eqref{e1.1} takes the form:\n$$\nE(t,s)=M_0e^{-\\lambda_0(t-s)},\n$$\nwhere $M_0$ and $\\lambda_0$ are positive constants. 
In such a case one can write out the constants $M$ and $\\lambda$ in \\eqref{e:gi} and \\eqref{e:gi1} explicitly.\n\nIndeed, the number $t_0$ and $t_1$ in \\eqref{et0} and \\eqref{e:3.27} can be taken, respectively, as $$t_0=\\lambda_0^{-1}\\mbox{ln\\,}(M_0\\gamma),\\hs t_1=\\lambda_0^{-1}\\mbox{ln\\,}\\(\\frac{2M_0}{1-\\kappa c}\\).$$\nConsequently the number $T$ in \\eqref{eT} reads as\n$\nT=\\lambda_0^{-1}M_1+r,\n$\n{where } $$M_1=\\max\\(\\mbox{ln\\,}(M_0\\gamma),\\,\\mbox{ln\\,}\\(\\frac{2M_0}{1-\\kappa c}\\)\\).$$\nThus we infer from the proof of Theorem \\ref{t:3.1} that\n$$\n\\lambda=-\\frac{\\mbox{ln\\,}\\sigma}{2T}=\\frac{\\ln2-\\mbox{ln\\,}\\({1+\\kappa c}\\)}{2\\(M_1+r\\lambda_0\\)}\\,\\lambda_0,\n$$\n$$\nM=ce^{\\lambda T}=c\\sqrt{2\/(1+\\kappa c)}.\n$$\nIn particular, if $r=0$ then we have\n$$\\lambda=\\theta\\lambda_0,\\hs \\theta=\\frac{\\ln2-\\mbox{ln\\,}\\({1+\\kappa c}\\)}{2M_1}\\,.$$\n\\end{remark}\n\n\\begin{remark}\\label{r:2.3}In the general case, \\eqref{e1.13} implies that there is a bounded nonnegative function $e(t)$ on $\\mathbb{R}^+$ with $e(t)\\rightarrow 0$ as $t\\rightarrow\\infty$ such that\n\\begin{equation}\\label{e:2.E}E(t+s,s)\\leq e(t),\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm} t,s\\geq 0.\\end{equation}\nOne can easily see that the numbers $t_0$ and $t_1$ in \\eqref{et0} and \\eqref{e:3.27} can be chosen in such a way that they only depend upon the constants $\\gamma,\\kappa,c$ and the function $e(t)$. Consequently the constants $M$ and $\\lambda$ in Theorem \\ref{t:3.1} $(2)$ (which are defined explicitly below \\eqref{e:3.16} in the proof of the theorem) only depend upon $\\gamma,\\kappa,c$, $\\sigma$ and $e(t)$. Since $\\gamma,c$ and $\\sigma$ are completely determined by $\\vartheta$ and $\\kappa$ (see Theorem \\ref{t:3.1} and \\eqref{sig} for the definitions of these constants), we finally conclude that $M$ and $\\lambda$ only depend upon $\\vartheta,\\kappa$ and $e(t)$.\\end{remark}\n\n\n\\subsection{Proof of Theorem \\ref{t:3.2}}\n\\noindent{\\bf Proof.} The conclusions of Theorem \\ref{t:3.2} immediately follow from Theorem \\ref{t:3.1} as long as Lemma \\ref{l:2.a1} below is proved. $\\Box$\n\n \\begin{lemma}\\label{l:2.a1} Let $E\\in{\\mathscr E}$, and $K_1=K\\in {\\mathscr K}_1$. Suppose $I(K,0)\\leq\\kappa<1$.\nLet $r,\\rho\\geq 0$, and let $y$ be a nonnegative continuous function on $[-r,T)$ $(0\\beta> 0$ are constants, then there exist $\\gamma>0$ and $k>0$ such that\n $$y(t)\\leq ke^{-\\gamma(t-t_0)},\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm} t\\geq t_0;$$ see Halanay \\cite[pp. 378]{Halanay}. For simplicity we may put $t_0=0$.\n Using a similar argument as in the proof of Proposition \\ref{p:3.2} below, one can easily show that a function $y$ satisfying \\eqref{e:2.4} fulfills the integral inequality \\eqref{e1.1} with $K_2=0$ and\n $$\\begin{array}{ll}\nE(t,s)=e^{-\\alpha(t-s)},\\hs K_1(t,s)=\\beta E(t,s)\\end{array}\n$$\nfor $t,s\\geq 0$. Note that\n$$\n\\vartheta=\\sup_{t\\geq s\\geq 0}E(t,s)=1,\\hs \\kappa =\\sup_{t\\geq 0}\\int_0^tK_1(t,s)ds=\\beta\/\\alpha.\n$$\n Thus the assumption that $\\kappa<1$ in Theorem \\ref{t:3.2} amounts to say that $\\alpha>\\beta$. 
Hence Theorem \\ref{t:3.2} can be seen as a generalization of the Halanay's inequality.\n\n On the other hand, we emphasize that in the special case of \\eqref{e:2.4}, Halanay's result is stronger than Theorem \\ref{t:3.2} in the way that\n it guarantees the exponential convergence of $y(t)$ to $0$ under the assumption that $\\beta<\\alpha$, whereas under this weaker assumption Theorem \\ref{t:3.2} only gives convergence result.\nThis is also one of the reasons why we are interested in the question proposed in Remark \\ref{r:1.6}.\n\n An integral version of the Halanay's inequality can be found in a recent work of Chen \\cite[Lemma 3.2]{ChenH} along with a very simple proof: \\,\\,Let $y$ be a nonnegative continuous function on $[-r,\\infty)$. Suppose that for $\\alpha>0$, there exist two positive constants $M ,\\beta>0$ such that $y(t)\\leq M e^{-\\alpha t}$ $(t\\in[-r,0])$ and that\n \\begin{equation}\\label{e:2.5}\n y(t)\\leq M e^{-\\alpha t} +\\beta\\int_0^te^{-\\alpha(t-s)}\\|y_s\\|ds,\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm} t\\geq 0.\n \\end{equation}\n If $\\beta<\\alpha$, then $y(t)\\leq M e^{-\\mu t}$ for $t\\geq -r$, where $\\mu\\in(0,\\alpha)$ is a constant satisfying that $\\frac{\\beta}{\\alpha-\\mu}e^{\\mu r}=1$. One advantage of this integral inequality is that it significantly reduces the smoothness requirement on the function $y$. This may greatly enlarge the applicability of the inequality.\n Other types of extensions of the Halanay's inequality can be found in \\cite{HPT,WLL} etc. and references therein.\n\n\n\\end{remark}\n\n\n\n\n\\section{Asymptotic Behavior of ODE Systems}\n This section consists of two examples of ODE systems illustrating possible applications of the integral inequalities given here. For the general theory of delay differential equations, one may consult the excellent books \\cite{Hale2,Kuang,S,Wu}.\n\n\n\n\n \\subsection{Asymptotic stability of a scalar functional ODE}\\label{s:3.1}\nOur first example concerns the asymptotic stability of the scalar functional differential equation:\n\\begin{equation}\\label{e:5.1}\n\\dot x=-a(t)x+B(t,x_t),\\hs\n\\end{equation}\nwhere $x_t$ is the lift of $x=x(t)$ in ${\\mathcal C}:=C([-r,0])$ ($r\\geq 0$ is fixed), $a\\in C(\\mathbb{R})$, and $B$ is a continuous function on $\\mathbb{R}\\times{\\mathcal C}$. We always assume that $B$ satisfies the following local Lipschitz condition in the second variable: For any compact interval $J\\subset\\mathbb{R}$ and $R>0$, there exists $L>0$ such that\n$$\n|B(t,\\phi)-B(t,\\phi')|\\leq L\\|\\phi-\\phi'\\|,\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm}\\forall\\,\\phi,\\phi'\\in\\overline{\\mathcal B}_R,\\,\\,t\\in J.\n$$\nHere and below ${\\mathcal B}_R$ denotes the ball in ${\\mathcal C}$ centered at $0$ with radius $R>0$.\n\n Given $(\\tau,\\phi)\\in\\mathbb{R}\\times{\\mathcal C}$, the above smoothness requirements on $a$ and $B$ are sufficient to guarantee the existence and uniqueness of a local solution $x(t)=x(t;\\tau,\\phi)$ ($t\\geq \\tau$) of \\eqref{e:5.1} with initial value $x_\\tau=\\phi\\in{\\mathcal C}$; see \\cite[Chap. 2, Theorems 2.1, 2.3]{Hale2}. We also assume that\n\\begin{equation}\\label{e:5.2}\n|B(t,\\phi)|\\leq b(t)\\|\\phi\\|, \\hspace{1cm}}\\def\\hs{\\hspace{0.5cm} (t,\\phi)\\in \\mathbb{R}\\times{\\mathcal C}\n\\end{equation}\nfor some nonnegative function $b\\in C(\\mathbb{R})$, so that $x(t;\\tau,\\phi)$ globally exists for each $(\\tau,\\phi)\\in\\mathbb{R}\\times{\\mathcal C}$. 
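For instance, both of the purely illustrative choices\n\\begin{equation*}\nB(t,\\phi)=b(t)\\sin\\phi(-r)\\hs\\text{and}\\hs B(t,\\phi)=b(t)\\,\\phi(-r_1(t)),\n\\end{equation*}\nwhere $r_1:\\mathbb{R}\\rightarrow[0,r]$ is any continuous delay function, satisfy the above local Lipschitz condition as well as \\eqref{e:5.2}, since $|\\sin u|\\leq|u|$ and $|\\phi(-r_1(t))|\\leq\\|\\phi\\|$. 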
Furthermore, \\eqref{e:5.2} implies that $0$ is a solution of \\eqref{e:5.1}.\n\\vs\n\n\\begin{definition}\\label{d:3.1}The null solution $0$ of \\eqref{e:5.1} is said to be\n\\begin{enumerate}\n \\item[$(1)$] {globally asymptotically stable} (GAS in short), if \\,{\\em (i)}\\, it is {stable}, i.e, for every $\\tau\\in\\mathbb{R}$ and $\\varepsilon>0$, there exists $\\delta>0$ such that $x(t;\\tau,\\phi)\\in {\\mathcal B}_\\varepsilon$ for all $t\\geq \\tau$ and $\\phi\\in {\\mathcal B}_\\delta$, and {\\em (ii)}\\,\n it is globally attracting, meaning that $x(t;\\tau,\\phi)\\rightarrow 0$ as $t\\rightarrow \\infty$ for every $(\\tau,\\phi)\\in\\mathbb{R}\\times {\\mathcal C}$.\n \\vs\n\\item[$(2)$] { globally exponentially asymptotically stable} (GEAS in short), if for every $\\tau\\in \\mathbb{R}$, there exist positive constants $M,\\lambda>0$ such that\n\\begin{equation}\\label{e:5.3}\n|x(t;\\tau,\\phi)|\\leq M\\|\\phi\\|e^{-\\lambda (t-\\tau)},\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm} \\forall\\,t\\geq \\tau,\\,\\,\\phi\\in {\\mathcal C}.\n\\end{equation}\n\\end{enumerate}\n\\end{definition}\n\\begin{remark}\nThe notions given in the above definition are the global versions of some corresponding local ones for functional differential equations in \\cite[Chap. 5, Def. 1.1]{Hale2} and \\cite[Def. 2.1-2.3]{Wins}, etc.\n\\end{remark}\n\n\nWe now assume that $a$ satisfies the following hypothesis:\n\\vs\\begin{enumerate}\n\\item[(A1)] \\,$\\int_s^{s+t} a(\\sigma)d\\sigma \\rightarrow \\infty$ as $t\\rightarrow \\infty$ uniformly with respect to $s\\in \\mathbb{R}$.\n \\end{enumerate}\n\\vs\\noindent\nDefine two functions $E(t,s)$ and $K(t,s)$ on $\\mathbb{R}^2$ as below: $\\forall\\,(t,s)\\in\\mathbb{R}^2$,\n$$\\begin{array}{ll}\nE(t,s)=\\exp\\(-\\int_s^ta(\\sigma)d\\sigma\\),\\hs K(t,s)=E(t,s)b(s).\\end{array}\n$$\nBy (A1) one trivially verifies that\n\\begin{equation}\\label{e:3E}\\begin{array}{ll}\n\\lim_{t\\rightarrow \\infty}E(t+s,s)=0\\mbox{ uniformly w.r.t. $s\\in\\mathbb{R}$}. \\end{array}\n\\end{equation}\nFor each $\\tau\\in\\mathbb{R}$, set\n$$\n\\vartheta_\\tau=\\sup_{t\\geq s\\geq \\tau}E(t,s),\\hs \\kappa_\\tau =\\sup_{t\\geq \\tau}\\int_\\tau^tK(t,s)ds.\n$$\n\n\n\\begin{proposition}\\label{p:3.2} The null solution of \\eqref{e:5.1} is GAS if $\\kappa_\\tau<1$ for all $\\tau\\in\\mathbb{R}$. If we further assume that $\\kappa_\\tau <1\/(1+\\vartheta_\\tau)$ for $\\tau\\in\\mathbb{R}$, then it is GEAS.\n\\end{proposition}\n\n\\noindent{\\bf Proof.} Let $\\tau\\in\\mathbb{R}$. Write $x(t)=x(t;\\tau,\\phi)$. 
For any $t\\geq\\eta\\geq\\tau$, multiplying \\eqref{e:5.1} with $E(t,\\eta)^{-1}=\\exp\\(\\int_\\eta^t a(\\sigma)d\\sigma\\)$, we obtain that\n\\begin{equation}\\label{e:5.0}\n\\frac{d}{dt}\\(E(t,\\eta)^{-1}x\\)= E(t,\\eta)^{-1}B(t,x_t).\n\\end{equation}\nIntegrating \\eqref{e:5.0} in $t$ between $\\eta$ and $t$, it yields\n\\begin{equation}\\label{e:5.5}\nx(t)=E(t,\\eta)x(\\eta)+\\int_\\eta^t E(t,s)B(s,x_s)ds.\n\\end{equation}\n(Here we have used the simple observation that $E(t,\\eta)E(s,\\eta)^{-1}=E(t,s)$.)\nHence\n\\begin{equation}\\label{e:3.7}\n|x(t)|\\leq E(t,\\eta)\\|x_\\eta\\|+\\int_\\eta^t K(t,s)\\|x_s\\|ds,\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm} \\forall\\,t\\geq\\eta\\geq\\tau.\n\\end{equation}\nRewriting $t,s$ and $\\eta$ in \\eqref{e:3.7} as $t+\\tau$, $s+\\tau$ and $\\eta+\\tau$, respectively, i.e., performing a $\\tau$-translation on the variables in \\eqref{e:3.7}, we obtain that\n\\begin{equation}\\label{e:3.8a}\ny(t)\\leq E_\\tau(t,\\eta)\\|y_\\eta\\|+\\int_\\eta^t K_\\tau(t,s)\\|y_s\\|ds,\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm} \\forall\\,t\\geq\\eta\\geq 0,\n\\end{equation}\nwhere $y(t)=|x(t+\\tau)|$, and\n\\begin{equation}\\label{e:3.8b}\nE_\\tau(t,s)=E(t+\\tau,s+\\tau),\\hs K_\\tau(t,s)=K(t+\\tau,s+\\tau)\n\\end{equation}\nfor $t,s\\geq 0.$ Note that\n$$\n\\vartheta_\\tau=\\sup_{t\\geq s\\geq 0}E_\\tau(t,s),\\hs \\kappa_\\tau =\\sup_{t\\geq 0}\\int_0^tK_\\tau(t,s)ds.\n$$\n\n\nAssume that $\\kappa_\\tau<1$. Then by Theorem \\ref{t:3.2} one deduces that for any $R,\\varepsilon>0$, there exists $T>0$ such that\n\\begin{equation}\\label{e:3.3}|x(t;\\tau,\\phi)|<\\varepsilon,\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm} \\forall\\,t>\\tau+T,\\,\\,\\phi\\in{\\mathcal B}_R.\\end{equation}\nOn the other hand, we infer from Lemma \\ref{l:2.1} that\n$\n |x(t;\\tau,\\phi)|\\leq c_\\tau \\|\\phi\\|$ for all $t\\geq \\tau$ and $\\phi\\in {\\mathcal C}$,\n where $c_\\tau=\\max\\(\\vartheta_\\tau\/(1-\\kappa_\\tau),\\,1\\)$, from which it follows that the $0$ solution is stable at $\\tau$.\nThus we see that $0$ is GAS. (We mention that the stability of the null solution can be also deduced by using \\eqref{e:3.3} and the continuity property of $x(t;\\tau,\\phi)$ in $\\phi$. We omit the details.)\n\nThe second conclusion is a direct consequence of Theorem \\ref{t:3.2} (2). $\\Box$\n\n\\begin{remark}\\label{r:3.4} If $a$ is a bounded function on $\\mathbb{R}$ and $\\kappa_\\tau$ fulfills a stronger uniform smallness requirement:\n\\begin{equation}\\label{eu}\n\\kappa:=\\sup_{\\tau\\in\\mathbb{R}}\\kappa_\\tau<1\/(1+\\vartheta),\n\\end{equation}\nwhere $\\vartheta=\\sup_{\\tau\\in\\mathbb{R}}\\vartheta_\\tau$, then it can be shown that there exist positive constants $M,\\lambda>0$ independent of $\\tau\\in\\mathbb{R}$ such that \\eqref{e:5.3} holds true. In such a case we simply say that the solution $0$ of \\eqref{e:5.1} is uniformly GEAS.\n\nTo see this, we define for each $(\\tau,s)\\in\\mathbb{R}\\times\\mathbb{R}^+$ a function $e_{\\tau,s}$ on $\\mathbb{R}^+$:\n$$e_{\\tau,s}(t)=E_\\tau (s+t,s),\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm} t\\in\\mathbb{R}^+.$$ By {\\em (A1)} we see that $\\lim_{t\\rightarrow\\infty}e_{\\tau,s}(t)=0$ uniformly with respect to $(\\tau,s)\\in\\mathbb{R}\\times\\mathbb{R}^+$. Using this simple fact and the boundedness of $a$ one easily examines that the family $\\{e_{\\tau,s}\\}_{(\\tau,s)\\in\\mathbb{R}\\times\\mathbb{R}^+}$ is uniformly bounded on $\\mathbb{R}^+$. 
Define\n$$\ne(t)=\\sup_{(\\tau,s)\\in\\mathbb{R}\\times\\mathbb{R}^+}e_{\\tau,s}(t),\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm} t\\in\\mathbb{R}^+.\n$$\nThen $e(t)\\ra0$ as $t\\rightarrow\\infty$. Since for every $\\tau\\in\\mathbb{R}$, we have $$E_\\tau (s+t,s)\\leq e(t),\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm} t,s\\geq 0,$$\ninvoking Remark \\ref{r:2.3} we deduce by \\eqref{e:3.8a} and \\eqref{eu} that there exist $M,\\lambda>0$ independent of $\\tau\\in\\mathbb{R}$ such that \\eqref{e:5.3} holds for all solutions of \\eqref{e:5.1}.\n\\end{remark}\n\n\n\n\\begin{remark} If $a(t)\\geq 0$ for $t\\in\\mathbb{R}$, then $\\vartheta_\\tau=1$ for all $\\tau\\in \\mathbb{R}$, and the hypothesis on $\\kappa_\\tau$ to guarantee GEAS of the null solution {reduces} to that $\\kappa_\\tau<1\/2$.\n\nIn such a case one can also easily verify that $\\kappa_\\tau\\leq\\theta<1$ for all $\\tau\\in \\mathbb{R}$ if the following hypotheses in Winston \\cite{Wins} are fulfilled:\n\\vs\n$(A2)$ $b(t)\\leq \\theta a(t)$ \\,$(t\\in\\mathbb{R})$ for some $\\theta<1$;\\, and $(A3)$ $\\int_0^\\8a(t)dt=\\infty$.\n\\vs\n\\noindent It follows that the null solution $0$ of \\eqref{e:5.1} is GAS. If $a$ is bounded and $\\theta<1\/2$, then we also infer from Remark \\ref{r:3.4} that $0$ is uniformly GEAS.\n\\end{remark}\n\n\n\n\n\\noindent{\\em Example} 3.1. Let $a(t)$ be a continuous $\\omega$-periodic ($\\omega>0$) function.\nDenote $a^+(t)$ ($a^-(t)$) the positive (negative) part of $a(t)$ (hence $a(t)=a^+(t)-a^-(t)$).\nLet\n$$I=\\int_0^\\omega a(t)dt,\\hs I^\\pm=\\int_0^\\omega a^\\pm(t)dt.$$ Clearly $I=I^+-I^-$.\nFor $s\\in\\mathbb{R}$ and $t\\geq 0$, we observe that\n\\begin{equation}\\label{e:3.8}\\begin{array}{ll}\n\\int_s^{s+t} a(\\sigma)d\\sigma&=\\int_s^{s+m_t\\omega} a(\\sigma)d\\sigma+\\int_{s+m_t\\omega}^{s+t} a(\\sigma)d\\sigma\\\\[2ex]\n&= m_t I+\\int_{s+m_t\\omega}^{s+t} a(\\sigma)d\\sigma\\\\[2ex]\n&\\geq m_t I-\\int_{s+m_t\\omega}^{s+t} a^-(\\sigma)d\\sigma\\geq m_t I-I^-,\n\\end{array}\\end{equation}\nwhere $m_t=[t\/\\omega]$ is the integer part of $t\/\\omega$.\n\nNow suppose that $I>0$. Then by \\eqref{e:3.8} we have that\n\\begin{equation}\\label{e:3.9}\n\\int_s^{s+t} a(\\sigma)d\\sigma\\geq m_t I-I^-\\geq \\(\\frac{t}{\\omega}-1\\)I-I^-=\\Lambda t-I^+,\n\\end{equation}\nwhere $\\Lambda=\\frac{I}{\\omega}$, and\n\\begin{equation}\\label{e:3.9b}\n\\int_s^{s+t} a(\\sigma)d\\sigma\\geq m_t I-I^-\\geq -I^-.\n\\end{equation}\nBy \\eqref{e:3.9} it is obviously that $a$ fulfills hypothesis (A1).\n\n\nWe infer from \\eqref{e:3.9b} that for any $\\tau\\in \\mathbb{R}$,\n\\begin{equation}\\label{e:5.9}\nE_\\tau(t,s)=\\exp\\(-\\int_s^ta(\\sigma+\\tau)d\\sigma\\)\\leq e^{I^-}:=\\vartheta,\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm} t\\geq s\\geq 0.\n\\end{equation}\nAssume that the function $b$ in \\eqref{e:5.2} is bounded. Set $\\beta=\\sup_{t\\geq 0}b(t)$. Then\n\\begin{equation}\\label{e:5.11}\n\\int_0^tK_\\tau(t,s)ds=\\int_0^t E_\\tau(t,s)b(s+\\tau)ds\\leq (\\mbox{by }\\eqref{e:3.9})\\leq {\\beta\\omega e^{I^{+}}\/I}:=\\kappa\n\\end{equation}\nfor all $t\\geq 0$.\nThus in the case where $a$ is periodic and $b$ is bounded, we have\n\n\n\n\\begin{proposition}\\label{t:5.2}If $\\beta<\\beta_1:={I}\/({\\omega e^{I^{+}}})$, the null solution of \\eqref{e:5.1} is GAS; and if $\\beta<\\beta_2:={I\/(\\omega e^{I^+}(1+e^{I^{-}})})$, then it is GEAS.\n\\end{proposition}\n{\\bf Proof.} Assume $\\beta<\\beta_1$. 
Then by \\eqref{e:5.11} we see that\n $$\\begin{array}{ll}\\kappa_\\tau:=\\sup_{t\\geq 0}\\int_0^tK_\\tau(t,s)ds<1,\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm} \\forall\\,\\tau\\in\\mathbb{R}.\\end{array} $$\n\nWe infer from \\eqref{e:5.9} that $\n\\vartheta_\\tau:=\\sup_{t\\geq s\\geq 0}E_\\tau(t,s)\\leq e^{I^-}:=\\vartheta$.\n {Thus if we assume $\\beta<\\beta_2$}, then one trivially verifies that $$\\kappa_\\tau\\leq(\\mbox{by }\\eqref{e:5.11})\\leq \\beta\\omega e^{I^{+}}\/I<1\/(1+\\vartheta),\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm} \\tau\\in \\mathbb{R}.$$\n\nNow the conclusion directly follows from Proposition \\ref{p:3.2}. $\\Box$\n\\vskip8pt}\\def\\vs{\\vskip4pt\nA concrete instance of Example 3.1 is the linear equation:\n\\begin{equation}\\label{e:5.1c}\n\\dot x=-(\\sin t+\\varepsilon)x+\\beta\\, x(t-1),\\hs t>0,\n\\end{equation}\nwhere $0<\\varepsilon,\\beta<1$ are constants.\nSimple calculations show that $$\nI^+< 2+2\\pi\\varepsilon,\\hs I^-<2.\n$$\nIt is easy to check that if $\\beta<\\varepsilon e^{-(2+2\\pi\\varepsilon)}$, then the first hypothesis in Proposition \\ref{t:5.2} is fulfilled, and hence the null solution $0$ of the equation is GAS. If we further assume that $\\beta<\\varepsilon e^{-(2+2\\pi\\varepsilon)}\/\\(1+e^{2}\\)$, then it is GEAS.\n\n\n\\subsection{Pullback attractors of an ODE system with delays}\nAs a second example, we consider in this part the existence of pullback attractors of the ODE system:\n\\begin{equation}\\label{ode1}\n\\dot x=F_0(t,x)+\\sum_{i=1}^mF_i(t,x(t-r_i)),\\hs x=x(t)\\in\\mathbb{R}^n\n\\end{equation}\nwith superlinear nonlinearities $F_i$ ($0\\leq i\\leq m$).\n\nAssume that $F_i$ ($0\\leq i\\leq m$) are continuous mappings from $\\mathbb{R}\\times \\mathbb{R}^n$ to $\\mathbb{R}^n$ which are locally Lipschitz in the space variable $x$ in a uniform manner with respect to $t$ on bounded intervals and satisfy the structure condition {\\bf (F)} given in Section 1, and $r_i:\\mathbb{R}\\rightarrow [0,r]$ ($1\\leq i\\leq m$) are measurable functions.\n\nDenote ${\\mathcal C}$ the space $C([-r,0],\\mathbb{R}^n)$ equipped with the usual norm $\\|\\.\\|$.\nBy the hypotheses on $F_i$ and the delay functions $r_i$, it can be easily shown that the initial value problem of \\eqref{ode1} is well-posed. Specifically, for each $\\tau\\in\\mathbb{R}$ and $\\phi\\in{\\mathcal C}$ the system has a unique solution $x(t;\\tau,\\phi):=x(t)$ on a maximal existence interval $[\\tau-r,T_\\phi)$ ($T_\\phi>\\tau$) with\n$$\nx(\\tau+s)=\\phi(s),\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm} s\\in[-r,0].\n$$\nFor convenience, we call the lift $x_t$ of $x(t)$ the {\\em solution curve} of \\eqref{ode1} in ${\\mathcal C}$ with initial value $x_\\tau=\\phi$, denoted hereafter by $x_t(\\tau,\\phi)$.\n\n\n \\begin{lemma}\\label{l:ode1}Suppose that there exist $M,N>0$ such that\n \\begin{equation}\\label{ode3}\\begin{array}{ll}\n\\sum_{i=0}^m\\int_s^t\\beta_i(\\mu)d\\mu\\leq M(t-s)+N,\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm} -\\infty<s\\leq t<\\infty.\\end{array}\\end{equation}\n Then every solution $x(t;\\tau,\\phi)$ of \\eqref{ode1} exists globally on $[\\tau-r,\\infty)$; moreover, there exist constants $C,\\lambda,\\rho>0$ independent of $\\tau\\in\\mathbb{R}$ such that\n\\begin{equation}\\label{ode6}\n|x(t;\\tau,\\phi)|\\leq C\\|\\phi\\|e^{-\\lambda (t-\\tau)}+\\rho,\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm} \\forall\\,t\\geq\\tau,\\,\\,(\\tau,\\phi)\\in\\mathbb{R}\\times {\\mathcal C}.\n\\end{equation}\n \\end{lemma}\n{\\bf Proof.} Let $x=x(t):=x(t;\\tau,\\phi)$ be a solution of \\eqref{ode1} with maximal existence interval $[\\tau-r,T_\\phi)$. 
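As a preliminary worked check on the exponent used below (assuming, as the Young-inequality steps suggest, that the exponents in condition {\\bf (F)} satisfy $p>q\\geq 1$): for $\\gamma=p(q-1)\/(p-q)+1$ one has $(\\gamma+1)-q>0$ and\n$$\n\\frac{\\gamma(\\gamma+1)}{(\\gamma+1)-q}<\\gamma+p\\quad\\Longleftrightarrow\\quad \\gamma(p-q)>p(q-1),\n$$\nand the right-hand inequality holds since $\\gamma>p(q-1)\/(p-q)$.\n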
Set $\\gamma:=p(q-1)\/(p-q)+1$.\nTaking the inner product of both sides of \\eqref{ode1} with $|x|^{\\gamma-1}x$, we find that\n$$\\begin{array}{lll}\n\\frac{1}{\\gamma+1}\\frac{d}{dt}|x|^{\\gamma+1}&=|x|^{\\gamma-1}(F_0(t,x),x)+|x|^{\\gamma-1}\\sum_{i=1}^m(F_i(t,x(t-r_i)),x)\\\\[2ex]\n&\\leq \\(-\\alpha_0|x|^{\\gamma+p}+\\beta_0(t)|x|^{\\gamma-1}\\)+\\sum_{i=1}^m\\(\\alpha_i |x|^{\\gamma}\\|x_t\\|^{q}+\\beta_i(t)|x|^{\\gamma}\\).\n\\end{array}\n$$\nThe classical Young's inequality implies that\n$$\n|x|^{\\gamma}\\|x_t\\|^{q}\\leq \\varepsilon\\|x_t\\|^{\\gamma+1}+C_\\varepsilon|x|^{\\gamma(\\gamma+1)\/((\\gamma+1)-q)}\n$$\nfor any $\\varepsilon>0$. Here and below $C_\\varepsilon$ denotes a general constant depending upon $\\varepsilon$. By the choice of $\\gamma$ one easily verify that $\\gamma(\\gamma+1)\/((\\gamma+1)-q)<\\gamma+p$. Hence using the Young's inequality once again we deduce that\n$$\n|x|^{\\gamma}\\|x_t\\|^{q}\\leq \\varepsilon\\|x_t\\|^{\\gamma+1}+\\varepsilon|x|^{\\gamma+p}+C_\\varepsilon.\n$$\nWe also have\n$$\n|x|^{\\gamma-1},|x|^{\\gamma}\\leq \\varepsilon |x|^{\\gamma+1}+C_\\varepsilon.\n$$\nCombining the above estimates together it gives\n \\begin{equation}\\label{ode2}\\begin{array}{lll}\n \\frac{1}{\\gamma+1}\\frac{d}{dt}|x|^{\\gamma+1}&\\leq -\\(\\alpha_0-\\varepsilon{\\alpha}\\)|x|^{\\gamma+p}+{\\varepsilon \\alpha}\\|x_t\\|^{\\gamma+1}\\\\[2ex]\n &\\hs+\\varepsilon \\beta(t)|x|^{\\gamma+1}+C_\\varepsilon(\\beta(t)+1),\\\\[2ex]\n \\end{array}\n \\end{equation}\nwhere $$\\begin{array}{ll}\\alpha=\\sum_{i=1}^m \\alpha_i,\\hs \\beta(t)=\\sum_{i=0}^m\\beta_i(t).\\end{array}$$\nIt can be assumed that $\\varepsilon\\alpha<\\alpha_0$. Noticing that $s^{\\gamma+1}\\leq s^{\\gamma+p}+1$ for all $s\\geq 0$, by \\eqref{ode2} we find that\n\\begin{equation}\\label{ode20}\\begin{array}{lll}\n\\frac{d}{dt}|x|^{\\gamma+1}&\\leq -a_\\varepsilon(t)|x|^{\\gamma+1}+{\\varepsilon (\\gamma+1)\\alpha}\\|x_t\\|^{\\gamma+1}+C_\\varepsilon(\\beta(t)+1),\n \\end{array}\n \\end{equation}\nwhere\n$\na_\\varepsilon(t)=(\\gamma+1)\\(\\alpha_0-{\\varepsilon\\alpha}-\\varepsilon \\beta(t)\\).\n$\n\\vs\nLet $E_\\varepsilon(t,s)=\\exp\\(-\\int_s^ta_\\varepsilon(\\mu)d\\mu\\)$ ($t\\geq s\\geq\\tau$). In what follows we always assume $\\varepsilon<1$ and that $\\varepsilon (\\gamma+1)(\\alpha+M)0$ sufficiently small so that\n$$\\begin{array}{ll}\n\\kappa:=\\varepsilon \\kappa_0<1\/(1+\\vartheta).\n\\end{array}\n$$\nThen the requirement in Theorem \\ref{t:3.2} is fulfilled.\nThus by virtue of Lemma \\ref{l:2.a1} we first deduce that $x(t)$ is bounded on $[\\tau-r,T_\\phi)$. It follows that $T_\\phi=\\infty$.\nFurther since $y\\in {\\mathscr L}_r(E;\\varepsilon K,0;C_\\varepsilon')$ for all $\\tau\\in\\mathbb{R}$, where ${\\mathscr L}_r(E;\\varepsilon K,0;C_\\varepsilon')$ denotes the family of nonnegative continuous functions on $\\mathbb{R}^+$ satisfying \\eqref{e:3.8a} (see also \\eqref{e1.5}), invoking Theorem \\ref{t:3.2}\none immediate concludes that there exist $C,\\lambda,\\rho>0$ independent of $\\tau$ such that \\eqref{ode6} holds. $\\Box$\n\n\n\\vskip8pt}\\def\\vs{\\vskip4pt\nLemma \\ref{l:ode1} enables us to define a {\\em process}\\, $\\Phi(t,\\tau)$ on ${\\mathcal C}$:\n\\begin{equation}\\label{e:3.20}\n\\Phi(t,\\tau)\\phi=x_t(\\tau,\\phi),\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm} t\\geq\\tau>-\\infty,\\,\\,\\phi\\in{\\mathcal C},\n\\end{equation}\nwhere $x_t(\\tau,\\phi)$ is the { solution curve} of \\eqref{ode1} in ${\\mathcal C}$ with $x_\\tau(\\tau,\\phi)=\\phi$ defined as above. 
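We also record a direct consequence of \\eqref{ode6} for the solution curves (convenient below, since the attractor is sought in ${\\mathcal C}$): for $s\\in[-r,0]$ and $t\\geq\\tau+r$ one has $t+s\\geq\\tau$, whence\n$$\n\\|\\Phi(t,\\tau)\\phi\\|=\\max_{s\\in[-r,0]}|x(t+s;\\tau,\\phi)|\\leq Ce^{\\lambda r}\\|\\phi\\|e^{-\\lambda(t-\\tau)}+\\rho,\\hs t\\geq\\tau+r.\n$$\n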
$\\Phi$ possesses the following basic properties:\n\\vs\n$\\bullet$ $\\Phi(t,\\tau):{\\mathcal C}\\rightarrow{\\mathcal C}$ is a continuous mapping for each fixed $(t,\\tau)\\in\\mathbb{R}^2$, $t\\geq\\tau$;\n\n$\\bullet$ $\\Phi(\\tau,\\tau)=\\mbox{id}_{\\mathcal C}$ for all $\\tau\\in\\mathbb{R}$, where $\\mbox{id}_{\\mathcal C}$ is the identity mapping on ${\\mathcal C}$;\n\n$\\bullet$ $\\Phi(t,\\tau)=\\Phi(t,s)\\Phi(s,\\tau)$ for all $t\\geq s\\geq\\tau$.\n\\vs\n\\noindent For system \\eqref{ode1}, the estimate given in Lemma \\ref{l:ode1} is sufficient to guarantee the existence of a global pullback attractor; see \\cite{Carab2,Carab} etc. (The interested reader is referred to \\cite{CLR} etc. for the general theory of pullback attractors.) Hence we have\n\n\\begin{theorem}\\label{t:ode1} Assume the hypotheses in Lemma \\ref{l:ode1}. Then $\\Phi$ has a (unique) global pullback attractor in ${\\mathcal C}$. Specifically, there is a unique family ${\\mathcal A}=\\{A(t)\\}_{t\\in\\mathbb{R}}$ of compact sets contained in the ball $\\overline{\\mathcal B}_\\rho$ in ${\\mathcal C}$ centered at $0$ with radius $\\rho$ such that\n\\begin{enumerate}\n\\item[$(1)$] $\\Phi(t,\\tau)A(\\tau)=A(t)$ for all $t\\geq\\tau$;\n\\item[$(2)$] for any bounded set $B\\subset{\\mathcal C}$,\n$$\\lim_{\\tau\\rightarrow-\\infty}d_H\\(\\Phi(t,\\tau)B,\\,A(t)\\)=0$$ for all $t\\in\\mathbb{R}$, where $d_H(\\.,\\.)$ denotes the Hausdorff semi-distance in ${\\mathcal C}$,\n$$\nd_H(M,N)=\\sup_{\\phi\\in M}\\inf\\{\\|\\phi-\\psi\\|:\\,\\,\\psi\\in N\\},\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm} \\forall\\,M,N\\subset {\\mathcal C}.\n$$\n \\end{enumerate}\n\\end{theorem}\n\n\n\n\n\n\n\\section{On the Dynamics of Retarded Evolution Equations with Sublinear Nonlinearities}\nAs our third example to illustrate the application of Theorems \\ref{t:3.1} and \\ref{t:3.2}, we investigate the dynamics of abstract retarded functional differential equations with sublinear nonlinearities in the general setting of cocycle systems.\n\nLet ${\\mathcal H}$ be a compact metric space with metric $d(\\.,\\.)$.\nAssume that there has been given a dynamical system $\\theta$ on ${\\mathcal H}$, i.e., a continuous mapping $\\theta:\\mathbb{R}\\times{\\mathcal H}\\rightarrow{\\mathcal H} $ satisfying the group property: for all $p\\in{\\mathcal H}$ and $s,t\\in\\mathbb{R}$,\n$$\n{ \\theta(0,p)}=p,\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm} \\theta({s+t},p)=\\theta(s,\\theta(t,p)).\n$$\n\n\nAs usual, we will rewrite $\\theta(t,p)=\\theta_tp$.\n\nIn what follows we always assume that ${\\mathcal H}$ is {\\em minimal} (with respect to $\\theta$). This means that $\\theta$ has no proper nonempty compact invariant subsets in ${\\mathcal H}$.\n\nLet $X$ be a real Banach space with norm $\\|\\.\\|_0$, and let $A$ be a sectorial operator on $X$ with compact resolvent. Denote $X^s$ ($s\\geq0$) the fractional power of $X$ generated by $A$ with norm $\\|\\.\\|_s$; see \\cite[Chap.\\,1]{Henry} for details.\n\nLet $0\\leq r<\\infty,$ and $\\alpha\\in [0,1)$. Denote ${{\\mathcal C}_\\alpha}=C([-r,0],X^\\alpha)$. 
${{\\mathcal C}_\\alpha}$ is equipped with the norm {$\\|\\.\\|_{{\\mathcal C}_\\alpha}$} defined by\n$$\n\\|\\phi\\|_{{\\mathcal C}_\\alpha}=\\max_{[-r,0]}\\|\\phi(s)\\|_\\alpha,\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm} \\phi\\in{{\\mathcal C}_\\alpha}.\n$$\nGiven a continuous function $u:[t_0-r,T)\\rightarrow X^\\alpha$, denote by $u_t$ the lift of $u$ in ${{\\mathcal C}_\\alpha}$,\n$$\nu_t(s)=u(t+s),\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm} s\\in[-r,0],\\,\\,t\\geq t_0.\n$$\n\nThe retarded functional cocycle system we are concerned with is as follows:\n \\begin{equation}\\label{e:4.1}\\frac{du}{dt}+Au=F(\\theta_tp,u_t),\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm} t\\geq 0,\\,\\, p\\in {\\mathcal H},\\end{equation}\nwhere $F$ is a continuous mapping from ${\\mathcal H}\\times {{\\mathcal C}_\\alpha}$ to $X$.\nLater we will show how to put a nonlinear evolution equation like\n $$\\frac{du}{dt}+Au={ f(u(t-r_1),\\cdots,u(t-r_m))}+h(t)$$\n into the abstract form of \\eqref{e:4.1}.\n For convenience in statement, ${\\mathcal H}$ and $\\theta$ are usually called the {\\em base space} and the {\\em driving system} of \\eqref{e:4.1}, respectively.\n\nDenote by ${\\mathcal B}_R$ the ball in ${{\\mathcal C}_\\alpha}$ centered at $0$ with radius $R$.\n\n Assume that $F$ satisfies the following conditions:\n \\begin{enumerate}\\item[(F1)] $F(p,\\phi)$ is {\\em locally Lipschitz} in $\\phi$ uniformly w.r.t $p\\in{\\mathcal H}$, namely, for any $R>0$, there exists $L_R>0$ such that\n$$\n\\|F(p,\\phi)-F(p,\\phi')\\|_0\\leq L_R\\|\\phi-\\phi'\\|_{{\\mathcal C}_\\alpha},\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm} \\forall\\,\\phi,\\phi'\\in { \\overline{\\mathcal B}_R},\\,\\,p\\in{\\mathcal H}.\n$$\n\\item[(F2)] There exist $C_0,C_1>0$ such that\n $$\n \\|F(p,\\phi)\\|_{ 0}\\leq C_0\\|\\phi\\|_{{\\mathcal C}_\\alpha}+C_1,\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm} \\forall\\,(p,\\phi)\\in {\\mathcal H}\\times{{\\mathcal C}_\\alpha}.\n $$\n \\end{enumerate}\n Under the above assumptions, the same argument as in the proof of \\cite[Proposition 3.1]{TW1} with minor modifications applies to show the existence and uniqueness of global mild solutions for \\eqref{e:4.1}: \\,For each initial data $\\phi\\in{{\\mathcal C}_\\alpha}:=C([-r,0],X^\\alpha)$ and $p\\in {\\mathcal H}$, there is a unique continuous function $u:[-r,\\infty)\\rightarrow X^{\\alpha}$ with $u(t)=\\phi(t)$ ($-r\\le t\\le0$) satisfying the integral equation\n$$\nu(t)=e^{-At}\\phi(0)+\\int_{0}^{t}e^{-A(t-s)}F(\\theta_sp,u_s)ds,\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm} t\\geq0.\n$$\n\nA solution of \\eqref{e:4.1} clearly depends on $p$. For convenience, given $p\\in {\\mathcal H}$, we call a solution $u$ of \\eqref{e:4.1} a {\\em solution pertaining to $p$}.\nWe will use the notation $u(t;p,\\phi)$ to denote the solution of \\eqref{e:4.1} on $[-r,\\infty)$ pertaining to $p$ with initial value $\\phi\\in{{\\mathcal C}_\\alpha}$.\nThe solutions of \\eqref{e:4.1} generates a {\\em cocycle} $\\Phi$ on ${{\\mathcal C}_\\alpha}$,\n$$\n\\Phi(t,p)\\phi=u_t,\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm} t\\geq 0,\\,\\,(p,\\phi)\\in{\\mathcal H}\\times{{\\mathcal C}_\\alpha},\n$$\nwhere $u_t$ is the lift of the solution $u(t)=u(t;p,\\phi)$ in ${{\\mathcal C}_\\alpha}$.\n\nSince ${\\mathcal H}$ is compact and $A$ has compact resolvent, using a similar argument as in the proof of \\cite[Proposition 4.1]{TW2}, it can be shown that for each fixed $t>r$, $\\Phi(t,p)\\phi$ is compact as a mapping from ${\\mathcal H}\\times{\\mathcal C}_\\alpha$ to ${\\mathcal C}_\\alpha$. 
Making use of this basic fact one can easily verify that $\\Phi$ is {\\em asymptotically compact}, that is, $\\Phi$ enjoys the following property:\n\n\\begin{enumerate}\\item[{\\bf(AC)}] For any sequences $t_n\\rightarrow \\infty$ and $(p_n,\\phi_n)\\in{\\mathcal H}\\times {{\\mathcal C}_\\alpha}$, if $\\Cup_{n\\geq 1}\\Phi([0,t_n],p_n)\\phi_n$ is bounded in ${{\\mathcal C}_\\alpha}$ then the sequence $\\Phi(t_n,p_n)\\phi_n$ has a convergent subsequence.\n\\end{enumerate}\n\n\\subsection{Basic integral formulas on bounded solutions}\n{\nSuppose $A$ has a spectral decomposition $\\sigma(A)=\\sigma^-\\cup\\sigma^+$, where\n \\begin{equation}\\label{e:4.3}\n \\mbox{Re}\\,z\\leq -\\beta<0\\,\\,(z\\in\\sigma^-),\\hs \\mbox{Re}\\,z\\geq \\beta>0\\,\\,(z\\in\\sigma^+)\n \\end{equation}\n for some $\\beta>0$. Let $X=X_1\\oplus X_2$ be the corresponding direct sum decomposition of $X$ with $X_1$\n and $X_2$ being invariant subspaces of $A$.\nDenote $P_i:X\\rightarrow X_i$ ($i=1,2$) the projection from $X$ to $X_i$, and write $A_i=A|_{X_i}$. By the basic knowledge on sectorial operators (see Henry \\cite[Chap.\\,1]{Henry}), there exists $M\\geq1$ such that\n\\begin{equation}\\label{e:4.6a}\\|\\Lambda^\\alpha e^{-A_1t}\\|\\leq{ Me^{\\beta t}},\\hs \\|e^{-A_1t}\\|\\leq{ Me^{\\beta t}},\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm} t\\leq0,\\end{equation}\n\\begin{equation}\\label{e:4.6b}\\|\\Lambda^\\alpha e^{-A_2t}P_2\\Lambda^{-\\alpha}\\|\\leq Me^{-\\beta t},\\hs\\|\\Lambda^\\alpha e^{-A_2t}\\|\\leq Mt^{-\\alpha}e^{-\\beta t},\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm} t>0.\\end{equation}\n\nThe verification of the following basic integral formulas on bounded solutions are just slight modifications of the corresponding ones for that of equations without delays (see e.g. \\cite[pp. 180, Lemma A.1]{Hale0} and \\cite{Ju}), and hence is omitted.\n\n\n\\begin{lemma}\\label{l:3.3}\n Let $u:[-r,+\\infty)\\to X^{\\alpha}$ be a bounded continuous function. Then $u$ is a solution of (\\ref{e:4.1}) on $[-r,\\infty)$ pertaining to $p\\in{\\mathcal H}$ if and only if $u$ solves the integral equation\n$$\\begin{array}{ll}\nu(t)=&\\,\\,e^{-A_2t}P_2 u(0)+\\int_{0}^{t}e^{-A_2(t-s)}P_2 F(\\theta_{s}p, u_s)ds\\\\[2ex]\n&-\\int_{t}^{\\infty}e^{-A_1(t-s)}P_1 F(\\theta_{s}p, u_s)ds,\\hs t\\ge0.\n\\end{array}\n$$\n\n\\end{lemma}\n\n}\n\n\n\\subsection{Existence of bounded complete solutions}\nFor nonlinear evolution equations, bounded complete solutions are of equal importance as equilibrium ones. This is because that the long-term dynamics of an equation\nis determined not only by the distribution of its equilibrium solutions, but also by that of all its bounded complete trajectories. In fact, for a nonautonomous evolution equation it may be of little sense to talk about equilibrium solutions in the usual terminology.\n\nIn this subsection we establish an existence result for bounded complete solutions of equation \\eqref{e:4.1}. 
For this purpose we need first to give some a priori estimates.\n\nLet $C_0,C_1$ be the constants in (F2), and set\n\\begin{equation}\\label{e:kap}\\begin{array}{ll}\\kappa_0=\\sup_{t\\ge0}\\(\\int_{0}^{t}(t-s)^{-\\alpha}e^{-\\beta (t-s)}ds+\\int_{t}^{\\infty}e^{\\beta(t-s)}ds\\).\\end{array}\\end{equation}\n\n\\begin{lemma}\\label{l:4.1} Suppose $A$ has a spectral decomposition as in \\eqref{e:4.3}, and that\n$C_0<{1}\/{(\\kappa_0 M)}.$\nThen for any $R,\\varepsilon>0$, there exists $T>0$ such that for all bounded solutions $u(t)=u(t;p,\\phi)$ of \\eqref{e:4.1} with $\\phi\\in\\overline{\\mathcal B}_R$,\n\\begin{equation}\\label{e:4.5}\n\\|u(t)\\|_\\alpha<\\rho+\\varepsilon,\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm} t{ >} T,\n\\end{equation}\n where $\\rho={ C_1M(1-\\kappa_0 C_0M)^{-1}\\int_{0}^{\\infty}(1+s^{-\\alpha})e^{-\\beta s}ds}$. Consequently\n\\begin{equation}\\label{e:4.5b}\\sup_{t\\in\\mathbb{R}}\\|\\gamma(t)\\|_{\\alpha}\\leq \\rho\\end{equation} for all bounded complete solutions $\\gamma(t)$ of \\eqref{e:4.1}.\n\\end{lemma}\n\n\\noindent{\\bf Proof.} (1) {Let $u(t)=u(t;p,\\phi)$ be a bounded solution of \\eqref{e:4.1} on $[-r,\\infty)$. For any $\\tau\\geq 0$, set $v(t)=u(t+\\tau)$ ($t\\geq 0$). Then $v$ is a bounded solution of \\eqref{e:4.1} pertaining to $q=\\theta_\\tau p$.\nHence we infer from Lemma \\ref{l:3.3} that\n\\begin{equation}\\label{e:4.8}\n\\begin{array}{ll}\nv(t)=&e^{-A_2t}P_2 v(0)+\\int_{0}^{t}e^{-A_2(t-s)}P_2 F(\\theta_{s}{ q},v_s)ds\\\\[2ex]\n&-\\int_{t}^{\\infty}e^{-A_1(t-s)}P_1 F(\\theta_{s}q,v_s)ds,\\hs t\\ge0.\n\\end{array}\n\\end{equation}\nTherefore by \\eqref{e:4.6a}, \\eqref{e:4.6b} and (F2), we deduce that\n$$\n\\begin{array}{ll}\n\\|v(t)\\|_\\alpha\n\\leq&Me^{-\\beta t}{ \\|v_0\\|_{{\\mathcal C}_\\alpha}}+\\int_{0}^{t} K_1(t,s)\\|v_s\\|_{{\\mathcal C}_\\alpha} ds\\\\[2ex]\n&+\\int_{t}^{\\infty}K_2(t,s)\\|v_s\\|_{{\\mathcal C}_\\alpha} ds+C_2,\\hs t\\ge0,\n\\end{array}\n$$\nwhere\n\\begin{equation}\\label{ek1}K_1(t,s)={ C_0M(t-s)^{-\\alpha}e^{-\\beta(t-s)}},\\hs K_2(t,s)={ C_0Me^{\\beta(t-s)}},\\end{equation}\nand $C_2=C_1M{ \\int_{0}^{\\infty}(1+s^{-\\alpha})e^{-\\beta s}ds}$. That is, $u$ satisfies\n\\begin{equation}\\label{e:4.11}\n\\begin{array}{ll}\n{ \\|u(t)\\|_\\alpha}\n\\leq& Me^{-\\beta (t-\\tau)}{ \\|u_\\tau\\|_{{\\mathcal C}_\\alpha}}+\\int_{\\tau}^{t} K_1(t,s)\\|u_s\\|_{{\\mathcal C}_\\alpha} ds\\\\[2ex]\n&+\\int_{t}^{\\infty}K_2(t,s)\\|u_s\\|_{{\\mathcal C}_\\alpha} ds+C_2,\\hs t\\ge\\tau\\geq 0.\n\\end{array}\n\\end{equation}\n Applying Theorem \\ref{t:3.1} one deduces that if $C_0<1\/(\\kappa_0 M)$ then\n for any $R, \\varepsilon >0$, there exists { $T>0$} such that\n\\eqref{e:4.5} holds true for all $p\\in{\\mathcal H}$ and $\\phi\\in\\overline{\\mathcal B}_R$.\n\\vs\n(2) Let $\\gamma(t)$ be a bounded complete solution of \\eqref{e:4.1} pertaining to some $q\\in{\\mathcal H}$. Pick an $R>0$ such that $\\|\\gamma(t)\\|_\\alpha< R$ for all $t\\in\\mathbb{R}$. Then for any $\\varepsilon>0$, there is $T>0$ such that \\eqref{e:4.5} holds for all $p\\in{\\mathcal H}$ and $\\phi\\in\\overline{\\mathcal B}_R$. Taking $p=\\theta_{-T} q$ and $\\phi=\\gamma(-T)$, one finds that\n$$\n\\|\\gamma(0)\\|_\\alpha=\\|u(T;p,\\phi)\\|_\\alpha<\\rho+\\varepsilon.\n$$\nSince $\\varepsilon$ is arbitrary, we conclude that $\\|\\gamma(0)\\|_\\alpha\\leq\\rho$.\n\nIn a similar fashion it can be shown that $\\|\\gamma(t)\\|_\\alpha\\leq\\rho$ for all $t\\in\\mathbb{R}$. 
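We note in passing, as an elementary worked evaluation, that the integral appearing in $\\rho$ is explicit: for $\\alpha\\in[0,1)$ and $\\beta>0$,\n$$\n\\int_{0}^{\\infty}(1+s^{-\\alpha})e^{-\\beta s}ds=\\frac{1}{\\beta}+\\beta^{\\alpha-1}\\Gamma(1-\\alpha),\n$$\nwhere $\\Gamma$ here denotes the Euler Gamma function; hence $\\rho$ can be computed directly from $C_0,C_1,M,\\alpha,\\beta$ and $\\kappa_0$.\n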
$\\Box$\n\n}\n\\vskip8pt}\\def\\vs{\\vskip4pt\nThanks to Lemma \\ref{l:4.1}, one can now show by very standard argument via the Conley index theory that equation \\eqref{e:4.1} has a bounded complete solution $u$. Specifically, we have the following existence result.\n\n\\begin{theorem}\\label{t:4.1b} Assume the hypotheses in Lemma \\ref{l:4.1}. Then for any $p\\in{\\mathcal H}$, \\eqref{e:4.1} has at least one bounded complete solution $u$ pertaining to $p$.\n\\end{theorem}\n{\\bf Proof.} The estimate \\eqref{e:4.5b} allows us to prove by using the Conley index theory and some standard argument that \\eqref{e:4.1} has at least one bounded complete solution $\\gamma=\\gamma(t)$ pertaining to some $p_0\\in{\\mathcal H}$. The interested reader is referred to \\cite[Sect. 7]{Wang} and \\cite{LLZ} for details.\n\nTo show that for any $p\\in{\\mathcal H}$, equation \\eqref{e:4.1} has at least one bounded complete solution $u$ pertaining to $p$, we consider the skew-product flow $\\Pi$ on ${\\mathscr X}={\\mathcal H}\\times {{\\mathcal C}_\\alpha}$ defined as below:\n\\begin{equation}\\label{e:sk}\n\\Pi(t)(p,\\phi)=\\(\\theta_tp,\\Phi(t,p)\\phi\\),\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm} (p,\\phi)\\in {\\mathscr X},\\,\\,t\\geq 0.\n\\end{equation}\nThe asymptotic compactness of $\\Phi$ imply that $\\Pi$ is asymptotically compact.\nLet $\\varphi(t)=(\\theta_tp_0,\\,\\gamma_t)$. Then $\\varphi=\\varphi(t)$ is a bounded complete trajectory of $\\Pi$.\n\nLet ${\\mathcal S}=\\omega(\\varphi)$ be the $\\omega$-limit set of $\\varphi$,\n\\begin{equation}\\label{e:os}\\begin{array}{ll}\n\\omega(\\varphi)=\\bigcap}\\def\\Cup{\\bigcup_{\\tau\\geq0}\\,\\overline{\\{\\varphi(t):\\,\\,t\\geq\\tau\\}}.\\end{array}\n\\end{equation}\nBy the basic knowledge in the dynamical systems theory we know that ${\\mathcal S}$ is a nonempty compact invariant set of $\\Pi$.\nSet $K=P_{\\mathcal H}{\\mathcal S}$, where $P_{\\mathcal H}:{\\mathscr X}\\rightarrow {\\mathcal H}$ is the projection. One can easily verify that $K$ is a nonempty compact invariant set of the driving system $\\theta$. Hence due to the minimality hypothesis on ${\\mathcal H}$ we deduce that $K={\\mathcal H}$. Consequently for each $p\\in {\\mathcal H}$, there is a $\\phi\\in {{\\mathcal C}_\\alpha}$ such that $(p,\\phi)\\in{\\mathcal S}$. Let $(\\theta_tp,\\,u_t)$ be a bounded complete trajectory of $\\Pi$ in ${\\mathcal S}$ through $(p,\\phi)$. Set\n$$\nu(t)=u_t(0),\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm} t\\in\\mathbb{R}.\n$$\nThen $u(t)$ is bounded complete solution of \\eqref{e:4.1} pertaining to $p$. 
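Moreover, since the lift $u_t$ of $u$ stays in the compact set ${\\mathcal S}$, the solution $u$ is bounded, and hence \\eqref{e:4.5b} applies to it:\n$$\n\\sup_{t\\in\\mathbb{R}}\\|u(t)\\|_\\alpha\\leq\\rho.\n$$\n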
$\\Box$\n\n\n\n\n\\subsection{Existence of a nonautonomous equilibrium solution}\\label{s:4.3}\nFor the sake of simplicity in statement, instead of (F1) and (F2), in this section we assume that $F(p,\\phi)$ is {\\em globally Lipschitz } in $\\phi$ uniformly w.r.t $p\\in {\\mathcal H}$, i.e.,\n \\begin{enumerate}\\item[(F3)] there exists $L>0$ such that\n$$\n\\|F(p,\\phi)-F(p,\\phi')\\|_0\\leq L\\|\\phi-\\phi'\\|_{{\\mathcal C}_\\alpha},\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm} \\forall\\,\\phi,\\phi'\\in {\\mathcal C}_\\alpha,\\,\\,p\\in{\\mathcal H}.\n$$\n\\end{enumerate}\nIn such a case, since\n$$\n\\|F(p,\\phi)\\|_0\\leq \\|F(p,\\phi)-F(p,0)\\|_0+\\|F(p,0)\\|_0\\leq L\\|\\phi\\|_{{\\mathcal C}_\\alpha}+\\|F(p,0)\\|_0,\n$$\nwe see that hypothesis (F2) is automatically fulfilled with\n\\begin{equation}\\label{e:4.31}C_0=L,\\hs C_1=\\max_{p\\in {\\mathcal H}}\\|F(p,0)\\|_0.\\end{equation}\n\n\n\n\\begin{definition} A nonautonomous equilibrium solution of \\eqref{e:4.1} is a continuous mapping $\\Gamma\\in C({\\mathcal H},X^\\alpha)$ such that\n$\\gamma_p(t):=\\Gamma(\\theta_tp)$ is a bounded complete solution of \\eqref{e:4.1} pertaining to $p$ for each $p\\in {\\mathcal H}$.\n \\end{definition}\n\\begin{theorem}\\label{t:4.1}Suppose $A$ has a spectral decomposition as in \\eqref{e:4.3}, and that\n$L<{1}\/{(\\kappa_0 M)}$.\n Then the following assertions hold:\n \\begin{enumerate}\\item[$(1)$] Equation $(\\ref{e:4.1})$ has a nonautonomous equilibrium solution $\\Gamma\\in C({\\mathcal H},{ X^{\\alpha}})$.\\vs\n \\item[$(2)$] For any $R,\\varepsilon>0$, there exists $T>0$ such that for any bounded solution $u(t)=u(t;p,\\phi)$ with $\\phi\\in\\overline{\\mathcal B}_R$,\n$$\n\\|u(t)-\\Gamma(\\theta_tp)\\|_\\alpha<\\varepsilon,\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm} t> T.\n$$\n\\item[$(3)$] There exists $c>0$ such that for any bounded solution $u(t)=u(t;p,\\phi)$,\n$$\n\\|u(t)-\\Gamma(\\theta_tp)\\|_\\alpha\\leq c\\max_{s\\in[-r,0]}\\|\\phi(s)-\\Gamma(\\theta_s p)\\|_\\alpha,\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm} t\\geq0.\n$$\n\\end{enumerate}\n\\end{theorem}\n{\\bf Proof.} (1)\\, We continue the argument in the proof of Theorem \\ref{t:4.1b}.\nSet $${\\mathcal S}[p]=\\{\\phi:\\,\\,(p,\\phi)\\in {\\mathcal S}\\},\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm} p\\in{\\mathcal H},$$\n where ${\\mathcal S}$ is the $\\omega$-limit set of $\\varphi$ given by \\eqref{e:os}.\n Using the compactness of ${\\mathcal S}$ one easily checks that ${\\mathcal S}[p]$ is upper semicontinuous, i.e., given $p\\in{\\mathcal H}$, for any $\\varepsilon>0$, there is a $\\delta>0$ such that ${\\mathcal S}{ [q]}$ is contained in the $\\varepsilon$-neighborhood of ${\\mathcal S}{ [p]}$ for all $q$ with $d(q,p)<\\delta$.\nIn what follows we show that ${\\mathcal S}[p]$ is a singleton. Consequently the upper semicontinuity of ${\\mathcal S}[p]$ reduces to the continuity of ${\\mathcal S}[p]$ in $p$.\n\nLet $\\phi_1,\\phi_2\\in {\\mathcal S}[p]$. As in the proof of { Theorem \\ref{t:4.1b}} we know that $\\Phi$ has two bounded complete trajectories $\\gamma^i_t$ ($i=1,2$) in ${{\\mathcal C}_\\alpha}$ pertaining to $p$ with $\\gamma^i_0=\\phi_i$. 
We check that\n$\\gamma_t:=\\gamma^1_t-\\gamma^2_t\\equiv 0$ for $t\\in\\mathbb{R}$, or equivalently,\n\\begin{equation}\\label{e:4.8}\\gamma(t):=\\gamma^1(t)-\\gamma^2(t)\\equiv0,\\hs\\mbox{ where $\\gamma^i(t)=\\gamma^i_t(0)$}.\\end{equation} It then follows that $\\phi_1=\\phi_2$, hence ${\\mathcal S}[p]$ is a singleton.\n\n\nFor $\\eta\\in\\mathbb{R}$, we write $\\varphi^i(t)=\\gamma^i(t+\\eta)$. Then $\\varphi^i(t)$ is a solution of \\eqref{e:4.1} pertaining to $q=\\theta_\\eta p$. By Lemma \\ref{l:3.3} we have\n\n{\\begin{equation*}\n\\begin{array}{ll}\n\\varphi^{i}(t)=&e^{-A_2t}P_2 \\varphi^{i}(0)+\\int_{0}^{t}e^{-A_2(t-s)}P_2 F\\(\\theta_s q,\\,\\varphi^{i}_s\\)ds\\\\[2ex]\n&-\\int_{t}^{\\infty}e^{-A_1(t-s)}P_1 F\\(\\theta_s q,\\,\\varphi^{i}_{s}\\)ds,\\hs t\\ge0.\n\\end{array}\n\\end{equation*}\n Let $\\varphi(t):=\\varphi^1(t)-\\varphi^2(t)$. Then\n\\begin{equation*}\n\\begin{array}{ll}\n\\varphi(t)=\\,&\\,e^{-A_2t}P_2 \\varphi(0)+\\int_{0}^{t}e^{-A_2(t-s)}P_2 \\(F\\(\\theta_s q,\\,\\varphi^1_s\\)- F\\(\\theta_s q,\\,\\varphi^2_s\\)\\)ds\\\\[2ex]\n&-\\int_{t}^{\\infty}e^{-A_1(t-s)}P_1 \\(F\\(\\theta_s q,\\,\\varphi^1_s\\)-F\\(\\theta_s q,\\,\\varphi^2_s\\)\\)ds,\\hs t\\ge0.\n\\end{array}\n\\end{equation*}\nThus by ({ F3}) we deduce that\n\\begin{equation}\\label{eq2}\n\\begin{array}{ll}\n\\|\\varphi(t)\\|_\\alpha&\\leq M e^{-\\beta t}{ \\|\\varphi_0\\|_{{\\mathcal C}_\\alpha}}+{ L}M\\int_0^t(t-s)^{-\\alpha}e^{-\\beta(t-s)}\\|\\varphi_s\\|_{{\\mathcal C}_\\alpha} ds\\\\[2ex]\n&\\hs+{ L }M\\int_t^{\\infty} e^{-\\beta(s-t)}\\|\\varphi_s\\|_{{\\mathcal C}_\\alpha} ds,\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm} \\forall\\,t\\geq 0.\n\\end{array}\n\\end{equation}\nSince $\\varphi(t)=\\gamma{ ^1}(t+\\eta)-\\gamma{ ^2}(t+\\eta)$ and $\\eta\\in\\mathbb{R}$ can be taken arbitrary, it can be easily seen that \\eqref{eq2} is readily satisfied by all the translations $\\varphi(\\.+\\tau)$ of $\\varphi$, i.e.,\n$$\n\\begin{array}{ll}\n\\|\\varphi(t+\\tau)\\|_\\alpha&\\leq M e^{-\\beta t}{ \\|\\varphi_\\tau\\|_{{\\mathcal C}_\\alpha}}+{ L}M\\int_0^t(t-s)^{-\\alpha}e^{-\\beta(t-s)}\\|\\varphi_{s+\\tau}\\|_{{\\mathcal C}_\\alpha} ds\\\\[2ex]\n&\\hs+{ L }M\\int_t^{\\infty} e^{-\\beta(s-t)}\\|\\varphi_{s+\\tau}\\|_{{\\mathcal C}_\\alpha} ds,\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm} \\forall\\,t\\geq 0.\n\\end{array}\n$$\nRewriting $t+\\tau$ as $t$, the above inequality can be put into the following one:\n\\begin{equation}\\label{e:4.4}\n\\begin{array}{ll}\n\\|\\varphi(t)\\|_\\alpha&\\leq E(t,\\tau){ \\|\\varphi_\\tau\\|_{{\\mathcal C}_\\alpha}}+\\int_\\tau^t K_1(t,s)\\|\\varphi_{s}\\|_{{\\mathcal C}_\\alpha} ds\\\\[2ex]\n&\\hs+\\int_t^{\\infty} K_2(t,s)\\|\\varphi_{s}\\|_{{\\mathcal C}_\\alpha} ds,\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm} \\forall\\,t\\geq \\tau.\n\\end{array}\n\\end{equation}\nwhere $E(t,s)=M e^{-\\beta (t-s)}$, and\n\\begin{equation}\\label{eEK}\\mbox{$K_1=LM(t-s)^{-\\alpha}e^{-\\beta(t-s)}$,\\hs $K_2=LMe^{-\\beta(s-t)}$.}\\end{equation}\n\nApplying Theorem \\ref{t:3.1} (1) to $y(t)=\\|\\varphi(t)\\|_\\alpha$, we deduce by \\eqref{e:4.4} that if $L<1\/(\\kappa_0 M)$, then for any $\\varepsilon>0$ there exists $T>0$ (independent of $\\eta$) such that\n$\n\\|\\varphi(t)\\|_\\alpha<\\varepsilon$ for all $t> T$, that is,\n\\begin{equation}\\label{e:4.17}\\|\\gamma({t+{\\eta}})\\|_\\alpha<\\varepsilon,\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm} t> T,\\,\\,{\\eta}\\in \\mathbb{R}.\n\\end{equation}\nNow for any $\\tau\\in\\mathbb{R}$, setting $t=T+1$ and $\\eta=\\tau-(T+1)$ in \\eqref{e:4.17} we 
obtain that $\\|\\gamma({\\tau})\\|_\\alpha<\\varepsilon$. Since $\\varepsilon$ is arbitrary, one immediately concludes that $\\gamma(\\tau)=0$, which justifies the validity of \\eqref{e:4.8}.\n\n}\n\n\\vskip8pt}\\def\\vs{\\vskip4pt\nNow we write ${\\mathcal S}[p]=\\{\\phi_p\\}$. Then $\\phi_p$ is continuous in $p$, and the invariance property of ${\\mathcal S}$ implies that $\\gamma_{t}:=\\phi_{\\theta_tp}$ is a complete trajectory of the cocycle $\\Phi$ in ${{\\mathcal C}_\\alpha}$ for each $p\\in {\\mathcal H}$. Define\n$$\\Gamma(p)=\\phi_p(0),\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm} p\\in{\\mathcal H}.$$ Clearly $\\Gamma\\in C({\\mathcal H},X^\\alpha)$. It is easy to see that for each $p\\in{\\mathcal H}$, $\\gamma_p(t):=\\Gamma(\\theta_tp)=\\phi_{\\theta_tp}(0)$ is a complete solution of \\eqref{e:4.1} pertaining to $p$. Hence $\\Gamma$ is a nonautonomous equilibrium solution of equation \\eqref{e:4.1}.\n\\vs\n\n(2)-(3)\\, Let $p\\in{\\mathcal H}, $ and let $u(t)=u(t;p,\\phi)$ be a bounded solution of \\eqref{e:4.1}. Then the same argument as above with minor modifications applies to show that \\eqref{e:4.4} is fulfilled by $\\varphi(t):=u(t)-\\Gamma(\\theta_tp)$ for all $t\\geq\\tau\\geq0$.\nAssertions (2) and (3) then immediately follows from Theorem \\ref{t:3.1} and Lemma \\ref{l:2.1}. $\\Box$\n\n\n\\subsection{Global asymptotic stability of the equilibrium}\\label{s:4.4}\n\nNow we pay some attention to the particular case where $\\sigma(A)$ lies in the right half plane.\nWe continue the argument in Section \\ref{s:4.3} and assume that $F$ satisfies the global Lipschitz condition (F3).\n\nGiven $(p,\\phi)\\in{\\mathcal H}\\times{\\mathcal C}_\\alpha$, we write $u(t)=u(t;p,\\phi)$. Since the spectral set $\\sigma^-=\\emptyset$, using the constant variation formula it can be shown that\n\\begin{equation}\\label{e:4.36}\n\\|{ u(t)}\\|_\\alpha\n\\leq Me^{-\\beta (t-\\tau)}{ \\|u_\\tau\\|_{{\\mathcal C}_\\alpha}}+\\int_{\\tau}^{t} K_1(t,s)\\|u_s\\|_{{\\mathcal C}_\\alpha} ds{ +\\rho_0},\\hs\\,\\, t\\ge\\tau\\geq 0\n\\end{equation}\n where $\\rho_0=C_1M\\int_{0}^{\\infty} s^{-\\alpha}e^{-\\beta s}ds$ ($C_1$ is the constant given in \\eqref{e:4.31}), and $\n K_1(t,s)$ is the function given in \\eqref{eEK}.\n The calculations involved here are similar to those as in the proof of Lemma \\ref{l:4.1}. We omit the details.\n Let $$\\begin{array}{ll}\\kappa_0=\\sup_{t\\ge0}\\(\\int_{0}^{t}(t-s)^{-\\alpha}e^{-\\beta (t-s)}ds \\),\\hs \\rho=(1-\\kappa_0 L M)^{-1}\\rho_0,\\end{array}$$\n where $L$ is the constant in (F3). 
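For orientation, we observe (an elementary computation, not needed for the argument) that this constant is explicit: since $t\\mapsto\\int_0^t s^{-\\alpha}e^{-\\beta s}ds$ is nondecreasing,\n$$\n\\kappa_0=\\int_{0}^{\\infty}s^{-\\alpha}e^{-\\beta s}ds=\\beta^{\\alpha-1}\\Gamma(1-\\alpha),\n$$\nwhere $\\Gamma$ here denotes the Euler Gamma function; thus the smallness condition $L<1\/(\\kappa_0 M)$ below can be read as an explicit bound on the Lipschitz constant $L$.\n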
Applying Theorem \\ref{t:3.2} (1) one deduces that if $L<{{1}\/{(\\kappa_0 M)}}$, then for any $R, \\varepsilon>0$, there exists $T>0$ such that\n\\begin{equation}\\label{e:4.20}\n\\|u(t)\\|_\\alpha<\\rho+\\varepsilon,\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm}\\forall\\, t{ >} T,\\,\\,(p,\\phi)\\in{\\mathcal H}\\times{ \\overline{\\mathcal B}_R}.\n\\end{equation}\n\nLet $\\Gamma$ be the nonautonomous equilibrium solution given by Theorem \\ref{t:4.1}.\nAs a direct consequence of \\eqref{e:4.20} and Theorem \\ref{t:4.1}, we have\n\\begin{theorem}\\label{l:4.1c} Suppose $L<{{1}\/{(\\kappa_0 M)}}$, and that\n\\begin{equation}\\label{e:sa}\\mbox{\\em Re}\\,z\\geq \\beta>0,\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm}\\forall\\,z\\in\\sigma(A).\n\\end{equation}\nThen $\\Gamma$ is uniformly globally asymptotically stable in the following sense:\n\\begin{enumerate}\n\\item[$(1)$] $\\Gamma$ is uniformly stable, i.e., for any $\\varepsilon>0$, there exists $\\delta>0$ such that for all $(p,\\phi)\\in{\\mathcal H}\\times{\\mathcal C}_\\alpha$ with $\\max_{s\\in[-r,0]}\\|\\phi(s)-\\Gamma(\\theta_sp)\\|_\\alpha<\\delta$,\n\\begin{equation}\\label{e:4.34}\n\\|u(t)-\\Gamma(\\theta_tp)\\|_\\alpha<\\varepsilon,\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm} t\\geq0.\n\\end{equation}\n\\item[$(2)$] $\\Gamma$ is uniformly globally attracting, i.e.,\nfor any $R,\\varepsilon>0$, there exists $T>0$ such that for all $p\\in{\\mathcal H}$ and $\\phi\\in\\overline{\\mathcal B}_R$,\n\\begin{equation}\\label{e:4.35}\n\\|u(t)-\\Gamma(\\theta_tp)\\|_\\alpha<\\varepsilon,\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm} t> T.\n\\end{equation}\n\\end{enumerate}\n\\end{theorem}\n{\\bf Proof.} The uniform stability of $\\Gamma$ follows from Theorem \\ref{t:4.1} (3), and the\nuniform global attraction of $\\Gamma$ is a consequence of \\eqref{e:4.20} and some general results on the uniform forward attraction properties of pullback attractors; see e.g. \\cite[Theorem 3.3]{WLK}. $\\Box$\n\n\n\\vskip8pt}\\def\\vs{\\vskip4pt\nIf we impose on $L$ a stronger smallness requirement, then it can be shown that $\\Gamma$ is uniformly globally exponentially asymptotically stable.\n\\begin{theorem}\\label{l:4.1d} Assume that $A$ satisfies \\eqref{e:sa}. If $L<{1}\/{(\\kappa_0 M(1+M))}$, then there exist $C,\\lambda>0$ such that for all $(p,\\phi)\\in{\\mathcal H}\\times{\\mathcal C}_\\alpha$,\n$$\n\\|u(t)-\\Gamma(\\theta_tp)\\|_{\\alpha}\\leq { C}e^{-\\lambda t}\\max_{s\\in[-r,0]}\\|\\phi(s)-\\Gamma(\\theta_sp)\\|_\\alpha\\,,\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm} t\\geq0.\n$$\n\\end{theorem}\n{\\bf Proof.}\nLet $\\varphi(t)=u(t)-\\Gamma(\\theta_t p)$. Using an argument parallel to that in the proof of Lemma \\ref{l:4.1} (1), we can obtain that\n $$\n\\|\\varphi(t)\\|_\\alpha\n\\leq Me^{-\\beta (t-\\tau)}{\\|\\varphi_\\tau\\|_{{\\mathcal C}_\\alpha}}+\\int_{\\tau}^{t} K_1(t,s)\\|\\varphi_s\\|_{{\\mathcal C}_\\alpha} ds,\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm} t\\ge\\tau\\geq 0.\n $$\n If $L<{1}\/{(\\kappa_0 M(1+M))}$ then the functions $E(t,s):=Me^{-\\beta (t-s)}$ and $K_1(t,s)$ fulfill the requirements in Theorem \\ref{t:3.2}. Thus there exist constants $C,\\lambda>0$ independent of $\\varphi$ such that\n$$\n\\|\\varphi(t)\\|_{\\alpha}\\leq { C}e^{-\\lambda t}\\|\\varphi_0\\|_{{\\mathcal C}_\\alpha},\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm} t\\geq 0.\n$$\nThe conclusion of the theorem then immediately follows. 
$\\Box$\n\n\\subsection{Nonlinear evolution equations with multiple delays}\nLet us now consider the nonlinear evolution equation\n \\begin{equation}\\label{e:4.23}\\frac{du}{dt}+Au=f(u(t-r_1),\\cdots,u(t-r_m))+h(t)\\end{equation}\nwith multiple delays, where $X$ and $A$ are the same as in Subsection 4.1, $f$ is a continuous mapping from $(X^\\alpha)^{m}$ to $X$ for some $\\alpha\\in[0,1)$, $h\\in C(\\mathbb{R},X)$, $r_i\\in C(\\mathbb{R},\\mathbb{R}^+)$, and $$0\\leq r_i(t) \\leq r<\\infty,\\hs 1\\leq i\\leq m.$$ It is well known that \\eqref{e:4.23} covers a large number of concrete examples from applications. Our main goal here is to demonstrate how to put such an equation into the abstract form of \\eqref{e:4.1}.\n\nThe initial value problem of \\eqref{e:4.23} reads as follows:\n \\begin{equation}\\label{e:4.23b}\\left\\{\\begin{array}{ll}\\frac{du}{dt}+Au=f(u(t-r_1),\\cdots,u(t-r_m))+h(t),\\hs t\\geq\\tau,\\\\[1ex]\n u(\\tau+s)=\\phi(s),\\hs s\\in[-r,0],\\end{array}\\right.\\end{equation}\n where $\\phi\\in{{\\mathcal C}_\\alpha}=C([-r,0],X^\\alpha)$, and $\\tau\\in\\mathbb{R}$ is given arbitrary. Rewriting $t-\\tau$ as $t$, one obtains an equivalent form of \\eqref{e:4.23b}:\n{ \\begin{equation}\\label{e:4.23c}\\left\\{\\begin{array}{ll}\\begin{array}{ll}\\frac{dv}{dt}+Av=f(v(t-\\~r_1),\\cdots,v(t-\\~r_m))+\\~h(t),\\hs t\\geq 0,\\end{array}\\\\[1ex]\n v(s)=\\phi(s),\\hs s\\in[-r,0],\\end{array}\\right.\\end{equation}\n}\nwhere $v(t)=u(t+\\tau)$, and\n$$\n\\~r_i(t)=r_i(t+\\tau),\\hs \\~h(t)=h(t+\\tau).\n$$\n\nDenote ${\\mathcal Y}$ the space $C(\\mathbb{R})^m\\times C(\\mathbb{R},X)$ equipped with the {\\em compact-open topology} (under which a sequence $p_n(t)$ in ${\\mathcal Y}$ is convergent {\\em iff\\,} it is uniformly convergent on any compact interval $I\\subset \\mathbb{R}$).\nLet $\\theta$ be the translation operator on ${\\mathcal Y}$,\n$$\n\\theta_\\tau p=p(\\.+\\tau),\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm} \\forall p\\in{\\mathcal Y},\\,\\,\\tau\\in\\mathbb{R}.\n$$ Set\n\\begin{equation}\\label{e:hp}\np^*(t)=(r_1(t),\\cdots,r_m(t),\\,h(t)),\n\\end{equation}\nand assume that $p^*(t)$ is {\\em translation compact } in ${\\mathcal Y}$, i.e., the hull\n$${\\mathcal H}={\\mathcal H}[p^*]:=\\overline{\\{\\theta_\\tau p^*:\\,\\,\\tau\\in\\mathbb{R}\\}}$$ of $p^*$ in ${\\mathcal Y}$ is a compact subset of ${\\mathcal Y}$.\n\nWe also assume that ${\\mathcal H}$ is minimal w.r.t $\\theta$. This requirement is naturally fulfilled when $p^*$ is, say for instance, periodic, pseudo-periodic, or almost periodic.\n\nDefine a function $F:{\\mathcal H}\\times {{\\mathcal C}_\\alpha}\\rightarrow X$ as\n\\begin{equation}\\label{e:F}\nF(p, \\phi)=f(\\phi(-p_1(0)),\\cdots,\\phi(-p_m(0)))+p_{m+1}(0)\n\\end{equation}\nfor any $p=(p_1,\\cdots,p_{m+1})\\in {\\mathcal H}$. 
Observing that\n$$\\(r_1(t+\\tau),\\cdots,r_m(t+\\tau),\\,h(t+\\tau)\\)=p^*(t+\\tau)=(\\theta_{t+\\tau}p^*)(0),$$\nwe can rewrite the righthand side of the equation in \\eqref{e:4.23c} as follows:\n{\n$$\\begin{array}{ll}\n&f(v(t-\\~r_1),\\cdots,v(t-\\~r_m))+\\~h(t)\\\\[1ex]\n=& F(\\theta_{t+\\tau}p^*,v_{t})= F(\\theta_{t}p,v_t),\\hs p=\\theta_\\tau p^*.\n\\end{array}\n$$\n}\nConsequently \\eqref{e:4.23c} can be reformulated as\n \\begin{equation}\\label{e:4.24}\\left\\{\\begin{array}{ll}\\frac{dv}{dt}+Av=F(\\theta_tp,v_t),\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm} t\\geq 0,\\,\\,p\\in\\{\\theta_\\tau p^*:\\,\\,\\tau\\in\\mathbb{R}\\},\\\\\n v_0=\\phi.\\end{array}\\right.\\end{equation}\n Since $\\{p=\\theta_\\tau p^*:\\,\\,\\tau\\in\\mathbb{R}\\}$ is dense in ${\\mathcal H}$, for theoretical completeness we usually embed \\eqref{e:4.24} into the following cocycle system:\n \\begin{equation}\\label{e:4.25}\\left\\{\\begin{array}{ll}\\frac{dv}{dt}+Av=F(\\theta_tp,v_t),\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm} t\\geq 0,\\,\\,p\\in {\\mathcal H},\\\\\n v_0=\\phi.\\end{array}\\right.\\end{equation}\n\n\\vs\nNow assume $f$ satisfies the following conditions:\n \\begin{enumerate}\\item[(f1)] $f$ is {\\em locally Lipschitz}, namely, for any $R>0$, there exists $L_f=L_f(R)>0$ such that for all $u_i,u_i'\\in X^\\alpha$ ($1\\leq i\\leq m$) with $\\|u_i\\|_\\alpha,\\|u_i'\\|_\\alpha\\leq R$,\n$$\n\\|f(u_1,\\cdots,u_m)-f(u_1',\\cdots,u_m')\\|_0\\leq L_f(\\|u_1-u_1'\\|_\\alpha+\\cdots+\\|u_m-u_m'\\|_\\alpha).\n$$\n\n\\vs \\item[(f2)] There exist $C_0,C_1>0$ such that\n $$\n \\|f(u_{ 1},\\cdots,u_m)\\|_{ 0}\\leq C_0(\\|u_1\\|_\\alpha+\\cdots+\\|u_m\\|_\\alpha)+C_1,\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm} \\forall\\,u_i\\in X^\\alpha.\n $$\n \\end{enumerate}\nThen one can trivially verify that the mapping $F$ defined by \\eqref{e:F} satisfies hypotheses (F1) and (F2).\n\n\n\\begin{remark}\\label{r:4.6} Note that if the function $p^*$ in \\eqref{e:hp} is periodic (resp. quasi-periodic, almost periodic), then $\\theta_tp$ is periodic (resp. quasi-periodic, almost periodic) for any fixed $p\\in {\\mathcal H}:={\\mathcal H}[p^*]$. Let $\\Gamma$ be the equilibrium solution of \\eqref{e:4.25} given in Theorems \\ref{l:4.1c} and \\ref{l:4.1d}. Then since $\\Gamma(q)$ is continuous in $q$, we deduce that $\\gamma_p:=\\Gamma(\\theta_tp)$ is periodic (resp. quasi-periodic, almost periodic) as well. Therefore these two theorems give the existence of asymptotically stable periodic (resp. pseudo periodic, almost periodic) solutions for equation \\eqref{e:4.23}.\n\nThe interested reader is referred to \\cite{Hale2,Jones,Kaplan,Ken,LiYX,MN,MSS,NT,Nus2,Ou,Walt3,Walt2} etc. 
for some classical results and new trends on periodic solutions of delay differential equations,\nand to \\cite{Hino1,Layt,Naito,Seif,Wu,Yoshi,Yuan0,YuanR} and references therein for typical results on almost periodic solutions.\n\\end{remark}\n\\begin{remark}\\label{r:4.7}In the case where the functions $h$ and $r_i$ $(1\\leq i\\leq m)$ in the equation \\eqref{e:4.23} are not translation compact (or, the righthand side of the equation takes a more general form like $g(t,u(t-r_1),\\cdots,u(t-r_m))$\\,),\nthe framework of cocycle systems does not seem to be quite suitable to handle the problem, because the base space ${\\mathcal H}$ of the cocycle system corresponding to the equation may not be compact.\nInstead, the framework of processes may be more appropriate.\n\nSet\n$\nF(t,\\phi)=f(\\phi(-r_1),\\cdots,\\phi(-r_m))+h(t)$ \\,$(t\\in\\mathbb{R},\\,\\,\\phi\\in{\\mathcal C}_\\alpha).$ Then \\eqref{e:4.23} can be written as the functional equation:\n\\begin{equation}\\label{e:4.2}\n\\frac{du}{dt}+Au=F(t,u_t).\\end{equation}\nSuppose \\eqref{e:4.2} has a unique global solution $x(t;\\tau,\\phi)$ $(t\\in[\\tau-r,\\infty))$ for each initial data $(\\tau,\\phi)\\in \\mathbb{R}\\times{\\mathcal C}_\\alpha$. Denote by $x_t(\\tau,\\phi)$ the lift of $x(t):=x(t;\\tau,\\phi)$ in ${\\mathcal C}_\\alpha$. Then as in \\eqref{e:3.20}, we can define a process $P(t,\\tau)$ on ${\\mathcal C}_\\alpha$.\nThis allows us to take some steps in the investigation of the dynamics of the equation. For instance, under hypotheses similar to those in Section \\ref{s:4.4}, it is desirable to prove that the equation has a unique bounded complete (entire) solution $\\gamma(t)$ $(t\\in\\mathbb{R})$ which is uniformly globally (exponentially) asymptotically stable by employing the pullback attractor theory for processes.\n\n\nThe situation becomes quite complicated if the operator $A$ fails to be a dissipative one, i.e., the spectral set\n$\\sigma^-$ in \\eqref{e:4.3} is non-void. One drawback is that neither the pullback attractor theory nor the Conley index theory is applicable in proving the existence of bounded complete solutions of the equation. If the delay functions $r_i(t)$ are constants, then since the external force $h(t)$ and the nonlinear term in the righthand side of \\eqref{e:4.23} are separate, one may try to get a bounded complete solution $\\gamma(t)$ of the equation by considering periodic approximations of $h(t)$.\nHowever, if the functions $r_i(t)$ also depend on $t$, we are not sure whether such a strategy still works. There are also many other interesting questions such as the synchronizing property of the bounded complete solution $\\gamma(t)$ with the external force (in case $r_i(t)$ are constant functions) and a more detailed description of the dynamics of the equation. (Note that even in the case where $h(t)$ and $r_i(t)$ are translation compact, Theorem \\ref{t:4.1} only gives us some information on the asymptotic behavior of bounded solutions of the equation. A natural question to ask is: What can we say about those unbounded solutions\\,?) 
All these questions deserve to be clarified, and a further study on the geometric theory of functional differential equations in a processes fashion may be helpful for us to take some steps, in which the integral inequality \\eqref{e1.1} may once again play a fundamental role.\n\n\\end{remark}\n\n\\subsection{Neural networks with multiple delays}\n{As an concrete example, we consider the following reaction diffusion neural network system with multiple delays:\n\\begin{equation}\\label{e:4.27}\n\\left\\{\\begin{array}{ll}\\frac{\\partial{u_i}}{\\partial{t}}=\\mbox{div}\\(a_i(x)\\nabla u_i\\)\n+\\sum_{j=1}^{n}b_{ij}u_j+\\\\[1ex]\n\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm}+\\sum_{j=1}^{n}T_{ij}g_j(x,u_j,u_j(x,t-r_{ij}))+J_{i}(x,t),\\\\[1ex]\nu_i(x,t)=0,\\hs t\\ge0,\\,\\,x\\in\\partial{\\Omega}, \\hs i=1,2,...,n.\n\\end{array}\\right.\n\\end{equation}\nHere $\\Omega\\subset \\mathbb{R}^m$ is a bounded domain with a smooth boundary $\\partial\\Omega$, $a_i\\in C^1(\\overline\\Omega)$ and is positive everywhere on $\\overline\\Omega$, $b_{ij}$ and $T_{ij}$\nare constant coefficients,\n$$0\\leq r_{ij}\\leq r<\\infty,\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm} 1\\leq i,j\\leq n,$$ and $J_i(x,t)$\nare bounded inputs. We refer the interested reader to \\cite{He,Z} etc. for a physical background of this type of systems.\n\n Let $A_i$ be the elliptic operator given by\n$$\nA_iu=-\\sum_{k=1}^{m}\\frac{\\partial{}}{\\partial{x_k}}\\(a_i(x)\\frac{\\partial{u}}{\\partial{x_k}}\\)$$\nassociated with the corresponding boundary condition. It is a basic knowledge (see e.g. Henry \\cite[Chap.7]{Henry}) that $A_i$ is a sectorial operator in $L^2(\\Omega)$ with compact resolvent.\n\nFor notational simplicity, we use the same notation $g_j$ to denote the Nemytskii operator generated by the function $g_j(x, u,v)$, i.e.,\n$$g_j(u,v)(x)=g_j(x,u,v)\\,\\,(x\\in\\Omega),\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm} u,v\\in L^2(\\Omega).$$\nLet $J_i(t)=J_i(\\.,t)$. Then \\eqref{e:4.27} takes a slightly abstract form:\n\\begin{equation}\\label{e:4.28}\n\\frac{d{u_i}}{d{t}}+{ A_i}u_i=\\sum_{j=1}^{n}b_{ij}u_j+\\sum_{j=1}^{n}T_{ij}g_j(u_j,u_j(t-r_{ij}))+J_{i}(t),\\hs 1\\leq i\\leq n.\n\\end{equation}\nSet $H=\\(L^2(\\Omega)\\)^n$, and let $u=(u_1,\\cdots,u_n)'$. Denote\n$$\nAu=(A_1u_1,...,A_nu_n)',\\hs u\\in D(A)\\subset H.\n$$ (It is clear that $A$ is a sectorial operator in $H$.)\nLet ${{\\mathcal C}_0}=C([-r,0],H)$, and define an operator $G:{{\\mathcal C}_0}\\rightarrow H^n=(L^2(\\Omega))^{n\\times n}$ as follows:\\,\\, $\\forall\\,\\phi=(\\phi_1,\\cdots,\\phi_n)'\\in{{\\mathcal C}_0}$,\n$$\nG(\\phi)=(\\psi_{ji})_{n\\times n},\\hs\\mbox{where } \\psi_{ij}=g_j(\\phi_j(0),\\phi_j(-r_{ij})).\n$$\n Let $T=\\(T_{ij}\\)_{n\\times n}$. Write $TG(\\phi)=\\([TG(\\phi)]_{ij}\\)_{n\\times n}$, and\n define\n $$\nF(\\phi)=(F_1(\\phi),F_2(\\phi),\\cdots,F_n(\\phi))',\\hs F_i(\\phi)=[TG(\\phi)]_{ii}.\n$$\n Then \\eqref{e:4.28} can be reformulated as\n\\begin{equation}\\label{e:4.29}\n\\frac{du}{dt}+Au=Bu+F(u_t)+J(t),\n\\end{equation}\nwhere $B=\\(b_{ij}\\)_{n\\times n}$, and $J=(J_1,\\cdots,J_n)'$.\n\n\nSince \\eqref{e:4.29} is nonautonomous,\ngenerally the initial value problem reads\n\\begin{equation}\\label{e:4.29b}\\left\\{\\begin{array}{ll}\n\\frac{dv}{dt}+Av=Bv+F({ v_t})+J(t+\\tau),\\hs t\\geq 0,\\\\\nv_0=\\phi\\in {{\\mathcal C}_0},\\end{array}\\right.\n\\end{equation}\nwhere $v(t)=u(t+\\tau)$, and $\\tau\\in\\mathbb{R}$ denotes the initial time.\nWe assume that $J$ is translation compact in ${\\mathcal Y}$. 
Denote ${\\mathcal H}$ the hull ${\\mathcal H}[J]$ of the function $J$ in ${\\mathcal Y}$. Then as in the previous subsection one can embed \\eqref{e:4.29b} into the cocycle system:\n \\begin{equation}\\label{e:4.30}\\left\\{\\begin{array}{ll}\\frac{dv}{dt}+(A-B)v=F(\\theta_tp,v_t),\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm} t\\geq 0,\\,\\,p\\in {\\mathcal H},\\\\\nv_0=\\phi\\in {{\\mathcal C}_0},\\end{array}\\right.\\end{equation}\nwhere\n$$\nF(p,\\phi)=F(\\phi)+p(0),\\hspace{1cm}}\\def\\hs{\\hspace{0.5cm} p\\in{\\mathcal H},\\,\\,\\phi\\in{{\\mathcal C}_0}.\n$$\n\n\nFor simplicity, we always assume that $g_j(x,u,v)$ are continuous and {\\em globally Lipschitz} in $(u,v)$ uniformly for $x\\in\\Omega$, that is, there exists $L_{j}>0$ such that\n$$\n|g_j(x,t_1,s_1)-g_j(x,t_2,s_2)|\\leq L_{j}(|t_1-{ t_2}|+|s_1-s_2|)\n$$\nfor all $t_i,s_i\\in\\mathbb{R}$ and $x\\in\\Omega$. Then for the Nemytskii operator $g_j$ of the function $g_j(x,u,v)$, we have\n\\begin{align*}\n\\|g_j(u_1,v_1)-g_j(u_2,v_2)\\|_{L^{2}(\\Omega)}\\le& L_{j}(\\|u_1-u_2\\|_{L^{2}(\\Omega)}+\\|v_1-v_2\\|_{L^{2}(\\Omega)}).\n\\end{align*}\nFurther by some simple calculations it can be shown that\n\\begin{align*}\n\\|F(p,\\phi)-F(p,\\phi')\\|_{H}\\le L\\|\\phi-\\phi'\\|_{{\\mathcal C}_0}\n\\end{align*}\nwith $L=2\\(\\sum_{i=1}^{n}\\big(\\sum_{j=1}^{n}|T_{ij}|L_j\\big)^{2}\\)^{1\/2}$.\nThis allows us to carry over all the results on the abstract evolution equation \\eqref{e:4.1} to system \\eqref{e:4.30}. In particular, by Remark \\ref{r:4.6} we have the following theorem.\n\n\n \\begin{theorem}\\label{t:4.9} Suppose $\\mbox{Re}\\,(\\sigma(A-B))\\geq \\beta>0$, and that $L<1\/\\(M I\\)$, where $M$ is the constant appearing in \\eqref{e:4.6a} corresponding to operator $A-B$, and $$I=\\sup_{t\\geq0}\\int_{0}^te^{-\\beta(t-s)}ds.$$ Let $J(t)=(J_1(t),\\cdots,J_n(t))'$ be a periodic (resp. quasi-periodic, almost periodic) function. Then system \\eqref{e:4.27} has a unique periodic (resp. quasi-periodic, almost periodic) solution $\\gamma$ which is globally uniformly asymptotically stable.\n\n If we further assume $L<1\/\\(MI(1+M)\\)$, then $\\gamma$ is globally exponentially asymptotically stable.\n\\end{theorem}\n\n\\noindent{\\bf Acknowledgement.} Our sincere thanks go to the referees for their valuable comments and suggestions which helped us greatly improve the quality of the paper.\n}\n\\section*{References}\n\n\n\n\\begin {thebibliography}{44}\n\\addcontentsline{toc}{section}{References}\n\\bibitem{BS}D. Ba\\v{\\i}nov, P. Simeonov, Integral Inequalities and Applications, Kluwer Academic Publishers Group, Dordrecht, 1992.\n\n\\bibitem{Bell1}R. Bellman, The stability of solutions of linear differential equations, Duke Math. J. 10 (1943) 643-647.\n \\newblock \\href {https:\/\/doi.org\/10.1215\/S0012-7094-43-01059-2}\n {\\path{doi: 10.1215\/S0012-7094-43-01059-2}}.\n\n\\bibitem{Carab2} T. Caraballo, G. Kiss, Attractors for differential equations with multiple variable delays, Discrete Contin. Dyn. Syst. 33 (4) (2013) 1365-1374.\n \\newblock \\href {https:\/\/dx.doi.org\/10.3934\/dcds.2013.33.1365}\n {\\path{doi: 10.3934\/dcds.2013.33.1365}}.\n\n\\bibitem{Carab} T. Caraballo, J. A. Langa, J. C. Robinson, Attractors for differential equations with variable delays, J. Math. Anal. Appl. 260 (2) (2001) 421-438.\n \\newblock \\href {https:\/\/dx.doi.org\/10.1006\/jmaa.2000.7464}\n {\\path{doi:10.1006\/jmaa.2000.7464}}.\n\n\\bibitem{CMR} T. Caraballo, A. M. M\\'{a}rquez-Dur\\'{a}n, J. 
Real, Pullback and forward attractors for a 3D LANS-$\\alpha$ model with delay, Discrete Contin. Dyn. Syst. 15 (2) (2006) 559-578.\n\n\\bibitem{CMV} T. Caraballo, P. Mar\\'{\\i}n-Rubio, J. Valero, Autonomous and non-autonomous attractors for differential equations with delays, J. Differential Equations 208 (1) (2005) 9-41.\n \\newblock \\href {https:\/\/doi.org\/10.1016\/j.jde.2003.09.008}\n {\\path{doi:10.1016\/j.jde.2003.09.008}}.\n\n\\bibitem{CLR}A. N. Carvalho, J. A. Langa, J. C. Robinson, Attractors for Infinite-dimensional Non-autonomous Dynamical Systems. Applied Mathematical Sciences, 182. Springer, New York, 2013.\n \\newblock \\href {https:\/\/doi.org\/10.1007\/978-1-4614-4581-4}\n {\\path{doi:10.1007\/978-1-4614-4581-4}}.\n\n\\bibitem{C}D. N. Cheban, Dissipative functional-differential equations, Izv. Akad. Nauk Respub. Moldova Mat. (2) (1991) 3-12.\n\n\\bibitem{ChenH} H. B. Chen, Asymptotic behavior of stochastic two-dimensional Navier-Stokes equations with delays, Proc. Indian Acad. Sci. Math. Sci. 122 (2) (2012) 283-295.\n \\newblock \\href {https:\/\/doi.org\/10.1007\/s12044-012-0071-x}\n {\\path{doi:10.1007\/s12044-012-0071-x}}.\n\n\\bibitem{Chue}I. Chueshov, A. Rezounenko, Finite-dimensional global attractors for parabolic nonlinear equations with state-dependent delay, Commun. Pure Appl. Anal. 14 (5) (2015) 1685-1704.\n \\newblock \\href {https:\/\/doi.org\/10.3934\/cpaa.2015.14.1685}\n {\\path{doi:10.3934\/cpaa.2015.14.1685}}.\n\n\\bibitem{Crau} H. Crauel, A. Debussche, F. Flandoli, Random attractors, J. Dynam. Differential Equations 9 (2) (1997) 307-341.\n\n\\bibitem{Dee}A. A. El-Deeb, A variety of nonlinear retarded integral inequalities of gronwall type and their applications. Advances in Mathematical Inequalities and Applications, (2018) 143-164.\n \\newblock \\href {https:\/\/doi.org\/10.1007\/978-981-13-3013-1\\_8}\n {\\path{doi:10.1007\/978-981-13-3013-1\\_8}}.\n\n\\bibitem{FT}Rui A.C. Ferreira, Delfim F.M. Torres, Retarded integral inequalities of Gronwall-Bihari type, preprint, 2008. \\href{http:\/\/arxiv.org\/abs\/0806.4709}{http:\/\/arxiv.org\/abs\/0806.4709}\n\n\\bibitem{He} K. Gopalsamy, X. Z. He, Stability in asymmetric Hopfield nets with transmission delays, Phys. D. 76 (4) (1994) 344-358.\n \\newblock \\href {https:\/\/doi.org\/10.1016\/0167-2789(94)90043-4}\n {\\path{doi:10.1016\/0167-2789(94)90043-4}}.\n\n\\bibitem{Gron}T. H. Gronwall, Note on the derivatives with respect to a parameter of the solutions of a system of differential equations, Ann. of Math. (2) 20 (4) (1919) 292-296.\n \\newblock \\href {https:\/\/doi.org\/10.2307\/1967124}\n {\\path{doi:10.2307\/1967124}}.\n\n\\bibitem{Halanay}A. Halanay, Differential Equations: Stability, Oscillations, Time Lags. Academic Press, New York-London 1966.\n\n\\bibitem{Hale0} J. K. Hale, Asymptotic Behavior of Dissipative Systems, American Mathematical Society, Providence, RI, 1988.\n\n\\bibitem{Hale1} J. K. Hale, Ordinary Differential Equations, Second edition, Robert E. Krieger Publishing Co., Inc., Huntington, N. Y., 1980.\n\n\\bibitem{Hale2} J. K. Hale, Theory of Functional Differential Equations, Second edition, Applied Mathematical Sciences, Vol. 3. Springer-Verlag, New York-Heidelberg, 1977.\n\n\\bibitem{Henry} D. Henry, Geometric Theory of Semilinear Parabolic Equations, Lecture Notes in Mathematics, 840. Springer-Verlag, Berlin-New York, 1981.\n \\newblock \\href {https:\/\/doi.org\/10.1007\/BFb0089649}\n {\\path{doi:10.1007\/BFb0089649}}.\n\n\\bibitem{HPT} L. V. Hien, V. N. Phat, H. 
Trinh, New generalized Halanay inequalities with applications to stability of nonlinear non-autonomous time-delay systems, Nonlinear Dynam. 82 (1-2) (2015) 563-575.\n \\newblock \\href {https:\/\/doi.org\/10.1007\/s11071-015-2176-0}\n {\\path{doi:10.1007\/s11071-015-2176-0}}.\n\n\\bibitem{Hino1} Y. Hino, S. Murakami, T. Yoshizawa, Almost periodic solutions of abstract functional-differential equations with infinite delay, Nonlinear Anal. 30 (2) (1997) 853-864.\n \\newblock \\href {https:\/\/doi.org\/10.1016\/S0362-546X(96)00196-4}\n {\\path{doi:10.1016\/S0362-546X(96)00196-4}}.\n\n\\bibitem{Jones}G. S. Jones, The existence of periodic solutions of $f'(x)=-\\alpha f(x-1)[1+f(x)]$, J. Math. Anal. Appl. 5 (1962) 435-450.\n \\newblock \\href {https:\/\/doi.org\/10.1016\/0022-247X(62)90017-3}\n {\\path{doi:10.1016\/0022-247X(62)90017-3}}.\n\n\\bibitem{Ju} X. W. Ju, D. S. Li, Global synchronising behavior of evolution equations with exponentially growing nonautonomous forcing, Commun. Pure Appl. Anal. 17 (5) (2018) 1921-1944.\n\n\\bibitem{Kaplan}J. L. Kaplan, J. A. Yorke, Ordinary differential equations which yield periodic solutions of differential delay equations, J. Math. Anal. Appl. 48 (1974) 317-324.\n \\newblock \\href {https:\/\/doi.org\/10.1016\/0022-247X(74)90162-0}\n {\\path{doi:10.1016\/0022-247X(74)90162-0}}.\n\n\\bibitem{Ken}B. Kennedy, Multiple periodic solutions of an equation with state-dependent delay, J. Dynam. Differential Equations 23 (2) (2011) 283-313.\n \\newblock \\href {https:\/\/doi.org\/10.1007\/s10884-011-9205-6}\n {\\path{doi:10.1007\/s10884-011-9205-6}}.\n\n\\bibitem{KL} P. E. Kloeden, T. Lorenz, Pullback attractors of reaction-diffusion inclusions with space-dependent delay, Discrete Contin. Dyn. Syst. Ser. B 22 (5) (2017) 1909-1964.\n \\newblock \\href {https:\/\/doi.org\/10.3934\/dcdsb.2017114}\n {\\path{doi:10.3934\/dcdsb.2017114}}.\n\n\\bibitem{Kloeden2} P. E. Kloeden, B. Schmalfuss, Nonautonomous systems, cocycle attractors and variable time-step discretization, Numer. Algorithms 14 (1-3) (1997) 141-152.\n \\newblock \\href {https:\/\/doi.org\/10.1023\/a:1019156812251}\n {\\path{doi:10.1023\/a:1019156812251}}.\n\n\\bibitem{Kloeden1} P. E. Kloeden, D. J. Stonier, Cocycle attractors in nonautonomously perturbed differential equations, Dynam. Contin. Discrete Impuls. Systems 4 (2) (1998) 211-226.\n \\newblock \\href {https:\/\/doi.org\/10.1080\/02681119808806260}\n {\\path{doi:10.1080\/02681119808806260}}.\n\n\\bibitem{Kuang} Y. Kuang, Delay Differential Equations with Applications in Population Dynamics, Academic Press, Inc., Boston, MA, 1993.\n\n\\bibitem{LL} V. Lakshmikantham, S. Leela, Differential and Integral Inequalities: Theory and Applications. Vol. II: Functional, Partial, Abstract, and Complex Differential Equations. Mathematics in Science and Engineering, Vol. 55-II. Academic Press, New York-London, 1969.\n\n\\bibitem{Layt} W. Layton, Existence of almost periodic solutions to delay differential equations with Lipschitz nonlinearities,\nJ. Differential Equations 55 (2) (1984) 151-164.\n\\newblock \\href {https:\/\/doi.org\/10.1016\/0022-0396(84)90079-2}\n {\\path{doi:10.1016\/0022-0396(84)90079-2}}.\n\n\\bibitem{LiYX} Y. X. Li, Existence and asymptotic stability of periodic solution for evolution equations with delays, J. Funct. Anal. 261 (5) (2011) 1309-1324.\n \\newblock \\href {https:\/\/doi.org\/10.1016\/j.jfa.2011.05.001}\n {\\path{doi:10.1016\/j.jfa.2011.05.001}}.\n\n\\bibitem{LLZ} C. Q. Li, D. S. Li, Z. J. 
Zhang, Dynamic bifurcation from infinity of nonlinear evolution equations, SIAM J. Appl. Dyn. Syst. 16 (4) (2017) 1831-1868.\n \\newblock \\href {https:\/\/doi.org\/10.1137\/16M1107358}\n {\\path{doi:10.1137\/16M1107358}}.\n\n\\bibitem{Lip} O. Lipovan, A retarded Gronwall-like inequality and its applications, J. Math. Anal. Appl. 252 (1) (2000) 389-401.\n \\newblock \\href {https:\/\/doi.org\/10.1006\/jmaa.2000.7085}\n {\\path{doi:10.1006\/jmaa.2000.7085}}.\n\n\\bibitem{WangGT} X. H. Liu, L. H. Zhang, P. Agarwal, G. T. Wang, On some new integral inequalities of Gronwall-Bellman-Bihari type with delay for discontinuous functions and their applications, Indag. Math. (N.S.) 27 (1) (2016) 1-10.\n \\newblock \\href {https:\/\/doi.org\/10.1016\/j.indag.2015.07.001}\n {\\path{doi:10.1016\/j.indag.2015.07.001}}.\n\n\\bibitem{MP} Q. H. Ma, J. Pe\\v{c}ari\\'c, Estimates on solutions of some new nonlinear retarded Volterra-Fredholm type integral inequalities, Nonlinear Anal. 69 (2) (2008) 393-407.\n \\newblock \\href {https:\/\/doi.org\/10.1016\/j.na.2007.05.027}\n {\\path{doi:10.1016\/j.na.2007.05.027}}.\n\n\\bibitem{MN} J. Mallet-Paret, R. D. Nussbaum, Global continuation and asymptotic behaviour for periodic solutions of a differential-delay equation, Ann. Mat. Pura Appl. (4) 145 (1986) 33-128.\n \\newblock \\href {https:\/\/doi.org\/10.1007\/BF01790539}\n {\\path{doi:10.1007\/BF01790539}}.\n\n\\bibitem{MR} P. Mar\\'{\\i}n-Rubio, J. Real, Pullback attractors for 2D-Navier-Stokes equations with delays in continuous and sub-linear operators, Discrete Contin. Dyn. Syst. 26 (3) (2010) 989-1006.\n \\newblock \\href {https:\/\/doi.org\/10.3934\/dcds.2010.26.989}\n {\\path{doi:10.3934\/dcds.2010.26.989}}.\n\n\\bibitem{MSS} M. Martelli, K. Schmitt, H. Smith, Periodic solutions of some nonlinear delay-differential equations, J. Math. Anal. Appl. 74 (2) (1980) 494-503.\n \\newblock \\href {https:\/\/doi.org\/10.1016\/0022-247X(80)90144-4}\n {\\path{doi:10.1016\/0022-247X(80)90144-4}}.\n\n\\bibitem{Naito} T. Naito, N. V. Minh, J. S. Shin, Periodic and almost periodic solutions of functional differential equations with finite and infinite delay, Nonlinear Anal. 47 (6) (2001) 3989-3999.\n \\newblock \\href {https:\/\/doi.org\/10.1016\/S0362-546X(01)00518-1}\n {\\path{doi:10.1016\/S0362-546X(01)00518-1}}.\n\n\\bibitem{NT} P. H. A. Ngoc, H. Trinh, On contraction of functional differential equations, SIAM J. Control Optim. 56 (3) (2018) 2377-2397.\n \\newblock \\href {https:\/\/doi.org\/10.1137\/16M1092672}\n {\\path{doi:10.1137\/16M1092672}}.\n\n\\bibitem{Nus2} R. D. Nussbaum, Periodic solutions of some nonlinear, autonomous functional differential equations. II, J. Differential Equations 14 (1973) 360-394.\n \\newblock \\href {https:\/\/doi.org\/10.1007\/bf02417109}\n {\\path{doi:10.1007\/bf02417109}}.\n\n\\bibitem{Ou}C. H. Ou, J. H. Wu, Periodic solutions of delay differential equations with a small parameter: existence, stability and asymptotic expansion, J. Dynam. Differential Equations 16(3) (2004) 605-628.\n \\newblock \\href {https:\/\/doi.org\/10.1007\/s10884-004-4294-0}\n {\\path{doi:10.1007\/s10884-004-4294-0}}.\n\n\\bibitem{B.G.Pach} B. G. Pachpatte, Inequalities for Differential and Integral Equations, Mathematics in Science and Engineering, 197. Academic Press, Inc., San Diego, CA, 1998.\n\n\\bibitem{Qin} Y. M. Qin, Integral and Discrete Inequalities and Their Applications. Vol. I. Linear inequalities. Birkh$\\ddot{\\mbox{a}}$user\/Springer, 2016.\n\n\\bibitem{SC} R. Samprogna, T. 
Caraballo, Pullback attractor for a dynamic boundary non-autonomous problem with infinite delay, Discrete Contin. Dyn. Syst. Ser. B 23 (2) (2018) 509-523.\n \\newblock \\href {https:\/\/doi.org\/10.3934\/dcdsb.2017195}\n {\\path{doi:10.3934\/dcdsb.2017195}}.\n\n\\bibitem{Seif} G. Seifert, Almost periodic solutions for delay-differential equations with infinite delays, J. Differential Equations 41 (3) (1981) 416-425.\n \\newblock \\href {https:\/\/doi.org\/10.1016\/0022-0396(81)90046-2}\n {\\path{doi:10.1016\/0022-0396(81)90046-2}}.\n\n\\bibitem{S} H. Smith, An Introduction to Delay Differential Equations with Applications to the Life Sciences. Texts in Appled Mathematics, 57. Springer, New York, 2011.\n \\newblock \\href {https:\/\/doi.org\/10.1007\/978-1-4419-7646-8}\n {\\path{doi:10.1007\/978-1-4419-7646-8}}.\n\n\\bibitem{TW2} C. C. Travis, G. F. Webb, Existence, stability, and compactness in the $\\alpha $-norm for partial functional differential equations. Trans. Amer. Math. Soc. 240 (1978) 129-143.\n \\newblock \\href {https:\/\/doi.org\/10.1090\/S0002-9947-1978-0499583-8}\n {\\path{doi:10.1090\/S0002-9947-1978-0499583-8}}.\n\n\\bibitem{TW1} C. C. Travis, G. F. Webb, Partial differential equations with deviating arguments in the time variable, J. Math. Anal. Appl. 56 (2) (1976) 397-409.\n \\newblock \\href {https:\/\/doi.org\/10.1016\/0022-247X(76)90052-4}\n {\\path{doi:10.1016\/0022-247X(76)90052-4}}.\n\n\\bibitem{Z}P. van den Driessche, X. F. Zou, Global attractivity in delayed Hopfield neural network models, SIAM J. Appl. Math. 58 (6) (1998) 1878-1890.\n\n\\bibitem{Walt3} H. O. Walther, A periodic solution of a differential equation with state-dependent delay, J. Differential Equations 244 (8) (2008) 1910-1945.\n \\newblock \\href {https:\/\/doi.org\/10.1016\/j.jde.2008.02.001}\n {\\path{doi:10.1016\/j.jde.2008.02.001}}.\n\n\\bibitem{Walt2}H. O. Walther, Topics in delay differential equations, Jahresber. Dtsch. Math.-Ver. 116 (2) (2014) 87-114.\n \\newblock \\href {https:\/\/doi.org\/10.1365\/s13291-014-0086-6}\n {\\path{doi:10.1365\/s13291-014-0086-6}}.\n\n\\bibitem{Wang}J. T. Wang, J. Q. Duan, D. S. Li, Compactly generated shape index theory and its application to a retarded nonautonomous parabolic equation, preprint, 2019. \\href{https:\/\/arxiv.org\/pdf\/1802.02867.pdf}{https:\/\/arxiv.org\/pdf\/1802.02867.pdf}\n\n\\bibitem{WK}Y. J. Wang, P. E. Kloeden, Pullback attractors of a multi-valued process generated by parabolic differential equations with unbounded delays, Nonlinear Anal. 90 (2013) 86-95.\n \\newblock \\href {https:\/\/doi.org\/10.1016\/j.na.2013.05.026}\n {\\path{doi:10.1016\/j.na.2013.05.026}}.\n\n\\bibitem{WLK} Y. J. Wang, D. S. Li, P. E. Kloeden, On the asymptotical behavior of nonautonomous\ndynamical systems, Nonlinear Anal. 59 (1-2) (2004) 35-53.\n\\newblock \\href {https:\/\/doi.org\/10.1016\/j.na.2004.03.035}\n {\\path{doi:10.1016\/j.na.2004.03.035}}.\n\n\\bibitem{WLL}Y. Q. Wang, J. Q. Lu and Y. J. Lou, Halanay-type inequality with delayed impulses and\nits applications, Sci. China Inf. Sci. 62 (9) (2019), 192206, 10pp.\n\\newblock \\href {https:\/\/doi.org\/10.1007\/s11432-018-9809-y}\n {\\path{doi:10.1007\/s11432-018-9809-y}}.\n\n\\bibitem{Wins} E. Winston, Asymptotic stability for ordinary differential equations with delayed perturbations, SIAM J. Math. Anal. 5 (1974) 303-308.\n \\newblock \\href {https:\/\/doi.org\/10.1137\/0505033}\n {\\path{doi:10.1137\/0505033}}.\n\n\\bibitem{Wu} J. H. 
Wu, Theory and Applications of Partial Functional-Differential Equations, Applied Mathematical Sciences, 119. Springer-Verlag, New York, 1996.\n \\newblock \\href {https:\/\/doi.org\/10.1007\/978-1-4612-4050-1}\n {\\path{doi:10.1007\/978-1-4612-4050-1}}.\n\n\\bibitem{YG}H. P. Ye, J. M. Gao, Henry-Gronwall type retarded integral inequalities and their applications to fractional differential equations with delay, Appl. Math. Comput. 218 (8) (2011) 4152-4160.\n \\newblock \\href {https:\/\/doi.org\/10.1016\/j.amc.2011.09.046}\n {\\path{doi:10.1016\/j.amc.2011.09.046}}.\n\n\n\\bibitem{Yoshi}T. Yoshizawa, Extreme stability and almost periodic solutions of functional-differential equations, Arch. Rational Mech. Anal. 17 (1964) 148-170.\n \\newblock \\href {https:\/\/doi.org\/10.1007\/BF00253052}\n {\\path{doi:10.1007\/BF00253052}}.\n\n\\bibitem{Yuan0} R. Yuan, Existence of almost periodic solutions of neutral functional-differential equations via Liapunov-Razumikhin function, Z. Angew. Math. Phys. 49 (1) (1998) 113-136.\n \\newblock \\href {https:\/\/doi.org\/10.1007\/s000330050084}\n {\\path{doi:10.1007\/s000330050084}}.\n\n\\bibitem{YuanR}R. Yuan, On almost periodic solutions of logistic delay differential equations with almost periodic time dependence, J. Math. Anal. Appl. 330 (2) (2007) 780-798.\n \\newblock \\href {https:\/\/doi.org\/10.1016\/j.jmaa.2006.08.027}\n {\\path{doi:10.1016\/j.jmaa.2006.08.027}}.\n\n\\bibitem{ZS} K. X. Zhu, C. Y. Sun, Pullback attractors for nonclassical diffusion equations with delays, J. Math. Phys. 56 (9) (2015).\n \\newblock \\href {https:\/\/doi.org\/10.1063\/1.4931480}\n {\\path{doi:10.1063\/1.4931480}}.\n\n\n\\end {thebibliography}\n\\end{document}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nOver the past years, there has been a surge of interest in machine learning in non-Euclidean domains. Representation learning in spaces of constant negative curvature, i.e. hyperbolic, has shown to outperform Euclidean embeddings significantly on data with latent hierarchical, taxonomic or entailment structures. Such data naturally arises in language modeling \\cite{nickel2017poincare, nickel2018learning, chamberlain2017neural, sala2018representation, ganea2018hyperbolic}, recommendation systems \\cite{chamberlain2019scalable}, image classification, and few-shot learning tasks \\cite{khrulkov2020hyperbolic}, to name a few. Grassmannian manifolds find applications in computer vision to perform video-based face recognition and image set classification \\cite{huang2018building, li2015face}. The Lie groups of transformations $\\mathrm{SO}(3)$ and $\\mathrm{SE}(3)$ appear naturally when dealing with problems like pose estimation and skeleton-based action recognition \\cite{hou2018computing, huang2017deep}. Stiefel manifolds are used for emotion recognition and action recognition from video data \\cite{chakraborty2019statistics, huang2017riemannian}. The space of symmetric positive definite (SPD) matrices characterize data from diffusion tensor imaging \\cite{pennec2006riemannian}, and functional magnetic resonance imaging \\cite{sporns2005human}. 
Among those advances is a family of neural network methods, which are parameterized by weights constrained to a particular non-Euclidean space \\cite{nickel2017poincare, chamberlain2017neural, huang2017deep, huang2018building, ganea2018hyperbolic, chamberlain2019scalable, khrulkov2020hyperbolic}.\n\nThe Stochastic Gradient Descent (SGD) algorithm \\cite{robbins1951stochastic} has been used for backpropagation learning in connectionist neural network models. The SGD is known to generate undesirable oscillations around optimal values of model parameters \\cite{qian1999momentum}. Using a momentum term \\cite{rumelhart1986learning} has been shown to improve the optimization convergence. More recently, adaptive learning rate methods \\cite{hinton2012neural, kingma2014adam, zeiler2012adadelta} have been proposed and have become popular optimization algorithms for training neural networks.\n\nThese adaptations of the SGD methods have been studied extensively through the lens of Riemannian geometry and translated to non-Euclidean settings \\cite{bonnabel2013stochastic, roy2018geometry, becigneul2018riemannian, kasai2019riemannian}. Riemannian stochastic optimization algorithms have been robustly integrated with many popular machine learning toolboxes. However, previous work in this space was mainly motivated by research use cases \\cite{townsend2016pymanopt, meghwanshi2018mctorch, miolane2020geomstats, kochurov2020geoopt}, whereas practical aspects, such as deploying and maintaining machine learning models, were often overlooked.\n\nWe present TensorFlow ManOpt, a Python library for optimization on Riemannian manifolds in TensorFlow \\cite{abadi2016tensorflow}, to help bridge the aforementioned gap. The library is designed with the aim of seamless integration with the TensorFlow ecosystem, targeting not only research, but also end-to-end machine learning pipelines with the recently proposed TensorFlow Extended platform \\cite{baylor2017tfx}.\n\n\\section{Riemannian optimization}\n\nIn this section we provide a brief overview of Riemannian optimization with a focus on stochastic methods. We refer to \\cite{edelman1998geometry, smith1994optimization, absil2009optimization} for a more rigorous theoretical treatment.\n\nOptimization on Riemannian manifolds, or Riemannian optimization, is a class of methods for optimization of the form\n\\begin{equation}\n \\min_{x \\in \\mathcal{M}} f\n \\label{eq:optim}\n\\end{equation}\nwhere $f$ is an objective function, subject to constraints which are smooth, in the sense that the search space $\\mathcal{M}$ admits the structure of a differentiable manifold, equipped with a positive-definite inner product on the space of tangent vectors $\\operatorname{T}_{x} \\mathcal{M}$ at each point $x \\in \\mathcal{M}$. 
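A classical instance of (\\ref{eq:optim}) is, for example, minimization of the Rayleigh quotient $f(x) = x^{\\top} C x$ of a symmetric matrix $C$ over the unit sphere $\\mathcal{M} = \\{x \\in \\mathbb{R}^n : \\|x\\|_2 = 1\\}$, whose minimizers are exactly the unit eigenvectors associated with the smallest eigenvalue of $C$. 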
Conceptually, the approach of Riemannian optimization translates the constrained optimization problem (\\ref{eq:optim}) into performing optimization in the Euclidean space $\\mathbb{R}^n$ and projecting the parameters back onto the search space $\\mathcal{M}$ after each iteration \\cite{absil2009optimization}.\n\nIn particular, several first order stochastic optimization methods commonly used in the Euclidean domain, such as the SGD algorithm and related adaptive methods, have already been adapted to Riemannian settings.\n\nThe update equation of the (Euclidean) SGD for differentiable $f$ takes the form\n\\begin{equation}\n x_{t+1} = x_t - \\alpha \\nabla f_t\n \\label{eq:sgd}\n\\end{equation}\nwhere $\\nabla f_t$ denotes the gradient of objective $f$ evaluated at the minibatch taken at step $t$, and $\\alpha > 0$ is the learning rate. For a Riemannian manifold $\\mathcal{M}$, Riemannian SGD \\cite{bonnabel2013stochastic} performs the update\n\\begin{equation}\n x_{t+1} = \\operatorname{Exp}_{x_t}(- \\alpha \\operatorname{\\pi}_{x_t} \\nabla f_t)\n \\label{eq:rsgd}\n\\end{equation}\nwhere $\\operatorname{\\pi}_{x} : \\mathbb{R}^n \\to \\operatorname{T}_{x} \\mathcal{M}$ denotes the projection operator from the ambient Euclidean space on the tangent space $\\operatorname{T}_{x} \\mathcal{M}$. And $\\operatorname{Exp}_x(u) : \\operatorname{T}_{x} \\mathcal{M} \\to \\mathcal{M}$ is the exponential map operator, which moves a vector $u \\in \\operatorname{T}_{x} \\mathcal{M}$ back to the manifold $\\mathcal{M}$ from the tangent space at $x$.\n\nIn a neighborhood of $x$, the exponential map identifies a point on the geodesic, thus guarantees decreasing the objective function. Intuitively, the exponential map enables to perform an update along the shortest path in the relevant direction in unit time, while remaining in the manifold. In practice, when $\\operatorname{Exp}_x(u)$ is not known in closed-form or it is computationally expensive to evaluate, it is common to replace it by a retraction map $\\operatorname{R}_x(u)$, which is a first-order approximation of the exponential.\n\nAdaptive optimization techniques compute smoother estimates of the gradient vector using its first or second order moments. \\mbox{RMSProp} \\cite{hinton2012neural} is an example of a gradient based optimization algorithm which does this by maintaining an exponentially moving average of the squared gradient, which is an estimate of the second raw moment of the gradient. The update equations for RMSProp can be expressed as follows\n\\begin{equation}\n \\begin{aligned}\n m_{t+1} = \\rho m_{t} + (1 - \\rho) (\\nabla f_t \\odot \\nabla f_t) \\\\\n x_{t+1} = x_t - \\alpha \\frac{\\nabla f_t}{\\sqrt{m_{t+1}} + \\epsilon}\n \\end{aligned}\n \\label{eq:rmsprop}\n\\end{equation}\nwhere $\\rho$ is a hyperparameter, $\\alpha$ is the learning rate, and $\\odot$ denotes the \\textit{Hadamard} product.\n\nIn the Euclidean settings, these estimates can be obtained by linearly combining previous moment vectors due to the inherent ``flat'' nature of the underlying space. However, since general Riemannian manifolds can be curved, it is not possible to simply add moment vectors at different points on the manifold, as the resulting vectors may not lie in the tangent spaces at either points.\n\nRiemannian versions of adaptive SGD algorithms use the parallel transport to circumvent this issue \\cite{roy2018geometry, becigneul2018riemannian}. 
The parallel transport operator $\\operatorname{P}_{x \\to y}(v) : \\operatorname{T}_x \\mathcal{M} \\to \\operatorname{T}_y \\mathcal{M}$ takes $v \\in \\operatorname{T}_x \\mathcal{M}$ and outputs $v' \\in \\operatorname{T}_y \\mathcal{M}$. Informally, $v'$ is obtained by moving $v$ in a ``parallel'' fashion along the geodesic curve connecting $x$ and $y$, where the intermediate vectors obtained through this process have constant norm and always belong to tangent spaces. Such a transformation enables combining moment vectors computed at different points of the optimization trajectory.\n\nGeometry-aware constrained RMSProp (cRMSProp) \\cite{roy2018geometry} translates Equation (\\ref{eq:rmsprop}) to Riemannian settings as follows\n\\begin{equation}\n \\begin{aligned}\n m_{t+1} = \\rho \\operatorname{P}_{x_{t-1} \\to x_{t}} m_{t} + (1 - \\rho) \\operatorname{\\pi}_{x_t} (\\nabla f_t \\odot \\nabla f_t) \\\\\n x_{t+1} = \\operatorname{Exp}_{x_t}(- \\alpha \\frac{\\nabla f_t}{\\sqrt{m_{t+1}} + \\epsilon})\n \\end{aligned}\n \\label{eq:crmsprop}\n\\end{equation}\n\nIn practice, if $\\operatorname{P}_{x\\to y}(v)$ is not known for a given Riemannian manifold or it is computationally expensive to evaluate, it is replaced by its first-order approximation $\\mathfrak{T}_{x\\to y}(v)$.\n\nWhile many optimization problems are of the described form, technicalities of differential geometry and implementation of corresponding operators often pose a significant barrier for experimenting with these methods.\n\n\\section{Design goals}\n\nWorking out and implementing gradients is a laborious and error-prone task, particularly when the objective function acts on higher rank tensors. TensorFlow ManOpt builds upon the TensorFlow framework \\cite{abadi2016tensorflow}, and leverages its automatic differentiation capabilities for computing gradients of composite functions. The design of TensorFlow ManOpt was informed by three main objectives:\n\n\\begin{enumerate}\n \\item \\textbf{Interoperability with the TensorFlow ecosystem.} All components, including optimization algorithms and manifold-constrained variables, can be used as drop-in replacements with the native TensorFlow API. This ensures transparent integration with the rich ecosystem of tools and extensions in both research and production settings.\n Specifically, TensorFlow ManOpt was tested to be compatible with the TensorFlow Extended platform in \\textit{graph} and \\textit{eager} execution modes.\n \\item \\textbf{Computational efficiency.} TensorFlow ManOpt aims to provide closed-form expressions for manifold operators. The library also implements numerical approximation as a fallback option when such solutions are not available. TensorFlow ManOpt solvers can perform updates on dense and sparse tensors efficiently.\n \\item \\textbf{Robustness and numerical stability.} The library makes use of half-, single-, and double-precision arithmetic where appropriate.\n\\end{enumerate}\n\n\\section{Implementation overview}\n\nThe package implements concepts in differential geometry, such as manifolds and Riemannian metrics with the associated exponential and logarithmic maps, geodesics, retractions, and transports. For manifolds, where closed-form expressions are not known, the library provides numerical approximations. 
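To make the role of these operators concrete, a retraction-based version of the Riemannian SGD step (\\ref{eq:rsgd}) can be sketched in a few lines; this is only an illustrative sketch, assuming a manifold object that exposes the \\texttt{proju} and \\texttt{retr} operators documented in the next subsection, and it is not the library's actual optimizer implementation.\n\n\\begin{minipage}{\\linewidth}\n\\begin{lstlisting}[language=Python,caption={Schematic retraction-based Riemannian SGD step (illustration only)},label={lst:rsgd_sketch}]\ndef riemannian_sgd_step(manifold, x, grad, lr):\n    # Project the ambient (Euclidean) gradient onto the tangent space at x\n    rgrad = manifold.proju(x, grad)\n    # Step against the Riemannian gradient and retract back onto the manifold\n    return manifold.retr(x, -lr * rgrad)\n\\end{lstlisting}\n\\end{minipage}\n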
The core module also exposes functions for assigning TensorFlow variables to manifold instances.\n\n\\subsection{Manifolds}\n\nThe module \\texttt{manifolds} is modeled after the Manopt \\cite{boumal2014manopt} API, and provides implementations of the following manifolds:\n\n\\begin{itemize}\n \\item \\texttt{Cholesky} -- manifold of lower triangular matrices with positive diagonal elements \\cite{lin2019riemannian}\n \\item \\texttt{Euclidian} -- an unconstrained manifold with the Euclidean metric\n \\item \\texttt{Grassmannian} -- manifold of $p$-dimensional linear subspaces of the $n$-dimensional space \\cite{edelman1998geometry}\n \\item \\texttt{Hyperboloid} -- manifold of $n$-dimensional hyperbolic space embedded in the $n+1$-dimensional Minkowski space\n \\item \\texttt{Poincare} -- the Poincar{\\'e} ball model of the hyperbolic space\\cite{nickel2017poincare}\n \\item \\texttt{Product} -- Cartesian product of manifolds \\cite{gu2018learning}\n \\item \\texttt{SPDAffineInvariant} -- manifold of symmetric positive definite (SPD) matrices endowed with the affine-invariant metric \\cite{pennec2006riemannian}\n \\item \\texttt{SPDLogCholesky} -- SPD manifold with the Log-Cholesky metric \\cite{lin2019riemannian}\n \\item \\texttt{SPDLogEuclidean} -- SPD manifold with the Log-Euclidean metric \\cite{arsigny2007geometric}\n \\item \\texttt{SpecialOrthogonal} -- manifold of rotation matrices\n \\item \\texttt{Sphere} -- manifold of unit-normalized points\n \\item \\texttt{StiefelEuclidean} -- manifold of orthonormal $p$-frames in the $n$-dimensional space endowed with the Euclidean metric\n \\item \\texttt{StiefelCanonical} -- Stiefel manifold with the canonical metric \\cite{zimmermann2017matrix}\n \\item \\texttt{StiefelCayley} -- Stiefel manifold the retraction map via an iterative Cayley transform \\cite{li2019efficient}\n\\end{itemize}\n\nEach manifold is implemented as a Python class, which inherits the abstract base class \\texttt{Manifold}. The minimal set of methods required for optimization includes:\n\n\\begin{itemize}\n \\item \\texttt{inner(x, u, v)} -- inner product (i.e., the Riemannian metric) between two tangent vectors $u$ and $v$ in the tangent space at $x$\n \\item \\texttt{proju(x, u)} -- projection of a tangent vector $u$ in the ambient space on the tangent space at point $x$\n \\item \\texttt{retr(x, u)} -- retraction $\\operatorname{R}_x(u)$ from point $x$ with given direction $u$. Retraction is a first-order approximation of the exponential map introduced in \\cite{bonnabel2013stochastic}\n \\item \\texttt{transp(x, y, v)} -- vector transport $\\mathfrak{T}_{x\\to y}(v)$ of a tangent vector $v$ at point $x$ to the tangent space at point $y$. 
Vector transport is the first-order approximation of parallel transport\n\\end{itemize}\n\nSelected manifolds also support exact computations for additional operators:\n\n\\begin{itemize}\n \\item \\texttt{exp(x, u)} -- exponential map $\\operatorname{Exp}_x(u)$ of a tangent vector $u$ at point $x$ to the manifold\n \\item \\texttt{log(x, y)} -- logarithmic map $\\operatorname{Log}_{x}(y)$ of a point $y$ to the tangent space at $x$\n \\item \\texttt{ptransp(x, y, v)} -- parallel transport $\\operatorname{P}_{x\\to y}(v)$ of a vector $v$ from the tangent space at $x$ to the tangent space at $y$\n\\end{itemize}\n\nAll methods support broadcasting of tensors with different shapes to compatible shapes for arithmetic operations.\n\n\\subsection{Optimizers}\n\nThe module \\texttt{optimizers} provides a TensorFlow 2 API for optimization algorithms on Riemannian manifolds, including recently proposed adaptive methods:\n\n \\begin{itemize}\n \\item \\texttt{RiemannianSGD} -- stochastic Riemannian gradient descent \\cite{bonnabel2013stochastic}\n \\item \\texttt{ConstrainedRMSprop} -- constrained RMSprop learning method \\cite{roy2018geometry}\n \\item \\texttt{RiemannianAdam} -- Riemannian Adam and AMSGrad algorithms \\cite{becigneul2018riemannian}\n \\end{itemize}\n\n Algorithms are implemented to support dense and sparse updates, as well as serialization, which is crucial for compatibility with TensorFlow functions.\n\n\\subsection{Layers}\n\nFinally, the module \\texttt{layers} exemplifies building blocks of neural networks, which can be used alongside the native Keras \\cite{chollet2015keras} layers.\n\n\\section{Usage}\n\nA simple illustrative example of using the low-level API is depicted in Listing~\\ref{lst:low_level_api}. There, TensorFlow ManOpt closely follows the Manopt semantics and naming convention. The geometric meaning of these operations is visualized in Figure~\\ref{fig:low_level_api}.\n\n\\begin{minipage}{\\linewidth}\n\\begin{lstlisting}[language=Python,caption={Low-level API usage example},label={lst:low_level_api}]\nimport tensorflow as tf\nimport tensorflow_manopt as manopt\n\n# Instantiate a manifold\nS = manopt.manifolds.Sphere()\nx = S.projx(tf.constant([0.1, -0.1, 0.1]))\nu = S.proju(x, tf.constant([1., 1., 1.]))\nv = S.proju(x, tf.constant([-0.7, -1.4, 1.4]))\n\n# Compute the exponential map and vector transports\ny = S.exp(x, v)\nu_ = S.transp(x, y, u)\nv_ = S.transp(x, y, v)\n\\end{lstlisting}\n\\end{minipage}\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.5\\linewidth]{usage.png}\n \\caption{Geometric operations on the $\\mathbb{S}^2$ manifold}\n \\label{fig:low_level_api}\n\\end{figure}\n\nConstructing an optimization problem is demonstrated in Listing~\\ref{lst:optimization_api}. Manifold-valued variables can be transparently passed to standard TensorFlow functions. 
And, conversely, native TensorFlow tensors are treated by TensorFlow ManOpt algorithms as data in the Euclidean space.\n\n\\begin{minipage}{\\linewidth}\n\\begin{lstlisting}[language=Python,caption={Optimization API usage example},label={lst:optimization_api}]\nimport tensorflow as tf\nimport tensorflow_manopt as manopt\n\n# Number of training steps\nSTEPS = 10\n\n# Instantiate a manifold\nsphere = manopt.manifolds.Sphere()\n\n# Instantiate and assign a variable to the manifold\nvar = tf.Variable(sphere.random(shape=(N, 2)))\nmanopt.variable.assign_to_manifold(var, sphere)\n\n# Instantiate an optimizer algorithm\nopt = manopt.optimizers.RiemannianAdam(learning_rate=0.2)\npole = tf.constant([0., 1.])\n\n# Compute the objective function and apply gradients\nfor _ in range(STEPS):\n with tf.GradientTape() as tape:\n loss = tf.linalg.norm(var - pole)\n grad = tape.gradient(loss, [var])\n opt.apply_gradients(zip(grad, [var]))\n\\end{lstlisting}\n\\end{minipage}\n\n\\section{Advanced Usage}\n\nThe folder \\texttt{examples} contains reference implementations of several neural network architectures with manifold-valued parameters. For example, the Rotation Mapping layer of the LieNet \\cite{huang2017deep} with weights constrained to the $\\mathrm{SO}(3)$ manifold is shown on Listing~\\ref{lst:layer_api}.\n\n\\begin{minipage}{\\linewidth}\n\\begin{lstlisting}[language=Python,caption={Layers API usage example},label={lst:layer_api}]\n@tf.keras.utils.register_keras_serializable(name=\"RotMap\")\nclass RotMap(tf.keras.layers.Layer):\n \"\"\"Rotation Mapping layer.\"\"\"\n\n def build(self, input_shape):\n \"\"\"Create weights depending on the shape of the inputs.\n\n Expected `input_shape`:\n `[batch_size, spatial_dim, temp_dim, num_rows, num_cols]`, where\n `num_rows` = 3, `num_cols` = 3, `temp_dim` is the number of frames,\n and `spatial_dim` is the number of edges.\n \"\"\"\n input_shape = input_shape.as_list()\n so = SpecialOrthogonal()\n self.w = self.add_weight(\n \"w\",\n shape=[input_shape[-4], input_shape[-2], input_shape[-1]],\n initializer=so.random,\n )\n assign_to_manifold(self.w, so)\n\n def call(self, inputs):\n return tf.einsum(\"sij,...stjk->...stik\", self.w, inputs)\n\\end{lstlisting}\n\\end{minipage}\n\n\\section{Related work}\nThis section compares TensorFlow ManOpt with related implementations on differential geometry and learning. While inspired by related work, the main difference of our library lies in the choice of the underlying framework and design objectives.\n\nThe library Geoopt \\cite{kochurov2020geoopt} is the most closely related to TensorFlow ManOpt. Geoopt is a research-oriented toolbox, which builds upon the PyTorch \\cite{paszke2019pytorch} library for tensor computation and GPU acceleration. Geoopt supports the Riemannian SGD, as well as adaptive optimization algorithms on Riemannian manifolds, and Markov chain Monte Carlo methods for sampling. However, PyTorch, being a research-first framework, lacks tooling for bringing machine learning pipelines to production, which limits Geoopt applicability in industrial settings.\n\nMcTorch \\cite{meghwanshi2018mctorch} is a manifold optimization library for deep learning, that extends the PyTorch C++ code base to closely follow its architecture. McTorch supports Riemannian variants of stochastic optimization algorithms, and also implements a collection of neural network layers with manifold-constrained parameters. 
McTorch shares the same limitations as Geoopt due to its dependency on PyTorch.\n\nPymanopt \\cite{townsend2016pymanopt} is a Python package that computes gradients and Hessian-vector products on Riemannian manifolds, and provides the following solvers: steepest descent, conjugate gradients, the Nelder-Mead algorithm, and the Riemannian trust regions. Pymanopt leverages Theano \\cite{team2016theano} symbolic differentiation and TensorFlow automatic differentiation for computing derivatives. Pymanopt is a great general-purpose tool for Riemannian optimization. However, it is not well-suited for neural network applications due to its lack of support for SGD algorithms. It is also not intended for production use.\n\nLast but not least, there is the Geomstats \\cite{miolane2020geomstats} Python package for computations and statistics on nonlinear manifolds. Geomstats supports a broad list of manifolds such as hyperspheres, hyperbolic spaces, spaces of symmetric positive definite matrices and Lie groups of transformations. It also provides a modular library of differential geometry concepts, which includes the parallel transports, exponential and logarithmic maps, Levi-Civita connections, and Christoffel symbols. Geomstats focuses on research in differential geometry and education use cases, by providing low-level code that follows the Scikit-Learn \\cite{pedregosa2011scikit} semantics. Geomstats examples also include a modified version of the Keras framework with support for the Riemannian SGD algorithm. However, it lacks engineering maintenance.\n\n\\section{Conclusion}\nWe propose TensorFlow ManOpt, a library that combines optimization on Riemannian manifolds with the automatic differentiation capabilities of the TensorFlow framework. The library enables researchers to experiment with different state-of-the-art solvers for optimization problems in non-Euclidean domains, while also allowing practitioners to efficiently pursue large-scale machine learning. The benefits of TensorFlow ManOpt are most noticeable when it comes to taking Riemannian machine learning models from research to production, where it unlocks advantages of TensorFlow tooling, such as continuous training and model validation.\n\n\\bibliographystyle{abbrv}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Centrality determination in NA61\/SHINE}\nCentrality is a key parameter in heavy ion collisions as it allows one to restrict the volume of the created system. The classical definition of centrality is based on the impact parameter, which is not directly measurable. Therefore experiments use different techniques to define centrality indirectly. The most common method is to apply restrictions on the produced particle multiplicity in some pseudorapidity intervals \\cite{ALICE centrality}. This definition possibly biases multiplicity fluctuation measurements. Therefore NA61\/SHINE determines centrality using information from the forward hadron calorimeter PSD \\cite{Abgrall:2014fa}, which measures the energy of all non-interacting forward nucleons. \n\\section{Fluctuation measures}\nFluctuation quantities are studied to probe the critical point (CP) of strongly interacting matter. The most widely used is the scaled variance of the multiplicity distribution:\n\\begin{eqnarray}\n &\\omega[N] = (\\langle N^{2}\\rangle -{\\langle N\\rangle}^{2})\/\\langle N\\rangle,\n\\end{eqnarray}\nwhere $\\langle\\cdots\\rangle$ stands for averaging over all events. 
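For orientation, a Poisson multiplicity distribution gives $\\langle N^{2}\\rangle -{\\langle N\\rangle}^{2}=\\langle N\\rangle$ and hence $\\omega[N]=1$, so deviations of $\\omega[N]$ from unity quantify correlations in particle production. 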
Within the Wounded Nucleon Model (WNM) or the ideal Boltzmann multi-component gas in the grand canonical ensemble (IB-GCE) one can write \\cite{Gorenstein:2011vq}:\n\\begin{eqnarray}\n &\\omega[N] = \\omega[n]+\\omega[W]\\langle N\\rangle\/\\langle W\\rangle,\n\\end{eqnarray}\nwhere $n$ is the multiplicity produced from one wounded nucleon or from a fixed volume (IB-GCE) and $W$ is the number of wounded nucleons. To probe the CP it is important to suppress the volume fluctuation ($\\omega[W]$). Therefore the scaled variance should be measured for the most central collisions or strongly intensive quantities should be used. In Refs.~\\cite{Gorenstein:2011vq,Gazdzicki:2013ana} the following measures were proposed:\n\\begin{eqnarray}\n &\\Delta[A,B] = \\frac{1}{C_{\\Delta}} \\biggl[ \\langle B \\rangle \\omega[A] -\n \\langle A \\rangle \\omega[B] \\biggr] \\\\\n &\\Sigma[A,B] = \\frac{1}{C_{\\Sigma}} \\biggl[ \\langle B \\rangle \\omega[A] +\n \\langle A \\rangle \\omega[B] - 2 \\bigl( \\langle AB \\rangle -\n \\langle A \\rangle \\langle B \\rangle \\bigr) \\biggr],\n\\end{eqnarray}\nwhere $A$ and $B$ are extensive quantities and $C_{\\Delta}$ and $C_{\\Sigma}$ are normalization coefficients, which are chosen so that $\\Delta=\\Sigma=1$ for the models of independent particle production.\nA new quantity $\\Omega$~\\cite{Poberezhnyuk:2015Omega} can be constructed from $\\Delta$ and $\\Sigma$ by setting $C_{\\Delta}=C_{\\Sigma}=\\langle B \\rangle$:\n\\begin{eqnarray}\n&\\Omega[A,B]=\\frac{1}{2}(\\Delta[A,B]+\\Sigma[A,B]),\n\\end{eqnarray}\nIf $A$ and $B$ are uncorrelated from a fixed volume within the IB-GCE or from a single source then $\\Omega[A,B]=\\omega[a]$, where $\\omega[a]$ is $\\omega[A]$ in the fixed volume or the scaled variance of $A$ from a single source. Therefore for the most central collisions one expects that $\\Omega[A,B]\\approx\\omega[A]$.\n\\section{Analysis procedure}\nPreliminary results were obtained from forward energy selected Ar+Sc collisions at 19, 30, 40, 75 and 150{\\it A} GeV\/c respectively. The analysis was performed in the NA61\/SHINE acceptance with the restriction $00\n\\]\n\\[\n\\|u\\|_{X_k}=2^{\\frac{k}2} \\|u\\|_{L^2_{t,x}(A_{<-k})} + \\sup_{j\\ge -k} \\|(|x|+2^{-k})^{-1\/2} u\\|_{L^2_{t,x}(A_j)},\n\\quad k\\le 0.\n\\]\nThe local smoothing space ${X}$ is the completion of the Schwartz space\nwith respect to the norm\n\\[\n\\|u\\|_{{X}}^2 = \\sum_{k=-\\infty}^\\infty 2^k \\|S_k u\\|^2_{X_k}.\n\\]\nIts dual ${X}'$ has norm \n\\[\n\\|f\\|_{{X}'}^2=\\sum_{k=-\\infty}^\\infty 2^{-k} \\|S_k f\\|^2_{X_k'}.\n\\]\n\nIn dimension $n \\geq 3$ the space ${X}$ is a space of distributions,\nand we have the Hardy type inequality\n\\begin{equation}\n\\| \\langle x\\rangle^{-1} u\\|_{L^2_{t,x}} \\lesssim \\| u\\|_{{X}}.\n\\label{Hardy}\\end{equation}\nOn the other hand in dimensions $n=1,2$, the space ${X}$ is a space of\ndistributions modulo constants, and we have the BMO type inequality\n\\begin{equation}\n\\sum_{j\\ge 0}\\| \\langle x\\rangle^{-1} (u-u_{D_j})\\|^2_{L^2_{t,x}(A_j)} \\lesssim \\| u\\|^2_{{X}}\n\\label{Hardylow}\\end{equation}\nwhere $u_{D_j}$ represents the (time dependent) average \nof $u$ in $D_j$.\n At the same time ${X}'$ contains only\nfunctions with integral zero. We refer the reader to \\cite{T} for\nmore details. \n\n\nIn \\cite{T} the case of a small perturbation\nof the Laplacian is considered, and it is proved that\n\\begin{theorem}~\\cite{T}. 
Assume that either\n\n(i) $n \\geq 3$ and \\eqref{coeff}, \\eqref{coeffb},\\eqref{coeffc} hold\nwith a sufficiently small $\\kappa$ or \n\n(ii) $n = 1,2$, $b^i=0$, $c=0$ and \\eqref{coeff} holds\nwith a sufficiently small $\\kappa$.\n\nThen the local smoothing estimate \n\\begin{equation}\n \\label{lsesmall}\n\\| u\\|_{{X} \\cap L^\\infty_t L^2_x} \\lesssim \\|u_0\\|_{L^2}\n+ \\|f\\|_{{X}'+L^1_tL^2_x} \n\\end{equation}\nholds for all solutions $u$ to \\eqref{main.equation}.\n\\end{theorem}\n\nAs one can see, the assumptions are more restrictive in low\ndimensions. This is related to the spectral structure of the operator\n$A$, precisely to the presence of a resonance at zero. This is the case\nif $A = -\\Delta$ or, more generally, if $b^i=0$ and $c=0$. However\nthe zero resonance is unstable with respect to lower order perturbations.\nTo account for non-resonant situations, it is convenient to \nintroduce a stronger norm which removes the quotient structure,\n\\[\n\\begin{split}\n\\|u\\|_{{{\\tilde{X}}}}^2 = &\\ \\| \\langle x\\rangle^{-1} u\\|_{L^2_{t,x}}^2 + \n \\sum_{k=-\\infty}^\\infty 2^k \\|S_k u\\|^2_{X_k}, \\qquad n \\neq 2\n\\\\\n\\|u\\|_{{{\\tilde{X}}}}^2 = &\\ \\| \\langle x\\rangle^{-1} (\\ln (2+|x|))^{-1} u\\|_{L^2_{t,x}}^2 + \n \\sum_{k=-\\infty}^\\infty 2^k \\|S_k u\\|^2_{X_k}, \\qquad n = 2.\n\\end{split}\n\\]\nIts dual is \n\\[\n{{\\tilde{X}}}' = {X}' + \\langle x \\rangle L^2_{t,x}, \\quad n \\neq 2,\\qquad {{\\tilde{X}}}' = {X}' + \\langle x\n \\rangle (\\ln(2+|x|)) L^2_{t,x}, \\quad n = 2.\n\\]\nDue to the Hardy inequality above, if $n \\geq 3$ we have ${{\\tilde{X}}} = {X}$. \nOn the other hand in low dimension the ${{\\tilde{X}}}$\n norm adds some local\nsquare integrability to the ${X}$ norm. Precisely, we have \n\\begin{lemma}\nLet $n=1,2$. Then\n\\begin{equation}\n\\| u\\|_{{{\\tilde{X}}}} \\lesssim \\|u\\|_{{X}} + \\|u\\|_{L^2_{t,x}(\\{|x| \\leq 1\\})}.\n\\end{equation}\n\\label{txdx}\n\\end{lemma}\n\n\nThe first goal of this article is to show, without any trapping\nassumption, that loss-less (with respect to regularity), global-in-time local smoothing and\nStrichartz estimates hold exterior to a sufficiently large ball,\nmodulo a localized error term. It is hoped that this error term can\nbe separately estimated for applications of interest. Moreover, in\nthe case of finite times, this error term can be trivially estimated\nby the energy inequality and immediately yields a $C^2$, long range,\ntime dependent analog of the result of \\cite{BT}.\n\nFor $M$ fixed and sufficiently large so that\n\\eqref{coeff.M}, \\eqref{coeffb.M} and \\eqref{coeffc.M} hold,\nwe consider a smooth, radial, nondecreasing cutoff function $\\rho$ which is supported in\n$\\{|x|\\ge 2^M \\}$ with $\\rho(|x|)\\equiv 1$ for $|x|\\ge 2^{M+1}$. \nThen we define the exterior local smoothing space ${{\\tilde{X}}}_e$\nwith norm\n\\[\n\\| u\\|_{{{\\tilde{X}}}_e} = \\| \\rho u\\|_{{{\\tilde{X}}}} + \\| (1-\\rho) u\\|_{L^2_{t,x}}\n\\]\nand the dual space ${{\\tilde{X}}}'_e$ with norm \n\\[\n\\|f\\|_{{{\\tilde{X}}}'_e} = \\inf_{f = \\rho f_1 +\n (1-\\rho) f_2} \\| f_1\\|_{{{\\tilde{X}}}'} + \\|f_2\\|_{L^2_{t,x}}. \n\\]\nNow we can state our exterior local smoothing estimates.\n\\begin{theorem}\n \\label{main.ls.theorem}\n Let $n \\geq 1$. Assume that the coefficients $a^{ij}$, $b^i$ and\n $c$ are real and satisfy \\eqref{coeff}, \\eqref{coeffb},\n \\eqref{coeffc}. 
Then the solution $u$ to \\eqref{main.equation}\n satisfies\n\\begin{equation}\n \\label{lsext}\n\\| u\\|_{{{\\tilde{X}}}_e \\cap L^\\infty_t L^2_x} \\lesssim \\|u_0\\|_{L^2}\n+ \\|f\\|_{{{\\tilde{X}}}'_e+L^1_tL^2_x} + \n \\|u\\|_{L^2_{t,x}(\\{|x|\\le 2^{M+1}\\})}.\n\\end{equation}\n\\end{theorem}\n\nIn the low dimensional resonant case the situation is a bit more\ndelicate. First of all, the above theorem does not give a meaningful estimate in the \n$n=1,2$ resonant case as the last term in the right of \\eqref{lsext} blows up for\nconstant functions, which correspond to the zero resonance.\n Since we do not control the local $L^2$ norm for ${X}$\nfunctions, truncation by the cutoff function $\\rho$ does not preserve\nthe ${X}$ space. To remedy this we define a time dependent local\naverage for $u$, namely\n\\[\nu_\\rho = \\left( \\int_{{\\mathbb R}^n} (1-\\rho) \\ dx \\right)^{-1} \\int_{{\\mathbb R}^n} (1-\\rho) u\\ dx,\n\\]\nand define a modified truncation by the self-adjoint operator\n\\[\nT_\\rho u = \\rho u + (1-\\rho) u_\\rho.\n\\]\nWe note that $T_\\rho$ leaves constant functions unchanged, as well as the\nintegral of $u$ (if finite).\n\nThen we set\n\\[\n\\| u\\|_{{X}_e} = \\| T_\\rho u\\|_{{X}} + \\| u - T_\\rho u \\|_{L^2_{t,x}}\n\\]\nand have the dual space ${X}'_e$ with norm \n\\[\n\\|f\\|_{{X}'_e} = \\inf_{f = T_\\rho f_1 + (1-T_\\rho) f_2} \n\\| f_1\\|_{{X}'} + \\|f_2\\|_{L^2_{t,x}}. \n\\]\nWe now have the following alternative to\nTheorem~\\ref{main.ls.theorem} which is consistent with operators with\na constant zero resonance:\n\n\n\\begin{theorem}\n \\label{main.ls.theorem.res}\n Let $n = 1,2$. Assume that \n\n(i) the coefficients $a^{ij}$ are real and satisfy \\eqref{coeff};\n\n(ii) the coefficients $b^i$ are real, satisfy \\eqref{coeffb},\nand $\\partial_i b^i = 0$;\n\n(iii) there are no zero order terms, $c = 0$.\n\n Then the solution $u$ to \\eqref{main.equation}\n satisfies\n\\begin{equation}\n \\label{lsext.res}\n\\| u\\|_{{X}_e \\cap L^\\infty_t L^2_x} \\lesssim \\|u_0\\|_{L^2}\n+ \\|f\\|_{{X}'_e+L^1_tL^2_x} + \n \\|u - u_\\rho\\|_{L^2_{t,x}(\\{|x|\\le 2^{M+1}\\})}.\n\\end{equation}\n\\end{theorem}\n\nOnce we have the local smoothing estimates, the parametrix\nconstruction in \\cite{T} allows us to obtain corresponding Strichartz\nestimates. If $(p,q)$ is a Strichartz pair we define the exterior\nspace ${{\\tilde{X}}}_e(p,q)$ with norm\n\\[\n\\| u\\|_{{{\\tilde{X}}}_e(p,q)} = \\| u\\|_{{{\\tilde{X}}}_e} + \\| \\rho u\\|_{L^p_t L^q_x}\n\\]\nand the dual space ${{\\tilde{X}}}'(p,q)$ with norm \n\\[\n\\|f\\|_{{{\\tilde{X}}}'_e(p,q)} = \\inf_{f = f_1 + \\rho f_2} \\| f_1\\|_{{{\\tilde{X}}}'_e} + \n\\|f_2\\|_{L^{p'}_t L^{q'}_x}. \n\\]\n\n\\begin{theorem}\n \\label{main.est.theorem}\n Let $n \\geq 1$. Assume that the coefficients $a^{ij}$, $b^i$ and\n $c$ are real and satisfy \\eqref{coeff}, \\eqref{coeffb},\n \\eqref{coeffc}. 
Then for any two Strichartz pairs\n $(p_1,q_1)$ and $(p_2, q_2)$, the solution $u$ to\n \\eqref{main.equation} satisfies\n\\begin{equation}\n \\label{fse}\n\\| u\\|_{{{\\tilde{X}}}_e(p_1,q_1) \\cap L^\\infty_t L^2_x} \\lesssim \\|u_0\\|_{L^2}\n+ \\|f\\|_{{{\\tilde{X}}}'_e(p_2,q_2)+L^1_tL^2_x} + \n \\|u\\|_{L^2_{t,x}(\\{|x|\\le 2^{M+1}\\})}.\n\\end{equation}\n\\end{theorem}\n\n\n\nCorrespondingly, in the resonant case we define\n\\[\n\\| u\\|_{{X}_e(p,q)} = \\| u\\|_{{X}_e} + \\| \\rho u\\|_{L^p_t L^q_x}\n\\]\nand the dual space ${X}'_e(p,q)$ with norm \n\\[\n\\|f\\|_{{X}'_e(p,q)} = \\inf_{f = f_1 + \\rho f_2} \\| f_1\\|_{{X}'_e} + \n\\|f_2\\|_{L^{p'}_t L^{q'}_x}. \n\\]\nThen we have \n\n\\begin{theorem}\n \\label{main.est.theorem.res}\n Let $n = 1,2$. Assume that the coefficients of $P$ are as in\n Theorem~\\ref{main.ls.theorem.res}. Then for any two Strichartz\n pairs $(p_1,q_1)$ and $(p_2, q_2)$, the solution $u$ to\n \\eqref{main.equation} satisfies\n\\begin{equation}\n \\label{fselow}\\| u\\|_{{X}_e(p_1,q_1)\\cap L^\\infty_t L^2_x} \\lesssim \n\\|u_0\\|_{L^2} + \\|f\\|_{{X}'_e(p_2,q_2)+L^1_t L^2_x} \n + \\|u-u_\\rho\\|_{L^2_{t,x}(\\{|x|\\le 2^{M+1}\\})}.\n\\end{equation}\n\\end{theorem}\n\n\nIn both cases the space-time norms are over $[0,T]\\times {\\mathbb R}^n$ for any\ntime $T>0$ with constants independent of $T$.\nIf the time $T$ is finite, then we may use energy estimates to\ntrivially bound the error term. Doing so results in the following,\nwhich is a $C^2$-analog of the exterior Strichartz estimates of\n\\cite{BT}.\n\\begin{corr}\n \\label{corr.main.theorem}\n(a.) Assume that the coefficients $a^{ij}$, $b^i$, and $c$ are as in\n Theorem~\\ref{main.ls.theorem}. Then for any two Strichartz pairs\n $(p_1,q_1)$ and $(p_2, q_2)$, the solution $u$ to\n \\eqref{main.equation} satisfies\n\\begin{equation}\n \\label{fse2}\n\\| u\\|_{{{\\tilde{X}}}(p_1,q_1)\\cap L^\\infty_t L^2_x} \\lesssim_T \\|u_0\\|_{L^2}\n+ \\|f\\|_{{{\\tilde{X}}}'(p_2,q_2)+L^1_t L^2_x}.\n\\end{equation}\n\n(b.) Assume that the coefficients $a^{ij}$ and $b^i$ are as in\nTheorem~\\ref{main.ls.theorem.res}. 
Then for any two Strichartz pairs\n$(p_1,q_1)$ and $(p_2,q_2)$, the solution $u$ to \\eqref{main.equation}\nsatisfies\n\\begin{equation}\n \\label{fse2.res}\n\\|u\\|_{{X}(p_1,q_1)\\cap L^\\infty_t L^2_x}\\lesssim_T \\|u_0\\|_{L^2}\n+ \\|f\\|_{{X}'(p_2,q_2)+L^1_tL^2_x}.\n\\end{equation}\n\nIn both cases, the space-time norms are over $[0,T]\\times {\\mathbb R}^n$ and \n$T>0$ is finite.\n\\end{corr}\n\nWe conclude this subsection with a few remarks concerning \nseveral alternative set-ups for these results.\n\n\n\\subsubsection{ Boundary value problems }\n\nOur proof of Theorems~\\ref{main.ls.theorem},\\ref{main.ls.theorem.res},\n\\ref{main.est.theorem}~\\ref{main.est.theorem.res} treats the interior \nof the ball $B = \\{ |x| < 2^M \\}$ as a black box with the sole property\nthat the energy is conserved by the evolution.\nHence the results remain valid for exterior boundary problems.\nPrecisely, take a bounded domain $\\Omega \\subset B$\nand consider either the Dirichlet problem\n\\begin{equation}\n\\left\\{ \\begin{array}{lc} Pu = f & \\text{in } \\Omega^c \n\\cr u(0) = u_0 & \\cr \nu = 0& \\text{in } \\partial \\Omega \\end{array} \\right.\n\\label{D}\\end{equation}\nor the Neumann problem\n\\begin{equation}\n\\left\\{ \\begin{array}{lc} Pu = f & \\text{in } \\Omega^c \n\\cr u(0) = u_0 & \\cr \\displaystyle \n\\frac{\\partial u}{\\partial \\nu} = 0& \\text{in } \\partial \\Omega\n \\end{array} \\right.\n\\label{N}\\end{equation}\nwhere\n\\[\n\\frac{\\partial}{\\partial \\nu} = \\nu_i (a^{ij} D_j + b^i)\n\\]\nand $\\nu$ is the unit normal to $ \\partial \\Omega$.\n\n\nThen we have \n\\begin{corr}\n a) The results in Theorems~\\ref{main.ls.theorem} and\n \\ref{main.est.theorem} remain valid for both the Dirichlet\n problem \\eqref{D} and the Neumann problem \\eqref{N}.\n\n b) The results in Theorems~ \\ref{main.ls.theorem.res} and\n \\ref{main.est.theorem.res} remain valid for \nthe Neumann problem \\eqref{N} with the additional condition\n$b^i \\nu_i = 0$ on $ \\partial \\Omega$.\n\\end{corr}\n\nThe more restrictive hypothesis in part (b) is caused by the \nrequirement that constant functions solve the homogeneous\nproblem.\n\n\n\\subsubsection{Complex coefficients} \n\nThe only role played in our proofs by the assumption\nthat the coefficients $b^i$ and $c$ are real is to insure \nthe energy conservation in the interior region. Hence \nwe can allow complex coefficients in the region $\\{ |x| > 2^{M+1}\\}$\nwhere the coefficients satisfy the smallness condition.\n\nIn addition, allowing $c$ to be complex in the interior region does\nnot affect energy conservation either, since we are assuming \nan a priori control of the local $L^2$ space-time norm of the solution.\nHence we have\n\n\\begin{remark}\na) The results in Theorems~\\ref{main.ls.theorem} and \n\\ref{main.est.theorem} remain valid for complex coefficients\n $b^i$, $c$ with the restriction that $b^i$ are real \nin the region $\\{ |x| < 2^{M+1}\\}$.\n\nb) The results in Theorems~ \\ref{main.ls.theorem.res} and\n\\ref{main.est.theorem.res} remain valid for coefficients $b^i$ which\nare real in the region $\\{ |x| < 2^{M+1}\\}$.\n\\end{remark}\n\n\n\\subsection{ Non-trapping metrics}\nThe second goal of the article is to consider the previous setup but\nwith an additional non-trapping assumption. 
To state it we consider\nthe Hamilton flow $H_a$ for the principal symbol of the operator \n$A$, namely \n\\[\na(t,x,\\xi) = a^{ij}(t,x) \\xi_i \\xi_j.\n\\]\nThe spatial projections of the trajectories of the Hamilton flow $H_a$\nare the geodesics for the metric $a_{ij} d x^i dx^j$ where\n$(a_{ij})=(a^{ij})^{-1}$.\n\n\\begin{deff}\n We say that the metric $(a_{ij})$ is non-trapping if for each $R > 0$\n there exists $L > 0$ independent of $t$ so that any portion of a\n geodesic contained in $\\{|x| < R\\}$ has length at most $L$.\n\\end{deff}\n\nThe non-trapping condition allows us to use standard propagation of\nsingularities techniques to bound high frequencies inside a ball in\nterms of the high frequencies outside. Then the cutoff function\n$\\rho$ which was used before is no longer needed, and we obtain\n\\begin{theorem}\n \\label{main.ls.theoremnt}\n Let $R > 0$ be sufficiently large. Assume that the coefficients $a^{ij}$, $b^i$ and $c$ are real and\n satisfy \\eqref{coeff}, \\eqref{coeffb}, \\eqref{coeffc}. Assume also\n that the metric $a_{ij}$ is non-trapping. Then the solution $u$ to\n \\eqref{main.equation} satisfies\n\\begin{equation}\n \\label{lsnt}\n\\| u\\|_{{{\\tilde{X}}}} \\lesssim \\|u_0\\|_{L^2}\n+ \\|f\\|_{{{\\tilde{X}}}'} + \n \\|u\\|_{L^2_{t,x}(\\{|x|\\le 2R\\})},\n\\end{equation}\n\\end{theorem}\n\nrespectively \n\n\\begin{theorem}\n \\label{main.ls.theoremnt.res}\n Let $R > 0$ be sufficiently large, and let $n=1,2$. Assume that the coefficients of $P$ are as\n in Theorem~\\ref{main.ls.theorem.res}. Assume also that the metric\n $a_{ij}$ is non-trapping. Then the solution $u$ to\n \\eqref{main.equation} satisfies\n\\begin{equation}\n \\label{lsntres}\n\\| u\\|_{{X}} \\lesssim \\|u_0\\|_{L^2}\n+ \\|f\\|_{{X}'} + \n \\|u-u_\\rho\\|_{L^2_{t,x}(\\{|x|\\le 2R\\})}.\n\\end{equation}\n\\end{theorem}\n\nWe note that the high frequencies in the error term on the right are\ncontrolled by the ${X}$ norm on the left. Also the low frequencies ($\\ll\n1$) are controlled by the ${X}$ norm using the uncertainty\nprinciple. Hence the only nontrivial part of the error term corresponds\nto intermediate (i.e. $\\approx 1$ ) frequencies.\n\nThe proof combines the arguments used for the exterior estimates\nwith a standard multiplier construction from the theory of propagation\nof singularities. Adding to the above results the parametrix\nobtained in \\cite{T} we obtain\n\n\n\\begin{theorem}\n \\label{main.theoremnt}\n Let $R > 0$ be sufficiently large. Assume that the coefficients $a^{ij}$, $b^i$ and\n $c$ are real and satisfy \\eqref{coeff}, \\eqref{coeffb}, \\eqref{coeffc}. Assume\n also that the metric $a_{ij}$ is non-trapping. Then for any two\n Strichartz pairs $(p_1,q_1)$ and $(p_2, q_2)$, the solution $u$ to\n \\eqref{main.equation} satisfies\n\\begin{equation}\n \\label{fsent}\n\\| u\\|_{{{\\tilde{X}}} \\cap L^{p_1}_t L^{q_1}_x} \\lesssim \\|u_0\\|_{L^2}\n+ \\|f\\|_{{{\\tilde{X}}}'+L^{p'_2}_t L^{q'_2}_x} + \n \\|u\\|_{L^2_{t,x}(\\{|x|\\le 2R\\})},\n\\end{equation}\n\\end{theorem}\n\nrespectively \n\n\\begin{theorem}\n \\label{main.theoremnt.res}\n Let $n = 1,2$, and let $R > 0$ be sufficiently large. Assume that the coefficients of $P$ are as\n in Theorem~\\ref{main.ls.theorem.res}. Assume also that the metric\n$a_{ij}$ is non-trapping. 
Then for any two Strichartz pairs $(p_1,q_1)$\nand $(p_2, q_2)$, the solution $u$ to \\eqref{main.equation} satisfies\n\\begin{equation}\n \\label{fsentres}\\| u\\|_{{X}\\cap L^{p_1}_t L^{q_1}_x} \\lesssim \\|u_0\\|_{L^2}\n + \\|f\\|_{{X}'+L^{p'_2}_t L^{q'_2}_x} \n + \\| u - u_\\rho\\|_{L^2_{t,x}(\\{|x|\\le 2R\\})}.\n\\end{equation}\n\\end{theorem}\n\n\n\\subsubsection{ An improved result for trapped metrics}\n\nA variation on the above theme is obtained in the case when there are\ntrapped rays, but not too many. If they exist, they must be confined\nto the interior region $\\{|x| \\leq 2^M\\}$. Then we can define the\nconic set\n\\[\n\\begin{split}\n \\Omega_{trapped}^L = \\{&\\ (t,x,\\xi) \\in {\\mathbb R} \\times T^* B(0,2^M);\n \\text{ the $H_a$ bicharacteristic through } (t,x,\\xi)\n\\\\ & \\text{ has\n length at least } L \\text{ within } |x| \\leq 2^M\\}.\n\\end{split}\n\\]\nGiven a smooth zero homogeneous symbol $q(x,\\xi)$ which \nequals $1$ for $|x| > 2^M$, we define modified exterior spaces\nby\n\\[\n\\| u\\|_{{{\\tilde{X}}}_q} = \\| q(x,D) u\\|_{{{\\tilde{X}}}} + \\| u\\|_{L^2(\\{|x| \\leq 2^{M+1}\\})} \n\\]\nwith similar modifications for ${{\\tilde{X}}}'_q$, ${X}_q$ and ${X}'_q$.\n\nThen the same argument as in the proof of the above theorems\ngives\n\n\\begin{corr} \n Assume that $q$ is supported outside $ \\Omega_{trapped}^L$ for some\n $L > 0$. Then the results in Theorems~\\ref{main.ls.theorem},\n \\ref{main.est.theorem}, \\ref{main.ls.theorem.res} and\n \\ref{main.est.theorem.res} remain valid with ${{\\tilde{X}}}_e$, ${{\\tilde{X}}}'_e$,\n ${X}_e$ and ${X}'_e$ replaced by ${{\\tilde{X}}}_q$, ${{\\tilde{X}}}'_q$, ${X}_q$ and\n ${X}'_q$.\n\\end{corr}\n\nWe also note that if $A$ has time independent coefficients then\n$\\Omega_{trapped}^L$ is translation invariant. Hence a compactness\nargument allows us to replace $\\Omega_{trapped}^L$ by\n$\\Omega_{trapped}^\\infty$, which contains all the trapped geodesics.\n\n\\subsubsection{Boundary value problems}\n\nConsider solutions $u$ for either the Dirichlet problem \\eqref{D} or\nthe Neumann problem \\eqref{N}. Then singularities will propagate along\ngeneralized broken bicharacteristics (see \\cite{MS1, MS2},\\cite{Hor},\\cite{BGT2}).\nHence the non-trapping condition needs to be modified accordingly.\n\n\\begin{deff}\n We say that the metric $(a_{ij})$ is non-trapping if for each $R > 0$\n there exists $L > 0$ independent of $t$ so that any portion of a\n generalized broken bicharacteristic contained in $\\{|x| < R\\}$\n has length at most $L$.\n\\end{deff}\n\nWith this modification the results of\nTheorems~\\ref{main.ls.theoremnt} and \\ref{main.ls.theoremnt.res} remain\nvalid. However, some care must be taken with the results on\npropagation of singularities near the boundary, as not all of them are\nknown to be valid for operators with only $C^2$ coefficients.\n\nOn the other hand we do not know whether the bounds in Theorems\n\\ref{main.theoremnt}, \\ref{main.theoremnt.res} are true or not. These\nhinge on the validity of local Strichartz estimates near the boundary.\nThis is currently an unsolved problem.\n\n\\subsubsection{ Complex coefficients}\n\nAgain, one may ask to what extent our results in this section \nremain valid if complex coefficients are allowed. 
We have \n\\begin{remark}\n The results in Theorems~\\ref{main.ls.theoremnt},\n \\ref{main.ls.theoremnt.res}, \\ref{main.theoremnt},\n \\ref{main.theoremnt.res} remain valid if the coefficients $b^i$ and\n $c$ are allowed to be complex.\n\n\\end{remark}\n\nThis result is obtained without making any changes to our proofs\nprovided that the constant $\\kappa$ in \\eqref{coeffb} is sufficiently\nsmall. Otherwise, the multiplier $q$ used in the proof has to change \ntoo much along bicharacteristics from entry to exit from \n$B(0,2^M)$; this in turn forces a modified multiplier for the exterior\nregion. See, e.g., \\cite{D1, Doi} and \\cite{ST}.\n\n\n\\subsection{ Time independent metrics}\n\nIt is natural to ask when one can eliminate the error term altogether.\nThis is a very delicate question, which hinges on the local in space\nevolution of low frequency solutions. For general operators $A$ with \ntime dependent coefficients this question seems out of reach for now.\n\n\nThis leads us to the third part of the paper where, in addition to the\nflatness assumption above and the non-trapping hypothesis on $a_{ij}$, we\ntake our coefficients $a^{ij},b^i,c$ to be time-independent. Then the natural \nobstruction to the dispersive estimates comes from possible\neigenvalues and zero resonances of the operator $A$.\n\nSince the operator $A$ is self-adjoint, it follows that its spectrum is\nreal. More precisely, $A$ has a continuous spectrum $\\sigma_c=\n[0,\\infty)$ and a point spectrum $\\sigma_p$ consisting of discrete\nfinite multiplicity eigenvalues in ${\\mathbb R}^-$, whose only possible\naccumulation point is $0$. \n\nFrom the point of view of dispersion there is nothing we can do about\neigenvalues. Consequently we introduce the spectral projector $P_c$\nonto the continuous spectrum, and obtain dispersive estimates only for\n$P_c u$ for solutions $u$ to \\eqref{main.equation}. \n\nThe resolvent \n\\[\nR_\\lambda = (\\lambda - A)^{-1}\n\\]\nis well defined in ${\\mathbb C} \\setminus (\\sigma_c \\cup \\sigma_p)$.\nOne may ask whether there is any meromorphic continuation \nof the resolvent $R_\\lambda$ across the positive real axis, starting\non either side. This is indeed possible. The poles of this\nmeromorphic continuation are called resonances.\nThis is of interest to us because the resonances which are close to\nthe real axis play an important role in the long time behavior\nof solutions to the Schr\\\"{o}dinger equation. \n\nIn the case which we consider here (asymptotically flat), there are no\nresonances or eigenvalues inside the continuous spectrum, i.e. in\n$(0,\\infty)$. However, the bottom of the continuous spectrum,\nnamely $0$, may be either an eigenvalue (if $n \\geq 5$) or \na resonance (if $n \\leq 4$). For zero resonances we use\na fairly restrictive definition:\n\n\n\\begin{deff}\nWe say that $0$ is a resonance for $A$ if there is a function \n$u \\in {{\\tilde{X}}}^0$ so that $Au = 0$. The function $u$ \nis called a zero resonant state of $A$.\n\\end{deff}\n\nHere ${{\\tilde{X}}}^0$ denotes the spatial part of the ${{\\tilde{X}}}$ norm, i.e. ${{\\tilde{X}}} = L^2_t {{\\tilde{X}}}^0$.\n\n\nThe main case we consider here is when $0$ is neither an eigenvalue\n(if $n \\geq 5$) nor a resonance (if $n \\leq 4$). This implies that\nthere are no eigenvalues close to $0$. 
Then $A$ has at most finitely\nmany negative eigenvalues, and the corresponding eigenfunctions decay\nexponentially at infinity.\n\n\n\\begin{theorem}\\label{theorem.PcSmoothing} \n Suppose that $a^{ij},b^i,c$ are real, time-independent, and satisfy the\n conditions \\eqref{coeff},\\eqref{coeffb}, and\n \\eqref{coeffc}. We also assume that the Hamiltonian vector field\n $H_a$ permits no trapped geodesics and that $0$ is not an\n eigenvalue or a resonance of $A$. Then for all solutions $u$ to\n \\eqref{main.equation} we have\n\\begin{equation}\\label{PcSmoothing}\n\\|P_c u\\|_{{\\tilde{X}}}\\lesssim \\|u_0\\|_2 + \\| f\\|_{{{\\tilde{X}}}'}.\n\\end{equation}\n\\end{theorem}\nFrom this, using the parametrix of \\cite{T}, we immediately obtain the corresponding global-in-time\nStrichartz estimates:\n\\begin{theorem}\\label{corr.nontrap.Strichartz}\n \n Suppose that $a^{ij},b^i,c$ are real, time-independent, and\n satisfy the conditions \\eqref{coeff},\\eqref{coeffb},\n and \\eqref{coeffc}. Moreover, assume that the Hamiltonian vector\n field $H_a$ permits no trapped geodesics. Assume, also, that $0$ is\n not an eigenvalue or a resonance of $A$. Then for all solutions $u$\n to \\eqref{main.equation}, we have\n\\begin{equation}\\label{PcStrichartz}\n\\|P_c u\\|_{L^{p_1}_t L^{q_1}_x \\cap {{\\tilde{X}}}} \\lesssim \\|u_0\\|_2 + \\|f\\|_{L^{p'_2}_t\n L^{q'_2}_x+{{\\tilde{X}}}'} ,\n\\end{equation}\nfor any Strichartz pairs $(p_1,q_1)$ and $(p_2,q_2)$.\n\\end{theorem}\n\nOne can compare this with the result of \\cite{RT}, where the authors\nconsider a smooth compactly supported perturbation of the metric in\n$3+1$ dimensions where no eigenvalues are present. Estimates in the\nspirit of \\eqref{PcStrichartz} have also recently be shown by\n\\cite{BT2}, though only for smooth coefficients and with a more\nrestrictive spectral projection. We also note the related work\n\\cite{EGS} on Schr\\\"odinger equations with magnetic potentials. In\ntheir work, the second order operator is taken to be $-\\Delta$.\nTheorem \\ref{corr.nontrap.Strichartz} is a more general version of\nthe main theorem in \\cite{EGS} in the sense that it allows a more\ngeneral leading order operator and that it assumes less flatness on\nthe coefficients.\n\n\nIn dimension $n \\geq 3$ zero is not an eigenvalue or a resonance \nfor $-\\Delta$, nor for small perturbations of it. However, in\ndimension $n=1,2$, zero is a resonance and the corresponding\nresonant states are the constant functions. This spectral picture\nis not stable with respect to lower order perturbations, but it does\nremain stable with respect to perturbations of the metric $a^{ij}$.\nHence there is some motivation to also investigate this case \nin more detail. We prove the following result.\n\n\n\n\\begin{theorem}\\label{theorem.PcSmoothing.res} \n Assume that the coefficients of $P$ are time-independent, but otherwise as in\n Theorem~\\ref{main.ls.theorem.res}. Assume also that the Hamiltonian\n vector field $H_a$ permits no trapped geodesics, and that there are\n no nonconstant zero resonant states of $A$. Then for all\n solutions $u$ to \\eqref{main.equation}, we have\n\\begin{equation}\\label{PcSmoothing.res}\n\\| u\\|_{X}\\lesssim \\|u_0\\|_2 + \\| f\\|_{{X}'}.\n\\end{equation}\n\\end{theorem}\n\nIn terms of Strichartz estimates, this has the following consequence:\n\n\\begin{theorem}\\label{corr.nontrap.Strichartz.res}\n Assume that the coefficients of $P$ are time-independent, but otherwise as in\n Theorem~\\ref{main.ls.theorem.res}. 
Assume also that the Hamiltonian\n vector field $H_a$ permits no trapped geodesics, and that there are\n no nonconstant zero resonant states of $A$. Then for all\n solutions $u$ to \\eqref{main.equation}, we have\n\\begin{equation}\\label{PcStrichartz.res}\n\\| u\\|_{L^{p_1}_t L^{q_1}_x \\cap {X}} \\lesssim \\|u_0\\|_2 + \\|f\\|_{L^{p'_2}_t\n L^{q'_2}_x+{X}'} \n\\end{equation}\nfor any Strichartz pairs $(p_1,q_1)$ and $(p_2,q_2)$.\n\\end{theorem}\n\nImplicit in the above theorems is the fact that there are, under their\nhypothesis, no eigenvalues for $A$. There is another\nsimplification if we make the additional assumption that $b = 0$.\n\n\\begin{remark}\\label{remark.nootherres}\nIf in addition $b = 0$, then there are no nonconstant generalized zero\neigenvalues of $A$.\n\\end{remark}\n\nIn order to prove Theorems \\ref{theorem.PcSmoothing} and\n\\ref{theorem.PcSmoothing.res}, we restate the bounds\n\\eqref{PcSmoothing} and \\eqref{PcSmoothing.res} in terms of estimates\non the resolvent using the Fourier transform in $t$. We then argue\nvia contradiction. Using the positive commutator method, we show an\noutgoing radiation condition (see Steps 8-10 of the proof), which allows us\nto pass to subsequences and claim that if \\eqref{PcSmoothing} were\nfalse, then there is a resonance or an eigenvalue $v$ within the\ncontinuous spectrum. By hypothesis this cannot occur at $0$. We use\nanother multiplier and the radiation condition to then show that $v\\in\nL^2$ and thus cannot be a resonance. As results of \\cite{KT2} show\nthat there are no eigenvalues embedded in the continuous spectrum, we\nreach a contradiction. If instead \\eqref{PcSmoothing.res} were false,\nthen the same argument produces a nonconstant zero resonance, again\nreaching a contradiction.\n \nThe paper is organized as follows. In the next section, we fix some\nfurther notations and our paradifferential setup. It is here that we\nshow that we may permit the lower order terms in the local smoothing\nestimates in a perturbative manner.\nIn the third section, we prove the local smoothing estimates using the\npositive commutator method, first in the exterior local smoothing\nspaces and then in the non-trapping case. The fourth section is\ndevoted to non-trapping, time-independent operators.\n In the final section, we review the parametrix of \\cite{T} \nand use it to show how the Strichartz estimates follow\nfrom the local smoothing estimates. \n\n{\\em Acknowledgements:} The authors thank W. Schlag and M. Zworski for helpful discussions\nregarding some of the spectral theory, and in particular the behavior of resonances, contained herein.\n\n\\bigskip\n\\newsection{Notations and the paradifferential setup}\\label{not_para}\n\n\\subsection{Notations}\nWe shall be using dyadic decompositions of both space and frequency.\nFor the spatial decomposition, we let $\\chi_k$ denote smooth functions\nsatisfying\n\\[\n1=\\sum_{j=0}^\\infty \\chi_j(x),\\quad \\text{supp } \\chi_0\\subset\\{|x|\\le 2\\},\n\\quad \\text{supp }\\chi_j\\subset \\{2^{j-1}<|x|<2^{j+1}\\}\\text{ for } j\\ge 1.\n\\]\nWe also set\n\\[\n\\chi_{k}$. In frequency, we use a\nsmooth Littlewood-Paley decomposition\n\\[\n1=\\sum_{j=-\\infty}^\\infty S_j(D), \\quad \\text{supp }\ns_j\\subset\\{2^{j-1}<|\\xi|<2^{j+1}\\}\n\\]\nand similar notations for $S_{k}$ are applied.\n\nWe say that a function is frequency localized at frequency $2^k$ if its Fourier transform is supported\nin the annulus $\\{2^{k-1}<|\\xi|<2^{k+1}\\}$. 
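As a simple illustration of how this notion is used below (this is standard Littlewood-Paley reasoning rather than anything specific to the present paper), suppose that $f$ is frequency localized at frequency $2^k$ and that $a$ is a coefficient whose Fourier transform is supported in $\\{|\\xi|\\le 2^{k-10}\\}$. Then\n\\[\n\\text{supp } \\widehat{af} \\subset \\text{supp } \\hat{a} + \\text{supp } \\hat{f} \\subset \\{2^{k-1}-2^{k-10}\\le |\\xi|\\le 2^{k+1}+2^{k-10}\\} \\subset \\{2^{k-2}<|\\xi|<2^{k+2}\\},\n\\]\nso multiplication by such a coefficient barely enlarges the frequency support. This is the mechanism behind the following definition.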
An operator $K$ is said to be frequency localized if\n$Kf$ is supported in $\\{2^{k-10}<|\\xi|<2^{k+10}\\}$ for any function $f$ which is frequency localized at\n$2^k$.\n\nFor $\\kappa$ as in \\eqref{coeff}, we may choose a positive, slowly varying sequence $\\kappa_j\\in \\ell^1$ satisfying\n\\begin{equation}\n\\label{kappa_j}\n\\sup_{A_j} \\langle x\\rangle^2 |\\partial_x^2 a(t,x)| + \\langle x\\rangle |\\partial_x a(t,x)| + |a(t,x)-I_n|\\le \\kappa_j,\n\\end{equation}\n$$\\sum \\kappa_j \\lesssim \\kappa,$$\nand\n$$|\\ln \\kappa_j - \\ln \\kappa_{j-1}|\\le 2^{-10}.$$\nWhen the lower order terms are present, we may choose $\\kappa_j$ so that each dyadic piece of\n\\eqref{coeffb} is also controlled similarly.\nWe may also assume that $M$ in \\eqref{coeff.M} is chosen sufficiently large that\n$$\\sum_{j\\ge M} \\kappa_j \\lesssim \\varepsilon.$$\nAssociated to this slowly varying sequence, we may choose functions $\\kappa_k(s)$ with\n$$\\kappa_0<\\kappa_k(s)<2\\kappa_0,\\quad 0\\le s < 2,$$\n$$\\kappa_j<\\kappa_k(s)<2\\kappa_j,\\quad 2^j 0$, we\nhave the dyadic bound\n\\[\n\\| \\langle x \\rangle^{-1} S_k u\\|_{L^2} \\lesssim \\|S_k u\\|_{X_k} \n\\]\nwhich we can easily sum over $k$ to obtain\n\\[\n\\| \\langle x \\rangle^{-1} S_{>0} u\\|_{L^2} \\lesssim \\|u\\|_{{X}}.\n\\]\n\n\nFor frequencies $k < 0$ it is easy to see that \n\\begin{equation}\\label{Tkbdd}\n\\|(1-T_k) S_k u \\|_{X_k} \\lesssim \\|S_k u \\|_{X_k}\n\\end{equation}\nfollows from the bound\n\\begin{equation}\\label{bernstein}\n \\|\\chi_{<-k} S_k u\\|_{L^2_t L^\\infty_x}\\lesssim 2^{\\frac{n-1}{2}k}\\|S_k u\\|_{X_k},\\quad k\\le 0.\n\\end{equation}\nwhich is a consequence of Bernstein's inequality.\n\nThe gain is that $(1-T_k) S_k u(t,0)=0$. This leads to the improved\npointwise bound\n\\[\n |x|^{-1}|(1-T_k) S_k u| \\lesssim 2^{\\frac{n+1}2 k} \\|S_k u \\|_{X_k},\\quad |x|<2^{-k}\n\\]\nand further to the improved $L^2$ bound\n\\begin{equation}\n\\sup_{j} \\| (2^k|x|+2^{-k}|x|^{-1})^\\frac12 |x|^{-1} (1-T_k) S_k\nu\\|_{L^2(A_j)} \\lesssim 2^{\\frac{k}2} \\|S_k u \\|_{X_k}.\n\\label{impl2}\\end{equation}\nThen, by orthogonality with respect \nto spatial dyadic regions, we can sum up\n\\[\n\\| \\langle x\\rangle^{-1} \\sum_{k < 0} (1-T_k) S_k u \\|_{L^2}^2 \\lesssim \\|u\\|_{{X}}^2 \n\\]\nwhich combined with the previous high frequency bound yields\n\\begin{equation}\n\\|\\langle x \\rangle^{-1} u^{out}\\|_{L^2} \\lesssim \\|u\\|_{{X}}.\n\\label{l2uout}\\end{equation}\n\n\nFor the terms in $u^{in}$, differentiation yields a $2^k$ factor, and\ntherefore we can estimate\n\\begin{equation}\n\\| u^{in}\\|_{\\dot H^1} \\lesssim \\|u\\|_{{X}}.\n\\label{l2uin}\\end{equation}\nIt remains to prove the bounds\n\\begin{equation}\n\\| \\langle x \\rangle^{-1} v \\|_{L^2} \\lesssim \\|v\\|_{L^2(B(0,1))} + \\|\nv\\|_{\\dot H^1},\n\\qquad n=1\n\\label{n=1emb} \n\\end{equation}\nrespectively\n\\begin{equation}\n\\| \\langle x \\rangle^{-1} (\\ln (1+ \\langle x \\rangle))^{-1}\n v \\|_{L^2} \\lesssim \\|v\\|_{L^2(B(0,1))} + \\|\nv\\|_{\\dot H^1},\n\\qquad n=2.\n\\label{n=2emb} \n\\end{equation}\n\nDue to the first factor in the right of both estimates, we may without loss of generality take\n$v$ to vanish in $B(0,1\/2)$.\nFor \\eqref{n=1emb} we integrate\n\\[\n2 \\int_{1\/2}^R x^{-1} v v_x dx = \\int_{1\/2}^R x^{-2} v^2 dx + R^{-1}\nv^2(R).\n\\]\nUsing Cauchy-Schwarz the conclusion follows.\n\nFor \\eqref{n=2emb} we argue in a similar fashion.\nWe have\n\\begin{multline*}\n2 \\int_{B_R \\setminus B_{1\/2}} |x|^{-2} (\\ln (2+|x|^2))^{-1} v x 
\\nabla v dx = \n\\int_{B_R \\setminus B_{1\/2}} (2+|x|^2)^{-1} (\\ln (2+|x|^2))^{-2} v^2 dx \n\\\\ + \\int_{\\partial B_R}|x|^{-1} (\\ln (2+|x|^2))^{-1} v^2 d\\sigma\n\\end{multline*}\nand conclude again by Cauchy-Schwarz. The lemma is proved. \\qed\n\nOn a related note, we include here another result which\nsimplifies the type of local error terms we allow in the non-trapping\ncase.\n\n\\begin{lemma}\nLet $n \\geq 1$ and $R > 0$. Then for each $\\epsilon > 0$ there is \n$m_\\epsilon > 0$ and $c_\\epsilon > 0$\nso that\n\\begin{equation}\\label{dXerror}\n\\| \\langle x \\rangle^{-\\frac32} u\\|_{L^2} \\leq \\epsilon \\|u\\|_{{X}} + c_\\epsilon\n\\|S_{k} S_{k} S_{k} u\\|_{L^2} < 1, \\qquad \n\\|u\\|_{L^2(\\{|x| < R\\})} = 0.\n\\]\nBut $u$ is also frequency localized in $|\\xi| < 2^{m+1}$ and\nis therefore analytic. Then the last condition above implies $u = 0$ which is a \ncontradiction.\n\\end{proof}\n\n\n\n\n\\subsection{Paradifferential calculus}\n\nHere, we seek to frequency localize the coefficients of $P$.\nA similar argument is present in \\cite{T}, where for solutions at \nfrequency $2^k$ the coefficients are localized at frequency\n\\[\n|\\xi| \\ll 2^{k\/2} \\langle x\\rangle^{-1\/2}.\n\\]\nSuch a strong localization was essential there in order to carry out\nthe parametrix construction. Here we are able to keep the setup\nsimpler and use a classical paradifferential construction, where for\nsolutions at frequency $2^k$ the coefficients are localized at\nfrequency below $ 2^k$. For a fixed frequency scale $2^k$, we set\n\\begin{align*}\na^{ij}_{(k)} = S_{0\n\\\\\n|\\partial^\\alpha (a^{ij}_{(k)}-I_n)|&\\lesssim \\kappa_k(|x|)\n{2^{|\\alpha| k}}{\\langle 2^k x \\rangle^{-|\\alpha|}},\\quad \n|\\alpha|\\le 2, \\quad k\\leq 0.\n\\end{split}\n\\end{equation}\n\n\nThe next proposition will be used to pass back and forth between $A_{(k)}$\nand $A$. We first define\n\\[\n\\tilde{A}=\\sum_k A_{(k)}S_k.\n\\]\n\n\\begin{prop}\n\\label{lemma.A.to.Ak}\nAssume that the coefficients $a^{ij}$ satisfy \\eqref{coeff}, and that\n$b=0$, $c=0$. Then\n\\begin{equation}\n \\label{aak}\n \\sum_{k} 2^{-k} \\|S_k (A-A_{(k)}) u\\|_{X_k'}^2 \\lesssim \n\\kappa^2 \\|u\\|_{{X}}^2, \n\\end{equation}\n\\begin{equation}\\label{amta}\n\\|(A-\\tilde{A})u\\|_{{X}'}\\lesssim \\kappa \\|u\\|_{{X}},\n\\end{equation}\n\\begin{equation}\n \\label{comak}\n2^{-k} \\|[A_{(k)},S_k] u\\|_{X'_k}\\lesssim \\kappa \\|u\\|_{X_k}.\n\\end{equation}\n\\end{prop}\n\\begin{proof}[ Proof of Lemma \\ref{lemma.A.to.Ak}:]\nWe begin by writing\n\\[\nS_k(A- A_{(k)})=A_k^{med}+A_k^{high}\n\\]\nwith\n\\begin{align*}\nA_k^{med}&= \n \\sum_{l=k-4}^{k+4} \\sum_{m=-\\infty}^{k+8} S_k D_i(S_l a^{ij})\nD_j S_m \\\\\nA_k^{high}&= \\sum_{l>k+4} \\sum_{m=l-4}^{l+4} \nS_k D_i (S_la^{ij})D_j S_m.\n\\end{align*}\n\n\nFor $A_k^{med}$ we take $l=k \\geq m$ for simplicity; then it suffices \n to establish the off-diagonal decay \n\\begin{equation}\n \\label{dplow3.small}\n\\|S_k D_i(S_k a^{ij}D_j S_m v)\\|_{X'_k}\\lesssim \\kappa\n2^m \\|S_m v\\|_{X_m}.\n\\end{equation}\n\nIf $k \\geq m \\geq 0$ then we have\n\\[\n\\begin{split}\n\\|S_k D_i(S_k a^{ij} D_j S_m v)\\|_{X'_k}\n&\\ \\lesssim 2^k \\|S_k a^{ij} D_j S_m v \\|_{X'_k}\n\\\\ &\\ \\lesssim \\kappa 2^{-k} \\| \\langle x\\rangle^{-2} D_j S_m v \\|_{X'_k}\n\\\\ &\\ \\lesssim \\kappa 2^{-k} \\| D_j S_m v \\|_{X_m}\n\\\\ &\\ \\lesssim \\kappa 2^{m-k} \\|S_m v \\|_{X_m}.\n\\end{split}\n\\]\n\nIf $k \\geq 0 > m$ then we have two spatial scales to deal with, namely\n$1$ and $2^{-m}$. 
To separate them we use the cutoff function\n$\\chi_{<-m}$. For contributions corresponding to large $x$ we estimate\n\\[\n\\begin{split}\n\\|S_k D_i(S_k a^{ij} \\chi_{\\ge -m} D_j S_m v)\\|_{X'_k}\n&\\ \\lesssim 2^k \\|S_k a^{ij} \\chi_{\\ge -m} D_j S_m v \\|_{X'_k}\n\\\\ &\\ \\lesssim \\kappa 2^{-k} \\| |x|^{-2} \\chi_{\\ge -m} D_j S_m v \\|_{X'_k}\n\\\\ &\\ \\lesssim \\kappa 2^{m-k} \\| D_j S_m v \\|_{X_m}\n\\\\ &\\ \\lesssim \\kappa 2^{2m-k} \\| S_m v \\|_{X_m}.\n\\end{split}\n\\]\nFor contributions corresponding to small $x$, we first note that by \nBernstein's inequality, see \\eqref{bernstein}, we have \n\\begin{equation}\\label{bern.2}\n\\| D_j S_m v\\|_{L^2_t L_x^\\infty(A_{\\leq -m})} \\leq 2^{\\frac{n+1}2 m} \\|S_m v\\|_{X_m}.\n\\end{equation}\nThen\n\\[\n\\begin{split}\n\\|S_k D_i(S_k a^{ij} \\chi_{<-m} D_j S_m v)\\|_{X'_k}\n&\\ \\lesssim 2^k \\|S_k a^{ij} \\chi_{<-m} D_j S_m v\\|_{X'_k}\n\\\\\n&\\ \\lesssim 2^{-k} 2^{\\frac{n+1}2 m} \\|\\langle x \\rangle^{-2} \\chi_{<-m}\n\\kappa(|x|)\\|_{(X_k^0)'} \\|S_m v\\|_{X_m}\n\\\\\n&\\ \\lesssim \\kappa 2^{-k}2^{\\frac{n+1}2 m} \\max\\{ 1, 2^{\\frac{3-n}{2} m} \\} \\|S_m v\\|_{X_m}\n\\\\\n&\\ \\lesssim \\kappa 2^{-k} \\max\\{ 2^{\\frac{n+1}2 m}, 2^{2 m} \\} \\|S_m v\\|_{X_m}\n\\end{split}\n\\]\nwhere $(X_k^0)'$ is the spatial part of the $X_k'$ norm, i.e. $X_k' = L^2_t (X_k^0)'$.\n\nFinally if $ 0 >k \\geq m$ then the spatial scales are $2^{-k}$ and\n$2^{-m}$, and we separate them using the cutoff function\n$\\chi_{<-m}$. The exterior part is exactly as in the previous case. For\nthe interior part we use again \\eqref{bern.2} \nto compute\n\\[\n\\begin{split}\n\\|S_k D_i(S_k a^{ij} \\chi_{<-m} D_j S_m v)\\|_{X'_k}\n&\\ \\lesssim 2^k \\|S_k a^{ij} \\chi_{<-m} D_j S_m v\\|_{X'_k}\n\\\\\n&\\ \\lesssim 2^{k} 2^{\\frac{n+1}2 m} \\|\\langle 2^k x \\rangle^{-2} \\chi_{<-m}\n\\kappa(|x|)\\|_{(X_k^0)'} \\|S_mv\\|_{X_m}\n\\\\\n&\\ \\lesssim \\kappa 2^{k}2^{\\frac{n+1}2 m} \n\\max\\{ 2^{-\\frac{n+1}2 k}, 2^{-2k} \n2^{\\frac{3-n}{2} m} \\} \\|S_m v\\|_{X_m}\n\\\\\n&\\ \\lesssim \\max\\{ 2^{\\frac{1-n}2 k} 2^{\\frac{n+1}2 m}, 2^{-k} 2^{2 m} \\} \\|S_m v\\|_{X_m}.\n\\end{split}\n\\]\nHence \\eqref{dplow3.small} is proved, which by summation yields the bound\n\\eqref{aak} for $A_k^{med}$. The bound for $A_k^{high}$ follows from\n summation of \\eqref{dplow3.small} in a duality argument.\n\nWe note that in all cases there is some room to spare in the\nestimates. This shows that our hypothesis is too strong for this\nlemma. Indeed, one could prove it without using at all the bound on\nthe second derivatives of the coefficients.\n\n\nThe bound \\eqref{amta} follows by duality from \\eqref{aak}. The proof\nof \\eqref{comak}, as in \\cite{T}, follows from the $|\\alpha|=1$ case\nof \\eqref{coeffak.greater}. \n\\end{proof}\n\n\n\nThe next proposition allows us to treat lower order terms perturbatively \nin most of our results.\n\n\n\\begin{prop}\\label{lemma.lower.order}\n a) Assume that $b,c$ satisfy \\eqref{coeffb} and \\eqref{coeffc}.\nThen\n \\begin{equation}\\label{bc}\n \\|(b^i D_i + D_i b^i +c)u\\|_{{{\\tilde{X}}}'}\\lesssim \\kappa \\|u\\|_{{{\\tilde{X}}}}.\n \\end{equation}\n \nb) Assume that $b$ satisfies \\eqref{coeffb} and $\\div \\ b=0$.\nThen\n \\begin{equation}\\label{bnoc}\n \\|(b^i D_i + D_i b^i)u\\|_{{X}'}\\lesssim \\kappa \\|u\\|_{{X}}.\n\\end{equation}\n\n\\end{prop}\n\n\\begin{proof}\nThis proof parallels a similar argument in \\cite{T}. 
However in there \nonly dimensions $n \\geq 3$ are considered, and the bound\n\\eqref{coeffc} is stronger to include the full gradient of $b$.\nThus we provide a complete proof here. We consider two cases,\nthe first of which is similar to \\cite{T}, while the second\nrequires a new argument.\n\n\n{\\bf Case 1: The estimate \\eqref{bc} for $n \\geq 3$ and \\eqref{bnoc}\n for $n=1,2$.}\nThe estimate for the $c$ term is straightforward since, by \\eqref{coeffc}, \n\\[ \\langle cu, v\\rangle \\lesssim \\kappa \\|\\langle x\\rangle^{-1} u\\|_{L^2_{t,x}} \\|\\langle x\\rangle^{-1} v\\|_{L^2_{t,x}}\n\\lesssim \\kappa \\|u\\|_{{{\\tilde{X}}}} \\|v\\|_{{{\\tilde{X}}}}.\\]\n \nFor the $b$ term, we consider a\nparadifferential decomposition,\n\\begin{multline}\\label{trichotomy}\n(b^i D_i + D_i b^i)u=\\ \\sum_{k} (S_{k}b^i D_i + D_i S_{>k} b^i)S_k u.\n\\end{multline}\n\nThe frequency localization is preserved in the first term; therefore \nit suffices to verify that\n\\[\n\\| (S_{k}b^i D_i S_k u -i S_{>k} \\div\\ b \\ S_k u.\n\\label{hl}\n\\end{equation}\nWe consider the two terms separately. The second one occurs only in \nthe case of \\eqref{bc} but the first one occurs also in \\eqref{bnoc}.\nSo we need to show that\n\\[\n\\| \\sum_{k} S_{>k}b^i D_i S_k u \\|_{{X}'} \\lesssim \\kappa \\| u\\|_{{X}}.\n\\]\nThis will follow from the dyadic estimates\n\\[\n\\| S_{m}b^i S_k u \\|_{X'_m} \\lesssim \\kappa \\| S_k\nu\\|_{X_k}, \\qquad m > k.\n\\]\nGiven the pointwise bound on $S_m b^i$, this reduces to\n\\[\n\\| S_k u \\|_{X_m} \\lesssim \\| S_k\nu\\|_{X_k}.\n\\]\nFor $|x| > \\max\\{2^{-k},1\\}$ this is trivial. For smaller $x$\nwe use \\eqref{bernstein},\nand the conclusion is obtained by a direct computation.\n\n\nIt remains to consider the second term in \\eqref{hl}, for which we \nwant to show that in dimension $n \\geq 3$\n\\begin{equation}\n\\| \\sum_{k} S_{>k} \\div\\ b \\ S_k u \\|_{{{\\tilde{X}}}'} \\lesssim \\kappa \\| u\\|_{{{\\tilde{X}}}}.\n\\label{hhll}\\end{equation}\n\nFor this we establish again off-diagonal decay,\n\\begin{equation}\n\\| S_{m} \\div\\ b \\ S_k u \\|_{X'_m} \\lesssim \\kappa (m-k) 2^k \\| S_k u\\|_{X_k},\n\\qquad m > k.\n\\label{odhhll}\\end{equation}\nThis follows from the pointwise bounds\n\\[\n| S_{m} \\div\\ b | \\leq \\kappa 2^{2m}\\langle 2^m x\\rangle^{-2}, \\qquad m < 0\n\\]\n\\[\n| S_{m} \\div\\ b | \\leq \\kappa \\langle x\\rangle^{-2}, \\qquad m \\geq 0.\n\\]\nWe consider the worst case $0 > m > k$ and leave the rest for the\nreader. We use $\\chi_{<-k}$ to separate small and large values of $x$.\nFor large $x$ we have\n\\[\n\\| \\chi_{>-k} S_{m} \\div\\ b \\ S_k u \\|_{X'_m} \\lesssim \n\\kappa \\| |x|^{-2} \\chi_{>-k} \\ S_k u \\|_{X'_k} \\lesssim \n\\kappa 2^k \\| S_k u\\|_{X_k}.\n\\]\nFor small $x$ we use \\eqref{bernstein} instead,\n\\[\n\\| \\chi_{<-k} S_{m} \\div\\ b \\ S_k u \\|_{X'_m} \\lesssim \n \\kappa 2^{2m} 2^{\\frac{n-1}2 k} \\| \\chi_{<-k} \\langle 2^m x\\rangle^{-2}\\|_{(X_m^0)'} \\|S_k u\\|_{X_k}\n\\lesssim \\kappa 2^{ k} \\|S_k u\\|_{X_k}.\n\\]\nThe last computation above is accurate if $n \\geq 4$. In dimension\n$n=3$ we encounter a harmless additional logarithmic factor $|m-k|$.\nHowever if $n=1,2$ then the above off-diagonal decay can no longer \nbe obtained.\n\n\n\n{\\bf Case 2: The estimate \\eqref{bc} in dimension $n=1,2$.}\nThe $c$ term is again easy to deal with. 
We write the \nestimate for $b$ in a symmetric way,\n\\[\n|\\langle (b^i D_i + D_i b^i) u,v \\rangle | \\lesssim \\kappa \\|u\\|_{{{\\tilde{X}}}} \\|v\\|_{{{\\tilde{X}}}}.\n\\]\nWe use the decomposition in Section~\\ref{embxs},\n\\[\nu = u^{in} + u^{out}, \\qquad v = v^{in} + v^{out}.\n\\]\nWe consider first the expression \n\\[\n\\langle (b^i D_i + D_i b^i) u^{out},v^{out} \\rangle.\n\\]\nFor this we can take advantage of the improved $L^2$ bound\n\\eqref{impl2} to carry out the same computation as in dimension \n$n \\geq 3$, establishing off-diagonal decay. Precisely,\nthe difference arises in the proof of \\eqref{odhhll}, whose\nreplacement is \n\\begin{equation}\n\\| S_{m} \\div\\ b \\ (1-T_k) S_k u \\|_{X'_m} \\lesssim \\kappa (m-k)2^k \\| S_k u\\|_{X_k},\n\\qquad m > k.\n\\label{odhhll.new}\\end{equation}\n\nConsider now one of the cross terms, \n\\[\n\\langle (b^i D_i + D_i b^i) u^{in},v^{out} \\rangle = \\langle (2 b^i D_i -i \\div b) u^{in},v^{out} \\rangle.\n\\]\nThe proof for the other cross term will follow similarly.\nFor the $\\div \\ b$ term we use the $L^2$ bound for both $u^{in}$ and\n$v^{out}$, as in the case of $c$. For the rest we use \\eqref{l2uin}\nand \\eqref{l2uout} to estimate\n\\[\n|\\langle b^i D_i u^{in},v^{out} \\rangle| \\lesssim \\|u^{in}\\|_{\\dot H^1} \\| b\nv^{out}\\|_{L^2} \\lesssim \\|u\\|_{{X}} \\|v\\|_{{X}}.\n\\]\n\n Finally, consider the last term\n \\[\n\\langle (b^i D_i + D_i b^i) u^{in},v^{in} \\rangle.\n\\]\nIn dimension $n=1$, we can easily estimate it by\n\\[\n|\\langle (b^i D_i + D_i b^i) u^{in},v^{in} \\rangle| \\lesssim \\|u^{in}\\|_{\\dot\n H^1} \\| \\langle x \\rangle^{-1} v^{in}\\|_{L^2} + \\|v^{in}\\|_{\\dot\n H^1} \\| \\langle x\\rangle^{-1} u^{in}\\|_{L^2} \\lesssim \\|u\\|_{{{\\tilde{X}}}} \\|v\\|_{{{\\tilde{X}}}}.\n\\]\nThis argument fails for $n=2$ due to the logarithmic factor in the\n$L^2$ weights. Instead we will take advantage of the\nspherical symmetry of both $u^{in}$ and $v^{in}$.\n\nIn polar coordinates we write\n\\[\nb^i D_i = b^r D_r + r^{-1} b^\\theta D_\\theta \n\\] \nand\n\\[\n\\div \\ b = \\partial_r b^r + r^{-1} b^r + r^{-1} \\partial_\\theta b^\\theta.\n\\]\nFor a function $b(r,\\theta)$, we denote $\\bar b(r)$ its spherical\naverage. By spherical symmetry, we compute\n\\[\n\\langle b^i D_i u^{in},v^{in} \\rangle = \\langle (b^r D_r + r^{-1} b^\\theta D_\\theta ) u^{in},v^{in} \\rangle\n= \\langle D_r u^{in}, \\bar{b^r} v^{in} \\rangle.\n\\]\nThen we can estimate\n\\[\n|\\langle (b^i D_i + D_i b^i) u^{in},v^{in} \\rangle| \\lesssim \\|u^{in}\\|_{\\dot\n H^1} \\| \\bar{b^r} v^{in}\\|_{L^2} + \\|v^{in}\\|_{\\dot H^1} \\|\\bar{b^r}u^{in}\\|_{L^2}\n\\lesssim \\|u\\|_{{{\\tilde{X}}}} \\|v\\|_{{{\\tilde{X}}}}\n\\]\nprovided we are able to establish the improved bound\n\\begin{equation}\n| \\bar{b^r}(r)| \\lesssim \\langle r \\rangle^{-1}(\\ln (2+r))^{-1}.\n\\label{bbr}\\end{equation}\n\nFor this we take spherical averages in the divergence equation\nto obtain\n\\[\n \\partial_r \\bar b^r + r^{-1} \\bar b^r = \\overline{\\div \\ b}.\n\\]\nAt infinity we have $b(r) = o(r^{-1})$. 
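For the reader's convenience we record the elementary observation behind the next step; nothing beyond the fundamental theorem of calculus is involved. The averaged equation can be rewritten as\n\\[\n\\partial_r ( r \\, \\bar b^r ) = r \\, \\overline{\\div \\ b},\n\\]\nwhile the decay assumption guarantees that $r \\, \\bar b^r(r) \\to 0$ as $r \\to \\infty$; this is what allows us to integrate the equation from infinity.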
Integrating from infinity\nwe obtain\n\\[\n \\bar b^r(r) = \\int_{r}^\\infty \\frac{s}{r}\\ \\overline{\\div \\ b}(s) ds.\n\\]\nHence\n\\[\n| \\bar b^r(r) | \\lesssim \\int_{r}^\\infty \\frac{s}{r} (1+s)^{-2}\n(\\ln(2+s))^{-2} ds\n\\]\nand \\eqref{bbr} follows.\n\\end{proof}\n\n\n\\bigskip\n\\newsection{Local smoothing estimates}\\label{smoothing.section}\n\nIn this section we prove our main local smoothing estimates, \nfirst in the exterior region and then in the non-trapping case.\n\n\n\\subsection{The high dimensional case $n \\geq 3$: Proof of Theorem~\\ref{main.ls.theorem}}\nThe proof uses energy estimates and the positive commutator\nmethod. This turns out to be rather delicate. The difficulty \nis that the trapping region acts essentially as a black box,\nwhere the energy is conserved but little else is known. \nHence all the local smoothing information has to be estimated \nstarting from infinity along rays of the Hamilton flow which are \n incoming either forward or backward in time.\n\n\n\nWe begin with the energy estimate. This is standard\nif the right hand side is in $L^1_t L^2_x$, but we would like to allow \nthe right hand side to be in the dual smoothing space as well.\n\n\\begin{proposition}\nLet $u$ solve the equation\n\\begin{equation}\n(D_t + A) u = f_1 + f_2, \\qquad u(0) = u_0\n\\end{equation}\nin the time interval $[0,T]$. Then we have \n\\begin{equation}\n\\| u\\|_{L^\\infty_t L^2_x}^2 \\lesssim \\| u_0\\|_{L^2}^2 + \\| f_1\\|_{L^1_t L^2_x}^2\n+ \\| u\\|_{{{\\tilde{X}}}_e} \\|f_2\\|_{{{\\tilde{X}}}'_e}. \n\\label{eest}\\end{equation}\n\\end{proposition}\n\\begin{proof}\nThe proof is straightforward. We compute\n\\[\n\\frac{d}{dt} \\frac12 \\|u(t)\\|_{L^2}^2 = \\Im \\langle u, f_1+f_2 \\rangle.\n\\]\nHence for each $t \\in [0,T]$ we have\n\\[\n\\| u(t)\\|_{L^2}^2 \\lesssim \\|u(0)\\|_{L^2}^2 + \\|u\\|_{L^\\infty_t L^2_x}\n\\| f_1\\|_{L^1_t L^2_x} + \\| u\\|_{{{\\tilde{X}}}_e} \\|f_2\\|_{{{\\tilde{X}}}'_e}.\n\\]\nWe take the supremum over $t$ on the left and use bootstrapping\nfor the second term on the right. The conclusion follows.\n\\end{proof}\n\nTo prove \\eqref{lsext} we need a complementary estimate,\nnamely \n\\begin{equation}\n \\| \\rho u\\|_{{{\\tilde{X}}}}^2 \\lesssim \\| u\\|_{L^\\infty_t L^2_x}^2 + \n \\| f_1\\|_{L^1_t L^2_x}^2 \n + \\|\\rho f_2\\|_{{{\\tilde{X}}}'}^2 + \\| \\langle x \\rangle^{-2} u\\|_{L^2_{t,x}}^2. \n\\label{lsest}\\end{equation}\nGiven \\eqref{eest} and \\eqref{lsest}, the bound \\eqref{lsext}\nis obtained by bootstrapping, with some careful balancing \nof constants. \n\nIt remains to prove \\eqref{lsest}. \nWe will use a positive commutator method. 
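Before turning to the construction we recall, for orientation only, the classical dilation (virial) identity which the multipliers below perturb; this computation concerns the flat Laplacian and is not specific to our setting. With the usual convention $D = \\frac{1}{i} \\partial$, for $A_0 = D_i D_i$ and the model multiplier $Q_0 = Dx + xD$ one has\n\\[\ni [ A_0, Q_0 ] = 4 D_i D_i \\geq 0,\n\\]\nso that pairing the commutator with $u$ yields control of $\\|\\nabla u\\|_{L^2}^2$; compare the commutator formula \\eqref{C} below, which reduces to $4 \\delta D_i D_i$ when $a^{ij}_{(k)} = I_n$ and $\\phi \\equiv 1$. The operators $Q_k$ constructed in the proof of Lemma~\\ref{lemma.low.freq} are truncated versions of this model multiplier, with the unbounded weight $x$ replaced by a slowly varying bounded weight; the price is that positivity only survives modulo error terms of the type appearing on the right of \\eqref{lsest}.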
We shall assume that $b=0$ and $c=0$.\nFor a self-adjoint operator\n$Q$, we have\n\\[\n2\\Im\\langle A u, Qu\\rangle = \\langle Cu,u\\rangle\n\\]\nwhere\n\\[\nC=i[A,Q].\n\\]\nAs a consequence of this, we see that\n\\[\n\\frac{d}{dt}\\langle u, Qu\\rangle = -2\\Im\\langle (D_t+A)u, Qu\\rangle + \\langle\nCu,u\\rangle.\n\\]\nTaking this into account, the estimate \\eqref{lsest} is an immediate\nconsequence of the following lemma.\n\n\\begin{proposition} \\label{Qprop}\n There is a family $\\mathcal Q$ of bounded self-adjoint operators $Q_\\rho$ with \nthe following properties:\n\n(i) $L^2$ boundedness,\n\\[\n\\|Q_\\rho\\|_{L^2 \\to L^2} \\lesssim 1\n\\]\n\n(ii) ${{\\tilde{X}}}$ boundedness,\n\\[\n|\\langle Q_\\rho u, f \\rangle | \\lesssim \\| \\rho f\\|_{{{\\tilde{X}}}'} \\| \\rho u\\|_{{{\\tilde{X}}}}\n\\]\n\n(iii) Positive commutator,\n\\[\n\\sup_{Q_\\rho \\in \\mathcal Q} \\langle Cu,u \\rangle \\geq c_1 \\|\\rho u\\|_{{{\\tilde{X}}}}^2 -\nc_2 \\|\\langle x\\rangle^{-2} u\\|_{L^2_{t,x}}^2.\n\\]\n\\end{proposition}\n\n\n We first note that the condition (ii) shows that $Q_\\rho u$ is supported\n in $\\{|x| > 2^M\\}$ and depends only on the values of $u$ in the same\n region. Hence for the purpose of this proof we can modify the\n operator $A$ arbitrarily in the inner region $\\{|x| < 2^M\\}$. In\n particular we can improve the constant $\\kappa$ in \\eqref{coeff} to\n the extent that \\eqref{coeff.M} holds globally. Similarly, we can\n assume without any restriction in generality that $u = 0$ in $\\{|x|\n < 2^M\\}$. \n\nUsing (ii), we may argue similarly and assume that \\eqref{coeffb.M} and\n\\eqref{coeffc.M} hold globally if lower order terms are present. \nThe estimate \\eqref{bc} then justifies\n neglecting the \nlower order terms in $A$. I.e., we may assume that $b=0$, $c=0$.\n\n\n\\begin{proof}\nThe main step in the proof of the proposition is to construct\nsome frequency localized versions of the operator $Q_\\rho $. Precisely,\nfor each $k \\in {\\mathbb Z}$ we produce a family ${\\mathcal Q}_k$ \nof operators $Q_k$, which we later use to construct $Q_\\rho $.\nWe consider two cases, depending on whether $k$ is positive or\nnegative.\n\n\nWe first introduce some variants of the spaces $X_k$. Let $k \\in {\\mathbb Z}$ and\n$k^-=\\frac{|k|-k}{2}$ be its negative part. For any positive, slowly varying\nsequence $(\\alpha_m)|_{m\\geq k^-}$ with\n\\[\n\\sum_{k \\ge k^-}\\alpha_j =1, \\qquad \\alpha_{k^-} \\approx 1,\n\\]\nwe define the space $X_{k,\\alpha}$ with norm\n\\begin{align*}\n \\|u\\|_{X_{k,\\alpha}}^2 &= 2^{-k^-} \\|u\\|^2_{L^2(A_{\\leq k^-})} +\n \\sum_{j>k^-} \\alpha_j\\| |x|^{-1\/2} u\\|^2_{L^2(A_j)}.\n\\end{align*}\nThen our low frequency result has the form\n\\begin{lemma} \\label{lemma.low.freq} \nLet $n \\geq 1$ and $k<0$. Then for any slowly varying sequence $(\\alpha_m)$ with\n$\\alpha_{-k} \\approx 1$ and $\\sum_{m\\ge -k}\\alpha_m=1$, there\n is a self-adjoint operator $Q_k$ so that\n\\begin{align}\n\\|Q_k u\\|_{L^2}&\\lesssim \\|u\\|_{L^2},\\label{Q.L2.low}\\\\\n\\|Q_k u\\|_{X_{k,\\alpha}}&\\lesssim \\|u\\|_{X_{k,\\alpha}},\\label{Q.X.low}\\\\\n\\langle C_k u, u\\rangle &\\gtrsim 2^{k} \\|u\\|^2_{X_{k,\\alpha}}, \\qquad \nC_k = i [ A_{(k)}, Q_k] \n\\label{C.low}\n\\end{align}\nfor all functions $u$ frequency localized at frequency $2^k$.\n\\end{lemma}\n\n\\begin{proof}\n We argue exactly as in \\cite[Lemma 9]{T}. 
The only difference is\n that here we work with the operator $A_{(k)}$ whose coefficients have\n less regularity, but this turns out to be nonessential.\n \n We first increase the sequence $(\\alpha_m)$ so that\n\\begin{equation}\n\\label{new.alpha}\n\\begin{cases}\n(\\alpha_m)\\text{ remains slowly varying,}\\\\\n\\alpha_m = 1\\text{ for } m\\le -k \\\\\n\\displaystyle \\sum_{m>-k} \\alpha_m \\approx 1,\\\\\n\\kappa_m \\le \\epsilon \\alpha_{m}\\text{ for } m > -k.\n\\end{cases}\n\\end{equation}\nTo this slowly varying sequence we may associate a slowly varying\nfunction $\\alpha(s)$ with\n\\[\n\\alpha(s)\\approx \\alpha_m,\\quad s\\approx 2^{m+k}.\n\\]\n\nWe construct an even smooth symbol $\\phi$ of order $-1$ satisfying\n\\begin{align}\n\\phi(s)&\\approx \\langle s\\rangle^{-1},\\quad s>0 \\label{phi1}\\\\\n\\phi(s)+s\\phi'(s)&\\approx \\frac{\\alpha(s)}{\\langle s\\rangle},\\quad\ns>0.\\label{phi2} \t\n\\end{align}\nWe notice that the radial function $ S_{<10}(D)\\phi(|x|)$ satisfies\nthe same estimates; therefore without any restriction in generality we\nassume that $\\phi(|x|)$ is frequency localized in $|\\xi| < 2^{10}$.\n\nWe now define the self-adjoint multiplier\n\\[\nQ_k(x,D)= \\delta(Dx\\phi( 2^k \\delta |x|)+\\phi(2^k \\delta |x|)xD).\n\\]\nFor small $\\delta$ this takes frequency $2^k$ functions to frequency\n$2^k$ functions. The first property \\eqref{Q.L2.low} follows immediately.\n\\begin{comment}\n\\begin{com}\n Indeed, this is easy since we are in $L^2$ and $\\phi_k$ preserves\n the frequency support. Thus, the $D$ introduces the $2^k$, and the\n $\\delta |x| \\phi_k \\le 1$ by the first property of $\\phi$ above.\n\\end{com}\n\\end{comment}\nThe estimate \\eqref{Q.X.low} is also straightforward as the weight in\nthe $X_{k,\\alpha}$ norm is slowly varying on the dyadic scale. It\nremains to prove \\eqref{C.low} for which we begin by computing the\ncommutator\n\\begin{equation}\n\\label{C}\n\\begin{split}\nC_k =&\\ 4\\delta D_i \\phi(2^k\\delta |x|)a^{ij}_{(k)}D_j \n\\\\ &\\ + 2^{k+1}\n\\delta^2 \\Bigl(Dx |x|^{-1}\\phi'(2^k \\delta |x|)\nx_i a^{ij}_{(k)} D_j + D_i a^{ij}_{(k)} x_j |x|^{-1}\\phi'(2^k \\delta |x|)xD\\Bigr)\n\\\\ &\\ -2\\delta D_i\\phi(2^k \\delta |x|)(x_l\\partial_l a^{ij}_{(k)})D_j + \\partial_i(a^{ij}_{(k)}(\\partial_j \\partial(\n\\delta x\\phi(2^k \\delta |x|)))).\n\\end{split}\n\\end{equation}\nThe positive contribution comes from the first two terms. Replacing\n$a^{ij}_{(k)}$ by the identity leaves us with the principal part\n\\[\nC_k^0=4\\delta D \\phi(2^k\\delta |x|)D + 4\\delta D \\frac{x}{|x|} 2^k \n\\delta |x|\\phi'(2^k\\delta |x|)\\frac{x}{|x|}D\n\\]\nwhich by \\eqref{phi2} satisfies\n\\[\n\\langle C_k^0 u,u\\rangle \\ge 4\\delta \\langle (\\phi(2^k \\delta |x|)+2^k \\delta\n|x|\\phi'(2^k \\delta |x|))\\nabla u,\\nabla u\\rangle\n\\gtrsim \\delta 2^{2k}\\Bigl\\langle \\frac{\\alpha(2^k \\delta |x|)}{\\langle 2^k \\delta\n x\\rangle}u,u\\Bigr\\rangle.\n\\]\nSince $a^{ij}_{(k)}(x)-\\delta^{ij} = O(\\kappa_k(|x|))$, the error we produce\nby substituting $a^{ij}_{(k)}$ by the identity has size\n\\[\n\\delta 2^{2k} \\left\\langle \\frac{\\kappa_k(|x|)}{\\langle 2^k \\delta x\\rangle}u,u\\right\\rangle.\n\\]\nIt remains to examine the last two terms in $C_k$. 
Using\n\\eqref{coeffak.greater}, we see that\n\\[\n|\\delta \\phi(2^k \\delta |x|)(x_l\\partial_l a^{ij}_{(k)})|\\lesssim\n\\frac{\\delta\\kappa_k(|x|)}{\\langle 2^k \\delta x\\rangle}.\n\\]\nSo, the third term yields an error similar to the above one.\n\nFinally, \n\\[\n|\\partial_i(a^{ij}_{(k)}(\\partial_j\\partial(\\delta x\\phi(2^k \\delta |x|))))|\\lesssim\n\\frac{\\delta^3 2^{2k}}{\\langle 2^k\\delta x\\rangle^3}\\lesssim\n\\frac{\\delta^3 2^{2k} \\alpha(2^k \\delta |x|)}{\\langle 2^k \\delta x\\rangle},\n\\]\nwhich yields\n\\[\n\\langle\\partial_i(a^{ij}_{(k)}(\\partial_j\\partial\\delta x\\phi(2^k \\delta |x|)))u,u\\rangle\n\\lesssim \\delta^3 2^{2k} \\Bigl\\langle \\frac{\\alpha(2^k \\delta |x|)}{\\langle 2^k \\delta\n x\\rangle} u,u\\Bigr\\rangle.\n\\]\n\nSumming up, we have proved that\n\\begin{equation}\n\\langle C_ku,u\\rangle \\geq c_1 \\delta 2^{2k}\\Bigl\\langle \\frac{\\alpha(2^k \\delta |x|)}{\\langle 2^k \\delta\n x\\rangle}u,u\\Bigr\\rangle\\! - c_2 \\delta^3 2^{2k}\n\\Bigl\\langle \\frac{\\alpha(2^k \\delta |x|)}{\\langle 2^k \\delta\n x\\rangle} u,u\\Bigr\\rangle \\!- c_3 \\delta 2^{2k} \\left\\langle \\frac{\\kappa_k(|x|)}{\\langle 2^k \\delta x\\rangle}u,u\\right\\rangle.\n\\label{lowcom}\\end{equation}\nIn order to absorb the second term into the first we need to know that\n$\\delta$ is sufficiently small. This determines the choice of $\\delta$\nas a small universal constant. In order to absorb the third term into\nthe first we use the last part of \\eqref{new.alpha} and the fact that \n$\\alpha$ is slowly varying on the dyadic scale to estimate\n\\[\n\\kappa(|x|) \\lesssim \\epsilon \\alpha(2^k |x|) \\lesssim \\delta^{-1}\n\\epsilon \\alpha(2^k \\delta |x|).\n\\]\nThus the third term is negligible if $\\epsilon \\ll \\delta$. This\ndetermines the choice of $\\epsilon$ in \\eqref{coeff.M},\n\\eqref{coeffb.M} and \\eqref{coeffc.M}.\n\\end{proof}\n\n\nWe continue with the result for high frequencies.\n\n\n\\begin{lemma}\n\\label{lemma.high.freq}\nLet $n \\geq 1$ and $k \\geq 0$. Then for any sequence $(\\alpha_m)$ with\n$\\alpha_0=1$ and $\\sum_{m\\ge 0} \\alpha_m=1$ there\nis a self-adjoint operator $Q_k$ so that\n\\begin{align}\n\\|Q_k u\\|_{L^2} &\\lesssim \\|u\\|_{L^2},\\label{Q.L2.high}\\\\\n\\|Q_k u\\|_{X_{k,\\alpha}} \n&\\lesssim \\|u\\|_{X_{k,\\alpha}},\\label{Q.X.high}\\\\\n\\langle C_k u,u\\rangle &\\gtrsim 2^{k}\\| u\\|^2_{X_{k,\\alpha}}\n, \\qquad \nC_k = i [ A_{(k)}, Q_k],\n\\label{C.high}\\\\\n2\\Im\\langle[A_{(k)},\\rho_{ 0$ and by $1$ for $k < 0$. This generates error terms\nwhich we need to estimate. \n\n\nIf $k < 0$ then these error terms are estimated\nas follows. 
We first want to substitute $A$ by $A_{(k)}$, and as such, we see errors of the\nform\n\\begin{equation}\\label{error1}\n |\\langle [A,\\rho]u,\\sum_{k<0} S_k Q_k S_k \\rho u\\rangle|\n+ |\\sum_{k<0} \\langle (A-A_{(k)})\\rho u, S_k Q_k S_k \\rho u\\rangle|.\n\\end{equation}\nFor the first term, we use \\eqref{Q.X.low} (after optimizing in $\\alpha$)\n\\[\n\\begin{split}\n|\\langle [A,\\rho]u, \\sum_{k<0}S_k Q_k S_k \\rho u\\rangle|&\\lesssim\n|\\langle -2iD_i a^{ij}(\\partial_j \\rho) u + \\partial_j((\\partial_i \\rho)a^{ij})u,\\sum_{k<0}S_k Q_k S_k \\rho u\\rangle|\\\\\n&\\lesssim \\|u\\|_{L^2_{t,x}(2^{M}< |x|< 2^{M+1})} \\|\\sum_{k<0}S_k Q_k S_k \\rho u\\|_{L^2_{t,x}(\\{\n2^{M}<|x|<2^{M+1}\\})}\\\\\n&\\lesssim \\|\\langle x\\rangle^{-2} u\\|_{L^2_{t,x}} \\|\\sum_{k<0} S_k Q_k S_k \\rho u\\|_{{X}}\\\\\n&\\lesssim \\|\\langle x\\rangle^{-2} u\\|_{L^2_{t,x}} \\|\\rho u\\|_{{{\\tilde{X}}}}.\n\\end{split}\n\\]\nFor the second term in \\eqref{error1}, we use \\eqref{aak} and \\eqref{Q.X.low} to see that\n\\[\n\\begin{split}\n |\\sum_{k<0} \\langle (A-A_{(k)})\\rho u, S_k Q_k S_k \\rho u\\rangle|&\\lesssim\n\\Bigl(\\sum 2^{-k} \\|S_k(A-A_{(k)})\\rho u\\|^2_{X'_k}\\Bigr)^{\\frac12} \\|\\rho u\\|_{{X}}\\\\\n&\\lesssim \\epsilon \\|\\rho u\\|^2_{{{\\tilde{X}}}}. \n\\end{split}\n\\]\nFor the remaining errors, we use the fact that $A_{(k)}$ preserves\nlocalizations at frequency $2^k$ combined with\n\\eqref{coeffak.greater}, and \\eqref{Q.L2.low} to see that\n\\[\n\\begin{split}\n |\\sum_{k<0} \\langle A_{(k)} (1-\\rho) u, S_k Q_k S_k \\rho u\\rangle|\n&\\lesssim \\|\\langle x\\rangle^{-2} u\\|_{L^2_{t,x}} \\|\\langle x\\rangle^2 \\sum_{k<0} S_k Q_k S_k A_{(k)} (1-\\rho)u\\|_{L^2_{t,x}}\\\\\n&\\lesssim \\|\\langle x\\rangle^{-2} u\\|_{L^2_{t,x}} \\|(1-\\rho)u\\|_{L^2_{t,x}}\n\\end{split}\n\\]\nand respectively,\n\\[\n\\begin{split}\n |\\sum_{k<0} \\langle A_{(k)} u, S_k Q_k S_k (1-\\rho)u\\rangle|&\\lesssim\n\\|\\langle x\\rangle^{-2} u\\|_{L^2_{t,x}}\\|\\langle x\\rangle^2 A_{(k)} \\sum_{k<0}S_k Q_k S_k (1-\\rho)u\\|_{L^2_{t,x}}\\\\\n&\\lesssim \\|\\langle x\\rangle^{-2} u\\|_{L^2_{t,x}} \\|(1-\\rho)u\\|_{L^2_{t,x}}.\n\\end{split}\n\\]\nIn both formulas above the last step is achieved by commuting the\n$x^2$ factor to the right, where it is absorbed by the $(1-\\rho)$\nfactor. The two possible commutators may yield an extra $2^{-2k}$\nfactor, which is compensated for by the two derivatives in $A_{(k)}$.\n\n\nOn the other hand if $k \\geq 0$ then we have the bound\n\\[\n| \\rho - \\rho_{ 0} 2^k \\| S_k \\rho_{ 0} 2^k \\| S_k \\rho_{ < k}\n u\\|_{X_{k}}^2 \\right)\n\\\\ &\\\n- c_2 \\left( \\|\\langle x \\rangle^{-2}u\\|_{L^2_{t,x}}^2 + \n\\epsilon\n\\|\\rho u\\|_{{{\\tilde{X}}}}^2 \\right)\n\\end{split}\n\\]\nwhich for $\\epsilon$ sufficiently small yields part (iii) of the proposition. \n\\end{proof}\n\n\\subsection{The non-resonant low dimensional case $n=1,2$: Proof of\n Theorem~\\ref{main.ls.theorem}}\n\n Almost all the arguments\nin the high dimensional case apply also in low dimension. The only\ndifference arises in part (ii) of Proposition \\ref{Qprop}. Since\nthe multiplication by $\\rho$ is bounded in both ${{\\tilde{X}}}$ and ${{\\tilde{X}}}'$,\nthe property (ii) reduces to proving that\n\\[\n\\sum_{k=-\\infty}^\\infty S_k Q_k S_k : {{\\tilde{X}}} \\to {{\\tilde{X}}}.\n\\]\nIn dimension $n \\geq 3$ the ${{\\tilde{X}}}$ norm is described in terms of the\n$X_k$ norms of its dyadic pieces, and the above property follows\nfrom the $X_k$ boundedness of $Q_k$ at frequency $2^k$. 
\n\nHowever, in dimension $n=1,2$ the ${{\\tilde{X}}}$ norm also has a weighted $L^2$\ncomponent. The high frequency part $k \\geq 0$ of the above sum causes\nno difficulty, but the low frequency part does. We do know that\n\\[\n\\sum_{k=-\\infty}^0 S_k Q_k S_k : {X} \\to {X}.\n\\]\nTherefore, due to Lemma~\\ref{txdx}, it would remain to prove that\n\\[\n\\Bigl\\| \\sum_{k=-\\infty}^0 S_k Q_k S_k u\\Bigr\\|_{L^2_{t,x}(\\{|x| \\leq 1\\})} \\lesssim\n\\|u\\|_{{{\\tilde{X}}}}.\n\\]\nUnfortunately, the operators $S_k Q_k S_k$ act on the $2^{-k}$ spatial\nscale; therefore without any additional cancellation there is no\nreason to expect a good control of the output in a bounded region. The\naim of the next few paragraphs is to replace the above low frequency\nsum by a closely related expression which exhibits the desired\ncancellation property.\n\nFirst of all, it is convenient to replace the discrete parameter $k$\nby a continuous one $\\sigma$. The operators $S_\\sigma$ are defined in\nthe same way as $S_k$ by scaling. Let $\\phi_k$ be the functions in\nLemma~\\ref{lemma.low.freq}. The functions $\\phi_\\sigma$ are defined\nfrom $\\phi_k$ using a partition of unity on the unit scale in $\\sigma$.\nThe normalization we need is very simple, namely $\\phi_k(0) = 1$,\nwhich leads to $\\phi_{\\sigma}(0) = 1$. The operators $Q_\\sigma$ are\ndefined in a similar way. Then it is natural to substitute\n\\[\n \\sum_{k=-\\infty}^0 S_k Q_k S_k \\to \\int_{-\\infty}^0 S_\\sigma\n Q_\\sigma S_\\sigma d\\sigma\n\\]\nand all the estimates for the second sum carry over identically\nfrom the discrete sum.\n\nHowever, the desired cancellation is still not present in the second\nsum. To obtain that we consider a spherically symmetric Schwartz function \n$\\phi^0$ localized at frequency $\\ll 1$ with $\\phi^0(0) = 1$. \nThen we write $\\phi_\\sigma$ in the form\n\\[\n\\phi_\\sigma(x) = \\phi^0(x) + x^2 \\psi_\\sigma(x).\n\\]\n\nThe modified self-adjoint operators ${{\\tilde{Q}}}_\\sigma$ are defined as \n\\[\n{{\\tilde{Q}}}_\\sigma = S_\\sigma Q_{\\sigma,\\phi^0} S_\\sigma + 2^{2\\sigma} \\delta^2 x S_\\sigma\nQ_{\\sigma,\\psi_\\sigma} S_\\sigma x\n\\]\nwhere, as in Lemma~\\ref{lemma.low.freq}, we set\n\\[\nQ_{\\sigma,\\phi} = \\delta(Dx\\phi( 2^\\sigma \\delta |x|)+\\phi(2^\\sigma \\delta |x|)xD).\n\\]\n\nWe claim that the conclusion of Proposition~\\ref{Qprop} is valid with\nthe operator $Q$ defined as \n\\begin{equation}\nQ_\\rho = \\rho Q \\rho, \\qquad Q = \\int_{-\\infty}^0 {{\\tilde{Q}}}_\\sigma d\\sigma + \\sum_{k=0}^\\infty S_k Q_k S_k. \n\\label{qlowd}\n\\end{equation}\nThe family $\\mathcal Q$ is obtained as before by allowing the\nchoice of the functions $\\phi_k$ to depend on the slowly varying\nsequences $(\\alpha^\\sigma_{j})_{j \\in {\\mathbb N}}$ which are chosen\nindependently\\footnote{In effect, without any restriction in generality,\n one may also assume that $\\alpha^\\sigma_{j}$ is also slowly varying\n with respect to $\\sigma$} for different $k$.\n\nThere is no change in part (i) of Proposition~\\ref{Qprop}. 
For part\n(ii) we need to prove that\n\\begin{equation} \\label{qtx}\n\\left\\| Q u \\right \\|_{{{\\tilde{X}}}} \\lesssim \\|u\\|_{{{\\tilde{X}}}}.\n\\end{equation}\nThe high frequencies are estimated directly from the ${X}$ norm;\ntherefore we have to consider the integral term in $Q$ and show that\n\\[\n\\left\\| \\int_{-\\infty}^0 {{\\tilde{Q}}}_\\sigma u\\, d\\sigma\\right \\|_{{{\\tilde{X}}}} \\lesssim \\|u\\|_{{{\\tilde{X}}}}.\n\\]\n\nThe ${X}$ component of the ${{\\tilde{X}}}$ norm is easily estimated by Littlewood-Paley\ntheory, so due to Lemma~\\ref{txdx}, it would remain to prove the local\n$L^2$ bound\n\\begin{equation}\\label{quo}\n\\left\\| \\int_{-\\infty}^0 {{\\tilde{Q}}}_\\sigma u d\\sigma\\right \\|_{L^2_{t,x}(\\{|x| \\leq 1\\})} \\lesssim\n\\|u\\|_{{{\\tilde{X}}}}.\n\\end{equation}\n\nWe can neglect the time variable in the sequel.\nWe have the $L^2$ bound\n\\[\n\\| {{\\tilde{Q}}}_{\\sigma} u\\|_{X_\\sigma} \\lesssim\n \\| S_\\sigma u\\|_{X_\\sigma}\n\\]\nwhich leads to \n\\[\n\\| \\nabla {{\\tilde{Q}}}_{\\sigma} u\\|_{X_\\sigma} \\lesssim\n2^{\\sigma} \\| S_\\sigma u\\|_{X_\\sigma}\n\\]\nand the corresponding pointwise bound\n\\[\n\\| \\nabla {{\\tilde{Q}}}_{\\sigma} u\\|_{L^\\infty (A_{<-\\sigma})} \\lesssim\n2^{\\frac{n+1}2 \\sigma } \\| S_\\sigma u\\|_{X_\\sigma}\n\\]\nwhich establishes the convergence and the bound for the corresponding integral\n\\[\n\\left\\| \\int_{-\\infty}^0 \\nabla {{\\tilde{Q}}}_{\\sigma} u\\,d\\sigma \\right\\|_{L^\\infty(A_{<0})} \\lesssim\n\\| S_{\\leq 0} u\\|_{{X}}.\n\\]\nHence in order to prove \\eqref{quo} it remains to establish a similar\nbound for the integral at $x=0$. Assume first that $u \\in L^2$, which\narguing as above guarantees the uniform convergence of the integral.\nDenoting by $K_\\sigma$ the kernel of $S_\\sigma$ we have\n\\[\n\\begin{split}\n({{\\tilde{Q}}}_{\\sigma} u)(0) = &\\ (S_\\sigma Q_{\\sigma,\\phi^0} S_\\sigma) u(0) \n\\\\\n= &\\ \\langle K_\\sigma, Q_{\\sigma,\\phi^0} S_\\sigma u \\rangle\n= \\langle Q_{\\sigma,\\phi^0} K_\\sigma, S_\\sigma u \\rangle \n\\\\ \n= & \\ \\int Q_{\\sigma,\\phi^0}(x,D_x) K_\\sigma(x) \\int K_\\sigma(x-y) u(y) dy\ndx\n\\\\ =&\\ (S_\\sigma^1 u)(0)\n\\end{split}\n\\]\nwhere $S_\\sigma^1$ is the frequency localized multiplier with\nspherically symmetric Schwartz kernel\n\\[\nK_\\sigma^1 = Q_{\\sigma,\\phi^0}(x,D_x) K_\\sigma * K_\\sigma.\n\\]\nDue to the frequency localization we can define\n\\[\nS_{<0}^1 = \\int_{-\\infty}^{0} S_\\sigma^1 d \\sigma.\n\\]\nThe punch line is that by construction the operators $S_\\sigma^1$ have\nthe same kernel up to the appropriate rescaling. This implies that the\nsymbols of $S_{<0}^1$ are constant for $|\\xi| \\leq\n2^{-4}$. Hence both the symbols and the kernels\n$K_{<0}^1$ of $S_{<0}^1$ are Schwartz functions which\ncoincide modulo rescaling. Hence for all functions $u \\in L^2$ we have\n\\[\n\\int_{-\\infty}^0 {{\\tilde{Q}}}_{\\sigma} u (0) d\\sigma = \\langle K_{<0}^1, u\\rangle\n\\]\nwhich leads to the estimate\n\\[\n\\left | \\int_{-\\infty}^0 ({{\\tilde{Q}}}_\\sigma u)(0) d\\sigma\\right| \\lesssim\n\\|u\\|_{{{\\tilde{X}}}}.\n\\]\nThis completes the proof of the estimate \\eqref{quo} for all $u \\in\nL^2$, and, by density, shows that the integral\n\\[\n\\int_{-\\infty}^0 {{\\tilde{Q}}}_\\sigma d\\sigma\n\\]\nhas a unique bounded extension to ${{\\tilde{X}}}$.\n\nIt remains to prove part (iii) of Proposition~\\ref{Qprop}. 
If\n${{\\tilde{Q}}}_\\sigma$ is replaced by $S_\\sigma Q_\\sigma S_\\sigma$ then the high\ndimensional argument applies by simply replacing sums with integrals.\nHence it remains to estimate the difference. Commuting we obtain\n\\[\n{{\\tilde{Q}}}_\\sigma - S_\\sigma Q_\\sigma S_\\sigma = i \\delta^2 2^{2\\sigma} \\left( S'_\\sigma\nQ_{\\sigma,\\psi} x S_\\sigma - S_\\sigma x Q_{\\sigma,\\psi} \nS'_\\sigma - S'_\\sigma Q_{\\sigma,\\psi} S'_\\sigma (D) \\right).\n\\]\nCommuting again to take advantage of the cancellation between the\nfirst two terms, by semiclassical pdo calculus we can write\n\\[\n{{\\tilde{Q}}}_\\sigma - S_\\sigma Q_\\sigma S_\\sigma = \\delta^2 R_\\sigma (2^\\sigma\n\\delta x, 2^{-\\sigma} D)\n\\]\nwhere the symbol $r_\\sigma(y,\\eta)$ is localized in $\\{ |\\eta|\n\\approx 1\\}$ and satisfies\n\\[\n|\\partial_y^\\alpha \\partial_\\eta^\\beta r_\\sigma(y,\\eta)| \\leq\nc_{\\alpha \\beta} \\langle y \\rangle^{-2}.\n\\]\nThis implies the bound\n\\[\n\\| ({{\\tilde{Q}}}_\\sigma - S_\\sigma Q_\\sigma S_\\sigma) u \\|_{X'_\\sigma} \\lesssim\n\\delta^2 2^{-\\sigma} \\|S_\\sigma u\\|_{X_\\sigma}. \n\\]\nTherefore without any commuting we obtain\n\\[\n| \\langle [{{\\tilde{Q}}}_\\sigma - S_\\sigma Q_\\sigma S_\\sigma,A_{(\\sigma)}] u,u\\rangle |\n\\lesssim \\delta^2 \\| u \\|_{{X}}^2.\n\\]\nThis error is negligible since, as one can note in the proofs of\nLemmas~\\ref{lemma.low.freq}, \\ref{lemma.high.freq},\n the constant $c_1$ in (iii) has size $c_1 = O(\\delta)$.\n\n\n\n\n\n\n\n \\subsection{The resonant low dimensional case $n=1,2$: Proof of\n \\ref{main.ls.theorem.res}}\n\nThe proof follows the same outline as in the non-resonant case, with minor\nmodifications. The energy estimate \\eqref{eest} is now replaced\nby \n\\begin{equation}\n\\| u\\|_{L^\\infty_t L^2_x}^2 \\lesssim \\| u_0\\|_{L^2}^2 + \\| f_1\\|_{L^1_t L^2_x}^2\n+ \\| u\\|_{{X}_e} \\|f_2\\|_{{X}'_e}. \n\\label{eest.res}\\end{equation}\nInstead of the exterior smoothing estimate \\eqref{lsest},\nwe need to prove\n\\begin{equation}\n \\| T_\\rho u\\|_{{X}}^2 \\lesssim \\| u\\|_{L^\\infty_t L^2_x}^2 + \n \\| f_1\\|_{L^1_t L^2_x}^2 \n + \\|T_\\rho f_2\\|_{{X}'}^2 + \\| \\langle x \\rangle^{-2} (u-u_\\rho) \\|_{L^2_{t,x}}^2.\n\\label{lsest.res}\\end{equation}\nThe estimate \\eqref{lsext.res} then follows from the previous two estimates as\nwell as \\eqref{dXerror}.\n\nThe lower order terms will still be negligible. 
Indeed, letting\n$B=2b^i D_i$, we have\n\\[\n T_\\rho Bu = B T_\\rho u - (B \\rho)(u-u_\\rho) + (1-\\rho) \\left( \\int\n (1-\\rho) dx\\right)^{-1} \\int (B\\rho)(u-u_\\rho) dx.\n\\]\nTherefore by \\eqref{bnoc}, we obtain\n\\[\n\\| T_\\rho Bu\\|_{{X}'} \\lesssim \\epsilon \\|u\\|_{{X}_e},\n\\]\nwhich combined with the ${X}$ boundedness of our multiplier below shows that the lower order\nterms can be neglected.\n\nThe estimate \\eqref{lsest.res} follows from \n\n\\begin{proposition} \\label{Qprop.res}\nThere is a family $\\mathcal Q_{res}$ of bounded self-adjoint operators $Q_{res}$ with \nthe following properties:\n\n(i) $L^2$ boundedness,\n\\[\n\\|Q_{res}\\|_{L^2 \\to L^2} \\lesssim 1,\n\\]\n\n(ii) ${X}$ boundedness,\n\\[\n|\\langle Q_{res} u, f \\rangle | \\lesssim \\| T_\\rho f\\|_{{X}'} \\| T_\\rho u \\|_{{X}},\n\\]\n\n(iii) Positive commutator,\n\\[\n\\sup_{Q_{res} \\in \\mathcal Q_{res}} \\langle Cu,u \\rangle \\geq c_1 \\|T_\\rho u\\|_{{X}}^2 -\nc_2 \\|\\langle x\\rangle^{-2} (u-u_\\rho) \\|_{L^2_{t,x}}^2.\n\\]\n\\end{proposition}\n\n\\begin{proof}\n We construct $Q_{res}$ as in the non-resonant case but with the\n modified truncation operator\n\\[\nQ_{res} u = T_\\rho Q T_\\rho.\n\\]\nwith $Q$ given by \\eqref{qlowd}.\n\nThe properties (i) and (ii) are straightforward.\nFor (iii) we note that \n\\[\nS_k T_\\rho u = S_k \\rho(u-u_\\rho)\n\\]\nwhile\n\\[\n T_\\rho A u = \\rho Au + c (1-\\rho) \\int (1-\\rho) A (u-u_\\rho) dx = \n \\rho A(u-u_\\rho) - c (1-\\rho) \\int (u-u_\\rho) A \\rho dx.\n\\]\nHence we can express the bilinear form $\\langle Au, Q_{res} u \\rangle$ in\nterms of the operator $Q_\\rho$ in the nonresonant case\n\\[\n\\langle Au, Q_{res} u \\rangle = \\langle A(u-u_\\rho), Q_\\rho (u-u_\\rho) \\rangle - c \\int\n(u-u_\\rho) A \\rho dx \\ \\langle (1-\\rho), Q T_\\rho u \\rangle\n\\]\nwhich implies that\n\\[\n\\langle C_{res} u,u\\rangle = \\langle C (u-u_\\rho),u-u_\\rho\\rangle + c \\Im \\int\n(u-u_\\rho) A \\rho dx\\ \\langle (1-\\rho), Q T_\\rho u \\rangle.\n\\]\nHence we can apply part (iii) of Proposition~\\ref{Qprop} and\n\\eqref{qtx} to obtain the desired conclusion.\n\\end{proof}\n\n\n\n\\subsection{Non-trapping metrics: Proof of Theorem~\\ref{main.ls.theoremnt}.}\nThis requires some modifications of the previous argument. First of\nall, instead of the energy estimate \\eqref{eest}, we need a\nstraightforward modification of it, namely\n\\begin{equation}\n\\| u\\|_{L^\\infty_t L^2_x}^2 \\lesssim \\| u_0\\|_{L^2}^2 + \\| f_1\\|_{L^1_t L^2_x}^2\n+ \\| u\\|_{{{\\tilde{X}}}} \\|f_2\\|_{{{\\tilde{X}}}'}.\n\\label{eestnt}\\end{equation}\nWe still need the exterior local smoothing estimate \\eqref{lsest}.\nHowever, now we can complement it with an interior estimate,\nnamely\n\\begin{equation}\n\\| (1-\\rho) u\\|_{{{\\tilde{X}}}}^2 \\lesssim \\| u\\|_{L^\\infty_t L^2_x}^2 + \n\\| f_1\\|_{L^1_t L^2_x}^2 + \\| \\rho u\\|_{{{\\tilde{X}}}}^2\n+ \\|(1-\\rho) f_2\\|_{{{\\tilde{X}}}'}^2 + \\| (1-\\rho) u\\|_{L^2_{t,x}}^2. \n\\label{lsestnt}\\end{equation}\nThe conclusion of Theorem~\\ref{main.ls.theoremnt} is obtained by combining \nthe three estimates \\eqref{eestnt}, \\eqref{lsest} and \\eqref{lsestnt}.\n\nIt remains to prove \\eqref{lsestnt}. This is obtained by applying \nto the function $v = (1-\\rho) u$ the local bound\n\n\\begin{proposition}\n Assume that the coefficients $a^{ij}$, $b^i$, $c$ are real and satisfy \\eqref{coeff}, \\eqref{coeffb},\n and \\eqref{coeffc}. Moreover, assume that\n the metric $a^{ij}$ is non-trapping. 
Let $v$ be a\n function supported in $\\{|x| \\leq 2^{M+1}\\}$ which solves the\nequation\n\\begin{equation}\n(D_t +A) v = g_1 + g_2, \\qquad v(0) = v_0\n\\end{equation}\nin the time interval $[0,T]$. \nThen we have \n\\begin{equation}\n\\| v \\|_{L^2_t H^{\\frac12}_x}^2 \\lesssim \\| v\\|_{L^\\infty_t L^2_x}^2 + \n\\| g_1\\|_{L^1_t L^2_x}^2 + \\|g_2\\|_{L^2_t H^{-\\frac12}_x}^2 + \\| v\\|_{L^2_{t,x}}^2. \n\\label{leint}\\end{equation}\n\\end{proposition}\n\n\\begin{proof}\n We use again the multiplier method. The following lemma tells us\n how to choose an appropriate multiplier.\n\\begin{proposition}\\label{Doi_construction}\n Assume that the coefficients $a^{ij}$ satisfy \\eqref{coeff}.\n Moreover, we assume that the Hamiltonian vector field $H_a$ permits\n no trapped geodesics. Then there exists a smooth, time-independent, real-valued symbol $q\\in S^0_{hom}$\n so that\n\\[\nH_a q\\gtrsim |\\xi|,\\quad \\text{ in } \\{|x| \\leq 2^{M+1}\\}.\n\\]\n\\end{proposition}\n\nThis proposition is essentially from \\cite{D1}, if $a^{ij}$ were smooth. See also \nLemma 1 of \\cite{ST}, which includes some discussion of the limited regularity. \n\nWorking in the Weyl calculus and using this multiplier $Q$, we compute\n\\[\n\\frac{d}{dt}\\langle v,Qv\\rangle = -2\\Im\\langle (D_t+A)v,Qv\\rangle\n+ i\\langle [A,Q]v,v\\rangle \n\\]\nwhich after time integration yields\n\\[\n\\langle i [A,Q]v,v\\rangle = \\langle v,Qv \\rangle |^T_0 + 2\\Im\\langle g_1+g_2,Qv\\rangle. \n\\]\nFor the second term on the right, we apply Cauchy-Schwarz and use the\n$L^2$ and $H^\\frac12$ boundedness of $Q$ to obtain\n\\[\n|\\langle (D_t+A)v,Qv\\rangle| \\lesssim \\|v\\|_{L^\\infty_t L^2_x}^2 + \\|g_1\\|_{L^1_t\n L^2_x}^2 + \\|g_2\\|_{L^2_t H_x^{-\\frac12}} \\| v\\|_{L^2_t H_x^\\frac12}.\n\\]\nHence\n\\[\n\\langle i [A,Q]v,v\\rangle \\lesssim \n\\|v\\|_{L^\\infty_t L^2_x}^2 + \\|g_1\\|_{L^1_t\n L^2_x}^2 + \\|g_2\\|_{L^2_t H^{-\\frac12}_x} \\| v\\|_{L^2_t H^\\frac12_x}.\n\\]\nThen it remains to prove the positive commutator bound\n\\begin{equation}\n\\langle i [A,Q]v,v\\rangle \\geq c_1 \\|v\\|_{L^2_t H^\\frac12_x}^2 - c_2 \\|v\\|_{L^2_{t,x}}^2.\n\\end{equation}\nThe positive contribution comes from the second order terms in $P$.\nPrecisely, we have\n\\[\ni[ D_i a^{ij} D_j,Q(x,D)] = Op(H_a q) +O(1)_{L^2 \\to L^2}.\n\\]\nThe first symbol is positive, and we can obtain a bound from below by\nG\\aa rding's inequality. The first order term yields an $L^2$ bounded commutator, and the zero\norder term is $L^2$ bounded by itself.\n \nHere, we remind the reader that\nwe are not working with classical smooth symbols but instead with\nsymbols of limited regularity, and we refer the interested reader to\nthe discussion in Taylor \\cite[p. 45]{TaylorIII} for further details\non these otherwise classical results.\n\\end{proof}\n\n\\subsection{Non-trapping metrics: Proof of Theorem~\\ref{main.ls.theoremnt.res}.}\n\nThe argument is similar to the above one, with some obvious\nmodifications. Instead of \\eqref{eestnt} we have\n\\begin{equation}\n\\| u\\|_{L^\\infty_t L^2_x}^2 \\lesssim \\| u_0\\|_{L^2}^2 + \\| f_1\\|_{L^1_t L^2_x}^2\n+ \\| u\\|_{{X}} \\|f_2\\|_{{X}'} \n\\label{eestnt.res}\\end{equation}\nwhile \\eqref{lsestnt} is replaced by \n\\begin{multline}\n\\| (1-\\rho) (u-u_\\rho)\\|_{{X}}^2 \\lesssim \\| u\\|_{L^\\infty_t L^2_x}^2 + \n\\| f_1\\|_{L^1_t L^2_x}^2 + \\| \\rho (u-u_\\rho)\\|_{{X}}^2\n+ \\|f_2\\|_{{X}'}^2 \\\\+ \\| \\langle x\\rangle^{-2} (u-u_\\rho)\\|_{L^2_{t,x}}^2. 
\n\\label{lsestnt.res}\\end{multline}\n\nThe conclusion of Theorem~\\ref{main.ls.theoremnt.res} is obtained by combining \nthe estimates \\eqref{eestnt.res}, \\eqref{lsest.res} and \\eqref{lsestnt.res} and\napplying \\eqref{dXerror} to reduce the error terms to the form presented in \\eqref{lsntres}.\n\nIt remains to prove \\eqref{lsestnt.res}. We first compute\n\\[\\begin{split}\nD_t u_\\rho &= \\Bigl(\\int (1-\\rho)\\:dx\\Bigr)^{-1}\\Bigl[\\langle (D_t+A)u, (1-\\rho)\\rangle\n- \\langle Au, (1-\\rho)\\rangle\\Bigr]\\\\\n&= \\Bigl(\\int(1-\\rho)\\:dx\\Bigr)^{-1}\\Bigl[\\langle f_1+f_2,(1-\\rho)\\rangle - \\langle u-u_\\rho,A(1-\\rho)\\rangle\\Bigr].\n\\end{split}\n\\]\nThe function $v = (1-\\rho) (u-u_\\rho)$ solves\n\\begin{multline*}\nP v = (1-\\rho)(f_1+f_2) -(1-\\rho)\\Bigl(\\int(1-\\rho)\\:dx\\Bigr)^{-1}\\Bigl[\\langle f_1+f_2,(1-\\rho)\\rangle\n-\\langle u-u_\\rho, A(1-\\rho)\\rangle\\Bigr]\n\\\\+[A,(1-\\rho)](u-u_\\rho).\n\\end{multline*}\nThen we apply \\eqref{leint} to $v$ to obtain\n\\[\n\\begin{split}\n \\| v\\|_{L^2_t H^\\frac12_x}^2 &\\lesssim \\|v\\|_{L^\\infty_t L^2_x}^2 + \\|\n (1-\\rho) f_1 \\|_{L^1_t L^2_x}^2 +\\| [A,(1-\\rho)] (u-u_\\rho)\\|_{L^2_t\n H^{-\\frac12}_x} \\\\ &\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad + \\| (1-\\rho) f_2 \\|_{L^2_t H^{-\\frac12}_x}^2 \n + \\| \\langle x\\rangle^{-2}(u- u_\\rho)\\|_{L^2_{t,x}}^2 \\\\ & \\\n \\lesssim \\|u\\|_{L^\\infty_t L^2_x}^2 + \\| f_1 \\|_{L^1_t L^2_x}^2 +\n \\|\\rho(u-u_\\rho)\\|_{{X}}^2 + \\| f_2 \\|_{{X}'}^2 \n+ \\| \\langle x\\rangle^{-2}(u- u_\\rho)\\|_{L^2_{t,x}}^2\n\\end{split}\n\\]\nand \\eqref{lsestnt.res} follows.\n\n\n\n\\bigskip\n\\newsection{Time independent nontrapping metrics}\n\n\nThe aim of this section is to prove\nTheorems~\\ref{theorem.PcSmoothing},\\ref{theorem.PcSmoothing.res}.\nThus we work with a nontrapping, self-adjoint operator $A$ whose\ncoefficients are time independent. We prove\nTheorem~\\ref{theorem.PcSmoothing} in detail, and then outline the\nmodifications which are needed for\nTheorem~\\ref{theorem.PcSmoothing.res}.\n\n\n\\subsection{Proof of Theorem~\\ref{theorem.PcSmoothing}} Here we shall provide the\ndetails for the $n\\neq 2$ case. The general case follows with the obvious logarithmic adjustments\nto the ${{\\tilde{X}}}$ spaces in $n=2$.\n\nWe break the proof into steps.\n\n \\par{\\bf Step 1:} Without any restriction, we assume that $u_0=0$\n and that $u$ is the forward solution to \\eqref{main.equation}.\n Nonzero initial data $u_0$ can be easily added in via a $TT^*$\n argument.\n\n \\par{\\bf Step 2:} We add a damping term to the equation\n\\[\n(D_t+A-i\\varepsilon)u_\\varepsilon = f\n\\]\nin order to insure global square integrability of the solution\n$u_\\epsilon$. Applying our nontrapping estimate \\eqref{lsnt} we have\n\\begin{equation}\n\\|u_\\varepsilon\\|_{{\\tilde{X}}}\\lesssim\n\\|f\\|_{{{\\tilde{X}}}'}+\\|u_\\varepsilon\\|_{L^2_{t,x}({\\mathbb R}\\times B(0,2R))}.\n\\label{veps}\\end{equation}\nWe want to eliminate the second term on the right (when we add $P_c$\non the left).\n\n \\par{\\bf Step 3:} \n We want to take a Fourier transform in time and use Plancherel's\n theorem. For this we need to work with Hilbert spaces. These are\n defined using the structure introduced in the previous section. We\n denote by $\\alpha$ a family of positive sequences $(\\alpha(k)_j)_{j\n \\geq k^-}$ which have sum $1$ for each $k$ and by $\\mathcal\n A$ the collection of such sequences. 
For $\\alpha \\in \\mathcal A$ we\n define the Hilbert space ${{\\tilde{X}}}_\\alpha$ with norm\n\\[\n\\| u\\|_{{{\\tilde{X}}}_\\alpha}^2 = \\sum_{k} 2^k\\|S_k u\\|_{X_{k,\\alpha(k)}}^2 + \\|\\langle\nx\\rangle^{-1} u\\|_{L^2_{t,x}}^2\n\\]\nas well as its dual ${{\\tilde{X}}}_\\alpha'$. \nSince\n\\[\n\\| u\\|_{{{\\tilde{X}}}} \\approx \\sup_{\\alpha \\in \\mathcal A} \\| u\\|_{{{\\tilde{X}}}_\\alpha},\n\\qquad\n\\| u\\|_{{{\\tilde{X}}}'} \\approx \\inf_{\\alpha \\in \\mathcal A} \\| u\\|_{{{\\tilde{X}}}'_\\alpha}\n\\]\nwe can rewrite \\eqref{veps} in the equivalent form\n\\[\n\\|u_{\\varepsilon}\\|_{{{\\tilde{X}}}_\\alpha}\\lesssim \\|f\\|_{{{\\tilde{X}}}'_\\beta}+\n\\|u_\\varepsilon\\|_{L^2_{t,x}({\\mathbb R}\\times B(0,2R))}, \\qquad \\alpha, \\beta\n\\in \\mathcal A.\n\\]\nWe denote by $X^0_\\alpha$ the spatial version of $X_\\alpha$, i.e.\n$X_\\alpha = L^2_t X_\\alpha^0$. Then we take a time Fourier transform,\nand by Plancherel's theorem this is equivalent to\n\\[\n\\|\\hat{u}_\\varepsilon\\|_{L^2_\\tau {{\\tilde{X}}}_\\alpha^0}\\lesssim \n\\|\\hat{f}\\|_{L^2_\\tau ({{\\tilde{X}}}^0_\\beta)'} + \\|\\hat{u}_\\varepsilon\n\\|_{L^2_{\\tau,x}({\\mathbb R}\\times B(0,2R))}.\n\\]\nThis is in turn equivalent to the fixed $\\tau$ bound\n\\[\n\\|\\hat{u}_\\varepsilon(\\tau)\\|_{{{\\tilde{X}}}_\\alpha^0}\\lesssim\n\\|\\hat{f}(\\tau)\\|_{({{\\tilde{X}}}_\\beta^0)'}\n+\\|\\hat{u}_\\varepsilon(\\tau)\\|_{L^2(B(0,2R))},\n\\]\nwhich we rewrite in the form\n\\[\n\\|v \\|_{{{\\tilde{X}}}^0_\\alpha}\\lesssim\n\\|(A-\\tau-i\\varepsilon) v \\|_{({{\\tilde{X}}}_\\beta^0)'}+\\| v \\|_{L^2(B(0,2R))},\n\\]\nor, optimizing with respect to $\\alpha,\\beta \\in \\mathcal A$,\n\\begin{equation}\\label{have}\n\\|v \\|_{{{\\tilde{X}}}^0}\\lesssim\n\\| (A-\\tau-i\\varepsilon)v \\|_{({{\\tilde{X}}}^0)'}+\\| v \\|_{L^2(B(0,2R))}.\n\\end{equation}\nA similar\ncomputation shows that the estimate that we want to prove, namely \n\\eqref{PcSmoothing} with $u_0=0$, can be rewritten in the equivalent\nform \n\\begin{equation}\\label{want}\n \\|P_c v \\|_{{{\\tilde{X}}}^0}\\lesssim \\| (A-\\tau-i\\epsilon) v \\|_{({{\\tilde{X}}}^0)'}\n\\end{equation}\nuniformly with respect to $\\tau \\in {\\mathbb R}$, $\\epsilon > 0$.\n\n\n \\par{\\bf Step 4:} When $|\\tau|$ is large, \\eqref{want}\nfollows from \\eqref{have} combined with the elliptic bound\n\\begin{equation}\\label{ellip}\n\\tau^{1\/4} \\|v\\|_{L^2(B(0,2R))}\\lesssim \\|v\\|_{{{\\tilde{X}}}^0}+\\|(A-\\tau-i\\epsilon)v\\|_{({{\\tilde{X}}}^0)'}.\\end{equation}\nTo prove this we replace $v$ by $w = (1-\\rho) v$ and rewrite \nit in the form\n\\[\n\\tau^{1\/4} \\|w\\|_{L^2}\\lesssim \n\\|w\\|_{H^\\frac12}+\\|(A-\\tau-i\\epsilon)w\\|_{H^{-\\frac12}}\n\\]\nfor $w$ with compact support. 
Since\n\\[\n\\tau \\| w \\|_{H^{-\\frac32}} \\lesssim \\|(A-\\tau-i\\epsilon)w\\|_{H^{-\\frac32}}\n+ \\| A w\\|_{H^{-\\frac32}} \\lesssim \\|(A-\\tau-i\\epsilon)w\\|_{H^{-\\frac12}}\n+ \\| w\\|_{H^{\\frac12}},\n\\]\nthe bound \\eqref{ellip} follows by interpolation.\n\\par{\\bf Step 5:}\nFor $\\tau$ in a bounded set we argue by contradiction.\nIf \\eqref{want} does not hold uniformly then we find sequences \n\\[\n\\varepsilon_n\\to 0, \\qquad \\tau_n\\to \\tau, \n\\]\nand $ v_n \\in {{\\tilde{X}}}^0$ with $P_c v_n=v_n$ and\n\\[\n\\|(A-\\tau_n-i\\varepsilon_n)v_n\\|_{({{\\tilde{X}}}^0)'}\\to 0,\\quad\n\\|v_n\\|_{L^2(B(0,2R))}=1.\n\\]\nOn a subsequence we have\n\\[\nv_n\\to v\\quad \\text{weakly* in} \\quad {{\\tilde{X}}}^0.\n\\]\nSince ${{\\tilde{X}}}^0 \\subset H^{\\frac12}_{loc}$, on a subsequence\nwe have the strong convergence\n\\[\nv_n \\to v \\qquad \\text{in } L^2_{loc}.\n\\]\nHence we have produced a function $v $ with \n\\begin{equation}\n v \\in {{\\tilde{X}}}^0, \\qquad P_c v = v, \\qquad (A-\\tau) v = 0, \n\\qquad \\| v\\|_{L^2(B(0,2R))}=1.\n\\label{vprop}\\end{equation}\nDepending on the sign of $\\tau$ we consider three cases.\n\n\\par{\\bf Step 6:}\nIf $\\tau < 0$ then, using the bound \\eqref{bc} for the lower order terms\nin $A$, we obtain\n\\[\n\\| D_i a^{ij} D_j v -\\tau v\\|_{({{\\tilde{X}}}^0)'} \\lesssim \\| v\\|_{{{\\tilde{X}}}^0}.\n\\]\nThen\n\\[\n\\| v\\|_{{{\\tilde{X}}}^0}^2 \\gtrsim \\langle v, D_i a^{ij} D_j v -\\tau v \\rangle \n\\gtrsim \\|v\\|_{H^1}^2 ,\n\\]\nand therefore $v \\in L^2$ is an eigenfunction. This contradicts \nthe relation $P_c v = v$.\n\n\\par{\\bf Step 7:} \nIf $\\tau = 0$ then there is either a zero eigenvalue or a zero\nresonance, both of which are excluded by hypothesis.\n\n\\par{\\bf Step 8:} \nIt remains to consider the most difficult case $\\tau > 0$. Here \nthe properties \\eqref{vprop} of $v$ are no longer sufficient \nto obtain a contradiction. Instead we will establish an additional\nproperty of $v$, namely that $v$ satisfies an outgoing radiation\ncondition. In order to state this, we need an additional \nregularity property for $v$. We define the space ${{\\tilde{X}}}^{0}_{med}$\nwith norm\n\\[\n\\| v \\|_{{{\\tilde{X}}}^{0}_{med}} = \\| v\\|_{L^2(D_{0})} + \\|\\nabla\nv\\|_{L^2(D_{ 0})} + \\sup_{j > 0} \\| |x|^{-\\frac12} v\\|_{L^2(D_j)} +\n \\| |x|^{-\\frac12} \\nabla v\\|_{L^2(D_j)}\n\\]\nwhich coincides with the ${{\\tilde{X}}}^0$ norm for intermediate frequencies but\nimproves it at both low and high frequencies. Then we claim that $v\n\\in {{\\tilde{X}}}^0_{med}$. More precisely, we will prove the elliptic bound\n\\begin{equation}\n\\| v \\|_{{{\\tilde{X}}}^{0}_{med}} \\lesssim \\|v\\|_{{{\\tilde{X}}}^0} +\n \\|(A-\\tau-i\\epsilon)v\\|_{({{\\tilde{X}}}^0)'}, \\qquad 0 < \\tau_0 < \\tau < \\tau_1 \n\\label{vregb} \\end{equation}\nwith implicit constants which may depend on the thresholds\n$\\tau_0$, $\\tau_1$. 
\n\nNow we define the closed subspace ${{\\tilde{X}}}^0_{out}$ of ${{\\tilde{X}}}^0$,\n\\[\n{{\\tilde{X}}}^0_{out}=\\{v\\in {{\\tilde{X}}}^0_{med}\\,:\\, \\lim_{j\\to\\infty} \\|r^{-1\/2}(\\partial_r\n-i\\tau^{1\/2})v\\|_{L^2(D_j)}=0\\},\n\\]\nand also claim that $v$ has the additional property\n\\begin{equation}\nv \\in {{\\tilde{X}}}^0_{out}.\n\\label{vout} \\end{equation}\nIn other words this implies that $v$ is a resonance contained \ninside the continuous spectrum.\n\nWe postpone the proof of \\eqref{vregb} and \\eqref{vout} \nand conclude first our proof by contradiction, by showing\nthat there are no resonances inside the continuous spectrum.\nSuch results are known, see for instance \\cite{Agmon}, but perhaps\nnot in the degree of generality we need here. In any case,\nfor the sake of completeness, we provide a full proof. \n\nLet $\\chi$ be a smooth spherically symmetric increasing bump function\n$\\chi$ with $\\chi(r)\\equiv 0$ for $r<1\/2$ and $\\chi(r) \\equiv 1$\nfor $r>2$. Since $A$ is self-adjoint, for large $j$ we commute\n\\[\n\\begin{split}\n0=&\\ \\frac{i}{2}\\langle [A,\\chi(2^{-j}r)]v,v\\rangle \\\\\n = &\\ \\Im\\left\\langle 2^{-j} \\chi'(2^{-j} r)\n\\Bigl(\\frac{x_ia^{ij}}{r}\\partial_j -i\\tau^{1\/2}\\Bigr)v,v\\right\\rangle\n+2^{-j} \\tau^{1\/2}\\langle \\chi'(2^{-j}r) v,v\\rangle\n\\\\&\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\\qquad\n+2^{-j}\\Bigl\\langle b^i\\frac{x_i}{r} \\chi'(2^{-j}r) v, v\\Bigr\\rangle .\n\\end{split}\n\\]\nUsing the Schwarz inequality, \\eqref{coeffb.M}, and the outgoing radiation condition, we\nconclude that\n\\begin{equation}\\label{initial.decay}\n\\lim_{j\\to\\infty} \\|r^{-1\/2} v\\|_{L^2(D_j)}=0\n\\end{equation}\nwhich shows that $v$ has better decay at infinity. We note that \nthis is the only use we make of the radiation condition. From this, by\nelliptic theory, we also obtain a similar decay for the gradient,\n\\begin{equation}\\label{dinitial.decay}\n\\lim_{j\\to\\infty} \\|r^{-1\/2} \\nabla v\\|_{L^2(D_j)}=0.\n\\end{equation}\n\nTo conclude we use \\eqref{initial.decay} and \\eqref{dinitial.decay} to\nshow that in effect $v \\in L^2$; i.e. $v$ is an eigenvalue. Then by\nthe results of \\cite{KT2} $v$ must be $0$. Here, we shall again use a\npositive commutator argument. The multiplier we use is the operator\n$Q_k$, for some $k\\le 0$, in Lemma~\\ref{lemma.low.freq} but where for simplicity we set\n$\\delta = 1$. We have\n\\[\n0=-2\\Im \\langle Q_k v, (A-\\tau)v\\rangle = \\langle C_k v,v\\rangle - 2 \n\\Im \\langle Q_k v, (b^j D_j + D_j b^j + c)v\\rangle\n\\]\nwhere\n\\[\n C_k = i [D_l a^{lm} D_m,Q_k].\n\\]\nThe expression of the operator $C_k$ is exactly as in the \nformula \\eqref{C} but with unmollified coefficients $a^{ij}$.\nThe main contribution $C_k^0$ is estimated as there\nby\n\\[\n\\langle C_k^0 v,v\\rangle \\gtrsim \\left\\langle \\frac{\\alpha(2^k |x|)}{\\langle 2^k\n x\\rangle} \\nabla v,\\nabla v \\right\\rangle, \n\\]\nwhile the error terms are bounded by\n\\[\n\\left\\langle \\frac{\\kappa ( |x|)}{\\langle 2^k x\\rangle} \\nabla v,\\nabla v \\right\\rangle \n\\]\nrespectively \n\\[\n\\left\\langle \\langle x \\rangle^{-2} v, v \\right\\rangle. 
\n\\]\nThe expression $\\Im \\langle Q_k v, (b^j D_j + D_j b^j + c)v\\rangle$ can\nalso be included in the two error terms.\nThus we obtain\n\\[\n\\left\\langle \\frac{\\alpha(2^k |x|)}{\\langle 2^k x\\rangle} \\nabla v,\\nabla v \\right\\rangle \n\\lesssim \\left\\langle \\frac{\\kappa ( |x|)}{\\langle 2^k x\\rangle} \\nabla v,\\nabla v \\right\\rangle \n+ \\left\\langle \\langle x \\rangle^{-2} v, v \\right\\rangle. \n\\]\nFor $|x| > 2^{M}$ we have, by \\eqref{new.alpha},\n\\[\n\\kappa(x) \\lesssim \\epsilon \\alpha(2^k x);\n\\]\ntherefore the first term on the right is essentially negligible. We obtain\n\\[\n\\int \\frac{\\alpha(2^k |x|)}{\\langle 2^k x\\rangle} |\\nabla v|^2 dx \n\\lesssim \\int_{D_{ < M}} |\\nabla v|^2 dx \n+ \\int \\langle x \\rangle^{-2} |v|^2 dx .\n\\]\nAt the same time we have\n\\[\n0 = \\left\\langle \\frac{\\alpha(2^k |x|)}{\\langle 2^k x\\rangle} v, (A-\\tau) v \\right\\rangle, \n\\]\nwhich after an integration by parts yields\n\\[\n\\tau \\int \\frac{\\alpha(2^k |x|)}{\\langle 2^k x\\rangle} |v|^2 dx \n\\lesssim \\int \\frac{\\alpha(2^k |x|)}{\\langle 2^k x\\rangle} |\\nabla v|^2 dx \n+ \\int \\langle x \\rangle^{-2} |v|^2 dx. \n\\]\nCombining the two relations we obtain\n\\[\n\\int \\frac{\\alpha(2^k |x|)}{\\langle 2^k x\\rangle} ( |\\nabla v|^2 +|v|^2 ) dx \n\\lesssim \\int_{D_{ < M}} |\\nabla v|^2 dx \n+ \\int \\langle x \\rangle^{-2} |v|^2 dx. \n\\]\nFinally we let $k \\to -\\infty$ to obtain\n\\[\n\\int |\\nabla v|^2 +|v|^2 dx \n\\lesssim \\int_{D_{ < M}} |\\nabla v|^2 dx \n+ \\int \\langle x \\rangle^{-2} |v|^2 dx < \\infty \n\\]\nwhich shows that $v \\in L^2$. \n\nWe note that \\eqref{initial.decay} and\n\\eqref{dinitial.decay} are not used in any quantitative way but serve\nonly to justify the previous computations. More precisely, one can \nintroduce in the computation a cutoff outside a large enough ball\nand then pass to the limit. \n\nIt remains to prove \\eqref{vregb} and \\eqref{vout}.\n\n\n{\\bf Step 9:} Here we prove \\eqref{vregb}. We begin with the \nbounds on $v$. This is trivial for the high frequencies of $v$,\n\\[\n \\| S_{>0} v\\|_{X^0_0} \\lesssim \\| v\\|_{{{\\tilde{X}}}^0}.\n\\]\nTo estimate the low frequencies, we compute\n\\[\n(\\tau+i\\epsilon) S_{<0} v = S_{< 0} A v - S_{<0}(A-\\tau-i\\epsilon)v.\n\\]\nWriting $A$ in the generic form\n\\[\nA = D^2 a + D b + c, \n\\]\nwe have\n\\[\n\\begin{split}\n\\| S_{<0} v\\|_{X^0_{0}} \\lesssim &\\ \\| S_{< 0} D^2 av\\|_{X^0_0} +\n\\| S_{<0} bv \\|_{X^0_0} + \\| S_{<0} cv\\|_{X^0_0} + \\|S_{<0} (A-\\tau-i\\epsilon) v\\|_{X^0_0}\n\\\\\n \\lesssim &\\ \\| av\\|_{{X}^0} +\n\\| bv \\|_{X^0_0} + \\| cv\\|_{X^0_0} + \\|(A-\\tau-i\\epsilon)v\\|_{({{\\tilde{X}}}^0)'}\n\\\\ \\lesssim &\\ \\| v\\|_{{X}^0} +\n\\| \\langle x\\rangle^{-1} v \\|_{L^2} + \\|(A-\\tau-i\\epsilon)v\\|_{({{\\tilde{X}}}^0)'}.\n\\end{split}\n\\]\nOnce we control $\\| v\\|_{X^0_0}$, we can also obtain control of $\\| \\nabla\nv\\|_{X^0_0}$ by a straightforward elliptic estimate.\n\n\n\n{\\bf Step 10:} Here we prove the outgoing radiation condition\n\\eqref{vout} for $v$. This is obtained from similar outgoing radiation\nconditions for the functions $v_n$. However, $v_n$ only converges to\n$v$ in a weak sense. 
Hence we need to produce some uniform estimates\nfor $v_n$ which will survive in the limit.\n\\begin{multline}\\label{out}\n\\| r^{-\\frac12} (D_r - \\tau^\\frac12) u\\|_{L^2(D_j)}^2 \\\\\\lesssim \n\\sum_{k=0}^\\infty \n2^{-\\delta(k-j)^-} \n\\left( \\|\\langle r\\rangle^\\frac12 (A-\\tau-i\\epsilon) u\\|_{L^2(D_k)} \\| \\langle r\\rangle^{-\\frac12}\n (u,\\nabla u)\\|_{L^2(D_k)} \\right. \\\\ \\left. + \n\\kappa_k \\| r^{-\\frac12} (u,\\nabla u)\\|_{L^2(D_k)}^2 \\right).\n\\end{multline}\nIn other words, there is decay when $k2^{j+2} \\cr\n (2^{-j}R)^\\delta, & 1 < R< 2^{j+2}\n\\end{array} \\right.\n\\]\nwith $\\delta$ a small parameter.\nWe write\n\\begin{equation}\\label{commute.eqn}\n-2\\Im \\langle Qu, (A-\\tau-i\\varepsilon)u\\rangle = \\langle i [A,Q]u,u\\rangle - 2\\varepsilon \\langle Qu,u\\rangle.\n\\end{equation}\nWe expect to get the main positive contribution from the first term on\nthe right. The second term on the right on the other hand is\nessentially negative definite due to the fact that its symbol is\nnegative on the characteristic set of $A - \\tau$. Finally, the term on\nthe left is bounded simply by Cauchy-Schwarz.\n\n\nTo shorten the notations, in the sequel we denote by $E$ error terms\nof the form\n\\[\nE = D O( b(R) r^{-1} \\kappa(|x|)) D + O(b(R) r^{-1} \\kappa(|x|)).\n\\]\nSuch terms occur whenever $a^{ij}$ is either differentiated or\nreplaced by the identity and are easily estimated in terms of the \nright hand side of \\eqref{out}.\n\nWe evaluate the commutator $ i [A,Q]$. A similar computation\nwas already carried out in \\eqref{C.high.calc}, which we reuse with\n$k = 0$, $\\delta = 1$ and $\\phi(r) = b(R)\/R$. We obtain\n\\[\n\\begin{split}\n i [A,Q]= &\\ 4 D \\frac{b(R)}{R} D + 4 D x \\left( \\frac{b'(R)}{R^2} -\n \\frac{b(R)}{R^3}\\right) xD - 2\\tau^\\frac12 \\left(\\frac{b'(R)}{R} x D + D x \\frac{b'(R)}{R}\\right) +E\n\\\\\n= &\\ 2D \\left( 2 \\frac{b(R)}{R} - b'(R)\\right) D - 2 Dx \\left(\n 2 \\frac{b(R)}{R^3} - \\frac{b'(R)}{R^2}\\right) xD \n\\\\&\\qquad\n+ \n {b'(R)} (A -\\tau) + (A -\\tau) {b'(R)}\n+ 2 \\left( Dx - \\tau^\\frac12 r\\right ) \\frac{b'(R)}{r R} \\left(\n xD - r \\tau^\\frac12 \\right ) + E.\n\\end{split}\n\\]\nOur choice of $b$ insures that the coefficient in the first two terms\nis positive,\n\\[\n2 \\frac{b(R)}{R} - {b'(R)} \\geq 0 \\qquad R > 1.\n\\]\nHence we obtain\n\\[\n\\langle i [A,Q] u, u\\rangle \\gtrsim 2 \\langle b'(R) (D_r - \\tau^\\frac12) u, (D_r -\n\\tau^\\frac12) u\\rangle + 2 \\Re \\langle (A -\\tau-i\\epsilon) u, b'(R) u \\rangle +\n\\langle Eu,u\\rangle\n\\]\nwhere we have inserted a harmless $\\epsilon$ term.\n\n\nIt remains to evaluate the second term on the right in\n\\eqref{commute.eqn}. We have \n\\[\n\\begin{split}\n\\tau^\\frac12 Q = &\\ - \\Bigl(D_k \\frac{x_l a^{kl}}{R}-\\tau^{1\/2}\\Bigr)\nb(R) \\Bigl(\\frac{x_i a^{ij}}{R}D_j -\\tau^{1\/2}\\Bigr)\n + \\frac{b(R)}2 (A-\\tau) + (A-\\tau) \\frac{b(R)}2\n\\\\\n&\\ - \\Bigl(D_i -D_l\\frac{a^{lk}x_k x_i}{R^2}\\Bigr) a^{ij} b(R)\n\\Bigl(D_j -\\frac{x_j x_m a^{mn}}{R^2}D_n\\Bigr) - \\frac12 (A b(R)).\n\\end{split}\n\\]\nThe first and third terms are negative while the last term can be\nincluded in $E$. 
Hence we obtain\n\\[\n\\tau^{\\frac12} \\langle Q u,u\\rangle \\leq \\Re \\langle b(R) u, (A-\\tau-i\\epsilon) u \\rangle +\n\\langle Eu,u\\rangle .\n\\]\n\nReturning to \\eqref{commute.eqn}, we insert the bounds \nfor the two terms on the right to obtain\n\\[\n\\langle b'(R) (D_r - \\tau^\\frac12) u, (D_r -\\tau^\\frac12) u\\rangle \n\\lesssim \\Re \\langle (A -\\tau-i\\epsilon) u, (2 b'(R)+\\epsilon \\tau^{-\\frac12} b(R)\n+i Q)\nu \\rangle + \\langle Eu,u \\rangle.\n\\]\nIn the region $D_j$, we have $b' \\approx 2^{-j} \\approx r^{-1}$;\ntherefore \\eqref{out} follows.\n\n\n\n\n\\subsection{Proof of Theorem~\\ref{theorem.PcSmoothing.res}}\n\nWe proceed as in the nonresonant case. The bound \\eqref{veps}\nis replaced by\n\\begin{equation}\n\\|u_\\varepsilon\\|_{X}\\lesssim\n\\|f\\|_{{X}'}+\\|u_\\varepsilon-u_{\\varepsilon \\rho}\n\\|_{L^2_{t,x}({\\mathbb R}\\times B(0,2R))}.\n\\label{veps.res}\\end{equation}\nUsing Plancherel as in Step 3, this is equivalent to the spatial\nbound\n\\begin{equation}\\label{have.res}\n\\|v \\|_{{X}^0}\\lesssim\n\\| (A-\\tau-i\\varepsilon)v \\|_{({X}^0)'}+\\| v-v_\\rho \\|_{L^2(B(0,2R))}\n\\end{equation}\nwhere ${X}^0$ is the fixed time counterpart of ${X}$. On the other\nhand the estimate that we want to prove, namely\n\\eqref{PcSmoothing.res} with $u_0=0$, has the\nequivalent form\n\\begin{equation}\\label{want.res}\n \\|v \\|_{{X}^0}\\lesssim \\| (A-\\tau-i\\epsilon) v \\|_{({X}^0)'}\n\\end{equation}\nuniformly with respect to $\\tau \\in {\\mathbb R}$, $\\epsilon > 0$.\n\nFor $\\tau$ away from $0$ we can easily bound the local average\nof $v$. We have\n\\[\n(\\tau+i\\epsilon) v_\\rho = (Av)_\\rho - ((A-\\tau-i\\varepsilon)v )_\\rho.\n\\]\nTherefore, by Cauchy-Schwarz,\n\\[\n\\tau |v_\\rho| \\lesssim \\| (A-\\tau-i\\varepsilon)v \\|_{({X}^0)'} + \\|\nv \\|_{L^2(B(0,2R))}.\n\\]\nHence we are able to bound $v$ in ${{\\tilde{X}}}^0$ as well,\n\\begin{equation}\\label{have.resa}\n\\|v \\|_{{{\\tilde{X}}}^0}\\lesssim\n\\| (A-\\tau-i\\varepsilon)v \\|_{({X}^0)'}+\\| v \\|_{L^2(B(0,2R))},\n\\qquad |\\tau| > \\tau_0.\n\\end{equation}\nConsequently, the argument for large $\\tau$ rests unchanged.\n\nConsider now the proof by contradiction. \n\nIn the case $\\tau < 0$,\nwe use the bound \\eqref{bnoc} instead of \\eqref{bc} for the \nlower order terms and show that $v$ is an eigenvalue.\nHowever, by the maximum principle, there can be no \nnegative eigenvalue for $A$.\n\n\nThe case $\\tau = 0$ is the interesting one. Then $v$ satisfies\n\\[\nv \\in {X}, \\qquad A v = 0, \\qquad \\| v-v_\\rho\\|_{L^2(B(0,2R))}=1. \n\\]\nHence $v$ is a zero generalized eigenvalue; therefore it must be\nconstant. But this contradicts the last relation.\n\nFinally, due to \\eqref{have.resa}, the case $\\tau > 0$ is identical to\nthe nonresonant case. \n\n\\subsection{ Proof of Remark~\\ref{remark.nootherres}}\n\nIf $Av = 0$ then from\n\\[\n0 = \\langle A(v-v_{D_j}),\\chi_{0}\nf(t,0) -\\sum_{j=k}^{-1} (\\phi_{j+1}(x)-\\phi_j(x))D_t^{-1} S^t_{>2j} f(t,0) \n\\]\nwith $\\phi_k(x) = \\phi(2^{k} x)$ and\n\\[\n\\phi(0) = 1, \\qquad \\text{supp } \\hat{\\phi} \\subset \\{|\\xi| \\in [1\/2,2]\\}.\n\\]\n\nNotice that $Tu=u^{in}$ with $u^{in}$ as in Section \\ref{embxs}. As such, the bound\n\\eqref{umtu} follows directly from \\eqref{Tkbdd} and \\eqref{l2uout}.\nThe bound \\eqref{lplqtu} follows similarly using a Bernstein bound, Littlewood-Paley theory, and\n\\eqref{kkf}.\nFor \\eqref{atu} we use Proposition~\\ref{lemma.A.to.Ak} to replace \n$A$ by $\\sum A_{(k)} S_k$. 
Then we use the spatial localization\ncoming from $T$, \\eqref{bernstein}, and the two derivatives gain from $ A_{(k)}$.\n\n\nWe consider now the ${X}$ bounds in \\eqref{arf}. For the second term in the left of \\eqref{arf}, \nusing Bernstein's inequality twice yields\n\\begin{align*}\n\\left\\| (\\phi_{j+1}(x)-\\phi_j(x))D_t^{-1}S_{>2j}^t\n(S_k f)(t,0)\\right\\|_{X_j}\n&\\lesssim 2^{\\frac{2-n}2 j} 2^{2j(-1+\\frac{1}{p_2'}-\\frac{1}{2})}\n\\|S_k f(t,0)\\|_{L^{p_2'}_t} \\\\\n&\\lesssim 2^{\\frac{2-n}2j} 2^{2j(-1+\\frac{1}{p_2'}-\\frac{1}{2})}\n2^{\\frac{n}{q_2'}k} \\|S_k f\\|_{L^{p_2'}_tL^{q_2'}_x} \\\\\n&= \n2^{\\frac{n}{q_2'}(k-j)} \\|S_k f\\|_{L^{p_2'}_tL^{q_2'}_x}.\n\\end{align*}\nThe $j=0$ term in $R_k$ is estimated in a similar fashion. Summing\nwith respect to $k \\leq j \\leq 0$ we use the off-diagonal decay to\nobtain\n\\begin{align*}\n\\|Rf\\|_X &\\lesssim \\left(\\sum_{j=-\\infty}^0 \\left(\\sum_{k=-\\infty}^j \n2^{\\frac{n}{q_2'}(k-j)} \\|S_k f\\|_{L^{p_2'}_tL^{q_2'}_x}\\right)^2\\right)^\\frac12\\\\\n&\\lesssim \\left(\\sum_{k=-\\infty}^0 \\|S_k f\\|^2_{L^{p_2'}_tL^{q_2'}_x}\\right)^\\frac12.\n\\end{align*}\n The bound $X$ bound for the second term in the left of \\eqref{arf} then follows from Littlewood-Paley theory.\nThe $L^{p_1}_tL^{q_1}_x$ estimate follows from similar applications of Bernstein estimates and\nLittlewood-Paley theory.\n \nFor the first term in the left of \\eqref{arf}, we may apply Proposition \\ref{lemma.A.to.Ak} to again\nreplace $A$ by $\\sum A_{(k)}S_k$. As the derivatives in $A_{(k)}$ yield a $2^{2k}$ factor, the estimate\nfor the first term in \\eqref{arf} follows from a very similar argument.\n\n\n\n\nIn order to complete the proof of \\eqref{arf}, we examine the $L^2$\npart of the ${{\\tilde{X}}}$ norm. We may first apply \\eqref{Hardy} and\n\\eqref{txdx} to reduce the problem to the bound\n\\[\n\\|\\sum_{k<0} R_k S_k f\\|_{L^2_{t,x}(\\{|x|\\le 1\\})}\\lesssim\n\\|f\\|_{L^{p_2'}_tL^{q_2'}_x}\n\\]\nin dimensions $n=1,2$.\nHere we use the fact that\n$\\phi_{j+1}(0)-\\phi_j(0)=0$. \nUsing this gain in a fashion similar to that from Section \\ref{embxs},\nwe have \n\\[\n\\|\\phi_{j+1}-\\phi_j\\|_{L^2(\\{|x|\\le 1\\})}\\lesssim 2^j.\n\\]\nThus, arguing as above,\n\\begin{align*}\n\\|R_k S_k f\\|_{L^2(\\{|x|\\le 1\\})}&\\lesssim \\sum_{j\\ge k} 2^j 2^{2j(-1+\\frac{1}{p_2'}-\\frac{1}{2})}\n2^{\\frac{n}{q_2'}k}\\|S_k f\\|_{L^{p_2'}_tL^{q_2'}_x}\\\\\n&\\lesssim 2^{\\frac{n}{2}k}\\|S_k f\\|_{L^{p_2'}_t L^{q_2'}_x}.\n\\end{align*}\nThis can clearly be summed to yield the desired bound.\n\n\nIt remains to prove \\eqref{tmdtr}. For this we will show the bound\n\\begin{equation}\n\\|\\langle x \\rangle (T-D_t R)f\\|_{L^2} \\lesssim \\| f\\|_{L^{p'}_tL^{q'}_x}.\n\\label{fdsa}\\end{equation}\nWe have \n\\[\n(T-D_t R)f = - \\sum_{k < 0} \\left(\\phi_0S^t_{\\le 0} (S_k f)(t,0) \n+\\sum_{j= k}^{-1} (\\phi_{j+1}-\\phi_j)S^t_{\\le 2j} (S_k f)(t,0)\\right).\n\\]\nArguing as above we obtain\n\\[\n \\| (\\phi_{j+1}-\\phi_j)S^t_{\\le 2j} (S_k f)(t,0)\\|_{L^2} \n\\lesssim 2^j 2^{\\frac{n}{q_2'}(k-j)} \\|S_k f\\|_{L^{p_2'}_tL^{q_2'}_x}\n\\]\nrespectively \n\\[\n\\| x (\\phi_{j+1}-\\phi_j)S^t_{\\le 2j} (S_k f)(t,0)\\|_{L^2}\n\\lesssim 2^{\\frac{n}{q_2'}(k-j)} \\|S_k f\\|_{L^{p_2'}_tL^{q_2'}_x}\n\\]\nand similarly for the $j=0$ term. 
Then \\eqref{fdsa} is obtained by\nsummation using the off-diagonal decay and Littlewood-Paley theory.\n\\end{proof}\n\nTheorems~\\ref{prop.Tataru.parametrix.nr}, \\ref{prop.Tataru.xtolp.nr}\nwill allow us to derive Theorems \\ref{main.est.theorem},\n\\ref{main.theoremnt}, \\ref{corr.nontrap.Strichartz} from\nTheorems~\\ref{main.ls.theorem}, \\ref{main.ls.theoremnt},\n\\ref{theorem.PcSmoothing}. Similarly,\nTheorems~\\ref{prop.Tataru.parametrix}, \\ref{prop.Tataru.xtolp}\nwill allow us to derive Theorems \\ref{main.est.theorem.res},\n\\ref{main.theoremnt.res}, \\ref{corr.nontrap.Strichartz.res} from\nTheorems~\\ref{main.ls.theorem.res}, \\ref{main.ls.theoremnt.res},\n\\ref{theorem.PcSmoothing.res}.\n\n\\subsection{Proof of Theorems~\\ref{main.theoremnt},\n \\ref{corr.nontrap.Strichartz},~\\ref{main.theoremnt.res},\n \\ref{corr.nontrap.Strichartz.res}}\n\nThe four proofs are almost identical, so we discuss only the first\ntheorem. Suppose the function $u$ solves\n\\[\nPu = f +g, \\qquad f \\in {{\\tilde{X}}}', \\quad g \\in L^{p'_2}_t L^{q'_2}_x\n\\]\nwith initial data \n\\[\nu(0) = u_0.\n\\]\nWe let $K$ be the parametrix of\nTheorem~\\ref{prop.Tataru.parametrix.nr} and denote\n\\[\nv = u - K g.\n\\]\nThen\n\\[\nPv = f + g -PKg, \\qquad v(0) = u(0) - Kg(0).\n\\]\nUsing the bounds \\eqref{bc}, \\eqref{kft}, and \\eqref{lperror2bt}, we obtain\n\\[\n\\| v(0)\\|_{L^2} + \\| Pv\\|_{{{\\tilde{X}}}'} \\lesssim \\|u(0)\\|_{L^2} + \n\\| f\\|_{{{\\tilde{X}}}'} + \\|g\\|_{L^{p'_2}_t L^{q'_2}_x}.\n\\]\nThen Theorem~\\ref{main.ls.theoremnt} gives\n\\[\n\\| v\\|_{L^\\infty_t L^2_x \\cap {{\\tilde{X}}}} + \\| Pv\\|_{{{\\tilde{X}}}'} \\lesssim \\|u(0)\\|_{L^2} + \n\\| f\\|_{{{\\tilde{X}}}'} + \\|g\\|_{L^{p'_2}_t L^{q'_2}_x} + \\|v\\|_{L^2_{t,x}(A_{<2R})}.\n\\]\nHence by \\eqref{bc} and Theorem~\\ref{prop.Tataru.xtolp.nr} it follows that\n\\[\n\\| v\\|_{L^\\infty_t L^2_x \\cap {{\\tilde{X}}}}+ \\|v\\|_{L^{p_1}_t L^{p_2}_x}\n \\lesssim \\|u(0)\\|_{L^2} + \n\\| f\\|_{{{\\tilde{X}}}'} + \\|g\\|_{L^{p'_2}_t L^{q'_2}_x} + \\|v\\|_{L^2_{t,x}(A_{<2R})}.\n\\]\nUsing again \\eqref{lperror2bt} we return to $u$ to obtain\n\\[\n\\| u\\|_{L^\\infty_t L^2_x \\cap {{\\tilde{X}}}}+ \\|u\\|_{L^{p_1}_t L^{p_2}_x}\n \\lesssim \\|u(0)\\|_{L^2} + \n\\| f\\|_{{{\\tilde{X}}}'} + \\|g\\|_{L^{p'_2}_t L^{q'_2}_x} + \\|u\\|_{L^2_{t,x}(A_{<2R})}\n\\]\nconcluding the proof of the Theorem.\n\n\n\n\\subsection{Proof of Theorem~\\ref{main.est.theorem}}\n Suppose the function $u$ solves\n\\[\nPu = f +\\rho g, \\qquad f \\in {{\\tilde{X}}}'_e, \\quad g \\in L^{p'_2}_t L^{q'_2}_x\n\\]\nwith initial data \n\\[\nu(0) = u_0.\n\\]\nWe consider two\nadditional spherically symmetric cutoff functions $\\rho_1$ and\n$\\rho_2$ supported in $\\{|x| > 2^M\\}$ so that $\\rho_2 =1$ in the\nsupport of $\\rho_1$ and $\\rho_1 =1$ in the\nsupport of $\\rho$.\n\n Let $K$ be the parametrix of\nTheorem~\\ref{prop.Tataru.parametrix.nr} and denote\n\\[\nv = u - \\rho_1 K \\rho g.\n\\]\nThen\n\\[\nPv = f +\\rho_2(\\rho_1( \\rho g - P K \\rho g) - [P,\\rho_1]K\\rho g)\n, \\qquad v(0) = u(0) - \\rho_1 K \\rho g(0).\n\\]\nUsing the bounds \\eqref{bc}, \\eqref{kft}, and \\eqref{lperror2bt}, we obtain\n\\[\n\\| v(0)\\|_{L^2} + \\| Pv\\|_{{{\\tilde{X}}}'_{e2}} \\lesssim \\|u(0)\\|_{L^2} + \n\\| f\\|_{{{\\tilde{X}}}'_e} + \\|g\\|_{L^{p'_2}_t L^{q'_2}_x}\n\\]\nwhere ${{\\tilde{X}}}'_{e2}$ is similar to ${{\\tilde{X}}}'_{e}$ but with $\\rho$ replaced\nby $\\rho_2$. 
Then we can apply Theorem~\\ref{main.ls.theorem}\nto $v$ to obtain\n\\[\n\\| v\\|_{L^\\infty_t L^2_x \\cap {{\\tilde{X}}}_{e}}\n + \\| Pv\\|_{{{\\tilde{X}}}'_{e2}} \\lesssim \\|u(0)\\|_{L^2} + \n\\| f\\|_{{{\\tilde{X}}}'_e} + \\|g\\|_{L^{p'_2}_t L^{q'_2}_x}+ \\|v\\|_{L^2_{t,x}(|x|\\le 2^{M+1})}.\n\\]\nWe truncate $v$ with $\\rho$ and compute\n\\[\nP \\rho v = [P,\\rho]v + \\rho P v .\n\\]\nThen we can estimate\n\\[\n\\| v\\|_{L^\\infty_t L^2_x} + \\|\\rho v\\|_{{{\\tilde{X}}}}\n + \\| P(\\rho v)\\|_{{{\\tilde{X}}}'} \\lesssim \\|u(0)\\|_{L^2} + \n\\| f\\|_{{{\\tilde{X}}}'_e} + \\|g\\|_{L^{p'_2}_t L^{q'_2}_x} + \\|v\\|_{L^2_{t,x}(|x|\\le 2^{M+1})}.\n\\]\nHence by \\eqref{bc} and Theorem~\\ref{prop.Tataru.xtolp.nr} applied to $\\rho v$,\nwe obtain\n\\[\n\\|v\\|_{L^\\infty_t L^2_x} + \\|\\rho v\\|_{{{\\tilde{X}}} \\cap L^{p_1}_t L^{q_1}_x}\n \\lesssim \\|u(0)\\|_{L^2} + \n\\| f\\|_{{{\\tilde{X}}}'_e} + \\|g\\|_{L^{p'_2}_t L^{q'_2}_x}+ \\|v\\|_{L^2_{t,x}(|x|\\le 2^{M+1})}.\n\\]\nFinally, we use \\eqref{kft} to return to $u$ and obtain\n\\[\n\\|u\\|_{L^\\infty_t L^2_x} + \\|\\rho u\\|_{{{\\tilde{X}}} \\cap L^{p_1}_t L^{q_1}_x}\n\\lesssim \\|u(0)\\|_{L^2} + \n\\| f\\|_{{{\\tilde{X}}}'_e} + \\|g\\|_{L^{p'_2}_t L^{q'_2}_x}+ \\|u\\|_{L^2_{t,x}(|x|\\le 2^{M+1})},\n\\]\nconcluding the proof of the Theorem.\n\n\n\n\\subsection{Proof of Theorem~\\ref{main.est.theorem.res}}\n\nThe argument is similar to the one above. The chief difference \nis that we can no longer use the truncations by $\\rho$, $\\rho_1$,\n$\\rho_2$ and instead we use the modified truncation operators\nsuch as $T_\\rho$.\n\n Suppose the function $u$ solves\n\\[\nPu = f +\\rho g, \\qquad f \\in {X}'_e, \\quad g \\in L^{p'_2}_t L^{q'_2}_x\n\\]\nwith initial data \n\\[\nu(0) = u_0.\n\\]\nWe let $K$ be the parametrix of\nTheorem~\\ref{prop.Tataru.parametrix} and denote\n\\[\nv = u - T_{\\rho_1} K \\rho g\n\\]\nThen we can write\n\\[\nPv = f +T_{\\rho_2}(T_{\\rho_1}( \\rho g - P K \\rho g) - \n[P,T_{\\rho_1}]K\\rho g)\n, \\qquad v(0) = u(0) - T_{\\rho_1}K\\rho g(0).\n\\]\nHere we compute the commutator\n\\[\n[A,T_{\\rho_1}] w = A \\rho_1 (w-w_{\\rho_1}) - \\rho_1 A (w-w_{\\rho_1}) -\n(1-\\rho)(Aw)_{\\rho_1}\n= [A,\\rho_1] (w -w_{\\rho_1}) - (1-\\rho)(Aw)_{\\rho_1}.\n\\]\nAlso we have \n\\[\n(Aw)_{\\rho_1} = c_\\rho \\int (1-\\rho_1)A(w-w_{\\rho_1}) dx\n = -c_\\rho \\int (w-w_{\\rho_1}) A\\rho_1 dx.\n\\]\nThen using the bounds \\eqref{bnoc}, \\eqref{kf}, and \\eqref{lperror2b}, we obtain\n\\[\n\\| v(0)\\|_{L^2} + \\| Pv\\|_{{X}'_{e2}} \\lesssim \\|u(0)\\|_{L^2} + \n\\| f\\|_{{X}'_e} + \\|g\\|_{L^{p'_2}_t L^{q'_2}_x}.\n\\]\nBy Theorem~\\ref{main.ls.theorem} for $v$ we get\n\\[\n\\| v\\|_{L^\\infty_t L^2_x \\cap {X}_{e}}\n + \\| Pv\\|_{{X}'_{e2}} \\lesssim \\|u(0)\\|_{L^2} + \n\\| f\\|_{{X}'_e} + \\|g\\|_{L^{p'_2}_t L^{q'_2}_x}+ \\|(1-\\rho)(v-v_\\rho)\\|_{L^2_{t,x}}.\n\\]\nWe truncate $v$ with $T_\\rho$ and compute as above the \ncommutator $[P,T_\\rho]$.\nThen we estimate\n\\[\n\\| v\\|_{L^\\infty_t L^2_x} + \\|T_\\rho v\\|_{{X}}\n + \\| P(T_\\rho v)\\|_{{X}'} \\lesssim \\|u(0)\\|_{L^2} + \n\\| f\\|_{{X}'_e} + \\|g\\|_{L^{p'_2}_t L^{q'_2}_x} + \\|(1-\\rho)(v-v_\\rho)\\|_{L^2_{t,x}}.\n\\]\nHence by \\eqref{bnoc} and Theorem~\\ref{prop.Tataru.xtolp} applied to $T_\\rho v$,\nwe obtain\n\\[\n\\|v\\|_{L^\\infty_t L^2_x} + \\|T_\\rho v\\|_{{X} \\cap L^{p_1}_t L^{q_1}_x}\n \\lesssim \\|u(0)\\|_{L^2} + \n\\| f\\|_{{X}'_e} + \\|g\\|_{L^{p'_2}_t L^{q'_2}_x}+ \\|(1-\\rho)(v-v_\\rho)\\|_{L^2_{t,x}}.\n\\]\nFinally, we use \\eqref{kf} to return to $u$ 
and obtain\n\\[\n\\|u\\|_{L^\\infty_t L^2_x} + \\|T_\\rho u\\|_{{X} \\cap L^{p_1}_t L^{q_1}_x}\n\\lesssim \\|u(0)\\|_{L^2} + \n\\| f\\|_{{X}'_e} + \\|g\\|_{L^{p'_2}_t L^{q'_2}_x}+ \\|(1-\\rho)(u-u_\\rho)\\|_{L^2_{t,x}},\n\\]\nconcluding the proof of the Theorem.\n\n\n\\begin{comment}\n\\newsection{Remarks}\n\nThis is a temporary section which contains some more detailed remarks and proofs. It will be removed\nfor the final version.\n\\subsection{Remarks from Section \\ref{not_para}:}\n\n\\begin{remark}\n\\label{remark.1}\nHere we shall show\n$$S_{8$.\n\nIndeed, the left side is bounded by\n\\begin{equation}\\label{remark.1.1}\n\\sum_{j\\ge 0} \\int_{y\\in D_j} \\frac{2^{kn\/2}}{(1+2^{k\/2}|x-y|)^N} (a^{ij}-I)(y)\\:dy,\\end{equation}\nfor any $N\\ge 0$. We examine the cases $0\\le jm+3$ separately.\n\\begin{itemize}\n\\item For $0\\le jm+3} 2^{-(j-m)(N-n)} (e^{2^{-10}})^{j-m}\\lesssim \\kappa_m.$$\n\\end{itemize}\n\\item Now, we look at the case $l+m<0$.\nHere, we have two cases to examine.\n\\begin{itemize}\n\\item For $0\\le j\\le -l+2$, we have that the above is\n$$\\lesssim \\sum_{0\\le j\\le -l+2} 2^{(l+j)n}\\kappa_j \\lesssim \\kappa_{-l} \\sum_{0\\le j\\le -l+2} 2^{(l+j)n}\n(e^{2^{-10}})^{|l+j|}\\lesssim \\kappa_{-l}.$$\n\\item For $j>-l+2$, we have that the above is\n$$\\lesssim \\sum_{j>-l+2} 2^{-N(l+j)}\\kappa_j \\lesssim \\kappa_{-l} \\sum_{j>-l+2} 2^{-N(l+j)}(e^{2^{-10}})^{l+j}\n\\lesssim \\kappa_{-l}.$$\n\\end{itemize}\n\\end{itemize}\n\\qed\n\\end{remark}\n\n\n\n\\end{comment}\n\n\n\n\n\n\n\n\\bigskip\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe occurrence of CO$_2$ within magmas and volcanic gases indicates a\nsignificant carbon presence within the Earth's lower mantle\n\\cite{Marty_1987,Halliday_2013}. Carbon has a low solubility in\nmantle silicates and the majority of the oxidized carbon in the\nEarth's mantle is believed to exist in the form of carbonates.\nCalcium and magnesium carbonate (CaCO$_3$ and MgCO$_3$) are the main\nsources and sinks of atmospheric CO$_2$ within the Earth's mantle.\nCarbonates are conveyed into the deep Earth by subduction, and carbon\nis recycled to the surface via volcanic processes in the form of\nCO$_2$-containing fluids and solids, and diamonds\n\\cite{Ghosh_2009,Frezzotti_2011}. However, the details of carbon\nstorage within the Earth's interior are unclear. The Deep Carbon\nObservatory \\cite{DCO} has been set up to investigate carbon within\nthe Earth's deep interior. CaCO$_3$ and MgCO$_3$ play fundamental\nroles in the global carbon cycle and influence the climate of our\nplanet \\cite{Dasgupta_2010,Hazen_2013_DCO}. Knowledge of the\nstructures, energetics and other properties of CaCO$_3$ and MgCO$_3$\nat high pressures is therefore important in understanding the Earth's\nmantle, and especially the carbon cycle.\n\nThe low-pressure calcite form \\cite{Bragg_structure_of_calcite_1914}\nof CaCO$_3$ is one of the most abundant minerals on the Earth's\nsurface and is the main constituent of metamorphic marbles. Several\nmetastable calcite-like phases have been observed\n\\cite{Bridgman_1939,Suito_2001,Merlini_2012}, and a calcite-related\nphase has been reported at around 25 GPa\n\\cite{Catalli_2005,Merlini_2012}. At pressures of about 2 GPa calcite\ntransforms to the aragonite structure\n\\cite{Bragg_structure_of_aragonite_1924} of $Pnma$ symmetry. 
At about\n40 GPa aragonite transforms into the ``post aragonite'' ($Pmmn$)\nstructure of CaCO$_3$, which is stable up to at least 86 GPa\n\\cite{Ono_2005,Oganov_2006}. The low pressure magnesite phase of\nMgCO$_3$ has the same structure as calcite. Experiments indicate that\nmagnesite is stable up to 80 GPa \\cite{Fiquet_2002}, and a phase\ntransition occurs above 100 GPa to an unknown magnesite II structure\n\\cite{Isshiki_2004_magnesite_and_high_pressure_form,Boulard_2011}.\n\n\\section{Structure searches}\n\nDensity functional theory (DFT) calculations for high pressure phases\nof CaCO$_3$ and MgCO$_3$ were performed by Oganov \\textit{et al.}\\\nusing an evolutionary structure searching algorithm\n\\cite{Oganov_2006,Oganov_2008}. These calculations predicted a\ntransition from the calcite to aragonite to ``post aragonite''\nstructures of CaCO$_3$, followed by a transition to a structure of\n$C222_1$ symmetry at pressures over 100 GPa. Similar calculations for\nMgCO$_3$ predicted transitions from magnesite to a structure of $C2\/m$\nsymmetry at 82 GPa, followed by a transition to a structure of $P2_1$\nsymmetry at 138 GPa, and a phase of $Pna2_1$ symmetry at 160 GPa\n\\cite{Oganov_2008}.\n\nCalculations using the \\textit{ab initio} random structure searching\n(AIRSS) technique \\cite{Airss_review} have led to the discovery of\nstructures that have subsequently been verified by experiment, for\nexample, in silane \\cite{pickard_silane}, aluminium hydride\n\\cite{pickard_aluminum_hydride}, ammonia monohydrate\n\\cite{fortes_ammonia_monohydrate_II} and ammonia dihydrate\n\\cite{griffiths_ammonia_dihydrate_II}. In the basic AIRSS approach a\ncell volume and shape is selected at random from within reasonable\nranges, the atoms are added at random positions, and the system is\nrelaxed until the forces on the atoms are negligible and the pressure\ntakes the required value. This procedure is repeated many times,\nleading to a reasonably unbiased scheme which allows a significant\nportion of the ``structure space'' to be investigated, although the\nsampling may be rather sparse. This approach is often successful for\nsmall systems, but it involves sampling a large portion of the\nhigh-energy structure space which is not normally of interest. We\ntherefore reduce the size of the structure space investigated by\nconstraining the searches.\n\nWe first perform searches in small cells, constraining the initial\nstructures so that all of the atoms are at least 1 \\AA\\ apart. The\nlow-enthalpy structures obtained from these calculations give us\ninformation about the favorable bonding configurations and likely\nnearest neighbor distances between the different atomic types. At low\npressures we find that the low-enthalpy structures contain\nwell-defined triangular CO$_3$ or ring C$_3$O$_9$ units, and therefore\nwe place these units and Ca or Mg atoms randomly within the cells of\nrandom shapes. We ensure that the atoms are not too close together by\nconstraining the initial values of the minimum distances between atoms\nfor each of the six possible pairs of atomic species. The six minimum\ndistances are obtained from low-enthalpy structures found in the\nsmall-cell searches. To construct the initial structures at higher\npressures we use minimum distances from low-enthalpy small-cell\nstructures to prepare new larger structures that approximately satisfy\nthe minimum distance constraints. 
This approach helps to space out\nthe different species appropriately, while retaining a high degree of\nrandomness. We perform searches at both low and high pressures, using\nstructures which are constrained to have a certain symmetry which is\nenforced during the relaxation, but are otherwise random\n\\cite{Airss_review}. This approach is useful because low energy\nstructures often possess symmetry \\cite{Pauling_1929,Wales_1998},\nalthough symmetry constraints break up the allowed structure space\ninto disconnected regions and can prevent some structures from\nrelaxing to lower energy ones \\cite{Airss_review}. We consider\nstructures containing up to eight formula units (f.u.) for CaCO$_3$\nand twelve f.u.\\ for MgCO$_3$.\n\nOur first-principles DFT calculations are performed using the\n\\textsc{Castep} plane-wave basis set pseudopotential code\n\\cite{ClarkSPHPRP05}. We use the Perdew-Burke-Ernzerhof (PBE)\ngeneralized gradient approximation (GGA) density functional\n\\cite{Perdew_1996_PBE}, default \\textsc{Castep} ultrasoft\npseudopotentials \\cite{Vanderbilt90}, and a plane-wave basis set\nenergy cutoff of 440 eV. We use a Brillouin zone sampling grid of\nspacing $2\\pi\\times$0.1~\\AA$^{-1}$ for the searches, and a finer\nspacing of $2\\pi\\times$0.05~\\AA$^{-1}$ for the final results reported\nin this paper.\n\n\\section{C\\lowercase{a}CO$_3$, pressure $\\leq$ 50 GPa}\n\nCalculated enthalpy-pressure curves for CaCO$_3$ phases are shown in\nFig.\\ \\ref{fig:enthalpy_CaCO3}, relative to the enthalpy of the ``post\naragonite'' phase. The transition from aragonite to ``post\naragonite'' becomes energetically favorable at about 42 GPa, in\nagreement with previous DFT results\n\\cite{Oganov_2006,Arapan_2007,Oganov_2008,Arapan_2010} and experiment\n\\cite{Ono_2005}. We performed calculations for the CaCO$_3$-VI\nstructure reported in Ref.\\ \\onlinecite{Merlini_2012}, which was\nsuggested as a possible high pressure phase of CaCO$_3$. However, we\nfound it to be very high in enthalpy, with a strongly anisotropic\nstress and large forces on the atoms. Relaxation of the CaCO$_3$-VI\nstructure at 40 GPa led to a reasonably stable structure with an\nenthalpy close to that of aragonite, but the relaxed structure does\nnot have a region of stability on our phase diagram (Fig.\\\n\\ref{fig:enthalpy_CaCO3}). We also found a structure of $Pnma$\nsymmetry (``{CaCO$_3$-$Pnma$-$h$}'', where $h$ denotes ``high\npressure'') that is predicted to be more stable than aragonite above\n40 GPa, and more stable than ``post aragonite'' below 47 GPa.\nHowever, {CaCO$_3$-$Pnma$-$h$} does not have a region of thermodynamic\nstability on our phase diagram because we find a previously unknown\nstructure of $P2_1\/c$ symmetry (``{CaCO$_3$-$P2_1\/c$-$l$}'', where $l$\ndenotes ``low pressure'') which is calculated to be the most stable\nphase in the pressure range 32--48 GPa, see Fig.\\\n\\ref{fig:enthalpy_CaCO3}.\n\nAt 42 GPa {CaCO$_3$-$P2_1\/c$-$l$} is calculated to be about 0.05 eV\nper f.u.\\ more stable than aragonite and ``post aragonite'' and,\nbecause these $sp^2$ bonded structures are similar, we expect that DFT\ncalculations should give rather accurate enthalpy differences between\nthem. However, our {CaCO$_3$-$P2_1\/c$-$l$} and {CaCO$_3$-$Pnma$-$h$}\nstructures do not provide as good a fit to the experimental X-ray\ndiffraction data as the ``post aragonite'' phase \\cite{Oganov_2006}.\nIt is possible that large energy barriers hinder formation of the\n{CaCO$_3$-$P2_1\/c$-$l$} structure. 
Another possibility is that the\nlaser-heated sample melts and the least stable polymorph crystallizes\nfrom the melt first, in analogy to ``Ostwald's rule''\n\\cite{Ostwald_1897}.\nIn any case, the conditions within the Earth's mantle are not the same\nas in diamond anvil cell experiments, and the timescales associated\nwith geological processes are enormously longer than those for\nlaboratory experiments.\n\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.4\\textwidth,clip]{Figure_1_Enthalpy.eps}\n \\caption{(Color online) Enthalpies per f.u.\\ of CaCO$_3$ phases\n relative to ``post aragonite'', with the number of f.u.\\ per\n primitive unit cell given within square brackets. The enthalpies\n of phases known prior to the current study are shown as dashed\n lines, while those found in the current study are shown as solid\n lines. The dotted red line shows the collapse of the\n {CaCO$_3$-$P2_1\/c$-$l$} structure into the more stable\n {CaCO$_3$-$P2_1\/c$-$h$} structure at 80--90 GPa.}\n \\label{fig:enthalpy_CaCO3}\n\\end{figure}\n\n\n\\section{C\\lowercase{a}CO$_3$, pressure $>$ 50 GPa}\n\nAt higher pressures we find another CaCO$_3$ structure of $P2_1\/c$\nsymmetry (``{CaCO$_3$-$P2_1\/c$-$h$}'') to be stable from 67 GPa to\nwell above 100 GPa. Our {CaCO$_3$-$P2_1\/c$-$h$} structure is about\n0.18 eV per f.u.\\ more stable than the $C222_1$ structure found by\nOganov \\textit{et al.}\\ \\cite{Oganov_2006}, see Fig.\\\n\\ref{fig:enthalpy_CaCO3}, and $C222_1$ does not have a region of\nthermodynamic stability. We also find that at about 80--90 GPa\n{CaCO$_3$-$P2_1\/c$-$l$} transforms into the more stable\n{CaCO$_3$-$P2_1\/c$-$h$} structure without any apparent energy barrier\n(dotted red line in Fig.\\ \\ref{fig:enthalpy_CaCO3}). Our calculations\nlead to the prediction of a new and more stable polymorph of CaCO$_3$\nat pressures $>67$ GPa.\n\n\n\\section{M\\lowercase{g}CO$_3$}\n\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.4\\textwidth,clip]{Figure_2_Enthalpy.eps}\n \\caption{(Color online) Enthalpies per f.u.\\ of MgCO$_3$ phases\n relative to the $C2\/m$ phase, with the number of f.u.\\ per\n primitive unit cell given within square brackets. Previously\n known phases are shown as dashed lines, and those found in the\n current study are shown as solid lines.}\n \\label{fig:Enthalpies_MgCO3_higher_pressure}\n\\end{figure}\n\n\nCalculated enthalpy-pressure curves for MgCO$_3$ phases in the\npressure range 50--200 GPa are shown in Fig.\\\n\\ref{fig:Enthalpies_MgCO3_higher_pressure}, relative to the $C2\/m$\nphase. We find a previously unreported structure of $P\\bar{1}$\nsymmetry to be the most stable in the range 85--101 GPa. We also find\na phase of $P2_12_12_1$ symmetry that is marginally the most stable at\npressures around 144 GPa, see Fig.\\\n\\ref{fig:Enthalpies_MgCO3_higher_pressure}.\n\n\n\\section{Structures and bonding}\n\nThe carbon atoms in the calcite, aragonite, ``post aragonite'', and\nour {CaCO$_3$-$P2_1\/c$-$l$} and {CaCO$_3$-$Pnma$-$h$} structures\ncontain threefold coordinated carbon atoms, as does the magnesite\nphase of MgCO$_3$. These structures contain triangular CO$_3^{2-}$\nions with $sp^2$ bonding. In aragonite and ``post aragonite'' the\nCO$_3^{2-}$ ions are coplanar, but in our {CaCO$_3$-$Pnma$-$h$}\nstructure they are somewhat tilted, while in {CaCO$_3$-$P2_1\/c$-$l$}\nthey are tilted at approximately 90$^\\circ$ to one another, see Fig.\\\n\\ref{fig:structures_CaCO3_lower_pressure}. 
More details of the\nstructures are given in the Supplemental Material \\cite{Supplemental}.\n\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.475\\textwidth,clip]{Figure_3_P21c-low.eps}\\\\\n \\includegraphics[width=0.37\\textwidth,clip]{Figure_3_Pnma.eps}\\\\ \\vspace{0.375cm}\n \\includegraphics[width=0.3\\textwidth,clip]{Figure_3_Pmmn.eps}\n \\caption{(Color online) The {CaCO$_3$-$P2_1\/c$-$l$} (top),\n {CaCO$_3$-$Pnma$-$h$} (middle), and ``post aragonite'' (bottom)\n structures of CaCO$_3$ at 40 GPa. The Ca atoms are in green, the\n carbon in grey, and the oxygen in red.}\n \\label{fig:structures_CaCO3_lower_pressure}\n\\end{figure}\n\n\nThe high-pressure {CaCO$_3$-$P2_1\/c$-$h$} and $C222_1$ structures\ncontain fourfold coordinated carbon atoms and are of the pyroxene\ntype. {CaCO$_3$-$P2_1\/c$-$h$} and $C222_1$ possess very similar\ncalcium lattices but the packing of the pyroxene chains is different,\nas can be seen in Fig.\\ \\ref{fig:structures_CaCO3_higher_pressure}.\nIn $C222_1$ each of the chains is orientated in the same manner, but\n{CaCO$_3$-$P2_1\/c$-$h$} alternate chains run in the reverse direction,\nsee Fig.\\ \\ref{fig:structures_CaCO3_higher_pressure}, and consequently\nthe unit cell of {CaCO$_3$-$P2_1\/c$-$h$} contains four f.u., whereas\n$C222_1$ contains two. When viewed along the axis of the chains, the\n{CaCO$_3$-$P2_1\/c$-$h$} and $C222_1$ structures appear almost\nidentical. {CaCO$_3$-$P2_1\/c$-$h$} and $C222_1$ have very similar\nvolumes at high pressures, with $C222_1$ being slightly denser, which\nleads to almost parallel enthalpy-pressure relations, see Fig.\\\n\\ref{fig:enthalpy_CaCO3}. The lower enthalpy of\n{CaCO$_3$-$P2_1\/c$-$h$} must therefore arise from more favorable\nelectrostatic interactions between the pyroxene chains.\n\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.35\\textwidth,clip]{Figure_4_C2221.eps}\n \\includegraphics[width=0.35\\textwidth,clip]{Figure_4_P21c.eps}\n \\caption{(Color online) The $C222_1$ (top) and {CaCO$_3$-$P2_1\/c$-$h$}\n pyroxene-type (bottom) structures of CaCO$_3$ at 60 GPa. The Ca\n atoms are in green, carbon in grey, and oxygen in red.}\n \\label{fig:structures_CaCO3_higher_pressure}\n\\end{figure}\n\n\n\\subsection{High-pressure X-ray data for C\\lowercase{a}CO$_3$}\n\nOno \\textit{et al.}\\ \\cite{Ono_2007} performed laser-heated diamond\nanvil cell experiments on CaCO$_3$ at 182 GPa. X-ray diffraction data\nfor the $C222_1$ \\cite{Oganov_2006} and {CaCO$_3$-$P2_1\/c$-$h$}\nstructures are compared in Fig.\\ \\ref{fig:XRD_CaCO3} with the\nexperimental data from Fig.\\ 1 of Ref.\\ \\onlinecite{Ono_2007}. Note\nthe appearance of three peaks marked with stars in the experimental\ndata that arise from the platinum used to enhance heat absorption\nduring the laser heating and as a pressure calibrant. The\nexperimental data is not of very high resolution. The diffraction\npatterns of the theoretical $C222_1$ and {CaCO$_3$-$P2_1\/c$-$h$}\nstructures share many common features. There are also clear\nsimilarities between the theoretical and experimental X-ray data, but\nthe experimental data is of insufficient resolution to allow the\nstructure to be determined unambiguously. 
We suggest that our\nCaCO$_3$-$P2_1\/c$-$h$ structure is the best available candidate for\nthe observed high pressure phase because it has a much lower enthalpy\nthan $C222_1$.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.4\\textwidth,clip]{Figure_5_Xray.eps}\n \\caption{(Color online) X-ray diffraction patterns of the $C222_1$\n \\cite{Oganov_2006} and {CaCO$_3$-$P2_1\/c$-$h$} phases of CaCO$_3$,\n compared with experimental data from Fig.\\ 1(b) of Ref.\\\n \\onlinecite{Ono_2007}. Data at 182 GPa are reported, with an\n incident wavelength of 0.415 \\AA. The stars indicate that the peak\n immediately to the right arises from platinum.}\n \\label{fig:XRD_CaCO3}\n\\end{figure}\n\n\n\\section{Chemical reactions in Earth's mantle}\n\nWe have investigated possible chemical reactions involving the mantle\nmaterials CaCO$_3$, MgCO$_3$, CO$_2$, MgSiO$_3$, CaSiO$_3$, SiO$_2$,\nCaO and MgO, following the approach of Oganov \\textit{et al.}\\\n\\cite{Oganov_2008}. The most stable structures of each compound at\nthe relevant pressures are used, as provided by DFT studies. We use\nthe $Pa\\bar{3}$, $P4_2\/mnm$, and $I\\bar{4}2d$ structures of CO$_2$\n\\cite{Ma_CO2_2013}, the stishovite, CaCl$_2$ and pyrite structures of\nSiO$_2$ \\cite{Karki_silica_1997}, the rocksalt structure of MgO, the\northorhombic structure of perovskite CaSiO$_3$ and the perovskite and\npost-perovskite structures of MgSiO$_3$ \\cite{Murakami_2004,\n Oganov_2004, Tsuchiya_2004}.\n\nDecomposition of CaCO$_3$ and MgCO$_3$ into the alkaline earth oxides\nplus CO$_2$ is found to be unfavorable. Under conditions of excess\nSiO$_2$, the reaction\n\\begin{eqnarray}\n{\\rm MgCO}_3 + {\\rm SiO}_2 & \\rightarrow & {\\rm MgSiO}_3 + {\\rm CO}_2\n\\end{eqnarray}\nis found to be energetically unfavorable up to 138 GPa, which is just\nabove the pressure at the mantle-core boundary, see Fig.\\\n\\ref{fig:Enthalpy_MgCO3+SiO2-MgSiO3+CO2}. We find that the reaction\n\\begin{eqnarray}\n{\\rm CaCO}_3 + {\\rm SiO}_2 & \\rightarrow & {\\rm CaSiO}_3 + {\\rm CO}_2\n\\end{eqnarray}\ndoes not occur below 200 GPa, see Fig.\\\n\\ref{fig:Enthalpy_CaSiO3+CO2-CaCO3+SiO2}, which is much higher than\nthe value of 135 GPa reported in Ref.\\ \\onlinecite{Oganov_2008}. We\nconclude that both MgCO$_3$ and CaCO$_3$ are stable within the Earth's\nmantle under conditions of excess SiO$_2$. These results suggest that\nfree CO$_2$ does not occur as an equilibrium phase within the Earth's\nmantle.\n\nMgCO$_3$ has generally been believed to be the dominant carbonate\nthroughout the Earth's mantle. This assumption can be tested when\nexcess MgO is present by determining the relative stability of\nCaCO$_3$+MgO and MgCO$_3$+CaO. We find that CaCO$_3$+MgO is the more\nstable up to pressures of about 200 GPa, so that CaCO$_3$ is the\nstable carbonate under these conditions. In the case of excess\nMgSiO$_3$ we consider the reaction\n\\begin{eqnarray} \n{\\rm CaCO}_3 + {\\rm MgSiO}_3 & \\rightarrow & {\\rm CaSiO}_3 + {\\rm MgCO}_3,\n\\end{eqnarray}\nfinding that CaCO$_3$ is more stable than MgCO$_3$ from 100 GPa up to\npressures well above those of 136 GPa found at the mantle-core\nboundary, see Fig.\\ \\ref{fig:Enthalpy_CaCO3+MgSiO3-CaSiO3+MgCO3}.\n\n\n\\begin{figure}[h!]\n \\centering\n \\includegraphics[width=0.4\\textwidth,clip]{Figure6-Mg-CO2.eps}\n \\caption{(Color online). 
The relative stabilities per f.u.\\ as a\n function of pressure of MgCO$_3$+SiO$_2$ and MgSiO$_3$+CO$_2$.\n The vertical gray line indicates the pressure at the base of the\n mantle (136 GPa). In this and the following figures, the kinks\n arise from phase transitions. }\n \\label{fig:Enthalpy_MgCO3+SiO2-MgSiO3+CO2}\n\\end{figure}\n\n\n\\begin{figure}[h!]\n \\centering\n \\includegraphics[width=0.4\\textwidth,clip]{Figure7-Ca-CO2.eps}\n \\caption{(Color online). The relative stabilities per f.u.\\ as a\n function of pressure of CaSiO$_3$+CO$_2$ and CaCO$_3$+SiO$_2$.\n The vertical gray line indicates the pressure at the base of the\n mantle (136 GPa). }\n \\label{fig:Enthalpy_CaSiO3+CO2-CaCO3+SiO2}\n\\end{figure}\n\n\n\\begin{figure}[h!]\n \\centering\n \\includegraphics[width=0.45\\textwidth,clip]{Figure8-CaCO3vMgCO3.eps}\n \\caption{(Color online) Enthalpy per f.u.\\ of CaCO$_3$+MgSiO$_3$\n compared with that of CaSiO$_3$+MgCO$_3$. Below 100 GPa we find\n that CaSiO$_3$+MgCO$_3$ is the most stable, while above 100 GPa\n CaCO$_3$+MgSiO$_3$ is the most stable. }\n \\label{fig:Enthalpy_CaCO3+MgSiO3-CaSiO3+MgCO3}\n\\end{figure}\n\n\n\\section{Conclusions}\n\nIn conclusion, we have searched for structures of CaCO$_3$ and\nMgCO$_3$ at mantle pressures using AIRSS\n\\cite{Airss_review,pickard_silane}. We have found a\n{CaCO$_3$-$P2_1\/c$-$l$} structure with $sp^2$ bonded carbon atoms that\nis predicted to be stable within the range 32--48 GPa. We have also\nfound a high pressure {CaCO$_3$-$P2_1\/c$-$h$} structure with $sp^3$\nbonded carbon atoms that is about 0.18 eV per f.u.\\ more stable than\nthe $C222_1$ phase proposed by Oganov \\textit{et al.}\\\n\\cite{Oganov_2006}. Both the {CaCO$_3$-$P2_1\/c$-$h$} and $C222_1$\nstructures are compatible with the available X-ray diffraction data\n\\cite{Ono_2007}. However, {CaCO$_3$-$P2_1\/c$-$h$} is the most stable\nstructure from 67 GPa to pressures well above those encountered within\nthe Earth's lower mantle ($\\leq$ 136 GPa).\nOur AIRSS calculations suggest a previously unknown phase of MgCO$_3$\nof $P\\bar{1}$ symmetry that is predicted to be thermodynamically\nstable in the pressure range 85--101 GPa.\nOur results suggest that CO$_2$ is not a thermodynamically stable\ncompound under deep mantle conditions.\nUnder conditions of excess MgSiO$_3$ we find that CaCO$_3$ is more\nstable than MgCO$_3$ above 100 GPa. This result arises directly from\nour discovery of the highly stable {CaCO$_3$-$P2_1\/c$-$h$} phase. The\nresults of our study change our understanding of the carbon cycle in\nthe lower part of the mantle and may have important consequences for\ngeodynamics\n\\cite{Javoy_carbon_geodynamic_cycle,Marty_1987_geodynamics,Dobretsov_2012_geodynamics}\nand geochemistry\n\\cite{Schrag_2013_geochemistry,Deines_geochem_review}.\n\n\n\n\n\n\\begin{acknowledgments}\n\n We acknowledge financial support from the Engineering and Physical\n Sciences Research Council (EPSRC) of the United Kingdom.\n\n\\end{acknowledgments}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzznqsb b/data_all_eng_slimpj/shuffled/split2/finalzznqsb new file mode 100644 index 0000000000000000000000000000000000000000..2d57b6f19c7784f4f46fb2a795e4460828bc13a5 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzznqsb @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\nA \\emph{vertex coloring} of a graph is a labeling of the vertices so that no\ntwo adjacent vertices receive the same label. 
A $k$-\\emph{ranking} of a graph\nis a coloring of the vertex set with $k$ positive integers such that on every\npath connecting two vertices of the same color there is a vertex of larger\ncolor. The \\textit{rank number of a graph} is defined\nto be the smallest $k$ such that $G$ has a $k$-ranking.\n\nEarly studies involving the rank number of a graph were sparked by its\nnumerous applications including designs for very large scale integration\nlayout (VLSI), Cholesky factorizations of matrices, and the scheduling of\nmanufacturing systems \\cite{torre,leiserson,sen}. Bodlaender et al. proved\nthat given a bipartite graph $G$ and a positive integer $n$, deciding whether\na rank number of $G$ is less than $n$ is NP-complete \\cite{bodlaender}. The\nrank number of paths, cycles, split graphs, complete multipartite graphs,\npowers of paths and cycles, and some grid graphs are well known\n\\cite{alpert,bodlaender,bruoth,chang,ghoshal,novotny,ortiz}.\n\nIn this paper we investigate an extremal property of rankings that has not yet\nbeen explored. We consider the maximum number of edges that may be added to\n$G$ without changing the rank number. Since the maximum number of edges that\ncan be added to a graph without changing the rank number varies with each\nparticular ranking, we will focus on families where an optimal ranking has a\nspecific structure. Here we investigate the problem for $G=P_{2^{k-1}}$,\n$C_{2^{k}}$, $K_{m_{1},m_{2},\\dots,m_{t}}$, and the union of two copies of\n$K_{n}$ joined by a single edge.\n\nIn addition to determining the maximum number of edges that may be added to $G$\nwithout changing the rank number we provide an explicit characterization of which\nedges change the rank number when added to $G$, and which edges do not. That is,\ngiven a vertex $v_n$ in $n$th position in the graph, we provide an algorithm to\nadd a new edge with $v_n$ as one of its vertices to the graph $G$ without\nchanging its ranking. For this construction we use the binary representation of $n$\nto determine the position of the second vertex of the new edge. We also construct the\nmaximum number of edges, so called \\emph{good edges}, that can be added to the graph\nwithout changing its ranking. We enumerate the maximum number of good edges.\n\n\n\\section{Preliminaries}\n\nIn this section we review elementary properties and known results about rankings.\n\nA labeling $f:V(G)\\rightarrow\\{1,2,\\dots,k\\}$ is a $k$-\\textit{ranking} of a\ngraph $G$ if and only if $f(u)=f(v)$ implies that every $u-v$ path contains a\nvertex $w$ such that $f(w)>f(u)$. Following along the lines of the chromatic\nnumber, the \\textit{rank number of a graph} $\\chi_{r}(G)$ is defined to be the\nsmallest $k$ such that $G$ has a $k$-ranking. If $H$ is a subgraph of $G$,\nthen $\\chi_{r}(H)\\leq\\chi_{r}(G)$ (see \\cite{ghoshal}).\n\nWe use $P_{n}$ to represent the path with $n$ vertices. It is well\nknown that a ranking of $P_{n}$ with $V\\left( P_{n}\\right) =\\left\\{\nv_{1},v_{2},...,v_{n}\\right\\} $ and $\\chi_{r}(P_{n})$ labels can be\nconstructed by labeling $v_{i}$ with $\\alpha+1$ where $2^{\\alpha}$ is the\nlargest power of $2$ that divides $i$ \\ \\cite{bodlaender}. We will call this\nranking the \\textit{standard ranking of a path}.\n\nWe use $C_{2^{k}}$ to denote a cycle with $2^{k}$ vertices. A multipartite\ngraph with $t$ components is denoted by $K_{m_{1},m_{2},\\dots,m_{t}}$ where\nthe $i${th} component has $m_{i}$ vertices. 
The complete graph with $n$\nvertices is denoted by $K_{n}$.\n\n\nLet $\\Gamma$ and $H$ be graphs with $V(H)\\subseteq V(\\Gamma)$ and $E(H)\\cap\nE(\\Gamma)=\\emptyset$. We say that an edge $e\\in H$ is \\emph{good} for $\\Gamma$\nif $\\chi_{r}\\left( \\Gamma\\cup\\{e\\}\\right) =\\chi_{r}(\\Gamma)$, and $e$ is\n\\emph{forbidden} for $\\Gamma$ if $\\chi_{r}(\\Gamma\\cup\\{e\\})>\\chi_{r}(\\Gamma)$. We use\n$\\mu (G)$ to represent the cardinality of the maximum set of good edges for $G$.\n\nFor example, in Figure \\ref{figure1} we show a ranking of a graph $P_{2^{4}-1}\\cup H_{P}$\nwhere $H_{P}$ is the set of all good edges for $P_{2^{4}-1}$. The set of\nvertices of $P_{2^{4}-1}$ is $\\{v_{1},\\dots,v_{15}\\}$. We can see that $\\chi_{r}(P_{2^{4}-1}\\cup H_{P})=\\chi_{r}(P_{2^{4}-1})=4$\nand that $E(H_{P})$ is comprised of 20 good edges. That is, $\\mu (P_{2^{4}-1})=20$.\nTheorems \\ref{maintheorem} and \\ref{maintheoremcomponents} give necessary and sufficient conditions to determine\nwhether graphs $G=P_{2^{k}-1}\\cup H_{P}$ and $P_{2^{k}-1}$ have the same rank number.\n\n\n\n\n\\begin{figure} [htbp]\n\\begin{center}\n\\psfrag{v1}[c]{$v_1$} \\psfrag{v15}[c]{$v_{15}$} \\psfrag{1}[c]{$1$}\n\\psfrag{2}[c]{$2$} \\psfrag{3}[c]{$3$} \\psfrag{4}[c]{$4$}\n\\includegraphics[width=110mm]{path1.eps}\n\\end{center}\n\\caption{$P_{2^4-1} \\cup H_P$} \\label{figure1}\n\\end{figure}\n\nFigure \\ref{figure2} Part (a) shows the graph $G=C_{2^{4} }\\cup H$\nwhere $H$ is the set of all good edges for $C_{2^{4}}$. We can see that\n $\\chi_{r}(C_{2^{4}}\\cup H)=\\chi_{r}(C_{2^{4}})=5$ and that $E(H)$ is comprised\n of 33 good edges. That is, $\\mu (C_{2^{4}})=33$. Theorem \\ref{goodarcscyclecorollay} gives\nnecessary and sufficient conditions to determine whether the graphs\n$G=C_{2^{k}}\\cup H$ and $C_{2^{k}}$ have the same rank number.\n\n\\begin{figure} [htbp]\n\\begin{center}\n\\psfrag{v1}[c]{$v_1$} \\psfrag{v15}[c]{$v_{15}$}\n\\psfrag{v16}[c]{$v_{16}$} \\psfrag{1}[c]{$1$} \\psfrag{2}[c]{$2$}\n\\psfrag{3}[c]{$3$} \\psfrag{4}[c]{$4$} \\psfrag{5}[c]{$5$}\n\\includegraphics[width=40mm]{cycle5.eps} \\hspace{2cm}\n\\includegraphics[width=40mm]{cycle3.eps}\n\\end{center}\n\\caption{(a)\\ \\ $G:=C_{2^k}\\cup H$ \\hspace{2cm} (b)\\ \\\n$G':=\\left(C_{2^4} \\setminus \\{ v_{16}\\}\\right)\\cup H' $}\n\\label{figure2}\n\\end{figure}\n\n\n\n\n\\begin{lemma}\n[\\cite{bodlaender, bruoth}]\\label{rankingofapath} If $k\\geq1$, then\n\n\\begin{enumerate}\n\\item $P_{2^{k}-1}$ has a unique $k-$ranking and $\\chi_{r}(P_{2^{k}-1}) =k$.\n\n\\item $C_{2^{k}}$ has a unique $k-$ranking and $\\chi_{r}(C_{2^{k}}) =k+1$.\n\\end{enumerate}\n\\end{lemma}\n\n\\section{Enumeration of the Set of Good Edges for $P_{2^{k}-1}$}\n\nIn this section we give two ways to find the maximum set of edges that may be added to $G$\nwithout changing the rank number. We give an algorithm to construct a good edge for $G$.\nThe algorithm is based on the binary representation of $n$, the position of the vertex $v_n$.\nThat is, given a vertex $v_n \\in G$ in $n$th position, the algorithm add a new edge,\nwith $v_n$ as one of its vertices, to the graph $G$ without changing its ranking.\nWe show that if the graph $G$ is the union of $P_{2^{t}-1}$ and one edge of the form as indicated\nin Procedure 1, then the ranking of $G$ is equal to the ranking of $P_{2^{t}-1}$. This guarantees that\nthe edges constructed using Procedure 1, are good edges. 
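\n\nAs a concrete illustration (ours, and not part of the construction itself), the standard ranking of a path and the ranking condition can be checked directly with a short brute-force Python script; the function names are ours and the check simply applies the definition rather than an efficient algorithm.\n\\begin{verbatim}\ndef standard_ranking(n):\n    # v_i receives alpha+1, where 2^alpha is the largest\n    # power of 2 dividing i (standard ranking of P_n)\n    labels = []\n    for i in range(1, n + 1):\n        alpha = 0\n        while i % (2 ** (alpha + 1)) == 0:\n            alpha += 1\n        labels.append(alpha + 1)\n    return labels\n\ndef is_ranking_of_path(labels):\n    # between two vertices of equal label there must be a\n    # strictly larger label on the (unique) connecting path\n    n = len(labels)\n    for i in range(n):\n        for j in range(i + 1, n):\n            if labels[i] == labels[j]:\n                if max(labels[i + 1:j], default=0) <= labels[i]:\n                    return False\n    return True\n\nlab = standard_ranking(15)\nprint(lab, max(lab), is_ranking_of_path(lab))\n\\end{verbatim}\nFor $P_{2^{4}-1}$ this uses exactly $4$ labels and satisfies the ranking condition, in line with Lemma \\ref{rankingofapath}.\n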
We also give sufficient and necessary\nconditions to determine whether a set of edges $H$ is a set of good set for $P_{2^{t}-1}$.\n\nSince one of our aims is to enumerate the maximum number of edges that can be added to a graph without\nchanging its rank, we give a recursive construction of the maximum set of ``good edges\". The recursive\nconstruction gives us a way to count the the number of edges in the set of good edges.\n\nWe recall that $(\\alpha_{r} \\alpha_{r-1} \\ldots \\alpha_{1}\\alpha_{0})_{2}$ with\n$\\alpha_{h} = 0 \\text{ or } 1$ for $0 \\le h \\le r$ is the binary representation of a positive\ninteger $b$ if $b=\\alpha_{r}2^{r}+\\alpha_{r-1}2^{r-1} + \\ldots+\\alpha_{1}\n2^{1}+\\alpha_{0}2^{0}$. We define\n\\[\ng(\\alpha_{i})= \\left\\{\n\\begin{array}\n[c]{ll}\n0 & \\mbox{if $\\alpha_i=1$}\\\\\n1 & \\mbox{if $\\alpha_i=0$}.\n\\end{array}\n\\right.\n\\]\n\n{\\bf Procedure 1.}\nLet $V(P_{2^{k}-1})=\\{v_{1},v_{2},\\ldots,v_{2^{k}-1}\\}$ be the set of vertices\nof $P_{2^{k}-1}$ and let $H_{P}$ be a graph with $V(H_{P}) = V(P_{2^{k}\n-1})$. Suppose that $mt$ or $n=\\Omega(s)$ with\n$\\Omega(s)=m+1+\\sum_{i=1}^{s}g(\\alpha_{i})2^{i}$ for $s=1,2,\\ldots,t-1$ where\n $m=(\\alpha_{t}\\alpha_{t-1}\\ldots\n\\alpha_{1} \\alpha_{0})_{2}$,\n\n\\item $m=2^{j} \\cdot(2l+1)$ and $2^{j} \\cdot(2l+1)+2 \\le n < 2^{j}\n\\cdot(2l+2)$, for $l>0$,\n\n\\item $m=2^{j} \\cdot(2l+1)$ and $n=2^{w}$ for $2^{w} \\ge2^{j} \\cdot(2l+2)$.\n\\end{enumerate}\n\n\n{\\bf Procedure 2.}\nLet $V(C_{2^{k}})=\\{v_{1},v_{2},\\ldots,v_{2^{k}}\\}$ be the set of vertices of\n$C_{2^{k}}$. Let $H$ be a graph with $V(H)= V(C_{2^{k}})$. Suppose that $mt$ or $n=\\Omega(s)$ with\n$\\Omega(s)=m+1+\\sum_{i=1}^{s}g(\\alpha_{i})2^{i}$ for $s=1,2,\\ldots,t-1$,\n\n\\item $m=2^{j} \\cdot(2l+1)$ and $2^{j} \\cdot(2l+1)+2 \\le n < 2^{j}\n\\cdot(2l+2)$,\n\n\\item $m=2^{j} \\cdot(2l+1)$ and $n=2^{w}$ for $2^{w} \\ge2^{j} \\cdot(2l+2)$\nwhere $l\\ge0$,\n\n\\item $12^{t}$.\nSuppose that $2^{w}t$, and $e$ is good for $P_{2^{k}-1}$.\n\nIf Case 2 holds, then $e$ connects vertices $v_{m}$ and $v_{n}$ with\n$n<2^{t-1}$. If $m=2^{t+1}-1$, the argument is similar to the above case, so\nwe suppose that $m\\not =2^{t+1}-1$. If $n$ is odd then $f(v_{m})=f(v_{n})=1$,\nwhich is a contradiction.\n\nFor the remaining part of this case, we suppose that $m+2f(v_1)$. Note that $w \\in A_j$. Since $w$ does\nnot belong to any of the components in $\\mathcal{C} (A_j)$, any other path connecting those two vertices must contain\n$w$. Therefore, $w\\in P$. This proves Part 3.\n\\end{proof}\n\n\n\\begin{theorem} \\label{maintheoremcomponents} If $n>3$, then\n\\begin{enumerate}\n\\item $\\chi_{r}(P_{2^{n}-1} \\cup \\bigcup_{j=4}^{n} E_j)= \\chi_{r}(P_{2^{n}-1} )=n$ if and only if $\\bigcup_{j=4}^{n} E_j$ is the set of good edges for $P_{2^{n}-1}$.\n\n\\item $\\bigcup_{j=4}^{n} E_j = E(H_{P}).$\n\n\\item $\\mu (P_{2^n-1})= (n-3)2^n+4$.\n\n\n\\end{enumerate}\n\\end{theorem}\n\n\\begin{proof} For this proof we denote by $f$ the standard $k$-ranking of $P_{2^n-1}$.\nWe prove of Part 1. The proof that the condition is sufficient is straightforward.\n\nTo prove that the condition is necessary we use induction. Let $S(t)$ be the statement\n\\[\\chi \\left(P_{2^n-1}\\cup \\bigcup_{j=4}^{t} E_j\\right)=\\chi (P_{2^n-1}). \\]\nLemma \\ref{components} Part 3. proves $S(4)$. Suppose the $S(k)$ is true for some $ 4\\le k f(v_1)$. Note that $w \\in A_{k+1}$. 
Since $w$ does not belong to any of the\ncomponents of $G_1\\setminus A_j$, any other path connecting those two vertices must\ncontain $w$. Therefore, $w\\in P$. This proves that $S(k+1)$ is true. Thus,\n$\\bigcup_{j=3}^{n} E_j$ is a set of good edges for $P_{2^{n}-1}$.\n\nWe now prove that $\\bigcup_{i=3}^{n} E_i$ is the largest possible set of good edges\nfor $P_{2^{n}-1}$. Suppose that $uv$ is a good edge for $P_{2^{n}-1}$ with $f(v)3$, then\n\n\\begin{enumerate}\n\\item $\\chi_{r}(C_{2^{k}} \\cup H_{C})= \\chi_{r}(C_{2^{k}})=k+1$ if and only if $H_{C}$ is the set of good edges for $C_{2^{k}}$.\n\n\\item $\\mu (C_{2^n})= (n-2)2^n+1$.\n\n\\end{enumerate}\n\\end{theorem}\n\n\n\\begin{proof} To prove Part 1, we first show the condition is sufficient. Suppose\n$\\chi_{r}(C_{2^{k}} \\cup H_{C} )=\\chi _{r}(C_{2^{k}})=k+1$.\nSuppose $E(H)$ is not a set of good edges. Thus, $E(H)$\ncontains a forbidden edge, therefore the rank number of $C_{2^{k}}$ is greater\nthan $k+1$. That is a contradiction.\n\nNext we show the condition is necessary. It is known that $\\chi_{r}(C_{2^{k}})=k+1$\nand that this ranking is unique (up to permutation of the two largest\nlabels) \\cite{bruoth}. Let $f$ be a ranking of $C_{2^{k}}$ with $k+1$ labels\nwhere $f(v_{2^{k}})=k+1$.\n\nLet $e_{1}=\\{v_{2^{k}-1},v_{2^{k}}\\}$ and $e_{2}=\\{v_{2^{k}},v_{1}\\}$ be two\nedges of $C_{2^{k}}$ and let $H^{\\prime}$ be the graph formed by edges of $H$\nwith vertices in $V^{\\prime}=V(H)\\setminus\\{v_{t}\\}=\\{v_{1},v_{2} ,\\ldots,v_{2^{k}-1}\\}$.\nTheorem \\ref{maintheoremcomponents} Parts 1 and 2 imply\n that $E(H^{\\prime})$ is a set of good edges for the graph\n $C_{2^{k}}\\setminus\\{e_{1},e_{2}\\}$ if and\nonly if $\\chi_{r}(C_{2^{k}}\\setminus\\{e_{1},e_{2}\\}\\cup H^{\\prime})=k$ (see\nFigure 2 Part (b)). Note that $V^{\\prime}$ is the set of vertices of\n$C_{2^{k}}\\setminus\\{e_{1},e_{2}\\}$. The vertices of $C_{2^{k}}\\setminus\n\\{e_{1},e_{2}\\}\\cup H^{\\prime}$ have same labels as vertices $V^{\\prime}$.\nCombining this property with $f(v_{2^{k}})=k+1$ gives $\\chi_{r}(C_{2^{k}}\\cup\nH^{\\prime})=k+1$.\n\nWe now prove that $\\chi_{r}(C_{2^{k}}\\cup H)=k+1$. Let $e$ be an edge in\n$H\\setminus H^{\\prime}$. Therefore, the end vertices of $e$ are $v_{2^{k}}$\nand $v_{n}$ for some $2\\leq n\\leq2^{k}-2$. From the ranking $f$ of a cycle we\nknow that $f(v_{2^{k}})=k+1$ and $f(v_{n})i+1$.\nFrom the proof of Theorem \\ref{productmultipartitegraphs} we know that\n$E_{m_s}$ is a set of good edges of\n$K_{m_{1},m_{2},\\dots,m_{t}}$ for $s=2, 3, \\ldots, t$ (if $s=1$, then $E_{m_1}$ will be a forbidden set).\nThe cardinality of $E_{m_s}$ is $(m_s-1)m_s\/2$ for $s=2, 3, \\ldots, t$.\nThis implies that \\[\\mu (K_{m_{1},m_{2},\\dots,m_{t}})= \\sum_{s=2}^t \\frac{(m_{s}-1)m_{s}}{2}.\\] This proves Part 3.\n\\end{proof}\n\nLet $G_{5}$ be the graph defined by the union of two copies of $K_{5}$ joined\nby an edge $e$. In Figure 3 we show $G_{5}\\cup H$ where $H$ is the set of all\ngood edges for $G_{5}$. 
So, $\\chi_{r}(G_{5}\\cup H)$ $=$ $\\chi_{r}(G_{5})=6$ and $\\mu(G_{5})=8$.\nWe generalize this example in Theorem \\ref{unioncompletegraphs}.\n\n\\begin{figure} [htbp]\n\\begin{center}\n\\psfrag{a}[c]{$Ale es una gueva$} \\psfrag{v15}[c]{$v_{15}$}\n\\psfrag{v16}[c]{$v_{16}$} \\psfrag{1}[c]{$1$} \\psfrag{2}[c]{$2$}\n\\psfrag{3}[c]{$3$} \\psfrag{4}[c]{$4$} \\psfrag{5}[c]{$5$}\n\\psfrag{e}[c]{$e$} \\psfrag{6}[c]{$6$}\n\\includegraphics[width=75mm]{twocompletegraphs1.eps}\n\\end{center}\n\\caption{$\\chi_r(G_5 \\cup H)=6$} \\label{figure3}\n\\end{figure}\n\n\n\n\\begin{theorem}\n\\label{unioncompletegraphs} Let $G_{n}$ be the union of two copies of $K_{n}$\njoined by an edge. Then,\n\n\\begin{enumerate}\n\\item any edge connecting a vertex with highest label in one\npart with any other vertex in the other part is good. All other edges are\nforbidden. Moreover, if $H$ is the set of all good edges for $G_{n}$, then\n$\\chi_{r}(G_{n}\\cup H)=\\chi_{r}(G_{n})=n+1$.\n\n\\item $\\mu (G_{n})= 2(n-1)$.\n\n\\end{enumerate}\n\\end{theorem}\n\n\\begin{proof} To prove Part 1. we suppose that\n$G_{n}=K\\cup K^{\\prime}\\cup e$ where $K=K^{\\prime}=K_{n}$. Let\n$W=\\{w_{1},w_{2},\\ldots,w_{n}\\}$ be the set of vertices of $K$ and let\n$V=\\{v_{1},v_{2},\\ldots,v_{n}\\}$ be the set of the vertices of $K^{\\prime}$\nand $\\{w_{1},v_{n}\\}$ the set of vertices of $e$. We consider the function\n\\[\nf(x)=\\left\\{\n\\begin{array}\n[c]{ll}\ni & \\text{ if }x=w_{i}\\text{ for some }w_{i}\\in W\\\\\ni & \\text{ if }x=v_{i}\\text{ for some }v_{i}\\in V\\setminus\\{v_{n}\\}\\\\\nn+1 & \\text{ if }x=v_{n}.\n\\end{array}\n\\right.\n\\]\n\n\n\\noindent It is easy to see that $f$ is a minimum ranking of $G_{n}$ and that\n$\\chi_{r}(G_{n})=n+1$. Let\n\\[\nH_{1}=\\{e \\mid e \\not \\in G_{n} \\text{ is an edge with vertices } w_{n}, v_{i}\n\\text{ for some } i \\in\\{1, 2, \\ldots, n-1 \\} \\}\n\\]\nand\n\\[\nH_{2}=\\{e \\mid e \\not \\in G_{n} \\text{ is an edge with vertices } v_{n}, w_{i}\n\\text{ for some } i \\in\\{1, 2, \\ldots, n-1 \\} \\}.\n\\]\nWe prove that $H=H_{1}\\cup H_{2}$ is the set of good edges for $G_{n}$.\n\nFrom the definition of $f$ we know the labels of the vertices in $K$ are\ndistinct and all of the labels in $K^{\\prime}$ are distinct. The combination\nof these properties and the definition of $f(v_{n})$ implies that if an edge\n$e$ connects one vertex in $K$ with $v_{n}$ it does not create a new edge\nconnecting two edges with same label. Similarly, if an edge $e$ connects one\nvertex in $K^{\\prime}$ with $w_{n}$ it does not create a path connecting two\nedges with the same label. This proves that $H$ is the set of good edges for\n$G_{n}$ and that $\\chi_{r}(G_{n}\\cup H)=\\chi_{r}(G_{n})=n+1$.\n\nSuppose that an edge $e$ connects the vertices $w_{i}\\in K$ and $v_{j}\\in\nK^{\\prime}$ with $i\\leq j\\not =n$. The path $w_{j}w_{i}v_{j}$ has two vertices\nwith same label without a larger label in between. The proof of the case\n$j\\leq i\\not =n$ is similar. Hence if $e\\not \\in H$, then $e$ is\\ a forbidden edge.\n\nProof of Part 2. From proof of Part 1. 
we can see that $H_{1}$ and $H_{2}$\nare the sets of good edges of $G_{n}$ and that the cardinality of each set is $n-1$.\nSo, $\\mu (G_{n})= \\mid H_1 \\mid + \\mid H_2 \\mid= 2(n-1)$.\n\\end{proof}\n\n\n\\section* {Acknowledgment}\nThe authors are indebted to the referees for their comments and corrections that helped to improve\nthe presentation.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\label{sec:intro}\n\nWhile $\\Lambda$CDM has so far passed most of its tests with flying colours, the possibility remains that it requires modification either at levels of sensitivity currently unattainable, or in regions of parameter space so far unexplored. In the past few decades, this has motivated a burgeoning programme aimed at formulating theoretical alternatives to concordance cosmology, developing phenomenological frameworks to explore their consequences and devising novel experimental and observational probes.\n\nA generic feature of fundamental extensions to the standard model is the introduction of new dynamical degrees of freedom. These may be produced by substituting dimensional parameters by dynamical fields (e.g. masses \\cite{Weinberg}, dark energy \\cite{Ratra} or the gravitational constant \\cite{Brans,Wetterich}), and also describe the effects of higher derivatives in the Lagrangian as well as extra dimensions. Any extension to the Einstein-Hilbert action will necessarily involve new fields~\\citep{Clifton}, and high-energy completions of known physics produce a plethora of new degrees of freedom which may operate on a wide range of scales. Given the ubiquity of novel fields in both gravitational and particle-based extensions to the standard model, seeking new physics is largely tantamount to devising methods to detect additional scalars, vectors or tensors.\n\nAny new field couples naturally to the Ricci scalar $R$ in the gravitational action, and hence contributes both to the Universe's energy-momentum and to the expression for the force. The simplest and most common case is a single scalar field with standard kinetic energy, potential $V(\\phi)$, and coupling $\\phi^2R$. This modifies the potential generated by a point mass $M$ to\n\\begin{equation}\n\\Phi_\\text{tot} = -\\frac{G_NM}{r}\\left(1+\\frac{\\Delta G}{G_N}e^{-mr}\\right) = \\Phi_N - \\frac{\\Delta G M}{r}e^{-mr}, \\label{eq:ff}\n\\end{equation}\nwhere $G_N$ is the bare (Newtonian) gravitational constant, $\\Phi_N$ is the Newtonian potential, and $m$ and $\\Delta G$ are new parameters describing the mass of the field and the strength of the new interaction respectively. In theoretical terms, $m$ is set by the potential as $m \\sim d^2V\/d\\phi^2$ while $\\Delta G$ depends on the strength of the non-minimal coupling and the background value of the field relative to the Planck mass. The bare mass may be altered to the effective mass $m_\\text{eff}$ given screening (see below), which sets the range of the field as $\\lambda_C \\simeq 1\/m_\\text{eff}$. Eq.~\\ref{eq:ff} describes a violation of the inverse-square law by the new field and the introduction of a \\emph{fifth force} ($F_5$) which adds to the standard gravitational force with a Yukawa form. General Relativity (GR) is recovered in the limit $\\Delta G\\rightarrow 0$, and also in the case where the field's mass is sufficiently large for the fifth force to be strongly Yukawa-suppressed ($m \\rightarrow \\infty$, $\\lambda_C \\rightarrow 0$). 
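\n\nThe suppression is easy to quantify: differentiating Eq.~\\ref{eq:ff} gives the force ratio $F_5\/F_N = (\\Delta G\/G_N)(1+mr)e^{-mr}$, which the short Python sketch below (ours, with illustrative numbers) evaluates as a function of $r$ in units of $\\lambda_C \\simeq 1\/m$.\n\\begin{verbatim}\nimport numpy as np\n\ndef force_ratio(r, dG_over_GN, lambda_C):\n    # ratio of the Yukawa fifth force to the Newtonian force,\n    # F5\/FN = (dG\/GN) (1 + m r) exp(-m r), with m = 1\/lambda_C\n    m = 1.0 \/ lambda_C\n    return dG_over_GN * (1.0 + m * r) * np.exp(-m * r)\n\nfor r in (0.01, 1.0, 10.0):\n    print(r, force_ratio(r, dG_over_GN=1.0, lambda_C=1.0))\n\\end{verbatim}\nFor $r \\gg \\lambda_C$ the extra term is exponentially suppressed and the inverse-square law is recovered.\n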
In the case where the field is light ($m \\rightarrow 0$, $\\lambda_C \\rightarrow \\infty$), the sole effect of the new field is to renormalise the gravitational constant to $G \\equiv G_N + \\Delta G$. Note that the existence of fifth forces requires only the presence of scalar fields, and hence does not necessarily require modifications to the standard model: the Higgs field for example leads a fifth force with sub-nuclear range.\n\nThe empirical programme of probing fifth forces (or, relatedly, violations of the inverse-square law) has been ongoing for around two decades; see~\\cite{Adelberger} for a review. At small scales ($\\sim 10^{-5}-10^{-2}$ m), the strongest limits are set by the E\\\"{o}t-Wash experiment which involves precise measurement of the force between two nearby objects~\\citep{Adelberger_0, Hoyle}, supplemented by high frequency torsion oscillators~\\citep{Colarado} and microcantilevers~\\citep{Stanford} at even shorter distances. These constraints are $\\Delta G\/G_N \\lesssim 10^{-6}$ at $\\lambda_C = 10^{-5}$ m and $\\Delta G\/G_N \\lesssim 10^{-2}$ at $\\lambda_C = 10^{-2}$ m. At astrophysical scales, constraints derive from the Earth-moon distance probed by lunar laser ranging ($\\Delta G\/G_N \\lesssim 10^{-10}$ at $\\lambda_C = 10^9$ m) and planetary orbits ($\\Delta G\/G_N \\lesssim \\text{few} \\times 10^{-9}$ at $\\lambda_C = 10^{11}$ m)~\\citep{Williams, Samain}. No direct constraint exists at larger scales due to the difficulty of predicting precisely the motions of masses beyond the Solar System, even under the assumption of standard gravity.\n\nUnlike an equivalence principle test, a fifth-force search is restricted to ranges $\\lambda_C$ comparable to the size of the system under investigation. Ranges much smaller than the system will go unnoticed as the scalar field will not couple its various parts; conversely, if the range is much larger, the entire system simply feels a renormalised Newton's constant $G$ and the inverse-square law is satisfied. This renders tests which rely on the distance-dependence of the force between masses, such as E\\\"{o}t-Wash and lunar laser ranging, inoperable. However, because a light scalar field affects timelike but not null geodesics it can be probed by means of the Eddington light-bending parameter $\\gamma = (1-\\Delta G\/G_N)\/(1+\\Delta G\/G_N)$ in the Parametrised Post-Newtonian framework. $\\gamma$ has been constrained to $\\lesssim 10^{-5}$ by the Cassini satellite~\\citep{Bertotti}, requiring $\\Delta G\/G_N \\lesssim 10^{-5}$. This by itself is sufficient to force a putative scalar into astrophysically and cosmologically uninteresting regions of parameter space, as its contribution to the Klein-Gordon equation cannot be more than a fraction $\\mathcal{O}(10^{-3})$ that of standard gravity.\n\nNevertheless, there exist ways in which a fifth force may escape these constraints and still have an impact at larger scales. In a number of theories (for example generalised scalar-tensor theories and massive gravity) the properties of $F_5$ depend on the local environment: in high density regions such as the Milky Way its strength or range becomes small, while it remains operative in the lower density environments of dwarf galaxies and the Universe as a whole. This phenomenon is called ``screening'' (see~\\cite{Jain_Khoury,Khoury_review} for reviews), and comes in three basic flavours. 
The \\emph{chameleon} mechanism~\\citep{chameleon} arises when the effective mass $m_\\text{eff}$ becomes dependent on the local density (and thus on $\\nabla^2\\Phi$), such that in denser regions $m_\\text{eff}$ becomes large and the fifth force short range, while in less dense regions (or on cosmological scales) $m_\\text{eff}\\rightarrow0$ and the fifth force reappears. In \\emph{kinetic} screening (e.g. K-mouflage;~\\cite{kinetic}), the scalar interaction is governed by a kinetic function $\\sim(\\partial \\phi)^2$, so that $\\phi$ becomes nonlinear, and hence effectively decouples, when its gradient is large. Finally, in the {\\it Vainshtein}~\\citep{Vainshtein} and {\\it symmetron}~\\citep{Hinterbichler} mechanisms, the coupling of the field to matter depends on the local environment, such that $\\Delta G\\rightarrow 0$ near massive bodies. In each case, Solar System tests would be expected to probe the screened regime and hence yield the GR result.\n\nIn the presence of screening, new scalar fields are possible with Compton wavelengths ranging from microscopic to cosmological values. This renders observationally viable a broad class of gravitational actions, including those that may be expected to form the effective low-energy limits of UV-complete theories. Further, screening holds opens the possibility for cosmic acceleration to receive a dynamical explanation, potentially circumventing the cosmological constant problem~\\citep{Carroll}, and sanctions the search for fifth-force phenomenology on all scales. Crucially, however, tests of screening require unscreened objects, which are the only ones to manifest $F_5$.\n\nThe aim of this paper is to constrain fifth forces in galaxies. Not only will this provide direct information on scales much larger than the Solar System ($\\sim0.4-50$ Mpc), it will also enable constraints in the presence of screening. This is possible due to the wide range of densities and environments of galaxies in the local Universe, including those with masses much lower than the Milky Way and in environments much sparser, which would remain unscreened even for small background values of a novel scalar field.\n\nWe restrict ourselves here to chameleon and symmetron screening,\\footnote{Similar behaviour is expected for the environmentally-dependent dilaton~\\cite{Brax}.} which are conceptually simpler and more amenable to testing through the signal we wish to explore. In these mechanisms, whether or not an object is screened depends on the value of its Newtonian potential $\\Phi$ \\cite{Brax_2}. For a single spherical source this is deducible analytically as a consequence of the fact that chameleon and symmetron charge is contained in a thin shell near the object's surface (the ``thin-shell effect'';~\\cite{chameleon, Khoury_LesHouches}). For multiple sources (and taking into account the possibility of environmental as well as self-screening), numerical simulations show that $\\Phi = \\Phi_\\text{in} + \\Phi_\\text{ex}$ -- where $\\Phi_\\text{in}$ is the Newtonian potential generated by the object itself at its surface, and $\\Phi_\\text{ex}$ that due to its surroundings -- remains the best proxy for the degree of screening, as measured by differences between dynamical mass, which is subject to the fifth force, and lensing mass which is not~\\citep{Zhao_1, Zhao_2, Cabre, Falck}. 
In addition to fifth-force strength $\\Delta G$ and range $\\lambda_C$, then, generic chameleon and symmetron models come also with a critical value of $\\Phi$, $\\Phi_c$, at which screening kicks in. To good approximation, objects with $|\\Phi|>|\\Phi_c|$ are in sufficiently dense environments to be screened ($m_\\text{eff} \\rightarrow \\infty$), while those with $|\\Phi|<|\\Phi_c|$ are not ($m_\\text{eff} \\rightarrow 0$).\\footnote{An alternative screening criterion (e.g.~\\cite{Cabre}) is $|\\Phi_\\text{in}| > |\\Phi_c|$ \\emph{or} $|\\Phi_\\text{ex}| > |\\Phi_c|$. As this is more difficult to satisfy than $|\\Phi_\\text{in}| + |\\Phi_\\text{ex}| > |\\Phi_c|$, our screened fractions may be considered upper bounds, making our constraints (which benefit from more \\emph{un}screened galaxies in which the predicted signal is $>$0) conservative in this regard.} For viable and interesting theory parameters, $|\\Phi_c|$ takes values in the range $\\sim 10^{-5}-10^{-8}$ ($c \\equiv 1$), putting some galaxies in the screened and others the unscreened regime.\n\nThin-shell screening possesses a range of intra-galaxy signals, all of which derive from a common basic effect. Within an unscreened galaxy, gas and dark matter interact via a fifth force with neighbouring unscreened mass, which leads to an effective increase in Newton's constant $\\Delta G = 2\\beta^2G$ if the scalar is light and coupled to matter by $\\nabla^2\\phi = 8\\pi\\beta G\\rho$. (If the galaxy is marginally screened there is an addition thin-shell factor, but this is not important for the cases of practical interest here.) Main sequence stars, however, are themselves massive objects with surface Newtonian potentials $|\\Phi_\\text{in}|\\sim10^{-6}$, and hence for $|\\Phi_c| \\lesssim 10^{-6}$ they self-screen and do not feel $F_5$. The effective equivalence principle violation \\cite{Hui} caused by the differential sensitivity of stars, gas and dark matter to the fifth force has four observable consequences \\cite{Jain_Vanderplas}: 1) A displacement between the stellar and gas disk in the direction of $F_5$, 2) warping of the stellar disk, 3) enhancement of the rotation velocity of the gas relative to the stars, and 4) asymmetry in the stellar disk's rotation curve. We focus in this paper on the first. By collating neutral atomic hydrogen (HI) and stellar (optical) data for a large sample of galaxies, calculating their gravitational environments and forward-modelling the expected signal for given theory parameters $\\{\\Delta G, \\lambda_C, \\Phi_c\\}$, we statistically constrain a fifth force screened by any thin-shell mechanism. We also provide constraints in the case without screening but where the fifth force is simply coupled differently to stars, gas and dark matter.\n\nThe structure of this paper is as follows. In Sec.~\\ref{sec:derivation} we derive the form of the HI-optical offset for a given coupling of $F_5$ to stars and gas, and in Sec.~\\ref{sec:data} we describe the survey data that we use to search for it. Sec.~\\ref{sec:method} details our method: first our mapping of the potential and fifth-force acceleration fields across the local Universe, second the calculation from these of the signal expected for given $\\{\\Delta G, \\lambda_C, \\Phi_c\\}$, and third the likelihood framework we use to constrain the theory parameters by Markov Chain Monte Carlo (MCMC). Sec.~\\ref{sec:results} presents our results and forecasts improvements from future data. 
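\n\nAs a minimal sketch of the thin-shell screening criterion above (illustrative numbers only, not the pipeline of Sec.~\\ref{sec:method}), a galaxy is labelled screened when $|\\Phi_\\text{in}|+|\\Phi_\\text{ex}|$ exceeds $|\\Phi_c|$ (with $c\\equiv1$):\n\\begin{verbatim}\ndef is_screened(phi_in, phi_ex, phi_c):\n    # thin-shell proxy: screened if |Phi_in| + |Phi_ex| > |Phi_c|\n    return abs(phi_in) + abs(phi_ex) > abs(phi_c)\n\n# a dwarf in a sparse environment versus a Milky Way-like object,\n# both tested against |Phi_c| = 1e-6 (illustrative values)\nprint(is_screened(1e-8, 1e-9, 1e-6), is_screened(2e-6, 1e-7, 1e-6))\n\\end{verbatim}\nIn the analysis below, $\\Phi_\\text{in}$ and $\\Phi_\\text{ex}$ are instead derived from the gravitational maps of Sec.~\\ref{sec:maps}.\n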
In Sec.~\\ref{sec:discussion} we provide further information on our noise model, document the systematic uncertainties in our analysis, compare our work to the literature and offer suggestions for future study. Finally, Sec.~\\ref{sec:conc} concludes.\n\n\n\\section{The signal: gas--star offsets in the presence of a fifth force}\n\\label{sec:derivation}\n\nConsider an unscreened galaxy in a chameleon- or symmetron-screened theory subject to an external Newtonian acceleration $\\vec{a}$ and a Newtonian acceleration $\\vec{a}_5$ due to unscreened matter within $\\lambda_C$. Take $|\\Phi_c| \\lesssim 10^{-6}$ so that stars self-screen. Under the step-function approximation of Eq.~\\ref{eq:ff}, unscreened objects with separation $r < \\lambda_C$ interact through standard gravity and $F_5$, while those with $r > \\lambda_C$ interact only through the former. Thus the unscreened gas and dark matter will effectively couple to the mass generating $\\vec{a}_5$ with enhanced gravitation constant $G_N + \\Delta G$, and hence feel a total acceleration\n\\begin{equation} \\label{eq:agas}\n\\vec{a}_{g,DM} = \\vec{a} + \\frac{\\Delta G}{G_N} \\: \\vec{a}_5.\n\\end{equation}\n\nIn order for the stars to remain associated with the gas and dark matter, their insensitivity to $\\vec{a}_5$ must be compensated by an acceleration due to a separation between their centre of mass and that of the dark matter halo. If the stellar centroid lags behind the halo centre in the direction of $\\vec{a}_5$ by a distance $r_*$, its total acceleration will be\n\\begin{equation} \\label{eq:astar}\n\\vec{a}_* = \\vec{a} + \\frac{G_N M( |\\Phi_c|&\n\\end{align}\nwhere $\\Phi$ is the potential of the galaxy as a whole. This is the equation we will solve in Sec.~\\ref{sec:method}, as a function of $\\Delta G$, $\\lambda_C$ and $\\Phi_c$, to calculate the expected signal for the observed galaxies.\n\nAlthough we have couched our discussion in terms of thin-shell screening, the phenomenon we describe would also result from a generic fifth force, screened or unscreened, that couples differently to different galaxy mass components. In that case, $\\Delta G$ in Eq.~\\ref{eq:rstar} would be related to that in Eq.~\\ref{eq:ff} by a factor of the normalised coupling coefficient of $F_5$ to gas and dark matter minus that to stars.\n\n\n\\section{Observational data}\n\\label{sec:data}\n\nOur observational catalogue of HI-optical displacements is derived from the \\textit{Alfalfa} survey~\\citep{Giovanelli_1, Giovanelli_2, Saintonge, Kent, Martin, Haynes}, a second-generation blind HI survey of $\\sim 30,000$ extragalactic sources conducted with the Arecibo Observatory. \\textit{Alfalfa} is sensitive to objects of mass $10^6 \\lesssim M_\\text{HI}\/M_\\odot \\lesssim 10^{10.8}$, and covers over $7000 \\: \\text{deg}^2$ out to $z \\simeq 0.06$. More than 98\\% of HI detections were associated with optical counterparts (OCs) by interactively examining DSS2 and SDSS images, in addition to objects in the Nasa Extragalactic Database (NED).\n\nWe cut the sources designated as ``priors'' (quality flag 2), which have signal to noise ratio $s\\lesssim6.5$ but are nevertheless matched to OCs with optical redshifts consistent with those derived from HI, as well as those estimated to be high velocity clouds in the Milky Way (quality flag 9). We also remove galaxies with an offset angle between HI and OC centroids of $>2$ arcmin, which is likely caused by poor signal to noise. 
These are similar cuts to those employed in previous work along these lines~\\cite{Vikram}. Objects at larger distance $D$ have larger physical uncertainty in HI-optical displacement for given angular uncertainty $\\theta$, and we will find in Sec.~\\ref{sec:results} that for realistic $\\theta$ values (see Sec.~\\ref{sec:uncertainties}) objects with $D>100$ Mpc do not provide further constraining power on $\\{\\Delta G, \\lambda_C, \\Phi_c\\}$. We therefore cut the catalogue at 100 Mpc, yielding 10,822 galaxies in the final sample.\n\n\n\\section{Method}\n\\label{sec:method}\n\nWe construct a Bayesian framework for constraining $\\{\\Delta G, \\lambda_C, \\Phi_c\\}$ by comparing forward-modelled $\\vec{r}_*$ values to those observed in \\textit{Alfalfa} galaxies. To do so, we derive or specify probability distributions for the input parameters necessary to solve Eq~\\ref{eq:rstar}, and use Monte Carlo sampling to propagate their uncertainties into the predicted $\\vec{r}_*$. We thus create a likelihood function for that signal, and score models by applying Bayes' theorem and an MCMC algorithm to it.\n\nTo simplify our inference, we first approximate the exponential in Eq.~\\ref{eq:ff} by a step function at $\\lambda_C$,\\footnote{As our gravitational maps become inaccurate beyond $\\sim200$ Mpc, this approximation is necessary for larger fifth-force ranges because we cannot include contributions from sources $\\gtrsim2\\lambda_C$ from test points $\\sim100$ Mpc from us. With future maps extending to higher redshift it will be preferable to treat Eq.~\\ref{eq:ff} more exactly.} and second impose the relation between $\\lambda_C$ and screening threshold $\\Phi_c$ implied by a specific chameleon-screened theory, the Hu-Sawicki~\\citep{Hu_Sawicki} model of $f(R)$ gravity~\\citep{fR, fR2}. In this case, both $\\lambda_C$ and $\\Phi_c$ are set by the background value of the scalar field, which is equal to the derivative of the function $f$ at the cosmological value $R_0$ of the Ricci scalar $R$, $f_{R0} \\equiv df\/dR|_{R_0}$ (also the background scalar field value $\\phi_0$):\n\\begin{subequations}\n\\label{eq:f(R)}\n\\begin{align}\n&\\lambda_C = 32 \\: \\sqrt{f_{R0}\/10^{-4}} \\: \\text{Mpc}, \\label{eq:f(R)1}\\\\\n&|\\Phi_c| = \\frac{3}{2} f_{R0}, \\label{eq:f(R)2}\\\\ \\nonumber\n&\\text{so that}\\\\\n&|\\Phi_c| = \\frac{3}{2} \\times 10^{-4} \\left(\\frac{\\lambda_C}{32 \\: \\text{Mpc}}\\right)^2. \\label{eq:f(R)3}\n\\end{align}\n\\end{subequations}\nQualitatively, these relations are also of more general applicability with $\\lambda_C$ interpreted in terms of the self-screening parameter $\\phi_0\/(2\\beta M_\\text{pl})$, often called $\\chi_c$. Although in the most general screening theory $\\lambda_C$ and $\\Phi_c$ are likely unrelated, we will find the observational data to possess insufficient information for a three dimensional inference, and hence impose Eq.~\\ref{eq:f(R)} by fiat. We leave it to future modelling with better data to open the parameter space further. 
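\n\nFor orientation, Eq.~\\ref{eq:f(R)} is straightforward to evaluate; the short Python snippet below (ours) converts a background field value $f_{R0}$ into the corresponding Compton wavelength and screening threshold.\n\\begin{verbatim}\nimport math\n\ndef hu_sawicki(f_R0):\n    # lambda_C = 32 sqrt(f_R0 \/ 1e-4) Mpc  and  |Phi_c| = (3\/2) f_R0\n    lambda_C = 32.0 * math.sqrt(f_R0 * 1e4)   # Mpc\n    phi_c = 1.5 * f_R0\n    return lambda_C, phi_c\n\nprint(hu_sawicki(1e-4))   # (32.0, 1.5e-4)\nprint(hu_sawicki(1e-5))   # (about 10 Mpc, 1.5e-5)\n\\end{verbatim}\n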
The model that we seek to test, then, is fully specified by $\\Delta G$ and $\\lambda_C$.\n\nFor given $\\{\\Delta G, \\lambda_C\\}$, we derive the gravitational inputs to the predicted $\\vec{r}_*$ ($\\Phi$ and $\\vec{a}_5$) in Secs~\\ref{sec:maps} and~\\ref{sec:a5} and the galactic input ($M( 7.63 \\times 10^{10} \\: M_\\odot$ (the 1000-particle resolution limit of the simulation;~\\cite{Diemer_Kravtsov}):\n\\begin{equation}\n\\Phi_\\text{hal} \\equiv c_\\Phi \\Phi_\\text{vis}; \\; \\; |\\vec{a}_\\text{hal}| \\equiv c_a |\\vec{a}_\\text{vis}|; \\; \\; c_\\theta \\equiv \\frac{\\vec{a}_\\text{hal} \\cdot \\vec{a}_\\text{vis}}{|\\vec{a}_\\text{hal}| |\\vec{a}_\\text{vis}|}.\n\\end{equation}\nTo apply this correction in the real Universe, we correlate these factors in the simulation with the observational proxies $d$ (distance to the object), $N_{10}$ (number of objects visible to 2M++ within 10 Mpc) and $\\Phi_\\text{vis,10}$ (potential due to objects visible to 2M++ within 10 Mpc). We calculate $\\{d, N_{10}, \\Phi_\\text{vis,10}\\}$ for each real test galaxy and assign it the $\\vec{c}$ values of a randomly-chosen \\textsc{darksky-400} halo in the same bin of proxy values.\n\n\\item{} Add the contributions $\\Phi_\\text{sm}$ and $\\vec{a}_\\text{sm}$ from the remaining matter, not associated with resolved halos, by estimating the mass in linear and quasi-linear modes of the density field (subscript `sm' denotes `smooth'). This is achieved by means of a Bayesian reconstruction of the $z \\lesssim 0.05$ density field with the Bayesian Origin Reconstruction from Galaxies (BORG) algorithm~\\citep{Jasche_10, Jasche_15, Jasche_Wandelt_12, Jasche_Wandelt_13, Lavaux}, which propagates information on the number densities and peculiar velocities of 2M++ galaxies assuming $\\Lambda$CDM cosmology (with best-fit Planck parameter values) and a bias model. The parameters of the bias model are not known a priori but fitted directly to the data. We use the brand-new particle mesh BORG algorithm~\\cite{BORG_PM}, which improves in accuracy on the previous 2-loop perturbation theory determination used in~\\cite{Desmond}.\n\nFor our purposes, the output of this algorithm is a posterior distribution of density fields for the local $(677.7 \\: h^{-1} \\: \\text{Mpc})^3$ volume, with a grid scale of $\\Delta r = 2.65 \\: h^{-1}$ Mpc. We approximate each field as a set of point masses at the grid cell centres. Although sufficient for $\\lambda_C \\gtrsim 10$ Mpc, when several grid cells are contained within $\\lambda_C$ around each test point, this discretisation will generate excessive shot noise at smaller $\\lambda_C$ due to the relative sparsity of source masses. Test points equidistant from neighbouring source masses may enclose none within $\\lambda_C$, and hence artificially receive $\\Phi_\\text{ex} = \\vec{a} = r_* = 0$. To rectify this, we define a minimum value for the radius out to which source objects are considered, $R_\\text{max}$, of $R_\\text{thresh} =10\\;\\text{Mpc}\\approx 2.5\\Delta r$, which we adopt for $\\lambda_C < R_\\text{thresh}$. To convert $\\Phi_\\text{sm}(R_\\text{thresh})$ and $\\vec{a}_\\text{sm}(R_\\text{thresh})$ to $\\Phi_\\text{sm}(\\lambda_C)$ and $\\vec{a}_\\text{sm}(\\lambda_C)$, we use the fact that the region within $\\lambda_C$ is small and of nearly uniform density to Taylor expand $\\rho$ to lowest order around the position of the test point. 
Putting the origin at that point,\n\\begin{align}\n\\Phi(R_\\text{max}) = G \\int_0^{R_\\text{max}} dV \\frac{\\rho(\\vec{r})}{r} \\approx G \\int_0^{R_\\text{max}} dV \\frac{\\rho_0}{r} \\propto R_\\text{max}^2,\n\\end{align}\n\\begin{align}\n\\vec{a}(R_\\text{max}) &= G \\int_0^{R_\\text{max}} dV \\frac{\\rho(\\vec{r})}{r^2} \\hat{r} \\\\ \\nonumber\n&\\approx G \\int_0^{R_\\text{max}} dV \\frac{\\rho_0 + \\vec{\\partial}_r\\rho|_0 \\cdot \\vec{r}}{r^2} \\hat{r} \\\\ \\nonumber\n&= G \\: \\vec{\\partial}_r\\rho|_0 \\cdot \\int_0^{R_\\text{max}} dV \\frac{\\vec{r}}{r^2} \\: \\hat{r} \\propto R_\\text{max}^2.\n\\end{align}\nThus both $\\Phi$ and $\\vec{a}$ scale with the surface area of the region which sources them.\\footnote{We have verified by greatly increasing the resolution of the grid in small subvolumes that these scalings provide estimates of $\\Phi_\\text{ex}$ and $\\vec{a}$ accurate to within 10\\% for $\\lambda_C$ as low as $\\Delta r\/20$.} For $\\lambda_C < R_\\text{thresh}$ we therefore set\n\\begin{equation}\n\\Phi_\\text{sm}(\\lambda_C) = \\frac{\\lambda_C^2}{R_\\text{thresh}^2} \\Phi_\\text{sm}(R_\\text{thresh}),\n\\end{equation}\n\\begin{equation}\n\\vec{a}_\\text{sm}(\\lambda_C) = \\frac{\\lambda_C^2}{R_\\text{thresh}^2} \\vec{a}_\\text{sm}(R_\\text{thresh}).\n\\end{equation}\nand estimate the total (external) potential and acceleration fields as\n\\begin{equation}\n\\Phi_\\text{ex} = \\Phi_\\text{hal} + \\Phi_\\text{sm}, \\; \\; \\; \\; \\vec{a} = \\vec{a}_\\text{hal} + \\vec{a}_\\text{sm}.\n\\end{equation}\n\\end{enumerate}\n\n\n\\subsection{Monte Carlo sampling}\n\\label{sec:MC}\n\nThe calculations of~\\cite{Desmond} used approximate maximum likelihood estimates of the input model parameters, resulting in one $\\Phi_\\text{ex}$ and one $\\vec{a}$ field (for given $R_\\text{max}$) over space. However, as anticipated in that work, it is desirable to marginalise over the uncertainties in those parameters to generate a set of plausible gravitational fields, weighted by their respective likelihoods. We implement this here by repeating our calculation of $\\Phi$ and $\\vec{a}_5$ $N_\\text{MC}=1000$ times, in each case drawing randomly and independently from the probability distributions of the input parameters to generate a set of final fields. This generates a full probability distribution for $\\Phi_\\text{ex}$ and $\\vec{a}_5$ at each test galaxy, and hence, by further propagation with Eq.~\\ref{eq:rstar}, for the expected signal $r_*$. This will allow us to derive posteriors on $\\{\\Delta G, \\lambda_C\\}$ by means of a full likelihood framework.\n\nEach of the three steps in Sec.~\\ref{sec:maps} naturally lends itself to this approach by supplying a probability distribution for the relevant parameters, as follows:\n\n\\begin{enumerate}\n\n\\item{} AM with non-zero scatter imposes an incomplete constraint on the galaxy--halo connection, in the sense that many halos may plausibly be associated with a given galaxy. We therefore repeat the inverse-AM step 200 times to generate a set of catalogues relating 2M++ galaxies to \\textsc{darksky-400} halos.\\footnote{Note that we do not marginalise over the uncertainties in the AM parameters themselves, which therefore constitute a systematic uncertainty of our analysis. However, due to the tight clustering constraints on these parameters~\\citep{Lehmann}, this uncertainty is almost certainly subdominant to the statistical AM uncertainty. 
We collect and discuss our systematic uncertainties in Sec.~\\ref{sec:systematics}.}\n\n\\item{} Given values for the observational proxies $d$, $N_{10}$ and $\\Phi_\\text{vis,10}$, a range of correction factors $\\vec{c}$ are possible, with probability distributions given by their relative frequencies over resolved \\textsc{darksky-400} objects. Each random draw from their joint distribution corresponds to the association of a given real galaxy with a different simulated object at fixed $\\{d,N_{10},\\Phi_\\text{vis,10}\\}$. Thoroughly sampling this distribution therefore marginalises over the possible contributions to $\\Phi$ and $\\vec{a}$ from halos of mass $M_\\text{vir} > 7.63 \\times 10^{10} M_\\odot$ hosting galaxies that are unresolved by 2M++.\n\n\\item{} Uncertainties in the inputs to the BORG algorithm -- which include galaxy number densities and bias model -- produce uncertainty in the corresponding smooth density field. Because the particle-mesh BORG run that we use produces $\\lesssim10$ independent samples, we marginalise over $10$ realisations from the burnt-in part of the chain, chosen further apart from one another than the correlation length. \n\n\\end{enumerate}\n\nThese complete the first three rows of Table~\\ref{tab:params}, which lists the probability distributions of all the parameters of our model. The fourth describes an additional scatter of $10\\%$ we impose between the observed and true values of halo mass, concentration and position, and the remainder will be described later in this chapter. By giving conservative uncertainties to these parameters, we intend the width of the fifth-force part of our likelihood function -- and hence our inference -- to be conservative also.\n\n\n\\begin{table*}\n \\begin{center}\n \\begin{tabular}{|l|l|l|}\n \\hline\n \\textbf{Parameter} & \\textbf{Description \/ origin of uncertainty} & \\textbf{Parent distribution \/ sampling procedure}\\\\\n \\hline\n\\rule{0pt}{3ex}\n P$(M_\\text{vir}, c|L; \\alpha, \\sigma_\\text{AM})$ & Stochasticity in galaxy--halo connection & 200 mock AM catalogues at fixed $\\{\\alpha, \\sigma_\\text{AM}\\}$\\\\\n\\rule{0pt}{3ex}\n P($\\vec{c}|d, N_{10}, \\Phi_\\text{vis,10})$ & Location of unresolved halos & Random draw from frequency distribution in N-body sim.\\\\\n\\rule{0pt}{3ex}\n Smooth density field & Galaxy number densities, bias model etc & 10 independent draws from BORG posterior\\\\\n\\rule{0pt}{3ex}\n P$(M_{\\text{vir},t}, c_t, \\vec{r}_t|M_{\\text{vir},o}, c_o, \\vec{r}_o)$ &Halo properties & Uncorrelated 10\\% Gaussian uncertainty\\\\\n\\rule{0pt}{3ex}\n P$(\\Phi_{\\text{in},t}|\\Phi_{\\text{in},o})$ & Internal potential & 10\\% (with NSA data) or 30\\% (without) Gaussian uncertainty\\\\\n\\rule{0pt}{3ex}\n P$(\\text{early}|M_*)$ & Probability of being early-type & Linear fit to~\\cite{Henriques}, fig. 5\\\\\n\\rule{0pt}{3ex}\n P$(M_*, R_\\text{eff}, R_\\text{d,gas}|M_\\text{HI})$ & Galaxy properties from \\textit{Alfalfa} data & Scalings of~\\cite{Dutton11} with scatters as quoted there\\\\\n\\rule{0pt}{3ex}\n P$(\\Sigma_{D,t}^0|\\Sigma_{D,o}^0)$ & Central dynamical surface density & 0.5 (with NSA data) or 1 (without) dex Gaussian uncertainty\\\\\n \\hline\n \\end{tabular}\n \\caption{Parameters forming the input to our calculation of the gas--star offset $\\vec{r}_*$ for each \\textit{Alfalfa} galaxy, and the distributions from which they are drawn. 
For each Monte Carlo realisation of the fifth-force model we sample randomly and independently from these distributions to build an empirical likelihood function $\\mathcal{L}(\\vec{r}_*|\\lambda_C, \\Delta G)$. Subscripts `t' and `o' denote `true' and `observed' values respectively. Further information is provided in Sec.~\\ref{sec:method}.}\n \\label{tab:params}\n \\end{center}\n\\end{table*}\n\n\n\\subsection{Relating $\\vec{a}$ to $\\vec{a}_5$}\n\\label{sec:a5}\n\nSec.~\\ref{sec:maps} describes the calculation of $\\Phi$ and $\\vec{a}$ from \\emph{all} mass within a distance $\\lambda_C$ of a given test point. $\\Phi$ is ready for use in Eq.~\\ref{eq:rstar}, but the acceleration of interest, $\\vec{a}_5$, is due to the fifth force alone, which couples only to \\emph{unscreened} mass. We must therefore determine which source masses are screened and which unscreened, in order to include only the latter. This section gives our method.\n\nFirst, we calculate $\\Phi_\\text{ex}(\\lambda_C)$ at the position of each 2M++ galaxy by the method of Sec.~\\ref{sec:maps}. We estimate $\\Phi_\\text{in}$ by plugging into Eq.~\\ref{eq:Vin} the properties of the halo associated with the galaxy in the inverse-AM step (with a random scatter given by the fifth row of Table~\\ref{tab:params}). The halo is unscreened, and hence contributes to $\\vec{a}_5$, if $|\\Phi_\\text{ex}(\\lambda_C)| + |\\Phi_\\text{in}| < |\\Phi_c|$. As $\\Phi$ depends on the values of the probabilistic input parameters described in Sec.~\\ref{sec:MC}, we recalculate it for each of the $N_\\text{MC}$ realisations of the underlying model, and hence derive a probability $p_j(\\lambda_C)$ for halo $j$ to be unscreened. We repeat this also using solely the external part of the potential (i.e. setting $\\Phi_\\text{in}=0$) to make $p_\\text{j,ex}(\\lambda_C)$, which we will use below.\n\nWith the screening properties of the 2M++ halos determined, we turn next to the remaining mass component: the smooth density field. To ascertain where this is screened, we reconstruct the complete $p_\\text{ex}(\\vec{r}; \\lambda_C)$ field by linearly interpolating its values $p_\\text{j,ex}(\\lambda_C)$ at the positions of the 2M++ halos, which, by virtue of the high completeness of that survey, provide a fair sampling of the entire $z<0.05$ volume. We then simply evaluate $p_\\text{ex}$ at the position $\\vec{r}_k$ of each point mass $k$ representing the smooth density field, and assume that the structures in which this mass is located are sufficiently diffuse never to self-screen. $p_\\text{ex}(\\vec{r}_k; \\lambda_C)$ thus provides the probability for portion $k$ of the smooth density field to be unscreened.\n\nWe are now in position to calculate the fifth-force acceleration $\\vec{a}_5$ for each \\textit{Alfalfa} galaxy. For each Monte Carlo realisation, we begin by setting each source halo and portion of the smooth density field to be screened or unscreened according to its probability $p_j(\\lambda_C)$ or $p_\\text{ex}(\\vec{r}_k; \\lambda_C)$ as calculated above.\\footnote{By doing this independently for each object we assume the screening fractions to be uncorrelated. 
Although not strictly true, this is a good approximation: variations in most of the input parameters affect regions of size $<\\lambda_C$, while typical objects are separated by much larger distances.} We calculate $\\vec{a}_5(\\lambda_C)$, separately for each realisation, by evaluating the sum in Eq.~\\ref{eq:asum} over unscreened halos and adding the $G M\/r^2$ contribution from each unscreened portion of the smooth density field within $\\lambda_C$. As the screened stars do not contribute to $\\vec{a}_5$ even in unscreened galaxies, however, we first subtract $M_*$ from $M_\\text{vir}$ in the halo contribution. In practice this modification is minimal since stars comprise $\\lesssim5$\\% of total halo mass.\n\nWhile calculating $\\vec{a}_5$ itself requires sources at most $\\lambda_\\text{C,max}$ beyond the maximum distance of any test galaxy, determining whether those sources are themselves screened requires sources a further $\\lambda_\\text{C,max}$ out. Since the 2M++ catalogue and BORG-reconstructed density fields become unreliable beyond $\\sim200$ Mpc, and, as we find below, the fixed angular uncertainty in the \\textit{Alfalfa} HI-optical offset means that test points beyond $\\sim100$ Mpc bring little further constraining power to $\\Delta G$ and $\\lambda_C$, we take $\\lambda_\\text{C,max} = 50$ Mpc.\\footnote{By Eq.~\\ref{eq:f(R)3}, $\\lambda_C=50$ Mpc corresponds to $|\\Phi_c| = 2.3 \\times 10^{-4}$. This is larger than the potential of typical main sequence stars $\\sim10^{-6}$, suggesting that in this model stars would not be sufficiently dense to screen themselves. Thus both gas and stars would feel $F_5$ and the signal $r_*$ would go away. (This model would also leave the Milky Way, with similar $\\Phi$, unscreened.) Nevertheless, we include this model to explore more fully the constraining power of our signal: as in the case when screening is switched off, strictly our constraints require $F_5$ to couple differently to stars and gas.} Our minimum $\\lambda_C$ value is set by the point at which the predicted $r_*$ values become so small that the corresponding $\\Delta G\/G_N$ constraint becomes uninteresting. We will show in Sec.~\\ref{sec:constraints} that $\\Delta G\/G_N$ cannot be constrained to better than $\\mathcal{O}(1)$ for $\\lambda_C \\simeq 400$ kpc, which is the value we therefore adopt for $\\lambda_\\text{C,min}$. This is also the approximate scale at which tidal effects within galaxies may be expected to become important.\n\n\n\\subsection{Calculating $\\vec{r}_*$}\n\\label{sec:signal}\n\nArmed with a determination of $\\Phi$ and $\\vec{a}_5$ for each \\textit{Alfalfa} galaxy, we are now nearing our goal of using Eq.~\\ref{eq:rstar} to calculate the expected signal $\\vec{r}_*$ as a function of $\\Delta G$ and $\\lambda_C$. The remaining unknown is $M( 0$ to force $\\theta>0$. We will show in Sec.~\\ref{sec:uncertainty_model} that in each case $\\theta$ converges to values consistent with those of the first method and produces an almost identical $\\Delta G\/G_N$ posterior. This is because by far the greater part of the signal derives from noise for the $\\Delta G\/G_N$ values of interest in our analysis. Marginalising over $\\bar{\\theta}$, $A-B$ or $C-F$ broadens the $\\Delta G\/G_N$ posteriors only slightly. 
As discussed further in Sec.~\\ref{sec:systematics}, operationally $\\theta$ also includes the contribution to the signal from other physics, under the assumption that it supplies a Gaussian component to the likelihood with width a function only of $s$. The degeneracy between measurement uncertainty and additional physics in the determination of $\\theta$ will become important for future HI surveys with greater spatial resolution (see Sec.~\\ref{sec:forecast}).\n\nWe will also show results for a choice we call ``conservative,'' in which the $\\theta$ values of the fiducial method are doubled. This yields robust upper limits on $\\Delta G\/G_N$, and is the choice we focused on in~\\cite{Desmond_PRL}. Note however that this model produces significantly lower overall likelihood values than the fiducial one due to overprediction of the dispersions of the HI-OC displacements at each $s$, and in a model comparison sense is therefore clearly disfavoured.\n\n\n\\subsection{Likelihood model}\n\\label{sec:likelihood}\n\nOver the $N_\\text{MC}$ realisations of our probabilistic model, and for given $\\Delta G$ and $\\lambda_C$, we have now derived a distribution of predicted $r_{*,\\alpha}$ and $r_{*,\\delta}$ values for each \\textit{Alfalfa} galaxy. Heuristically these constitute the likelihood distribution of the signal, and by scoring it against the observations and applying Bayes' theorem one can derive constraints on $\\Delta G$ and $\\lambda_C$ by MCMC. In practice we proceed as follows.\n\nWe begin by constructing a template signal, assuming that each test galaxy is unscreened and taking $\\Delta G\/G_N=1$ and one of 20 values for $\\log_{10}(\\lambda_C\/\\text{Mpc})$ uniformly spaced between $\\log_{10}(0.4)$ and $\\log_{10}(50)$. For each test galaxy, we find the minimum ($r_0$) and maximum ($r_1$) of $r_{*,\\alpha}$ and $r_{*,\\delta}$ separately over the $N_\\text{MC}$ realisations of the model. We then partition the results of these realisations into $N_\\text{bins}=10$ uniform bins between these limits and define $P_j \\equiv N_j\/N_\\text{MC}$, where $N_j$ is the number of realisations falling in bin $j$. We take this histogram to represent the unscreened component of the model's likelihood function in $\\{r_{*,\\alpha}, r_{*,\\delta}\\}$ space. We calculate also the fraction $f$ of realisations in which the galaxy in question is unscreened. Given that the model predicts $r_*=0$ for the screened remainder, and treating the orthogonal RA and DEC components as independent, the overall likelihood $\\mathcal{L}_i(r_{*,\\alpha}, r_{*,\\delta})$ for test galaxy $i$ to have $\\vec{r}_*$ components $r_{*,\\alpha}$ and $r_{*,\\delta}$ is given by\n\\begin{equation} \\label{eq:L1}\n\\mathcal{L}_i(r_{*,\\alpha}, r_{*,\\delta}|\\Delta G, \\lambda_C) = \\mathcal{L}_i(r_{*,\\alpha}|\\Delta G, \\lambda_C) \\times \\mathcal{L}_i(r_{*,\\delta}|\\Delta G, \\lambda_C),\n\\end{equation}\nwhere\n\\begin{align} \\label{eq:L2}\n\\mathcal{L}_i(r_{*,\\alpha}|&\\Delta G, \\lambda_C) = (1-f) \\: \\delta(r_{*,\\alpha}) \\\\ \\nonumber\n&+ f \\: \\Sigma_{j=0}^{N_\\text{bins}} \\: P_j \\: \\delta(r_{*,\\alpha} - \\Delta G \\: (r_{0,\\alpha} + j \\Delta r_\\alpha)).\n\\end{align}\nHere $\\Delta r_\\alpha \\equiv (r_{1,\\alpha} - r_{0,\\alpha})\/(N_\\text{bins}-1)$ is the width of each bin, and we have approximated the unscreened component of the likelihood as a discrete sum of delta-functions $\\delta$ at the midpoints of the bins. 
$f$, $P_j$, $r_0$ and $r_1$ are implicit functions of $\\lambda_C$: by Eq.~\\ref{eq:f(R)3} and the procedure of Sec.~\\ref{sec:method}, a larger $\\lambda_C$ corresponds to a larger $|\\Phi_c|$ and hence a larger unscreened fraction $f$ and signal limits $r_0$ and $r_1$. Since $r_*$ is proportional to $\\Delta G$ (Eq~\\ref{eq:rstar}), this parameter simply multiplies the bin positions. The likelihood for the DEC component is defined analogously.\n\nWe convolve this with a Gaussian probability distribution function for these \\emph{true} values given \\emph{observed} values $r_{*,\\alpha,\\text{obs}}$ and $r_{*,\\delta,\\text{obs}}$ and their measurement uncertainties $\\sigma_i$. Dropping the dependence on $\\Delta G$ and $\\lambda_C$ for brevity:\n\\begin{equation} \\label{eq:L3}\n\\mathcal{L}_i(r_{*,\\alpha,\\text{obs}}) = \\int dr_{*,\\alpha} \\; \\mathcal{L}_i(r_{*,\\alpha}) \\: \\times \\: \\mathcal{L}_i(r_{*,\\alpha, \\text{obs}}|r_{*,\\alpha}),\n\\end{equation}\nwhere\n\\begin{equation} \\label{eq:L4}\n\\mathcal{L}_i(r_{*,\\alpha, \\text{obs}}|r_{*,\\alpha}) = \\frac{\\exp\\{{-(r_{*,\\alpha, \\text{obs}} - r_{*,\\alpha})^2}\/(2 \\sigma_{i,\\alpha}^2)\\}}{\\sqrt{2 \\pi} \\: \\sigma_{i,\\alpha}}.\n\\end{equation}\nThe observational uncertainty is given by\n\\begin{equation} \\label{eq:ra_unc}\n\\sigma_{i,\\alpha} = d_{A,i} \\: \\cos(\\delta_i) \\: \\theta_i,\n\\end{equation}\nwhere $d_{A,i}$ is the angular diameter distance to galaxy $i$, $\\delta_i$ is the galaxy's declination, and $\\theta_i$ is the angular uncertainty assigned by either the fiducial or conservative method of Sec.~\\ref{sec:data}.\\footnote{As explained in Secs.~\\ref{sec:uncertainties} and~\\ref{sec:forecast}, $\\theta$ also models a Gaussian component of the likelihood model due to non-fifth-force physics. We refer to it as measurement uncertainty here simply for brevity.} The corresponding DEC uncertainty is given by\n\\begin{equation} \\label{eq:dec_unc}\n\\sigma_{i,\\delta} = d_{A,i} \\: \\theta_i.\n\\end{equation}\nPerforming the convolution in Eq.~\\ref{eq:L3} then yields\n\\begin{align} \\label{eq:L7}\n&\\mathcal{L}_i(r_{*,\\alpha,\\text{obs}}|\\Delta G, \\lambda_C) = (1-f) \\: \\frac{\\exp\\{{-r_{*,\\alpha, \\text{obs}}^2}\/(2\\sigma_{i,\\alpha}^2)\\}}{\\sqrt{2 \\pi} \\: \\sigma_{i,\\alpha}} \\\\ \\nonumber\n& + f \\: \\Sigma_{j=0}^{N_\\text{bins}} \\: P_j \\: \\frac{\\exp\\{{-(r_{*,\\alpha,\\text{obs}} - \\Delta G \\: r_j)^2\/(2\\sigma_{i,\\alpha}^2)}\\}}{\\sqrt{2 \\pi} \\: \\sigma_{i,\\alpha}},\n\\end{align}\nwhere $r_j \\equiv (r_{0,\\alpha} + j \\Delta r_\\alpha)$ is the position of the $j$th bin centre. Treating the galaxies as independent, the likelihood of the entire dataset $d$ is then\n\\begin{equation}\n\\mathcal{L}(d|\\Delta G, \\lambda_C) = \\prod_i \\mathcal{L}_i(r_{*,\\alpha,\\text{obs}}|\\Delta G, \\lambda_C) \\: \\mathcal{L}_i(r_{*,\\delta,\\text{obs}}|\\Delta G, \\lambda_C).\n\\end{equation}\nFinally, we apply Bayes' theorem to deduce the probability of $\\Delta G$ and $\\lambda_C$ themselves:\n\\begin{equation}\nP(\\Delta G, \\lambda_C|d) = \\frac{\\mathcal{L}(d|\\Delta G, \\lambda_C) \\: P(\\Delta G, \\lambda_C)}{P(d)}.\n\\end{equation}\nwhere $P(\\Delta G, \\lambda_C)$ is the prior distribution of the model parameters and $P(d)$ is the constant probability of the data for any $\\{\\Delta G, \\lambda_C\\}$. 
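\n\nFor concreteness, the construction above can be illustrated with a minimal Python sketch (an illustration of Eqs.~\\ref{eq:L2} and~\\ref{eq:L7} rather than our production code; the array of Monte Carlo draws, the unscreened fraction and the measurement uncertainty are hypothetical stand-ins for the quantities defined in the text):\n\\begin{verbatim}\nimport numpy as np\n\ndef template(draws, n_bins=10):\n    # histogram of the unscreened Monte Carlo draws of one component of r_*\n    # for a single galaxy, computed at Delta G \/ G_N = 1\n    r0, r1 = draws.min(), draws.max()\n    dr = (r1 - r0) \/ (n_bins - 1)\n    r_j = r0 + dr * np.arange(n_bins)              # bin positions\n    N_j, _ = np.histogram(draws, bins=n_bins, range=(r0, r1))\n    return r_j, N_j \/ float(len(draws))            # r_j, P_j\n\ndef likelihood_1d(r_obs, sigma, dG, f, r_j, P_j):\n    # screened fraction at r_* = 0 plus unscreened histogram, each convolved\n    # with the Gaussian measurement uncertainty sigma (cf. eq:L7)\n    norm = 1.0 \/ (np.sqrt(2.0 * np.pi) * sigma)\n    screened = (1.0 - f) * norm * np.exp(-0.5 * (r_obs \/ sigma) ** 2)\n    unscreened = f * np.sum(P_j * norm *\n                            np.exp(-0.5 * ((r_obs - dG * r_j) \/ sigma) ** 2))\n    return screened + unscreened\n\\end{verbatim}\nMultiplying such terms over the RA and DEC components of every galaxy then gives the dataset likelihood above.\n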
We take a uniform prior in $\\log_{10}(\\lambda_C\/\\text{Mpc})$ in the range $\\log_{10}(0.4)-\\log_{10}(50)$, and the improper prior $\\Delta G \\ge 0$ flat in $\\Delta G$.\\footnote{We find a Jeffreys prior to yield almost identical results.} We will find our likelihood function to provide clear upper bounds on $\\Delta G$, obviating the need to impose such a bound a priori.\n\nDeriving the likelihood function directly from the results of the Monte Carlo realisations in this way removes the need to assume a particular functional form for it. This reduces bias in the inference, and we have checked explicitly that our method accurately reconstructs $\\Delta G\/G_N$ when mock data generated with a particular value is fed in (Sec.~\\ref{sec:valid}). Since several of the most important input probability distributions in Table~\\ref{tab:params} are non-Gaussian, we find that assuming the overall likelihood to be Gaussian (i.e. using the mean and standard deviation of the $N_\\text{MC}$ predicted $r_{*,\\alpha}$ and $r_{*,\\delta}$ values rather than the full histogrammed distribution) biases inference of a known input $\\Delta G\/G_N$ by a factor of $\\sim2$.\n\n\n\\subsection{Convergence and consistency tests}\n\\label{sec:convergence}\n\nWe have taken 10 realisations of the BORG smooth density field and 200 of the AM galaxy--halo connection. We verify that these numbers are sufficient to fully propagate these probability distributions into the predicted signal by directly investigating the impact of changing them. While the spread in $r_{*,\\alpha}$ and $r_{*,\\delta}$ in the signal templates initially rises with the number of realisations, indicating that additional uncertainty is still being captured by the marginalisation, this effect levels off near the values chosen. Using a larger number of realisations would not therefore appreciably impact the results. We have also checked that using a different set of 10 smooth density fields and 200 AM galaxy--halo connections does not alter any of our conclusions.\n\nSimilarly, we have used a finite number of Monte Carlo draws from the model to estimate the likelihood function. With $N_\\text{MC} \\lesssim 1000$, repetitions of the entire procedure yield constraints on $\\Delta G\/G_N$ and $\\lambda_C$ (Sec.~\\ref{sec:results}) that differ at the $\\gtrsim 20$\\% level. That this is not true with $\\gtrsim 1000$ realisations indicates that in this case the probability distributions of the input parameters of Table~\\ref{tab:params} have been thoroughly sampled and their uncertainties fully marginalised over in the final constraints. We check also that varying our estimates of $\\Phi_\\text{in}$, $M_\\text{vir}$, $c$, $\\Sigma_D^0$, $M_*$, $R_\\text{eff}$ and $R_\\text{d,gas}$ within a factor of $2$ does not significantly affect the results; this change is subdominant not only to the measurement uncertainties, but also to the other theoretical uncertainties deriving from the smooth density field, galaxy--halo connection and location of low-mass halos. (The \\emph{scatter} in $\\Sigma_D^0$ is however important; see Sec.~\\ref{sec:systematics}.)\n\nWe check that our inference is not driven by potential outliers by excising up to 10\\% of points with the most extreme values of $\\langle r_{*,\\alpha} \\rangle$, $\\langle r_{*,\\delta} \\rangle$, $r_{*,\\alpha,\\text{obs}}$ or $r_{*,\\delta,\\text{obs}}$, where angle brackets denote a modal average over the $N_\\text{MC}$ model realisations. This has little impact on the results.
In Sec.~\\ref{sec:valid} we will show explicitly that jackknife or bootstrap resamples of the \\textit{Alfalfa} dataset behave similarly.\n\nFinally, we check the convergence of our MCMC chains (run with the affine-invariant Python sampler \\textsc{emcee}) by visual inspection and by means of the Gelman-Rubin statistic. The low dimensionality of the inference combined with the relatively simple form of the likelihood function and unimodality of the posteriors makes exploring the parameter space relatively easy. With 10 walkers, we find 300 burn-in steps to suffice for 1D inference ($\\Delta G\/G_N$ at fixed $\\lambda_C$), and 600 burn-in steps to suffice for 2D inference (both $\\Delta G\/G_N$ and $\\lambda_C$).\n\nWe caution that errors in the probability distributions of the model parameters (including the implicit choice of delta-function priors for parameters which should in reality have some uncertainty) may lead to systematic error. We discuss this further in Sec.~\\ref{sec:systematics}.\n\n\n\\section{Results}\n\\label{sec:results}\n\n\\subsection{Comparison of observed and predicted signal}\n\\label{sec:correlations}\n\nTo begin our investigation of the presence of a fifth-force signal in the \\textit{Alfalfa} data, we show in Fig.~\\ref{fig:1a} (red) the distribution of the predicted values of $\\langle r_* \\rangle = \\langle (r_{*,\\alpha}^2 + r_{*,\\delta}^2)^{1\/2} \\rangle$ for fiducial model parameters $\\Delta G\/G_N=1$ and $\\lambda_C=5$ Mpc. For each galaxy this is the mode of the binned distribution of the $N_\\text{MC}$ model realisations, i.e. $\\langle r_* \\rangle \\equiv r_0 + j_0 \\Delta r$ where $j_0$ maximises $P_j$. For comparison, we show in green the case in which screening is switched off. This is tantamount to setting $|\\Phi_c| \\rightarrow \\infty$, implying $f=1$ for all \\textit{Alfalfa} galaxies and that all matter within $\\lambda_C$ of each test point is fully unscreened. Thus any pair of masses within that separation interact with a fifth force, leading to an enhanced signal. We show the analogous distribution for the real data in Fig.~\\ref{fig:1b}. In Fig.~\\ref{fig:1c} we show the uncertainty on the predicted $r_*$, defined as the minimal width enclosing 68\\% of the model realisations. As in Fig.~\\ref{fig:1a} we show the results both with and without screening. Fig. ~\\ref{fig:1d} shows the distribution of uncertainties $\\theta_i$ in the HI centroids in the real data, assigned by the fiducial method.\n\nFig.~\\ref{fig:2a} shows the correlation of $\\langle r_* \\rangle$ with $\\Phi$. The green points are for the case in which screening is switched off, the red points for when it is included -- with the signal of screened ($f<0.5$) galaxies set to 0 -- and the blue points as the red except without the $f<0.5$ points being lowered. The black dashed line shows the screening threshold $\\Phi_c$ for this choice of $\\lambda_C$, and the median errorbars in both directions, enclosing 68\\% of the model realisations, are shown in the top left. Fig.~\\ref{fig:2b} shows the analogous correlation with $a_5$ with identical colour scheme, and Figs.~\\ref{fig:2c} and~\\ref{fig:2d} show the corresponding correlations for the real data. 
In Figs.~\\ref{fig:3a} and~\\ref{fig:3b} we plot the RA and DEC components of the observed and predicted signals directly against each other.\n\nWe draw four inferences at this stage:\n\n\\begin{enumerate}\n\n\\item{} For this choice of model parameters, the predicted signals are typically $\\mathcal{O}(1)$ kpc while the observed ones are $\\mathcal{O}(10)$ kpc with a narrower distribution. This reflects the strength of the restoring force on the stellar centroid due to its offset from the halo centre (derived from the central density in Eq.~\\ref{eq:M(r)}), on the one hand, and the relatively large observational uncertainties in the HI centroid position on the other. This indicates that were the observed $r_*$ values due entirely to a fifth force, the required $\\Delta G\/G_N$ would be $\\mathcal{O}(10)$. If competitive constraints are to be placed on $\\Delta G$ with this signal, therefore, it must primarily be by means of the correlation of the relative directions and magnitudes of $\\vec{r}_*$ and $\\vec{a}_5$ across the galaxy sample. Assuming $\\Delta G\/G_N \\lesssim 1$, modified gravity can account for at most $\\sim10\\%$ of the HI-OC offset.\n\n\\item{} For $\\Delta G\/G_N=1$, the uncertainty on the predicted signal (width of the likelihood function) is typically $\\sim 10^{-2}-10^{-1}$ kpc, while that of the observations is again $\\mathcal{O}(10)$ kpc. As these two terms contribute equally to the likelihood in Eq.~\\ref{eq:L7}, this implies that the strength of the constraints we set in Sec.~\\ref{sec:constraints} is determined predominantly by the precision $\\theta$ with which the HI centroid of the \\textit{Alfalfa} galaxies is located. As the width of the likelihood function scales with $\\Delta G$ (Eq.~\\ref{eq:L7}), this is even more pronounced at the small $\\Delta G\/G_N$ values ($\\mathcal{O}(10^{-2})$) of interest there.\n\n\\item{} Both $\\langle r_* \\rangle$ and $\\sigma(r_*)$ are slightly larger for the unscreened than screened model. This is because $\\vec{a}_5$ is larger in the unscreened case due to contributions from additional masses in the vicinity of a given test point. Note however that this does not take into account the fact that in the model with screening the test galaxies themselves may be screened and hence have a predicted $r_*$ of 0 (the results here are only for the unscreened fraction). In terms of our inference this will prove to be the most significant difference between the two models.\n\n\\item{} By construction $\\langle r_* \\rangle$ shows clear and characteristic dependence on $\\Phi$ and $a_5$. The predicted signal is proportional to $a_5$ by Eq.~\\ref{eq:rstar}, generating a positive correlation whether or not screening is included. Without screening, the positive correlation between $|\\Phi|$ and $a_5$ (which are sourced by the same mass distribution) simply induces a positive correlation also between $r_*$ and $|\\Phi|$. The introduction of screening has two effects. First, some source masses surrounding test galaxies in high-density regions become screened and hence no longer contribute to $\\vec{a}_5$, causing the predicted signal to drop: this is the difference between the blue and green points at $|\\Phi|>|\\Phi_c|$ in Fig.~\\ref{fig:2a}. That this offset is small indicates that in this regime $\\Phi$ is set mostly by the test galaxies' own masses rather than their environment, so that most surrounding mass is unscreened and hence contributes to $a_5$ and $r_*$ even when screening is included. 
Second, test galaxies with $|\\Phi|>|\\Phi_c|$ become themselves screened, causing their predicted signal to fall to 0 (red). These galaxies may span the entire range of $a_5$ (Fig.~\\ref{fig:2b}), although they are more likely to be found at higher values; the largest predicted signals in the case with screening therefore tend to be produced in galaxies very close to being screened. By contrast, $r_{*,\\text{obs}}$ shows no apparent correlation with $\\Phi$ or $a_5$. This again implies that the greater part of the observed signal must be of non-fifth-force origin.\n\n\\end{enumerate}\n\n\n\\begin{figure*}\n \\subfigure[]\n {\n \\includegraphics[width=0.48\\textwidth]{fig1a}\n \\label{fig:1a}\n }\n \\subfigure[]\n {\n \\includegraphics[width=0.48\\textwidth]{fig1b}\n \\label{fig:1b}\n }\n \\subfigure[]\n {\n \\includegraphics[width=0.48\\textwidth]{fig1c}\n \\label{fig:1c}\n }\n \\subfigure[]\n {\n \\includegraphics[width=0.48\\textwidth]{fig1d}\n \\label{fig:1d}\n }\n \\caption{Histograms of the expectation (upper) and uncertainty (lower) of the fifth-force predicted (left) and observed (right) gas--star offset $r_*$ over all \\textit{Alfalfa} galaxies. For the fifth-force prediction we take the mode of the 1000 Monte Carlo realisations for each galaxy of the screened model with $\\lambda_C = 5$ Mpc and $\\Delta G\/G_N = 1$, given 10 bins for both the RA and DEC components ($r_{*,\\alpha}$ and $r_{*,\\delta}$) between their minimum and maximum values. The uncertainty is the minimal width enclosing $68\\%$ of the realisations. For the observed signal the uncertainty is assigned using the fiducial method of Sec.~\\ref{sec:uncertainties}. $r_{*,\\text{obs}}$ is typically several times larger than $\\langle r_* \\rangle$, and the corresponding uncertainty two orders of magnitude larger, even for the relatively extreme case in which the fifth force is as strong as gravity ($\\Delta G = G_N$). The majority of the signal must therefore be due to non-fifth-force effects.}\n \\label{fig:hists}\n\\end{figure*}\n\n\n\\begin{figure*}\n \\subfigure[Predicted $r_*-\\Phi$]\n {\n \\includegraphics[width=0.48\\textwidth]{fig2a}\n \\label{fig:2a}\n }\n \\subfigure[Predicted $r_*-a_5$]\n {\n \\includegraphics[width=0.48\\textwidth]{fig2b}\n \\label{fig:2b}\n }\n \\subfigure[Observed $r_*-\\Phi$]\n {\n \\includegraphics[width=0.48\\textwidth]{fig2c}\n \\label{fig:2c}\n }\n \\subfigure[Observed $r_*-a_5$]\n {\n \\includegraphics[width=0.48\\textwidth]{fig2d}\n \\label{fig:2d}\n }\n \\caption{Correlations of predicted (upper) and observed (lower) signal with Newtonian potential $\\Phi$ (left) and magnitude of fifth-force acceleration $a_5$ (right). The prediction is calculated as in Fig.~\\ref{fig:hists}, again for the case $\\lambda_C = 5$ Mpc, $\\Delta G\/G_N=1$, and $\\langle \\Phi \\rangle$ and $\\langle a_5 \\rangle$ are the modal averages over the 1000 Monte Carlo model realisations. In the top row, the red points are for the full chameleon- or symmetron-screened model, the blue points as the red except without the $r_*$ values of screened galaxies set to 0, and the green points show the case where screening is entirely switched off. The black dashed line shows the threshold $|\\Phi_c|$, above which galaxies are screened. The median sizes of the errorbars in the four directions -- minimal 68\\% bounds for the prediction, $1\\sigma$ for the observations -- are shown by the red crosses in the top left corners (the uncertainties in $\\langle r_* \\rangle$ are the same size as the red point).
While the predictions show a clear cutoff in $r_*$ for $|\\Phi|>|\\Phi_c|$ and a positive correlation with $a_5$ (Eq.~\\ref{eq:rstar}), this is not apparent in the observations.}\n \\label{fig:Va_corr} \n\\end{figure*}\n\nThese estimates of the fifth-force effect are relatively coarse. A precise quantitative determination of the preference of the data for $\\Delta G > 0$ -- and of the constraints on $\\Delta G$ and $\\lambda_C$ that it implies -- requires the full likelihood formalism of Sec.~\\ref{sec:method}. It is to this that we now turn.\n\n\\begin{figure*}\n \\subfigure[RA projection]\n {\n \\includegraphics[width=0.48\\textwidth]{fig3a}\n \\label{fig:3a}\n }\n \\subfigure[DEC projection]\n {\n \\includegraphics[width=0.48\\textwidth]{fig3b}\n \\label{fig:3b}\n }\n \\caption{Correlation of the observed and predicted components of $r_*$ in the RA ($r_{*,\\alpha}$, left) and DEC ($r_{*,\\delta}$, right) directions, with the predicted signal again calculated as in Fig.~\\ref{fig:hists} and errorbars as in Fig.~\\ref{fig:Va_corr}.}\n \\label{fig:radec_corr}\n\\end{figure*}\n\n\n\\subsection{Constraints on $\\Delta G$ and $\\lambda_C$}\n\\label{sec:constraints}\n\n\\subsubsection{Effect of noise model}\n\\label{sec:uncertainty_model}\n\nFrom Sec.~\\ref{sec:correlations} it is clear that for any reasonable value of $\\Delta G\/G_N$ modified gravity accounts for at most a small fraction of the total gas--star offset, justifying the fiducial method of Sec.~\\ref{sec:uncertainties} for setting the measurement uncertainties $\\theta_i$. This ought to manifest in an agreement between the results of that model and those of the others in that section where $\\theta$ is parametrised and fitted for. Here we show this explicitly for the screened model with $\\lambda_C = 1.8$ Mpc.\\footnote{We focus on this value of $\\lambda_C$ because it maximises the overall likelihood, as will be shown in Sec.~\\ref{sec:screening}.} Fig.~\\ref{fig:corner} shows the posteriors of $\\{\\Delta G\/G_N, \\bar{\\theta}\\}$ in the case where $\\theta = \\bar{\\theta}$ arcsec is a free parameter, and $\\{\\Delta G\/G_N, A, B\\}$ and $\\{\\Delta G\/G_N, C, D, E, F\\}$ for the models of Eqs.~\\ref{eq:unc_shorter} and~\\ref{eq:unc_short2} respectively. These are to be compared with Fig.~\\ref{fig:dG3}, which shows the $\\Delta G\/G_N$ posterior for the fiducial noise model in which $\\theta$ is fixed a priori to equal the standard deviation of $r_{*,\\text{obs}}$ in bins of $\\log_{10}(s)$. The same solution for $\\Delta G\/G_N$ is picked out in each case, and in Fig.~\\ref{fig:corner1} it is apparent that the maximum-likelihood $\\bar{\\theta}$ is close to the average scatter in $r_{*,\\text{obs}}$ across the entire dataset (18 arcsec). The model of Eq.~\\ref{eq:unc_short2} picks out almost the same $\\theta(s)$ as eq. 1 of \\cite{Haynes}. We have checked that these results hold at other $\\lambda_C$ values also.
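\n\nTo make the fiducial assignment concrete, a minimal sketch of the binned-scatter estimate of $\\theta(s)$ is given below (illustrative only; the arrays holding $\\log_{10}(s)$ and the observed angular offsets of all galaxies are hypothetical inputs):\n\\begin{verbatim}\nimport numpy as np\n\ndef fiducial_theta(log_s, r_obs_arcsec, n_bins=10):\n    # theta_i = standard deviation of the observed angular offsets in the\n    # bin of log10(s) to which galaxy i belongs\n    edges = np.linspace(log_s.min(), log_s.max(), n_bins + 1)\n    idx = np.clip(np.digitize(log_s, edges) - 1, 0, n_bins - 1)\n    theta_bin = np.array([r_obs_arcsec[idx == k].std() for k in range(n_bins)])\n    return theta_bin[idx]\n\\end{verbatim}\n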
For simplicity we therefore restrict ourselves from now on to the fiducial model in which $\\theta$ is fixed from the outset.\n\n\\begin{figure*}\n \\subfigure[]\n {\n \\includegraphics[width=0.315\\textwidth]{Corner_1}\n \\label{fig:corner1}\n }\n \\subfigure[]\n {\n \\includegraphics[width=0.315\\textwidth]{Corner_2}\n \\label{fig:corner2}\n }\n \\subfigure[]\n {\n \\includegraphics[width=0.315\\textwidth]{Corner_3}\n \\label{fig:corner3}\n }\n \\caption{For the screened model with $\\lambda_C = 1.8$ Mpc, posteriors of $\\Delta G\/G_N$ and additional parameters specifying the angular width $\\theta_i$ of the Gaussian, non-fifth-force component of the likelihood function for galaxy $i$. In Fig.~\\ref{fig:corner1}, $\\theta_i = \\bar{\\theta}$ arcsec for all galaxies. In Figs.~\\ref{fig:corner2} and~\\ref{fig:corner3} $\\theta_i$ is determined as a function of the signal to noise ratio $s$ of the detection according to Eqs.~\\ref{eq:unc_shorter} and~\\ref{eq:unc_short2} respectively. In each case the chains converge to the same solution for $\\Delta G\/G_N$ as when $\\theta$ is set a priori to the standard deviation of the measured $r_*$ in bins of $\\log_{10}(s)$ (Fig.~\\ref{fig:dG3}), indicating that our inference is robust to variations in the noise model.}\n \\label{fig:corner}\n\\end{figure*}\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.5\\textwidth]{deltaG_3_int_1p0_comp}\n \\caption{Posterior of $\\Delta G\/G_N$ for $\\lambda_C = 1.8$ Mpc under the fiducial noise model, with and without screening.}\n \\label{fig:dG3}\n\\end{figure}\n\n\n\\subsubsection{1D inference without screening}\n\\label{sec:noscreening}\n\nFor our first main analysis we switch screening off, fix $\\lambda_C$ at one of its 20 values between 400 kpc and 50 Mpc and constrain $\\Delta G$ alone. We find the $\\Delta G\/G_N$ posterior to be peaked at or near $0$ for all $\\lambda_C$, indicating no significant deviation from GR. We show in green the $1\\sigma$ upper limit on $\\Delta G\/G_N$ as a function of $\\lambda_C$ in Fig.~\\ref{fig:constraint}, three example posteriors in Fig.~\\ref{fig:6}, and the maximum increase in log-likelihood over the case $\\Delta G=0$ in Fig.~\\ref{fig:like}. As may be expected, the constraints are tighter for larger $\\lambda_C$: $\\Delta G\/G_N \\lesssim$ few$\\times10^{-3}$ $(1\\sigma)$ for $\\lambda_C=400$ kpc, and $\\lesssim 10^{-4}$ for $\\lambda_C=50$ Mpc. This is because there are more source masses within a larger $\\lambda_C$, leading to an enhanced $a_5$ and hence predicted $r_*$, and therefore requiring $\\Delta G\/G_N$ to be smaller for consistency with the fixed observations. Assuming there is indeed no preference for $\\Delta G>0$ in this model, the bumps in the green lines of Figs.~\\ref{fig:constraint} and~\\ref{fig:like} indicate the noise level of our experiment.\n\nA limit of particular interest for the case without screening is $\\lambda_C \\rightarrow \\infty$, which corresponds to a light scalar field as found for example in Brans--Dicke theory. Extrapolating our constraints to high $\\lambda_C$, our analysis may be expected to require $\\Delta G\/G_N \\lesssim 10^{-4}$ in this case, which corresponds to $\\omega \\gtrsim 500$ in Brans--Dicke. Although not competitive with Solar System results \\cite{Alsing}, this constraint derives from a very different gravitational regime.
Were $f(R)$ gravity (which requires $\\Delta G\/G_N = 1\/3$) not to employ screening, it would be ruled out to $1\\sigma$ across our parameter space, requiring $f_{R0} < 10^{-8}$.\n\nIt is important to remember, however, that the physical interpretation of these no-screening results is complicated by the fact that the signal itself relies on a difference between the fifth-force interactions of stars and gas, which most naturally arises when the former are screened and the latter not. Absent screening entirely, the constraints here would require the scalar field simply to couple to the gas and dark matter but not the stars. Our analysis can however be readily extended to the case of arbitrary differential coupling to galaxy mass components. In that case $\\vec{a}_5$ in Eq.~\\ref{eq:rstar} must be multiplied by the difference in the relative coupling coefficient of the stars and gas, and the constraints on $\\Delta G\/G_N$ are simply weakened by that factor.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.5\\textwidth]{Limits_FINAL_4}\n \\caption{$1\\sigma$ upper limit on $\\Delta G\/G_N$ $(\\Delta G_{1\\sigma}$), as a function of $\\lambda_C$, using the fiducial uncertainties for the model without screening (green) and the conservative uncertainties for the model with screening (red). Switching to the conservative uncertainties makes only a small ($\\sim0.1$ dex) difference in the unscreened case. The results with screening are weaker at small $\\lambda_C$ first because a sizeable fraction of the test galaxies are screened, and hence have predicted $r_*=0$, and second because fewer neighbouring masses contribute to $\\vec{a}_5$.}\n \\label{fig:constraint}\n\\end{figure}\n\n\\begin{figure*}\n \\subfigure[$\\lambda_C = 500$ kpc]\n {\n \\includegraphics[width=0.3\\textwidth]{deltaG_0_1p0_comp}\n \\label{fig:6a}\n }\n \\subfigure[$\\lambda_C = 5$ Mpc]\n {\n \\includegraphics[width=0.3\\textwidth]{deltaG_5_int_1p0_comp}\n \\label{fig:6b}\n }\n \\subfigure[$\\lambda_C = 50$ Mpc]\n {\n \\includegraphics[width=0.3\\textwidth]{deltaG_9_1p0_comp}\n \\label{fig:6c}\n }\n \\caption{Posteriors on $\\Delta G\/G_N$ for three values of $\\lambda_C$ under the fiducial noise model, with and without screening. In Fig.~\\ref{fig:6a}, the posterior for the model without screening is at very small $\\Delta G\/G_N$. For $\\lambda_C \\lesssim 3$ Mpc a non-zero $\\Delta G$ is statistically preferred, but only when screening is included.}\n \\label{fig:6}\n\\end{figure*}\n\n\\begin{figure*}\n \\subfigure[]\n {\n \\includegraphics[width=0.48\\textwidth]{Like_FINAL_log}\n \\label{fig:like_like}\n }\n \\subfigure[]\n {\n \\includegraphics[width=0.48\\textwidth]{Like_dG_FINAL_log}\n \\label{fig:like_dg}\n }\n \\caption{\\textit{Left:} maximum increase in log-likelihood over the case $\\Delta G=0$, for any $\\Delta G\/G_N$, as a function of $\\lambda_C$ in the range $0.4-50$ Mpc. We use the fiducial uncertainties and show the results both with (red) and without (green) screening. The model without screening achieves maximum log-likelihood values at most a few larger than GR, which we believe to indicate the noise level of our inference. The model with screening however achieves a maximum increase of $16$ for $\\lambda_C = 1.8$ Mpc, with a coherent trend over similar models differing only minorly in fifth-force range. \\textit{Right}: best-fit $\\Delta G\/G_N$ value corresponding to the maximum-likelihood point at each $\\lambda_C$. 
Larger $\\Delta G\/G_N$ values are inferred at smaller $\\lambda_C$, where the radius within which mass contributes to the fifth force is smaller, and hence $\\vec{a}_5$ and the predicted $r_*$ are smaller.} \n \\label{fig:like}\n\\end{figure*}\n\n\n\\subsubsection{1D inference with screening}\n\\label{sec:screening}\n\nWith screening switched on, $\\lambda_C$ values in the range $\\sim0.4-4$ Mpc prefer $\\Delta G>0$. We show in red in Fig.~\\ref{fig:6} a range of $\\Delta G\/G_N$ posteriors, and in Figs.~\\ref{fig:like_like} and~\\ref{fig:like_dg} the maximum log-likelihood increase over GR and best-fit $\\Delta G\/G_N$ values respectively as a function of $\\lambda_C$. The preference for a screened fifth force reflects a typically positive correlation between $\\vec{a}_5$ and $\\vec{r}_{*,\\text{obs}}$. In particular, the data would seem to favour $\\Delta G\/G_N \\sim \\mathcal{O}(10^{-2}-10^{-1})$ at $\\lambda_C \\sim 1$ Mpc ($f_{R0}, \\Phi_c \\approx 10^{-7}$), with the maximum-likelihood value $\\lambda_C = 1.8$ Mpc, $\\Delta G\/G_N = 0.025$ (also shown in Fig.~\\ref{fig:dG3}). This corresponds to $|\\Phi_c| = 4.7 \\times 10^{-7}$ by Eq.~\\ref{eq:f(R)3}, and is not ruled out by any previous analysis. If such a force were realised in nature, neighbouring $\\lambda_C$ values would also be expected to be preferred over GR simply because for a given galaxy $\\vec{a}_5$ is a continuous function of $\\lambda_C$, and hence the predicted $r_*$ values over nearby $\\lambda_C$s are correlated. This is the qualitative trend exhibited in Fig.~\\ref{fig:like_like}; in Sec.~\\ref{sec:valid} we show this trend to be quantitatively as expected also. That no model without screening achieves a meaningful increase in likelihood over GR indicates both that our noise model is robust and that the phenomenon we are observing depends crucially on the environment- and internal structure-dependence of screening. This seems hard to mock up by other means. We test our possible detection in detail in the following sections.\n\nThe results for the conservative measurement uncertainties (twice fiducial) are shown as the red curve in Fig.~\\ref{fig:constraint}. That the $\\Delta G\/G_N$ constraint is degraded when screening is introduced reflects the fact that when source objects are screened they contribute neither to $a_5$ nor therefore to $r_*$, while when test objects are screened they automatically receive $r_*=0$. The weaker the predicted signal at fixed $\\Delta G$, the larger $\\Delta G$ may be before it becomes statistically discrepant with the observations. The effect of screening is greater when $|\\Phi_c|$ is lower, which by Eq.~\\ref{eq:f(R)3} is at lower $\\lambda_C$. Conversely, for $\\lambda_C \\gtrsim 10$ Mpc the majority of source and test points are unscreened in any case, so removing screening has little effect. Note that for $\\Delta G\/G_N = 1\/3$, as in $f(R)$, our conservative error choice requires $\\lambda_C \\lesssim 500$ kpc, which corresponds to $f_{R0} \\lesssim 2 \\times 10^{-8}$.\n\n\n\\subsubsection{2D inference with screening}\n\\label{sec:2D}\n\nFinally, we perform the full 2D inference for both $\\Delta G$ and $\\lambda_C$ in the presence of screening. Under the fiducial noise model the posterior simply picks out the maximum-likelihood solution $\\lambda_C = 1.8$ Mpc, with corresponding $\\Delta G\/G_N = 0.025 \\pm 0.003$ (Figs.~\\ref{fig:2D_dG}--\\ref{fig:2D_comb}).
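\n\nSchematically, the sampling set-up of Sec.~\\ref{sec:convergence} for this 2D inference can be written in a few lines with \\textsc{emcee}. The sketch below is illustrative only: the log-likelihood is a toy Gaussian stand-in for the full machinery of Sec.~\\ref{sec:likelihood}, and the number of production steps is an arbitrary choice:\n\\begin{verbatim}\nimport numpy as np\nimport emcee\n\ndef log_likelihood(dG, log_lam):\n    # toy stand-in for the data likelihood of Sec. likelihood (illustration only)\n    return -0.5 * ((dG - 0.025) \/ 0.003) ** 2\n\ndef log_prob(p):\n    dG, log_lam = p\n    # flat prior in dG >= 0 and in log10(lambda_C \/ Mpc) over [log10(0.4), log10(50)]\n    if dG < 0 or not (np.log10(0.4) <= log_lam <= np.log10(50.0)):\n        return -np.inf\n    return log_likelihood(dG, log_lam)\n\nnwalkers, ndim, nburn, nstep = 10, 2, 600, 5000\np0 = np.column_stack([np.abs(np.random.normal(0.02, 0.01, nwalkers)),\n                      np.random.uniform(np.log10(0.4), np.log10(50.0), nwalkers)])\nsampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)\nstate = sampler.run_mcmc(p0, nburn)   # burn-in\nsampler.reset()\nsampler.run_mcmc(state, nstep)        # production\nsamples = sampler.get_chain(flat=True)\n\\end{verbatim}\n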
Under the conservative uncertainties (Figs.~\\ref{fig:2D_dG_2SNR}--\\ref{fig:2D_comb_2SNR}) the preference for low $\\lambda_C$ is simply a volume effect: by allowing $\\Delta G\/G_N$ to span a wider range, a smaller $\\lambda_C$ boosts the contribution from the prior. Even with these artificially inflated uncertainties, however, we are able to place the strong constraint $\\Delta G\/G_N < 0.95$ ($3\\sigma$) for any $\\lambda_C$ in the range $0.4-50$ Mpc.\n\n\\begin{figure*}\n \\subfigure[Fiducial]\n {\n \\includegraphics[width=0.315\\textwidth]{2D_deltaG_1p0}\n \\label{fig:2D_dG}\n }\n \\subfigure[Fiducial]\n {\n \\includegraphics[width=0.315\\textwidth]{2D_lambda_1p0}\n \\label{fig:2D_lambda}\n }\n \\subfigure[Fiducial]\n {\n \\includegraphics[width=0.315\\textwidth]{2D_G-lambda_1p0}\n \\label{fig:2D_comb}\n }\n \\subfigure[Conservative]\n {\n \\includegraphics[width=0.315\\textwidth]{2D_deltaG_2SNR_1p0}\n \\label{fig:2D_dG_2SNR}\n }\n \\subfigure[Conservative]\n {\n \\includegraphics[width=0.315\\textwidth]{2D_lambda_2SNR_1p0}\n \\label{fig:2D_lambda_2SNR}\n }\n \\subfigure[Conservative]\n {\n \\includegraphics[width=0.315\\textwidth]{2D_G-lambda_2SNR_1p0}\n \\label{fig:2D_comb_2SNR}\n } \n \\caption{Posteriors of $\\Delta G\/G_N$ and $\\lambda_C$ for the full 2D inference, using either the fiducial (top) or conservative (bottom) noise model. The left and centre panels show the marginalised constraints while the right panels show the full 2D posteriors. With the fiducial uncertainties the MCMC picks out the maximum-likelihood point of Fig.~\\ref{fig:like_like}, $\\lambda_C = 1.8$ Mpc: as this corresponds to a single bin the posterior of $\\lambda_C$ is flat. With the conservative uncertainties the preference for low $\\lambda_C$ is purely a volume effect arising from the uniform prior, since lower $\\lambda_C$ permits a larger range of $\\Delta G\/G_N$.}\n \\label{fig:2D}\n\\end{figure*}\n\n\\subsection{Validation}\n\\label{sec:valid}\n\nFig.~\\ref{fig:like_like} appears to show a clear preference for $\\Delta G>0$ for $\\lambda_C \\simeq 1-4$ Mpc in the case where screening is included. Here we document the tests we have performed to validate this detection, including the use of mock data with a $\\Delta G\/G_N$ value injected by hand, jackknife and bootstrap resampling, and rotation of the measured displacement on the plane of the sky.\n\n\\subsubsection{Mock data with $\\Delta G\/G_N=0$}\n\nWe begin by generating $100$ mock datasets with $\\Delta G=0$, so that $r_{*,\\text{obs},i,\\alpha\/\\delta} \\sim \\mathcal{N}(0,\\theta_{i,\\alpha\/\\delta})$. By giving the offset a random direction on the plane of the sky we ensure its complete non-correlation with $\\vec{a}_5$ over the sample. We repeat our inference for each mock dataset with the maximum-likelihood model $\\lambda_C=1.8$ Mpc, relaxing the prior $\\Delta G \\ge 0$ and refitting $\\theta_{i,\\alpha\/\\delta}$ from the spread in the $r_{*,\\alpha\/\\delta}$ values in bins of $\\log_{10}(s)$ (Sec.~\\ref{sec:uncertainties}). In each case we calculate the magnitude of the deviation (in $\\sigma$) of the best-fit $\\Delta G\/G_N$ value from 0 by dividing the median of the posterior by the standard deviation. The histogram over all mock data sets is shown in Fig.~\\ref{fig:sig_median}. 
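\n\nSchematically, each mock dataset and its significance can be produced as follows (a sketch only: the per-galaxy widths and the refitting wrapper are hypothetical names standing in for the fiducial noise assignment and the MCMC machinery of Sec.~\\ref{sec:likelihood}):\n\\begin{verbatim}\nimport numpy as np\n\nrng = np.random.default_rng(0)\nn_mock = 100\ndeviations = []\nfor _ in range(n_mock):\n    # one mock realisation: scatter each component of r_*,obs about zero with\n    # the fiducial widths sigma_ra, sigma_dec (hypothetical per-galaxy arrays)\n    mock_ra = rng.normal(0.0, sigma_ra)\n    mock_dec = rng.normal(0.0, sigma_dec)\n    # refit Delta G \/ G_N at lambda_C = 1.8 Mpc (fit_deltaG is a hypothetical\n    # wrapper returning posterior samples) and record the deviation from zero\n    post = fit_deltaG(mock_ra, mock_dec, lam_C=1.8)\n    deviations.append(np.median(post) \/ np.std(post))\n\\end{verbatim}\n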
As expected this forms roughly a standard normal distribution, confirming that the offset from $\\Delta G=0$ inferred from the real data -- $6.6\\:\\sigma$ -- would be highly unlikely in the frequentist sense to arise from the noise alone.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.5\\textwidth]{MOCK}\n \\caption{Distribution of deviations of best-fit values of $\\Delta G\/G_N$ from $0$ (median of $\\Delta G\/G_N$ posterior divided by standard deviation) derived by fitting a model with $\\lambda_C = 1.8$ Mpc to mock data sets with $r_{*,\\text{obs},\\alpha\/\\delta}$ randomly scattered around 0 by the uncertainties. This forms a roughly standard normal distribution, as expected for a healthy noise model. By contrast, the deviation for the real data at this $\\lambda_C$ is $6.6\\:\\sigma$, indicating that it is very unlikely to have been generated by a model with noise only.}\n \\label{fig:sig_median}\n\\end{figure}\n\n\\subsubsection{Mock data with maximum-likelihood $\\Delta G\/G_N$}\n\nNext we generate mock data with a fifth-force signal injected by hand. This provides an independent test of the validity of the fifth-force interpretation by checking for biases in the reconstructed value of $\\Delta G\/G_N$. We generate $250$ mock datasets at the maximum-likelihood point of our analysis -- $\\lambda_C = 1.8 \\: \\text{Mpc}, \\Delta G\/G_N = 0.025$ -- by scattering the expected $r_*$ values\\footnote{As in Sec.~\\ref{sec:correlations}, we take this to be the mode of the 1000 Monte Carlo model realisations in the case that the galaxy is likely unscreened ($f>0.5$), and 0 otherwise. We have checked that using the mean instead does not lead to appreciably different results.} by the fiducial Gaussian noise in bins of $s$. We refit each one with the $\\lambda_C = 1.8$ Mpc model (including recalculation of $\\theta_{i,\\alpha\/\\delta}$, as above), and calculate both the maximum-likelihood $\\Delta G\/G_N$ and the increase in log-likelihood $\\Delta\\log(\\mathcal{L})$ that value achieves over the GR case $\\Delta G=0$. The results are shown in Fig.~\\ref{fig:mocks}. We find the reconstructed $\\Delta G\/G_N$ values to be centred around the input value (vertical red line), indicating that our likelihood function picks out a known truth without bias. (We have checked that this holds also for mock data generated by models of different $\\lambda_C$.) The considerable variation between mock data sets is due to the dominance of the random noise.\n\nWe show in Fig.~\\ref{fig:mock_dG-dML} that the $\\Delta\\log(\\mathcal{L})$ values are strongly correlated with best-fit $\\Delta G\/G_N$, in the sense that mock datasets in which a stronger signal is inferred achieve a larger likelihood increase over GR. The recovered $\\Delta\\log(\\mathcal{L})$ values are however a factor $\\sim4$ smaller on average than that in the real data (Fig.~\\ref{fig:mock_dML}). This indicates that the relative contribution of signal and noise to our mocks does not perfectly reflect that in the observations: the mock samples have a relatively \\emph{weaker} signal than \\textit{Alfalfa}. This almost certainly derives from an inadequacy in our noise model, most likely its failure to account for a correlation of the non-fifth-force component of $\\vec{r}_*$ with anything other than $s$. This will need to be improved in the future when better priors on both the HI centroid measurement uncertainty and dependence of $\\vec{r}_*$ on ``galaxy formation'' physics are available. 
Other factors which may contribute to disagreement in Fig.~\\ref{fig:mock_dML} include an inaccurate $\\Phi_c-\\lambda_C$ relation (Eq.~\\ref{eq:f(R)3} strictly holds only for Hu-Sawicki $f(R)$), leading to a bias in galaxies' screening fractions as a function of fifth-force range, and inaccuracies in the galaxy and halo properties input to the model. We discuss these issues further in Sec.~\\ref{sec:discussion}.\n\nWe have checked also that when the inference is repeated using models with a range of $\\lambda_C$ values for the same mock data, the greatest $\\Delta\\log(\\mathcal{L})$ is achieved at the input value $\\lambda_C = 1.8$ Mpc. The peak in $\\Delta\\log(\\mathcal{L})$ is however narrower with $\\lambda_C$ than in the data, so that neighbouring $\\lambda_C$ models have a discrepancy in $\\Delta\\log(\\mathcal{L})$ even greater than a factor of $4$.\n\n\\begin{figure*}\n \\subfigure[]\n {\n \\includegraphics[width=0.315\\textwidth]{mock_dG}\n \\label{fig:mock_dG}\n }\n \\subfigure[]\n {\n \\includegraphics[width=0.315\\textwidth]{mock_dG-dML}\n \\label{fig:mock_dG-dML}\n } \n \\subfigure[]\n {\n \\includegraphics[width=0.315\\textwidth]{mock_dML}\n \\label{fig:mock_dML}\n } \n \\caption{Results of refitting a screened model with $\\lambda_C = 1.8$ Mpc to $250$ mock datasets generated by that model, differing only in noise under the fiducial noise model. Fig.~\\ref{fig:mock_dG} compares the distribution of best-fit $\\Delta G\/G_N$ values to the input value shown by the red line ($0.025$), demonstrating negligible bias. Fig.~\\ref{fig:mock_dG-dML} shows that the increase in log-likelihood over the case $\\Delta G=0$ is strongly correlated with the $\\Delta G\/G_N$ value inferred. This increase is however considerably smaller than that achieved in the real data (red line in Fig.~\\ref{fig:mock_dML}), indicating that the model is not fully sufficient to account for the observations.}\n \\label{fig:mocks}\n\\end{figure*}\n\n\n\\subsubsection{Bootstrap and jackknife resampling}\n\nNext we repeat our analysis with $350$ bootstrap- or jackknife-resampled mock datasets drawn from the $\\textit{Alfalfa}$ data. For the jackknife case we retain a random $70\\%$ of the galaxies. Again for the model with $\\lambda_C=1.8$ Mpc, we calculate the discrepancy in $\\sigma$ between the $\\Delta G\/G_N$ posterior and $\\Delta G=0$, and plot the resulting histograms, along with the corresponding maximum-likelihood $\\Delta G\/G_N$ values, in Fig.~\\ref{fig:boot_jack}. The vertical red lines indicate the results in the full \\textit{Alfalfa} sample. On the whole the mock datasets have a smaller significance due to the reduced statistics, especially in the jackknife case where the sample size is lower.
Nevertheless, the majority of resamples in both cases have a discrepancy with $\\Delta G=0$ of $>3\\sigma$ and a best-fit $\\Delta G\/G_N$ value close to $0.025$, indicating that our conclusions do not depend sensitively on peculiar features of the \\textit{Alfalfa} sample.\n\n\\begin{figure*}\n \\subfigure[Bootstrap $\\sigma$]\n {\n \\includegraphics[width=0.315\\textwidth]{BOOT_2}\n \\label{fig:boot}\n }\n \\subfigure[Jackknife $\\sigma$]\n {\n \\includegraphics[width=0.315\\textwidth]{JACK_2}\n \\label{fig:jack}\n }\n \\subfigure[Maximum-likelihood $\\Delta G\/G_N$]\n {\n \\includegraphics[width=0.315\\textwidth]{JackBoot_dG}\n \\label{fig:jackboot_dG}\n }\n \\caption{\\textit{Left and centre:} Distributions of deviations as in Fig.~\\ref{fig:sig_median} (for the model with $\\lambda_C = 1.8$ Mpc), but for datasets derived by bootstrap- or jackknife-resampling the \\textit{Alfalfa} sample. Each jackknife dataset contains $70$\\% of the full sample, and the vertical red lines show the deviation in the real data ($6.6\\sigma$). Although on the whole both sets imply $\\Delta G>0$ with lower significance than the full dataset due to reduced statistics, the fact that the majority of resamples achieve a $\\gtrsim3\\sigma$ detection indicates that the preference for a screened fifth force is not a peculiar property of the \\textit{Alfalfa} sample. \\textit{Right:} Maximum-likelihood $\\Delta G\/G_N$ values inferred from the bootstrap and jackknife resamples. These cluster around the \\textit{Alfalfa} result, shown as the vertical red line.}\n \\label{fig:boot_jack}\n\\end{figure*}\n\n\\subsubsection{Rotation of the signal}\n\nFinally, we repeat the analysis with the predicted HI-OC displacements (i.e. direction of $\\vec{a}_5$) rotated through $90\\degree$ or $180\\degree$ on the plane of the sky. This is to check that the directions of the observed and predicted signal are indeed positively correlated, as must be the case if part of the signal has a fifth-force origin. The inference of $\\Delta G>0$ should be weakened when one of the vectors is turned through $90\\degree$, and eliminated when turned through $180\\degree$. Fig.~\\ref{fig:rotation} shows that this is indeed the case (c.f. the unrotated result in Fig.~\\ref{fig:dG3}). We have checked that similar results hold for different $\\lambda_C$.\n\n\\vspace{5mm}\n\n\\noindent In a companion piece we present a similar detection from a largely independent dataset by analysing warps in galactic disks~\\citep{Desmond_warp}.\n\n\\begin{figure*}\n \\subfigure[$90\\degree$ rotation clockwise]\n {\n \\includegraphics[width=0.315\\textwidth]{deltaG_3_int_1p0_90t}\n \\label{fig:90}\n }\n \\subfigure[$90\\degree$ rotation anticlockwise]\n {\n \\includegraphics[width=0.315\\textwidth]{deltaG_3_int_1p0_90t2}\n \\label{fig:90_2}\n } \n \\subfigure[$180\\degree$ rotation]\n {\n \\includegraphics[width=0.315\\textwidth]{deltaG_3_int_1p0_180}\n \\label{fig:180}\n } \n \\caption{$\\Delta G\/G_N$ posteriors at $\\lambda_C = 1.8$ Mpc for the case in which the predicted $\\vec{r}_*$ of each galaxy is turned through $90\\degree$ in either direction on the plane of the sky (Figs.~\\ref{fig:90} and~\\ref{fig:90_2}), or $180\\degree$ (Fig.~\\ref{fig:180}). In the first and third cases the preference for $\\Delta G\/G_N>0$ is eliminated, indicating no or anti-alignment of the measured $\\vec{r}_*$ with the rotated $\\vec{a}_5$. 
In the second a residual correlation remains, although weaker than the unrotated case as evidenced by the lower maximum-likelihood $\\Delta G\/G_N$ and tail towards small values relative to Fig.~\\ref{fig:dG3}. This demonstrates that the observed $\\vec{r}_*$ points on the whole in the direction of $\\vec{a}_5$.}\n \\label{fig:rotation}\n\\end{figure*}\n\n\n\\subsection{Forecasting constraints from future surveys}\n\\label{sec:forecast}\n\nWe now investigate the potential of our constraints for further improvement in the future, focusing solely on the conservative error choice which yields only upper bounds on $\\Delta G\/G_N$. To begin, we recall from Sec.~\\ref{sec:correlations} (Fig.~\\ref{fig:hists}) that the uncertainties in the measured $\\vec{r}_*$ are larger by over two orders of magnitude on average than the theoretical uncertainties due to the potential and acceleration fields and test galaxy structure. This implies that the best way to strengthen the constraints is to improve the HI survey; only once the HI uncertainties have been greatly reduced will significant further improvement come from increasing the precision of $\\Phi$, $\\vec{a}_5$ and $M(N_\\text{Alf}$ we will extrapolate these results.\n\nFor a given mock dataset specified by $\\Theta$ and $f_N$, we calculate the $1\\sigma$ limit on $\\Delta G\/G_N$ at the fiducial value $\\lambda_C=5$ Mpc. To sample both the noise in the mock data and the specific galaxies retained we repeat this procedure 10 times to find an average $\\Delta G\/G_N$ constraint, $\\Delta G_{1\\sigma}$. We show the result as a contour plot in Fig.~\\ref{fig:contour}, and in Figs.~\\ref{fig:arcsec} and~\\ref{fig:Ngal} we show the variation of $\\Delta G_{1\\sigma}$ with $\\Theta$ and $N_\\text{gal}$ separately (for fixed value of the other parameter) for three example values.\n\nWe find $\\Delta G_{1\\sigma}$ to have approximately power-law dependence on $\\langle \\theta \\rangle$ and $N_\\text{gal}$, and therefore fit to it a function of the form\n\\begin{eqnarray} \\label{eq:fit}\n\\Delta G_{1\\sigma} \\simeq a \\left(\\frac{10^3}{N_\\text{gal}}\\right)^b \\left(\\frac{\\langle \\theta \\rangle}{1 \\ \\rm{arcsec}}\\right)^c,\n\\end{eqnarray}\nfinding best-fit values $\\{a, b, c\\} = \\{8.6 \\times 10^{-4}, 0.91, 1.00\\}$. We show these fits as the solid lines in Fig.~\\ref{fig:forecast}. $\\Delta G_{1\\sigma}$ therefore falls more rapidly with $N_\\text{gal}$ than predicted by the scaling $\\Delta G_{1\\sigma} \\sim 1\/\\sqrt{N_\\text{gal}}$, indicating greater sensitivity to sample size than would apply for the case of $N_\\text{gal}$ random draws from a stochastic model under GR. The dependence on $\\Theta$ is linear. This fit may be used to forecast constraints on $\\Delta G\/G_N$ for any future survey: for example (again subject to the proviso that the entire scatter in HI-OC displacement is due to measurement uncertainty), $\\{\\langle \\theta \\rangle, N_\\text{gal}\\} \\approx \\{10^{-1} \\: \\text{arcsec}, 10^8\\}$, achievable with the `mid' configuration of SKA1 \\cite{SKA0, SKA}, should probe $\\Delta G\/G_N$ to the $10^{-9}$ level. Comparing to figure 4 of~\\cite{Adelberger}, we see that at this stage fifth-force constraints by our method will be comparable to lunar laser ranging and planetary probes at much smaller scales; they will also be competitive with those from proposed Solar System tests such as laser ranging to Phobos and optical networks around the sun~\\cite{Sakstein_proposal}.
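\n\nAs a quick numerical check of this scaling, Eq.~\\ref{eq:fit} can be evaluated directly with the best-fit coefficients quoted above (the function name is ours, for illustration):\n\\begin{verbatim}\na, b, c = 8.6e-4, 0.91, 1.00\n\ndef dG_1sigma(n_gal, theta_arcsec):\n    # Eq. (eq:fit): forecast 1-sigma Delta G \/ G_N limit\n    return a * (1.0e3 \/ n_gal) ** b * theta_arcsec ** c\n\nprint(dG_1sigma(1.0e3, 1.0))   # normalisation point: 8.6e-4\nprint(dG_1sigma(1.0e8, 0.1))   # SKA1-mid-like numbers: ~2e-9\n\\end{verbatim}\n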
Even before SKA, its pathfinders APERTIF~\\citep{Apertif} and ASKAP~\\citep{Askap} will provide significant further constraining power over \\textit{Alfalfa}. We caution however that further modelling work will be necessary to extend our gravitational maps to the distance that this $N_\\text{gal}$ requires ($z\\sim0.5$), and potentially to model time-variation in parameters such as $f_{R0}$. Given future datasets with more constraining power it will also be desirable to drop or generalise the constraint of Eq.~\\ref{eq:f(R)3} in order to treat more general cases of screening.\n\nAs we account for the overall statistical impact of sample size but not a systematic shift in gravitational environment or galaxy mass, the accuracy of our extrapolated constraints requires the locations and structures of galaxies of lower HI luminosity, observed by future surveys, to be similar on average to those already measured by \\textit{Alfalfa}. As fainter galaxies are less likely to self-screen, and may also tend to live in sparser environments, our forecasts are conservative in this regard.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.5\\textwidth]{Contour_log_FINAL}\n \\caption{Contour plot forecasting the $1\\sigma$ $\\log_{10}(\\Delta G\/G_N)$ constraints obtainable with a dataset of $N_\\text{gal}$ galaxies and average angular HI resolution $\\langle \\theta \\rangle = \\Theta \\times 36$ arcsec, for the screened model with $\\lambda_C = 5$ Mpc. We assume the most optimistic case for the noise, in which the entire non-fifth-force part of the signal derives from measurement uncertainty in the position of the HI centroid.}\n \\label{fig:contour}\n\\end{figure}\n\n\\begin{figure*}\n \\subfigure[]\n {\n \\includegraphics[width=0.48\\textwidth]{Contour_arcsec_log_Fit_FINAL}\n \\label{fig:arcsec}\n }\n \\subfigure[]\n {\n \\includegraphics[width=0.48\\textwidth]{Contour_Ngal_log_Fit_FINAL}\n \\label{fig:Ngal}\n }\n \\caption{Variation of $1\\sigma$ $\\Delta G\/G_N$ constraint, $\\Delta G_{1\\sigma}$, achievable with a dataset of average angular resolution $\\langle \\theta \\rangle$ and size $N_\\text{gal}$, for fixed values of the other parameter as indicated in the legend. The solid lines show the fits of Eq.~\\ref{eq:fit}.}\n \\label{fig:forecast}\n\\end{figure*}\n\n\n\\section{Discussion}\n\\label{sec:discussion}\n\n\\subsection{Noise model}\n\\label{sec:HI_unc}\n\nThe evidence for a screened fifth force depends crucially on the non-fifth-force part of the likelihood function, modelled by a Gaussian with angular width $\\theta_i$. This is poorly known a priori. While average values of $\\theta_i$ may be fitted as part of the model, leading to results consistent with our fiducial analysis, correlations with variables other than signal to noise could bias our inference. It is however hard to see how this could eliminate the fifth-force signal we have inferred: this would require that $\\theta_i$ correlate not only with galaxies' environments similarly to $\\vec{a}_5$, but also with their degree of screening (recall that no model without screening achieves an increase in maximum-likelihood over GR) \\emph{and} internal structures in the manner of $\\vec{r}_*$ in Eq.~\\ref{eq:rstar}. Indeed, our fiducial errors are already conservative in setting the signal to noise ratio of the detection of \\emph{any} offset between stars and gas to $\\sim1$, a valuable safeguard against erroneously attributing a modified gravity origin to the signal in a framework that neglects more mundane phenomena.
No additional effect is therefore necessary to explain the observations: the analysis could easily prefer $\\Delta G=0$ for all $\\lambda_C$. While our conservative error choice is useful for sidestepping the question of $\\theta_i$ and hence achieving robust upper bounds on $\\Delta G\/G_N$, its inadequacy is clear from its severe overprediction of the dispersion of the measured displacements and resultant low overall likelihood.\n\nWe have been forced to derive the measurement uncertainties empirically due to insufficient prior information on the precision of the HI centroid location: it is simply too difficult to propagate uncertainties in the telescope pointing, data reduction pipeline and HI-optical cross-correlation, which themselves are poorly known. Future HI surveys, such as those afforded by interferometric observatories like SKA, may provide better control of these. However, even if information of this nature cannot be used to set informative priors on the HI uncertainties, the increased precision in the centroiding will greatly improve the strength of the test performed here. As demonstrated in Sec.~\\ref{sec:forecast}, simply repeating our analysis on an SKA-class sample would probe $\\Delta G\/G_N$ to $\\mathcal{O}(10^{-9})$ for $\\lambda_C \\sim \\mathcal{O}(1-10 \\: \\text{Mpc})$, easily sufficient to confirm or rule out our putative detection regardless of the precise manner in which uncertainties are modelled.\n\n\n\\subsection{Galaxy formation physics and other systematics}\n\\label{sec:systematics}\n\nWe have been careful to incorporate known uncertainties in the input parameters of our model into probability distributions for those parameters, and hence marginalise over them in the prediction of $\\vec{r}_*$. Provided the true parameter values are contained within the priors, these uncertainties cannot cause systematic error in our inference but will simply inflate our posteriors and lead to conservative bounds. Nevertheless, there remain a number of inputs, some of them implicit, whose true values may not lie within our priors. We discuss these here in roughly decreasing order of importance.\n\nOur model assumes that the true $r_{*,\\alpha}$ and $r_{*,\\delta}$ values (after accounting for noise) arise purely from fifth-force effects. It is however highly likely that these would be non-zero even in the case $\\Delta G=0$. This may arise from a number of baryonic processes within a galaxy that affect gas and stars differently, for example hydrodynamical drag, ram pressure and the transfer of energy and momentum by stellar feedback. A realistic model must therefore allow for $r_* \\ne 0$ when $\\Delta G=0$ by introducing further parameters describing these baryonic effects. This is present to first order in our model, as the principal component of the likelihood function is simply a Gaussian in $r_{*,\\alpha}$ and $r_{*,\\delta}$ with a width $\\theta$ that correlates with the signal to noise ratio $s$ of a galaxy's detection (Sec.~\\ref{sec:uncertainties}). Absent modified gravity, larger HI-optical offsets are likely to be caused by baryonic physics, which a larger uncertainty $\\theta$ would effectively absorb: in each bin of $s$ the baryonic contribution to the signal adds to the measurement uncertainty in quadrature.
The information we glean on modified gravity comes primarily from the relative directions of $\\vec{r}_*$ and $\\vec{a}_5$, the correlation of their magnitudes across the sample, and the dependence of the screened fraction on the gravitational environment, quantified here by $\\Phi$.\n\nNevertheless, it is important to check that baryonic effects could not generate an environment-dependence of $\\vec{r}_*$ similar to the effect of screening. This may be done by applying our inference framework to mock datasets generated by hydrodynamical simulations in which these effects are included. An informative test however requires better than $\\mathcal{O}$(kpc) resolution in the central regions of halos, which rules out the majority of current-generation cosmological simulations~\\cite{Horizon, EAGLE_1, Illustris, MB-II}. Conversely, zoom simulations do not have the halo numbers required for statistically significant results. Only a few existing simulation sets may therefore be of use (e.g.~\\cite{Stinson, FIRE, Chan_FIRE, Apostle}). As these simulations differ in the sub-grid models required at the small scales of interest here, it will ultimately be necessary to check several of them. Although the manner in which hydrodynamics affects the internal and environmental properties of halos is at least qualitatively known (e.g.~\\cite{Hellwing, EAGLE_2, Chisari}), the specific signal in which we are interested has not yet been investigated.\n\nIf such effects in $\\Lambda$CDM are not sufficient to account for the correlations we attribute here to a screened fifth force, our constraints on $\\Delta G\/G_N$ must be further validated by examining the impact of baryonic physics in a universe governed by modified gravity. While such simulations have begun in recent years, and confirm that baryonic and gravitational effects on large scale structure and galaxies' internal dynamics are non-linearly coupled and hence not fully orthogonal (e.g.~\\cite{Puchwein, Mead}), they focus on fifth-force strengths at least an order of magnitude larger than those that constitute our maximum-likelihood models.\n\nWe have assumed the model specified by $\\{\\Delta G, \\lambda_C\\}$ to produce a cosmology consistent with $\\Lambda$CDM, specifically in terms of its prediction for the smooth density field at $z=0$ and the properties of the halo population. While chameleon- or symmetron-screened modified gravity does alter the expansion history of the Universe and growth rate of structure, and hence the halo mass function and internal structure of halos, its effects in the region of parameter space of most interest here ($\\Delta G\/G_N \\lesssim 0.1$, $\\lambda_C \\lesssim 5$ Mpc) are small~\\citep{Shi, Fontanot, Lombriser_2, Lombriser_rev}. They are likely subdominant to the uncertainties we do model in the smooth density field, galaxy--halo connection and properties of the individual halos themselves. Our method should not therefore be thought of as a means of testing modified gravity in cosmology -- and has little if any relevance to cosmic acceleration -- but rather simply as a means of seeking fifth forces, of gravitational or non-gravitational origin, on the scales of galaxies and their environments.
Screened modified gravity also affects stellar luminosities and hence galaxy mass-to-light ratios~\\citep{Davis}, as well as the relative kinematics of stars and gas~\\citep{Vikram_RC}, but again these effects are small for $\\lambda_C \\sim 1$ Mpc and would not be expected to skew our inference.\n\nAs discussed in~\\cite{Desmond}, our incorporation of both halos and a non-halo-based mass contribution leads us to overestimate the total density on $\\gtrsim 10$ Mpc scales by $\\sim 10$\\%. This will cause a similar overestimation of $|\\Phi|$, biasing the screened fraction of our test points high and the template signal low. This leads to a weakening of the $\\Delta G$ constraints, making our limits conservative. While this effect is partly mitigated by a corresponding overestimation of $\\vec{a}_5$, we showed in~\\cite{Desmond} that the acceleration is more sensitive than the potential to mass on smaller scales around the test point, and this nearby mass is less susceptible to double counting.\n\nAlthough we marginalise over the possible galaxy--halo connections for fixed AM proxy and scatter, we do not marginalise over the distributions of these parameters themselves or otherwise consider the possibility that this model inadequately describes our source galaxy population. Given that AM parameters are fairly well constrained by clustering studies, however, this extra uncertainty is unlikely to be significant.\n\nFactors that affect the overall magnitude of the predicted signal (e.g. $\\Sigma_D^0$, $h$ and $|\\vec{a}_5|$ in Eq.~\\ref{eq:rstar2}) are degenerate with the inferred value of $\\Delta G\/G_N$. As mentioned in Sec.~\\ref{sec:more-sampling}, a particularly important quantity is the scatter in central dynamical surface density $\\Sigma_D^0$ around the result of Eq.~\\ref{eq:CDR}, which affects the magnitude of the predicted $r_*$ and hence our $\\Delta G\/G_N$ constraints by a relatively large factor. In Fig.~\\ref{fig:dG_scatt} we plot the scatter in $\\Sigma_D^0$ (for the galaxies without NSA information; the scatter for those with NSA information is pinned to half this value) against the maximum-likelihood value of $\\Delta G\/G_N$ inferred under the fiducial uncertainties for the screened model with $\\lambda_C = 1$ Mpc. The slope of the best-fit line is $-3.8$, indicating that a change of $0.26$ dex in the $\\Sigma_D^0$ scatter is sufficient to shift the best-fit $\\Delta G\/G_N$ by an order of magnitude (see the worked example below). Were the evidence for $\\Delta G>0$ to be confirmed by independent signals or data, it would become important to refine model inputs such as these to better determine $\\Delta G\/G_N$. Consistency between signals under a generic fifth-force scenario could even be used to constrain them.\n\nBy contrast, the $\\lambda_C$-dependence of the maximum-likelihood $\\Delta G\/G_N$ and of the constraint upon it is degenerate only with factors that scale with $\\lambda_C$ in the same way as the predicted $\\vec{r}_*$. As this condition is far harder to satisfy due to the particular dependence of $\\vec{r}_*$ on $\\Phi$ and $\\vec{a}_5$ (and the further dependences of those on both halo structure and environment), we consider the trend of $\\Delta \\log(\\mathcal{L})$ with $\\lambda_C$ to be considerably more robust than the constraint on $\\Delta G\/G_N$ itself.
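To make the sensitivity quoted above explicit, treat the best-fit line of Fig.~\\ref{fig:dG_scatt} as linear in the scatter, written here as $\\sigma_{\\Sigma}$ for the scatter in $\\log \\Sigma_D^0$ (a shorthand introduced purely for this illustration). Then\n\\[ \\Delta \\log_{10}\\!\\left(\\Delta G\/G_N\\right) \\approx -3.8 \\, \\Delta \\sigma_{\\Sigma} \\;\\;\\Rightarrow\\;\\; |\\Delta \\sigma_{\\Sigma}| \\approx 1\/3.8 \\approx 0.26 \\: \\text{dex} \\]\nper decade of change in the best-fit $\\Delta G\/G_N$. This is intended only as a rough rule of thumb read off the figure; the exact relation is of course set by the full likelihood.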
Altering the $\\Sigma_D^0$ scatter within the range of Fig.~\\ref{fig:dG_scatt}, for example, does not significantly affect the $\\lambda_C$ trend of Fig.~\\ref{fig:like_like}.\n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.5\\textwidth]{dG-scatt}\n \\caption{Dependence of the maximum-likelihood $\\Delta G\/G_N$ on the scatter in central dynamical surface density $\\Sigma_D^0$ (Eqs.~\\ref{eq:CDR}--\\ref{eq:rstar2}), in the case where NSA information is not available, for the screened model with $\\lambda_C = 1$ Mpc. Our fiducial scatter is 1 dex, as quoted in Table~\\ref{tab:params}. The $\\Delta G\/G_N$ constraint is highly sensitive to this choice, and differs by more than an order of magnitude across the range of a priori reasonable values. This makes our inference of $\\Delta G\/G_N$ quite uncertain, and the formal uncertainties in Figs.~\\ref{fig:dG3} and~\\ref{fig:6} potentially misleading. The trend of the best-fit $\\Delta G\/G_N$ with $\\lambda_C$, however, is not sensitive to this uncertainty because it derives from the scale-dependence of the predicted signal rather than its overall normalisation and scatter.}\n \\label{fig:dG_scatt}\n\\end{figure}\n\n\n\\subsection{Comparison with the literature}\n\\label{sec:comparison}\n\nWe begin by comparing our work with previous studies at the scales of interest here ($0.4$--$50$ Mpc; Sec.~\\ref{sec:astroscales}) before broadening our discussion to include tests of related models at larger and smaller scales (Sec.~\\ref{sec:otherscales}).\n\n\\subsubsection{On astrophysical scales}\n\\label{sec:astroscales}\n\nMuch of the groundwork for constraining modified gravity with intra-galaxy data was laid by~\\cite{Jain_Vanderplas}, which provided the first calculations of the functional forms and sizes of the expected signals. These authors estimated the magnitude of $\\vec{r}_*$ by taking the external field $\\vec{a}_5$ to be generated by a single halo towards which the test galaxy is falling. Assuming the test halo has constant density, they argue that $r_*$ could typically be expected to be $\\sim 1$ kpc for $\\Delta G\/G_N=1$. This is quantitatively confirmed by Fig.~\\ref{fig:1a}. We note however that our estimates of $r_*$ employ significantly more complex and realistic assumptions than those of \\cite{Jain_Vanderplas}. First, the fifth-force accelerations for our test galaxies are derived from their full environments rather than a single nearby object (which is itself liable to be screened for lower values of $|\\Phi_c|$). Second, we take only the central regions of the test halos to be cored, and calculate their densities from the central baryon surface density rather than by averaging over the entire halo. As halo density presumably falls with $r$ outside the core, this leads us to assign a larger central density than~\\cite{Jain_Vanderplas}, reducing the expected $r_*$. However, the significant scatter that we apply to the central densities (1 dex in most cases) causes many halos within each model realisation to have lower density and hence greater predicted $r_*$.\n\nThe study most similar to ours is that of~\\cite{Vikram} (specifically their Section 4), whose authors also searched for evidence of screening in the HI-optical offsets of \\textit{Alfalfa} detections. Our work expands on this in three main ways.\n\n\\begin{enumerate}\n\n\\item{} In~\\cite{Vikram}, the fifth-force acceleration and central mass $M(<r)$ [\\ldots]\n\n\\end{enumerate}\n\n[\\ldots] $|\\Phi_c| > 10^{-6}$ and $\\Delta G\/G_N \\lesssim 1$ for $|\\Phi_c| = \\text{few} \\times 10^{-7}$~\\citep{Sakstein}.
Similar results are achievable by comparing the gas and stellar rotation curves of isolated dwarf galaxies \\cite{Vikram_RC}. Despite a weak signal and large observational uncertainties, our inference is several times stronger due to a combination of huge sample size, the great range of environments probed, and the use of a vector rather than a scalar observable, which effectively affords two orthogonal signals in the plane of the sky. No previous analysis has reached the level of sensitivity required to probe the region $\\{\\lambda_C \\simeq 1.8 \\: \\text{Mpc}, \\Delta G\/G_N \\simeq 0.025\\}$ in which we find evidence for a signal.\n\nAdditional dynamical galactic signals which have not yet been used to place quantitative constraints on fifth forces include warping of stellar disks, asymmetries in disk kinematics and offsets in the rotation curves of stars and gas~\\citep{Jain_Vanderplas, Vikram}. Galaxy warps are the subject of~\\cite{Desmond_warp}, and the other effects will be studied in future work.\n\n\n\\subsubsection{On smaller and larger scales}\n\\label{sec:otherscales}\n\nTesting fifth forces on astrophysical scales is a relatively new endeavour: other direct constraints come only from signals at much smaller scales, while cosmological tests are capable of producing independent though indirect and weaker limits. As discussed in Sec.~\\ref{sec:intro}, unscreened fifth forces with ranges $10^{-14} < \\lambda_C\/\\text{m} < 10^{-6}$ are strongly constrained by torsion balance experiments, lunar laser ranging and planetary data within the Solar System. Although our $\\Delta G\/G_N$ constraints are weaker than these by up to 4 orders of magnitude, reflecting the complex nature of astrophysical systems, they offer the possibility of filling in, at the galaxy scale, the parameter space of distance-dependent (or potential-, acceleration- or curvature-dependent) modifications to GR that incorporate a fifth force~\\citep{Baker}.\n\nThe strongest constraint from Solar System tests for $\\lambda_C$ as large as that of interest here comes from the Eddington light-bending parameter measured by Cassini, which requires $\\Delta G\/G_N \\lesssim 10^{-5}$~\\citep{Bertotti}. Our constraint in the absence of screening is 1--2 orders of magnitude worse, and also requires a differential coupling of the fifth force to stars, gas and dark matter. Nevertheless we use a qualitatively different signal, and, as discussed in Sec.~\\ref{sec:forecast}, our method is straightforwardly scalable with future data and may be expected to reach the sensitivities of Solar System tests with next-generation surveys.\n\nWithout a means of shielding the scalar field from the screening influence of the surrounding dense environment, screened theories cannot be probed within the Milky Way if the screening threshold puts it in the screened regime. This is the case for most viable models. Nevertheless, it is possible to construct laboratory setups, in particular by means of a vacuum chamber with thick walls, in which the chameleon field is decoupled from the field outside~\\citep{Burrage}. This allows the creation of regions where the field remains unscreened, enabling tests of the mechanism by means of Casimir forces~\\citep{Casimir}, levitated microspheres~\\citep{Rider} and atom~\\citep{Hamilton} and neutron~\\citep{Brax_neutron} interferometry.
All of these tests probe short-range fifth forces; see~\\cite{Burrage_review, Burrage_review2} for reviews.\n\nA number of other tests are possible at cosmological scales (see the review of~\\cite{Lombriser_rev}). In general, a cosmological fifth force causes an enhancement of the growth rate of structure at late times, which affects the CMB through the integrated Sachs-Wolfe and Sunyaev-Zel'dovich effects as well as gravitational lensing~\\citep{ISW, ISW_2}. Constraints are also possible by means of the galaxy power spectrum~\\citep{Dossett}, comparison of dynamical and lensing masses of elliptical galaxies~\\citep{Smith}, redshift space distortions~\\citep{RSD}, and the abundance~\\citep{Schmidt, Ferraro, Cataneo}, internal structure~\\citep{Lombriser} and relative dynamical and lensing masses~\\citep{Terukina, Wilcox} of clusters. These constraints are typically couched in terms of Hu-Sawicki models of $f(R)$, where $\\Delta G\/G_N = 1\/3$ and $f_{R0}$ is related to $\\lambda_C$ and $\\Phi_c$ through Eq.~\\ref{eq:f(R)}. The strongest constraints are at the $f_{R0} \\sim 10^{-5}$ level, showing cosmological tests to be significantly less useful for these theories than astrophysical ones.\n\n\n\\subsection{Suggestions for further work}\n\\label{sec:future}\n\nOur work provides a case study of the use of the gravitational maps of~\\cite{Desmond} to constrain aspects of modified gravity on the scale of galaxies and their environments. This scale is characterised by $1 \\: \\text{kpc} \\lesssim r \\lesssim 50 \\: \\text{Mpc}$, $10^{-8} \\lesssim |\\Phi| \\lesssim 10^{-4}$, $10^{-17} \\lesssim |\\vec{a}|\/\\text{km s}^{-2} \\lesssim 10^{-13}$ and curvature $10^{-57} \\lesssim K\/\\text{cm}^{-2} \\lesssim 10^{-50}$.\n\nEven continuing to focus solely on fifth-force phenomenology, a range of intra-galaxy statistics besides the separation of stars and gas may be expected to provide new information: as mentioned in Secs.~\\ref{sec:intro} and~\\ref{sec:astroscales}, a screened fifth force would induce warps in stellar disks and asymmetries in kinematics, as well as boost the rotation velocity of unscreened mass relative to screened~\\citep{Jain_Vanderplas}. Provided these signals could be suitably quantified, each could be used to derive constraints on $\\Delta G$ and $\\lambda_C$ by a method directly analogous to our own. As these analyses would utilise independent data, and possess sensitivity to different types of galaxy in different environments, their results would be complementary. For the evidence for a chameleon- or symmetron-screened fifth force with $\\lambda_C \\simeq 2$ Mpc presented here to be convincing, it must be corroborated (or shown to be below the sensitivity level of current data) by each of these signals. We do this for galaxy warps in \\cite{Desmond_warp}. It is also possible to make predictions for future or present data sets from our best-fit $\\{\\lambda_C, \\Delta G\/G_N\\}$ values, either by taking the maximum-likelihood point or by means of the Bayesian posterior predictive distribution. This would help to avoid confirmation bias in the analysis of that data.
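To be concrete about the latter option, and writing $\\nu$ schematically for the nuisance parameters we marginalise over, the posterior predictive distribution for a new set of offsets $\\tilde{\\vec{r}}_*$ is\n\\[ P(\\tilde{\\vec{r}}_* \\,|\\, d) = \\int P\\!\\left(\\tilde{\\vec{r}}_* \\,|\\, \\Delta G\/G_N, \\lambda_C, \\nu\\right) P\\!\\left(\\Delta G\/G_N, \\lambda_C, \\nu \\,|\\, d\\right) \\mathrm{d}(\\Delta G\/G_N) \\, \\mathrm{d}\\lambda_C \\, \\mathrm{d}\\nu, \\]\nwith $d$ the data used in the present analysis. This is simply the standard definition written in our notation; in practice the integral would be evaluated by drawing parameter samples from our posterior chains and forward-modelling $\\tilde{\\vec{r}}_*$ for the new galaxies.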
As discussed in Sec.~\\ref{sec:systematics}, the non-fifth-force part of the likelihood function could be refined by examining the signals in cosmological hydrodynamical $\\Lambda$CDM simulations, and the inferences made fully self-consistent by basing the determination of galaxy and large scale structure formation on simulations in modified gravity.\n\nThe constraints derivable from all of these signals will straightforwardly strengthen as future galaxy surveys begin observation. Given the amount of information evidently present in this data (as quantified by the strength of our constraints) and the great range of theories testable in tandem through simple phenomenological parametrisations, we propose that gravitational physics be incorporated as a key science driver of these surveys (see also~\\cite{Jain_surveys, Jain_Khoury, Jain_NovelProbes}). Constraints based on intra-galaxy statistics may in fact scale better with survey quality than conventional tests based on inter-galaxy statistics due to lower sensitivity to cosmic variance. At the very least they are complementary and come for little extra effort or expense with ongoing survey programmes.\n\nAs discussed in~\\cite{Desmond}, our methods may find more general use beyond the search for fifth forces. Any theory predicting deviations from GR that correlate with the local gravitational field may in principle be tested; this includes not only the other screening mechanisms listed in Sec.~\\ref{sec:intro} (kinetic screening roughly occurs at a critical value of acceleration and Vainshtein at a critical value of curvature;~\\cite{Khoury_LesHouches}), but also generic equivalence principle violations (e.g. in Modified Newtonian Dynamics [MOND], whose external field effect imposes an acceleration threshold) and models of dynamical dark energy which entail novel behaviour near the curvature scale of $\\Lambda$. Indeed, it is even possible to perform model-independent tests by simply correlating galaxies' gravitational properties with potential modified gravity signals, although the lack of a prediction would preclude forward modelling and Bayesian inference in this case. This method relies on the assumption that any breakdown of GR would impose a different dependence of dynamical behaviour on gravitational environment than would contaminating ``galaxy formation'' effects, as we believe to be the case for the present signal $\\vec{r}_*$. To check this assumption a more precise determination of the impact of baryonic physics would be helpful. This programme would build gravity phenomenology from the ground up in galaxies, rather than test specific theories top down.\n\nWe made no a priori distinction here between galaxies in different environments, but rather fed the entire \\textit{Alfalfa} dataset into our inference framework. This approach is ideal when all galaxies provide useful information on the parameters to be inferred: in our case, even massive galaxies or those in dense environments are of interest because their differences with the remainder inform the constraint on the screening threshold $\\Phi_c$. The screened subsample for a given $\\Phi_c$ calibrates the behaviour of the signal in the absence of fifth forces. By contrast, information on some modified gravity theories may only be gleaned from galaxies in very specific environments. For example, testing the external field effect in equivalence principle-violating theories would require galaxies in a dominant external field. 
The prototypical model in this class, MOND, requires $a_\\text{ex} > a_0 \\approx 1.2 \\times 10^{-10} \\: \\text{m\/s}^2$ for the external field to dominate, which is at the extreme high end of the $a_\\text{ex}$ distribution of 2M++ galaxies in the local Universe~\\citep{Desmond}. In such cases it will be preferable to begin by searching for regions of the local Universe satisfying certain constraints on the gravitational field, and analyse only the subset of galaxies situated therein.\n\nTo generalise the discussion in Sec.~\\ref{sec:forecast}, we close by listing generic desiderata for surveys and modelling to be useful for probing fundamental physics with galaxies.\n\n\\begin{itemize}\n\n\\item{} Any test that depends on a galaxy's gravitational environment must make use of a reconstruction of the gravitational field along the lines of~\\cite{Desmond}. Improving the precision of the $\\Phi$, $\\vec{a}$ and $K$ maps requires principally an increase in the limiting magnitude out to which galaxies, the primary mass tracers, are visible. This is particularly important for pushing studies to higher redshift $z \\gtrsim 0.05$ where current all-sky surveys become substantially incomplete. This will improve statistics and enhance the range of environments able to be probed. We estimate in~\\cite{Desmond} the improvement in the precision of gravitational parameter inferences afforded by upcoming photometric surveys. This information may be augmented by weak lensing data, spectroscopic information and photometry at other wavelengths to further constrain the overall distribution of mass.\n\n\\item{} Robust detections of the fifth-force signals described above require low-redshift, high resolution data at multiple wavelengths to distinguish galaxies' different mass components. In general, data quality is more important than sample size, area or redshift range. There is no requirement that data be homogeneous or uniform across the sky, since spatial statistics of the galaxy population as a whole are not in question. This enables smaller surveys and isolated observations to play a larger role than is possible for traditional probes.\n\n\\item{} Resolved spectroscopy is necessary to study dynamical signals of new fields, such as differences between the rotation curves of stars and gas, kinematical asymmetries and mass discrepancies. These are complicated in the case of screened theories by the fact that the most common tracer of stellar kinematics, $H\\alpha$ emission, derives from the ionised Str\\\"{o}mgren spheres surrounding stars which are likely to be at least partly unscreened. The screened kinematics of the stars themselves must therefore be derived from stellar absorption lines, which are considerably more difficult to measure with high signal to noise. Spectroscopy will also be useful to tighten constraints on galaxies' central dynamical masses, which provide the restoring forces on stellar centroids that compensate for their insensitivity to $F_5$. The combination $\\vec{a}_5 \\: \\Delta G \\: (M(