\section{Abstract}\n{\n\bf Centered Kernel Alignment (CKA) was recently proposed as a similarity metric for comparing activation patterns in deep networks. Here we experiment with the modified RV-coefficient (RV2), which has similar properties to CKA while being less sensitive to dataset size. We compare the representations of networks that received varying amounts of training on different layers: a standard trained network (all parameters updated at every step), a freeze-trained network (layers gradually frozen during training), random networks (only some layers trained), and a completely untrained network. We found that RV2 was able to recover expected similarity patterns and provide interpretable similarity matrices that suggested hypotheses about how representations are affected by different training recipes. We propose that the superior performance achieved by freeze training can be attributed to representational differences in the penultimate layer. Comparisons to random networks suggest that the inputs and targets serve as anchors on the representations in the lowest and highest layers.\n\n}\n\begin{quote}\n\small\n\textbf{Keywords:} \nsimilarity analysis, random features, CNNs, freeze training, RV coefficient\n\end{quote}\n\n\n\n\section{Introduction}\nThe study of artificial and biological neural networks often requires quantification of the similarity of activation patterns between two networks. Common approaches to this problem are variants of canonical correlation analysis (CCA) \cite{Hotelling1936RelationsVariates}. 
For example, Singular Vector CCA and Projection-Weighted CCA have recently been used to uncover insights about training dynamics and generalization in deep networks \cite{Raghu2017, Morcos2018}. Regularized CCA is often used in neuroscience to find relationships between neural and behavioural or clinical variables \cite{Bilenko2016Pyrcca:Neuroimaging}. However, these variants of CCA can require large amounts of data and so are often impractical for analyzing neural activations where the number of observations may be small and the dimensionality may be large.\n\n\nWhen comparing two sets of variables $\mathbf{X}$ and $\mathbf{Y}$, CCA will find the linear combinations of $\mathbf{X}$ and $\mathbf{Y}$ which maximize their correlation. This means that CCA is invariant to any invertible linear transformation. There are several reasons why one might want a similarity metric with different invariance properties. For example, in a deep network, it is not just the linear information content of a representation that is meaningful but also the specific configuration of that information. For instance, the insertion of an invertible linear transformation between two layers of a deep network can alter the network's behaviour (e.g. in batch normalization). 
Therefore, when comparing representations in deep neural networks, one may wish to use a similarity metric that is not invariant to invertible linear transformation so as to be sensitive to meaningful differences between representations \\cite{Kornblith2019SimilarityRevisited, Thompson2016HowProcessing}.\n\n\\citeA{Kornblith2019SimilarityRevisited} propose the use of Centered Kernel Alignment (CKA) based on the fact that CKA is only invariant to orthogonal transformations and isomorphic scaling (not arbitrary linear invertible transformations) and that it demonstrates intuitive notions of similarity, namely that corresponding layers are most similar to themselves in networks of identical architecture trained from different random initializations. \nThey state that CKA with a linear kernel is equivalent to the RV coefficient. \nThe RV coefficient is a matrix correlation method for comparing paired observations $\\mathbf{X}$ and $\\mathbf{Y}$ with different numbers of columns \\cite{Robert1976ACoefficient}. \n\\begin{equation}\n RV(\\mathbf{X}, \\mathbf{Y}) = \\frac{tr(\\mathbf{XX}^\\prime\\mathbf{YY}^\\prime)}{\\sqrt{tr[(\\mathbf{XX}^\\prime)^2]tr[(\\mathbf{YY}^\\prime)^2]}}\n\\end{equation}\n\nThe RV coefficient is still sensitive to dataset size. When the number of observations is too small relative to the number of dimensions, the RV coefficient will tend to $1$, even for random, unrelated matrices. 
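This small-sample bias can be demonstrated directly. The following pure-Python sketch (an illustrative re-implementation; the experiments below use the hoggorm package) computes the RV coefficient, and the modified coefficient RV2 defined next, for two unrelated random matrices with few observations relative to their dimensionality:

```python
import random

def gram(X):
    # G = XX': inner products between observation rows
    return [[sum(a * b for a, b in zip(r, s)) for s in X] for r in X]

def rv(X, Y):
    # RV = tr(XX'YY') / sqrt(tr[(XX')^2] tr[(YY')^2]); since the Gram
    # matrices are symmetric, each trace is an elementwise sum of products.
    A, B = gram(X), gram(Y)
    n = len(A)
    num = sum(A[i][j] * B[i][j] for i in range(n) for j in range(n))
    den = (sum(a * a for r in A for a in r) *
           sum(b * b for r in B for b in r)) ** 0.5
    return num / den

def rv2(X, Y):
    # Modified RV: zero the diagonals of XX' and YY' before correlating.
    A, B = gram(X), gram(Y)
    n = len(A)
    for i in range(n):
        A[i][i] = B[i][i] = 0.0
    num = sum(A[i][j] * B[i][j] for i in range(n) for j in range(n))
    den = (sum(a * a for r in A for a in r) *
           sum(b * b for r in B for b in r)) ** 0.5
    return num / den

random.seed(0)
n_obs, n_dim = 20, 2000  # few observations, many dimensions
X = [[random.gauss(0, 1) for _ in range(n_dim)] for _ in range(n_obs)]
Y = [[random.gauss(0, 1) for _ in range(n_dim)] for _ in range(n_obs)]
print(rv(X, Y))   # close to 1, although X and Y are unrelated
print(rv2(X, Y))  # far smaller in magnitude
```

The large diagonal entries of the Gram matrices dominate the traces when observations are few, which is exactly the bias that discarding the diagonals removes.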
The modified RV coefficient (RV2) addresses this problem by ignoring the diagonal elements of $\mathbf{XX^\prime}$ and $\mathbf{YY^\prime}$, which pushes the numerator to zero when $\mathbf{X}$ and $\mathbf{Y}$ are random matrices, even for small sample sizes \cite{Smilde2009MatrixRV-coefficient}.\n\begin{equation}\n RV_2(\mathbf{X}, \mathbf{Y}) = \frac{Vec(\mathbf{\widetilde{XX^\prime}})^\prime Vec(\mathbf{\widetilde{YY^\prime}})}{\sqrt{Vec(\mathbf{\widetilde{XX^\prime}})^\prime Vec(\mathbf{\widetilde{XX^\prime}}) \times Vec(\mathbf{\widetilde{YY^\prime}})^\prime Vec(\mathbf{\widetilde{YY^\prime}})}}\n\end{equation}\nwhere $\mathbf{\widetilde{XX^\prime}} = \mathbf{XX^\prime} - diag(\mathbf{XX^\prime})$ and similarly for $\mathbf{\widetilde{YY^\prime}}$. Thus RV2 provides a similarity metric with the same invariance properties as CKA while being less sensitive to dataset size, making it a good candidate for comparing neural activities of large artificial and biological neural networks. \n\nHere we explore the use of RV2 to characterize intermediate representations of simple convolutional neural networks. Our main contributions are (a) extending \citeA{Kornblith2019SimilarityRevisited}'s validation of CKA-flavored similarity metrics by using RV2 to recover expected similarity patterns in simple networks, and (b) showing that RV2 can generate interpretable patterns that can suggest hypotheses about the nature of intermediate representations in deep neural networks. \n\n\section{Experiments}\nTrained networks in the following analyses were previously reported in \citeA{Thompson2019HowLanguages}. All networks were of identical architecture consisting of nine convolutional layers and three fully connected layers. Networks were trained to recognize context-dependent English or Dutch phones for 100 epochs (except for the untrained network). Networks differed in the training that they received. 
The standard networks were randomly initialized and all parameters were updated on every mini-batch. The untrained network was randomly initialized and never trained. The procedures for the freeze-trained and random networks are described below. Please refer to the original text for details about the datasets, architecture and training.\n\nActivations in response to one hour of English speech from 60 speakers (1 minute each) were measured from all networks. We used the hoggorm Python package to calculate RV2 for all pairs of layers. To make the experiments feasible, we performed average-pooling on all feature maps and downsampled the resulting activation vectors by a factor of 40, leading to activation vectors of length 23,582 per `unit'. \n\n\subsection{Untrained vs Trained}\nWe replicated Figure~F.4 from \citeA{Kornblith2019SimilarityRevisited} to verify that a slightly different metric, RV2, applied to activations from a different model trained on a different task generates similar patterns of similarity between trained and untrained networks. Figure~\ref{fig:untrained-vs-trained} (bottom row) shows the self-similarity of an untrained network and the similarity between the untrained network and two different trained networks: standard training and transfer freeze-training (described in the next section). We observe approximately the same patterns as are reported in \citeA{Kornblith2019SimilarityRevisited}.\n\begin{figure}[h!]\n\centering\n\includegraphics[width=\columnwidth]{Dutch_to_English_Freeze_and_Untrained_bigfont.png}\n\caption{\small \textbf{Top row} Similarity between English and Dutch standard networks and the Dutch-to-English transfer freeze-trained network. The largest differences are in fc2. Lower layers in the transfer freeze-trained network are most similar to their corresponding layer in the Dutch standard model. 
\textbf{Bottom row} Network self-similarity at initialization (left) and the similarity between untrained and trained networks, either the standard net (middle) or the transfer freeze-trained net (right). The parenthetical percentages indicate the top-1 accuracy.}\n\label{fig:untrained-vs-trained}\n\end{figure}\n\subsection{Freeze Training}\nIt has been suggested that convolutional neural networks converge `bottom-up', with early layers converging to their final form earlier in training \cite{Raghu2017, Alain2016}. Based on this observation, \citeA{Raghu2017} proposed \textit{freeze training}. During freeze training, at regular intervals, the parameters of an additional layer are frozen (i.e. removed from the set of trainable variables). Layers are frozen in order by depth such that, by the end of training, only the final layer is being updated. The freeze-trained transfer networks from \citeA{Thompson2019HowLanguages}, which were initialized with parameters from a network previously trained on one language and then freeze-trained on another, outperformed all other freeze-trained networks (no transfer) and other transfer networks (no freeze training). Here, we compare the activations of the English standard, Dutch standard and Dutch-to-English freeze-trained networks from \citeA{Thompson2019HowLanguages}. We predict with high confidence that the early layers of the Dutch-to-English freeze-trained network will be more similar to the Dutch than the English standard model since they were initialized with the parameters from the Dutch standard network and received relatively little training afterwards. This provides a good test of whether RV2 is able to recover this expected pattern. 
Additionally, we were interested in whether the superior performance of the transfer freeze-trained network could be attributed to any representational differences between the compared networks.\n\nFor all comparisons between the standard and transfer freeze-trained networks (Figure~\ref{fig:untrained-vs-trained}, top row), the highest similarity values were near the diagonal.\nThis pattern provides further validation that, like CKA, RV2 finds the most similar layer in one network to be near the corresponding layer in another network of identical architecture. \nAs predicted, early layers in the Dutch-to-English freeze-trained network were most similar to the corresponding layer in the Dutch standard model and less similar to the English standard model. Layers at or near corresponding depths in the English and Dutch standard models were considerably similar to one another, despite being trained on different languages. The largest differences in all comparisons occurred in layer fc2. Thus, the superior performance of the transfer freeze-trained network may be primarily attributable to differences in representation at fc2.\n\n\subsection{Random Features}\n\citeA{Yosinski2014} investigated the effect on performance of leaving progressively more layers untrained in convolutional neural networks trained to recognize objects in images. \nPerformance dropped sharply to zero when the first three layers were left at their random initialization and only subsequent layers were trained. \citeA{Thompson2019HowLanguages} replicated this experiment with networks trained on speech and found a different pattern (see Figure~\ref{fig:rand}).\nPerformance gradually declined as more layers were left untrained, only reaching near-zero performance when all but the last layer were left untrained \cite{Thompson2019HowLanguages}. 
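The `random net' construction compared in these experiments can be written down as a one-line rule (an illustrative sketch using the layer names c1--c9, fc1, fc2 from this paper; this is not the actual training code):

```python
LAYERS = ["c1", "c2", "c3", "c4", "c5", "c6", "c7", "c8", "c9", "fc1", "fc2"]

def random_net(n: int) -> dict:
    """'Random net n': layers up to layer n keep their random
    initialization; only the layers above layer n are trained."""
    return {"random": LAYERS[:n], "trained": LAYERS[n:]}

# Random net 4 trains from c5 upward; random nets 9 and 10 train
# only fully connected layers.
print(random_net(4)["trained"][0])  # c5
```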
\n\n\\begin{figure}[hb]\n \\centering\n \\includegraphics[width={\\columnwidth}]{random_figure.png}\n \\caption{\\small Performance of random networks as reported in \\citeA{Thompson2019HowLanguages} (left) and \\citeA{Yosinski2014} (right).}\n \\label{fig:rand}\n\\end{figure}\n\nRandom features have a long history of success in kernel machines \\cite{Rahimi2007}. \nHowever, the effect of several consecutive random layers is less well understood. In particular, how do intermediate representations reconfigure as more layers are left untrained?\n\nWe presume that the effect of several consecutive random layers is the same as the effect of one random layer: a random projection of the input. None of the work of disentangling the relevant factors of variation has been performed by these random layers and so the remaining trainable layers have the same job to do as was done by the full set of layers in the standard network. According to this hypothesis, the representational transformations originally performed by all 12 layers in the standard network must be somehow compressed into the remaining trained layers of the random networks. The hypothesis that these representational transformations will be evenly distributed across the remaining trainable layers is depicted in Figure~\\ref{fig:hypothesis}. The performance of the random network would only be dependent on whether the structure and capacity of the remaining layers is sufficient to learn and represent the necessary transformations. Under this interpretation, a gradual degradation in performance as more layers are left untrained seems more likely and the sharp drop in performance observed in \\citeA{Yosinski2014} is unexplained. 
To test this hypothesis, we calculated RV2 similarity matrices comparing each random network to the standard English network.\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=\\columnwidth]{hypothesis.png}\n\\caption{\\small \\textbf{Left} Self-similarity of the English standard network. \\textbf{Right} Idealized diagram of the hypothesis that the representational transformations of the standard network will be evenly distributed across the trained layers of a random net. } \n\\label{fig:hypothesis}\n\\end{figure}\n\nThe comparisons between the standard model and the random networks are shown in Figure~\\ref{fig:random-feats}. In the following, `random net $n$' refers to the network with random layers up to layer $n$; only layers above layer $n$ were trained. Layers are named c1, c2, ..., c9, fc1, fc2 to distinguish the convolutional and fully connected layers.\n\\begin{figure*}[h!]\n\\centering\n\\includegraphics[width=2\\columnwidth]{all_mats_10_2rows_relabeled.png}\n\\caption{\\small The RV2 similarity between the baseline model (all layers trained) and networks of identical architecture with only layers above layer $n$ trained, $\\forall n \\in $ [c1, fc1]. } \n\\label{fig:random-feats}\n\\end{figure*}\nIn contrast with our hypothesis, late layers remain most similar to their corresponding layer in the standard network, even as more early layers are left untrained. This pattern is especially clear in the similarity matrix for random net 4. The first trained layer of random net 4, layer c5, is diffusely similar to layers c2--c6 in the standard network, while the remaining layers show maximum similarity near the diagonal. \nWhen a network is mostly composed of random layers and only the fully connected layers are trained (e.g. random nets 9-10), the trained layers are not similar to any layer in the standard network. 
While these networks are still able to perform the task to some extent, they clearly do so in a way that does not mimic the standard network. \n\n\\section{Discussion}\n\\citeA{Kornblith2019SimilarityRevisited} validated the CKA method by showing that it can identify corresponding layers in two networks trained from different random initializations. \nOur comparisons of freeze-trained networks, standard networks and untrained networks extend this validation by showing that a related similarity metric, RV2, applied to networks trained on speech, can recover expected and interpretable patterns of similarity.\n\n\nOur random networks do not show an even distribution of the needed representational transformations across all trained layers. Instead, early trained layers compensate more for the reduced number of trained layers, such that the representations in late trained layers are less affected. \nThis may reflect architectural constraints on representation. For example, fully connected layers may tend to be more similar to other fully connected layers than to convolutional layers and the fully connected layers may require a particular representation in the preceding convolutional layers. This top-down influence on representations in late layers may also be attributable to the targets serving as an anchor in the same way that the inputs anchor the representations in early layers. While there may be many computational solutions to the classification problem at hand, the form of the inputs and targets themselves are fixed, which may constrain the form of representations near the input and targets. 
\n\n\section{Acknowledgments}\nThanks to Nuance and Mitacs for their support and to Jo\~ao Felipe Santos and Guillaume Alain for helpful comments.\n\n\bibliographystyle{apacite}\n\n\setlength{\bibleftmargin}{.125in}\n\setlength{\bibindent}{-\bibleftmargin}\n\n\section{Background, New Challenges, and Achievement}\label{sec:introduction}\n\n{\em Constraint satisfaction problems} (or CSPs) have appeared in many different contexts, such as graph theory, database theory, type inferences, scheduling, and notably artificial intelligence, \nfrom which the notion of CSPs originated. The importance of CSPs comes partly from the fact that the framework of the CSP is broad enough to capture numerous natural problems that arise in real applications. \nGenerally, an input instance of a CSP is a set of ``variables'' \n(over a specified domain) and a set of ``constraints'' (such a set of constraints is \nsometimes called a {\em constraint language}) among these variables. \nWe are particularly interested in the case of {\em Boolean variables} throughout this paper. \n\nAs a decision problem, a CSP asks whether there exists \nan appropriate variable assignment, \nwhich satisfies all the given constraints. In particular, Boolean constraints can be expressed \nby Boolean functions or equivalently propositional logical formulas; \nhence, the CSPs with Boolean constraints are all $\np$ problems. \nTypical examples of CSPs are the {\em satisfiability problem} \n(or SAT), the vertex cover problem, and the colorability problem, \nall of which are known to be $\np$-complete. In contrast, other CSPs, \nsuch as the perfect matching problem on planar graphs, fall into $\p$. \nOne naturally asks \nwhat kind of constraints make them solvable efficiently \nor $\np$-complete. 
To be more precise, \nwe first restrict our attention to CSP instances that depend only on constraints chosen from a given set $\FF$ of constraints. \nSuch a restricted CSP is conventionally denoted $\mathrm{CSP}(\FF)$. \nA classic {\em dichotomy theorem} of Schaefer \cite{Sch78} states that \nif $\FF$ is included in one of six clearly specified classes,\footnote{These classes are defined in terms of $0$-valid, $1$-valid, weakly positive, weakly negative, affine, and bijunctive constraints. See \cite{Sch78} for their definitions.} $\mathrm{CSP}(\FF)$ belongs to $\p$; otherwise, it is indeed $\np$-complete. To see the significance of this theorem, let us recall a result of Ladner \cite{Lad75}, who demonstrated under the $\p\neq\np$ assumption that \nall $\np$ problems fill \ninfinitely many disjoint categories located between the class $\p$ and the class of $\np$-complete problems. Schaefer's claim implies that \nthere are no intermediate categories for Boolean CSPs. \n\nAnother challenging question is to count the number of satisfying assignments for a given CSP instance. The counting satisfiability problem, $\#\mathrm{SAT}$, is a typical counting CSP (or succinctly, $\sharpcsp$), \nwhich is known to be complete for Valiant's counting class $\sharpp$ \cite{Val79a}. Restricted to a set $\FF$ of Boolean constraints, \nCreignou and Hermann \cite{CH96} gave a dichotomy theorem \nconcerning the computational complexity of the restricted counting problem \n$\sharpcsp(\FF)$. \n\begin{quote}\nIf all constraints in $\FF$ are affine,\footnote{An affine relation is a set of solutions of a certain set of linear equations over $\mathrm{GF}(2)$.} then $\sharpcsp(\FF)$ is solvable in polynomial time. Otherwise, $\sharpcsp(\FF)$ is $\sharpp$-complete.\n\end{quote}\nIn real applications, constraints often take real numbers, and this fact leads us to concentrate on ``weighted'' \#CSPs (namely, \#CSPs with arbitrary-valued constraints). 
In this direction, \nDyer, Goldberg, and Jerrum \cite{DGJ09} extended the above result to \nnonnegative-weighted Boolean \#CSPs. Eventually, Cai, Lu, and Xia \n\cite{CLX09} \nfurther pushed the scope of Boolean \#CSPs to \ncomplex-weighted Boolean \#CSPs, and thus all Boolean \#CSPs have been completely classified. \n\nHowever, when we turn our attention from exact counting \nto {\em (randomized) approximate counting}, the situation looks quite different. \nInstead of the aforementioned dichotomy theorems, Dyer, Goldberg, and Jerrum \cite{DGJ10} presented \na {\em trichotomy theorem} for the complexity of approximately counting the number of satisfying assignments for each Boolean CSP instance. \nWhat they actually proved \nis that, depending on the choice of a set $\FF$ of Boolean constraints, the complexity of approximately solving $\sharpcsp(\FF)$ can be classified into three categories. \n\begin{quote}\nIf all constraints in $\FF$ are affine, then $\sharpcsp(\FF)$ is polynomial-time solvable. Otherwise, if all constraints in $\FF$ are in a \nwell-defined class, known as $IM_2$, \nthen $\sharpcsp(\FF)$ is equivalent in complexity \nto $\#\mathrm{BIS}$. Otherwise, \n$\sharpcsp(\FF)$ is equivalent to $\#\mathrm{SAT}$. \nThe equivalence is defined \nvia polynomial-time {\em approximation-preserving reductions} \n(or AP-reductions, in short).\n\end{quote}\nHere, \#BIS is the problem of counting the number of independent sets in a given bipartite graph. \n\nThere still remains a nagging question on the approximation complexity \nof a ``weighted'' version of Boolean \#CSPs: what happens if we expand the scope of Boolean \#CSPs from unweighted ones to complex-weighted ones? Unfortunately, there have been few results showing the hardness of approximately solving \#CSPs with real\/complex-valued constraints, \nexcept for \eg \cite{GJ07}. 
\nUnlike unweighted constraints, when we deal with complex-valued constraints, a significant complication occurs as a result of massive cancellations of weights in the process of summing \nall weights given by constraints. This situation demands a quite different approach toward the complex-weighted \#CSPs. Do we still have a classification theorem similar to the theorem of \nDyer \etal or something quite different? \nIn this paper, we answer this question under the reasonable assumption \nthat all unary (\ie arity $1$) constraints are freely available to use. \nHereafter, let the notation \n$\sharpcspstar(\FF)$ denote the complex-weighted counting problem $\sharpcsp(\FF)$ \nthat satisfies this extra assumption. \nA free use of unary constraints appeared in the past literature for Holant problems \cite{CLX09x,CLX@}. Even in the case of bounded-degree Boolean \n\#CSPs, Dyer \etalc~\cite{DGJR09} assumed free unary \nBoolean-valued constraints. \nAlthough it is reasonable, this extra assumption makes the approximation \ncomplexity of $\sharpcspstar(\FF)$ look quite different from the approximation complexity of \n$\sharpcsp(\FF)$, except for the case of Boolean-valued constraints. \nIf we restrict our interest to Boolean constraints, then the only nontrivial unary constraints are $\Delta_{0}$ and $\Delta_{1}$ (which are called ``constant constraints'' and will be explained in Section \ref{sec:basic-definition}) and thus, \nas shown in \cite{DGJ10}, \nwe can completely eliminate them from the definition of $\sharpcspstar(\FF)$ using polynomial-time randomized approximation algorithms. The elimination of those constant constraints is also possible in our general setting of complex-weighted \#CSPs when all values are limited to algebraic complex numbers. 
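The cancellation phenomenon mentioned above can be made concrete with a toy instance (ours, not from the text): a single binary complex-valued constraint whose weights cancel exactly when summed over all assignments. A brute-force sketch of the weighted sum that a complex-weighted \#CSP asks for:

```python
from itertools import product

# A hypothetical complex-weighted binary constraint, given by its value
# table; the weights are chosen so that they cancel when summed.
f = {(0, 0): 1, (0, 1): 1j, (1, 0): -1j, (1, 1): -1}

def weighted_count(n_vars, constraints):
    """Sum, over all 0/1 assignments, of the product of constraint values
    (the quantity a weighted #CSP computes)."""
    total = 0
    for x in product((0, 1), repeat=n_vars):
        w = 1
        for table, scope in constraints:
            w *= table[tuple(x[i] for i in scope)]
        total += w
    return total

print(weighted_count(2, [(f, (0, 1))]))  # 0j: the four weights cancel exactly
```

With $\{0,1\}$-valued tables this sum is exactly the number of satisfying assignments; with complex weights, partial sums can vanish, which is why approximation arguments designed for nonnegative weights break down.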
\n\nFor the approximation complexity of $\sharpcspstar(\FF)$'s, \nthe expressive power of unary complex-valued constraints leads us to \na {\em dichotomy theorem}---Theorem \ref{dichotomy-theorem}---which depicts a picture that looks quite different from that of Dyer \etal \n\begin{theorem}\label{dichotomy-theorem}\nLet $\FF$ be any set of complex-valued constraints. If \n$\FF\subseteq \ED$, then $\sharpcspstar(\FF)$ is solvable in polynomial time. \nOtherwise, $\sharpcspstar(\FF)$ is at least as hard as $\#\mathrm{SAT}_{\complex}$ under AP-reductions. \n\end{theorem}\nHere, the counting problem $\#\mathrm{SAT}_{\complex}$ is a complex-weighted analogue of $\#\mathrm{SAT}$ and the set $\ED$ is composed of products of the equality\/disequality constraints (which \nwill be explained in Section \ref{sec:constraint-set}) together with unary constraints. \n\nThis theorem marks significant progress in the quest for determining the approximation complexity of all counting problems $\sharpcsp(\FF)$ in the most general form.\nOur proof heavily relies on the previous work of Dyer \n\etalc~\cite{DGJ09,DGJ10} and, particularly, the work of \nCai \etalc~\cite{CLX09x,CLX09}, \nwhich is based on a theory of {\em signature} \n(see, \eg \cite{CL08,CL11}) that formulates the underlying concepts of {\em holographic algorithms} (which are Valiant's \cite{Val02a,Val02b,Val06,Val08} manifestation \nof a new algorithmic design method of solving seemingly-intractable counting problems in polynomial time). \nA challenging issue for this paper is that core arguments of Dyer \etalc~\cite{DGJ10} \nexploited the Boolean nature of Boolean-valued constraints and \nthey are not designed to lead to a dichotomy theorem for \ncomplex-valued constraints. 
\nCai's theory of signatures, by contrast, deals with complex-valued \nconstraints (which are formally called {\em signatures}); however, the theory was developed \nfor polynomial-time Turing reductions and is not guaranteed to remain valid under AP-reductions. For instance, a useful technical tool known as {\em polynomial interpolation}, which is frequently used in exact-counting analyses of Holant problems, is no longer applicable in general. \nTherefore, our first task is to re-examine the well-known \nresults in this theory and salvage its key arguments that are still valid for our AP-reductions. \n{}From that point on, we need to find our own way to establish an approximation theory. \n\nToward forming a solid approximation theory, a notable technical tool developed in this paper \nis a notion of {\em T-constructibility} in tandem with the aforementioned AP-reducibility. Earlier, Dyer \etalc~\cite{DGJ09} utilized an existing notion of (faithful and perfect) {\em implementation} for Boolean-valued constraints in order to induce their desired AP-reductions. The T-constructibility similarly maintains the validity of the AP-reducibility; in addition, it is more suitable for handling complex-valued constraints in a systematic fashion. \nOther proof techniques involved in proving our main theorem include \n(1) {\em factorization} (of Boolean parts) of complex-valued constraints and (2) {\em arity reduction} of constraints. Factoring complex-valued constraints helps us conduct crucial analyses on fundamental properties of those constraints, and reducing the arities of constraints helps construct, from constraints of higher arity, binary constraints, which we can handle directly by a case-by-case analysis. In addition, a particular binary constraint---$Implies$---plays a pivotal role in the proof of Theorem \ref{dichotomy-theorem}. This situation is quite different from \cite{CLX09x,CLX@}, which instead utilized the affine property. 
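For concreteness, $Implies$ is the binary constraint that outputs $0$ only on the input $(1,0)$; listed over $\{0,1\}^{2}$ in lexicographic order, its signature is $(1,1,0,1)$. A minimal sketch (ours, for illustration):

```python
def implies(x: int, y: int) -> int:
    """Implies(x, y): 1 unless x = 1 and y = 0."""
    return 0 if (x, y) == (1, 0) else 1

# The signature of Implies: its outputs in lexicographic order of (x, y)
signature = [implies(x, y) for x in (0, 1) for y in (0, 1)]
print(signature)  # [1, 1, 0, 1]
```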
\n\n\nTo prove our dichotomy theorem, we will organize the subsequent sections in the following fashion. Section \ref{sec:basic-definition} gives the detailed descriptions of our key terminology: constraints, Holant problems, counting CSPs, and \nAP-reductions. In particular, an extension of the notion of \nrandomized approximation scheme over non-negative integers to \narbitrary complex numbers is described in Section \ref{sec:randomized-scheme}. Briefly explained in \nSection \ref{sec:constructibility} is the concept of T-constructibility, \na technical tool developed exclusively in this paper. \nFor readability, a basic property of T-constructibility is proven in Section \ref{sec:elimination}. \nSection \ref{sec:constraint-set} introduces several crucial sets of constraints, which are bases of our key results. \nToward our main theorem, we will develop solid \nfoundations in Sections \ref{sec:elementary-reduction} \nand \ref{sec:support-reduction}. Notably, a free use of ``arbitrary'' unary constraints is heavily required in Section \ref{sec:elementary-reduction} to prove approximation-complexity bounds of $\sharpcspstar(f)$. As an important ingredient of the proof of the \ndichotomy theorem, we will present in Section \ref{sec:IM2-support} the approximation hardness of $\sharpcspstar(f)$ for two types of constraints $f$. \nThe dichotomy theorem is finally proven in Section \ref{sec:main-theorem}, achieving the goal of this paper. \n\nGiven a constraint, if its outcomes are limited to algebraic complex numbers, we succinctly call the constraint an {\em algebraic constraint}. \nWhen all input instances contain only algebraic constraints, as we noted earlier, we can further eliminate the constant constraints and thus strengthen the main theorem. 
To describe our next result, we introduce a special notation $\sharpcspplus_{\algebraic}(\FF)$ to indicate $\sharpcspstar(\FF)$ in which (i) all input instances are limited to algebraic constraints and (ii) free unary constraints take neither the form $c\cdot \Delta_0$ nor the form $c\cdot\Delta_1$ for any constant $c$. \nSimilarly, $\#\mathrm{SAT}_{\algebraic}$ is induced from $\#\mathrm{SAT}_{\complex}$ by limiting node-weights to algebraic complex numbers. The power of AP-reducibility leads us to establish the following corollary of the main theorem.\n\begin{corollary}\label{algebraic-main-theorem}\nLet $\FF$ be any set of complex-valued constraints. If $\FF\subseteq\ED$, then $\sharpcspplus_{\algebraic}(\FF)$ is solvable in polynomial time; otherwise, $\sharpcspplus_{\algebraic}(\FF)$ is at least as hard as $\#\mathrm{SAT}_{\algebraic}$ under AP-reductions. \n\end{corollary}\n\nThis corollary will be proven in Section \ref{sec:main-theorem}. A key to the proof of the corollary is an AP-equivalence between $\sharpcspstar_{\algebraic}(\FF)$ and $\sharpcspplus_{\algebraic}(\FF)$ \nfor any constraint set $\FF$, where the subscript ``$\algebraic$'' in $\sharpcspstar_{\algebraic}(\FF)$ emphasizes the restriction of input instances to algebraic constraints. This AP-equivalence is a direct consequence of the elimination of $\Delta_0$ and $\Delta_1$ from $\sharpcspstar_{\algebraic}(\FF)$ and this elimination will be demonstrated in Section \ref{sec:elimination}.\n\n\n\ms\n\n{\bf Outline of the Proof of the Main Theorem:}\hs{1} \nThe proof of the dichotomy theorem is outlined as follows. \nFirst, we will establish in Section \ref{sec:upper-bound} the equivalence between $\#\mathrm{SAT}_{\complex}$ and $\sharpcspstar(OR)$, where $OR$ represents the logical ``or'' on two Boolean variables. 
\nThis makes it possible to work solely with $\\sharpcspstar(OR)$, instead of \n$\\#\\mathrm{SAT}_{\\complex}$ in the subsequent sections. \nWhen a constraint set $\\FF$ is completely included in $\\ED$, we will show in Lemma \\ref{basic-case-FPC} that $\\sharpcspstar(\\FF)$ is polynomial-time solvable. On the contrary, when $\\FF$ is not included in $\\ED$, \nwe choose a constraint $f$ not in $\\ED$. Such a constraint will be treated by Proposition \\ref{key-proposition}, in which we will AP-reduce $\\sharpcspstar(OR)$ to $\\sharpcspstar(f)$. The proof of this proposition will be split into two cases, depending on whether or not $f$ has ``imp support,'' which is a property associated with the constraint $Implies$. When $f$ has such a property, Proposition \\ref{no-affine-and-IM2} helps establish the hardness of $\\sharpcspstar(f)$; namely, an AP-reduction of $\\sharpcspstar(f)$ from $\\sharpcspstar(OR)$ and thus from $\\#\\mathrm{SAT}_{\\complex}$. In contrast, if $f$ lacks the property, then we will examine two subcases. If $f$ is a non-zero constraint, then Lemma \\ref{affine-PP-induction} together with Proposition \\ref{1-x-y-z-case} \nleads to the hardness of \n$\\sharpcspstar(f)$. Otherwise, Proposition \\ref{IM-XOR-IMP} establishes the desired AP-reduction. Therefore, the proof of the theorem is completed. \n\n\n\\ms\n\nNow, we begin with an explanation of basic definitions.\n\n\\section{Basic Definitions}\\label{sec:basic-definition}\n\nThis section briefly presents fundamental notions and notations, which will be used in later sections. For any finite set $A$, the notation $|A|$ \ndenotes the {\\em cardinality} of $A$. \nA {\\em string} over an alphabet $\\Sigma$ is a finite sequence of symbols from $\\Sigma$ and $|x|$ denotes the {\\em length} of a string $x$, where an {\\em alphabet} is a non-empty finite set of ``symbols.'' \nLet $\\nat$ denote the set of all {\\em natural numbers} (\\ie non-negative integers). 
For convenience, $\\nat^{+}$ denotes $\\nat-\\{0\\}$. \nMoreover, $\\real$ and $\\complex$ denote respectively the sets of all real numbers and of all complex numbers. Given a complex number $\\alpha$, \nlet $|\\alpha|$ and $\\arg(\\alpha)$ respectively denote the {\\em absolute value} and the {\\em argument} of $\\alpha$, where we always assume that $-\\pi<\\arg(\\alpha)\\leq\\pi$. The special notation $\\algebraic$ represents the set of all {\\em algebraic complex numbers}. For each number \n$n\\in\\nat$, $[n]$ expresses the integer set $\\{1,2,\\ldots,n\\}$. \nThe notation $A^{T}$ for any matrix $A$ indicates the {\\em transposed matrix} of $A$. We always treat ``vectors'' as {\\em row vectors}, unless stated otherwise. \n\nFor any undirected graph $G=(V,E)$ (where $V$ is a {\\em node set} and $E$ is an {\\em edge set}) and a node $v\\in V$, an {\\em incident set} $E(v)$ of $v$ is the set of all edges incident on $v$, and $deg(v)=|E(v)|$ \nis the {\\em degree} of $v$. \nWhen we refer to nodes in a given undirected graph, unless there is any ambiguity, we call such nodes by their labels instead of their original node names. For example, if a node $v$ has a label of Boolean variable $x$, then we often call it ``node $x$,'' although there are many other nodes labeled $x$, as far as it is clear from the context which node is referred to. Moreover, when $x$ is a Boolean variable, as in this example, we succinctly call any node labeled $x$ a ``variable node.'' \n\n\\subsection{Constraints, Signatures, Holant Problems, and \\#CSP}\\label{sec:constraint}\n\nThe most fundamental concept in this paper is ``constraint'' on the \nBoolean domain. \nA function $f$ is called a {\\em (complex-valued) constraint} of arity $k$ \nif it is a function from $\\{0,1\\}^k$ to $\\complex$. Assuming the standard lexicographic order on $\\{0,1\\}^{k}$, we express $f$ as a series of its output values, which is identified with an element in the complex space $\\complex^{2^{k}}$. 
\nFor instance, if $k=1$, then $f$ equals $(f(0),f(1))$, and if $k=2$, \nthen $f$ is expressed as $(f(00),f(01),f(10),f(11))$. \n A constraint $f$ is {\em symmetric} if the values of $f$ depend only on the {\em Hamming weight} of the input; otherwise, $f$ is called \n{\em asymmetric}. When $f$ is a \nsymmetric constraint of arity $k$, we use another notation $f=[f_0,f_1,\ldots,f_k]$, where each $f_i$ is the value of $f$ on inputs of \nHamming weight $i$. As a concrete example, when $f$ is the equality function ($EQ_k$) of arity $k$, it is expressed as $[1,0,\ldots,0,1]$ (with $k-1$ zeros). We denote by $\UU$ the set of all unary constraints \nand we use the following special unary constraints: $\Delta_0 =[1,0]$ and $\Delta_1=[0,1]$. These constraints are often referred to as ``constant constraints.'' \n\nBefore introducing \#CSPs, we will give a brief description of Holant problems; however, we focus our attention only on ``bipartite Holant problems,'' whose input instances are ``signature grids'' containing bipartite graphs $G$ in which all nodes on the left-hand side of $G$ are labeled by signatures in $\FF_1$ and all nodes on the right-hand side of $G$ are labeled by signatures in $\FF_2$, where ``signature'' is another name for a complex-valued constraint, and $\FF_1$ and $\FF_2$ are two sets of signatures. \nFormally, a {\em bipartite Holant problem}, denoted $\holant(\FF_1|\FF_2)$ (on a Boolean domain), is a counting problem defined as follows. 
\nThe problem takes an input instance, called \na {\\em signature grid} $\\Omega =(G,\\FF'_1|\\FF'_2,\\pi)$, that consists of a finite undirected bipartite graph $G=(V_1|V_2,E)$ (where all nodes in $V_1$ appear on the left-hand side and all nodes in $V_2$ appear on the right-hand side), \ntwo {\\em finite} subsets $\\FF'_1\\subseteq \\FF_1$ and $\\FF'_2\\subseteq \\FF_2$, and a labeling function $\\pi:V_1\\cup V_2\\rightarrow\\FF_1'\\cup\\FF'_2$ such that $\\pi(V_1)\\subseteq \\FF'_1$, $\\pi(V_2)\\subseteq \\FF'_2$, and each node $v\\in V_1\\cup V_2$ is labeled $\\pi(v)$, which is a function mapping $\\{0,1\\}^{deg(v)}$ to $\\complex$. \nFor convenience, we often write $f_{v}$ for this $\\pi(v)$. \nLet $Asn(E)$ denote the set of all {\\em edge assignments} $\\sigma: E\\rightarrow \\{0,1\\}$. The bipartite Holant problem is meant to compute the complex value $\\holant_{\\Omega}$: \n\\[\n\\holant_{\\Omega} = \\sum_{\\sigma\\in Asn(E)} \n\\prod_{v\\in V_1\\cup V_2}f_{v}(\\sigma|E(v)), \n\\] \nwhere $\\sigma|E(v)$ denotes the binary string $(\\sigma(w_1),\\sigma(w_2),\\cdots,\\sigma(w_k))$ if $E(v)=\\{w_1,w_2,\\ldots,w_k\\}$, sorted in a certain pre-fixed order associated with $f_{v}$. \n\nLet us define complex-weighted Boolean $\\sharpcsp$ problems associated with a set $\\FF$ of constraints. Conventionally, a complex-weighted Boolean $\\sharpcsp$ problem, denoted $\\sharpcsp(\\FF)$, takes a finite set $\\Omega$ \nof ``elements'' of the form $\\pair{h,(x_{i_1},x_{i_2},\\ldots,x_{i_k})}$ on Boolean variables $x_1,x_2,\\ldots,x_n$, where $h\\in\\FF$ and $i_1,\\ldots,i_k\\in[n]$. The problem outputs the value $\\csp_{\\Omega}$: \n\\[\n\\csp_{\\Omega} = \\sum_{\\sigma} \\prod_{\\pair{h,x'}\\in \\Omega} h(\\sigma(x_{i_1}),\\sigma(x_{i_2}),\\ldots,\\sigma(x_{i_k})),\n\\] \n\\sloppy where $x'=(x_{i_1},x_{i_2},\\ldots,x_{i_k})$ and $\\sigma:\\{x_1,x_2,\\ldots,x_n\\}\\rightarrow\\{0,1\\}$ ranges over the set of all {\\em variable assignments}. 
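For small instances, the defining sum $\csp_{\Omega}$ can be evaluated directly by enumerating all $2^n$ variable assignments. The sketch below is our own illustration, not part of the formal framework: the encoding of constraints as Python functions and the bracket-notation helper `symmetric` are assumptions made purely for the example.

```python
from itertools import product

def symmetric(values):
    """Bracket notation [f_0,...,f_k]: the output depends only on
    the Hamming weight of the 0-1 input tuple."""
    return lambda x: values[sum(x)]

def csp_value(n_vars, elements):
    """Brute-force evaluation of csp_Omega: sum, over all 2^n
    variable assignments, of the product of all applied constraints.
    `elements` is a list of pairs (h, indices), where h maps a 0-1
    tuple to a complex number and `indices` selects the variables."""
    total = 0j
    for sigma in product((0, 1), repeat=n_vars):
        term = 1 + 0j
        for h, indices in elements:
            term *= h(tuple(sigma[i] for i in indices))
        total += term
    return total

OR = symmetric([0, 1, 1])        # the constraint OR = [0,1,1]
EQ3 = symmetric([1, 0, 0, 1])    # the equality EQ_3 = [1,0,0,1]
print(csp_value(2, [(OR, (0, 1))]))       # 3 satisfying assignments
print(csp_value(3, [(EQ3, (0, 1, 2))]))   # only 000 and 111 survive
```

With 0-1 constraints this reduces to counting solutions; complex-weighted constraints are handled by the same loop, since Python's `complex` arithmetic matches the sum-product formula term by term.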
\n\nExploiting a close resemblance to Holant problems, we intend to \nadopt the Holant framework and re-define $\\sharpcsp(\\FF)$ in a form of ``bipartite graphs'' as follows: an input instance to $\\sharpcsp(\\FF)$ is a triplet $\\Omega=(G,X|\\FF',\\pi)$, which we call a ``constraint frame'' (to distinguish it from the aforementioned conventional framework), where $G$ is an undirected bipartite graph whose left-hand side contains nodes labeled by Boolean variables and the right-hand side contains nodes labeled by constraints in $\\FF'$. \nThroughout this paper, we take this constraint-frame formalism to treat complex-weighted Boolean \\#CSPs; that is, we always assume that an input instance to $\\sharpcsp(\\FF)$ is a certain constraint frame $\\Omega$ and an output of $\\sharpcsp(\\FF)$ is the value $\\csp_{\\Omega}$. \n\nThe above concept of constraint frame is actually inspired by the fact that $\\sharpcsp(\\FF)$ can be viewed as a special case of bipartite Holant problem $\\holant(\\{EQ_{k}\\}_{k\\geq1}|\\FF)$ by the following translation: any constraint frame $\\Omega$ given to $\\sharpcsp(\\FF)$ is viewed as a signature grid $\\Omega'=(G,\\{EQ_k\\}_{k\\geq1}|\\FF',\\pi)$ in which each Boolean variable appearing in the original constraint frame $\\Omega$ corresponds to all edges incident on a node labeled $EQ_k$ in $G$, and thus each variable assignment for $\\Omega$ matches the corresponding 0-1 edge assignment for $\\Omega'$. Obviously, each outcome of the constraint frame $\\Omega$ coincides with the outcome of the signature grid $\\Omega'$. \n\nTo improve readability, we often omit the set notation and express, \\eg $\\sharpcsp(f,g)$ and $\\sharpcsp(f,\\FF,\\GG)$ to mean $\\sharpcsp(\\{f,g\\})$ and $\\sharpcsp(\\{f\\}\\cup \\FF\\cup\\GG)$, respectively. \nWhen we allow unary constraints to appear in any instance freely, \nwe succinctly write $\\sharpcspstar(\\FF)$ instead of $\\sharpcsp(\\FF,\\UU)$. 
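The translation just described can be checked numerically on a small example. In the sketch below (our own illustration; the encoding of signatures as Python functions is an assumption), the constraint frame $\{OR(x_1,x_2),\, OR(x_2,x_3)\}$ becomes a signature grid in which the doubly occurring variable $x_2$ turns into an $EQ_2$ node, and the Holant sum over edge assignments reproduces the \#CSP value.

```python
from itertools import product

EQ = lambda x: 1 if len(set(x)) == 1 else 0   # EQ_k: all incident edges equal
OR = lambda x: 1 if any(x) else 0             # OR = [0,1,1]

def holant_value(num_edges, nodes):
    """Brute-force Holant sum over all 0-1 edge assignments;
    `nodes` is a list of pairs (signature, incident edge indices)."""
    total = 0j
    for sigma in product((0, 1), repeat=num_edges):
        term = 1 + 0j
        for f, inc in nodes:
            term *= f(tuple(sigma[e] for e in inc))
        total += term
    return total

# Edges 0..3 join the variable nodes x1, x2, x2, x3 to two OR nodes.
grid = [(EQ, (0,)), (EQ, (1, 2)), (EQ, (3,)),   # variable side: EQ_deg(v)
        (OR, (0, 1)), (OR, (2, 3))]             # constraint side
print(holant_value(4, grid))   # 5 assignments satisfy both clauses
```

The $EQ_2$ node forces its two incident edges to agree, so each surviving edge assignment corresponds to exactly one variable assignment, which is the content of the translation above.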
In the rest of this paper, we will target the counting problems $\sharpcspstar(\FF)$. \n\n\n\ms\n\n{\bf Our Treatment of Complex Numbers.}\hs{1} \nHere, we need to address a technical issue concerning how to handle complex numbers as well as complex-valued functions. Recall that each input instance to a \#CSP involves a finite set of constraints, which are actually complex-valued functions. \nHow can we compute or manipulate those functions? More importantly, how can we ``express'' them as part of input instances even before starting to compute their values? \n\nThe past literature has exhibited numerous ways to treat complex numbers in the existing framework of string-based computation theory. There are several reasonable definitions of ``polynomial-time computable'' complex numbers, which vary depending on which viewpoint we take. To state our results independently of the definitions of computable complex numbers, however, we rather prefer to treat complex numbers as basic ``objects.'' \nWhenever complex numbers are given as part of input instances, we implicitly assume that we have a clear and concrete means of specifying those numbers within a standard framework of computation. Occasionally, however, we will limit our interest to the scope of algebraic numbers, \nas in Lemma \ref{equivalent-plus-star}. \n\nTo manipulate such complex numbers algorithmically, we are limited to \nperforming only ``primitive'' operations, such as multiplication, addition, and division, on the given numbers in a very plausible fashion. The execution time of an algorithm that handles those complex numbers is generally measured by the number of those primitive operations. \nWe apply only such primitive operations to given complex numbers; therefore, our assumption on the execution time \nof the operations causes no harm in a later discussion on the computability of $\sharpcsp(\FF)$. (See \cite{CL08,CL11} for further justification.) 
\n\n\\ms\n\nBy way of our treatment of complex numbers, we naturally define the function class $\\fp_{\\complex}$ as the set of all complex-valued \nfunctions that can be computed deterministically on input strings in time \npolynomial in the sizes of the inputs. \n\n\\subsection{Randomized Approximation Schemes}\\label{sec:randomized-scheme}\n\nWe will lay out a notion of randomized approximation scheme, \nparticularly, working on complex numbers. Let \n $F$ be any counting function mapping from \n$\\Sigma^*$ (over an appropriate alphabet $\\Sigma$) to $\\complex$. Our goal is to approximate each value $F(x)$ when $x$ is given as an input instance to $F$. \nA standard approximation theory (see, \\eg \\cite{ACG+98}) deals mostly with natural numbers; however, treating complex numbers in the subsequent sections requires an appropriate modification of the standard definition of computability and approximation. In what follows, we will make a specific form of complex-number approximation. \n\nA fundamental idea behind ``relative approximation error'' is that a maximal ratio between an approximate solution $w$ and a true solution $F(x)$\nshould be close to $1$. \nIntuitively, a complex number $w$ is an ``approximate solution'' for $F(x)$ if a performance ratio $z=w\/F(x)$ (as well as $z=F(x)\/w$) is \nclose enough to $1$. \nIn case when our interest is limited to ``real-valued'' functions, we can expand a standard notion of relative approximation of functions producing non-negative integers (\\eg \\cite{ACG+98}) and we demand $2^{-\\epsilon} \\leq w\/F(x) \\leq 2^{\\epsilon}$ (whenever $F(x)=0$, we further demand $w=0$). This requirement is logically equivalent to both $2^{-\\epsilon} \\leq \\left|w\/F(x)\\right| \\leq 2^{\\epsilon}$ and $\\arg(F(x))=\\arg(w)$ (when $F(x)=0$, $w=0$ must hold), where the ``positive\/negative signs'' of real numbers $F(x)$ and $w$ are represented by the ``arguments'' of them in the complex plane. 
Because our target objects are complex numbers $z$, each specified by its absolute value $|z|$ and its argument $\arg(z)$, both values must be approximated simultaneously. \nGiven an {\em error tolerance parameter} $\epsilon\in[0,1]$, we call a value $w$ a {\em $2^{\epsilon}$-approximate solution} for $F(x)$ if $w$ satisfies the following two conditions:\n\[\n2^{-\epsilon} \leq \left| \frac{w}{F(x)} \right| \leq 2^{\epsilon} \n\hs{5}\text{and}\hs{5} \n\left| \arg\left( \frac{w}{F(x)} \right) \right|\leq \epsilon, \n\]\nprovided that we apply the following exceptional rule: when $F(x)=0$, we instead require $w=0$. Notice that this way of approximating complex numbers is more suitable for establishing Lemma \ref{equivalent-plus-star} than approximating both the real parts and the imaginary parts of the complex numbers.\n\nA {\em randomized approximation scheme} for (complex-valued) $F$ is a randomized algorithm that takes a standard input $x\in\Sigma^*$ together with an error tolerance parameter $\varepsilon\in(0,1)$, and outputs a $2^{\varepsilon}$-approximate solution (which is a random variable) for $F(x)$ with probability at least $3\/4$. \nA {\em fully polynomial-time randomized approximation scheme} \n(or simply, {\em FPRAS}) for $F$ is a randomized approximation scheme for $F$ that runs in time polynomial in $(|x|,1\/\varepsilon)$.\n\nNext, we will describe our notion of approximation-preserving reducibility among counting problems. Of the numerous existing notions of approximation-preserving reducibility (see, \eg \cite{ACG+98}), we choose a notion introduced by Dyer \etalc~\cite{DGGJ03}, which can be viewed as a randomized variant of Turing reducibility, described by the mechanism of an {\em oracle Turing machine}.
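The two conditions defining a $2^{\epsilon}$-approximate solution translate directly into a membership test. The sketch below is our own illustration (the function name and encoding are assumptions); note that Python's `cmath.phase` returns arguments in $(-\pi,\pi]$, matching the convention fixed in Section \ref{sec:basic-definition}.

```python
import cmath

def is_approximate_solution(w, true_value, eps):
    """Test whether w is a 2^eps-approximate solution: the ratio
    w / F(x) must have absolute value in [2^-eps, 2^eps] and
    argument in [-eps, eps].  Exceptional rule: F(x) = 0 forces w = 0."""
    if true_value == 0:
        return w == 0
    z = w / true_value
    return (2.0 ** -eps <= abs(z) <= 2.0 ** eps
            and abs(cmath.phase(z)) <= eps)

print(is_approximate_solution(1.05, 1.0, 0.1))              # True
print(is_approximate_solution(-1.0, 1.0, 0.1))              # False
print(is_approximate_solution(cmath.exp(0.05j), 1.0, 0.1))  # True
```

The second example shows why the argument condition is needed: a sign flip is rejected even though the absolute values agree, exactly as in the real-valued discussion above.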
Given two counting functions $F$ and $G$, a {\em polynomial-time (randomized) approximation-preserving (Turing) reduction} (or {\em AP-reduction}, for short) from $F$ to $G$ is a randomized algorithm $N$ that takes a pair $(x,\varepsilon)\in\Sigma^*\times(0,1)$ as input, \nuses an arbitrary randomized approximation scheme (not necessarily polynomial time-bounded) $M$ for $G$ as {\em oracle}, \nand satisfies the following three conditions:\n(i) $N$ is a randomized approximation scheme for $F$; \n(ii) every {\em oracle call} made by $N$ is of the form $(w,\delta)\in\Sigma^*\times(0,1)$ satisfying \n$1\/\delta \leq p(|x|,1\/\varepsilon)$, where $p$ \nis a certain absolute polynomial, \nand an oracle answer is an outcome of $M$ on \nthe input $(w,\delta)$; and \n(iii) the running time of $N$ is bounded from above by a certain polynomial in $(|x|,1\/\varepsilon)$, not depending on the choice of the oracle $M$. In this case, we write $F\APreduces G$ and we also say that $F$ is {\em AP-reducible} (or {\em AP-reduced}) to $G$. If $F\APreduces G$ and $G\APreduces F$, then $F$ and $G$ are {\em AP-equivalent}\footnote{This concept was called ``AP-interreducible'' by Dyer \etalc~\cite{DGGJ03} but we prefer this term, which originates from ``Turing equivalent'' in computational complexity theory.} \nand we write $F\APequiv G$. The following lemma is straightforward.\n\n\begin{lemma}\label{T-con-to-AP-reduction}\nIf $\FF\subseteq \GG$, then \n$\sharpcspstar(\FF)\APreduces \sharpcspstar(\GG)$. \n\end{lemma}\n\n\section{Underlying Relations and Constraint Sets}\label{sec:constraint-set}\n\nA {\em relation} of arity $k$ is a subset of $\{0,1\}^k$. Such a relation can be viewed as a ``function'' mapping $\{0,1\}^k$ to $\{0,1\}$ (by setting $R(x)=0$ and $R(x)=1$ whenever $x\not\in R$ and $x\in R$, respectively, for every $x\in\{0,1\}^k$) and it can be treated as a Boolean constraint. 
For instance, logical relations $OR$, $NAND$, $XOR$, and $Implies$ are all expressed as Boolean constraints in the following manner: \n$OR=[0,1,1]$, $NAND=[1,1,0]$, $XOR=[0,1,0]$, and $Implies=(1,1,0,1)$. The negation of $XOR$ is $[1,0,1]$ and it is simply denoted $EQ$ for convenience. \nNotice that $EQ$ coincides with $EQ_2$. \n\nFor each $k$-ary constraint $f$, its {\em underlying relation} \nis the relation $R_f=\{x\in\{0,1\}^k\mid f(x)\neq0\}$, which \ncharacterizes the non-zero part of $f$. \nA relation $R$ belongs to the set {\em $IMP$} (slightly different from $IM_{2}$ in \cite{DGJ10}) \nif it is logically equivalent to a conjunction of a certain ``positive'' number of relations of the form $\Delta_0(x)$, $\Delta_1(x)$, and $Implies(x,y)$. \nIt is worth mentioning that $EQ_2\in IMP$ but $EQ_1\not\in IMP$. \nMoreover, the empty relation ``$\setempty$'' also belongs to $IMP$. \n\nThe purpose of this paper is to extend the scope of \n the approximation complexity of \#CSPs from the Boolean constraints of Dyer \etalc~\cite{DGJ10}, stated in Section \ref{sec:introduction}, to complex-valued constraints. To simplify later descriptions, it is useful to introduce the following five special sets of constraints, the first of which has been already introduced in Section \ref{sec:constraint}. \nThe notation $f\equiv0$ below means that $f(x_1,\ldots,x_k)=0$ for all vectors $(x_1,\ldots,x_k)\in\{0,1\}^k$, \nwhere $k$ is the arity of $f$. \n\n\begin{enumerate}\n\item Denote by $\UU$ the set of all unary constraints. \n\vs{-2}\n\item Let $\NZ$ be the set of all constraints $f$ of arity $k\geq1$ such that $f(x_1,x_2,\ldots,x_k)\neq0$ for all $(x_1,x_2,\ldots,x_k)\in\{0,1\}^k$. We succinctly call such functions {\em non-zero functions}. Notice that this case is different from the case where $f\not\equiv0$. Obviously, $\Delta_0,\Delta_1\not\in\NZ$ holds. 
\n\\vs{-2}\n\\item Let $\\DG$ denote the set of all constraints $f$ of arity $k\\geq1$ \nsuch that $f(x_1,x_2,\\ldots,x_k) = \\prod_{i=1}^{k}g_i(x_i)$ for certain unary constraints $g_1,g_2,\\ldots,g_k$. A constraint in $\\DG$ is called {\\em degenerate}. Obviously, $\\DG$ includes $\\UU$ as a proper subset. \n\\vs{-2}\n\\item Define $\\ED$ to be the set of functions $f$ of arity $k\\geq1$ such that $f(x_1,x_2,\\ldots,x_k) = \\left(\\prod_{i=1}^{\\ell_1}h_i(x_{j_i})\\right) \\left(\\prod_{i=1}^{\\ell_2} g_i(x_{m_i},x_{n_i})\\right)$ with $\\ell_1,\\ell_2\\geq0$, $\\ell_1+\\ell_2\\geq1$, and $1\\leq j_i,m_i,n_i\\leq k$, where each $h_i$ is a unary constraint and each $g_i$ is either \nthe binary equality $EQ$ or the disequality $XOR$. Clearly, $\\DG\\subseteq \\ED$ holds. The name ``$\\ED$'' refers to its key components, ``equality'' and ``disequality.'' See \\cite{CLX09} \nfor its basic property.\n\\vs{-2}\n\\item Let $\\IM$ be the set of all constraints $f\\not\\in\\NZ$ of arity $k\\geq1$ such that \n$f(x_1,x_2,\\ldots,x_k) = \\left(\\prod_{i=1}^{\\ell_1}h_i(x_{j_i})\\right) \\left(\\prod_{i=1}^{\\ell_2} Implies(x_{m_i},x_{n_i})\\right)$ with $\\ell_0,\\ell_1\\geq0$, $\\ell_1+\\ell_2\\geq1$, and $1\\leq j_i,m_i,n_i\\leq k$, where each $h_i$ is a unary constraint. \n\\end{enumerate}\n\nWe will present four simple properties of the above-mentioned \nsets of constraints. The first property concerns the set $\\NZ$ of non-zero constraints. \nNotice that non-zero constraints will play a quite essential role in \nLemma \\ref{affine-PP-induction} and Proposition \\ref{IM-XOR-IMP}. \nIn what follows, we claim that two sets $\\DG$ and $\\ED$ coincide with each other, when they are particularly restricted to non-zero constraints. \n \n\\begin{lemma}\\label{AG-vs-DD}\nLet $f$ be any constraint of arity $k\\geq1$ in $\\NZ$. It holds that \n$f\\in \\DG$ iff $f\\in\\ED$. \n\\end{lemma}\n\n\\begin{proof}\nLet $f$ be any non-zero constraint of arity $k$. 
\nNote that $f\\in \\NZ$ iff $|R_f|=2^k$, where $|R_f|$ is the cardinality of the set $R_f$. \nSince $\\DG\\subseteq \\ED$, it is enough to show \nthat $f\\in\\ED$ implies $f\\in \\DG$. Assume \nthat $f$ is in $\\ED$. Since $f$ is a product of certain constraints of the forms: $EQ$, $XOR$, and unary constraints. Since $|R_f|=2^k$, $f$ cannot be made of $EQ$ as well as $XOR$ as its ``factors,'' \nand thus it should be of the form \n$\\prod_{i=1}^{k}U_i(x_i)$, where each $U_i$ is a {\\em non-zero} unary constraint. We therefore conclude that $f$ is degenerate and it belongs to $\\DG$. \n\\end{proof}\n\n\\begin{lemma}\\label{R-f-cardinality-1}\nLet $f$ be any constraint. If $f\\not\\in\\DG$, then it holds that $|R_f|\\geq2$. \n\\end{lemma}\n\n\\begin{proof}\nWe prove the lemma by contrapositive. Take a constraint $f(x_1,x_2,\\ldots,x_k)$ of arity $k\\geq1$ and assume that $|R_f|\\leq 1$. \nWhen $|R_f|=0$, since $f$'s output is always zero, $f(x_1,\\ldots,x_k)$ can be expressed as, for instance, $\\Delta_0(x_1)\\Delta_1(x_1)$. On the contrary, when $|R_f|=1$, we assume that $R_f=\\{(a_1,a_2,\\ldots,a_k)\\}$ for a certain vector $(a_1,a_2,\\ldots,a_k)\\in\\{0,1\\}^k$. Let us define $b$ as \n$b=f(a_1,a_2,\\ldots,a_k)$. It is not difficult to show that $f(x_1,x_2,\\ldots,x_k)$ equals $b\\cdot \\prod_{i=1}^{k}\\Delta_{a_i}(x_i)$. Thus, $f$ belongs to $\\DG$, as required. \n\\end{proof}\n\nSeveral sets in the aforementioned list satisfy the {\\em closure property} under multiplication. For any two constraints $f$ and $g$ of arities $c$ and $d$, respectively, the notation $f\\cdot g$ denotes the function defined as follows. 
For any Boolean vector $(x_1,\ldots,x_k)\in\{0,1\}^k$, let $(f\cdot g)(x_{m_1},\ldots,x_{m_k}) = f(x_{i_1},\ldots,x_{i_c})g(x_{j_1},\ldots,x_{j_d})$ if $\{m_1,\ldots,m_k\} = \{i_1,\ldots,i_c\}\cup\{j_1,\ldots,j_d\}$, where the order of the indices in $\{m_1,\ldots,m_k\}$ should be pre-determined from $(i_1,\ldots,i_c)$ and $(j_1,\ldots,j_d)$ before multiplication. For instance, we obtain $(f\cdot g)(x_1,x_2,x_3,x_4)$ from $f(x_1,x_3,x_2)$ and $g(x_2,x_4,x_1)$. \n\n\begin{lemma}\label{closure-multiplication}\nFor any two constraints $f$ and $g$ in $\ED$, the constraint $f\cdot g$ is also in $\ED$. A similar result holds for $\DG$, $\NZ$, $\IM$, and $IMP$.\n\end{lemma}\n\n\begin{proof}\nAssume that $f,g\in\ED$. Note that $f$ and $g$ are both products of constraints, each of which has one of the following forms: $EQ$, $XOR$, and unary constraints. Clearly, the multiplied constraint $f\cdot g$ is a product of those factors, and hence it is in $\ED$. The other cases are similarly proven. \n\end{proof}\n\nExponentiation can be considered as a special case of multiplication. To express an exponentiation, we introduce the following notation: for any number $r\in\real-\{0\}$ and any constraint $f$ of arity $k$, let $f^r$ denote the function defined as $f^r(x_1,\ldots,x_k) = (f(x_1,\ldots,x_k))^r$ for any $k$-tuple $(x_1,\ldots,x_k)\in\{0,1\}^k$. \n\n\begin{lemma}\label{f-vs-f-power-in-PP}\nFor any number $m\in\nat^{+}$ and any constraint $f$, $f\in\ED$ iff $f^{m}\in\ED$. A similar result holds for $\DG$, $\NZ$, $\IM$, and $IMP$. \n\end{lemma}\n\n\begin{proof}\nLet $m\geq1$. Since $f^m$ is the $m$-fold product of $f$ with itself, by Lemma \ref{closure-multiplication}, $f\in\ED$ implies $f^m\in\ED$. Next, we intend to show that $f^m\in\ED$ implies $f\in\ED$. \nLet us assume that $f^m\in\ED$. By setting $g=f^m$, it holds that $f(x_1,\dots,x_n) = (g(x_1,\ldots,x_n))^{1\/m}$ for any vector $(x_1,\ldots,x_n)\in\{0,1\}^n$. 
\nNow, assume that $g = g_1\cdots g_k$, \nwhere each $g_i$ is one of $EQ$, $XOR$, and unary constraints. \nIf $g_i$ is either $EQ$ or $XOR$, then we define $h_i=g_i$. If $g_i$ is a unary function, define $h_i = (g_i)^{1\/m}$, which is also a unary constraint. Obviously, all $h_i$'s are well-defined and also \nbelong to $\ED$, because $\ED$ contains all unary constraints. \nSince $f = h_1\cdots h_k$, by the definition of $\ED$, we conclude that \n$f$ is in $\ED$. \n\nThe second part of the lemma can be similarly proven.\n\end{proof}\n\n\section{Typical Counting Problems}\label{sec:upper-bound}\n\nWe will discuss the approximation complexity of \nspecial counting problems that have arisen naturally in \nthe past literature. When we use complex numbers in the subsequent discussion, we always assume our special way of handling those numbers, as discussed in Section \ref{sec:basic-definition}. \n\nThe {\em counting satisfiability problem}, \#SAT, is the problem of counting the number of truth assignments that make a given propositional formula true. This problem was proven to \nbe complete for $\sharpp$ under AP-reduction \cite{DGGJ03}. \nDyer \etalc~\cite{DGJ10} further showed that \#SAT possesses computational power equivalent to $\sharpcsp(OR)$ under AP-reduction, namely, \n$\sharpcsp(OR) \APequiv \#\mathrm{SAT}$. \n\nNevertheless, to deal particularly with complex-weighted counting problems, \nit is desirable to introduce a complex-weighted version of $\#\mathrm{SAT}$. \nIn the following straightforward way, we define \n$\#\mathrm{SAT}_{\complex}$, a complex-weighted version of \n$\#\mathrm{SAT}$. Let $\phi$ be any propositional formula (with three logical connectives, $\neg$ (not), $\vee$ (or), and $\wedge$ (and)) and \nlet $V(\phi)$ be the set of all variables appearing in $\phi$. Let \n$\{w_x\}_{x\in V(\phi)}$ be any series of {\em node-weight functions} \n$w_{x}:\{0,1\}\rightarrow\complex-\{0\}$. 
\nGiven such a pair $(\phi,\{w_x\}_{x\in V(\phi)})$, $\#\mathrm{SAT}_{\complex}$ asks us to compute the sum of the weights \n$w(\sigma)$ over all truth assignments $\sigma$ \nsatisfying $\phi$, where $w(\sigma)$ denotes the product of all $w_{x}(\sigma(x))$ over all $x\in V(\phi)$. \nIf $w_x(\sigma(x))$ always equals $1$ for every pair of $\sigma$ and $x\in V(\phi)$, then we immediately obtain $\#\mathrm{SAT}$. This indicates that $\#\mathrm{SAT}_{\complex}$ naturally extends $\#\mathrm{SAT}$. \n\n\begin{lemma}\label{SAT-to-OR}\n$\#\mathrm{SAT}_{\complex}\APreduces \sharpcspstar(OR)$. \n\end{lemma}\n\nThe following proof is based on the proof of \cite[Lemma 6]{DGJ10}, which uses approximation results of \cite{DGGJ03} on the {\em counting independent set problem} $\#\mathrm{IS}$. A set $S$ of nodes in a graph $G$ is called {\em independent} if, for any pair of nodes in $S$, there is no edge connecting them.\nDyer \etalc~\cite{DGGJ03} showed that $\#\mathrm{IS}$ \nis AP-equivalent to $\#\mathrm{SAT}$. \nAs a complex analogue of $\#\mathrm{IS}$, we introduce $\#\mathrm{IS}_{\complex}$. An input \ninstance to $\#\mathrm{IS}_{\complex}$ is an undirected graph $G=(V,E)$ and a series $\{w_x\}_{x\in V}$ of node-weight functions with each $w_x$ mapping $\{0,1\}$ to $\complex-\{0\}$. An output of $\#\mathrm{IS}_{\complex}$ is the sum of the weights $w(S)$ over all independent sets $S$ of $G$, where $w(S)$ equals the product of the values $w_{x}(S(x))$ over all nodes $x\in V$, with $S(x)=1$ if $x\in S$ and $S(x)=0$ otherwise. \n\nTo describe the proof, we wish to introduce a new notation, which will appear again in Sections \ref{sec:main-theorem} and \ref{sec:elimination}. \nThe notation $\sharpcspplus(\FF)$ denotes the counting problem $\sharpcsp(\FF,\UU\cap\NZ)$. 
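For intuition, the defining sum of $\#\mathrm{SAT}_{\complex}$ can be evaluated by brute force on small formulas. The dictionary-based encoding below is our own illustration, not part of the formal framework.

```python
from itertools import product

def sat_c(formula, weights):
    """Brute-force #SAT_C: sum the weight w(sigma) over every truth
    assignment sigma satisfying `formula`.  `weights[x]` is the pair
    (w_x(0), w_x(1)) of non-zero complex node weights of variable x."""
    variables = sorted(weights)
    total = 0j
    for bits in product((0, 1), repeat=len(variables)):
        sigma = dict(zip(variables, bits))
        if formula(sigma):
            term = 1 + 0j
            for x in variables:
                term *= weights[x][sigma[x]]
            total += term
    return total

phi = lambda s: s['x'] or s['y']                 # the formula x OR y
print(sat_c(phi, {'x': (1, 1), 'y': (1, 1)}))    # (3+0j)
print(sat_c(phi, {'x': (1, 2j), 'y': (1, 1)}))   # (1+4j)
```

Setting every weight pair to $(1,1)$ recovers plain $\#\mathrm{SAT}$, matching the remark above that $\#\mathrm{SAT}_{\complex}$ extends $\#\mathrm{SAT}$.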
\n\n\\begin{proofof}{Lemma \\ref{SAT-to-OR}}\nWe can modify the construction of an AP-reduction from $\\#\\mathrm{SAT}$ to $\\#\\mathrm{IS}$, \ngiven in \\cite{DGGJ03}, by adding a node-weight function to each variable node. Hence, we instantly obtain $\\#\\mathrm{SAT}_{\\complex}\\APreduces \\#\\mathrm{IS}_{\\complex}$. We leave the details of the proof \nto the avid reader. Next, we claim that $\\#\\mathrm{IS}_{\\complex}$ and \n$\\sharpcspplus(NAND)$ are AP-equivalent. \nBecause this claim is a concrete example of how to relate \\#CSPs to more popular counting problems, here we include the proof of the claim. \n\n\\begin{claim}\\label{IS-equiv-NAND}\n$\\#\\mathrm{IS}_{\\complex} \\APequiv \\sharpcspplus(NAND)$.\n\\end{claim}\n\n\\begin{proof}\nWe want to show that $\\#\\mathrm{IS}_{\\complex}$ is AP-reducible to $\\sharpcspplus(NAND)$. Let $G=(V,E)$ and $\\{w_x\\}_{x\\in V}$ be any instance pair to $\\#\\mathrm{IS}_{\\complex}$. In the way described below, we will construct a constraint frame $\\Omega=(G',X|\\FF',\\pi)$ that becomes \nan input instance to $\\sharpcspstar(NAND)$, where $G'=(V|V',E')$ is an undirected bipartite graph whose $V'$ and $E'$ \n($\\subseteq V\\times V'$) are defined by the following procedure. Choose any edge $(x,y)\\in E$, \nprepare three new nodes $v_1,v_2,v_3$ labeled $NAND, w_{x}, w_{y}$, respectively, and place four edges $(x,v_1),(y,v_1),(x,v_2),(y,v_3)$ into $E'$. \nAt the same time, place these new nodes into $V'$. In case where variable $x$ ($y$, resp.) has been already used to insert a new node $v_2$ ($v_3$, resp.), we no longer need to add the node $v_2$ ($v_3$, resp.). \nWe define $X$ to be the set of all labels of the nodes in $V$ and define $\\FF'$ to be $\\{w_{x}\\}_{x\\in V}\\cup \\{NAND\\}$. A labeling function $\\pi$ is naturally induced from \n$G'$, $X$, and $\\FF'$ ans we omit its formal description. \n\nNow, we want to use variable assignment to compute $\\csp_{\\Omega}$. 
Given any independent set $S$ for $G$, we define its corresponding variable assignment $\\sigma_S$ as follows: for each variable node $x\\in V$, let $\\sigma_S(x) = S(x)$. \nNote that, for every edge $(x,y)$ in $E$, $x,y\\in S$ iff $NAND(\\sigma_S(x),\\sigma_S(y))=0$. Let $\\tilde{V}$ denote a subset of $V'$ whose elements have the label $NAND$. \nSince all unary constraints appearing as node labels in $V'$ are $w_x$'s, \n$w(S)$ coincides with $\\prod_{v\\in \\tilde{V}}\\prod_{x,y\\in E'(v)}f_{v}(\\sigma_{S}(x),\\sigma_{S}(y))\\cdot \\prod_{x\\in V}w_x(\\sigma_{S}(x))$, where each label $f_v$ of node $v$ is $NAND$. \n{}Using this equality, it is not difficult to show that $\\csp_{\\Omega}$ equals the outcome of $\\#\\mathrm{IS}_{\\complex}$ on the instance $(G,\\{w_x\\}_{x\\in V})$. Therefore, $\\#\\mathrm{IS}_{\\complex}$ is AP-reducible to $\\sharpcspplus(NAND)$. \n\nNext, we will construct an AP-reduction from $\\sharpcspplus(NAND)$ to $\\#\\mathrm{IS}_{\\complex}$. Given any input instance $\\Omega=(G,X|\\FF',\\pi)$ with $G=(V_1|V_2,E)$ to $\\sharpcspplus(NAND)$, \nwe first simplify $G$ as follows. Notice that $\\FF'$ is a finite subset of $\\{NAND\\}\\cup\\UU$. \nIf any two distinct nodes $v_1,v_2\\in V_2$ labeled $u_1,u_2\\in\\UU$, respectively, satisfy $E(v_1)=E(v_2)$, then we merge the two nodes into \none node with {\\em new} label $u'$, where $u'(x) = u_1(x)u_2(x)$.\nSimilarly, if $v_1,v_2\\in V_2$ with the same label $NAND$ satisfy $E(v_1)=E(v_2)$, then we delete the node $v_1$ and all its incident edges. \nBy abusing the notation, we denote the obtained graph by $G$. \n\n\\sloppy {}From the graph $G$, we define another graph $G'=(V_1,E')$ with $E' = \\{(x,y)\\in V_1\\times V_1 \\mid \\;\\;\\text{$\\exists v\\in V_2$ s.t. $v$ has label $NAND$ and $x,y\\in E(v)$}\\}$. \nLet $x$ be any variable that appears in $G'$. 
For each node $w$ in $V_1$ \nlabeled $x$, if $w$ is adjacent to a certain node whose label is a unary constraint, say, $u$, then define $w_x$ to be $u$; otherwise, define $w_x(z)=1$ for any $z\\in\\{0,1\\}$. Let $\\tilde{V}$ be the set of all nodes in $V_2$ whose labels are $NAND$. \nFix a variable assignment $\\sigma$ arbitrarily and define \n$S_{\\sigma} = \\{ x\\in V_1 \\mid \\sigma(x)=1 \\}$. It follows that \n$w(S_{\\sigma}) = \\prod_{v\\in V_2}f_{v}(\\sigma(x_{i_1}),\\ldots,\\sigma(x_{i_k}))$, where each $k$-tuple $(x_{i_1},\\ldots,x_{i_k})$ depends on the choice of $f_v$. \nThus, $\\sum_{\\sigma}w(S_{\\sigma})$ equals $\\csp_{\\Omega}$.\nMoreover, it holds that \n$\\prod_{v\\in V_2}f_{v}(\\sigma(x_{i_1}),\\ldots,\\sigma(x_{i_k})) = \n\\prod_{v\\in \\tilde{V}}\\prod_{x,y\\in E(v)} f_{v}(\\sigma(x),\\sigma(y)) \n\\cdot \\prod_{x\\in V_1}w_x(\\sigma(x))$. Hence, \n$\\prod_{v\\in V_2}f_{v}(\\sigma(x_{i_1}),\\ldots,\\sigma(x_{i_k})) \\neq0$ \niff $S_{\\sigma}$ is an independent set. \nThese conditions give the desired AP-reduction from $\\sharpcspplus(NAND)$ to $\\#\\mathrm{IS}_{\\complex}$. \nThis completes the proof of Claim \\ref{IS-equiv-NAND}.\n\\end{proof}\n\nNaturally, $\\sharpcspplus(NAND)$ is AP-reducible to $\\sharpcspstar(NAND)$. To complete the proof, we want to show that $\\sharpcspstar(NAND)\\APreduces \\sharpcspstar(OR)$. This is easily shown by, roughly speaking, exchanging the roles of $0$ and $1$ in variable assignments. More precisely, given an instance $\\Omega=(G,X|\\FF',\\pi)$ to $\\sharpcspstar(NAND)$, we build another instance $\\Omega'$ by replacing any unary constraint $u$ by $\\overline{u}$, where $\\overline{u}=[b,a]$ if $u=[a,b]$, and by replacing $NAND$ by $OR$. \nIt clearly holds that $\\csp_{\\Omega} = \\csp_{\\Omega'}$, and thus $\\sharpcspstar(NAND)\\APreduces \\sharpcspstar(OR)$. 
\n\\end{proofof}\n\nWe remark that, by carefully checking the above proof, we can AP-reduce $\\#\\mathrm{SAT}_{\\complex}$ to $\\sharpcspplus(OR)$ instead of $\\sharpcspstar(OR)$. \nFor another remark, we need two new notations. The first notation $\\sharpcspplus_{\\algebraic}(\\FF)$ indicates the counting problem \nobtained from $\\sharpcspplus(\\FF)$ under the restriction that input instances are limited to algebraic constraints. \nWhen the outcomes of all node-weight functions of $\\#\\mathrm{SAT}_{\\complex}$ are limited to algebraic complex numbers, we briefly write $\\#\\mathrm{SAT}_{\\algebraic}$. Similar to the first remark, we can prove that $\\#\\mathrm{SAT}_{\\algebraic}$ is AP-reducible to $\\sharpcspplus_{\\algebraic}(OR)$. This fact will be used in Section \\ref{sec:main-theorem}. \n\n\\section{T-Constructibility}\\label{sec:constructibility}\n\nOne of key technical tools of Dyer \\etalc~\\cite{DGJ09} in manipulating Boolean constraints is a notion of ``implementation,'' which is used to help establish certain AP-reductions among \\#CSPs with Boolean constraints. \nIn light of our AP-reducibility, we prefer a more ``operational'' or ``mechanical'' approach toward the manipulation of constraints in a rather systematic fashion. \nHere, we will present our key technical tool, called {\\em T-constructibility}, \nof constructing target constraints from a \ngiven set of presumably simpler constraints by applying repeatedly such mechanical operations, \nwhile maintaining the AP-reducibility. This key tool \nwill be frequently used in Section \\ref{sec:elementary-reduction} \nto establish several AP-reductions among \\#CSPs with constraints. \n\nIn an exact counting case of, \\eg Cai \\etalc~\\cite{CLX09x,CLX@,CLX09}, numerous ``gadget'' constructions were used to obtain required properties of constraints. 
Our systematic approach with T-constructibility naturally supports most gadget constructions, and the results obtained by them can be re-proven by appropriate applications of T-constructibility. \nThe minimal set of constraints that are T-constructed from a fixed set $\GG$ of ``basis'' constraints, denoted $CL_{T}^{*}(\GG)$, together with arbitrary free unary constraints is certainly an interesting object of study for promoting our understanding of AP-reducibility. \nAn advantage of taking such a systematic approach can be exemplified, for instance, by Lemma \ref{IMP-substitute}, in which we are able to argue the {\em closure property} under AP-reducibility (without the projection operation). This property is a key to the subsequent lemmas and propositions. \nThis line of study was later explored in \cite{BDGJ12}. \n\n\sloppy To pursue notational succinctness, we use the following notations in the rest of this paper. For any index $i\in[k]$ \nand any bit $c\in\{0,1\}$, \nlet the notation $f^{x_i=c}$ denote the function $g$ satisfying $g(x_1,\ldots,x_{i-1},x_{i+1},\ldots,x_k) = f(x_1,\ldots,x_{i-1},c,x_{i+1},\ldots,x_k)$ for any vector $(x_1,\ldots,x_{i-1},x_{i+1},\ldots,x_k)\in\{0,1\}^{k-1}$. \nSimilarly, for any two {\em distinct} \nindices $i,j\in[k]$, we denote by $f^{x_i=x_j}$ the function $g$ defined as $g(x_1,\ldots,x_{i-1},x_{i+1},\ldots,x_k) =\n f(x_1,\ldots,x_{i-1},x_{j},x_{i+1},\ldots,x_k)$. \nMoreover, let $f^{x_i=*}$ be the function $g$ defined as $g(x_1,\ldots,x_{i-1},x_{i+1},\ldots,x_k) = \sum_{x_i\in\{0,1\}}f(x_1,\ldots,x_{i-1},x_i,x_{i+1},\ldots,x_k)$, where $x_i$ is no longer a free variable. \nBy extending these notations naturally, we can write, \eg $f^{x_i=0,x_{m}=*}$ as the shorthand for $(f^{x_i=0})^{x_m=*}$ and $f^{x_i=1,x_m=0}$ for $(f^{x_i=1})^{x_m=0}$. 
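\nAs a quick check of these notations (a worked example of ours, computed directly from the above definitions), consider the binary constraint $OR=(0,1,1,1)$. We then have\n\[\nOR^{x_1=0}=[0,1]=\Delta_1, \qquad OR^{x_1=1}=[1,1], \qquad OR^{x_1=x_2}=[0,1], \qquad OR^{x_1=*}=[1,2],\n\]\nsince, for instance, $OR^{x_1=*}(x_2) = OR(0,x_2)+OR(1,x_2)$ takes the value $1$ at $x_2=0$ and the value $2$ at $x_2=1$. Likewise, the shorthand $OR^{x_1=1,x_2=*}$ denotes the constant $OR(1,0)+OR(1,1)=2$. 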
\n\nWe say that a constraint $f$ of arity $k$ is {\em T-constructible} (or {\em T-constructed}) from a constraint set $\GG$ if $f$ can be obtained, initially from constraints in $\GG$, by recursively applying a finite number (possibly zero) of the functional operations described below.\n\begin{enumerate}\vs{-1}\n\item {\sc Permutation:} for two indices $i,j\in[k]$ with $i<j$, $f$ is replaced by $g$, where $g$ is defined as $g(x_1,\ldots,x_k) = f(x_1,\ldots,x_{i-1},x_j,x_{i+1},\ldots,x_{j-1},x_i,x_{j+1},\ldots,x_k)$.\n\end{enumerate}\n\n(a) Assume that $|E(x_j)|>1$. For the desired $h$ stated in the lemma, we set $h=f^{x_i=1}$, which belongs to $COMP_1(f)$. \nNext, we want to show that $R_h$ has a good factor list. Let us define $L'=L-\{Implies(x_j,x_i)\}$. It is easy to show that $L'$ is a factor list for $R_h$. With this list $L'$, define $G_{h,L'}$ to be an imp graph of $R_h$ and $L'$. Note that $G_{h,L'}$ contains no node named $x_i$. \nObviously, every node in $G_{h,L'}$ is adjacent to at least one node in $G_{h,L'}$. \nMoreover, there is no cycle in $G_{h,L'}$ because any cycle in $G_{h,L'}$ becomes a cycle in $G_{f,L}$. Therefore, $L'$ is a good factor list for $R_h$. \n\n(b) Assume that $|E(x_j)|=1$. This means $E(x_j)=\{x_i\}$, and the graph $H=(\{x_i,x_j\},\{(x_j,x_i)\})$ forms a connected component of $G_{f,L}$. \nHere, we set $h=f^{x_i=1,x_j=1}$ so that $h$ belongs to $COMP_2(f)$. We define $L'=L-\{Implies(x_j,x_i)\}$, which becomes a factor list for $R_h$. \nNote that $L'$ cannot be empty because, otherwise, $L$ consists only of $Implies(x_j,x_i)$ and thus $k=2$ follows, a contradiction. \nNow, we claim that $L'$ is good. Let $G_{h,L'}$ be an imp graph of $R_h$, which has neither the node $x_i$ nor the node $x_j$. Note that every node \nin $G_{h,L'}$ is adjacent to at least one node because deleting the \nsubgraph $H$ does not affect the adjacency property of the other nodes \nin $G_{f,L}$. Thus, $L'$ is a good factor list for $R_h$. \n\n(2) Assume that Case (1) does not happen. Choose a variable $x_i$ so that $(x_j,x_i)\not\in E$ for any $j\in [k]-\{i\}$. Such a variable should exist because there is no cycle in $G_{f,L}$. 
The desired $h$ is now defined as \n$h= f^{x_{i}=0}$, which clearly falls into $COMP_1(f)$. \nLet $L' = L-\{Implies(x_i,x_j)\mid j\in[k]-\{i\}\}$. \nThis $L'$ becomes a factor list for $R_h$. If any node $x_j$ with $j\neq i$ were deleted from $G_{f,L}$, then $|E(x_j)|=1$ would follow and this $x_j$ would satisfy Case (1), a contradiction; hence, an imp graph of $R_h$ and $L'$ lacks only the node $x_i$. This ensures that the properties of $L$ are naturally inherited by $L'$; therefore, $L'$ is good. \n\end{proof}\n\nThe notion of good factor list is closely related to that of simple form. Exploiting this relationship, we can prove the following corollary, in which we decrease the arity of a given constraint while maintaining the imp-support property and non-membership in $\ED$. \n\n\begin{corollary}\label{support-IM-not-affine}\nLet $f$ be any constraint of arity $k\geq3$. \nAssume that $f$ is in simple form. \nIf $f$ has imp support and $f\not\in\ED$, then there exists a constraint $h$ of arity less than $k$ \nsuch that $h$ has imp support, $h\not\in\ED$, and $h\leq_{con}f$.\n\end{corollary}\n\n\begin{proof}\nLet $k\geq3$ and let $f$ be any arity-$k$ constraint having imp support. \nAssume that $f$ is in simple form but it is not in $\ED$. \nSince $f$ has imp support, by Lemma \ref{IMP-substitute}(1), \nevery constraint $h$ that is T-constructed from $f$ by applications of the pinning operation has imp support. Since $R_f\in IMP$, \nwe choose an imp-distinctive factor list $L$ for $R_f$. \nNote that every factor in $L$ is of the form $Implies$ because if $L$ contains a factor $\Delta_c$, where $c\in\{0,1\}$, then $M_f$ must contain an all-$c$ column, a contradiction against the simple-form property of $f$. \n\nTo appeal to Lemma \ref{good-factor-list}, we need to show that $L$ is a good factor list for $R_f$. Let $G_{f,L}$ denote an imp graph of $R_f$ and $L$. 
\nFirstly, we deal with a situation where there exists a variable that appears in no factor in $L$. We choose such a variable, say, $x_i$ and define $h= f^{x_i=0}$, which clearly belongs to $COMP_1(f)$. Moreover, the value of $x_i$ does not affect the computation of $f$; thus, it follows that $f^{x_i=0}(x_1,\\ldots,x_{i-1},x_{i+1},\\ldots,x_k) = f^{x_i=1}(x_1,\\ldots,x_{i-1},x_{i+1},\\ldots,x_k)$. \nTherefore, we obtain $f(x_1,\\ldots,x_k) = h(x_1,\\ldots,x_{i-1},x_{i+1},\\ldots,x_k)$, which implies that $h$ has imp support. \nSince $f\\not\\in\\ED$, we conclude that $h\\not\\in \\ED$. \nHereafter, we assume that every variable appears in \nat least one factor in $L$. \n\nSecondly, we will show that $G_{f,L}$ has no cycle. Suppose otherwise; namely, for a certain series $(x_{i_1},x_{i_2},\\ldots,x_{i_m})$ of variables, the set \n$\\{Implies(x_{i_j},x_{i_{j+1}}), Implies(x_{i_m},x_{i_1}) \\mid j\\in[m-1]\\}$ is included in $L$. Clearly, $M_f$ includes two identical columns $i_1$ and $i_m$, \nand thus $f$ cannot be in simple form, a contradiction. Therefore, \nno cycle exists in $G_{f,L}$. \nOverall, we conclude that $L$ is a good factor list for $R_f$. \nLemma \\ref{good-factor-list} then gives a constraint $h$ such that $h$ is T-constructed from $f$ by one or more applications of the pinning operation and $R_h$ has a good factor list. Therefore, $h$ should have imp support. Moreover, the definition of good factor list implies that $h$ should not belong to $\\ED$. The use of the pinning operation guarantees that \nthe arity of $h$ should be less than $k$. \n\\end{proof}\n\n\nFinally, we will give the proof of Proposition \\ref{no-affine-and-IM2}.\n\n\\begin{proofof}{Proposition \\ref{no-affine-and-IM2}}\nLet $f$ be any constraint of arity $k\\geq1$ and let $\\FF$ be any constraint set. \nWe will show by induction on $k$ that if $f$ has imp support but it is \nnot in $\\ED$ then \n$\\sharpcspstar(OR,\\FF)$ is AP-reduced to $\\sharpcspstar(f,\\FF)$. 
\nLet us assume that $f$ has imp support and $f\not\in\ED$. Note that $f\not\equiv0$ because, otherwise, $f$ belongs to $\ED$. \n\n[Basis Case: $k=1$] In this case, the proposition is trivially true, because all unary constraints are already in $\ED$. \n\n[Next Case: $k=2$] Assume that $f=(a,b,c,d)$ with \n$a,b,c,d\in\complex$. Since $f$ is a binary constraint, the imp-support property of $f$ makes $f$ belong to $\IM$. Since $f\not\equiv0$, \nCorollary \ref{aryty-2-IM-abcd} yields $bc=0$. \nNow, we examine the following three possible cases. \n\n(1) The first case is that $b=0$ but $c\neq0$. Let us examine all four possible forms of $f$. Write $u$ for the constraint $[c,d]$. (i) If \n$f=(0,0,c,0)$, then $f$ is clearly in $\ED$. (ii) Let $f=(0,0,c,d)$ with $d\neq0$. The value $f(x_1,x_2)$ actually equals $\Delta_{1}(x_1)u(x_2)$, \nand thus $f$ belongs to $\ED$. (iii) If $f=(a,0,c,0)$ with $a\neq0$, then \n$f$ has the form $u'(x_1)\Delta_0(x_2)$ with $u'=[a,c]$, implying $f\in\ED$. These three cases immediately lead to a contradiction against the assumption $f\not\in\ED$. \n(iv) The remaining case is that $f=(a,0,c,d)$ with $ad\neq0$. \nBy normalizing $f$ appropriately, we may assume that $f$ has the form $(1,0,c,d)$. Now, \nwe apply Lemma \ref{arity-2-implies-reduction} and then obtain the desired AP-reduction from $\sharpcspstar(OR,\FF)$ to $\sharpcspstar(f,\FF)$. \n\n(2) The second case where $b\neq0$ and $c=0$ is symmetric to Case (1) and is omitted. \n\n(3) Let us consider the third case where $b=c=0$. There are four possible choices for $f$: (i') $f=(a,0,0,0)$ with $a\neq0$, (ii') $f=(0,0,0,d)$ with $d\neq0$, (iii') $f=(a,0,0,d)$ with $ad\neq0$, and (iv') $f=(0,0,0,0)$. In all those four cases, clearly $f$ belongs to $\ED$, a contradiction. This completes the case of $k=2$. \n\n[Induction Case: $k\geq3$] \nAs the induction hypothesis, we assume that the proposition is true \nfor any constraint of arity less than $k$. 
\n\n(1) Assume that $f$ falls into $(IMP\\cap\\ED)\\cup\\{EQ_1\\}$ after appropriate normalization; in other words, $f$ equals $c\\cdot R$, where $c\\in\\complex$ and $R\\in (IMP\\cap\\ED)\\cup\\{EQ_1\\}$. There are two cases, $R\\in IMP\\cap\\ED$ or $R=EQ_1$, to consider. In either case, however, $f$ belongs to $\\ED$. This contradicts our assumption. \n\n(2) Assume that Case (1) does not occur. \nLemma \\ref{simple-form} then provides a relation $R$ in \n$(IMP\\cap \\ED)\\cup\\{EQ_1\\}$ \nand a constraint $g$ in simple form \nthat satisfy $g\\leq_{con}f$ and $f= R\\cdot g$. Moreover, the second part of Lemma \\ref{simple-form} implies that $g$ has imp support and $g$ does not belong to $\\ED$. \nFirstly, we consider the case where $R\\neq EQ_1$. Since $f\\neq g$, an execution \nof the sweeping procedure given in the proof of Lemma \\ref{simple-form} makes the arity of $g$ smaller than that of $f$. \nThe induction hypothesis therefore implies that $\\sharpcspstar(OR,\\FF)\\APreduces \\sharpcspstar(g,\\FF)$. \nSince $g\\leq_{con}f$, by Lemmas \\ref{AP-transitivity} and \\ref{constructibility}, we obtain $\\sharpcspstar(OR,\\FF)\\APreduces \\sharpcspstar(f,\\FF)$. \nSecondly, we consider the case of $R= EQ_1$. This case implies \n$f=g$, and thus $f$ should be in simple form. Appealing to \nCorollary \\ref{support-IM-not-affine}, we obtain a \nconstraint $h$ of arity smaller than $k$ satisfying that $h\\leq_{con}f$, $h\\not\\in\\ED$, and $h$ has imp support. Our induction hypothesis \nthen ensures that $\\sharpcspstar(OR,\\FF)\\APreduces \\sharpcspstar(h,\\FF)$ holds. Moreover, since $h\\leq_{con}f$, \n$\\sharpcspstar(h,\\FF)$ is AP-reduced to $\\sharpcspstar(f,\\FF)$ by Lemma \\ref{constructibility}. The desired conclusion of the proposition follows by combining those two AP-reductions. \n\\end{proofof}\n\n\nIn Proposition \\ref{no-affine-and-IM2}, \nwe have discussed constraints with imp support. 
Our second focal point is to discuss constraints that {\em lack} imp support, provided that they are chosen from outside $\ED\cup\NZ$. \n\n\begin{proposition}\label{IM-XOR-IMP}\nLet $f$ be any constraint not in $\ED\cup\NZ$. If $f$ has no imp support, \nthen $\sharpcspstar(OR,\FF)\APreduces \sharpcspstar(f,\FF)$ for any constraint set $\FF$. \n\end{proposition}\n\nThe proof of this proposition relies on Lemma \ref{arity-2-IM-support}, which gives a complete characterization of binary \nconstraints inside $\IM\cup\NZ$. The proposition is also easily proven with the help of Lemma \ref{IMP-substitute}. \n\n\begin{proofof}{Proposition \ref{IM-XOR-IMP}}\nLet $f$ be any constraint of arity $k\geq1$ and assume that $f$ has no imp support and $f\not\in\ED\cup\NZ$. \nOur proof proceeds by induction on $k$. \nThe base case $k=1$ is trivial since all unary constraints belong to \n$\ED$. Next, assume that $k=2$. Notice that $f$ cannot be in $\IM$ since $R_f\not\in IMP$ by Lemma \ref{binary-IM-IMP}. We then apply Lemma \ref{arity-2-IM-support} to $f$. \nIt then follows that $f$ must have one of the following forms: $(0,b,c,0)$, $(0,b,c,d)$, or $(a,b,c,0)$. Since $f\not\in\ED$, $f$ cannot be of the form $(0,b,c,0)$. In all the other cases, Lemma \ref{0-b-c-case} establishes \nan AP-reduction from $\sharpcspstar(OR,\FF)$ to $\sharpcspstar(f,\FF)$ for any constraint set $\FF$. \n\nFinally, assume that $k\geq3$. Now, we want to build \na constraint $g\not\in\ED\cup\NZ$ of arity two such that \n$g\leq_{con} f$ and $g$ has no imp support. \nSince $R_f\not\in IMP\cup\NZ$, \nLemma \ref{DGJ10-IMP-property} supplies two vectors \n$a=(a_1,\ldots,a_k)$ and $b=(b_1,\ldots,b_k)$ in $R_f$ satisfying \neither $a\wedge b\not\in R_f$ or $a\vee b\not\in R_f$ (or both). \nFirst, we will claim that (*) there are indices $i,j\in[k]$ such that $(a_i,b_i)=(0,1)$ and $(a_j,b_j)=(1,0)$. 
Assume otherwise; namely, \neither $(a_i,b_i)\in\{(0,1),(0,0),(1,1)\}$ for all $i\in[k]$ or $(a_i,b_i)\in\{(1,0),(0,0),(1,1)\}$ for all $i\in[k]$. Let us consider the case where $a\vee b\not\in R_f$. It easily \nfollows that either $a = a\vee b$ or $b=a\vee b$ holds. Since both $a$ and $b$ belong to $R_f$, this contradicts the assumption $a\vee b\not\in R_f$. The other case where $a\wedge b\not\in R_f$ is similarly treated. Therefore, the claim (*) should hold. \n\nHereafter, we assume that $a\vee b\not\in R_f$ since the other case \n(\ie $a\wedge b\not\in R_f$) is similarly handled. \nFor simplicity, let $(a_1,b_1)=(0,1)$ and $(a_2,b_2)=(1,0)$. Now, we recursively define a new constraint $g$. Initially, we set $f_2= f$. \nIf $f_{i-1}$ ($3\leq i\leq k$) has already been defined, \nthen we define $f_{i}$ as follows. \nFor each bit $c\in\{0,1\}$, if $(a_i,b_i)=(c,c)$, then set \n$f_{i}= f_{i-1}^{x_i=c}$. If $(a_i,b_i)=(a_1,b_1)$, then let \n$f_{i} = f_{i-1}^{x_i=x_1}$. If $(a_i,b_i)=(a_2,b_2)$, then let \n$f_{i} = f_{i-1}^{x_i=x_2}$. Finally, we define $g$ to be $f_{k}$. \nBy this construction of $g$, $(0,1)$ and $(1,0)$ are \nin $R_g$; however, $(1,1)$ \nis not in $R_g$ because $a\vee b= (1,1,c_3,c_4,\ldots,c_k)\not\in R_f$ (for certain bits $c_i$'s) implies $g(1,1)=0$. In summary, it holds that $g(0,1) g(1,0)\neq0$ and \n$g(0,0) g(1,1)=0$. Lemma \ref{arity-2-IM-support} then concludes \nthat $g$ is not in $\IM\cup\NZ$. In particular, since $g$ is of arity two, $g$ has no imp support by Lemma \ref{arity-2-IM-support}. \nMoreover, the above construction is actually a {\em T-construction}, which ensures that $g\leq_{con} f$. Because this T-construction obviously uses no projection operation, by Lemma \ref{IMP-substitute}(2), $f\not\in\ED$ implies $g\not\in\ED$. \nTo end our proof, we will claim that $g(0,0)\neq0$. Assume otherwise; namely, $g$ has the form $(0,x,y,0)$ with $xy\neq0$. 
Obviously, $g$ belongs to $\ED$, a contradiction. Hence, $g(0,0)\neq0$ holds. We then conclude that $g$ equals $(w,x,y,0)$ for certain non-zero constants $x,y,w$. By Lemma \ref{0-b-c-case}, it follows that $\sharpcspstar(OR,\FF)\APreduces \sharpcspstar(g,\FF)$. Since $g\leq_{con}f$, we obtain the desired consequence. \n\end{proofof}\n\n\n\section{Dichotomy Theorem}\label{sec:main-theorem}\n\nOur dichotomy theorem states that all counting problems of the form $\sharpcspstar(\FF)$ can be classified into exactly two categories, one of which consists of polynomial-time solvable problems and the other of which consists of $\sharpp_{\complex}$-hard problems, assuming that $\#\mathrm{SAT}_{\complex}\not\in\fp_{\complex}$. \nThis theorem is a step toward a complete analysis of constraints of a more general form than Boolean constraints (\eg \cite{DGJ10}). The theorem also gives an approximation version of the dichotomy theorem of Cai \etalc~\cite{CLX09} for exact counting problems. Here, we rephrase the theorem given in Section \ref{sec:introduction} as follows. \n\n\ms\n\n{\bf {\em Theorem 1.1}} (rephrased)\hs{2}\n{\em Let $\FF$ be any set of constraints. If \n$\FF\subseteq \ED$, then $\sharpcspstar(\FF)$ is in $\fp_{\complex}$. \nOtherwise, $\#\mathrm{SAT}_{\complex} \APreduces \sharpcspstar(\FF)$ holds.}\n\ms\n\nThrough Sections \ref{sec:constraint-set} to \ref{sec:IM2-support}, we have developed the necessary foundations for the proof of this dichotomy theorem. Now, we are ready to apply them properly to prove the theorem. The next proposition is the centerpiece of the proof of the theorem. To simplify the later discussion, however, the proposition targets only a single constraint, instead of a set of constraints as in the theorem. \n\n\n\begin{proposition}\label{key-proposition}\nLet $f$ be any constraint. 
\nIf $f$ is not in $\\ED$, then $\\sharpcspstar(OR,\\FF)\\APreduces \\sharpcspstar(f,\\FF)$ holds for any set $\\FF$ of constraints.\n\\end{proposition}\n\n\\begin{proof}\nLet $f$ be any constraint not in $\\ED$. Moreover, let $\\FF$ be any constraint set. We want to establish an AP-reduction from $\\sharpcspstar(OR,\\FF)$ to $\\sharpcspstar(f,\\FF)$. \nFirst, suppose that $f$ has imp support. \nSince $f\\not\\in\\ED$, we apply Proposition \\ref{no-affine-and-IM2} and instantly obtain the desired AP-reduction from \n$\\sharpcspstar(OR,\\FF)$ to $\\sharpcspstar(f,\\FF)$, as requested. \nNext, suppose that $f$ has no imp support. \nTo finish the proof, we hereafter consider two independent cases. \n\n[Case: $f\\not\\in\\NZ$] Since $f\\not\\in\\ED\\cup\\NZ$, Proposition \\ref{IM-XOR-IMP} leads to an AP-reduction from $\\sharpcspstar(OR,\\FF)$ to \n$\\sharpcspstar(f,\\FF)$. \n\n[Case: $f\\in \\NZ$] \nNotice that $f\\not\\in\\DG$ since $\\DG\\subseteq\\ED$. \nLemma \\ref{affine-PP-induction} provides a constraint \n$h=(1,x,y,z)$ satisfying that $xyz\\neq0$, $z\\neq xy$, and $h\\leq_{con}f$. \nTo this $h$, we apply Proposition \\ref{1-x-y-z-case}, from which it follows that \n$\\sharpcspstar(OR,\\FF)\\APreduces \\sharpcspstar(h,\\FF)$. Since $h\\leq_{con}f$, Lemma \\ref{T-con-to-AP-reduction} implies $\\sharpcspstar(h,\\FF)\\APreduces \\sharpcsp(f,\\FF)$. Combining those two AP-reductions, we obtain the desired AP-reduction from $\\sharpcspstar(OR,\\FF)$ to $\\sharpcspstar(f,\\FF)$. \n\nTherefore, we have completed the proof.\n\\end{proof}\n\nFinally, we give the long-awaited proof of Theorem \\ref{dichotomy-theorem} and accomplish the main task of this paper.\n\n\\begin{proofof}{Theorem \\ref{dichotomy-theorem}}\nLet $\\FF$ be any constraint set. If $\\FF\\subseteq\\ED$, then \nLemma \\ref{basic-case-FPC} implies that $\\sharpcspstar(\\FF)$ \nbelongs to $\\fp_{\\complex}$. \nHenceforth, we assume that $\\FF\\nsubseteq \\ED$. 
\n{}From this assumption, we choose a constraint $f\in\FF$ for which $f\not\in\ED$. \nProposition \ref{key-proposition} then yields the AP-reduction: $\sharpcspstar(OR)\APreduces \sharpcspstar(f)$. \nSince $f\in\FF$, it holds that \n$\sharpcspstar(f)\APreduces \sharpcspstar(\FF)$. By the transitivity of AP-reducibility, \n$\sharpcspstar(OR)\APreduces \sharpcspstar(\FF)$ follows. Note that, \nby Lemma \ref{SAT-to-OR}, we obtain $\#\mathrm{SAT}_{\complex} \APreduces \sharpcspstar(OR)$. Therefore, we conclude that $\#\mathrm{SAT}_{\complex}$ is AP-reducible to $\sharpcspstar(\FF)$. \n\end{proofof}\n\n\nAs demonstrated in Theorem \ref{dichotomy-theorem}, a free use of unary constraints helps us obtain a truly stronger claim---a dichotomy theorem---than the trichotomy theorem of Dyer \etalc~\cite{DGJ09} on unweighted \nBoolean \#CSPs. Is this phenomenon an indication that we could eventually prove a similar type of {\em dichotomy theorem} for all weighted Boolean \#CSPs? \nIn our dichotomy theorem, we have shown that all seemingly ``intractable'' \#CSPs are at least as hard as $\#\mathrm{SAT}_{\complex}$. Are those problems all AP-equivalent to $\#\mathrm{SAT}_{\complex}$? Those questions demonstrate that we still have a long way to go before acquiring a full understanding of the approximation complexity of the weighted \#CSPs. \n\n\nNext, we wish to prove Corollary \ref{algebraic-main-theorem}, which strengthens Theorem \ref{dichotomy-theorem} when limiting the free use of ``arbitrary'' constraints to ``algebraic'' constraints. Recall from Section \ref{sec:introduction} that an algebraic constraint outputs only algebraic complex numbers. Moreover, we recall the notations $\sharpcspplus_{\algebraic}(\FF)$ and $\#\mathrm{SAT}_{\algebraic}$ from Section \ref{sec:upper-bound}. Similarly, we write $\sharpcspstar_{\algebraic}(\FF)$ to denote $\sharpcspstar(\FF)$ whose instances are only algebraic constraints. 
\nNow, let us re-state the corollary given in Section \ref{sec:introduction}. \n\n\ms\n\n{\bf {\em Corollary 1.2}} (rephrased)\hs{2}\n{\em Let $\FF$ be any set of constraints. If $\FF\subseteq\ED$, then $\sharpcspplus_{\algebraic}(\FF)$ is in $\fp_{\algebraic}$; otherwise, $\#\mathrm{SAT}_{\algebraic}\APreduces \sharpcspplus_{\algebraic}(\FF)$ holds. }\n\ms\n\nEarlier, Dyer \etalc~\cite{DGJ09} demonstrated how to eliminate two constant constraints---$\Delta_0$ and $\Delta_1$---using randomized \napproximation schemes for unweighted Boolean \#CSPs. Similarly, \nwe can eliminate those two constraints, and thus prove the corollary, by approximating them by \ntwo non-zero constraints of the forms $[1,\lambda]$ and $[\lambda,1]$ with $|\lambda|<1$. \nIn Lemma \ref{equivalent-plus-star}, we will demonstrate how to eliminate from $\sharpcspstar_{\algebraic}(\FF)$ all unary constraints whose output values contain zeros. This elimination is made possible by a use of AP-reductions, and it exemplifies the significance of AP-reducibility. \n\n\begin{lemma}\label{equivalent-plus-star}\nFor any constraint set $\FF$, it holds that $\sharpcspstar_{\algebraic}(\FF) \APequiv \sharpcspplus_{\algebraic}(\FF)$. \n\end{lemma}\n\nAn argument of Dyer \etalc~\cite{DGGJ03} for their claim of eliminating both $\Delta_0$ and $\Delta_1$ exploits their use of non-negative integers. However, since our target is {\em arbitrary} (algebraic) complex numbers, the proof of Lemma \ref{equivalent-plus-star} demands quite a different argument. To make the paper readable, we postpone the proof until the last section. Finally, we will give the proof of Corollary \ref{algebraic-main-theorem} using Lemma \ref{equivalent-plus-star}. 
\n\n\\begin{proofof}{Corollary \\ref{algebraic-main-theorem}}\nUsing Lemma \\ref{equivalent-plus-star}, all results in this paper on $\\sharpcspstar(\\FF)$'s can be restated in terms of $\\sharpcspplus_{\\algebraic}(\\FF)$'s. Therefore, we obtain from Theorem \\ref{dichotomy-theorem} that $\\sharpcspplus_{\\algebraic}(\\FF)\\in\\fp_{\\algebraic}$ if $\\FF\\subseteq\\ED$, and $\\sharpcspplus_{\\algebraic}(OR)\\APreduces \\sharpcspplus_{\\algebraic}(\\FF)$ otherwise. It thus remains to show that $\\#\\mathrm{SAT}_{\\algebraic} \\APreduces \\sharpcspplus_{\\algebraic}(OR)$. \n\nAs remarked in the end of Section \\ref{sec:upper-bound}, following a similar argument given in the proof of Lemma \\ref{SAT-to-OR}, it is possible to prove that $\\#\\mathrm{SAT}_{\\algebraic}$ is AP-reducible to $\\sharpcspstar_{\\algebraic}(OR)$. By Lemma \\ref{equivalent-plus-star}, it follows that $\\#\\mathrm{SAT}_{\\algebraic} \\APreduces \\sharpcspplus_{\\algebraic}(OR)$. Therefore, the corollary holds. \n\\end{proofof}\n\n\\section{Proofs of Lemmas \\ref{constructibility} and \\ref{equivalent-plus-star}}\\label{sec:elimination}\n\nThis last section will fill the missing proofs of Sections \\ref{sec:constructibility} and \\ref{sec:main-theorem} to complete the proofs of our main theorem and its corollary. \nFirst, we will give the proof of Lemma \\ref{equivalent-plus-star}. \nA use of algebraic numbers in the lemma ensures the correctness of a randomized approximation scheme \nused in the proof of the lemma. Underlying ideas of the scheme \ncome from the proofs of \\cite[Lemma 10]{DGJ09} and \\cite[Theorem 3(2)]{Yam03}. Particularly, the latter relied on the following well-known \nlower bound of the absolute values of polynomials in algebraic numbers. \n\n\\begin{lemma}\\label{complex-lower-bound}{\\rm \\cite{Sto74}}\\hs{1}\nLet $\\alpha_1,\\ldots,\\alpha_m\\in\\algebraic$ and let $c$ be the degree of $\\rational(\\alpha_1,\\ldots,\\alpha_m)\/\\rational$. 
There exists a constant $e>0$ that satisfies the following statement for any complex number $\alpha$ of the form $\sum_{k}a_{k}\left(\prod_{i=1}^{m}\alpha_i^{k_i}\right)$, where $k=(k_1,\ldots,k_m)$ ranges over $[N_1]\times\cdots\times[N_m]$, $(N_1,\ldots,N_m)\in\nat^{m}$, and $a_k\in\integer$. If $\alpha\neq0$, then $|\alpha|\geq \left(\sum_{k}|a_k|\right)^{1-c}\prod_{i=1}^{m}e^{-cN_i}$. \n\end{lemma}\n\n\nNow, we start the proof of Lemma \ref{equivalent-plus-star}.\n\n\begin{proofof}{Lemma \ref{equivalent-plus-star}}\nSince any input instance to $\sharpcspplus_{\algebraic}(\FF)$ is obviously an instance to $\sharpcspstar_{\algebraic}(\FF)$, it easily follows that $\sharpcspplus_{\algebraic}(\FF)$ is AP-reducible to $\sharpcspstar_{\algebraic}(\FF)$. Hereafter, we wish to prove the other direction, namely, $\sharpcspstar_{\algebraic}(\FF)\APreduces \n\sharpcspplus_{\algebraic}(\FF)$. Since $\sharpcspstar_{\algebraic}(\FF)$ coincides with $\sharpcspplus_{\algebraic}(\FF,\Delta_0,\Delta_1)$, we wish to \ndemonstrate how to eliminate $\Delta_0$ from $\sharpcspplus_{\algebraic}(\FF,\Delta_0,\Delta_1)$. \nWithout loss of generality, we aim at proving \nthat $\sharpcspplus_{\algebraic}(\FF,\Delta_0)$ is AP-reducible to $\sharpcspplus_{\algebraic}(\FF)$. \n\nLet $\Omega=(G,X|\FF',\pi)$ be any constraint frame given as an input instance to $\sharpcspplus_{\algebraic}(\FF,\Delta_0)$, where $G=(V_1|V_2,E)$, $X=\{x_1,x_2,\ldots,x_n\}$, and $\FF'\subseteq \FF\cup\{\Delta_0\}$. Recall that $n$ is the number of distinct variables used in $G$. If $\FF$ contains $\Delta_0$, the lemma is trivially true. Henceforth, we assume that $\Delta_0\not\in \FF$.\nChoose any algebraic complex number $\lambda$ satisfying $0<|\lambda|<1$ and \ndefine $u=[1,\lambda]$, which is clearly in $\UU\cap\NZ$. 
\nFor later use, let $|\\Omega|$ denote $\\prod_{v\\in V_2}\\max\\{1,|f_v|\\}$, where $|f_v|= \\max\\{|f_v(x)| \\mid x\\in\\{0,1\\}^k\\}$ and $k$ is the arity of $f_v$. \n\nFirst, we modify the graph $G$ as follows. Let us select all nodes in $V_1$ that are adjacent to certain nodes in $V_2$ having the label $\\Delta_0$. \nWe first merge all selected variable nodes into a single node, say, \n$v$ with a ``new'' label, say, $x$, and then delete all the nodes labeled $\\Delta_0$ and their incident edges. Finally, we attach a ``new'' node labeled $\\Delta_0$ to the \nnode $v$ by an additional single edge. It is not difficult to show that this modified graph produces the same output value as its original one. In what follows, we assume that the constraint $\\Delta_0$ appears exactly once as a node label in the graph $G$ and it depends only on the variable $x$. \n\nLet $G_0$ be the graph obtained from $G$ by removing the unique node $\\Delta_0$. Its associated constraint frame is briefly denoted $\\Omega_0$. Note that $\\csp_{\\Omega_0}$ can be expressed as $\\sum_{x\\in\\{0,1\\}}h(x)$ using an appropriate complex-valued function $h$ depending on the value of \n$x$. With this $h$, $\\csp_{\\Omega}$ is calculated as $\\sum_{x\\in\\{0,1\\}}h(x)\\Delta_0(x)$, which obviously equals $h(0)$. \nMoreover, let $u_m = u^m$ for any fixed number $m\\in\\nat^{+}$. Denote by $G_m$ the graph obtained from $G$ by replacing $\\Delta_0$ by $u_{m}$ and let $\\Omega_m$ be its associated constraint frame. Since $u_m=[1,\\lambda^m]$, \nit holds that \n$\\csp_{\\Omega_m}=\\sum_{x}h(x)u_m(x) = h(0)+\\lambda^m h(1)$. \nLetting $K= h(1)$, we obtain $\\csp_{\\Omega} = \\csp_{\\Omega_m} - \\lambda^{m}K$. Note that, for each fixed variable assignment $\\sigma:X\\rightarrow\\{0,1\\}$, the product of the outcomes of all constraints is at most $|\\Omega|$. Since there are $2^n$ distinct variable assignments, $|K|$ is thus upper-bounded by \n$2^n|\\Omega|$. 
\n\nNext, we assume that $\\csp_{\\Omega}\\neq0$. Since all entries of any constraint in $\\FF'$ are taken from $\\algebraic$, we want to apply Lemma \\ref{complex-lower-bound}. To apply this lemma, however, we need to express the value $|\\csp_{\\Omega}|$ using three series $\\{a_{k}\\}_{k}$, $\\{\\alpha_{i}\\}_{i}$, and $\\{k_i\\}_{i}$ given in the lemma. Let us define them as follows. \nLet $I=\\{\\pair{v,w}\\mid v\\in V_2, w\\in\\{0,1\\}^{r}\\}$, where $r$ is the arity of $f_v$. Here, we assume a fixed enumeration of all elements in $I$. For each variable assignment $\\sigma:X\\rightarrow\\{0,1\\}$, we define a vector $k^{(\\sigma)}=(k^{(\\sigma)}_{i})_{i\\in I}$ as follows: for each $i=\\pair{v,w}\\in I$, let $k^{(\\sigma)}_{i} =1$ if $f_v$ depends on a certain variable series $(x_{i_1},\\ldots,x_{i_r})$ and $w$ equals $(\\sigma(x_{i_1}),\\ldots,\\sigma(x_{i_r}))$; otherwise, let $k^{(\\sigma)}_{i}=0$. \nMoreover, let $N_i$ ($i\\in I$) equal $1$ and set $N=\\prod_{i\\in I}[N_i]$. For any vector $k\\in N$, let $a_{k} = 1$ if there exists a valid assignment $\\sigma$ satisfying $k=k^{(\\sigma)}$; otherwise, let $a_{k}=0$. \nFinally, let $\\alpha_{\\pair{v,w}}= f_{v}(w)$, where $\\pair{v,w}\\in I$. \nBy these definitions, the value $\\csp_{\\Omega} = \\sum_{\\sigma}\\prod_{v\\in V_2}f_{v}(\\sigma(x_{i_1}),\\ldots,\\sigma(x_{i_r}))$ equals $\\sum_{k\\in N}a_{k}\\left(\\prod_{i\\in I}\\alpha^{k_i}_{i}\\right)$. \nNow, Lemma \\ref{complex-lower-bound} provides two constants $c,e>0$ for which $|\\csp_{\\Omega}|$ is lower-bounded by the value $\\left(\\sum_{k\\in N}a_k\\right)^{1-c}\\prod_{i\\in I}e^{-cN_i}$. For our purpose, we set $d= (1\/2)\\left(\\sum_{k\\in N}a_k\\right)^{1-c}\\prod_{i\\in I}e^{-cN_i}$, from which we obtain $|\\csp_{\\Omega}|>d$. \nFor convenience, whenever $d\\geq1$, we automatically reset $d$ to $1\/2$ so that we can always assume that $0<d<1$ and $|\\csp_{\\Omega}|>d$ hold. 
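The parameter $m$, the approximation parameter $\\delta$, its companion $\\delta'$, and the exact statements of Conditions (i) and (ii) are fixed in a part of the proof not reproduced in this excerpt. For orientation only, the bound $|K|\\leq 2^{n}|\\Omega|$ makes the required growth of $m$ easy to estimate; the following sufficient condition is a reconstruction under that reading, not the original text:
\\[
m \\;\\geq\\; \\frac{n\\ln 2+\\ln|\\Omega|+\\ln\\frac{1}{c'd}}{\\ln(1\/|\\lambda|)},
\\qquad c'=\\min\\{1-2^{-\\delta},\\,\\delta'\\},
\\]
which guarantees both $2^{n}|\\Omega||\\lambda|^{m}\\leq (1-2^{-\\delta})d$ and $2^{n}|\\Omega||\\lambda|^{m}\\leq \\delta' d$, the forms in which Conditions (i) and (ii) are invoked below. For the fixed constraint set $\\FF$, such an $m$ is polynomially bounded in the size of $\\Omega$.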
\nSince the oracle returns a $2^{\\delta}$-approximate solution $z$ for \n$\\csp_{\\Omega_m}$, it follows that $2^{-\\delta}|\\csp_{\\Omega_m}|\\leq |z| \\leq 2^{\\delta}|\\csp_{\\Omega_m}|$. \nNow, we want to show that (iii) $2^{-\\varepsilon}|\\csp_{\\Omega}|\\leq 2^{-\\delta}(|\\csp_{\\Omega}|-|\\lambda^mK|)$ and \n(iv) $2^{\\delta}(|\\csp_{\\Omega}|+|\\lambda^mK|)\\leq 2^{\\varepsilon}|\\csp_{\\Omega}|$, because these bounds together imply \n\\[\n2^{-\\varepsilon}|\\csp_{\\Omega}|\\leq 2^{-\\delta}(|\\csp_{\\Omega}|-|\\lambda^mK|) \\leq |z| \n\\leq 2^{\\delta}(|\\csp_{\\Omega}|+|\\lambda^mK|)\\leq 2^{\\varepsilon}|\\csp_{\\Omega}|.\n\\]\nIn other words, $2^{-\\varepsilon}\\leq \\left| z\/\\csp_{\\Omega} \\right|\\leq 2^{\\varepsilon}$ holds. \nOur next task is to prove Conditions (iii) and (iv). \nCondition (iii) is equivalent to $|\\lambda^mK|\\leq (1-2^{-\\delta})|\\csp_{\\Omega}|$, whereas Condition (iv) is equivalent to $|\\lambda^mK| \\leq (2^{\\delta}-1)|\\csp_{\\Omega}|$. \nSince $2^{\\delta}-1\\geq 1-2^{-\\delta}$ holds for our choice of $\\delta$, Condition (iv) follows instantly from Condition (iii). \nBy Condition (i), we obtain $|\\lambda^mK|\\leq 2^n|\\Omega||\\lambda|^m\\leq (1-2^{-\\delta})d$. This immediately implies Condition (iii) since $|\\csp_{\\Omega}|>d$. \n\nTo complete the proof, we still need to show that $|\\arg(z\/\\csp_{\\Omega})|\\leq\\varepsilon$ whenever $\\csp_{\\Omega}\\neq0$. Let us assume that $\\csp_{\\Omega}\\neq0$. Since $z$ is an output of $M$ on the input $(\\Omega_{m},1\/\\delta)$, it holds that $|\\arg(z)-\\arg(\\csp_{\\Omega_m})|\\leq \\delta$. Now, we set $\\theta=|\\arg(\\csp_{\\Omega_m}) - \\arg(\\csp_{\\Omega})|$. Notice that the value $\\theta$ represents an angle in the complex plane between two ``vectors'' $\\csp_{\\Omega_m}$ and $\\csp_{\\Omega}$. 
Since $\\csp_{\\Omega_m} = \\csp_{\\Omega}+\\lambda^mK$, the value $\\theta$ is maximized when the vector $\\lambda^mK$ is perpendicular to the line extending the vector $\\csp_{\\Omega_m}$. This implies that $|\\csp_{\\Omega}|\\sin\\theta \\leq |\\lambda^mK|$. \nCondition (ii) implies that $|\\lambda^mK|\\leq 2^n|\\Omega||\\lambda|^m\\leq \\delta'd$. Because $|\\csp_{\\Omega}|>d$, we also obtain $\\sin\\theta\\leq \\frac{|\\lambda^mK|}{|\\csp_{\\Omega}|}\\leq \\frac{\\delta' d}{d} = \\delta'$. Since $\\delta'=\\frac{3}{\\pi}\\delta = \\frac{3}{2\\pi}\\varepsilon<\\frac{1}{2}$, we may assume that $0\\leq \\theta\\leq \\frac{\\pi}{6}$. Within this range, it always holds that $\\frac{3}{\\pi}\\theta \\leq \\sin\\theta$. Therefore, we conclude that $\\theta \\leq \\frac{\\pi}{3}\\sin\\theta \\leq \\frac{\\pi}{3}\\delta' = \\delta$. By the triangle inequality, it thus follows that \n\\[\n|\\arg(z)-\\arg(\\csp_{\\Omega})| \\leq |\\arg(z)-\\arg(\\csp_{\\Omega_m})| + |\\arg(\\csp_{\\Omega_m})-\\arg(\\csp_{\\Omega})|\\leq \\delta+\\delta = \\varepsilon.\n\\]\n\nBy following the above argument closely, it is also possible to prove that $\\sharpcspplus_{\\algebraic}(\\FF,\\Delta_1)$ is AP-reducible to $\\sharpcspplus_{\\algebraic}(\\FF)$. Therefore, we have completed the proof of the lemma. \n\\end{proofof}\n\nIn our argument toward the dichotomy theorem, we have omitted the proof of Lemma \\ref{constructibility}, which shows a fundamental property of T-constructibility. Now, we will give the proof of the lemma. The proof \nwill proceed by induction on the number of operations applied to construct a target constraint. \n\n\\begin{proofof}{Lemma \\ref{constructibility}}\nLet $\\FF$ be any set of constraints. For simplicity, assume that $f$ is obtained from $g$ (or $\\{g_1,g_2\\}$ in the case of the multiplication operation) \nby applying exactly one of the seven operations described in Section \\ref{sec:constructibility}. 
Our purpose is to show that \n$\\sharpcspstar(f,\\FF)$ is AP-reducible to $\\sharpcspstar(g,\\FF)$. \nNotationally, we set $\\Omega=(G,X|\\HH,\\pi)$ \nand $\\Omega'=(G',X'|\\HH',\\pi')$ to be any \nconstraint frames associated with $g$ and $f$, respectively. Note that \n$\\HH$ and $\\HH'$ are finite subsets of $\\{g\\}\\cup\\FF\\cup\\UU$ and \n$\\{f\\}\\cup\\FF\\cup\\UU$, and $X$ and $X'$ are both finite sets of Boolean variables. To improve readability, we assume that $X=X'=\\{x_1,x_2,\\ldots,x_n\\}$. Let $\\epsilon$ be any error tolerance parameter. \nFor each operation, we want to explain how to produce $G$ and $\\pi$ from $G'$ and $\\pi'$ in polynomial time so that, after making a query $(\\Omega,1\/\\epsilon)$ to any oracle (which is a randomized approximation scheme solving $\\sharpcspstar(g,\\FF)$), from its oracle answer for $\\csp_{\\Omega}$, we can compute a $2^{\\epsilon}$-approximate solution for $\\csp_{\\Omega'}$ with high probability. This procedure indicates that $\\sharpcspstar(f,\\FF)\\APreduces \\sharpcspstar(g,\\FF)$. For simplicity, we will omit mention of $\\epsilon$ in the following description. \n\n\\s\n\n[{\\sc Permutation}] Assume that $f$ is obtained from $g$ by exchanging two indices $i$ and $j$ of variables $\\{x_i,x_j\\}$. \n{}From $G'$, we build $G$ by swapping only the labels $x_i$ and $x_j$ of the corresponding nodes (without changing any edge incident on them). The labeling function $\\pi$ is also naturally induced from $\\pi'$. \nClearly, this step requires linear time. \nOur underlying randomized approximation scheme $N$ works as follows: it first constructs $\\Omega$ from $\\Omega'$, makes a single query to obtain a $2^{\\epsilon}$-approximate solution $z$ for $\\csp_{\\Omega}$ from the oracle for $\\sharpcspstar(g,\\FF)$, and outputs $z$ instantly. \nSince $\\csp_{\\Omega'} = \\csp_{\\Omega}$, the output of $N$ is also a $2^{\\epsilon}$-approximate solution for $\\csp_{\\Omega'}$. 
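As a sanity check on the {\\sc Permutation} case, a brute-force evaluator confirms that renaming variables leaves the value $\\csp_{\\Omega}$ unchanged. The encoding of a constraint frame as a list of (function, scope) pairs below is our own simplification, not the paper's notation.

```python
from itertools import product

def csp(n_vars, constraints):
    """Brute-force csp value: the sum, over all 0/1 assignments, of the
    product of constraint values; constraints are (function, scope) pairs."""
    total = 0
    for sigma in product((0, 1), repeat=n_vars):
        val = 1
        for f, scope in constraints:
            val *= f(*(sigma[i] for i in scope))
        total += val
    return total

# A tiny frame: one complex-valued binary constraint used on two scopes.
g = lambda a, b: (1 + 2j) if (a, b) == (0, 1) else complex(a + b)
omega = [(g, (0, 1)), (g, (1, 2))]

# Exchange the roles of variables x_0 and x_2 everywhere (Permutation case).
swap = {0: 2, 1: 1, 2: 0}
omega_p = [(f, tuple(swap[i] for i in scope)) for f, scope in omega]
assert csp(3, omega) == csp(3, omega_p)   # the csp value is unchanged
```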
\n\n[{\\sc Pinning}] Let $f=g^{x_i=c}$ for $i\\in[k]$ and $c\\in\\{0,1\\}$. {}From $G'$, we construct $G$ in polynomial time as follows: append a new node whose label is $\\Delta_c$ and connect it to the node labeled $x_i$ by a new edge. Because $\\csp_{\\Omega'} = \\csp_{\\Omega}$ holds, an algorithm similar to the one in the previous case can approximate $\\csp_{\\Omega'}$. \n\n[{\\sc Projection}] Let $f = g^{x_i=*}$ with index $i\\in[k]$. \nNotice that $f$ does not have the variable $x_i$. For simplicity, assume that $G'$ has no node with the label $x_i$. \nLet $V'$ denote the set of all nodes having the label $f$ in $G'$. \nNow, we construct $G$ from $G'$ by adding a new node labeled $x_i$ to $V_1$, \nby replacing the label $f$ by $g$, and by connecting the node $x_i$ to all nodes in $V'$ by new single edges. This transformation implies that $\\csp_{\\Omega'} = \\csp_{\\Omega}$. The rest is similar to the previous cases. \n\n[{\\sc Linking}] Let $f=g^{x_i=x_j}$ and assume that $i<j$.\n\nLet $X_1,X_2,\\ldots$ be i.i.d.\\ observations with common density $p_0$ under hypothesis $H_0$ and $p_1$ under hypothesis $H_1$, and let $S_n=\\sum_{k=1}^{n}\\log\\frac{p_0(X_k)}{p_1(X_k)}$ denote the accumulated log-likelihood ratio. For a sequential hypothesis test (SHT) $(\\delta,T)$, we write $P_{1|0}(\\delta,T)=P_0(\\delta=1)$ and $P_{0|1}(\\delta,T)=P_1(\\delta=0)$ for its two error probabilities. For thresholds $\\alpha,\\beta>0$, the sequential probability ratio test (SPRT) $(\\delta,T)$ with parameters $(\\alpha,\\beta)$ is given by\n$$\n\\delta=\n\\begin{cases}\n0&\\mbox{if $S_T>\\beta$}\\\\\n1&\\mbox{if $S_T<-\\alpha$},\n\\end{cases}\n$$\nwhere $T=\\inf\\{n\\ge 1:S_n\\notin[-\\alpha,\\beta]\\}$.\n\nIn this paper, we consider two kinds of constraints on the sample size $T$. The first is the {\\em probabilistic constraint}; that is, for any integer $n$ and error tolerance $0<\\varepsilon<1$, the sample size $T$ satisfies $\\max_{i=0,1}{P_i(T> n)}\\le \\varepsilon$. The second is the {\\em expectation constraint}; that is, for any integer $n$, $\\max_{i=0,1}{{\\mathbb{E}} _{P_i}[T]}\\le n$. 
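The SPRT and the two constraints above can be illustrated with a short Monte-Carlo sketch; the Gaussian densities, thresholds, and run counts below are illustrative choices of ours.

```python
import random

# SPRT with thresholds (-alpha, beta) for N(1,1) (hypothesis H_0) versus
# N(-1,1) (hypothesis H_1); the log-likelihood ratio of one sample x
# simplifies to log p0(x)/p1(x) = 2x for this pair of densities.
def sprt(alpha, beta, theta_true, rng):
    s, n = 0.0, 0
    while -alpha <= s <= beta:     # T = first exit of S_n from [-alpha, beta]
        s += 2.0 * rng.gauss(theta_true, 1.0)
        n += 1
    return (0 if s > beta else 1), n   # decide 0 iff S_T > beta

rng = random.Random(0)
runs = [sprt(8.0, 8.0, 1.0, rng) for _ in range(200)]   # data drawn from H_0
avg_T = sum(n for _, n in runs) / len(runs)
errors = sum(d for d, _ in runs)   # declaring 1 under H_0 is an error
assert errors <= 5 and 1.0 <= avg_T <= 20.0   # rare errors, short tests
```

Under the expectation constraint one tunes $(\\alpha,\\beta)$ so that $\\max_{i=0,1}{\\mathbb{E}}_{P_i}[T]$ meets the budget $n$, whereas under the probabilistic constraint one instead controls the tail $\\max_{i=0,1}P_i(T>n)$.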
We say that an error exponent pair $(E_0, E_1)$ is {\\em achievable under the expectation constraint} if there exists a sequence of SHTs $\\{(\\delta_n,T_n)\\}_{n=1}^{\\infty}$ with $\\max_{i=0,1}{{\\mathbb{E}} _{P_i}[T_n]}\\le n$ such that\n\\begin{align*}\n \\liminf_{n\\to\\infty}\\frac{1}{n}\\log \\frac{1}{P_{1|0}(\\delta_n,T_n)}&\\ge E_0 ,\\quad \\mbox{and}\\\\\n \\liminf_{n\\to\\infty}\\frac{1}{n}\\log \\frac{1}{P_{0|1}(\\delta_n,T_n)}&\\ge E_1.\n\\end{align*}\nThe set of all achievable $(E_0, E_1)$ is denoted by ${\\cal E}(P_0,P_1)$. \nIt has been shown in~\\cite{WaldWolf} that \n$${\\cal E}(P_0,P_1) =\\{(E_0, E_1):E_0E_1\\le\nD(P_1\\|P_0)D(P_0\\|P_1)\\}.$$\nThe error exponent pair $(E_0,E_1)=(D(P_1\\|P_0),D(P_0\\|P_1))$ can be achieved by a sequence of SPRTs. In this paper, we are concerned with the {\\em speed} or {\\em rate of convergence} of the error exponents to the corner point $(D(P_1\\| P_0), D(P_0\\| P_1))$ of the achievable region under the two kinds of constraints on the sample size $T_n$. The rates of convergence are formally defined as follows:\n\n\\begin{definition} For fixed $(\\lambda , \\varepsilon)\\in[0,1]\\times (0,1)$ and an SHT $(\\delta_n,T_n)$, \nlet \\begin{align}\n& g_n(\\lambda,\\varepsilon|\\delta_n,T_n)\\nonumber\\\\\n &\\;=\\lambda\\left(\\frac{1}{\\sqrt{n}}\\log \\frac{1}{P_{1|0}(\\delta_n, T_n)}-\\sqrt{n} \\,D(P_1\\|P_0)\\right)\\notag\\\\\n&\\quad+(1\\!-\\!\\lambda)\\left(\\frac{1}{\\sqrt{n}}\\log \\frac{1}{P_{0|1}(\\delta_n, T_n)}\\! 
-\\!\\sqrt{n}\\,D(P_0\\|P_1)\\right)\\notag\n\\end{align}\nand \n\\begin{align}\\label{probabilistic1}\n\\hspace{-6mm}G_n(\\lambda,\\varepsilon)&=\\sup_{ (\\delta_n, T_n):\\max_{i=0,1}P_i( T_n> n)\\le\\varepsilon} g_n(\\lambda,\\varepsilon|\\delta_n,T_n).\n\\end{align}\nLet $\n\\overline{G}(\\lambda,\\varepsilon)=\\limsup_{n\\to\\infty}G_n(\\lambda,\\varepsilon)$ and $\\underline{G}(\\lambda,\\varepsilon)=\\liminf_{n\\to\\infty}G_n(\\lambda,\\varepsilon).\n$\nIf \n$\\overline{G}(\\lambda,\\varepsilon)=\\underline{G}(\\lambda,\\varepsilon)$, then we call this common value the {\\em second-order exponent of SHT under the probabilistic constraint} and we denote it simply as $G(\\lambda,\\varepsilon)$. \n\\end{definition}\n\n\\begin{definition} For fixed $\\lambda\\in[0,1]$ and an SHT $(\\delta_n,T_n)$, \nlet\n\\begin{align}\nf_n(\\lambda|\\delta_n,T_n)&=\\lambda\\left(\\log {P_{1|0}(\\delta_n, T_n)}+nD(P_1\\|P_0)\\right)\\notag\\\\\n&\\quad+(1-\\lambda)\\left(\\log {P_{0|1}(\\delta_n, T_n)}+nD(P_0\\|P_1)\\right)\\notag\n\\end{align}\nand\n \\begin{align}\\label{expectationconstraint}\nF_n(\\lambda)&=\\sup_{ (\\delta_n, T_n):\\max_{i=0,1}{\\mathbb{E}}_{P_i}[ T_n]\\le n} f_n(\\lambda|\\delta_n,T_n).\n\\end{align}\nLet $\\overline{F}(\\lambda)=\\limsup_{n\\to\\infty}F_n(\\lambda)$ and $\\underline{F}(\\lambda)=\\liminf_{n\\to\\infty}F_n(\\lambda).\n$\nIf $\\overline{F}(\\lambda)=\\underline{F}(\\lambda)$, then we call this common value the {\\em second-order exponent of SHT under the expectation constraint} and we denote it simply as $F(\\lambda)$.\\footnote{We note that $G_n(\\lambda, \\varepsilon)$ and $F_n(\\lambda)$ in \\eqref{probabilistic1} and \\eqref{expectationconstraint} respectively are defined with opposing signs; this is to ensure that the results are stated as cleanly as possible. The normalizations of $g_n(\\lambda,\\varepsilon|\\delta_n,T_n)$ and $f_n(\\lambda|\\delta_n,T_n)$ by different functions of the expected length $n$ are also different. 
} \n\\end{definition}\n\n \nThroughout the paper, $\\Phi(\\cdot)$ is used to denote the cumulative distribution function of a standard Gaussian and $\\Phi^{-1}(\\cdot)$ is used to denote the generalized inverse of $\\Phi(\\cdot)$.\n\\section{Main Results}\nOur first theorem characterizes the second-order exponent under the probabilistic constraint on the sample size. For $i=0,1$, we define the {\\em relative entropy variance}~\\cite{Vincentbook} between $P_i$ and $P_{1-i}$ as \n\\begin{align}\nV(P_i\\|P_{1-i})&={\\mathbb{E}}_{P_i}\\left[\\bigg(\\log\\frac{p_{i}(X_1)}{p_{1-i}(X_1)}-D(P_i\\|P_{1-i})\\bigg)^2\\right]\\notag\\\\\n&= \\mathrm{Var}_{ P_i}\\left[\\log\\frac{p_{i}(X_1)}{p_{1-i}(X_1)} \\right].\\notag\n\\end{align}\n\\begin{thm}\\label{probabilistic-thm}\nLet $P_0$ and $P_1$ be such that \n\\begin{align}\\label{thirdmoment}\n\\max_{i=0,1}{\\mathbb{E}}_{P_i}\\left[\\Big|\\log\\frac{p_0(X_1)}{p_{1}(X_1)}\\Big|^3\\right]<\\infty.\n\\end{align} Then, for every $\\lambda\\in[0,1]$ and $0<\\varepsilon<1$, we have\n\\begin{align}\\label{probabilistic}\n G(\\lambda,\\varepsilon) &=\\overline{G}(\\lambda,\\varepsilon)=\\underline{G}(\\lambda,\\varepsilon)\\notag\\\\\n&= \\lambda \\sqrt{V(P_1\\|P_0)}\\Phi^{-1}(\\varepsilon) \\notag\\\\\n&\\qquad+(1-\\lambda)\\sqrt{V(P_0\\|P_1)}\\Phi^{-1}(\\varepsilon).\n\\end{align}\n\\end{thm}\nThe proof of Theorem~\\ref{probabilistic-thm} can be found in Section~\\ref{sec:prf_prob} and essentially relies on the Berry-Esseen theorem for the maximal sum of independent random variables~\\cite{CLTmaximal}.\n\nOur second main theorem concerns the second-order exponent under the expectation constraint. To set things up before stating our result, we introduce a few definitions. A real-valued random variable $Z$ is said to be {\\em arithmetic} if there exists a positive real number $d$ such that $P(Z\\in d\\mathbb{Z})=1$; otherwise $Z$ is said to be {\\em non-arithmetic}. 
If $Z$ is arithmetic, then the smallest positive $d$ such that $P(Z\\in d\\mathbb{Z})=1$ is called the {\\em span} of $Z$.\n\n\n\nLet $\\{\\alpha_k\\}_{k=1}^{\\infty}$ and $\\{\\beta_k\\}_{k=1}^{\\infty}$ be two increasing sequences of positive real numbers such that $\\alpha_k\\to \\infty$ and $\\beta_k\\to \\infty$ as $k\\to \\infty$. Let ${T}(\\beta_k)=\\inf\\{n\\ge 1:S_n>\\beta_k\\}$ and $\\tilde{T}(\\alpha_k)=\\inf\\{n\\ge 1:-S_n>\\alpha_k\\}$. Furthermore, let $R_k=S_{T(\\beta_k)}-\\beta_k$ and $\\tilde{R}_k=-S_{\\tilde{T}(\\alpha_k)}-\\alpha_k$. From~\\cite[Theorem~2.3, p.~18]{nonlinearrenewaltheory}, it follows that \n\\begin{itemize}\n\\item\nif the true hypothesis is $H_0$, $\\{ {R}_k\\}_{k=1}^{\\infty}$ converges in distribution to some random variable $ {R}$ and the limit is independent of the choice of $\\{\\beta_k\\}_{k=1}^{\\infty}$;\n\\item \nif the true hypothesis is $H_1$, $\\{\\tilde{R}_k\\}_{k=1}^{\\infty}$ converges in distribution to some random variable $\\tilde{R}$ and the limit is independent of the choice of $\\{\\alpha_k\\}_{k=1}^{\\infty}$.\n\\end{itemize}\nDefine \n\\begin{alignat}{2}\nA(P_0,P_1) &={\\mathbb{E}}[ {R}],&\\qquad\\tilde{A}(P_0,P_1) &={\\mathbb{E}}[\\tilde{R}], \\notag\\\\\n B(P_0,P_1)&=\\log{\\mathbb{E}}[e^{-R}],&\\qquad\\tilde{B}(P_0,P_1)&=\\log{\\mathbb{E}}[e^{-\\tilde{R}}].\\notag\n\\end{alignat}\nWe note that these quantities are, in general, not symmetric in their arguments, i.e., $\\tilde{A}(P_0,P_1 )\\ne A(P_1,P_0)$ and $\\tilde{B}(P_0,P_1)\\ne B(P_1,P_0)$.\n \\begin{thm}\\label{expectation-thm} \nLet $P_0$ and $P_1$ be such that $$\\max_{i=0,1}{\\mathbb{E}}_{P_i}\\left[\\Big|\\log\\frac{p_0(X_1)}{p_1(X_1)}\\Big|^2\\right]<\\infty$$ and $\\log\\frac{p_0(X_1)}{p_1(X_1)}$ is non-arithmetic when $X_1\\sim P_0$. 
Then for every $\\lambda\\in[0,1]$, \n \\begin{align}\\label{thm2}\nF(\\lambda)&= \\overline{F}(\\lambda)=\\underline{F}(\\lambda)\\notag\\\\\n&=\\lambda \\big(\\tilde{A}(P_0,P_1)+\\tilde{B}(P_0,P_1)\\big)\\notag\\\\\n&\\hspace{0.4cm}+(1-\\lambda)\\big(A(P_0,P_1)+ B(P_0,P_1)\\big).\n\\end{align}\n\\end{thm}\nTheorem \\ref{expectation-thm} is proved in Section~\\ref{sec:prf_exp} and relies on results in nonlinear renewal theory~\\cite{durrettprobability, nonlinearrenewaltheory} as well as key properties of the SPRT~\\cite{ferguson, SequentialAnalysis, Lotov86, AsymptoticsSPRT}.\n \n\\begin{remark}\nIf $X_1\\sim P_0$ and $\\log\\frac{p_0(X_1)}{p_1(X_1)}$ is arithmetic, say with span $d>0$, Theorems~\\ref{asymptotic-nonlattice} and~\\ref{asymptoticserrornonlattice} (see Section~\\ref{tools}), on which we rely in the proof of Theorem \\ref{expectation-thm}, hold only for SPRTs with parameters $(\\alpha_n,\\beta_n)$ where $\\alpha_n$ and $\\beta_n$ are integer multiples of $d$. In this case $(\\alpha_n,\\beta_n)$ should be chosen as $(\\lfloor\\frac{nD(P_1\\|P_0)}{d}-a_0\\rfloor d,\\lfloor\\frac{nD(P_0\\|P_1)}{d}-a_1\\rfloor d)$ for some constants $a_0$ and $a_1$ depending on $P_0$ and $P_1$. However, the offsets $(\\lfloor\\frac{nD(P_1\\|P_0)}{d}-a_0\\rfloor d-nD(P_1\\|P_0),\\lfloor\\frac{nD(P_0\\|P_1)}{d}-a_1\\rfloor d-nD(P_0\\|P_1))$ need not converge as $n\\to\\infty$. This technical difficulty makes the problem challenging for arithmetic $\\log\\frac{p_0(X_1)}{p_1(X_1)}$ and we defer the analysis of this case to future work. 
\n\\end{remark}\n\n\\begin{remark}\nFrom Theorems \\ref{probabilistic-thm} and \\ref{expectation-thm}, we see that the rate of convergence of the optimal $\\lambda$-weighted finite-length exponents $\\sup_{(\\delta_n,T_n)}-\\frac{\\lambda}{n} \\log {P_{1|0} (\\delta_n,T_n)} -\\frac{1-\\lambda}{n} \\log {P_{0|1} (\\delta_n,T_n)} $ to the $\\lambda$-weighted exponents $\\lambda D(P_1\\| P_0)+ (1-\\lambda ) D(P_0\\|P_1)$ is faster under the\nexpectation constraint as compared to the probabilistic constraint. In fact, the former rate is $\\Theta(\\frac{1}{n})$ while the latter rate is $\\Theta(\\frac{1}{\\sqrt{n}})$. Thus, the latter constraint is more stringent. Our main contribution is to nail down the exact order and the constants of $\\Theta(\\frac{1}{\\sqrt{n}})$ and $\\Theta(\\frac{1}{n})$ given in~\\eqref{probabilistic} and~\\eqref{thm2} respectively. The expectation constraint is reminiscent of variable-length channel coding with feedback~\\cite{PPV11, TT17} in which the speed of convergence to the $\\varepsilon$-capacity is not $\\Theta(\\frac{1}{\\sqrt{n}})$ but rather $O(\\frac{\\log n}{n})$. However, different from~\\cite{PPV11, TT17}, the ``strong converse'' holds as the first-order fundamental limit $(D(P_1\\| P_0), D(P_0\\| P_1))$ does not depend on $\\varepsilon$.\n\\end{remark}\n\\subsection{Examples} \\label{sec:examples}\nIn this subsection, we present two examples and numerically compute their second-order exponents under the expectation constraint (since the computation of second-order terms under the probabilistic constraint is straightforward). The following theorem extracted from~\\cite[Corollary~2.7 and Theorem~3.3, pp.~24 and~32]{nonlinearrenewaltheory} provides more concrete characterizations of the quantities $A(P_0,P_1), \\tilde{A}(P_0,P_1), B(P_0,P_1)$, and $\\tilde{B}(P_0,P_1)$. 
For a real number $a$, we write $a^+=\\max\\{a,0\\}$ and $a^{-}=-\\min\\{a, 0\\}$.\n\n\\begin{thm}\\label{computation}\nLet $P_0$ and $P_1$ be such that $$\\max_{i=0,1}{\\mathbb{E}}_{P_i}\\left[\\Big|\\log\\frac{p_0(X_1)}{p_1(X_1)}\\Big|^2\\right]<\\infty$$ and $\\log\\frac{p_0(X_1)}{p_1(X_1)}$ is non-arithmetic when $X_1\\sim P_0$. Then\n \\begin{align}\n A(P_0,P_1)&=\\frac{{\\mathbb{E}}_{P_0}\\left[\\log^2\\frac{p_0(X_1)}{p_1(X_1)}\\right]}{2D(P_0\\|P_1)}-\\sum_{k=1}^{\\infty}\\frac{1}{k}{\\mathbb{E}}_{P_0}\\left[S_k^{-}\\right], \\notag\\\\\n \\tilde{A}(P_0,P_1)&=\\frac{{\\mathbb{E}}_{P_1}\\left[\\log^2\\frac{p_0(X_1)}{p_1(X_1)}\\right]}{2D(P_1\\|P_0)}-\\sum_{k=1}^{\\infty}\\frac{1}{k}{\\mathbb{E}}_{P_1}\\left[S_k^{+}\\right],\\notag\n\\end{align}\nand \n\\begin{align}\n B(P_0,P_1)&=-\\log D(P_0\\|P_1)\\notag\\\\\n &\\quad-\\sum_{k=1}^{\\infty}\\frac{1}{k}\\big(P_{0}(S_k<0)+P_1(S_k>0)\\big),\\notag\\\\\n \\tilde{B}(P_0,P_1)&=-\\log D(P_1\\|P_0)\\notag\\\\\n &\\quad-\\sum_{k=1}^{\\infty}\\frac{1}{k}\\big(P_{0}(S_k<0)+P_1(S_k>0)\\big).\\notag\n\\end{align}\n\\end{thm}\n\n\n\n\n\\begin{example}[Two Gaussians] \\label{ex:gauss} Let $\\theta_0$ and $\\theta_1$ be two distinct real numbers. Let $p_0(x)=\\frac{1}{\\sqrt{2\\pi}}e^{-\\frac{(x-\\theta_0)^2}{2}}$ and $p_1(x)=\\frac{1}{\\sqrt{2\\pi}}e^{-\\frac{(x-\\theta_1)^2}{2}}$ for $x\\in\\mathbb{R}$. Then $Y_k=\\log\\frac{p_0(X_k)}{p_1(X_k)}=\\frac{\\theta_1^2-\\theta_0^2}{2}+(\\theta_0-\\theta_1) X_k$. Let $\\Delta\\theta=\\theta_1-\\theta_0$ be the difference of the means. Thus, under hypothesis $H_0$, $Y_k\\sim \\mathcal{N} \\big(\\frac{(\\Delta\\theta)^2}{2},(\\Delta\\theta)^2 \\big)$, and under $H_1$, $Y_k\\sim \\mathcal{N}\\big(-\\frac{(\\Delta\\theta)^2}{2}, (\\Delta\\theta)^2\\big)$. Therefore, under hypothesis $H_0$, $S_k\\sim \\mathcal{N}\\big(\\frac{k(\\Delta\\theta)^2}{2},k(\\Delta\\theta)^2\\big)$, and under $H_1$, $S_k\\sim \\mathcal{N}\\big(-\\frac{k(\\Delta\\theta)^2}{2},k(\\Delta\\theta)^2\\big)$. 
We can then derive the following important quantities: \n\\begin{itemize}\n\\item $D(P_0\\|P_1)=D(P_1\\|P_0)=\\frac{(\\Delta\\theta)^2}{2}$;\n\\item $\\begin{aligned}[t]{\\mathbb{E}}_{P_0}\\left[\\log^2\\frac{p_0(X_1)}{p_1(X_1)}\\right]&={\\mathbb{E}}_{P_1}\\left[\\log^2\\frac{p_0(X_1)}{p_1(X_1)}\\right]\\\\\n&=\\frac{(\\Delta\\theta)^4}{4}+(\\Delta\\theta)^2\\mbox{;}\\end{aligned}$\n\\item $P_{0}(S_k<0)+P_1(S_k>0)=2\\Phi\\left(-\\frac{\\sqrt{k}|\\Delta\\theta|}{2}\\right);$\n\\item $\\begin{aligned}[t]&{\\mathbb{E}}_{P_1}\\left[S_k^{+}\\right]={\\mathbb{E}}_{P_0}\\left[S_k^{-}\\right]=-\\frac{k(\\Delta\\theta)^2}{2}\\Phi\\bigg(-\\frac{\\sqrt{k}|\\Delta\\theta|}{2}\\bigg)\\\\\n&\\hspace{4cm}+\\frac{\\sqrt{k}\\,|\\Delta\\theta|}{\\sqrt{2\\pi}}e^{-\\frac{k(\\Delta\\theta)^2}{8}}.\n\\end{aligned}$ \n\\end{itemize}\nUsing Theorems~\\ref{expectation-thm} and~\\ref{computation}, we can numerically compute the second-order exponent under the expectation constraint in \\eqref{thm2}. This is illustrated in Figure~\\ref{fig:1}. We note that for this case of discriminating between two Gaussians, $F(\\lambda)$ does not depend on $\\lambda\\in [0,1]$. \n\\end{example} \n\n\\begin{figure}\n\\centering\n \\begin{subfigure}{0.45\\textwidth}\n \\includegraphics[width=7.0cm]{gaussian.pdf}\n \\caption{Illustration of $F(\\lambda)$ for two Gaussian distributions as in Example~\\ref{ex:gauss}}\n \\label{fig:1}\n \\end{subfigure}\n\\hspace{1cm}\n \\begin{subfigure}{0.45\\textwidth}\n \\includegraphics[width=7.0cm]{Exponential.pdf}\n \\caption{Illustration of $F(\\lambda)$ for two exponential distributions as in Example~\\ref{ex:exp} with $\\gamma_0=\\gamma$ and $\\gamma_1=1$}\n \\label{fig:2}\n \\end{subfigure}\n \\caption{Illustration of the second-order exponents under the expectation constraint }\n \\end{figure}\n\n\n\n\\begin{example}[Two Exponentials] \\label{ex:exp} Let $\\gamma_0$ and $\\gamma_1$ be two positive real numbers such that $\\gamma_0<\\gamma_1$. 
Let $p_0(x)=\\gamma_0 e^{-\\gamma_0 x}$ and $p_1(x)=\\gamma_1 e^{-\\gamma_1 x}$ for $x>0$. Then $Y_k=(\\gamma_1-\\gamma_0)X_k+\\log \\frac{\\gamma_0}{\\gamma_1}$ and $S_k=(\\gamma_1-\\gamma_0)\\sum_{i=1}^{k}X_i+k\\log \\frac{\\gamma_0}{\\gamma_1}$. Let $U(\\cdot\\,;k,\\gamma)$ denote the cumulative distribution function of the Erlang distribution with parameters $(k,\\gamma)$.\n\nIn~\\cite[Section~3.1.6]{SequentialAnalysis}, by using tools which we will not discuss here, exact formulas for $A(P_0,P_1)$, $\\tilde{A}(P_0,P_1)$, $B(P_0,P_1)$, and $\\tilde{B}(P_0,P_1)$ for this example concerning exponential distributions were derived. Here, we use Theorem~\\ref{computation} together with the fact that the sum of $k$ i.i.d.\\ exponential random variables with parameter $\\gamma$ has the Erlang distribution with parameters $(k,\\gamma)$ to derive a formula for the second-order term in~\\eqref{thm2}.\nWe can derive the following important quantities:\n\\begin{itemize}\n\\item $D(P_0\\|P_1)=\\log\\frac{\\gamma_0}{\\gamma_1}+\\frac{\\gamma_1-\\gamma_0}{\\gamma_0}$ and $D(P_1\\|P_0)=\\log\\frac{\\gamma_1}{\\gamma_0}+\\frac{\\gamma_0-\\gamma_1}{\\gamma_1}$;\n\\item ${\\mathbb{E}}_{P_0}\\left[\\log^2\\frac{p_0(X_1)}{p_1(X_1)}\\right]=\\Big(\\log\\frac{\\gamma_0}{\\gamma_1}+\\frac{\\gamma_1-\\gamma_0}{\\gamma_0} \\Big)^2+\\frac{(\\gamma_0-\\gamma_1)^2}{\\gamma_0^2}$ and \\\\ ${\\mathbb{E}}_{P_1}\\left[\\log^2\\frac{p_0(X_1)}{p_1(X_1)}\\right]=\\Big(\\log\\frac{\\gamma_1}{\\gamma_0}+\\frac{\\gamma_0-\\gamma_1}{\\gamma_1} \\Big)^2+\\frac{(\\gamma_1-\\gamma_0)^2}{\\gamma_1^2}$;\n\\item $P_{0}(S_k <0) + P_1(S_k>0)\\!=\\!1+U\\left(\\frac{k}{\\gamma_1-\\gamma_0}\\log\\frac{\\gamma_1}{\\gamma_0};k,\\gamma_0\\right)-U\\left(\\frac{k}{\\gamma_1-\\gamma_0}\\log\\frac{\\gamma_1}{\\gamma_0};k,\\gamma_1\\right);$\n\\item${\\mathbb{E}}_{P_0}\\left[S_k^{-}\\right]=k
U\\left(\\frac{k}{\\gamma_1-\\gamma_0}\\log\\frac{\\gamma_1}{\\gamma_0};k,\\gamma_0\\right)\\log\\frac{\\gamma_1}{\\gamma_0}\\\\-\\frac{k(\\gamma_1-\\gamma_0)}{\\gamma_0}U\\left(\\frac{k}{\\gamma_1-\\gamma_0}\\log\\frac{\\gamma_1}{\\gamma_0};k+1,\\gamma_0\\right)$ and\\vspace{.05in} \\\\ \n${\\mathbb{E}}_{P_1}\\left[S_k^{+}\\right] = k\\left(1 \\! -\\! U\\left(\\frac{k}{\\gamma_1-\\gamma_0}\\log\\frac{\\gamma_1}{\\gamma_0};k,\\gamma_1\\right)\\right)\\log\\frac{\\gamma_0}{\\gamma_1}\\\\+\\frac{k(\\gamma_1-\\gamma_0)}{\\gamma_1}\\left(1\\! -\\! U\\left(\\frac{k}{\\gamma_1-\\gamma_0}\\log\\frac{\\gamma_1}{\\gamma_0};k + 1,\\gamma_1\\right)\\right)$.\n\\end{itemize}\nUsing these facts, together with Theorems~\\ref{expectation-thm} and~\\ref{computation}, we can numerically compute the second-order exponent under the expectation constraint. This is illustrated in Figure~\\ref{fig:2} for various $\\lambda$'s. \n\\end{example} \n\n\n\n\n\\section{Proof of Theorem~\\ref{probabilistic-thm}} \\label{sec:prf_prob}\nIn this section we prove Theorem~\\ref{probabilistic-thm} by establishing upper and lower bounds on error probabilities, respectively. Throughout this section we use $\\chi_E$ to denote the indicator function taking the value $1$ on the set $E$ (and $0$ otherwise). In the proof, we use the {\\em Berry-Esseen theorem for the maximal sum of independent random variables}~\\cite{CLTmaximal}. For ease of reference, we restate it here as follows.\n\\begin{thm}[Rogozin~\\cite{CLTmaximal}]\\label{CLTmaximal}\nLet $X=\\{X_i\\}_{i=1}^{\\infty}$ be an i.i.d.\\ sequence of random variables such that ${\\mathbb{E}}[X_1]>0$ and ${\\mathbb{E}}[|X_1|^3]<\\infty$. Let $S_k=\\sum_{i=1}^{k}X_i$. 
Then for all $n\\ge 1$,\n\\begin{align*}\n \\sup_{a\\in\\mathbb{R}}\\left|{P\\bigg(\\frac{(\\max_{1\\le k\\le n}S_k)-n{\\mathbb{E}}[X_1]}{\\sqrt{n\\mathrm{Var}(X_1)}}\\le a\\bigg)-\\Phi(a)}\\right|\\le \\frac{M}{\\sqrt{n}},\n\\end{align*}\nwhere $M$ is a constant depending only on $\\mathrm{Var}(X_1)$ and ${\\mathbb{E}}[|X_1|^3]$.\n\\end{thm}\n\\subsection{Upper Bound on the Error Probabilities}\nFor any $\\eta \\in(0,\\varepsilon)$, let $c_0=-\\sqrt{V(P_1\\|P_0)}\\Phi^{-1}(\\varepsilon-\\eta)$ and $c_1=-\\sqrt{V(P_0\\|P_1)}\\Phi^{-1}(\\varepsilon-\\eta)$. Let $\\alpha_n=n(D(P_1\\|P_0)-\\frac{c_0}{\\sqrt{n}})$ and $\\beta_n=n(D(P_0\\|P_1)-\\frac{c_1}{\\sqrt{n}})$ and let $(\\delta_n,T_n)$ be the SPRT with parameters $(\\alpha_n,\\beta_n)$. As $0<\\varepsilon-\\eta<1$, the constants $c_0$ and $c_1$ are finite. We then have\n\\begin{align}\nP_0(T_n>n)&=P_0\\left(-\\alpha_n\\le\\min_{1\\le i\\le n}S_i\\le \\max_{1\\le i\\le n}S_i\\le \\beta_n\\right)\\notag\\\\\n&\\le P_0\\left( \\max_{1\\le i\\le n}S_i\\le \\beta_n\\right)\\notag\\\\\n&\\le\\Phi\\bigg(\\frac{-c_1}{\\sqrt{V(P_0\\|P_1)}}\\bigg)+\\frac{M}{\\sqrt{n}}\\label{im5}\\\\\n&=\\varepsilon-\\eta+\\frac{M}{\\sqrt{n}},\\notag\n\\end{align}\nwhere~(\\ref{im5}) holds due to Theorem~\\ref{CLTmaximal} ($X_i$ in Theorem~\\ref{CLTmaximal} is taken to be the log-likelihood ratio $\\log\\frac{p_0(X_i)}{p_1(X_i)}$) noting that $D(P_0\\|P_1)>0$ and~$M$ is a positive finite constant due to the finiteness of the third absolute moments of the log-likelihood ratios as assumed in~(\\ref{thirdmoment}).\n\nTherefore for sufficiently large $n$, $P_0(T_n>n)\\le \\varepsilon-\\eta\/2<\\varepsilon$.\nSimilarly, $P_1(T_n>n)< \\varepsilon$ for sufficiently large $n$. 
Moreover, the standard change-of-measure bounds for the SPRT give\n\\begin{align}\\label{upper}\nP_{1|0}(\\delta_n,T_n)\\le e^{-\\alpha_n}\\quad\\mbox{and}\\quad P_{0|1}(\\delta_n,T_n)\\le e^{-\\beta_n}.\n\\end{align}\nHence we obtain that\n\\begin{align*}\n\\underline{G}(\\lambda,\\varepsilon)&=\\liminf_{n\\to\\infty} G_n(\\lambda,\\varepsilon)\\\\\n&\\ge \\lambda \\sqrt{V(P_1\\|P_0)}\\Phi^{-1}(\\varepsilon-\\eta) \\\\\n&\\qquad+(1-\\lambda)\\sqrt{V(P_0\\|P_1)}\\Phi^{-1}(\\varepsilon-\\eta),\n\\end{align*}\nwhich further implies, by \nletting $\\eta\\to 0^{+}$ and using the continuity of $\\Phi^{-1}$ on $(0,1)$, that\n\\begin{align*}\n\\underline{G}(\\lambda,\\varepsilon)&\\ge \\lambda \\sqrt{V(P_1\\|P_0)}\\Phi^{-1}(\\varepsilon)\\\\\n&\\hspace{1.5cm}+(1-\\lambda)\\sqrt{V(P_0\\|P_1)}\\Phi^{-1}(\\varepsilon),\n\\end{align*}\nas desired.\n\\subsection{Lower Bound on the Error Probabilities}\nIn this section, we derive the lower bound on the error probabilities. First, we slightly generalize~\\cite[Lemma~9.2]{WuPolyanskiy}.\n\n\\begin{lemma}\\label{probabilisticerror} \nLet $(\\delta,T)$ be an SHT such that $\\min_{i=0,1}P_i(T<\\infty)=1$. Then for any set $E\\in \\mathcal{F}_T$ and $\\gamma>0$, we have $P_0(E)-\\gamma P_1(E)\\le P_{0}(S_T\\ge \\log \\gamma)$.\n\\end{lemma}\n\\begin{proof}\nAs $P_1(T<\\infty)=1$, it follows from~\\cite[Theorem~1.1, p.~4]{nonlinearrenewaltheory} that $P_1^{T}$ is absolutely continuous with respect to $P_0^{T}$ and the Radon-Nikodym derivative satisfies $\\frac{\\mathrm{d}P_1^{T}}{\\mathrm{d}P_0^{T}}=e^{-S_T}$.\nTherefore we have that for any $E\\in\\mathcal{F}_T$, $P_1(E)={\\mathbb{E}}_{P_0}[\\chi_{E}e^{-S_T}]$. 
\n This further implies that\n\\begin{align}\nP_0(E)-\\gamma P_1(E)&={\\mathbb{E}}_{P_0}[\\chi_{E}]-\\gamma{\\mathbb{E}}_{P_1}[\\chi_{E}]\\notag\\\\\n&={\\mathbb{E}}_{P_0}[\\chi_{E}]-\\gamma {\\mathbb{E}}_{P_0}\\left[\\chi_{E}e^{-S_T}\\right]\\notag\\\\\n&= {\\mathbb{E}}_{P_0}\\left[\\chi_{E}(1-\\gamma e^{-S_T}) \\right]\\notag\\\\\n&\\le {\\mathbb{E}}_{P_0} \\left[\\chi_{E\\cap \\{S_T\\ge \\log\\gamma\\}}(1-\\gamma e^{-S_T}) \\right]\\notag\\\\\n&\\le {\\mathbb{E}}_{P_0}\\left[\\chi_{E\\cap \\{S_T\\ge \\log\\gamma\\}} \\right]\\notag\\\\\n&\\le P_{0}(S_T\\ge \\log \\gamma),\\notag\n\\end{align}\nas desired.\n\\end{proof}\n\\begin{remark}\nIn~\\cite[Lemma 9.2]{WuPolyanskiy}, the authors proved a similar result for the {\\em fixed-length} hypothesis testing problem.\n\\end{remark}\nLet $\\{(\\delta_n, T_n)\\}_{n=1}^{\\infty}$ be a sequence of SHTs such that $\\max_{i=0,1}P_i({T_n>n})\\le \\varepsilon$ and let $A_i (T_n)=\\{\\delta_n=i\\}$ for $i=0,1$. Then $P_{1|0}(\\delta_n, T_n)=P_{0}(A_1(T_n))$ and $P_{0|1}(\\delta_n, T_n)=P_1(A_0 (T_n))$. 
Using Lemma~\\ref{probabilisticerror} with $E=A_0 (T_n)$, we have that \n \\begin{align}\n&1-P_{1|0}(\\delta_n, T_n)-\\gamma P_{0|1}(\\delta_n, T_n)\\notag\\\\\n&\\hspace{0.5cm}\\le P_{0}(S_{T_n}\\ge \\log \\gamma)\\notag\\\\\n&\\hspace{0.5cm}\\le P_{0}(S_{T_n}\\ge \\log \\gamma, T_n\\le n)+P_0(T_n>n)\\notag,\n\\end{align}\nwhich further implies that\n\\begin{align}\\label{im4}\nP_0(T_n>n)&\\ge 1-P_{1|0}(\\delta_n, T_n)-\\gamma P_{0|1}(\\delta_n, T_n)\\notag\\\\\n&\\hspace{0.65cm}- P_{0}(S_{T_n}\\ge \\log \\gamma, T_n\\le n)\\notag\\\\\n&\\ge 1-P_{1|0}(\\delta_n, T_n)-\\gamma P_{0|1}(\\delta_n, T_n)\\notag\\\\\n&\\hspace{0.65cm}- P_{0}\\Big(\\max_{1\\le i\\le n}S_i\\ge \\log \\gamma \\Big)\\notag\\\\\n&\\ge 1-P_{1|0}(\\delta_n, T_n)-\\gamma P_{0|1}(\\delta_n, T_n)-\\frac{M}{\\sqrt{n}}\\notag\\\\\n&\\hspace{0.65cm}-\\left(1-\\Phi \\bigg(\\frac{\\log \\gamma-n D(P_0\\|P_1)}{\\sqrt{n V(P_0\\|P_1)}}\\bigg)\\right)\\\\\n&\\ge -P_{1|0}(\\delta_n, T_n)-\\gamma P_{0|1}(\\delta_n, T_n)-\\frac{M}{\\sqrt{n}}\\notag\\\\\n&\\hspace{0.5cm}+\\Phi \\bigg(\\frac{\\log \\gamma-n D(P_0\\|P_1)}{\\sqrt{n V(P_0\\|P_1)}}\\bigg),\\notag\n\\end{align}\nwhere~(\\ref{im4}) follows from Theorem~\\ref{CLTmaximal} and $M$ is the constant in Theorem~\\ref{CLTmaximal}. Again, $M$ is finite because we assume~(\\ref{thirdmoment}) holds.\n\nLet $\\log\\gamma= n\\big(D(P_0\\|P_1)+\\Phi^{-1}(\\varepsilon+\\frac{2M}{\\sqrt{n}})\\sqrt{\\frac{V(P_0\\|P_1) }{n}} \\big) $. 
As $P_{0}(T_n>n)\\le \\varepsilon$, we then have that\n\\begin{align*}\n\\varepsilon&\\ge P_{0}(T_n>n)\\\\\n&\\ge -P_{1|0}(\\delta_n, T_n)-\\gamma P_{0|1}(\\delta_n, T_n)\\\\\n&\\hspace{0.5cm}+\\Phi \\bigg(\\frac{\\log \\gamma-n D(P_0\\|P_1)}{\\sqrt{n V(P_0\\|P_1)}}\\bigg)-\\frac{M}{\\sqrt{n}}\\\\\n&\\ge -P_{1|0}(\\delta_n, T_n)-\\gamma P_{0|1}(\\delta_n, T_n)+\\varepsilon+\\frac{M}{\\sqrt{n}},\n\\end{align*}\nwhich implies that\n\\begin{align*}\n&\\frac{1}{n}\\log P_{0|1}(\\delta_n, T_n)\\ge -\\frac{\\log\\gamma}{n}+\\frac{\\log \\left(\\frac{M}{\\sqrt{n}}-P_{1|0}(\\delta_n, T_n)\\right)}{n}\\notag\\\\\n&=-D(P_0\\|P_1)-\\sqrt{\\frac{V(P_0\\|P_1)}{n}}\\Phi^{-1}\\left(\\varepsilon+\\frac{2M}{\\sqrt{n}}\\right)\\\\\n&\\hspace{0.5cm}+\\frac{\\log \\left(\\frac{M}{\\sqrt{n}}-P_{1|0}(\\delta_n, T_n)\\right)}{n}\\notag\\\\\n&=-D(P_0\\|P_1)-\\sqrt{\\frac{V(P_0\\|P_1)}{n}}\\Phi^{-1}(\\varepsilon)\\\\\n&\\hspace{0.5cm}+\\frac{\\log \\left(\\frac{M}{\\sqrt{n}}-P_{1|0}(\\delta_n, T_n)\\right)}{n}+O\\bigg(\\frac{1}{n}\\bigg).\n\\end{align*}\nLet $\\log t_n=-n(\\min\\{D(P_0\\|P_1),D(P_1\\|P_0)\\}\/2)$ and \n$$\n\\mathcal{T}^{(n)}=\\left\\{(\\delta_n,T_n): \\begin{array}{c}\\max_{i=0,1}P_i({T_n>n})\\le \\varepsilon\\\\ P_{1|0}(\\delta_n, T_n)\\le t_n\\\\\nP_{0|1}(\\delta_n, T_n)\\le t_n\\end{array}\\right\\}.\n$$ For any $(\\delta_n,T_n)\\in \\mathcal{T}^{(n)}$, we know $P_{1|0}(\\delta_n,T_n)$ tends to zero exponentially fast, hence we have that \n$$\\lim_{n\\to\\infty}\\frac{\\log \\left(\\frac{M}{\\sqrt{n}}-P_{1|0}(\\delta_n, T_n)\\right)}{\\sqrt{n}}= 0.$$ Thus it then follows that for any $(\\delta_n,T_n)\\in \\mathcal{T}^{(n)}$,\n\\begin{align}\\label{im8}\n&\\limsup_{n\\to\\infty}\\sqrt{n}\\left(-\\frac{1}{n}\\log P_{0|1}(\\delta_n, T_n)-D(P_0\\|P_1)\\right)\\notag\\\\\n&\\hspace{3.6cm}\\le \\sqrt{V(P_0\\|P_1)}\\Phi^{-1}(\\varepsilon).\n\\end{align}\nSimilarly, we have that for any $(\\delta_n,T_n)\\in 
\\mathcal{T}^{(n)}$,\n\\begin{align}\\label{im9}\n&\\limsup_{n\\to\\infty}\\sqrt{n}\\left(-\\frac{1}{n}\\log P_{1|0}(\\delta_n, T_n)-D(P_1\\|P_0)\\right)\\notag\\\\\n&\\hspace{3.6cm}\\le \\sqrt{V(P_1\\|P_0)}\\Phi^{-1}(\\varepsilon).\n\\end{align}\nFrom~(\\ref{upper}) we know that for sufficiently large $n$, the sequence of exponents of the SHTs that approaches the supremum in~(\\ref{probabilistic1}) is lower bounded by $\\min\\{D(P_0\\|P_1),D(P_1\\|P_0)\\}\/2$. By the (strict) positivity of the relative entropies, it follows that for sufficiently large $n$, the sequence of SHTs that approaches the supremum in~(\\ref{probabilistic1}) belongs to $\\mathcal{T}^{(n)}$. Therefore, combining~(\\ref{im8}) and~(\\ref{im9}), we have \n\\begin{align*}\n\\overline{G}(\\lambda,\\varepsilon)&=\\limsup_{n\\to\\infty}{G}_n(\\lambda,\\varepsilon)\\\\\n&\\le \\!\\lambda \\sqrt{V(P_0\\|P_1)}\\Phi^{-1}(\\varepsilon) +(1\\!-\\!\\lambda)\\sqrt{V(P_1\\|P_0)}\\Phi^{-1}(\\varepsilon),\n\\end{align*}\nas desired.\n\n\n\\section{Proof of Theorem~\\ref{expectation-thm}}\\label{sec:prf_exp}\nIn this section, we first present the tools used to prove Theorem~\\ref{expectation-thm} in Subsection~\\ref{tools}. We then establish upper and lower bounds on the error probabilities in Subsections~\\ref{upperexpectation} and~\\ref{lowerexpectation}, respectively.\n\n\\subsection{Auxiliary Tools}\\label{tools}\nIn the proof of Theorem~\\ref{expectation-thm}, we use the following results on the asymptotics of the first passage time from~\\cite{HeydeAsymptotics, nonlinearrenewaltheory,GutAsymptotics}. For ease of reference, we include the results as follows. Theorem~\\ref{asymptotic-nonlattice} characterizes the asymptotics of the {\\em first passage times} of a stochastic process. \n\n\nLet $\\{\\alpha_i\\}_{i=1}^\\infty$ and $\\{\\beta_i\\}_{i=1}^\\infty$ be two increasing sequences of positive real numbers such that $\\alpha_i\\to \\infty$ and $\\beta_i\\to \\infty$ as $i\\to \\infty$. 
Let $(\\delta_i,T_i)$ be an SPRT with parameters $(\\alpha_i,\\beta_i)$. Recall that $Y_i=\\log\\frac{p_0(X_i)}{p_1(X_i)}$, $S_k=\\sum_{i=1}^{k}Y_i$, and $T_n=\\inf\\{k\\ge 1:S_{k}\\notin [-\\alpha_n,\\beta_n]\\}$. The following two theorems, taken from~\\cite[Theorem 3.1, pp.~31]{nonlinearrenewaltheory}, characterize the asymptotics of the expected sample size and the two types of error probabilities.\n \n\\begin{thm}\\label{asymptotic-nonlattice}\nAssume that $\\max\\{{\\mathbb{E}}_{P_1} [Y_1^2],{\\mathbb{E}}_{P_0} [Y_1^2]\\}<\\infty$ and $Y_1$ is non-arithmetic. Then as $n\\to\\infty$,\n\\begin{align}\n {\\mathbb{E}}_{P_0}[T_n]=\\frac{\\beta_n}{D(P_0\\|P_1)}+\\frac{A(P_0,P_1)}{D(P_0\\|P_1)}+o(1),\\notag\n\\end{align}\nand\n\\begin{align}\n {\\mathbb{E}}_{P_1}[T_n]=\\frac{\\alpha_n}{D(P_1\\|P_0)}+\\frac{\\tilde{A}(P_0,P_1)}{D(P_1\\|P_0)}+o(1).\\notag\n\\end{align}\n \\end{thm}\n\n\n\\begin{thm}\\label{asymptoticserrornonlattice}\nAssume that $\\max\\{{\\mathbb{E}}_{P_1} [Y_1^2],{\\mathbb{E}}_{P_0} [Y_1^2]\\}<\\infty$ and $Y_1$ is non-arithmetic.\nThen,\n$$\n\\lim_{i\\to\\infty}P_{1|0}(\\delta_i,T_i){e^{\\alpha_i}}=e^{\\tilde{B}(P_0,P_1)}$$\nand\n$$\n\\lim_{i\\to\\infty}P_{0|1}(\\delta_i,T_i){e^{\\beta_i}}=e^{B(P_0,P_1)}.\\notag\n$$\n\\end{thm}\n\nThe following lemma~\\cite[Problem 3, pp.~369]{ferguson} characterizes the optimality of the SPRT. This lemma is also a simple consequence of Wald and Wolfowitz's work~\\cite{WaldWolf}.\n\\begin{lemma}\\label{optimality}\nLet $(\\delta,T)$ be an SPRT. 
Let $(\\tilde\\delta,\\tilde T)$ be any SHT such that\n$$\n{\\mathbb{E}}_{P_0}[\\tilde T]\\le {\\mathbb{E}}_{P_0}[T]\\quad\\mbox{and}\\quad {\\mathbb{E}}_{P_1}[\\tilde T]\\le {\\mathbb{E}}_{P_1}[T].\n$$\nThen\n$$\nP_{0|1}(\\delta,T)\\le P_{0|1}(\\tilde\\delta,\\tilde T)\\quad\\mbox{and}\\quad P_{1|0}(\\delta,T)\\le P_{1|0}(\\tilde\\delta,\\tilde T).\n$$\n\\end{lemma}\n\n\n\n\n\n\\subsection{Upper Bound on the Error Probabilities}\\label{upperbound}\\label{upperexpectation}\nFor any $\\eta>0$, let \n$$\n\\alpha_n=n D(P_1\\|P_0)\\bigg(1-\\frac{\\tilde{A}(P_0,P_1)}{nD(P_1\\|P_0)}-\\frac{\\eta}{n}\\bigg)\n$$\nand\n$$\n\\beta_n=n D(P_0\\|P_1)\\bigg(1-\\frac{A(P_0,P_1)}{nD(P_0\\|P_1)}-\\frac{\\eta}{n}\\bigg).\n$$\nConsider the SPRT $(\\delta_n,T_n)$ with parameters $(\\alpha_n,\\beta_n)$. \nAs $\\alpha_n\\to\\infty$ and $\\beta_n\\to\\infty$, from Theorem~\\ref{asymptoticserrornonlattice}, it follows that \n\\begin{align}\nP_{1|0}(\\delta_n,T_n)&=P_{0}(S_{T_n}\\le -\\alpha_n)= (e^{\\tilde{B}(P_0,P_1)}+o(1))e^{-\\alpha_n}\\notag\n\\end{align}\nand similarly,\n$$\nP_{0|1}(\\delta_n,T_n)= (e^{B(P_0,P_1)}+o(1))e^{-\\beta_n}.\n$$\nWe now show that ${\\mathbb{E}}_{P_0} [T_n]\\le n$ and ${\\mathbb{E}}_{P_1} [T_n]\\le n$.
From Theorem~\\ref{asymptotic-nonlattice}, it follows that\n\\begin{align}\\label{BST}\n{\\mathbb{E}}_{P_0}[T_n]&=\\frac{\\beta_n}{D(P_0\\|P_1)}+\\frac{A(P_0,P_1)}{D(P_0\\|P_1)}+o(1)\\notag\\\\\n&=n-\\eta+o(1),\n\\end{align}\nwhere~(\\ref{BST}) follows from the definition of $\\beta_n$.\nSimilarly, ${\\mathbb{E}}_{P_1}[T_n]\\le n-\\eta+o(1).$ Thus for sufficiently large $n$, we have that\n$\n{\\mathbb{E}}_{P_0}[T_n]\\le n\n$\nand\n$\n{\\mathbb{E}}_{P_1}[T_n]\\le n.\n$\nIt then follows that\n\\begin{align}\n\\overline{F}(\\lambda)&=\\limsup_{n\\to \\infty}F_{n}(\\lambda)\\notag\\\\\n&\\le \\lambda (\\tilde{A}(P_0,P_1)+ \\tilde{B}(P_0,P_1))\\notag\\\\\n&\\hspace{0.5cm}+(1-\\lambda)(A(P_0,P_1)+ B(P_0,P_1))\\notag\\\\\n&\\hspace{0.5cm}+(\\lambda D(P_1\\|P_0)+(1-\\lambda)D(P_0\\|P_1))\\eta.\\notag\n\\end{align}\nAs $\\eta>0$ is arbitrary, we conclude that\n\\begin{align}\n\\overline{F}(\\lambda)&\\le \\lambda (\\tilde{A}(P_0,P_1)+\\tilde{B}(P_0,P_1))\\notag\\\\\n&+(1-\\lambda)(A(P_0,P_1)+ B(P_0,P_1)).\\notag\n\\end{align}\n\n\n\n\n\\subsection{Lower Bound on the Error Probabilities}\\label{lowerexpectation}\n\nFor any $\\eta>0$, let \n$$\n\\hat{\\alpha}_n=n D(P_1\\|P_0)\\bigg(1-\\frac{\\tilde{A}(P_0,P_1)}{nD(P_1\\|P_0)}+\\frac{\\eta}{n}\\bigg)$$\nand\n$$\\hat{\\beta}_n=n D(P_0\\|P_1)\\bigg(1-\\frac{A(P_0,P_1)}{nD(P_0\\|P_1)}+\\frac{\\eta}{n}\\bigg).\n$$\nConsider a sequence of SPRTs $\\{(\\hat{\\delta}_n,\\hat{T}_n)\\}_{n=1}^{\\infty}$ with parameters $\\{(\\hat{\\alpha}_n,\\hat{\\beta}_n)\\}_{n=1}^{\\infty}$.\nNow we show that for sufficiently large $n$, we have that for $i=0,1$,\n\\begin{align}\n{\\mathbb{E}}_{P_i}[\\hat{T}_n]\\ge n+\\frac{\\eta}{2}.\\notag\n\\end{align}\nFrom Theorem~\\ref{asymptotic-nonlattice} and the choice of $\\hat{\\beta}_n$, it follows that for sufficiently large $n$,\n\\begin{align}\n{\\mathbb{E}}_{P_0}[\\hat{T}_n]&=\\frac{\\hat{\\beta}_n}{D(P_0\\|P_1)}+\\frac{A(P_0,P_1)}{D(P_0\\|P_1)}+o(1)\\notag\\\\\n&=n+\\eta+o(1)\\notag\\\\\n&\\ge
n+\\frac{\\eta}{2}> n.\\notag\n\\end{align}\nSimilarly, for sufficiently large $n$, \n$$\n{\\mathbb{E}}_{P_1}[\\hat{T}_n]\\ge n+\\frac{\\eta}{2}> n.\n$$\nThen from Lemma~\\ref{optimality}, we conclude that for any SHT $(\\delta_n, T_n)$ with $\\max\\{{\\mathbb{E}}_{P_0}[T_n],{\\mathbb{E}}_{P_1}[T_n]\\}\\le n$, \n\\begin{align}\n&P_{0|1}(\\delta_n, T_n)\\ge {P}_{0|1}(\\hat{\\delta}_n,\\hat{T}_n)\\label{largertest}\\\\\n&P_{1|0}(\\delta_n, T_n)\\ge {P}_{1|0}(\\hat{\\delta}_n,\\hat{T}_n). \\nonumber\n\\end{align}\nFrom Theorem~\\ref{asymptoticserrornonlattice}, we have that \n\\begin{align}\\label{im11}\n\\log{P}_{1|0}(\\hat{\\delta}_n,\\hat{T}_n)= \\tilde{B}(P_0,P_1)-\\hat{\\alpha}_n+\\log(1+o(1))\n\\end{align}\nand\n\\begin{align}\n\\log{P}_{0|1}(\\hat{\\delta}_n,\\hat{T}_n)= B(P_0,P_1)-\\hat{\\beta}_n+\\log(1+o(1)). \\nonumber\n\\end{align}\nCombining~(\\ref{largertest}) and~(\\ref{im11}), we have that\n\\begin{align}\\label{im1}\n&\\log P_{1|0}(\\delta_n, T_n)+nD(P_1\\|P_0)\\notag\\\\\n&\\hspace{1.5cm}\\ge \\log {P}_{1|0}(\\hat{\\delta}_n,\\hat{T}_n)+nD(P_1\\|P_0)\\notag\\\\\n&\\hspace{1.5cm}\\ge \\tilde{B}(P_0,P_1)+\\tilde{A}(P_0,P_1) \\notag\\\\\n&\\hspace{2cm}-\\eta D(P_1\\|P_0)+\\log(1+o(1)).\n\\end{align}\nSimilarly, we have that\n\\begin{align}\\label{im2}\n&\\log P_{0|1}(\\delta_n, T_n)+nD(P_0\\|P_1)\\notag\\\\\n&\\hspace{1.5cm} \\ge B(P_0,P_1)+A(P_0,P_1) \\notag\\\\\n&\\hspace{2cm}-\\eta D(P_0\\|P_1)+\\log(1+o(1)).\n\\end{align}\nAs $\\lim_{x\\to0}\\log(1+x)=0$, combining~(\\ref{im1}) and~(\\ref{im2}), we have that\n\\begin{align}\n\\underline{F}(\\lambda)&\\ge \\liminf_{n\\to \\infty}F_{n}(\\lambda)\\notag\\\\\n&\\ge \\lambda(\\tilde{B}(P_0,P_1)+\\tilde{A}(P_0,P_1))\\notag\\\\\n&\\hspace{0.5cm}+(1-\\lambda)( B(P_0,P_1)+A(P_0,P_1))\\notag\\\\\n&\\hspace{0.5cm}-(\\lambda D(P_1\\|P_0)+(1-\\lambda)D(P_0\\|P_1))\\eta.\\notag\n\\end{align}\nFinally letting $\\eta\\to0^{+}$, we have \n\\begin{align}\n\\underline{F}(\\lambda)&\\ge
\\lambda(\\tilde{B}(P_0,P_1)+\\tilde{A}(P_0,P_1))\\notag\\\\\n&\\hspace{0.5cm}+(1-\\lambda)( B(P_0,P_1)+A(P_0,P_1)),\\notag\n\\end{align}\nas desired.\n\n\n\n\n\n\n\n\n\\section{Conclusion and Future Work}\nIn this paper, we have quantified the backoff of the finite-length error exponents from their asymptotic limits of $ D(P_1\\| P_0)$ and $D(P_0\\| P_1)$ for sequential binary hypothesis testing~\\cite{WaldWolf}. We considered both the expectation and probabilistic constraint and concluded that the former is less stringent in the sense that the rate of convergence to $( D(P_1\\| P_0),D(P_0\\| P_1))$ is faster. \n\nIn the future, one may consider the following three natural extensions of this work. First, instead of the probabilistic constraint $\\max_{i=0,1}P_i( T_n> n)\\le\\varepsilon $ in \\eqref{probabilistic1}, one can consider replacing the non-vanishing constant $\\varepsilon$ by a subexponentially decaying sequence. That is, one can consider the so-called {\\em moderate deviations regime} \\cite[Theorem~3.7.1]{Dembo}. Second, one can consider a {\\em classification} counterpart~\\cite{Gutman} of this problem in which in addition to the test sequence $X = \\{X_i\\}_{i=1}^\\infty$, one also has training sequences $Y_0 = \\{Y_{0,i}\\}_{i=1}^\\infty$ and $Y_1= \\{Y_{1,i}\\}_{i=1}^\\infty$ drawn respectively from $P_0$ and $P_1$, which are assumed to be unknown. 
Finally, one may consider {\\em universal} versions of sequential hypothesis testing and quantify the price one has to pay as a result of complete incognizance of the generating distributions $P_0$ and~$P_1$.\n\n\\subsubsection*{Acknowledgements}\nThe authors would like to sincerely thank Dr.~Anusha Lalitha and Prof.~Tara Javidi for many helpful discussions during the initial phase of this work.\n\n\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\\section{CMI, majorisation and the entropic binary relation $\\rhd$}\n\n\\subsection{The classical mutual information attached to an $m\\times n$ probability matrix}\n\nLet $N$ be any positive integer and define the usual probability simplex to be\n$$\\Delta^N=\\{(p_1,p_2,\\ldots,p_{N+1})\\in\\mathbb{R}^{N+1}\\ :\\ \\sum_{i=1}^{N+1} p_i=1\\hbox{ and }p_i\\geq0\\hbox{ for all }i\\}.$$\nNow consider the case where $N+1=mn$ is a composite number and \nlet $\\mathbf{p}=(p_i)\\in\\Delta^{mn-1}$ be any probability vector: we view this as a set of joint probabilities for two systems of size $m$ and $n$. 
\nWe reflect the split into subsystems by arranging the $p_k$ into an $m\\times n$-matrix $P$ as follows:\n\\begin{equation}\\label{rowsncols}\n\\bordermatrix{\n&c_1 & c_2 & \\ldots & c_n\\cr\nr_1&p_1 & p_2 & \\ldots & p_n \\cr\nr_2&p_{n+1} & p_{n+2} & \\ldots & p_{2n} \\cr\n\\vdots&\\vdots & \\vdots & \\ddots & \\vdots \\cr\nr_m&p_{(m-1)n+1} & p_{(m-1)n+2} & \\ldots & p_{mn} \\cr\n}=P.\n\\end{equation}\nAs depicted above we let the row sums (which are the marginal probabilities for the first subsystem) be denoted by $r_i=\\sum_{j=1}^n p_{(i-1)n+j}$ for $i=1,\\ldots,m$ and similarly for the column sums (which are the marginal probabilities for the second subsystem): $c_j=\\sum_{i=1}^m p_{(i-1)n+j}$ for $j=1,\\ldots,n$.\nThen given any permutation $\\sigma$ in the symmetric group $\\Sym{mn}$ on $mn$ letters sending $p_i$ to $p_{\\sigma(i)}$\nwe define a new $m\\times n$-matrix $P^\\sigma$ as follows:\n\\begin{equation*}\nP^\\sigma=\\begin{pmatrix}\np_{\\sigma(1)} & p_{\\sigma(2)} & \\ldots & p_{\\sigma(n)} \\cr\np_{\\sigma(n+1)} & p_{\\sigma(n+2)} & \\ldots & p_{\\sigma(2n)} \\cr\n\\vdots & \\vdots & \\ddots & \\vdots \\cr\np_{\\sigma((m-1)n+1)} & p_{\\sigma((m-1)n+2)} & \\ldots & p_{\\sigma(mn)} \\cr\n\\end{pmatrix},\n\\end{equation*}\ndefining the appropriate marginal probabilities in a similar fashion.\n\nTo define the classical mutual information~\\cite[\\S2.3]{cover} we take the sum of the entropies of the~$r_i$ and the~$c_j$ over all~$i,j$ and then subtract the sum of the individual entropies of the~$p_k$, for~$k=1,\\ldots,mn$. 
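Before this quantity is stated formally, the computation just described can be sketched numerically. The following Python fragment is our own illustration (the function names are ours, not from the paper): it forms the row and column marginals of an $m\times n$ probability matrix and combines their entropies as described above.

```python
import numpy as np

def entropy(v):
    """Shannon entropy H(v) = -sum_i v_i log v_i, with 0 log 0 := 0."""
    v = np.asarray(v, dtype=float).ravel()
    v = v[v > 0]
    return float(-np.sum(v * np.log(v)))

def cmi(P):
    """I(P): entropies of the row sums r_i and the column sums c_j,
    minus the sum of the entropies of the joint probabilities p_k."""
    P = np.asarray(P, dtype=float)
    return entropy(P.sum(axis=1)) + entropy(P.sum(axis=0)) - entropy(P)

# For a product distribution the marginal entropies add up exactly,
# so the mutual information vanishes (up to floating-point error):
P_indep = np.outer([0.3, 0.7], [0.2, 0.5, 0.3])  # a 2x3 example
print(abs(cmi(P_indep)) < 1e-12)  # True
```

A perfectly correlated matrix such as $\mathrm{diag}(1/2,1/2)$ instead gives $I=\log 2$, the maximum for two binary marginals.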
\n\n\\begin{definition}\\label{cmile}\nWith notation as above, the classical mutual information $I(P)$ of the matrix $P$ is given by\n\\begin{equation}\\label{CMIdef}\nI(P) = \\sum_{i=1}^m-r_i\\log r_i + \\sum_{j=1}^n-c_j\\log c_j - \\sum_{k=1}^{mn}-p_k\\log p_k.\n\\end{equation}\nWe will often write $H(x)=-x\\log x$ for $x\\in [0,1]$ and so we may rewrite~(\\ref{CMIdef}) as\n\\begin{equation*}\nI(P) = \\sum_{i=1}^m H(r_i) + \\sum_{j=1}^n H(c_j) - \\sum_{k=1}^{mn}H(p_k).\n\\end{equation*}\n\\end{definition}\n\n\n\\subsection{Majorisation between two elements of $\\Sym{mn}$}\\label{majdef}\n\nFor definitions and basic results connected with majorisation, see~\\cite{bhatia} and~\\cite{marshall}. We shall use the standard symbol~$\\succ$ to denote majorisation between vectors.\nFor any~$m\\times n$ probability matrix~$M$, let us denote by~$\\mathbf{r}(M)\\in\\mathbb{R}^m$ the vector of marginal probabilities represented by the sums of the rows of~$M$ and similarly by~$\\mathbf{c}(M)\\in\\mathbb{R}^n$ the vector of marginal probabilities created from the sums of the columns of~$M$. 
Throughout the paper we shall use the symbol $H$ interchangeably for the function of one variable as well as the function on probability vectors, where if $\\mathbf{v}=(v_i)\\in\\mathbb{R}^N$ is any such vector then\n$$H(\\mathbf{v})=\\sum_{i=1}^N H(v_i)=-\\sum_{i=1}^N v_i\\log v_i.$$\n\n\\begin{lemma}\\label{majoris}\nLet $M_1,M_2$ be two probability matrices.\nIf $\\mathbf{r}(M_1)\\succ\\mathbf{r}(M_2)$ and if $\\mathbf{c}(M_1)\\succ\\mathbf{c}(M_2)$, then\n$$I(M_1)\\leq I(M_2).$$\n\\end{lemma}\n\n\\begin{proof}\nSee~\\cite{santerdav}: it follows from the fact that $H$ is a Schur-concave function~\\cite[\\S II.3]{bhatia}.\n\\end{proof}\n\nIt should be pointed out that the converse is definitely NOT true: indeed it is this very failure which gives substance to the definition of the entropic binary relation $\\rhd$.\n\n\\begin{definition}\\label{critta}\nIf the hypotheses of Lemma \\ref{majoris} hold then we write\n$$M_1\\succ M_2$$\nand we shall say that $M_1$ majorises $M_2$: \\bf but this matrix terminology is not standard.\n\\end{definition}\n\nNote that definition~\\ref{critta} has nothing intrinsically to do with entropy: it is the fact that entropy is a Schur-concave function which enables us to link it to majorisation~\\cite{marshall}.\nBy the symmetry of the entropy function $H$ upon vectors, the relation of majorisation between matrices which we have just defined is invariant under row swaps and column swaps; moreover if $m=n$ then it is also invariant under transposition. \n\nFrom now on we shall use~$\\mathbf{p}$ to denote a probability vector of length~$mn$ (where~$m,n$ will be clear from the context) \\emph{written in non-increasing order} and~$P$ to denote the corresponding~$m\\times n$ matrix derived from~$\\mathbf{p}$ as above by successively writing its entries along the rows. Similarly~$\\mathbf{p}^\\sigma,P^\\sigma$ will denote the respective images under an element~$\\sigma\\in\\Sym{mn}$. 
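The majorisation relations just introduced are easy to experiment with numerically. A minimal Python sketch (ours, purely illustrative) checks vector majorisation by partial sums, the matrix relation via the row and column marginals, and the entropy inequality of the lemma above on a concrete pair of matrices:

```python
import numpy as np

def majorises(u, v, tol=1e-12):
    """Vector majorisation: partial sums of the non-increasing
    rearrangement of u dominate those of v (equal totals assumed)."""
    u = np.sort(np.asarray(u, float))[::-1]
    v = np.sort(np.asarray(v, float))[::-1]
    return bool(np.all(np.cumsum(u) >= np.cumsum(v) - tol))

def matrix_majorises(M1, M2):
    """M1 majorises M2 when r(M1) majorises r(M2) and c(M1) majorises c(M2)."""
    M1, M2 = np.asarray(M1, float), np.asarray(M2, float)
    return (majorises(M1.sum(axis=1), M2.sum(axis=1))
            and majorises(M1.sum(axis=0), M2.sum(axis=0)))

def cmi(P):
    """Classical mutual information of the probability matrix P."""
    H = lambda v: -sum(x * np.log(x) for x in np.ravel(v) if x > 0)
    P = np.asarray(P, float)
    return H(P.sum(axis=1)) + H(P.sum(axis=0)) - H(P)

# Two arrangements of the same four probabilities:
M1 = [[0.4, 0.3], [0.2, 0.1]]
M2 = [[0.4, 0.1], [0.2, 0.3]]
assert matrix_majorises(M1, M2)  # r(M1)=(.7,.3) majorises r(M2)=(.5,.5); columns equal
assert cmi(M1) <= cmi(M2)        # consistent with the lemma: I(M1) <= I(M2)
```

Reversing the roles of the two matrices makes `matrix_majorises` fail, while the entropy inequality still holds only in the stated direction.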
Notice that our $\\mathbf{p}$ are thereby chosen from a much smaller convex set than $\\Delta^{mn-1}$, namely from the analogue of the `positive orthant' of a vector space:\n\\begin{equation}\n\\mathfrak{D}_{mn} = \\{\\ \\mathbf{p}\\in\\Delta^{mn-1}\\ :\\ p_1\\geq p_2\\geq\\ldots\\geq p_{mn}\\ \\},\n\\end{equation}\nwhich is the topological closure of a fundamental domain for the action of~$\\Sym{mn}$ upon~$\\Delta^{mn-1}$. Henceforth all of the probability vectors with which we shall work will be assumed to be chosen from this set~$\\mathfrak{D}_{mn}$; the corresponding set of matrices (constructed from each $\\mathbf{p}\\in\\mathfrak{D}_{mn}$ as above and therefore also with entries in non-increasing order as we go along successive rows) will be denoted $\\mathfrak{M}_{mn}$.\n\n\\begin{definition}\\label{crittasymbols}\nLet $\\sigma,\\sigma'\\in\\Sym{mn}$. If $\\mathbf{r}(P^\\sigma)\\succ\\mathbf{r}(P^{\\sigma'})$ and if $\\mathbf{c}(P^\\sigma)\\succ\\mathbf{c}(P^{\\sigma'})$ for all $P\\in\\mathfrak{M}_{mn}$\nthen we write\n$$\\sigma\\succ\\sigma'$$\nand we shall say that $\\sigma$ majorises $\\sigma'$: \\bf but again this terminology is not standard.\\it\n\\end{definition}\n\nWe are now ready to define a finer relation than the one which majorisation gives upon permutations of a fixed probability vector. This relation is the key to all of the results in this paper.\n\n\n\\subsection{Definition of the entropic binary relation $\\rhd$ between two elements of $\\Sym{mn}$}\n\nIf we consider the class of $(mn)!$ matrices formed by permuting the entries in the matrix $P$ in~(\\ref{rowsncols}) under the full symmetric group $\\Sym{mn}$ and then look at the CMI of each of the resulting matrices, there is a rigid \\it a priori \\rm partial order which holds between them, and which does not vary as $P$ moves over the whole of $\\mathfrak{M}_{mn}$. That is to say, \\bf it does not depend on the sizes of the $\\{p_i\\}$ but only upon their ordering\\rm. 
In low dimensions, much of the partial order can be explained by majorisation considerations. However there is a substantial set of relations which depends on a much finer graining than majorisation gives. In dimension~6 this fine-graining will become our entropic partial order~$\\mathfrak{E}$. \n\nWe denote the individual relational operator by~$\\rhd$ and define it as follows. \n\n\\begin{definition}\\label{bump}\nGiven permutations $\\sigma,\\sigma'\\in\\mathbf{S}_{mn}$ we say that \n$$\\sigma\\ \\rhd\\ \\sigma'$$\nif it can be shown that $I(P^{\\sigma'})-I(P^\\sigma)$ is non-negative for all $P\\in\\mathfrak{M}_{mn}$.\nThat is to say, given an ordered matrix $P\\in\\mathfrak{M}_{mn}$, the relation $I(P^\\sigma)\\leq I(P^{\\sigma'})$ holds \\bf irrespective of the relative sizes of the entries\\it.\nThis is the same as saying that\n$$H(\\mathbf{r}({P^\\sigma}))+H(\\mathbf{c}({P^\\sigma}))\\ \\leq\\ H(\\mathbf{r}({P^{\\sigma'}}))+H(\\mathbf{c}({P^{\\sigma'}}))$$\nfor all $P\\in\\mathfrak{M}_{mn}$.\n\nIn order to keep the notation consistent with that of majorisation, we have adopted the convention that $\\sigma\\rhd\\sigma'$ corresponds to $I(P^\\sigma) \\leq I(P^{\\sigma'})$ for all $P\\in\\mathfrak{M}_{mn}$.\n\\end{definition}\n\n\\begin{remark}\nA key observation at this stage is that the partial order is not really connected with the notion of classical mutual information~(CMI) so much as it is with entropy itself, for the term which is the sum of the entropies of the individual joint probabilities is common to all permutations of a given fixed matrix~$P$, and so as we pointed out in definition~\\ref{bump} the ordering depends only upon \\emph{the relative sizes of the sums of the entropies of the marginal probability vectors}. 
Indeed, nothing meaningful may be said within this framework about any relation between the~CMI of matrices whose (sets of) entries are distinct: the ordering is effectively concerned solely with permutations.\n\\end{remark}\n\n\nNow majorisation implies $\\rhd$, but not vice-versa: we have the following result which was proven in~\\cite{me}. The notation $(\\alpha,\\beta)$ for the transposition will be clarified in the next section.\n\\begin{proposition}\\label{majmcmaj}\nLet $\\sigma\\in\\Sym{mn}$ and let $\\tau=(\\alpha,\\beta)\\in\\Sym{mn}$ be the transposition swapping elements $\\alpha$ and $\\beta$. Then\n\\begin{equation*}\n\\left(\\sigma\\succ\\sigma\\tau\\right) \\Rightarrow \\left(\\sigma\\rhd\\sigma\\tau\\right).\n\\end{equation*}\nFurthermore if $\\alpha$ and $\\beta$ belong to the same row or column of the corresponding $m\\times n$ matrix, then the two notions are the same.\\qed\n\\end{proposition}\n\nWe now explore the relations which arise from single transpositions.\n\n\n\n\\subsection{The entropic binary relation $\\rhd$ for a single transposition}\n\nIn order to see what the entropic binary relation is in the case which will most interest us - that of a single transposition - we once again consider a general $m\\times n$ probability matrix $P=(p_i)\\in\\mathfrak{M}_{mn}$ as depicted in~\\ref{rowsncols}. \nLet $\\sigma$ be some element of $G$, so our starting matrix will be $P^\\sigma$.\nLet $\\tau$ be any transposition acting on $P^\\sigma$, interchanging two elements which we shall refer to as $\\alpha$ and $\\beta$ (by a slight abuse of notation, since the positions and the values will be referred to by the same symbols). 
The following diagram illustrates this action of $\\tau$ on $P^\\sigma$: we write $P^{\\tau\\sigma}=(P^\\sigma)^\\tau$ for the image of $P^\\sigma$ under $\\tau$ since we always write abstract group actions on the \\emph{left}; but note that when it comes to the comparison we are trying to effect between group elements then since $\\tau$ actually multiplies $\\sigma$ on the \\emph{right}, we will be comparing $\\sigma$ with $\\sigma\\tau$ as required.\n\n\\begin{equation}\n\\label{matricks}\n\\bordermatrix{\n&&&&c_\\beta&&c_\\alpha&\\cr\n&p_{\\sigma(1)} & p_{\\sigma(2)} & \\ldots & \\ldots & \\ldots & \\ldots & p_{\\sigma(n)} \\cr\n&p_{\\sigma(n+1)} & p_{\\sigma(n+2)} & \\ldots & \\ldots & \\ldots & \\ldots & p_{\\sigma(2n)} \\cr\n&\\vdots & \\vdots & \\vdots & \\vdots & \\vdots & \\vdots & \\vdots \\cr\nr_\\alpha&\\ldots & \\ldots & \\ldots & \\ldots & \\ldots & \\alpha & \\ldots \\cr\n&\\vdots & \\vdots & \\vdots & \\vdots & \\vdots & \\vdots & \\vdots \\cr\nr_\\beta&\\ldots & \\ldots & \\ldots & \\beta & \\ldots & \\ldots & \\ldots \\cr\n&\\vdots & \\vdots & \\vdots & \\vdots & \\vdots & \\vdots & \\vdots \\cr\n&p_{\\sigma((m-1)n+1)} & p_{\\sigma((m-1)n+2)} & \\ldots & \\ldots & \\ldots & \\ldots & p_{\\sigma(mn)}\\cr\n}=P^\\sigma\n\\end{equation}\nand under the action of $\\tau$ this is mapped to:\n\\begin{equation}\n\\label{matrickstau}\n\\bordermatrix{\n&&&&c_\\beta^\\tau&&c_\\alpha^\\tau&\\cr\n&p_{\\sigma(1)} & p_{\\sigma(2)} & \\ldots & \\ldots & \\ldots & \\ldots & p_{\\sigma(n)} \\cr\n&p_{\\sigma(n+1)} & p_{\\sigma(n+2)} & \\ldots & \\ldots & \\ldots & \\ldots & p_{\\sigma(2n)} \\cr\n&\\vdots & \\vdots & \\vdots & \\vdots & \\vdots & \\vdots & \\vdots \\cr\nr_\\alpha^\\tau&\\ldots & \\ldots & \\ldots & \\ldots & \\ldots & \\beta & \\ldots \\cr\n&\\vdots & \\vdots & \\vdots & \\vdots & \\vdots & \\vdots & \\vdots \\cr\nr_\\beta^\\tau&\\ldots & \\ldots & \\ldots & \\alpha & \\ldots & \\ldots & \\ldots \\cr\n&\\vdots & \\vdots & \\vdots & \\vdots & \\vdots & 
\\vdots & \\vdots \\cr\n&p_{\\sigma((m-1)n+1)} & p_{\\sigma((m-1)n+2)} & \\ldots & \\ldots & \\ldots & \\ldots & p_{\\sigma(mn)}\\cr\n}=P^{\\tau\\sigma}.\n\\end{equation}\n\nWithout loss of generality we may stipulate that \\bf as matrix entries \\rm $\\alpha>\\beta$ (if they are equal there is nothing to be done). We wish to compare $I(P^\\sigma)$ with $I(P^{\\tau\\sigma})$. Note firstly that by the definition of CMI, the difference $I(P^{\\tau\\sigma})-I(P^\\sigma)$ depends only on the rows and columns containing $\\alpha,\\beta$: all of the rest of the terms vanish as they are not affected by the action of $\\tau$. We denote by $r_\\alpha$ (respectively $r_\\beta$) the sum of the entries in the row of $P^\\sigma$ which contains $\\alpha$ (resp. $\\beta$), and by $c_\\alpha$ (respectively, $c_\\beta$) the sum of the entries in the column of $P^\\sigma$ which contains $\\alpha$ (resp. $\\beta$). Similarly, we denote by $r_\\alpha^\\tau,r_\\beta^\\tau,c_\\alpha^\\tau,c_\\beta^\\tau$ the image of these quantities under the action of $\\tau$. 
See the diagrams~(\\ref{matricks}),~(\\ref{matrickstau}) above.\n\nNB: $r_\\alpha^\\tau,c_\\alpha^\\tau$ (respectively, $r_\\beta^\\tau,c_\\beta^\\tau$) no longer contain $\\alpha$ (respectively~$\\beta$), but rather~$\\beta$ (respectively~$\\alpha$).\n\nSo the quantity we are interested in becomes\n\\begin{eqnarray}\nI(P^{\\tau\\sigma})-I(P^\\sigma) & = & \nH(\\mathbf{r}(P^{\\tau\\sigma}))+H(\\mathbf{c}(P^{\\tau\\sigma}))-H(\\mathbf{r}(P^\\sigma))-H(\\mathbf{c}(P^\\sigma)) \\nonumber \\\\\n& = & H(r_\\alpha^\\tau)-H(r_\\alpha)+H(r_\\beta^\\tau)-H(r_\\beta)+H(c_\\alpha^\\tau)-H(c_\\alpha)+H(c_\\beta^\\tau)-H(c_\\beta),\\label{willit}\n\\end{eqnarray}\nwith the proviso that if $\\alpha$ and $\\beta$ happen to be in the same row (respectively column) then the $r_\\ast^\\ast$ (respectively, $c_\\ast^\\ast$) terms vanish.\nThe terms in~(\\ref{willit}) are grouped in pairs of the form $\\pm(H(x+(\\alpha-\\beta))-H(x))$, which means we may write it in a more suggestive form:\n\\begin{equation}\\label{slump}\nI(P^{\\tau\\sigma})-I(P^\\sigma) = (\\alpha-\\beta)\\left(-\\frac{H(r_\\alpha)-H(r_\\alpha^\\tau)}{\\alpha-\\beta}+\\frac{H(r_\\beta^\\tau)-H(r_\\beta)}{\\alpha-\\beta} -\\frac{H(c_\\alpha)-H(c_\\alpha^\\tau)}{\\alpha-\\beta}+\\frac{H(c_\\beta^\\tau)-H(c_\\beta)}{\\alpha-\\beta}\\right).\n\\end{equation}\nTo take advantage of the link with calculus, we introduce Lagrangian means~\\cite[VI \\S2.2]{bullen}.\n\n\\begin{definition}\nLet $\\varphi$ be a continuously differentiable and strictly convex or strictly concave function defined on a real interval $J$, with first derivative $\\varphi'$. 
Define the {\\bf Lagrangian mean $\\mu_\\varphi$ associated with $\\varphi$} \\it to be:\n\\begin{equation}\\label{lagrean}\n\\mu_\\varphi(x,y) = \\begin{cases} {\\varphi'}^{-1}\\left(\\frac{\\varphi(y)-\\varphi(x)}{y-x}\\right)&\\mbox{if }y\\neq x \\\\\n x &\\mbox{if }y=x \\end{cases}\n\\end{equation}\nfor any $x,y\\in J$, where ${\\varphi'}^{-1}$ denotes the unique inverse of $\\varphi'$.\n\\end{definition}\nIn other words, $\\mu_\\varphi$ is the function which arises from the Lagrangian mean value theorem in the process of going from the points $(x,\\varphi(x))$ and $(y,\\varphi(y))$ subtending a secant on the curve of $\\varphi$, to the unique point in $[x,y]$ where the slope of the tangent to the curve $\\varphi$ is equal to that of the secant. See figure~\\ref{secant}. Note that the hypothesis about strict convexity\/concavity is necessary in order to ensure the uniqueness of the inverse of the derivative.\n\n\\begin{figure}[h!btp]\n\\begin{center}\n\\mbox{\n\\subfigure{\\includegraphics[width=3in,height=3in,keepaspectratio]{badgraphcropped.pdf}}\n}\n\\caption{Definition of $\\mu_\\varphi$}\\label{secant}\n\\end{center}\n\\end{figure}\n\nIf we focus on the case where $\\varphi=H$ which is continuously differentiable and strictly concave on $J=[0,1]$ we may rewrite (\\ref{slump}) as:\n\\begin{eqnarray*}\nI(P^{\\tau\\sigma})-I(P^\\sigma) & = & (\\alpha-\\beta)\\left(-H'(\\mu_H(r_\\alpha^\\tau,r_\\alpha))+H'(\\mu_H(r_\\beta,r_\\beta^\\tau))\n-H'(\\mu_H(c_\\alpha^\\tau,c_\\alpha))+H'(\\mu_H(c_\\beta,c_\\beta^\\tau))\\right);\n\\end{eqnarray*}\nindeed $\\varphi'(x) = H'(x) = -(1+\\log(x))$ and so this becomes:\n\\begin{eqnarray*}\nI(P^{\\tau\\sigma})-I(P^\\sigma) & = & (\\alpha-\\beta)\\log \\frac{\\mu_H(r_\\alpha^\\tau,r_\\alpha)\\mu_H(c_\\alpha^\\tau,c_\\alpha)}{\\mu_H(r_\\beta,r_\\beta^\\tau)\\mu_H(c_\\beta,c_\\beta^\\tau)}.\n\\end{eqnarray*}\nSince $(\\alpha-\\beta)>0$, in order to determine which of the matrices gives higher CMI we only need consider the 
relative sizes of the numerator and denominator of the argument of the logarithm. So it is enough to study the quantity\n\\begin{eqnarray}\n\\mu_H(r_\\alpha^\\tau,r_\\alpha)\\mu_H(c_\\alpha^\\tau,c_\\alpha) - \\mu_H(r_\\beta,r_\\beta^\\tau)\\mu_H(c_\\beta,c_\\beta^\\tau),\n\\label{wing}\n\\end{eqnarray}\nas we did in~\\cite{me} and as we do for the entropic binary relation below.\n\nWe are now in a position to re-state what is meant by $\\rhd$ for this special case of a transposition.\n\n\\begin{lemma}\nWith notation as above, $\\sigma\\rhd \\sigma\\tau$ if and only if it can be shown that the quantity in (\\ref{wing}) is non-negative for all $P\\in\\mathfrak{M}_{mn}$.\\qed\n\\end{lemma}\n\nFor convenience later on we state the following sufficient condition for $\\rhd$ which is proven in~\\cite{me}. Consider the four terms which constitute the first arguments of the function $\\mu_H$ in (\\ref{wing}), namely\n\\begin{equation}\\label{titanic}\nr_\\alpha^\\tau,c_\\alpha^\\tau,r_\\beta,c_\\beta.\n\\end{equation}\nObserve that there are no \\it a priori \\rm relationships between the sizes of these quantities.\nLet us consider the possible orderings of the four terms based upon what we know of the ordering of the matrix elements of $P$. In principle there are $24$ such possibilities; however in certain instances of small dimension such as our $2\\times 3$ case, most of these may be eliminated and we are left with only a few orderings.\n\n\\begin{proposition}\\label{titrate}\nSuppose that the minimum element in (\\ref{titanic}) is either $r_\\beta$ or $c_\\beta$. 
In addition suppose that we can verify that\n$r_\\beta+c_\\beta \\leq r_\\alpha^\\tau+c_\\alpha^\\tau$ holds for any $P\\in\\mathfrak{M}_{mn}$.\nThen $\\sigma\\rhd \\sigma\\tau$.\n\nConversely, suppose that the minimum element in (\\ref{titanic}) is either $r_\\alpha^\\tau$ or $c_\\alpha^\\tau$ and in addition suppose that we can verify that\n$r_\\beta+c_\\beta \\geq r_\\alpha^\\tau+c_\\alpha^\\tau$ holds for any $P\\in\\mathfrak{M}_{mn}$.\nThen $\\sigma\\tau\\rhd\\sigma$.\\qed\n\\end{proposition}\n\n\n\\subsection{Properties of the \\emph{identric mean} $\\mu_H$}\n\nWe now prove some facts specifically about $\\mu_H$ which will give us an insight into the sign of the quantity in~(\\ref{wing}).\n\n\\begin{lemma}\\label{lollipop}\nFix $t\\in(0,1)$. For $x\\in(0,1-t)$:\n\\vspace{2 mm}\n\n(i) $\\mu_H(x,x+t)>0$ and is strictly monotonically increasing in $x$;\n\\vspace{2 mm}\n\n(ii) $\\mu_H(x,x+t)$ is strictly concave in $x$;\n\\vspace{2 mm}\n\n(iii) $\\frac{1}{e} < \\frac{1}{t}(\\mu_H(x,x+t)-x) < \\frac{1}{2}$;\n\\vspace{2 mm}\n\n(iv) $\\frac{\\partial}{\\partial x}\\left(\\mu_H(x,x+t)\\right)$ is strictly monotonically increasing in $t$ for fixed $x$.\n\\vspace{4 mm}\n\nLet $\\delta\\in(0,1-t-x)$.\n\\vspace{2 mm}\n\n(v) $\\frac{\\mu_H(x+\\delta,x+\\delta+t)}{\\mu_H(x,x+t)}$ is monotonic decreasing in $t$ for fixed $x$;\n\\vspace{2 mm}\n\n(vi) $\\frac{\\mu_H(x+\\delta,x+\\delta+t)}{\\mu_H(x,x+t)}$ is monotonic decreasing in $x$ for fixed $t$.\n\\vspace{4 mm}\n\nNow let $0<p\\leq q\\leq r\\leq s$ with $s+t<1$.\n\\vspace{2 mm}\n\n(vii) Suppose that\n$$\\frac{\\mu_H(q,q+t)\\mu_H(r,r+t)}{\\mu_H(p,p+t)\\mu_H(s,s+t)}>1.$$ Then $qr>ps$.\n\n\\end{lemma}\n\nLet $y>x>0$: then we note that (iii) says that the Lagrangian mean of $x$ and $y$ occurs between $x+\\frac{y-x}{e}$ and $x+\\frac{y-x}{2}$.
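The bounds in part (iii) are easy to check numerically from the closed form $\mu_H(x,y)=e^{-1}(y^y/x^x)^{1/(y-x)}$ derived in the proof below; a short Python sketch (our own, not part of the paper):

```python
import math

def identric_mean(x, y):
    """mu_H(x, y) = e^{-1} (y^y / x^x)^{1/(y-x)}: the identric mean,
    i.e. the Lagrangian mean attached to H(t) = -t log t, for 0 < x, y."""
    if x == y:
        return x
    return math.exp((y * math.log(y) - x * math.log(x)) / (y - x) - 1.0)

# Part (iii): 1/e < (mu_H(x, x+t) - x)/t < 1/2 throughout the domain.
for x in (0.01, 0.1, 0.3):
    for t in (0.01, 0.2, 0.5):
        ratio = (identric_mean(x, x + t) - x) / t
        assert 1.0 / math.e < ratio < 0.5
```

The loop samples a few points of the domain $x\in(0,1-t)$; the lemma guarantees the strict bounds everywhere, with both extremes attained only in the limit.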
Both extremes occur in the limit, so \\it a priori \\rm we cannot narrow the range down further than this.\n\n\\begin{proof}\nFirst, solving~(\\ref{lagrean}) explicitly for $\\varphi=H$ we see that $\\mu_H$ is in fact what is known as the {\\it identric mean of $x$ and $y$\\rm}:\n\\begin{equation*}\n\\mu_H(x,y) = e^{-1}\\left(\\frac{y^y}{x^x}\\right)^\\frac{1}{y-x},\n\\end{equation*}\nor if we set $t=y-x$:\n\\begin{eqnarray*}\n\\mu_H(x,x+t) & = & e^{-1}\\left(\\frac{(x+t)^{(x+t)}}{x^x}\\right)^\\frac{1}{t}\\\\\n & = & e^{-1}(x+t)(1+\\frac{t}{x})^\\frac{x}{t}\\ .\n\\end{eqnarray*}\n\nNow parts (i) and (ii) are proven as lemma~6 of~\\cite{me} and since (iii) follows by similar techniques we omit the proof. Part~(iv) follows by taking the derivative \n$\\frac{\\partial^2}{\\partial t\\partial x}\\left(\\mu_H(x,x+t)\\right)$ and observing that its sign is the same as the sign of\n$$\\frac{t^2}{x(x+t)}-\\log^2(1+\\frac{t}{x}),$$\nwhich is shown to be positive in the course of proving the above lemma in~\\cite{me}.\n\nPart (vi) follows directly by taking the partial derivative of~$\\frac{\\mu_H(x+\\delta,x+\\delta+t)}{\\mu_H(x,x+t)}$ with respect to $x$; taking the partial derivative with respect to $t$ instead we see that part~(v) boils down to the inequality\n$$(1+\\frac{t}{x})^x < (1+\\frac{t}{x+\\delta})^{x+\\delta},$$\nwhich on taking derivatives is seen to be a standard fact\n$$\\log(1+\\frac{t}{x})>\\frac{t}{x+t}$$\nabout logarithms~\\cite{hardy}.\n\nTo prove (vii): let $w\\leq x\\leq y\\leq z$ be any 4 positive real numbers arranged in the order shown, and let $\\lambda\\geq0$. 
Define a positive real function\n$$\\chi_\\lambda(w,x,y,z) = \\frac{(w+\\lambda)^{(w+\\lambda)}(z+\\lambda)^{(z+\\lambda)}}{(x+\\lambda)^{(x+\\lambda)}(y+\\lambda)^{(y+\\lambda)}}.$$\nThen it follows from the explicit form for $\\mu_H$ above that\n$$\\left[ \\frac{\\mu_H(q,q+t)\\mu_H(r,r+t)}{\\mu_H(p,p+t)\\mu_H(s,s+t)}\\right] ^t = \\frac{\\chi_0(p,q,r,s)}{\\chi_t(p,q,r,s)}.$$\nFrom now on we shall simply write $\\chi_\\lambda$ for $\\chi_\\lambda(p,q,r,s)$, for any $\\lambda\\geq0$, with $0<p\\leq q\\leq r\\leq s$ as in the statement of (vii). Since $t>0$ by assumption and since the term in square brackets is always positive it follows that\n$$\\frac{\\mu_H(q,q+t)\\mu_H(r,r+t)}{\\mu_H(p,p+t)\\mu_H(s,s+t)}>1 \\Leftrightarrow \\frac{\\chi_0}{\\chi_t}>1.$$\nSo to prove (vii) it is enough to show that\n$$\\frac{\\chi_0}{\\chi_t}>1\\ \\Rightarrow\\ qr>ps.$$\nNow\n\\begin{equation}\\label{ching}\n\\chi_\\lambda' = \\frac{\\partial}{\\partial\\lambda}\\left(\\chi_\\lambda\\right) = \\chi_\\lambda\n\\log\\frac{(p+\\lambda)(s+\\lambda)}{(q+\\lambda)(r+\\lambda)}\n\\end{equation}\nand so for $\\lambda\\geq0$ (again noting that $\\chi_\\lambda$ is always positive) it follows that the sign of $\\chi_\\lambda'$ is exactly the sign of\n\\begin{equation}\\label{bagosh}\n(p+\\lambda)(s+\\lambda)-(q+\\lambda)(r+\\lambda)=ps-qr+((p+s)-(q+r))\\lambda.\n\\end{equation}\nSo suppose $qr\\leq ps$. Since $q$ and $r$ both lie in the interval $[p,s]$, a larger sum forces a larger product, so that $q+r>p+s\\ \\Rightarrow\\ qr>ps$, whence\n$$qr\\leq ps\\ \\Rightarrow\\ q+r\\leq p+s.$$\nBut then both terms on the right-hand side of~(\\ref{bagosh}) are non-negative for all $\\lambda\\geq0$, so $\\chi_\\lambda'\\geq0$ and $\\chi_\\lambda$ is non-decreasing in $\\lambda$: in particular $\\frac{\\chi_0}{\\chi_t}\\leq1$. By contraposition $\\frac{\\chi_0}{\\chi_t}>1$ must indeed imply that $qr>ps$, as claimed.\n\\end{proof}\n\n\\begin{remark}\nThe significance of condition (vii) of lemma~\\ref{lollipop} is that it may be used to derive a \\emph{necessary} condition for $\\rhd$, which in the $2\\times3$-case in combination with proposition~\\ref{titrate} yields necessary and sufficient conditions for the relation $\\rhd$ between two permutations related by a single transposition. 
See theorem~\\ref{cripes} below.\n\\end{remark}\n\n\n\n\n\n\\section{The analytical construction of an entropic partial order $\\mathfrak{E}$ for the $2\\times 3$ case}\\label{qmpo}\n\n\\subsection{The entropic relation $\\rhd$ does give rise to a partial order}\n\nLet $m,n\\in\\mathbb{N}$ be arbitrary. So far we have constructed an abstract framework for the study of the binary relation~$\\rhd$ between elements of $\\Sym{mn}$ based on the~entropy function~$H$. Moreover we have shown that it is a necessary condition for `majorisation' between matrices related by a permutation, in the sense of definition~\\ref{critta}. We now prove that it does indeed give rise to a partial order on the quotient of $\\Sym{mn}$ by the subgroup~$K_{mn}$ generated by the appropriate~$H$-invariant (and so also~CMI-invariant) matrix transformations.\n\n\\begin{proposition}\nThe binary relation $\\rhd$ gives a well-defined partial order on the coset space of the symmetric group~$\\Sym{mn}$ modulo its subgroup~$K=K_{mn}$ of row- and column-swaps (together with the transpose operation if~$m=n$).\n\\end{proposition}\n\n\\begin{proof}\nFrom its definition we see immediately that~$\\rhd$ is reflexive and transitive. It is also anti-symmetric: let~$T_K$ be any right transversal of~$K$ in~$G=\\Sym{mn}$. We need to show that if there exists a pair~$\\sigma,\\sigma'\\in T_K$ for which both~$\\sigma\\rhd\\sigma'$ and~$\\sigma'\\rhd\\sigma$ hold simultaneously (meaning of course that~$I(P^\\sigma)=I(P^{\\sigma'})$ for all~$P\\in\\mathfrak{M}_{mn}$) then in fact~$\\sigma=\\sigma'$.\n\nWe proceed by a kind of induction on the number of transpositions needed to express $\\sigma^{-1}\\sigma'$. 
Suppose that a single transposition $\\tau$ takes $\\sigma$ to $\\sigma'$:\n$$\\sigma^{-1}\\sigma' = \\tau.$$\nOur hypothesis that $I(P^\\sigma)=I(P^{\\sigma'})$ for all~$P\\in\\mathfrak{M}_{mn}$ means that the quantity in~(\\ref{wing}) is always zero; hence in particular its derivative with respect to~$t=\\alpha-\\beta$ will be zero. Recall the function~$\\chi_\\lambda$ which we defined in order to study the effect of varying~$t$ inside the expression~(\\ref{wing}): if we look at its first partial derivative with respect to~$\\lambda$ we find the expression in~(\\ref{ching}). Now by our hypothesis the value of~$\\chi_\\lambda$ is always~1 and so the expression~(\\ref{ching}) reduces to~$\\log\\frac{(p+\\lambda)(s+\\lambda)}{(q+\\lambda)(r+\\lambda)}$ with the~$p,q,r,s$ being some appropriate ordering of the four terms in~(\\ref{titanic}). Our hypothesis implies this is identically zero, which is impossible as we vary~$\\lambda$ unless we always have equality between the sets~$\\{p,s\\}$ and~$\\{q,r\\}$, which in the general case we do not. So for the case where~$\\sigma^{-1}\\sigma'$ is assumed to be a single transposition we have produced a contradiction: so indeed~$\\sigma=\\sigma'$.\n\nNext suppose that \n$$\\sigma^{-1}\\sigma' = \\tau_1\\tau_2,$$\na product of two distinct transpositions. Without loss of generality we may assume that~$\\tau_1$ interchanges two positions which `bracket' at most one of the positions interchanged by~$\\tau_2$ (in the sense that if the two positions swapped by~$\\tau_1$ are occupied by the same value $x$ then that must also be true of every other position which is in-between these positions in the ordering of the entries, and hence at most one of those positions swapped by~$\\tau_2$ will be forced to be occupied by the same number $x$, but not both). 
If this is not the case we swap~$\\tau_1$ with~$\\tau_2$ and the argument will go through unchanged.\nSo let~$P$ be such a matrix, where the two positions swapped by~$\\tau_1$ are occupied by the same value~$\\delta$ say, but where one or both of the positions swapped by $\\tau_2$ (depending on whether there is an overlap of one of them with~$\\tau_1$) are assigned one or two different values. The key thing is that the values for~$\\tau_2$ be different from one another and that at least one of them be different from~$\\delta$. By construction the transposition~$\\tau_1$ will have no effect on the~CMI of~$P^\\sigma$, so by our hypotheses the transposition~$\\tau_2$ cannot change the value either. Since we have factored out by the~$K$-symmetry of the matrices, by the \\emph{strict Schur-concavity} of the entropy function~\\cite[\\S3A]{marshall} any two distinct column sum vectors (respectively, row sum vectors) which are not permutations of one another will yield different entropies, and therefore \\emph{ceteris paribus} different CMI's. Now if~$\\tau_2$ were to swap two elements of the same row, then clearly the column sum vector would change but the row vector would not, giving a different~CMI; a similar argument goes for two elements of the same column. So $\\tau_2$ must be a \\emph{diagonal transposition} (see definition~\\ref{dhv}), swapping elements which lie both in different rows and in different columns; moreover the difference between the entropy of the row vectors before and after the action by~$\\tau_2$ must be exactly equal to that between the column vectors, with the opposite sign. 
But then we are back to the convexity argument for the case of a single transposition above.\n\nThe general case follows by the same argument, noting that we may have to reduce either to the first or the last transposition in the expression for~$\\sigma^{-1}\\sigma'$ depending on the `bracketing' effect mentioned above.\n\\end{proof}\n\n\n\\subsection{The existence of a unique maximum CMI configuration in the $2\\times3$ case}\n\nFor the rest of the paper we specialise to the case where~$m=2$ and~$n=3$, and we shall often merely state many of the results from~\\cite{me}. The sections on definitions are identical in many places to those in~\\cite{me} but are reproduced here for convenience.\n\nFrom now on we denote our six probabilities by~$\\{a,b,c,d,e,f\\}$ and assume that they satisfy~$a\\geq b\\geq c\\geq d\\geq e\\geq f\\geq0$ and~$a+b+c+d+e+f=1$. In the main we shall treat these as though they were \\bf strict \\rm inequalities in order to derive sharper results. However we shall occasionally require recourse to the possibility that one or more of the relations be an equality: see for example the proof of theorem~\\ref{cripes}.\n\nWe state the main theorem from~\\cite{me}:\n\n\\begin{theorem}\\label{biggun}\nThe matrix\n$$X = \\left( \\begin{array}{ccc}a & d & e\\\\\nf & c & b\\end{array}\\right)$$\nhas \\bf maximal \\it CMI among all $720$ possible $2\\times3$ arrangements of $\\{a,b,c,d,e,f\\}$.\n\n\\bf This is the case \\emph{irrespective of the relative sizes of} $\\mathbf {a,b,c,d,e,f}$.\\rm\\qed\n\\end{theorem}\n\n\\begin{remark}\\label{ream}\nIt is worth pointing out that one may arrive at the conclusion of theorem~\\ref{biggun} by a process of heuristic reasoning, as follows. Recall from definition~\\ref{cmile} that the CMI consists of three components, of which the last one is identical for all matrices which are permutations of one another. 
So in order to understand maxima\/minima we restrict our focus to the first two terms, namely the entropies of the marginal probability vectors. Now entropy is a measure of the `randomness' of the marginal probabilities: the more uniform they are the higher will be the contribution to the CMI from these row and column sum vectors. Beginning with the columns (since in general they will contribute more to the overall entropy): given the \\it a priori \\rm ordering $a>b>c>d>e>f$, the most uniform way of selecting pairs, each as close to its partner as possible, is to begin at the outside and work our way in, so the column sum vector should read $(a+f,\\ b+e,\\ c+d)$. Similarly for the row sums: we need to add small terms to $a$, but the position of $f$ is already taken in the same column as $a$, so that just leaves $d$ and $e$ in the top row, and $c$ and $b$ fill up the bottom row in the order dictated by the column sums. See also the final appendix of~\\cite{me} where in fact we can achieve a total ordering by the same method for the simpler case of $2\\times2$ matrices.\n\\end{remark}\n\n\n\\subsection{The canonical matrix class representatives $\\mathbf{R_{2\\times3}}$ and the identification with a quotient of $\\Sym{6}$}\\label{scub}\n\nThere are $6!=720$ possible permutations of the fixed probabilities $\\{a,b,c,d,e,f\\}$, giving a set of matrices in the usual way which we shall refer to throughout as~$\\cal{M}_{\\rm 2\\times3}$. However since simple row and column swaps do not change the CMI, and since there are $12=|\\mathbf{S}_3|\\cdot|\\mathbf{S}_2|$ such swaps, we are reduced to only $60=720\/12$ different possible values for the CMI (provided that the probabilities $\\{a,b,c,d,e,f\\}$ are all distinct: clearly repeated values within the elements will give rise to fewer possible CMI values). 
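Both the maximal-CMI arrangement of theorem~\\ref{biggun} and this count of $60$ values are easy to sanity-check by brute force. The following Python sketch is purely illustrative (it is not taken from~\\cite{me}): it assumes that the CMI of definition~\\ref{cmile} amounts to the usual mutual information $H(\\mathbf{r})+H(\\mathbf{c})-H(P)$ of the joint distribution, and the sample probabilities and clustering tolerance are arbitrary choices.

```python
import itertools
from math import log

def H(v):
    """Shannon entropy, skipping zero entries."""
    return -sum(p * log(p) for p in v if p > 0)

def cmi(m):
    """CMI of a 2x3 arrangement m, read row-wise as a 6-tuple:
    H(row sums) + H(column sums) - H(entries)."""
    rows = (m[0] + m[1] + m[2], m[3] + m[4] + m[5])
    cols = (m[0] + m[3], m[1] + m[4], m[2] + m[5])
    return H(rows) + H(cols) - H(m)

# illustrative probabilities with a > b > c > d > e > f
a, b, c, d, e, f = 0.30, 0.25, 0.18, 0.12, 0.09, 0.06
values = sorted(cmi(p) for p in itertools.permutations((a, b, c, d, e, f)))

# the matrix X of theorem 'biggun', read row-wise: (a d e / f c b)
assert cmi((a, d, e, f, c, b)) >= values[-1] - 1e-12

# the 720 arrangements cluster into 60 distinct CMI values
# (values within a cluster agree up to floating-point noise)
classes = 1 + sum(v2 - v1 > 1e-9 for v1, v2 in zip(values, values[1:]))
assert classes == 60
```

Trying other strictly decreasing probability vectors leaves both assertions intact, in line with the theorem's claim that the maximiser is independent of the relative sizes of $a,\\dots,f$.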
We now classify these 60 classes of matrices according to rules which will make our subsequent proofs easier, defining a fixed set of matrices which will be referred to as~$\\mathbf{R_{2\\times3}}$. \n\nThroughout we shall use the symbol $K=K_6$ for the subgroup of $G=\\Sym{6}$ generated by the row- and column-swaps referred to just now. This is the same subgroup~$K$ as we shall use in section~\\ref{algview}. In the usual cycle notation\n\\begin{equation}\\label{Kdef}\nK\\ \\ =\\ \\ \\langle\\ \\ (1,2)(4,5)\\ ,\\ (1,3)(4,6)\\ ,\\ (1,4)(2,5)(3,6)\\ \\ \\rangle\\ \\ <\\ \\ G,\n\\end{equation}\nwhere we fix for the remainder of this paper the convention that cycles multiply from right to left; so for example~$(1,2)(2,3)=(1,2,3)$ and not~$(1,3,2)$ as many authors write.\nIt follows that given any permutation $\\sigma\\in G$ the action of $K$ on rows and columns is via \\emph{left multiplication}, meaning our~60 CMI-equivalence classes correspond to \\emph{right cosets} of~$K$ in~$G$; whereas permutations to move us from one right~$K$-coset to another act via \\emph{multiplication on the right}. \n\nSince we may always make $a$ the top left-hand entry of any of the matrices in~$\\cal{M}_{\\rm 2\\times3}$ by row and\/or column swaps, we set a basic form for our matrices as $M = \\left( \\begin{array}{ccc}a & x & y\\\\u & v & w\\end{array}\\right)$, where (as sets) $\\{x,y,u,v,w\\}=\\{b,c,d,e,f\\}$.\nThis leaves us with only $5!=120$ possibilities which we further divide in half by requiring that $x>y$. So our final form for representative matrices will be:\n\\begin{equation}\nM = \\left( \\begin{array}{ccc}a & x & y\\\\\nu & v & w\\end{array}\\right),{\\rm\\ with\\ }x>y.\n\\label{mates}\n\\end{equation}\n\nThis yields our promised~60 representatives~$\\mathbf{R_{2\\times3}}$ in the form (\\ref{mates}) for the~60 possible~CMI values associated with the \\bf fixed \\rm set of probabilities~$\\{a,b,c,d,e,f\\}$. 
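The order of $K$, the resulting coset count, and the right-to-left cycle convention can all be checked mechanically. The short Python sketch below is an illustration only: it encodes a permutation $\\pi$ of the six positions, $0$-indexed, as the tuple $(\\pi(0),\\dots,\\pi(5))$, so positions $1$--$6$ of~(\\ref{Kdef}) become $0$--$5$.

```python
def compose(p, q):
    """Right-to-left composition: (p*q)(i) = p(q(i)), i.e. apply q first."""
    return tuple(p[i] for i in q)

# the stated convention (1,2)(2,3) = (1,2,3), written 0-indexed:
swap12 = (1, 0, 2, 3, 4, 5)            # the transposition (1,2)
swap23 = (0, 2, 1, 3, 4, 5)            # the transposition (2,3)
assert compose(swap12, swap23) == (1, 2, 0, 3, 4, 5)   # the 3-cycle (1,2,3)

# generators of K, 0-indexed:
gens = [
    (1, 0, 2, 4, 3, 5),                # (1,2)(4,5): swap columns 1 and 2
    (2, 1, 0, 5, 4, 3),                # (1,3)(4,6): swap columns 1 and 3
    (3, 4, 5, 0, 1, 2),                # (1,4)(2,5)(3,6): swap the two rows
]

# closure of the generators: since each generator is an involution,
# repeated left-multiplication from the identity yields all of K
K = {tuple(range(6))}
frontier = set(K)
while frontier:
    frontier = {compose(g, k) for g in gens for k in frontier} - K
    K |= frontier

assert len(K) == 12                    # |K| = |S_3| * |S_2|
assert 720 // len(K) == 60             # number of CMI-equivalence classes
```

The same closure routine works for any subgroup given by a generating set of involutions, which is all we need here.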
\nWe shall both implicitly and explicitly identify this set~$\\mathbf{R_{2\\times3}}$ with a set of coset representatives for~$K\\backslash G$: and given two matrix classes~$M$,~$N$ the statement that~$M\\succ N$ or~$M\\rhd N$ will be taken to mean that the corresponding coset representatives satisfy such a relation.\nWe now need to subdivide~$\\mathbf{R_{2\\times3}}$ as follows. Matrices whose rows and columns are arranged in descending order will be said to be in \\it standard form\\rm. It is straightforward to see that only five of the~60 matrices we have just constructed have this form, namely matrix classes~$\\mathbf{1},\\mathbf{7},\\mathbf{13},\\mathbf{25}$ and~$\\mathbf{31}$ from appendix~\\ref{frog} which are explicitly:\n\\begin{equation}\n\\label{minz}\n\\left( \\begin{array}{ccc}a & b & c \\\\d & e & f\\end{array}\\right),\n\\left( \\begin{array}{ccc}a & b & d \\\\c & e & f\\end{array}\\right),\n\\left( \\begin{array}{ccc}a & b & e \\\\c & d & f\\end{array}\\right),\n\\left( \\begin{array}{ccc}a & c & d \\\\b & e & f\\end{array}\\right),\\textrm{ and }\n\\left( \\begin{array}{ccc}a & c & e \\\\b & d & f\\end{array}\\right).\n\\end{equation}\nNotice that all of these are in the form~(\\ref{mates}) with the additional condition that~$u>v>w$.\nIf we allow the bottom row of any of these to be permuted we obtain~$5=|\\mathbf{S}_3|-1$ new matrices which are not in standard form. 
In all this gives a total of~$30$ matrices split into five groups of~$6$, indexed by each matrix in~(\\ref{minz}).\n\nNow consider matrices in~$\\mathbf{R_{2\\times3}}$ which cannot be in standard form by virtue of having top row entries which are `too small' but nevertheless which still have the rows in descending order, viz:\n\\begin{equation}\n\\label{minzoneup}\n\\left( \\begin{array}{ccc}a & b & f \\\\c & d & e\\end{array}\\right),\n\\left( \\begin{array}{ccc}a & c & f \\\\b & d & e\\end{array}\\right),\n\\left( \\begin{array}{ccc}a & d & e \\\\b & c & f\\end{array}\\right),\n\\left( \\begin{array}{ccc}a & d & f \\\\b & c & e\\end{array}\\right),\\textrm{ and }\n\\left( \\begin{array}{ccc}a & e & f \\\\b & c & d\\end{array}\\right).\n\\end{equation}\nOnce again, by permuting the bottom row of each we obtain five new matrices: again a total of $30$ matrices split into five groups of $6$, indexed by each matrix in (\\ref{minzoneup}).\nThis completes our basic categorization of the subsets of matrices in~$\\mathbf{R_{2\\times3}}$. \n\nHere are a few results from~\\cite{me} which help us to classify the relations between the~$\\mathbf{R_{2\\times3}}$ classes. Call two matrices $M = \\left( \\begin{array}{ccc}a & p & q\\\\r & s & t\\end{array}\\right),\\ \\ N = \\left( \\begin{array}{ccc}a & x & y\\\\u & v & w\\end{array}\\right)$ \\it lexicographically ordered \\rm if the pair of row vectors $(a,p,q,r,s,t)$ and $(a,x,y,u,v,w)$ is so ordered (ie the word ``apqrst'' would precede the word ``axyuvw'' in an English dictionary).\n\n\\begin{lemma}\nWe may order the matrices in $\\mathbf{R_{2\\times3}}$ lexicographically, and majorisation respects that ordering.\\qed\n\\end{lemma}\n\nThat is to say, if $M$ lies above $N$ lexicographically then $N$ \\bf cannot \\rm majorise $M$. Note that this is \\bf not \\rm the case for the relation $\\rhd$. \n\n\\begin{remark}\nWe have set out this ordering explicitly in appendix~\\ref{frog}. 
We shall sometimes refer to matrix classes in~$\\mathbf{R_{2\\times3}}$ by these numbers: when we do so, they will appear in \\bf bold \\it figures as per the appendix. Equally we may refer to them by a right coset from $K\\backslash G$, a representative of each of which is also tabulated in appendix~\\ref{frog}.\n\\end{remark}\n\n\\begin{lemma}\\label{treecritter}\nFix any matrix $M=\\left( \\begin{array}{ccc}a & x & y\\\\u & v & w\\end{array}\\right) \\in\\mathbf{R_{2\\times3}}$ with the additional requirement that $u>v>w$. Permuting the elements of the bottom row under the action of the symmetric group $\\Sym{3}$ we have the following majorisation relations:\n\\begin{equation}\n\\begin{array}{ccccccc}\\label{flea}\n\\left( \\begin{array}{ccc}a & x & y\\\\u & v & w\\end{array}\\right) & \\succ &\n\\begin{array}{c}\\left( \\begin{array}{ccc}a & x & y\\\\v & u & w\\end{array}\\right)\\\\\n\\left( \\begin{array}{ccc}a & x & y\\\\u & w & v\\end{array}\\right) \\end{array} & \\succ &\n\\begin{array}{c}\\left( \\begin{array}{ccc}a & x & y\\\\w & u & v\\end{array}\\right)\\\\\n\\left( \\begin{array}{ccc}a & x & y\\\\v & w & u\\end{array}\\right) \\end{array} & \\succ &\n\\left( \\begin{array}{ccc}a & x & y\\\\w & v & u\\end{array}\\right)\n\\end{array}\\end{equation}\nThere are no \\rm a priori \\it majorisation relations within the two vertical pairs, with the exception of the instance~$\\mathbf{46}\\succ\\mathbf{47}$ in corollary~\\ref{cannotprove}. 
\\qed\n\\end{lemma}\n\nNote that the rightmost matrix in~(\\ref{flea}) corresponds to multiplication by the permutation $\\varpi=(m_{21},m_{23})$ of the matrix $M=(m_{ij})$, that is: \n$\\left( \\begin{array}{ccc}a & x & y\\\\w & v & u\\end{array}\\right) = \\varpi\\left(\\left( \\begin{array}{ccc}a & x & y\\\\u & v & w\\end{array}\\right)\\right).$\nBy proposition~\\ref{majmcmaj} the fact that $A$ majorises $B$ implies that $I(A)<I(B)$.\n\nConsider now the \\emph{vertical} transpositions: any such transposition $\\tau$, which swaps a pair of entries $\\alpha>\\beta$ lying in the same column, may be represented in the form $M = \\left( \\begin{array}{ccc}\\alpha & x & y\\\\\\beta & u & v\\end{array}\\right)$ being acted upon by switching the places of~$\\alpha$ and~$\\beta$. Furthermore there exists a matrix in the CMI-equivalence class of $M^\\tau$ such that we may write $M^\\tau = \\left( \\begin{array}{ccc}\\alpha & u & v\\\\\\beta & x & y\\end{array}\\right)$. Now since CMI is invariant in particular if we swap columns 2 and 3, it is evident (by possibly interchanging $M$ and $M^\\tau$) that we may stipulate that $x>u,v,y$. So having chosen $\\alpha,\\beta$ our choice of $x$ is fixed. The remaining 3 letters will then have 6 possible orderings, of which exactly 4 satisfy either $u<y$ or $v<y$. Now $M\\succ M^\\tau$ requires precisely that $x+y>u+v$ (recall that we are only interested in row sums here since the column sums are fixed, and note that $M^\\tau\\succ M$ is not possible since $x+y$ cannot be \\it a priori \\rm less than $u+v$). But $x$ is greater than both $u$ and $v$, hence \\it a priori \\rm majorisation will occur if and only if either $u<y$ or $v<y$. So each of the $\\binom{6}{2}=15$ possible pairs $\\{\\alpha,\\beta\\}$ with $\\alpha>\\beta$ yields exactly 4 majorisation relations, giving a total of 60 arising from vertical transpositions.\n\nTurning to the horizontal transpositions, they may be written in the form $M = \\left( \\begin{array}{ccc}\\alpha & \\beta & x\\\\y & u & v\\end{array}\\right)\\mapsto \\left( \\begin{array}{ccc}\\beta & \\alpha & x\\\\ y & u & v\\end{array}\\right) = M^\\tau$. 
Notice first of all that our calculations of CMI differentials will be independent of the rightmost column $\\binom{x}{v}$ and so we may regard this as a majorisation comparison between vectors $(\\alpha+y,\\beta+u)$ and $(\\alpha+u,\\beta+y)$, which reduces to a contest between $u$ and $y$. We see that $M\\succ M^\\tau$ if and only if $y>u$ (note that this is the same relation as if we interchanged $\\alpha$ with $y$ and $\\beta$ with $u$, so to avoid counting twice we stipulate that $x>v$). \nEach of the $\\binom{6}{2}=15$ possible pairs $\\{\\alpha,\\beta\\}$ with $\\alpha>\\beta$ gives us $\\binom{4}{2}=6$ relations where both $x>v$ and $y>u$, yielding\na total of $90$ majorisation relations arising from horizontal transpositions.\n\nIt is clear from the definitions that the respective sets of diagonal, vertical and horizontal majorisation relations are mutually exclusive. Moreover the property of being diagonal\/vertical\/horizontal is invariant under the equivalence relations used to construct the right cosets in $\\mathbf{R_{2\\times3}}$. Once we have theorem~\\ref{cripes} below we shall have proven the following (the second part is easy to check using a program like SAGE).\n\n\n\\begin{proposition}\\label{crikey}\nThere is a total of 165 distinct (strict) majorisation relations arising exclusively from transpositions between $\\mathbf{R_{2\\times3}}$ matrix classes. This comprises 15 from the diagonal transpositions, 60 from the vertical and 90 from the horizontal. By taking the transitive reduction of the directed graph on 60 nodes whose edges are the 165 majorisation relations just described, we find that 30 of them are redundant and so there are only 135 covering relations in this set. 
\\qed\n\\end{proposition}\n\n\\begin{figure}\n\\begin{center}\n\\mbox{\n\\subfigure{\\includegraphics[width=7in,height=5in]{GR2.pdf}}\n}\n\\caption{Graph GR2 of covering relations between column sum coordinates}\\label{GR2}\n\\end{center}\n\\end{figure}\n\n\\begin{figure}\n\\begin{center}\n\\mbox{\n\\subfigure{\\includegraphics[width=7in,height=5in]{GR3.pdf}}\n}\n\\caption{Graph GR3 of covering relations between row sum coordinates: note this is the Bruhat poset $S_6^{(3)}$ of figure 2.7 of~\\cite{bjorner}}\\label{GR3}\n\\end{center}\n\\end{figure}\n\n\\begin{figure}[h!btp]\n\\begin{center}\n\\mbox{\n\\subfigure{\\includegraphics[width=4in,height=4in,keepaspectratio]{plotGR2hexagons.pdf}}\n}\n\\caption{2-simplex $\\Sigma_\\mathbf{c}$ of possible ordered triples of values in GR2, with the hexagon $\\mathbf{H}(M)$ of a matrix $M$ with $\\mathbf{c}(M)=(0.6,0.3,0.1)$}\n\\label{herpes}\n\\end{center}\n\\end{figure}\n\n\nThe two figures GR2 and GR3 shown on pages~\\pageref{GR2} and~\\pageref{GR3} depict schematically all of the possible column sums (respectively row sums) formed from the 6 probabilities $a,b,c,d,e,f$ in the rows and columns of the matrices in $\\cal{M}_{\\rm 2\\times3}$. It is apparent after a bit of thought that there is a 1-1 correspondence between matrices in $\\cal{M}_{\\rm 2\\times3}$ and `compatible' pairs~$\\{\\mathbf{c}(M),\\ \\mathbf{r}(M)\\}$ where~$\\mathbf{c}(M)$ is a vector of~3 mutually exclusive entries from~GR2 (the column sums), and~$\\mathbf{r}(M)$ is a vector of~2 mutually exclusive entries from~GR3 (the row sums), and where we mean by `compatible' that the chosen column sums \\it can \\rm coexist in a matrix with the chosen row sums. 
Moreover two matrices $M,N$ are in the same class in $\\mathbf{R_{2\\times3}}$ if and only if there are permutations $\\sigma\\in\\mathbf{S_3},\\ \\tau\\in\\mathbf{S_2}$ such that $\\mathbf{c}(M)=\\mathbf{c}(N)^{\\sigma}$ and $\\mathbf{r}(M)=\\mathbf{r}(N)^{\\tau}$, where we view the actions of the groups as usual as simply permuting the coordinates of the vectors.\n\nThe arrows in~GR2 and~GR3 all indicate a `covering' relation between sums of probabilities: in other words if there is an arrow from a quantity~$X$ to a quantity~$Y$ then~$X>Y$ for \\emph{every} matrix in~$\\mathfrak{M}_6$ and there is no quantity~$Z$ (within the possible column or row sums respectively) such that~$X>Z>Y$ for \\emph{every} possible choice of matrix in~$\\mathfrak{M}_6$.\n\n\\begin{proposition}\\label{mipelice}\nLet $M,N\\in\\cal{M}_{\\rm 2\\times3}$. Then \n$M$ majorises $N$ \nif and only if:\n\n(i) each one of the coordinates of $\\mathbf{r}(N)$ lies on a (directed) path in GR3 joining some pair of coordinates of~$\\mathbf{r}(M)$; \\emph{and}\n\n(ii) each one of the coordinates of $\\mathbf{c}(N)$ lies on a (directed) path in GR2 joining some pair of coordinates of~$\\mathbf{c}(M)$.\n\\end{proposition}\n\n\\begin{proof}\nLet $\\vec{x},\\vec{y}\\in\\mathbb{R}^n$. By a well-known result on majorisation~\\cite[4.C.1]{marshall} we know that $\\vec{x}\\succ\\vec{y}\\ $~if and only if $\\vec{y}$ lies in the convex hull of the $n!$ points formed by all of the permutations of the coordinates of $\\vec{x}$. In our situation the column sum vectors are all elements of the 2-simplex~$\\Sigma_\\mathbf{c}=\\{(x,y,z)\\in\\mathbb{R}^3:\\ x+y+z=1;\\ x,y,z\\geq0\\}$, whose vertices are the units on the axes $(1,0,0),(0,1,0)$ and $(0,0,1)$. 
Similarly the row sum vectors lie inside a 1-simplex $\\Sigma_\\mathbf{r}=\\{(u,v)\\in\\mathbb{R}^2:\\ u+v=1;\\ u,v\\geq0\\}$ with endpoints~$(1,0)$ and~$(0,1)$.\n\nEach matrix $M\\in\\cal{M}_{\\rm 2\\times3}$ gives us a column sum vector $\\mathbf{c}(M)$ which in turn gives (via the permutations of its coordinates under the action of $\\mathbf{S}_3$) a suite of six points whose convex hull is a closed, irregular, possibly degenerate hexagon $\\mathbf{H}(M)$ lying entirely inside the closed simplex $\\Sigma_\\mathbf{c}$ (see figure~\\ref{herpes}), whose individual coordinates are all nodes of~GR2. Similarly $M$ gives us a row sum vector $\\mathbf{r}(M)$ whose convex hull is (under the action of $\\mathbf{S}_2$) the line segment $\\mathbf{L}(M)\\subseteq\\Sigma_\\mathbf{r}$ and whose endpoints are $\\mathbf{r}(M)$ and $\\mathbf{r}(M)^\\ast$, the image of $\\mathbf{r}(M)$ under transposing the $u,v$ coordinates: themselves nodes of~GR3. \nNow let $N$ be any other matrix in~$\\cal{M}_{\\rm 2\\times3}$. By definition~\\ref{critta}, the hypothesis that $M\\succ N$ is the same as saying that $\\mathbf{r}(M)\\succ\\mathbf{r}(N)$ and $\\mathbf{c}(M)\\succ\\mathbf{c}(N)$ which from above is equivalent to saying that $\\mathbf{c}(N)\\in\\mathbf{H}(M)$ and that $\\mathbf{r}(N)\\in\\mathbf{L}(M)$. We remark that each vector in~$\\mathfrak{D}_6$ will give us a different hexagon and a different line segment for this same~$M$, hence the point is to show that these statements are true for every choice of vector.\n\nSo what we need to show is that $\\mathbf{r}(N)\\in\\mathbf{L}(M)$ if and only if each one of the coordinates of $\\mathbf{r}(N)$ lies on a (directed) path in GR3 joining some pair of coordinates of~$\\mathbf{r}(M)$, and that $\\mathbf{c}(N)\\in\\mathbf{H}(M)$ if and only if each one of the coordinates of $\\mathbf{c}(N)$ lies on a (directed) path in GR2 joining some pair of coordinates of~$\\mathbf{c}(M)$. 
But the arrows in~GR2 and~GR3 represent order relations between real numbers which hold for all choices of vector in~$\\mathfrak{D}_6$. So the result follows from the definitions of~$\\mathbf{L}(M)$ and~$\\mathbf{H}(M)$.\\end{proof}\n\n\n\n\n\n\n\\begin{remark}\nIt is clear from the foregoing that~$\\left[\\mathbf{H}(N)\\subseteq\\mathbf{H}(M){\\rm\\ and\\ }\\mathbf{L}(N)\\subseteq\\mathbf{L}(M)\\right]$ if and only if $M\\succ N$.\n\\end{remark}\n\n\\newpage\n\\subsubsection{The case of more than one transposition}\n\n\\begin{corollary}\\label{cannotprove}\nLet $M,N\\in\\cal{M}_{\\rm 2\\times3}$. Suppose that $M\\succ N$ but that each element of $\\widehat{M}$ is separated from every element of $\\widehat{N}$ by a product of at least $n\\geq2$ transpositions. Then with just \\bf two\\it\\ exceptions, there is an intermediate matrix class $\\widehat{L}$ separated from $\\widehat{M}$ by a single transposition and from $\\widehat{N}$ by $(n-1)$ transpositions, such that $M\\succ L\\succ N$.\n\nThe exceptions are $(34,47)$ and $(46,47)$, namely:\n\n$$\\left( \\begin{array}{ccc}a & c & e\\\\d & f & b\\end{array}\\right)\\succ\\left( \\begin{array}{ccc}a & d & e\\\\f & b & c\\end{array}\\right){\\rm\\ and\\ }\\left( \\begin{array}{ccc}a & d & e\\\\c & f & b\\end{array}\\right)\\succ\\left( \\begin{array}{ccc}a & d & e\\\\f & b & c\\end{array}\\right).$$\n\nBoth of these `exceptional' covering relations factorise once the finer relation $\\rhd$ is introduced: that is to say they are no longer covering relations in $\\mathfrak{E}$. 
The factorisation paths are as follows:\n$$\\mathbf{34}\\succ\\mathbf{53}\\rhd\\mathbf{47}{\\rm\\ and\\ }\\mathbf{46}\\rhd\\mathbf{34}\\succ\\mathbf{53}\\rhd\\mathbf{47}.$$\n\\end{corollary}\n\n\\begin{proof}\n(The matrices referred to in this proof are reproduced in appendix~\\ref{mxs}).\n\nConstruct (by hand, or in a simple computer program) two matrices $\\mathbf{M_2}$ and $\\mathbf{M_3}$ representing the transitive reductions of the partial orders in GR2 and GR3. Since GR2 has 15 nodes and 20 directed edges and GR3 has 20 nodes and 30 directed edges we obtain a $15\\times15$-matrix with 20 non-zero entries for $\\mathbf{M_2}$, and a $20\\times20$-matrix with 30 non-zero entries for $\\mathbf{M_3}$. In order to simplify things for a moment, let us speak only of column sums. Recall by proposition~\\ref{mipelice} that all possible (column sum) majorisation relations for matrices $M,N$ will show up as each coordinate of $\\mathbf{c}(N)=(n_1,n_2,n_3)$ lying on some directed path between two coordinates of $\\mathbf{c}(M)=(m_1,m_2,m_3)$. But this is the same as saying that for each $j=1,2,3$, there exist distinct $k,l\\in\\{1,2,3\\}$ such that some power $\\mathbf{M_2}^p$ of the matrix $\\mathbf{M_2}$ contains a non-zero entry at $(m_k,n_j)$ and another power $\\mathbf{M_2}^q$ contains a non-zero entry at $(n_j,m_l)$. So if we form the sum (in reality a finite sum since $\\mathbf{M_2}$ is nilpotent; but note that we need the identity matrix since the quantities are $\\geq$ themselves):\n$$\\mathbf{\\overline{M}_2}=\\sum_{p=0}^{\\infty}\\mathbf{M_2}^p$$\nwe need only check the respective entries $(m_k,n_j)$ and $(n_j,m_l)$ of $\\mathbf{\\overline{M}_2}$ for $j=1,2,3$ to find whether such $k,l$ exist; if so then $\\mathbf{c}(M)\\succ\\mathbf{c}(N)$. Similarly we form\n$$\\mathbf{\\overline{M}_3}=\\sum_{p=0}^{\\infty}\\mathbf{M_3}^p$$\nand perform an identical procedure (with only two entries of course this time) to check for row sum majorisation. 
If we find non-zero entries for row and column sums in all $5=2+3$ cases then we must have $M\\succ N$.\n\nIf we now look at the adjacency matrix afforded by this procedure (where we put a 1 in position $(i,j)$ if and only if matrix $i$ is found to majorise matrix $j$ under this test) then we produce a $60\\times60$-matrix with 423 non-zero entries. Its transitive reduction $\\mathbf{T}$ has 134 non-zero entries.\n\nIf on the other hand we generate the $60\\times60$ adjacency matrix of the directed graph produced by the methods of proposition~\\ref{crikey} (that is to say, only using single transpositions) and take its powers we find a matrix with 421 non-zero entries, with a transitive reduction $\\mathbf{T'}$ computed by SAGE to contain 135 entries.\n\nNow if we subtract the second of these two matrices from the first we find that $\\mathbf{U}=\\mathbf{T}-\\mathbf{T'}$ has just 5 non-zero entries as follows (recall that we use the lexicographic ordering on the $\\mathbf{R_{2\\times3}}$ matrix classes to index these adjacency matrices):\n$$\\mathbf{U}_{34,47}=+1;\\ \\mathbf{U}_{44,47}=-1;\\ \\mathbf{U}_{45,47}=-1;\\ \\mathbf{U}_{46,47}=+1;\\ \\mathbf{U}_{46,48}=-1,$$\nwhich is precisely what is expected if we introduce the two exceptional relations mentioned in the statement of the corollary into the relations in $\\mathbf{T'}$. (See appendix~\\ref{stasi}).\n\\end{proof}\n\nSo all but two complicated majorisation relations will decompose into smaller majorisation relations arising from single transpositions. Indeed modulo~$\\rhd$ proposition~\\ref{crikey} tells the whole story of majorisation as promised in the outline of the proof of theorem~\\ref{analalg}. One might hope that such a benign situation would also be the case for the relation~$\\rhd$ in this~$2\\times 3$ case: and indeed, there are again very few exceptions (we can prove that there are at least two, and possibly up to five). 
In order to establish the structure of the poset~$\\mathfrak{E}$, we need to establish necessary and sufficient conditions for the occurrence of a relation~$\\rhd$ between matrix equivalence classes which are related by a single transposition, and then as we have just done with majorisation, establish which are the exceptions.\n\n\n\\subsection{The entropic relation $\\rhd$ in $\\mathbf{R_{2\\times3}}$: necessary and sufficient conditions}\n\nWe are able to obtain quite a dense partial ordering of the $60$ matrix classes in $\\mathbf{R_{2\\times3}}$ on the basis of the entropic partial order relation $\\rhd$. Indeed almost one half of the possible pairs of distinct matrix classes are (conjecturally in the case of~4 pairs - see theorem~\\ref{proveit}) related to one another: we obtain 830 relations out of a possible $\\binom{60}{2}=1770$. The transitive reduction of these 830 yields 186 covering relations, as we shall show below. In the last section we found necessary and sufficient conditions for the majority of these relations which arise through `horizontal' and `vertical' transpositions and the consequent \\emph{majorisation} which occurs. As per proposition~\\ref{majmcmaj}, the notions of majorisation and the entropic partial order relation~$\\rhd$ are the same thing in these cases: so only the `diagonal' transpositions remain to be studied.\n\nHere we develop necessary and sufficient conditions for the relation $\\rhd$ to obtain in the case of a single \\emph{diagonal} transposition. 
Given any probability matrix in the form $M^\\sigma = \\left( \\begin{array}{ccc}\\alpha & x & y\\\\u & \\beta & v\\end{array}\\right)$, for some $M\\in\\mathfrak{M}_6$, a diagonal transposition $\\tau=(\\alpha,\\beta),\\ \\alpha>\\beta$ takes this to $M^{\\tau\\sigma} = \\left( \\begin{array}{ccc}\\beta & x & y\\\\u & \\alpha & v\\end{array}\\right)$ representing a class of CMI-invariant matrices of which one is $\\left( \\begin{array}{ccc}\\alpha & u & v\\\\x & \\beta & y\\end{array}\\right)$. Now by possibly interchanging the classes of $M^\\sigma$ and $M^{\\tau\\sigma}$ it is clear that we may require that $x>u$. Since we are examining only binary relations between pairs of matrices we are able to require that the pairs be ordered like this for the purposes of checking whether $\\sigma\\rhd\\sigma\\tau$ or $\\sigma\\tau\\rhd\\sigma$. (Note once again that we do \\bf NOT \\rm assume that $x>y$ here, nor that $\\alpha=a$ as we are not in general working with matrices in the form in $\\mathbf{R_{2\\times3}}$).\n\n\\begin{theorem}\\label{cripes}\nLet $M^\\sigma = \\left( \\begin{array}{ccc}\\alpha & x & y\\\\u & \\beta & v\\end{array}\\right)$ be a matrix as above with $\\alpha>\\beta$ and $x>u$. For $\\tau=(\\alpha,\\beta)$:\n\n\\rm\\bf Type A:\\ \\ \\ \\ \\ \\it$\\sigma\\tau\\rhd \\sigma\\ \\Leftrightarrow\\ v>y.$\n\\vspace{2mm}\n\nMoreover we have a stronger relation as a sub-class of this (`type A majorisation')\n$$\\sigma\\tau\\succ \\sigma\\ \\Leftrightarrow\\ v>x>u>y\\ .$$\n\n\nConversely,\n\n\\vspace{2mm}\n\n\\rm\\bf Type B:\\ \\ \\ \\ \\ \\it\n$\\sigma\\rhd \\sigma\\tau\\ \\Leftrightarrow\\ y>x>u>v.$\n\\end{theorem}\n\n\\begin{proof}\nIt is convenient to divide the single-transposition entropic cases into two types as we have done in the statement of the theorem, which we shall henceforth refer to as type~A and type~B. 
Type~A is where in the above notation we are able to say that~$\\mu_H(r_\\alpha^\\tau,r_\\alpha)\\mu_H(c_\\alpha^\\tau,c_\\alpha)<\\mu_H(r_\\beta,r_\\beta^\\tau)\\mu_H(c_\\beta,c_\\beta^\\tau)$ for all matrices~$M\\in\\mathfrak{M}_6$, which is the same as saying that~$I(M^\\sigma)>I(M^{\\tau\\sigma})$ for all~$M\\in\\mathfrak{M}_6$: that is, that~$\\sigma\\tau\\rhd\\sigma$. Type~B is exactly the opposite set of inequalities.\n\n\\begin{figure}[h!btp]\n\\begin{center}\n\\mbox{\n\\subfigure{\\includegraphics[width=4in,height=4in,keepaspectratio]{diamondslide.pdf}}\n}\n\\end{center}\\caption{All fixed relations between the quantities~$r_\\alpha,\\ r_\\beta,\\ c_\\alpha,\\ c_\\beta,\\ r_\\alpha^\\tau,\\ r_\\beta^\\tau,\\ c_\\alpha^\\tau,\\ c_\\beta^\\tau$ in the $2\\times n$ case assuming always that~$\\alpha>\\beta$. The additional dashed lines complete the picture for the case~$n=3$ where we assume in addition that~$x>u$.}\\label{diamonds}\n\\end{figure}\n\nRecall the quantities~$r_\\alpha,\\ r_\\beta$ etc.~from proposition~\\ref{titrate} and consider the matrix~(\\ref{matricks}) for the special case where~$m=2$ (and~$n$ is any integer). We let~$\\alpha,\\beta$ represent any two of the~$p_{ij}$ which are in different rows and in different columns from one another. The assumption that~$\\alpha>\\beta$ implies the relations in figure~\\ref{diamonds}, where a solid downward arrow from~$k$ to~$l$ indicates that~$k>l$.\n\n\nIn our case~$r_\\beta=\\beta+u+v,\\ \\ c_\\beta=\\beta+x,\\ \\ r_\\alpha^\\tau=\\beta+x+y,\\ \\ c_\\alpha^\\tau=\\beta+u$. Since the hypotheses of the theorem include the requirement that~$u<x$, the dashed relations of figure~\\ref{diamonds} hold as well. Moreover, substituting the expressions above we see that\n$$\\left(v>y\\right)\\Longleftrightarrow\\left(r_\\beta+c_\\beta>r_\\alpha^\\tau+c_\\alpha^\\tau\\right),$$\nso by assuming $v>y$ we shall have fulfilled the hypotheses of the second part of proposition~\\ref{titrate}.\nHence the condition given for type~A is sufficient. That~$v>y$ is also a necessary condition will follow from the results on type~B which we are about to prove.
We remark that since type~A and type~B are mutually exclusive, it also follows that $v<y$ is a necessary condition for type~B. Bearing in mind the standing assumption that $x>u$, the letters $u,v,x,y$ may occur in any one of the following $\\frac{4!}{2}=12$ configurations:\n\\begin{enumerate}\\renewcommand{\\labelenumi}{\\Roman{enumi}.}\n\\item{$y>x>v>u$}\n\\item{$y>x>u>v$}\n\\item{$y>v>x>u$}\n\\item{$x>y>v>u$}\n\\item{$x>y>u>v$}\n\\item{$x>v>y>u$}\n\\item{$x>v>u>y$}\n\\item{$x>u>y>v$}\n\\item{$x>u>v>y$}\n\\item{$v>y>x>u$}\n\\item{$v>x>y>u$}\n\\item{$v>x>u>y$}\n\\end{enumerate}\n\nWe should point out here that any of the above inequalities may be relaxed to~$\\geq$: of course if~$\\alpha$ or~$\\beta$ (or any other variable) should happen to be in-between two values of~$u,v,x,y$ which are equal then they shall also be forced to be equal to their neighbours - but this does not affect any of the arguments below. However because of this we shall need to prove strict violations of inequalities (that is, if we are trying to prove a contradiction to some expression~$f>g$ then we shall need to provide an example where actually~$f\\lneq g$).\n\n\nOne sees straight away that cases~VI, VII, IX, X, XI and~XII are all of type~A, since~$v>y$. We now proceed to show that case~II is the only type~B and that the remaining cases (I, III, IV, V and~VIII) are neither type~A nor type~B.\nWe first claim that\n\\begin{equation}\\label{finally}\nr_\\beta c_\\beta < r_\\alpha^\\tau c_\\alpha^\\tau\n\\end{equation}\nis a necessary and sufficient condition for type~B. Consider once again the fundamental expression~(\\ref{wing}).\nRecalling that~$c_\\alpha^\\tau$ is the smallest of the four terms,~(\\ref{finally}) implies that we must have $r_\\beta<r_\\alpha^\\tau$. Moreover, expanding the products in~(\\ref{finally}) and cancelling common terms shows that~(\\ref{finally}) is equivalent to\n\\begin{equation}\\label{wally}\nv\\,c_\\beta < y\\,c_\\alpha^\\tau\\ ,\n\\end{equation}\nand the reverse inequality to~(\\ref{wally}) may be written as\n\\begin{equation}\\label{endlich}\n\\frac{c_\\beta}{c_\\alpha^\\tau}>\\frac{y}{v}\\ .\n\\end{equation}\nSince~$y>v$ is a necessary condition for type~B as observed above, we shall have proven the necessity of~(\\ref{finally}) for type~B if we can prove the following:\n\\begin{claim}\nIf \\rm(\\ref{endlich}) \\it holds then $y\\leq v$.\n\\end{claim}\n\nFor suppose to the contrary that we have some matrix $M\\in\\mathfrak{M}_6$ satisfying both~(\\ref{endlich}) and $y>v$, so in particular we must be in one of the situations~I, II, III, IV, V or~VIII above.
In probability distributions of type I, II, III and~VIII we may set $u=x$ and so $y=v$, a contradiction. In~IV and~V we may set~$y=x$ and~$u=v$ and since we can always construct an example where~$\\beta>0$, we have $ux+\\beta u>ux+\\beta x$ which is a contradiction since $x>u$. This proves the claim.\n\nSince (\\ref{wally}) is equivalent to~(\\ref{finally}), to complete the proof of the theorem for type~B it only remains to show that~(\\ref{wally}) is equivalent to the condition~II, namely~$y>x>u>v$. Now~II certainly implies~(\\ref{wally}), so we just need to prove that~(\\ref{wally}) implies~II.\nOur hypotheses include the assumption that~$x>u$ so it is enough to show that~$y>x$ and~$u>v$.\nRecall that we are still in one of the cases~I, II, III, IV, V or VIII, because $v>y$ would produce an immediate contradiction to~(\\ref{wally}) since $x>u$.\nSuppose that~$v\\gneq u$ (ie forcing us into cases~I, III and~IV): then setting~$v=y$ we see that~(\\ref{wally}) reduces to~$vx<uv$, a contradiction since $x>u$. So~$u>v$ as required. Similarly suppose that~$x\\gneq y$ (ie cases~V and~VIII): then again setting $v=y$, the inequality~(\\ref{wally}) contradicts~$x>u$. So $y>x$, completing the picture that condition~II is a necessary and sufficient condition for type~B.\n\nWe now prove that the condition~$v>y$ is necessary for type~A.\nSuppose to the contrary that we have type~A but that $y>v$. By figure~\\ref{diamonds} we know that $c_\\beta<c_\\beta^\\tau$, while the assumption $y>v$ (together with $u<x$) implies $r_\\beta<r_\\alpha^\\tau$. Now the type~A assumption states precisely that\n$$\\frac{\\mu_H(r_\\beta,r_\\beta^\\tau)\\ \\mu_H(c_\\beta,c_\\beta^\\tau)}{\\mu_H(r_\\alpha^\\tau,r_\\alpha)\\ \\mu_H(c_\\alpha^\\tau,c_\\alpha)}>1,$$\nand so the lemma implies that $r_\\beta c_\\beta>r_\\alpha^\\tau c_\\alpha^\\tau$ which we know from above is equivalent to~(\\ref{endlich}). But the claim above showed that this cannot hold under the assumption that $y>v$, yielding the desired contradiction.\n\nThis completes the proof of the central assertions of the theorem.
It remains to show that if majorisation occurs for a diagonal transposition then it must be in the situation of condition~XII, and conversely that in the sub-class of type~A where~$v>x>u>y$ in fact we have majorisation.\nThe latter follows immediately on substituting these relations into~$M^\\sigma$ and~$M^{\\tau\\sigma}$.\nConversely, consider the column sums: since~$\\alpha>\\beta$ and~$x>u$ by hypothesis it follows that~$x+\\alpha>u+\\alpha>u+\\beta$ and~$x+\\alpha>x+\\beta>u+\\beta$, hence the columns of~$M^{\\tau\\sigma}$ must always majorise those of~$M^\\sigma$. In particular this rules out `type~B majorisation'. So the only type of majorisation which is possible in this diagonal transposition setup is type~A. Suppose then that~$M^{\\tau\\sigma}\\succ M^\\sigma$. By considering the row sums this time we see that~$\\alpha+u+v>\\alpha+x+y$, ie~$u+v>x+y$. Since~$x>u$ we must have that~$v>x$ and~$u>y$ (to see this, consider once again the diagram~GR2 on page~\\pageref{GR2}). So we may conclude that a necessary condition for type~A majorisation is that~$v>x>u>y$. So we have proven the claim about majorisation.\n\\end{proof}\n\n\n\\begin{corollary}\\label{yep}\nGiven a probability distribution $a>b>c>d>e>f$ as above, for any ordered pair $\\alpha>\\beta$ chosen from $\\{a,b,c,d,e,f\\}$ there exist precisely $7$ diagonal entropic relations, of which exactly one is moreover a majorisation relation. Since there are $\\binom{6}{2}=15$ such ordered pairs, there exist exactly $105$ diagonal entropic relations between the matrices in $\\mathbf{R_{2\\times3}}$ arising solely from transpositions. Furthermore 90 of these CANNOT be derived by majorisation considerations.\n\\end{corollary}\n\n\\begin{proof}\nGiven any one of the 15 possible pairs $(\\alpha,\\beta)$ with $\\alpha>\\beta$: exactly one of the $\\frac{4!}{2}=12$ configurations of the remaining letters $u,v,x,y$ (remembering always that $u<x$) is of type~B (condition~II, $y>x>u>v$); and exactly six are of type~A, ie satisfy $v>y$, of which one further satisfies $v>x>u>y$.
This means of course that 5 of the remaining configurations satisfy neither type~A nor type~B.\n\\end{proof}\n\nWe now deal with the situation when there is more than one transposition.\n\n\\subsection{The case of more than one transposition: the `sporadic 5'}\n\n\\subsubsection{Definition of the `sporadic 5' and proof of two of the relations}\n\nRecall that a relation $x>y$ in a partial order is called a {\\it covering relation\\rm} if no $z\\neq x,y$ may be found such that $x>z>y$.\n\n\\begin{theorem}\\label{proveit}\nThere are at least~2 and at most~5 covering relations between equivalence classes in~$\\mathbf{R_{2\\times3}}$ which arise exclusively from products of two or more transpositions. They are:\n\\begin{eqnarray*}\n\\mathbf{15}\\rhd\\mathbf{10}:\\ \\ \\left( \\begin{array}{ccc}a & b & e\\\\d & c & f\\end{array}\\right) & \\rhd &\\left( \\begin{array}{ccc}a & b & d\\\\e & f & c\\end{array}\\right)\\ \\ \\hbox{(proven below)}\\\\\n\\mathbf{26}\\rhd\\mathbf{10}:\\ \\ \\left( \\begin{array}{ccc}a & c & d\\\\b & f & e\\end{array}\\right) & \\rhd &\\left( \\begin{array}{ccc}a & b & d\\\\e & f & c\\end{array}\\right)\\ \\ \\hbox{(proven below)}\\\\\n\\mathbf{37}\\rhd\\mathbf{11}:\\ \\ \\left( \\begin{array}{ccc}a & c & f\\\\b & d & e\\end{array}\\right) & \\rhd &\\left( \\begin{array}{ccc}a & b & d\\\\f & c & e\\end{array}\\right)\\ \\ \\hbox{(conjectured below)}\\\\\n\\mathbf{43}\\rhd\\mathbf{11}:\\ \\ \\left( \\begin{array}{ccc}a & d & e\\\\b & c & f\\end{array}\\right) & \\rhd &\\left( \\begin{array}{ccc}a & b & d\\\\f & c & e\\end{array}\\right)\\ \\ \\hbox{(conjectured below)}\\\\\n\\mathbf{49}\\rhd\\mathbf{11}:\\ \\ \\left( \\begin{array}{ccc}a & d & f\\\\b & c & e\\end{array}\\right) & \\rhd &\\left( \\begin{array}{ccc}a & b & d\\\\f & c & e\\end{array}\\right)\\ \\ \\hbox{(conjectured below)}.\n\\end{eqnarray*}\n\\end{theorem}\n\nWe shall require an $n$-dimensional analogue of lemma~5 of~\\cite{me}.\n\n\\begin{lemma}\\label{moa}\nLet $\\vec{v}=(v_i),\\ 
\\vec{w}=(w_i)$ be two vectors in $\\mathbb{R}^n$ with non-negative entries and suppose that $\\vec{v}\\succ\\vec{w}$. Let $\\phi$ be any strictly log-concave function defined on $\\mathbb{R}^+$.\nThen\n$$\\prod_{i=1}^n\\phi(v_i) < \\prod_{j=1}^n\\phi(w_j).$$\n\\end{lemma}\n\\begin{proof}\nBy chapter 3, E.1 of~\\cite{marshall} the product of $\\phi$ on the components is strictly Schur-concave.\n\\end{proof}\n\n\n\\begin{proof}[Proof (of the theorem)]\nA general rule similar to that in theorem~\\ref{cripes} for cases where matrix classes are related by two transpositions seems to be very difficult to formulate. To avoid having to formulate one we first invoke the following empirical result. We wrote a Matlab program which shows by counterexample that pairs \\emph{not} related by a sequence of covering relations arising from proposition~\\ref{crikey}, theorem~\\ref{cripes} and\/or the above list of five have \\emph{no}~$\\rhd$ relations between them. It never seems to require more than $10^6$ randomly chosen probability vectors (using Matlab's \\emph{rand(1,6)} function, with no modification other than normalisation) to find a counterexample in any given instance - usually of course far fewer are needed. So it remains to prove that the two relations above indeed do hold, and we shall be done.\n\nWe remark that the first and second relations, and the third and fifth relations, are each pairs of relations which are images of one another under the automorphism~$\\xi_\\omega$ (see appendix~\\ref{transco}). So our proof that~$\\mathbf{26}\\rhd\\mathbf{10}$ actually points us to a kind of `mirror image' proof of the relation~$\\mathbf{15}\\rhd\\mathbf{10}$; and we would expect similarly for~$\\mathbf{37}\\rhd\\mathbf{11}$ and~$\\mathbf{49}\\rhd\\mathbf{11}$.\n\nWe first show that~$\\mathbf{26}\\rhd\\mathbf{10}$.
We have to prove that\n$$H(a+e)+H(b+f)+H(c+d)+H(a+b+d)+H(c+e+f) \\geq H(a+b)+H(c+f)+H(d+e)+H(a+c+d)+H(b+e+f),$$\nwhich using the same technique as in~(\\ref{slump}) we may rewrite as\n\\begin{equation}\\label{smog}\n(b-c)\\log\\frac{\\mu_H^{(b-c)}(a+c)\\ \\mu_H^{(b-c)}(c+e+f)}{\\mu_H^{(b-c)}(a+c+d)\\ \\mu_H^{(b-c)}(c+f)}+(a-d)\\log\\frac{\\mu_H^{(a-d)}(c+d)}{\\mu_H^{(a-d)}(d+e)}\\geq0,\\end{equation}\nwhere we have written~$\\mu_H^t(x)$ for $\\mu_H(x,x+t)$. We have added in a ``dummy'' factor~$H(a+c)$ and then taken it out again, which has enabled us effectively to `factorise' the path from~$\\mathbf{26}$ to~$\\mathbf{10}$ via the matrix class~$\\mathbf{8}$.\nThe monotonicity in~$x$ of~$\\mu_H^t(x)$ for fixed~$t$ (lemma~\\ref{lollipop}(i)) shows that the second term is always~$>0$ (indeed this is simply the expression which shows directly that~$\\mathbf{8}\\rhd\\mathbf{10}$); so since~$a>b>c>d$ the left-hand side of~(\\ref{smog}) is greater than or equal to\n\\begin{equation}\\label{ta}\n(b-c)\\log\\frac{\\mu_H^{(b-c)}(a+c)\\ \\mu_H^{(b-c)}(c+e+f)\\ \\mu_H^{(a-d)}(c+d)}{\\mu_H^{(b-c)}(a+c+d)\\ \\mu_H^{(b-c)}(c+f)\\ \\mu_H^{(a-d)}(d+e)},\n\\end{equation}\nand once again by lemma~\\ref{lollipop}(i) we know that $\\mu_H^{(b-c)}(a+c)\\geq\\mu_H^{(b-c)}(a+d)$ so~(\\ref{ta}) is in turn greater than or equal to the following expression:\n\\begin{equation}\\label{simmo}\n(b-c)\\log\\frac{\\mu_H^{(b-c)}(a+d)\\ \\mu_H^{(b-c)}(c+e+f)\\ \\mu_H^{(a-d)}(c+d)}{\\mu_H^{(b-c)}(a+c+d)\\ \\mu_H^{(b-c)}(c+f)\\ \\mu_H^{(a-d)}(d+e)},\n\\end{equation}\nwhich has the added symmetry that the sum of the arguments of the various $\\mu_H$'s in the numerator equals the sum of the arguments in the denominator.
\nSo we may compare these vectors of arguments and we find that\n\\begin{equation}\\label{majo}\n(a+c+d,\\ c+f,\\ d+e)\\ \\succ\\ (a+d,\\ c+d,\\ c+e+f),\n\\end{equation}\nsince in $\\mathbb{R}^3$ a necessary and sufficient condition that a vector $\\vec{v}$ majorise $\\vec{w}$ is that $\\vec{v}$ contain the overall maximum of all~6 components of $\\vec{v},\\vec{w}$ (in this case $a+c+d$) as well as the overall minimum (in this case either $c+f$ or $d+e$). Since $b>c$ we shall be done if we can show that the argument of the logarithm in~(\\ref{simmo}) is~$\\geq1$.\n\nWe now claim that\n\\begin{equation}\\label{moo}\n\\frac{\\mu_H^{(b-c)}(a+d)\\mu_H^{(b-c)}(c+e+f)\\mu_H^{(a-d)}(c+d)}{\\mu_H^{(b-c)}(a+c+d)\\mu_H^{(b-c)}(c+f)\\mu_H^{(a-d)}(d+e)}>1.\n\\end{equation}\n\nFirst we note that \n$$\\frac{\\partial^2}{\\partial x^2}\\left(\\log(\\mu_H(x,x+t))\\right) = -\\frac{1}{tx+x^2}$$\nwhich proves that $\\mu_H(x,x+t)$ is strictly log-concave in $x$ for fixed $t$. So if the terms $\\mu_H^t(X)$ in~(\\ref{moo}) all had their $t$-terms equal then~(\\ref{majo}) would give us our result, by lemma~\\ref{moa}. The strategy therefore is to replace the rightmost top and bottom terms in~(\\ref{moo}) respectively by terms of the form $\\mu_H^{(b-c)}(X+(c-e))$ and $\\mu_H^{(b-c)}(X)$ whose ratio is less than or equal to $\\frac{\\mu_H^{(a-d)}(c+d)}{\\mu_H^{(a-d)}(d+e)}$: provided that the corresponding majorisation relation still holds then we shall have finished. Note that by lemma~\\ref{lollipop}(v)\n$$\\frac{\\mu_H^{(b-c)}(c+d)}{\\mu_H^{(b-c)}(d+e)}\\ >\\ \\frac{\\mu_H^{(a-d)}(c+d)}{\\mu_H^{(a-d)}(d+e)},$$\nwhile part~(vi) tells us that for any $\\epsilon\\in(0,1-b-d)$:\n$$\\frac{\\mu_H^{(b-c)}(c+d)}{\\mu_H^{(b-c)}(d+e)}\\ >\\ \\frac{\\mu_H^{(b-c)}(c+d+\\epsilon)}{\\mu_H^{(b-c)}(d+e+\\epsilon)};$$\nthat is to say, increasing the arguments of the numerator and denominator by the same amount will \\emph{decrease} the value of the expression. 
So we know that such an~$X$, if it exists, must be greater than~$d+e$.\nHowever we cannot increase the arguments so as to disrupt the majorisation relation~(\\ref{majo}), which means that the maximum value of the new argument in the numerator cannot be greater than~$a+c+d$, which in turn translates into the value of~$X$ being less than~$a+d+e$. (Note that the \\emph{minimum} in~(\\ref{majo}) will not be violated because~$c+f$ is still a component of the vector of arguments of the denominator). So again using lemma~\\ref{lollipop}(vi) we see by continuity that such an~$X$ must exist provided we can prove that\n$$\\frac{\\mu_H^{(b-c)}(a+d+e+(c-e))}{\\mu_H^{(b-c)}(a+d+e)}<\\frac{\\mu_H^{(a-d)}(c+d)}{\\mu_H^{(a-d)}(d+e)}.$$\nNow the internality of the identric mean~\\cite{bullen} guarantees that~$\\mu_H^{(a-d)}(d+e)<\\mu_H^{(b-c)}(a+d+e)$ and that~$\\mu_H^{(a-d)}(c+d)<\\mu_H^{(b-c)}(a+c+d)$ which together with lemma~\\ref{lollipop}(i) gives us the following ordering:\n$$\\mu_H^{(a-d)}(d+e)<\\{\\mu_H^{(a-d)}(c+d),\\ \\mu_H^{(b-c)}(a+d+e)\\}<\\mu_H^{(b-c)}(a+c+d).$$\nSo if we can show that the sum of the central two terms exceeds that of the outer two terms then by lemma~4 of~\\cite{me} we shall be done (alternatively, apply lemma~\\ref{moa} to the function $\\phi(x)=x$). 
This is equivalent to showing that\n\\begin{equation}\\label{slopes}\n\\mu_H^{(a-d)}(c+d)-\\mu_H^{(a-d)}(d+e)\\ >\\ \\mu_H^{(b-c)}(a+c+d)-\\mu_H^{(b-c)}(a+d+e).\n\\end{equation}\nBut the difference between the pairs of arguments on both sides is the same value $(c-e)$, so this becomes a question about the relative steepness of $\\mu_H^{(b-c)}$ and $\\mu_H^{(a-d)}$.\nWe know that $\\mu_H^t(x)$ itself is strictly concave in~$x$ by lemma~\\ref{lollipop}(ii), so we may define new Lagrangian means $\\mathfrak{m}=\\mu_{\\mu_H^{(a-d)}}^{(c-e)}(d+e)$ and $\\mathfrak{M}=\\mu_{\\mu_H^{(b-c)}}^{(c-e)}(a+d+e)$ which by the internality of the Lagrangian mean~\\cite{bullen} VI.2.2 satisfy $\\mathfrak{m}<\\mathfrak{M}$. Denote by~$\\mu_H^{t\\ '}(\\xi)$ the slightly more awkward expression~$\\frac{\\partial}{\\partial x}\\left(\\mu_H^t(x)\\right)\\mid_{x=\\xi}$.\nDividing~(\\ref{slopes}) through by a factor of $(c-e)$ we obtain\n$$\\mu_H^{(a-d)\\ '}(\\mathfrak{m}) > \\mu_H^{(b-c)\\ '}(\\mathfrak{M}),$$\nwhich is what we now must prove.\nBut using lemma~\\ref{lollipop} once again:\n$$\\mu_H^{(a-d)\\ '}(\\mathfrak{m}) > \\mu_H^{(b-c)\\ '}(\\mathfrak{m}) > \\mu_H^{(b-c)\\ '}(\\mathfrak{M}),$$\nwhere the first inequality is from part~(iv) and the second from part~(ii). This completes the proof that~$\\mathbf{26}\\rhd\\mathbf{10}$.\n\nTo prove that~$\\mathbf{15}\\rhd\\mathbf{10}$ we need only mimic the above proof replacing each probability $a,b,c,d,e,f$ by its respective image~$f,e,d,c,b,a$ under the obvious linear extension of~$\\xi_\\omega$ and then reversing all the signs. With a little care, the proof goes through exactly as above; we shall just mention the key points. 
One word of warning: using our abbreviated notation~$\\mu_H^t(x)$ for~$\\mu_H(x,x+t)$ can be a little confusing because the image under~$\\xi_\\omega$ will be~$\\mu_H^{-\\xi_\\omega(t)}(\\xi_\\omega(x+t))$.\n\nThe equivalent of~(\\ref{ta}) will be:\n\\begin{equation}\\label{tata}\n(d-e)\\log\\frac{\\mu_H^{(d-e)}(a+e)\\ \\mu_H^{(d-e)}(c+e+f)\\ \\mu_H^{(c-f)}(b+f)}{\\mu_H^{(d-e)}(a+b+e)\\ \\mu_H^{(d-e)}(e+f)\\ \\mu_H^{(c-f)}(d+f)},\n\\end{equation}\nand our corresponding move to obtain something in the form of~(\\ref{simmo}), with comparable vectors of arguments on the top and the bottom, is to add~$c-d$ to the argument of~$\\mu_H^{(d-e)}(e+f)$, giving us finally the following expression which we must show is always~$\\geq1$:\n\\begin{equation}\\label{somme}\n\\frac{\\mu_H^{(d-e)}(a+e)\\ \\mu_H^{(d-e)}(c+e+f)\\ \\mu_H^{(c-f)}(b+f)}{\\mu_H^{(d-e)}(a+b+e)\\ \\mu_H^{(d-e)}(e+f+(c-d))\\ \\mu_H^{(c-f)}(d+f)}.\n\\end{equation}\nThe remainder of the proof now proceeds in an identical fashion to that for~$\\mathbf{26}\\rhd\\mathbf{10}$: we show the existence of an $X\\in(d+f,\\ a+d+e)$ such that\n$$\\frac{\\mu_H^{(d-e)}(X+b-d)}{\\mu_H^{(d-e)}(X)} = \\frac{\\mu_H^{(c-f)}(b+f)}{\\mu_H^{(c-f)}(d+f)}$$\nby showing using Lagrangian means, that \n$$\\frac{\\mu_H^{(d-e)}(a+d+e+(b-d))}{\\mu_H^{(d-e)}(a+d+e)} < \\frac{\\mu_H^{(c-f)}(b+f)}{\\mu_H^{(c-f)}(d+f)} < \\frac{\\mu_H^{(d-e)}(b+f)}{\\mu_H^{(d-e)}(d+f)},$$\nthereby squeezing the desired value between two points on the curve of the monotonically decreasing function $\\frac{\\mu_H^{(d-e)}(x+(b-d))}{\\mu_H^{(d-e)}(x)}$.\nWe then use lemma~\\ref{moa} to relate that to the original question. 
\\end{proof}\n\n\\subsubsection{The three conjectural sporadics}\n\nUnfortunately I have been unable to prove $\\mathbf{37}\\rhd\\mathbf{11}$, $\\mathbf{43}\\rhd\\mathbf{11}$ and $\\mathbf{49}\\rhd\\mathbf{11}$: the structure of these three is markedly different from the ones we have just proven, and does not seem to yield to any similar techniques. So we may merely state the following conjecture:\n\n\\begin{conjecture}\\label{ohno}\nIn the above notation,\n\\begin{eqnarray*}\n\\mathbf{37}\\rhd\\mathbf{11}:\\ \\ \\left( \\begin{array}{ccc}a & c & f\\\\b & d & e\\end{array}\\right) & \\rhd &\\left( \\begin{array}{ccc}a & b & d\\\\f & c & e\\end{array}\\right)\\\\\n\\mathbf{43}\\rhd\\mathbf{11}:\\ \\ \\left( \\begin{array}{ccc}a & d & e\\\\b & c & f\\end{array}\\right) & \\rhd &\\left( \\begin{array}{ccc}a & b & d\\\\f & c & e\\end{array}\\right)\\\\\n\\mathbf{49}\\rhd\\mathbf{11}:\\ \\ \\left( \\begin{array}{ccc}a & d & f\\\\b & c & e\\end{array}\\right) & \\rhd &\\left( \\begin{array}{ccc}a & b & d\\\\f & c & e\\end{array}\\right).\n\\end{eqnarray*}\n\\end{conjecture}\n\nAs mentioned in the introduction we shall collectively refer to the above three relations together with the corollary relation~$\\mathbf{31}\\rhd\\mathbf{11}$ as~$\\mathbf{C4}$. Also recall the definition of the \\emph{binary entropy function}~$h(x)=H(x)+H(1-x)$.\nOne fascinating result of our numerical work - which to some extent highlights the unusual nature of these four relations - is that if we simply substitute~$h$ for~$H$ then we obtain a partial order which shares all~826 relations which we have proven for~$H$, together with seven other relations, but the~$\\mathbf{C4}$ are broken as may easily be shown by example. Moreover they are broken around fifty percent of the time. So somehow the extra symmetry of the binary entropy function, as opposed to the simple entropy function, wipes out precisely these four relations. 
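Numerical experiments of the kind just described are easy to reproduce. The sketch below is a hypothetical Python stand-in for the Matlab sampling described above, taking $H(x)=-x\log x$; it samples random ordered distributions and compares the sums of $H$ over the row and column sums of two classes. For the proven relation $\mathbf{26}\rhd\mathbf{10}$ the inequality holds on every sample, whereas substituting another function for $H$ can, as noted above, break selected relations.

```python
import math
import random

def H(x):
    # the summand entropy function H(x) = -x log x, with H(0) = 0
    return 0.0 if x == 0 else -x * math.log(x)

def hsum(matrix):
    # sum of H over the row sums and the column sums of a 2x3 matrix
    rows = [sum(r) for r in matrix]
    cols = [sum(c) for c in zip(*matrix)]
    return sum(H(s) for s in rows + cols)

random.seed(0)
for _ in range(10000):
    p = sorted((random.random() for _ in range(6)), reverse=True)
    t = sum(p)
    a, b, c, d, e, f = (x / t for x in p)   # a > b > c > d > e > f
    m26 = [[a, c, d], [b, f, e]]
    m10 = [[a, b, d], [e, f, c]]
    # 26 |> 10 amounts to hsum(m10) >= hsum(m26) for every distribution
    assert hsum(m10) >= hsum(m26) - 1e-12
```

To test a candidate relation one simply swaps in the two matrix classes; a single failed sample disproves it.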
One might hope that such a schism in behaviour would point the way to a proof of the conjecture above, although I have been unable to find one: in particular because $h$ does not lend itself to analysis by the methods of this paper (for example, lemma~\\ref{lollipop} (ii) fails for $h$). There are many other functions which also break the~$\\mathbf{C4}$ exclusively out of the~830, including all quadratics: one way to prove the above conjecture would be to show that entropy lies on a continuous manifold of functions well within the family of functions which respect all~830 relations. However such a proof also seems very difficult because of the convoluted nature of the Lagrangian mean functions which are involved (indeed they are often only piecewise defined).\n\n\\subsubsection{Summary of the partial order structure}\n\nSo we have a total of $262=165+2+90+5$ relations which in some sense are `primitive': there are no duplicates and the list exhausts all possibilities, bearing in mind that three of these are conjectural. In fact as we mentioned in the proof just now, it is easy to check that any pair {\\bf not} included in the relations obtained by viewing these~262 as a (nilpotent) adjacency matrix~$A_{\\mathbf{R_{2\\times3}}}$ and then looking at all the powers of~$A_{\\mathbf{R_{2\\times3}}}$, is not able to be a relation by constructing a few simple random samples say on Matlab. Taking the transitive reduction of this larger graph the overall number of covering edges reduces to~186, made up of~115 majorisation relations and~71 pure entropic relations. That is to say, the process of taking the transitive reduction of~$A_{\\mathbf{R_{2\\times3}}}$ factorises~50 relations from the majorisation side and~24 from the entropic side. 
As mentioned at the beginning of this section these primitive relations give rise to a total of between~826 and~830 relations overall.\n\nThis completes the proof of the analytic side of theorem~\\ref{analalg}, once we note that the density of the partial order~$\\mathfrak{E}$ is given by a number between $\\frac{826}{1770}$ and $\\frac{830}{1770}$, that is approximately~0.47 as claimed. It remains to outline the algebraic structure of~$\\mathfrak{E}$ in the next chapter. We conclude this chapter with a curious fact about the entropic relations.\n\n\n\\subsubsection{An aside: strange factorisations in the no-man's land between majorisation and $\\rhd$}\n\nWe remark on a phenomenon which arises in the interplay between majorisation and the relation $\\rhd$ which perhaps is a clue to delineating the kind of `majorisation versus disorder' behaviour which Partovi explores in~\\cite{partovi}.\n\nAdding the `sporadic' entropic relations from theorem~\\ref{proveit} to the~90 `pure entropic' relations from corollary~\\ref{yep} we obtain a maximal total of 95 relations which are NOT achievable through majorisation. It turns out that the transitive reduction of the (somewhat artificial) graph on 60 nodes whose edges are these 95 relations in fact is identical to the original graph. That is to say, all 95 are covering relations when we consider only the pure entropic relations (ie no majorisation). 
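Transitive reductions like the ones counted above take only a few lines to compute. The sketch below (illustrative Python; the paper's reductions were computed in SAGE) keeps exactly the covering edges of a DAG by discarding every edge implied by a longer directed path.

```python
def transitive_reduction(n, edges):
    """Return the covering edges of a DAG on nodes 0..n-1."""
    adj = [[False] * n for _ in range(n)]
    for u, v in edges:
        adj[u][v] = True
    # reach[i][j]: there is a directed path of length >= 1 from i to j
    reach = [row[:] for row in adj]
    for k in range(n):                      # Floyd-Warshall style closure
        for i in range(n):
            if reach[i][k]:
                for j in range(n):
                    if reach[k][j]:
                        reach[i][j] = True
    # u -> v is a covering relation iff no node w lies strictly between
    # them on some directed path (impossible to use u -> v itself: a DAG
    # has no cycles)
    return [(u, v) for u, v in edges
            if not any(reach[u][w] and reach[w][v] for w in range(n))]

# the edge 0 -> 2 factors through 0 -> 1 -> 2 and is discarded
assert transitive_reduction(3, [(0, 1), (1, 2), (0, 2)]) == [(0, 1), (1, 2)]
```

Applied to the 95 pure entropic edges this returns all 95; applied to the full set of 262 primitive relations it returns the 186 covering edges described above.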
Curiously however when the majorisation relations are added in, there are many cases where an entropic edge ceases to be a covering relation and factors through a majorisation plus an entropic, so we have the following strange situation for right coset representatives $L,M,N\\in {\\mathbf{R_{2\\times3}}}$:\n$$L\\succ M\\rhd N\\Rightarrow L\\rhd N{\\ \\rm but\\ }L\\not\\succ N {\\rm\\ \\ \\ \\ \\ !!}$$\nAn example of this occurs if we set $L=\\mathbf{31}=\\left( \\begin{array}{ccc}a & c & e\\\\b & d & f\\end{array}\\right)$, $M=\\mathbf{43}=\\left( \\begin{array}{ccc}a & d & e\\\\b & c & f\\end{array}\\right)$ and $N=\\mathbf{10}=\\left( \\begin{array}{ccc}a & b & d\\\\e & f & c\\end{array}\\right)$. Then as is easy to check using the conditions in theorem~\\ref{cripes} and the discussion preceding proposition~\\ref{crikey}, $L\\succ M\\rhd N$ (which implies $L\\rhd N$ by the transitivity of $\\rhd$ and proposition~\\ref{majmcmaj}) but $L\\not\\succ N$.\n\nSimilarly a kind of `inverse' situation also occurs - albeit less frequently - namely\n$$R\\rhd S\\succ T\\Rightarrow R\\rhd T{\\ \\rm but\\ }R\\not\\succ T.$$\nFor completeness we mention an example of this too: take $R=\\mathbf{5}=\\left( \\begin{array}{ccc}a & b & c\\\\f & d & e\\end{array}\\right)$, $S=\\mathbf{11}=\\left( \\begin{array}{ccc}a & b & d\\\\f & c & e\\end{array}\\right)$ and $T=\\mathbf{35}=\\left( \\begin{array}{ccc}a & c & e\\\\f & b & d\\end{array}\\right)$.\n\n\nTo get some insight into this we need to show to what extent the two relations $\\succ$ and $\\rhd$ are the same. Recall from proposition~\\ref{majmcmaj} that majorisation implies $\\rhd$, but not conversely: for when $M\\succ N$ we know from the fact that entropy is a Schur-concave function that $H(\\mathbf{r}(M))<H(\\mathbf{r}(N))$ and $H(\\mathbf{c}(M))<H(\\mathbf{c}(N))$; whereas an entropic relation $M\\rhd N$ need not entail majorisation of both the row sum and the column sum vectors. It does however always entail at least one of the two:\n\n\\begin{proposition}\nIf $M\\rhd N$ for classes $M,N\\in\\mathbf{R_{2\\times3}}$ then $\\mathbf{r}(M)\\succ\\mathbf{r}(N)$ or $\\mathbf{c}(M)\\succ\\mathbf{c}(N)$.\n\\end{proposition}\n\n\\begin{proof}\nConsider first a type~A entropic relation arising from a single diagonal transposition: we saw in the proof of theorem~\\ref{cripes} that the column sums of the dominating class always majorise those of the dominated class, since $\\alpha>\\beta$ and $x>u$. So we are done for all type~A entropic relations which arise from a single transposition.
For type~B we note again from theorem~\\ref{cripes} that a necessary and sufficient condition is $y>x>u>v$, which implies in particular that $x+y>u+v$, which together with $\\alpha>\\beta$ means that the vector of row sums of $M$ must majorise that of $M^\\tau$. \n\nSo it remains to show that the proposition holds for the sporadic~5 relations of theorem~\\ref{proveit}, and that it holds when we compose successive entropic relations. The former is easy to show directly (in each of the five sporadics $M\\rhd N$ it is the case that $\\mathbf{c}(M)\\succ\\mathbf{c}(N)$). That the proposition holds under composition of relations is obvious (by the transitivity of majorisation) when we consider a sequence of two or more type~A relations and\/or sporadic relations; or indeed if we were to consider a sequence consisting only of type~B relations. So the only issue is what happens when we compose a type~B with a sporadic or with a type~A.\n\nThe sporadic relations are easy to deal with: recall from appendix~\\ref{stasi} the~15 type~B relations. Comparing this list with the list of the sporadic instances in theorem~\\ref{proveit} we see that only the following sequences can occur between the two sets:\n$\\mathbf{15}\\rhd\\mathbf{10}\\rhd\\mathbf{60}$, $\\mathbf{15}\\rhd\\mathbf{10}\\rhd\\mathbf{24}$, $\\mathbf{26}\\rhd\\mathbf{10}\\rhd\\mathbf{60}$, and $\\mathbf{26}\\rhd\\mathbf{10}\\rhd\\mathbf{24}$.\nIn particular there are no relations of the form type~B followed by a sporadic.\nConsidering each in turn we are able to show directly (using say the graph~GR2) that the column sum vectors of the left-hand sides always majorise those of the right-hand sides, hence proving the claim. 
Indeed the middle two relations $\\mathbf{15}\\rhd\\mathbf{24}$ and $\\mathbf{26}\\rhd\\mathbf{60}$ actually exhibit full majorisation.\n\nThe claim for the composition of type~B with type~A follows from a similar case-by-case analysis of the instances where they `match up' (ie where we have a type~A relation~$X\\rhd Y$ followed by a type~B relation~$Y\\rhd Z$, and vice-versa), using the matrix of~830 relations referred to above. We omit the details.\n\\end{proof}\n\n\n\n\n\n\n\n\n\\section{A purely algebraic construction of the entropic partial order $\\mathfrak{E}$}\\label{algview}\n\nSo we have our partial order~$\\mathfrak{E}$ which has been defined entirely in terms of the entropy function. In this next section we shall briefly describe a combinatorial or algebraic construction which presupposes nothing about entropy but whose derivation mimics the case-by-case constructions of proposition~\\ref{crikey} and theorem~\\ref{cripes}.\nUnfortunately I have not been able to find a more natural expression for these relations than this: it is tantalisingly close to a closed form but it seems always to be burdened with some `exceptional' relations which must be subtracted, no matter how they are phrased. \n\nWhen we use another strictly convex or strictly concave function~$f$ instead of entropy and define a kind of~$f$-CMI by substituting~$f$ for~$H$ in the definitions, then it is these exceptions which come into play: the coefficients of the summands in~(\\ref{ether}) will change depending upon the curvature properties of~$f$, yielding new partial orders. Indeed by studying the simple family of functions~$\\{\\ \\pm x^p\\ :\\ p\\in\\mathbb{R}\\ \\}$ we are able to construct functions which `tune into' or `tune out of' various components of the partial order~$\\mathfrak{E}$, yielding a phenomenon akin to that of the family of Renyi entropies on vectors which approximates Shannon entropy near~1. 
For example, it is easy to show that the equivalent conditions to theorem~\\ref{cripes} for $f(x)=-x^2$ are that type~A occurs if and only if $v>y$, and type~B occurs if and only if $y>v$; moreover, together with the same majorisation relations as for $f=H$ these generate \\emph{all} of the~1184 relations which hold for this $f$. Perhaps the most curious fact is that, just like the binary entropy function~$h$ defined in the last section, the only relations which are actually \\emph{broken} from~$\\mathfrak{E}$ in going from $H$ to $f$ are those we have called~$\\mathbf{C4}$. As mentioned in the introduction, this is a vast topic for further study.\n\n\nWe now say a word on the process of finding this algebraic description, which to some extent ties in with the statement of theorem~\\ref{analalg}. The `shape' of the group ring elements below was discovered by considering the image of the right coset space~$K\\backslash G$ under some of the outer automorphisms of~$G$: namely those which send~$K$ to a parabolic subgroup. There are six parabolic subgroups which are isomorphic to~$K$: $\\langle(1,2),(2,3),(4,5)\\rangle$, $\\langle(1,2),(2,3),(5,6)\\rangle$, $\\langle(1,2),(3,4),(4,5)\\rangle$, $\\langle(1,2),(4,5),(5,6)\\rangle$, $\\langle(2,3),(3,4),(5,6)\\rangle$, $\\langle(2,3),(4,5),(5,6)\\rangle$, and for each one there exist several outer automorphisms which map $K$ onto it. Choose any such parabolic subgroup~$J$ and a corresponding outer automorphism $\\zeta$. The right coset space~$J\\backslash G$ is isomorphic as a~$G$-set to~$K\\backslash G$. The image under~$\\zeta$ of each matrix class forms a kind of pyramid, with the row- and column-swap equivalences being transformed into equivalences between the positions of a singleton, a pair and a triple of probabilities. 
Relations between these pyramids turn out to be much easier to visualise than those between matrices, and the (almost-) cyclic structure of our group ring element~$\\eta_{\\tau,{\\bf cyc}}$ below was much more apparent in that form. \n\n\n\\subsection{The abstract combinatorial construction}\n\nLet~$G=\\Sym{6}$ be the symmetric group on the set of six elements~$\\{1,2,3,4,5,6\\}$. If~$\\sigma\\in\\Sym{6}$ acts by sending~$i$ to~$\\sigma(i)$ then one way of representing~$\\sigma$ is to write it as the ordered~$6$-tuple $[\\sigma(1),\\sigma(2),\\sigma(3),\\sigma(4),\\sigma(5),\\sigma(6)].$ \nOn the other hand we shall also represent elements of~$G$ in standard cycle notation: as in the rest of the paper, elements are understood to act on the \\bf left \\rm. That is to say for example that the product~$(1,2)(2,3)$ is equal to~$(1,2,3)$ rather than to~$(1,3,2)$. Define~$K$ to be the subgroup of~$\\Sym{6}$ generated by the elements~$(1,4)(2,6)(3,5)$ and~$(1,6,2,4,3,5)$. Then~$K$ is isomorphic to the dihedral group of order~12.\nThe reason for choosing this particular subgroup is that when the vectors are arranged in the~$2\\times3$-matrix form, left multiplication by this subgroup gives exactly the row- and column-swap operations under which~CMI is invariant: this is clearer if we choose the more obvious generators~$(1,2)(4,5),\\ (1,3)(4,6),\\ (1,4)(2,5)(3,6)$.\n\nThe right coset space~$K\\backslash G$ contains~60 elements and may be made into a right module for the action of the group ring~$\\mathbb{Z}[G]$ by taking the free abelian group whose generators are the right cosets~$K\\sigma$ of~$K\\backslash G$. Let~$\\mathbf{1}$ denote the multiplicative identity element of~$\\mathbb{Z}[G]$, which is identified in the usual way with~$1\\cdot1_G$ where~$1_G$ is the identity element of~$G$ and~$1$ represents the integer~1. 
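The stated facts about~$K$ are easy to confirm by machine. The following plain-Python sanity check (an illustrative sketch of ours, not part of the paper's toolchain) builds the group generated by $(1,4)(2,6)(3,5)$ and $(1,6,2,4,3,5)$, using 0-based tuples and the left-action composition convention above, and verifies that it has order~12, coincides with the row- and column-swap group generated by the `more obvious' generators, contains the involution $\omega=(1,6)(2,5)(3,4)$ used later, and contains no single transposition.

```python
# Sanity check: K = <(1,4)(2,6)(3,5), (1,6,2,4,3,5)> is the order-12
# row-/column-swap group of the 2x3 matrix arrangement.
N = 6

def perm(*cycles):
    """Permutation of {1,...,6} given by 1-based cycles, stored as a 0-based tuple."""
    img = list(range(N))
    for cyc in cycles:
        for a, b in zip(cyc, cyc[1:] + cyc[:1]):
            img[a - 1] = b - 1
    return tuple(img)

def compose(f, g):
    # Left action: (f*g)(i) = f(g(i)), so that (1,2)(2,3) = (1,2,3).
    return tuple(f[g[i]] for i in range(N))

def closure(gens):
    """Set of all products of the generators (breadth-first closure)."""
    identity = tuple(range(N))
    group, frontier = {identity}, [identity]
    while frontier:
        new = []
        for h in frontier:
            for g in gens:
                p = compose(g, h)
                if p not in group:
                    group.add(p)
                    new.append(p)
        frontier = new
    return group

K = closure([perm([1, 4], [2, 6], [3, 5]), perm([1, 6, 2, 4, 3, 5])])
K_alt = closure([perm([1, 2], [4, 5]), perm([1, 3], [4, 6]),
                 perm([1, 4], [2, 5], [3, 6])])

assert len(K) == 12 and K == K_alt           # dihedral of order 12; both generating sets agree
assert perm([1, 6], [2, 5], [3, 4]) in K     # the involution omega lies in K
assert all(sum(p[i] != i for i in range(N)) != 2 for p in K)  # no transposition in K
```

The last assertion is the fact used later in the uniqueness argument for the graph automorphism: $K$ contains no single transposition.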
\nLet $\\tau=(\\alpha,\\beta)$ be any transposition in~$G$, and let $\\{r,s,t,u\\}=\\{1,2,3,4,5,6\\}\\setminus\\{\\alpha,\\beta\\}$ represent the four elements left after removing $\\alpha$ and $\\beta$. Assume that we have ordered them so that $r>s>t>u$. Let~$\\psi_\\tau = (r,s),\\ \\chi_\\tau = (s,t)$ and~$\\gamma_\\tau = (\\alpha,u,t)(\\beta,s,r).$\nLet~$\\mu_\\tau$ be~$(\\alpha,\\beta)(r,t)(s,u)$, the unique involution which fixes~$\\gamma_\\tau$ and which interchanges~$(r,s)$ with~$(t,u)$.\nFinally, define~$\\sigma_\\tau$ to be any one of the~12 elements of~$G$ which take~$[1,2,3,4,5,6]$ into the right~$K$-coset of the permutation~$[\\alpha,r,s,\\beta,u,t]$ by right multiplication.\n\nUsing the same notation for group ring elements as for their counterparts in~$G$, with coefficients assumed to be~1 unless otherwise stated, let\n$$\\eta_{\\tau,{\\bf horiz}} = (\\mathbf{1}+\\psi_\\tau)(\\mathbf{1}+\\psi_\\tau^{\\mu_\\tau})(\\mathbf{1}+\\chi_\\tau)-(\\mathbf{1}+\\psi_\\tau\\psi_\\tau^{\\mu_\\tau})\\chi_\\tau,$$\nwhich upon expansion has six terms, and let\n$$\\eta_{\\tau,{\\bf cyc}} = \\sigma_\\tau(\\mathbf{1}+\\gamma_\\tau+\\gamma_\\tau^2)(\\mathbf{1}+\\psi_\\tau)(\\mathbf{1}+\\chi_\\tau)-\\sigma_\\tau\\gamma_\\tau^2\\psi_\\tau\\chi_\\tau,$$\nwhich has eleven terms. 
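The stated term counts (six and eleven) can be checked mechanically. The following plain-Python sketch (illustrative only, not part of the paper) fixes the concrete transposition $\tau=(1,2)$, so that $\alpha=2$, $\beta=1$ and $(r,s,t,u)=(6,5,4,3)$, takes for $\sigma_\tau$ the permutation $[2,6,5,1,3,4]$ itself as coset representative, and expands the two group ring elements over $\mathbb{Z}[G]$.

```python
# Illustrative expansion of eta_{tau,horiz} and eta_{tau,cyc} for tau=(1,2):
# alpha=2, beta=1, (r,s,t,u)=(6,5,4,3).
N = 6

def perm(*cycles):
    img = list(range(N))
    for cyc in cycles:
        for a, b in zip(cyc, cyc[1:] + cyc[:1]):
            img[a - 1] = b - 1
    return tuple(img)

def compose(f, g):          # left action: g is applied first
    return tuple(f[g[i]] for i in range(N))

def rmul(x, y):             # multiplication in the group ring Z[G]
    out = {}
    for gx, cx in x.items():
        for gy, cy in y.items():
            p = compose(gx, gy)
            out[p] = out.get(p, 0) + cx * cy
    return {g: c for g, c in out.items() if c != 0}

def radd(x, y, sign=1):     # x + sign*y in Z[G]
    out = dict(x)
    for g, c in y.items():
        out[g] = out.get(g, 0) + sign * c
    return {g: c for g, c in out.items() if c != 0}

one = {tuple(range(N)): 1}
elt = lambda p: {p: 1}

tau, psi, chi = perm([1, 2]), perm([6, 5]), perm([5, 4])
gamma = perm([2, 3, 4], [1, 5, 6])
mu = perm([1, 2], [6, 4], [5, 3])
psi_mu = compose(compose(mu, psi), mu)            # conjugate of psi by the involution mu
sigma = tuple(i - 1 for i in [2, 6, 5, 1, 3, 4])  # a representative of K[alpha,r,s,beta,u,t]
g2 = compose(gamma, gamma)

horiz = radd(rmul(rmul(radd(one, elt(psi)), radd(one, elt(psi_mu))),
                  radd(one, elt(chi))),
             rmul(radd(one, elt(compose(psi, psi_mu))), elt(chi)), sign=-1)
cyc = radd(rmul(rmul(rmul(elt(sigma), radd(radd(one, elt(gamma)), elt(g2))),
                     radd(one, elt(psi))), radd(one, elt(chi))),
           elt(compose(compose(compose(sigma, g2), psi), chi)), sign=-1)
eta_pos = radd(horiz, cyc)

assert len(horiz) == 6 and len(cyc) == 11 and len(eta_pos) == 17
assert all(c == 1 for c in eta_pos.values())      # the 17 summands all carry coefficient 1
```

The two families of summands are disjoint (the horizontal terms all fix the letters $1$ and $2$, while every cyclic term moves~$1$), which is why the total is exactly $6+11=17$.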
Finally define the group ring element\n\\begin{equation}\\label{ether}\n\\eta_\\tau =\\left(\\eta_{\\tau,{\\bf horiz}}+\\eta_{\\tau,{\\bf cyc}}\\right) (\\tau-1),\n\\end{equation}\nwhich therefore has a total of~17 terms of the form~$\\mathfrak{z}(\\tau-1)$ for some~$\\mathfrak{z}$ representing an element $z\\in G$.\n\n\\begin{definition}\\label{reflepo}\nLet $\\tau$ run over the~15 transpositions in~$G$.\nDefine a binary relation $\\blacktriangleright$ on $K\\backslash\\Sym{6}$ by letting each summand of each $\\eta_\\tau$ of the form $\\mathfrak{z}(\\tau-1)$ represent a relation of the form\n$$K\\mathfrak{z} \\blacktriangleright K\\mathfrak{z}\\tau.$$\nThis yields $15\\times17=255$ binary relations.\n\\end{definition}\n\n\\begin{theorem}\n The transitive closure of the relations~$\\blacktriangleright$ just defined, together with the five \\emph{sporadic} relations of theorem~\\ref{proveit}, is identical to~$\\mathfrak{E}$.\n\\end{theorem}\n\n\\begin{proof}\nTake the relations from definition~\\ref{reflepo} and theorem~\\ref{proveit} and generate their transitive reduction: it is identical to that of~$\\mathfrak{E}$ as per appendix~\\ref{transco}.\n\\end{proof}\n\nWe may define such an element~$\\eta_\\tau$ for any of the~15 transpositions in~$G$; or we could equally well take a starting transposition $\\tau$ arbitrarily and then `navigate' between all of its conjugates by using only \\emph{adjacent} transpositions $\\kappa$ which \\emph{share a common element with $\\tau$}. That is to say, $\\kappa=(\\alpha,\\alpha\\pm1)$ or~$\\kappa=(\\beta\\pm1,\\beta)$, with the possibilities obviously constrained by where~$\\alpha,\\beta$ lie in the set~$\\{1,2,3,4,5,6\\}$. 
Denoting by~$g^\\kappa$ as usual the conjugation of~$g\\in G$ by~$\\kappa$, we then define $\\sigma_\\kappa=\\sigma_\\tau\\kappa$, $\\psi_\\kappa=\\psi_\\tau^\\kappa$, $\\chi_\\kappa=\\chi_\\tau^\\kappa$, $\\gamma_\\kappa=\\gamma_\\tau^\\kappa$, $\\mu_\\kappa=\\mu_\\tau^\\kappa$, and we get the same outcome for~$\\eta_\\tau$ as we would have done with the direct definitions above. \\emph{So it is possible to generate inductively \\emph{all} of the~255 relations from one starting point}. The adjacency and common element conditions for~$\\kappa$ are necessary because they preserve the rigidity of the orderings~$\\alpha^\\kappa>\\beta^\\kappa$ and $r^\\kappa>s^\\kappa>t^\\kappa>u^\\kappa$.\n\nAll of this raises an intriguing question. Does~$\\mathfrak{E}$ correspond to any of the well-known orders on quotients of the symmetric group? The na\\\"ive answer is no: our partial order is `complicated' in the sense that it is not properly graded; many covering relations have length~$\\geq2$ rather than just~$1$, as is the case with the inherited Bruhat orders on the parabolic quotients of the symmetric group from classical Lie algebra theory. So the answer to what~$\\mathfrak{E}$ `is' may lie in the more general framework of \\it generalised Bruhat quotients\\rm~\\cite{bjorner}. \n\n\\subsection{The unique involution $\\xi$ of the entropic partial order $\\mathfrak{E}$}\\label{epoi}\n\nHaving completed the proof of theorem~\\ref{analalg}, it remains just to make some final observations about the internal structure of~$\\mathfrak{E}$ which arise when one considers whether its graph has any symmetry. Consider the `maximal' involution in the Bruhat order~\\cite{bjorner}, which is $\\omega=(1,6)(2,5)(3,4)\\in K$ in the usual cycle notation, and define $\\xi_\\omega\\in\\textbf{Aut}(G)$ to be the unique element of the automorphism group whose action is given by conjugation by $\\omega$. 
We prove here a structure theorem for the graph of the entropic poset $\\mathfrak{E}$ on the elements of $K\\backslash G$.\n\n\\begin{theorem}\\label{karma}\n$\\xi_\\omega$ is the unique automorphism of $G$ which respects the entropic partial order $\\mathfrak{E}$ on $K\\backslash G$.\n\\end{theorem}\nIn other words, $\\xi_\\omega$ induces a graph automorphism of the directed graph on~60 nodes with~186 edges which is conjecturally the graph of covering relations of~$\\mathfrak{E}$. Moreover if we ignore the~3 covering relations contained in the unproven relations~$\\mathbf{C4}$, this involution still induces an automorphism of the graph of the remaining~183 relations. See appendix~\\ref{mxs} for the details of these directed graphs.\n\n\\begin{proof}\n\\it `Analytical': \\rm In theorems~\\ref{cripes} and~\\ref{proveit} we derived from first principles the set of relations which arise only from the binary relation~$\\rhd$. In proposition~\\ref{crikey} (and see also corollary~\\ref{cannotprove}) we explored those relations which arise from majorisation and saw that they are subsumed under the first set. This gave a directed graph on 60 nodes with 262 edges, whose covering relations boil down to~186 edges on the~60 nodes:~$\\mathfrak{E}$ is defined to be the transitive closure of these covering relations. Feeding the adjacency matrix of this graph into the program SAGE (www.sagemath.org) gave us a graph automorphism group~$\\{1,\\kappa\\}$ of order~2, which fixes~16 nodes and acts as an involution on the other~44, splitting them into~22 orbits of~2 matrix classes each. We should also mention that we confirmed the uniqueness of the graph automorphism result using~SAUCY (http:\/\/vlsicad.eecs.umich.edu\/BK\/SAUCY\/).\n\nTo discover to which (if any) automorphism \\emph{of the group~$G$} this graph automorphism~$\\kappa$ might correspond we proceeded as follows. 
The normalizer~$\\mathbf{N}_G(K)$ of~$K$ in~$G$ is just~$K$ itself, and no outer automorphism of~$G$ can fix~$K$: consider for example the row-swap element~$(1,4)(2,5)(3,6)$ which must map under any non-trivial outer automorphism to a single transposition~\\cite[chapter~7]{rotman}. But there are no single transpositions in~$K$. So the only possible candidates to give by conjugation an (inner) automorphism of~$G$ which preserves the structure of~$K\\backslash G$ are the elements of~$K$ itself. Of these only~$\\omega$ respects the binary relation~$\\rhd$ in every instance (we used the computer program~GAP (www.gap-system.org) to check this, using orbit sizes). So in fact~$\\kappa=\\xi_\\omega$ as claimed. \n\n\\it `Algebraic': \\rm once we know the individual relations constructed in definition~\\ref{reflepo} we are also able to verify algebraically that conjugation by~$\\omega$ swaps these relations among themselves modulo equivalence by left multiplication by elements of~$K$, leaving the total structure unaltered.\n\\end{proof}\n\nFinally we make a few comments on why this involution preserves the single-transpositional relations within the partial order, this time from a purely theoretical point of view. That $\\xi_\\omega$ respects majorisation follows from proposition~\\ref{mipelice} and the observation that the action of $\\xi_\\omega$ on figures~\\ref{GR2} and~\\ref{GR3} is to reflect them in a horizontal line passing through the centre of each: hence the property of lying on a path joining two nodes is unaltered by the action of~$\\xi_\\omega$.\nIt is also possible to show directly that~$\\xi_\\omega$ respects~$\\rhd$ relations separated by a single transposition, as follows. 
Given any $g\\in G$, denote by $g^{\\xi_\\omega}$ the image $\\xi_\\omega(g)$ of $g$ under the inner automorphism $\\xi_\\omega$, or in other words $g^{\\xi_\\omega}=\\omega g \\omega^{-1}$.\n\n\\begin{proposition}\\label{respect}\nSuppose $\\sigma\\rhd\\sigma\\tau$ for some transposition $\\tau$. Then\n$$\\sigma^{\\xi_\\omega}\\rhd\\sigma^{\\xi_\\omega}\\tau^{\\xi_\\omega}.$$\n\\end{proposition}\n\n\\begin{proof}\nThe easiest way to approach this is to use again the criteria from theorem~\\ref{cripes} on pairs of matrix classes. For any letter~$z$ in the set of six letters acted upon by~$G$, let us write~$\\bar{z}$ for its image under~$\\omega$: so for example~$\\bar{a}=f$, etc. Since~$\\omega\\in K$ and~$\\omega^{-1}=\\omega$, it follows that the impact of conjugation by~$\\omega$ upon a right coset~$K\\sigma$ is the same as that of right multiplication by~$\\omega$, which in matrix format means we simply replace~$z$ with~$\\bar{z}$ everywhere.\nSo the image under~$\\xi_\\omega$ of the matrix class represented by~$M=\\left( \\begin{array}{ccc}\\alpha & x & y\\\\u & \\beta & v\\end{array}\\right)$ will be\n$$\\bar{M}=\\left( \\begin{array}{ccc}\\bar{\\alpha} & \\bar{x} & \\bar{y}\\\\\\bar{u} & \\bar{\\beta} & \\bar{v}\\end{array}\\right).$$\nNow $\\xi_\\omega$ reverses all size relations and so~$\\alpha>\\beta$,~$x>u$ become~$\\bar{\\alpha}<\\bar{\\beta}$ and~$\\bar{x}<\\bar{u}$. 
\nFurthermore the transposition $\\tau=(\\alpha,\\beta)$ becomes ${\\tau^{\\xi_\\omega}}=(\\bar{\\beta},\\bar{\\alpha})$.\nPutting the matrix~$\\bar{M}$ back into the form required by the hypotheses of theorem~\\ref{cripes} means choosing instead the following representative~$M'$ of the same class as~$\\bar{M}$:\n$$M'=\\left( \\begin{array}{ccc}\\bar{\\beta} & \\bar{u} & \\bar{v}\\\\\\bar{x} & \\bar{\\alpha} & \\bar{y}\\end{array}\\right).$$\nWe need to show that $M\\rhd M^\\tau$ implies that~$M'\\rhd {M'}^{\\tau^{\\xi_\\omega}}$ and that $M^\\tau\\rhd M$ implies~${M'}^{\\tau^{\\xi_\\omega}}\\rhd M'$.\nLooking again at theorem~\\ref{cripes} we see that the necessary and sufficient conditions for type~A and type~B relations give\n$$M^\\tau\\rhd M\\Longleftrightarrow v>y \\Longleftrightarrow \\bar{y}>\\bar{v} \\Longleftrightarrow {M'}^{\\tau^{\\xi_\\omega}}\\rhd M',$$\nand\n$$M\\rhd M^\\tau\\Longleftrightarrow y>x>u>v\\Longleftrightarrow\\bar{v}>\\bar{u}>\\bar{x}>\\bar{y}\\Longleftrightarrow M'\\rhd {M'}^{\\tau^{\\xi_\\omega}}.$$\nThis completes the proof.\n\\end{proof}\n\n\n\\section{Appendices}\n\n\\subsubsection{Introduction and statement of the main theorem}\n\nLet $\\mathbf{p}=(p_i)$ represent any probability vector in $\\mathbb{R}^6$.\nThis paper is concerned with a partial order $\\mathfrak{E}$ among the~720 coordinatewise permutations of $\\mathbf{p}$, based on the Shannon entropy function~$H(x)=-x\\log x,$\nwhich is dependent only upon the ordering of the $p_i$ and not upon their values. It arose originally in the guise of a question in quantum information theory about \\emph{classicality} versus \\emph{quantumness}~\\cite{me}; however, the structure theory turns out to be quite general. Because its natural setting is joint quantum systems, the definition requires that we stipulate `subsystems' of dimensions~2 and~3 and then take the entropy of the marginal probability vectors from $\\mathbf{p}$ with respect to these subsystems. 
This construction brings with it a natural equivalence class structure and so the partial order is in fact defined only upon~60 equivalence classes of these permutations, each of size~12. We summarise this as our main theorem, as follows. Recall that the \\emph{density} of a partial order on a finite set of size~$n$ is defined to be $r\/\\binom{n}{2}$, where~$r$ is the number of relations which appear in the partial order, and~$\\binom{n}{m}$ denotes the binomial coefficient.\n\n\\begin{theorem}\\label{analalg}\nLet $G=\\Sym{6}$ be the symmetric group on six letters and let $J$ be any one of the six parabolic subgroups of~$G$ which are isomorphic to the dihedral group of order~12. There is a partial order~$\\mathfrak{E}$ on the right coset space~$J\\backslash G$ of density~$0.47$ whose analytical description may be given solely in terms of the Shannon entropy function~$H$. Moreover it has a concise independent algebraic description in terms of group ring elements.\n\\end{theorem}\n\nThe proof of this theorem, together with an in-depth analysis of the structure of~$\\mathfrak{E}$, is essentially what constitutes the remainder of the paper. We must mention here that our description of~$\\mathfrak{E}$ is unfortunately incomplete: while we believe that~830 relations constitute~$\\mathfrak{E}$, there are four of these relations, which we shall refer to throughout as~$\\mathbf{C4}$, which we have been unable to prove or disprove analytically, although the numerical evidence for their validity is compelling. So our statements about the partial order must be read with the caveat that there is still a possibility that some or all of the~$\\mathbf{C4}$ are not in fact valid relations. 
However the structure of the remaining~826 relations of the partial order is independent of these four.\n\nSuch a partial order may in fact be described for any function $f$ instead of $H$ provided that certain convexity conditions are met: essentially we obtain a kind of `pseudo-norm' based upon the function $f$ that we choose. A curious consequence is that we may describe a whole suite of functions apparently unconnected to entropy, whose partial orders nevertheless appear numerically to mimic $\\mathfrak{E}$ exactly. At one level this is not very surprising, since the partial order is in some sense merely a discrete approximation to the curvature of the function concerned; hence there will be many different functions whose curvature is sufficiently similar on the appropriate region of space to give the same discrete approximation. But at another level this points to a deeper connection between certain of these functions and discrete entropies: perhaps there is an easier way to model entropy-related phenomena for low-dimensional joint systems than to attack the rather difficult entropy function itself. The space of relatively simple functions which would appear to mimic the entropy function, in this albeit limited context, is remarkably varied. For example, the function~$f(x) = \\cos(\\frac{2\\pi}{3}x)$ seems numerically to give exactly the same partial order as~$H(x)$, despite having a markedly different curvature function; the same is true of the function~$q(x)=(\\alpha x)^3-(\\alpha x)^2$ when~$\\alpha=\\frac{4}{9}$. Moreover any slight variation in the respective coefficients~$\\frac{2\\pi}{3}$ or~$\\alpha$ will `break' the respective partial order. However these functions are not concave on the full interval~$(0,1)$ and so the techniques of this paper will not apply to them. \n\nAs we vary the underlying function $f$, another key question arises as to how the algebraic description needs to be modified in order to reflect the new analytical structure. 
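The failure of concavity of these two mimic functions on $(0,1)$ is easy to confirm numerically; a minimal sketch (ours, for illustration only), using central second differences:

```python
import math

c = 4.0 / 9.0
f = lambda x: math.cos(2.0 * math.pi * x / 3.0)   # the cosine mimic function
q = lambda x: (c * x) ** 3 - (c * x) ** 2         # q(x) with alpha = 4/9

def second_diff(fn, x, h=1e-3):
    # Central second difference: approximately h^2 * fn''(x).
    return fn(x + h) - 2.0 * fn(x) + fn(x - h)

# Both functions are concave near x = 0.2 but convex near x = 0.9
# (each changes convexity at x = 3/4), so neither is concave on all of (0,1).
for fn in (f, q):
    assert second_diff(fn, 0.2) < 0 < second_diff(fn, 0.9)
```

Curiously, a direct computation of the second derivatives shows that both functions change convexity at exactly $x=\frac34$.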
Both the analytic and algebraic approaches are rich topics for further study.\n\nThe constructions here are not specific to the~6-dimensional case; however, dimension~6 gives the first non-trivial partial order and sadly also the last easily tractable one. Even for~$2\\times4$ (which is the next interesting case), numerical studies indicate that the number of separate relations which need to be considered is of the order of~$10^5$; the~$3\\times3$ case yields around~3 million, and for~$2\\times5$ it is of the order of~20 million. Also, only where the dimensions are~$2\\times2$ and~$2\\times3$ are we able to single out a definite permutation which is guaranteed to give the maximal classical mutual information~(CMI) no matter what probability vector is chosen~\\cite{me}: in all other dimensions this grows into a larger and larger set of possibilities. However the constructions of this paper may be extended to any situation where we have joint systems of dimensions $m$ and $n$: for any sufficiently well-behaved function $f$ we obtain a binary relation between certain permutations of the probabilities of the joint system, yielding what may be viewed as a partial order upon (some quotient of) the symmetric group $\\Sym{mn}$ itself. We shall always assume $2\\leq m\\leq n$, for if $m>n$ then the situation is identical, just with the subsystems reversed; if $m=1$ then there is nothing to be said, since every permutation will give the same result, as will be seen from the definitions below.\n\nWe conclude this introductory section with a word on how this partial order arose. Suppose that we have ordered the~$p_i$ so that~$p_1\\geq p_2\\geq p_3\\geq p_4\\geq p_5\\geq p_6$. \nIn~\\cite{me} it was shown that the permutation\n$$(p_1,p_4,p_5,p_6,p_3,p_2)$$\nwill always yield the maximal CMI out of all of the possible permutations given by~$\\Sym{6}$. 
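This maximality claim is easy to spot-check numerically. The sketch below (ours, illustrative only) assumes the row-major reading of the $2\times3$ arrangement, $[[q_1,q_2,q_3],[q_4,q_5,q_6]]$, and the standard mutual information $H(\text{row marginal})+H(\text{column marginal})-H(\mathbf{q})$; it compares the stated permutation against all $720$ arrangements for a few random sorted probability vectors.

```python
import math
import random
from itertools import permutations

def H(v):
    """Shannon entropy of a probability vector (natural log)."""
    return -sum(p * math.log(p) for p in v if p > 0)

def cmi(q):
    """Mutual information of the 2x3 arrangement [[q1,q2,q3],[q4,q5,q6]]."""
    rows = (q[0] + q[1] + q[2], q[3] + q[4] + q[5])
    cols = (q[0] + q[3], q[1] + q[4], q[2] + q[5])
    return H(rows) + H(cols) - H(q)

random.seed(0)
for _ in range(5):
    p = sorted((random.random() for _ in range(6)), reverse=True)
    total = sum(p)
    p = [x / total for x in p]                          # p1 >= p2 >= ... >= p6
    best = max(cmi(q) for q in permutations(p))
    claimed = cmi((p[0], p[3], p[4], p[5], p[2], p[1]))  # (p1,p4,p5,p6,p3,p2)
    assert claimed >= best - 1e-12
```

Note in particular that this arrangement pairs the probabilities into the columns $\{p_1,p_6\}$, $\{p_4,p_3\}$, $\{p_5,p_2\}$, the most balanced column pairing possible.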
This built on work in~\\cite{santerdav} and~\\cite{santerdavPRL} which showed that the minimum CMI of all of these permutations was contained in a set of five possibilities, all of which do in fact occur in different examples. The results on the minima were achieved solely using considerations of majorisation among marginal probability vectors; however, in order to prove maximality it was necessary to invoke a more refined entropic binary relation denoted~$\\rhd$. In exploring this finer ordering we found that it did indeed give rise to a well-defined partial order, which moreover had a neat description in terms of symmetric group elements. So the paper is the result of this exploration.\n\n\\subsubsection{Structure of the proof of the main theorem}\n\nWe now outline how the proof of theorem~\\ref{analalg} will proceed. First of all, however, we need to explain the connection with the parabolic subgroups $J$, since these barely appear elsewhere in the paper. The point is that because $\\Sym{6}$ has a class of non-trivial outer automorphisms~\\cite{rotman}, we are able to study some phenomena via their image under any particular outer automorphism of our choosing: a trick which often makes things much clearer. Let $K$ be the dihedral group corresponding to row and column swaps which we shall define in section~\\ref{qmpo}. As is easy to verify, for any $J$ as described in the theorem there exists at least one outer automorphism mapping $K$ onto $J$, and so any partial order which we may define upon $K\\backslash G$ will also give an isomorphic partial order on $J\\backslash G$, and vice-versa. So we define our partial order in its natural context on the coset space $K\\backslash G$ and then merely translate the result into the more familiar language of parabolic subgroups in the statement of the theorem. Indeed there is no reason, other than the richness of structure which has been investigated for parabolic subgroups, for phrasing it in these terms. 
One could equally well describe the partial order on the quotient of~$G$ by \\emph{any} dihedral subgroup of order~12, for there are two conjugacy classes of subgroups of $G$ which are dihedral of order~12, namely the class containing~$K$ and the class containing the parabolics~$J$, each of size~60, and they are mapped onto one another by the action of the outer automorphisms.\n\nSo the proof of theorem~\\ref{analalg} will go as follows. Once we establish the basic definitions regarding entropy, classical mutual information, majorisation and the entropic binary relation~$\\rhd$, we begin to examine each of them in the context where two permutations differ by right multiplication by just a single transposition: first because this is the simplest case; but secondly because it actually generates all but~5 of the~186 covering relations in the partial order. A general rule for comparing pairs of permutations differing by more than one transposition under the entropic binary relation~$\\rhd$, moreover, seems to be very difficult to find: we are fortunate that only these five \\emph{`sporadic'} relations exist which cannot be generated via some concatenation of single-transposition relations. We establish necessary and sufficient conditions for permutations separated by a single transposition, both for majorisation and for the entropic binary relation~$\\rhd$, noting the result from~\\cite{me} that majorisation implies~$\\rhd$ but not vice-versa. This gives a total of~165 relations arising from majorisation, and~90 relations arising solely from the entropic binary relation~$\\rhd$: a grand total of~255 relations arising from single transpositions. 
The transitive closure of these~255 relations contains~818 relations in total.\n\nOnce this is proven we shall almost have completed our description of~$\\mathfrak{E}$, for numerically it is easy to show that, with the exception of the~12 relations which are generated when the \\emph{sporadic~5} are included, all other possible pairings are precluded by counterexample. So the partial order must have between~818 and~830 relations. With the two proven in theorem~\\ref{proveit}, the transitive closure grows to~826 relations, leaving just the set~$\\mathbf{C4}$ mentioned above. This completes the `analytic' description of~$\\mathfrak{E}$.\n\nIt then remains to prove that~$\\mathfrak{E}$ has a neat description in terms of the group ring $\\mathbb{Z}[G]$. We give an iterative algorithm for constructing the entire web of~255 single-transposition relations referred to above starting from scratch, using simple rules which have no apparent connection to entropy. Of course we would not have `seen' this description had it not been for the analytic work which went before; however, once we know what we are looking for, the entire complex of~255 relations is describable in very straightforward terms. The sporadics, however, must be added to both descriptions: there seems to be no easy way of unifying their structures with the bigger picture.\n\n\\subsubsection{Acknowledgments}\nFirst of all, thank you to Terry Rudolph and the QOLS group at Imperial College for their hospitality. I would also like to thank Peter Cameron, Ian Grojnowski and David Jennings for many helpful conversations.","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
Also, one user has diverse social activities, e.g., post messages, photos and other media on Facebook and upload, view, share and comment on videos on YouTube. According to statistics, almost $500$ TB social data are generated per day. It takes high operational costs to store the data and it is a waste of resources without using them. Hence, we want to conduct the social big data analysis, in which the users active in a social, collaborative context to make sense of data. However, handling such a volume of social data brings us many challenges. We next describe the main challenges and the corresponding approaches to them.\n\n\nThe social data are generated all around the world and collected over distributed sources into different and interconnected data centers. Hence, it is hard to process the data in a centralized model. Concerned with this problem, cloud computing may be a good choice. As is known, many social networking websites (e.g., Facebook, Twitter, LinkedIn and YouTube) obtain computing resources across a network. These corporations host their social networks on a cloud platform. This cloud-based model owns some advantages, chief among which is the lowered costs in infrastructure. They can rent cloud computing services from other third part due to their actual needs and scale up and down at any time without taking additional cost in infrastructure \\cite{dusenbery2012}. Beyond that, they are able to choose different cloud computing services according to the distribution of social data. Naturally, for social data analysis in cloud, a distributed online learning algorithm is needed to handle the massive social data in distributed scenarios \\cite{jiang2014}. Based on cloud computing, we equip each data center with the independent online learning ability and they can exchange information with other data centers across the network. 
Each data center is expected to build a reliable model for recommendations to its local users without directly sharing social data with other centers.\nIn theory, this approach is a distributed optimization technique, and many studies \\cite{duchi2012dual,nedic2009distributed,ram2010distributed} have been devoted to it. To estimate the utility of the proposed model, we use the notion of ``regret'' \\cite{shalev2011} from online learning (see Definition 3).\n\nIn the Big Data era, social big data are both large-scale and high-dimensional. A single person has a variety of social activities in a social network, so the corresponding vector of his\/her social information is ``long''. However, when a data miner studies consumer behavior with respect to one interest, some of the information in the vector may not be relevant. For example, a person's height and age contribute little to predicting his or her taste. Thus, high dimensionality increases the computational complexity of algorithms and weakens the utility of online learning models. To deal with this problem, we introduce a sparse solution for social big data. There are two classical groups of effective methods for sparse online learning \\cite{wang2013large, wang2015, shalev2011b}. The first group (e.g., \\cite{langford2009}) induces sparsity in the weights of online learning algorithms via truncated gradient. The second group follows the dual averaging algorithm \\cite{xiao2010}. In this paper, we will exploit \\emph{online mirror descent} \\cite{duchi2010} and the \\emph{Lasso} ($L_1$ norm) \\cite{tibshirani1996} to make the parameter updated in the algorithm sparse.\n\nFurthermore, exchanging information contained in social data among data centers may lead to privacy breaches as the information flows across the social network. 
Once social data are mined without any security precautions, privacy is highly likely to be divulged.\nAdmittedly, preserving privacy inevitably lowers the performance of knowledge discovery on cloud-based social data.\nTherefore, we intend to design an algorithm which protects privacy while making full use of the social data. Finally, we choose the ``differential privacy'' \\cite{dwork2006} technology to guarantee the safety of data centers in the cloud. At a high level, a differentially private online learning model guarantees that its data mining output does not change ``too much'' because of perturbations (i.e., adding some random noise to the transmitted data) in any individual social data point. That means that whether or not a data point is in the database, the mining outputs are difficult to distinguish, and so the miner cannot obtain sensitive information from the search results. \n\nIn conclusion, we make three contributions: 1) we propose a distributed online learning algorithm to handle decentralized social data in real time and demonstrate its feasibility; 2) sparsity is induced in the computation of high-dimensional social data to enhance the accuracy of predictions; 3) differential privacy is used to protect the privacy of the data without seriously weakening the performance of the online learning algorithm.\n\nThis paper is organized as follows. Section II introduces the system model and proposes the algorithm. The privacy analysis is done in Section III. We analyze the utility of the algorithm in Section IV. Numerical results and performance improvements are shown in Section V. Section VI concludes the paper.\n \n \n \\section{System Model}\nIn this section, the system model and our private online learning algorithm are presented. \n\nConsider a social network in which all online users are served on cloud platforms, e.g., Fig.~1. 
These users operate on their own personal pages, and the generated social data are collected and transmitted to the nearest data center on the cloud; as shown in Fig.1, all data are collected by the data centers marked $A \\to G$. Because of the huge network, many data centers are widely distributed. Each data center has its corresponding cloud computing node, where the nearby social data are processed in real time. As a whole system, the social network should have good knowledge of all the data it owns; thus, data centers should exchange information with each other. Since there are many data centers and most of them are spread around the world, a data center can never communicate with all other centers. To achieve better economic benefits, each data center can only exchange information with neighboring ones (e.g., $D$ is only connected to its adjacent centers $C$ and $G$). Furthermore, random noise should be added to each communication to protect privacy (yellow arrows in Fig.1). Since such social big data need to be efficiently and privately processed with limited communication, we focus on distributed optimization and differential privacy technologies.\n\n\nWe next introduce how the communications among data centers on the cloud are conducted. Recall that we intend to realize knowledge discovery in social data in real time. A parameter, e.g., $w$, is created to denote the online learning parameter (containing the knowledge mined from data). At each iteration, each cloud node updates $w$ based on its local data center and then exchanges $w$ with its neighbors. This communication mechanism forms a network topology. The network topology can be fixed or time-variant, which is shown in Section IV to have no great influence on the utility of our algorithm. \n\n \\subsection{Communication Graph}\n\n For our online learning social network, we denote the communication matrix by $A$ and let $a_{ij}$ be the $(i,j)$-th element of $A$.
In the system, $a_{ij}$ is the weight of the learning parameter which the $i$-th cloud node transmits to the $j$-th one. ${a_{ij}}(t)>0$ means there exists a communication between the $i$-th and $j$-th nodes at round $t$, while ${a_{ij}}(t)=0$ means no communication between them. For a clear description, we denote the communication graph for a node $i$ at round $t$ by \n\\begin{eqnarray}{\\mathcal{G}_i} = \\{ (i,j):{a_{ij}} > 0\\},\\end{eqnarray}\n where $\\notag{a_{ij}}\\in {A}.$\n \n To achieve global convergence, we make some assumptions about $A$.\n \n\n \\begin{figure}[tbp]\n \\label{fig:subfig:a}\n\\includegraphics[width=3.5in]{network.eps}\n\\caption{Private Social Big\nData Computing over Data Center Networks\n} \n\\vspace{-.9em}\n\\end{figure}\n\n\n\n\\textbf{Assumption 1.} For an arbitrary node $i$, there exists a minimal scalar $\\eta$, $0 < \\eta < 1$, such that\n\\begin{itemize}\n\\item[(1)]${a_{ij}} > 0 $ for $(i,j) \\in \\mathcal{G}_i$,\n\\item[(2)] $\\sum\\nolimits_{j = 1}^m {{a_{ij}}} = 1$ and $\\sum\\nolimits_{i = 1}^m {{a_{ij}}} = 1$,\n\\item[(3)] ${a_{ij}} > 0$ implies that ${a_{ij}}\\ge \\eta $.\n\\end{itemize}\n\nHere, Assumptions (1) and (2) state that each node computes a weighted average of neighboring learning parameters. Assumption (3) ensures that the influences among the nodes are significant. \n\nThe above assumption is a necessary condition that appears in all research (e.g., \\cite{duchi2012dual,nedic2009distributed,ram2010distributed}) on distributed optimization. Fortunately, this technology can be used to solve our distributed online learning problem in social networks.\n\n \\subsection{Sparse Online Learning}\n \n As described, social data are high-dimensional. Hence, the corresponding learning parameter $w$ is a long vector. In order to find the factors most related to a given prediction task, we need to aggressively make the irrelevant dimensions zero.
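This zeroing can be done in closed form with the entrywise $L_1$ soft-thresholding operator, which solves the argmin in step 7 of Algorithm 1; a minimal NumPy sketch, with illustrative values of our own:

```python
import numpy as np

def soft_threshold(p, lam):
    """Solve argmin_w 0.5*||p - w||_2^2 + lam*||w||_1, entrywise."""
    return np.sign(p) * np.maximum(np.abs(p) - lam, 0.0)

p = np.array([0.9, -0.05, 0.3, -0.8, 0.02])  # dense parameter vector
w = soft_threshold(p, 0.1)
# entries with magnitude below lam become exactly zero;
# larger entries shrink toward zero by lam
```

Coordinates smaller than the threshold are set exactly to $0$ rather than merely shrunk, which is what makes the resulting parameter sparse.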
Lasso \\cite{tibshirani1996} is a famous method for producing coefficients that are exactly $0$. Since Lasso cannot be directly used in the algorithm, we combine it with online mirror descent (see Algorithm 1), a special online learning algorithm.\n\nFor convenience of analysis, we next make some assumptions about the mathematical model of the online learning system in the social network. We assume there are $m$ data centers over the social network. Each data center collects massive social data every minute and processes them with cloud computing. For the data generated from social networks, we use $x$ to denote the social data of an individual person. $\\widehat y$ (e.g., $\\widehat y = {w^T}x$)\n denotes the prediction for a user, which helps the online website offer the user a satisfying service. Then, the user gives feedback, denoted by $y$, telling the website whether the previous prediction makes sense for him\/her. Finally, using the loss function (e.g., $f\\left( {w,x,y} \\right) = {\\left[ {1 - {w^T}x \\cdot y} \\right]_ + }$), we compare $\\widehat y$ and $y$ to find how many ``mistakes'' the online learning algorithm makes. Summing these ``mistakes'' over time and over the social network, we obtain the regret of the whole system, from which we can assess the performance of our algorithm. \n \n\n\\textbf{Assumption 2.} Let $W$ denote the set of $w$; we assume $W$ and the loss function $f$ satisfy:\n\\begin{itemize}\n\\item[(1)] The set $W$ is a closed and convex subset of $\\mathbb{R}^n$. Let $R \\buildrel \\Delta \\over = \\mathop {\\sup }\\limits_{x,y \\in W} \\left\\| {x - y} \\right\\|$ denote the diameter of $W$.\n\\item[(2)] The loss function $f$ is \\emph{strongly convex} with modulus $\\gamma \\ge 0$.
For all $x,y \\in W$, we have\n\\vspace{-.5em}\\begin{eqnarray}\n\\left\\langle {\\nabla f_t^i,y - x} \\right\\rangle \\le f_t^i(y) - f_t^i(x) - \\frac{\\gamma }{2}{\\left\\| {y - x} \\right\\|^2}.\n\\end{eqnarray}\n\\item[(3)] The subgradients of $f$ are uniformly bounded, i.e., there exists $L > 0$ such that, for all $x \\in W$, we have\n\\vspace{-.5em}\\begin{eqnarray}\\left\\| {\\nabla f_t^i(x)} \\right\\| \\le L.\\end{eqnarray}\n\\end{itemize}\n\nAssumption (1) guarantees that there exists an optimal solution for our algorithm. Assumptions (2) and (3) help us analyze the convergence of our algorithm.\n\n \\subsection{Differential Privacy} Dwork \\cite{dwork2006} first proposed the definition of differential privacy, which enables a data miner to release some statistics of its database without revealing sensitive information about any particular value itself. In this paper, we realize output perturbation by adding random noise denoted by $\\delta$. This noise prevents malicious data miners from stealing sensitive information (e.g., birthday and contact info). Based on the parameters defined above, we give the following definition.\n \n\n\\textbf{Definition 1.} Assume that $\\mathcal{A}$ denotes our differentially private online learning algorithm. Let $\\mathcal{X}=\\left\\langle {x_1,x_2,...,x_T} \\right\\rangle $ be a sequence of social data taken from an arbitrary node's local data center. Let $\\mathcal{\\theta } = \\left\\langle {\\theta _1,\\theta _2,...,\\theta_T} \\right\\rangle $ be a sequence of $T$ results of the node with $\\mathcal{\\theta} =\\mathcal{A}(\\mathcal{X})$.
Then, our algorithm $\\mathcal{A}$ is $\\epsilon$-differentially private if, given any two adjacent data sequences $\\mathcal{X}$ and $\\mathcal{{X'}}$ that differ in one social data entry, the following holds: \n \\begin{eqnarray}\\Pr \\left[ {\\mathcal{A\\left( X \\right)}} \\right] \\le {e^\\epsilon}\\Pr \\left[ {\\mathcal{A\\left( {X'} \\right)}} \\right].\\end{eqnarray} \n\nThis inequality guarantees that whether or not an individual participates in the database will not make any significant difference in the output of our algorithm, so the adversary is not able to gain useful information about the individual person.\n\n\\subsection{Private Distributed Online Learning Algorithm}\nWe present a private distributed online learning algorithm for cloud-based social networks. Specifically, each cloud computing node propagates its parameter, with noise added, to neighboring nodes. After receiving the parameters from others, each node computes a weighted average of the received parameters and its old parameter. Then, each node updates the parameter via general online mirror descent and induces sparsity using Lasso. The algorithm is summarized in Algorithm 1. Note that $w_t^i$ denotes the parameter of the $i$-th cloud node at time $t$.
${\\varphi _{t{\\rm{ = 1,}}...{\\rm{,T}}}}$ are a series of $\\beta$-strongly convex functions.\n\n\\begin{algorithm}\n\\caption{Private Distributed Online Learning}\n\\begin{algorithmic}[1]\n\\STATE \\textbf{Input}: Cost functions $f_t^i(w ): = \\ell (w,x_t^i,y_t^i)$, $i \\in [1,m]$ and $t \\in [1,T]$; doubly stochastic matrix $A = (a_{ij}) \\in {R^{m \\times m}}$; \n\\STATE \\textbf{Initialization:} $\\theta _1^i=0$, $i \\in [1,m]$ \n\\FOR {$t = 1,...,T$ }\n \\FOR{each node $i = 1,...,m$}\n\\STATE receive $x_t^i \\in {\\mathbb{R}^n}$\n\\STATE $p_t^i = \\nabla \\varphi _t^ * \\left( {{\\theta _t^i}} \\right)$\n\\STATE $w_t^i = {{\\mathop{\\rm argmin}\\nolimits} _w}\\frac{1}{2}\\left\\| {p_t^i - w} \\right\\|_2^2 + {\\lambda _t}{\\left\\| w \\right\\|_1}$ \\\\\n\\STATE predict ${\\widehat y}_t^i$\n\\STATE receive $y_t^i$ and obtain $f_t^i(w_t^i ): = \\ell (w_t^i,x_t^i,y_t^i)$\n\\STATE $\\theta_{t + 1}^i = \\sum\\nolimits_j {{a_{ij}}\\widetilde{\\theta} {{_t^j} }} - {\\alpha _t}g_t^i$, where $g_t^i = \\nabla f_t^i(w_t^i)$\n\\STATE broadcast to neighbors: ${\\widetilde{\\theta} _{t + 1}^i} = \\theta _{t + 1}^i + \\delta _t^i$\n\\ENDFOR\n\\ENDFOR\n\\end{algorithmic}\n\\end{algorithm}\n\n\n\n\\section{Privacy Analysis}\nAs mentioned, exploiting differential privacy (DP) protects privacy while guaranteeing the usability of the social data. In step 11 of Algorithm 1, $\\theta $ is the exchanged parameter, to which we add random noise. The added noise perturbs $\\theta $, so others cannot mine individual private information from an exact parameter.
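For concreteness, one round of Algorithm 1 with the quadratic choice ${\varphi _t}\left( w \right) = \frac{1}{2}\left\| w \right\|_2^2$ used later in Theorem 2 (so that $\nabla \varphi _t^ *$ is the identity) can be sketched as follows; the ring topology, hinge-loss subgradient, and all constants are illustrative assumptions of ours, not the paper's experimental setup:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 4, 6                          # data centers, feature dimension
alpha, lam, eps, L = 0.1, 0.05, 0.5, 1.0

# Doubly stochastic ring matrix A (Assumption 1): self plus two neighbours.
A = np.zeros((m, m))
for i in range(m):
    A[i, i] = 0.5
    A[i, (i - 1) % m] = A[i, (i + 1) % m] = 0.25

def soft_threshold(p, lam):
    return np.sign(p) * np.maximum(np.abs(p) - lam, 0.0)

def one_round(theta, X, y):
    w = soft_threshold(theta, lam)   # steps 6-7: identity mirror map + Lasso
    g = np.zeros_like(w)             # steps 8-9: hinge-loss subgradients
    for i in range(m):
        if 1.0 - y[i] * (w[i] @ X[i]) > 0:
            g[i] = -y[i] * X[i]
    # step 11 (of the previous round): neighbours broadcast noisy parameters,
    # with Laplace scale taken from the sensitivity bound of Lemma 1 below
    noisy = theta + rng.laplace(scale=2 * alpha * np.sqrt(n) * L / eps,
                                size=theta.shape)
    return A @ noisy - alpha * g     # step 10: consensus + gradient step

theta = np.zeros((m, n))
X = rng.normal(size=(m, n))
y = rng.choice([-1.0, 1.0], size=m)
theta = one_round(theta, X, y)
```

Row sums of $A$ make each node average its neighbours' noisy parameters; column sums equal to one keep the network-wide average from drifting, which is the property Assumption 1 requires.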
To recall, DP is defined mathematically in Definition 1, which aims at bounding the difference between $\\mathcal A\\left( X \\right)$ and $\\mathcal A\\left( {X'}\\right)$.\nOnly by satisfying inequality (4) can we ensure the privacy of the social data in each data center.\n\n\\subsection{Adding Noise}\nSince we add noise to mask the difference between two datasets differing in at most one point, the sensitivity should be known. Dwork \\cite{dwork2006} proposed that the magnitude of the noise depends on the largest change that a single entry in the data source could have on the output of Algorithm 1; this quantity is referred to as the \\emph{sensitivity} of the algorithm. The sensitivity of Algorithm 1 is defined as follows.\n\n\\textbf{Definition 2 (Sensitivity).} Based on Definition 1, for any $\\mathcal{X}$ and $\\mathcal{{X'}}$, which differ in exactly one entry, we define the sensitivity of Algorithm 1 at the $t$-th round as\n\\begin{align}\n{\\rm{S}}({\\rm{t}}){\\rm{ = }}\\mathop {\\sup }\\limits_{{\\cal X},{\\cal X}'} {\\left\\| {{\\cal A}\\left( {\\cal X} \\right){\\rm{ - }}{\\cal A}\\left( {{\\cal X}'} \\right)} \\right\\|_{\\rm{1}}}.\n\\end{align}\n\n\\textbf{Lemma 1.} Under Assumption 1, if the $L_1$-sensitivity of the parameter $\\theta$ is computed as in (5), we obtain\n \\begin{equation}{\\rm{S}}(t) \\le 2{\\alpha _t\\sqrt n L},\n \\end{equation}\nwhere $n$ denotes the dimensionality of the vectors.\n\\begin{proof} \nIn Algorithm 1, $\\theta$ is the exchanged parameter, to which the noise $ \\delta$ is added.
According to Definition 1, we have \\vspace{-.5em}\\[ \\left\\| {\\mathcal{A\\left( X \\right)}- \\mathcal{A\\left( {X'} \\right)}} \\right\\|_1=\\left\\| {\\theta_t^i -{\\theta_t^i}'} \\right\\|_1.\\]\nAssuming that the only different social data entry comes at time $t$, we have\n\\vspace{-.5em}\\[\\theta _{t + 1}^i = \\sum\\nolimits_j {{a_{ij}}\\widetilde {\\theta _t^i}} - {\\alpha _t}g_t^i\\ and \\ \\\n{\\theta _{t + 1}^i}' = \\sum\\nolimits_j {{a_{ij}}\\widetilde {\\theta _t^i}} - {\\alpha _t}{g_t^i}',\\]\nwhere $ (x_t^i,y_t^i)$ and $ ({x_t^i}',{y_t^i}')$ lead to $g_t^i$ and ${g_t^i}'$ via Steps 9 and 10 in Algorithm 1.\n\nThen, we have\n\\begin{align}\n\\notag&\\left\\| {\\theta_{t+1}^i -{\\theta_{t+1}^i}'} \\right\\|_1\\\\\n\\notag& = \\left\\| { (\\sum\\nolimits_j {{a_{ij}}\\widetilde {\\theta _t^i}}- {\\alpha _t}g_t^i) -( \\sum\\nolimits_j {{a_{ij}}\\widetilde {\\theta _t^i}} - {\\alpha _t}{g_t^i}')}\\right\\|_1\\\\\n\\notag& \\le {\\alpha _t}\\sqrt n \\left( {{{\\left\\| {g_{t }^i} \\right\\|}_2} + {{\\left\\| {g_{t}^i}' \\right\\|}_2}} \\right)\\\\\n& \\le 2{\\alpha _{t}}\\sqrt nL.\n\\end{align}\n\nBy Definition 2, we know \\vspace{-.5em}\\begin{equation}\\notag{\\rm{S}}(t) = \\mathop {\\sup }\\limits_{{\\cal X},{\\cal X}'}\\left\\| {\\theta_t^i -{\\theta_t^i}'} \\right\\|_1.\n \\end{equation}\n\nFinally, combining (5) and (7), we obtain (6).\n\\end{proof}\n\n We determine the magnitude of the noise as follows.\n $\\sigma \\in {\\mathbb{R}^n}$ is a Laplace random noise vector drawn independently according to the density function:\n\\begin{equation}\nLap\\left( {x|\\mu } \\right) = \\frac{1}{{2\\mu }}\\exp \\left( { - \\frac{{\\left| x \\right|}}{\\mu }} \\right),\n\\end{equation}\nwhere $\\mu = {{S\\left( t \\right)} \\mathord{\\left\/ {\\vphantom {{S\\left( t \\right)}\\epsilon}} \\right. \\kern-\\nulldelimiterspace} \\epsilon}$.
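Sampling the noise vector with scale $\mu = S(t)/\epsilon$, where $S(t)$ is bounded via Lemma 1, can be sketched as follows (the parameter values are illustrative, not taken from the paper):

```python
import numpy as np

def laplace_noise(alpha_t, n, L, eps, rng):
    # Lemma 1 gives S(t) <= 2 * alpha_t * sqrt(n) * L,
    # so the per-coordinate Laplace scale mu = S(t)/eps is at most:
    mu = 2.0 * alpha_t * np.sqrt(n) * L / eps
    return rng.laplace(loc=0.0, scale=mu, size=n)

rng = np.random.default_rng(1)
delta = laplace_noise(alpha_t=0.1, n=10_000, L=1.0, eps=0.1, rng=rng)
# smaller eps (stronger privacy) => larger scale mu => noisier broadcasts
```

Note the trade-off visible in the scale: the noise magnitude grows with the dimensionality $n$ and shrinks as the privacy requirement is relaxed (larger $\epsilon$).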
After this, we use $Lap\\left( \\mu \\right)$ to denote the Laplace distribution.\n\n\\subsection{Guaranteeing $\\epsilon$-Differential Privacy}\nIn our system model, as an independent cloud node, each data center should protect privacy at every moment. If a data center is invaded by a malicious user, this adversary is able to obtain information about other users' social data stored in other data centers across the network. Hence, every transmitted datum should be processed with DP (i.e., satisfy (4)). Recall from Fig.1 that we add random noise to every communication in the data center network.\n\n Having described the method and magnitude of the added noise, we next prove how to guarantee $\\epsilon$-differential privacy for $\\theta$. First, we demonstrate the privacy preservation at each time $t$.\n\n\\textbf{Lemma 2.} At the $t$-th round, the $i$-th cloud node's output of $\\mathcal{A}$, $\\widetilde \\theta_t^i$, is $\\epsilon$-differentially private.\n\\vspace{-.8em}\n\\begin{proof}\nLet $\\widetilde \\theta_t^i = \\theta_t^i + \\sigma _t^i$ and $\\widetilde {\\theta}{_t^i}' = \\theta{_t^i}' + \\sigma _t^i$; then,\nby the definition of differential privacy (see Definition 1), $\\widetilde \\theta_t^i$ is $\\epsilon$-differentially private if\n\\vspace{-.8em}\\begin{equation}\\Pr [\\widetilde \\theta_t^i ] \\le {e^\\epsilon}\\Pr [\\widetilde {\\theta}{_t^i}' ].\n\\end{equation}\n We have\n\\begin{align}\n\\notag\\frac{{\\Pr \\left( {\\widetilde \\theta_t^i} \\right)}}{{\\Pr \\left( {\\widetilde {\\theta}{_t^i}'} \\right)}}& = \\prod\\limits_{j = 1}^n {\\left( {\\frac{{\\exp \\left( { - \\frac{{\\epsilon\\left| {\\theta_t^i[j] - \\theta[j]} \\right|}}{{S\\left( t \\right)}}} \\right)}}{{\\exp \\left( { - \\frac{{\\epsilon\\left| {{\\theta_t^i}'[j] -\\theta[j]} \\right|}}{{S\\left( t \\right)}}} \\right)}}} \\right)} \\\\\n\\notag& \\le \\prod\\limits_{j = 1}^n {\\exp \\left( {\\frac{{\\epsilon\\left| {{\\theta_t^i}'[j] -\\theta_t^i[j]}
\\right|}}{{S\\left( t \\right)}}} \\right)} \\\\\n\\notag &= \\exp \\left( {\\frac{{\\epsilon{{\\left\\| {{\\theta_t^i}' -\\theta_t^i} \\right\\|}_1}}}{{S\\left( t \\right)}}} \\right)\\le \\exp \\left( \\epsilon \\right),\n\\end{align}\nwhere the first inequality follows from the triangle inequality, and the last inequality follows from Lemma 1.\n\\end{proof}\n\nMcSherry \\cite{mcsherry2009} has shown that the privacy guarantee does not degrade across rounds when the samples used in the rounds are disjoint. Our system model is an online processing service in which the social data keep flowing: we dynamically serve users with recommendations based on their recent social behavior. Hence, during the $T$ rounds of Algorithm 1, the social data are disjoint, and the privacy guarantee does not degrade as Algorithm 1 runs. We then obtain the following theorem.\n\n\\textbf{Theorem 1 (Parallel Composition).} On the basis of Definitions 1 and 2, under Assumption 1 and Lemma 2, our algorithm is $\\epsilon$-differentially private.\n\nFor the details of the proof of Theorem 1, readers are referred to \\cite{mcsherry2009}.\n\n\n\n\\section{utility analysis}\nWe have mentioned the notion of regret, which is used to estimate the utility of online learning algorithms. The regret of our online learning algorithm represents the sum of mistakes made by all data centers during the learning and prediction process. When social websites conduct personalized recommendations (e.g., songs, videos, news, etc.) for users, not all recommendations make sense for individuals. But we expect that, as the system works and more social data are learned, the predictions used for recommending become more accurate. That means the regret should have an upper bound.\nTherefore, a lower regret bound indicates a better and faster distributed online learning algorithm.
First, we give the definition of ``regret''.\n\n\\textbf{Definition 3.} We propose Algorithm 1 for social websites over data center networks. Then, we measure the regret of the algorithm as\n\\vspace{-.6em}\\begin{eqnarray}\n {R} = \\sum\\limits_{t = 1}^T {\\sum\\limits_{i = 1}^m {f_t^i} } (w_t) - \\mathop {\\min }\\limits_{w \\in W} \\sum\\limits_{t = 1}^T {\\sum\\limits_{i = 1}^m {f_t^i} } (w),\n\\end{eqnarray}\nwhere ${w_t} = \\frac{1}{m}\\sum\\nolimits_i {w_t^i} $ denotes the average of the $m$ parameters of all data centers at time $t$.\nHence, ${R}$ is computed with respect to an average of the $m$ parameters $w_t^i$, which approximately estimates the actual performance of the whole system.\n\nTo analyze the regret $R$ of Algorithm 1, we first present a key lemma.\n\n\\textbf{Lemma 3.} Let $\\varphi _t$ be $\\beta$-strongly convex functions with norms ${\\left\\| \\cdot \\right\\|_{{\\varphi _t}}}$\n and dual norms $\\left\\| \\cdot \\right\\|_{{\\varphi _t}}^ * $. While Algorithm 1 runs, we have the following inequality\\vspace{-.6em}\n\\begin{align}\n\\notag&\\sum\\limits_{t = 1}^T {\\sum\\limits_{i = 1}^m {{{\\left( {{w_t} - w} \\right)}^T}} } {g_t}\\\\\n \\notag&\\le {{m{\\varphi _T}\\left( w \\right)} \\mathord{\\left\/\n {\\vphantom {{m{\\varphi _T}\\left( w \\right)} {{\\alpha _t}}}} \\right.\n \\kern-\\nulldelimiterspace} {{\\alpha _t}}} + \\frac{1}{{{\\alpha _t}}}\\sum\\limits_{t = 1}^T {\\sum\\limits_{i = 1}^m {\\left[ {\\varphi _t^ * \\left( {{\\theta _t}} \\right) - \\varphi _{t - 1}^ * \\left( {{\\theta _t}} \\right)} \\right.} } \\\\\n &\\quad+ \\left. 
{\\frac{{{\\alpha ^2}_t}}{{2\\beta }}\\left\\| {{g_t}} \\right\\|_2^2 + {\\alpha _t}{{\\left\\| {{\\delta _t}} \\right\\|}_2} + {\\lambda _t}{{\\left\\| {{g_t}} \\right\\|}_1}} \\right].\n\\end{align}\n \\begin{proof}\nWe define $\\Phi _t^i = \\varphi _t^ * \\left( {{\\theta _{t + 1}}} \\right) - \\varphi _{t-1}^ * \\left( {{\\theta _t}} \\right)$, where ${\\theta _t} = \\frac{1}{m}\\sum\\nolimits_i {\\theta _t^i} $.\\vspace{-.6em}\n\\begin{align}\n\\notag{\\theta _{t + 1}} &= \\frac{1}{m}\\sum\\nolimits_i {\\theta _{t + 1}^i} = \\frac{1}{m}\\sum\\nolimits_i {\\left( {\\sum\\nolimits_j {{a_{ij}}\\widetilde {\\theta _t^j}} - {\\alpha _t}g_t^i} \\right)} \\\\\n \\notag&= \\frac{1}{m}\\sum\\nolimits_j {\\left( {\\sum\\nolimits_i {{a_{ij}}} } \\right)\\widetilde {\\theta _t^j} - \\frac{1}{m}\\sum\\nolimits_i {\\alpha _tg_t^i} } \\\\\n\\notag& = \\frac{1}{m}\\sum\\nolimits_j {\\widetilde {\\theta _t^j}} - \\frac{1}{m}\\sum\\nolimits_i {\\alpha _tg_t^i} \\\\\n\\notag& = \\frac{1}{m}\\sum\\nolimits_j {\\left( {\\theta _t^j + \\delta _t^j} \\right)} - \\frac{1}{m}\\sum\\nolimits_i {\\alpha _tg_t^i} \\\\\n \\notag&= {\\theta _t}{\\rm{ + }}\\frac{1}{m}\\sum\\nolimits_j {\\left( {\\delta _t^j - \\alpha _tg_t^j} \\right)} \\\\\n\\, & = {\\theta _t} + {\\delta _t} - {\\alpha _tg_t},\n\\end{align}\nwhere intuitively ${\\delta _t} = \\frac{1}{m}\\sum\\nolimits_j {\\delta _t^j} $ and ${g_t} = \\frac{1}{m}\\sum\\nolimits_j {g_t^j} $.\n\nFirst, according to Fenchel-Young inequality, we have\n\\begin{align}\n\\notag\\sum\\limits_{t = 1}^T {\\Phi _t^i} &= \\varphi _T^ * \\left( {{\\theta _{T + 1}}} \\right) - \\varphi _0^ * \\left( {{\\theta _1}} \\right) = \\varphi _T^ * \\left( {{\\theta _{T + 1}}} \\right)\\\\\n &\\ge {w^T}{\\theta _{T + 1}} - {\\varphi _T}\\left( w \\right).\n\\end{align}\n\\vspace{-.6em}\nThen, \\begin{align}\n\\notag&\\Phi _t^i = \\varphi _t^ * \\left( {{\\theta _{t + 1}}} \\right) - \\varphi _t^ * \\left( {{\\theta _t}} \\right) + \\varphi _t^ * \\left( 
{{\\theta _t}} \\right) - \\varphi _{t - 1}^ * \\left( {{\\theta _t}} \\right)\\\\\n &\\le \\varphi _t^ * \\left( {{\\theta _t}} \\right) - \\varphi _{t - 1}^ * \\left( {{\\theta _t}} \\right) - {\\alpha _t}{{\\nabla \\varphi _t^ * \\left( {{\\theta _t}} \\right)}^T}{g_t} + \\frac{{{\\alpha _t^2}}}{{2\\beta }}\\left\\| {{g_t}} \\right\\|_2^2 + {\\alpha _t}{\\left\\| {{\\delta _t}} \\right\\|_2}.\n\\end{align}\n\nCombining (14) and (15) and summing over $t=1,...,T$ and $i=1,...,m$, we get\n\\begin{small} \\begin{align}\n\\notag&\\sum\\limits_{t = 1}^T {\\sum\\limits_{i = 1}^m {{\\alpha _t}{{\\left( {\\nabla \\varphi _t^ * \\left( {{\\theta _t}} \\right) - w} \\right)}^T}} } {g_t}\\\\\n &\\le m{\\varphi _T}\\left( w \\right) + \\sum\\limits_{t = 1}^T {\\sum\\limits_{i = 1}^m {\\left[ {\\varphi _t^ * \\left( {{\\theta _t}} \\right) - \\varphi _{t - 1}^ * \\left( {{\\theta _t}} \\right) + \\frac{{{\\alpha ^2}_t}}{{2\\beta }}\\left\\| {{g_t}} \\right\\|_2^2 + {\\alpha _t}{{\\left\\| {{\\delta _t}} \\right\\|}_2}} \\right]} }.\n\\end{align}\n\\end{small} \nAccording to Lemma 1 of Wang et al. \\cite{wang2015}, we know\n\\begin{equation}{w_t}^T{g_t} \\le \\nabla \\varphi _t^ * {\\left( {{\\theta _t}} \\right)^T}{g_t} + {\\lambda _t}{\\left\\| {{g_t}} \\right\\|_1}. \\end{equation}\n\nFinally, using (16) and (17), we obtain (12).\n\\end{proof}\n\nBased on Lemma 3, we readily obtain the regret bound of our system model.\n\n\\textbf{Theorem 2.} We propose Algorithm 1 for social big data computing over data center networks. Under Assumptions 1 and 2, we define the regret function as in (11). Set ${\\varphi _t}\\left( w \\right) = \\frac{1}{2}\\left\\| w \\right\\|_2^2$, which is $1$-strongly convex. 
Let ${\\lambda _t} = {\\alpha _t}\\lambda $, then the regret bound is\n\\begin{equation}R \\le R\\sqrt {\\left( {L + \\lambda } \\right)mTL} + \\frac{{2\\sqrt 2 {m^2}nTL}}{\\epsilon}\\left( {\\sqrt T - \\frac{1}{2}} \\right)\\end{equation}\n\n\\begin{proof}\nFor convex functions, we know that\n\\[f_t^i\\left( {{w_t}} \\right) - f_t^i\\left( w \\right) \\le {\\left( {{w_t} - w} \\right)^T}{g_t}.\\]\n\nIntuitively, due to (11) and (12), we obtain\n\\begin{align}\n\\notag R &\\le {{m{\\varphi _T}\\left( w \\right)} \\mathord{\\left\/\n {\\vphantom {{m{\\varphi _T}\\left( w \\right)} {{\\alpha _t}}}} \\right.\n \\kern-\\nulldelimiterspace} {{\\alpha _t}}} + \\frac{1}{{{\\alpha _t}}}\\sum\\limits_{t = 1}^T {\\sum\\limits_{i = 1}^m {\\left[ {\\varphi _t^ * \\left( {{\\theta _t}} \\right) - \\varphi _{t - 1}^ * \\left( {{\\theta _t}} \\right)} \\right.} } \\\\\n &\\quad\\,+ \\left. {\\frac{{{\\alpha ^2}_t}}{{2\\beta }}\\left\\| {{g_t}} \\right\\|_2^2 + {\\alpha _t}{{\\left\\| {{\\delta _t}} \\right\\|}_2} + {\\lambda _t}{{\\left\\| {{g_t}} \\right\\|}_1}} \\right]\n\\end{align}\n\nSince ${\\varphi _t}\\left( w \\right) = \\frac{1}{2}\\left\\| w \\right\\|_2^2$, we have $\\varphi _t^ * \\left( {{\\theta _t}} \\right) - \\varphi _{t - 1}^ * \\left( {{\\theta _t}} \\right) = 0$ and $\\beta=1$.\\vspace{-.6em}\n\n\\[R \\le \\underbrace {\\frac{{\\frac{1}{2}\\left\\| w \\right\\|_2^2}}{{{\\alpha _t}}} + \\frac{{mT{L^2}{\\alpha _t}}}{2} + {\\alpha _t}\\lambda mTL}_{S1} + \\underbrace {mT\\sum\\limits_{t = 1}^T {\\sum\\limits_{i = 1}^m {{{\\left\\| {{\\delta _t}} \\right\\|}_2}} } }_{S2},\\]\nwhere $\\left\\| {{g_t}} \\right\\| \\le L$ is defined previously.\n\nWe first compute $S1$:\nsetting ${\\alpha _t} = \\frac{{{{\\left\\| w \\right\\|}_2}}}{{2\\sqrt {\\left( {L + \\lambda } \\right)mTL} }}$, we have \n\\begin{equation}S1 \\le R\\sqrt {\\left( {L + \\lambda } \\right)mTL} \\sim O\\left( {\\sqrt T } \\right), \\end{equation} where $R$ is defined in Assumption 2.\n\nThen, for 
$S2$, we have\n \\begin{align}\n\\notag &E\\left[ {\\sum\\limits_{t = 1}^T {\\sum\\limits_{i = 1}^m {\\left\\| {\\sigma _t^i} \\right\\|} } } \\right] \\le \\frac{{2\\sqrt 2mnL }}{\\epsilon}\\left( {\\sqrt T - \\frac{1}{2}} \\right),\\\\\n& S2 \\le \\frac{{2\\sqrt 2 {m^2}nTL}}{\\epsilon}\\left( {\\sqrt T - \\frac{1}{2}} \\right)\\sim O\\left( {\\sqrt T } \\right).\n\\end{align}\n\nCombining (20) and (21), we obtain (18).\n\\end{proof}\n\n\n\n\\begin{figure}\n\\begin{minipage}[t]{0.5\\linewidth}\n\\centering\n\\includegraphics[width=1.9in]{1.eps}\\vspace{-.9em}\n\\caption{ Nodes=64 and Sparsity=52.3{\\%}}\n\\label{fig:side:a}\n\\end{minipage}\n\\begin{minipage}[t]{0.6\\linewidth}\n\\centering\n\\includegraphics[width=1.9in]{2.eps}\\vspace{-.9em}\n\\caption{Nodes=64 and $\\epsilon=0.1$}\n\\label{fig:side:b}\n\\end{minipage}\n\\begin{minipage}[t]{0.5\\linewidth}\n\\centering\n\\includegraphics[width=1.78in]{3.eps}\n\\caption{Nodes=64 and $\\epsilon=0.1$}\n\\label{fig:side:c}\n\\end{minipage}\n\\begin{minipage}[t]{0.6\\linewidth}\n\\centering\n\\includegraphics[width=1.8in]{4.eps}\n\\caption{ Sparsity=64.5{\\%}}\n\\label{fig:side:d}\n\\end{minipage}\n\\end{figure}\n\n\nAccording to Theorem 2, the regret bound matches the classical square-root regret $O(\\sqrt T )$ \\cite{zinkevich2003}, which means fewer mistakes are made in social recommendations as the algorithm runs. This result demonstrates that our private online learning algorithm for the social system makes sense. Further, from (18), we find: 1) a higher privacy level (smaller $\\epsilon$) increases the regret bound; 2) as the number of data centers grows, the regret bound becomes higher; 3) the communication matrix $A$ seems not to affect the bound, but we think it may affect the convergence. All these observations are examined in the following numerical experiments.\n\n\n\n\\section{simulations}\nIn this section, we conduct four simulations.
The first one is to study the privacy and predictive performance trade-offs. The second one is to find whether the topology of social networks has a big influence on the performance. The third one is to study the sparsity and performance trade-offs. The final one is to analyze the performance trade-off between the number of data centers and the accuracy.\nAll the simulations are conducted on real large-scale and high-dimensional social data. \n\n For our implementations, we use the hinge loss $f_t^i\\left( w \\right) = \\max \\left( {0,1 - y_t^i\\left\\langle {w,x_t^i} \\right\\rangle } \\right)$, where $\\left\\{ {\\left( {x_t^i,y_t^i} \\right) \\in {\\mathbb{R}^n} \\times \\left\\{ { \\pm 1} \\right\\}} \\right\\}$ are the data available only to the $i$-th data center. To be convincing, we use $100,000$ social data points in the experiments, and the dimensionality of the data is $10,000$. Since the tested data are real social data, we preprocess the data. Each dimension of the vectors is normalized into a certain numerical interval. Each data point is labeled with a value in $\\left[ {0,1} \\right]$ according to its classification attribute. The simulated model is designed as in Fig.1. A few computing nodes are distributed randomly. Each node is only connected with its adjacent nodes. Every information exchange is perturbed with Laplace noise. \nAll the experiments were conducted on a distributed model built with Hadoop under Linux (with 8 CPU cores and 64GB memory).\n \nIn Fig.2, the non-private algorithm has the lowest regret as expected, and the regret gets closer to the non-private regret as the privacy preservation becomes weaker. A higher privacy level (smaller $\\epsilon$) leads to more regret.
Fig.3 demonstrates that different topologies make no significant difference in the utility of the algorithm.\n Fig.4 indicates that an appropriate sparsity yields the best performance, while lower or higher sparsity leads to worse performance. Specifically, inducing sparsity can enhance the accuracy, obtaining nearly $18{\\%}$ more than non-sparse computing does. Fig.5 studies the performance with respect to the number of data center nodes. More centers cause a small decline (as much as $4\\%$ per 4 nodes) in the accuracy.\n\n\\section{conclusion}\nThe Internet has entered the Big Data era, and social networks are faced with massive data to handle. To address these challenges, we proposed a private distributed online learning algorithm for social big data over data center networks. We demonstrated that a higher privacy level leads to weaker utility of the system and that an appropriate sparsity enhances the performance of online learning on high-dimensional data.\nFurthermore, delays exist in social networks, which we did not consider here. Hence, we hope that online learning with delays can be addressed in future work.\n\n\n\\section*{Acknowledgment}\nThis research is supported by the National Science Foundation of China under Grant 61401169.\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzbidm b/data_all_eng_slimpj/shuffled/split2/finalzzbidm new file mode 100644 index 0000000000000000000000000000000000000000..ed797a5d5a48274ff4ec1d2331ec733ef7372fa1 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzbidm @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\nThe brain is organized into a hierarchy of functionally specialized regions, which selectively coordinate during behavior \\cite{hipp_oscillatory_2011,vezoli_brain_2021,engel_dynamic_2001,salinas_correlated_2001} and rest \\cite{fox_spontaneous_2007,brookes_investigating_2011,de_pasquale_cortical_2012}.
Effective function relies on dynamic coordination between brain regions, in response to a changing environment, on an essentially fixed and limited anatomical substrate \\cite{kopell_beyond_2014,quiroga_closing_2020,tononi_information_2004,kohn_principles_2020}. Through these anatomical connections multiplexing occurs: multiple signals that combine for transmission through a single communication channel must then be differentiated at a downstream target location \\cite{akam_oscillatory_2014,becker_resolving_2020}. How information \u2013 communicated via coordinated transmission of spiking activity \\cite{hahn_portraits_2019} \u2013 dynamically routes through the brain's complex, distributed, hierarchical network remains unknown \\cite{palmigiano_flexible_2017}. \n\n\\noindent Brain rhythms \u2013 approximately periodic fluctuations in neural population activity \u2013 have been proposed to control the flow of information within the brain network \\cite{akam_oscillatory_2014, fries_mechanism_2005,gonzalez_communication_2020,canolty_functional_2010,bonnefond_communication_2017,buzsaki_brain_2012} and proposed as the core of cognition \\cite{palva_discovering_2012,siegel_spectral_2012,siebenhuhner_genuine_2020,williams_modules_2021}. Through periodic modulations in neuronal excitability, rhythms may support flexible and selective communication, allowing exchange of information through coordination of phase at rhythms of the same frequency (e.g., coherence \\cite{fries_mechanism_2005,bonnefond_communication_2017,bastos_communication_2015,fries_rhythms_2015,schroeder_low-frequency_2009}) and different frequencies (e.g., phase-amplitude coupling \\cite{canolty_functional_2010,lisman_theta-gamma_2013,hyafil_neural_2015} or n:m phase locking \\cite{palva_phase_2005,belluscio_cross-frequency_2012,tass_detection_1998}). 
Recent evidence shows that neural oscillations appear as transient, isolated events \\cite{lundqvist_gamma_2016,sherman_neural_2016}; how such transient oscillations route information through neural networks remains unclear \\cite{van_ede_neural_2018}. \n\n\\noindent Significant evidence supports the organization of brain rhythms into a small set of discrete frequency bands (e.g., theta [4-8 Hz], alpha [8-12 Hz], beta [12-30 Hz], gamma [30-80 Hz]) \\cite{buzsaki_rhythms_2011,buzsaki_neuronal_2004}. Consistent frequency bands appear across mammalian species (mouse, rat, cat, macaque, and humans \\cite{buzsaki_scaling_2013}) and in some cases the biological mechanisms that pace a rhythm are well-established (e.g., the decay time of inhibitory post-synaptic potentials sets the timescale for the gamma rhythm \\cite{whittington_inhibition-based_2000}). Why brain rhythms organize into discrete bands, and whether these rhythms are fixed by the brain's biology or organized to optimally support brain communication, remains unclear. For example, an alternative organization of the brain's rhythms (e.g., into a larger set of different frequency bands) may better support communication but remain inaccessible given the biological mechanisms available to pace brain rhythms.\n\n\\noindent While much evidence supports the existence of brain rhythms and their importance to brain function, few theories explain their arrangement. Different factors have been proposed for the spacing between the center frequencies of neighboring bands: Euler's number ($e\\approx2.718$) \\cite{penttonen_natural_2003}, the integer 2 \\cite{klimesch_algorithm_2013}, or the golden ratio ($\\phi\\approx1.618$) \\cite{roopun_temporal_2008}. 
Existing theory shows that irrational factors (e.g., $e$ and $\\phi$) minimize interference between frequency bands, in support of separate rhythmic communication channels for multiplexing information in the brain \\cite{izhikevich_weakly_1997,hoppensteadt_thalamo-cortical_1998,izhikevich_weakly_1999}. However, if separate rhythmic channels communicate different information, and the organization of brain rhythms prevents interference, how a target location coordinates information across these rhythms is unclear. For example, how in theory a neural population integrates top-down and bottom-up input communicated in separate rhythmic channels (lower [$<$40 Hz] and higher [$>$40 Hz] frequency ranges, respectively \\cite{bastos_communication_2015,fries_rhythms_2015,michalareas_alpha-beta_2016,fontolan_contribution_2014,bastos_layer_2020}) remains unclear. We propose a solution to this problem: addition of a third rhythm. Motivated by an existing mathematical theory \\cite{izhikevich_weakly_1997,hoppensteadt_thalamo-cortical_1998,izhikevich_weakly_1999}, we show that effective communication among three rhythms is optimal for rhythms arranged according to the golden ratio.\n\n\\noindent In what follows, we show that golden rhythms \u2013 rhythms organized by the golden ratio \u2013 are the optimal choice to integrate information among separate rhythmic communication channels. 
We propose that brain rhythms organize in the discrete frequency bands observed, with the specific spacing observed, to optimize segregation and integration of information transmission in the brain.\n\n\\section{Methods}\n\nAll simulations and analysis methods to reproduce the manuscript results and figures are available at \\url{https:\/\/github.com\/Mark-Kramer\/Golden-Framework}.\n\n\\subsection{Damped harmonic oscillator model} \\label{meth:dsho}\nAs a simple model of rhythmic neural population activity (e.g., observed in the local field potential (LFP) or magneto\/electroencephalogram (M\/EEG)) we implement a network of coupled damped harmonic oscillators \\cite{moorman_golden_2007}. We choose the damped harmonic oscillator for three reasons. First, a harmonic oscillator (e.g., a spring) mimics the restorative mechanisms governing displacements about a stable equilibrium in neural dynamics (e.g., excitation followed by inhibition in the gamma rhythm \\cite{whittington_inhibition-based_2000,fries_gamma_2007}, depolarization followed by hyperpolarization \u2013 and vice versa \u2013 in bursting rhythms \\cite{izhikevich_synchronization_2001}). Second, brain rhythms are transient \\cite{lundqvist_gamma_2016,sherman_neural_2016}. In the model, damping (e.g., friction) produces transient oscillations that decay to a stable equilibrium. Third, the damped harmonic oscillator driven by noise is equivalent to an autoregressive model of order two (AR(2), see Appendix \\ref{Appendix1}). The AR(2) model simulates stochastic brain oscillations \\cite{spyropoulos_spontaneous_2020}, consistent with the concept of a neural population with resonant frequency driven by random inputs.\n\\\\\n\n\\noindent We simulate an 8-node network of damped, driven harmonic oscillators. 
We model the activity $x_k$ at node $k$ as,\n\\begin{equation}\\label{eq:dsho}\n \\ddot{x}_k + 2 \\beta \\dot{x}_k + \\omega_k^2 x_k = \\left( \\bar{g}_C + \\bar{g}_S\\cos{\\omega_S t} \\right) \\sum_{j \\neq k} x_j \\ ,\n\\end{equation}\nwhere $\\beta$ is the damping constant, and $\\omega_k=2 \\pi f_k$ is the natural frequency of node $k$. The activity $x_j$ summed from all other nodes ($j \\neq k$) drives node $k$. We modulate this drive by a gain function with two terms: a constant gain $\\bar{g}_C$ and a sinusoidal gain with amplitude $\\bar{g}_S$ and frequency $\\omega_S=2 \\pi f_S$. To include noise in the dynamics, we represent the second order differential equation in Equation (\\ref{eq:dsho}) as two first order differential equations for the position and velocity of the oscillator. We add to the position dynamics a noise term, normally distributed with mean zero and standard deviation equal to the average standard deviation of the evoked response at all oscillators simulated without noise, excluding the perturbed oscillator from the average. In this way, we add meaningful noise of the same magnitude to all oscillators. We numerically simulate the model with noise using the Euler-Maruyama method. To examine the impact of different noise levels, we multiply the noise term by factors $\\{0, 0.5, 1.0, 1.5, 2.0\\}$. For each noise level, we repeat the simulation 100 times with random noise instantiations.\n\n\\section{Results}\nIn what follows, we propose that brain rhythms organized according to the golden ratio produce triplets of rhythms that establish a hierarchy of cross-frequency coupling. 
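This integration scheme can be sketched in a few lines of Python (in the spirit of the linked notebooks); the damping constant $\beta$, step size, and simulation horizon below are illustrative choices not fixed by the text, and a semi-implicit update is used for numerical stability:

```python
import numpy as np

def simulate_network(f_nodes, f_S, g_C=50.0, g_S=50.0, beta=1.0,
                     noise_sd=0.0, x0=None, dt=1e-4, T=1.5, seed=0):
    """Semi-implicit Euler-Maruyama simulation of the network
    x_k'' + 2*beta*x_k' + omega_k^2 x_k = (g_C + g_S cos(omega_S t)) sum_{j!=k} x_j,
    with noise added to the position dynamics."""
    rng = np.random.default_rng(seed)
    omega = 2 * np.pi * np.asarray(f_nodes, float)  # natural frequencies (rad/s)
    omega_S = 2 * np.pi * f_S                       # gain-modulation frequency
    x = np.zeros(len(omega)) if x0 is None else np.asarray(x0, float)
    v = np.zeros_like(x)
    ts = np.arange(0.0, T, dt)
    X = np.empty((len(ts), len(x)))
    for i, t in enumerate(ts):
        gain = g_C + g_S * np.cos(omega_S * t)
        drive = gain * (x.sum() - x)                # input summed over j != k
        v = v + dt * (-2 * beta * v - omega**2 * x + drive)
        x = x + dt * v + np.sqrt(dt) * noise_sd * rng.standard_normal(len(x))
        X[i] = x
    return ts, X

# Perturb the phi^6 ~ 17.9 Hz oscillator; gain modulation at phi^7 ~ 29 Hz.
phi = (1 + np.sqrt(5)) / 2
f_nodes = phi ** np.arange(3, 11)                   # 8 golden rhythms (Hz)
x0 = np.where(np.arange(8) == 3, 1.0, 0.0)          # f_nodes[3] = phi^6
ts, X = simulate_network(f_nodes, f_S=phi**7, x0=x0)
mean_amplitude = np.abs(X).mean(axis=0)             # response at each node
```

With these settings the perturbed node shows the largest mean response; the selective evoked responses produced by the gain modulation are explored in the Results.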
We conclude with four hypotheses deduced from this framework and testable in experiments.\n\n\\subsection{Rhythms organized by the golden ratio support selective cross-frequency coupling}\n\nIn the case of weakly-connected oscillatory populations, whether the populations interact or not depends on their frequency ratios \\cite{izhikevich_weakly_1997,hoppensteadt_thalamo-cortical_1998,izhikevich_weakly_1999}; rational frequency ratios support interactions, while irrational frequency ratios do not. Motivated by this theory, we consider a network of interacting, rhythmic neural populations (Figure \\ref{fig:schematic}). We model each population as a damped harmonic oscillator, with each oscillator assigned a natural frequency $f_k$. To couple the populations, we drive each oscillator with the summed activity of all other oscillators (i.e., the connectivity is all-to-all). We modulate this drive by a gain function ($g$) with constant ($\\bar{g}_C$) and sinusoidal (amplitude $\\bar{g}_S$, frequency $f_S$) terms: $g=\\bar{g}_C+\\bar{g}_S \\cos{\\left(2\\pi f_S t \\right)}$; see {\\it Methods}. Analysis of this coupled oscillator system reveals resonance (i.e., a large amplitude response) at a target oscillator in two cases. To describe these cases, we denote the frequency of a target oscillator as $f_T$ and the frequency of a driver oscillator as $f_D$. 
A large amplitude (resonant) response occurs at the target oscillator in the following cases,\n\\\\\n\nconstant gain modulation:\n\\vspace{-0.15in}\n\\begin{eqnarray}\n 0 = f_T - f_D \\label{eq:const}\n\\end{eqnarray}\n\nsinusoidal gain modulation:\n\\vspace{-0.15in}\n\\begin{subequations} \\label{eq:sin}\n\\begin{eqnarray}\n f_S = f_T - f_D \\label{eq:sin_a} \\\\\n f_S = f_D - f_T \\label{eq:sin_b} \\\\\n f_S = f_D + f_T \\label{eq:sin_c}\n\\end{eqnarray}\n\\end{subequations}\n\n\\noindent The first case (Equation \\ref{eq:const}) corresponds to the standard result for a damped target oscillator driven by sinusoidal input; when the sinusoidal driver frequency $f_D$ matches the natural frequency of the target $f_T$, the response amplitude at the target is largest (e.g., see Chapter 5 of \\cite{taylor_classical_2005}). The next three cases (Equation \\ref{eq:sin}) correspond to a damped target oscillator driven by sinusoidal input modulated by sinusoidal gain. If the gain frequency $f_S$ equals the sum or difference of the target and driver frequencies, then the response amplitude at the target is largest (see Appendix \\ref{Appendix2}). We note that the first case corresponds to within-frequency coupling (i.e., the driver and target have the same frequency) while the next three cases correspond to cross-frequency coupling (i.e., the driver and target have different frequencies). We also note that, in this model, we assume an oscillator responds to an input by exhibiting a large amplitude response; in this way, we consider the oscillation amplitude as encoding information, consistent with the notion of information encoded in firing rate modulations \\cite{akam_oscillations_2010}.\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=6cm]{Figure-1.pdf}\n\\caption{\\textbf{Illustration of the coupled oscillator network.} Oscillators with frequency $f_k$ receive input from all other oscillators. 
Input from one oscillator (frequency $f_1$) to all other oscillators $(f_2,f_3,\\ldots,f_8)$ is shown (black curves); similar connectivities exist from all other oscillators (not shown). Constant ($\\bar{g}_C$) and sinusoidal ($\\bar{g}_S$) gain modulates each input (gray lines).} \\label{fig:schematic}\n\\end{figure}\n\n\\noindent The results in Equations (\\ref{eq:const}, \\ref{eq:sin}) hold for any choice of driver, target, and gain frequencies without additional restrictions. We now apply an additional restriction, and consider the damped harmonic oscillator network with oscillator and gain frequencies $f_k$ satisfying,\n\\begin{eqnarray} \\label{eq:fk}\n f_k = f_0 \\, c^k \\, ,\n\\end{eqnarray}\nwhere $f_0 > 0$ determines the frequency at $k=0$. As discussed above, candidate values for $c$ deduced from {\\it in vivo} observations include Euler's number ($e\\approx2.718$) \\cite{penttonen_natural_2003}, the integer 2 \\cite{klimesch_algorithm_2013}, or the golden ratio ($\\phi\\approx1.618$) \\cite{roopun_temporal_2008}. Then, given the set of three neighboring frequencies $\\{f_k,f_{k+1},f_{k+2}\\}$, what choice of $c$ supports cross-frequency coupling in the network? To answer this, we choose $f_S = f_{k+2}$, $f_D = f_{k+1}$, and $f_T = f_k$ so that Equation (\\ref{eq:sin_c}) becomes\n\\begin{equation*}\n f_{k+2} = f_{k+1}+f_k \\, .\n\\end{equation*}\nSubstituting Equation (\\ref{eq:fk}) into this expression and solving for $c$, we find\n\\begin{equation*}\n c^2 - c - 1 = 0\n\\end{equation*}\nwith solution\n\\begin{equation*}\n c = \\dfrac{1 + \\sqrt{5}}{2} = \\phi \\, ,\n\\end{equation*}\nthe golden ratio. The same solution holds for all Equations (\\ref{eq:sin}) with appropriate selection of $\\{f_S,f_D,f_T\\}$ from $\\{f_k,f_{k+1},f_{k+2}\\}$. 
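This derivation can be checked numerically; a minimal sketch (frequencies in arbitrary units, with $f_0 = 1$):

```python
import numpy as np

# The spacing f_k = f_0 * c^k supports f_{k+2} = f_{k+1} + f_k only when
# c^2 - c - 1 = 0; the positive root is the golden ratio.
roots = np.roots([1.0, -1.0, -1.0])
c = roots[roots > 0].item()
phi = (1 + np.sqrt(5)) / 2
assert np.isclose(c, phi)

# Any three neighboring golden rhythms then satisfy f_k + f_{k+1} = f_{k+2}:
f = phi ** np.arange(10)              # f_0 = 1 (arbitrary units)
assert np.allclose(f[:-2] + f[1:-1], f[2:])
```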
We conclude that, for a system of damped coupled oscillators with oscillator and gain frequencies spaced by the multiplicative factor $c$, cross-frequency coupling between three neighboring rhythms requires $c = \\phi$, the golden ratio. In other words, we propose that frequencies organized according to the golden ratio are particularly suited to support these cross-frequency interactions.\n\\\\\n\n\\noindent To illustrate this result, we consider a network of 8 damped, coupled oscillators each with a different natural frequency determined by the golden ratio ($f_k=\\phi^k$, where $\\phi= \\frac{(1+\\sqrt{5})}{2}$; Figure \\ref{fig:8nodes_phi}); we label these rhythms \u2013 scaled by factors of the golden ratio \u2013 as {\\it golden rhythms}. Starting all nodes in a resting state, we perturb one oscillator ($f_D = \\phi^6\\approx17.9$~Hz) to produce a transient oscillation at that node. With only a constant gain $(\\bar{g}_C=50, \\bar{g}_S=0)$, the impact of the perturbation on the other oscillators is small (Figure \\ref{fig:8nodes_phi}B); because $f_T \\neq f_D$ for any oscillator pair, the network impact of the perturbation is small, despite the constant coupling.\n\\\\\n\n\\noindent Including the sinusoidal gain modulation ($\\bar{g}_C=50, \\bar{g}_S=50$) results in selective communication between the oscillators. For example, choosing $f_S= \\phi^7 \\approx 29.0$ Hz, we observe an evoked response at two oscillators (Figure \\ref{fig:8nodes_phi}C): $f_T= \\phi^8 \\approx 47.0$ Hz (consistent with Equation (\\ref{eq:sin_a})) and $f_T= \\phi^5 \\approx 11.1$ Hz (consistent with Equation (\\ref{eq:sin_c})). We note that the frequency of evoked responses matches the natural frequency of each oscillator. We also note that no solution exists for Equation (\\ref{eq:sin_b}) because $f_T>0$. 
Different choices of gain frequency $f_S$ result in different pairs of cross-frequency coupling between the driver ($f_D$) and response oscillators (Figure \\ref{fig:8nodes_phi}D). Cross-frequency coupling occurs when Equations (\\ref{eq:sin}) are satisfied with $f_D \\approx17.9$~Hz. The coupling is selective; for example, choosing a gain modulation of $f_S=11.1$ Hz results in cross-frequency coupling between the driver ($f_D=17.9$ Hz) and faster ($29$ Hz) and slower ($6.9$ Hz) golden rhythms. In this case, sinusoidal gain frequencies $f_S$ exist that support cross-frequency coupling and occur at factors of the golden ratio: i.e., $f_S = \\phi^k$ (Figure \\ref{fig:8nodes_phi}D, circles). We note that evoked responses also occur when $f_S \\neq \\phi^k$ (Figure \\ref{fig:8nodes_phi}D, X's); in these cases, frequencies outside the original rhythm sequence $f_k = \\phi^k$ must exist to support cross-frequency coupling. We conclude that if brain rhythmic activity \u2013 both oscillator and gain frequencies \u2013 organizes according to the golden ratio, then cross-frequency coupling is possible between a subset of separate rhythmic communication channels.\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=14.5cm]{Figure-2.pdf}\n\\caption{\\it{\\textbf{Rhythms organized by the golden ratio support selective cross-frequency coupling.} \\textbf{(A)} We perturb one oscillator (natural frequency 17.9 Hz, red), with connectivity to all other oscillators; $\\phi$ is the golden ratio. \\textbf{(B)} With only constant gain modulation, the perturbation ($t=0$, red) has little impact on other nodes. \\textbf{(C)} With sinusoidal gain modulation at 29 Hz, two oscillators (natural frequencies 11.1 Hz and 47.0 Hz) selectively respond to the perturbation. \\textbf{(D)} Average response amplitude (logarithm base 10) from $t=0$ to $t=1.5$ s at each oscillator versus gain frequency $f_S$. 
Different choices of gain frequency support selective coupling between the perturbed oscillator (natural frequency 17.9 Hz) and other oscillators. Peaks in response amplitude occur at golden rhythms (yellow circles, vertical lines) or other frequencies (red X's). Minimum response set to 0 for each curve, and vertical scale bar indicates 1. \\textbf{(E)} Average amplitude (black curve) and range ($2.5\\%$ to $97.5\\%$ from 100 simulations, shaded region) of evoked responses versus noise level. The oscillator with frequency $17.9$ Hz is directly perturbed, and sinusoidal gain modulation occurs with $f_S \\approx 29.0$ Hz. Oscillators at golden rhythms exhibit different behavior with perturbation (red) versus without perturbation (gray). Code to simulate this network and create this figure is available \\href{https:\/\/github.com\/Mark-Kramer\/Golden-Framework\/blob\/main\/Figure-2.ipynb}{here}}.} \\label{fig:8nodes_phi}\n\\end{figure}\n\n\\noindent We now consider the impact of noise on this cross-frequency communication. With sinusoidal gain modulation ($\\bar{g}_C=50, \\bar{g}_S=50$, and $f_S= \\phi^7 \\approx 29.0$ Hz) and including noise in the oscillator dynamics (see {\\it Methods}), we show the results for two cases: with perturbation and without perturbation to one oscillator ($f_D \\approx 17.9$ Hz, as above). Without perturbation (gray in Figure \\ref{fig:8nodes_phi}E), we find no evidence of an evoked response at any node, as expected; the amplitude remains small at all nodes, with a small gradual increase as the noise increases. With the perturbation (red in Figure \\ref{fig:8nodes_phi}E), we find an evoked response at the perturbed oscillator ($f_D \\approx 17.9$ Hz) and two other oscillators: $f_T \\approx 47.0$ Hz (consistent with Equation (\\ref{eq:sin_a})) and $f_T \\approx 11.1$ Hz (consistent with Equation (\\ref{eq:sin_c})). As the noise increases, so does the variability in the evoked response. 
For the lower frequency $f_T \\approx 11.1$ Hz oscillator, the evoked response remains evident as the noise increases; in Figure \\ref{fig:8nodes_phi}E, the perturbed (red) and unperturbed (gray) responses remain separate. For the higher frequency $f_T \\approx 47.0$ Hz oscillator, the evoked response becomes more difficult to distinguish from the unperturbed case as the noise increases; in Figure \\ref{fig:8nodes_phi}E, the perturbed (red) and unperturbed (gray) responses begin to overlap with increasing noise. We note that the amplitude of evoked responses decreases with frequency. Therefore, the same amount of noise impacts the higher frequency ($f_T \\approx 47.0$ Hz) oscillator more than the lower frequency ($f_T \\approx 11.1$ Hz) oscillator, making an evoked response more difficult to distinguish from background noise in the higher frequency case. We also note that oscillators not satisfying Equation (\\ref{eq:sin}) (i.e., $f_T \\approx \\{6.9, 29.0, 76.0\\}$ Hz when $f_D \\approx 17.9$ Hz and $f_S \\approx 29.0$ Hz) exhibit little evidence of an evoked response at any noise level.\n\\\\\n\n\\vspace{-0.075in}\n\\noindent To illustrate the utility of the golden ratio, we consider an alternative network of oscillators with frequencies organized by a factor of 2 (Figure \\ref{fig:8nodes_2}A); such integer relationships have been proposed as important to neural communication \\cite{palva_phase_2005,marin_garcia_genuine_2013,klimesch_algorithm_2013}. As expected, with only constant gain ($\\bar{g}_C = 50$) a perturbation to one node ($f_D=16$ Hz) does not impact the rest of the network (Figure \\ref{fig:8nodes_2}B). Including sinusoidal gain with frequency $f_S$ can produce cross-frequency coupling. For example, choosing $f_S=8$ Hz results in cross-frequency coupling between the $f_D=16$ Hz and $f_T=8$ Hz rhythms (Figure \\ref{fig:8nodes_2}C). 
Similarly, choosing $f_S=16$ Hz results in cross-frequency coupling between the $f_D=16$ Hz and $f_T=32$ Hz rhythms; however, this choice of $f_S$ also results in strong cross-frequency coupling between $f_D=16$ Hz and lower frequency rhythms ($f_T=8,4,2,1$ Hz; Figure \\ref{fig:8nodes_2}D). Importantly, we note that cross-frequency coupling typically occurs at sinusoidal gain frequencies that differ from the set of oscillator frequencies at $2^k$ Hz (vertical lines in Figure \\ref{fig:8nodes_2}D); a new set of rhythms (and rhythm generators) must exist to support cross-frequency coupling in this network.\n\\\\\n\n\\vspace{-0.075in}\n\\noindent To summarize, in a network of damped coupled oscillators (Equation \\ref{eq:dsho}), sinusoidal gain modulation supports cross-frequency coupling (Equation \\ref{eq:sin}). If oscillator and gain frequencies organize according to a multiplicative factor (Equation \\ref{eq:fk}), then cross-frequency coupling between neighboring frequencies requires a multiplicative factor of $\\phi$, the golden ratio (e.g., Figure \\ref{fig:8nodes_phi}D). While oscillators organized with a different multiplicative factor can still produce cross-frequency coupling, the frequencies of effective gain modulation are not part of the original rhythmic sequence (e.g., Figure \\ref{fig:8nodes_2}D), thus requiring the brain devote more resources to implementing a larger set of rhythms in support of cross-frequency interactions.\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=14.5cm]{Figure-3.pdf}\n\\caption{\\it{\\textbf{An integer scaling between oscillators limits cross-frequency interactions.} \\textbf{(A)} In a network of oscillators with frequencies organized by a factor of 2, we perturb one oscillator (natural frequency 16 Hz, red). \\textbf{(B)} With constant gain, the impact of the perturbation is limited. \\textbf{(C)} With sinusoidal modulation at $f_S=8$ Hz, a response appears at another oscillator (natural frequency 8 Hz). 
\\textbf{(D)} The average response amplitude at each oscillator versus gain frequency $f_S$. Many oscillators respond when the gain frequency is 8 Hz, and responses tend not to occur at the oscillator frequencies; see Figure \\ref{fig:8nodes_phi}D for plot details. Code to simulate this network and create this figure is available \\href{https:\/\/github.com\/Mark-Kramer\/Golden-Framework\/blob\/main\/Figure-3.ipynb}{here}}.} \\label{fig:8nodes_2}\n\\end{figure}\n\n\\subsection{Rhythms organized by the golden ratio support ensembles of cross-frequency coupling}\n\nIn the previous section, we considered a network of nodes oscillating at different natural frequencies. As an alternative example, we now consider a network with two ensembles of nodes oscillating at different frequencies. The two ensembles consist of nodes oscillating at frequencies $\\phi^k$ or $\\phi^{k+2}$, where $\\phi$ is the golden ratio. With only constant gain, a perturbation to any node impacts only nodes of the same ensemble (i.e., with the same frequency). Including sinusoidal gain modulation with (intermediate) frequency $f_S= \\phi^{k+1}$, a perturbation to any node impacts nodes in both ensembles. We illustrate this in the 8-node network with 4 nodes in each ensemble oscillating at natural frequencies $\\phi^4 \\approx 6.85$ Hz or $\\phi^6 \\approx 17.9$ Hz (Figure \\ref{fig:ensemble_phi}A). With only constant gain ($\\bar{g}_C=50, \\bar{g}_S=0$), a perturbation to one $\\phi^6 \\approx 17.9$ Hz (driver) node impacts the amplitude of all other nodes in the same ensemble (Figure \\ref{fig:ensemble_phi}B). Including sinusoidal gain modulation ($\\bar{g}_C=50, \\bar{g}_S=50$) with frequency $\\phi^5 \\approx 11.1$ Hz, the same perturbation now impacts all nodes in both ensembles (Figure \\ref{fig:ensemble_phi}C). 
From Equation \\ref{eq:sin} we determine that two sinusoidal gain frequencies support cross-frequency coupling between the driver ($f_D= \\phi^6 \\approx 17.9$ Hz) and target ($f_T=\\phi^4 \\approx 6.85$ Hz) ensembles,\n\\begin{subequations}\n\\begin{eqnarray*}\n f_S=f_T-f_D=6.85-17.9<0.00 \\, , \\\\\n f_S=f_D-f_T=17.9-6.85=11.1 \\, \\mathrm{Hz} \\, , \\\\\n f_S=f_D+f_T=17.9+6.85=24.6 \\, \\mathrm{Hz} \\, .\n\\end{eqnarray*}\n\\end{subequations}\n\\noindent However, of these two frequencies, only the former ($f_S=11.1$ Hz) is also a golden rhythm (Figure \\ref{fig:ensemble_phi}D, box). In this case, cross-frequency coupling occurs when ensemble and gain rhythms organize in a \"golden triplet\" $(f_T, f_S, f_D) =(\\phi^k,\\phi^{k+1},\\phi^{k+2}) \\approx (6.85, 11.1, 17.9)$ Hz, where $\\phi^k+\\phi^{k+1} = \\phi^{k+2}$.\n\n\\begin{figure}[t]\n\\centering\n\\includegraphics[width=14.5cm]{Figure-4.pdf}\n\\caption{\\it{\\textbf{Golden rhythms support coupling among an ensemble of nodes.} \\textbf{(A)} The network consists of two ensembles with frequencies: $\\phi^4$ and $\\phi^6$. \\textbf{(B)} With constant gain, perturbing a node in one ensemble impacts (red) other nodes in the same ensemble (black). \\textbf{(C)} With sinusoidal gain at frequency $f_S=11.1$ Hz, the same perturbation impacts both ensembles. \\textbf{(D)} Average amplitude response versus gain frequency for all nodes in both ensembles. The response at the unperturbed ensemble (blue) increases when the gain frequency is a golden rhythm (yellow box); see Figure \\ref{fig:8nodes_phi}D for additional plot details. Code to simulate this network and create this figure is available \\href{https:\/\/github.com\/Mark-Kramer\/Golden-Framework\/blob\/main\/Figure-4.ipynb}{here}}.} \\label{fig:ensemble_phi}\n\\end{figure}\n\n\\noindent An alternative choice of irrational frequency ratio between the brain's rhythms is Euler's number ($e$) \\cite{penttonen_natural_2003}. 
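The distinction between the candidate spacings can be seen with simple arithmetic: of the factors proposed {\it in vivo}, only the golden ratio makes two neighboring rhythms sum to the next; a quick check:

```python
import numpy as np

phi = (1 + np.sqrt(5)) / 2
for name, c in [("golden ratio", phi), ("factor 2", 2.0), ("Euler's number", np.e)]:
    f = c ** np.arange(2, 5)              # neighbors f_k, f_{k+1}, f_{k+2}
    print(f"{name:14s}: {np.round(f, 2)}  f_k + f_(k+1) == f_(k+2)?",
          bool(np.isclose(f[0] + f[1], f[2])))
```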
Repeating the simulation with two ensembles of frequency $e^k$ or $e^{k+2}$ results in cross-frequency coupling between ensembles only when $f_S=e^{k+2} \\pm e^k$ (see Figure \\ref{fig:ensemble_e} for an example with $k=2$). We therefore find similar results for the \"Euler triplet\" $(f_D, f_T,f_S)=(e^{k+2},e^k,e^{k+2} \\pm e^k)$ or specifically for $k=2$, $(f_D, f_T, f_S)=(e^4,e^2,e^4 \\pm e^2)$. However, this Euler triplet is not consistent with the ratio of $e$ observed {\\it in vivo}, where three neighboring frequency bands appear at multiplicative factors of $e$ (e.g., $(f,ef,e^2f)$) and the two slower rhythms do not sum to equal the faster rhythm (e.g., $f+ef \\neq e^2f)$. Only for three neighboring frequency bands related by the golden ratio $(f,\\phi f, \\phi^2f)$ do the frequencies of the slower rhythms sum to the faster rhythm (i.e., $f+\\phi f=\\phi^2f$).\n\\\\\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=14.5cm]{Figure-5.pdf}\n\\caption{\\it{\\textbf{Rhythms organized by Euler's number do not support coupling between ensembles of nodes.} \\textbf{(A)} The network consists of two ensembles with frequencies: $e^4$ and $e^2$. \\textbf{(B)} With constant gain, perturbing a node in one ensemble impacts (red) other nodes in the same ensemble (black). \\textbf{(C)} With sinusoidal gain at frequency $f_S=47.2$ Hz, the same perturbation impacts both ensembles. \\textbf{(D)} Average amplitude response versus gain frequency for all nodes in both ensembles. The response at the unperturbed ensemble (blue) does not increase when the gain frequency is a factor of the Euler number (black vertical lines); see Figure \\ref{fig:8nodes_phi}D for additional plot details. 
Code to simulate this network and create this figure is available \\href{https:\/\/github.com\/Mark-Kramer\/Golden-Framework\/blob\/main\/Figure-5.ipynb}{here}}.} \\label{fig:ensemble_e}\n\\end{figure}\n\n\\subsection{Golden rhythms establish a hierarchy of cross-frequency interactions}\nWe now consider results derived for weakly coupled oscillators, which motivated the study of (strongly) coupled damped harmonic oscillators presented above. In \\cite{izhikevich_weakly_1997,hoppensteadt_thalamo-cortical_1998}, Hoppensteadt and Izhikevich consider the general case of intrinsically oscillating neural populations with weak synaptic connections. When uncoupled, each neural population exhibits periodic activity (i.e., a stable limit cycle attractor) described by the phase of oscillation. We note that, in our study of coupled damped harmonic oscillators, we instead consider the amplitude of each oscillator. When Hoppensteadt and Izhikevich include weak synaptic connections between the neural populations, the phases of the neural populations interact only when a resonance relation exists between frequencies, i.e., \n\\begin{equation*}\n \\sum_{i}{k_i f_i}=0 ,\n\\end{equation*}\nwhere $k_i$ is an integer and not all 0, and $f_i$ is the frequency of neural population $i$. 
The resonance order is then defined as the summed magnitudes of the integers $k_i$,\n\\begin{equation*}\n \\mathrm{resonance\\ order}= \\sum_i |k_i| .\n\\end{equation*}\nFor the case of two neural populations, if\n\\begin{equation*}\n k_1 f_1+k_2 f_2 = 0\n\\end{equation*}\nfor integers $k_1$ and $k_2$, then\n\\begin{equation*}\n \\frac{f_2}{f_1}=-\\frac{k_1}{k_2} = \\mathrm{rational} .\n\\end{equation*}\nIn other words, if the frequency ratio $f_2\/f_1$ of the two neural populations is rational (i.e., the ratio of two integers), then the neural populations may interact, with the strength of interaction decreasing as either $k_1$ or $k_2$ increases (i.e., stronger interactions correspond to smaller resonance orders)\\footnote{See Proposition 9.14 of \\cite{izhikevich_weakly_1997}}. Alternatively, if this frequency ratio is irrational,\n\\begin{equation*}\n \\frac{f_2}{f_1} = \\mathrm{irrational} ,\n\\end{equation*}\nthen the two neural populations behave as if uncoupled.\n\\\\\n\n\\noindent Consistent with the results presented here, Hoppensteadt and Izhikevich show that golden triplets possess the lowest resonance order, and therefore the strongest cross-frequency coupling \\cite{izhikevich_weakly_1997,hoppensteadt_thalamo-cortical_1998,izhikevich_weakly_1999}. However, other resonances exist due to the recursive nature of rhythms organized by the golden ratio. To illustrate these relationships, we consider a set of golden rhythms $\\{f_k\\}$ \u2013 rhythms organized by the golden ratio so that,\n\\begin{equation} \\label{eq:hierarchy1}\n f_{k-1}+f_k=f_{k+1} \\, ,\n\\end{equation}\nwhere $k$ is an integer. Because\n\\begin{equation*}\n f_{k-1}+f_k-f_{k+1}=0\n\\end{equation*}\nthe resonance order is $3$; this golden triplet supports strong cross-rhythm communication. 
Replacing $k$ with $k-1$ in Equation \\ref{eq:hierarchy1}, we find\n\\begin{equation}\\label{eq:hierarchy2}\n f_{k-2}+f_{k-1}=f_k \\, .\n\\end{equation}\nThen, replacing $f_k$ in Equation \\ref{eq:hierarchy1} with the expression in Equation \\ref{eq:hierarchy2}, we find\n\\begin{equation*}\n f_{k-1}+\\big(f_{k-2}+f_{k-1}\\big)=f_{k+1}\n\\end{equation*}\nor\n\\begin{equation} \\label{eq:heirarchy3}\n f_{k-2}+2f_{k-1}-f_{k+1}=0 \\, ,\n\\end{equation}\nwhich has resonance order $4$. Continuing this procedure to replace $f_{k-2}$ in the equation above, we find\n\\begin{equation} \\label{eq:heirarchy4}\n {-f}_{k-3}+{3\\ f}_{k-1}{-\\ f}_{k+1}=0 \\, ,\n\\end{equation}\nwhich has resonance order $5$. In this way, golden rhythms support specific patterns of preferred coupling between rhythmic triplets, with the strongest coupling (lowest resonance order) between golden triplets.\n\n\\noindent As a specific example, we fix $f_{k+1}=40$ Hz and list in Table \\ref{tab:ex} the sequence of golden rhythms beginning with this generating frequency. We expect strong coupling between $(f_{k-1},f_k,f_{k+1}) = (15.3, 25, 40)$ Hz, a golden triplet, which has resonance order $3$. Using Equations \\ref{eq:heirarchy3}, \\ref{eq:heirarchy4}, and Table \\ref{tab:ex}, we compute additional triplets with higher resonance orders: $(9.4, 15.3, 40)$ Hz with resonance order $4$, and $(5.8,15.3,40)$ Hz with resonance order $5$. Continuing this procedure organizes golden rhythms into triplets with different resonance orders (Figure \\ref{fig:resonance_order}). Triplets with low resonance order appear near the target frequency of $f_{k+1}=40$ Hz (see gold, silver, and bronze circles in Figure \\ref{fig:resonance_order}), and resonance orders tend to increase for frequencies further from $f_{k+1}=40$ Hz, with exceptions (e.g., $( f_{k-1},f_k, f_{k+1})=(2.2, 9.4, 40)$ Hz has resonance order $6$). 
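This bookkeeping can be automated: search small integer vectors $k_i$ with $\sum_i k_i f_i = 0$ and report the minimal resonance order $\sum_i |k_i|$; a sketch (the search bound is an illustrative choice):

```python
import itertools
import numpy as np

def resonance_order(freqs, kmax=6, tol=1e-6):
    """Minimal resonance order sum(|k_i|) over nonzero integer vectors
    (|k_i| <= kmax) satisfying sum(k_i * f_i) = 0; None if no resonance."""
    best = None
    for ks in itertools.product(range(-kmax, kmax + 1), repeat=len(freqs)):
        if any(ks) and abs(sum(k * f for k, f in zip(ks, freqs))) < tol:
            order = sum(abs(k) for k in ks)
            best = order if best is None else min(best, order)
    return best

phi = (1 + np.sqrt(5)) / 2

def g(j):
    return 40.0 * phi ** j                    # golden sequence through 40 Hz

print(resonance_order([g(-2), g(-1), g(0)]))  # (15.3, 24.7, 40) Hz -> 3
print(resonance_order([g(-3), g(-2), g(0)]))  # (9.4, 15.3, 40) Hz -> 4
print(resonance_order([g(-4), g(-2), g(0)]))  # (5.8, 15.3, 40) Hz -> 5
```

The recovered orders 3, 4, and 5 match the triplets $(15.3, 24.7, 40)$, $(9.4, 15.3, 40)$, and $(5.8, 15.3, 40)$ Hz derived above.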
We conclude that \u2013 based on theory developed for weakly coupled oscillators \u2013 golden rhythms support both separate communication channels and a hierarchy of cross-frequency interactions between rhythmic triplets with varying coupling strengths. While here we consider three interacting rhythms, we note that the theory also applies to four (or more) interacting rhythms. The implications of these results for networks of (strongly) coupled (damped) oscillators remain unclear.\n\n\\begin{table}[tbh]\n\\caption{{\\it {\\bf Example sequence of golden rhythms.} Beginning with $f_{k+1}=40$ Hz we compute the sequence of golden rhythms by multiplying or dividing by the golden ratio.}}\n\\label{tab:ex}\n\\centering\n\\begin{threeparttable}\n\\begin{tabular}{cccccc||c||cccc}\n\\headrow\n\\thead{$f_{k-5}$} & \\thead{$f_{k-4}$} & \\thead{$f_{k-3}$} & \\thead{$f_{k-2}$} & \\thead{$f_{k-1}$} & \\thead{$f_{k}$} & \\thead{$f_{k+1}$} & \\thead{$f_{k+2}$} & \\thead{$f_{k+3}$} & \\thead{$f_{k+4}$} & \\thead{$f_{k+5}$} \\\\\n2.2 & 3.6 & 5.8 & 9.4 & 15.3 & 24.7 & 40 & 64.7 & 104.7 & 169.4 & 274.2 \\\\\n\\hline \n\\end{tabular}\n\\end{threeparttable}\n\\end{table}\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=5.5cm]{Figure-6.pdf}\n\\caption{\\it{\\textbf{Golden rhythms establish triplets with a discrete set of resonance orders.} The resonance order (numerical value, marker size) for triplets generated from 40 Hz. Lower resonance orders (3,4,5) indicated in color (gold, silver, bronze, respectively). Code to simulate this network and create this figure is available \\href{https:\/\/github.com\/Mark-Kramer\/Golden-Framework\/blob\/main\/Figure-6.ipynb}{here}}.} \\label{fig:resonance_order}\n\\end{figure}\n\n\\subsection{Four experimental hypotheses}\nWe propose that golden rhythms optimally support separate and integrated communication channels between oscillatory neural populations. We now describe four hypotheses deduced from this theory. 
First, if the organization of brain rhythms follows the golden ratio, then we expect that a discrete sequence of three frequency bands subdivides the existing gamma frequency band, broadly defined from 30-100 Hz \\cite{buzsaki_neuronal_2004,fries_gamma_2007}, with peak frequencies separated by a factor of $\\phi$. For example, using the sequence of golden rhythms with generating frequency 40 Hz (Table \\ref{tab:ex}), we identify multiple distinct rhythms (at 40 Hz, 65 Hz, 105 Hz) corresponding to this gamma band. Consistent with this hypothesis, multiple distinct rhythms have been identified within the gamma band (e.g., \\cite{lopes-dos-santos_parsing_2018,zhang_sub-second_2019,fernandez-ruiz_entorhinal-ca3_2017,edwards_high_2005,crone_functional_1998,vidal_visual_2006,colgin_frequency_2009,colgin_slow_2015,zhou_methodological_2019}). While different choices of generating frequency produce quantitatively different results, the qualitative result is the same: organized according to the golden ratio, multiple distinct rhythms exist within the gamma frequency range, each capable of supporting a separate communication channel.\n\\\\\n\n\\noindent Second, if rhythms organize according to the golden ratio, then evidence for this relationship should exist {\\it in vivo}. To that end, we consider examples of two or more frequency bands reported in the literature (predominantly in rodent hippocampus; Figure \\ref{fig:empirical}). These preliminary observations suggest that, in these cases, frequency bands separated by a factor of $\\phi$ or $\\phi^2$ commonly occur. \n\\\\\n\n\\begin{figure}[h]\n\\centering\n\\includegraphics[width=14.5cm]{Figure-7.pdf}\n\\caption{\\it{\\textbf{Empirical observations of rhythms organized by the golden ratio in vivo.} \\textbf{(A)} Pairs of frequencies reported in the literature; see legend. When only a frequency band is reported, we select the mean frequency of the band. \\textbf{(B)} Histogram of the frequency ratio for each point in (A). 
Lines (golden) indicate frequency bands organized by $\\phi \\approx 1.6$ or $\\phi^2 \\approx 2.6$. Code to create this figure is available \\href{https:\/\/github.com\/Mark-Kramer\/Golden-Framework\/blob\/main\/Figure-7.ipynb}{here}}.} \\label{fig:empirical}\n\\end{figure}\n\n\\noindent Third, if rhythms organize according to the golden ratio, then we propose that rhythmic triplets support cross-frequency communication. Nearly all existing research in cross-frequency coupling focuses on interactions between two rhythms (e.g., theta-gamma \\cite{canolty_functional_2010,lisman_theta-gamma_2013,hyafil_neural_2015}), and many measures exist to assess and interpret bivariate coupling between rhythms \\cite{hyafil_neural_2015,siebenhuhner_genuine_2020,tort_measuring_2010,aru_untangling_2015}. Yet, brain rhythms coordinate beyond pairwise interactions; trivariate interactions between three brain rhythms include coordination of beta, low gamma, and high gamma activity by theta phase \\cite{belluscio_cross-frequency_2012,lopes-dos-santos_parsing_2018,zhang_sub-second_2019,fernandez-ruiz_entorhinal-ca3_2017,colgin_frequency_2009,jiang_distinct_2020}; coordination between ripples (140-200 Hz), sleep spindles (12-16 Hz), and slow oscillations (0.5-1.5 Hz) \\cite{buzsaki_brain_2012}; and coordination between (top-down) beta, (bottom-up) gamma, and theta rhythms \\cite{fries_rhythms_2015}. To assess trivariate coupling, an obvious initial choice is the bicoherence, which assesses the phase relationship between three rhythms: $f_1$, $f_2$, and $f_1+f_2$ \\cite{barnett_bispectrum_1971,kramer_sharp_2008,shahbazi_avarvand_localizing_2018}. 
However, the bicoherence may be too restrictive (requiring a constant phase relationship between the three rhythms), and estimation of alternative interactions (e.g., between amplitudes and phases) will require application and development of alternative methods \\cite{haufler_detection_2019}.\n\\\\\n\n\\noindent Fourth, why brain rhythms occur at the specific frequency bands observed, and not different bands, remains unknown. To address this, we combine the golden ratio scaling proposed here with a fundamental timescale for life on Earth: the time required for Earth to complete one rotation (i.e., the sidereal period) of 23 hr, 56 min. Beginning from this fundamental frequency ($1\/86160$ Hz), we compute higher frequency bands by repeated multiplication by the golden ratio (Table \\ref{tab:earth}). Doing so, we identify frequencies consistent with the canonical frequency bands (i.e., delta, theta, alpha, beta, low gamma, middle gamma, high gamma, ripples, fast ripples; see last column of Table \\ref{tab:earth}). We note that broad frequency ranges define the canonical frequency bands, for example the gamma band from (30, 100) Hz. Therefore, model predictions that identify rhythms within a band are not surprising. We propose instead that the relevant model prediction is the subdivision of the canonical frequency bands (e.g., the 10-30 Hz beta band into two sub-bands, the 30-100 Hz gamma band into three sub-bands), not the specific frequency values identified. We note that the lower frequencies may include \"body oscillations\", such as heart rate and breathing frequency \\cite{tort_respiration-entrained_2018}, as proposed for a harmonic frequency relationship (factor of 2) in \\cite{klimesch_algorithm_2013,klimesch_frequency_2018}. 
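The construction behind Table \ref{tab:earth} is straightforward to reproduce: begin at the sidereal frequency $1/86160$ Hz and multiply by $\phi$ once per power. A minimal check (our own sketch; the spot checks round values as the table does):

```python
import math

PHI = (1 + math.sqrt(5)) / 2
SIDEREAL_S = 86160  # Earth's sidereal period: 23 hr 56 min, in seconds

def golden_frequency(power):
    """Frequency in Hz after `power` multiplications of 1/86160 Hz by the golden ratio."""
    return PHI**power / SIDEREAL_S

# spot checks against Table 2 (frequencies rounded as reported there)
print(round(golden_frequency(27)))   # theta, 5 Hz
print(round(golden_frequency(28)))   # alpha, 8 Hz
print(round(golden_frequency(31)))   # low gamma, 35 Hz
print(round(golden_frequency(34)))   # ripple, 148 Hz
```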
We hypothesize that if intelligent life were to evolve on a planet like Earth, in a star system like our own, with neural physiology like our own, then rhythmic bands would exist with center frequencies that depend on the planet's circadian cycle. We acknowledge that this hypothesis, and the proposed association between neural rhythms and the sidereal period in Table \\ref{tab:earth}, remain speculation, without robust supporting evidence.\n\n\\begin{table}[h]\n\\caption{{\\it {\\bf Golden rhythms, beginning with the sidereal period, align with the brain's rhythms.} The period ($T$) and frequency ($f$) of rhythms beginning with the sidereal period ($T=86160$ s) and multiplying the frequency by the golden ratio (number of multiplications indicated by the value in column {\\bf Power}). Traditional frequency band labels (from \\cite{buzsaki_neuronal_2004, buzsaki_scaling_2013, roopun_temporal_2008}) indicated in the last column.}}\n\\label{tab:earth}\n\\centering\n\\begin{threeparttable}\n\\begin{tabular}{ccc||ccc||ccc r}\n\\headrow\n\\thead{Power} & \\thead{$T$ [s]} & \\thead{$f$ [Hz]} & \n\\thead{Power} & \\thead{$T$ [s]} & \\thead{$f$ [Hz]} &\n\\thead{Power} & \\thead{$T$ [s]} & \\thead{$f$ [Hz]} & Label \\\\\n0&\t86160&\t1.16E-05&\t\t12&\t268&\t0.004&\t24&\t0.83&\t1.20&\tSlow 1 \\\\\n1&\t53250&\t1.88E-05&\t\t13&\t165&\t0.006&\t25&\t0.51&\t2&\tDelta \\\\\n2&\t32910&\t3.04E-05&\t\t14&\t102&\t0.010&\t26&\t0.32&\t3&\tDelta \\\\\n3&\t20340&\t4.92E-05&\t\t15&\t63.2&\t0.016&\t27&\t0.20&\t5&\tTheta \\\\\n4&\t12571&\t7.96E-05&\t\t16&\t39.0&\t0.026&\t28&\t0.12&\t8&\tAlpha \\\\\n5&\t7769&\t1.29E-04&\t\t17&\t24.1&\t0.041&\t29&\t0.07&\t13&\tBeta1 \\\\\n6&\t4802&\t2.08E-04&\t\t18&\t14.9&\t0.067&\t30&\t0.05&\t22&\tBeta2 \\\\\n7&\t2968&\t3.37E-04&\t\t19&\t9.2&\t0.11&\t31&\t0.03&\t35&\tLow Gamma \\\\\n8&\t1834&\t5.45E-04&\t\t20&\t5.7&\t0.18&\t32&\t0.02&\t57&\tMid Gamma \\\\\n9&\t1133&\t8.82E-04&\t\t21&\t3.5&\t0.28&\t33&\t0.01&\t91&\tHigh Gamma \\\\\n10&\t701&\t0.001&\t\t 
22&\t2.2&\t0.46&\t34&\t0.01&\t148&\tRipple \\\\\n11&\t433&\t0.002&\t\t 23&\t1.3&\t0.74&\t35&\t0.004&\t239&\tFast Ripples \\\\\n\\hline \n\\end{tabular}\n\\end{threeparttable}\n\\end{table}\n\n\\section{Discussion}\nWhy do brain rhythms organize into the small subset of discrete frequencies observed? Why does the alpha rhythm peak at 8-12 Hz and the (low) gamma rhythm peak at 35-55 Hz, across species \\cite{buzsaki_scaling_2013}? Why does the brain not instead exhibit a continuum of rhythms, or a denser set of frequency bands, or different frequency bands? Here we provide a theoretical explanation for the organization of brain rhythms. Imposing a ratio of $\\phi$ (the golden ratio) between the peaks of neighboring frequency bands, we constrain activity to a small subset of discrete brain rhythms, consistent with those observed {\\it in vivo}. Organized in this way, brain rhythms optimally support the separation and integration of information in distinct rhythmic communication channels.\n\\\\\n\n\\noindent The framework proposed here combines insights developed in existing works. Mathematical analysis of weakly coupled oscillators established the importance of resonance order for effective communication between neural populations oscillating at different frequencies \\cite{izhikevich_weakly_1997,hoppensteadt_thalamo-cortical_1998,izhikevich_weakly_1999,nunez_brain_2010}. Experimental observations and computational models have established the importance of brain rhythms \\cite{buzsaki_rhythms_2011,whittington_inhibition-based_2000,wang_neurophysiological_2010}, their interactions \\cite{canolty_functional_2010,fries_rhythms_2015}, and their organization according to the golden ratio \\cite{roopun_temporal_2008,kramer_rhythm_2008,roopun_period_2008}. 
Here, we combine these previous results with simulations and analysis of a network of damped, coupled oscillators in support of the proposed theory.\n\\\\\n\n\\noindent The framework proposed here is consistent with existing theories for the role of brain rhythms. Like the communication-through-coherence (CTC) hypothesis \\cite{fries_mechanism_2005,fries_rhythms_2015} and the frequency-division multiplexing hypothesis \\cite{akam_oscillatory_2014,akam_efficient_2012}, in the framework proposed here neural populations communicate dynamically along anatomical connections via coordinated rhythms. Organization by the golden ratio complements these existing theories in two ways. First, by proposing which rhythms participate \u2013 namely, rhythms spaced by factors of the golden ratio. Second, by proposing the importance of three rhythms to establish cross-frequency interactions and proposing a hierarchical organization to these interactions.\n\\\\\n\n\\noindent We considered a network of damped, coupled oscillators with sinusoidal gain modulation. In that network, cross-frequency coupling occurs when the gain frequency equals the sum or difference of the oscillator frequencies (Equation \\ref{eq:sin}). This result holds without additional restrictions on the oscillator or gain frequencies. However, golden rhythms are unique in that oscillator and gain frequencies chosen from this set support cross-frequency coupling; no rhythms beyond this set are required. Alternative irrational scaling factors (e.g., Euler's number $e$) establish different sets of oscillator frequencies (e.g., $\\ldots e^1, e^2, e^3, \\ldots$) and separate communication channels, but require gain frequencies beyond this set to support cross-frequency coupling. In this alternative scenario, two distinct sets of rhythms exist: one reflecting local population activity, and another the cross-frequency coupling between populations. 
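The contrast between golden and alternative irrational scalings can be verified in a few lines: with ratio $\phi$, the sum of two adjacent frequencies is again a member of the set (because $\phi^2 = \phi + 1$), while with ratio $e$ it is not. A small sketch (function and variable names are illustrative):

```python
import math

PHI = (1 + math.sqrt(5)) / 2

def is_member(freq, ratio, n_max=20, rtol=1e-9):
    """True if freq equals ratio**n for some integer 0 <= n < n_max."""
    return any(math.isclose(freq, ratio**n, rel_tol=rtol) for n in range(n_max))

for ratio in (PHI, math.e):
    f = [ratio**n for n in range(10)]  # candidate rhythm set (arbitrary base of 1 Hz)
    # is every sum of adjacent frequencies itself a member of the set?
    closed = all(is_member(f[k] + f[k + 1], ratio) for k in range(9))
    print(round(ratio, 3), closed)  # phi -> True, e -> False
```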
Rhythms organized by the golden ratio support a simpler framework: one set of frequencies (oscillator and gain) that reflect both local oscillations and their cross-frequency coupling. Golden rhythms are the smallest set of rhythms that support both separate communication channels and their cross-frequency interactions. Requiring fewer rhythms simplifies implementation, reducing the number of mechanisms required to produce these rhythms.\n\\\\\n\n\\noindent These results are consistent with existing proposals that the golden ratio organizes brain rhythms and minimizes cross-frequency interference \\cite{roopun_period_2008,weiss_golden_2003,pletzer_when_2010}. We extend these proposals by showing how triplets of golden rhythms facilitate cross-frequency coupling. An integer ratio of $2$ between frequency bands (with bandwidth determined by the golden ratio) provides an alternative organization to support cross-frequency coupling \\cite{klimesch_algorithm_2013,klimesch_frequency_2018}. In this scenario, cross-frequency interactions have been proposed to occur via a shift in frequency. For example, two regions - with an irrational frequency ratio - remain decoupled until the center frequencies shift to establish $1:2$ phase coupling \\cite{rodriguez-larios_mindfulness_2020,rodriguez-larios_thoughtless_2020}. We instead propose both regions maintain their original frequencies and couple when an appropriate third rhythm appears (e.g., a golden triplet). Our simulation results suggest more widespread coupling between populations oscillating at a $1:2$ frequency ratio compared to a golden ratio (Figure \\ref{fig:8nodes_2}D). Interpreted another way, integer ratios between frequency bands may facilitate a \"coupling superhighway\"; a target region shifts frequency to enter the coupling superhighway and receive strong inputs from all upstream regions oscillating at integer multiples (or factors) of the target frequency. 
Rhythms organized by a golden ratio require coordination with a third input to establish cross-frequency coupling. Investigating these proposals requires analysis of larger networks with multiple rhythms, and perhaps multiple organizing frequency ratios.\n\\\\\n\n\\noindent While we do not propose the specific mechanisms that support golden rhythms, proposals do exist. A biologically motivated sequence exists to create golden rhythms from the beta1 (15 Hz), beta2 (25 Hz), and gamma (40 Hz) bands. Through {\\it in vitro} experiments and computational models, a process of period concatenation \u2013 in which the mechanisms producing the faster beta2 and gamma rhythms concatenate to create the slower beta1 rhythm \u2013 was proposed \\cite{roopun_period_2008,roopun_temporal_2008,kramer_rhythm_2008}. Alternatively, golden rhythms may emerge when two input rhythms undergo a nonlinear transformation \\cite{haufler_detection_2019,ahrens_spectral_2002}. The framework proposed here suggests that the emergent rhythms \u2013 appearing at the sum and difference of the two rhythms \u2013 may support local coordination of the input rhythms. \n\\\\\n\n\\noindent The simplicity of the proposed framework (compared to the complexity of brain dynamics) results in at least four limitations. First, brain rhythms appear as broad spectral bands, not sharply defined spectral peaks. Therefore, the meaning of a precise frequency ratio, or the practical difference between an irrational frequency ratio ($\\phi$) and a rational frequency ratio (e.g., $1.6$) is unclear. Second, rhythm frequencies may vary systematically and continuously with respect to stimulus or behavioral parameters \\cite{ray_differences_2010,kropff_frequency_2021}. Whether the brain maintains a constant frequency ratio between varying frequency rhythms, and what mechanisms could support this coordination, is unclear. 
Third, no evidence suggests the tuning of brain rhythms specifically to support separate communication channels. Instead, brain rhythms may occur at the frequencies observed due to the biological mechanisms available for coordination of neural activity (e.g., due to the decay time of inhibitory postsynaptic potentials that coordinate excitatory cell activity). Fourth, identification of rhythms in noisy brain signals remains a practical challenge, with numerous opportunities for confounds \\cite{zhou_methodological_2019,cole_brain_2017,donoghue_methodological_2021}. Therefore, the best approach to compare this theory with data remains unclear.\n\\\\\n\n\\noindent However, the simplicity of the golden framework is also an advantage. The framework consists of only one parameter (the golden ratio) compared to the many \u2013 typically poorly constrained \u2013 parameters of biologically detailed models of neural rhythms. In this way, the golden framework is broadly applicable and requires no specific biological mechanisms or rhythm frequencies; instead, only the relationship between frequencies is constrained.\n\\\\\n\n\\noindent No theoretical framework exists to explain the discrete set of brain rhythms observed in nature. Here, we propose a candidate framework, simply stated: brain rhythms are spaced according to the golden ratio. This simple statement implies brain rhythms establish communication channels optimal for separate and integrated information flow. While the specific purpose of brain rhythms remains unknown, perhaps the brain evolved to these rhythms in support of efficient multiplexing on a limited anatomical network.\n\n\\section*{Acknowledgements}\nThe author would like to acknowledge Dr.\\ Catherine Chu for writing assistance and tolerating many conversations about the golden ratio. 
\n\n\\printendnotes\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
{"text":"\\section{Introduction}\n\n\nCredit reports have played a central role in consumer lending in the United States since the 1950s and are widely used around the globe. Today credit agencies collect hundreds of variables on each individual who is active in the consumer credit market. The central idea is to capture a consumer's creditworthiness through their past history of borrowing and payments. Most consumers are familiar with the aggregation of this information into a credit score, which they often encounter when they apply for a mortgage or a car loan. By capturing the detailed credit history of a consumer, the report provides a unique observational window into the behavior of individuals across a number of core economic activities such as housing, credit card spending, car purchases, etc. Credit reports thus capture a multi-modal view of individual behavior. Importantly, consumers retain some private information concerning their unobservable traits, including their underlying health status; those traits, unobservable to the researcher, are however central to the consumer's decision process on how to behave in the credit market: i.e., how much to borrow, when, for what purpose, whether to repay, etc. The relevance of such asymmetric information between parties of a contract is a centerpiece of contract theory (see \\cite{StiglitzWeiss1981} and \\cite{BoltonDewatripont2005}).\nAt the same time, various major shocks experienced by the consumer will have both short- and long-term implications for consumer behavior, which will be indirectly captured in the credit data. We use recent developments in machine learning to show that it is possible to predict a seemingly unrelated outcome, mortality, from the data recorded in a consumer report. \nThere are three channels through which we believe our approach has a large potential to predict outcomes: i. asymmetric or private information; ii. 
economic shocks leading to the deterioration of health (access to treatment is directly linked to lower resources, as are the so-called deaths of despair); iii. as a tangible signal of an occurring major health shock which would not otherwise be recorded in easily available data.\n\nWe will not try to pinpoint which one is the most likely channel in this project; they could all be at play. We see our work as pioneering the use of large available data in a specific domain to inform decision-making in seemingly unrelated domains.\n\nWe also note that the ability to use available Big Data generated in a specific domain to predict outcomes in seemingly unrelated domains is both an opportunity for innovation and a potential privacy concern (\\cite{harding2014future}). While not specifically addressing the current COVID-19 crisis, this study highlights the potential of such data to capture the extent to which complex factors such as economic circumstances and choices can be predictive of health outcomes including mortality.\n\nTo the best of our knowledge, this is the first paper to study the relationship between credit and mortality directly. However, there is a rich literature that links mortality and health to other economic variables and economic activity. For example, it has been shown that health is counter-cyclical in the sense that during economic booms health tends to deteriorate \\cite{ruhm2000recessions, ruhm2003good, ruhm2005healthy}. Economic growth is associated with increased levels of obesity and a decrease in physical activity, diet quality, and leisure time. Also, a reduction of unemployment is associated with a rapid increase in coronary heart disease \\cite{ruhm2007healthy}. An experimental study showed that higher income is associated with lower risk of cardiovascular diseases \\cite{wang2019longitudinal}. 
The relationship, however, appears to be heterogeneous: it has been documented that recessions are correlated with increases in infant mortality \\cite{alexander2011quantifying} and that the effects are differential across age, race, and education groups \\cite{HoynesMillerSchaller2012}.\n\nResearch also attempts to identify the underlying mechanisms for the previously mentioned counter-cyclical evidence \\cite{stevens2015best}. The evidence is that during economic growth mortality increases mostly amongst elderly women, which suggests that this relation may be linked to factors other than labor. This assumption is supported by the fact that health care may also be counter-cyclical in the sense that staff hiring in nursing facilities increases during recessions. Another theory states that higher mortality is actually related to worse economic conditions during early life \\cite{van2006economic}.\n\nOur work contributes to this literature on mortality and economics by looking at how credit operations and individual death probabilities are related. We use a very rich data set of microdata from the Experian credit bureau, which is one of the agencies responsible for the credit scores in the United States. We follow a large pool of individuals' credit activities through the years to estimate a modified actuarial life table\\footnote{The actuarial life table shows the probability of death within the next year by age and gender. Examples can be found on the Social Security Administration page: www.ssa.gov.} that uses detailed credit variables to get more accurate results. We use machine learning models to deal with the complexity of the data in a pool of more than 2 million individuals (a random 1\\% sample of the US population with a credit score) and more than a thousand variables in the models. \nTo reiterate, our analysis provides an individual-level forecasting exercise on mortality, making use of available data collected by credit reporting agencies. 
The ability to predict mortality using routinely performed credit operations opens up the possibility of policy interventions as well as tailor-made contracts, but at the same time raises privacy concerns over and beyond the classic sharing of sensitive health information.\n\n\n\\section{Data}\n\n\nOur data comes from the Experian Credit Bureau. \nIt consists of a yearly sample of more than two million anonymous individuals and 429 variables covering the period between 2004 and 2016. The data is representative of the population of individuals in the US with access to credit.\\footnote{Overall over 80\\% of the US adult population is represented in the data, with coverage increasing with age up to age 65, where below 5\\% of people do not have a credit score. See for example \\cite{LeevanderKlaauw2010}.} The variables are divided into categories such as mortgage loans, car loans, credit cards, installments, etc. The full list of categories is presented in table \\ref{tab:groups}. Within the categories there are variables such as number of trades, trade balance, delinquency, etc. A ``deceased flag'' in the data will be used to create the mortality variable used in the estimation process. The state of residency is also recorded in the data and will be used separately, as discussed below, due to potentially confounding effects of geography on mortality given the health disparities in the US. Similar data sets were used to estimate improved credit scores and consumer credit risk using machine learning methods \\cite{albanesi2019predicting,khandani2010consumer} but have not been evaluated for their potential to predict outcomes recorded in the data such as mortality. \n\nThe deceased flag is 1 if the individual is dead or died within the reference year. We use this variable to define a mortality variable that is 1 if the individual died in the reference year and 0 otherwise. This will be our outcome variable in the machine learning models. 
\n\n\n\n\\begin{table}[htb]\n\\caption{Experian Groups of Variables}\n\\label{tab:groups}\n\\begin{tabular}{cc}\n\\hline\nGroup & Description \\\\ \\hline\nALJ & Joint Trades \\\\\nALL & ALL Trade Types \\\\\nAUA & Auto Loan or Lease trades \\\\\nAUL & Auto Lease trades \\\\\nAUT & Auto Loan trades \\\\\nBCA & Bankcard Revolving and charge trades \\\\\nBCC & Bankcard Revolving trades \\\\\nBRC & Credit Card Trades \\\\\nBUS & Personal Liable Business Loan Line of Credit \\\\\nCOL & Collection trades \\\\\nCRU & Credit Union \\\\\nFIP & Personal Finance \\\\\nILJ & Joint Installment Trades \\\\\nILN & Installment trades \\\\\nIQ & Inquiries \\\\\nMTA & Mortgage type trades \\\\\nMTF & First mortgage trades \\\\\nMTJ & Joint mortgage trades \\\\\nMTS & Second mortgage trades \\\\\nPIL & Personal installment trades \\\\\nREC & Recreational Merchandise trades \\\\\nREJ & Joint Revolving trades \\\\\nREV & Revolving trades \\\\\nRPM & Real-estate property management trades \\\\\nRTA & Retail trades \\\\\nRTI & Installment retail trades \\\\\nRTR & Revolving retail trades \\\\\nSTU & Student trades \\\\\nUSE & Authorized user trades \\\\\nUTI & Utility trades \\\\ \\hline\n\\end{tabular}\n\\end{table}\n\n\\subsection{Estimation Details}\n\n\nOur estimation uses the full data set available to us consisting of 429 credit variables for 2.2 million individuals covering the period from 2004 to 2016. Besides credit, we also have age and state of residency, and we build a variable that counts how many times an individual moved to a different state in the training sample. Our goal is to estimate the probability of death for each individual within the next year using all these variables in the same fashion as the life table published by the Social Security Administration (SSA) conditional only on age and gender. 
Since gender is a protected category that cannot be used in credit decisions, it is not available in our data; our benchmark is therefore the probability of death within the next year conditional only on age. Our estimates are not directly comparable with SSA numbers because our sample comprises people with a valid credit score, i.e. they are likely to be wealthier on average than a random sample taken from the entire population; as mentioned earlier, for the age groups above 25 the sample covers over 90\\% of the US population.\n\nLet $y_{i,t}$ be a binary variable that is 1 if individual $i$ died in period $t$, let $X_{i,t}$ contain all credit variables, and let $Z_{i,t}$ contain variables that are not related to credit, such as age and state of residency. Our estimation framework can be defined by the following equation:\n\n\n\\begin{equation}\n\\label{eq:1}\n P(y_{i,t+1} = 1 | Z_{i,t}, X_{i,t}, X_{i,t-1},\\dots,X_{i,t-k}) = f(Z_{i,t}, X_{i,t}, X_{i,t-1},\\dots,X_{i,t-k}), \n\\end{equation}\nwhere $f(\\cdot)$ is a general function that will depend on the chosen model and $k$ is the number of lags. Our out-of-sample period is from 2012 to 2016. For each out-of-sample year we estimated several models in the framework of equation \\ref{eq:1} using as many lags as possible given the timing of the observation in relation to the length of the available data. The model used to predict mortality in 2012, for example, was estimated with data from 2004 to 2011 (7 lags of the credit variables), which accounts for 3055 variables. The model used to predict mortality in 2016 uses all data from 2004 to 2015, which results in 11 lags and 4771 variables. 
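The lag structure described above amounts to joining each individual's current credit variables with their own history. A minimal sketch of that design-matrix construction with pandas (the column names and toy frame are illustrative, not the Experian schema):

```python
import pandas as pd

def build_lagged_features(panel, id_col="person_id", year_col="year", n_lags=7):
    """Append n_lags yearly lags of every credit variable for each individual."""
    panel = panel.sort_values([id_col, year_col])
    credit_cols = [c for c in panel.columns if c not in (id_col, year_col)]
    out = panel.copy()
    for lag in range(1, n_lags + 1):
        # shift within each individual's history so lags never cross people
        lagged = panel.groupby(id_col)[credit_cols].shift(lag)
        out = out.join(lagged.add_suffix(f"_lag{lag}"))
    return out

# toy example: 2 people, 3 years, 1 credit variable
toy = pd.DataFrame({
    "person_id": [1, 1, 1, 2, 2, 2],
    "year": [2004, 2005, 2006] * 2,
    "balance": [10.0, 12.0, 9.0, 5.0, 6.0, 7.0],
})
feats = build_lagged_features(toy, n_lags=2)
print(feats.loc[feats["year"] == 2006, ["balance", "balance_lag1", "balance_lag2"]])
```

Early years have missing lags (NaNs), mirroring the paper's use of fewer lags for earlier prediction years.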
\n\n\nThe models we used for the function $f(\\cdot)$ are the Random Forest \\cite{breiman2001random} and the Gradient Boosting\\footnote{Details on the algorithms, implementation and tuning of both models are available in the Supplementary Information.} \\cite{friedman2001greedy} in its stochastic version following \\cite{friedman2002stochastic} and with early stopping \\cite{zhang2005boosting}. The choice of these models is due to several considerations. First, they can easily deal with a large number of variables and observations, and the algorithms are computationally efficient in very large data sets such as the one we used here, where the outcome variable is an event of very low probability \\cite{pike2019bottom}; second, these models are suited for complex data sets where one expects to have many interactions between variables; third, these models are known to be consistent \\cite{zhang2005boosting, scornet2015consistency}; finally, both Gradient Boosting and Random Forest are well-established machine learning methods with several successful applications in many different fields \\cite{ehrentraut2018detecting, touzani2018gradient, li2018machine}. Our aim is to show the feasibility of the mortality prediction using established techniques which are widely available.\n\n\n\n\\subsection{Predicting Mortality}\n\n\nTable \\ref{tab:probs} shows the estimated probabilities of dying conditional on the true outcomes from a model trained at $t$ and tested in $t+1$. The values were averaged across all out-of-sample years (2012-2016). The first row is the unconditional probability of death and the remaining rows are the probability of death when the true outcome was death ($y = 1$) and when it was not death ($y = 0$) for the models described in table \\ref{tab:abbrev}. The best way to understand this table is to compare, for a given model, the average probability of death assigned when the true outcome was death ($y = 1$) with that assigned when it was survival ($y = 0$). 
A big gap between these two means that the model is able to allocate higher probabilities to individuals that actually died in year $t+1$.\nWe notice that conditioning on age yields only a very small improvement over the unconditional model, with the model assigning slightly larger probabilities to individuals whose true outcome was death. The inclusion of state of residency, however, does not bring significant improvements. This is likely due to the fact that while mortality rates differ by geography, in the absence of more detailed location information for the individual, the state of residence by itself carries little predictive power. However, when we evaluate the models with credit data variables, the improvement is significant, especially for older people. For most age groups, the Gradient Boosting with credit assigns twice the probability of death in cases where the individual actually died. The Age model, on the other hand, assigns probabilities of death less than 10\\% larger to people that died in $t+1$, and the models with State dummies do not provide a sizeable improvement. Finally, the Random Forest model performs better, with assigned probability of dying at least 60\\% larger for people who actually died at $t+1$. \n\nWe present a more formal comparison between models in table \\ref{tab:auc}, which shows the Area Under the Curve (AUC) for all models in all years and the DeLong \\cite{delong1988comparing} test to compare their differences. The AUC is the area under the curve of the true positive rate (Sensitivity) plotted against the false positive rate (1-Specificity) for all possible classification thresholds between 0 and 1. If the AUC is larger than 0.5 it means that the model has improved predictive ability compared to the unconditional probabilities. In other words, the AUC tells how much a model is able to distinguish between classes. 
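The AUC has a convenient rank interpretation: it is the probability that a randomly chosen individual who died receives a higher predicted risk than a randomly chosen survivor. A self-contained sketch of that computation (ties counted as one half, as in the Mann-Whitney formulation; the scores below are illustrative):

```python
def auc(scores, labels):
    """Probability that a positive outranks a negative; ties count one half."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# perfect separation -> 1.0; uninformative constant scores -> 0.5
print(auc([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0]))  # 1.0
print(auc([0.5, 0.5, 0.5, 0.5], [1, 1, 0, 0]))  # 0.5
```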
The values in parentheses in table \\ref{tab:auc} show the p-value of the test that compares the Credit Random Forest and the Credit Gradient Boosting to the Age only model. The null hypothesis is that the difference between the AUC of the two tested models is 0. We omitted the models using the state of residence from this table given that table \\ref{tab:probs} already shows that these models do not significantly improve on the models conditional on age alone. The results show that for individuals older than 55 the models with credit data have significantly higher AUCs than traditional actuarial calculations based on age. As we start looking at younger individuals the Random Forest and the Gradient Boosting get closer to the age model, to the point where there is no significant difference between them. However, in general this only happens for groups of people younger than 50 if we consider the Gradient Boosting, and table \\ref{tab:probs} shows that even for younger groups the Gradient Boosting and the Random Forest separate better between the classes. The main explanation for this result is that the number of deaths in the sample for younger groups is very small, as expected, which makes it harder for the models to learn the relation between credit and mortality in a way that generalizes. Lastly, in table \\ref{tab:auc}, the Gradient Boosting algorithm produces slightly higher AUCs than the Random Forest in most cases for older individuals, where the credit models outperform the simpler model based on age. At younger ages, our improved models, while still performing much better than simpler models, suffer from the scarcity of events in the data and somewhat shorter credit histories for younger individuals.\n\n\n\n\n\n\n\\begin{table}\n\\caption{Model Abbreviations}\n\\label{tab:abbrev}\n\\begin{tabular}{lcc}\n\\hline\n\\textbf{Abbreviation} & \\textbf{Model} & \\textbf{Variables} \\\\ \\hline\nUncond. 
& Unconditional Probabilities & \\\\\nAge & Conditional Probabilities & Age \\\\\nState Lin. & Linear - Logistic & Age, State \\\\\nState GB & Gradient Boosting & Age, State \\\\\nState RF & Random Forest & Age, State \\\\\nCredit GB & Gradient Boosting & Age, State, Credit \\\\\nCredit RF & Random Forest & Age, State, Credit \\\\ \\hline\n\\end{tabular}%\n\\end{table}\n\n\n\\begin{table}\n\\begin{threeparttable}\n\\caption{Average Estimated Probability of Dying Conditional on the True Outcome in $t+1$}\n\\label{tab:probs}\n\\resizebox{\\textwidth}{!}{%\n\\begin{tabular}{lcccccccccc}\n\\hline\n & & \\multicolumn{9}{c}{\\textbf{Age Cohorts}} \\\\\n & & 81-100 & 76-80 & 71-75 & 66-70 & 61-65 & 56-60 & 51-55 & 46-50 & 41-45 \\\\ \\hline\nUncond. & & 2.99 & 1.52 & 0.98 & 0.61 & 0.41 & 0.25 & 0.16 & 0.10 & 0.07 \\\\ \\hline\nAge & y=1 & 3.27 & 1.54 & 1.00 & 0.63 & 0.42 & 0.25 & 0.16 & 0.10 & 0.07 \\\\\n & y=0 & 3.03 & 1.52 & 0.99 & 0.62 & 0.41 & 0.25 & 0.16 & 0.10 & 0.07 \\\\ \\hline\nState Lin. 
& y=1 & 3.30 & 1.55 & 1.01 & 0.64 & 0.43 & 0.25 & 0.16 & 0.10 & 0.07 \\\\\n & y=0 & 3.02 & 1.52 & 0.99 & 0.62 & 0.41 & 0.25 & 0.16 & 0.10 & 0.07 \\\\ \\hline\nState GB & y=1 & 3.31 & 1.56 & 1.00 & 0.64 & 0.43 & 0.26 & 0.17 & 0.11 & 0.06 \\\\\n & y=0 & 3.00 & 1.52 & 0.99 & 0.62 & 0.41 & 0.25 & 0.16 & 0.10 & 0.07 \\\\ \\hline\nState RF & y=1 & 3.29 & 1.55 & 1.00 & 0.63 & 0.42 & 0.26 & 0.16 & 0.10 & 0.06 \\\\\n & y=0 & 3.01 & 1.52 & 0.99 & 0.62 & 0.41 & 0.25 & 0.16 & 0.10 & 0.07 \\\\ \\hline\nCredit GB & y=1 & 4.06 & 0.76 & 0.43 & 0.29 & 0.19 & 0.07 & 0.04 & 0.03 & 0.01 \\\\\n & y=0 & 1.93 & 0.42 & 0.25 & 0.15 & 0.09 & 0.04 & 0.02 & 0.01 & 0.01 \\\\ \\hline\nCredit RF & y=1 & 5.56 & 2.68 & 1.82 & 1.28 & 0.87 & 0.51 & 0.34 & 0.26 & 0.15 \\\\\n & y=0 & 3.63 & 1.99 & 1.33 & 0.88 & 0.60 & 0.35 & 0.23 & 0.17 & 0.10 \\\\ \\hline\n\\end{tabular}\n}%\n\\begin{tablenotes}\n\\small\n\\item The table shows the estimated probabilities of dying conditional on the true outcome from a model trained in $t$ and tested in $t+1$. For example, in the Age rows we have the probability of dying given that the true outcome was death ($y=1$) and the probability of dying given that the true outcome was not death ($y = 0$). 
The results were averaged across all estimation windows.\n\\end{tablenotes}\n\\end{threeparttable}\n\\end{table}\n\n\\begin{landscape}\n\\begin{table}\n\\begin{threeparttable}\n\\caption{Out-of-Sample Area Under the Curve for all Test Years}\n\\label{tab:auc}\n\\begin{adjustbox}{max width=\\textwidth}\n\\begin{tabular}{lccccccccccccccccccc}\n\\hline\n{ \\textbf{}} & \\multicolumn{17}{c}{{\\textbf{Out-of-Sample Area Under the Curve}}} & { \\textbf{}} & { \\textbf{}} \\\\\n{ } & \\multicolumn{3}{c}{{ Test year = 2012}} & { } & \\multicolumn{3}{c}{{ Test year = 2013}} & { } & \\multicolumn{3}{c}{{ Test year = 2014}} & { } & \\multicolumn{3}{c}{{ Test year = 2015}} & { } & \\multicolumn{3}{c}{{ Test year = 2016}} \\\\\nCohort & Age & Credit GB & Credit RF & & Age & Credit GB & Credit RF & & Age & Credit GB & Credit RF & & Age & Credit GB &Credit RF & & Age &Credit GB & Credit RF \\\\ \\cline{1-4} \\cline{6-8} \\cline{10-12} \\cline{14-16} \\cline{18-20} \n81-100 & 0.587 & 0.659 & 0.657 & & 0.576 & 0.673 & 0.678 & & 0.578 & 0.693 & 0.684 & & 0.576 & 0.682 & 0.663 & & 0.562 & 0.703 & 0.683 \\\\\n & & (0.000) & (0.000) & & & (0.000) & (0.000) & & & (0.000) & (0.000) & & & (0.000) & (0.000) & & & (0.000) & (0.000) \\\\\n76-80 & 0.523 & 0.604 & 0.619 & & 0.525 & 0.607 & 0.607 & & 0.526 & 0.608 & 0.610 & & 0.542 & 0.608 & 0.603 & & 0.542 & 0.612 & 0.609 \\\\\n & & (0.000) & (0.000) & & & (0.000) & (0.000) & & & (0.000) & (0.000) & & & (0.000) & (0.000) & & & (0.000) & (0.000) \\\\\n71-75 & 0.518 & 0.596 & 0.620 & & 0.521 & 0.595 & 0.619 & & 0.522 & 0.592 & 0.615 & & 0.534 & 0.606 & 0.595 & & 0.546 & 0.587 & 0.594 \\\\\n & & (0.000) & (0.000) & & & (0.000) & (0.000) & & & (0.000) & (0.000) & & & (0.000) & (0.000) & & & (0.000) & (0.000) \\\\\n66-70 & 0.545 & 0.603 & 0.603 & & 0.543 & 0.619 & 0.611 & & 0.528 & 0.612 & 0.592 & & 0.531 & 0.612 & 0.607 & & 0.535 & 0.617 & 0.617 \\\\\n & & (0.000) & (0.000) & & & (0.000) & (0.000) & & & (0.000) & (0.000) & & & (0.000) & (0.000) & 
& & (0.000) & (0.000) \\\\\n61-65 & 0.491 & 0.583 & 0.592 & & 0.519 & 0.598 & 0.594 & & 0.531 & 0.602 & 0.604 & & 0.546 & 0.594 & 0.605 & & 0.528 & 0.599 & 0.592 \\\\\n & & (0.000) & (0.000) & & & (0.000) & (0.000) & & & (0.000) & (0.000) & & & (0.000) & (0.000) & & & (0.000) & (0.000) \\\\\n56-60 & 0.511 & 0.619 & 0.568 & & 0.550 & 0.610 & 0.574 & & 0.539 & 0.595 & 0.595 & & 0.545 & 0.591 & 0.582 & & 0.537 & 0.594 & 0.590 \\\\\n & & (0.000) & (0.001) & & & (0.000) & (0.152) & & & (0.000) & (0.001) & & & (0.002) & (0.015) & & & (0.000) & (0.000) \\\\\n51-55 & 0.530 & 0.616 & 0.601 & & 0.502 & 0.566 & 0.580 & & 0.545 & 0.591 & 0.579 & & 0.527 & 0.591 & 0.573 & & 0.541 & 0.578 & 0.562 \\\\\n & & (0.000) & (0.001) & & & (0.002) & (0.000) & & & (0.018) & (0.083) & & & (0.000) & (0.013) & & & (0.040) & (0.228) \\\\\n46-50 & 0.559 & 0.650 & 0.595 & & 0.515 & 0.617 & 0.558 & & 0.545 & 0.599 & 0.581 & & 0.553 & 0.603 & 0.583 & & 0.510 & 0.610 & 0.577 \\\\\n & & (0.001) & (0.184) & & & (0.000) & (0.085) & & & (0.042) & (0.148) & & & (0.028) & (0.216) & & & (0.000) & (0.003) \\\\\n41-45 & 0.539 & 0.557 & 0.551 & & 0.536 & 0.574 & 0.597 & & 0.546 & 0.594 & 0.537 & & 0.549 & 0.549 & 0.527 & & 0.536 & 0.592 & 0.575 \\\\\n & & (0.582) & (0.706) & & & (0.213) & (0.043) & & & (0.108) & (0.746) & & & (0.973) & (0.438) & & & (0.083) & (0.187) \\\\ \\hline\n\\end{tabular}\n\\end{adjustbox}\n\\begin{tablenotes}\n\\small\n\\item The table shows the out-of-sample Area Under the Curve (AUC) for all test years, cohorts and models. Values in parenthesis are \\\\ p-values from \\citet{delong1988comparing} test for the Random Forest and the Gradient Boosting against the probabilities conditional on \\\\ age only. 
The null hypothesis is that the difference between the AUC of both models is 0.\n\\end{tablenotes}\n\\end{threeparttable}\n\\end{table}\n\n\\end{landscape}\n\nThe same results presented in table \\ref{tab:auc} can be observed in more detail in figures \\ref{f:auc2012} and \\ref{f:auc2016}. These figures show the AUC plots for years 2012 and 2016 and all age cohorts. It is very clear that all curves deviate more from the $45^{\\circ}$ line for the older cohorts. The small deviations for younger people are likely due to the low mortality rate in these cohorts, which leaves too few deaths per year in the sample to successfully apply these algorithms. The same behavior persists through all out-of-sample years. Increasing the number of lags does not seem to improve the performance of the models: the 2012 model had 7 lags and the 2016 model had 11 lags. \nWe return to the discussion of the predictive value of distant lags in the next section.\n\n\n\\begin{figure}[htb]\n\\caption{Out-of-Sample Area Under The Curve - 2012}\n\\label{f:auc2012}\n\\includegraphics[width=0.95\\textwidth]{2012.pdf}\n\\end{figure}\n\n\\begin{figure}[htb]\n\\caption{Out-of-Sample Area Under The Curve - 2016}\n\\label{f:auc2016}\n\\includegraphics[width=0.95\\textwidth]{2016.pdf}\n\\end{figure}\n\n\\subsection{Variable Importance}\n\nIn this subsection we address which variables are most relevant in reducing the prediction errors of our models. We present results for the Gradient Boosting, which was in general slightly more accurate than the Random Forest. Figure \\ref{f:importance_group} shows the relative importance of the 10 most relevant groups of variables by age cohort. The list of all groups is presented in table \\ref{tab:groups}. The four most important groups (BCA, BCC, BRC and REV) are related to bank and credit cards and to revolving trades, i.e., unsecured credit. Mortgage and joint trades variables (MTA, MTF, ALJ) are also of some importance. 
Auto loan (AUA) variables also appear among the most important groups for all cohorts. As for the differences between age cohorts, mortgage and auto related loans are less relevant for older people than for younger ones. Of less importance but still worth noting are installment trades. These results seem intuitive given that credit cards and other revolving trades are classes of credit that can change very quickly with individual circumstances. Furthermore, mortgage decisions are central to many younger individuals and are driven by long-run expectations. Likewise, one would expect mortgage and auto loans to be less important for older individuals. Overall, though, the pattern of variable importance is remarkably consistent across age cohorts. The only exception appears to be the cohort of consumers over 80 years old. It is possible that many of these consumers ``simplify\" their financial life: for example, mortgages are paid off, and the elderly tend to reduce consumption in many areas and thus need to rely less on credit cards to smooth consumption over time.\n\n\nFigure \\ref{f:importance_lags} shows the relative importance of the 10 most relevant groups of variables by lag. What is quite interesting is that the classes of variables that are the most responsive in the short run are also very central at long lags. For example, the 5-year lag overall appears to have about 1\/3 of the importance of the 1-year lag, which rules out a simple story in which default is driven only by death or health shocks. It seems that part of the predictive power, especially at long lags, is due to individuals revealing their types rather than reacting to shocks. This is consistent with the hypothesized mechanism where credit behavior reveals the particular individual type, a story of private information rather than economic shocks. \n\nNote that the most relevant variables are the same as the ones in figure \\ref{f:importance_group}. 
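The group-level importances discussed here can be obtained by summing per-feature importances within each group code and renormalizing; a minimal sketch (our own illustration — the `GROUP_description_lag` naming and the sample numbers are hypothetical, not the paper's actual variables):

```python
from collections import defaultdict

def grouped_importance(feature_names, importances):
    """Sum per-feature importances by group prefix and renormalize to 1,
    sorted from most to least important group."""
    groups = defaultdict(float)
    for name, imp in zip(feature_names, importances):
        groups[name.split("_")[0]] += imp   # group code is the prefix
    total = sum(groups.values())
    return dict(sorted(((g, v / total) for g, v in groups.items()),
                       key=lambda kv: -kv[1]))

# hypothetical feature names and importances (e.g. from a fitted booster)
names = ["BCA_balance_lag1", "BCA_payment_lag1", "REV_utilization_lag1",
         "MTA_balance_lag1"]
imps = [0.25, 0.25, 0.375, 0.125]
print(grouped_importance(names, imps))  # {'BCA': 0.5, 'REV': 0.375, 'MTA': 0.125}
```

The same aggregation, applied over lag suffixes instead of group prefixes, yields the by-lag importances shown later.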
However, we can see a larger change in their relative importance as we move from lag 1 to lag 7. Inquiries (IQ) appear only at longer lags (5 and 7), while at lag 1 collection trades (COL) emerge as a relevant group. Finally, figure \\ref{f:importance_lag_only} shows the relative importance across lags. Lag 1 clearly dominates the remaining lags, but the overall importance of lags 2 to 7 combined is larger than that of lag 1 alone. This is evidence that the relationship between credit and mortality has a long-term component, and financial positions taken several years in the past may be predictive of mortality (and potentially other health outcomes) in the present. We take this as suggestive of sizeable private information retained by consumers on their health status, i.e. the predictive power of distant past behavior could reflect private information on one's general health and life expectancy rather than short-term shocks. It is less likely that these results are consistent with a simple story of a health shock leading to bankruptcy or an income shock reducing the individual's ability to access health care. It is interesting to note that while there is a sharp reduction in the importance of the variables from lag 1 to lag 2, the importance thereafter does not decline as rapidly as one might expect. We think that this is consistent with a process where credit reports capture both immediate shocks and more persistent effects. For example, mortality is driven both by sudden health events such as a stroke and by long-run uncontrolled blood pressure, which is related to lifestyle choices such as a sedentary life and bad nutrition.\n\n\n\\begin{figure}[htb]\n\\caption{Variable Importance by Groups and Age Cohorts}\n\\label{f:importance_group}\n\\includegraphics[width=0.95\\textwidth]{importance_groups.pdf}\\\\\n\\floatfoot{The figure shows the relative importance of the 10 most relevant groups of variables based on the Experian classification for several age cohorts. 
Values were adjusted so that the sum of all bars is equal to 1. The importance was measured as the contribution of each variable to reducing the model prediction error.}\n\\end{figure}\n\n\\begin{figure}[htb]\n\\caption{Variable Importance by Groups and Lags}\n\\label{f:importance_lags}\n\\includegraphics[width=0.95\\textwidth]{importance_lags.pdf}\\\\\n\\floatfoot{The figure shows the relative importance of the 10 most relevant groups of variables based on the Experian classification for lags 1-7. Values were adjusted so that the sum of all bars is equal to 1. The importance was measured as the contribution of each variable to reducing the model prediction error.}\n\\end{figure}\n\n\n\\begin{figure}[htb]\n\\caption{Importance by Lags}\n\\label{f:importance_lag_only}\n\\includegraphics[width=0.5\\textwidth]{importance_lag_only.pdf}\\\\\n\\floatfoot{The figure shows the relative importance of lags 1 to 7 averaged across all models. Values were adjusted so that the sum of all bars is equal to 1. The importance was measured as the contribution of each variable to reducing the model prediction error.}\n\\end{figure}\n\n\\section{Conclusion}\n\n\nWe have shown that data routinely collected by credit bureaus such as Experian in the US have substantial power in predicting mortality at the individual level. We employed data on over 2 million individuals and 429 credit-related variables to estimate Gradient Boosting and Random Forest models for the probability of an individual dying within the next year. Our models significantly outperform actuarial tables conditional on age in terms of AUC, which reflects the models' ability to distinguish between classes. Moreover, the Gradient Boosting assigns probabilities of death twice as large to individuals who actually died at $t+1$, against an increase of less than 10\\% under the actuarial age model. 
A limitation of our study is that we do not observe gender, which is commonly used in actuarial tables but is unavailable in credit data. The predictive performance of our algorithms improves with the age of the individual. It is not clear whether this is due to the increased information content of credit data for older Americans or whether it is an artefact of the estimation strategy and the relatively low number of deaths in younger cohorts. The measured variable importance suggests that the results are driven by changes in credit and bank cards (such as balances and payment amounts). Additional variable groups related to mortgage activity and other loans are also predictive, but to a lesser extent.\n\nThe current study is not meant to fully uncover the underlying mechanisms, which are likely to be complex and potentially subject to many feedback cycles between credit behavior and health. Our insights are, however, consistent with the current state of knowledge, which documents correlations between economic shocks and health outcomes, and potentially with individuals retaining a substantial amount of private information on their health status. These correlations appear to be highly predictive of mortality outcomes. It is important to note that lags of the credit variables are also predictive, which is indicative of both a short- and a long-run component of the relationship between health and consumer finance. \n\nThe documented predictive power of credit variables for individual mortality has a number of implications. From an economic perspective, mortality predictions play an important role in a number of markets, such as those for life insurance and reverse mortgages. Life expectancy calculations are also key in legal proceedings which rely on evaluations of the expected value of life. 
Our study shows that the actuarial tables that are usually relied upon can be significantly improved at the individual level using relatively common, if proprietary, data collected routinely for most Americans. Thus, even without access to any sort of health information on the individual, health outcomes such as mortality can in fact be inferred from credit data.\n\n\n\\clearpage \n\n\n\\bibliographystyle{agsm}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\setcounter{equation}{0}\n\nThe construction and study of discrete Painlev\\'e equations has been a topic of research interest \nfor almost two decades, \\cite{NP,FIK,RGH,JimSakai}. Reviews of the subject may be found in \\cite{GNR,GR}.\nThe subject has culminated in the classification by H. Sakai of discrete as well as continuous \nPainlev\\'e equations based on the algebraic geometry of the corresponding rational \nsurfaces associated with the spaces of initial conditions \\cite{Sakai}. \nAs a byproduct of the latter treatment, \na ``mistress'' discrete Painlev\\'e equation with elliptic dependence on the independent variable\nwas discovered. \n\nIn the history of the Painlev\\'e program, after the classification results for second-order first-degree equations, Painlev\\'e's students, Chazy and Garnier, \\cite{Chazy9,Chazy11,Garn}, investigated the classification of second-order second-degree equations and third-order first-degree equations. The classification of the second-degree class was completed by Cosgrove in recent years, \\cite{Cosg,CosgS}. A partial classification for the third-order case was also obtained by the aforementioned authors.\nThe work of Bureau, \\cite{Bur72, Bur87}, is also important in this respect. 
\nNo classification results exist for the analogous discrete case and hardly any examples of \nsecond-order second-degree difference equations exist to date, with the notable exception of an (additive difference) equation given by Est\\'evez and Clarkson \\cite{Clarkson}.\n\nA key result of this letter is a second-order second-degree equation, \nwhich can be considered as a $q$-analogue of an equation in the Chazy-Cosgrove class, \ntogether with its Lax pair \n(i.e., isomonodromic $q$-difference problem). \nThis new equation contains four free parameters, which suggests that it \ncould be a $q$-difference analogue of the second-order second-degree \ndifferential equation that is a counterpart of the sixth Painlev\\'e \nequation. \nThere are several forms of a second-order second-degree equation\nrelated to the sixth Painlev\\'e \nequation that have appeared in the literature, notably\none derived by Fokas and Ablowitz \\cite{FA} and another appearing in the\nwork of Okamoto \\cite{Oka}.\nDifference analogues of the Fokas-Ablowitz equation have been \nprovided by Grammaticos and Ramani, \\cite{RG}, but these \ndifference equations were all of first degree. It has \nbeen argued by these authors that equations that are second-degree in the highest iterate cannot be viewed as \\lq\\lq integrable\\rq\\rq ; for the new equation, however, we establish integrability through a Lax pair in the form of an isomonodromic deformation system. \nFurthermore, we show that the equation arises as a similarity reduction from an integrable partial $q$-difference equation. Through the same procedure we also construct \nhigher-order second-degree equations, which form a hierarchy associated with the new equation.\n\n\\section{$q$-Difference Similarity Reduction}\\label{lKdVsaac}\n\\setcounter{equation}{0}\n\nLattice equations of KdV-type were introduced and studied over the last three decades \\cite{hirota:77,NQC}; \nsee \\cite{KDV} for a review. 
These lattice equations can be formulated as partial difference equations on a lattice with step sizes that enter as parameters of the equation. \nConventionally, we think of these parameters as fixed constants. However, in agreement with the integrability of these equations, there exists the freedom to take the parameters as functions of the local lattice coordinate in each corresponding direction. \nIn this paper we consider the case in which the parameters depend exponentially with base $q$ on the lattice coordinates.\n\nWe work in a space ${\\mathcal F}$ of functions $f$ of arbitrarily many variables $a_i$ ($i=1,\\dots,M$ for any $M$) on which we define the $q$-shift operations\n\\[\n\\,_q\\!T_i\\,f(a_1,\\ldots, a_M):=f(a_1,\\ldots, q\\,a_i,\\ldots, a_M)\\ . \n\\]\nFor $u, v, z \\in {\\mathcal F}$, we consider the following systems of nonlinear\npartial $q$-difference equations:\n\\begin{equation}\\label{eq:qKdV}\n\\left(u-\\,_q\\!T_i\\,_q\\!T_ju\\right)\\left(\\,_q\\!T_ju-\\,_q\\!T_iu\\right)\n= (a_i^2-a_j^2) q^2 \\ , \n\\end{equation} \n\\begin{equation}\\label{eq:qmKdV}\na_j(_q\\!T_jv)\\,_q\\!T_i\\,_q\\!T_jv + a_i(_q\\!T_jv)v = \na_i(_q\\!T_iv)\\,_q\\!T_j\\,_q\\!T_iv + a_j(_q\\!T_iv)v\\ \n\\end{equation}\nand\n\\begin{eqnarray}\\nonumber\n&&a_i^2\\left(z-\\,_q\\!T_iz\\right)\\left(\\,_q\\!T_jz-\\,_q\\!T_i\\,_q\\!T_jz\\right)\\\\\n\\label{eq:qSKdV}&&\\qquad\\qquad =\na_j^2\\left(z-\\,_q\\!T_jz\\right)\\left(\\,_q\\!T_iz-\\,_q\\!T_i\\,_q\\!T_jz\\right) , \n\\end{eqnarray}\nwhere $i,j=1,\\dots,M$. \nEach of these systems, (\\ref{eq:qKdV}) to (\\ref{eq:qSKdV}), \nrepresents a multi-dimensionally consistent family of partial difference equations, in the sense of \\cite{NW,BS}, which implies that they constitute holonomic systems of nonlinear partial $q$-difference equations. 
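The multi-dimensional consistency of the $q$-lattice mKdV (\ref{eq:qmKdV}) can be verified numerically by the standard consistency-around-the-cube test. Below is a minimal sketch (our own illustration, not the authors' code): solve (\ref{eq:qmKdV}) for the doubly shifted variable, then compute the triply shifted value along the three possible routes, which must agree (a parameter $a_i$ is unaffected by shifts in the other lattice directions):

```python
# Consistency-around-the-cube check for the q-lattice mKdV (illustration).
# Solving eq. (qmKdV) for the double shift gives
#   T_i T_j v = v * (a_j*T_i v - a_i*T_j v) / (a_j*T_j v - a_i*T_i v).
def double_shift(v, vi, vj, ai, aj):
    return v * (aj * vi - ai * vj) / (aj * vj - ai * vi)

a1, a2, a3 = 1.0, 2.0, 3.0           # lattice parameters of three directions
v, v1, v2, v3 = 1.0, 2.0, 3.0, 5.0   # generic initial data on the cube edges

v12 = double_shift(v, v1, v2, a1, a2)
v13 = double_shift(v, v1, v3, a1, a3)
v23 = double_shift(v, v2, v3, a2, a3)

# the triple shift computed along three different routes
r1 = double_shift(v1, v12, v13, a2, a3)   # (a2,a3)-equation shifted in a1
r2 = double_shift(v2, v12, v23, a1, a3)   # (a1,a3)-equation shifted in a2
r3 = double_shift(v3, v13, v23, a1, a2)   # (a1,a2)-equation shifted in a3

print(r1, r2, r3)   # all three agree (here -31/7), confirming consistency
```

For these sample values all three routes give $-31/7$; the agreement for generic data is exactly the consistency property invoked in the text.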
\nAnother way to formulate this property is through an underlying linear system which takes the form\n\\begin{equation}\\label{eq:Tjphi}\n\\,_q\\!T_i^{-1}\\boldsymbol{\\phi}=\\boldsymbol{M}_i(k)\\boldsymbol{\\phi} , \n\\end{equation}\nwhere $\\boldsymbol{\\phi}=\\boldsymbol{\\phi}(k;\\{a_j\\})$ is a two-component vector-valued function and by consistency, $\\,_qT_i^{-1}\\,_qT_j^{-1}\\boldsymbol{\\phi}=\\,_qT_j^{-1}\\,_qT_i^{-1}\\boldsymbol{\\phi}$, leads to the \nset of Lax equations (for each pair of indices $i,j$)\n\\begin{equation}\\label{eq:Laxeqs}\n(\\,_qT_i^{-1}\\boldsymbol{M}_j)\\boldsymbol{M}_i=(\\,_qT_j^{-1}\\boldsymbol{M}_i)\\boldsymbol{M}_j\\ . \n\\end{equation} \nWe will consider three different cases, associated respectively with equations (\\ref{eq:qKdV})--(\\ref{eq:qSKdV}).\nTo avoid proliferation of symbols we use the same symbol $\\boldsymbol{M}_i(k)$ for each of the respective Lax matrices. For specific choices of the matrices $\\boldsymbol{M}_i$ the Lax equations (\\ref{eq:Laxeqs}) lead to the nonlinear equations given above. \nIn the case of the $q$-lattice KdV, (\\ref{eq:qKdV}), the Lax matrices $\\boldsymbol{M}_i$ are given by\n\\begin{equation}\\label{eq:MjKdV}\n\\boldsymbol{M}_i(k;\\{a_j\\})=\\frac{1}{a_i-k}\\left(\\begin{array}{ccc} a_i-\\,_q\\!T_i^{-1}u &,& 1 \\\\ \nk^2-a_i^2+(a_i+u)(a_i-\\,_q\\!T_i^{-1}u) &,& a_i+u\\end{array}\\right) \\ . \n\\end{equation} \nIn the case of the $q$-lattice mKdV, (\\ref{eq:qmKdV}), the Lax matrices $\\boldsymbol{M}_i$ are given by\n\\begin{equation}\\label{eq:Mj}\n\\boldsymbol{M}_i(k;\\{a_j\\})=\\frac{1}{a_i-k}\\left(\\begin{array}{ccc} a_i(_q\\!T_i^{-1}v)\/v &,& k^2\/v \\\\ \n\\,_q\\!T_i^{-1}v &,& a_i\\end{array}\\right) \\ . 
\n\\end{equation} \nFinally, in the case of the $q$-lattice SKdV, (\\ref{eq:qSKdV}), the Lax matrices $\\boldsymbol{M}_i$ are given by \n\\begin{equation}\\label{eq:MjSKdV}\n\\boldsymbol{M}_i(k;\\{a_j\\})=\\frac{a_i}{a_i-k}\\left(\\begin{array}{ccc} 1 &,& (k^2\/a_i^2)\\left(z-\\,_q\\!T_i^{-1}z\\right)^{-1} \\\\ \nz-\\,_q\\!T_i^{-1}z &,& 1\\end{array}\\right) \\ . \n\\end{equation} \nThese Lax matrices are straightforward generalizations of those \nwith constant lattice parameters given in e.g. \\cite{NRGO}. \n\nWe mention that the solutions of the equations (\\ref{eq:qKdV}) to (\\ref{eq:qSKdV}) \nare related through discrete Miura type relations, namely\n\\numparts\\label{eq:Miuras}\n\\begin{eqnarray}\n&& a_i\\left(z-\\,_q\\!T_i^{-1}z\\right)=v\\left(\\,_q\\!T_i^{-1}v\\right)\\quad, \\label{eq:CH} \\\\ \n&& s=\\left(a_i-\\,_q\\!T_i^{-1}u\\right)v-a_i\\,_q\\!T_i^{-1}v\\quad,\\label{eq:Miura} \\\\ \n&& \\,_q\\!T_i^{-1}s=a_iv-(a_i+u)\\,_q\\!T_i^{-1}v\\ , \\label{eq:Miura2} \n\\end{eqnarray}\n\\endnumparts\nwhere $s\\in{\\mathcal F}$ is an auxiliary dependent variable. From these relations, the partial $q$-difference equations (\\ref{eq:qKdV}) to (\\ref{eq:qSKdV}) can be derived by eliminating $s$. \n\nSimilarity reductions of lattice equations have been considered in \\cite{NP,Nijh:Dorf,NW,DIGP,NRGO} where it was shown that scaling invariance of the solution can be implemented through additional compatible constraints on the lattice equations. 
In the present case of (\\ref{eq:qKdV})\nto (\\ref{eq:qSKdV}) these constraints adopt the following form \\cite{FJN}\n\\numparts\n\\begin{eqnarray}\n&& u(\\{q^{-N}a_i\\})=q^{-N}\\frac{1-\\lambda(q^N-1)(-1)^{\\sum_i\\,^{q}\\!\\log\\,a_i}}\n{1+\\lambda(q^N-1)(-1)^{\\sum_i\\,^{q}\\!\\log\\,a_i}}\\,u(\\{ a_j\\}) , \n \\label{eq:uconstr} \\\\\n&& v(\\{q^{-N}a_i\\})=\\frac{1-\\lambda(q^N-1)(-1)^{\\sum_i\\,^{q}\\!\\log\\,a_i}}{1+\\mu(q^N-1)}\\,v(\\{ a_j\\}) ,\n\\label{eq:vconstr} \\\\ \n&& z(\\{q^{-N}a_i\\})=q^N\\frac{1-\\mu(q^N-1)}{1+\\mu(q^N-1)}\\,z(\\{ a_j\\})\\quad,\\label{eq:zconstr} \n\\end{eqnarray}\n\\endnumparts\nwhere $\\lambda$ and $\\mu$ are constant parameters of the reduction and $N \\in \\mathbb{N}$ \nrepresents a \\lq\\lq periodicity freedom\\rq\\rq. The notation $^{q}\\!\\log\\,x$ refers to the logarithm of $x$\nwith base $q$.\n\nIn order to compute the corresponding isomonodromic deformation problems associated with the \nsimilarity reductions we have the following constraints on the vector function of the Lax pairs. 
\nIn the case of (\\ref{eq:qKdV}) we have\n\\begin{eqnarray}\\label{eq:phiforqKdV}\n\\fl\\boldsymbol{\\phi}(q^N\\,k;\\{a_j\\})= \\\\ \n\\fl\\nonumber\\left(\\begin{array}{lcl}\n\\qquad\\left(1+\\lambda(q^N-1)(-1)^{\\sum_i\\,^{q}\\!\\log\\,a_i}\\right)&,&0\\\\\n-2\\lambda q \\frac{q^N-1}{q-1}\\left(\\sum_i\\,a_i\\right)(-1)^{\\sum_i\\,^{q}\\!\\log\\,a_i}\n &,&q^N \\left(1-\\lambda(q^N-1)(-1)^{\\sum_i\\,^{q}\\!\\log\\,a_i}\\right) \\\\\n\\end{array}\\right)\\boldsymbol{\\phi}(k;\\{q^{-N}\\,a_j\\}) .\n\\end{eqnarray}\nIn the case of (\\ref{eq:qmKdV})\n\\begin{eqnarray}\\label{eq:phiforqmKdV}\n\\fl\\boldsymbol{\\phi}(q^N\\,k;\\{a_j\\})=\\\\\n\\fl\\nonumber\\qquad\\left(\\begin{array}{ll}\n\\left(1+\\lambda(q^N-1)(-1)^{\\sum_i\\,^{q}\\!\\log\\,a_i}\\right)&0\\\\\n0&q^{-N}(1+\\mu(q^N-1))\\\\\n\\end{array}\\right)\\boldsymbol{\\phi}(k;\\{q^{-N}\\,a_j\\}) .\n\\end{eqnarray}\nIn the case of (\\ref{eq:qSKdV})\n\\begin{eqnarray}\\label{eq:phiforqSKdV}\n\\fl\\boldsymbol{\\phi}(q^N\\,k;\\{a_j\\})=\\\\\n\\fl\\nonumber\\qquad\\left(\\begin{array}{ll}\n\\left(1-\\mu(q^N-1)\\right)&0\\\\\n0&q^{-N}(1+\\mu(q^N-1))\\\\\n\\end{array}\\right)\\boldsymbol{\\phi}(k;\\{q^{-N}\\,a_j\\}) .\n\\end{eqnarray}\nThe similarity constraints, (\\ref{eq:phiforqKdV}) to (\\ref{eq:phiforqSKdV}), in conjunction with the discrete linear equations (\\ref{eq:MjKdV}) to (\\ref{eq:MjSKdV}) can be used to derive corresponding $q$-isomonodromic deformation problems. \nThat is, (\\ref{eq:phiforqKdV}) to (\\ref{eq:phiforqSKdV}) lead to\n$q$-difference equations in the spectral variable $k$, hence together\nwith the lattice equation Lax pairs we obtain $q$-isomonodromic deformation\nproblems for the corresponding reductions.\n\n\\paragraph{Remarks:} \n\\begin{enumerate}\n\\item The similarity constraints above were obtained through an approach based on Jackson-type integrals, the details of which will be presented elsewhere \\cite{FJN}. 
By construction, these constraints are compatible with the underlying lattice equations, which can be checked \\textit{a posteriori} \nby an explicit calculation, presented in the appendix. \n\\item In this approach, the dynamics in terms of the variables $a_i$ appear through appropriately chosen $q$-analogues of exponential functions, whereas the relevant Jackson integrals exhibit an invariance through scaling by factors $q^N$.\n\\item The parameters $\\lambda$ and $\\mu$ arise in this setting\nthrough boundary contributions in a manner analogous to the derivation in \\cite{NRGO}.\n\\end{enumerate}\n\nIn the remainder of this letter our aim is to implement the similarity constraint \nto obtain explicit reductions to ordinary $q$-difference equations. \nFor simplicity we consider only the reduction of the $q$-mKdV equation (\\ref{eq:qmKdV}), leaving\nconsiderations of the $q$-KdV and $q$-SKdV to a future publication \\cite{FJN}. \nThere are two possible scenarios to derive similarity reductions of the lattice equations using the constraint (\\ref{eq:vconstr}).\n\\paragraph{\\textit{``Periodic'' similarity reduction:}} \nBy fixing $M=2$ and allowing $N$ to vary, we select two lattice directions, say the variables $a_1$ and $a_2$, and consider similarity reductions with different values of $N$. This is a $q$-variant of the periodic staircase type reduction of partial difference equations on the two-dimensional lattice. \nFor instance, with $N=2$ the reduction is a second-order first-degree $q$-Painlev\\'e equation.\nIncreasing $N$ leads to $q$-difference Painlev\\'e type equations of increasing order.\nHowever, we will not pursue this route here but leave it to a subsequent publication \\cite{FJN}. These reductions are reminiscent of the work \\cite{Hay,SahadevanCapel,Carstea}. \n\\paragraph{\\textit{Multi-variable similarity reduction:}} \nThe similarity constraints provide the mechanism to couple together two or more lattice directions. 
\nBy considering the case $N=1$ we implement the similarity constraints on an extended lattice of three or more dimensions in order to obtain coupled ordinary $q$-difference equations, \nin a way that is reminiscent of the approach of \\cite{NW}. \nThis is considered in the next section.\n\n\\paragraph{{}} \nWe have not considered the more general case of arbitrary $M, N \\in \\mathbb{N}$, \nwhich we will postpone to a future publication \\cite{FJN}. \n\n\\section{Multi-variable similarity reduction} \\label{multivarsec}\n\\setcounter{equation}{0}\nIn this section we consider explicitly the $M=1$, 2 and 3 cases for $N=1$.\n\nFor simplicity we shall in what follows denote the coefficient in (\\ref{eq:vconstr}) as $\\gamma$, i.e. \n\\begin{equation}\\label{eq:gamma}\n\\gamma=\\frac{1-\\lambda(q-1)(-1)^{\\sum_i\\,^{q}\\!\\log\\,a_i}}{1+\\mu(q-1)}\\quad\\Rightarrow\\quad\n v(\\{q^{-1} a_i\\})=\\gamma \\, v(\\{a_i\\})\\ , \n\\end{equation}\nwhere $\\gamma$ alternates between two values, i.e., $\\,_q\\!T_i^2\\gamma=\\gamma$. \n\nIn contrast to the usual difference case explored in \\cite{NW}, where two variables already yield a nontrivial O$\\Delta$E as a reduction, in the $q$-difference case we have to consider at least three independent variables to obtain a nontrivial system of O$\\Delta$Es as a reduction.\n\nIn \\cite{NW} the compatibility between the similarity constraint and the lattice \nsystem was established and led \nto a system of higher-order difference equations in the reduction, namely equations which were on \nthe level of the first Garnier system. \nIn contrast to the $q=1$ work, the 3D similarity constraint here is somewhat \nsimpler and leads to a second-order equation (which is of second degree, and \nis a principal result of this letter). \n\n\n\n\n\\subsection*{Two-variable case}\nLet us now select among the collection of variables $\\{a_j\\}$ two specific ones\nwhich for simplicity we will call $a$ and $b$. 
\nDenote the $q$-shifts in these variables by an over-tilde $\\widetilde{\\phantom{C}}$ \nand an over-hat $\\widehat{\\phantom{C}}$ respectively. Equation\n(\\ref{eq:qmKdV}) may now be written\n\\begin{equation}\\label{qmKdVab}\nb \\widehat{v} \\widehat{\\widetilde{v}} + a \\widehat{v} v = a \\widetilde{v} \\widehat{\\widetilde{v}} + b v \\widetilde{v},\n\\end{equation}\nwhere the over-tilde $\\widetilde{\\phantom{v}}$ refers to the\n$q$-translation $a \\mapsto q a$ and the over-hat $\\widehat{\\phantom{v}}$ refers to the $q$-translation $b \\mapsto q b$\n(so if $v \\equiv v(a,b)$, $\\widetilde{v} \\equiv v(q a,b)$, $\\undertilde{v} \\equiv v( q^{-1}a ,b)$, \n$\\widehat{v} \\equiv v(a,bq)$, \\ldots).\n\nEquation (\\ref{eq:gamma})\ngives the constraint $v = \\gamma \\widehat{\\widetilde{v}}$ to impose on (\\ref{qmKdVab})\n(where $\\widehat{\\widetilde{\\gamma}} = \\widetilde{\\widetilde{\\gamma}} = \\gamma$).\nThis leads to the linear first-order (in that it is a two point) ordinary\ndifference equation\n\\begin{equation}\\label{Neq1}\n\\undertilde{v} = C \\,\\tilde{v},\n\\end{equation}\nwhere $C=\\widetilde{\\gamma} (a\\gamma + b)\/(a+b\\gamma)$.\nIn the appendix the\nconsistency between the lattice equation (\\ref{qmKdVab}) and the constraint\n(\\ref{Neq1}) is shown by direct computation.\n\n\n\n\n\\subsection*{Three-variable case}\n\n\nTake three copies of the lattice mKdV equation with $a_1=a$, $a_2=b$, $a_3=c$, \n\\numparts\n\\begin{eqnarray}\\label{qmKdVabc}\nb \\widehat{v} \\widehat{\\widetilde{v}} + a \\widehat{v} v = a \\widetilde{v} \\widehat{\\widetilde{v}} + b v \\widetilde{v}, \\label{eq:qmKdVab} \\\\ \nc \\ol{v} \\widetilde{\\ol{v}} + a \\ol{v} v = a \\widetilde{v} \\widetilde{\\ol{v}} + c v \\widetilde{v}, \\label{eq:qmKdVac}\\\\ \nc \\ol{v} \\widehat{\\ol{v}} + b \\ol{v} v = b \\widehat{v} \\widehat{\\ol{v}} + c v \\widehat{v}, \\label{eq:qmKdVbc}\n\\end{eqnarray}\n\\endnumparts\nwhere $\\ol{c} = q c$, together with the constraint 
\n\\begin{equation}\\label{eq:abcconstr}\nv(q^{-1}a,q^{-1}b,q^{-1}c)=\\gamma v(a,b,c).\n\\end{equation} \n(The similarity constraint is shown by a direct computation to be compatible with the multidimensionally \nconsistent system of mKdV lattice equations in the appendix.)\nWe now proceed to derive the \nreduced system which leads to a (higher-degree) ordinary $q$-difference \nequation in terms of one selected independent\nvariable, say the variable $a$. The remaining variables $b$ and $c$ \nwill play the role of parameters \nin the reduced equation. Thus, \nwe can derive the following system of two coupled O$\\Delta$Es for $v(a,b,c)$ and \n$w(a,b,c)\\equiv v(a,b,q^{-1}c)$:\n\\numparts\\label{eq:vw}\n\\begin{eqnarray}\n\\gamma v&=&w \\frac{a\\widetilde{\\gamma}\\widetilde{v}-b\\undertilde{w}}{a\\undertilde{w}-b\\widetilde{\\gamma}\\widetilde{v}}\\ , \\label{eq:vwa} \\\\ \n\\undertilde{w}&=&v \\frac{aw-c\\undertilde{v}}{a\\undertilde{v}-cw}\\ , \\label{eq:vwb} \n\\end{eqnarray}\n\\endnumparts\nwhere $\\widetilde{\\widetilde{\\gamma}}=\\gamma$. We consider the system (\\ref{eq:vwa}) and (\\ref{eq:vwb}) \nto constitute a $q$-Painlev\\'e system with four free \nparameters. \n\nThe system (\\ref{eq:vwa}) and (\\ref{eq:vwb}) \ncan be reduced to a second-order second-degree ordinary difference equation \nas follows. Introduce the variables\n\\begin{equation}\\label{eq:XVW} \nX=\\frac{v}{w}\\quad,\\quad V=\\frac{\\widetilde{v}}{v}\\quad,\\quad W=\\frac{\\widetilde{w}}{w}\\ , \n\\end{equation} \nthen from (\\ref{eq:vwa}) we obtain\n\\begin{equation}\\label{eq:VW}\n\\widetilde{\\gamma}\\frac{\\widetilde{v}}{\\undertilde{w}}=\\widetilde{\\gamma}VX\\undertilde{W}=\\frac{a\\gamma X+b}{b\\gamma X+a}\\ , \n\\end{equation} \nwhereas from (\\ref{eq:vwb}) we get \n\\begin{equation}\\label{eq:VV}\nW=\\frac{VX}{\\widetilde{X}}=\\frac{a+q^{-1}cXV}{q^{-1}c\/X+aV}\\ , \n\\end{equation} \nusing also the definitions (\\ref{eq:XVW}). 
Thus, we obtain a quadratic equation for $V$ in terms \nof $X$ and $\\widetilde{X}$ and hence also we have $W$ in terms of $X$ and $\\widetilde{X}$. Inserting these into \n(\\ref{eq:VW}) we obtain a second-order algebraic equation for $X$. Alternatively, avoiding \nthe emergence of square roots, the following second-order second-degree equation for $X$ may be \nderived\n\\begin{eqnarray}\\label{eq:Xeq}\n\\fl\n\\left[ \\widetilde{\\gamma}^2\\widetilde{X}\\undertilde{X}-\\left(\\frac{a\\gamma X+b}{b\\gamma\nX+a}\\right)^2\\right]^2 \\nonumber \\\\\n = \\widetilde{\\gamma}\\frac{c^2}{a^2}\\frac{1}{X}\n\\left(\\frac{a\\gamma X+b}{b\\gamma X+a}\\right)\\left[\\widetilde{\\gamma}\\widetilde{X}(1-X\\undertilde{X})+\nq^{-1}(1-X\\widetilde{X})\\frac{a\\gamma X+b}{b\\gamma X+a}\\right] \\nonumber \\\\\n\\qquad\\quad\\times \\left[ q^{-1}\\widetilde{\\gamma}\\undertilde{X}(1-X\\widetilde{X})+ (1-X\\undertilde{X})\n\\frac{a\\gamma X+b}{b\\gamma X+a}\\right] \\ .\n\\end{eqnarray}\nWe consider this second-degree equation to be one of the main results of this letter.\n\n\n\nWe now proceed to present the Lax pair for the $q$-Painlev\\'e system (\\ref{eq:vwa}) and (\\ref{eq:vwb}) and the second-order \nsecond-degree equation (\\ref{eq:Xeq}).\nThe Lax pair is formed by considering the compatibility of two paths on the lattice:\nalong a `period' then in the $a$ direction and evolving in the $a$ direction\nthen along a `period'.\nUsing (\\ref{eq:phiforqmKdV}) the evolution along a period is converted into a\ndilation of the spectral parameter, $k$, by $q$. 
\nThe result is the following isomonodromic $q$-difference system for the vector\n$\\phi(k;a)$, \nwhich, using the results of section \\ref{lKdVsaac}, reads \n\\numparts\n\\begin{eqnarray}\n\\phi(k;q^{-1}a)&=&\\boldsymbol{M}(k;a)\\phi(k;a)\\ , \\\\ \n\\phi(qk;a)&=&\\boldsymbol{L}(k;a)\\phi(k;a)\\ , \n\\end{eqnarray}\n\\endnumparts\nwhere \n\\numparts\n\\begin{equation}\\label{eq:M}\n\\boldsymbol{M}(k;a)=\\frac{1}{a-k}\\left(\\begin{array}{cc} a\\undertilde{v}\/v & k^2\/v \\\\ \n\\undertilde{v} & a\\end{array}\\right) \\ , \n\\end{equation} \nand \n\\begin{equation}\\label{eq:L}\n\\boldsymbol{L}(k;a)= \n\\frac{1}{a-k}\\left(\\begin{array}{cc} a\\gamma v\/\\widetilde{v} & k^2\/\\widetilde{v} \\\\ \nq^{-1}\\gamma v & q^{-1}a \\end{array}\\right)\\, \n\\left(\\begin{array}{cc} b\\widetilde{\\gamma}\\widetilde{v}\/w & k^2\/w \\\\ \n\\widetilde{\\gamma}\\widetilde{v} & b \\end{array}\\right)\\, \n\\left(\\begin{array}{cc} cw\/v & k^2\/v \\\\ \nw & c \\end{array}\\right)\\, \n\\end{equation}\n\\endnumparts\nwhere we have suppressed the dependence on the variables $b$ and $c$ (which now \nplay the role of parameters) and omitted the unnecessary prefactors $(b-k)^{-1}$ \nand $(c-k)^{-1}$, as well as an overall factor $q^{-1}(1+\\mu(q-1))$. \n\n\nThe consistency condition, \nobtained by expressing $\\phi(qk;q^{-1}a)$ in terms of $\\phi(k;a)$ in two ways, is the \nLax equation \n\\begin{equation}\\label{eq:Laxeq}\n\\boldsymbol{L}(k;q^{-1}a)\\boldsymbol{M}(k;a)=\\boldsymbol{M}(qk;a)\\boldsymbol{L}(k;a)\\ . \n\\end{equation} \nA gauge transformation can be applied to express the \nLax matrices in terms of the variables introduced in (\\ref{eq:XVW}). 
Setting \n\\numparts\n\\begin{equation} \\label{eq:LMa}\n\\mathcal{M}(k;a) = \\frac{1}{a-k}\\left(\\begin{array}{cc} \na\/\\undertilde{V} & k^2 \\\\ 1 & a\\undertilde{V} \\end{array}\\right) ,\n\\end{equation}\n\\begin{equation}\\label{eq:LMb}\n\\mathcal{L}(k;a) = \\frac{1}{a-k}\\left(\\begin{array}{ccc} \n\\widetilde{\\gamma}(ab\\gamma X+k^2) & & k^2(a\\gamma X+b)\/V \\\\ \nq^{-1}\\widetilde{\\gamma} V(a+b\\gamma X) & & q^{-1}(ab+k^2\\gamma X)\\end{array}\\right) \n\\left(\\begin{array}{cc} c\/X & k^2 \\\\ 1\/X & c\\end{array}\\right) \\ , \n\\end{equation}\n\\endnumparts\nthe Lax equations (\\ref{eq:Laxeq}) (replacing $\\boldsymbol{L}$ and $\\boldsymbol{M}$ by $\\mathcal{L}$ and $\\mathcal{M}$ respectively) \nyield a set of relations equivalent to the following two equations: \n\\begin{eqnarray}\\label{eq:laxconds}\n&& \\widetilde{\\gamma}V\\undertilde{V}\\undertilde{X}=\\frac{a\\gamma X+b}{a+b\\gamma X} \\ , \\label{eq:laxcondsa} \\\\ \n&& aV^2+q^{-1}c\\left( \\frac{1}{X}-\\widetilde{X}\\right) V-a\\frac{\\widetilde{X}}{X}=0\\ , \\label{eq:Laxcondsb} \n\\end{eqnarray} \nusing also $\\widetilde{\\gamma}=\\undertilde{\\gamma}$. This set follows directly from (\\ref{eq:VW}) and (\\ref{eq:VV}). Thus \n(\\ref{eq:LMa}) and (\\ref{eq:LMb}) \nform a $q$-isomonodromic Lax pair for the second-degree equation (\\ref{eq:Xeq}). \n\n\n\\subsection*{Four-variable case:}\n\n\n\n\nSuppose we have $4$ variables $a_i$, $i=1,\\dots, 4$. Select $a=a_1$ to be the \nindependent variable after reduction. \nIntroduce the dependent variables $w_{j-2}=\\,_q\\!T_j^{-1}v$, \n$j=3,4$. Then directly from the $q$-lattice mKdV equation (\\ref{eq:qmKdV}) we have the set \nof equations\n\\begin{equation}\\label{eq:multi-qmKdV}\n\\underaccent{\\wtilde}{w}_j\n=v\\frac{aw_j-a_{j+2}\\undertilde{v}}{a\\undertilde{v}-a_{j+2}w_j}\\quad,\\quad j=1,2\\ , \n\\end{equation}\nwhere as before the tilde denotes a $q$-shift in the variable $a$. 
\nAt the same time the multiply shifted\nobject $_q\\!T_3^{-1} \\,_q\\!T_4^{-1}\\undertilde{v}$ can be expressed in a unique way (due to the CAC property) \nin terms of $\\undertilde{v}$ and $\\,_q\\!T_j^{-1}v=w_{j-2}$, $j=3,4$, by iterating the relevant copies of the \n$q$-lattice mKdV equation in the variables $a_j$, $j\\neq 2$, leading to an expression of the form\n$_q\\!T_3^{-1} \\,_q\\!T_4^{-1}\\undertilde{v}=:F(\\undertilde{v},w_1,w_{2})$, where $F$\nis easily obtained explicitly. \nImposing the similarity constraint (\\ref{eq:gamma}) \nwe obtain $\\widetilde{\\gamma}\\,_q\\!T_2v=F(\\undertilde{v},w_1,w_{2})$\nand inserting this expression into the $q$-lattice mKdV (\\ref{eq:qmKdV}) with $i=1,j=2$ we obtain \n\\begin{equation}\\label{eq:vvF}\n\\left( a+\\frac{a_2}{\\gamma} \\frac{a_3 w_2-a_4 w_1}{a_3 w_1 -a_4 w_2}\\right)\n\\left(a_2\\widetilde{\\gamma}^{-1}F(\\undertilde{v},w_1,w_{2})-a\\widetilde{v}\\right)=\n(a_2^2-a^2)v\\widetilde{v}\\ . \n\\end{equation} \nWith the explicit form of $F(\\undertilde{v},w_1,w_{2})$ equation (\\ref{eq:vvF}) reads\n\\numparts\n\\begin{eqnarray}\\label{eq:4dsysta}\n\\fl\n(a_2^2-a^2)\\gamma\\widetilde{\\gamma}\\widetilde{v}(a_3w_1-a_4w_2)\n\\left[ a(a_3^2-a_4^2)\\undertilde{v}+a_3(a_4^2-a^2)w_1+a_4(a^2-a_3^2)w_2\\right] \\nonumber \\\\\n=[(a_2a_3-\\gamma aa_4)w_2+(\\gamma aa_3-a_2a_4)w_1]\\,\\left[ a(a_3^2-a_4^2)(a_2 w_1w_2-\\widetilde{\\gamma}a\\widetilde{v}\\undertilde{v}) \\right. \n\\nonumber \\\\\n\\left. 
+\\left(a_2a_4(a^2-a_3^2)\\undertilde{v}-aa_3(a_4^2-a^2)\\widetilde{\\gamma}\\widetilde{v}\\right)w_1 +\n\\left(a_2a_3(a_4^2-a^2)\\undertilde{v}-\\widetilde{\\gamma}aa_4(a^2-a_3^2)\\widetilde{v}\\right)w_2 \\right] \\ , \\nonumber \\\\ \n\\end{eqnarray} \nand this is supplemented by the two equations\n\\begin{eqnarray}\n&& avw_1+a_3w_1\\underaccent{\\wtilde}{w}_1=a_3v\\undertilde{v}+a\\undertilde{v}\\underaccent{\\wtilde}{w}_1\\ , \\label{eq:4dsystb} \\\\ \n&& avw_2+a_4w_2\\underaccent{\\wtilde}{w}_2=a_4v\\undertilde{v}+a\\undertilde{v}\\underaccent{\\wtilde}{w}_2\\ , \\label{eq:4dsystc} \n\\end{eqnarray}\n\\endnumparts\nwhich is equivalent to a five-point \n(fourth-order) $q$-difference equation in terms of \n$v$ alone, containing five free parameters: $a_2$, $a_3$, $a_4$, $\\lambda$ and $\\mu$ (inside $\\gamma$ and $\\widetilde{\\gamma}$). This would be an algebraic equation, \nso we proceed as follows in order to derive a higher-degree \n$q$-difference system. Introduce the variables\n\\begin{equation}\\label{eq:XW}\nX_i=\\frac{v}{w_i}\\quad,\\quad W_i=\\frac{\\widetilde{w}_i}{w_i}\\quad,\\quad i=1,2\\ , \n\\end{equation} \nwhile retaining the variable $V=\\widetilde{v}\/v$ as before. 
By definition we have\n\\begin{equation}\\label{eq:XWV}\n\\frac{V}{W_i}=\\frac{\\widetilde{X}_i}{X_i}\\quad,\\quad i=1,2 , \n\\end{equation}\nand from (\\ref{eq:4dsystb}), (\\ref{eq:4dsystc}) we obtain\n\\begin{equation}\\label{eq:WXV}\nW_i=\\frac{qa+a_{i+2}VX_i}{qaV+a_{i+2}\/X_i}=\\frac{VX_i}{\\widetilde{X}_i}\\quad,\\quad i=1,2\\ , \n\\end{equation} \nwhilst from (\\ref{eq:4dsysta}) we get\n\\begin{eqnarray}\\label{eq:WVX} \n\\fl\n(a_2^2-a^2)\\gamma\\widetilde{\\gamma}V\\left(\\frac{a_3}{X_1}-\\frac{a_4}{X_2}\\right)\\left( a\\frac{a_3^2-a_4^2}{\\undertilde{V}}+\na_3\\frac{a_4^2-a^2}{X_1}+a_4\\frac{a^2-a_3^2}{X_2}\\right) \\nonumber \\\\\n =\\left(\\frac{a_2a_3-\\gamma aa_4}{X_2}+\\frac{\\gamma aa_3-a_2a_4}{X_1}\\right)\\,\\left[ a(a_3^2-a_4^2)(\\frac{a_2}{X_1X_2}\n-\\widetilde{\\gamma}a\\frac{V}{\\undertilde{V}}) \\right. \\nonumber \\\\\n\\left. +\\left(a_2a_4\\frac{a^2-a_3^2}{\\undertilde{V}}-aa_3(a_4^2-a^2)\\widetilde{\\gamma}V\\right)\\frac{1}{X_1} +\n\\left(a_2a_3\\frac{a_4^2-a^2}{\\undertilde{V}}-\\widetilde{\\gamma}aa_4(a^2-a_3^2)V\\right)\\frac{1}{X_2} \\right] \\ . \\nonumber \\\\ \n\\end{eqnarray}\n{}From (\\ref{eq:WXV}) we obtain the set of quadratic equations for $V$\n\\begin{equation}\nqa\\frac{X_i}{\\widetilde{X}_i}V^2+a_{i+2}\\left(\\frac{1}{\\widetilde{X}_i}-X_i\\right)V - qa=0\\quad,\\quad i=1,2\\ , \n\\end{equation} \nfrom which by eliminating $V$ we obtain\n\\begin{eqnarray}\\label{eq:YX}\n\\fl\n\\left[a_3(1-X_1\\widetilde{X}_1)X_2-a_4(1-X_2\\widetilde{X}_2)X_1\\right]\\left[a_3(1-X_1\\widetilde{X}_1)\\widetilde{X}_2-a_4(1-X_2\\widetilde{X}_2)\\widetilde{X}_1\\right] \\nonumber \\\\ \n= q^2a^2 (X_1\\widetilde{X}_2-X_2\\widetilde{X}_1)^2\\ . 
\n\\end{eqnarray} \nFurthermore, solving $V$ from the quadratic system as\n\\begin{equation}\\label{eq:V}\nV=qa\\frac{X_2\\widetilde{X}_1-X_1\\widetilde{X}_2}{a_3(1-X_1\\widetilde{X}_1)X_2-a_4(1-X_2\\widetilde{X}_2)X_1}\\ , \n\\end{equation} \nand inserting this into (\\ref{eq:WVX}) \nwe obtain a second-order equation in both $X_1$, $X_2$ coupled to the equation \n(\\ref{eq:YX}) which is first order in both $X_1$, $X_2$. It is this coupled system of two equations in $X_1$, $X_2$ which \nforms our higher order generalisation of (\\ref{eq:Xeq}). \nThe system of (\\ref{eq:WVX}) and (\\ref{eq:YX}) with (\\ref{eq:V})\nconstitutes a third-order system with five parameters.\n\nThe derivation of the Lax pair follows the same approach as that for the\nthree-variable case\n(with an extra factor in $\\mathcal{L}$ due to the additional lattice direction).\nWe omit details here, which we intend to publish in the future \\cite{FJN}.\n\n\n\\subsection*{Beyond the four-variable case:} \n\nIt is straight-forward to give the form of the full hierarchy, however due to lack of space we postpone this until a later publication \\cite{FJN}.\n \n\n\\section{Conclusion and discussion}\\label{limsanddeg}\n\\setcounter{equation}{0}\n\nIn this letter we have presented the results of \na scheme to derive partial $q$-difference equations \nof KdV type and consistent symmetries of the equations \nand demonstrated how it can be implemented. \nLax matrices follow from this approach. A notable result is the \nderivation of the higher-degree equation (\\ref{eq:Xeq}), showing that \nthe scheme presented here allows for the derivation of new results\nwithin the field of discrete integrable systems. \n\nThe first-, second- and third-order members of the $N=1$ hierarchy have been presented.\nThe scheme continues to give successively higher-order\nequations by considering successively higher dimensions of the original lattice equation. 
\nOne may ask the natural question\nas to whether this gives an `interpolating' hierarchy which,\ncontrary to the usual cases, increases the order and number of parameters\nof the equations\nby one in each step, rather than a two step increase. A further natural\nquestion connected with this hierarchy is its\nrelation to the $q$-Garnier systems of Sakai \\cite{GarnSakai}.\n\nWe will present full details of the \nscheme from which the lattice equations (\\ref{eq:qKdV})\nto (\\ref{eq:qSKdV}) and their associated constraints follow in a future publication \\cite{FJN}.\nThere we will consider the most general case of symmetry reductions \n(arbitrary $N \\in \\mathbb{N}$) of all three lattice equations.\n\nWe also intend \nto return in a future publication to the question of limits and degeneracies of the \nequations presented in this paper. These include the \n$q\\rightarrow 1$ continuum limit,\nthe $q\\rightarrow 1$ discrete limit \nand the $q\\rightarrow 0$ crystal or ultradiscrete limit. \n\n\\section*{Acknowledgements} \n\nDuring the writing of this letter C.M. Field has been supported by\nthe Australian Research Council Discovery Project Grant $\\#$DP0664624\nand by the Netherlands Organization\nfor Scientific Research (NWO) in the VIDI-project ``Symmetry and\nmodularity in exactly solvable models''.\nN. Joshi is also supported by \nthe Australian Research Council Discovery Project Grant $\\#$DP0664624.\nPart of the work presented here was performed whilst N. Joshi \nwas visiting the University of Leeds.\n\n\\section{Appendix}\n\nIn this appendix we present the result of explicit calculations showing the consistency\nof the lattice equations and constraints.\n\n\n\\subsection*{Two-variable consistency}\n\nWe shall check the consistency between the lattice equation (\\ref{qmKdVab}) and the \nconstraint $v= \\gamma \\widehat{\\widetilde{v}}$ by direct computation. 
\nThis computation is illustrated in the following diagram: \n\\vspace{.2cm}\n\\begin{center}\n\n\n\\setlength{\\unitlength}{0.00043489in}\n\\begingroup\\makeatletter\\ifx\\SetFigFont\\undefined%\n\\gdef\\SetFigFont#1#2#3#4#5{%\n \\reset@font\\fontsize{#1}{#2pt}%\n \\fontfamily{#3}\\fontseries{#4}\\fontshape{#5}%\n \\selectfont}%\n\\fi\\endgroup%\n{\\renewcommand{\\dashlinestretch}{30}\n\\begin{picture}(5245,4530)(0,-10)\n\\put(-1010,2205){.}\n\\put(-1010,3985){.}\n\\put(-1010,380){.}\n\n\\put(6235,2205){.}\n\\put(6235,3985){.}\n\\put(6235,380){.}\n\n\\put(765,3985){.}\n\\put(4410,380){.}\n\n\\put(2565,2205){\\circle*{303}}\n\\put(4410,2205){\\circle*{303}}\n\\put(765,2205){\\circle{303}}\n\\put(2365,3885){{\\Large$\\times$}}\n\n\\put(4410,4005){\\circle{303}}\n\\put(615,325){{\\large$\\otimes$}}\n\\put(2610,439){\\circle{303}}\n\\thicklines\n\\dashline{90.000}(4410,4005)(2565,4005)(765,2205)\n\t(765,405)(2610,405)(4410,2205)(4410,4005)\n\\drawline(2565,2205)(4410,2205)\n\\put(2385,1725){\\makebox(0,0)[lb]{\\Large $v_0$}}\n\\put(4410,1770){\\makebox(0,0)[lb]{\\Large $v_1$}}\n\\put(4815,3980){\\makebox(0,0)[lb]{\\Large $v_{12}$}}\n\\put(2340,4305){\\makebox(0,0)[lb]{\\Large $v_2$}}\n\\put(0,2205){\\makebox(0,0)[lb]{\\Large $v_{-1}$}}\n\\put(295,-15){\\makebox(0,0)[lb]{\\Large $v_{-1,-2}$}}\n\\put(2475,-75){\\makebox(0,0)[lb]{\\Large $v_{-2}$}}\n\\end{picture}\n}\n\nFig 1. Consistency on the 2D lattice. \n\\end{center}\n\\noindent\nAssuming the values $v_0$, $v_1$ as indicated in Fig 1 are given, \nwe compute successively $v_{12}$, $v_2$, etc., \nwhere the subscripts refer to the shifts in the lattice variables $a$, $b$ respectively, as is evident \nfrom Fig 1.\nPoints other than $v_0$ and $v_1$ are computed using either \nthe lattice equation (indicated by $\\times$) or the similarity constraint (indicated by \n$\\bigcirc$). The value $v_{-1,-2}$ is the first point which can be calculated in two different ways \n(hence indicated in the diagram by $\\otimes$). 
Without making any particular assumptions on how \n$\\gamma$ depends on $a$ and $b$, a straightforward calculation shows that the two ways of computing \n$v_{-1,-2}$ are indeed the same, for any choice of initial data $v_0$ and $v_1$, provided that $\\gamma$ \nobeys the relation\n\\begin{equation}\\label{eq:gammarel}\n\\left(\\frac{a+b\\gamma}{b+a\\gamma}\\right)^{\\!\\widehat{\\widetilde{\\phantom{a}}}}\n\\left(\\frac{a+b\\gamma}{b+a\\gamma}\\right)^{-1}=\\frac{\\widetilde{\\gamma}}{\\widehat{\\gamma}}\\ . \n\\end{equation} \nA particular solution of this relation is\n\\begin{equation}\\label{eq:gammasol} \n\\widehat{\\widetilde{\\gamma}}=\\gamma\\quad\\Leftrightarrow\\quad \\widehat{\\gamma}=\\widetilde{\\gamma}\\ , \n\\end{equation} \nand hence $\\widetilde{\\widetilde{\\gamma}}=\\gamma$ implying that $\\gamma$ is an alternating ``constant'' which is in \naccordance with the value given in (\\ref{eq:vconstr}). The reduced equation \nin this case is (\\ref{Neq1}),\nwhich can be readily integrated. \n\nMore generally, equation (\\ref{eq:gammarel}) can be resolved by setting \n\\begin{equation}\\label{eq:nurels} \\frac{a+b\\gamma}{b+a\\gamma}=\\frac{\\widetilde{\\nu}}{\\widehat{\\nu}}\\quad,\\quad \n\\gamma=\\frac{\\widehat{\\widetilde{\\nu}}}{\\nu}\\ , \\end{equation} \nleading to the consequence that $\\nu$ has to solve the $q$-lattice mKdV (\\ref{qmKdVab}). \nIn principle we could take for $\\nu$ any solution of the reduced equation (\\ref{Neq1}) \nand use this to parametrise the reduced equation for $v$ via the relations (\\ref{eq:nurels}). \nIn any event, we see that the two-variable case does not lead to interesting nonlinear equations. 
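Note that one can also verify directly that the particular solution (\\ref{eq:gammasol}) satisfies the relation (\\ref{eq:gammarel}): since the combined shift acts as $a\\mapsto qa$, $b\\mapsto qb$, $\\gamma\\mapsto\\widehat{\\widetilde{\\gamma}}=\\gamma$, we have \n\\[\n\\left(\\frac{a+b\\gamma}{b+a\\gamma}\\right)^{\\!\\widehat{\\widetilde{\\phantom{a}}}}=\\frac{qa+qb\\gamma}{qb+qa\\gamma}=\\frac{a+b\\gamma}{b+a\\gamma}\\ , \n\\]\nso that the left-hand side of (\\ref{eq:gammarel}) equals unity, in agreement with $\\widetilde{\\gamma}\/\\widehat{\\gamma}=1$.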
\n\n\n\n\\subsection*{Three-variable consistency}\nIn this \ncase the consistency diagram is as follows: \n\\vspace{.2cm} \n\n\\begin{center}\n\n\n\\setlength{\\unitlength}{0.00057489in}\n\\begingroup\\makeatletter\\ifx\\SetFigFont\\undefined%\n\\gdef\\SetFigFont#1#2#3#4#5{%\n \\reset@font\\fontsize{#1}{#2pt}%\n \\fontfamily{#3}\\fontseries{#4}\\fontshape{#5}%\n \\selectfont}%\n\\fi\\endgroup%\n{\\renewcommand{\\dashlinestretch}{30}\n\\begin{picture}(6666,3630)(0,-10)\n\\put(3465,405){\\circle{180}}\n\\put(5265,1800){\\circle{180}}\n\\put(2430,1760){$\\otimes$}\n\\put(1800,1665){\\circle*{180}}\n\\put(675,405){\\circle{180}}\n\\put(4200,1580){$\\times$}\n\\put(6210,3060){\\circle*{180}}\n\\put(3510,3105){\\circle*{180}}\n\\thicklines\n\\drawline(720,405)(3465,405)\n\\drawline(3510,405)(5220,1800)\n\\drawline(3465,3105)(6210,3105)\n\\drawline(1778,1682)(3488,3077)\n\\dashline{90.000}(3465,3105)(3465,405)\n\\dashline{90.000}(2610,1800)(5175,1800)\n\\dashline{90.000}(2565,1800)(720,450)\n\\dashline{90.000}(1800,1665)(4365,1665)\n\\dashline{90.000}(6187,3017)(4342,1667)\n\\put(3240,3465){\\makebox(0,0)[lb]{\\Large $v_0$}}\n\\put(6390,3375){\\makebox(0,0)[lb]{\\Large $v_1$}}\n\\put(1080,1710){\\makebox(0,0)[lb]{\\Large $v_2$}}\n\\put(3375,0){\\makebox(0,0)[lb]{\\Large $v_{-3}$}}\n\\put(4000,1300){\\makebox(0,0)[lb]{\\Large $v_{1,2}$}}\n\\put(5500,1750){\\makebox(0,0)[lb]{\\Large $v_{-2,-3}$}}\n\\put(0,0){\\makebox(0,0)[lb]{\\Large $v_{-1,-3}$}}\n\\put(2565,1925){\\makebox(0,0)[lb]{\\Large $v_{-1,-2,-3}$}}\n\\end{picture}\n}\n\nFig 2. Consistency on the 3D lattice.\n\\end{center}\n\\vspace{.2cm}\n\n\\noindent \nA notation similar to that of the previous case is used, as is evident from Fig 2.\nThe initial data $v_0$, $v_1$ and $v_2$ are given, and the indicated values on the vertices are \ncomputed either by using one of the lattice equations (\\ref{eq:qmKdVab}) to (\\ref{eq:qmKdVbc}) \nor the similarity \nconstraint (\\ref{eq:abcconstr}) over the diagonal. 
Thus, $v_{1,2}$ is obtained from (\\ref{eq:qmKdVab}) \nyielding \n$$ v_{1,2}=v_0\\,\\frac{av_2-bv_1}{av_1-bv_2}\\ , $$\nwhilst from the similarity constraint we obtain \n$$v_{-1,-3}=\\gamma_2 v_2 \\quad,\\quad v_{-3}=\\gamma_{1,2}v_{1,2}\\quad,\\quad \nv_{-2,-3}= \\gamma_1 v_1\\ , $$\nassuming that $\\gamma$ shifts along the lattice, as indicated by the indices. \nFinally, the value of $v_{-1,-2,-3}$ can be computed in \ntwo different ways, leading to\n$$v_{-1,-2,-3}=\\gamma_0 v_0=\\frac{av_{-2,-3}-bv_{-1,-3}}{av_{-1,-3}-bv_{-2,-3}}v_{-3}=\n\\frac{a\\gamma_1v_1-b\\gamma_2v_2}{a\\gamma_2v_2-b\\gamma_1v_1}\\gamma_{1,2}v_0 \\frac{av_2-bv_1}{av_1-bv_2}\\ , $$ \nwhich yields a quadratic identity in $v_1$ and $v_2$. Requiring that the latter hold identically, \nand thus setting all coefficients equal to zero, we obtain the following conditions on $\\gamma$:\n$$ \\gamma_{1,2,3}=\\gamma_1=\\gamma_2=\\gamma_3\\quad , $$ \nfrom which we conclude that $\\gamma$ is an alternating ``constant'', for instance \n\\begin{equation}\\label{form:gamma}\n\\gamma=\\alpha\\,\\beta^{(-1)^{n+m+\\dots}}\\quad \\quad (\\alpha,\\beta\\ \\ {\\rm constants}) \\ , \n\\end{equation} \nand it is easily deduced that the form (\\ref{eq:gamma}) of $\\gamma$ satisfies these \nconditions. \n\n\n\n\\section*{References}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
+{"text":"\\section{Introduction}\\label{s-intro}\n\nOver the last 10 years, in particular since the installation of the Cosmic Origins Spectrograph (COS) on the {\\em Hubble Space Telescope} ({\\em HST}), we have made significant leaps in empirically characterizing the circumgalactic medium (CGM) of galaxies at low redshift where a wide range of galaxy masses can be studied (see recent review by \\citealt{tumlinson17}). 
We appreciate now that the CGM of typical star-forming or quiescent galaxies have a large share of galactic baryons and metals in relatively cool gas-phases ($10^4$--$10^{5.5}$ K) \\citep[e.g.,][]{stocke13,bordoloi14,liang14,peeples14,werk14,johnson15,burchett16,prochaska17,chen19,poisson19}. We have come to understand that the CGM of galaxies at $z \\la 1$ is not just filled with metal-enriched gas ejected by successive galaxy outflows, but has also a large amount of metal poor gas ($<1$--2\\% solar) in which little net chemical enrichment has occurred over several billions of years \\citep[e.g.,][]{ribaudo11,thom11,lehner13,lehner18,lehner19,wotta16,wotta19,prochaska17,kacprzak19,poisson19,zahedy19}. The photoionized gas around $z\\la 1$ galaxies is very chemically inhomogeneous, as shown by large metallicity ranges and the large metallicity variations among kinematically distinct components in a single halo (\\citealt{wotta19,lehner19}, and see also \\citealt{crighton13a,muzahid15,muzahid16, rosenwasser18}). Such a large metallicity variation is not only observed in the CGM of star-forming galaxies, but also in the CGM of passive and massive galaxies where there appears to be as much cold, bound \\ion{H}{1}\\ gas as in their star-forming counterparts \\citep[e.g.,][]{thom12,tumlinson13,berg19,zahedy19}.\n\nThese empirical results have revealed both expected and unexpected properties of the CGM of galaxies and they all provide new means to understand the complex relationship between galaxies and their CGM. Prior to these empirical results, the theory of galaxy formation and evolution was mostly left constraining the CGM properties indirectly by their outcomes, such as galaxy stellar mass and ISM properties. Thus the balance between outflows, inflows, recycling, and ambient gas--and the free parameters controlling them--were tuned to match the optical properties of galaxies rather than implemented directly as physically-rigorous and self-consistent models. 
These indirect constraints suffer from problems of model uniqueness: it is possible to match stellar masses and metallicities with very different treatments of feedback physics \\citep[e.g.,][]{hummels13,liang16}. Recent empirical and theoretical advances offer a way out of this model degeneracy. New high-resolution, zoom-in simulations employ explicit treatments of the multiphase nature of the gas and of feedback based on stellar population models \\citep[e.g.,][]{hopkins14,hopkins18}. It is also becoming clear that high resolution is required not only inside the galaxies but also in their CGM to capture more accurately the complex processes in the cool CGM, such as metal mixing \\citep{hummels19,peeples19,suresh19,vandevoort18,corlies19}.\n\nA significant limitation in interpreting the new empirical results in the context of new high-resolution zoom simulations is that only average properties of the CGM are robustly derived from traditional QSO absorption-line techniques for examining halo gas. In the rare cases where there is a UV-bright QSO behind a given galaxy, the CGM is typically probed along a single ``core sample'' through the halo of each galaxy. These measurements are then aggregated into a statistical map, where galaxies with different inclinations, sizes, and environments are blended together and the radial-azimuthal dependence of the CGM is essentially lost. All sorts of biases can result: phenomena that occur strongly in only a subset of galaxies can be misinterpreted as being weaker but more common, and genuine trends with mass or star formation rate can be misinterpreted as simply scatter with no real physical meaning (see also \\citealt{bowen16}). Simulations also suggest that time-variable winds, accretion flows, and satellite halos can induce strong halo-to-halo variability, further complicating interpretation \\citep[e.g.,][]{hafen17,oppenheimer18a}. 
Observational studies of single galaxy CGM with multiple sightlines are therefore required to gain spatial information on the properties of the CGM. \n\nMulti-sightline information on the CGM of single galaxies has been obtained in a few cases from binary or multiple (2--4) grouped QSOs behind foreground galaxies \\citep[e.g.,][]{bechtold94,martin10,keeney13,bowen16}, gravitationally-lensed quasars \\citep[e.g.,][]{smette92,rauch01,ellison04,lopez05,zahedy16,rubin18,kulkarni19}, giant gravitational arcs \\citep[e.g.,][]{lopez19}, or extended bright background objects observed with integral field units \\citep[e.g.,][]{peroux18}. These observations provide better constraints on the kinematic relationship between the CGM gas and the galaxy and on the size of CGM structures. However, they yield limited information on the gas-phase structure owing to a narrow range of ionization diagnostics or poor quality spectral data. Thus, it remains unclear how tracers of different gas phases vary with projected distance $R$ or azimuth $\\Phi$ around the galaxy. \n\nThe CGM that has been pierced the most is that of the Milky Way (MW), with several hundred QSO sightlines \\citep{wakker03,shull09,lehner12,putman12,richter17} through the Galactic halo. However, our position as observers within the MW disk severely limits the interpretation of these data (especially for the extended CGM, see \\citealt{zheng15, zheng20}) and makes it difficult to compare with observations of other galaxies. \n\nWith a virial radius that spans over $30\\degr$ on the sky, M31 is the only $L^*$ galaxy where we can access more than 5 sightlines without awaiting the next generation of UV space-based telescope (e.g., \\citealt{luvoir19}). 
With current UV capabilities, it is the only single galaxy where we can study the global distribution and properties of metals and baryons in some detail.\n\nIn our pilot study (\\citealt{lehner15}, hereafter \\citetalias{lehner15}), we mined the {\\em HST}\/COS G130M\/G160M archive available at the \\textit{Barbara A. Mikulski Archive for Space Telescopes} (MAST) for sightlines piercing the M31 halo within a projected distance of $\\sim 2 \\ensuremath{R_{\\rm vir}}$ (where $R_{\\rm vir}=300$ kpc for M31, see below). There were 18 sightlines, but only 7 at projected distance $R \\la \\ensuremath{R_{\\rm vir}}$. Despite the small sample, the results of this study were quite revealing, demonstrating a high covering factor (6\/7) of M31 CGM absorption by \\ion{Si}{3}\\ (and other ions including, e.g., \\ion{C}{4}, \\ion{Si}{2}) within $1.1\\ensuremath{R_{\\rm vir}}$ and a covering factor near zero (1\/11) between $1.1 \\ensuremath{R_{\\rm vir}}$ and $2 \\ensuremath{R_{\\rm vir}}$. We also found a drastic change in the ionization properties, as the gas is more highly ionized at $R \\sim \\ensuremath{R_{\\rm vir}}$ than at $R<0.2\\ensuremath{R_{\\rm vir}}$. The \\citetalias{lehner15} results strongly suggest that the CGM of M31 as seen in absorption of low ions (\\ion{C}{2}, \\ion{Si}{2}) through intermediate (\\ion{Si}{3}, \\ion{Si}{4}) and high ions (\\ion{C}{4}, \\ion{O}{6}) is very extended out to at least the virial radius. 
However, owing to the small sample within \\ensuremath{R_{\\rm vir}}, the variation of the column densities ($N$) and covering factors (\\ensuremath{f_{\\rm c}}) with projected distance and azimuthal angle remains poorly constrained.\n\nOur Project AMIGA (Absorption Maps In the Gas of Andromeda) is a large {\\em HST}\\ program (PID: 14268, PI: Lehner) that aims to fill in the coverage of the CGM with 18 additional sightlines at various $R$ and $\\Phi$ within $1.1 \\ensuremath{R_{\\rm vir}}$ of M31 using high-quality COS G130M and G160M observations, yielding a sample of 25 background QSOs probing the CGM of M31. We have also searched MAST for additional QSOs beyond $1.1 \\ensuremath{R_{\\rm vir}}$ up to $R=569$ kpc from M31 ($\\sim 1.9 \\ensuremath{R_{\\rm vir}}$) to characterize the extended gas around M31 beyond its virial radius. This archival search yielded 18 suitable QSOs. Our total sample of 43 QSOs probing the CGM of a single galaxy from 25 to 569 kpc is the first to explore simultaneously the azimuthal and radial dependence of the kinematics, ionization level, surface densities, and mass of the CGM of a galaxy over its entire virial radius and beyond. With these observations, we can also test how the CGM properties derived from one galaxy using multiple sightlines compare with a sample of galaxies with single-sightline information, and we can directly compare the results with cosmological zoom-in simulations.\n\nWith the COS G130M and G160M wavelength coverage, the key ions in our study are \\ion{C}{2}, \\ion{C}{4}, \\ion{Si}{2}, \\ion{Si}{3}, \\ion{Si}{4}\\ (other ions and atoms include \\ion{Fe}{2}, \\ion{S}{2}, \\ion{O}{1}, \\ion{N}{1}, \\ion{N}{5}, but these are typically not detected, although the limit on \\ion{O}{1}\\ constrains the level of ionization). These species span ionization potentials from $<1$ to $\\sim$4 Rydberg and thus trace neutral to highly ionized gas at a wide range of temperatures and densities. 
We have also searched the {\\it Far Ultraviolet Spectroscopic Explorer}\\ ({\\em FUSE}) archive for coverage of \\ion{O}{6}, which resulted in 11 QSOs in our sample having both COS and {\\em FUSE}\\ observations. The \\ion{H}{1}\\ Ly$\\alpha$\\ absorption unfortunately cannot be used because the MW dominates the entire Ly$\\alpha$\\ absorption. Instead we have obtained deep \\ion{H}{1}\\ 21-cm observations with the Robert C. Byrd Green Bank Telescope (GBT) toward all the targets in our sample and several additional ones (\\citealt{howk17}, hereafter \\citetalias{howk17}), showing no detection of any \\ion{H}{1}\\ down to a level $N_{\\rm H\\,I} \\simeq 4\\times 10^{17}$ cm$^{-2}$\\ ($5\\sigma$; averaged over an area that is 2 kpc across at the distance of M31). Our non-detections place a limit on the covering factor of such optically thick \\ion{H}{1}\\ gas around M31 of $\\ensuremath{f_{\\rm c}}\\ < 0.051$ (at 90\\% confidence level) for $R\\la \\ensuremath{R_{\\rm vir}}$.\n\nThis paper is organized as follows. In \\S\\ref{s-data}, we provide more information about the criteria used to assemble our sample of QSOs and explain the various steps to derive the properties (velocities and column densities) of the absorption. In that section, we also present the line identification for each QSO spectrum, which resulted in the identification of 5,642 lines. In \\S\\ref{s-ms}, we explain in detail how we remove the foreground contamination from the Magellanic Stream (MS, e.g., \\citealt{putman03,nidever08,fox14}), which extends to the M31 CGM region of the sky with radial velocities that overlap with those expected from the CGM of M31. For this work, we have developed a more systematic and automated methodology than in \\citetalias{lehner15} to deal with this contamination. In \\S\\ref{s-dwarfs}, we present the sample of the M31 dwarf satellite galaxies to which we compare the halo gas measurements. 
In \\S\\ref{s-properties}, we derive the empirical properties of the CGM of M31 including how the column densities and velocities vary with $R$ and $\\Phi$, the covering factors of the ions and how they change with $R$, and the metal and baryon masses of the CGM of M31. In \\S\\ref{s-disc}, we discuss the results derived in \\S\\ref{s-properties} and compare them to observations from the COS-Halos survey \\citep{tumlinson13,werk14} and to state-of-the-art cosmological zoom-in simulations, in particular from the Feedback in Realistic Environments (FIRE, \\citealt{hopkins19}) and Figuring Out Gas \\& Galaxies In Enzo (FOGGIE, \\citealt{peeples19}) projects. In \\S\\ref{s-sum}, we summarize our main conclusions.\n\nTo properly compare to other work, and to simulations, we must estimate a characteristic radius for M31. We use the radius $R_{200}$ enclosing a mean overdensity of $200$ times the critical density: $R_{200} = [3 M_{\\rm 200} \/ (4 \\pi \\, \\Delta\\, \\rho_{\\rm crit})]^{1\/3}$, where $\\Delta = 200$ and $\\rho_{\\rm crit}$ is the critical density. For M31, we adopt $M_{200} = 1.26 \\times 10^{12}$ M$_\\sun$ (e.g., \\citealt{watkins10,vandermarel12}), implying $R_{200} \\simeq 230$ kpc. For the virial mass and radius (\\ensuremath{M_{\\rm vir}}\\ and \\ensuremath{R_{\\rm vir}}), we use the definition that follows from the top-hat model in an expanding universe with a cosmological constant, where $\\ensuremath{M_{\\rm vir}} = (4\\pi\/3) \\, \\rho_{\\rm vir} \\ensuremath{R_{\\rm vir}}^3$ and the virial density is $\\rho_{\\rm vir} = \\Delta_{\\rm vir} \\Omega_{\\rm m} \\rho_{\\rm crit}$ \\citep{klypin11,vandermarel12}. The average virial overdensity is $\\Delta_{\\rm vir} = 360$ assuming a cosmology with $h=0.7$ and $\\Omega_{\\rm m} = 0.27$ \\citep{klypin11}. Following, e.g., \\citet{vandermarel12}, $\\ensuremath{M_{\\rm vir}} \\simeq 1.2 M_{200} \\simeq 1.5 \\times 10^{12}$ M$_\\sun$ and $\\ensuremath{R_{\\rm vir}} \\simeq 1.3 R_{200} \\simeq 300$ kpc. 
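As a numerical cross-check, the characteristic radii follow directly from the definitions above. The short sketch below (standard cgs constants and $h=0.7$; our illustration, not part of the original analysis) reproduces the adopted values to within the rounding of the input masses and cosmology:

```python
import math

G = 6.674e-8      # gravitational constant [cm^3 g^-1 s^-2]
MSUN = 1.989e33   # solar mass [g]
KPC = 3.086e21    # kiloparsec [cm]

def rho_crit(h=0.7):
    """Critical density 3 H0^2 / (8 pi G) in g cm^-3."""
    H0 = h * 100.0 * 1e5 / (1e3 * KPC)   # km/s/Mpc -> s^-1
    return 3.0 * H0**2 / (8.0 * math.pi * G)

def radius(mass_msun, overdensity, h=0.7):
    """Radius [kpc] enclosing `overdensity` times the critical density."""
    r3 = 3.0 * mass_msun * MSUN / (4.0 * math.pi * overdensity * rho_crit(h))
    return r3 ** (1.0 / 3.0) / KPC

r200 = radius(1.26e12, 200)         # Delta = 200
rvir = radius(1.5e12, 360 * 0.27)   # Delta_vir * Omega_m = 97.2
print(f"R_200 ~ {r200:.0f} kpc, R_vir ~ {rvir:.0f} kpc")
```

With these inputs the script gives $R_{200}\\simeq 223$ kpc and $\\ensuremath{R_{\\rm vir}}\\simeq 300$ kpc; the small offset from the quoted $R_{200}\\simeq 230$ kpc reflects the exact value of $h$ and the rounding of $M_{200}$.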
The escape velocity at $R_{200}$ for M31 is then $v_{200}\\simeq 212$ ${\\rm km\\,s}^{-1}$. A distance to M31 of $d_{\\rm M31} = 752 \\pm 27$ kpc, based on the measurements of Cepheid variables \\citep{riess12}, is assumed throughout. We note that this distance is somewhat smaller than the other often adopted M31 distance of 783 kpc \\citep[e.g.,][]{brown04,mcconnachie05}, but for consistency with our previous survey as well as the original design of Project AMIGA, we have adopted $d_{\\rm M31} = 752$ kpc. All the projected distances were computed using the three-dimensional separation (coordinates of the target and distance of M31).\n\n\\begin{figure*}[tbp]\n\\epsscale{0.9}\n\\plotone{f1.pdf}\n\\caption{Locations of the Project AMIGA pointings relative to the M31--M33 system. The axes show the angular separations converted into physical coordinates relative to the center of M31. North is up and east to the left. The 18 sightlines from our large {\\em HST}\\ program are shown as filled red circles; the 25 archival COS targets as open red circles. Crosses show the GBT \\ion{H}{1}\\ 21-cm observations described in \\citetalias{howk17}. Dotted circles show impact parameters $R = 100$, 200, 300, 400, 500 kpc. $\\ensuremath{R_{\\rm vir}} = 300$ kpc is marked with a heavy dashed line. The sizes and orientations of the two galaxies are taken from RC3 \\citep{devaucouleurs91} and correspond to the optical $R_{25}$ values. The light blue dashed line shows the plane of the Magellanic Stream ($b_{\\rm MS} =0\\degr$) as defined by \\citet{nidever08}. 
The shaded region within $b_{\\rm MS} \\pm 20\\degr$ of the MS midplane is the approximate region where we identify most of the MS absorption components contaminating the M31 CGM absorption (see \\S\\ref{s-ms}).\\label{f-map}}\n\\end{figure*}\n\n\\section{Data and Analysis}\\label{s-data}\n\\subsection{The Sample}\\label{s-sample}\n\nThe science goals of our {\\em HST}\\ large program require estimating the spatial distributions of the kinematics and metal column densities of the M31 CGM gas within about $1.1\\ensuremath{R_{\\rm vir}}$ as a function of azimuthal angle and impact parameter. The search radius was selected based on our pilot study where we detected M31 CGM gas up to $\\sim 1.1\\ensuremath{R_{\\rm vir}}$, but essentially not beyond \\citepalias{lehner15} (a finding that we revisit in this paper with a larger archival sample, see below). With our {\\em HST}\\ program, we observed 18 QSOs at $R\\la 1.1\\ensuremath{R_{\\rm vir}}$ that were selected to span the M31 projected major axis, minor axis, and intermediate orientations. The sightlines do not sample the impact parameter space or azimuthal distribution at random. Instead, the sightlines were selected to probe the azimuthal variations systematically. The sample was also limited by a general lack of identified UV-bright AGNs behind the northern half of M31's CGM owing to higher foreground MW dust extinction near the plane of the Milky Way disk. Combined with 7 archival QSOs, these sightlines probe the CGM of M31 in azimuthal sectors spanning the major and minor axes with a radial sample of 7--10 QSOs in each $\\sim$100 kpc bin in $R$.\n\nIn addition to target locations, the 18 QSOs were optimized to be the brightest available QSOs (to minimize exposure time) and to have the lowest available redshifts (in order to minimize the contamination from unrelated absorption from high redshift absorbers). 
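The projected geometry underlying these azimuthal sectors and radial bins reduces to an angular separation on the sky scaled by the adopted M31 distance. A minimal sketch follows; the J2000 M31 center coordinates and the small-angle conversion are our assumptions here for illustration, not the exact pipeline used to build Table~\\ref{t-sum}:

```python
import math

D_M31_KPC = 752.0             # adopted distance to M31 [kpc]
RA0, DEC0 = 10.6847, 41.2692  # J2000 center of M31 [deg] (assumed here)

def ang_sep_deg(ra1, dec1, ra2, dec2):
    """Angular separation on the sky [deg] via the haversine formula."""
    ra1, dec1, ra2, dec2 = map(math.radians, (ra1, dec1, ra2, dec2))
    s = (math.sin((dec2 - dec1) / 2) ** 2
         + math.cos(dec1) * math.cos(dec2) * math.sin((ra2 - ra1) / 2) ** 2)
    return math.degrees(2 * math.asin(math.sqrt(s)))

def impact_parameter_kpc(ra, dec):
    """Projected distance R from the M31 center at the distance of M31."""
    theta = math.radians(ang_sep_deg(ra, dec, RA0, DEC0))
    return D_M31_KPC * math.tan(theta)

# e.g., a sightline 10 degrees from the M31 center lies at R ~ 133 kpc
```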
For targets with no existing UV spectra prior to our observations, we also required that the GALEX NUV and FUV flux magnitudes be about the same to minimize the likelihood of an intervening Lyman limit system (LLS) with optical depth at the Lyman limit $\\tau_{\\rm LL}>2$. An intervening LLS could absorb much or all of the QSO flux we would need to measure foreground absorption in M31. This strategy successfully kept QSOs with intervening LLSs out of the sample. \n\nAs we discuss below and as detailed by \\citetalias{lehner15}, the MS crosses through the M31 region of the sky at radial velocities that can overlap with those of M31 (see also \\citealt{nidever08,fox14}). To understand the extent of MS contamination and the extended gas around M31 beyond the virial radius, we also searched for targets beyond $1.1\\ensuremath{R_{\\rm vir}}$ with COS G130M and\/or G160M data. This search identified another 18 QSOs at $1.1\\la R\/\\ensuremath{R_{\\rm vir}} <1.9$ that met the data quality criteria for inclusion in the sample\\footnote{This search found eight additional targets at $R>1.6 \\ensuremath{R_{\\rm vir}}$ that are not included in our sample. SDSSJ021348.53+125951.4, 4C10.08, LBQS0052-0038 were excluded because of low S\/N in the COS data. NGC7714 has smeared absorption lines. LBQS0107-0232\/3\/5 lie at $z_{\\rm em}\\simeq 0.7$--1 and have extremely complex spectra. HS2154+2228 at $z_{\\rm em} = 1.29$ has no G130M wavelength coverage making the line identification highly uncertain. \\label{foot-reason}}. Our final sample consists of 43 sightlines probing the CGM of M31 from 25 to 569 kpc; 25 of these probe the M31 CGM from 25 to 342 kpc, corresponding to $0.08 - 1.1 \\ensuremath{R_{\\rm vir}}$. 
Fig.~\\ref{f-map} shows the locations of each QSO in the M31--M33 system (the filled circles being targets obtained as part of our {\\em HST}\\ program PID: 14268 and the open circles being QSOs with archival {\\em HST}\\ COS G130M\/G160M data), and Table~\\ref{t-sum} lists the properties of our sample QSOs ordered by increasing projected distance from M31. In this table, we list the redshift of the QSOs (\\ensuremath{z_{\\rm em}}), the J2000 right ascension (RA) and declination (Dec.), the MS coordinates (\\ensuremath{l_{\\rm MS}}, \\ensuremath{b_{\\rm MS}}, see \\citealt{nidever08} for the definition of this coordinate system), the radial ($R$) and Cartesian ($X,Y$) projected distances, the {\\em HST}\\ program identification (PID), the COS gratings used for the observations of the targets, and the signal-to-noise ratio (SNR) per COS resolution element of the COS spectra near the \\ion{Si}{3}\\ transition (except where otherwise stated in the footnote of this table). \n\n\\subsection{UV Spectroscopic Calibration}\\label{s-calib}\n\nTo search for M31 CGM absorption and to determine the properties of the CGM gas, we use ions and atoms whose transitions lie in the UV (see \\S\\ref{s-prop}). Any transitions with $\\lambda>1144$ \\AA\\ are in the {\\em HST}\\ COS bandpass. All the targets in our sample were observed with {\\em HST}\\ using the COS G130M grating ($R_\\lambda \\approx 17,000$). All the targets observed as part of our new {\\em HST}\\ program were also observed with COS G160M, and all the targets but one within $R<1.1 \\ensuremath{R_{\\rm vir}}$ have both G130M and G160M wavelength coverage. \n\nWe also searched for additional archival UV spectra in MAST, including the {\\em FUSE}\\ ($R_\\lambda \\approx 15,000$) archive to complement the gas-phase diagnostics from the COS spectra with information from the \\ion{O}{6}\\ absorption. 
We use the {\\em FUSE}\\ observations for 11 targets with adequate SNR near \\ion{O}{6}\\ (i.e., $\\ga 5$): RX\\_J0048.3+3941, IRAS\\_F00040+4325, MRK352, PG0052+251, MRK335, UGC12163, PG0026+129, MRK1502, NGC7469, MRK304, PG2349-014 (only the first 6 targets in this list are at $R\\la 1.1 \\ensuremath{R_{\\rm vir}}$). We did not consider {\\em FUSE}\\ data for quasars without COS observations because the available UV transitions in the far-UV spectrum (\\ion{O}{6}, \\ion{C}{2}, \\ion{C}{3}, \\ion{Si}{2}, \\ion{Fe}{2}) are either too weak or too contaminated to allow for a reliable identification of the individual velocity components in their absorption profiles. \n\nThere are also 3 targets (MRK335, UGC12163, and NGC7469) with {\\em HST}\\ STIS E140M ($R_\\lambda \\simeq 46,500$) observations that provide higher-resolution information.\\footnote{For 2 targets, we also use COS G225M (3C454.3) and FOS NUV (3C454.3, PG0044+030) observations to help with the line identification (see \\S\\ref{s-lineid}). The data processing follows the same procedure as the other data.}\n\nInformation on the design and performance of COS, STIS, and {\\em FUSE}\\ can be found in \\citet{green12}, \\citet{woodgate98}, and \\citet{moos00}, respectively. For the {\\em HST}\\ data, we use the pipeline-calibrated final data products available in MAST. The {\\em HST}\\ STIS E140M data have an accurate wavelength calibration, and the various exposures and echelle orders are combined into a single spectrum by interpolating the photon counts and errors onto a common grid, adding the photon counts and converting back to a flux.\n\nThe processing of the {\\em FUSE}\\ data is described in detail by \\citet{wakker03} and \\citet{wakker06}. In short, the spectra are calibrated using version 2.1 or version 2.4 of the {\\em FUSE}\\ calibration pipeline. The wavelength calibration of {\\em FUSE}\\ can suffer from stretches and misalignments. 
To correct for residual wavelength shifts, the central velocities of the MW interstellar lines are determined for each detector segment of each individual observation. The {\\em FUSE}\\ segments are then aligned with the interstellar velocities implied by the STIS E140M spectra or with the velocity of the strongest component seen in the 21-cm \\ion{H}{1}\\ spectrum. Since the \\ion{O}{6}\\ absorption can be contaminated by H$_2$ absorption, we remove this contamination following the method described in \\citet{wakker06}. This contamination can be removed fairly accurately with an uncertainty of about $\\pm 0.1$ dex on the \\ion{O}{6}\\ column density \\citep{wakker03}.\n\nFor the COS G130M and G160M spectra, the spectral lines in separate observations of the same target are not always aligned, with misalignments of up to $\\pm 40$ ${\\rm km\\,s}^{-1}$\\ that vary as a function of wavelength. This is a known issue that has been reported previously \\citep[e.g.,][]{savage14,wakker15}. While the COS team has improved the wavelength solution, we find that this problem can still sometimes be present. \nSince accurate alignment is critical for studying the multiple gas phases probed by different ions and since there is no way to determine {\\it a priori} which targets are affected, we uniformly apply the \\cite{wakker15} methodology to coadd the different exposures of the COS data to ensure proper alignment of the absorption lines. In short, we identify the various strong and weak ISM and IGM lines, record the component structures, and identify possible contamination of the ISM lines by IGM lines. We cross-correlate each line in each exposure, using a $\\sim$3 \\AA\\ wide region, and apply a shift as a function of wavelength to each spectrum. 
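The relative alignment step just described is, at its core, a cross-correlation of a narrow window around a line between exposures. A schematic version on a synthetic Gaussian absorption line (pixel-level shifts only; the actual procedure applies wavelength-dependent shifts to the full spectra):

```python
import numpy as np

def best_shift(flux_ref, flux, max_shift=20):
    """Pixel shift that best aligns `flux` onto `flux_ref` by maximizing
    the cross-correlation of the two continuum-normalized spectra."""
    shifts = range(-max_shift, max_shift + 1)
    # correlate the absorption (1 - flux) so the line dominates the signal
    cc = [np.sum((1 - flux_ref) * (1 - np.roll(flux, s))) for s in shifts]
    return list(shifts)[int(np.argmax(cc))]

# synthetic test: a Gaussian absorption line displaced by 7 pixels
x = np.arange(400)
line = lambda center: 1 - 0.5 * np.exp(-0.5 * ((x - center) / 5.0) ** 2)
shift = best_shift(line(200), line(207))
print(shift)  # -7: shifting the second spectrum by -7 pixels aligns it
```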
To determine the absolute wavelength calibration, we compare the velocity centroids of the Gaussian fits to the interstellar UV absorption lines (higher velocity absorption features being Gaussian fitted separately) and the \\ion{H}{1}\\ emission observed from our 9$\\arcmin$ GBT \\ion{H}{1}\\ survey \\citepalias{howk17} or otherwise from 21-cm data from the Leiden\/Argentine\/Bonn (LAB) survey \\citep{kalberla05} or the Parkes Galactic All Sky Survey (GASS) \\citep{kalberla10}. The alignment is coupled with the line identification into an iterative process to simultaneously determine the most accurate alignment and line identification (see \\S\\ref{s-lineid}). To combine the aligned spectra, we add the total counts in each pixel and then convert back to flux, using the average flux\/count ratio at each wavelength (see also \\citealt{tumlinson11a,tripp11}); the flux error is estimated from the Poisson noise implied by the total count rate.\n\n\\subsection{Line Identification}\\label{s-lineid}\nWe are interested in the velocity range $-700 \\le \\ensuremath{v_{\\rm LSR}}\\ \\le -150$ ${\\rm km\\,s}^{-1}$\\ where absorption from the M31 CGM may occur (see \\S\\ref{s-prop} for the motivation of this velocity range). It is straightforward to identify M31 absorption or its absence in this pre-defined velocity range, but we must ensure that there is either no contamination from higher redshift absorbers, or if there is, that we can correct for it.\n\nFor ions with multiple transitions, it is relatively simple to determine whether contamination is at play by comparing the column densities and the shapes of the velocity profiles of the available transitions. The profiles of atoms or ions with a single transition can be compared to other detected ions to check if there is some obvious contamination in the single transition absorption. However, some contamination may still remain undetected if it directly coincides with the absorption under consideration. 
Furthermore, when only a single ion with a single transition is detected (\\ion{Si}{3}\\ $\\lambda$1206 being the prime example), the only way to determine whether it is contaminated is to undertake a complete line identification of all absorption features in each QSO spectrum.\n\nFor the 18 targets in our large {\\em HST}\\ program, our instrument setup ensures that we have complete wavelength coverage with no gaps between 1140 and 1800 \\AA. As part of our target selection, we also favor QSOs at low redshift (44\\% are at $\\ensuremath{z_{\\rm em}} \\le 0.1$, 89\\% at $\\ensuremath{z_{\\rm em}} \\le 0.3$). This ensures that Ly$\\alpha$\\ remains in the observed wavelength range out to the redshift of the QSO (Ly$\\alpha$\\ redshifts beyond the long-wavelength end of the COS band at $z = 0.48$) and greatly reduces the contamination from EUV transitions in the COS bandpass. The combination of wavelength coverage and low QSO redshift ensures the most accurate line identification. At $R<351$ kpc (i.e., $\\la 1.2 \\ensuremath{R_{\\rm vir}}$), 93\\% of the sightlines have Ly$\\alpha$\\ coverage over the full redshift path from $z=0$ to $z = \\ensuremath{z_{\\rm em}}$ (one target was observed only with G130M and another QSO is at $z=0.5$, see Table~\\ref{t-sum}). On the other hand, for the targets at $R>351$ kpc, the wavelength coverage is not as complete over 1140--1800 \\AA\\ (55\\% of the QSOs have only 1 COS grating---all but one have G130M, and 4 QSOs have $\\ensuremath{z_{\\rm em}} \\ga 0.48$). We note that 6 of the 10 QSOs with only G130M observations have $\\ensuremath{z_{\\rm em}} <0.17$, placing all the Ly$\\alpha$\\ transitions within the COS G130M bandpass. \n\nThe overall line identification process is as follows. First, we mark all the ISM absorption features (i.e., any absorption that could arise from the MW or M31) and the velocity components (which is done as part of the overall alignment of the spectra, see \\S\\ref{s-calib}). 
Local (approximate) continua are fitted near the absorption lines to estimate the equivalent widths ($W_\\lambda$); for ions with several transitions, the $W_\\lambda$ ratios are checked to determine if any transition is potentially contaminated. We then search for any absorption features at $z = \\ensuremath{z_{\\rm em}}$, again identifying any velocity component structures in the absorption. We then identify possible Ly$\\alpha$\\ absorption and any other associated lines (other \\ion{H}{1}\\ transitions and metal transitions) from the redshift of the QSO down to $z=0$. In each case, if there are simultaneous detections of Ly$\\alpha$, Ly$\\beta$, and\/or Ly$\\gamma$\\ (and weaker transitions), we check that the equivalent width ratios are consistent. If there are any transitions left unidentified, we check whether they could be \\ion{O}{6}\\ $\\lambda\\lambda$1031, 1037 as this doublet can sometimes be detected without any accompanying \\ion{H}{1}\\ \\citep{tripp08}. Finally, we check whether the alignment in each absorber with multiple detected absorption lines is correct or needs some additional adjustment.\n\nIn the region $R\\la 1.1\\ensuremath{R_{\\rm vir}}$ and for 84\\% of the sample at any $R$, we believe the line identifications are reliable and accurate at the 98\\% confidence level. In the Appendix, we provide some additional information regarding the line identification, in particular for the troublesome cases. We also make available in a machine-readable format the full line identification for all the targets listed in Table~\\ref{t-sum} (see Appendix~\\ref{a-lineid}).\n\n\\subsection{Determination of the Properties of the Absorption at $-700 \\le v_{\\rm LSR} \\le -150$ ${\\rm km\\,s}^{-1}$ }\\label{s-prop}\n\nOur systematic search window for absorption that may be associated with the CGM of M31 is $-700 \\le \\ensuremath{v_{\\rm LSR}}\\ \\le -150$ ${\\rm km\\,s}^{-1}$\\ \\citepalias{lehner15}. 
The $-700$ ${\\rm km\\,s}^{-1}$\\ cutoff corresponds to about $100$ ${\\rm km\\,s}^{-1}$\\ beyond the most negative velocities from the rotation curve of M31 ($\\sim -600$ ${\\rm km\\,s}^{-1}$, see \\citealt{chemin09}). The $-150$ ${\\rm km\\,s}^{-1}$\\ cutoff is set by the MW lines that dominate the absorption in the velocity range $-150 \\la \\ensuremath{v_{\\rm LSR}}\\ \\la +50$ ${\\rm km\\,s}^{-1}$. The $-100 \\la \\ensuremath{v_{\\rm LSR}} \\la -50$ ${\\rm km\\,s}^{-1}$\\ range is dominated by low- and intermediate-velocity clouds that are observed in and near the Milky Way disk. Galactic high-velocity clouds (HVCs) further above the MW disk, with velocities down to $\\ensuremath{v_{\\rm LSR}} \\sim -150$ ${\\rm km\\,s}^{-1}$, have also been observed toward distant Galactic halo stars in the general direction of M31 \\citep{lehner15,lehner12,lehner11a}. Since the M31 disk rotation velocities extend to about $-150$ ${\\rm km\\,s}^{-1}$\\ in the northern tip of M31, there is a small window that is inaccessible for studying the CGM of M31 (see also \\citealt{lehner15} and \\S\\ref{s-dwarfs-vel}).\n\nTo search for M31 CGM gas and determine its properties, we use the following atomic and ionic transitions: \\ion{O}{1}\\ $\\lambda$1302, \\ion{C}{2}\\ $\\lambda\\lambda$1036, 1334, \\ion{C}{4}\\ $\\lambda\\lambda$1548, 1550, \\ion{Si}{2}\\ $\\lambda\\lambda$1190, 1193, 1260, 1304, 1526, \\ion{Si}{3}\\ $\\lambda$1206, \\ion{Si}{4}\\ $\\lambda\\lambda$1393, 1402, \\ion{O}{6}\\ $\\lambda$1031, \\ion{Fe}{2}\\ $\\lambda\\lambda$1144, 1608, \\ion{Al}{2}\\ $\\lambda$1670. 
We also report results (mostly upper limits on column densities) for \\ion{N}{5}\\ $\\lambda\\lambda$1238, 1242, \\ion{N}{1}\\ $\\lambda$1199 (\\ion{N}{1}\\ $\\lambda\\lambda$1200, 1201 being typically blended in the velocity range of interest $-700 \\le \\ensuremath{v_{\\rm LSR}}\\ \\le -150$ ${\\rm km\\,s}^{-1}$), \\ion{P}{2}\\ $\\lambda$1301, \\ion{S}{3}\\ $\\lambda$1190, and \\ion{S}{2}\\ $\\lambda\\lambda$1250, 1253, 1259. \n\nTo determine the column densities and velocities of the absorption, we use the apparent optical depth (AOD) method (see \\S\\ref{s-aod}), but in the Appendix~\\ref{s-pf} we confront the AOD results with measurements from Voigt profile fitting (see also \\S\\ref{s-gen-comments}). As much as possible at COS resolution, we derive the properties of the absorption in individual components. Especially toward M31, this is important since along the same line of sight in the velocity window $-700 \\le \\ensuremath{v_{\\rm LSR}}\\ \\le -150$ ${\\rm km\\,s}^{-1}$, there can be multiple origins of the gas (including the CGM of M31 or MS, see Fig.~\\ref{f-map} and \\citetalias{lehner15}) as we detail in \\S\\ref{s-ms}. However, the first step to any analysis of the absorption imprinted on the QSO spectra is to model the QSO's continuum.\n\n\\subsubsection{Continuum Placement}\\label{s-continuum}\nTo fit the continuum near the ions of interest, we generally use the automated continuum fitting method developed for the COS CGM Compendium (CCC, \\citealt{lehner18}). Fig. 3 in \\citet{lehner18} shows an example of an automatic continuum fit. In short, the continuum is fitted near the absorption features using Legendre polynomials. A velocity region of about $\\pm$1000--2000 ${\\rm km\\,s}^{-1}$\\ around the relevant absorption transition is initially considered for the continuum fit, but could be changed depending on the complexity of the continuum placement in this region. 
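The Legendre-polynomial continuum fitting just outlined can be sketched as follows; this is a simplified stand-in for the CCC method of \\citealt{lehner18} (fixed fitting window, iterative clipping of line pixels, and a crude penalty for order selection are our choices here):

```python
import numpy as np
from numpy.polynomial import legendre

def fit_continuum(v, flux, max_order=5, nsigma=2.0, niter=5):
    """Legendre-polynomial continuum with iterative sigma-clipping of
    pixels deviating from the current fit (absorption or emission lines)."""
    x = (v - v.mean()) / ((v.max() - v.min()) / 2.0)  # rescale to [-1, 1]
    keep = np.ones_like(flux, dtype=bool)
    coef = None
    for _ in range(niter):
        if keep.sum() < max_order + 2:   # safety: too few pixels left
            break
        best = None
        for order in range(1, max_order + 1):
            c = legendre.legfit(x[keep], flux[keep], order)
            rms = np.std(flux[keep] - legendre.legval(x[keep], c))
            score = rms * (1.0 + 0.05 * order)  # penalize higher orders
            if best is None or score < best[0]:
                best = (score, c)
        coef = best[1]
        resid = flux - legendre.legval(x, coef)
        scatter = np.std(resid[keep])
        keep = np.abs(resid - np.median(resid[keep])) < nsigma * scatter
    return legendre.legval(x, coef)

# synthetic example: linear continuum with one absorption line plus noise
rng = np.random.default_rng(0)
v = np.linspace(-2000.0, 2000.0, 801)
cont = 2.0 + 3e-4 * v
flux = cont * (1 - 0.6 * np.exp(-0.5 * ((v + 300) / 60.0) ** 2))
flux += 0.005 * rng.standard_normal(v.size)
model = fit_continuum(v, flux)
```

Away from the absorption line, the recovered continuum tracks the input to within the noise level.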
In all cases the interval for continuum fitting is never larger than $\\pm$2000 ${\\rm km\\,s}^{-1}$\\ or smaller than $\\pm$250 ${\\rm km\\,s}^{-1}$. Within this pre-defined region, the spectrum is broken into smaller sub-sections and then rebinned. The continuum is fitted to all pixels that do not deviate by more than $2\\sigma$ from the median flux, masking from the fitting process pixels that may be associated with small-scale absorption or emission lines. Legendre polynomials of orders between 1 and 5 are fitted to the unmasked pixels, with the goodness of the fit determining the adopted polynomial order. Typically the adopted polynomials are of orders between 1 and 3 owing to the relative simplicity of the QSO continua when examined over velocity regions of 500--4000 ${\\rm km\\,s}^{-1}$. The only systematic exception is \\ion{Si}{3}, where the polynomial order is always higher, between 2 and 5, owing to this line being in the wing of the broad local Ly$\\alpha$\\ absorption profile.\n\nThis procedure is applied to our pre-defined set of transitions, with the continuum defined locally for each. Each continuum model is visually inspected for quality control. In a few cases, the automatic continuum fitting fails owing to a complex continuum (e.g., near the peak of an emission line or where many absorption lines were present within the pre-defined continuum window). In these cases, we first try to adjust the velocity interval of the spectrum to provide better-constrained fits; if that still fails, we manually select the continuum region to be fitted. \n\n\\begin{figure*}[tbp]\n\\epsscale{1}\n\\plotone{f2.pdf}\n\\caption{Example of normalized absorption lines as a function of the LSR velocity toward RX\\_J0043.6+3725 showing the typical atoms and ions probed in our survey. High negative-velocity components likely associated with M31 are shown in colors, and each color represents a different component identified at the COS G130M-G160M resolution. 
In this case, significant absorption is observed in the two identified components in \\ion{C}{2}, \\ion{Si}{2}, and \\ion{Si}{3}. Higher ions (\\ion{Si}{4}, \\ion{C}{4}) are observed in only one of the components, showing a change in the ionization properties with velocity. Some species are not detected, but their limits can still be useful in assessing the physical properties of the gas. The MW absorption is indicated between the two vertical dotted lines and is observed in all the species but \\ion{N}{5}. At $\\ensuremath{v_{\\rm LSR}} \\ga -100$ ${\\rm km\\,s}^{-1}$, airglow emission lines can contaminate \\ion{O}{1}, and hence the MW absorption is contaminated, but typically that is not an issue for the surveyed velocity range $-700 \\le \\ensuremath{v_{\\rm LSR}}\\ \\le -150$ ${\\rm km\\,s}^{-1}$. \n\\label{f-example-spectrum}}\n\\end{figure*}\n\n\n\\subsubsection{Velocity Components and AOD Analysis}\\label{s-aod}\n\nThe next step of the analysis is to determine the velocity components and integrate them to determine the average central velocities and column densities for each absorption feature. In Fig.~\\ref{f-example-spectrum}, we show an example of the normalized velocity profiles. In the supplemental material, we provide a similar figure for each QSO in our sample. Although we systematically search for absorption in the full velocity range $-700 \\le \\ensuremath{v_{\\rm LSR}}\\ \\le -150$ ${\\rm km\\,s}^{-1}$, the most negative velocity of detected absorption in our sample is $\\ensuremath{v_{\\rm LSR}}= -508$ ${\\rm km\\,s}^{-1}$; that is, we do not detect any M31 absorption in the range $-700 \\la \\ensuremath{v_{\\rm LSR}} \\la -510 $ ${\\rm km\\,s}^{-1}$. In Fig.~\\ref{f-example-spectrum}, MW absorption at $-100 \\la \\ensuremath{v_{\\rm LSR}} \\la 100$ ${\\rm km\\,s}^{-1}$\\ is clearly seen in all species but \\ion{N}{5}. 
Absorption observed in the range $-510 \\le \\ensuremath{v_{\\rm LSR}}\\ \\le -150$ ${\\rm km\\,s}^{-1}$\\ that is not color-coded is produced by higher-redshift absorbers or other MW lines.\n\nTo estimate the column density in each observed component, we use the AOD method \\citep{savage91}. In this method, the absorption profiles are converted into apparent optical depth per unit velocity, $\\tau_a(v) = \\ln[F_{\\rm c}(v)\/F_{\\rm obs}(v)]$, where $F_{\\rm c}(v)$ and $F_{\\rm obs}(v)$ are the modeled continuum and observed fluxes as a function of velocity. The AOD, $\\tau_a(v)$, is related to the apparent column density per unit velocity, $N_a(v)$, through the relation $N_a(v) = 3.768 \\times 10^{14}\\, \\tau_a(v)\/(f \\lambda)$ ${\\rm cm}^{-2}\\,({\\rm km\\,s^{-1}})^{-1}$, where $f$ is the oscillator strength of the transition and $\\lambda$ is the wavelength in \\AA. The total column density is obtained by integrating the profile over the pre-defined velocity interval, $N = \\int_{v_1}^{v_2} N_a(v)\\, dv$, where $[v_1,v_2]$ are the boundaries of the absorption. We estimate the line centroids with the first moment of the AOD, $v_a = \\int v\\, \\tau_a(v)\\, dv\/\\int \\tau_a(v)\\, dv$ (in ${\\rm km\\,s}^{-1}$). As part of this process, we also estimate the equivalent widths, which we use mainly to determine if the absorption is detected at the $\\ge 2\\sigma$ level. In cases where the line is not detected at $\\ge 2\\sigma$ significance, we quote a 2$\\sigma$ upper limit on the column density, which is defined as twice the 1$\\sigma$ error derived for the column density assuming the absorption line lies on the linear part of the curve of growth.\n\nFor features that are detected above the $2\\sigma$ level, the estimated column densities are stored for further analysis. 
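In code, the AOD relations above amount to a few lines of numerical integration. A self-contained sketch on a synthetic \\ion{Si}{3}\\ $\\lambda$1206 component (the $f$-value and wavelength are approximate atomic data; the Gaussian optical-depth profile is invented for illustration):

```python
import numpy as np

def aod_measure(v, flux, cont, f, wav):
    """Apparent column density N [cm^-2] and first-moment centroid v_a
    [km/s] from the AOD relations of Savage & Sembach (1991)."""
    tau = np.log(cont / flux)        # apparent optical depth
    Na = 3.768e14 * tau / (f * wav)  # apparent column density per (km/s)
    # simple trapezoidal integral over the velocity grid
    trap = lambda y: float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(v)))
    N = trap(Na)                     # N = int Na(v) dv
    va = trap(v * tau) / trap(tau)   # v_a = int v tau dv / int tau dv
    return N, va

# synthetic Si III 1206 component: Gaussian tau profile, known parameters
v = np.linspace(-400.0, -100.0, 601)
f_si3, wav_si3 = 1.63, 1206.5       # approximate atomic data
tau = 0.8 * np.exp(-0.5 * ((v + 250.0) / 30.0) ** 2)
N, va = aod_measure(v, np.exp(-tau), np.ones_like(v), f_si3, wav_si3)
```

For this profile the integral of $\\tau_a$ is $\\tau_0 b \\sqrt{2\\pi} \\simeq 60$ ${\\rm km\\,s}^{-1}$, giving $\\log N \\approx 13.1$ and $v_a \\simeq -250$ ${\\rm km\\,s}^{-1}$, as expected.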
Since we have undertaken a full identification of the absorption features in each spectrum (see \\S\\ref{s-lineid}, Appendix \\ref{a-lineid}), we can reliably assess whether a given transition is contaminated using, in particular, the conflict plots described in Appendix \\ref{a-conflicplot}. If there is evidence of some line contamination and several transitions are available for this ion (e.g., \\ion{Si}{2}, \\ion{Si}{4}, \\ion{C}{4}), we exclude the contaminated transition from our list. \n\nWe find that contamination affects \\ion{Si}{3}\\ and \\ion{C}{2}\\ in the velocity range $-700 \\le \\ensuremath{v_{\\rm LSR}}\\ \\le -150$ ${\\rm km\\,s}^{-1}$\\ in a few rare cases (6 components of \\ion{Si}{3}\\ and 3 components of \\ion{C}{2}\\ $\\lambda$1334).\\footnote{Toward RX\\_J0048.3+3941, \\ion{C}{2}\\ $\\lambda$1334 is contaminated in the third component, but \\ion{C}{2}\\ $\\lambda$1036 is available to correct for it in this case.} For all but one of these contaminated \\ion{Si}{3}\\ components, we can correct the contamination because the interfering line is a Lyman series line from a higher-redshift absorber and the other \\ion{H}{1}\\ transitions constrain the equivalent width of the contamination. The one case we cannot correct this way is the $-340$ ${\\rm km\\,s}^{-1}$\\ component toward PHL1226 (see also Appendix~\\ref{a-lineid}), which is associated with the MS. In the footnote of Table~\\ref{t-results}, we list the ions that are found to be contaminated at some level. For any column density that is corrected for contamination, the typical correction error is about 0.05--0.10 dex depending on the level of contamination as well as the SNR of the spectrum in that region.\n\nThe last step is to check for any unresolved saturation. When the absorption is clearly saturated (i.e., the flux reaches the zero-flux level in the core of the absorption), the line is automatically marked as saturated and a lower limit is assigned to the column density. 
In \\S\\ref{s-ms}, we will show how we separate the MS from the M31 CGM absorption, but we note that only the \\ion{Si}{3}\\ components associated with the MS and the MW have their absorption reaching the zero-flux level, not the components associated with the CGM of M31.\n\nWhen the flux does not reach a zero-flux level, the procedure for checking saturation depends on the number of transitions for a given ion or atom. We first consider ions with several transitions (\\ion{Si}{2}, \\ion{C}{4}, \\ion{Si}{4}, sometimes \\ion{C}{2}) since they can provide information about the level of saturation for a given peak optical depth. For ions with several transitions, we compare the column densities with different $f\\lambda$-values to determine whether there is a systematic decrease in the column density as $f\\lambda$ increases. If there is not, we estimate the average column density using all the available measurements and propagate the errors using a weighted mean. For the \\ion{Si}{2}\\ transitions, \\ion{Si}{2}\\ $\\lambda$1526 shows no evidence for saturation when detected, based on the comparison with stronger transitions, while \\ion{Si}{2}\\ $\\lambda$1260 or $\\lambda$1193 can be saturated if the peak optical depth $\\tau_a \\ga 0.9$. For doublets (e.g., \\ion{C}{4}, \\ion{Si}{4}), we systematically check if the column densities of each transition agree within the $1\\sigma$ error; if they do not and the weak transition gives a higher value (and there is no contamination in the weaker transition), we correct for saturation following the procedure discussed in \\citet{lehner18} (and see also \\citealt{savage91}). For \\ion{C}{4}\\ and \\ion{Si}{4}, there is rarely any evidence for saturation (we only correct once for saturation of \\ion{C}{4}, in the third component observed in the MRK352 spectrum; in that component the peak optical depth is $\\tau_a \\sim 0.9$). 
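The doublet check described above can be expressed compactly: saturation depresses the apparent column density of the stronger (higher $f\\lambda$) member, so a significantly higher value from the weaker member flags saturation. A sketch of the decision logic only (the correction itself, following \\citealt{lehner18}, is not reproduced, and the numbers in the example are invented):

```python
def doublet_saturation_flag(logN_weak, err_weak, logN_strong, err_strong):
    """Return 'saturated' if the weaker (lower f*lambda) transition gives
    a significantly (> 1 sigma) higher apparent column density, else 'ok'."""
    diff = logN_weak - logN_strong
    sigma = (err_weak ** 2 + err_strong ** 2) ** 0.5
    return "saturated" if diff > sigma else "ok"

# e.g., C IV 1550 (weak member) vs C IV 1548 (strong member):
print(doublet_saturation_flag(13.95, 0.03, 13.80, 0.02))  # saturated
print(doublet_saturation_flag(13.82, 0.03, 13.80, 0.02))  # ok
```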
For single strong transitions (in particular \\ion{Si}{3}\\ and often \\ion{C}{2}), if the peak optical depth is $\\tau_a >0.9$, we conservatively flag the component as saturated and adopt a lower limit for that component. We adopt $\\tau_a >0.9$ as the threshold for saturation based on other ions with multiple transitions (in particular \\ion{Si}{2}) where the absorption starts to show some saturation at this peak optical depth.\n\nTo estimate how the column density of silicon varies with $R$ (which has a direct consequence for the CGM mass estimates derived from silicon in \\S\\S\\ref{s-nsi-vs-r} and \\ref{s-mass}), it is useful to assess the level of saturation of \\ion{Si}{3}, which is the only silicon ion that cannot be directly corrected for saturation\\footnote{Some of the \\ion{Si}{2}\\ transitions (especially, \\ion{Si}{2}\\ $\\lambda\\lambda$1193, 1260) have evidence for saturation, but weaker transitions are always available (e.g., \\ion{Si}{2}\\ $\\lambda$1526), and therefore we can determine a robust value of the column density of \\ion{Si}{2}.}. The lower limits of the \\ion{Si}{3}\\ components associated with the CGM of M31 are mostly observed at $R\\la 140$ kpc (only 2 are observed at $R>140$ kpc), but they do not reach the zero-flux level; these components are conservatively marked as saturated because their peak apparent optical depth is $\\tau_a > 0.9$ (not because $\\tau_a \\gg 2$) and because the comparison between the different \\ion{Si}{2}\\ transitions shows in some cases evidence for saturation (see above). Hence the true values of the column densities of these saturated components are most likely higher than the adopted lower-limit values but are very unlikely to exceed them by a factor of $\\gg 3$--4. 
We can estimate how large the saturation correction for \\ion{Si}{3}\\ might be using the strong \\ion{Si}{2}\\ lines (e.g., \\ion{Si}{2}\\ $\\lambda$1193 or \\ion{Si}{2}\\ $\\lambda$1260) compared to the weaker ones (e.g., \\ion{Si}{2}\\ $\\lambda$1526). Going through the 8 sightlines showing some saturation in the components of \\ion{Si}{3}\\ associated with the CGM of M31 (see Table~\\ref{t-results}), for all the targets beyond 50 kpc, the saturation correction is likely to be small ($<0.10$--0.15 dex) because many show no evidence of saturation in \\ion{Si}{2}\\ $\\lambda$1260 (when there is no contamination for this transition) or \\ion{Si}{2}\\ $\\lambda$1193. On the other hand, for the two innermost targets, the saturation correction is at least 0.3 dex and possibly as large as 0.6 dex based on the column density comparison between saturated \\ion{Si}{2}\\ and weaker, unsaturated transitions. The latter would put $\\log N_{\\rm Si}\\simeq 14.5$, close to the maximum values derived with photoionization modeling in the COS-Halos sample (see \\S\\ref{s-coshalos}). Therefore, when we estimate the functional form of $N_{\\rm Si}$ with $R$, we increase the lower limits by 0.1 dex for the components associated with the CGM of M31 at $R>50$ kpc. For the two inner targets at $R<50$ kpc, we explore how an increase of 0.3 and 0.6 dex affects the estimation of $N_{\\rm Si}(R)$.\n\n\\subsubsection{High Resolution Spectra and Profile Fitting Analysis}\\label{s-gen-comments}\n\nIn Appendices~\\ref{s-comp-fit-aod} and \\ref{s-pf} we explore the robustness of the AOD results by comparing high- and low-resolution spectra and by comparing to a Voigt profile fitting analysis. There is good overall agreement in the column densities derived from the STIS and COS data, and our conservative choice of $\\tau_a \\sim 0.9$ as the threshold for saturation in the COS data is adequate (see Appendix~\\ref{s-comp-fit-aod}). 
For the profile fitting analysis, we consider the most complicated blending of components in our sample and demonstrate that there are some small systematic differences between the AOD and PF derived column densities (see Appendix~\\ref{s-pf}). However, these differences are small, and a majority of our sample is not affected by heavy blending. Hence the AOD results are robust and are adopted for the remainder of the paper.\n\n\\subsection{Correcting for Magellanic Stream Contamination}\\label{s-ms}\nPrior to determining the properties of the gas associated with the CGM of M31, we need to identify that gas and distinguish it from the MW and the MS. We have already removed from our analysis any contamination from higher redshift intervening absorbers and any contamination from the MW (defined as $-150 \\la \\ensuremath{v_{\\rm LSR}}\\ \\la 100$ ${\\rm km\\,s}^{-1}$). However, as shown in Fig.~\\ref{f-map} and discussed in \\citetalias{lehner15}, the MS is another potentially large source of contamination: in the direction of M31, the velocities of the MS can overlap with those expected from the CGM of M31. The targets in our sample have MS longitudes and latitudes in the range $-132\\degr \\le \\ensuremath{l_{\\rm MS}} \\le -86\\degr$ and $-14\\degr \\le \\ensuremath{b_{\\rm MS}} \\le +41\\degr$. The \\ion{H}{1}\\ 21-cm emission GBT survey by \\citet{nidever10} finds that the MS extends to about $\\ensuremath{l_{\\rm MS}} \\simeq -140\\degr$. Based on this and previous \\ion{H}{1}\\ emission surveys, \\citet{nidever08,nidever10} found a relation between the observed LSR velocities of the MS and \\ensuremath{l_{\\rm MS}}\\ that can be used to assess contamination in our targeted sightlines based on their MS coordinates. Using Fig.~7 of \\citet{nidever10}, we estimate the upper and lower boundaries of the \\ion{H}{1}\\ velocity range as a function of \\ensuremath{l_{\\rm MS}}, which we show in Fig.~\\ref{f-nidever} by the colored area between the curves. 
The MS velocity decreases with decreasing \\ensuremath{l_{\\rm MS}}\\ down to $\\ensuremath{l_{\\rm MS}} \\simeq -120\\degr$, where there is an inflection point beyond which the MS LSR velocity increases. We note that the velocity range in the region beyond $\\ensuremath{l_{\\rm MS}} \\la -135\\degr$ is uncertain but cannot be larger than shown in Fig.~\\ref{f-nidever} (see also \\citealt{nidever10}); however, this does not affect our survey since all our data are at $\\ensuremath{l_{\\rm MS}} \\ga -132\\degr$.\n\n\n\\begin{figure*}[tbp]\n\\epsscale{1.}\n\\plotone{f3.pdf}\n\\caption{The LSR velocity of the \\ion{Si}{3}\\ components (circles) observed in our sample as a function of the MS longitude \\ensuremath{l_{\\rm MS}}, color-coded according to the absolute MS latitude. Shaded regions show the velocities that can be contaminated by the MS and MW (by definition of our search velocity window, any absorption at $\\ensuremath{v_{\\rm LSR}} >-150$ ${\\rm km\\,s}^{-1}$\\ was excluded from our sample). We also show the data (squares) from the MS survey of \\citet{fox14} and the radial velocities of the M31 dwarf galaxies (stars). \n\\label{f-nidever}}\n\\end{figure*}\n\nWe take a systematic approach to removing the MS contamination that does not reject entire sightlines based on their MS coordinates. Not all velocity components may be contaminated even on sightlines close to the MS. In Fig.~\\ref{f-nidever}, we show the LSR velocity of the \\ion{Si}{3}\\ components as a function of the MS longitude. We choose \\ion{Si}{3}\\ as this ion is the most sensitive tracer of both weak and strong absorption and is readily observed in the physical conditions characteristic of the MS and the M31 CGM \\citep{fox14,lehner15}. We consider individual components because, for a given sightline, several components can fall inside or outside the boundary region associated with the MS, as illustrated in Fig.~\\ref{f-nidever}. 
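The per-component classification against the interpolated velocity boundaries can be sketched as follows (a Python sketch; the boundary samples below are illustrative placeholders, not values read from \citet{nidever10}):

```python
import numpy as np

# Hypothetical samples of the upper/lower MS velocity boundaries (km/s)
# as a function of l_MS (deg) -- illustrative values only, standing in
# for boundaries estimated from an H I survey figure.
L_MS = np.array([-140.0, -120.0, -100.0, -80.0])
V_UP = np.array([-300.0, -330.0, -250.0, -170.0])
V_LO = np.array([-450.0, -480.0, -400.0, -320.0])

def in_ms_boundary(l_ms, v_lsr):
    """True if a component falls inside the interpolated MS velocity band."""
    v_up = np.interp(l_ms, L_MS, V_UP)
    v_lo = np.interp(l_ms, L_MS, V_LO)
    return v_lo <= v_lsr <= v_up

def contaminated_fraction(components):
    """Fraction of (l_MS, v_LSR) components flagged as MS-contaminated."""
    flags = [in_ms_boundary(l, v) for l, v in components]
    return sum(flags) / len(flags)
```

The same fraction can be recomputed after shifting the upper boundary by $\pm 5$ ${\rm km\,s}^{-1}$ to test the sensitivity quoted in the text.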
We find that $28\/74\\simeq 38\\%$ of the detected \\ion{Si}{3}\\ components are within the MS boundary region shown in Fig.~\\ref{f-nidever}. We note that changing the upper boundary by $\\pm 5$ ${\\rm km\\,s}^{-1}$\\ would change this number by about $\\pm 3\\%$.\n\nTo our own sample, we also add data from two different surveys: the {\\em HST}\/COS MS survey by \\citet{fox14} and the M31 dwarfs (\\citealt{mcconnachie12} and see \\S\\ref{s-dwarfs}). For the MS survey, we restrict the sample to $-150\\degr \\le \\ensuremath{l_{\\rm MS}} \\le -20\\degr$, i.e., overlapping with our sample but also including higher \\ensuremath{l_{\\rm MS}}\\ values while still avoiding the Magellanic Clouds region where conditions may be different. The origin of the sample for the M31 dwarf galaxies is fully discussed in \\S\\ref{s-dwarfs}. The larger galaxies M33, M32, and NGC\\,205 are excluded here from that sample as their large masses are not characteristic of the satellite population. The LSR velocities of the M31 dwarfs as a function of \\ensuremath{l_{\\rm MS}}\\ are plotted with a star symbol in Fig.~\\ref{f-nidever}. For the MS survey, we select the LSR velocities of \\ion{Si}{3}\\ (note these are average velocities that can include multiple components), which are shown with squares in Fig.~\\ref{f-nidever}. Most ($\\sim90\\%$) of the squares fall between the two curves in Fig.~\\ref{f-nidever}, confirming the likelihood that these sightlines probe the MS (although we emphasize that this test was not initially used by \\citealt{fox14} to determine the association with the MS).\n\nThe M31 dwarf galaxies are of course not affected by the MS, but they can help us determine how frequently they fall within the velocity range where MS contamination is likely. For $\\ensuremath{l_{\\rm MS}} \\ga -132\\degr$ (where all the QSOs are and to avoid the uncertain region), only 9\\% (2\/22) of the dwarfs are within the velocity region where MS contamination occurs. 
If the velocity distributions of the M31 dwarfs and M31 CGM gas are similar, this would strongly suggest that velocity components with the expected MS velocities are indeed more likely associated with the MS. We note, however, that two additional dwarfs are close to the upper boundary; including them would change the frequency of dwarfs in the MS velocity-boundary region to 18\\%.\n\nObservations of \\ion{H}{1}\\ 21-cm emission toward the QSOs observed with COS in the MS survey \\citep{fox14} and Project AMIGA \\citep{howk17} show \\ion{H}{1}\\ detections only within $|\\ensuremath{b_{\\rm MS}}|\\la 11\\degr$. In the region defined by $-150\\degr \\le \\ensuremath{l_{\\rm MS}} \\le -20\\degr $, the bulk of the \\ion{H}{1}\\ 21-cm emission is observed within $|\\ensuremath{b_{\\rm MS}}|\\la 5\\degr$ \\citep{nidever10}. We therefore expect strong metal-line absorption when $|\\ensuremath{b_{\\rm MS}}|\\la 10\\degr$ and weaker absorption as $|\\ensuremath{b_{\\rm MS}}|$ increases. In Fig.~\\ref{f-col-cont}, we show the total column densities of \\ion{Si}{3}\\ for the velocity components from the Project AMIGA sample found within the MS boundary region shown in Fig.~\\ref{f-nidever}, i.e., we added the column densities of the components that are likely associated with the MS. We also show in the same figure the results from the \\citet{fox14} survey. Both datasets show the same behavior of the total \\ion{Si}{3}\\ column densities with $|\\ensuremath{b_{\\rm MS}}|$, an overall decrease in $N_{\\rm Si\\,III}$ as $|\\ensuremath{b_{\\rm MS}}|$ increases. 
Treating the limits as values and combining the two samples, a Spearman rank-order test confirms the visual impression that there is a strong monotonic anti-correlation between $N_{\\rm Si\\,III}$ and $|\\ensuremath{b_{\\rm MS}}|$, with a correlation coefficient $r_{\\rm S} = -0.72$ and a $p$-value $\\ll 0.1\\%$.\\footnote{We note that if we increase the lower limits by 0.15 dex or more and similarly decrease the upper limits, the significance of the anti-correlation would be similar.} There is a large scatter (about $\\pm 0.4$ dex around the dashed line) at any \\ensuremath{b_{\\rm MS}}, making it difficult to determine if any data points may not be associated with the MS (as, e.g., the three very low $N_{\\rm Si\\,III}$ at $12\\degr<|\\ensuremath{b_{\\rm MS}}|<18\\degr$ from our sample or the very high value at $|\\ensuremath{b_{\\rm MS}}|\\sim 27\\degr$ from the \\citealt{fox14} sample).\n\n\\begin{figure}[tbp]\n\\epsscale{1.2}\n\\plotone{f4.pdf}\n\\caption{The total column densities of the \\ion{Si}{3}\\ components associated with the MS as a function of the absolute MS latitude. We also show the MS survey by \\citet{fox14} restricted to data with $-150\\degr \\le \\ensuremath{l_{\\rm MS}} \\le -20\\degr $. The lighter gray squares with downward arrows are non-detections in the \\citeauthor{fox14} sample. The dashed line is a linear fit to the data treating the limits as values. A Spearman ranking correlation test implies a strong anti-correlation with a correlation coefficient $r_{\\rm S} = -0.72$ and $p\\ll 0.1\\%$.\n\\label{f-col-cont}}\n\\end{figure}\n\nIn Fig.~\\ref{f-col-vs-rho-ex}, we show the individual column densities of \\ion{Si}{3}\\ as a function of the impact parameter from M31 for the Project AMIGA sightlines, separating components associated with the MS from those that are not. 
Looking at Figs.~\\ref{f-map} and \\ref{f-col-cont}, we expect the strongest column densities associated with the MS to be at $|\\ensuremath{b_{\\rm MS}}|\\la 10\\degr$ and $R\\ga 300$ kpc, which is where they are located in Fig.~\\ref{f-col-vs-rho-ex}. We also expect a positive correlation between $N_{\\rm Si\\,III}$ and $R$ for the MS contaminated components while for uncontaminated components, we expect the opposite (see \\citetalias{lehner15}). Again treating the limits as values, the Spearman rank-order test demonstrates a strong monotonic correlation between $N_{\\rm Si\\,III}$ and $R$ ($r_{\\rm S} = 0.68$ with $p \\ll 0.1\\%$) while for uncontaminated components there is a strong monotonic anti-correlation ($r_{\\rm S} = -0.57$ with $p \\ll 0.1\\%$), in agreement with the expectations. Based on these results, it is therefore reasonable to consider any absorption components observed in the COS spectra within the MS boundary region defined in Fig.~\\ref{f-nidever} as most likely associated with the MS. We therefore flag any of these components (28 out of 74 components for \\ion{Si}{3}) as contaminated by the MS; they are not included further in our sample.\n\n\n\\begin{figure}[tbp]\n\\epsscale{1.2}\n\\plotone{f5.pdf}\n\\caption{Logarithm of the column densities of the individual components for \\ion{Si}{3}\\ as a function of the projected distances from M31 of the background QSOs, separating the components associated with the MS from those that are not.\n\\label{f-col-vs-rho-ex}}\n\\end{figure}\n\nFinally, we noted above that only a small fraction of the dwarfs are found in the MS contaminated region. While that fraction is small (9\\%), it could still suggest that in the MS contaminated region, some of the absorption is a blend of both MS and M31 CGM components. 
However, comparing the uncontaminated velocities along sightlines inside (29 components) and outside (17 components) the contaminated regions, a Kolmogorov-Smirnov (KS) test cannot reject the null hypothesis that the two distributions are the same ($p=0.74$). This strongly suggests that the correction for the MS contamination does not strongly bias the velocity distribution associated with the CGM of M31 (assuming that there is no strong change of the velocity with the azimuth $\\Phi$; as we explore further in \\S\\S\\ref{s-dwarfs-vel} and \\ref{s-map-vel}, there is no strong evidence for a velocity dependence on $\\Phi$).\n\n\\section{M31 Dwarf Galaxy Satellites}\\label{s-dwarfs}\nWhile Project AMIGA is dedicated to understanding the CGM of M31, our survey also provides a unique probe of the dwarf galaxies found in the halo of M31. In particular, we have the opportunity to assess if the CGM of dwarf satellites plays an important role in the CGM of the host galaxy, as studied by cosmological and idealized simulations~\\citep[e.g.][]{angles-alcazar17, hafen19a, hafen19b, bustard18}. When considering the dwarf galaxies in our analysis we have two main goals: 1) to determine if the velocity distributions of the dwarfs and the absorbers are similar, and 2) to assess whether some of the absorption observed toward the QSOs could be associated directly with the dwarfs, either as gas that is gravitationally bound or recently stripped. \n\nThe sample for the M31 dwarf galaxies is mostly drawn from the \\citet{mcconnachie12} study of Local Group dwarfs, in which the properties of 29 M31 dwarf satellites were summarized. Four additional dwarfs (Cas\\,II, Cas\\,III, Lac\\,I, Per\\,I) are added from recent discoveries \\citep{collins13,martin14,martin16,martin17}. 
M33 is excluded from that sample as its large mass is not characteristic of satellites.\\footnote{In Appendix~\\ref{a-m33}, we further discuss and present some evidence that the CGM of M33 is unlikely to contribute much to the observed absorption in our sample.} Table~\\ref{t-dwarf} summarizes our adopted sample of M31 dwarf galaxies (sorted by increasing projected distance from M31), listing some of their key properties. As listed in this table, most of the M31 satellite galaxies are dwarf spheroidal (dSph) galaxies, which have been stripped of most of their gas, most likely via ram-pressure stripping \\citep{grebel03}, a caveat that we keep in mind as we associate these galaxies with absorbers. \n\n\\subsection{Velocity Transformation}\\label{s-vel-trans}\nSo far we have used LSR velocities to characterize MW and MS contamination of gas in the M31 halo. However, as we now consider relative motions over $30\\degr$ on the sky, we cannot simply subtract M31's systemic radial velocity to place these relative motions in the correct reference frame. Over such large sky areas, tangential motion must be accounted for because the ``systemic'' sightline velocity of the M31 system changes with sightline. To eliminate the effects of ``perspective motion'', we follow \\citet{gilbert18} (and see also \\citealt{veljanoski14}) by first transforming the heliocentric velocity ($v_\\sun$) into the Galactocentric frame, $v_{\\rm Gal}$, which removes any effects the solar motion could have on the kinematic analysis. 
We converted our measured radial velocities from the heliocentric to the Galactocentric frame using the relation from \\citet{courteau99} with updated solar motions from \\citet{mcmillan11} and \\citet{schonrich10}:\n\\begin{equation}\\label{e-gal}\n\\begin{aligned}\nv_{\\rm Gal} = & v_\\sun + 251.24\\, \\sin(l)\\cos(b) +\\\\\n& 11.1\\, \\cos(l)\\cos(b) + 7.25\\, \\sin(b)\\,,\n\\end{aligned}\n\\end{equation}\nwhere $(l,b)$ are the Galactic longitude and latitude of the object. To remove the bulk motion of M31 along the sightline to each object, we use the heliocentric systemic radial velocity for M31 of $-301$ ${\\rm km\\,s}^{-1}$\\ \\citep{vandermarel08,chemin09}, which is $v_{\\rm M31,r}=-109$ ${\\rm km\\,s}^{-1}$\\ in the Galactocentric velocity frame. The systemic transverse velocity of M31 is $v_{\\rm M31,t}=-17$ ${\\rm km\\,s}^{-1}$\\ in the direction on the sky given by the position angle $\\theta_t = 287\\degr$ \\citep{vandermarel12}. The peculiar line-of-sight velocity of each absorber or dwarf, $v_{\\rm M31}$, after removal of M31's bulk motion from the sightline velocities, is then given by \\citep{vandermarel08}:\n\\begin{equation}\\label{e-vm31}\n\\begin{aligned}\nv_{\\rm M31} = & v_{\\rm Gal} - v_{\\rm M31,r}\\, \\cos(\\rho) + \\\\\n & v_{\\rm M31,t}\\, \\sin(\\rho)\\cos(\\phi - \\theta_t),\n\\end{aligned}\n\\end{equation}\nwhere $\\rho$ is the angular separation between the center of M31 and the QSO or dwarf position, and $\\phi$ is the position angle of the QSO or dwarf with respect to M31's center. We note that the transverse term in Eqn.~\\ref{e-vm31} is more uncertain \\citep{vandermarel08,veljanoski14}, but its effect is also much smaller, and including it or not would not significantly change the results; we opted to include that term in the velocity transformation. 
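Equations~\ref{e-gal} and \ref{e-vm31} can be sketched in code as follows (a Python sketch using the constants quoted above; the function and variable names are ours):

```python
import numpy as np

def v_galactocentric(v_helio, l_deg, b_deg):
    """Heliocentric -> Galactocentric radial velocity (Eqn. e-gal)."""
    l, b = np.radians(l_deg), np.radians(b_deg)
    return (v_helio + 251.24 * np.sin(l) * np.cos(b)
            + 11.1 * np.cos(l) * np.cos(b) + 7.25 * np.sin(b))

def v_m31_peculiar(v_gal, rho_deg, phi_deg,
                   v_r=-109.0, v_t=-17.0, theta_t=287.0):
    """Remove M31's systemic radial and transverse motion (Eqn. e-vm31).
    rho_deg: angular separation from M31's center;
    phi_deg: position angle with respect to M31's center."""
    rho = np.radians(rho_deg)
    return (v_gal - v_r * np.cos(rho)
            + v_t * np.sin(rho) * np.cos(np.radians(phi_deg - theta_t)))
```

At $\rho = 0$ the transverse term vanishes and only the radial term $-v_{\rm M31,r}\cos(\rho)$ is removed, as a quick sanity check.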
We apply these transformations in sequence (LSR to heliocentric, heliocentric to Galactocentric, Galactocentric to peculiar velocities) for each component observed in absorption toward the QSOs and for each dwarf. With this transformation, an absorber or dwarf with no peculiar velocity relative to M31's bulk motion has $v_{\\rm M31}=0$ ${\\rm km\\,s}^{-1}$, regardless of its position on the sky \\citep{gilbert18}.\n\n\n\\subsection{Velocity Distribution}\\label{s-dwarfs-vel}\n\n\\begin{figure}[tbp]\n\\epsscale{1.2}\n\\plotone{f6.pdf}\n\\caption{{\\it Left}: The M31 peculiar velocity (as defined by Eqn.~\\ref{e-vm31}) against the projected distances for the observed absorption components associated with M31 (using \\ion{Si}{3}) and M31 dwarf galaxies. The dotted curves show the escape velocity divided by $\\sqrt{3}$ to account for the unknown tangential motions of the absorbers and galaxies. {\\it Right}: The M31 velocity distributions with the same color-coding definition. \n\\label{f-v_vs_r_dwarfs}}\n\\end{figure}\n\nIn Fig.~\\ref{f-v_vs_r_dwarfs}, we compare the M31 peculiar velocities of the absorbers (using \\ion{Si}{3}) and of the dwarfs against the projected distance (see \\S\\ref{s-vel-trans}). We also show in the same figure the expected escape velocity, $v_{\\rm esc}$, as a function of $R$ for a $1.3\\times 10^{12}$ M$_\\sun$ point mass. We conservatively divide $v_{\\rm esc}$ by $\\sqrt{3}$ in that figure to account for remaining unconstrained projection effects. Nearly all the CGM gas traced by \\ion{Si}{3}\\ within $\\ensuremath{R_{\\rm vir}}$ is found at velocities consistent with being gravitationally bound, and this is true even at larger $R$ for most of the absorbers. This finding also holds for most of the dwarf galaxies, and, as demonstrated by \\citet{mcconnachie12}, it holds when the galaxies' 3D distances are used (i.e., using the actual distances of the dwarf galaxies, instead of the projected distances used in this work). 
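The escape-velocity curve in Fig.~\ref{f-v_vs_r_dwarfs} can be sketched as follows (a Python sketch; the value of $G$ in these units is an assumed constant, and the $\sqrt{3}$ projection factor follows the text):

```python
import numpy as np

G_KPC = 4.3009e-6  # assumed: G in kpc (km/s)^2 / M_sun

def v_esc_projected(r_kpc, m_sun=1.3e12):
    """Point-mass escape velocity at projected distance r_kpc,
    divided by sqrt(3) to account for unknown tangential motions."""
    return np.sqrt(2.0 * G_KPC * m_sun / r_kpc) / np.sqrt(3.0)
```

For a $1.3\times 10^{12}$ M$_\sun$ point mass, this gives roughly 110 ${\rm km\,s}^{-1}$ at $R = 300$ kpc, against which the absorber and dwarf peculiar velocities can be compared.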
Therefore, both the CGM gas and galaxies probed in our sample at both small and large $R$ are consistent with being gravitationally bound to M31.\n\nFig.~\\ref{f-v_vs_r_dwarfs} also informs us that the dwarf satellite and CGM gas velocities overlap to a high degree but do not follow identical distributions. The mean and standard deviation of the M31 velocities are $+34.2 \\pm 110.0$ ${\\rm km\\,s}^{-1}$\\ for the dwarfs and $+36.6 \\pm 68.0$ ${\\rm km\\,s}^{-1}$\\ for the CGM gas. There is therefore a slight asymmetry favoring more positive peculiar motions. A simple two-sided KS test of the two samples rejects the null hypothesis that the distributions are the same at the 95\\% confidence level ($p=0.04$). Indeed, while the two distributions overlap and the means are similar, the velocity dispersion of the dwarfs is larger than that of the QSO absorbers. For the QSO absorbers, all the components but one have their M31 velocities in the interval $-80 \\le v_{\\rm M31} \\le +160$ ${\\rm km\\,s}^{-1}$, but 9\/32 (28\\%) of the dwarfs are outside that range. Four of the dwarfs are in the range $+160 < v_{\\rm M31} \\le +210$ ${\\rm km\\,s}^{-1}$, a velocity interval that cannot be probed in absorption owing to foreground MW contamination. The other five dwarfs have $v_{\\rm M31}< -80$ ${\\rm km\\,s}^{-1}$, while only one out of 46 \\ion{Si}{3}\\ components (2\\%) has $v_{\\rm M31}< -80$ ${\\rm km\\,s}^{-1}$. Both the small fraction of dwarfs at $v_{\\rm M31}> +160$ ${\\rm km\\,s}^{-1}$\\ and $v_{\\rm M31}< -80$ ${\\rm km\\,s}^{-1}$\\ and the even smaller fraction of absorbers at $v_{\\rm M31}< -80$ ${\\rm km\\,s}^{-1}$\\ suggest that there is no important population of absorbers at the inaccessible velocities $v_{\\rm M31}> +160$ ${\\rm km\\,s}^{-1}$\\ (see also \\S\\ref{s-ms}).\n\n\\begin{figure*}[tbp]\n\\epsscale{1}\n\\plotone{f7.pdf}\n\\caption{Locations of the QSOs ({\\it squares}) and dwarfs ({\\it circles}) relative to M31 (see Fig.~\\ref{f-map}). 
The data are color-coded according to the relative velocities of the detected \\ion{Si}{3}\\ (multiple colors in a symbol indicate multiple detected components) or the dwarfs. The black circles centered on the dwarfs indicate their individual $R_{200}$. \n\\label{f-velmap-dwarfs}}\n\\end{figure*}\n\n\\subsection{The Associations of Absorbers with Dwarf Satellites}\\label{s-dwarfs-cgm}\n\nUsing the information from Table~\\ref{t-dwarf}, we cross-match the sample of dwarf galaxies and QSOs to determine the QSO sightlines that pass within a dwarf's $R_{200}$ radius. There are 11 QSOs (with 58 \\ion{Si}{3}\\ components) within $R_{200}$ of 16 dwarfs. In Table~\\ref{t-xmatch-dwarf}, we summarize the results of this cross-match. In Fig.~\\ref{f-velmap-dwarfs}, we show the map of the QSO and dwarf locations in our survey, where the M31 velocities of the \\ion{Si}{3}\\ components and dwarfs are color-coded on the same scale and the circles around each dwarf represent its $R_{200}$ radius. \n\nTable~\\ref{t-xmatch-dwarf} and Fig.~\\ref{f-velmap-dwarfs} show that several absorbers can be found within $R_{200}$ of several dwarfs when \\ion{Si}{3}\\ is used as the gas tracer. For example, the two components observed in \\ion{Si}{3}\\ toward Zw535.012 are found within $R_{200}$ of 6 dwarf galaxies. In Table~\\ref{t-xmatch-dwarf}, we also list the escape velocity ($v_{\\rm esc}$) at the observed projected distance of the QSO relative to the dwarf as well as the velocity separation between the QSO absorber and the dwarf ($\\delta v \\equiv |v_{\\rm M31, Si\\,III} - v_{\\rm M31,dwarf}|$). So far we have not considered the velocity separation $\\delta v$ between the dwarf and the absorber, but if $\\delta v \\gg v_{\\rm esc}$, the gas traced by the absorber is unlikely to be bound to the dwarf galaxy even if $\\Delta_{\\rm sep} = R\/R_{200}<1$. \n\nIf we set $\\delta v < v_{\\rm esc}$ as a condition, the most massive galaxies with masses $> 3.9\\times 10^{10}$ M$_\\sun$ are removed from the sample. 
Applying a cross-match where $\\delta v < v_{\\rm esc}$ and $\\Delta_{\\rm sep}<1$ can reduce the degeneracy between different galaxies, especially if one excludes the four most massive galaxies. For example, RXS\\_J0118.8+3836 is located at $0.40 R_{200}$ and $0.72 R_{200}$ from Andromeda\\,XV and Andromeda\\,XXIII, but only in the latter case is $\\delta v \\ll v_{\\rm esc}$ (and in the former case $\\delta v > v_{\\rm esc}$), making the two components observed toward RXS\\_J0118.8+3836 more likely associated with Andromeda\\,XXIII.\n\nSeveral sightlines therefore pass within $\\Delta_{\\rm sep}<1$ of a dwarf galaxy and show absorption at velocities within the escape velocity. This gas could be gravitationally bound to the dwarf. However, there are also 5 absorbers where $\\delta v 89\\%$ level. In case 2, the mean and dispersion are $[$\\ion{O}{1}\/Si$]<-0.74 \\pm 0.51$, so that the ionization fraction is still $>81\\%$ on average. These are upper limits because typically \\ion{O}{1}\\ is not detected. However, even in the 5 cases where \\ion{O}{1}\\ is detected, 4\/5 are upper limits too because \\ion{Si}{3}\\ is saturated and hence only a lower limit on the column density of Si can be derived. In that case, $[$\\ion{O}{1}\/Si$]$ ranges from $<-1.78$ to $-0.43$ (or to $<-0.85$ if we remove the absorber where the \\ion{O}{1}\\ absorption is just detected at the $2\\sigma$ level), i.e., even when \\ion{O}{1}\\ is detected at more than $3\\sigma$, the gas is still ionized at levels $>86\\%$--$98\\%$.\n\nThe combination of \\ion{Si}{2}, \\ion{Si}{3}, and \\ion{Si}{4}\\ allows us to probe gas within the ionization energy range 8--45 eV, i.e., the bulk of the photoionized CGM of M31. The high ions, \\ion{C}{4}\\ and \\ion{O}{6}, have ionization energies 48--85 eV and 114--138 eV, respectively, and are not included in the above calculation. 
The column density of H can be directly estimated in the 8--45 eV ionization energy range from the observations via $\\log N_{\\rm H} = \\log N_{\\rm Si} - \\log ({\\rm Si\/H})_\\sun - \\log Z\/Z_\\sun$. As we show below, $N_{\\rm Si}$ varies strongly with $R$, with values $\\log N_{\\rm Si}\\ga 13.7$ at $R\\la 100$ kpc and $\\log N_{\\rm Si} \\la 13.3$ at $R\\ga 100$ kpc, which implies $N_{\\rm H} \\ga 1.5\\times 10^{18} (Z\/Z_\\sun)^{-1}$ cm$^{-2}$\\ and $\\la 0.6\\times 10^{18} (Z\/Z_\\sun)^{-1}$ cm$^{-2}$, respectively. For the high ions, an ionization correction needs to be added, and, e.g., for \\ion{O}{6}, $\\log N_{\\rm H} = \\log N_{\\rm O\\,VI} - \\log ({\\rm O\/H})_\\sun - \\log Z\/Z_\\sun - \\log f^i_{\\rm O\\,VI}$, where $f^i_{\\rm O\\,VI}\\la 0.2$ is the ionization fraction of \\ion{O}{6}, which peaks around 20\\% in any of the ionization models \\citep[e.g.][]{oppenheimer13,gnat07,lehner14}. As discussed below, there is little variation of $N_{\\rm O\\,VI}$\\ with $R$, and it is always such that $\\log N_{\\rm O\\,VI} \\ga 14.4$--$14.9$ within 300 kpc from M31, which implies $N_{\\rm H} \\ga (2.5$--$8.1)\\times 10^{18} (Z\/Z_\\sun)^{-1}$ cm$^{-2}$. Therefore the CGM of M31 is not only mostly ionized (often at levels close to 100\\%), but it also contains a substantial fraction of highly ionized gas with higher column densities than the weakly photoionized gas.\n\n\n\\subsection{Ion Column Densities versus $R$}\\label{s-n-vs-r}\n\n\\begin{figure*}[tbp]\n\\epsscale{1}\n\\plotone{f8.pdf}\n\\caption{Total column densities of the ions as a function of $R$ with ionization potential increasing from top to bottom panels. The column densities are shown in logarithmic values (with the same relative vertical scale of about 3 dex in each panel) on the left and in linear units on the right. Blue circles are detections, while gray circles with downward arrows are non-detections. A blue circle with an upward arrow denotes that the absorption is saturated, resulting in a lower limit. The components associated with the MS have been removed. 
The dashed vertical line marks $R_{200}$. Note how \\ion{Si}{3}\\ and \\ion{O}{6}\\ are detected at high frequency well beyond $R_{200}$.\n\\label{f-coltot-vs-rho}}\n\\end{figure*}\n\nIn Fig.~\\ref{f-coltot-vs-rho}, we show the logarithmic (left) and linear (right) values of the total column densities of the components associated with M31 for \\ion{C}{2}, \\ion{Si}{2}, \\ion{Si}{3}, \\ion{Si}{4}, \\ion{C}{4}, and \\ion{O}{6}\\ as a function of the projected distances from M31. Gray data points are upper limits, while blue data with upward arrows are lower limits owing to saturated absorption. Overall, the column densities decrease with increasing impact parameter. As the ionization potentials of the ions increase, the decrease in the column densities becomes shallower; \\ion{O}{6}\\ is almost flat. These conclusions were already noted in \\citetalias{lehner15}, but now that the region from 50 to 350 kpc is filled with data, these trends are even more striking. However, our new sample also reveals an additional feature: a remarkable change around $R_{200} \\simeq 230$ kpc, most notable for the low and intermediate ions. Low column densities of \\ion{C}{2}, \\ion{Si}{2}, \\ion{Si}{3}, and \\ion{Si}{4}\\ are observed at all $R$, but strong absorption in these ions is observed solely at $R\\la R_{200}$. The frequency of strong absorption is also larger at $R\\la 0.6 R_{200}$ than at larger $R$ for all ions. 
A similar pattern is observed for \\ion{C}{4}\\ and \\ion{O}{6}, but the difference between the low and high column densities is smaller: for \\ion{C}{2}, \\ion{Si}{2}, \\ion{Si}{3}, and \\ion{Si}{4}, the difference between low and high column densities is a factor $\\ga 5$--10, while it drops to a factor 2--4 for \\ion{C}{4}, and possibly even less for \\ion{O}{6}.\n\n\nIn Fig.~\\ref{f-col-vs-rho}, we show the logarithmic values of the column densities derived from the individual components for \\ion{C}{2}, \\ion{Si}{2}, \\ion{Si}{3}, \\ion{Si}{4}, \\ion{C}{4}, and \\ion{O}{6}\\ as a function of the projected distances from M31. Similar trends to those in Fig.~\\ref{f-coltot-vs-rho} are observed, but Fig.~\\ref{f-col-vs-rho} additionally shows that 1) more complex velocity structures (i.e., multiple velocity components) are predominantly observed at $R\\la R_{200}$, and 2) factor $\\ga 2$--10 changes in the column densities are observed across multiple velocity components along a given sightline.\n\n\n\\begin{figure}[tbp]\n\\epsscale{1.2}\n\\plotone{f9.pdf}\n\\caption{Logarithm of the column densities for the individual components of various ions (low to high ions from top to bottom) as a function of the projected distances from M31 of the background QSOs. Blue circles are detections, while gray circles with downward arrows are non-detections. A blue circle with an upward arrow denotes that the absorption is saturated, resulting in a lower limit. The components associated with the MS have been removed. The dashed vertical line shows the $R_{200}$ location. The same relative vertical scale of about 3 dex is used in each panel for comparison between the different ions. \n\\label{f-col-vs-rho}}\n\\end{figure}\n\n\n\\subsection{Silicon Column Densities versus $R$}\\label{s-nsi-vs-r}\n\nWith \\ion{Si}{2}, \\ion{Si}{3}, and \\ion{Si}{4}, we can estimate the total column density of Si within the ionization energy range 8--45 eV without any ionization modeling. 
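The silicon summation, with the two bracketing cases for non-detections referred to in the text and in Fig.~\ref{f-coltotsi-vs-rho}, can be sketched as follows (a Python sketch; our labeling of "case 1" as upper limits contributing nothing and "case 2" as upper limits contributing at their limiting values is an assumed reading of the text):

```python
import numpy as np

def total_si(n_si2, n_si3, n_si4, is_upper=(False, False, False)):
    """Total Si column N(Si II) + N(Si III) + N(Si IV), in linear units.
    Returns the bracketing pair:
      case 1 -- upper limits contribute nothing (lower bound);
      case 2 -- upper limits contribute at their limiting values."""
    cols = np.array([n_si2, n_si3, n_si4], float)
    up = np.array(is_upper, bool)
    case1 = cols[~up].sum()
    case2 = cols.sum()
    return case1, case2
```

The spread between the two cases corresponds to the length of the vertical ticked bars in Fig.~\ref{f-coltotsi-vs-rho}.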
Gas in this range should constitute the bulk of the cool photoionized CGM of M31 (see \\S\\ref{s-ionization}). In Fig.~\\ref{f-coltotsi-vs-rho}, we show the total column density of Si (estimated following \\S\\ref{s-ionization}) against the projected distance $R$ from M31. The vertical ticked bars in Fig.~\\ref{f-coltotsi-vs-rho} indicate data with some upper limits, and the length of each vertical bar represents the range of $N_{\\rm Si}$ values allowed between cases 1 and 2 (see \\S\\ref{s-ionization}). \n\n\n\\begin{figure}[tbp]\n\\epsscale{1.2}\n\\plotone{f10.pdf}\n\\caption{Total column densities of Si (i.e., $N_{\\rm Si} = N_{\\rm Si\\,II} + N_{\\rm Si\\,III} + N_{\\rm Si\\,IV}$) as a function of the projected distances from M31 of the background QSOs. The vertical ticked bars show the range of values allowed depending on whether the upper limit of a given Si ion is negligible or not. The lower limits have upward arrows, and the upper limits are flagged using downward arrows. The orange, green, and blue curves are the H, SPL, and GP models fitted to the data, respectively (see text for details regarding how censoring is treated in each model). The dotted and dashed curves correspond to models where the lower limits at $R<50$ kpc are increased by 0.3 or 0.6 dex. The blue areas correspond to the dispersion derived from the GP models (see Appendix~\\ref{a-model} for more detail).\n\\label{f-coltotsi-vs-rho}}\n\\end{figure}\n\nFig.~\\ref{f-coltotsi-vs-rho} reinforces the conclusions drawn from the individual low ions in Figs.~\\ref{f-coltot-vs-rho} and \\ref{f-col-vs-rho}. Overall there is a decrease of the column density of Si at larger $R$. This decrease has a much stronger gradient between the inner region of the M31 CGM ($R\\la 25$ kpc) and about $R\\sim 100$--$150$ kpc than at $R\\ga 150$ kpc. $N_{\\rm Si}$ changes by a factor $>5$--$10$ between about 25 kpc and 150 kpc, while it changes by a factor $\\la 2$ between 150 kpc and 300 kpc. 
The scatter in $N_{\\rm Si}$ is also larger in the inner regions of the CGM than beyond $\\ga 120$--150 kpc.\n\nTo model this overall trend (which is also useful to determine the baryon and metal content of the CGM, see \\S\\ref{s-mass}), we consider three models: a hyperbolic (H) model, a single power-law (SPL) model, and a Gaussian process (GP) model. We refer the reader to Appendix~\\ref{a-model}, where we fully explain the modeling process and how lower and upper limits are accounted for in the modeling. Fig.~\\ref{f-coltotsi-vs-rho} shows that these three models greatly overlap. The non-parametric GP model overlaps more with the SPL model than with the H fit in the range $250\\la R\\la 400$ kpc and at $R<90$ kpc (especially for the high H fit, see Fig.~\\ref{f-coltotsi-vs-rho}). While there are some differences between these models (and we will explore in \\S\\ref{s-mass} how these affect the mass estimates of the CGM), they all further confirm the strong evolution of the column density of Si with $R$ between $\\la 25$ and $90$--150 kpc and a much shallower evolution with $R$ beyond 200 kpc. In \\S\\ref{s-mass}, we use these models to constrain the metal and baryon masses of the cool CGM gas probed by \\ion{Si}{2}, \\ion{Si}{3}, and \\ion{Si}{4}.\n\n\\subsection{Covering Factors}\\label{s-fc}\nAs noted in \\S\\ref{s-n-vs-r}, the diagnostic ions behave differently with $R$ in a way that probably reflects the underlying physical conditions. For example, \\ion{Si}{2}\\ has a high detection rate within $R<100$ kpc, a sharp drop beyond $R>100$ kpc, and a total absence at $R\\ga 240$ kpc (see Figs.~\\ref{f-coltot-vs-rho} and \\ref{f-col-vs-rho}). On the other hand, \\ion{Si}{3}\\ and \\ion{O}{6}\\ are mostly detected at all $R$, but the column densities of \\ion{Si}{3}\\ fall significantly with $R$ while \\ion{O}{6}\\ remains relatively flat. 
In this section, we quantify further the detection rates, or the covering factors, for each ion.\n\n\\begin{figure*}[tbp]\n\\epsscale{1}\n\\plotone{f11.pdf}\n\\caption{Cumulative covering factors for impact parameters less than $R$ without ({\\it left}) and with ({\\it right}) some threshold cut on the column densities (for Si ions, $\\log N_{\\rm th} = 13$; for C ions, $\\log N_{\\rm th} = 13.8$; for \\ion{O}{6}, $\\log N_{\\rm th} = 14.5$, and for \\ion{H}{1}, $\\log N_{\\rm th} = 17.6$, see text for more detail and \\citetalias{howk17}). Confidence intervals (vertical bars) are at the 68\\% level and data points are the median values. On the left panel, the solid lines are polynomial fits to the median values of \\ensuremath{f_{\\rm c}}\\ for \\ion{Si}{3}, \\ion{O}{6}, \\ion{C}{2}--\\ion{C}{4}, \\ion{Si}{2}--\\ion{Si}{4}\\ (for the ion pairs, taking the mean value of \\ensuremath{f_{\\rm c}}\\ between the two ions at a given $R$). On the right panel, the orange line is a polynomial fit to the mean values of \\ensuremath{f_{\\rm c}}\\ for \\ion{C}{2}, \\ion{C}{4}, \\ion{Si}{2}, \\ion{Si}{3}, and \\ion{Si}{4}\\ while the gray line is a polynomial fit to the median values of \\ensuremath{f_{\\rm c}}\\ for \\ion{O}{6}.\n\\label{f-fccum-vs-rho}}\n\\end{figure*}\n\nTo calculate the covering factors of the low and high ions, we follow the methodology described in \\citetalias{howk17} for \\ion{H}{1}\\ by assuming a binomial distribution. We assess the likelihood function for values of the covering factor given the number of detections against the total sample, i.e., the number of targets within a given impact parameter range (see \\citealt{cameron11}). 
As demonstrated by \\citet{cameron11}, the normalized likelihood function for calculating the Bayesian confidence intervals on a binomial distribution with a non-informative (uniform) prior follows a $\\beta$-distribution.\n\nIn Fig.~\\ref{f-fccum-vs-rho}, we show the {\\em cumulative} covering factors (\\ensuremath{f_{\\rm c}}) for the various ions, where each point represents the covering factor for all impact parameters less than the given value of $R$. The vertical bars are 68\\% confidence intervals. As discussed in \\S\\S\\ref{s-n-vs-r}, \\ref{s-nsi-vs-r}, for all the ions but \\ion{O}{6}, the highest column densities are only observed at $R\\la 100$--150 kpc, with a sharp decrease beyond that. For the covering factors, we therefore consider 1) the entire sample (most of the upper limits---non-detections---are at the level of the lowest column densities of the detected absorption, so this is an adequate approach), and 2) the sample where we set a threshold column density ($N_{\\rm th}$) to be included in the sample. In the left panel of Fig.~\\ref{f-fccum-vs-rho}, we show the first case, while in the right panel, we focus on the strong absorbers only. For the Si ions, we use $\\log N_{\\rm th} = 13$; for the C ions, $\\log N_{\\rm th} = 13.8$; for \\ion{O}{6}, $\\log N_{\\rm th} = 14.5$. These threshold column densities are chosen based on Fig.~\\ref{f-coltot-vs-rho}. We also show in the right panel of Fig.~\\ref{f-fccum-vs-rho} the results for the \\ion{H}{1}\\ emission from \\citetalias{howk17}.\n\nThese results must be interpreted in light of the fact that the intrinsic strength of the diagnostic lines varies by ion. The products of the oscillator strength and wavelength, $f \\lambda$, of these different ions are listed in Table~\\ref{t-strength} along with the solar abundances of these elements. The optical depth is such that $\\tau \\propto f\\lambda N $ (see \\S\\ref{s-aod}) and $f\\lambda $ is a good representation of the strength of a given transition. 
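These strength comparisons can be made concrete with a short numerical sketch; the $f\lambda$ values below are quoted from memory from standard atomic-data compilations and should be treated as approximate (Table~\ref{t-strength} contains the adopted values):

```python
# Approximate f*lambda products for the strongest transition of each ion
# (oscillator strength f times rest wavelength in Angstrom; the numbers
# here are illustrative, not the values adopted in Table t-strength).
F_LAMBDA = {
    "SiIII 1206": 1.63 * 1206.5,
    "SiII 1260": 1.18 * 1260.4,
    "SiIV 1393": 0.513 * 1393.8,
    "OVI 1031": 0.133 * 1031.9,
}

# Since tau is proportional to f*lambda*N, the ratio of f*lambda products
# gives the relative sensitivity of two transitions at fixed column density.
strength_si3_vs_ovi = F_LAMBDA["SiIII 1206"] / F_LAMBDA["OVI 1031"]

# The ~14x advantage of Si III over O VI is counter-balanced by the higher
# solar abundance of O: 12 + log(O/H) = 8.69 versus 12 + log(Si/H) = 7.51.
abundance_o_vs_si = 10 ** (8.69 - 7.51)
```

With these approximate inputs, the strength ratio evaluates to $\simeq 14$ and the abundance ratio to $\simeq 15$, illustrating why \ion{Si}{3}\ and \ion{O}{6}\ end up with comparable detectability despite their very different transition strengths.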
For the Si ions, \\ion{Si}{3}\\ has the strongest transition, a factor 2.7--5.5 stronger than \\ion{Si}{4}\\ and a factor 1.3--5.7 stronger than \\ion{Si}{2}\\ (the weaker \\ion{Si}{2}\\ $\\lambda$1526 is sometimes used but mostly to better constrain the column density of \\ion{Si}{2}\\ if the absorption is strong). \\ion{Si}{2}\\ and \\ion{Si}{4}\\ have more comparable strengths, as do \\ion{C}{2}\\ and \\ion{C}{4}. Comparing different species, $(f\\lambda)_{\\rm Si\\,III} \\simeq 14.4 (f\\lambda)_{\\rm O\\,VI}$, but this is counter-balanced by oxygen being 15 times more abundant than silicon (and a similar conclusion applies comparing \\ion{Si}{3}\\ with \\ion{C}{2}\\ or \\ion{C}{4}).\n\nWith that in mind, we first consider the left panel of Fig.~\\ref{f-fccum-vs-rho}, i.e., where all the absorbers irrespective of their absorption strengths are taken into account to estimate the cumulative covering factors. We fitted four low-degree polynomials to the data: one each for \\ion{Si}{3}\\ and \\ion{O}{6}, and one each for the pairs \\ion{C}{2}--\\ion{C}{4}\\ and \\ion{Si}{2}--\\ion{Si}{4}, which appear to follow each other reasonably well. For \\ion{C}{2}--\\ion{C}{4}\\ and \\ion{Si}{2}--\\ion{Si}{4}, we fit the mean covering factors of each ionic pair. For \\ion{O}{6}, we only fitted data beyond 200 kpc owing to the smaller sample size (there are only 3 data points within 200 kpc and 11 in total, see Fig.~\\ref{f-coltot-vs-rho}). It is striking how the cumulative covering factors of \\ion{Si}{3}\\ and \\ion{O}{6}\\ vary with $R$ quite differently from each other and from the other ions. The cumulative covering factor of \\ion{Si}{3}\\ increases with $R$, reaches a maximum somewhere between 250 and 300 kpc, and then decreases, but still remains much higher than \\ensuremath{f_{\\rm c}}\\ of \\ion{C}{2}--\\ion{C}{4}\\ or \\ion{Si}{2}--\\ion{Si}{4}. 
The cumulative covering factor of \\ion{O}{6}\\ monotonically increases with $R$ up to $R\\sim 569$ kpc. In contrast, while the cumulative covering factors of \\ion{C}{2}--\\ion{C}{4}\\ and \\ion{Si}{2}--\\ion{Si}{4}\\ are offset from each other, they both monotonically decrease with $R$. There seems to be a plateau in the \\ion{C}{2}--\\ion{C}{4}\\ covering factor beyond 400 kpc, which is not observed for \\ion{Si}{2}\\ or \\ion{Si}{4}.\n\nTurning to the right panel of Fig.~\\ref{f-fccum-vs-rho}, where we show \\ensuremath{f_{\\rm c}}\\ for a given column density threshold that changes with species (see above), the relation between \\ensuremath{f_{\\rm c}}\\ and $R$ is quite different. For all the ions, the cumulative covering factors monotonically decrease with increasing $R$. For \\ion{C}{2}, \\ion{C}{4}, \\ion{Si}{2}, \\ion{Si}{3}, and \\ion{Si}{4}, the covering factors are essentially the same within $1 \\sigma$, and the orange line in Fig.~\\ref{f-fccum-vs-rho} shows a second-degree polynomial fit to the mean values of \\ensuremath{f_{\\rm c}}\\ between these different ions. 
Ignoring data at $R<200$ kpc owing to the small sample size, \\ion{O}{6}\\ has a similar evolution of \\ensuremath{f_{\\rm c}}\\ with $R$, but overall \\ensuremath{f_{\\rm c}}\\ is tentatively a factor of $\\sim$1.5 larger than for the other ions at any $R$.\n\nThe contrast between the two panels of Fig.~\\ref{f-fccum-vs-rho} strongly suggests that the CGM of M31 has three main populations of absorbers: 1) strong absorbers that are found mostly at $R \\la 100$--$150$ kpc ($0.3$--$0.5 \\ensuremath{R_{\\rm vir}}$), probing the denser regions and multiple gas phases (singly to highly ionized gas) of the CGM; 2) weak absorbers probing the diffuse CGM, traced principally by \\ion{Si}{3}\\ (but also observed in higher ions and more rarely in \\ion{C}{2}), that are found at any surveyed $R$ but are more frequent at $R\\la \\ensuremath{R_{\\rm vir}}$; and 3) a hotter, more diffuse CGM probed by \\ion{O}{6}, which has the unique property among the surveyed ions that its column density remains largely invariant with radius in the M31 CGM.\n\n\\subsection{Ion ratios and their Relation with $R$}\\label{s-ratio-vs-r}\n\nIn \\S\\ref{s-ionization}, we show that the ratio of \\ion{O}{1}\\ to Si ions provides a direct estimate of the ionization fraction of the CGM gas of M31. Using ratios of the main ions studied here (\\ion{C}{2}, \\ion{C}{4}, \\ion{Si}{2}, \\ion{Si}{3}, \\ion{Si}{4}, \\ion{O}{6}), we can further constrain the ionization and physical conditions in the CGM of M31 and how they may change with $R$. To estimate the ionic ratios, we consider the component analysis of the absorption profiles, i.e., we compare the column densities estimated over the same velocity range. However, coincident velocities do not necessarily mean that they probe the same gas, especially if their ionization potentials are quite different (such as for \\ion{C}{2}\\ and \\ion{C}{4}). 
In Fig.~\\ref{f-ratio-vs-rho}, we show the results for several ion ratios as a function of $R$.\n\n\n\\begin{figure}[tbp]\n\\epsscale{1.2}\n\\plotone{f12.pdf}\n\\caption{Logarithmic column density ratios of different ions as a function of the projected distances from M31 of the background QSOs. The column densities in individual components are compared to estimate the ionic ratios. Blue symbols indicate that both ions in the ratio are detected. Blue down or up arrows indicate that the absorption is saturated for the ion in the denominator or numerator of the ratio. Gray symbols indicate that one of the ions in the ratio is not detected at $>2\\sigma$. The components associated with the MS have again been removed. The dashed vertical lines mark $R_{200}$.\n\\label{f-ratio-vs-rho}}\n\\end{figure}\n\n\n\\subsubsection{The \\ion{Si}{2}\/\\ion{Si}{3}\\ and \\ion{Si}{4}\/\\ion{Si}{3}\\ ratios}\\label{s-si2-si3-si4}\nThe \\ion{Si}{2}\/\\ion{Si}{3}\\ and \\ion{Si}{4}\/\\ion{Si}{3}\\ ratios are particularly useful because they trace different ionization levels independently of relative elemental abundances. The ionization potentials for these ions are 8.1--16.3 eV for \\ion{Si}{2}, 16.3--33.5 eV for \\ion{Si}{3}, and 33.5--45.1 eV for \\ion{Si}{4}. The top two panels of Fig.~\\ref{f-ratio-vs-rho} show the ratios \\ion{Si}{2}\/\\ion{Si}{3}\\ and \\ion{Si}{4}\/\\ion{Si}{3}\\ as a function of $R$. 
In both panels, there are many upper limits and there is no evidence of any correlation with $R$, except perhaps for the \\ion{Si}{2}\/\\ion{Si}{3}\\ ratio, where the only detections of \\ion{Si}{2}\\ are at $R\\la R_{200}$ (see also Fig.~\\ref{f-col-vs-rho}).\n\nWith so many upper limits, we use the Kaplan-Meier estimator (see \\S\\ref{s-abund}) to estimate the mean of these ratios: $\\langle \\log N_{\\rm Si\\,II}\/N_{\\rm Si\\,III}\\rangle = (-0.50 \\pm 0.04) \\pm 0.23$ (mean, error on the mean from the Kaplan-Meier estimator, and standard deviation for 44 data points with 38 upper limits) and $\\langle \\log N_{\\rm Si\\,IV}\/N_{\\rm Si\\,III}\\rangle = (-0.49 \\pm 0.07) \\pm 0.20$ (43 data points with 32 upper limits). There are only 4\/44 components where $ \\log N_{\\rm Si\\,II}\/N_{\\rm Si\\,III} \\simeq 0$ and 8\/43 where $ \\log N_{\\rm Si\\,IV}\/N_{\\rm Si\\,III} \\ga 0$. In the latter cases, \\ion{Si}{4}\\ could be produced by another mechanism such as collisional ionization. Among the three Si ions in our survey, \\ion{Si}{3}\\ is the dominant ion at any $R$ from M31 in the ionizing energy range 8.1--45.1 eV. Ions (of any element) with ionizing energies in the range 16.3--33.5 eV are therefore expected to be the dominant ions, at least for processes that are dominated by photoionization. \n\nThe \\ion{Si}{2}\/\\ion{Si}{3}\\ ratio has previously been used to constrain the properties of the photoionized gas. According to photoionization modeling produced by \\citet{oppenheimer18a}, an ionic ratio of $\\langle \\log N_{\\rm Si\\,II}\/N_{\\rm Si\\,III}\\rangle = (-0.50 \\pm 0.04) \\pm 0.23$ would imply a gas density in the range $-3 \\la \\log n_{\\rm H} \\la -2.5 $ and a temperature of the gas around $10^4$ K (see Fig.~16 in \\citealt{oppenheimer18a}). 
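A minimal numerical sketch of the "flipped" Kaplan-Meier mean used above for samples containing upper limits is given below; the input values are hypothetical, not the survey measurements:

```python
import numpy as np

def km_mean_with_upper_limits(values, is_limit):
    """Mean of left-censored data (detections plus upper limits) via the
    'flipped' Kaplan-Meier estimator: flip the data so upper limits become
    right-censored, fit KM, then integrate the survival curve.
    Note: this is the restricted mean, truncated at the largest point."""
    x = np.asarray(values, dtype=float)
    limit = np.asarray(is_limit, dtype=bool)
    flip = x.max() + 1.0              # arbitrary flip point above all data
    y = flip - x                      # x <= U becomes y >= flip - U
    event = ~limit                    # detections are the KM "events"
    order = np.lexsort((~event, y))   # sort by y, events first on ties
    y, event = y[order], event[order]
    at_risk = len(y) - np.arange(len(y))
    factors = np.where(event, 1.0 - 1.0 / at_risk, 1.0)
    surv = np.cumprod(factors)        # survival just after each point
    # Restricted mean of y = area under the survival curve
    times = np.concatenate(([0.0], y))
    s_vals = np.concatenate(([1.0], surv))
    mean_y = np.sum(np.diff(times) * s_vals[:-1])
    return flip - mean_y

# Hypothetical log ratios: one upper limit and two detections
mean_ratio = km_mean_with_upper_limits([1.0, 2.0, 4.0], [True, False, False])
```

With no censored points, the estimator reduces to the ordinary sample mean, which is a convenient sanity check on any implementation.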
\n\n\\subsubsection{The \\ion{C}{2}\/\\ion{C}{4}\\ ratio}\\label{s-c2-c4}\nFor the \\ion{C}{2}\/\\ion{C}{4}\\ ratio, the ionizing energy ranges are well separated, with 11.3--24.3 eV for \\ion{C}{2}\\ and 47.9--64.4 eV for \\ion{C}{4}. In fact, with an ionization potential above the \\ion{He}{2}\\ ionization edge at 54.4 eV, \\ion{C}{4}\\ can be produced not just by photoionization but also by collisional ionization. Therefore \\ion{C}{2}\\ and \\ion{C}{4}\\ are unlikely to probe the same ionization mechanisms or be in a gas with the same density. We note that \\ion{C}{2}\\ has ionization energies that overlap with those of \\ion{Si}{3}\\ and are larger than those of \\ion{Si}{2}, which certainly explains the presence of \\ion{C}{2}\\ beyond $R_{200}$ where \\ion{Si}{2}\\ is systematically not detected.\n\nThe third panel of Fig.~\\ref{f-ratio-vs-rho} shows the \\ion{C}{2}\/\\ion{C}{4}\\ ratios. There is again no strong relationship between \\ion{C}{2}\/\\ion{C}{4}\\ and $R$, but $ \\log N_{\\rm C\\,II}\/N_{\\rm C\\,IV} \\ga 0$ is more frequently observed at $R<R_{200}$ than at $R>R_{200}$ (2\/9), consistent with the observation made in \\S\\ref{s-n-vs-r} that the gas becomes more highly ionized as $R$ increases. With the survival analysis (considering the only lower limit as a detection), we find $\\langle \\log N_{\\rm C\\,II}\/N_{\\rm C\\,IV}\\rangle = (-0.21 \\pm 0.11) \\pm 0.40$ (21 data points with 8 upper limits). Considering separately the data at $R<R_{200}$ and at $R>R_{200}$, the mean is higher in the inner region.\n\n\\subsubsection{The \\ion{C}{4}\/\\ion{Si}{4}\\ ratio}\\label{s-c4-si4}\nFor the \\ion{C}{4}\/\\ion{Si}{4}\\ ratio, different species are compared, but as we discuss in \\S\\ref{s-abund}, the relative abundances of C and Si are consistent with the solar ratio owing to little evidence of any strong dust depletion or nucleosynthesis effects, i.e., these effects should not affect the observed ratio of \\ion{C}{4}\/\\ion{Si}{4}. \\ion{Si}{4}\\ and \\ion{C}{4}\\ have nearly adjacent ionization energies, 33.5--45.1 eV and 47.9--64.4 eV, respectively. 
Both photoionization and collisional ionization processes can be important at these ionizing energies, but if $\\log N_{\\rm C\\,IV}\/N_{\\rm Si\\,IV}>0$, then the ionization from hot stars is unimportant (see Fig.~13 in \\citealt{lehner11b}), which is nearly always the case, as illustrated in Fig.~\\ref{f-ratio-vs-rho}. A harder photoionizing spectrum or collisional ionization must be at play to explain the origin of these ions.\n\nFig.~\\ref{f-ratio-vs-rho} suggests a moderate correlation between $\\log N_{\\rm C\\,IV}\/N_{\\rm Si\\,IV}$ and $R$. If the two data points beyond 400 kpc are removed (and treating the limits as actual values), a Spearman rank-order test implies a monotonic correlation between $\\log N_{\\rm C\\,IV}\/N_{\\rm Si\\,IV}$ and $R$ with a correlation coefficient $r_{\\rm S} = +0.45$ and $p=0.019$ for the gas at $R<1.2\\ensuremath{R_{\\rm vir}}$. Considering the entire sample, the Spearman rank test yields $r_{\\rm S} = 0.34$ and $p=0.07$. This is again consistent with our earlier conclusion that the gas becomes more highly ionized as $R$ increases. With the survival analysis (considering the 3 upper limits as detections),\\footnote{Whether these 3 upper limits are included or excluded from the sample, the means are essentially the same. } we find $\\langle \\log N_{\\rm C\\,IV}\/N_{\\rm Si\\,IV}\\rangle = (+0.87 \\pm 0.07) \\pm 0.24$ (29 data points with 9 lower limits). This is about a factor of 1.9 (about a $1\\sigma$ difference) larger than the mean derived for the broad \\ion{C}{4}\\ and \\ion{Si}{4}\\ components in the Milky Way disk and low halo \\citep{lehner11b}.\n\n\\subsubsection{The \\ion{C}{4}\/\\ion{O}{6}\\ ratio}\\label{s-c4-o6}\nFinally, in the last panel of Fig.~\\ref{f-ratio-vs-rho}, we show the \\ion{C}{4}\/\\ion{O}{6}\\ ratio as a function of $R$. As for the \\ion{C}{4}\/\\ion{Si}{4}\\ ratio, different species are compared, and for the same reasons, the relative dust depletion or nucleosynthesis effects should be negligible. 
With 113.9--138.1 eV ionizing energies needed to produce \\ion{O}{6}, this is the highest ion in our sample, and, as we demonstrated in the previous sections, the \\ion{O}{6}\\ properties (covering factor and column density as a function of $R$) are quite unique. Not surprisingly, Fig.~\\ref{f-ratio-vs-rho} does not reveal any relation between $\\log N_{\\rm C\\,IV}\/N_{\\rm O\\,VI}$ and $R$.\n\nIf we treat the two lower limits as detections, then the survival analysis yields $\\langle \\log N_{\\rm C\\,IV}\/N_{\\rm O\\,VI}\\rangle = (-0.93 \\pm 0.11) \\pm 0.32$ (16 data points with 6 upper limits). The mean and range of $\\log N_{\\rm C\\,IV}\/N_{\\rm O\\,VI}$ are smaller than observed in the Milky Way disk and low halo, where the full range varies from $-1$ to $+1$ dex (see, e.g., Fig.~14 of \\citealt{lehner11b}). This demonstrates that the highly ionized gas in the 113.9--138.1 eV range is much more important than that in the 47.9--64.4 eV range at any $R$ in the M31 CGM.\n\n\\subsection{Metal and Baryon Mass of the M31 CGM}\\label{s-mass}\n\nWith a better understanding of the column density variation with $R$, we can estimate with more confidence the metal and baryon mass of the M31 CGM than in our original survey, where we had very little information between 50 and 300 kpc \\citepalias{lehner15}. The metal mass can be directly estimated from the column densities of the metal ions. With the silicon ions, we have information on its three dominant ionization stages in the $T<7\\times 10^4$ K ionized gas (ionizing energies in the range 8--45 eV, see \\S\\ref{s-nsi-vs-r}), so we can obtain a directly measured metal mass without any major ionization corrections. 
Following \\citetalias{lehner15} (and see also \\citealt{peeples14}), the metal mass of the cool photoionized CGM is\n$$\nM^{\\rm cool}_{\\rm Z} = 2\\pi\\, \\mu_{\\rm Si}^{-1}\\, m_{\\rm Si}\\, \\int R \\,N_{\\rm Si}(R) \\,dR\\,,\n$$\nwhere $\\mu_{\\rm Si}=0.064$ is the solar mass fraction of metals in silicon (i.e., $12+\\log ({\\rm Si\/H})_\\sun = 7.51$ and $Z_\\sun = 0.0142$ from \\citealt{asplund09}), $m_{\\rm Si} = 28 m_{\\rm p}$, and for $N_{\\rm Si}(R)$ we use the hyperbolic (``H model'', Eqn.~\\ref{e-colsi-vs-r}), single power-law (``SPL model'', Eqn.~\\ref{e-colsi-vs-r1}), and GP models that we determine in \\S\\ref{s-nsi-vs-r} and Appendix~\\ref{a-model} (see Fig.~\\ref{f-coltotsi-vs-rho}).\n\nA direct method to estimate the total mass is to convert the total observed column density of Si to a total hydrogen column density via $N_{\\rm H} = N_{\\rm H\\,I} + N_{\\rm H\\,II} = N_{\\rm Si}\\, ({\\rm Si}\/{\\rm H})_\\sun^{-1}\\, ({\\rm Z\/Z}_\\sun)^{-1}$. The baryonic mass of the CGM of M31 is then:\n$$\nM^{\\rm cool}_{\\rm g} = 2\\pi\\, m_{\\rm H}\\, \\mu\\, \\ensuremath{f_{\\rm c}} \\, \\Big(\\frac{\\rm Si}{\\rm H}\\Big)_\\sun^{-1} \\Big(\\frac{Z}{Z_\\sun}\\Big)^{-1}\\, \\int R\\, N_{\\rm Si}(R)\\, dR\\,,\n$$\nwhere $\\mu \\simeq 1.4$ (to correct for the presence of He), $m_{\\rm H} = 1.67\\times 10^{-24}$ g is the hydrogen mass, \\ensuremath{f_{\\rm c}}\\ is the covering fraction (which is 1 over the considered radii), and $\\log ({\\rm Si\/H})_\\sun = -4.49$ is the solar abundance of Si. 
Inserting the values for each parameter, $M^{\\rm cool}_{\\rm g}$ can be simply written in terms of $M^{\\rm cool}_{\\rm Z}$: $M^{\\rm cool}_{\\rm g} \\simeq 10^2 (Z\/Z_\\sun)^{-1} M^{\\rm cool}_{\\rm Z}$.\n\nIn Table~\\ref{t-mass}, we summarize the estimated metal mass over different regions of the CGM for the three models of $N_{\\rm Si}(R)$, within $R_{\\rm 200}$ (first entry), within \\ensuremath{R_{\\rm vir}}\\ (second entry), within $1\/2 \\ensuremath{R_{\\rm vir}}$ (third entry), between $1\/2 \\ensuremath{R_{\\rm vir}}$ and \\ensuremath{R_{\\rm vir}}\\ (fourth entry), and within 360 kpc (fifth entry), which corresponds to the radius where at least one of the Si ions is always detected (beyond that, the number of detections drastically plummets). A key difference between the H\/SPL models and the GP model is that the range of values for the H\/SPL models is derived using the low (dotted) and high (dashed) curves in Fig.~\\ref{f-coltotsi-vs-rho} while for the GP models we actually use the standard deviations from the low and high models (i.e., the top and bottom of the shaded blue curve in Fig.~\\ref{f-coltotsi-vs-rho}). Hence it is not surprising that the mass ranges for the GP model are larger. Nevertheless, there is a large overlap between the three models. As the GP results overlap with the other models and provide empirical confidence intervals, we adopt them for the remainder of the paper. At \\ensuremath{R_{\\rm vir}}, the metal and cool gas masses are therefore $(2.0 \\pm 0.5) \\times 10^7$ and $2\\times 10^9\\, (Z\/Z_\\sun)^{-1}$ M$_\\sun$, respectively. The new functional form of $N_{\\rm Si}(R)$ and the treatment of the lower limits explain the factor of 1.4 increase in the metal mass compared to that derived in \\citetalias{lehner15}.\n\nThese masses do not include the more highly ionized gas traced by \\ion{O}{6}\\ or \\ion{C}{4}. 
Even though the sample with \\ion{O}{6}\\ is smaller than that with \\ion{C}{4}, we use \\ion{O}{6}\\ to probe the higher ionization gas phase because, as we show above, the properties of \\ion{O}{6}\\ (column density and covering fraction as a function of $R$) are quite different from all the other ions, including \\ion{C}{4}, which behaves more like the other, lower ions. Furthermore, \\citet{lehner11b}, using 1.5--3 ${\\rm km\\,s}^{-1}$\\ resolution UV spectra, showed that \\ion{C}{4}\\ can probe cool and hotter gas while the profiles of \\ion{N}{5}\\ and \\ion{O}{6}\\ are typically broad and more consistent with hotter gas. Since \\ion{O}{6}\\ is always detected and there is little evidence for variation with $R$ (see Fig.~\\ref{f-coltot-vs-rho}), we can simply use the mean column density $\\log N_{\\rm O\\,VI} = 14.46 \\pm 0.10$ (error on the mean using the survival method for censoring) to estimate the baryon mass assuming a spherical distribution:\n$$\nM^{\\rm warm}_{\\rm g} = \\pi r^2 \\, m_{\\rm H}\\, \\mu\\, \\ensuremath{f_{\\rm c}} \\, \\frac{N_{\\rm O\\,VI}}{f^i_{\\rm O\\,VI}}\\, \\Big(\\frac{\\rm O}{\\rm H}\\Big)_\\sun^{-1}\\, \\Big(\\frac{Z}{Z_\\sun}\\Big)^{-1},\n$$\nwhere the \\ion{O}{6}\\ ionization fraction is $f^i_{\\rm O\\,VI} \\la 0.2$ (see \\S\\ref{s-ionization}) and $\\ensuremath{f_{\\rm c}} =1$ for \\ion{O}{6}\\ at any $R$ (see Fig.~\\ref{f-coltot-vs-rho}). At \\ensuremath{R_{\\rm vir}}, we find $M^{\\rm warm}_{\\rm g}\\ga 9.3\\times 10^9\\,(Z\/Z_\\sun)^{-1}$ M$_\\sun$ or $M^{\\rm warm}_{\\rm g} \\ga 4.4 \\,M^{\\rm cool}_{\\rm g}$ (assuming the metallicity is approximately the same in the cooler and hotter gas-phases). At $R_{200}$, we find $M^{\\rm warm}_{\\rm g}\\ga 5.5\\times 10^9\\,(Z\/Z_\\sun)^{-1}$ M$_\\sun$ under the same assumption. 
These are lower limits because the fraction of \\ion{O}{6}\\ could be much smaller than 20\\% and the metallicity of the cool or warm ionized gas is also likely to be less than solar (see below). In terms of the metal mass in the highly ionized gas-phase, we have $M^{\\rm warm}_{\\rm g} \\simeq 10^2 (Z\/Z_\\sun)^{-1} M^{\\rm warm}_{\\rm Z}$ and hence also $ M^{\\rm warm}_{\\rm Z} \\ga 4.4 \\, M^{\\rm cool}_{\\rm Z}$. \\ion{O}{6}\\ is detected out to the maximum surveyed radius of 569 kpc (i.e., $1.9 \\ensuremath{R_{\\rm vir}}$), and at that radius $M^{\\rm warm}_{\\rm g}\\ga 34 \\times 10^9\\,(Z\/Z_\\sun)^{-1}$ M$_\\sun$.\n\n\nBy combining both the cool and warm gas-phase masses, we can find the baryon mass for gas in the temperature range $\\sim 10^3$--$10^{5.5}$ K at \\ensuremath{R_{\\rm vir}}:\n$$\n\\begin{aligned}\nM_{\\rm g} & = M^{\\rm cool}_{\\rm g} + M^{\\rm warm}_{\\rm g}\\\\\n& \\ga 1.1\\times 10^{10}\\, \\Big(\\frac{Z}{Z_\\sun}\\Big)^{-1} \\; {\\rm M_\\sun}\\,.\n\\end{aligned}\n$$\nWithin $R_{200}$, the total mass is $M_{\\rm g} \\ga 7.2\\times 10^9$ M$_\\sun$. As the stellar mass of M31 is about $10^{11}$ M$_{\\sun}$ \\citep[e.g.,][]{geehan06,tamm12}, the mass of the diffuse weakly and highly ionized CGM of M31 within $1\\ensuremath{R_{\\rm vir}}$ is therefore at least 10\\% of the stellar mass of M31 and could be significantly larger than 10\\%.\n\nThis estimate does not take into account the hot ($T\\ga 10^6$ K) coronal gas. The diffuse X-ray emission is observed to extend to about 30--70 kpc around a handful of massive, non-starbursting galaxies \\citep{anderson11,bregman18} or in stacked images of galaxies \\citep{anderson13,bregman18}, but beyond $50$ kpc, the CGM is too diffuse to be traced with X-ray imaging, even though a large mass could be present. Using the results summarized recently by \\citet{bregman18}, the hot gas mass of spiral galaxy halos is in the range $M^{\\rm hot}_{\\rm g}\\simeq 1$--$10 \\times 10^9$ M$_\\sun$ within 50 kpc. 
For M31, $M_{\\rm g} = M^{\\rm cool}_{\\rm g} + M^{\\rm warm}_{\\rm g} \\ga 0.4\\times 10^9$ M$_\\sun$ within 50 kpc. Extrapolating the X-ray results to \\ensuremath{R_{\\rm vir}}, \\citet{bregman18} find masses of the hot X-ray gas similar to the stellar masses of these galaxies in the range $M^{\\rm hot}_{\\rm g}\\simeq1$--$10\\times 10^{11}$ M$_\\sun$. For the MW hot halo within $1\\ensuremath{R_{\\rm vir}}$, \\citet{gupta17} (but see also \\citealt{gupta12,gupta14,wang12,henley14}) derive $3$--$10\\times 10^{10}$ M$_\\sun$, i.e., on the low side of the mass range listed in \\citet{bregman18}. The hot gas could therefore dominate the mass of the CGM of M31. There are, however, two caveats to that latter conclusion. First, if $f_{\\rm O\\,VI}\\ll 0.2$, then $M^{\\rm warm}_{\\rm g}$ could become much larger. Second, the metallicity of the hot X-ray gas ranges from 0.1 to $0.5 Z_\\sun$ with a mean metallicity of $0.3 Z_\\sun$ \\citep{bregman18,gupta17}, while for the cooler gas we have conservatively adopted a solar abundance. If instead we adopt a $0.3 Z_\\sun$ metallicity (consistent with the rough limits set in \\S\\S\\ref{s-metallicity}, \\ref{s-abund}), then $M_{\\rm g} \\simeq 3.7\\times 10^{10}$ M$_\\sun$ at \\ensuremath{R_{\\rm vir}}, which is now comparable to the hot halo mass of the MW. If we adopt the average metallicity derived for the X-ray gas, then $M^{\\rm cool}_{\\rm g} + M^{\\rm warm}_{\\rm g}$ would be comparable to the hot gas mass if $M^{\\rm hot}_{\\rm g} \\sim 5\\times 10^{10}$ M$_\\sun$ at \\ensuremath{R_{\\rm vir}}\\ for M31. Depending on the true metallicities and the actual state of ionization, the cool and warm gas in the M31 halo could therefore contribute a substantial enhancement of the total baryonic mass compared to our conservative assumptions. 
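The metal-mass integral above can be evaluated numerically in a few lines; the sketch below uses a hypothetical power-law $N_{\rm Si}(R)$ profile in place of the fitted H/SPL/GP models, so the resulting number is illustrative only:

```python
import numpy as np

M_SUN_G = 1.989e33   # solar mass [g]
KPC_CM = 3.086e21    # kiloparsec [cm]
M_P_G = 1.6726e-24   # proton mass [g]

def cool_metal_mass(n_si, r_min_kpc, r_max_kpc, mu_si=0.064):
    """M_Z = 2*pi * mu_Si^-1 * m_Si * Integral[ R N_Si(R) dR ], in M_sun.
    n_si(r_kpc) must return the total Si column density in cm^-2."""
    r_cm = np.linspace(r_min_kpc, r_max_kpc, 2001) * KPC_CM
    integrand = r_cm * n_si(r_cm / KPC_CM)   # [cm] * [cm^-2]
    # trapezoidal integration of R * N_Si(R) over R
    integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(r_cm))
    m_si = 28.0 * M_P_G                      # mass of one Si atom [g]
    return 2.0 * np.pi * (m_si / mu_si) * integral / M_SUN_G

# Hypothetical profile for illustration: N_Si = 10^13.2 (R / 100 kpc)^-1 cm^-2
toy_profile = lambda r_kpc: 10**13.2 * (100.0 / r_kpc)
m_z_cool = cool_metal_mass(toy_profile, 10.0, 300.0)
# The baryonic mass then follows as M_g ~ 10^2 (Z/Z_sun)^-1 * M_Z
```

For this toy profile the integral gives a metal mass of order $10^7$ M$_\sun$, the same order of magnitude as the GP-based estimate quoted above.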
\n\n\n\\subsection{Mapping the Metal Surface Densities in the CGM of M31}\\label{s-map-metal}\n\n\n\\begin{figure*}[tbp]\n\\epsscale{1.}\n\\plotone{f13.pdf}\n\\caption{Positions of the Project AMIGA targets relative to M31, where the axes show the physical impact parameter from the center of M31 (north is up, east to the left). Dotted circles are centered on M31 to mark 100 kpc intervals. The dashed lines represent the projected minor and major axes of M31 and the thin dotted lines are $\\pm 45\\degr$ from the major\/minor axes (which by definition of the coordinate systems also correspond to the vertical and horizontal zero-axis). Each panel corresponds to a different ion. In each panel, the column densities of each velocity component are shown and color coded according to the vertical color bar. Circles represent detections while triangles are non-detections. Circles with several colors indicate that the observed absorption along the sightline has more than one component. \n\\label{f-colmap}}\n\\end{figure*}\n\nThus far, we have ignored the distribution of the targets in azimuthal angle ($\\Phi$) relative to the projected minor and major axes of M31, where different physical processes may occur. In Fig.~\\ref{f-colmap}, we show the distribution of the column densities of each ion in the X--Y plane near M31, where the circles represent detections and downward triangles are non-detections. Multiple colors in a given circle indicate several components along that sightline for that ion. In that figure, we also show the projected minor and major axes of M31 (dashed lines). 
The overall trends that are readily apparent from Fig.~\\ref{f-colmap} are the ones already described in the previous sections: 1) overall the column density decreases with increasing $R$, 2) the decrease in $N$ is much stronger for low ions than high ions, 3) \\ion{Si}{3}\\ and \\ion{O}{6}\\ are observed at any $R$ while singly ionized species tend to be more frequently observed at small impact parameters. This figure (and Fig.~\\ref{f-col-vs-rho}) also reveals that absorption with two or more components is observed more frequently at $R<200$ kpc: using \\ion{Si}{3}, 64\\%--86\\% of the sightlines have at least 2 velocity components at $R<200$ kpc, while this drops to 14\\%--31\\% at $R>200$ kpc (68\\% confidence intervals using the Wilson score interval); similar results are found using the other ions. However, the complexity of the velocity profiles does not change with $\\Phi$.\n\nConsidering various radius ranges (e.g., 25--50 kpc, 50--100 kpc, etc.) up to $1\\ensuremath{R_{\\rm vir}}$, there is no indication that the column densities strongly depend on $\\Phi$. Considering \\ion{Si}{3}\\ first, it is equally detected along the projected major and minor axes and in-between (wherever there is a sightline), and overall the strength of the absorption mostly depends on $R$, not $\\Phi$. Considering the other ions, they all show a mixture of detections and non-detections, and the non-detections (that are mostly beyond 50 kpc) are not preferentially observed along a certain axis or one of the regions shown in Fig.~\\ref{f-colmap}. We therefore find no strong evidence of an azimuthal dependence in the column densities. \n\nBeyond $\\ga 1.1 \\ensuremath{R_{\\rm vir}}$, the situation is different, with all but one detection (in \\ion{C}{4}\\ and \\ion{O}{6}\\ only) being near the southern projected major axis and about $52\\degr$ east of it, near the $X=0$ kpc axis. 
In this region, there are detections of \\ion{Si}{3}, \\ion{C}{4}, \\ion{Si}{4}, \\ion{O}{6}, and also \\ion{C}{2}. That is the main region where \\ion{C}{2}\\ is detected beyond 200 kpc. In contrast, between the $X=0$ kpc axis and the southern projected minor axis, the only region where there are several QSOs beyond \\ensuremath{R_{\\rm vir}}, there is no detection in any of the ions (excluding \\ion{O}{6}\\ because there are no {\\em FUSE}\\ observations in these directions). Although that region lies suspiciously close to the direction of the MS, it is very unlikely that this is additional contamination from the MS because 1) the velocities would be off from those expected of the MS in these directions (see Fig.~\\ref{f-nidever} and also \\S\\ref{s-map-vel}), and 2) there is no overall decrease of the column densities as $|b_{\\rm MS}|$ increases, a trend observed for the components identified as the MS components (see Fig.~\\ref{f-col-cont}). In fact, regarding the second point, the opposite trend is observed, with the highest column densities being more frequently at $|b_{\\rm MS}|\\ga 15 \\degr$ than near the MS main axis ($b_{\\rm MS}\\sim 0\\degr$). Therefore, while at $R<\\ensuremath{R_{\\rm vir}}$ there is no apparent trend between $N$ and $\\Phi$ for any ions (although we keep in mind that the azimuthal information for \\ion{O}{6}\\ is minimal), most of the detections at $R>\\ensuremath{R_{\\rm vir}}$ are near the southern projected major axis and $52\\degr$ east of that axis. \n\nThe fact that the gas is observed mainly in a specific region of the CGM beyond \\ensuremath{R_{\\rm vir}}\\ suggests an IGM filament feeding the CGM of M31, as is observed in some cosmological simulations. In particular, \\citet{nuza14} study the gas distribution in simulated recreations of the MW and M31 using a constrained cosmological simulation of the Local Group from the Constrained Local UniversE Simulations (CLUES) project. 
In their Figures 3 and 6, they show different velocity and density projection maps where the central galaxy (M31 or MW) is edge-on. They find that some of the gas in the CGM can flow in a filament-like structure, coming from outside the virial radius all the way down to the galactic disk. \n\n\n\\begin{figure*}[tbp]\n\\epsscale{1.}\n\\plotone{f14.pdf}\n\\caption{Similar to Fig.~\\ref{f-colmap}, but we now show the distribution of the M31 velocities for each component observed for each ion. Circles with several colors indicate that the absorption along those sightlines has more than one component. By definition, in the M31 velocity frame, an absorber with no peculiar velocity relative to M31's bulk motion has $v_{\\rm M31}=0$ ${\\rm km\\,s}^{-1}$.\n\\label{f-velmap}}\n\\end{figure*}\n\n\\subsection{Mapping the Velocities in the CGM of M31}\\label{s-map-vel}\n\n\\begin{figure*}[tbp]\n\\epsscale{1.}\n\\plotone{f15.pdf}\n\\caption{Same as Fig.~\\ref{f-velmap}, but for the average velocities.\n\\label{f-velavgmap}}\n\\end{figure*}\n\nHow the velocity field of the gas is distributed in $R$ and $\\Phi$ beyond 25--50 kpc is a key diagnostic of accretion and feedback. However, a statistical survey using one sightline per galaxy (such as COS-Halos) cannot address this problem because it observes many galaxies in an essentially random mix of orientations and inclinations, which necessarily washes out any coherent velocity structures. An experiment like Project AMIGA is needed to access information about large-scale flows in a sizable sample of lines of sight for a single galaxy. The velocity information remains limited because we have only the (projected) radial velocity along pencil beams piercing the CGM at various $R$ and $\\Phi$. Nevertheless, as we show below, some trends are apparent thanks to the large size of the sample. We use here the $v_{\\rm M31}$ peculiar velocities as defined by Eqn.~\\ref{e-vm31}.
By definition, in the M31 velocity frame, an absorber with no peculiar velocity relative to M31's bulk motion has $v_{\\rm M31}=0$ ${\\rm km\\,s}^{-1}$. In \\S\\ref{s-dwarfs-vel}, we show that the M31 peculiar velocities of the absorbers seen toward the QSOs and the velocities of the M31 dwarf satellites largely overlap. We now review how the velocities of the absorbers are distributed in the CGM of M31 over the entire surveyed range of $R$.\n\nIn Figs.~\\ref{f-velmap} and \\ref{f-velavgmap}, we show the distribution of the M31 peculiar velocities of the individual components identified for each ion and the column-density-weighted average velocities of each ion, respectively. Circles with several colors indicate that the observed absorption appears in more than one component. Both Figs.~\\ref{f-velmap} and \\ref{f-velavgmap} demonstrate that in many cases there is some overlap in the velocities between low ions (\\ion{Si}{2}, \\ion{C}{2}, \\ion{Si}{3}) and higher ions (\\ion{Si}{4}, \\ion{C}{4}, \\ion{O}{6}). This strongly implies that the CGM of M31 has multiple gas-phases with overlapping kinematics when they are observed in projection (a property also readily observed from the normalized profiles shown in Fig.~\\ref{f-example-spectrum} and as supplemental material in Appendix~\\ref{a-supp-fig}). There are also some rarer cases where there is no correspondence between the velocities of \\ion{Si}{3}\\ and higher ions (see, e.g., near $X\\simeq -335$, $Y\\simeq -95$ kpc), indicating that the observed absorption in each ion is dominated by a single phase--that is, the components are likely to be distinct single-phase objects.\n\nThe full range of velocities associated with the CGM of M31 is $-249 \\le v_{\\rm M31}\\le +175$ ${\\rm km\\,s}^{-1}$\\ for \\ion{Si}{3}, but for all the other ions it is $-53 \\la v_{\\rm M31}\\le +175$ ${\\rm km\\,s}^{-1}$.
Furthermore, there is only one absorber\/component of \\ion{Si}{3}\\ that has $v_{\\rm M31}=-249$ ${\\rm km\\,s}^{-1}$. We emphasize that the rarity of velocities $v_{\\rm M31}<-249$ ${\\rm km\\,s}^{-1}$\\ (corresponding to $\\ensuremath{v_{\\rm LSR}} <-510$ ${\\rm km\\,s}^{-1}$\\ in the direction of this sightline) is not an artifact since velocities below these values are not contaminated by any foreground gaseous features. \n\nWe show in \\S\\ref{s-dwarfs-vel} that the M31 dwarf satellites have a larger velocity dispersion than the absorbers (110 ${\\rm km\\,s}^{-1}$\\ for the dwarfs vs. 68 ${\\rm km\\,s}^{-1}$\\ for the \\ion{Si}{3}\\ absorbers) and that some of the M31 dwarfs have velocities in the velocity range contaminated by the MW and MS. While the CGM gas velocity field distribution may not follow that of the dwarf satellites, it remains plausible that some of the absorption from the extended region of the M31 CGM could be lost owing to contamination from the MW or MS. Therefore we may not be fully probing the entire velocity distribution of the M31 CGM. However, as discussed in \\S\\ref{s-ms}, there is no evidence that the velocity distributions of the \\ion{Si}{3}\\ components in and outside the MS contamination zone are different (see also Figs.~\\ref{f-map} and \\ref{f-velmap}), and hence it is quite possible that at least the MS contamination does not much affect the velocity distribution of the M31 CGM. With these caveats, we now proceed to describe the apparent trends of the velocity distribution in the CGM of M31.\n\nFrom Fig.~\\ref{f-velmap}, the first apparent property was already noted in the previous section: the velocity complexity (and hence full-width) of the absorption profiles increases with decreasing $R$ (see \\S\\ref{s-map-metal}). Within $R\\la 100$--200 kpc, about 75\\% of the \\ion{Si}{3}\\ absorbers have at least two components (at the COS G130M-G160M resolution). This drops to about 33\\% at $200<R<300$ kpc. The second property is that the average velocities are larger at $R\\le 100$ kpc than at $R>100$ kpc.
For all the ions besides \\ion{O}{6}, $\\langle v_{\\rm M31} \\rangle=90$ ${\\rm km\\,s}^{-1}$\\ at $R\\le 100$ kpc, while at $R>100$ kpc, $\\langle v_{\\rm M31} \\rangle=20$ ${\\rm km\\,s}^{-1}$, a factor of 4.5 smaller. There are only two data points for \\ion{O}{6}\\ at $R\\le 100$ kpc, but the average at $R>100$ kpc is $\\langle v_{\\rm M31} \\rangle=22$ ${\\rm km\\,s}^{-1}$, following a pattern similar to that observed for the other ions. For all the ions but \\ion{C}{2}, the velocity dispersions or IQRs are smaller at $R\\le 100$ kpc than at $R>100$ kpc.\n\nThe third property observed in Fig.~\\ref{f-velmap} or Fig.~\\ref{f-velavgmap} is that at $R\\le 100$ kpc, there is no evidence for negative M31 velocities, while at $R>100$ kpc, about 40\\% of the \\ion{Si}{3}\\ sample has blueshifted $v_{\\rm M31}$ velocities. This partially explains the previous result, but even if we consider the absolute velocities, $\\langle|v_{\\rm M31}|\\rangle=40$ ${\\rm km\\,s}^{-1}$\\ at $R>100$ kpc, implying $\\langle |v_{\\rm M31}(R>100)|\\rangle = 0.44\\langle |v_{\\rm M31}|(R\\le 100) \\rangle$; i.e., whether in absolute terms or not, $v_{\\rm M31}$ is smaller at $R>100$ kpc than at $R\\le 100$ kpc. Therefore at $R>100$ kpc, not only are the peculiar velocities of the CGM gas less extreme, but they are also more uniformly distributed around the bulk motion of M31. At $R<100$ kpc, the peculiar velocities of the CGM gas are more extreme and systematically redshifted relative to the bulk motion of M31.\n\nThe fourth property appears in Fig.~\\ref{f-velmap-dwarfs}, where we compare the $v_{\\rm M31}$ velocities of the M31 dwarfs and \\ion{Si}{3}\\ absorbers, which shows that overall the velocities of the satellites and the CGM gas do not follow each other.
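The factors quoted above can be checked with simple arithmetic (a sketch using the average velocities given in the text):

```python
# Average M31 peculiar velocities quoted in the text (km/s),
# for all ions besides O VI.
v_inner = 90       # <v_M31> at R <= 100 kpc
v_outer = 20       # <v_M31> at R > 100 kpc
abs_v_outer = 40   # <|v_M31|> at R > 100 kpc

print(v_inner / v_outer)                # -> 4.5, the factor quoted above
print(round(abs_v_outer / v_inner, 2))  # -> 0.44, ratio of outer <|v|> to inner <v>
```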
As noted in \\S\\ref{s-dwarfs-cgm} (see also Table~\\ref{t-xmatch-dwarf}), some velocity components seen in absorption toward the QSOs are found with $\\Delta_{\\rm sep}<1$ and have small velocity separations $\\delta v$ from the nearest dwarf. However, whether at $R>100$ kpc or $R\\le 100$ kpc, $\\langle v_{\\rm M31,dwarf} \\rangle \\simeq 34$ ${\\rm km\\,s}^{-1}$ ($\\langle |v_{\\rm M31,dwarf}|\\rangle \\simeq 102$ ${\\rm km\\,s}^{-1}$), remarkably contrasting with the properties of the CGM gas described in the previous two paragraphs. These findings strongly suggest that the velocity fields of the dwarfs and CGM gas are decoupled. We infer from this decoupling that (1) gas bound to satellites does not make a significant contribution to CGM gas observed in this way, and (2) the velocities of gas removed from satellites via tidal or ram-pressure interactions, if such gas is present, become decoupled from the dwarf that brought it in (as one might expect, since such gas is by definition no longer bound to the satellites). \n\nThe fifth property is more readily apparent from the average velocities shown in Fig.~\\ref{f-velavgmap}: considering the CGM gas in different annuli, there is an apparent change in the sign of the average $v_{\\rm M31}$ velocities, with on average a positive velocity at $R<200$ kpc and a negative velocity at $200<R<300$ kpc, a change that is unlikely to be caused by residual contamination from the MS for the sightlines at $b_{\\rm MS}>0$ (see \\S\\ref{s-ms}). Absorption occurring in the velocity range $-50 \\la v_{\\rm M31}\\la +150$ ${\\rm km\\,s}^{-1}$\\ is also not contaminated by the MS.\n\n\\section{Discussion}\\label{s-disc}\n\nThe major goal of Project AMIGA is to determine the global distribution of the gas phases and metals through the entire CGM of a representative galaxy. With a large sample of QSOs accumulated over many surveys, and newly observed by {\\em HST}\/COS, we are able to probe multiple sightlines that pierce M31 at different radii and azimuthal angles.
Undertaking this study in the UV has been critical since only this wavelength band provides the diagnostics and spectral resolution needed to constrain the physical properties of the multiple gas-phases existing in the CGM over the temperature range $10^{4}$--$10^{5.5}$ K (at $z = 0$, the hottest phase can only be probed with X-ray observations). With 25 sightlines within about $1.1\\ensuremath{R_{\\rm vir}}$ and 43 within 569 kpc ($\\la 1.9\\ensuremath{R_{\\rm vir}}$) of M31, the size of the sample and the information as a function of $R$ and $\\Phi$ are unparalleled. We will now consider the broad patterns and conclusions we can draw from this unique dataset. \n\n\\subsection{Pervasive Metals in the CGM of M31}\\label{s-disc-metal-mass}\n\nA key finding of Project AMIGA is the ubiquitous presence of metals in the CGM of M31. While the search for \\ion{H}{1}\\ with $\\log N_{\\rm H\\,I} \\ga 17.5$ in the CGM of M31 in pointed radio observations has been unsuccessful in the current sample (\\citetalias{howk17} and see Fig.~\\ref{f-map}), the covering factor of \\ion{Si}{3}\\ (29 sightlines) is essentially 100\\% out to $1.2 \\ensuremath{R_{\\rm vir}}$, while \\ion{O}{6}\\ associated with M31 is detected toward all 11 sightlines with FUSE data, all the way out to $1.9 \\ensuremath{R_{\\rm vir}}$, the maximum radius of our survey (see \\S\\S\\ref{s-n-vs-r}, \\ref{s-fc}). From the ionization range probed by Project AMIGA, we further show that \\ion{Si}{3}\\ and \\ion{O}{6}\\ are key probes of the diffuse gas (see \\S\\ref{s-ratio-vs-r}). With information from \\ion{Si}{2}, \\ion{Si}{3}, and \\ion{Si}{4}, we demonstrate that \\ion{Si}{3}\\ is the dominant ion in the ionizing energy range 8--45 eV (see \\S\\ref{s-si2-si3-si4}). The fact that \\ion{Si}{3}\\ and \\ion{O}{6}\\ have such high covering factors suggests that these ions are not produced in small clumps within a hotter medium; instead the gas they trace must be more pervasively distributed.
\n\nThe finding of pervasive metals in the CGM of M31 is a strong indication of ongoing and past gas outflows that ejected metals well beyond their formation site. Based on a specific star-formation rate of ${\\rm SFR\/M_\\star} = (5\\pm 1) \\times 10^{-12}$ yr$^{-1}$ \\citep[using the stellar mass M$_\\star$ and SFR from][]{geehan06,kang09}, M31 is not currently in an active star-forming episode. In fact, \\citet{williams17} show that the bulk of star formation occurred in the first $\\sim$6 billion years and the last strong episode happened over $\\sim$2 billion years ago (see also Fig.~6 in \\citealt{telford19} for a metal production model of M31). Hence most of the metals seen in the CGM of M31 were most likely ejected by previous star-forming episodes and\/or stripped from its dwarfs and more massive companions. However, the fact that metals are detected beyond \\ensuremath{R_{\\rm vir}}, and that beyond \\ensuremath{R_{\\rm vir}}\\ they are found predominantly in a certain direction, also suggests that some of the metals may be coming from the Local group medium, possibly recycling metals from the MW or M31 (see \\S\\ref{s-map-metal}), or from an IGM filament in that particular direction. \n\nIn \\S\\ref{s-mass}, we estimate that the mass of metals is $M^{\\rm cool}_{\\rm Z} = (2.0 \\pm 0.5) \\times 10^7$ M$_\\sun$ within \\ensuremath{R_{\\rm vir}}\\ for the predominantly photoionized gas probed by \\ion{Si}{2}, \\ion{Si}{3}, and \\ion{Si}{4}. For the gas probed by \\ion{O}{6}, we find that $M^{\\rm warm}_{\\rm Z} > 4.4 M^{\\rm cool}_{\\rm Z} \\ga 9\\times 10^7$ M$_\\sun$ at \\ensuremath{R_{\\rm vir}}\\ (this is a lower limit because the fractional amount of \\ion{O}{6}\\ is an upper limit, see \\S\\ref{s-mass}). The sum of these two phases yields a lower limit to the CGM metal mass because the hotter phase probed by the X-ray and metals bound in dust are not included.
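The cool- and warm-phase numbers just quoted combine as follows (a minimal arithmetic sketch; the factor of 4.4 is the lower limit stated in the text):

```python
# Metal masses within R_vir quoted in the text (solar masses).
m_cool = 2.0e7               # cool phase probed by Si II, Si III, Si IV
m_warm_min = 4.4 * m_cool    # lower limit on the warm phase probed by O VI

print(f"{m_warm_min:.1e}")            # -> 8.8e+07, i.e. >~ 9e7 M_sun as quoted
print(f"{m_cool + m_warm_min:.1e}")   # -> 1.1e+08, lower limit on cool + warm
```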
If the hot baryon mass of M31 is not too different from that estimated for the MW (see \\S\\ref{s-mass}), then we expect $M^{\\rm hot}_{\\rm Z} \\approx M^{\\rm warm}_{\\rm Z}$. The CLUES simulation of the Local group estimates that the mass of the hot ($>10^5$ K) gas is a factor of 3 larger than that of the cooler ($<10^5$ K) gas \\citep{nuza14}. The dust CGM mass remains quite uncertain, but could be at the level of $5\\times 10^7$ M$_\\sun$ according to estimates around $0.1$--$1L^*$ galaxies \\citep{menard10,peeples14,peek15}. Hence the total metal mass of the CGM of M31 out to \\ensuremath{R_{\\rm vir}}\\ could be as large as $M^{\\rm CGM}_{\\rm Z}\\ga 2.5\\times 10^8$ M$_\\sun$.\n\nThe stellar mass of M31 is $(1.5\\pm 0.2) \\times 10^{11}$ M$_\\sun$ \\citep[e.g.,][]{williams17}. Using this result, \\citet{telford19} estimated that the current metal mass in stars is $3.9\\times 10^8$ M$_\\sun$, i.e., about the same amount that is found in the entire CGM of M31 up to \\ensuremath{R_{\\rm vir}}. \\citet{telford19} also estimated the metal mass of the gas in the disk of M31 to be around $(0.8$--$3.2)\\times 10^7$ M$_\\sun$, while \\citet{draine14} estimated the dust mass in the disk to be around $5.4\\times 10^7$ M$_\\sun$, yielding a total metal mass in the disk of M31 of about $M^{\\rm disk}_{\\rm Z}\\simeq 5\\times 10^8$ M$_\\sun$. Therefore M31 has in its CGM within \\ensuremath{R_{\\rm vir}}\\ at least 50\\% of the present-day metal mass in its disk. As we show in \\S\\ref{s-n-vs-r} and \\S\\ref{s-mass} and discuss above, metals are also found beyond \\ensuremath{R_{\\rm vir}}, especially in the more highly ionized phase traced by \\ion{O}{6}\\ (and even higher ions). These metals could come from M31 or be metals recycled in the Local group from the MW or dwarf galaxies. \n\n\\subsection{Comparison with COS-Halos Galaxies}\\label{s-coshalos}\nThe Project AMIGA experiment is quite different from most of the surveys of the CGM of galaxies done so far.
Outside the local universe, surveys of the CGM of galaxies involve assembling samples of CGM gas in aggregate by using one sightline per galaxy (see \\S\\ref{s-intro}), and in some nearby cases up to 3--4 sightlines (e.g., \\citealt{bowen16,keeney17}). By assembling a sizable sample of absorbers associated with galaxies in a particular sub-population (e.g. $L^*$, sub-$L^*$, passive or star-forming galaxies), one can then assess how the column densities change with radii around that kind of galaxy, and from this, average surface densities, mass budgets, etc. can then be evaluated. By contrast, Project AMIGA has assembled almost as many sightlines surrounding M31 as COS-Halos had for its full sample of 44 galaxies. We can now make a direct comparison between these two types of experiments. For this comparison, we use the COS-Halos survey of $L^*$ galaxies at $z\\sim 0.2$. At $R>0.9R_{200}$, there is no COS-Halos observation (owing to the design of the survey). The model extrapolated from the COS-Halos observations, shown in gray in Fig.~\\ref{f-cos-halos}, is a factor of 2--4 higher than the model of the Project AMIGA data shown in blue, depending on $R\/R_{200}$. \n\nA likely explanation for the higher column density absorbers is that some of these COS-Halos absorbers could be fully or partly associated with a galaxy closer to the sightline than the initially targeted COS-Halos galaxies, where the gas can contain more neutral and weakly ionized gas. Indeed, while the COS-Halos galaxies were selected to have no bright companion, that selection did not preclude fainter nearby companions such as dwarf satellites (see \\citealt{tumlinson13}). Follow-up galaxy observations by \\citet{werk12} found several $L > 0.1 L^*$ galaxies within 160 kpc of the targeted COS-Halos galaxy. Comparing the results from other surveys of galaxies\/absorbers at low redshift \\citep{stocke13,bowen02}, \\citet{bregman18} also noted a higher preponderance of high \\ion{H}{1}\\ column density absorbers in the COS-Halos survey.
However, the higher COS-Halos column densities at large radii could also be an effect of evolution in the typical CGM, as COS-Halos probed a slightly higher cosmological redshift. It is also possible that the M31 CGM is less rich in ionized gas at these radii than the typical $L^*$ galaxy at $z \\sim 0.2$, because of its star formation history or environment. \n\n\\subsubsection{CGM Mass Comparison}\\label{s-cos-halos-mass}\nA key physical parameter of the CGM is its mass, which is obtained from the column density distribution of the gas assuming a certain geometry. For M31, we cannot derive the baryonic mass of the CGM gas without assuming a metallicity since the \\ion{H}{1}\\ column density remains unknown toward all the targets in our sample (but see \\S\\ref{s-metallicity}, \\ref{s-mass}). However, the metal mass of the cool gas probed by \\ion{Si}{2}, \\ion{Si}{3}, and \\ion{Si}{4}\\ can be straightforwardly estimated directly from the observations without any ionization modeling (see top panel of Fig.~\\ref{f-cos-halos}). \n\nEven though both \\citet{peeples14} and \\citet{werk14} use the Si column densities derived from photoionization models, as illustrated in Fig.~\\ref{f-cos-halos}, this would not change the outcome that the metal mass of the cool CGM gas derived from the COS-Halos survey is about a factor of 2--3 higher than the metal mass derived in Project AMIGA. This is because there are 7 COS-Halos Si column densities at $R\/R_{200}<0.3$ that are much higher owing to saturation in the weak \\ion{Si}{2}\\ transitions (see above), driving the overall model of $N_{\\rm Si}(R)$ substantially higher.
The fact that these high $N_{\\rm Si}$ are not found in the CGM of M31 or lower redshift galaxies at similar impact parameters \\citep[e.g.,][]{bowen02,stocke13} suggests a source of high-column \\ion{H}{1}\\ and \\ion{Si}{2}\\ absorbers in the COS-Halos sample that could be recent outflows, strong accretion\/recycling, or gas associated with satellites closer to the sightline. With only 5 targets within $R\/R_{200}<0.3$ and none at $R\/R_{200}<0.1$ for M31, it would be quite useful to target more QSOs in the inner region of the CGM of M31 to better determine how $N_{\\rm Si}(R)$ varies with $R$ at small impact parameters.\n\nFor the warm-hot gas probed by \\ion{O}{6}, the COS-Halos star-forming galaxies have $\\langle N_{\\rm O\\,VI}\\rangle = 10^{14.5}$ cm$^{-2}$, a detection rate close to 100\\%, and no large variation of $N_{\\rm O\\,VI}$\\ with $R$ \\citep{tumlinson11a}. For M31, we have a similar average \\ion{O}{6}\\ column density, hit rate, and little evidence for any large variation of $N_{\\rm O\\,VI}$\\ with $R$ (see \\S\\ref{s-mass}). This implies that the masses of the warm-hot CGM of M31 and of the COS-Halos star-forming galaxies are similar. M31 has a specific SFR that is a factor of $\\ga 10$ lower than the COS-Halos star-forming galaxies, but its halo mass is on the higher side of the COS-Halos star-forming galaxies (but lower than the COS-Halos quiescent galaxies). As discussed in \\S\\ref{s-disc-pers-comp} in more detail, M31 and the COS-Halos star-forming galaxies have halo masses in the range $M_{200} \\simeq 10^{11.7}$--$10^{12.3}$ M$_\\sun$, corresponding to a virial temperature range that overlaps with the temperature at which the ionization fraction of \\ion{O}{6}\\ peaks, which may naturally explain some of the properties of the \\ion{O}{6}\\ in the CGM of ``$L^*$\" galaxies \\citep{oppenheimer18}. It is also possible that some \\ion{O}{6}\\ arises in photoionized gas or combinations of different phases (see \\S\\ref{s-disc-pers-comp}).
\n\nBased on the comparison above, we find that \\ion{O}{6}\\ is less subject to the uncertainty in associating an absorber with the correct galaxy, because its column density is less dependent on $R$ (see also \\S\\ref{s-disc-change}). This leads to similar metal masses for the \\ion{O}{6}\\ gas-phase in the CGM of the $z\\sim 0.2$ COS-Halos galaxies and of M31. For the lower ions, the column densities are more dependent on $R$ (see also \\S\\ref{s-disc-change}), and therefore the association of an absorber with the correct galaxy is more critical for deriving an accurate column density profile with $R$ and hence an accurate CGM metal mass. However, we note that despite these uncertainties the metal mass of the cool CGM of the COS-Halos galaxies is only a factor of 2--3 higher than that derived for M31. \n\n\n\\subsection{A Changing CGM with Radius}\\label{s-disc-change}\nA key discovery from Project AMIGA is that the properties of the CGM of M31 change with $R$. This is reminiscent of our earlier survey \\citepalias{lehner15}, but the increase in the sample size has transformed some of the tentative results of our earlier survey into robust findings.
In particular, the radius around $R\\sim 100$--150 kpc appears critical, in view of several properties changing near this threshold radius:\n\\begin{enumerate}[wide, labelwidth=!, labelindent=0pt]\n \\item For all ions, the frequency of strong absorption is larger at $R\\la 100$--150 kpc than at larger $R$.\n \\item The column densities of Si and C ions change by a factor $>5$--$10$ between about 25 kpc and 150 kpc, while they change only by a factor $\\la 2$ between 150 kpc and 300 kpc.\n \\item The detection rate of singly ionized species (\\ion{C}{2}, \\ion{Si}{2}) is close to 100\\% at $R<150$ kpc, but sharply decreases beyond (see Fig.~\\ref{f-col-vs-rho}), and therefore the gas has a more complex gas-phase structure at $R<150$ kpc.\n \\item The peculiar velocities of the CGM gas are more extreme and systematically redshifted relative to the bulk motion of M31 at $R\\la 100$ kpc, while at $R\\ga 100$ kpc, the peculiar velocities of the CGM gas are less extreme and more uniformly distributed around the bulk motion of M31.\n\\end{enumerate}\nThere are also two other significant regions: 1) beyond $R_{200}\\simeq 230$ kpc the gas is becoming more ionized and more highly ionized than at lower $R$ (e.g., there is a near total absence of \\ion{Si}{2}\\ absorption beyond $R_{200}$---see Fig.~\\ref{f-col-vs-rho}, or, a lower \\ion{C}{2}\/\\ion{C}{4}\\ ratio on average at $R\\ga R_{200}$ than at lower $R$---see \\S\\ref{s-c2-c4}); and 2) beyond $1.1\\ensuremath{R_{\\rm vir}}$ the gas is not detected in all the directions away from M31, as it is at smaller radii, but only in a cone near the southern projected major axis and about $52\\degr$ east off the $X=0$ kpc axis (see \\S\\ref{s-map-metal}). \n\nThe overall picture that can be drawn from these properties is that the inner regions of the CGM of M31 are more dynamic and complex, while the more diffuse regions at $R\\ga 0.5\\ensuremath{R_{\\rm vir}}$ are more static and simpler.
Thanks to their higher mass and spatial resolution, zoom-in cosmological simulations capture the structures of the CGM in more detail and more accurately than large-scale cosmological simulations. Below we use several results from zoom simulations to gain some insights into these observed changes with $R$. However, the results laid out in \\S\\ref{s-properties} also now provide a new testbed for zoom simulations, so that not only qualitative but also quantitative comparisons can be undertaken. We note that most of the zoom simulations discussed here have only a single massive halo. However, according to the ELVIS simulations of Local group analogs \\citep{garrison-kimmel14}, there should be no major difference in the distribution of the gas between isolated and paired galaxies, at least within about \\ensuremath{R_{\\rm vir}}.\n\n\\subsubsection{Visualization and Origins of the CGM Variation}\\label{s-disc-qual-comp}\nTo help visualize the properties described above and gain some insights into the possible origins of these trends, we begin by qualitatively examining two zoom simulations. First, we consider the Local group zoom simulations from the CLUES project \\citep{nuza14}, in which the gas distribution around MW and M31-like galaxies is studied. This paper does not show the distribution of the individual ions, but examines the two main gas-phases above and below $10^5$ K in an environment that is a constrained analog to the Local group. Interestingly, considering Fig.~3 (simulated M31) or Fig.~6 (simulated MW) in \\citeauthor{nuza14}, the region within 100--150 kpc appears more complex, with a large covering factor for both cool and hot gas phases and higher velocities than at larger radii. In these simulations, this is a result of the combined effects of cooling and supernova heating affecting the closer regions of the CGM of M31.
This simulation also provides an explanation for the gas observed beyond $1.1\\ensuremath{R_{\\rm vir}}$ that is preferentially observed in a limited region of the CGM of M31 (see Fig.~\\ref{f-colmap} and the middle right panel of their Fig.~3), whereby the $\\la 10^5$ K gas might be accreting onto the CGM of M31. We also note that \\citet{nuza14} find a mass for the $\\la 10^5$ K CGM gas of $1.7\\times 10^{10}$ M$_\\sun$, broadly consistent with our findings (see \\S\\ref{s-mass}). More quantitative comparisons between the CLUES (or Local group analog simulations like the ELVIS-FIRE simulations, \\citealt{garrison-kimmel14,garrison-kimmel19}) and Project AMIGA results are beyond the scope of this paper, but they would be valuable to undertake in the future. \n\nSecond, we consider the zoom Eris2 simulation of a massive, star-forming galaxy at $z = 2.8$ presented in \\citet{shen13}. Being at $z = 2.8$ and having a star-formation rate of 20 M$_\\sun$\\,yr$^{-1}$, the Eris2 galaxy is nothing like M31, but this paper shows the distribution of the gas around the central galaxy using some of the same ions that are studied in Project AMIGA, specifically \\ion{Si}{2}, \\ion{Si}{4}, \\ion{C}{2}, \\ion{C}{4}, and \\ion{O}{6}\\ (see their Figs.~3a and 4a, b). Because Eris2 is so different from M31, we would naively expect their CGM properties to be different, and yet: 1) Eris2 is surrounded by a large diffuse \\ion{O}{6}\\ halo with a near unity covering factor all the way out to about $3\\ensuremath{R_{\\rm vir}}$; 2) the covering factor of absorbing material in the CGM of Eris2 declines less rapidly with impact parameter for \\ion{C}{4}\\ or \\ion{O}{6}\\ compared to \\ion{C}{2}, \\ion{Si}{2}, or \\ion{Si}{4}; 3) beyond \\ensuremath{R_{\\rm vir}}, the covering factor of \\ion{Si}{2}\\ drops more sharply than that of \\ion{C}{2}.
There are also key differences, like the strongest absorption in any of these ions being observed in the bipolar outflows perpendicular to the plane of the disk, a feature unsurprisingly not observed in M31 since it currently has a low star-formation rate \\citep[e.g.,][]{williams17}. However, the broad pictures of the CGM of M31 and of the simulated Eris2 galaxy are remarkably similar. This implies that some of the properties of the CGM may depend more on the micro-physics producing the various gas-phases than on the large-scale physical processes (outflow, accretion) that vary substantially over time. In fact, the Eris2 simulation shows that inflows and outflows coexist and are both traced by diffuse \\ion{O}{6}; in Eris2, a high covering factor of strong \\ion{O}{6}\\ absorbers seems to be the least ambiguous tracer of large-scale outflows. \n\n\n\\subsubsection{Quantitative Comparison in the CGM Variation between Observations and Simulations}\\label{s-disc-quant-comp}\n\nTwo simulations of M31-like galaxies in different environments at widely separated epochs show some similarity with some of the observed trends in the CGM of M31. We now take one step further by quantitatively comparing the column density variation of the different ions as a function of $R$ in three different zoom-in cosmological simulations, two led by members of the Project AMIGA team (the FIRE and FOGGIE collaborations), and a zoom-in simulation from the Evolution and Assembly of GaLaxies and their Environments (EAGLE) simulation project \\citep{oppenheimer18a,schaye15,crain15}. \n\n\\noindent\n{\\it $\\bullet$ Comparison with FIRE-2 Zoom Simulations}\n\nWe first compare our observations with column densities modeled using cosmological zoom-in simulations from the FIRE project\\footnote{FIRE project website: \\url{http:\/\/fire.northwestern.edu}}. Details of the simulation setup and CGM modeling methods are presented in \\citet{ji19}.
Briefly, the outputs analyzed here are FIRE-2 simulations evolved with the {\\small GIZMO} code using the meshless finite mass (MFM) solver \\citep{hopkins15}. The simulations include a detailed model for stellar feedback including core-collapse and Type Ia SNe, stellar winds from OB and AGB stars, photoionization, and radiation pressure \\citep[for details, see][]{hopkins18}. We focus on the ``m12i'' FIRE halo, which has a mass $M_{\\rm vir} \\approx 1.2 M_{200} \\approx 1.2\\times10^{12}\\,M_\\odot$ at $z=0$, comparable to the halo mass of M31. However, neither the SFR history nor the present-day SFR is similar. The ``m12i'' FIRE halo has a factor of 10--12 higher SFR (see Fig.~3 in \\citealt{hopkins19}) than the present-day SFR of M31 of 0.5\\,M$_\\sun$\\,yr$^{-1}$ \\citep[e.g.][]{kang09}. We compare Project AMIGA to FIRE-2 simulations with two different sets of physical ingredients. The ``MHD'' run includes magnetic fields, anisotropic thermal conduction and viscosity, and the ``CR'' run includes all these processes plus the ``full physics'' treatment of stellar cosmic rays. The CR simulation assumes a diffusion coefficient $\\kappa_{||}=3\\times10^{29}$ cm$^{2}$\\,s$^{-1}$, which was calibrated to be consistent with observational constraints from $\\gamma$-ray emission of the MW and some other nearby galaxies \\citep{hopkins19,chan19}. \\citet{ji19} showed that cosmic rays can potentially provide a large or even dominant non-thermal fraction of the total pressure support in the CGM of low-redshift $\\sim L^*$ galaxies. As a result, in the fiducial CR run analyzed here, the volume-filling CGM is much cooler ($\\sim 10^{4}-10^{5}$ K) and is thus photoionized in regions that, in the run without CRs, are instead filled by hot gas that is more collisionally ionized.\n\nThe column densities are generated as discussed in \\citet{ji19}.
For the ionization modeling, a hybrid treatment combining the FG09 \\citep{faucher-giguere09} and HM12 \\citep{haardt12} UV background models is used.\\footnote{We use this mixture because, based on the recent UV background analysis of \\citet{faucher-giguere19}, the FG09 model is in better agreement with the most up-to-date low-redshift empirical constraints at energies relevant for low and intermediate ions (\\ion{C}{2}, \\ion{Si}{2}, \\ion{Si}{3}, \\ion{Si}{4}, and \\ion{C}{4}). However, the HM12 model is likely more accurate for high ions such as \\ion{O}{6}\\ because the FG09 model used a crude AGN spectral model which under-predicted the higher-energy part of the UV\/X-ray background. \\citet{ji19} shows how some ion columns depend on the assumed UV background model.}\n\n\n\\begin{figure}[tbp]\n\\epsscale{1.2}\n\\plotone{f17.pdf}\n\\caption{Comparison of ion column density profiles between Project AMIGA (total column densities) and the FIRE-2 ``MHD'' and ``CR'' runs. Thick curves show median values of an ensemble of sightlines produced from the simulations, and shaded regions show the full range across all model sightlines. \n\\label{f-col-vs-rho-fire}}\n\\end{figure}\n\nIn Fig.~\\ref{f-col-vs-rho-fire}, we compare the ion column densities from the FIRE-2 simulations with the observationally derived total column densities around M31 as a function of $R\/R_{200}$. The green and orange curves show the median simulated column densities for the MHD and CR runs, respectively, while the shaded regions show the full range of columns for all sightlines at a given impact parameter (the lowest values are truncated to match the scales adequate for the observations; see \\citealt{ji19} for the full range of values). The CR run produces higher column densities and better agreement with the observations than the MHD run for all ions presented.
The much higher column densities of low\/intermediate ions (\\ion{C}{2}, \\ion{Si}{2}, \\ion{Si}{3}, and \\ion{Si}{4}) in the CR run are due to the more volume-filling and uniform cool phase, which produces higher median values of the ion column densities and smaller variations across different sightlines. In contrast, in the MHD run the cool phase is pressure confined by the hot phase to compact and dense regions, leading to smaller median columns but larger scatter for the low and intermediate ions. We note, however, that even in the CR run the predicted column densities are lower than observed at the larger impact parameters $R\\ga 0.5 R_{200}$. This might be due to insufficient resolution to resolve fine-scale structure in outer halos, or it may indicate that feedback effects are more important at large radii than in the present simulations. This difference is quite notable given that the star formation of the ``m12i\" galaxy has been continuous, with a SFR in the range 5--20 M$_\\sun$\\,yr$^{-1}$ over the last $\\sim$8 billion years \\citep{hopkins19}, whereas M31 had a continuous SFR of around 6--8 M$_\\sun$\\,yr$^{-1}$ only over its first 5 billion years and, over the last 8 billion years, just two short bursts of star formation about 4 and 2 billion years ago \\citep{williams17}. While there are some discrepancies, the simulations and observations also share some trends: 1) the simulated column densities of the low ions decrease more rapidly with $R$ than those of the high ions, 2) \\ion{O}{6}\\ is observed beyond $1.7R_{200}$ where there is no substantial amount of low\/intermediate ions, and 3) a larger scatter is observed in the column densities of the low and intermediate ions than in \\ion{O}{6}.
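The median-and-envelope summary shown in the figure is straightforward to reproduce for any ensemble of mock sightlines. Below is a minimal sketch of such a per-bin summary; the function and variable names are ours, purely illustrative, and not part of the FIRE-2 analysis tools:

```python
from statistics import median

def profile_stats(R, logN, bin_edges):
    """Per impact-parameter bin, return the median log column density of
    the mock sightlines together with the full (min, max) envelope."""
    out = []
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        vals = [n for r, n in zip(R, logN) if lo <= r < hi]
        out.append((median(vals), min(vals), max(vals)) if vals else None)
    return out

# Illustrative inputs only: R in units of R_200, logN in log10(N/cm^-2)
stats = profile_stats([0.10, 0.15, 0.50, 0.55, 0.60],
                      [14.0, 14.4, 13.0, 13.5, 14.0],
                      [0.0, 0.3, 1.0])
```

The thick curves in the figure correspond to the medians and the shaded regions to the (min, max) envelopes of such a summary.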
\n\nIn the FIRE-2 simulations, both collisional ionization and photoionization can contribute significantly to the simulated \\ion{O}{6}\\ columns, typically with an increasing contribution from photoionization with increasing impact parameter, driven by decreasing gas densities. In the MHD run, most of the \\ion{O}{6}\\ in the inner halo ($R\\la 0.5 R_{200}$) is produced by collisional ionization, but photoionization can dominate at larger impact parameters. In the CR run, collisional ionization and photoionization contribute comparably to the \\ion{O}{6}\\ mass at radii $50 < R < 200$ kpc \\citep{ji19}. The actual origins of the CGM in terms of gas flows in FIRE-2 simulations without magnetic fields or cosmic rays were analyzed in \\cite{hafen19a}, although the results are expected to be similar for simulations with MHD only. In these simulations, \\ion{O}{6}\\ exists as part of a well-mixed hot halo, with contributions from all the primary channels of CGM mass growth: IGM accretion, winds, and satellite halos (reminiscent of the Eris2 simulations, see above and \\citealt{shen13}). The metals responsible for \\ion{O}{6}\\ absorption originate primarily in winds, but IGM accretion may contribute a large fraction of the total gas mass traced by \\ion{O}{6}\\ since the halo is well-mixed and IGM accretion contributes $\\ga 60\\%$ of the total CGM mass. In the simulations, the hot halo gas persists in the CGM for billions of years, and the gas that leaves the CGM does so primarily by accreting onto the central galaxy \\citep{hafen19b}.\n\n\\noindent\n{\\it $\\bullet$ Comparison with FOGGIE Simulations}\n\n\nWe also compare the observed total column densities to the Milky Way-like ``Tempest'' halo from the FOGGIE simulations,\\footnote{FOGGIE project website: \\url{http:\/\/foggie.science}} which has a halo mass of $M_{\\rm 200} \\approx 4.2 \\times 10^{11}$ M$_{\\odot}$ \\citep{peeples19}.
We use the $z=0$ output (see \\citealt{zheng20} for simulation details), but because of the size difference between M31 and the Tempest galaxy, we again scale all distances by $R_{200}$ ($R_{200} = 159$ kpc for the simulated halo compared to 230 kpc for M31). The only ``feedback'' included in this FOGGIE run consists of thermal, explosion-driven SN outflows. While this feedback is limited in scope compared to FIRE, FOGGIE achieves higher mass resolution than FIRE-2 by using a ``forced refinement'' scheme that applies a fixed computational cell size of $\\sim 381 h^{-1}$ pc within a moving cube centered on the galaxy that is $\\sim 200 h^{-1}$ ckpc on a side. This refinement scheme enforces constant {\\it spatial resolution} on the CGM, resulting in a variable and very small mass resolution in the low density gas, with typical cell masses of $\\la 1$--100 M$_\\sun$. The individual small-scale structures that contribute to the observed absorption profiles can therefore be resolved. These small-scale structures, which become apparent only in high-resolution simulations, host a significant amount of cool gas, enhancing the column densities especially of the low-ionization gas (\\citealt{peeples19,corlies19}, and see also \\citealt{vandevoort18,hummels19,rhodin19}). \n\n\n\\begin{figure}[tbp]\n\\epsscale{1.2}\n\\plotone{f18.png}\n\\caption{Comparison of ion column density profiles between Project AMIGA (total column densities) and the ``Tempest\" halo from the FOGGIE simulations. The pink and green shaded areas are projected total column densities from the simulated halo with and without galaxy\/satellite contributions, respectively, while the rest of the figure is analogous to Fig.~\\ref{f-coltot-vs-rho}. The vertical line shows the extent of the forced resolution cube in the FOGGIE simulation.
\n\\label{f-col-vs-rho-foggie}}\n\\end{figure}\n\n\nAs with the FIRE-2 simulations, we compare the total Project AMIGA column densities to FOGGIE because in the simulation we do not (yet) separate individual components, but instead consider the projected column densities through the halo. We note that the CGM is not necessarily self-similar, so some differences between the simulation predictions and M31 observations at rescaled impact parameter could be due to the halo mass difference. This is especially so since the halo mass range $M_{\\rm h}\\approx 3\\times10^{11}$--$10^{12}$ M$_{\\odot}$ corresponds to the expected transition between cold and hot accretion \\citep[e.g.,][]{birnboim03,keres03,faucher-giguere11,stern19}.\n\nIn Fig.~\\ref{f-col-vs-rho-foggie}, we compare the simulated and observed column densities for each ion probed by our survey. The pink and green shaded areas are the data points from the simulation (with and without satellite contribution, respectively) and show the total column density in projection through the halo. The scatter in the simulated data points comes from variation in the structures along the mock sightlines, and most of the scatter is in fact below $10^{11}$ cm$^{-2}$. The peaks in the column densities are due to small satellites in the halo, which enhance primarily the low-ion column densities. We show the green points to highlight the difference between the mock column densities with and without satellites. For the high ionization lines the difference is negligible, while the difference in the low ions is significant. \n\nOverall, the metal line column densities are systematically lower than in the observations at any $R$. Only at $R\\la 0.3R_{200}$ is there some overlap for the singly ionized species between the FOGGIE simulation and observations. However, the discrepancy is particularly striking for \\ion{Si}{3}\\ and the high ions.
This can be attributed to the current feedback implementation in FOGGIE, which does not expel enough metals from the stellar disk into the CGM \\citep{hamilton20} to be consistent with known galactic metal budgets \\citep{peeples14}. This effect is expected to be stronger for the high ions than for the low ions, due to the additional heating and ionization of the CGM that would be expected from stronger feedback, and indeed the discrepancy between the simulation and observations is larger for the high ions (and \\ion{Si}{3}) than for the singly ionized species. However, while the absolute scale of the column densities is off, there are also some similarities between the simulation and observations in the behavior of the relative scale of the column density profiles with $R$: 1) the column densities of the low ions drop more rapidly with $R$ than those of the high ions; 2) despite the inadequate feedback in the current simulations, the \\ion{O}{6}-bearing gas (and \\ion{C}{4}\\ to a lesser extent) is observed well beyond $R_{200}$; 3) a larger scatter is observed in the column densities of the low and intermediate ions than in \\ion{O}{6}. It is striking that the overall slope of the \\ion{O}{6}\\ profile resembles the observations but at significantly lower absolute column density. In the FOGGIE simulation, the low ions tracing mainly dense, cool gas are preferentially found in the disk or satellites, while the hotter gas traced by the higher ions is more homogeneously distributed in the halo.\n\n\\noindent\n{\\it $\\bullet$ Comparison with EAGLE Simulations}\n\n\nFinally, we compare our results with the EAGLE zoom-in simulations (EAGLE \\emph{Recal-L025N0752} high-resolution volume) discussed at length in \\citet{oppenheimer18a}.
The EAGLE simulations have successfully reproduced a variety of galaxy observables \\citep[e.g.,][]{crain15,schaye15} and achieved ``broad but imperfect\" agreement with some of the extant CGM observations (e.g., \\citealt{turner16,rahmati18,oppenheimer18a,lehner19,wotta19}). \n\n \\citet{oppenheimer18a} aimed to directly study the multiphase CGM traced by low metal ions and to compare with the COS-Halos survey (see \\S\\ref{s-coshalos}). As such, they explored the circumgalactic metal content traced by the same ions explored in Project AMIGA, in the CGM of galaxies with masses that bracket that of M31. Overall \\citeauthor{oppenheimer18a} find agreement between the simulated and COS-Halos samples for \\ion{Si}{2}, \\ion{Si}{3}, \\ion{Si}{4}, and \\ion{C}{2}\\ within a factor of two or so, and larger disagreement for \\ion{O}{6}, where the simulated column density is systematically lower. With Project AMIGA, we can directly compare the results with one of the EAGLE galaxies that has a mass very close to that of M31 and also compare the column densities beyond 160 kpc, the maximum radius of the COS-Halos survey \\citep{tumlinson13,werk13}. We refer the reader to \\citet{oppenheimer16}, \\citet{rahmati18}, and \\citet{oppenheimer18a} for more detail on the EAGLE zoom-in simulations. We also refer the reader to Fig.~1 in \\citet{oppenheimer18a}, where the middle column shows the column density map for a galaxy halo mass of $\\log M_{200} = 12.2$ at $z \\simeq 0.2$, which qualitatively shows trends similar to those described in \\S\\ref{s-disc-qual-comp}. \n\n\n\\begin{figure}[tbp]\n\\epsscale{1.2}\n\\plotone{f19.pdf}\n\\caption{Comparison of ion column density profiles between Project AMIGA and the EAGLE zoom-in simulation of a galaxy with $\\log M_{200}\\simeq 12.1 $ at $z=0$ (from the models presented in \\citealt{oppenheimer18a}). For the EAGLE simulation, the mean column densities are shown.
Note that here we only plot the column density profiles out to about \\ensuremath{R_{\\rm vir}}.\n\\label{f-col-vs-rho-eagle}}\n\\end{figure}\n\nIn Fig.~\\ref{f-col-vs-rho-eagle}, we compare the EAGLE and observed column densities as a function of the impact parameter out to \\ensuremath{R_{\\rm vir}}. As in the previous two figures, the blue and gray circles are detections and non-detections in the halo of M31. The green curve in each panel represents the mean column density for each ion as a function of the impact parameter for the EAGLE galaxy with $\\log M_{200} = 12.1$ at $z=0$. In contrast to the FIRE-2 or FOGGIE simulations, the EAGLE simulations appear to produce better agreement with the observed column density profiles $N(R)$ for the low and intermediate ions (\\ion{Si}{2}, \\ion{Si}{3}, \\ion{Si}{4}) and \\ion{C}{4}\\ out to larger impact parameters. However, as already noted in \\citet{oppenheimer18a}, this agreement is offset by producing too much column density for the low and intermediate ions at small impact parameters (e.g., \\ion{Si}{2}, which is not affected by lower limits, is clearly overproduced at $R\\la 80$ kpc). The flat profile of \\ion{O}{6}, with very little dependence on $R$, is similar to the observations and other models, but overall the EAGLE \\ion{O}{6}\\ column densities are 0.2--0.6 dex smaller than observed. \\citet{oppenheimer18a} (and also \\citealt{oppenheimer16}) already noted that issue from their comparison with the COS-Halos galaxies (see also \\S\\ref{s-coshalos}), which would require additional source(s) of ionization for the \\ion{O}{6}\\ such as AGN flickering \\citep{oppenheimer13a,oppenheimer18} or possibly CRs as shown for the FIRE-2 simulations (see \\citealt{ji19} and above). While the results are shown only to \\ensuremath{R_{\\rm vir}}, as in the other simulations and M31, \\ion{O}{6}\\ is also observed well beyond \\ensuremath{R_{\\rm vir}}\\ in the EAGLE simulations (see Fig.~1 in \\citealt{oppenheimer18a}).
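One way to make the contrast between "flat" and steep column density profiles quantitative is to fit a single power-law index $\alpha$ (with $N \propto R^{-\alpha}$) to each ion's profile. The following helper is our own illustrative code, not taken from any of the simulation pipelines:

```python
import math

def powerlaw_slope(R, N):
    """Least-squares slope alpha assuming log10 N = const - alpha * log10 R,
    i.e. N ~ R^(-alpha); alpha near zero means a flat column density profile."""
    x = [math.log10(r) for r in R]
    y = [math.log10(n) for n in N]
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return -num / den
```

Applied to the steeply declining low-ion profiles this returns a large positive $\alpha$, while a nearly $R$-independent \ion{O}{6}-like profile yields $\alpha$ close to zero.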
\n\n\\subsubsection{Insights from the Observation\/Simulation Comparison}\\label{s-disc-pers-comp}\n\nThe comparison with the simulations shows that the properties of the CGM in zoom-in simulations change on length scales roughly similar to those observed in M31. The low ions and high ions follow substantially different profiles with radius, in both data and simulations. In the zoom-in simulations described above, the inner regions of the CGM of galaxies are more directly affected by large-scale feedback and recycling processes between the disk and CGM of galaxies. Therefore it is not surprising that the M31 CGM within 100--150 kpc shows a large variation in column density profiles with $R$, a more complex gas-phase structure, and larger peculiar velocities even though the current star formation rate is low. While both accretion and large-scale outflow coexist in the CGM and are responsible for the gas flow properties, stellar feedback is required to produce a substantial amount of metals in the CGM at large impact parameters (see Figs.~\\ref{f-col-vs-rho-fire}, \\ref{f-col-vs-rho-foggie}, \\ref{f-col-vs-rho-eagle}). M31 currently has a low SFR, but it had several episodes of bursty star formation in the past \\citep[e.g.,][]{williams17}, likely ejecting a large portion of its metals into the CGM during these episodes.\n\nVarious models simulating different galaxy masses at different epochs, with distinct SFRs or feedback processes, can reproduce at some level the diffuse \\ion{O}{6}\\ observed beyond \\ensuremath{R_{\\rm vir}}. All the simulations we have reviewed produce \\ion{O}{6}\\ profiles that are flatter than those of the low ions and which extend beyond \\ensuremath{R_{\\rm vir}}\\ with significant column density.
While the galaxy halo masses are different, they all lie roughly in the range $10^{11.5}$--$10^{12.3}$ M$_\\sun$, a mass range where their virial temperatures overlap with the temperature at which the ionization fraction of \\ion{O}{6}\\ peaks \\citep{oppenheimer16}. Using the EAGLE simulations, \\citet{oppenheimer16} show that the virial temperature of the galaxy halos can explain the presence of strong \\ion{O}{6}\\ in the CGM of star-forming galaxies with $M_{200} \\simeq 10^{11.5}$--$10^{12.3}$ M$_\\sun$ and the absence of strong \\ion{O}{6}\\ in the CGM of quiescent galaxies that have overall higher halo masses ($M_{200} \\simeq 10^{12.5}$--$10^{13.5}$ M$_\\sun$) and hence higher virial temperatures, i.e., halo mass, not SFR, largely drives the presence of strong \\ion{O}{6}\\ in the CGM of galaxies (cf. \\S\\ref{s-coshalos}). Production of \\ion{O}{6}\\ in volume-filling virialized gas could explain why \\ion{O}{6}\\ is widely spread in the CGM of simulated galaxies and the real M31. Additional ionization mechanisms from cosmic rays (\\citealt{ji19} and see Fig.~\\ref{f-col-vs-rho-fire}) or fluctuating AGNs \\citep{oppenheimer13a,oppenheimer18} can further boost the \\ion{O}{6}\\ production, but halo masses with their virial temperatures close to the temperature at which the ionization fraction of \\ion{O}{6}\\ peaks appear to provide a natural source for the diffuse, extended \\ion{O}{6}\\ in the CGM of $L^*$ galaxies. Conversely, a number of studies have shown that significant \\ion{O}{6}\\ can arise in active outflows, with the outflow column densities varying strongly with the degree of feedback \\citep{hummels13, hafen19a}. Right now, no clear observational test can distinguish between \\ion{O}{6}\\ in warm virialized gas and in direct outflows. However, any model that attempts to distinguish them will be constrained by the flat profile and low scatter seen by Project AMIGA.
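The halo mass scaling behind this argument is easy to make explicit: for a fixed overdensity of 200 relative to critical, $T_{\rm vir} \propto M_{200}^{2/3}$. A rough numerical sketch in cgs units (we assume $H_0 = 70$ km s$^{-1}$ Mpc$^{-1}$ and a mean molecular weight $\mu = 0.6$; these parameter values are our own illustrative choices, not taken from the simulations discussed here):

```python
import math

# Constants in cgs
G, k_B, m_p, M_sun = 6.674e-8, 1.381e-16, 1.673e-24, 1.989e33
H0 = 70 * 1e5 / 3.086e24            # assumed Hubble constant, s^-1
rho_crit = 3 * H0**2 / (8 * math.pi * G)

def T_vir(M200_Msun, mu=0.6):
    """Virial temperature T = mu m_p G M200 / (2 k_B R200), with R200 from
    the mean-density definition M200 = (4/3) pi R200^3 * 200 rho_crit."""
    M = M200_Msun * M_sun
    R200 = (3 * M / (800 * math.pi * rho_crit)) ** (1.0 / 3.0)
    return mu * m_p * G * M / (2 * k_B * R200)
```

For $M_{200} = 10^{11.5}$--$10^{12.3}$ M$_\sun$ this gives $T_{\rm vir} \approx 3.5\times10^5$--$1.2\times10^6$ K, which indeed brackets the temperature ($\sim 10^{5.5}$ K) at which the \ion{O}{6}\ fraction peaks in collisional ionization equilibrium.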
\n\nOn the other hand, the cooler, diffuse ionized gas probed predominantly by \\ion{Si}{3}, and also by the low ions (\\ion{C}{2}, \\ion{Si}{2}) at smaller impact parameters, is not well reproduced in the simulations. In the FIRE-2 and FOGGIE simulations, the column densities of \\ion{Si}{3}\\ and the low ions within $\\la 0.3 R_{200}$ are reasonably matched, but their covering fractions drop sharply and much more rapidly than observed for M31 when $ R > 0.3 R_{200}$. Only near satellite galaxies within $0.3 R_{200}$ do the column densities of these ions increase. This is, however, not a fair comparison as M31 lacks gas-rich satellites within this radius. Furthermore, the near unity covering factor of \\ion{Si}{3}\\ out to $1.65 R_{200}$ in the CGM of M31 could not be explained by dwarf satellites anyway. For the EAGLE simulation, this problem is not as extreme as in the other simulations, but EAGLE does overproduce the low and intermediate ions in the inner regions ($\\la 0.3 R_{200}$) of the CGM. Maintaining a high resolution out to \\ensuremath{R_{\\rm vir}}\\ may be needed to accurately model the small-scale structures of the cool gas content and preserve it over longer periods of time \\citep{hummels19,peeples19,vandevoort18}. \n\nWhile the observations of M31 and the simulations discussed above show some discrepancies, there is an overall trend that is universally observed: when the ionization energies increase from the singly-ionized species (\\ion{Si}{2}, \\ion{C}{2}) to intermediate ions (\\ion{Si}{3}, \\ion{Si}{4}) to \\ion{C}{4}\\ to \\ion{O}{6}, the column density dispersions and dependence on $R$ decrease. While the larger scatter in the low and intermediate column densities compared to \\ion{O}{6}\\ was observed previously \\cite[e.g.,][]{werk13,liang16}, that trend with $R$ was not as obvious owing to a larger scatter at any $R$, in part caused by neighboring galaxies or different galaxy masses \\citep{oppenheimer18a}.
This general trend is the primary point of agreement between the observations and simulations, especially considering that the simulations were not tuned to match the CGM properties. This trend most likely arises from the physical conditions of the gas: in the inner regions of the CGM the gas takes on a density that favors the production of the low and intermediate ions. At these densities \\ion{O}{6}\\ would need to be collisionally ionized or distributed in pockets of low-density photoionized gas. In the outer regions of the CGM, the gas must have a much lower density, at which \\ion{O}{6}\\ and weak \\ion{Si}{3}, but nearly no singly ionized species, can be produced predominantly by photoionization. This basic structure of the CGM appears to be in broad agreement among Project AMIGA, statistical samplings of many galaxies like COS-Halos, and three different suites of simulations. \n\n\\subsection{Implications for the MW CGM}\n\nBased on the findings from Project AMIGA, it is likely that the MW has not only an extended hot CGM \\citep{gupta14,gupta17}, but also an extended CGM of cool (\\ion{Si}{2}, \\ion{Si}{3}, \\ion{Si}{4}) and warm-hot (\\ion{C}{4}, \\ion{O}{6}) gas that extends all the way to about 300 kpc (\\ensuremath{R_{\\rm vir}}), and even farther away for the \\ion{O}{6}. In fact, the MW and M31 \\ion{O}{6}\\ CGMs most likely already overlap, as can be seen, e.g., in the CLUES simulations of the Local Group \\citep{nuza14}, since the distance between M31 and the MW is only 752 kpc. \n\nThe CGM of M31 is not detected with a large covering factor at high peculiar velocities (see Fig.~\\ref{f-velavgmap}), and in fact beyond 100 kpc, the velocities $v_{\\rm M31}$ are scattered around 20 ${\\rm km\\,s}^{-1}$. Even within 100 kpc, the average velocity is about 90 ${\\rm km\\,s}^{-1}$, which would barely qualify as a HVC in the MW.
In the MW most of the absorption within $\\pm 90$ ${\\rm km\\,s}^{-1}$\\ relative to the systemic velocity of the MW in a given direction is dominated by the disk, i.e., material within a few hundred pc from the galactic plane. Because the HVC velocities are high enough to separate them from the disk absorption, HVCs in the MW have been studied for many years to determine the ``halo\" properties of the MW \\citep[e.g.][]{wakker97,putman12,richter17}. However, we know now that the distances of these HVCs, including the predominantly ionized HVCs, are not at 100s of kpc from the MW, but most of them are within 15--20 kpc from the sun \\citep[e.g.][]{wakker01,wakker08,thom08,lehner11a,lehner12}, i.e., in a radius not even explored by Project AMIGA and many other surveys of the galaxy CGM at higher redshifts \\citep[e.g.,][]{werk13,liang14,borthakur16,burchett16}. Only the MS allows us to probe the interaction between the MW and the Magellanic Clouds in the CGM of the MW out to about 50--100 kpc \\citep[e.g.,][]{donghia16}. The results from Project AMIGA strongly suggest that the CGM of the MW is hidden in the low velocity absorption arising from its disk (see also \\citealt{zheng15}). To complicate the matter, the column densities of the low ions, intermediate ions, and \\ion{C}{4}\\ drop substantially beyond 100--150 kpc (see, e.g., Figs.~\\ref{f-coltot-vs-rho}, \\ref{f-coltotsi-vs-rho}). Owing to its strength and little dependence on $R$, \\ion{O}{6}\\ is among the best ultraviolet diagnostics of the extended CGM (see also the recent FOGGIE simulation results by \\citealt{zheng20}). \n\n\\section{Summary}\\label{s-sum}\nWith Project AMIGA, we have surveyed the CGM of a galaxy with an unprecedented number of background targets (43) piercing it at various azimuths and impact parameters, 25 from $0.08 \\ensuremath{R_{\\rm vir}}$ to about $1.1\\ensuremath{R_{\\rm vir}}$ and the additional 18 beyond $1.1\\ensuremath{R_{\\rm vir}}$. Our main findings are summarized as follows.\n\\begin{enumerate}\n\\item We estimate the metal mass of the $\\sim 10^4$--$10^{5.5}$ K CGM gas probed by our survey to be $\\ga 1.18 \\times 10^7$ M$_\\sun$.
The total metal mass could be as large as $\\ga 2.5 \\times 10^8$ M$_\\sun$ if the dust and hot X-ray gas are accounted for. Since the total metal mass in the disk of M31 is about $M^{\\rm disk}_{\\rm Z}\\simeq 5\\times 10^8$ M$_\\sun$, the CGM of M31 has at least 50\\% of the present-day metal mass of its disk and possibly much more. \n\\item We estimate the baryon mass of the $\\sim 10^4$--$10^{5.5}$ K gas to be $\\ga 3.7 \\times 10^{10}\\,(Z\/0.3\\,Z_\\sun)^{-1}$ M$_\\sun$ at \\ensuremath{R_{\\rm vir}}. The dependence on the largely unknown metallicity of the CGM makes the baryon mass estimate uncertain, but it is broadly comparable to other recent observational results or estimates in zoom-in simulations. \n\\item We study whether any of the M31 dwarf satellites could give rise to some of the observed absorption associated with the CGM of M31. We find it is plausible that a few absorbers within close spatial and velocity proximity of the dwarfs could be associated with the CGM of the dwarfs if they have a gaseous CGM. However, these are Sph galaxies, which have had their gas stripped via ram pressure and are unlikely to have much gas left in their CGM. And, indeed, none of the properties of the absorbers in close proximity to these dwarf galaxies show any peculiarity that would associate them with the CGM of the satellites rather than the CGM of M31.\n\\end{enumerate}\n\n\n\\section*{Acknowledgements}\nWe thank David Nidever for sharing his original fits of the MS \\ion{H}{1}\\ emission and Ben Oppenheimer for sharing the EAGLE simulations shown in Fig.~\\ref{f-col-vs-rho-eagle}. Support for this research was provided by NASA through grant HST-GO-14268 from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS5-26555.
CAFG and ZH were also supported by NSF through grants AST-1517491, AST-1715216, and CAREER award AST-1652522; by NASA through grants NNX15AB22G and 17-ATP17-0067; by STScI through grants HST-GO-14681.011 and HST-AR-14293.001-A; and by a Cottrell Scholar Award from the Research Corporation for Science Advancement. Based on observations made with the NASA-CNES-CSA Far Ultraviolet Spectroscopic Explorer, which was operated for NASA by the Johns Hopkins University under NASA contract NAS5-32985.\n\n\\software{Astropy \\citep{price-whelan18}, emcee \\citep{foreman-mackey13}, Matplotlib \\citep{hunter07}, PyIGM \\citep{prochaska17a}}\n\n\\facilities{HST(COS); HST(STIS); FUSE}\n\n\\bibliographystyle{aasjournal}\n\n\n\\section{Introduction}\n\n\n\n The process $e^-e^+\\to W^-W^+$ has been studied theoretically and experimentally for a long time, as it provides sensitive tests of the gauge structure of the electroweak interactions \\cite{Boehm, Beenakker,Hahn, Denner}, and checks the possible presence of non standard new physics (NP) contributions. A detailed history of the subject and a list of references may be seen in \\cite{Denner}.\n\n First experimental studies of this process have been done at LEP2 \\cite{LEP2WW}. No signal of departures from SM has been found, but the accuracy is not sufficient to eliminate the possibility of NP effects at a high scale.\n\n LHC studies involving the production of W pairs also exist, but they require a difficult event analysis, because of various sources of background \\cite{LHCWW}.\n\n Future high energy $e^-e^+$ colliders are therefore highly desirable, in order to provide fruitful information about this subject \\cite{ILC,CLIC}.\n\n From the theoretical side, the present situation of a 1loop electroweak (EW) order analysis, aiming e.g.
at searching for any non standard effects, is quite complex. This is already true at the SM level, and if one includes SM extensions, like e.g. SUSY, the sensitivity to any benchmark choice has to be considered. Particularly for amplitudes involving longitudinal $W$'s, the numerical situation is more difficult, because of the huge cancellations taking place. In both the SM and SUSY cases, very lengthy numerical codes are required to describe the complete 1loop EW contribution; see e.g. \\cite{Hahn, Denner}.\n\n The aim of the present paper is to call attention to the fact that at high energies, the 1loop electroweak (EW) corrections to the helicity amplitudes for $e^-e^+\\to W^-W^+$ acquire very simple forms, in both the SM and MSSM cases. To establish them we have done a complete calculation of the 1loop diagrams and then taken the high-energy, fixed-angle limit using \\cite{asPV}. The soft photon bremsstrahlung can then be added as usual \\cite{Boehm, Beenakker,Hahn, Denner}.\n\n\n Our procedure is the same as the one used previously for other 2-to-2 processes, leading to the \"supersimple\" (sim) 1loop EW expressions for the dominant high energy helicity conserving (HC) amplitudes; the helicity violating (HV) ones are quickly vanishing\\footnote{The notations HC and HV are fully defined in the next section.} \\cite{super, ttbar}. We find very simple and quite accurate expressions for the high energy HC amplitudes, in both the SM and MSSM frameworks, which nicely show their relevant dynamical content.\n\n The use of this description, which clearly indicates the relevant physical parameters, should greatly simplify the analysis of the experimental results, particularly because its accuracy turns out to be sufficient for distinguishing 1loop SM (or MSSM) effects from e.g.
various types of additional New Physics contributions, like AGC couplings\n or $Z'$ exchange; see for example \\cite{Andreev}.\n\n The content of the paper is the following. In Section 2 we present\n the various properties of the high energy $e^-e^+\\to W^-W^+$ amplitudes,\n with special attention to their helicity conservation\n (HCns) property \\cite{heli1, heli2}. The explicit supersimple expressions\n are discussed in the later part of Section 2 and in Appendix A.\n In Section 3 we present the energy and angular dependencies\n of the cross sections, for polarized and unpolarized electron beams,\n in either SM or MSSM. And subsequently, we compare these SM or MSSM contributions\n to those due to anomalous gauge couplings (AGC) or $Z'$ effects; both given\n in Appendix B. We find that the accuracy of the supersimple\n expressions is sufficient for distinguishing these various types\n of contributions. Thus, they may be used instead of the complete 1loop results.\n The conclusions summarize these results.\\\\\n\n\n\n\n\n\n\\section{Supersimplicity in $e^-e^+\\to W^-W^+$}\n\nThe process studied, to the 1loop Electroweak (EW) order, is\n\\bq\ne^-_\\lambda (l) ~ e^+_{\\lambda'} (l') \\to W^-_\\mu (p)~ W^+_{\\mu'}(p')~~, \\label{process}\n\\eq\nwhere $(\\lambda, \\lambda')$ denote the helicities of the incoming $(e^-, e^+)$ states,\nand $(\\mu, \\mu')$ the helicities of the outgoing $(W^-, W^+)$. The corresponding\nmomenta are denoted as $(l,l',p,p')$. Kinematics are defined through\n\\bqa\n && s=(l+l')^2=(p+p')^2 ~~,~~ t=(l-p)^2=(l'-p')^2 ~~,~~ \\nonumber \\\\\n && p_W=\\sqrt{{s\\over4}-m^2_W}~~,~~ \\beta_W=\\sqrt{1-\\frac{4 m_W^2}{s}} ~~, \\label{kinematics}\n\\eqa\nwhere $p_W, \\beta_W$ denote respectively the $W^\\mp$ three-momentum and velocity in the\n$W^-W^+$-rest frame. 
Finally, the angle between the incoming $e^-$ momentum $l$ and the outgoing $W^-$ momentum $p$, in the center of mass frame, is denoted as $\\theta$.\n\nDue to the smallness of the electron mass, non-negligible amplitudes at high energies only appear for $\\lambda=-\\lambda'=\\mp 1\/2$. The helicity amplitudes for this process are therefore determined by three helicity indices and denoted as $F_{\\lambda,\\mu,\\mu'}(\\theta)$, where $(e^-, W^-)$ are treated as particles No.1, and $(e^+,W^+)$ as particles No.2, in the standard Jacob-Wick notation \\cite{JW}.\n\nAssuming CP invariance, we obtain the constraint\n\\bq\nF_{\\lambda, \\mu,\\mu'}(\\theta)=F_{\\lambda, -\\mu',-\\mu}(\\theta) ~~, \\label{CP-cons}\n\\eq\nwhich means that the process is described by just 12 independent helicity amplitudes.\n\nAt high energy, the helicity conservation (HCns) rule implies that only the amplitudes satisfying\n\\bq\n\\lambda +\\lambda'=0= \\mu + \\mu' ~~, \\label{heli-cons}\n\\eq\ncan dominate \\cite{heli1,heli2}. These are the helicity conserving (HC) amplitudes, which explicitly are\n\\bq\n F_{\\mp-+}~~,~~ F_{\\mp+-}~~,~~ F_{\\mp 00} ~~.~~ \\label{6HC-amp-list}\n\\eq\nThe purely left-handed $W$ coupling, though, forces the HC amplitudes\n\\bq\nF_{++-}~~,~~F_{+-+}~~, \\label{R-HC-amp-list}\n\\eq\nto vanish at Born level and be very small at 1loop. Thus, only four leading HC helicity amplitudes remain at high energy, namely\n\\bq\nF_{--+}~~,~~ F_{-+-}~~,~~ F_{\\pm00}~~.~~ \\label{4HC-amp-list}\n\\eq\nThe remaining amplitudes, which violate (\\ref{heli-cons}), are termed helicity violating (HV) ones.
Explicitly these are\n\\bq\nF_{-0+}~~,~~F_{---}~~,~~F_{-+0}~~,~~F_{+0+}~~,~~F_{+--}~~,~~F_{++0}~~,~~ \\label{HV-amp-list}\n\\eq\n and are expected to be suppressed like $m_W\/\\sqrt{s}$ ~ or $m^2_W\/ s$, at\n high energy.\\\\\n\n\n\\subsection{Born contribution to the helicity amplitudes}\n\n We next turn to the Born contribution to the HC and HV amplitudes in (\\ref{4HC-amp-list})\n and (\\ref{HV-amp-list}) respectively. The relevant diagrams involve\n neutrino exchange in the t-channel and photon+Z exchange in the s-channel. The resulting\n amplitudes satisfy the HCns constraints \\cite{heli1, heli2}.\n In the usual Jacob and Wick convention \\cite{JW}, their exact expressions are:\\\\\n\n\n\\noindent\n{\\bf Transverse-Transverse (TT) amplitudes ($\\mu,\\mu'=\\pm1$)}\n\nUsing (\\ref{kinematics}), we find\n\\bqa\nF^{\\rm Born}_{\\lambda \\mu \\mu' }&=& {se^2\\sin\\theta\\over16ts^2_W}\n\\delta_{\\lambda,-}\n\\left \\{ \\mu+\\mu'+\\beta_W(1+\\mu\\mu')-2\\mu(1+\\mu'\\cos\\theta) \\right \\}\n\\nonumber\\\\\n&&+{se^2\\over4}\\left [{Q_e\\over s}+{a_{eL}\\delta_{\\lambda,-}+a_{eR}\\delta_{\\lambda,+}\\over\n2s^2_W(s-m^2_Z)}\\right ](1+\\mu\\mu')(2\\lambda)\\beta_W\\sin\\theta ~~,\n\\label{FBorn-TT}\n\\eqa\nwith\n\\bq\nQ_e=-1~~,~~a_{eL}=1-2s^2_W~~,~~a_{eR}=2s^2_W~~, \\label{e-couplings}\n\\eq\ndetermining the electron charge, and the $Z$ left- and right-couplings.\nBecause of the purely left-handed $W$ coupling, Eqs.(\\ref{FBorn-TT}) leads to\n\\bq\nF^{\\rm Born}_{+{1\\over2},\\mu, -\\mu}=0 ~~, \\label{R-HC-TT-Born}\n\\eq\nas already said just after (\\ref{R-HC-amp-list}).\nIn addition, (\\ref{FBorn-TT}) leads at high energy to\n\\bq\nF^{\\rm Born}_{\\lambda\\mu\\mu} \\to 0 ~~, \\label{Born-asym-TT-HV}\n\\eq\nin agreement with HCns \\cite{heli1, heli2}, and\n\\bq\nF^{\\rm Born}_{-{1\\over2}\\mu-\\mu}\\to - {e^2 \\sin\\theta (\\mu-\\cos\\theta)\n\\over 4s^2_W (\\cos\\theta-1)}~~. 
\\label{Born-asym-TT-HC}\n\\eq\nThis confirms that the first two HC Born amplitudes in (\\ref{4HC-amp-list}),\ngo to constants, asymptotically.\\\\\n\n\n\\noindent\n{\\bf Transverse-Longitudinal (TL) and Longitudinal-Transverse (LT)\\\\\namplitudes ($\\mu=\\pm1,\\mu'=0$, $\\mu=0,\\mu'=\\pm1$)}\n\nUsing again (\\ref{kinematics}), we obtain\n\\bqa\nF^{\\rm Born}_{\\lambda\\mu 0}&=& {s\\sqrt{s}e^2\\over8\\sqrt{2}m_Wts^2_W}\\delta_{\\lambda,-}\n\\left \\{ (\\beta_W-\\cos\\theta)(1-\\mu\\cos\\theta)- {2m^2_W\\over s}(\\mu-\\cos\\theta)\\right \\}\n\\nonumber\\\\\n&&-{s\\sqrt{s}e^2\\over2\\sqrt{2}m_W}\\left [{Q_e\\over s}+{a_{eL}\\delta_{\\lambda,-}\n+a_{eR}\\delta_{\\lambda,+}\\over\n2s^2_W(s-m^2_Z)} \\right ]\\beta_W(1+2\\lambda\\mu\\cos\\theta)~~, \\label{FBorn-TL} \\\\\nF^{\\rm Born}_{\\lambda 0 \\mu'}&=& {s\\sqrt{s}e^2\\over8\\sqrt{2}m_Wts^2_W}\\delta_{\\lambda,-}\n\\left \\{(\\beta_W-\\cos\\theta)(1+\\mu'\\cos\\theta)- {2m^2_W\\over s}(\\mu'+\\cos\\theta) \\right \\}\n\\nonumber\\\\\n&&-{s\\sqrt{s}e^2\\over2\\sqrt{2}m_W}\n\\left [{Q_e\\over s}+{a_{eL}\\delta_{\\lambda,-}+a_{eR}\\delta_{\\lambda,+}\\over\n2s^2_W(s-m^2_Z)} \\right ]\\beta_W(1-2\\lambda\\mu'\\cos\\theta)~~. 
\\label{FBorn-LT}\n\\eqa\n\nThe amplitudes in (\\ref{FBorn-TL}, \\ref{FBorn-LT}) are both HV, and at high energies\nthey are quickly suppressed like $m_W\/ \\sqrt{s}$.\\\\\n\n\\noindent\nThe {\\bf Longitudinal-Longitudinal (LL) amplitudes ($\\mu=0,\\mu'=0$)}\nare\n\\bqa\nF^{\\rm Born}_{\\lambda 00}&=& {se^2\\sin\\theta\\over16ts^2_W}\\delta_{\\lambda,-}\n\\left \\{ {s\\over m^2_W}(\\beta_W-\\cos\\theta)+2\\beta_W \\right\\}\n\\nonumber\\\\\n&&+{(2\\lambda)s^2e^2\\over8m^2_W}\n\\left [{Q_e\\over s}+{a_{eL}\\delta_{\\lambda,-}+a_{eR}\\delta_{\\lambda,+}\\over\n2s^2_W(s-m^2_Z)} \\right ]\\beta_W(3-\\beta_W^2)\\sin\\theta~~, \\label{FBorn-LL}\n\\eqa\nwhere (\\ref{kinematics}) have again been used.\nAt high energy, keeping terms to order $m^2_Z\/ s$ and $m^2_W\/ s$, one gets\n\\bqa\nF^{\\rm Born}_{-{1\\over2}00} & \\to & - {e^2\\over8s^2_Wc^2_W} \\sin\\theta ~~,\n\\nonumber \\\\\nF^{\\rm Born}_{+{1\\over2}00} & \\to & {e^2\\over4c^2_W} \\sin\\theta ~~, \\label{Born-asym-LL-HC}\n\\eqa\nwhich together with (\\ref{Born-asym-TT-HC}) confirm that all Born HC amplitudes in\n(\\ref{4HC-amp-list}), go to constants, asymptotically.\nOn the contrary, all six HV amplitudes listed in (\\ref{HV-amp-list}) vanish,\n in this limit.\\\\\n\n\nThe Born level properties of the helicity amplitudes are illustrated\nin Figs.\\ref{HV-HC-Born-amp}. 
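As a quick numerical cross-check of the asymptotic limits (\ref{Born-asym-TT-HC}) and (\ref{Born-asym-LL-HC}), the surviving Born HC amplitudes can be sketched in a few lines of Python; the values used below for $\alpha$ and $\sin^2\theta_W$ are illustrative assumptions, not inputs quoted in the text.

```python
import math

# Sketch of the asymptotic (s -> infinity) Born HC amplitudes of
# e- e+ -> W- W+, Eqs. (Born-asym-TT-HC) and (Born-asym-LL-HC).
# ALPHA and SW2 are illustrative assumed inputs, not values from the text.
ALPHA = 1.0 / 128.0           # effective electroweak alpha (assumed)
SW2 = 0.2312                  # sin^2(theta_W) (assumed)
CW2 = 1.0 - SW2               # cos^2(theta_W)
E2 = 4.0 * math.pi * ALPHA    # e^2 = 4 pi alpha

def born_tt_asym(mu, theta):
    """F_{-1/2, mu, -mu} -> -e^2 sin(theta)(mu - cos(theta)) /
    (4 s_W^2 (cos(theta) - 1)); mu = -1 gives F--+, mu = +1 gives F-+-."""
    ct = math.cos(theta)
    return -E2 * math.sin(theta) * (mu - ct) / (4.0 * SW2 * (ct - 1.0))

def born_ll_asym(lam, theta):
    """F_{lam,0,0} -> -e^2 sin(theta)/(8 s_W^2 c_W^2) for lam = -1/2,
    and +e^2 sin(theta)/(4 c_W^2) for lam = +1/2."""
    if lam < 0:
        return -E2 * math.sin(theta) / (8.0 * SW2 * CW2)
    return E2 * math.sin(theta) / (4.0 * CW2)

theta = math.pi / 2.0  # 90 degrees
print("F--+ :", born_tt_asym(-1, theta))
print("F-+- :", born_tt_asym(+1, theta))
print("F-00 :", born_ll_asym(-0.5, theta))
print("F+00 :", born_ll_asym(+0.5, theta))
```

At $\theta=90^\circ$ the sketch reproduces the antisymmetry of the TT limit under $\mu\to-\mu$ and the relative factor $1/(2s^2_W)$ between the two LL limits; none of the outputs depend on $s$, in line with the constant asymptotic behaviour discussed above.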
The two HC amplitudes listed in (\\ref{R-HC-amp-list})\nare not shown, since they vanish when coefficients proportional to the\nelectron mass are neglected.\\\\\n\n\n\n\n\n\n\n\\subsection{Helicity amplitudes to the 1loop electroweak (EW) order}\n\n\nThe relevant contributions come from up and down triangle diagrams in the t-channel;\ninitial and\nfinal triangle diagrams in the s-channel; direct, crossed and\ntwisted box diagrams; specific triangles involving a 4-leg gauge boson\ncoupling; and finally neutrino, photon and Z self-energies.\nCounter terms in the Born contributions, which help cancel the\ndivergences induced by self-energy and triangle diagrams, are also included, leading to the so-called\non-shell renormalization scheme \\cite{OS}.\n\n\n\nSuch computations have already been done; see for example \\cite{Hahn, Denner}.\nBut our aim here is to look at the specific properties of each helicity\namplitude, and to derive simple high energy expressions for the\nHC ones. For this reason we repeated the complete calculation of the 1loop EW corrections\nand then computed their high energy expressions, which we call supersimple (sim),\nusing the expansion of \\cite{asPV}. Special attention is paid to the\nvirtual photon exchange diagrams leading to infrared singularities (when $m_\\gamma\\to 0$),\nwhich are then cancelled by the addition of the soft photon bremsstrahlung contribution. The sim\nexpressions are given (in Appendix A) for the two possible choices: an arbitrarily small $m_\\gamma$\nvalue, or $m_\\gamma=m_Z$, which can be considered as ``small'' at high energies. This second\nchoice, also used in previous studies \\cite{super, ttbar}, has the advantage of\nleading to even simpler expressions, as we can see in Appendix A.\\\\\n\nAs already said and numerically shown below, the HV amplitudes in (\\ref{HV-amp-list})\n are negligible at high energies. Only the four HC amplitudes appearing in (\\ref{4HC-amp-list})\n are relevant there. 
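The helicity counting used above (18 amplitudes, reduced to 12 independent ones by the CP relation (\ref{CP-cons}), to 6 HC ones by the rule (\ref{heli-cons}), and to 4 leading ones once the purely left-handed $W$ coupling is imposed) can be reproduced by a short enumeration sketch in plain Python, assuming nothing beyond the helicity labels introduced in the text:

```python
from itertools import product

# Enumeration check of the helicity counting for e- e+ -> W- W+:
# lambda = -+1/2 (encoded as -1/+1), W helicities mu, mu' in {-1, 0, +1}.
lambdas = (-1, +1)
pols = (-1, 0, +1)
amps = list(product(lambdas, pols, pols))      # all F_{lambda,mu,mu'}

# CP invariance, Eq. (CP-cons): F_{lam,mu,mu'} = F_{lam,-mu',-mu};
# keep one representative per CP orbit.
independent = {min((lam, mu, mup), (lam, -mup, -mu))
               for lam, mu, mup in amps}

# HCns rule, Eq. (heli-cons): the HC amplitudes have mu + mu' = 0.
hc = [a for a in amps if a[1] + a[2] == 0]

# The purely left-handed W coupling removes F_{+,-,+} and F_{+,+,-},
# leaving F--+, F-+- and F_{lam,0,0} for both lam.
leading = [a for a in hc if a[0] < 0 or a[1] == 0]

print(len(amps), len(independent), len(hc), len(leading))  # -> 18 12 6 4
```

The four survivors are exactly $F_{--+}$, $F_{-+-}$ and $F_{\pm00}$ of (\ref{4HC-amp-list}).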
Turning to them, we present in Appendix A.1\n the very simple sim expressions\n for the TT amplitudes $F_{--+}$, ~$F_{-+-}$; while the corresponding expressions for the LL\namplitudes $F_{-00}$,~ $F_{+00}$ appear in Appendix A.2.\nThe results (\\ref{simSM--+}, \\ref{simSM-+-}, \\ref{simSM+00}, \\ref{simSM-00})\ngive the SM predictions,\nwhile (\\ref{simMSSM--+}, \\ref{simMSSM-+-}, \\ref{simMSSM+00}, \\ref{simMSSM-00})\ngive the MSSM ones, always corresponding to the $m_\\gamma=m_Z$ choice.\nThe corrections to be done to them in order to obtain the general result for any $m_\\gamma$,\nappear in (\\ref{deltaFTT}, \\ref{deltaF00}).\n\n\nFor deriving these, we start from the complete 1loop EW results\nin terms of Passarino-Veltman (PV) functions \\cite{PV},\nand then use their high energy expansions given in \\cite{asPV}.\nFor the TT amplitudes $F_{--+}$, ~$F_{-+-}$, the derivation is quite\nstraightforward.\n\nFor the two LL amplitudes $F_{-00}$, $F_{+00}$ though, the derivation is very delicate,\nbecause of huge gauge cancelations\namong contributions exploding like\\footnote{ Particularly for neutralinos, this demands\na very accurate determination of their mixing matrices, like the one supplied e.g. by \\cite{LeMouel}.}\n $s\/ m^2_W$. Such cancelations also occur at Born level,\nbetween t- and s-channel terms. But at 1loop level, the situation is\nmuch more spectacular, because more diagrams are involved.\nTechnically, the derivation of the limiting expressions can be facilitated by using\nthe equivalence theorem and looking at the Goldstone process $e^-e^+\\to G^-G^+$\n\\cite{equivalence}.\\\\\n\n\n\n\n\n\n\nWe next turn to the infrared divergencies implied by the presence of\n $m_\\gamma$ in the $e^-e^+\\to W^-W^+$ amplitudes. 
As usual, these are canceled at the cross section\nlevel by adding to the 1loop EW results for $d\\sigma (e^-e^+\\to W^-W^+)\/d\\Omega$, the Born-level\n cross section describing the soft photon bremsstrahlung, given by\n\\bq\n\\frac{d\\sigma_{\\rm brems}(e^-e^+\\to W^-W^+\\gamma)}{d\\Omega}=\n \\frac{d\\sigma^{\\rm Born} (e^-e^+\\to W^-W^+)}{d\\Omega} \\delta_{\\rm brems}(m_\\gamma,\\Delta E)~~,\n \\label{brems-sigma}\n\\eq\nwhere $\\delta_{\\rm brems}(m_\\gamma,~\\Delta E)$ is given by\\footnote{Parameter $\\lambda$ in \\cite{Boehm} corresponds to our $m_\\gamma$.} Eq.~(5.18) in \\cite{Boehm}, while\n $\\Delta E$ denotes the maximum energy of the emitted unobservable\nsoft photon, satisfying\n\\bq\nm_\\gamma \\leq \\Delta E \\ll \\sqrt{s} ~~. \\label{brems-kin}\n\\eq\nThe only requirement for this cancelation to happen is that\n $m_\\gamma$ is {\\it small}; i.e. that terms proportional to a power of $m_\\gamma$\n (not inside a high energy logarithm) are always negligible. Under this condition, any\n $m_\\gamma$-dependence cancels out in the sum of $d\\sigma (e^-e^+\\to W^-W^+)\/d\\Omega$ and\n $d\\sigma_{\\rm brems}\/d\\Omega$.\n\n But, at the high energies of $\\sqrt{s} \\gg m_Z$ we are interested in,\n the $Z$ mass is also {\\it small}, since\n any such $m_Z$ coefficient is necessarily suppressed by an energy denominator. In other words,\n since the infrared $m_\\gamma$ effects cancel out in the cross section including the bremsstrahlung contribution (\\ref{brems-sigma}), they will also cancel in the special case $m_\\gamma=m_Z$.\n As already said, we made this choice because it leads to the simplest expressions.\n The illustrations given below correspond to it.\n\n In order to obtain the (infrared sensitive) unpolarized cross section\n $d\\sigma (e^-e^+\\to W^-W^+)\/d\\Omega$ from the experimental data, one obviously has to subtract the\n bremsstrahlung contribution. 
Consequently, the difference between the values of this cross section\n regularized at an arbitrary $m_\\gamma$ or at $m_\\gamma=m_Z$, for the same $\\Delta E$,\n is given by\n \\bqa\n && \\frac{d\\sigma (e^-e^+\\to W^-W^+) }{d\\Omega}\\Big |_{m_\\gamma} -\n \\frac{d\\sigma (e^-e^+\\to W^-W^+) }{d\\Omega}\\Big |_{m_\\gamma \\to m_Z} \\nonumber \\\\\n && = \\frac{d\\sigma^{\\rm Born}}{d\\Omega}\n {\\alpha\\over\\pi}\\left ( \\ln {m_Z\\over m_\\gamma} \\right )\n \\left ( 4 -2\\ln{s\\over m^2_e} +4\\ln{m^2_W-u \\over m^2_W-t}\n +2{s-2m^2_W\\over s\\beta_W}\\ln{1-\\beta_W\\over 1+\\beta_W} \\right ) ; \\label{mgamma-mZ-effect}\n \\eqa\n see our eqs.(\\ref{kinematics}, \\ref{brems-sigma}) and eq.(5.18) of \\cite{Boehm}.\n If one wants to keep the usual choice of an arbitrarily small $m_\\gamma$ in the bremsstrahlung cross section,\n one would have to use our extended sim expressions given in (\\ref{deltaFTT}, \\ref{deltaF00})\n of Appendix A.\\\\\n\n\nTurning now to the numerical illustrations, we first check\nthat all HV amplitudes quickly vanish at high energy, in both MSSM and SM \\cite{heli1, heli2}.\nFor the MSSM case, we use benchmark S1 of \\cite{bench}, where the EW\n scale values of all squark masses are at the 2 TeV level, $A_t=2.3$ TeV,\n the slepton masses are at $0.5$ TeV, and the remaining mass parameters (in TeV) are\n \\bq\n \\mu =0.4~~,~~ M_1=0.25 ~~,~~ M_2=0.5 ~~,~~ M_3=2 ~~, \\label{bench-param}\n \\eq\nwhile $\\tan\\beta=20$.\nSuch a benchmark is consistent with present LHC constraints \\cite{bench}.\nAll MSSM results shown in this paper use this benchmark.\nSimilar results are also obtained for other LHC-consistent MSSM benchmarks, like those\nlisted e.g. 
in the Snowmass suggestion \\cite{Snowmass},\nor the very encouraging cMSSM ones given in \\cite{Konishi}.\n\n\nComparing the SM and MSSM results in Figs.\\ref{HV-full-amp},\nwe see that for all HV amplitudes, the purely supersymmetric\ncontributions mostly cancel the (already suppressed)\npure SM ones; this is more spectacular for energies above the SUSY\nscale. Thus, Figs.\\ref{HV-full-amp} indeed show that the six HV amplitudes listed\nin (\\ref{HV-amp-list}) are quickly suppressed in MSSM, as well as in SM.\\\\\n\n\n\nWe next turn to the high energy description of the four\nleading (HC) amplitudes listed in (\\ref{4HC-amp-list}).\n As shown in Figs.\\ref{HC-full-amp}, the supersimple (sim) approximations to them\n follow very closely the complete expressions for the 1loop electroweakly corrected\n amplitudes, in both SM and MSSM.\nFor the TT amplitudes $F_{--+}$, $F_{-+-}$, this appears in the upper panels\nof Figs.\\ref{HC-full-amp}, for SM and the MSSM benchmark mentioned above.\nThe corresponding numerical illustrations for the LL HC amplitudes are shown in the lower\npanels. These results indicate that all four 1loop predictions, i.e. the complete SM and\nMSSM results, as well as their sim SM and sim MSSM approximations,\nare very close to each other at high energies. Moreover, a comparison of\nFigs.\\ref{HV-full-amp} and \\ref{HC-full-amp} immediately shows that soon above 0.5 TeV\nthe HC amplitudes in (\\ref{4HC-amp-list}) are much larger than all other ones.\n\nThere are two main conclusions we draw from this, for energies up to a TeV or so:\nThe first is that the process\n $e^-e^+\\to W^-W^+$ is rather insensitive to MSSM contributions, for benchmarks consistent\nwith the present SUSY constraints \\cite{bench, Snowmass, Konishi}. 
And the second conclusion\nis that (\\ref{simSM--+}, \\ref{simSM-+-}, \\ref{simSM+00}, \\ref{simSM-00})\nprovide a true description of the sources of the relevant dynamics.\\\\\n\n\n\n\\section{Application to the $e^-e^+\\to W^-W^+$ observables}\n\n The observables we study here are the unpolarized differential cross sections\n\\bq\n{d\\sigma\\over d\\cos\\theta}={\\beta_W\\over 128\\pi s}\n\\Sigma_{\\lambda \\mu \\mu'}|F_{\\lambda \\mu \\mu'}(\\theta)|^2 ~~, \\label{dsigma-unpol}\n\\eq\nas well as the polarized differential cross sections using right-handedly polarized\n electron beams $e^-_R$,\n\\bq\n{d\\sigma^R\\over d\\cos\\theta}={\\beta_W\\over 64\\pi s}\n\\Sigma_{\\mu \\mu'}|F_{+{1\\over2},\\mu \\mu'}(\\theta)|^2 ~~, \\label{dsigma-pol}\n\\eq\nwhere (\\ref{kinematics}) is used.\n\nThese cross sections are shown in Figs.\\ref{sigmas}, where the complete\n1loop EW order SM results are compared to the corresponding\nsupersimple (sim) ones. The latter are constructed\nby using the expressions of Appendix A for the HC amplitudes, while the HV amplitudes\nare approximated by the quickly vanishing Born contributions\\footnote{If instead\nwe had completely ignored the HV amplitudes in the sim cross sections, then\nappreciable differences would only appear for energies below 1 TeV,\nparticularly for the $e^-_R$ cross sections.} in\n(\\ref{FBorn-TT}, \\ref{FBorn-TL}, \\ref{FBorn-LT}). As shown in\nFigs.\\ref{sigmas}, the sim results very closely follow the SM ones.\n\n In addition, we show in the same figures how the complete 1loop SM results are changed\n when an anomalous contribution is added, such as 
AGC1 or AGC2, respectively\ndefined by (\\ref{AGC1-choice}) or (\\ref{AGC2-gLgR}, \\ref{AGC2-choice}) of Appendix B.1; or\na $Z'$-effect defined in Appendix B.2.\n\nLeft panels in Figs.\\ref{sigmas} present results for\nthe unpolarized $e^-e^+$ cross sections; while right panels show results for the\n$e^-_Re^+$ cross sections involving a right-handedly polarized electron.\n\nThe upper panels present the energy dependencies at $\\theta =30^\\circ$;\nwhile the middle (lower) panels indicate the angular dependencies at\n$\\sqrt{s}=1$ TeV ($\\sqrt{s}=5$ TeV).\n\nIn all cases, the supersimple (sim) description is very good.\nNo MSSM or sim MSSM illustrations are given, since they are very close to the corresponding\n SM ones; at the 1-2\\% level, for benchmarks consistent with the current LHC\n constraints \\cite{bench, Snowmass, Konishi}.\n\n In other words, at the scale of Figs.\\ref{sigmas},\n the SM and MSSM results for \\cite{bench} would coincide.\nSuch a weakness of the pure supersymmetric\n contributions has already been noticed in previous analyses\n \\cite{Hahn}. Because of the different\nmass scales of the supersymmetric partners, at a given energy,\nthe absolute magnitudes of the SUSY 1loop effects\nmay differ notably. 
But relative to the SM contributions (Born + 1 loop),\nthey always remain very small.\n\nConcerning the relevant dynamics for the unpolarized $e^-e^+$ cross sections,\nwe note that, at forward angles, they are dominated by the left-handed-$e^-$ TT\namplitudes.\\\\\n\nFor specific experimental studies of the LL amplitudes,\none can either make a final polarization analysis of the $W^\\mp$ decays,\nor use a right-handedly polarized $e^-$ beam, so that\nthe usual TT amplitudes do not contribute.\nIn the right panels of Figs.\\ref{sigmas}, we show the energy and angular dependencies\nof these $e^-_R e^+$ cross sections.\n\n These LL studies are probably the best place to search for\nanomalous contributions, like those from the AGC effects presented in Appendix B.1.\nAs seen in (\\ref{FAGC-TT}-\\ref{FAGC-LL}), such\nAGC contributions do not appear in the HC TT amplitudes;\nbut they do appear in the HC LL amplitudes, as well as\nin all the HV ones (TT, TL and LT).\nThis is a remarkable property that should be checked by a careful\nanalysis of experimental signals.\n\n The most simple-minded implementation of AGC physics is presented\n by the AGC1 model in Figs.\\ref{HV-full-amp}, \\ref{sigmas}, \\ref{HC-full-NPamp},\n where the parameters in Appendix B.1 are fixed as in (\\ref{AGC1-choice}).\n In this case, the anomalous contributions to the LL\namplitudes increase like $ s\/ m^2_W$,\ncausing a strong increase of the cross sections\nwith the energy.\n\nSuch a strong increase may be tamed, though, by the existence of\n scales $M$ in the various anomalous couplings, which transform them into\n form factors decreasing like $M^2\/(s+ M^2)$.\n\nAnother way of taming the above strong AGC increase is the addition\nof new exchanges in the\nt-channel, such that one gets cancelations between s- and t-channel contributions,\n like in the Born SM case.\nA purely ad-hoc phenomenological solution of this kind is given by AGC2, presented\nin Appendix B.1, and determined 
by\n(\\ref{AGC2-gLgR}, \\ref{AGC2-choice}). In the effective Lagrangian framework\nmany such possibilities exist; see e.g. \\cite{ef-Lagrangian}.\n\n\n The AGC1 and AGC2 results in Figs.\\ref{HV-full-amp}, \\ref{HC-full-NPamp}, \\ref{sigmas},\nshow various amplitudes and cross-sections where such anomalous\nbehaviours may be seen and compared to the SM and MSSM results.\n\nPresent experimental constraints on fixed AGC couplings\n from LEP2 \\cite{LEP2WW} are of the order of\n $\\pm 0.04$.\n From LHC \\cite{LHCWW}, they are of the order of $\\pm 0.1$;\n compare with (\\ref{AGC1-choice}, \\ref{AGC2-choice}).\n These values are much larger than the uncertainties of\n our description.\\\\\n\nAnother type of anomalous contribution is a $Z'$ exchange in the s-channel;\n see \\cite{Andreev} and Appendix B.2.\n Here also one can impose a good high energy behaviour on the\nLL and LT amplitudes. A simple solution is a $Z-Z'$ mixing\nsuch that the total s-channel contribution at high energy cancels the standard\nt-channel exchange at Born level.\n Figs.\\ref{HV-full-amp}, \\ref{sigmas}, \\ref{HC-full-NPamp} show the\n behaviours of the various amplitudes\n and cross-sections under the presence of such $Z'$ contributions,\n and compare them to the corresponding SM and MSSM ones.\\\\\n\nFrom the above illustrations one sees that our supersimple expressions\nare sufficiently accurate to distinguish 1loop SM or MSSM corrections\nfrom such New Physics. But these are examples. More elaborate\nanalyses could of course be done,\n for example in the spirit of \\cite{Andreev}; still remaining in a\n sensitivity region where supersimple expressions sufficiently describe\nSM physics. The existence of this possibility constitutes an important\n motivation for supersimplicity. \\\\\n\n\n\n\\section{Conclusions}\n\nIn this paper we have analyzed the high energy behaviour\nof the 1loop EW corrections to the\n$e^-e^+\\to W^- W^+$ helicity amplitudes. 
We have\nverified that, soon above threshold,\nthe four helicity conserving amplitudes in (\\ref{4HC-amp-list}) are much larger than\nall other ones, in both SM and MSSM.\n\nWe have then established the so-called supersimple (sim) expressions for\nthe HC amplitudes in (\\ref{4HC-amp-list}), both in SM and in MSSM.\nThese expressions (explicitly\nwritten in Appendix A) are really simple and provide a panoramic view of\nthe dynamics; i.e., of the fermion, gauge and Higgs exchanges,\nand (in the supersymmetric part) of the sfermion, additional Higgs boson,\nchargino and neutralino exchanges.\n\nMoreover, the accuracy of these sim expressions\nis sufficient to allow their use in order to search\nfor possible new physics contributing in addition to SM or MSSM.\nIn other words, sim expressions may be used to avoid the enormous codes\nneeded when using the complete 1loop expressions.\nThus, analyses done by using only Born terms can be easily upgraded\nto the 1loop EW order.\n\nIn previous work \\cite{super, ttbar},\nwe have emphasized the peculiar simplicity arising in the MSSM case.\nHowever, in the process $e^-e^+\\to W^- W^+$, the purely\nsupersymmetric contributions are rather small. So even in the purely\nSM case, we get simple accurate expressions that are valid at LHC energies.\n\nAt present there is no signal of supersymmetry at LHC. The discovery of the Higgs\nboson at 125 GeV is nevertheless a source of questions about the\npossibility of various kinds of New Physics effects \\cite{Altarelli}.\nThe process $e^-e^+\\to W^- W^+$ is a typical place where such\neffects can be looked for. For our illustrations,\nwe have taken the cases of AGC or $Z'$ contributions,\nwhich have often been discussed. 
Other possibilities may of course\nbe tried \\cite{Andreev}.\n\nOur supersimple\nexpressions are intended to help differentiate such New Physics effects from\nstandard or supersymmetric corrections, in a way which is as simple as possible,\nwhile at the same time allowing us to directly see the responsible dynamics. \\\\\n\n\n\n\n\\vspace*{1cm}\n\n\n\\renewcommand{\\thesection}{A}\n\\renewcommand{\\theequation}{A.\\arabic{equation}}\n\\setcounter{equation}{0}\n\n\n\n\n\\section{Appendix: Supersimple expressions for the 4 HC amplitudes}\n\n\nThe purpose of this Appendix is to present the {\\it supersimple} (sim) expressions\nfor the four leading HC amplitudes listed in (\\ref{4HC-amp-list}).\nThe procedure is valid for any 2-to-2 process at 1loop EW order,\nin either MSSM or SM, provided the Born contribution is non-negligible.\nIt is based on the fact that the\n helicity conserving (HC) amplitudes are the only relevant\n ones at high energy \\cite{heli1,heli2}.\n\n To derive these sim expressions, we start from a complete 1loop EW order calculation,\n and then take the high energy\nlimit using \\cite{asPV}. 
As in the analogous cases studied in \\cite{super, ttbar},\nthese expressions\nconstitute a very good high energy approximation to the HC amplitudes renormalized\non-shell \\cite{OS}.\n\n\nApart from possible additive constants,\nthese sim expressions consist of linear combinations of just four forms \\cite{super, ttbar}.\nFor $e^-e^+\\to W^-W^+$, the structure of these forms simplifies as\n\\bqa\n&& \\overline{\\ln^2x_{Vi}} = \\ln^2x_{V}+4L_{aVi} ~~,~~\nx_V \\equiv \\left (\\frac{-x-i\\epsilon}{m_V^2} \\right )~~, \\label{Sud-ln2-form} \\\\\n&& \\overline{\\ln x_{ij}} = \\ln x_{ij}+b^{ij}_0(m_a^2)-2 ~~ , ~~\n\\ln x_{ij}\\equiv \\ln \\frac{-x-i\\epsilon}{m_im_j} ~~, \\label{Sud-ln-form} \\\\\n&& \\overline{\\ln^2r_{xy}}=\\ln^2r_{xy}+ \\pi^2 ~~~,~~~\nr_{xy} \\equiv \\frac{-x-i\\epsilon}{-y-i\\epsilon} ~~~~, \\label{ln2r-form} \\\\\n&& \\ln r_{xy} ~~~,~~ \\label{lnr-form}\n\\eqa\nwhere $(x,y)$ denotes any two of the Mandelstam variables $(s,t,u)$.\n\nThe indices $(i,j,V)$ in the first two forms (\\ref{Sud-ln2-form}, \\ref{Sud-ln-form}),\ncalled Sudakov augmented forms \\cite{super}, denote internally exchanged particles\nin the various 1loop diagrams, while $V$ always\nrefers to a gauge exchange. The index ``$a$'' always refers to a particle such that\nthe tree-level vertices $aVi$ or $aij$ are non-vanishing. This particle $a$ could\neither be an external particle (i.e. $e^\\mp$ or $W^\\mp$ for the process studied here),\nor a particle contributing at tree level through an exchange in the $s,~t$ or $u$\nchannel (i.e. 
$\\nu_e$, or\\footnote{As always, for an internal photon we use a mass $m_\\gamma$,\nin order to regularize possible infrared singularities.} $\\gamma, Z$ in our case).\nUsing these, the energy-independent\nexpressions in (\\ref{Sud-ln2-form}, \\ref{Sud-ln-form}) may be written as\n\\cite{super, ttbar, asPV}\n\\bqa\n L_{aVi}& = & \\rm Li_2 \\left ( \\frac{2m_a^2+i\\epsilon}{m_V^2-m_i^2+m_a^2+i\\epsilon +\n\\sqrt{\\lambda (m_a^2+i\\epsilon, m_V^2, m_i^2)}} \\right )\n\\nonumber \\\\\n&& + \\rm Li_2 \\left ( \\frac{2m_a^2+i\\epsilon }{m_V^2-m_i^2+m_a^2+i\\epsilon -\n\\sqrt{\\lambda (m_a^2+i\\epsilon, m_V^2, m_i^2)}} \\right )~~,\\label{LaVi-term} \\\\[0.5cm]\nb_0^{ij}(m_a^2)& \\equiv& b_0(m_a^2; m_i,m_j) =\n2 + \\frac{1}{m_a^2} \\Big [ (m_j^2 -m_i^2)\\ln\\frac{m_i}{m_j}\\nonumber\\\\\n&& + \\sqrt{\\lambda(m_a^2+i\\epsilon, m_i^2, m_j^2)} {\\rm ArcCosh} \\Big\n(\\frac{m_i^2+m_j^2-m_a^2-i\\epsilon}{2 m_i m_j} \\Big ) \\Big ] ~~, \\label{b0ij}\n\\eqa\nwhere\n\\bq\n\\lambda(a,b,c)=a^2+b^2+c^2-2ab-2ac-2bc~~. \\label{lambda-function}\n\\eq\n\n\nThe other two forms (\\ref{ln2r-form}, \\ref{lnr-form}) are solely induced\n by box contributions to the asymptotic amplitudes \\cite{asPV}.\n The forms (\\ref{lnr-form}) in particular, have no dependence on mass scales and never arise\n from differences of the augmented Sudakov linear-log contributions,\n of the type (\\ref{Sud-ln-form}).\\\\\n\nAs already said, apart from possible additive constants,\n the sim expressions consist of linear combinations of the\n four forms (\\ref{Sud-ln2-form}-\\ref{lnr-form}).\nThe coefficients of these forms may\n involve ratios of Mandelstam variables, as well as constants.\nParticularly for the Sudakov augmented forms\n(\\ref{Sud-ln2-form}, \\ref{Sud-ln-form}) though, their coefficients should be\nsuch that, when differences in the scales of masses and Mandelstam variables are disregarded,\n then, the complete coefficients in the implied e.g. 
$\\ln s $ and $\\ln^2 s$ terms\nbecome the constants given by general rules\n\\cite{MSSMrules1,MSSMrules2,MSSMrules3,MSSMrules4}.\n\n\nGenerally, these \\emph{supersimple} HC helicity amplitudes, differ from the\non-shell renormalized ones \\cite{OS}, by small additive constant terms,\nin both, the MSSM and SM cases.\nWe have checked numerically that\nfor the process studied here, these are indeed negligible.\n\nIn the next two subsections we give the \\emph{supersimple} expressions\nfor the $e^-e^+\\to W^-_TW^+_T$ and $e^-e^+\\to W^-_LW^+_L$ HC amplitudes respectively.\nIn these, we first give the results for the case where infrared singularities are\nregularized by using $m_\\gamma=m_Z$ \\cite{super, ttbar};\nand subsequently quote the corrections\nfor the $m_\\gamma \\neq m_Z$ case.\nIn each case, we give separately the SM and the MSSM predictions.\n\n\n\n\n\\subsection{The $e^-e^+\\to W^-_TW^+_T$ HC amplitudes}\n\n\nThere are two such HC amplitudes listed in the left part of\n(\\ref{4HC-amp-list}); namely $F_{-{1\\over2}-+}$ and $F_{-{1\\over2}+-}$.\nIn the $m_\\gamma=m_Z$ case, using the Born results\nin (\\ref{Born-asym-TT-HC}),\n\n\\vspace*{0.1cm}\n\n\\noindent\nthe asymptotic supersimple {\\bf sim SM } expressions are\n\\bqa\nF_{-{1\\over2}-+}&=&F^{\\rm Born}_{-{1\\over2}-+} \\left ({\\alpha\\over16\\pi s^2_W}\\right )\n\\Bigg \\{\\overline{\\ln t_{Ze}}\\left ({3+2c^2_W\\over c^2_W}-{4t\\over s}+{4s\\over u}\\right )\n+\\overline{\\ln t_{W\\nu}}\\left ({-1+10c^2_W\\over c^2_W}-{8t\\over s} \\right )\\nonumber\\\\\n&&\n+{\\overline{\\ln t_{Z\\nu}}\\over c^2_W} +2\\overline{\\ln t_{We}}\n+\\overline{\\ln u_{Ze}}\\left ({4t\\over u}-{4t\\over s} \\right )\n+{8t\\over s}(\\overline{\\ln s_{W\\nu}}+\\overline{\\ln s_{Ze}})-4\\overline{\\ln u_{W\\nu}}\n\\nonumber\\\\\n&&-3\\overline{\\ln^2t_{Ze}}-\\overline{\\ln^2t_{ZW}}\n-3\\overline{\\ln^2t_{W\\nu}}-\\overline{\\ln^2t_{WZ}}\\nonumber\\\\\n&&- {1\\over 
c^2_W}(\\overline{\\ln^2s_{Ze}}\n+4c^2_W\\overline{\\ln^2s_{ZW}})\n-2\\overline{\\ln^2s_{WZ}}+2\\overline{\\ln^2u_{Ze}}+2\\overline{\\ln^2u_{ZW}}\n\\nonumber\\\\\n&&-{2t\\over u}(\\overline{\\ln^2s_{W\\nu}}+\\overline{\\ln^2s_{WZ}}-\\overline{\\ln^2t_{W\\nu}}\n-\\overline{\\ln^2t_{WZ}})\\nonumber\\\\\n&&+\\overline{\\ln^2r_{ts}}\\left [{2u^3+2t^3+6ut^2+2tu^2)\\over 2u^3c^2_W}\n+{6u^3-6t^3)\\over u^3} \\right ]\\nonumber\\\\\n&&+{4s\\over u}\\overline{\\ln^2r_{ut}}+{4(t-u)\\over u}\\overline{\\ln^2r_{us}}\n+\\left [{t(2t+5u)\\over u^2c^2_W}+{t(12t^2+6u^2+6tu)\\over su^2}\\right ]\\ln r_{ts}\n\\nonumber\\\\\n&&\n+~{t(16u+12t)\\over su}\\ln r_{us}-\\left ({8t\\over u}+4 \\right)\\ln r_{tu}\n+~{t(1-6c^2_W)\\over uc^2_W} \\Bigg\\} ~~,\n\\label{simSM--+} \\\\[0.5cm]\nF_{-{1\\over2}+-}&=&F^{\\rm Born}_{-{1\\over2}+-}\\left ({\\alpha\\over16\\pi s^2_W} \\right )\n\\Bigg\\{\\overline{\\ln t_{Ze}}\\left ({3+2c^2_W\\over c^2_W}- {4t\\over s}+{4s\\over u}\\right )\n+\\overline{\\ln t_{W\\nu}}\\left ({-1+10c^2_W\\over c^2_W}-{8t\\over s}\\right )\\nonumber\\\\\n&&\n+{1\\over c^2_W}\\overline{\\ln t_{Z\\nu}} +2\\overline{\\ln t_{We}}\n+\\overline{\\ln u_{Ze}}\\left ({4t\\over u}-{4t\\over s} \\right )\n+ {8t\\over s} \\left (\\overline{\\ln s_{W\\nu}}+\\overline{\\ln s_{Ze}}\\right )\n-4\\overline{\\ln u_{W\\nu}}\n\\nonumber\\\\\n&&-3\\overline{\\ln^2t_{Ze}}-\\overline{\\ln^2t_{ZW}}\n-3\\overline{\\ln^2t_{W\\nu}}-\\overline{\\ln^2t_{WZ}}\\nonumber\\\\\n&&- {1\\over c^2_W} (\\overline{\\ln^2s_{Ze}} +4c^2_W\\overline{\\ln^2s_{ZW}})\n-2\\overline{\\ln^2s_{WZ}}+2\\overline{\\ln^2u_{Ze}}+2\\overline{\\ln^2u_{ZW}}\n\\nonumber\\\\\n&&- {2t\\over u}\\left (\\overline{\\ln^2s_{W\\nu}}+\\overline{\\ln^2s_{WZ}}\n-\\overline{\\ln^2t_{W\\nu}}-\\overline{\\ln^2t_{WZ}} \\right )\n+\\overline{\\ln^2r_{ts}}\\left [{u-t\\over uc^2_W}+{6(u-t)\\over u} \\right ]\\nonumber\\\\\n&&\n+{4s\\over u}\\overline{\\ln^2r_{ut}}\n+({4t^2+2ut+6u^2\\over ut})\\overline{\\ln^2r_{us}}\n+\\left [{-3\\over 
c^2_W}+{18u^2+30ut\\over su} \\right ]\n\\ln r_{ts}\\nonumber\\\\\n&&\n+({4t\\over u}+8)\\ln r_{tu}+ ({4t\\over s}+12)\\ln r_{us}\n- {1-6c^2_W\\over c^2_W}\\Bigg\\} ~~, \\label{simSM-+-}\n\\eqa\n\n\n\n\\vspace*{0.5cm}\n\n\\noindent\nwhile the {\\bf sim MSSM} results, always assuming CP conservation,\nare\n\\bqa\nF_{-{1\\over2}-+}&=&F^{\\rm Born}_{-{1\\over2}-+}\\left ({\\alpha\\over16\\pi s^2_W} \\right )\n\\Bigg\\{{1\\over c^2_W}\\left ( 3\\overline{\\ln t_{Ze}}\n-\\overline{\\ln t_{W\\nu}}+\\overline{\\ln t_{Z\\nu}} \\right ) -2\\overline{\\ln t_{Ze}}\n\\nonumber\\\\\n&& +6\\overline{\\ln t_{W\\nu}}\n+2\\overline{\\ln t_{We}}-3\\overline{\\ln^2t_{Ze}}-\\overline{\\ln^2t_{ZW}}\n-3\\overline{\\ln^2t_{W\\nu}}-\\overline{\\ln^2t_{WZ}}\\nonumber\\\\\n&&- {1\\over c^2_W}(\\overline{\\ln^2s_{Ze}} +4c^2_W\\overline{\\ln^2s_{ZW}})\n-2\\overline{\\ln^2s_{WZ}}+2\\overline{\\ln^2u_{Ze}}+2\\overline{\\ln^2u_{ZW}}\n\\nonumber\\\\\n&&-~{2t\\over u}(\\overline{\\ln^2s_{W\\nu}}+\\overline{\\ln^2s_{WZ}}\n-\\overline{\\ln^2t_{W\\nu}}-\\overline{\\ln^2t_{WZ}})\n\\nonumber\\\\\n&&+{4s\\over u}\\ln r_{tu}-{12t^2\\over su}\\ln r_{ts}+({4t\\over s}-{8t\\over u})\\ln r_{us}\n+{2t\\over uc^2_W}\\ln r_{ts}\n\\nonumber\\\\\n&&-~{1\\over c^2_W}\n\\Big \\{\\sum_j|Z^N_{1j}s_W+Z^N_{2j}c_W|^2\\overline{\\ln t_{\\chi^0_j\\tilde{e_L}}}\n+2c^2_W\\sum_j|Z^+_{1j}|^2\\overline{\\ln t_{\\chi^+_j\\tilde{\\nu}}} \\Big \\}\n\\nonumber\\\\\n&&+\\overline{\\ln^2r_{ts}}\\left [{t^2+u^2\\over u^2c^2_W}+{6t^2+6u^2)\\over u^2}\\right ]\n+~{4s\\over u}\\overline{\\ln^2r_{ut}}+{4(t-u)\\over u}\\overline{\\ln^2r_{us}}\\Bigg\\} ~~,\n\\label{simMSSM--+} \\\\[0.5cm]\nF_{-{1\\over2}+-}&=&F^{\\rm Born}_{-{1\\over2}+-}\\left ({\\alpha\\over16\\pi s^2_W} \\right )\n\\Bigg\\{{1\\over c^2_W}[3\\overline{\\ln t_{Ze}}\n-\\overline{\\ln t_{W\\nu}}+\\overline{\\ln t_{Z\\nu}}] -2\\overline{\\ln t_{Ze}}\n\\nonumber\\\\\n&& +6\\overline{\\ln t_{W\\nu}} +2\\overline{\\ln 
t_{We}}\n-3\\overline{\\ln^2t_{Ze}}-\\overline{\\ln^2t_{ZW}}\n-3\\overline{\\ln^2t_{W\\nu}}-\\overline{\\ln^2t_{WZ}}\n\\nonumber\\\\\n&&-~{1\\over c^2_W}(\\overline{\\ln^2s_{Ze}}\n+4c^2_W\\overline{\\ln^2s_{ZW}})-2\\overline{\\ln^2s_{WZ}}\n+2\\overline{\\ln^2u_{Ze}}+2\\overline{\\ln^2u_{ZW}}\n\\nonumber\\\\\n&&-~{2t\\over u}(\\overline{\\ln^2s_{W\\nu}}+\\overline{\\ln^2s_{WZ}}\n-\\overline{\\ln^2t_{W\\nu}}-\\overline{\\ln^2t_{WZ}})\n\\nonumber\\\\\n&&+~{12(t-s)\\over s}\\ln r_{ts}+({4t\\over s}+8)\\ln r_{us}\n-~({2\\over c^2_W})\\ln r_{ts}-{4s\\over u}\\ln r_{tu}\n\\nonumber\\\\\n&&-~{1\\over c^2_W}\n\\Big \\{\\sum_j|Z^N_{1j}s_W+Z^N_{2j}c_W|^2\\overline{\\ln t_{\\chi^0_j\\tilde{e_L}}}\n+2c^2_W\\sum_j|Z^+_{1j}|^2\\overline{\\ln t_{\\chi^+_j\\tilde{\\nu}}} \\Big \\}\n\\nonumber\\\\\n&&+\\overline{\\ln^2r_{ts}}[{u-t\\over uc^2_W}+{6(u-t)\\over u}]\n+{4s\\over u}\\overline{\\ln^2r_{ut}}+{4(t^2+u^2)\\over ut}\\overline{\\ln^2r_{us}}\n\\Bigg\\} ~~, \\label{simMSSM-+-}\n\\eqa\nwhere the indices $(i,j)$ in (\\ref{simMSSM--+}, \\ref{simMSSM-+-}) and (\\ref{simMSSM+00}, \\ref{simMSSM-00}) below, refer to chargino and neutralino contributions, defined as in \\cite{Rosiek}.\n\n\nNote the constant terms at the end of the r.h.s. 
of the SM results\n(\\ref{simSM--+}, \\ref{simSM-+-}).\nNo such constants appear in the corresponding MSSM amplitudes\n(\\ref{simMSSM--+}, \\ref{simMSSM-+-}).\\\\\n\n In the $m_\\gamma \\neq m_Z$ case, the correction to be added to\n (\\ref{simSM--+}-\\ref{simMSSM-+-}),\nis given by\n\\bqa\n&& \\delta F_{-{1\\over2}\\mp \\pm}= F^{\\rm Born}_{-{1\\over2}\\mp \\pm}\n\\left ({\\alpha\\over16\\pi s^2_W}\\right )\n\\Bigg [ \\Bigg \\{-2s^2_W( \\overline{\\ln^2t_{\\gamma e}}+\\overline{\\ln^2t_{\\gamma W}})\n+16 s^2_W {t\\over s} \\overline{\\ln s_{\\gamma e}}\\nonumber\\\\\n&&+2s^2_W[-2\\overline{\\ln^2s_{\\gamma e}}+8\\overline{\\ln t_{\\gamma e}}]\n -2s^2_W[2\\overline{\\ln^2s_{\\gamma W}}+\\overline{\\ln^2 t_{W\\gamma}}]\\nonumber\\\\\n&&+2s^2_W[-2\\overline{\\ln^2s_{W\\gamma}}-\\overline{\\ln^2 t_{\\gamma e}}\n-\\overline{\\ln^2 t_{\\gamma W}}+4(1-{t\\over s})\\overline{\\ln t_{\\gamma e}}]\\nonumber\\\\\n&&+2s^2_W \\Big [-2{t\\over u}\\overline{\\ln^2s_{W\\gamma}}-{t\\over u}\n(\\overline{\\ln^2 u_{\\gamma e}}\n+\\overline{\\ln^2 u_{\\gamma W}})+4({t\\over u}-{t\\over s})\\overline{\\ln u_{\\gamma e}}\\Big ]\n\\nonumber\\\\\n&&-2s^2_W \\Big [{s-u\\over u}(\\overline{\\ln^2u_{\\gamma e}}+\\overline{\\ln^2 u_{\\gamma W}})\n+{s-t\\over u}\\overline{\\ln^2 t_{W\\gamma}}+4(2+{t\\over u})\\overline{\\ln t_{\\gamma e}} \\Big ]\n\\Bigg\\}\n\\nonumber \\\\\n&& -\\Big \\{ m_\\gamma \\to m_Z \\Big \\} \\Bigg ]~~, \\label{deltaFTT}\n\\eqa\nwhere (\\ref{Born-asym-TT-HC}) is again used.\n\n\n\n\\vspace*{1cm}\n\\subsection{The $e^-e^+\\to W^-_L W^+_L$ HC amplitudes}\n\nIn the $m_\\gamma=m_Z$ case, using the asymptotic Born LL amplitudes\n(\\ref{Born-asym-LL-HC}), \\\\\n\\noindent\nthe high energy supersimple {\\bf sim SM} results are written as\\\\\n\\bqa\nF_{+{1\\over2}00}&=&F^{\\rm Born}_{+{1\\over2}00}\\Bigg\\{\\left ({\\alpha\\over4\\pi} \\right )\n\\Big \\{{1\\over c^2_W}\n\\left [-\\overline{\\ln^2s_{Ze}}+3\\overline{\\ln s_{Ze}}-1 \\right ]\n+{1\\over 4s^2_Wc^2_W}\\left 
[-\\overline{\\ln^2s_{ZW}}+4\\overline{\\ln s_{ZW}} \\right ]\n\\nonumber\\\\\n&&+{1\\over 2s^2_W}\\left [-{1\\over2}(\\overline{\\ln^2s_{WZ}}+\\overline{\\ln^2s_{WH_{SM}}})\n+2\\overline{\\ln s_{WZ}} +2\\overline{\\ln s_{WH_{SM}}}\\right ]\n\\nonumber\\\\\n&&-{3(m^2_t+m^2_b)\\over 2s^2_Wm^2_W}\\overline{\\ln s_{tb}}\n-{1\\over 4c^2_W} \\Big [4(\\overline{\\ln^2t_{ZW}}-\\overline{\\ln^2u_{ZW}})\n+{2(u-t)\\over u}\\overline{\\ln^2r_{ts}}\n\\nonumber\\\\\n&& -{2(t-u)\\over t}\\overline{\\ln^2r_{us}} \\Big ]\\Big \\}\n+\\Sigma^{\\rm seSM}\\left (+{1\\over2},0,0 \\right )\\Bigg\\}~~,\n\\label{simSM+00} \\\\[0.5cm]\nF_{-{1\\over2}00}&=&F^{\\rm Born}_{-{1\\over2}00}\\Bigg\\{\\left ({\\alpha\\over4\\pi} \\right)\n\\Big \\{{1\\over 4s^2_Wc^2_W}\n\\left[-\\overline{\\ln^2s_{Ze}}+3\\overline{\\ln s_{Ze}}-1 \\right ]\n\\nonumber\\\\\n&&-{(1-2s^2_W)\\over 2s^2_W}[-\\overline{\\ln^2s_{W\\nu}}+3\\overline{\\ln s_{W\\nu}}-1]\n\\nonumber\\\\\n&& +{2c^2_W\\over s^2_W}\\left [{1\\over2}\\overline{\\ln s_{W\\nu}}+{1\\over2}\n+2\\overline{\\ln s_{WW}}\\right ]\n+{1\\over 4s^2_Wc^2_W}\\left [-\\overline{\\ln^2s_{ZW}}+4\\overline{\\ln s_{ZW}} \\right ]\n\\nonumber\\\\\n&&+{(1-2c^2_W)\\over 2s^2_W}\n\\left [-{1\\over2}(\\overline{\\ln^2s_{WZ}}+\\overline{\\ln^2s_{WH_{SM}}})+2\\overline{\\ln s_{WZ}}\n+2\\overline{\\ln s_{WH_{SM}}} \\right ]\n\\nonumber\\\\\n&&+{c^2_W\\over s^2_W}\\left [\\overline{\\ln s_{WZ}}\n+\\overline{\\ln s_{WH_{SM}}}\\right ]-{3(m^2_t+m^2_b)\\over 2s^2_Wm^2_W}\\overline{\\ln s_{tb}}\n\\nonumber\\\\\n&&-{c^2_W\\over 4s^2_W} \\left [4\\overline{\\ln^2t_{W\\nu}}\n+2\\overline{\\ln^2t_{WZ}}+2\\overline{\\ln^2t_{WH_{SM}}}\n-4(1-{t\\over u})\\overline{\\ln^2r_{ts}}\\right ]\n\\nonumber\\\\\n&&-{1\\over 8c^2_Ws^2_W}\\left [4(\\overline{\\ln^2t_{ZW}}-\\overline{\\ln^2u_{ZW}})\n+{2(u-t)\\over u}\\overline{\\ln^2r_{ts}}\n-{2(t-u)\\over t}\\overline{\\ln^2r_{us}} \\right ]\\Big \\}\n\\nonumber\\\\\n&& +\\Sigma^{\\rm seSM}\\left (-{1\\over2},0,0 \\right )\\Bigg\\}~~, 
\\label{simSM-00}\n\\eqa\n\n\\vspace*{0.5cm}\n\n\\noindent\nwhile the supersimple {\\bf sim MSSM} results are\n\\bqa\nF_{+{1\\over2}00}&=&F^{\\rm Born}_{+{1\\over2}00}\n\\Bigg\\{\\left ({\\alpha\\over4\\pi} \\right ) \\Big \\{{1\\over c^2_W}\n \\left [-\\overline{\\ln^2s_{Ze}}+3\\overline{\\ln s_{Ze}}\n-\\Sigma_i|Z^N_{1i}|^2 \\overline{\\ln s_{\\chi^0_i\\tilde{e_R}}} \\right ]\n\\nonumber\\\\\n&&+{1\\over 4s^2_Wc^2_W}\\left [-\\overline{\\ln^2s_{ZW}}+4\\overline{\\ln s_{ZW}} \\right ]\n+{1\\over 2s^2_W}\\left [-{1\\over2} \\overline{\\ln^2s_{WZ}} +2\\overline{\\ln s_{WZ}} \\right ]\n\\nonumber\\\\\n&&-{1\\over 4s^2_W} \\left [\\cos^2(\\beta-\\alpha)\\overline{\\ln^2 s_{WH^0}}+\\sin^2(\\beta-\\alpha)\n\\overline{\\ln^2 s_{Wh^0}} \\right ]\n\\nonumber\\\\\n&&+{1\\over 2s^2_W} \\left [2\\cos^2(\\beta-\\alpha)\\overline{\\ln s_{WH^0}}\n+2\\sin^2(\\beta-\\alpha)\\overline{\\ln s_{Wh^0}} \\right ]\n\\nonumber\\\\\n&&-{1\\over 2s^2_Wc^2_W}\\Sigma_{ij}\n\\Big [\\Big |{1\\over\\sqrt{2}}Z^-_{2i}(Z^N_{1j}s_W+Z^N_{2j}c_W)-Z^-_{1i}Z^N_{3j}c_W \\Big |^2\n\\nonumber\\\\\n&& +\\Big |{1\\over\\sqrt{2}}Z^+_{2i}(Z^N_{1j}s_W+Z^N_{2j}c_W)\n+Z^+_{1i}Z^N_{4j}c_W \\Big |^2 \\Big ] \\overline{\\ln s_{\\chi^+_i\\chi^0_j}}\n\\nonumber\\\\\n&&-{3(m^2_t+m^2_b)\\over 2s^2_Wm^2_W}\\overline{\\ln s_{tb}}\n-{\\cos^2\\beta\\over2c^2_W} \\left [{s\\over u}\\overline{\\ln^2r_{ts}}-\n{s\\over t}\\overline{\\ln^2r_{us}} \\right ]\n\\nonumber\\\\\n&& -{1\\over 4c^2_W}\\left [4(\\overline{\\ln^2t_{ZW}}-\\overline{\\ln^2u_{ZW}})\n+{2(u-t)\\over u}\\overline{\\ln^2r_{ts}}\n-{2(t-u)\\over t}\\overline{\\ln^2r_{us}} \\right ] \\Big \\}\n\\nonumber\\\\\n&& +\\Sigma^{\\rm seMSSM}\\left (+{1\\over2},0,0 \\right )\\Bigg\\} ~~,\n\\label{simMSSM+00} \\\\[0.5cm]\nF_{-{1\\over2}00}&=&F^{\\rm Born}_{-{1\\over2}00}\\Bigg\\{\\left ({\\alpha\\over4\\pi} \\right )\n\\Big \\{{1\\over 4s^2_Wc^2_W} \\Big [-\\overline{\\ln^2s_{Ze}}+3\\overline{\\ln s_{Ze}}\n-\\overline{\\ln^2s_{ZW}}+4\\overline{\\ln 
s_{ZW}}\n\\nonumber\\\\\n&& -\\Sigma_i|Z^N_{1i}s_W+Z^N_{2i}c_W|^2\\overline{\\ln s_{\\chi^0_i\\tilde{e_L}}} \\Big ]\n\\nonumber\\\\\n&&-{(1-2s^2_W)\\over 2s^2_W}\\left [-\\overline{\\ln^2s_{W\\nu}}+3\\overline{\\ln s_{W\\nu}}\n- {1\\over2}\\overline{\\ln^2s_{WZ}}+2\\overline{\\ln s_{WZ}}-\\Sigma_i|Z^+_{1i}|^2\n\\overline{\\ln s_{\\chi^+_i\\tilde{\\nu_L}}}\\right ]\n\\nonumber\\\\\n&&+{c^2_W\\over s^2_W} \\left [{\\overline{\\ln s_{W\\nu}}}\n+4\\overline{\\ln s_{WW}}-\\Sigma_i|Z^+_{1i}|^2\\overline{\\ln s_{\\chi^+_i\\tilde{\\nu_L}}} \\right ]\n\\nonumber\\\\\n&& -{(1-2c^2_W)\\over 4s^2_W} \\left [\\cos^2(\\beta-\\alpha)\\overline{\\ln^2 s_{WH^0}}\n+\\sin^2(\\beta-\\alpha) \\overline{\\ln^2 s_{Wh^0}} \\right ]\n\\nonumber\\\\\n&&+{(1-2c^2_W)\\over s^2_W} \\left [ \\cos^2(\\beta-\\alpha)\\overline{\\ln s_{WH^0}}\n+\\sin^2(\\beta-\\alpha)\\overline{\\ln s_{Wh^0}} \\right ]\n\\nonumber\\\\\n&&+{c^2_W\\over s^2_W} \\left [\\overline{\\ln s_{WZ}}\n+\\cos^2(\\beta-\\alpha) \\overline{\\ln s_{WH^0}}\n+\\sin^2(\\beta-\\alpha) \\overline{\\ln s_{Wh^0}}\\right ]\n\\nonumber\\\\\n&&-{3(m^2_t+m^2_b)\\over 2s^2_Wm^2_W}\\overline{\\ln s_{tb}}-{1\\over 2s^2_Wc^2_W}\\Sigma_{ij}\n\\Big [\\Big |{1\\over\\sqrt{2}}Z^-_{2i}(Z^N_{1j}s_W+Z^N_{2j}c_W)-Z^-_{1i}Z^N_{3j}c_W \\Big |^2\n\\nonumber\\\\\n&&+\\Big |{1\\over\\sqrt{2}}Z^+_{2i}(Z^N_{1j}s_W+Z^N_{2j}c_W)\n+Z^+_{1i}Z^N_{4j}c_W \\Big |^2 \\Big ] \\overline{\\ln s_{\\chi^+_i\\chi^0_j}}\n\\nonumber\\\\\n&&- {c^2_W\\over 4s^2_W} \\left [4\\overline{\\ln^2t_{W\\nu}}\n+2\\overline{\\ln^2t_{WZ}}\n-4\\left (1-{t\\over u} \\right )\\overline{\\ln^2r_{ts}} \\right ]\n\\nonumber\\\\\n&&- {c^2_W\\over 2s^2_W}\n\\left [\\cos^2(\\beta-\\alpha)\\overline{\\ln^2 t_{WH^0}}+\\sin^2(\\beta-\\alpha)\n\\overline{\\ln^2 t_{Wh^0}} \\right ]\n\\nonumber\\\\\n&&-{1\\over 8c^2_Ws^2_W}\\left [4(\\overline{\\ln^2t_{ZW}}-\\overline{\\ln^2u_{ZW}})\n+{2(u-t)\\over u}\\overline{\\ln^2r_{ts}}\n-{2(t-u)\\over t}\\overline{ln^2r_{us}} \\right 
]\n\\nonumber\\\\\n&&-{\\sin^2\\beta\\over2c^2_Ws^2_W} \\left [{s\\over u}\\overline{\\ln^2r_{ts}}-\n{s\\over t}\\overline{\\ln^2r_{us}} \\right ]\n-{c^2_W \\sin^2\\beta \\over s^2_W} {s\\over u}\\overline{\\ln^2r_{ts}} \\Big \\}\n\\nonumber\\\\\n&& +\\Sigma^{\\rm seMSSM}\\left (-{1\\over2},0,0 \\right )\\Bigg\\}~~. \\label{simMSSM-00}\n\\eqa\\\\\n\n In the $m_\\gamma \\neq m_Z$ case,\n the correction to be added to (\\ref{simSM+00}-\\ref{simMSSM-00}) is given by\n \\bqa\n \\delta F_{\\pm {1\\over2} 00}&= &F^{\\rm Born}_{\\pm{1\\over2}00}\\left ({\\alpha\\over4\\pi} \\right )\n\\Bigg [ \\Big \\{ -\\overline{\\ln^2s_{\\gamma e}}+3\\overline{\\ln s_{\\gamma e}}\n -\\overline{\\ln^2s_{\\gamma W}}\n \\nonumber\\\\\n&& +4\\overline{\\ln s_{\\gamma W}}\n-2 \\overline{\\ln^2t_{\\gamma W}} +2 \\overline{\\ln^2u_{\\gamma W}} \\Big \\}\n- \\Big \\{ m_\\gamma \\to m_Z \\Big \\} \\Bigg ]~~ , \\label{deltaF00}\n\\eqa\nwhere (\\ref{Born-asym-LL-HC}) is again used.\n\n\n\n\nThe $\\Sigma^{\\rm se}$-contributions in either (\\ref{simSM+00}-\\ref{simSM-00})\nor (\\ref{simMSSM+00}-\\ref{simMSSM-00}), respectively appearing in SM and MSSM,\n come from the photon and Z self-energy contributions together with\n their renormalization counter terms. Their\n explicit expressions are\n\\bqa\n\\Sigma^{\\rm se}\\left (-{1\\over2},0,0 \\right )&= &\n{-4s^2_Wc^2_W\\over s}\\left \\{\\hat{\\Sigma}_{\\gamma\\gamma}(s)\n+{1-2s^2_W\\over s_Wc_W}\\hat{\\Sigma}_{Z\\gamma}(s)+{(1-2s^2_W)^2\\over4s^2_Wc^2_W}\n\\hat{\\Sigma}_{ZZ}(s) \\right \\} \\nonumber \\\\\n& + & C_P ~~, \\label{Cse-00} \\\\[0.5cm]\n\\Sigma^{\\rm se}\\left (+{1\\over2},0,0 \\right )&=&\n{-2c^2_W\\over s} \\left \\{\\hat{\\Sigma}_{\\gamma\\gamma}(s)\n+{1-4s^2_W\\over 2s_Wc_W}\\hat{\\Sigma}_{Z\\gamma}(s)-{(1-2s^2_W)\\over2c^2_W}\n\\hat{\\Sigma}_{ZZ}(s)\\right \\}~~, \\label{Cse+00}\n\\eqa\nwhere the renormalized gauge self energies $\\hat{\\Sigma}$ can be found in\n\\cite{ttbar}, together with their supersimple approximations. 
The last term in\n(\\ref{Cse-00}), given by\n\\bq\nC_P=-{\\alpha c^2_W\\over\\pi s^2_W}\\overline{\\ln s_{WW}}~~, \\label{pinch-part}\n\\eq\ncomes from the pinch part that had been previously\nremoved from the left and right triangular contributions, and is here restored\n\\cite{pinch,pinch1}.\n\nNote that no such $\\Sigma^{\\rm se}$-contributions exist for the transverse amplitudes\n in (\\ref{simSM--+}-\\ref{simMSSM-+-}).\n\nAs it should, the high energy ln and ln-squared parts of all expressions\n(\\ref{simSM--+}- \\ref{simMSSM-00}), agree with the usual Sudakov\nrules and the renormalization group results\n\\bqa\n&& A^{RG}=-{ln\\over 4\\pi^2} (g^4\\beta{dA^{Born}\\over dg^2}\n+g^{-4}\\beta'{dA^{Born}\\over dg^{'2}}) ~~, \\nonumber \\\\\n&& \\beta^{SM}={43\\over24}-{N_f\\over3}~~,~~\\beta^{SUSY}=-{13\\over24}-{N_f\\over6}\n~~,~~N_f=3 ~~, \\nonumber \\\\\n&& \\beta^{'SM}=-{1\\over24}-{5N_f\\over9}~~,~~\\beta^{'SUSY}=-~{5\\over24}-{5N_f\\over18}\n~~, \\label{renorm-group}\n\\eqa\ndiscussed in \\cite{MSSMrules1,MSSMrules2,MSSMrules3,MSSMrules4}.\n\n\n\n\\vspace*{1cm}\n\n\\renewcommand{\\thesection}{B}\n\\renewcommand{\\theequation}{B.\\arabic{equation}}\n\\setcounter{equation}{0}\n\n\n\\section{ Appendix: AGC and $Z'$ amplitudes}\n\n\n\\subsection{The AGC amplitudes}\n\n\nAs an Anomalous Gauge Coupling (AGC) model induced by\ns-channel $\\gamma $ and $Z$ exchanges with\n 5 anomalous couplings $\\delta_Z$, $x_{\\gamma,Z}$, $y_{\\gamma,Z}$, we consider\n the one presented in \\cite{heliWW} and Table V of \\cite{Andreev}.\nIn terms of these couplings and the SM ones in (\\ref{e-couplings}),\nthe induced AGC contributions to the TT, TL, LT and LL amplitudes,\nto lowest order, are\\footnote{Compare with\n(\\ref{FBorn-TT}, \\ref{FBorn-TL}, \\ref{FBorn-LT},\\ref{FBorn-LL}).}\n\\bqa\nF^{\\rm AGC}_{\\lambda\\mu\\mu}(\\theta)&=& {(2\\lambda)se^2\\over8}(1+\\mu\\mu')\\beta_W\\sin\\theta\n\\Bigg \\{ 
{\\delta_Z(a_{eL}\\delta_{\\lambda,-}+a_{eR}\\delta_{\\lambda,+})\\over\ns^2_W(s-m^2_Z)}\n\\nonumber\\\\\n&&-\\left [{y_{\\gamma}\\over s}-{y_{Z}(a_{eL}\\delta_{\\lambda,-}+a_{eR}\\delta_{\\lambda,+})\n\\over 2s^2_W(s-m^2_Z)} \\right ]{s\\over m^2_W} \\Bigg \\} ~~, \\label{FAGC-TT} \\\\[0.5cm]\nF^{\\rm AGC}_{\\lambda \\mu 0}(\\theta)&=&\n- {(2\\lambda)s\\beta_W\\sqrt{s}e^2\\over4\\sqrt{2}m_W}(2\\lambda+\\mu\\cos\\theta)\n\\Bigg \\{ {\\delta_Z(a_{eL}\\delta_{\\lambda,-}+a_{eR}\\delta_{\\lambda,+})\\over\ns^2_W(s-m^2_Z)}\n\\nonumber\\\\\n&&-\\left [{(x_{\\gamma}+y_{\\gamma})\\over s}-{(x_{Z}+y_{Z})\n(a_{eL}\\delta_{\\lambda,-}+a_{eR}\\delta_{\\lambda,+})\\over\n2s^2_W(s-m^2_Z)}\\right ] \\Bigg \\} ~~, \\label{FAGC-TL}\\\\[0.5cm]\nF^{\\rm AGC}_{\\lambda 0\\mu'}(\\theta)&=&\n- {(2\\lambda)s\\beta_W\\sqrt{s}e^2\\over4\\sqrt{2}m_W}(2\\lambda-\\mu'\\cos\\theta)\n\\Bigg \\{ {\\delta_Z(a_{eL}\\delta_{\\lambda,-}+a_{eR}\\delta_{\\lambda,+})\\over\ns^2_W(s-m^2_Z)}\n\\nonumber\\\\\n&&-\\left [{(x_{\\gamma}+y_{\\gamma})\\over s}-{(x_{Z}+y_{Z})\n(a_{eL}\\delta_{\\lambda,-}+a_{eR}\\delta_{\\lambda,+})\\over\n2s^2_W(s-m^2_Z)} \\right ] \\Bigg \\} ~~, \\label{FAGC-LT}\\\\[0.5cm]\nF^{\\rm AHC}_{\\lambda 00}(\\theta)&=& {(2\\lambda)s^2e^2\\over4m^2_W}\\beta_W\\sin\\theta\n\\Bigg \\{ {\\delta_Z(a_{eL}\\delta_{\\lambda,-}+a_{eR}\\delta_{\\lambda,+})\\over\ns^2_W(s-m^2_Z)}\\left (1+{s\\over2m^2_W} \\right )\n\\nonumber\\\\\n&&-\\left [{x_{\\gamma}\\over s}-x_{Z}{a_{eL}\\delta_{\\lambda,-}+a_{eR}\\delta_{\\lambda,+}\\over\n2s^2_W(s-m^2_Z)}\\right ]{s\\over m^2_W} \\Bigg \\} ~~. 
\\label{FAGC-LL}\n\\eqa\nNote that $\\delta_Z$ contributes to all amplitudes, except the two\nTT HC ones (because of the vanishing of the overall coefficient $(1+\\mu\\mu')$ in\n (\\ref{FAGC-TT}) in such a case);\n $x_{\\gamma,Z}$ contribute to all TL, LT and LL amplitudes; while\n $y_{\\gamma,Z}$ contribute only to the HV TT, TL and LT amplitudes.\n\nIn the figures, and under the name AGC1, we present illustrations\nfor the purely arbitrary choice\n\\bq\n{\\rm AGC1}~~~~\\Rightarrow ~~~ \\delta_Z=x_{\\gamma}=x_Z=0.003 ~~~,~~~ y_\\gamma=y_Z=0~~~.\n\\label{AGC1-choice}\n\\eq\nFor AGC1, the HV TT anomalous amplitudes behave like constants at high energy;\nthe HC LL ones explode like $s\/ m^2_W$; while the LT ones increase like\n$\\sqrt{s}\/ m^2_W$.\\\\\n\nIn the figures we also present results for an alternative AGC2 model in which the\n$s\/ m^2_W$ behavior of the HC LL anomalous amplitudes is canceled by a $t$-channel\ncontribution; much like it is done in the Born SM case. So we construct an\n ad-hoc model with an anomalous contribution in the\nt-channel which would lead to a similar cancelation.\nA simple phenomenological solution is obtained by keeping only $x_{\\gamma}$ and $x_Z$\n(called now $x'_{\\gamma}$ and $x'_Z$) in (\\ref{FAGC-TT}-\\ref{FAGC-LL}),\nand adding t-channel contributions induced by\n left- and right-handed $We\\nu$ couplings obtained from the\n initial SM one $g_L=e\/( \\sqrt{2} s_W)$, through\n\\bqa\ng^2_L & \\Rightarrow & g^2_L \\left (1+2s^2_W \\left\n[x'_{\\gamma}-{2s^2_W-1\\over2s_Wc_W}x'_Z \\right]\n\\right ) ~~~,\n\\nonumber \\\\\ng^2_R & \\Rightarrow & g^2_L (2s^2_W)\\left [x'_{\\gamma}-{s_W\\over c_W}x'_Z \\right ]\n~~. \\label{AGC2-gLgR}\n\\eqa\n This does not necessarily represent\ntrue anomalous $We\\nu$ couplings; it just represents the new\ncontribution necessary at high energy.\nFor example it may come from additional neutral fermion exchanges or\nfrom any sort of effective interaction. 
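As a rough numerical check, the coupling replacements in (AGC2-gLgR) can be evaluated for the AGC2 parameter choice $x'_\gamma = x'_Z = 0.03$; the weak-mixing-angle value used below is an assumption for illustration, not a value taken from the text:

```python
import math

# Hedged numerical illustration of the AGC2 coupling replacements
# (AGC2-gLgR).  The weak mixing angle below is an assumed input
# (s_W^2 ~ 0.2312); x'_gamma = x'_Z = 0.03 is the AGC2 choice.
sW2 = 0.2312                          # assumed sin^2(theta_W)
sW, cW = math.sqrt(sW2), math.sqrt(1.0 - sW2)
x_gamma = x_Z = 0.03                  # AGC2 parameter choice

# Multiplicative shift of the left-handed coupling squared:
# g_L^2 -> g_L^2 (1 + 2 s_W^2 [x'_gamma - (2 s_W^2 - 1)/(2 s_W c_W) x'_Z])
left_factor = 1.0 + 2.0 * sW2 * (
    x_gamma - (2.0 * sW2 - 1.0) / (2.0 * sW * cW) * x_Z)

# Induced right-handed coupling squared, in units of g_L^2:
# g_R^2 -> g_L^2 (2 s_W^2) [x'_gamma - (s_W / c_W) x'_Z]
right_over_gL2 = 2.0 * sW2 * (x_gamma - (sW / cW) * x_Z)

print(left_factor, right_over_gL2)
```

With these inputs the left-handed coupling squared is shifted by roughly two percent, while the induced right-handed piece is an order of magnitude smaller, consistent with a small anomalous admixture.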
In the illustrations under the AGC2 name, we use\n\\bq\n{\\rm AGC2}~~~~\\Rightarrow ~~~ x'_{\\gamma}=x'_Z=0.03 ~~~;\\label{AGC2-choice}\n\\eq\nthese values are larger than those in (\\ref{AGC1-choice}),\nbecause of the\nglobal suppression effect following from the high energy\ncancelation between t- and s-channel terms.\n\nIf one does not want to introduce an anomalous right-handed contribution\none can just keep a non vanishing $x'_{\\gamma}$ only,\nand add the anomalous left-handed term\n\\bq\ng^2_L ~ \\Rightarrow ~ g^2_L (1+2s^2_Wx'_{\\gamma}) ~~, \\label{AGC2-gL}\n\\eq\n\nIn any case, investigating the origin of such anomalous terms\nis beyond the scope of the present work.\\\\\n\n\n\\subsection{ The $Z'$ New Physics model}\n\nThe general form of helicity amplitudes with a $Z'$ is written in Table VI of \\cite{Andreev}.\nThe $Z'$ contributions are very similar to the SM Z ones, with specific $Z'$ mass,\nwidth and couplings.\n\nIn general, with arbitrary $Z'$ couplings, there is an explosion of the LL, LT and\nTL amplitudes at high energies.\nBut, it is again easy to get high energy cancelation in an ad-hoc manner\nby just replacing the usual $Z$ contribution\ninvolving products of couplings like $g_{Zee}g_{ZWW}$, by $Z+Z'$ exchanges using respectively\n$g_{Zee}g_{ZWW}\\cos^2\\Phi$ for Z and\n$g_{Zee}g_{ZWW}\\sin^2\\Phi$ for $Z'$ (with a small value of $\\Phi$).\nThis way, the s-channel high energy contribution will be\nsimilar to the SM $Z$ one, and will cancel with the SM t-channel contribution.\nOnly around the $Z'$ peak, will the $Z'$ contribution be observable.\n\nFor the illustrations presented in the figures under the name $Z'$,\nwe use $\\sin\\Phi=0.05$ and $m_{Z'}=3$ TeV.\n\n\n\n\\newpage\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzbkav b/data_all_eng_slimpj/shuffled/split2/finalzzbkav new file mode 100644 index 
0000000000000000000000000000000000000000..96e08d5f0cb11c907d83a75af076b687466840ee --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzbkav @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\nCombinatorial optimisation (CO) is the task of searching for a solution from a finite collection of possible candidate solutions that maximises the objective function. Put differently, the task is to reduce the large finite collection of possible solutions to a single (or small number of) optimal solution(s). In some cases, CO problems require methods that either have a bias to the problem structure or can learn the problem structure during the optimisation process such that it can be exploited. This hidden problem structure is caused by variable correlations and variable decompositions (building-blocks\/modules \\cite{goldberg1989genetic,holland1992adaptation}) and is, generally, unknown. The hidden structure can contain a multitude of characteristics such as near-separable decomposition, hierarchy, overlapping linkage and, as this paper shows, deep structure.\n\nDeep learning (DL) is tasked with learning high-order representations (features) of a data-set that construct an output to satisfy the learning objective. The higher-order features are constructed from a sub-set of units from the layer below. DL performs this recursively, reducing the dimensionality of the visible space and generating an organised hierarchical structure. Deep Neural Networks (DNNs) are capable of learning complex high-order features from unlabelled data. Evolutionary search has been used in conjunction with DNNs, namely to decide on network topological features (number of hidden layers, nodes per a layer etc) \\cite{stanley2002evolving} and for evolving weights of a DNN \\cite{such2017deep} (neuro-evolution). 
However, DO is different; whereas previous methods use optimisers to improve the performance of a learning algorithm, DO is the reverse - it uses learning to improve the performance of an optimisation algorithm (`learning how to optimise, not optimising how to learn').\n\nIn CO it is a common intuition that solutions to small sub-problems can be combined together to solve larger sub-problems, and that this can proceed through multiple levels until the whole problem is solved. However, in practice, this is difficult to achieve (without expert domain knowledge) because the problem structure necessary for such problem decomposition is generally unknown. In learning, it is a common intuition that concepts can be learned by combining low-level features to discover higher-level features, and that this can proceed through multiple levels until the high-level concepts are found. DO brings these two together so that multi-level learning can discover multi-level problem structure automatically.\n\nModel-Building Optimisation Algorithms (MBOAs), also known as Estimation of Distribution Algorithm's (EDAs) \\cite{hauschild2011introduction} are black-box solvers, inspired by biological evolutionary processes, that solve CO problems by using machine learning techniques to learn and exploit the hidden problem structure. MBOAs work by learning correlations present in a sample of fit candidate solutions and construct a model that captures the multi-variate correlations which, if learnt successfully, represents the hidden problem structure. They then proceed to generate new candidate solutions by exploiting this learnt information enabling them to find solutions that are otherwise pathologically difficult to find. It is the ability of MBOAs to exploit the hidden structure that has brought their success to solving optimisation problems \\cite{aickelin2007estimation,santana2008protein,pelikan2005hierarchical,goldman2014parameter,thierens2010linkage}. 
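To make the model-building loop concrete, the following is a minimal univariate EDA (UMDA-style) sketch on a OneMax stand-in objective. It is deliberately far simpler than the multivariate models cited above and is only meant to illustrate the sample, select, re-estimate cycle; the problem, population sizes and seed are illustrative choices.

```python
import random

# Minimal univariate EDA (UMDA-style) sketch of the MBOA loop:
# sample solutions from a probabilistic model, select the fittest,
# and re-estimate the model from the selected solutions.
def umda_onemax(n=20, pop=60, elite=20, gens=40, seed=0):
    rng = random.Random(seed)
    p = [0.5] * n                                  # independent bit marginals
    for _ in range(gens):
        pop_x = [[1 if rng.random() < p[i] else 0 for i in range(n)]
                 for _ in range(pop)]
        pop_x.sort(key=sum, reverse=True)          # OneMax fitness: sum of bits
        selected = pop_x[:elite]
        # Re-estimate each marginal from the selected solutions.
        p = [sum(x[i] for x in selected) / elite for i in range(n)]
    return max(sum(x) for x in pop_x)

best = umda_onemax()
print(best)  # expected to reach, or closely approach, the optimum n = 20
```

A univariate model like this cannot capture variable linkage, which is exactly why the richer models discussed next are used in practice.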
Models used in MBOAs include Bayesian Networks \\cite{pelikan1999boa,pelikan2005hierarchical}, Dependency Matrices \\cite{hsu2015optimization}, and Linkage Trees \\cite{thierens2010linkage,goldman2014parameter}.\n\nThe model is used for two tasks in MBOAs: 1) To learn, unsupervised, correlations between variables as to form higher-orders of organisation reducing the dimensionality of the solution space from combinations of individual variables to combinations of module solutions (features). 2) To generate or modify new candidate solutions in a way that exploits this learnt information. Thus far, the models used in MBOAs are simplistic in comparison to the state of the art models used by the machine learning community. We believe DNNs contain all the necessary characteristics required for solving CO problems and fit naturally as the model role in MBOAs.\n\nHow the learnt information is exploited has a profound effect on the algorithm's performance. One approach is to directly sample from the model, i.e. generating complete solutions before applying a selection pressure that filters out which complete solutions are better than others \\cite{pelikan1999boa}. This effectively conserves correlated variables during future search. A second approach is to use model-informed variation where selection is applied directly after a partial change is made to the solution. This results in an adaptation of the variation operator from substituting single variables to substituting module solutions \\cite{watson2011transformations,cox2014solving,mills2014transforming,thierens2010linkage}. DO utilises a model-informed approach as this has been shown to solve optimisation problems that algorithms generating complete solutions from the model cannot \\cite{caldwell2017get}.\n\nThe application of neural networks to solving optimisation problems has an esteemed history \\cite{hopfield1985neural}. 
Learning heuristics to generalise over a set of CO instances \\cite{khalil2017learning,bello2016neural,zhang2000solving} and adapting the learning function to bias future search \\cite{hopfield1985neural,boyan2000learning} are popular approaches. DO is different as it uses a DNN to recursively adapt the variation applied. The use of an autoencoder in MBOAs has been attempted \\cite{probst2015denoising,churchill2014denoising}, however they limit the autoencoder to a single hidden layer and use the model to generate complete candidate solutions rather than using model-informed variation. DO is the first algorithm to use a deep multi-layered feed-forward neural network to solve CO problems within the framework of MBOAs.\n\nThe focus of this paper is to introduce the concept of DO to show how DNNs can be extended to the field of MBOAs. By making this connection we open the opportunity to use the advanced DL tools that have a well-developed conceptual understanding but are not currently applicable to CO and MBOAs. We use two theoretical MAXSAT problems to demonstrate the performance of DO. The Hierarchical Transformation Optimisation Problem (HTOP) contains controllable deep structure that provides clear evaluation of DO during the optimisation process. Additionally, HTOP provides clear evidence that DO is performing as theorised - specifically with regards to the rescaling of the variation operator and the essential requirement for using a layerwise technique. The Parity Modular Constraint optimisation problem (MC\\textsubscript{parity}) contains structure with greater than pairwise dependencies and is a simplistic example of how DO can solve problems that current state of the art MBOAs cannot. Finally, DO is used to solve benchmark instances of the Travelling Salesman Problem (TSP) to demonstrate the applicability to CO problems containing characteristics such as non-binary representations and in-feasible solutions. 
Comparison is made with three heuristic methods in which DOs performance is better.\n\n\\section{The Deep Optimisation Algorithm}\n\n\\begin{algorithm}\n\\caption{Deep Optimisation}\\label{Alg:DO}\n\\textbf{Initialise Model}\\;\n\\While{Optimising Model}{\n \\textbf{Reset Solution}\\;\n \\While{Optimising Solution}{\n Perform model-informed variation to solution\\;\n Calculate fitness change to solution\\;\n \\eIf{Deleterious fitness change}{\n Reject change to solution\\;\n }{\n Keep change to solution\\;\n }\n }\nUpdate the model using optimised solution as a training example.\n}\n\\end{algorithm}\n\nThe Deep Optimisation algorithm is presented in Algorithm \\ref{Alg:DO}. The algorithm consists of two optimisation cycles, a solution optimisation cycle and model optimisation cycle, inter-locked in a two-way relationship. The relationship between these two cycles can be understood as a meta-heuristic method where the solution optimiser (heuristic method) is influenced by the model (external control). The solution optimisation cycle is an iterative procedure that produces a locally optimal solution using model-informed variation. The model optimisation cycle is an iterative procedure that updates the connection weights of a neural network to satisfy the learning objective.\n\nDO uses the deep learning Autoencoder model (AE) due to its ability to learn higher-order features from unlabelled data. The encoder and decoder network are updated during training. Only the decoder network is used for generating reconstructions from the hidden layer. DO uses an online learning approach where the learning rate controls the ratio between the exploration and exploitation of the search space.\n\n\\subsection{Model Optimisation Cycle}\n\nThe AE uses an encoder ($W$) and decoder network ($W'$). 
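A minimal sketch of the tied-weight encoder/decoder pair described in this subsection; the activation $f$ is assumed here to be tanh (the $\pm1$ solution encoding suggests a symmetric activation), and the weights are untrained random placeholders rather than a fitted model:

```python
import numpy as np

# Tied-weight autoencoder forward pass, one hidden layer.
# Assumptions: f = tanh; random (untrained) weights; 16 visible
# and 8 hidden units chosen for illustration only.
rng = np.random.default_rng(0)
n_vis, n_hid = 16, 8                    # fewer hidden than visible units
W1 = rng.normal(0, 0.1, size=(n_hid, n_vis))
b1 = np.zeros(n_hid)                    # encoder bias
br1 = np.zeros(n_vis)                   # decoder (reconstruction) bias

def encode(X):
    return np.tanh(W1 @ X + b1)         # H1(X) = f(W1 X + b1)

def decode(H1):
    return np.tanh(W1.T @ H1 + br1)     # X_r(H1) = f(W1' H1 + b_r1)

X = rng.choice([-1.0, 1.0], size=n_vis)
Xr = decode(encode(X))
mse = float(np.mean((Xr - X) ** 2))     # training minimises this error
print(Xr.shape, mse)
```

The narrower hidden layer is what supplies the regularisation pressure towards a compressed representation mentioned below.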
The encoder network performs a transformation from the visible units, $X$, to hidden units $H_1$ using a non-linear activation function $f$ on the sum of the weighted inputs: $H_1(X)=f(W_1X + b_1)$, where $W$ and $b$ are the connection weights and bias respectively. The decoder network generates a reconstruction of the visible units, $X_r$, from the hidden units: $X_r(H_1)=f(W_1'H_1 + b_{r1})$ where $W'$ is the transpose of the encoder weights. The backpropagation algorithm is used to train the network using a mean squared error of the reconstruction $X_r$ and input $X$.\n\n\nA deep AE consist of multiple hidden layers with the encoder performing a transformation from $H_{n-1}$ to $H_n$ defined by $H_n(H_{n-1})=f(W_nH_{n-1} + b_n)$ and the decoder reconstructing $H_{r(n-1)}$ defined by $H_{r(n-1)}(H_{r(n)})=f(W_n'H_{r(n)}+b_{rn})$. DO utilises a layer-wise approach for both training and generating samples. Initially the AE has a single hidden layer and is trained on solutions developed using the naive local search operator. The network then transitions such that the AE consist of two hidden layers and variation is informed by the first layer whilst training updates all connection weights. By constructing a network with less hidden units than visible units creates a regularisation pressure to learn a compression of the training data. At each hidden level, an optimised model will contain a meaningful compression of the lower level relating to higher-orders of organisation. Our experiments show the significance of using a layer-wise approach in comparison to an end-to-end network approach. We employ the notation DO\\textsuperscript{n} to differentiate between the number of hidden layers used in the AE.\n\n\\subsection{Solution Optimisation Cycle}\n\nThe solution optimisation cycle produces locally optimal solutions as guided by model-informed variation. Specifically, a candidate solution $X$ is initialised from a random uniform distribution. 
A random variation is applied to the candidate solution, forming $X'$, if the variation has caused a beneficial fitness change, or no change to the fitness, the variation is kept, $X=X'$, otherwise the variation is rejected. This procedure is repeated until no further improvements. By repeatedly resetting a candidate solution ensures the training data has sufficiently good coverage of the solution space.\n\nA model-informed variation is generated by performing a bit-substitution to the hidden layer activations at layer $n$, forming $H_n'$. $H_n'$ is then decoded to the solution level using the trained decoder network forming $X'$. The solution optimisation cycle continues as before where the fitness of $X'$ is determined and if there is a fitness benefit, or no change to the fitness, when compared to $X$, $H_n = H_n'$ and $X = X'$, otherwise the change is rejected. A decoded variation made to the hidden layer causes a change to the solution level that exploits the learnt problem structure. Concretely, module-substitutions are constructed by performing bit-substitutions to the hidden layer and decoding to the solution level. At a solution reset, it is important that $H_n$ is an accurate mapping for the current solution state. Therefore the hidden layer $H_n$ is reset using a random distribution $U[-1,1]$. It is then decoded to the solution level $X$ to construct an initial candidate solution. The output of the autoencoder is continuous between the activation values and therefore requires interpreting to a solution. For MAXSAT, DO uses a deterministic interpretation. Specifically if $X'[n] > 0$ then $X'[n] = 1$ else $X'[n] = -1$ where $n$ is the variable index. DO allows neutral changes to the solution. 
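The solution optimisation cycle with model-informed variation can be sketched as follows. The decoder weights and the fitness function are illustrative placeholders (not a trained model); the acceptance rule, the deterministic $\pm1$ interpretation, and the acceptance of neutral changes follow the description above.

```python
import numpy as np

# Sketch of the solution optimisation cycle: substitute one bit of the
# hidden state, decode it, apply the deterministic +/-1 interpretation,
# and keep the change only if fitness does not decrease (neutral moves
# are accepted).  Decoder weights and fitness are stand-ins.
rng = np.random.default_rng(1)
n_vis, n_hid = 16, 8
Wd = rng.normal(0, 0.5, size=(n_vis, n_hid))    # placeholder decoder weights

def interpret(x_cont):
    return np.where(x_cont > 0, 1, -1)          # X'[n] > 0 -> 1 else -1

def fitness(x):
    return int(np.sum(x))                        # toy objective (maximise)

H = rng.uniform(-1, 1, size=n_hid)               # hidden-state reset U[-1,1]
X = interpret(np.tanh(Wd @ H))                   # decoded initial solution
f0 = fitness(X)
for _ in range(200):
    Hp = H.copy()
    i = rng.integers(n_hid)
    Hp[i] = -Hp[i]                               # bit-substitution in H
    Xp = interpret(np.tanh(Wd @ Hp))             # decoded variation
    if fitness(Xp) >= fitness(X):                # beneficial or neutral
        H, X = Hp, Xp
print(f0, fitness(X))
```

Because a single flip in $H$ decodes to a coordinated change over several solution variables, this loop performs module-substitutions rather than single-bit moves.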
This allows for some degree of drift in the latent space to allow for small effects caused to the decoded output to accumulate and make a meaningful variation to the solution.\n\nAs DO uses an unsupervised learning algorithm, for it to learn a meaningful representation the training data must contain information about the hidden problem structure in its natural form. This structure becomes apparent when applying a hill-climbing algorithm to a solution because it ensures that it contains combinations of variables that provide meaningful fitness contributions. Initially, the AE model will have no meaningful knowledge of the problem structure and therefore a model-informed variation is equivalent to a naive search operator. After a transition, DO does not require knowledge of which operator has been initially used, it simply learns and applies its own learnt higher-order variation.\n\n\\subsection{Transition: Searching in Combinations of Features}\\label{searchfeature}\nAfter the network has learnt a good meaningful representation at hidden layer $n$ the following changes occur to DO, which we term a transition.\n\n\\begin{enumerate}\n \\item An additional hidden layer $H_{n+1}$ is added to the AE. Previous learnt weights are retained and training updates all weight ($W_1$ to $W_{n+1}$).\n \\item The hidden layer used for generating model-informed variation is changed from $H_{n-1}$ to $H_n$. Initialisation of a candidate solution is generated from $H_n$.\n\\end{enumerate}\n\nItem 1 is analogous to the approach introduced by Hinton and Salakhutdinov \\cite{hinton2006reducing} for training DNNs. The layer-wise procedure is important for learning a tractable representation at each hidden layer. The multi-layer network is trained on solutions developed using variation decoded from the layer below the current network depth. This is a significant requirement as DO learns from its own dynamics \\cite{watson2011optimization}. 
There may be many possible mappings in which the problem structure can be represented. Thus deeper layers are not only a representation of higher-order features present in the problem, but are reliant on how the higher-order features have been learnt and exploited, which, in-turn, is determined by the shallower layers. Therefore if the shallower layers do not contain a meaningful representation, then attempting to train or perform variation generated from deeper layers will be ineffective as we prove in our experiments. Item 2 is a layer-wise procedure for generating candidate solutions. The method of generating variation to the solution is the same at any hidden layer. Simply, only the hidden layer where bit-substitutions are performed and decoded from has been changed from $H_{n-1}$ to $H_n$ (to a deeper hidden layer).\n\nThis transition procedure is performed recursively until the maximum depth of the autoencoder is reached at which Item 1 is not performed. Like the learning rate, the timing of transition impacts the balance between exploration and exploitation of the search space. Once transitioned not only does the model provide information on how to adapt the applied variation but the solution optimisation cycle provides feedback to the model optimiser. Specifically, correctly learnt features will cause beneficial changes to a solution during optimisation, and therefore will be repeatedly accepted during the solution optimisation cycle and thus repeatedly presented to the model during training, reinforcing the learnt correlations. 
In contrast, incorrectly learnt features will cause deleterious fitness changes and therefore will not be accepted and thus not present in the training data.\n\n\\section{Performance Analysis of Deep Optimisation}\nTwo theoretical CO problems within the MAXSAT class are specifically designed to demonstrate how DO works and show that DO can solve problems containing high-order dependencies that state of the art MBOA's cannot.\n\n\\subsection{How Deep Optimisation Works}\\label{sec:howDO}\n\n\\begin{table}[]\n \\centering\n\\resizebox{0.6\\linewidth}{!}{%\n\n\\begin{tabular}{lll}\n \\toprule\n \\multicolumn{3}{c}{\\textbf{HTOP}} \\\\\n \\midrule\n $a$ $b$ $c$ $d$ & $t(a,b,c,d)$ & $f(a,b,c,d)$\\\\\n \\midrule\n 1 0 0 0 & 0 0 & 1\\\\\n 0 1 0 0 & 0 1 & 1\\\\\n 0 0 1 0 & 1 0 & 1\\\\\n 0 0 0 1 & 1 1 & 1\\\\\n Otherwise & - & 0\\\\\n \\bottomrule\n\n\\end{tabular}%\n}\n\\caption{HTOP transformation $t$, and fitness function $f$.}\n\\label{table:HTOP}\n\\end{table}\n\nThe Hierarchical Transformation Optimisation Problem (HTOP) is formed within the MAXSAT class where there objective is to find a solution that satisfies the maximum number of constraints imposed on the problem. HTOP is a consistent constraint problem and has four global optima. HTOP is specifically designed to provide clarity on how DO works with specific regards to the process of rescaling the variation operator to higher-order features and the necessity for a DNN to use a layerwise procedure. HTOP is inspired by Watson's Hierarchical If and only If (HIFF) problem \\cite{watson1998modeling} and uses the same recursive construction with an adaptation to cause deep structure. The generalised hierarchical construction is summarised here. The solution state to the problem is\n\\begin{math}\n x = \\{x_1,\\ldots,x_i,\\ldots,x_N\\},\n\\end{math}\nwhere $x_i \\in \\{0,1\\}$ and $N$ is the size of the problem. 
$p$ represents the number of levels in the hierarchy and $N_p$ represents the number of building-blocks of length $L_p$ at each hierarchical level. Each block, containing $k$ variables, is converted into a low-dimensional representation of length $L_p\/R$ by a transformation function $t$, where $R$ is the ratio of reduced dimensionality, creating a new higher-order string $V^{p+1}=\\{V^{p+1}_1,\\dots,V^{p+1}_{N_pL_p\/R}\\}$. In what follows $k=4$ and $R=2$ using the transformation function detailed in Table \\ref{table:HTOP}, where a solution to a module is a one-hot bit string.\n\nThe transformation function is derived from a machine learning benchmark, the 4-2-4 encoder problem \\cite{ackley1985learning}. Unlike for HIFF, learning the structure is not trivial and cannot be well approximated by pairwise associations. The transformation is applied recursively, constructing deep constraints where at each level of the hierarchy a one-hot coding must be learnt. The null variable is used to ensure that a fitness benefit at the higher level can only be achieved by satisfying all lower-level transformations beneath it.\n\nHTOP is pathologically difficult for a bit-substitution hill climber. Satisfying a depth 2 constraint requires coordination of module transformations such that the transformed representations of two modules below construct a one-hot solution, e.g. Module 1 transformation = 01 ($X=0100$) and Module 2 transformation = 00 ($X=1000$). A bit-substitution operation is unable to change a module solution without causing a deleterious fitness change. Therefore a higher-order variation is required that performs substitutions of module solutions. 
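To make the recursive construction concrete, the following is a minimal Python sketch of an HTOP-style evaluation based on the description above. The per-level fitness weighting (`level_weight`, doubling HIFF-style) is our assumption; the published instances may score levels differently.

```python
# Transformation t from Table 1: one-hot blocks of 4 map to 2-bit codes.
T = {(1, 0, 0, 0): (0, 0),
     (0, 1, 0, 0): (0, 1),
     (0, 0, 1, 0): (1, 0),
     (0, 0, 0, 1): (1, 1)}

def htop_fitness(x):
    """Recursively score blocks of 4: a one-hot block earns fitness and is
    replaced by its 2-bit code; any other block becomes null (None, None),
    so no fitness can be earned above an unsatisfied block."""
    x, fitness, level_weight = tuple(x), 0, 1
    while len(x) >= 4:
        nxt = []
        for i in range(0, len(x), 4):
            block = x[i:i + 4]
            if block in T:                # constraint satisfied at this level
                fitness += level_weight
                nxt.extend(T[block])
            else:                         # null propagates upwards
                nxt.extend((None, None))
        x = tuple(nxt)
        level_weight *= 2                 # assumed HIFF-style level weighting
    return fitness
```

For example, the string `0010 1000` transforms to `10 00`, which is itself one-hot, so both levels contribute fitness.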
This recursive and hierarchical construction requires the solver to successively rescale the search operator to higher-orders of organisation.\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=0.32\\linewidth]{DO0Fit.pdf}\n\t\\includegraphics[width=0.32\\linewidth]{DO1Fit.pdf}\n\t\\includegraphics[width=0.32\\linewidth]{DO3Fit.pdf}\n\t\\caption{A deep representation allows for learning and exploiting deep structure to find a global optimum.}\n\t\\label{Fig:FitnessHTOP}\n\\end{figure}\n\n\\begin{figure}\n\t\\centering\n\t\\includegraphics[width=\\linewidth]{Variation.pdf}\n\t\\caption{Example solution trajectories during solution optimisation using DO\\textsuperscript{3} before transition (left), after 1\\textsuperscript{st} transition (middle) and after 2\\textsuperscript{nd} transition (right). Variation is adapted from bit-substitutions to module solutions to combinations of module solutions as highlighted by circles. Module boundaries are represented by vertical dashed lines.}\n\t\\label{sFig:VariationHTOP}\n\\end{figure}\n\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.32\\linewidth]{bnw_32_fit.pdf}\n\\includegraphics[width=0.32\\linewidth]{bnw_64_fit.pdf}\n\\includegraphics[width=0.32\\linewidth]{bnw_128_fit.pdf}\n\\caption{Deep representations are consistently better performing than shallow ones, and incremental addition of layers is better than end-to-end learning.}\n\\label{Fig:shalvdeep}\n\\end{figure}\nA HTOP instance of size 32 (HTOP\\textsubscript{32}) is used to demonstrate how DO successfully learns, represents and exploits the deep structure. Further, we show the performance difference between DO\\textsuperscript{0} (a bit-substitution restart hill-climbing algorithm), DO\\textsuperscript{1} (a restart hill-climber using a shallow network) and DO\\textsuperscript{3} (a restart hill-climber using a deep network). The algorithms use 320 steps to optimise a solution and produce a total of 2000 solutions. 
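For reference, the DO\textsuperscript{0} baseline can be sketched as a plain bit-substitution restart hill climber (an illustrative sketch; the acceptance rule and parameter defaults are assumptions, not taken from the implementation used here):

```python
import random

def hill_climb(fitness, n_bits, steps, rng):
    """One solution optimisation cycle: propose single-bit substitutions
    and keep only those that do not decrease fitness."""
    x = [rng.randint(0, 1) for _ in range(n_bits)]
    f = fitness(x)
    for _ in range(steps):
        i = rng.randrange(n_bits)
        x[i] ^= 1                  # bit-substitution variation
        f_new = fitness(x)
        if f_new >= f:             # accept non-deleterious changes
            f = f_new
        else:
            x[i] ^= 1              # revert deleterious change
    return x, f

def restart_hill_climber(fitness, n_bits, steps=320, restarts=2000, seed=0):
    """DO^0-style baseline: repeated random restarts, keep the best solution."""
    rng = random.Random(seed)
    return max((hill_climb(fitness, n_bits, steps, rng) for _ in range(restarts)),
               key=lambda xf: xf[1])
```

On a separable fitness such as one-max this reliably reaches the optimum; on HTOP it stalls at local optima, as shown next.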
Figure \\ref{Fig:FitnessHTOP} presents the solution fitness after each solution optimisation cycle.\n\nDO\\textsuperscript{0} is unable to find a globally optimal solution. It simply gets trapped at local optima, as a bit-substitution is insufficient to improve a solution without a deleterious fitness change. DO\\textsuperscript{0} therefore has exponential time complexity for finding a global optimum. For DO\\textsuperscript{1}, the results show that a single hidden layer is sufficient for finding a global optimum. The vertical dashed line illustrates the location of transition, after which DO\\textsuperscript{1} is able to perform module substitution without any deleterious fitness effects and thus search for a combination of module solutions to satisfy deeper constraints. However, note that HTOP\\textsubscript{32} contains 4 levels of hierarchy, and thus a single-layer network is not sufficient to fully represent the problem structure. As a result, DO\\textsuperscript{1} is unable to perform meta-module substitutions (a change of multiple module solutions simultaneously) and thus the algorithm is unable to converge to a global optimum (reliably find a globally optimal solution). DO\\textsuperscript{3} is able to find and converge to a globally optimal solution because its sufficiently deep network can correctly learn and represent the full problem structure, and it is thus able to perform meta-module substitutions.\n\nFigure \\ref{sFig:VariationHTOP} presents example solution trajectories during the solution optimisation cycle for DO\\textsuperscript{3} on HTOP\\textsubscript{32}. Initially, before transition, DO only performs a bit-substitution variation and is successful in finding solutions for each module (a one-hot solution per module), but it is unable to change between module solutions and thus we observe no further changes. 
After the 1\\textsuperscript{st} transition, we observe that the variation has been scaled up to allow variation of module solutions (as highlighted by the circles). Now DO can search for the correct combination of modules that satisfies depth 1 constraints without deleterious fitness changes. After the 2\\textsuperscript{nd} transition DO is capable of performing meta-module substitutions (module solutions of size 8), enabling it to satisfy depth 2 constraints easily. Hence we observe that DO is able to learn and represent deep hidden structure and correctly exploit this information in a deep and recursive manner, in order to reduce the dimensionality of the search and adapt the variation operator to solve the problem.\n\nHTOP is a problem that contains deep structure, such that as the size of the problem increases so does the depth of the problem structure. Consequently, a shallow model is unable to solve large instances. Figure \\ref{Fig:shalvdeep} presents the fitness of the best solution found, over 10 repeats, by DO using the layer-wise approach (DO\\textsuperscript{0}, DO\\textsuperscript{1}, DO\\textsuperscript{2}, DO\\textsuperscript{3}; the standard method) and by DO using an end-to-end approach (DO(E2E)\\textsuperscript{2}, DO(E2E)\\textsuperscript{3}). HTOP\\textsubscript{32}, HTOP\\textsubscript{64} and HTOP\\textsubscript{128} instances are used, with termination criteria of 800, 2500 and 15000 model optimisation steps respectively.\n\nIt is observed that a deeper network is required to solve problems with deeper constraints. Furthermore, the results show the significance of using the DNN in a layer-wise method instead of an end-to-end method. DO(E2E) works by constructing an end-to-end DNN at initialisation, and model updates modify all hidden-layer connections. A bit-substitution is used to produce the initial training data. At transition, the deepest hidden layer is used for generating a variation. 
The results clearly show that, whilst the DNN is sufficient to represent the problem structure (as proven by the layer-wise results), an end-to-end model is not efficient at learning the problem structure in a form that can be exploited effectively. The results also show that as the DNN gets deeper, the end-to-end approach produces successively inferior results. A layer-wise approach is therefore essential for DO to work and scale to large problems.\n\n\n\\subsection{Solving what MBOAs Cannot}\n\n\\begin{table}[]\n\\resizebox{\\linewidth}{!}{%\n\\begin{tabular}{|c|c|c|c|c|c|c|}\n\\hline\n\\multicolumn{4}{|c|}{Module} & \\multicolumn{3}{c|}{Fitness} \\\\ \\hline\n1 & 2 & 3 & 4 & \\multicolumn{1}{l|}{Within Module} & \\multicolumn{1}{l|}{Between Module} & \\multicolumn{1}{l|}{Total Fitness} \\\\ \\hline\n1000 & 0100 & 1101 & 0000 & 3x1 = 3 & 1 + 1 + 1 = 3 & 3.0003 \\\\ \\hline\n1000 & 1000 & 1101 & 1101 & 4x1 = 4 & 2\\textsuperscript{2} + 2\\textsuperscript{2} = 8 & 4.0008 \\\\ \\hline\n1000 & 1000 & 1000 & 1101 & 4x1 = 4 & 3\\textsuperscript{2} + 1 = 10 & 4.0010 \\\\ \\hline\n1000 & 1000 & 1000 & 1000 & 4x1 = 4 & 4\\textsuperscript{2} = 16 & 4.0016 \\\\ \\hline\n\\end{tabular}%\n}\n\\caption{Example solutions to MC\\textsubscript{parity}. A global optimum is a solution with all modules containing the same parity answer.}\n\\label{tab:MCparity}\n\\end{table}\n\n\\begin{equation} \\label{eq1}\n\\begin{split}\nF & = \\sum_{i=0}^m\\begin{cases}1 & \\left(\\sum_{j=0}^{n}S_j^i\\right)\\bmod 2 = 1\\\\0 & \\text{otherwise}\\end{cases} \\\\\n & + p\\times\\sum_{k=0}^{n\/2}\\left(\\sum_{i=0}^m\\begin{cases}1 & S^i = Type_k\\\\0 & \\text{otherwise} \\end{cases}\\right)^2\n\\end{split}\n\\end{equation}\n\nThe Parity Modular Constraint optimisation problem (MC\\textsubscript{parity}) is an adaptation of the Modular Constraint Problem \\cite{watson2011optimization} where module solutions are odd-parity bit-strings. A problem of size $N$ is divided into $m$ modules each of size $n$. 
There are $n\/2$ sub-solutions per module, and each sub-solution is assigned a type. A fitness point is awarded for each module that contains an odd-parity solution. A global solution is one where all modules in the problem contain the same parity solution ($n\/2$ global optima). The between-module fitness is the sum of the squared counts of each module-solution type present in the whole solution. The fitness function is provided in Equation \\ref{eq1} and example solution fitnesses are presented in Table \\ref{tab:MCparity}. For the scaling analysis performed here we use $n=4$.\n\nAlthough this problem supports many solutions within each module, the smaller fitness benefits of coordinating modules are rarer.\nEnsuring that the module fitness is much more beneficial than the between-module fitness ($p \\ll 1$) requires the algorithm to perform substitutions of odd-parity module solutions in order to follow the fitness gradient and coordinate the module solutions without deleterious fitness effects. If an algorithm cannot learn and exploit the high-order structure of the parity modules, then finding a global optimum will require exponential time with respect to the number of modules in the problem. Conversely, an algorithm that can do so will easily follow the fitness gradient to correctly coordinate the module solutions and thus scales polynomially with respect to the number of modules in the problem.\n\nLeading MBOAs such as LTGA, P3 and DSMGA use a dependency structure matrix (DSM) and the mutual information metric to capture variable dependencies. They are successful in capturing module structures containing more than 2 variables; however, it is hypothesised that they are unable to correctly capture structure that contains greater-than-pairwise dependencies between variables, a simple example being parity. 
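As a concrete check of Equation \ref{eq1}, here is a minimal Python sketch of the MC\textsubscript{parity} fitness, with each distinct odd-parity module pattern counted as a type; the value $p=10^{-4}$ is an assumption chosen to reproduce the totals in Table \ref{tab:MCparity}:

```python
def mc_parity_fitness(solution, n=4, p=1e-4):
    """Within-module: one point per odd-parity module.
    Between-module: p times the sum of squared counts of each
    odd-parity module pattern ('type') in the whole solution."""
    modules = [tuple(solution[i:i + n]) for i in range(0, len(solution), n)]
    odd = [m for m in modules if sum(m) % 2 == 1]
    counts = {}
    for m in odd:
        counts[m] = counts.get(m, 0) + 1
    between = sum(c * c for c in counts.values())
    return len(odd) + p * between
```

Under these assumptions the four rows of Table \ref{tab:MCparity} evaluate to 3.0003, 4.0008, 4.0010 and 4.0016 respectively.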
A neural network is capable of learning and capturing higher-order dependencies between variables.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=\\linewidth]{ModuleParity.pdf}\n\\caption{DO has polynomial time complexity and MBOAs have exponential time complexity when solving a problem containing high-order dependencies.}\n\\label{Fig:MBOAres}\n\\end{figure}\n\n\nFor LTGA and DO the parameters are manually adjusted such that all 50 runs produce the global optimum. The results are presented in Figure \\ref{Fig:MBOAres}. The data points present the average number of fitness evaluations required to find the global optimum, averaged over 50 independent runs for N $<$ 300 and over 10 runs for N=300 and N=400. For LTGA the population is adjusted manually until a change of 10\\% would not cause a failure. For DO, from smallest to largest N, the learning rate varied from 0.05 to 0.0015 and the transition from 60 to 1000 solutions. The network topology included up to three hidden layers and used a compression of $\\sim 10\\%$ at each layer. P3 is parameterless and required no adjustment. Two implementations are included for LTGA, \\cite{thierens2010linkage} and \\cite{goldman2014parameter}. The differences between both LTGA implementations and P3 are interesting and are hypothesised to be caused by the way in which solutions can be prioritised according to their fitness for constructing the model. More significantly, LTGA and P3 scale exponentially whereas DO scales polynomially ($\\sim N\\textsuperscript{2}$).\nTo verify that the deep structure of DO is necessary, a shallow version of DO is included in the results: DO\\textsuperscript{1} (limited to a single hidden layer). Its scaling appears to be exponential; results could not be obtained for problem instances larger than 108, as the parameter tuning became extremely sensitive. 
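The hypothesised limitation of pairwise metrics can be illustrated directly: over the set of odd-parity strings, every pair of variables is statistically independent, so a mutual-information DSM sees no structure at all. A small self-contained sketch (not part of the original experiments):

```python
from itertools import product
from math import log2

def pairwise_mi(samples, i, j):
    """Mutual information (in bits) between variables i and j,
    estimated from the empirical distribution of the samples."""
    n = len(samples)
    pi, pj, pij = {}, {}, {}
    for s in samples:
        pi[s[i]] = pi.get(s[i], 0) + 1 / n
        pj[s[j]] = pj.get(s[j], 0) + 1 / n
        pij[s[i], s[j]] = pij.get((s[i], s[j]), 0) + 1 / n
    return sum(p * log2(p / (pi[a] * pj[b])) for (a, b), p in pij.items())

# All odd-parity strings of length 4: each pair of bits is jointly uniform,
# so every pairwise mutual information vanishes despite the strong
# fourth-order dependency.
odd_parity = [s for s in product([0, 1], repeat=4) if sum(s) % 2 == 1]
```

By contrast, a genuinely pairwise-correlated sample (e.g. only `00` and `11`) yields one full bit of mutual information.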
Whereas with HTOP we can see clearly what the deep structure is that needs to be learnt, in the MC\\textsubscript{parity} problem we can see that high-order dependencies defeat other algorithms but are not a problem for DO.\n\n\\section{Solving the Travelling Salesman Problem}\\label{sec:TSP}\n\nIn this section we apply DO to solve the travelling salesman problem (TSP). A solution to a TSP instance is a route that visits each location once and returns to the starting location. The optimisation problem is to minimise the total travelling cost. We use 6 TSP instances from the TSP library \\cite{reinhelt2014tsplib}: 3 symmetric and 3 asymmetric, and compare with three other heuristic methods. Our aim here is to provide an example of how DO can be successfully used to solve CO problems containing characteristics such as non-binary representations and infeasible solutions. The results that follow do not show DO outperforming state-of-the-art heuristic methods (these problems are not particularly difficult for the Lin-Kernighan-Helsgaun algorithm \\cite{helsgaun2000effective}) but they do show that DO can find and exploit deep structure that can be used to solve TSP problems better than shallow methods.\n\n\n\n\n\\begin{table*}[t]\n\\centering\n\\resizebox{0.8\\textwidth}{!}{%\n\\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|}\n\\hline\n\\multirow{2}{*}{\\begin{tabular}[c]{@{}c@{}}Problem \\\\ Instance\\end{tabular}} & \\multirow{2}{*}{Type} & \\multirow{2}{*}{\\begin{tabular}[c]{@{}c@{}}Number \\\\ Locations\\end{tabular}} & \\multirow{2}{*}{\\begin{tabular}[c]{@{}c@{}}Performance\\\\DO (\\%)\\end{tabular}} & \\multirow{2}{*}{\\begin{tabular}[c]{@{}c@{}}Avg trials \\\\ req. 
for DO\\end{tabular}} & \\multicolumn{3}{l|}{\\begin{tabular}[c]{@{}c@{}}Performance using\\\\same trials as DO (\\%)\\end{tabular}} & \\multicolumn{3}{c|}{\\begin{tabular}[c]{@{}c@{}}Performance using\\\\10000 trials (\\%)\\end{tabular}} \\\\ \\cline{6-11}\n & & & \\multicolumn{1}{l|}{} & \\multicolumn{1}{l|}{} & \\multicolumn{1}{l|}{Swap} & \\multicolumn{1}{l|}{Insert} & \\multicolumn{1}{l|}{2-Opt} & Swap & Insert & 2-Opt \\\\ \\hline\n fr26 & Sym & 26 & 0 & 30 & 4.6 & 1.2 & 0.2 & 0 & 0 & 0 \\\\ \\hline\n brazil58 & Sym & 58 & 0 & 224 & 17.0 & 4.0 & 0.1 & 0.9 & 0 & 0 \\\\ \\hline\n st70 & Sym & 70 & 0 & 806 & 25.8 & 6.1 & 0.4 & 20.9 & 3.9 & 0.02 \\\\ \\hline\n ftv35 & Asym & 36 & 0 & 112 & 1.6 & 0.3 & 2.7 & 0.8 & 0 & 1.4 \\\\ \\hline\n p43 & Asym & 43 & 0 & 393 & 0.3 & 0.1 & 0 & 0.1 & 0.02 & 0 \\\\ \\hline\n ft70 & Asym & 70 & 0 & 1776 & 17.1 & 4.4 & 26.7 & 7.2 & 2.2 & 23.1 \\\\ \\hline\n\\end{tabular}%\n}\n\\caption{DO exploits useful structure in TSP problems to find the global optimum (column 4-5) that is not found by a heuristic method within the same number of trials (columns 6-8) nor found easily within 10000 trials (columns 9-11). Values report percentage difference from the global optimum.}\n\\label{tab:TSPres}\n\\end{table*}\n\n\n\\subsection{TSP Representation}\n\nTo apply DO the TSP solution is transformed into a binary representation using a connection matrix $C$ of size N\\textsuperscript{2} where $C\\textsubscript{ij}$ represents an edge. $C\\textsubscript{ij} = 1$ signifies that $j$ is the next location after $i$ (the remaining entries are 0). There are a total of N connections, where each location is only connected to one other location (not itself) to construct a valid tour. The connection matrix is sparse and we found that normalising the data improved training. 
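A sketch of this encoding and its inverse in Python (assuming tours are given as an ordering of location indices):

```python
def tour_to_matrix(tour):
    """Encode a tour (ordering of locations) as a binary connection matrix:
    C[i][j] = 1 iff location j is visited immediately after location i."""
    n = len(tour)
    C = [[0] * n for _ in range(n)]
    for k in range(n):
        C[tour[k]][tour[(k + 1) % n]] = 1   # wrap: last location returns to first
    return C

def matrix_to_tour(C, start=0):
    """Decode a valid connection matrix back into a tour."""
    tour, cur = [start], start
    for _ in range(len(C) - 1):
        cur = C[cur].index(1)               # the single outgoing connection
        tour.append(cur)
    return tour
```

Each row and column of a valid matrix contains exactly one 1, giving the N connections of a tour.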
This is a non-compact representation, but it is sufficient for demonstrating DO's ability to find and exploit deep structure.\n\nThe output generated by DO is continuous and is interpreted to construct a valid TSP solution. The interpretation is detailed in Algorithm \\ref{Alg:intTSP}. There are two stochastic elements included in the routine. The first element is the starting location, from which the tour is then constructed. Choosing a random starting position removes the bias associated with starting at the problem-defined starting location. The second element is selecting the next location in the tour. The autoencoder is trained such that positive numbers are connections in a tour. A negative output indicates no connection is made between locations. However, when all locations with a positive connection have already been used in the tour, then to ensure a feasible solution the next location is randomly selected from the set of remaining available locations. This ensures that learnt sub-tours (building blocks) are correctly conserved during future search and allows the location of the sub-tour to vary within the complete tour (searching in combinations of learnt building blocks). 
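The interpretation routine just described can be sketched in Python (the autoencoder output is abstracted here as one continuous connection row per location; the function and variable names are illustrative):

```python
import random

def interpret_tour(conn_rows, start, rng=None):
    """Decode continuous connection outputs into a valid tour.
    conn_rows[i][j] is the network's output for the edge i -> j."""
    rng = rng or random.Random(0)
    n = len(conn_rows)
    tour, valid = [start], set(range(n)) - {start}
    while valid:
        row = conn_rows[tour[-1]]
        nxt = max(valid, key=lambda j: row[j])   # strongest remaining connection
        if row[nxt] <= 0:                        # no positive connection left:
            nxt = rng.choice(sorted(valid))      # fall back to a random valid location
        tour.append(nxt)
        valid.remove(nxt)
    return tour
```

When every output row has a positive entry for an unused location, the decode is deterministic; the random fallback only fires when the learnt connections have all been consumed.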
The construction method resembles the method used by Hopfield and Tank \\cite{hopfield1985neural}, but here we use a max function rather than a probabilistic interpretation.\n\n\n\\begin{algorithm}\n\\caption{Interpretation for TSP Solution}\\label{Alg:intTSP}\nSet Tour[0] = a starting position selected uniformly at random\\;\nSet ValidLocs = all possible TSP Locations\\;\nRemove Tour[0] from ValidLocs\\;\nConVec = Vector of size N\\;\ni = 1\\;\n\\Repeat{ValidLocs empty}{\n ConVec = connection vector generated from Autoencoder for location Tour[i-1] \\;\n NextLoc = Where(max(ConVec[ValidLocs])) \\;\n \\eIf{ ConVec[NextLoc] $>$ 0}\n\t{\n\t\tTour[i] = NextLoc\\;\n\t}{\n\t\tTour[i] = a location selected uniformly at random from ValidLocs\\;\n\t}\n\tremove Tour[i] from ValidLocs\\;\n i++\\;}\nCycle tour until Tour[0] = defined start location\\;\nTour[i] = defined start location\\;\n\\end{algorithm}\n\n\\subsection{Results}\n\nThe performance of DO is compared with three local search heuristics: location swap, location insert and 2-opt. The location swap heuristic consists of selecting, at random, two positions in a TSP tour and swapping the locations. The location insert heuristic selects a position in the tour at random, removes the location from that position and inserts it at another random position. The 2-opt heuristic \\cite{croes1958method} involves selecting two edge connections between locations, swapping the connections and reversing the sub-tour between them. For our experiments, DO used the location insert heuristic before transition, as it produces good training data for both symmetric and asymmetric TSP cases. When performing search in the hidden layer, local search is also applied.\n\nThe results, averaged over 10 runs, are provided in Table \\ref{tab:TSPres}. DO solves all TSP instances each time, and the number of trials (training examples) is reported in column 5. 
Columns 6-8 report the percentage difference between the global optimum and the best solution found by a restart hill climber using a heuristic, given the same number of trials DO used to find the global optimum. This demonstrates that DO is exploiting structure, as it is able to find the global optimum faster. Note that DO used the insert heuristic for all TSP instances; therefore 2-opt can perform better on some small cases, as observed. Columns 9-11 report the percentage difference when the heuristic search is allowed 10000 trials. These results further confirm that DO is exploiting structure reliably as, especially for the larger instances, the global optimum is not easily found.\n\n\\section{Conclusion}\nDO is the first algorithm to use a DNN to repeatedly redefine the variation operator used to solve CO problems. The experiments show that there exist CO problems that DO can solve but state-of-the-art MBOAs cannot. They also show that there exist CO problems that a DNN can solve but a shallow neural network cannot, and that a layer-wise training method can solve but an end-to-end method cannot. Further, the results show that DO can be successfully applied to CO problems with characteristics including non-binary representations and infeasible solutions. This paper thus expands the applicability of DNNs to CO problems.\n\nDO provides the opportunity to use advanced deep-learning tools that are not available to MBOAs, such as dropout, regularisation and different network architectures. The application of these tools has been shown to improve generalisation in conventional DNN tasks and should therefore also improve the ability to learn problem structure, and thus DO's ability to solve CO problems. Their application within DO remains to be investigated. 
Whether other network architectures (convolution neural networks, deep belief networks, generative adversarial networks) offer capabilities that are useful for solving some kinds of CO problems, e.g. problems with transposable sub-solutions, is also of interest.\n\n\\bibliographystyle{aaai}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\\setcounter{equation}{0}\nExperimental results obtained a few years ago for ferrofluids \\cite{Litt}\nand recently for $\\gamma $-Fe$_{2}$O$_{3}$ nanoparticles \\cite{Sappey}\nindicate that for dilute samples (weak interparticle interactions), the\ntemperature $T_{\\max }$ at the maximum of the zero-field-cooled\nmagnetization $M_{ZFC},$ first increases with increasing field, attains a\nmaximum and then decreases. More recently, additional experiments performed\non the $\\gamma $-Fe$_{2}$O$_{3}$ particles dispersed in a polymer \\cite\n{Ezzir} confirm the previous result for dilute samples and show that, on\nthe contrary, for concentrated samples (strong interparticle interactions) $%\nT_{\\max }$ is a monotonic decreasing function of the magnetic field \\cite\n{Ezzir} (see Fig. \\ref{fig1}).\n\\begin{figure}[t]\n\\unitlength1cm\n\\begin{picture}(15,12)\n\\centerline{\\epsfig{file=fig1.eps,angle=0,width=10cm}}\n\\end{picture}\n\\vspace{-5cm}\n\\caption{ \\label{fig1}\nThe temperature $T_{\\max }$ at the maximum of the ZFC\nmagnetization plotted against the applied field for different sample\nconcentrations. 
The volume fraction of the particles in the sample 4D\n($\\gamma-Fe_{2}O_{3}$, mean diameter $\\sim 8.32$ nm) \ndetermined from the density measurements is $C_{v}=0.0075$, $0.012$, $0.047$.}\n\\end{figure}\nThe behaviour observed for dilute samples (isolated nanoparticles) is\nrather unusual since one intuitively expects the applied field to lower the\nenergy barrier and thus the blocking temperature of all particles and\nthereby the temperature $T_{\\max }$ \\footnote{\nThe temperature $T_{\\max }$ at the maximum of the ZFC magnetization is\nassumed to be roughly given by the average of the blocking temperature $%\nT_{B} $ of all particles in the (dilute) assembly.}. Resonant tunneling is\none of the arguments which has been advanced \\cite{Friedman} as a mechanism\nresponsible for an increase of the relaxation rate around zero field. More\nrecently $M_{ZFC}$ measurements \\cite{Sappey} have shown an anomaly in the\ntemperature range 40-60K, which is probably too high for quantum effects to\nmanifest themselves. Yet another explanation \\cite{Sappey} of the $T_{\\max }$\neffect was proposed using arguments based on the particle anisotropy-barrier\ndistribution. It was suggested that for randomly oriented particles of a \n{\\it uniform size}, and for small values of the field, the field depresses\nthe energy barriers, and thereby enlarges the barrier distribution, so\nlowering the blocking temperature. It was also suggested that the increase\nof the barrier-distribution width overcompensates for the decrease of the\nenergy barrier in a single particle. 
However, the discussion of the\nrelaxation time was based on the original Arrhenius approach of N\\'{e}el.\nHere the escape rate, that is the inverse relaxation time, is modelled as an\nattempt frequency multiplied by the Boltzmann factor of the barrier height,\nwith the inverse of the attempt frequency being of the order of 10$^{-10}$\ns, thus precluding any discussion of the effect of the applied field and the\ndamping parameter on the prefactor, and so their influence on the blocking\ntemperature.\n\nIt is the purpose of this paper to calculate the blocking temperature using\nthe Kramers theory of the escape of particles over potential barriers as\nadapted to magnetic relaxation by Brown \\cite{Brown}. The advantage of using\nthis theory is that it allows one to precisely calculate the prefactor as a\nfunction of the applied field and the damping parameter (provided that\ninteractions between particles are neglected). Thus the behaviour of the\nblocking temperature as a function of these parameters may be determined. It\nappears from the results that even such a precise calculation is unable to\nexplain the maximum in the blocking temperature $T_{B}$ as a function of the\napplied field. Thus an explanation of this effect is not possible using the\nN\\'{e}el-Brown model for a {\\it single} particle as that model invariably\npredicts a monotonic decrease of $T_{B}$ as a function of the applied field.\n\nIn view of this null result we demonstrate that the $T_{\\max }$ effect may\nbe explained by considering an assembly of non-interacting particles having\na volume distribution. 
This is accomplished by using Gittleman's \\cite\n{Gittleman} model which consists of writing the zero-field-cooled\nmagnetization of the assembly as a sum of two contributions, one from the\nblocked magnetic moments and the other from the superparamagnetic ones, with\nthe crucial assumption that the superparamagnetic contribution is given by a\nnon-linear function (Langevin function) of the applied magnetic field and\ntemperature. If this is done, even the simple N\\'{e}el-Brown expression for\nthe relaxation time leads to a maximum in $T_{\\max }$ for a wide range of\nvalues of the anisotropy constant $K.$ It was claimed in \\cite{Luc Thomas}\nthat a simple volume distribution, together with a N\\'{e}el expression for\nthe relaxation time, leads to the same result for FeC particles, although\nthe author used the Langevin function for the superparamagnetic contribution\nto magnetization.\n\nTherefore, the particular expression for the single-particle relaxation time\nwhich is used appears not to be of crucial importance in the context of\nthe calculation of the blocking temperature.\n\nIn the next section we briefly review Kramers' theory of the escape rate.\n\\section{Kramers' escape rate theory}\n\nThe simple Arrhenius calculation of reaction rates for an assembly of {\\it %\nmechanical particles} undergoing translational Brownian motion, in the\npresence of a potential barrier, was much improved upon by Kramers \\cite\n{Kramers}. He showed, by using the theory of the Brownian motion, how the\nprefactor of the reaction rate, as a function of the damping parameter and\nthe shape of the potential well, could be calculated from the underlying\nprobability-density diffusion equation in phase space, which for Brownian\nmotion is the Fokker-Planck equation (FPE). 
He obtained (by linearizing the\nFPE about the potential barrier) explicit results for the escape rate for\nintermediate-to-high values of the damping parameter and also for very small\nvalues of that parameter. Subsequently, a number of authors (\\cite\n{Meshkov-Melnikov}, \\cite{BHL}) showed how this approach could be extended\nto give formulae for the reaction rate which are valid for all values of the\ndamping parameter. These calculations have been reviewed by H\\\"{a}nggi et al.%\n\\cite{Hanggi et al.}.\n\nThe above reaction-rate calculations pertain to an assembly of mechanical\nparticles of mass $m$ moving along the $x$-axis so that the Hamiltonian of a\ntypical particle is \n\\begin{equation}\nH=\\frac{p^{2}}{2m}+V(q), \\label{Ham}\n\\end{equation}\nwhere $q$ and $p$ are the position and momentum of a particle; and $V(q)$ is\nthe potential in which the assembly of particles resides, the interaction of\nan individual particle with its surroundings is then modelled by the\nLangevin equation \n\\begin{equation}\n\\dot{p}+\\varsigma p+\\frac{dV}{dq}=\\lambda (t) \\label{Langevin}\n\\end{equation}\nwhere $\\lambda (t)$ is the Gaussian white noise, and $\\varsigma $ is the\nviscous drag coefficient arising from the surroundings of the particle.\n\nThe Kramers theory was first adapted to the thermal rotational motion of the\nmagnetic moment (where the Hamiltonian, unlike that of Eq.(\\ref{Ham}), is\neffectively the Gibbs free energy) by Brown \\cite{Brown} in order to improve\nupon N\\'{e}el's concept of the superparamagnetic relaxation process (which\nimplicitly assumes discrete orientations of the magnetic moment and which\ndoes not take into account the gyromagnetic nature of the system). 
Brown in\nhis first explicit calculation \\cite{Brown} of the escape rate confined\nhimself to axially symmetric (functions of the latitude only) potentials of\nthe magneto-crystalline anisotropy so that the calculation of the relaxation\nrate is governed (for potential-barrier height significantly greater than $%\nk_{B}T$) by the smallest non-vanishing eigenvalue of a one-dimensional\nFokker-Planck equation. Thus the rate obtained is valid for all values of\nthe damping parameter. As a consequence of this very particular result, the\nanalogy with the Kramers theory for mechanical particles only becomes fully\napparent when an attempt is made to treat non axially symmetric potentials\nof the magneto-crystalline anisotropy which are functions of both the\nlatitude and the longitude. In this context, Brown \\cite{Brown} succeeded in\ngiving a formula for the escape rate for magnetic moments of single-domain\nparticles, in the intermediate-to-high (IHD) damping limit, which is the\nanalogue of the Kramers IHD formula for mechanical particles. In his second\n1979 calculation \\cite{Brown} Brown only considered this case. Later Klik\nand Gunther \\cite{Klik and Gunther}, by using the theory of first-passage\ntimes, obtained a formula for the escape rate which is the analogue of the\nKramers result for very low damping. All these (asymptotic) formulae which\napply to a general non-axially symmetric potential, were calculated\nexplicitly for the case of a uniform magnetic field applied at an arbitrary\nangle to the anisotropy axis of the particle by Geoghegan et al. 
\\cite\n{Geoghegan et al} and compare favorably with the exact reaction rate given\nby the smallest non-vanishing eigenvalue of the FPE \\cite{Coffey1}%\n,\\thinspace \\cite{Coffey2}, \\cite{Coffey3} and with experiments on the\nrelaxation time of single-domain particles \\cite{Wensdorfer et al.}.\n\nIn accordance with the objective stated in the introduction, we shall now\nuse these formulae (as specialized to a uniform field applied at an\narbitrary angle by Geoghegan et al. \\cite{Geoghegan et al} and Coffey et al. \n\\cite{Coffey1},\\thinspace \\cite{Coffey2}, \\cite{Coffey3}) for the\ncalculation of the blocking temperature of a single particle.\n\nA valuable result following from these calculations will be an explicit\ndemonstration of the breakdown of the non-axially symmetric asymptotic\nformulae at very small departures from axial symmetry, manifesting itself in\nthe form of a spurious increase in $T_{\\max }$. Here interpolation formulae\njoining the axially symmetric and non-axially symmetric asymptotes\n(analogous to the one that joins the oscillatory and non-oscillatory\nsolutions of the Schr\\\"{o}dinger equation in the WKBJ method \\cite{Fermi})\nmust be used in order to reproduce the behaviour of the exact reaction rate\ngiven by the smallest non-vanishing eigenvalue of the FPE, which always\npredicts a monotonic decrease of $T_{\\max }$, as has been demonstrated by\nGaranin et al. \\cite{Garanin et al.} in the case of a transverse field.\n\n\\section{Calculation of the blocking temperature}\n\\label{Calculation of the blocking temperature}\n\nFollowing the work of Coffey et al. 
cited above, the effect of the applied magnetic field on the blocking temperature is studied by extracting $T_{B}$ from the analytic (asymptotic) expressions for the relaxation time (inverse of the Kramers escape rate) \cite{Coffey1},\thinspace \cite{Coffey2}, \cite{Coffey3}, which allow one to evaluate the prefactor as a function of the applied field and the dimensionless damping parameter $\eta _{r}$ in the Gilbert-Landau-Lifshitz (GLL) equation. For single-domain particles the equation of motion of the unit vector describing the magnetization inside the particle is regarded as the Langevin equation of the system (detailed in \cite{Coffey4}).

Our discussion of the N\'{e}el-Brown model as applied to the problem at hand proceeds as follows:

In an assembly of ferromagnetic particles with uniaxial anisotropy, neglecting dipole-dipole interactions, the ratio of the potential energy $vU$ to the thermal energy $k_{B}T$ of a particle is described by the bistable form 
\begin{equation}
\beta U=-\alpha ({\bf e\cdot n})^{2}-\xi ({\bf e\cdot h}) \label{1}
\end{equation}
where $\beta =v/(k_{B}T)$; $v$ is the volume of the single-domain particle; $\alpha =\beta K\gg 1$ is the reduced anisotropy-barrier-height parameter; $K$ is the anisotropy constant; $\xi =\beta M_{s}H$ is the external field parameter; ${\bf e,n,}$ and ${\bf h}$ ($h\equiv \xi /2\alpha $) are unit vectors in the direction of the magnetization ${\bf M}$, the easy axis, and the magnetic field ${\bf H}$, respectively. $\theta $ and $\psi $ denote the angles between ${\bf n}$ and ${\bf e}$ and between ${\bf n}$ and ${\bf h}$, respectively.
The N\'{e}el time, which is the time required for the magnetic moment to surmount the potential barrier given by (\ref{1}), is asymptotically related to the smallest non-vanishing eigenvalue $\lambda _{1}$ (the Kramers escape rate) of the Fokker-Planck equation by the expression $\tau \approx 2\tau _{N}/\lambda _{1}$ \cite{Brown}, where the diffusion time is
\begin{equation}
\tau _{N}\simeq \frac{\beta M_{s}}{2\gamma }\left[ \frac{1}{\eta _{r}}+\eta _{r}\right] , \label{2}
\end{equation}
$\gamma $ is the gyromagnetic ratio, $M_{s}$ the intrinsic magnetization, $\eta $ the phenomenological damping constant, and $\eta _{r}=\eta \gamma M_{s}$ the GLL damping parameter.

As indicated above, Brown \cite{Brown} at first derived a formula for $\lambda _{1}$ for an arbitrary {\it axially symmetric} bistable potential having minima at $\theta =(0,\pi )$ separated by a maximum at $\theta _{m}$, which, when applied to Eq.(\ref{1}) for ${\bf h\Vert n}$, i.e. a magnetic field parallel to the easy axis (so that $\theta _{m}=\cos ^{-1}(-h)$), leads to the form given by Aharoni \cite{Aharoni},
\begin{equation}
\lambda _{1}\approx \frac{2}{\sqrt{\pi }}\alpha ^{3/2}(1-h^{2})\times \left[ (1+h)\;e^{-\alpha (1+h)^{2}}+(1-h)\;e^{-\alpha (1-h)^{2}}\right] \label{4}
\end{equation}
where $0\leq h\leq 1,$ $h=1$ being the critical value at which the bistable nature of the potential disappears.

In order to describe the non-axially symmetric asymptotic behaviour, let us denote by $\beta \Delta U_{-}$ the smaller of the two reduced barrier heights governing escape from the left or the right well of the bistable potential. Then for very low damping, i.e.
for $\\eta _{r}\\times \\beta \\Delta U_{-}\\ll 1$\n(with of course the reduced barrier height $\\beta \\Delta U_{-}\\gg 1$,\ndepending on the size of the nanoparticle studied) we have \\cite{Coffey3}, \n\\cite{Brown}, \\cite{Coffey4} the following asymptotic expression for the\nN\\'{e}el relaxation time \n\\begin{eqnarray}\n\\tau _{VLD}^{-1} &\\approx &\\frac{\\lambda }{2\\tau _{N}} \\label{LD} \\\\\n&\\approx &\\frac{\\eta _{r}}{2\\pi }\\left\\{ \\omega _{1}\\times \\beta\n(U_{0}-U_{1})e^{-\\beta (U_{0}-U_{1})}+\\omega _{2}\\times \\beta\n(U_{0}-U_{2})e^{-\\beta (U_{0}-U_{2})}\\right\\} \\nonumber\n\\end{eqnarray}\nFor the intermediate-to-high damping, where $\\eta _{r}\\times \\beta \\Delta\nU_{-}>1$ (again with the reduced barrier height $\\beta \\Delta U_{-}$ much\ngreater than unity) we have \\cite{Coffey3} the asymptotic expression \n\\begin{equation}\n\\tau _{IHD}^{-1}\\approx \\frac{\\Omega _{0}}{2\\pi \\omega _{0}}\\left\\{ \\omega\n_{1}e^{-\\beta (U_{0}-U_{1})}+\\omega _{2}e^{-\\beta (U_{0}-U_{2})}\\right\\} ,\n\\label{IHD}\n\\end{equation}\nwhere \n\\begin{eqnarray*}\n\\omega _{1}^{2} &=&\\frac{\\gamma ^{2}}{M_{s}^{2}}c_{1}^{(1)}c_{2}^{(1)},\\quad\n\\omega _{2}^{2}=\\frac{\\gamma ^{2}}{M_{s}^{2}}c_{1}^{(2)}c_{2}^{(2)} \\\\\n\\omega _{0}^{2} &=&-\\frac{\\gamma ^{2}}{M_{s}^{2}}c_{1}^{(0)}c_{2}^{(0)},\\quad\n\\\\\n\\Omega _{0} &=&\\frac{\\eta _{r}g^{\\prime }}{2}\\left[ -c_{1}^{(0)}-c_{2}^{(0)}+%\n\\sqrt{(c_{2}^{(0)}-c_{1}^{(0)})^{2}-\\frac{4}{\\eta _{r}^{2}}%\nc_{1}^{(0)}c_{2}^{(0)}}\\right] \\\\\ng^{\\prime } &=&\\frac{\\gamma }{(1+\\eta _{r}^{2})M_{s}}\n\\end{eqnarray*}\nHere $\\omega _{1},\\omega _{2}$ and $\\omega _{0}$ are respectively the well\nand saddle angular frequencies associated with the bistable potential, $%\n\\Omega _{0}$ is the damped saddle angular frequency and the $c_{j}^{(i)}$\nare the coefficients of the truncated (at the second order in the direction\ncosines) Taylor series expansion of the crystalline anisotropy and external\nfield 
potential at the wells of the bistable potential denoted by $1$ and $2$\nand at the saddle point denoted by $0$. A full discussion of the application\nof these general formulae to the particular potential, which involves the\nnumerical solution of a quartic equation in order to determine the $%\nc_{j}^{(i)}$ with the exception of the particular field angle $\\psi =\\frac{%\n\\pi }{4}$ or $\\frac{\\pi }{2}$, in Eq.(\\ref{1}) is given in Refs. \\cite\n{Geoghegan et al}, \\cite{Kennedy}.\n\nThe blocking temperature $T_{B}$ is defined as the temperature at which $%\n\\tau =\\tau _{m}$, $\\tau _{m}$ being the measuring time. Therefore, using\nEqs.(\\ref{2}), (\\ref{4}) and (\\ref{LD}) (or (\\ref{2}), (\\ref{4}) and (\\ref\n{IHD})) and solving the equation $\\tau =\\tau _{m}$ for the blocking\ntemperature $T_{B}$, we obtain the variation of $T_{B}$ as a function of the\napplied field, for an arbitrary angle $\\psi $ between the easy axis and the\napplied magnetic field.\n\nIn particular, for very small values of $\\psi $ we have used Eq.(\\ref{4}),\nas the problem then becomes almost axially symmetric and the arguments\nleading to Eqs.(\\ref{LD}) and (\\ref{IHD}) fail \\cite{Brown}, \\cite{Geoghegan\net al}, \\cite{Coffey4}, and appropriate connection formulae must be used so\nthat they may attain the axially symmetric limit. We then sum over $\\psi ,$\nas the easy axes of the particles in the assembly are assumed to be randomly\ndistributed. In Fig. \\ref{fig2} we have plotted the resulting $T_{B}$ vs. $H$ for\ndifferent values of the damping parameter $\\eta _{r}$. We have checked that\nlowering (or raising) the value of the measuring time $\\tau _{m}$ shifts the\ncurve $T_{B}(H)$ only very slightly upwards (or downwards) while leaving the\nqualitative behaviour unaltered. 
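The procedure just described is straightforward to sketch numerically. Below (Python; the helper names and the simple bisection are ours) we solve $\tau =\tau _{m}$ for $T_{B}$ using the axially symmetric Eqs.(\ref{2}) and (\ref{4}) with the parameters quoted in the caption of Fig. \ref{fig2}; the actual calculation of course uses the angle-dependent asymptotes discussed above.

```python
import math

kB = 1.3807e-16           # erg/K
K  = 1.25e5               # erg/cm^3, anisotropy constant (Fig. 2 caption)
Ms = 300.0                # emu/cm^3
gamma = 2.0e7             # s^-1 G^-1
V  = 265e-21              # cm^3 (mean volume 265 nm^3)
eta_r = 2.5               # GLL damping parameter
tau_m = 100.0             # s, measuring time

def tau_neel(T, H):
    """Relaxation time tau ~ 2*tau_N/lambda_1 with the axially symmetric
    lambda_1 of Eq. (4) (field parallel to the easy axis, h < 1)."""
    beta = V / (kB * T)
    alpha = beta * K
    h = Ms * H / (2.0 * K)
    tau_N = beta * Ms / (2.0 * gamma) * (1.0 / eta_r + eta_r)   # Eq. (2)
    lam1 = (2.0 / math.sqrt(math.pi) * alpha**1.5 * (1.0 - h * h)
            * ((1 + h) * math.exp(-alpha * (1 + h)**2)
               + (1 - h) * math.exp(-alpha * (1 - h)**2)))
    return 2.0 * tau_N / lam1

def blocking_temperature(H, Tlo=1.0, Thi=50.0):
    """Bisect log(tau/tau_m) = 0 for T_B; tau decreases monotonically with T."""
    f = lambda T: math.log(tau_neel(T, H) / tau_m)
    for _ in range(80):
        Tmid = 0.5 * (Tlo + Thi)
        if f(Tlo) * f(Tmid) <= 0:
            Thi = Tmid
        else:
            Tlo = Tmid
    return 0.5 * (Tlo + Thi)

print(blocking_temperature(0.0))    # zero-field blocking temperature, kelvin
print(blocking_temperature(500.0))  # T_B decreases with applied field
```

With these parameters the zero-field $T_{B}$ comes out at a few kelvin, consistent with the scale of Fig. \ref{fig2}.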
\begin{figure}[t]
\unitlength1cm
\begin{picture}(15,11)
\centerline{\epsfig{file=fig2.eps,angle=0,width=10cm}}
\end{picture}
\vspace{-4cm}
\caption{ \label{fig2}
The blocking temperature $T_{B}$ as a function of the applied field, as extracted from the formulae for the relaxation time of a single-domain particle and summed over the arbitrary angle $\psi $, plotted for different values of the damping parameter $\eta _{r}$ (see text), and mean volume $\left\langle V\right\rangle =265$ nm$^{3}$ ($K=1.25\times 10^{5}$ erg/cm$^{3}$, $\gamma =2\times 10^{7}$ s$^{-1}$G$^{-1}$, $M_{s}=300$ emu/cm$^{3}$, $\tau _{m}=100$ s).
}
\end{figure}
In order to compare our analytical results with those of experiments on particle assemblies, we have calculated the temperature $T_{\max }$ at the maximum of the ZFC magnetization. In the present calculations we have assumed that $M_{s}$ is independent of temperature. We find that the temperature $T_{\max }$ behaves in the same way as observed experimentally \cite{Sappey}, \cite{Ezzir} for dilute samples (see Fig.\ \ref{fig3}, where the parameters are those of the most dilute sample in Fig.\ \ref{fig1}, with $\eta _{r}=2.5$).
\begin{figure}[t]
\unitlength1cm
\begin{picture}(15,11)
\centerline{\epsfig{file=fig3.eps,angle=0,width=10cm}}
\end{picture}
\vspace{-4cm}
\caption{ \label{fig3}
The temperature $T_{\max }$ at the maximum of the ZFC magnetization for an assembly of particles with randomly distributed easy axes, as a function of the applied field, obtained by averaging the blocking temperature in Fig.\ \ref{fig2} over the volume log-normal distribution ($\sigma =0.85$), see text; with $\eta _{r}=2.5$.
}
\end{figure}
\noindent Moreover, our calculations for a single particle show that the blocking temperature $T_{B}(H)$ exhibits a bell-like shape in the case of intermediate-to-high damping.
This behaviour is, however, spurious, as is shown below.

\section{Spurious behaviour of the blocking temperature at low fields}

We have mentioned that at first sight the non-axially symmetric asymptotic formulae appear to reproduce the low-field behaviour of $T_{\max }$. This apparent behaviour is spurious, however, because the asymptotic formulae fail at low fields: the IHD formula diverges like $h^{-1/2}$ for all angles, where $h$ is the reduced field defined in sect.\ \ref{Calculation of the blocking temperature}, and it does not connect continuously to the axially symmetric asymptote, as one may verify by exact calculation of the smallest non-vanishing eigenvalue of the Fokker-Planck equation. The effect of the divergence is thus to produce a spurious maximum in $T_{\max }$ as a function of the applied field.

In order to verify this, we have also performed such calculations \cite{Geoghegan et al}, \cite{Coffey1}, \cite{Kennedy} using exact numerical diagonalization of the Fokker-Planck matrix.
The smallest non-vanishing eigenvalue $\lambda _{1}$ thus obtained leads to a blocking temperature which agrees with that rendered by the asymptotic formulae, with the all-important exception of IHD at very low fields. There the exact calculation invariably predicts a monotonic decrease in the blocking temperature rather than the peak predicted by the IHD formula (\ref{IHD}). This indicates that the theoretical peak is an artefact of the asymptotic expansion, caused by using Eq.(\ref{IHD}) in a region beyond its range of validity, that is, in a region where the potential is almost axially symmetric owing to the smallness of the applied field, which gives rise to a spurious discontinuity between the axially and non-axially symmetric asymptotic formulae.

An explanation of this behaviour follows (see also \cite{Garanin et al.}). In the non-axially symmetric IHD asymptote Eq.(\ref{IHD}), which is formulated in terms of the Kramers escape rate, the saddle angular frequency $\omega _{0}$ tends to zero as the field tends to zero, for high damping. Thus the saddle becomes infinitely wide, and so the escape rate predicted by Eq.(\ref{IHD}) diverges, leading to an apparent rise in the blocking temperature until the field reaches a value sufficiently high for the exponential in the Arrhenius terms to take over. When this occurs the blocking temperature decreases again, in accordance with the expected behaviour.
This is the field range where one would expect the non-axially symmetric asymptote to work well.

In reality, as demonstrated by the exact numerical calculations of the smallest non-vanishing eigenvalue of the Fokker-Planck matrix, the small-field behaviour is not that predicted by the asymptote Eq.(\ref{IHD}) (it is rather given by the axially symmetric asymptote), because the size of the saddle, characterized by $\omega _{0}$, remains finite. Thus the true escape rate cannot diverge, and the apparent discontinuity between the axially symmetric and non-axially symmetric results is spurious, as is the apparent rise in $T_{B}$ it produces. In reality, the prefactor in Eq.(\ref{IHD}) can never overcome the exponential decrease embodied in the Arrhenius factor. Garanin \cite{Garanin2} (see \cite{Garanin et al.}) has discovered bridging formulae which provide continuity between the axially symmetric Eq.(\ref{4}) and the non-axially symmetric asymptotes, leading to a monotonic decrease of the blocking temperature with the field, in accordance with the numerical calculations of the lowest eigenvalue of the Fokker-Planck equation.

An illustration of this was given in Ref. \cite{Garanin et al.} for the particular case of $\psi =\frac{\pi }{2}$, that is, a transverse applied field. If the escape rate is written in the form 
\[
\tau ^{-1}=\frac{\omega _{1}}{\pi }A\exp (-\beta \Delta U) 
\]
where the attempt frequency $\omega _{1}$ is given by 
\[
\omega _{1}=\frac{2K\gamma }{M_{s}}\sqrt{1-h^{2}}, 
\]
then the factor $A$, as predicted by the IHD formula, behaves as $\eta _{r}/\sqrt{h}$ for $h\ll 1,\eta _{r}^{2}$, while for $h=0$ the axially symmetric result yields $A=2\pi \eta _{r}\sqrt{\alpha /\pi }$; the two limits are obviously discontinuous, so that a suitable interpolation formula is required.
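One such interpolation, due to Garanin et al. \cite{Garanin et al.}, multiplies the factor $A$ of the axially symmetric result by $e^{-\xi }I_{0}(\xi )$ with $\xi =2\alpha h$. A quick numerical check (Python; the function names are ours, and $I_{0}$ is evaluated by a short power series so the snippet is self-contained) confirms that this bridged prefactor reproduces both limits and stays finite as $h\rightarrow 0$:

```python
import math

def i0e(x):
    """e^{-x} I_0(x), the exponentially scaled modified Bessel function of
    the first kind, by direct power series (scipy.special.i0e does the same)."""
    term, s, k = 1.0, 1.0, 0
    while term > 1e-17 * s:
        k += 1
        term *= (x / (2.0 * k))**2
        s += term
    return s * math.exp(-x)

def attempt_factor(alpha, h, eta_r):
    """Bridged prefactor A for psi = pi/2: the axially symmetric value
    2*pi*eta_r*sqrt(alpha/pi) multiplied by e^{-xi} I_0(xi), xi = 2*alpha*h."""
    xi = 2.0 * alpha * h
    return 2.0 * math.pi * eta_r * math.sqrt(alpha / math.pi) * i0e(xi)

alpha, eta_r = 20.0, 1.0
print(attempt_factor(alpha, 0.0, eta_r))  # h = 0: axial limit, finite
print(attempt_factor(alpha, 0.5, eta_r))  # large xi: approaches eta_r/sqrt(h)
```

Because $e^{-\xi }I_{0}(\xi )\rightarrow 1$ as $\xi \rightarrow 0$ and $e^{-\xi }I_{0}(\xi )\rightarrow 1/\sqrt{2\pi \xi }$ for $\xi \gg 1$, the product interpolates smoothly between $2\pi \eta _{r}\sqrt{\alpha /\pi }$ and $\eta _{r}/\sqrt{h}$, removing the $1/\sqrt{h}$ divergence.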
Such a formula (analogous to\nthat used in the WKBJ method \\cite{Fermi}) is obtained by multiplying the\nfactor $A$ of the axially symmetric result by $e^{-\\xi }I_{0}(\\xi ),$ where $%\nI_{0}(\\xi )$ is the modified Bessel function of the first kind, and $\\xi\n=2\\alpha h$ (see \\cite{Garanin et al.}).\n\nThis interpolation formula, as is obvious from the large and small $\\xi $\nlimits, automatically removes the undesirable $1\/\\sqrt{h}$ divergence of the\nIHD formula and establishes continuity between the axially symmetric and\nnon-axially symmetric asymptotes for $\\psi =\\pi \/2$, as dictated by the\nexact solution.\n\nIt is apparent from the discussion of this section that the N\\'{e}el-Brown\nmodel for a single particle is unable to explain the maximum in $T_{\\max },$\nas a careful calculation of the asymptotes demonstrates that they always\npredict a monotonic decrease in the blocking temperature. However this\neffect may be explained by considering an assembly of non-interacting\nparticles with a (log-normal) volume distribution and using\nGittleman's \\cite\n{Gittleman} model as shown below, where the superparamagnetic contribution\nto magnetization is taken to be a non-linear function (Langevin function) of\nthe magnetic field.\n\n\\section{Possible explanation of the maximum in $T_{\\max }$}\n\\label{Possible explanation of the maximum}\n\nOur explanation of the low-field behaviour of $T_{\\max }$ is based on\nextracting $T_{\\max }$ from the zero-field cooled magnetization curve\nassuming a volume distribution of particles. According to Gittleman's model\nthe zero-field cooled magnetization of the assembly can be written as a sum\nof two contributions, one from the blocked magnetic moments and the other\nfrom the superparamagnetic ones. 
In addition, we write the superparamagnetic contribution as a Langevin function of the applied magnetic field and temperature.

Gittleman \cite{Gittleman} proposed a model in which the ac susceptibility of an assembly of non-interacting particles, with a volume distribution and randomly distributed easy axes, can be written as 
\begin{equation}
\chi (T,\omega )=\frac{1}{Z}\int\limits_{0}^{\infty }dVVf(V)\chi _{V}(T,\omega ), \label{Susc1}
\end{equation}
where $Z=\int_{0}^{\infty }dVVf(V)$, $f(V)$ is the volume distribution function, $\chi _{V}$ is the susceptibility of the volume under consideration, and $dVVf(V)\chi _{V}$ is the contribution to the total susceptibility from volumes in the range $V$ to $V+dV$. $\chi _{V}$ is then calculated by assuming a step function for the magnetic field, i.e. $H=0$ for $t<0$ and $H=H_{0}=const$ for $t>0$. Then the contribution to the magnetization from particles of volume $V$ is given by 
\begin{equation}
M_{V}(t)=VH_{0}\left( \chi _{0}-(\chi _{0}-\chi _{1})e^{-t/\tau }\right) ,
\label{Susc2}
\end{equation}
where $\chi _{0}=M_{s}^{2}(T)V/3k_{B}T$ is the susceptibility at thermodynamic equilibrium and $\chi _{1}=M_{s}^{2}(T)V/3E_{B}$ is the initial susceptibility of particles in the blocked state (see \cite{Dormann et al.} and many references therein).
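The step response of Eq.(\ref{Susc2}) simply interpolates between the blocked and equilibrium susceptibilities with the N\'{e}el time; a minimal sketch (Python, with purely illustrative numbers that are not fitted to any sample):

```python
import math

# Step response of Eq. (Susc2): M_V(t)/(V*H0) relaxes from the blocked
# susceptibility chi_1 toward the equilibrium value chi_0 with Neel time tau.
# Illustrative numbers only (CGS-like units, tau in seconds).
chi0, chi1, tau = 1.0, 0.24, 100.0

def M_over_VH0(t):
    return chi0 - (chi0 - chi1) * math.exp(-t / tau)

print(M_over_VH0(0.0))   # instantaneous (blocked) response: chi_1
print(M_over_VH0(1e6))   # long-time (superparamagnetic) limit: chi_0
```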
The Fourier transform of (\ref{Susc2}) leads to the complex susceptibility 
\begin{equation}
\chi =\frac{\chi _{0}+i\omega \tau \chi _{1}}{1+i\omega \tau },
\label{Susc3}
\end{equation}
whose real part reads \cite{Gittleman} 
\begin{equation}
\chi ^{\prime }=\frac{\chi _{0}+\omega ^{2}\tau ^{2}\chi _{1}}{1+\omega ^{2}\tau ^{2}}, \label{Susc4}
\end{equation}
where $\omega $ is the angular frequency, $\omega =2\pi \nu $, related to the measuring time by $\tau _{m}=1/\nu $.

Starting from (\ref{Susc4}), the application of an alternating field yields two limiting cases. (a) $\chi ^{\prime }=\chi _{0}$ if $\omega \tau \ll 1$: at high temperature the magnetic moments reorient many times during the time of a measurement, and thus the susceptibility is the superparamagnetic susceptibility $\chi _{0}$. (b) $\chi ^{\prime }=\chi _{1}$ if $\omega \tau \gg 1$: at low temperature the energy supplied by the field is insufficient to reverse the magnetic moments within the time of a measurement, and the susceptibility is the static susceptibility $\chi _{1}$. Between these two extremes there exists a maximum at the temperature $T_{\max }$. Thus $\chi ^{\prime }$ can be calculated from (\ref{Susc4}) using the formula for the relaxation time $\tau $ appropriate to the anisotropy symmetry and, considering a particular volume $V$, one can determine the temperature $T_{\max }$.

In an assembly of particles with a volume distribution, $\chi ^{\prime }$ can be calculated by postulating that, at a given temperature and given measuring time, certain particles are in the superparamagnetic state while the others are in the blocked state.
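For a single volume, the crossover between the two limits of Eq.(\ref{Susc4}) already produces a maximum of $\chi ^{\prime }$ at a well-defined temperature. A rough numerical sketch (Python; our helper names, axially symmetric relaxation time at $H=0$ and the parameters of Fig. \ref{fig2}):

```python
import math

kB, K, Ms = 1.3807e-16, 1.25e5, 300.0   # CGS units (Fig. 2 caption)
V, gamma, eta_r = 265e-21, 2.0e7, 2.5
omega = 2.0 * math.pi * 0.01            # nu = 10 mHz, i.e. tau_m = 100 s

def tau(T):
    """Neel time, tau = 2*tau_N/lambda_1, axially symmetric Eq. (4) at h = 0."""
    beta = V / (kB * T)
    alpha = beta * K
    tau_N = beta * Ms / (2.0 * gamma) * (1.0 / eta_r + eta_r)
    lam1 = 4.0 / math.sqrt(math.pi) * alpha**1.5 * math.exp(-alpha)
    return 2.0 * tau_N / lam1

def chi_prime(T):
    chi0 = Ms**2 * V / (3.0 * kB * T)   # superparamagnetic (equilibrium)
    chi1 = Ms**2 / (3.0 * K)            # blocked, E_B = K*V (V cancels)
    wt2 = (omega * tau(T))**2
    return (chi0 + wt2 * chi1) / (1.0 + wt2)    # Eq. (Susc4)

Ts = [1.0 + 0.05 * i for i in range(600)]
chis = [chi_prime(T) for T in Ts]
T_max = Ts[chis.index(max(chis))]
print(T_max)   # kelvin; the peak sits just above the blocking temperature
```

Below $T_{\max }$ the response collapses to $\chi _{1}$, above it $\chi ^{\prime }$ follows the Curie-like $\chi _{0}\propto 1/T$, exactly the qualitative picture described in the text.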
The susceptibility is then given by the sum of two contributions 
\begin{equation}
\chi ^{\prime }(T,\nu )=\int\limits_{V_{c}}^{\infty }dVVf(V)\chi _{1}(T,\nu )+\int\limits_{0}^{V_{c}}dVVf(V)\chi _{0}(T,\nu ), \label{Susc5}
\end{equation}
where $V_{c}=V_{c}(T,H,\nu )$ is the blocking volume, defined as the volume for which $\tau =\frac{1}{\nu }=\tau _{m}$; $\chi ^{\prime }$ shows a maximum at $T_{\max }$.

If this is done, even the simple N\'{e}el-Brown\footnote{This is the simplest non-trivial case, since the relaxation time (and thereby the critical volume) depends on the magnetic field.} expression for the relaxation time leads to a maximum in $T_{\max }$ when the superparamagnetic contribution to the magnetization is a Langevin function of the magnetic field. Thus the particular expression used for the single-particle relaxation time appears not to be of crucial importance in the context of the calculation of the blocking temperature.

In Figs. \ref{fig4a}-\ref{fig4c} we plot the results of such calculations (see appendix), where the parameters correspond to the samples of Fig. \ref{fig1}. In Fig. \ref{fig4a} we compare the results for linear and non-linear (Langevin function) dependence of the magnetization on the magnetic field. We see that the non-linear dependence on $H$ of the superparamagnetic contribution to the magnetization indeed leads to a maximum in $T_{\max }$, while in the linear case the temperature $T_{\max }$ is a monotonic function of the field, for all values of $K$ (corresponding to our samples).
This only shows that the volume distribution by itself cannot account for the non-monotonic behaviour of the temperature $T_{\max }$, contrary to what was claimed in \cite{Luc Thomas}.
\begin{figure}[t]
\unitlength1cm
\begin{picture}(15,12)
\centerline{\epsfig{file=fig4a.eps,angle=0,width=10cm}}
\end{picture}
\vspace{-5cm}
\caption{ \label{fig4a}
Temperature $T_{\max }$ as a function of the applied field, obtained by the calculations of sect. V and the appendix. Squares: the superparamagnetic magnetization $M_{sp}$ is a linear function of the magnetic field, given by Eq.(\ref{11}). Circles: $M_{sp}$ is the Langevin function, given by Eq.(\ref{11b}). The parameters are the same as in Figs. 1-3, and $K=1.0\times 10^{5}$ erg/cm$^{3}$.
}
\end{figure}
\begin{figure}[t]
\unitlength1cm
\begin{picture}(15,11)
\centerline{\epsfig{file=fig4b.eps,angle=0,width=9cm}}
\end{picture}
\vspace{-4.9cm}
\caption{ \label{fig4b}
Temperature $T_{\max }(H)$ for different values of the anisotropy constant $K.$
}
\end{figure}
\begin{figure}[h]
\unitlength1cm
\begin{picture}(15,11.5)
\centerline{\epsfig{file=fig4c.eps,angle=0,width=8cm}}
\end{picture}
\vspace{-4cm}
\caption{ \label{fig4c}
$T_{\max }(H)$ for different values of the volume-distribution width $\sigma ,$ and $K=1.5\times 10^{5}$ erg/cm$^{3}.$
}
\end{figure}

In fact, in the non-linear case $T_{\max }$ exhibits three different regimes, with field ranges depending on all the parameters and especially on $K$ (see Figs. \ref{fig4a}, \ref{fig4b}); for example, for $K=1.5\times 10^{5}$ erg/cm$^{3}$ the boundaries of these ranges can be read off Fig. \ref{fig4b}. In the first range, i.e. very low fields, $T_{\max }$ slightly decreases; in the second range it increases up to a maximum; and finally, for very high fields, $T_{\max }$ decreases down to zero.
These three regimes were\nobtained experimentally in \\cite{Luc Thomas} in the case of diluted FeC\nparticles.\n\nNext, we studied the effect of varying the anisotropy constant $K$. In\nFig. \\ref{fig4b} we plot the temperature $T_{\\max }$ vs. $H$ for different values of $%\nK.$ It is seen that apart from the obvious shifting of the peak of $T_{\\max\n} $ to higher fields, this peak broadens as the anisotropy constant $K$\nincreases.\n\nWe have also varied the volume-distribution width $\\sigma $ and the results\nare shown in Fig. \\ref{fig4c}. There we see that the maximum of $T_{\\max }$ tends to\ndisappear as $\\sigma $ becomes smaller.\n\nFinally, these results show that the non-monotonic behaviour of $T_{\\max }$\nis mainly due to the non-linear dependence of the magnetization as a\nfunction of magnetic field, and that the magneto-crystalline anisotropy and\nthe volume-distribution width have strong bearing on the variation of the\ncurvature of $T_{\\max }$ vs. field.\n\n\\section{Conclusion}\n\nOur attempt to explain the experimentally observed maximum in the curve $%\nT_{\\max }(H)$ for dilute samples using the asymptotic formulae for the\nprefactor of the relaxation rate of a single-domain particle given by Coffey\net al. \\cite{Coffey4}, has led to the conclusion that these asymptotic\nformulae are not valid for small fields, where the maximum occurs. However,\nthis negative result has renewed interest in the long-standing problem of\nfinding bridging formulae between non-axially and axially symmetric\nexpressions for the prefactor of the escape rate. Recently, this problem has\nbeen partially solved in \\cite{Garanin et al.}.\n\nOn the other hand, exact numerical calculations \\cite{Geoghegan et al}, \\cite\n{Coffey1}, \\cite{Kennedy} of the smallest eigenvalue of the Fokker-Planck\nmatrix invariably lead to a monotonic decrease in the blocking temperature\n(and thereby in the temperature $T_{\\max }$) as a function of the magnetic\nfield. 
We may conclude, then, that the expression for the single-particle relaxation time does not seem to play a crucial role. Indeed, the calculations of sect.\ \ref{Possible explanation of the maximum} have shown that even the simple N\'{e}el-Brown expression for the relaxation time leads to a maximum in $T_{\max }$ if one considers an assembly of particles whose magnetization, formulated through Gittleman's model, has a superparamagnetic contribution that is a Langevin function of the magnetic field. The magneto-crystalline anisotropy and the volume-distribution width have a strong influence on this behaviour.

Another important point, whose study is beyond the scope of this work, is the effect of interparticle interactions on the maximum in the temperature $T_{\max }$. As was said in the introduction, this maximum disappears in concentrated samples, i.e. in the case of intermediate-to-strong interparticle interactions. A recent study \cite{Chantrell} based on Monte Carlo simulations of interacting (cobalt) fine particles seems to recover this result but does not provide a clear physical interpretation of the effect obtained. In particular, it was shown there that interactions have a strong bearing on the effective variation of the average energy barrier with field, represented by an increase in the curvature of the variation of $T_{\max }$ with $H$ as the packing density (i.e. the strength of interparticle interactions) increases.

\clearpage

\section*{Acknowledgements}
H.K., W.T.C. and E.K. thank D.\ Garanin for helpful conversations. This work was supported in part by Forbairt Basic Research Grant No SC/97/70 and the Forbairt research collaboration fund 1997-1999. The work of D.S.F.C. and E.C.K.
was supported by EPSRC (GR/L06225).
\newpage 

\section*{Appendix: Obtaining $T_{\max }$ from the ZFC magnetization}

Here we present the numerical method for computing the temperature $T_{\max }$ at the peak of the zero-field-cooled magnetization of non-interacting nanoparticles.

The potential energy for a particle reads 
\begin{equation}
\frac{\beta U}{\alpha }=\sin ^{2}\theta -2h\cos (\psi -\theta ) \label{1app}
\end{equation}
where all parameters are defined in sect.\ \ref{Calculation of the blocking temperature}.

One then determines the extrema of the potential $U$ and defines the escape rate $\lambda $ according to the symmetry of the problem. Here we consider, for simplicity, the axially symmetric N\'{e}el-Brown model, where $\lambda $ is given by Eq. (\ref{4}).

The next step consists in finding the critical volume $V_{c}$ introduced in Eq. (\ref{Susc5}). $V_{c}$ is defined as the volume at which the relaxation time (or the escape rate) is equal to the measuring time $\tau _{m}=100$ s (or the corresponding measuring frequency).
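Numerically, $V_{c}$ is just the root of $\tau (V)=\tau _{m}$ at fixed $T$ and $H$. A sketch for the axially symmetric case at $H=0$ (Python; the helper names are ours, and working with logarithms avoids under/overflow of the exponential factors for very small or very large volumes):

```python
import math

kB, K, Ms = 1.3807e-16, 1.25e5, 300.0   # CGS units, parameters of Fig. 2
gamma, eta_r, tau_m = 2.0e7, 2.5, 100.0

def log_tau(V, T):
    """log of the Neel time, tau = 2*tau_N/lambda_1, with the axially
    symmetric lambda_1 of Eq. (4) at h = 0; evaluated entirely in logs."""
    beta = V / (kB * T)
    alpha = beta * K
    log_tau_N = math.log(beta * Ms / (2.0 * gamma) * (1.0 / eta_r + eta_r))
    log_lam1 = math.log(4.0 / math.sqrt(math.pi)) + 1.5 * math.log(alpha) - alpha
    return math.log(2.0) + log_tau_N - log_lam1

def critical_volume(T, Vlo=1e-21, Vhi=1e-17):
    """Bisect log(tau/tau_m) = 0 in log V: particles with V < V_c are
    superparamagnetic at temperature T, larger ones are still blocked."""
    f = lambda V: log_tau(V, T) - math.log(tau_m)
    for _ in range(100):
        Vmid = math.sqrt(Vlo * Vhi)
        if f(Vlo) * f(Vmid) <= 0:
            Vhi = Vmid
        else:
            Vlo = Vmid
    return math.sqrt(Vlo * Vhi)

print(critical_volume(10.0))   # cm^3; V_c grows with temperature
```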
That is, if one defines the function \n\\begin{equation}\nF(V)=\\lambda (\\alpha ,\\theta _{a},\\theta _{m},\\theta _{b})-\\frac{\\tau _{N}}{%\n\\tau _{m}}, \\label{3}\n\\end{equation}\nwhere $\\theta _{a},\\theta _{b},\\theta _{m}$ correspond to the two minima and\nmaximum of the potential, respectively, the critical volume $V_{c}$ is\nobtained as the volume that nullifies the function $F(V)$ for given values\nof $T,H$ and all other fixed parameters ($\\gamma ,\\eta _{r},M_{s}$ and the\nvolume-distribution width $\\sigma $).\n\nThen, $M_{zfc}$ is defined according to Gittleman's model \\cite{Gittleman},\nnamely \n\\begin{equation}\nZ\\times M_{zfc}(H,T,\\psi )=%\n\\displaystyle \\int %\n\\limits_{0}^{V_{c}}{\\cal D}V\\,M_{sp}(H,T,V,\\psi )+%\n\\displaystyle \\int %\n\\limits_{V_{c}}^{\\infty }{\\cal D}V\\,M_{b}(H,T,V,\\psi ) \\label{4app}\n\\end{equation}\nwhere $M_{sp}$ and $M_{b}$ are the contributions to magnetization from\nsuperparamagnetic particles with volume $V\\leq V_{c}$ and particles still in\nthe blocked state with volume $V>V_{c}$. $f(V)=(1\/\\sigma \\sqrt{2\\pi })\\exp\n(-\\log ^{2}(V\/V_{m})\/2\\sigma ^{2})$, is the\nlog-normal volume distribution, $V_{m}$ being the mean volume; $Z\\equiv \\int_{0}^{\\infty }{\\cal D}%\nV=\\int_{0}^{\\infty }Vf(V)dV.$\n\nEq.(\\ref{4app}) can be rewritten as \n\\begin{equation}\nZ\\times M_{zfc}=%\n\\displaystyle \\int %\n\\limits_{0}^{V_{c}}{\\cal D}V\\,M_{sp}-%\n\\displaystyle \\int %\n\\limits_{0}^{V_{c}}{\\cal D}V\\,M_{b}+%\n\\displaystyle \\int %\n\\limits_{0}^{\\infty }{\\cal D}V\\,M_{b}. \\label{5}\n\\end{equation}\nNow using, \n\\begin{equation}\nM_{b}(H,T,V,\\psi )=\\frac{M_{s}^{2}H}{2K}\\sin ^{2}\\psi , \\label{6}\n\\end{equation}\n$M_{b}$ can be taken outside the integral in the last term above. 
Thus%\n\\footnote{%\nThe reason for doing so is to avoid computing the integral $%\n\\int_{V_{c}}^{\\infty }$ which is numerically inconvenient.}, \n\\[\nZ\\times M_{zfc}=%\n\\displaystyle \\int %\n\\limits_{0}^{V_{c}}{\\cal D}V\\,(M_{sp}-\\,M_{b})+\\,Z\\times M_{b}. \n\\]\n\nThe final expression of $M_{zfc}$ is obtained by averaging over the angle $%\n\\psi $ ($\\left\\langle \\sin ^{2}\\psi \\right\\rangle =\\frac{2}{3}$), \n\\begin{equation}\nM_{zfc}=\\frac{1}{Z}%\n\\displaystyle \\int %\n\\limits_{0}^{\\pi \/2}d\\psi \\sin \\psi \\times \n\\displaystyle \\int %\n\\limits_{0}^{V_{c}}{\\cal D}V\\,(M_{sp}-\\,M_{b})+\\,\\frac{M_{s}^{2}H}{3K}.\n\\label{8}\n\\end{equation}\nThe expression of $M_{sp}$ varies according to the model used. Chantrell et\nal. \\cite{Chantrell et al.} have given an expression which is valid for $%\nM_{s}HV\/k_{B}T\\ll 1,$%\n\\begin{equation}\nM_{sp}(H,T,V,\\psi )=\\frac{M_{s}^{2}VH}{k_{B}T}\\left( \\cos ^{2}\\psi +\\frac{1}{%\n2}\\left[ 1-\\cos ^{2}\\psi (1-\\frac{I_{2}}{I_{0}})\\right] \\right) , \\label{9}\n\\end{equation}\nwith \n\\begin{equation}\n\\frac{I_{2}}{I_{0}}=\\frac{1}{\\alpha }\\left( -\\frac{1}{2}+\\frac{e^{\\alpha }}{%\nI(\\alpha )}\\right) ,\\quad I(\\alpha )=2%\n\\displaystyle \\int %\n\\limits_{0}^{1}dxe^{\\alpha x^{2}}. \\label{10}\n\\end{equation}\nNote that upon averaging over $\\psi ,$ the expression in (\\ref{9}) reduces\nto \n\\begin{equation}\nM_{sp}(H,T,V)=\\frac{M_{s}^{2}VH}{3k_{B}T}, \\label{11}\n\\end{equation}\nwhich is just the limit of the Langevin function for $M_{s}HV\\ll k_{B}T,$\ni.e. 
\begin{equation}
M_{sp}(H,T,V)=M_{s}{\cal L}\left( \frac{M_{s}HV}{k_{B}T}\right) .
\label{11b}
\end{equation}

Therefore, the expression in (\ref{8}) becomes 
\begin{equation}
M_{zfc}=\frac{1}{Z}\int\limits_{0}^{V_{c}}{\cal D}V\,\left( M_{sp}-M_{b}\right) +\frac{M_{s}^{2}H}{3K}. \label{12app}
\end{equation}
This is valid only in the case of a relaxation time independent of $\psi $, as in the N\'{e}el-Brown model, which is applicable to an assembly of uniformly oriented particles. However, if one wanted to use the expressions for the relaxation time given by Coffey et al. and others, where $\tau $ depends on the angle $\psi $, as is the case in reality, one should not interchange the integrations over $\psi $ and $V$, as is done in (\ref{8}), since $V_{c}$ in general depends on $\tau $ and thereby on $\psi $.

Therefore, the final expression for $M_{zfc}$ that was used in our calculations for determining the temperature $T_{\max }$ is given by Eqs. (\ref{11}), (\ref{11b}), (\ref{12app}).

\clearpage

\section{Introduction}
The evolution of the magnetic phase transition in the helical magnet MnSi at high pressure has been reported in a number of publications~\cite{1,2,3}. It became clear that the phase transition temperature decreases with pressure and practically reaches zero at $\sim$15 kbar. However, the nature of this transition at zero temperature and high pressure is still a subject of controversial interpretations. Early on, the existence of a tricritical point on the phase transition line was claimed, which might result in a first-order phase transition in MnSi at low temperatures~\cite{4}. The latter would preclude the existence of a quantum critical point in MnSi. This view was seemingly supported by the volume measurements at the phase transition in MnSi~\cite{5,6}.
However, this idea was disputed in papers~\\cite{7,8}, where it was argued that the volume anomaly observed at the phase transition in MnSi at low temperatures is simply a slightly narrowed version of the anomaly clearly seen at elevated temperatures. On the other hand, some experimental works and the recent Monte-Carlo calculations indicate that inhomogeneous stresses arising at high pressures and low temperatures may strongly influence the characteristics of phase transitions, which could make any experimental data not entirely conclusive~\\cite{7,8,9}.\n\n\nIn this situation it would be appealing to use a different approach to search for quantum criticality in MnSi, for instance, using doping as a controlling parameter. Indeed, it is known that doping MnSi with Fe or Co decreases the temperature of the magnetic phase transition and finally suppresses the transition completely at some critical concentration of the dopant. In the case of Fe doping the critical concentration is about 15\\% (actually different estimates vary from 0.10 to 0.19)~\\cite{10,11,13}. \n\nActually, the general belief that the concentration of the dopant added to the batch will be the same in the grown crystal is incorrect. One needs to perform chemical and x-ray analyses to draw a firm conclusion about the real composition of the material. Still, there is some evidence (non-Fermi-liquid resistivity, logarithmic divergence of the specific heat) that a quantum critical point indeed occurs in (MnFe)Si in the vicinity of the iron concentration $x=0.15$ at ambient pressure. However, in the recent publication it is claimed that (Mn$_{0.85}$Fe$_{0.15}$)Si experiences a second order phase transition in the pressure range $\\sim$0-23 kbar, therefore placing the quantum critical point in this material at high pressure~\\cite{14}. \n\n\nTo this end it seems appropriate to take another look at the situation. \nWe report here results of a study of a single crystal with nominal composition (Mn$_{0.85}$Fe$_{0.15}$)Si. 
The sample was prepared from an ingot obtained by premelting Mn (purity 99.99\\%, Chempur), Fe (purity 99.98\\%, Alfa Aesar), and Si ($\\rho_n$=300 Ohm cm, $\\rho_p$=3000 Ohm cm) under argon atmosphere in a single arc oven; a single crystal was then grown using the triarc Czochralski technique. \nElectron-probe microanalysis shows that the real composition is (Mn$_{0.795}$Fe$_{0.147}$)$_{47.1}$Si$_{52.9}$, which indicates some deviation from the stoichiometric chemical composition, common to the silicide compounds. Hereafter we will refer to the sample under study as (MnFe)Si.\n\n \nThe lattice parameter of the sample appeared to be $a$=4.5462~\\AA. Note that the lattice parameter of pure MnSi is somewhat larger, $a$=4.5598~\\AA. This implies that iron plays the role of some sort of pressure agent. Let us estimate what pressure is needed to compress pure MnSi to the volume corresponding to the lattice parameter of the material under study. We use a simple linear expression of the form $P=K\\dfrac{\\Delta V}{V}$, where $P$ is the pressure, $K=-V(\\frac{dP}{dV})_T$ is the bulk modulus, and $\\dfrac{\\Delta V}{V}=(V_{MnSi}-V_{(MnFe)Si})\/V_{(MnFe)Si}$. Taking $K$=1.64 Mbar~\\cite{15} and $\\dfrac{\\Delta V}{V}$=8.96$\\cdot 10^{-3}$ (as follows from the lattice parameter values given above), one obtains $P$=14.63 kbar. Surprisingly, this value practically coincides with the pressure corresponding to the phase transition in pure MnSi at zero temperature~\\cite{1,2,3,4}. 
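The arithmetic of this estimate is easy to reproduce; the following short numerical check is a sketch assuming only the linear relation $P=K\Delta V/V$ and a cubic cell ($V\sim a^3$), with variable names of our choosing:

```python
# Back-of-envelope check of the "chemical pressure" estimate above,
# assuming the linear relation P = K * (dV/V) and V ~ a^3 (cubic cell).
a_MnSi = 4.5598    # lattice parameter of pure MnSi, Angstrom
a_doped = 4.5462   # lattice parameter of the (MnFe)Si sample, Angstrom
K = 1640.0         # bulk modulus of MnSi in kbar (1.64 Mbar)

dV_over_V = (a_MnSi**3 - a_doped**3) / a_doped**3
P = K * dV_over_V  # effective pressure in kbar

print(f"dV/V = {dV_over_V:.2e}, P = {P:.1f} kbar")  # close to the quoted 14.63 kbar
```

The result reproduces the quoted numbers to within rounding of the input constants.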
This adds an extra argument in favor of quantum criticality of (MnFe)Si in the vicinity of the iron concentration $x=0.15$.\n\n\n\n\n\\begin{figure}[htb]\n\\includegraphics[width=80mm]{fig1.eps}\n\\caption{\\label{fig1} (Color online) Magnetization curves for (MnFe)Si (a) and MnSi (b)~\\cite{7,17}.}\n\\end{figure}\n\n\\begin{figure}[htb]\n\\includegraphics[width=80mm]{fig2.eps}\n\\caption{\\label{fig2} (Color online) The inverse magnetic susceptibility $1\/\\chi$ for (MnFe)Si and MnSi~\\cite{7,17} as measured at 0.01 T.}\n \\end{figure}\n\n\n\\section{Experimental}\nWe performed magnetic, dilatometric, electrical and heat capacity measurements to characterize the sample of (MnFe)Si. All measurements were made using the Quantum Design PPMS system with the heat capacity and vibrating magnetometer modules. The linear expansion of the sample was measured by the capacitance dilatometer~\\cite{16}. The resistivity data were obtained with the standard four-terminal scheme using spark-welded Pt wires as electrical contacts.\n\n\nThe experimental results are displayed in Figs.~\\ref{fig1}--\\ref{fig11}. Whenever possible, the corresponding data for pure MnSi are depicted in the same figures to facilitate comparison of the data.\n\n\nIn Fig.~\\ref{fig1} the magnetization curves for both (MnFe)Si and MnSi are shown. As can be seen, the magnetization of (MnFe)Si (a) does not reveal the existence of a spontaneous magnetic moment, in contrast with the case of MnSi. From the saturated magnetization of MnSi at high field (Fig.~\\ref{fig1}b), the magnetic moment per Mn atom is 0.4$\\mu_B$.\n\n\\begin{figure}[htb]\n\\includegraphics[width=80mm]{fig3.eps}\n\\caption{\\label{fig3} (Color online) Specific heat of (MnFe)Si as a function of temperature at different magnetic fields. Specific heat of (MnFe)Si divided by temperature $C_p\/T$ is shown in the inset in the logarithmic scale at zero magnetic field. 
}\n\\end{figure}\n \n\\begin{figure}[htb]\n\\includegraphics[width=80mm]{fig4.eps}\n\\caption{\\label{fig4} (Color online) a) Specific heat of (MnFe)Si as a function of temperature in the 2--20 K range. The line is a power-function fit to the experimental data (shown in the plot).\nb) Specific heat of MnSi at high pressure measured by the ac-calorimetry technique~\\cite{19}.}\n\\end{figure}\n \n\\begin{figure}[htb]\n\\includegraphics[width=80mm]{fig5.eps}\n\\caption{\\label{fig5} (Color online) Temperature dependence of $C_p$ (a) and $C_p\/T$ (b) for (MnFe)Si and MnSi~\\cite{7,20}.}\n\\end{figure}\n \n\n\\begin{figure}[htb]\n\\includegraphics[width=80mm]{fig6.eps}\n\\caption{\\label{fig6} (Color online) Dependence of linear thermal expansion of (MnFe)Si and MnSi~\\cite{7,20} on temperature. MnSi data reduced to (MnFe)Si ones at 200 K for better viewing.} \n\\end{figure}\n \nAs seen from Fig.~\\ref{fig2} the magnetic susceptibility $\\chi$ of (MnFe)Si does not obey the Curie-Weiss law, which clearly works in the paramagnetic phase of MnSi. The temperature dependence of $1\/\\chi$ for (MnFe)Si is well described in the range 5-150 K by the expression $1\/\\chi=A+cT^{0.78}$, which was also observed for some substances with quantum critical behavior~\\cite{18}. This expression can be rewritten in the form $(1\/\\chi - 1\/\\chi_0)^{-1}=c^{-1}T^{-0.78}$, implying a divergence of the quantity $(1\/\\chi - 1\/\\chi_0)^{-1}$. The nature of the anomalous part of $1\/\\chi$ at $<5$~K (see inset in Fig.~\\ref{fig2}) will be discussed later.\n\n\nAs can be seen from Fig.~\\ref{fig3}, magnetic field does not influence the specific heat of (MnFe)Si much, at least at high temperatures. 
It is also seen in the inset of Fig.~\\ref{fig3} that the ratio $C_p\/T$ is not well fitted by the logarithmic law.\n\n\nThe power law behavior of $C_p$ in the range up to 20~K is characterized by the exponent $\\sim 0.6$ (Fig.~\\ref{fig4}), which immediately leads to the diverging expression $C_p\/T\\sim T^{-1+0.6}$ (see Fig.~\\ref{fig5}b). This finding contradicts the data of Ref.~\\cite{12} declaring the logarithmic divergence of $C_p\/T$ for (MnFe)Si in about the same temperature range (see the inset in Fig.~\\ref{fig3}). Fig.~\\ref{fig4}b shows how the phase transition in MnSi at high pressure, close to the quantum critical region, influences the specific heat. An additional illustration of this kind is provided by the resistivity data (see Fig.~\\ref{fig11}). One cannot find any similar evidence in Fig.~\\ref{fig4}a for the would-be phase transition, which was suggested in~\\cite{14}.\n\n\nFig.~\\ref{fig5} shows the temperature dependences of the specific heat $C_p$ (a) and the specific heat divided by temperature, $C_p\/T$ (b), for (MnFe)Si and MnSi. As can be seen, both quantities do not differ much at temperatures above the magnetic phase transition in MnSi, even with applied magnetic field. A great difference arises at and below the phase transition temperature in MnSi. The remarkable thing is the diverging behavior of $C_p\/T$, which is removed by an application of strong magnetic field (Fig.~\\ref{fig5}b), leading to a finite value of $C_p\/T$ at $T=0$ corresponding to the electronic specific heat term $\\gamma$, therefore restoring the Fermi liquid picture.\n\n\n\n\\begin{figure}[htb]\n\\includegraphics[width=80mm]{fig7.eps}\n\\caption{\\label{fig7} (Color online) Linear thermal expansion coefficients of (MnFe)Si and MnSi~\\cite{7,20}. } \n\\end{figure}\n\nAs is seen in Fig.~\\ref{fig6} the magnetic phase transition in MnSi is signified by a significant volume anomaly. Nothing of this kind exists in the thermal expansion curve of (MnFe)Si. 
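The way such a power-law exponent is extracted can be sketched in a few lines as a least-squares fit of $\log C_p$ versus $\log T$; the data below are synthetic (mimicking $C\sim T^{0.6}$ with small noise), not the measured specific heat:

```python
import numpy as np

# Power-law exponent extraction sketch: fit log C vs log T by least squares.
# The data are synthetic (C = a*T**0.6 with small noise), for illustration only.
rng = np.random.default_rng(3)
T = np.linspace(2.0, 20.0, 50)                    # temperature grid, K
C = 0.5 * T**0.6 * rng.normal(1.0, 0.01, T.size)  # mock specific heat data

slope, log_a = np.polyfit(np.log(T), np.log(C), 1)
print(f"fitted exponent: {slope:.2f}")  # close to the assumed 0.6
```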
A somewhat different situation can probably be observed in Fig.~\\ref{fig7}, which displays the temperature dependences of the linear thermal expansion coefficients $\\beta=(1\/L_0) (dL\/dT)_p$ for (MnFe)Si and MnSi. A surprisingly good agreement between both data sets is seen at high temperature. A specific feature of $\\beta$ of (MnFe)Si is a small tail at $T<5$~K. This tail tends to cross the temperature axis at a finite value, therefore tending to negative $\\beta$ as does occur in MnSi in the phase transition region (see Figs.~\\ref{fig7} and \\ref{fig8}). Just this behavior of $\\beta$ creates a sudden drop at low temperatures in the seemingly diverging ratio $\\beta\/C_{p}$, which conditionally may be called the Gruneisen parameter (see Fig.~\\ref{fig9}). \n \n\\begin{figure}[htb]\n\\includegraphics[width=80mm]{fig8.eps}\n\\caption{\\label{fig8} (Color online) Linear thermal expansion coefficients of (MnFe)Si as functions of temperature and magnetic fields.} \n\\end{figure}\n\\begin{figure}[htb]\n\\includegraphics[width=80mm]{fig9.eps}\n\\caption{\\label{fig9} (Color online) Gruneisen ratio tends to diverge at $T\\rightarrow0$. This tendency is interrupted by a peculiar behavior of the thermal expansion coefficient.} \n\\end{figure}\n\nFig.~\\ref{fig8} shows that magnetic field strongly influences the \"tail\" region of the thermal expansion coefficient of (MnFe)Si, which indicates its fluctuation nature. This feature should be linked to the anomalous part of $1\/\\chi$ at $<5$~K (Fig.~\\ref{fig2}).\n\n\\begin{figure}[htb]\n\\includegraphics[width=80mm]{fig10.eps}\n\\caption{\\label{fig10} (Color online) Resistivities of (MnFe)Si and MnSi~\\cite{21} as functions of temperature.} \n\\end{figure}\n\nResistivities of (MnFe)Si and MnSi as functions of temperature are shown in Fig.~\\ref{fig10}. The quasi-linear non-Fermi-liquid behavior of the resistivity of (MnFe)Si at low temperature, in contrast with the MnSi case, is quite obvious. 
With increasing temperature the resistivity of (MnFe)Si evolves to the \"saturation\" curve typical of strongly disordered metals and similar to the post-phase-transition branch of the resistivity curve of MnSi~\\cite{22}.\n\n\\begin{figure}[htb]\n\\includegraphics[width=80mm]{fig11.eps}\n\\caption{\\label{fig11} (Color online) Dependence of the temperature derivative of the resistivity of (MnFe)Si and MnSi on temperature: a) $d\\rho\/dT$ of (MnFe)Si as functions of temperature and magnetic fields, b) $d\\rho\/dT$ of MnSi as a function of temperature at ambient and high pressure (in the inset)~\\cite{8,21}.} \n\\end{figure}\n\n\nA comparison of Fig.~\\ref{fig11} (a) and (b) shows a drastic difference in the behavior of $d\\rho\/dT$ at the phase transition in MnSi and in (Mn,Fe)Si in the supposedly critical region. The peculiar form of $d\\rho\/dT$ of (Mn,Fe)Si does not look like a phase transition feature, though it certainly reflects the existence of significant spin fluctuations. This feature should be related to the anomalies of the magnetic susceptibility (Fig.~\\ref{fig2}) and the thermal expansion coefficient (Fig.~\\ref{fig8}).\n\n\\section{Discussion}\nAs we have shown in the Introduction, the lattice parameter of our sample of (MnFe)Si corresponds to that of pure MnSi compressed by a pressure of about 1.5 GPa. At this pressure and zero temperature the quantum phase transition in MnSi does occur, the nature and properties of which are still under discussion~\\cite{7}. An alternative way to reach the quantum regime is to use the so-called \"chemical pressure\", doping MnSi with suitable \"dopants\", which could avoid the disturbing inhomogeneous stresses arising at conventional pressure loading. Indeed, as it appeared, the composition Mn$_{0.85}$Fe$_{0.15}$Si demonstrated properties typical of the quantum critical state~\\cite{11,12}. 
However, the conclusions of Refs.~\\cite{11,12} were disputed in the publication~\\cite{14}, whose authors claim, on the basis of muon spin relaxation experiments, that 15\\% Fe-substituted (Mn,Fe)Si experiences a second order phase transition at ambient pressure, reaching a quantum critical point only at pressure $\\sim$21--23 kbar.\n\n\nWith all that in mind we have carried out a number of measurements trying to elucidate the problem. Below we summarize our findings.\n\n\n\\begin{enumerate}\n\\item There is no spontaneous magnetic moment in (MnFe)Si at least down to 2~K (Fig.~\\ref{fig1}). The magnetic susceptibility of (MnFe)Si can be described by the expression $1\/\\chi=A+cT^{0.78}$, or $(1\/\\chi - 1\/\\chi_0)^{-1} = c^{-1}T^{-0.78}$, in the temperature range $\\sim$5--150~K, implying divergence of the quantity $(1\/\\chi - 1\/\\chi_0)^{-1}$. This behavior was also observed earlier in the case of some substances close to the quantum critical region (Fig.~\\ref{fig2})~\\cite{18}. At $T<5$~K the behavior of $1\/\\chi$ deviates from the mentioned expression in a way which can be traced to the analogous feature in the fluctuation region of MnSi at $T>T_c$ (see the round inset in Fig.~\\ref{fig2}).\n\\item The specific heat of (MnFe)Si is well described by the simple power expression $C\\sim T^{0.6}$ in the range 2--20~K, which does not show any features inherent to phase transitions, as takes place in the case of MnSi at pressure close to the quantum phase transition (Figs.~\\ref{fig3},\\ref{fig4}). This expression immediately leads to the divergence of the quantity $C_p\/T\\sim T^{-1+0.6}$, which can be suppressed by magnetic field, restoring the Fermi liquid picture with a finite value of the electronic specific heat term $\\gamma$ (Fig.~\\ref{fig5}).\n\\item The thermal expansion experiment with (MnFe)Si does not reveal any features that can be linked to a phase transition (Fig.~\\ref{fig6}). 
However, the thermal expansion coefficient $\\beta$ shows a low temperature tail, which tends to cross the temperature axis at a finite value, tending to become negative as occurs in MnSi (Figs.~\\ref{fig7},\\ref{fig8}). This specific feature of $\\beta$ causes a sudden low temperature drop of the Gruneisen parameter, which otherwise would diverge at $T\\rightarrow 0$. An application of magnetic field suppresses this kind of behavior of the thermal expansion coefficient, therefore revealing its fluctuation nature (Fig.~\\ref{fig8}).\n\\item The resistivity of (MnFe)Si clearly demonstrates non-Fermi-liquid behavior with no features indicating a phase transition. However, the temperature derivative of the resistivity $d\\rho\/dT$ of (MnFe)Si shows a non-trivial form, which indicates the existence of significant spin fluctuations. That should be related to the low temperature \"tails\" of both the magnetic susceptibility and the thermal expansion coefficient.\n\\end{enumerate}\n\\section{Conclusion}\nFinally, the magnetic susceptibility in the form $(1\/\\chi - 1\/\\chi_0)^{-1}$ and the Gruneisen parameter $\\beta\/C_p$ in (MnFe)Si show diverging behavior, which is interrupted at about 5~K by factors linked somehow with spin fluctuations analogous to the ones preceding the phase transition in MnSi (see Figs.~\\ref{fig2},\\ref{fig7},\\ref{fig8}). The specific heat divided by temperature, $C_p\/T$, of (MnFe)Si clearly demonstrates diverging behavior down to 2~K. The electrical resistivity of (MnFe)Si exhibits non-Fermi-liquid character.\n\n\nGeneral conclusions: there is no thermodynamic evidence in favor of a second order phase transition for the 15\\% Fe substituted MnSi. The trajectory corresponding to the present composition of (MnFe)Si is a critical one, i.e. it approaches the quantum critical point on lowering temperature, which agrees with the conclusions made in Refs.~\\cite{11,12}. 
However, the critical trajectory in fact is a tangent to the phase transition line and therefore some properties inevitably would be influenced by the cloud of spin helical fluctuations bordering the phase transition. This situation produces some sort of a mixed state instead of a pure quantum critical one that probably was seen in the experiments~\\cite{14}. \n\n\n\\section{Acknowledgements}\nWe express our gratitude to I.P. Zibrov and N.F. Borovikov for technical assistance.\nAEP and SMS greatly appreciate financial support of the Russian Foundation for Basic Research (grant No. 18-02-00183), the Russian Science Foundation (grant 17-12-01050). \n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nUsually various types of diffusion processes are classified by analysis of the spread of the distance traveled by a random walker. If the mean square displacement grows like $\\langle [x-x(0)]^2 \\rangle \\propto t^\\delta$ with $\\delta<1$ the motion is called subdiffusive, in contrast to normal ($\\delta=1$) or superdiffusive ($\\delta>1$) situations. For a free Brownian particle moving in one dimension, a stochastic random force entering its equation of motion is assumed to be composed of a large number of independent identical pulses. If they posses a finite variance, then by virtue of the standard Central Limit Theorem (CLT) the distribution of their sum follows the Gaussian statistics. However, as it has been proved by L\\'evy and Khintchine, the CLT can be generalized for independent, identically distributed (i.i.d) variables characterized by non-finite variance or even non-finite mean value. With a L\\'evy forcing characterized by a stability index $\\alpha<2$ independent increments of the particle position sum up yielding $\\langle [x-x(0)]^2 \\rangle \\propto t^{2\/\\alpha}$ \\cite{bouchaud1990}, see below. 
Such\nenhanced, fast superdiffusive motion is observed in various real situations when a test particle is able to perform unusually large jumps \\cite{dubkov2008,shlesinger1995,metzler2004}. L\\'evy flights have been documented to describe motion of fluorescent probes in living polymers, tracer particles in rotating flows and cooled atoms in laser fields. They serve also as a paradigm of efficient searching strategies \\cite{Viswanathan1996,sims2008,reynolds2009} in social and environmental problems with some level of controversy \\cite{Edwards2007}.\n\nIn contrast, transport in porous, fractal-like media or relaxation kinetics in inhomogeneous materials are usually ultraslow, i.e. subdiffusive \\cite{klafter1987,metzler2000,metzler2004}. The most intriguing situations take place however, when both effects -- occurrence of long jumps and long waiting times for the next step -- are incorporated in the same scenario \\cite{zumofen1995}. The approach to this kind of anomalous motion is provided by continuous time random walks (CTRW) which assume that the steps of the walker occur at random times generated by a renewal process.\nIn particular, a mathematical idealization of a free Brownian motion (Wiener process $W(t)$) can be then derived as a limit (in distribution) of i.i.d random (Gaussian) jumps taken at infinitesimally short time intervals of non-random length. Other generalizations are also possible, e.g. $W(t)$ can be defined as a limit of random Gaussian jumps performed at random Poissonian times. The characteristic feature of the Gaussian Wiener process is the continuity of its sample paths. In other words, realizations (trajectories) of the Wiener process are continuous (although nowhere differentiable) \\cite{doob1942}. The process is also self-similar (scale invariant) which means that by rescaling $t'=\\lambda t$ and $W'(t)=\\lambda^{-1\/2}W(\\lambda t)$ another Wiener process with the same properties is obtained. 
Among scale invariant stable processes, the Wiener process is the only one which possesses finite variance \\cite{janicki1994,saichev1997,doob1942,dubkov2005b}. Moreover, since the correlation function of increments $\\Delta W(s)= W(t+s)-W(t)$ depends only on time difference $s$ and increments of non-overlapping times are statistically independent, the formal differentiation of $W(t)$ yields a white, memoryless Gaussian process \\cite{vankampen1981}:\n\\begin{equation}\n\\dot{W}(t)=\\xi(t),\\qquad \\langle \\xi(t)\\xi(t') \\rangle =\\delta(t-t')\n\\end{equation}\ncommonly used as a source of idealized environmental noises within the Langevin description\n\\begin{equation}\ndX(t)=f(X)dt + dW(t).\n\\end{equation}\nHere $f(X)$ stands for the drift term which in the case of a one-dimensional overdamped motion is directly related to the potential $V(X)$, i.e. $f(X)= -dV(X)\/dX$.\n\n\nIn more general terms the CTRW concept may asymptotically lead to non-Markov, space-time fractional noise $\\tilde{\\xi}(t)$, and in effect, to space-time fractional diffusion. 
For example, let us define $\n\\Delta \\tilde{W}(t)\\equiv \\Delta X(t)=\\sum^{N(t)}_{i=1}X_i,$\nwhere the number of summands $N(t)$ is statistically independent from $X_i$ and governed by a renewal process $\\sum^{N(t)}_{i=1}T_i\\leqslant t < \\sum^{N(t)+1}_{i=1}T_i$ with $t>0$.\nLet us assume further that $T_i$, $X_i$ belong to the domain of attraction of stable distributions, $T_i\\sim S_{\\nu,1}$ and $X_i\\sim S_{\\alpha,\\beta}$, whose corresponding characteristic functions $\\phi(k)= \\langle \\exp(ikS_{\\alpha,\\beta}) \\rangle=\\int^{\\infty}_{-\\infty}e^{ikx} l_{\\alpha,\\beta}(x;\\sigma) dx$, with the density $l_{\\alpha,\\beta}(x;\\sigma)$, are given by\n\\begin{equation}\n\\phi(k) = \\exp\\left[ -\\sigma^\\alpha|k|^\\alpha\\left( 1-i\\beta\\mathrm{sign}k\\tan\n\\frac{\\pi\\alpha}{2} \\right) \\right],\n\\label{eq:charakt1}\n\\end{equation}\nfor $\\alpha\\neq 1$\nand\n\\begin{equation}\n\\phi(k) = \\exp\\left[ -\\sigma|k|\\left( 1+i\\beta\\frac{2}{\\pi}\\mathrm{sign}k\\log|k| \\right) \\right].\n\\label{eq:charakt2}\n\\end{equation}\nfor $\\alpha=1$.\nHere the parameter $\\alpha\\in(0,2]$ denotes the stability index, yielding the asymptotic long tail power law for the $x$-distribution, which for $\\alpha<2$ is of the $|x|^{-(1+\\alpha)}$ type. The parameter $\\sigma$ ($\\sigma\\in(0,\\infty)$) characterizes the scale whereas $\\beta$ ($\\beta\\in[-1,1]$) defines an asymmetry (skewness) of the distribution.\n\nNote, that for $0<\\nu<1$, $\\beta=1$, the stable variable $S_{\\nu,1}$ is defined on positive semi-axis. 
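Random variables with the symmetric ($\beta=0$) characteristic function above (for $\sigma=1$) can be generated with the standard Chambers-Mallows-Stuck construction; the following is a minimal numerical sketch (the function name is ours):

```python
import numpy as np

def symmetric_stable(alpha, size, rng):
    """Chambers-Mallows-Stuck sampler for symmetric alpha-stable variables
    with characteristic function exp(-|k|**alpha), i.e. sigma = 1, beta = 0."""
    V = rng.uniform(-np.pi / 2, np.pi / 2, size)  # random phase
    W = rng.exponential(1.0, size)                # unit-mean exponential
    if alpha == 1.0:
        return np.tan(V)                          # Cauchy case
    return (np.sin(alpha * V) / np.cos(V) ** (1 / alpha)
            * (np.cos((1 - alpha) * V) / W) ** ((1 - alpha) / alpha))

rng = np.random.default_rng(0)
x = symmetric_stable(2.0, 200_000, rng)  # alpha = 2: Gaussian with variance 2
```

For $\alpha=2$ the formula reduces to $2\sin V\sqrt{W}$, a zero-mean Gaussian of variance 2, which provides a convenient sanity check of the implementation.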
Within the above formulation the counting process ${N(t)}$ satisfies\n\\begin{eqnarray}\n&& \\lim_{t\\rightarrow \\infty}\\mathrm{Prob}\\left\\{ \\frac{N(t)}{(t\/c)^{\\nu}}<x\\right\\}\\nonumber \\\\\n&& = \\lim_{n\\rightarrow\\infty}\\mathrm{Prob}\\left\\{ \\sum_{i=1}^{[n]}T_i>\\frac{cn^{1\/\\nu}}{x^{1\/\\nu}}\\right\\} \\\\\n&& = \\lim_{n\\rightarrow\\infty}\\mathrm{Prob}\\left\\{ \\frac{1}{cn^{1\/\\nu}}\\sum_{i=1}^{[n]}T_i>\\frac{1}{x^{1\/\\nu}}\\right\\}\n = 1-L_{\\nu,1}(x^{-1\/\\nu}), \\nonumber\n\\end{eqnarray}\nwhere $[(t\/c)^{\\nu}x]$ denotes the integer part of the number $(t\/c)^{\\nu}x$ and $L_{\\alpha,\\beta}(x)$ stands for the stable distribution of random variable $S_{\\alpha,\\beta}$, i.e. $l_{\\alpha,\\beta}(x)=dL_{\\alpha,\\beta}(x)\/dx$.\nMoreover, since\n\\begin{equation}\n\\lim_{n\\rightarrow\\infty}\\mathrm{Prob}\\left\\{ \\frac{1}{c_1n^{1\/\\alpha}}\\sum_{i=1}^{n}X_i<x\\right\\} =L_{\\alpha,0}(x),\n\\end{equation}\nthe limiting diffusion process can be represented as the subordination $X(t)=\\tilde{X}(S_t)$ with the inverse-time subordinator $S_t=\\inf\\left\\{ s>0:U(s)>t \\right \\}$, where $U(s)$ denotes a strictly increasing $\\nu$-stable process whose distribution $L_{\\nu,1}$ yields a Laplace transform $\\langle e^{-kU(s)} \\rangle =e^{-sk^{\\nu}}$. The parent process $\\tilde{X}(s)$ is composed of increments of symmetric $\\alpha$-stable motion described in an operational time $s$\n\\begin{equation}\nd\\tilde{X}(s)=-V'(\\tilde{X}(s))ds+dL_{\\alpha,0}(s),\n\\label{eq:langevineq}\n\\end{equation}\nand in every jump moment the relation $U(S_t)=t$ is fulfilled. The (inverse-time) subordinator $S_t$ is (in general) non-Markovian hence, as it will be shown, the diffusion process $\\tilde{X}(S_t)$ possesses also some degree of memory. 
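The subordination construction just described can be sketched as a direct Monte Carlo recipe for the force-free case $V(x)=0$ (a simplified illustration in the spirit of the cited algorithm; the samplers, grid choices and function names are our own assumptions, not the authors' code):

```python
import numpy as np

def one_sided_stable(nu, size, rng):
    # Totally skewed (beta = 1) nu-stable sampler, 0 < nu < 1, with Laplace
    # transform exp(-k**nu); all samples are strictly positive.
    V = rng.uniform(-np.pi / 2, np.pi / 2, size)
    W = rng.exponential(1.0, size)
    return (np.sin(nu * (V + np.pi / 2)) / np.cos(V) ** (1 / nu)
            * (np.cos(V - nu * (V + np.pi / 2)) / W) ** ((1 - nu) / nu))

def symmetric_stable(alpha, size, rng):
    # Chambers-Mallows-Stuck sampler for symmetric alpha-stable increments.
    V = rng.uniform(-np.pi / 2, np.pi / 2, size)
    W = rng.exponential(1.0, size)
    return (np.sin(alpha * V) / np.cos(V) ** (1 / alpha)
            * (np.cos((1 - alpha) * V) / W) ** ((1 - alpha) / alpha))

def subordinated_path(t_max, dt, ds, nu, alpha, rng):
    """X(t) = X~(S_t) for V(x) = 0: simulate U(s) and X~(s) on an operational
    grid of step ds, then invert U(s) to obtain the subordinator S_t."""
    U_parts, X_parts, U_last, X_last = [], [], 0.0, 0.0
    while U_last <= t_max:  # extend the operational grid until U exceeds t_max
        xi = one_sided_stable(nu, 10_000, rng)
        eta = symmetric_stable(alpha, 10_000, rng)
        U_parts.append(U_last + np.cumsum(ds ** (1 / nu) * xi))
        X_parts.append(X_last + np.cumsum(ds ** (1 / alpha) * eta))
        U_last, X_last = U_parts[-1][-1], X_parts[-1][-1]
    U, X_tilde = np.concatenate(U_parts), np.concatenate(X_parts)
    t_grid = np.arange(1, int(t_max / dt) + 1) * dt
    jumps = np.searchsorted(U, t_grid)  # first grid index with U(s_j) >= t
    return t_grid, X_tilde[jumps]

rng = np.random.default_rng(1)
t, x = subordinated_path(t_max=10.0, dt=0.01, ds=0.01, nu=0.8, alpha=1.6, rng=rng)
```

Flat stretches of $x(t)$ correspond to trapping events of the subordinator, and isolated large steps to the L\'evy flights of the parent process.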
The above setup has been recently proved \\cite{magdziarz2007b,magdziarz2008,magdziarz2007} to give a proper stochastic realization of the random process described otherwise by a fractional diffusion equation:\n\\begin{equation}\n \\frac{\\partial p(x,t)}{\\partial t}={}_{0}D^{1-\\nu}_{t}\\left[ \\frac{\\partial}{\\partial x} V'(x) + \\frac{\\partial^\\alpha}{\\partial |x|^\\alpha} \\right] p(x,t),\n\\label{eq:ffpe}\n\\end{equation}\nwith the initial condition $p(x,0)=\\delta(x)$. In the above equation ${}_{0}D^{1-\\nu}_{t}$ denotes the Riemann-Liouville fractional derivative ${}_{0}D^{1-\\nu}_{t}=\\frac{\\partial}{\\partial t}{}_{0}D^{-\\nu}_{t}$ defined by the relation\n\\begin{equation}\n{}_{0}D^{1-\\nu}_{t}f(x,t)=\\frac{1}{\\Gamma(\\nu)}\\frac{\\partial}{\\partial t}\\int^{t}_0 dt'\\frac{f(x,t')}{(t-t')^{1-\\nu}}\n\\end{equation}\nand $\\frac{\\partial^{\\alpha}}{\\partial |x|^{\\alpha}}$ stands for the Riesz fractional derivative with the Fourier transform ${\\cal{F}}[\\frac{\\partial^{\\alpha}}{\\partial |x|^{\\alpha}} f(x)]=-|k|^{\\alpha}\\hat{f}(k)$.\nEq.~(\\ref{eq:ffpe}) has been otherwise derived from a generalized Master equation \\cite{metzler1999}.\nThe formal solution to Eq.~(\\ref{eq:ffpe}) can be written \\cite{metzler1999} as:\n\\begin{equation}\np(x,t)=E_{\\nu}\\left (\\left[ \\frac{\\partial}{\\partial x} V'(x) + \\frac{\\partial^\\alpha}{\\partial |x|^\\alpha} \\right]t^{\\nu}\\right)p(x,0).\n\\end{equation}\nFor processes with survival function $\\Psi(t)=1-\\int_0^t\\psi(\\tau)d\\tau$ (cf. 
Eq.~(\\ref{eq:master})) given by the Mittag-Leffler function Eq.~(\\ref{eq:mittag}), this solution takes an explicit form\n\\cite{saichev1997,jespersen1999,metzler1999,scalas2006,germano2009}\n\\begin{equation}\np(x,t)\n=\\sum_{n=0}^{\\infty}\\frac{t^{\\nu n}}{n!}E^{(n)}_{\\nu}(-t^{\\nu})w_n(x)\n\\end{equation}\nwhere $E^{(n)}_{\\nu}(z)=\\frac{d^n}{dz^n}E_{\\nu}(z)$ and $w_n(x)\\propto l_{\\alpha,0}(x)$, see \\cite{scalas2004,scalas2006}.\n\n\nIn this paper, instead of investigating properties of an analytical solution to Eq.~(\\ref{eq:ffpe}), we switch to a Monte Carlo method \\cite{magdziarz2007b,magdziarz2008,magdziarz2007,gorenflo2002,meerschaert2004} which allows generating trajectories of the subordinated process $X(t)$ with the parent process $\\tilde{X}(s)$ in the potential free case, i.e. for $V(x)=0$. The assumed algorithm provides means to examine the competition between subdiffusion (controlled by a $\\nu$-parameter) and L\\'evy flights characterized by a stability index $\\alpha$. From the ensemble of simulated trajectories the estimator of the density $p(x,t)$ is reconstructed and statistical qualifiers (like quantiles) are derived and analyzed.\n\nAs mentioned, the studied process is $\\nu\/\\alpha$ self-similar (cf. Eq.~(\\ref{eq:scaling})). We further focus on examination of a special case for which $\\nu\/\\alpha=1\/2$. As an exemplary values of model parameters we choose $\\nu=1,\\alpha=2$ (Markovian Brownian diffusion) and $\\nu=0.8,\\alpha=1.6$ (subordination of non-Markovian sub-diffusion with L\\'evy flights). Additionally we use $\\nu=1,\\alpha=1.6$ and $\\nu=0.8,\\alpha=2$ as Markovian and non-Markovian counterparts of main cases analyzed. Fig.~\\ref{fig:trajectories} compares trajectories for all exemplary values of $\\nu$ and $\\alpha$. Straight horizontal lines (for $\\nu=0.8$) correspond to particle trapping while straight vertical lines (for $\\alpha=1.6$) correspond to L\\'evy flights. 
The straight lines manifest anomalous character of diffusive process.\n\n\nTo further verify correctness of the implemented version of the subordination algorithm \\cite{{magdziarz2007b}}, we have performed extensive numerical tests. In particular, some of the estimated probability densities have been compared with their analytical representations and the perfect match between numerical data and analytical results have been found. Fig.~\\ref{fig:numeric-theory} displays numerical estimators of PDFs and analytical results for $\\nu=1$ with $\\alpha=2$ (Gaussian case, left top panel), $\\nu=1$ with $\\alpha=1$ (Cauchy case, right top panel), $\\nu=1\/2$ with $\\alpha=1$ (left bottom panel) and $\\nu=2\/3$ with $\\alpha=2$ (right bottom panel). For those last two cases, the expressions for $p(x,t)$ has been derived \\cite{saichev1997}, starting from the series representation given by Eq.~(\\ref{eq:series}). For $\\nu=1\/2,\\;\\alpha=1$ the appropriate formula reads\n\\begin{equation}\np(x,t)=-\\frac{1}{2\\pi^{3\/2}\\sqrt{t}}\\exp\\left(\\frac{x^2}{4t}\\right)\\mathrm{Ei}\\left(-\\frac{x^2}{4t}\\right),\n\\label{eq:pa1n12}\n\\end{equation}\nwhile for $\\nu=2\/3,\\;\\alpha=2$ the probability density is\n\\begin{equation}\np(x,t)=\\frac{3^{2\/3}}{2t^{1\/3}}\\mathrm{Ai}\\left[ \\frac{|x|}{(3t)^{1\/3}} \\right].\n \\label{eq:pa2n23}\n\\end{equation}\n$\\mathrm{Ei}(x)$ and $\\mathrm{Ai}(x)$ are the integral exponential function and the Airy function respectively.\nWe have also compared results of simulations and Eq.~(\\ref{eq:closedformula}) for other sets of parameters $\\nu$, $\\alpha$. Also there, the excellent agreement has been detected (results not shown).\n\n\n\n\\begin{figure}[!ht]\n\\begin{center}\n\\includegraphics[angle=0,width=8.0cm]{trajectories.eps}\n\\caption{Sample trajectories for $\\nu=1, \\alpha=2$ (left top panel), $\\nu=1, \\alpha=1.6$ (left bottom panel), $\\nu=0.8, \\alpha=2$ (right top panel) and $\\nu=0.8, \\alpha=1.6$ (right bottom panel). 
Eq.~(\\ref{eq:langevineq}) was numerically approximated by subordination techniques with the time step of integration $\\Delta t=10^{-2}$ and averaged over $N=10^6$ realizations.}\n\\label{fig:trajectories}\n\\end{center}\n\\end{figure}\n\n\n\\begin{figure}[!ht]\n\\begin{center}\n\\includegraphics[angle=0,width=8.0cm]{numeric-theory.eps}\n\\caption{(Color online) PDFs for $\\nu=1$ with $\\alpha=2$ (left top panel), $\\nu=1$ with $\\alpha=1$ (right top panel), $\\nu=1\/2$ with $\\alpha=1$ (left bottom panel) and $\\nu=2\/3$ with $\\alpha=2$ (right bottom panel). Eq.~(\\ref{eq:langevineq}) was numerically approximated by subordination techniques with the time step of integration $\\Delta t=10^{-3}$ and averaged over $N=10^6$ realizations. Solid lines present theoretical densities: Gaussian (left top panel), Cauchy (right top panel) and the $p(x,t)$ given by Eqs.~(\\ref{eq:pa1n12}) (left bottom panel) and (\\ref{eq:pa2n23}) (right bottom panel). Note the semi-log scale in the bottom panels.}\n\\label{fig:numeric-theory}\n\\end{center}\n\\end{figure}\n\n\n\\begin{figure}[!ht]\n\\begin{center}\n\\includegraphics[angle=0,width=8.0cm]{hist-cdf.eps}\n\\caption{(Color online) PDFs (top panel) and $1-\\mathrm{CDF}(x,t)$ (bottom panel) at $t=2$ (left panel), $t=15$ (right panel). Simulation parameters as in Fig.~\\ref{fig:trajectories}. Eq.~(\\ref{eq:langevineq}) was numerically approximated by subordination techniques with the time step of integration $\\Delta t=10^{-3}$ and averaged over $N=10^6$ realizations. Solid lines present theoretical asymptotic $x^{-1.6}$ scaling representative for $\\alpha=1.6$ and $\\nu=1$, i.e. for Markovian L\\'evy flight.}\n\\label{fig:histograms}\n\\end{center}\n\\end{figure}\n\n\n\\begin{figure}[!ht]\n\\begin{center}\n\\includegraphics[angle=0,width=8.0cm]{hist-cdf12.eps}\n\\caption{(Color online) PDFs (top panel) and $1-\\mathrm{CDF}(x,t)$ (bottom panel) at $t=2$ (left panel), $t=15$ (right panel). 
Simulation parameters as in Fig.~\\ref{fig:trajectories}. Eq.~(\\ref{eq:langevineq}) was numerically approximated by subordination techniques with the time step of integration $\\Delta t=10^{-2}$ and averaged over $N=10^6$ realizations. Solid lines present the theoretical asymptotic $x^{-1.2}$ scaling representative for $\\alpha=1.2$ and $\\nu=1$, i.e. for a Markovian L\\'evy flight.}\n\\label{fig:histograms12}\n\\end{center}\n\\end{figure}\n\n\n\n\n\nFigures~\\ref{fig:histograms} and \\ref{fig:histograms12} display time-dependent probability densities $p(x,t)$ and the corresponding cumulative distribution functions ($\\mathrm{CDF}(x,t)=\\int_{-\\infty}^xp(x',t)dx'$) for ``short'' times and for times approximately an order of magnitude ``longer''. The persistent cusp \\cite{sokolov2002} located at $x=0$ is a fingerprint of the initial condition $p(x,0)=\\delta(x)$ and is typically recorded for subdiffusion induced by the subordinator $S_t$ with $\\nu<1$. For the Markov L\\'evy-Wiener process \\cite{dybiec2006,denisov2008}, for which the characteristic exponent $\\nu=1$, the cusp disappears and the PDFs of the process $\\tilde{X}(S_t)$ become smooth at $x=0$. In particular, for the Markovian Gaussian case ($\\nu=1$, $\\alpha=2$) corresponding to a standard Wiener diffusion, the PDF perfectly coincides with the analytical normal density $N(0,\\sqrt{t})$.\n\n\nThe presence of L\\'evy flights is also well visible in the power-law asymptotics of the CDF, see bottom panels of Figs.~\\ref{fig:histograms} and \\ref{fig:histograms12}. Indeed, for $\\alpha<2$, independently of the actual value of the subdiffusion parameter $\\nu$ and at arbitrary time, $p(x,t)\\propto |x|^{-(\\alpha+1)}$ for $x\\rightarrow \\infty$. Furthermore, all PDFs are symmetric with median and modal values located at the origin.\n\n\n\\begin{figure}[!ht]\n\\begin{center}\n\\includegraphics[angle=0,width=8.0cm]{stdev-only-divided.eps}\n\\caption{(Color online) Time dependence of $\\mathrm{var}(x)\/t$. 
Straight lines present $t^{2\\nu\/\\alpha-1}$ theoretical scaling (see Eq.~(\\ref{eq:variancescaling}) and explanation in the text). Simulation parameters as in Fig.~\\ref{fig:trajectories}.}\n\\label{fig:stdev}\n\\end{center}\n\\end{figure}\n\n\n\n\n\\section{Scaling properties of moments}\n\nThe $\\nu\/\\alpha$ self-similar character of the process (cf. Eq.~(\\ref{eq:scaling})) is an outcome of allowed long flights and long breaks between successive steps. In consequence, the whole distribution scales as a function of $x\/t^{\\nu\/\\alpha}$ with the width of the distribution growing superdiffusively for $\\alpha<2\\nu$ and subdiffusively for $\\alpha > 2\\nu$. This $t^{\\nu\/\\alpha}$ scaling is also clearly observable in the behavior of the standard deviation and quantiles $q_p(t)$, defined via the relation $\\mathrm{Prob}\\left \\{X(t)\\leqslant q_p(t)\\right \\}=p$, see Figs.~\\ref{fig:stdev}, \\ref{fig:quantiles}. For random walks subject to superdiffusive, long-ranging trajectories ($\\alpha=1.6$), the asymptotic scaling is observed for sufficiently long times, cf. Fig.~\\ref{fig:stdev}. On the other hand, normal (Gaussian) distribution of jumps superimposed on subdiffusive motion of trapped particles ($\\nu =0.8$) clearly shows rapid convergence to the $\\nu\/\\alpha$ law. Notably, both sets $\\nu=1,\\alpha=2$ and $\\nu=0.8,\\alpha=1.6$ lead to the same scaling $t^{1\/2}$, although in the case $\\nu=0.8,\\alpha=1.6$, the process $X(t)=\\tilde{X}(S_t)$ is non-Markov, in contrast to a standard Gaussian diffusion obtained for $\\nu=1, \\alpha=2$. Thus, the competition between subdiffusion and L\\'evy flights questions standard ways of discrimination between normal (Markov, $\\langle [x-x(0)]^2 \\rangle \\propto t$) and anomalous (generally, non-Markov $\\langle [x-x(0)]^2 \\rangle \\propto t^\\delta$) diffusion processes.\n\nIndeed, for $\\nu=1$, the process $X(t)$ is not only $1\/\\alpha$ self-similar but it is also memoryless (i.e. Markovian). 
In such a case, the asymptotic PDF $p(x,t)$ is $\\alpha$-stable \\cite{janicki1994,weron1995,weron1996,dybiec2006} with the scale parameter $\\sigma$ growing with time like $t^{1\/\\alpha}$, cf. Eq.~(\\ref{eq:charakt1}). This is no longer true for subordination with $\\nu<1$, when the underlying process becomes non-Markovian and the spread of the distribution follows the $t^{\\nu\/\\alpha}$-scaling (cf. Fig.~\\ref{fig:quantiles}, right panels).\n\n\nSome additional care should be taken when discussing the scaling character of the moments of $p(x,t)$ \\cite{bouchaud1990,fogedby1998,jespersen1999}. Clearly, a L\\'evy distribution (with $\\alpha<2$) of jump lengths leads to an infinite second moment (see Eqs.~(\\ref{eq:charakt1}) and (\\ref{eq:charakt2}))\n\\begin{equation}\n\\langle x^2 \\rangle = \\int_{-\\infty}^{\\infty} x^2 l_{\\alpha,\\beta}(x;\\sigma) dx = \\infty,\n\\end{equation}\nirrespective of time $t$.\nMoreover, the mean value $\\langle x \\rangle$ of stable variables is finite only for $\\alpha > 1$ ($\\langle x \\rangle=0$ for the symmetric case under investigation). Those observations seem to contradict the scaling visible in Fig.~\\ref{fig:stdev}, where standard deviations derived from ensembles of trajectories are finite and grow in time according to a power law.\nA nice explanation of this behavior can be given following the argumentation of Bouchaud and Georges \\cite{bouchaud1990}: every finite but otherwise arbitrarily long trajectory of a L\\'evy flight, i.e. the stochastic process underlying Eq.~(\\ref{eq:ffpe}) with $\\nu=1$, is a sum of a finite number of independent stable random variables. Among the $N$ summed stable random variables there is a largest one, say $l_c(N)$. 
The asymptotic form of stable densities,\n\\begin{equation}\nl_{\\alpha,\\beta}(x;\\sigma) \\propto x^{-(1+\\alpha)},\n\\label{eq:asymptotics}\n\\end{equation}\ntogether with the estimate for $l_c(N)$, allows one to estimate how the standard deviation grows with the number of jumps $N$. In fact, the largest value $l_c(N)$ can be estimated from the condition\n\\begin{equation}\nN\\int_{l_c(N)}^{\\infty}l_{\\alpha,\\beta}(x)dx\\approx 1,\n\\label{eq:trials}\n\\end{equation}\nwhich locates most of the ``probability mass'' in events not exceeding the step length $l_c$ (otherwise, the relation states that $l_c(N)$ occurred at most once in $N$ trials, \\cite{bouchaud1990}). Alternatively, $l_c(N)$ can be estimated as the value which maximizes the probability that the largest value drawn in $N$ trials is $l_c$\n\\begin{equation}\nl_{\\alpha,\\beta}(l_c)\\left[ \\int\\limits_0^{l_c} l_{\\alpha,\\beta}(x)dx \\right]^{N-1}=l_{\\alpha,\\beta}(l_c)\\left[ 1 - \\int\\limits_{l_c}^\\infty l_{\\alpha,\\beta}(x)dx \\right]^{N-1}.\n\\label{eq:minimalization}\n\\end{equation}\nBy use of Eqs.~(\\ref{eq:trials}) and (\\ref{eq:asymptotics}), simple integration leads to\n\\begin{equation}\nl_c(N)\\propto N^{1\/\\alpha}.\n\\label{eq:threshold}\n\\end{equation}\nDue to the finite, but otherwise arbitrarily large, number of trials $N$, the effective distribution becomes restricted to a finite domain whose size is controlled by Eq.~(\\ref{eq:threshold}). 
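The $N^{1/\alpha}$ growth of the largest jump can also be checked numerically. A minimal sketch for the Cauchy case ($\alpha=1$, generated with NumPy's standard Cauchy sampler; sample sizes are illustrative), where the median of the largest absolute jump should grow linearly with $N$:

```python
import numpy as np

rng = np.random.default_rng(1)

def median_largest_jump(n_jumps, n_trials=2000):
    """Median over trials of the largest |jump| among n_jumps Cauchy draws."""
    jumps = rng.standard_cauchy((n_trials, n_jumps))
    return np.median(np.abs(jumps).max(axis=1))

# For alpha = 1, l_c(N) ~ N: quadrupling N should roughly quadruple l_c
ratio = median_largest_jump(4000) / median_largest_jump(1000)
print(ratio)  # close to 4
```

The same experiment with $\alpha=2$ (Gaussian draws) would give a much slower, logarithmic-like growth of the largest jump, consistent with the finite second moment.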
Using the estimated threshold, see Eq.~(\\ref{eq:threshold}) and asymptotic form of stable densities, see Eq.~(\\ref{eq:asymptotics}), it is possible to derive an estimate of $\\langle x^2 \\rangle$\n\\begin{equation}\n\\langle x^2 \\rangle \\approx \\int^{l_c}x^2 l_{\\alpha,\\beta}(x) dx\\approx \\left( N^{1\/\\alpha} \\right)^{2-\\alpha}=N^{2\/\\alpha-1}.\n\\end{equation}\nFinally, after $N$ jumps\n\\begin{equation}\n\\langle x^2 \\rangle_N=N\\langle x^2 \\rangle \\propto N^{2\/\\alpha}.\n\\label{eq:variancescaling}\n\\end{equation}\nConsequently, for L\\'evy flights standard deviation grows like a power law with the number of jumps $N$.\nIn our CTRW scenario incorporating competition between long rests and long jumps, the number of jumps $N=N(t)$ grows sublinearly in time, $N\\propto t^{\\nu}$, leading effectively to $\\langle x^2 \\rangle_N\\propto t^{2\\nu\/\\alpha}$ with $0<\\nu<1$ and $0<\\alpha<2$. Since in any experimental realization tails of the L\\'evy distributions are almost inaccessible and therefore effectively truncated, analyzed sample trajectories follow the pattern of the $t^{\\nu\/\\alpha}$ scaling, which is well documented in Fig.~\\ref{fig:stdev}.\n\n\n\\begin{figure}[!ht]\n\\begin{center}\n\\includegraphics[angle=0,width=8.0cm]{quantiles.eps}\n\\caption{(Color online) Quantiles: $q_{0.9}$, $q_{0.8}$, $q_{0.7}$, $q_{0.6}$ (from top to bottom) for $\\nu=1, \\alpha=2$ (left top panel), $\\nu=1, \\alpha=1.6$ (left bottom panel), $\\nu=0.8, \\alpha=2$ (right top panel) and $\\nu=0.8, \\alpha=1.6$ (right bottom panel). The straight line presents theoretical $t^{\\nu\/\\alpha}$ scaling. Simulation parameters as in Fig.~\\ref{fig:trajectories}.}\n\\label{fig:quantiles}\n\\end{center}\n\\end{figure}\n\n\n\n\n\n\\section{Discriminating memory effects}\n\nClearly, by construction, for $\\nu < 1$ the limiting ``anomalous diffusion'' process $X(t)$ is non-Markov. 
This feature is, however, not apparent when the statistical properties of the process are discussed by analyzing the ensemble-averaged mean-square displacement for parameter sets with $\\nu\/\\alpha=1\/2$ (e.g. $\\nu=0.8,\\alpha=1.6$), when, contrary to what might be expected, $\\langle [x-x(0)]^2 \\rangle \\propto t$, similarly to the standard, Markov, Gaussian case.\nThis observation raises a different problem: given an experimental set of data, say a time series representative of a given process, how can one interpret its statistical properties and infer the anomalous (subdiffusive) character of the underlying kinetics? A similar question has been addressed in a series of papers discussing the use of transport coefficients in systems exhibiting weak ergodicity breaking (see \\cite{he2008} and references therein).\n\nTo further elucidate the nature of the simulated data sets for $\\nu\/\\alpha=1\/2$, we have tested formal criteria \\cite{vankampen1981,fulinski1998,dybiec2009b,dybiec-sparre} defining a Markov process. The standard formalism of space- and time-continuous Markov processes requires fulfillment of the Chapman-Kolmogorov equation ($t_1>t_2>t_3$)\n\\begin{equation}\n P(x_1,t_1|x_3,t_3) = \\sum_{x_2}P(x_1,t_1|x_2,t_2)P(x_2,t_2|x_3,t_3),\n\\label{eq:ch-k}\n\\end{equation}\nalong with the constraint that the conditional probabilities of a ``memoryless'' process should not depend on its history. In particular, for a hierarchy of times $t_1>t_2>t_3$, the following relation has to be satisfied\n\\begin{equation}\n P(x_1,t_1|x_2,t_2)=P(x_1,t_1|x_2,t_2,x_3,t_3).\n\\label{eq:cond}\n\\end{equation}\nEqs.~(\\ref{eq:ch-k}) and (\\ref{eq:cond}) have been used to directly verify whether the process under consideration is of the Markovian or non-Markovian type. 
From Eq.~(\\ref{eq:ch-k}), the squared cumulative deviation $Q^2$ between the LHS and RHS of the Chapman-Kolmogorov relation, summed over initial ($x_3$) and final ($x_1$) states, has been calculated \\cite{fulinski1998}\n\\begin{eqnarray}\n Q^2 & = & \\sum\\limits_{x_1,x_3}\\bigg[P(x_1,t_1|x_3,t_3) \\nonumber \\\\\n & & \\qquad\\quad - \\sum\\limits_{x_2}P(x_1,t_1|x_2,t_2)P(x_2,t_2|x_3,t_3)\\bigg]^2.\n\\label{eq:ch-k-averaged}\n\\end{eqnarray}\nThe same procedure can be applied to Eq.~(\\ref{eq:cond}), leading to\n\\begin{eqnarray}\nM^2 & = & \\sum_{x_1,x_2,x_3}\\Big[P(x_1,t_1|x_2,t_2) \\nonumber \\\\\n& & \\qquad\\quad - P(x_1,t_1|x_2,t_2,x_3,t_3)\\Big]^2.\n\\label{eq:cond-averaged}\n\\end{eqnarray}\n\n\nFigure~\\ref{fig:ch-k} presents the evaluation of $Q^2$ (top panel) and $M^2$ (bottom panel) for $t_1=27$ and $t_3=6$ as a function of the intermediate time $t_2=\\{7,8,9,\\dots,25,26\\}$. It is seen that deviations from the Chapman-Kolmogorov identity are well registered for processes with long rests, when subdiffusion wins the competition with L\\'evy flights at the level of sample paths.\nThe tests based on $Q^2$ (see Eq.~(\\ref{eq:ch-k-averaged})) and $M^2$ (see Eq.~(\\ref{eq:cond-averaged})) have a comparative character. The deviations $Q^2$ and $M^2$ are about three orders of magnitude higher for the parameter sets $\\nu=0.8, \\alpha=2.0$ and $\\nu=0.8, \\alpha=1.6$ than the $Q^2$ and $M^2$ values for the Markovian counterparts with $\\nu=1$ and $\\alpha=2$, $\\alpha=1.6$, respectively. The performed analysis clearly demonstrates the non-Markovian character of the limiting diffusion process for $\\nu<1$, and the findings indicate that the scaling of the PDF, $p(x,t)=t^{-1\/2}p(xt^{-1\/2},1)$, or, in consequence, the scaling of the variance\n$\\langle [x-x(0)]^2 \\rangle \\propto t$ and of interquantile distances (see Fig.~\\ref{fig:quantiles}) do not discriminate satisfactorily between ``normal'' and ``anomalous'' diffusive motions \\cite{zumofen1995}. 
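The $Q^2$ statistic of Eq.~(\ref{eq:ch-k-averaged}) is straightforward to estimate from an ensemble of trajectories by binning positions at the three times and comparing conditional transition matrices. A minimal sketch (using a Markovian Gaussian random walk, for which $Q^2$ should be close to zero; bin ranges and sample sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def cond_prob(xa, xb, edges):
    """Estimate the matrix M[i, j] = P(xa in bin i | xb in bin j)."""
    K = len(edges) - 1
    ia = np.clip(np.digitize(xa, edges) - 1, 0, K - 1)
    ib = np.clip(np.digitize(xb, edges) - 1, 0, K - 1)
    counts = np.zeros((K, K))
    np.add.at(counts, (ia, ib), 1.0)
    return counts / counts.sum(axis=0, keepdims=True)

# Ensemble of Markovian Gaussian random walks; positions at t3 < t2 < t1
n_walkers, t1, t2, t3 = 40000, 15, 10, 5
pos = np.cumsum(rng.normal(size=(n_walkers, t1)), axis=1)
x1, x2, x3 = pos[:, t1 - 1], pos[:, t2 - 1], pos[:, t3 - 1]

edges = np.linspace(-7, 7, 9)  # 8 spatial bins, tails clipped to edge bins
M13 = cond_prob(x1, x3, edges)
M12 = cond_prob(x1, x2, edges)
M23 = cond_prob(x2, x3, edges)

# Q^2: deviation from the Chapman-Kolmogorov identity M13 = M12 @ M23
Q2 = np.sum((M13 - M12 @ M23) ** 2)
print(Q2)  # small for a Markov process; only finite-sample noise remains
```

For a subordinated trajectory ensemble with $\nu<1$, the same estimator applied to the series of increments would yield a markedly larger $Q^2$, as in Fig.~\ref{fig:ch-k}.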
In fact, linear in time spread of the second moment does not necessarily characterize normal diffusion process. Instead, it can be an outcome of a special interplay between subdiffusion and L\\'evy flights combined in the subordination $X(t)=\\tilde{X}(S_t)$. The competition between both processes is better displayed in analyzed sample trajectories $X(t)$ where combination of long jumps and long trapping times can be detected, see Fig.~\\ref{fig:trajectories}.\n\n\n\\begin{figure}[!ht]\n\\begin{center}\n\\includegraphics[angle=0,width=8.0cm]{ch-k-q-m.eps}\n\\caption{(Color online) Squared sum of deviations $Q^2$, see Eq.~(\\ref{eq:ch-k-averaged}), (top panel) and $M^2$, see Eq.~(\\ref{eq:cond-averaged}), (bottom panel) for $t_1=27$, $t_3=6$ as a function of the intermediate time $t_2$. 2D histograms were created on the $[-10,10]^2$ domain. 3D histograms were created on the $[-10,10]^3$ domain. Due to non-stationary character of the studied process the analysis is performed for the series of increments $x(t+1)-x(t)$. Simulation parameters as in Fig.~\\ref{fig:trajectories}.}\n\\label{fig:ch-k}\n\\end{center}\n\\end{figure}\n\n\\section{Conclusions}\n\nSummarizing, by implementing Monte Carlo simulations which allow visualization of stochastic trajectories subjected to subdiffusion (via time-subordination) and superdiffusive L\\'evy flights (via extremely long jumps in space), we have demonstrated that the standard measure used to\ndiscriminate between anomalous and normal behavior cannot be applied\nstraightforwardly. The mean square displacement alone, as derived from the (finite) set of time-series data does not provide full\ninformation about the underlying system dynamics. 
In order to get proper insight into the character of the motion, it is necessary to analyze individual trajectories.\nSubordination, which describes a transformation between the physical time and an operational time of the system \\cite{sokolov2000,magdziarz2007b}, is responsible for the unusual statistical properties of waiting times between subsequent steps of the motion. In turn,\nL\\'evy flights are registered as instantaneous long jumps performed by a walker. The super- or sub-linear character of the motion in physical time is dictated by a coarse-graining procedure, in which a fractional time derivative with index $\\nu$ combines with a fractional spatial derivative with index $\\alpha$.\nSuch situations may occur in motion on random potential surfaces, where the presence of vacancies and other defects introduces both spatial and temporal disorder \\cite{sancho2004}.\nWe believe that the issue of the interplay of super- and sub-diffusion, with a crossover resulting in a pseudo-normal paradoxical diffusion, may be of special interest in the context of e.g. facilitated target location of proteins on folding heteropolymers \\cite{belik2007} or in the analysis of single particle tracking experiments \\cite{lubelski2008,he2008,metzler2009,bronstein2009}, where the hidden subdiffusion process can be masked and appear as normal diffusion.\n\n\n\n\\begin{acknowledgments}\nThis project has been supported by the Marie Curie TOK COCOS grant (6th EU Framework\nProgram under Contract No. 
MTKD-CT-2004-517186) and (BD) by the Foundation for Polish Science.\nThe authors acknowledge many fruitful discussions with Andrzej Fuli\\'nski, Marcin Magdziarz and Aleksander Weron.\n\\end{acknowledgments}\n\n\n\n\n\n\\section{Introduction}\n\nThe European Space Agency's {\\it Gaia}\\ mission (\\citealt{gaiaMission}) Early Data Release 3 (EDR3) provides photometry in the $G$, $G_{BP}$ and $G_{RP}$ bands, as well as precise astrometry and parallax measurements for over 1.5 billion sources (\\citealt{gaiaEDR3,gaiaEDR3Astro,gaiaEDR3Phot,gaiaEDR3Plx}). Although the absolute number of sources is comparable to {\\it Gaia}\\ Data Release 2 (DR2; \\citealt{gaiaDR2,gaiaDR2Astro,gaiaDR2Cat,gaiaDR2Phot,gaiaDR2Plx}), the astrometric and photometric precision has drastically improved thanks to a 3-fold increase in the celestial reference sources and a longer data collection baseline (34 vs. 22 months), as well as an updated and improved processing pipeline (\\citealt{gaiaEDR3Astro}). This quantity and quality are defining a new standard for Galactic studies. The more recent {\\it Gaia}\\ Data Release 3 (DR3) augments EDR3 by providing additional information on some detected targets, such as variability indicators, radial velocities and binary star information, as well as low-resolution spectra for >200 million sources (e.g. \\citealt{gaiaDR3cat,gaiaDR3var,gaiaDR3rv,gaiaDR3gold,deangeli22}). \n\nThe INT\/WFC Photometric H$\\alpha$ Survey of the Northern Galactic Plane (IPHAS; \\citealt{iphas}) is the first comprehensive digital survey of the northern Galactic disc ($|b|<5^\\circ$), covering a Galactic longitude range of $29^\\circ < l < 215^\\circ$. The IPHAS observations are obtained using the Wide Field Camera (WFC) at the prime focus of the 2.5m Isaac Newton Telescope (INT) on La Palma, Spain. IPHAS images are taken through three filters: a narrow-band H$\\alpha$, and two broad-band Sloan $r$ and $i$ filters. 
The UV-Excess Survey of the northern Galactic Plane (UVEX; \\citealt{uvex}) has covered the same footprint as IPHAS using the same WFC on the INT in the two broad-band Sloan $r$ and $g$ filters as well as a Sloan $u$-like $U_{RGO}$ filter. Exposures are set to reach an $r$-band depth of $\\approx 21$ in both surveys. Pipeline data reduction for both surveys is handled by the Cambridge Astronomical Survey Unit (CASU). Further details on the data acquisition and pipeline reduction can be found in \\cite{iphas}, \\cite{uvex} and \\cite{iphasEDR}. A defining feature of both these surveys is the quasi-contemporaneous observation of each filter set, so as to recover reliable colour information for sources without the contributing effects of variability on timescales longer than $\\approx 10$ minutes. This same characteristic is also shared by the {\\it Gaia}\\ mission. Recently, \\cite{igaps} produced the IGAPS merged catalogue of IPHAS and UVEX observations, while \\cite{greimel21} provides the IGAPS images. In addition to merging the sources observed by both IPHAS and UVEX, a global photometric calibration has been performed on IGAPS, resulting in photometry that is internally reproducible to 0.02 magnitudes (up to magnitudes of $\\approx 18-19$, depending on the band) for all except the $U_{RGO}$ band. Furthermore, this 174-column catalogue provides astrometry for both the IPHAS and UVEX observations as well as the observation epoch, which allows a precise cross-match with {\\it Gaia}\\ to be performed given the proper motion information provided. The astrometric solution of IGAPS is based on {\\it Gaia}\\ DR2. Although no per-source errors are available, the astrometric solution yields typical astrometric errors in the $r$ band of 38 mas.\n\nThe United Kingdom Infrared Deep Sky Survey (UKIDSS; \\citealt{ukidss}) is composed of five public surveys of varying depth and area coverage, which began in May 2005. 
UKIDSS uses the Wide Field Camera (WFCAM, see \\citealt{casali07}) on the United Kingdom Infrared Telescope (UKIRT). All data are reduced and calibrated at the Cambridge Astronomical Survey Unit (CASU) using a dedicated software pipeline and are then transferred to the WFCAM Science Archive (WSA; \\citealt{hambly08}) in Edinburgh. There, the data are ingested and detections in the different passbands are merged. The UKIDSS Galactic Plane Survey (GPS; \\citealt{ukidssGPS}) is one of the five UKIDSS public surveys. UKIDSS GPS covers most of the northern Galactic plane in the $J$, $H$ and $K$ filters for objects with declination less than 60 degrees, and contains in excess of a billion sources. In this work we use UKIDSS\/GPS Data Release 11 (DR11). Similarly to IGAPS, no per-source errors are available, but the astrometric solution of UKIDSS based on {\\it Gaia}\\ DR2 yields a typical astrometric error of 90 mas. \n \n\\cite{gIPHAS} described and provided a sub-arcsecond cross-match of {\\it Gaia}\\ DR2 against IPHAS. The resulting value-added catalogue provided additional precise photometry for close to 8 million sources in the northern Galactic plane in the $r$, $i$, and H$\\alpha$\\ bands. This paper describes a sub-arcsecond cross-match between {\\it Gaia}\/DR3, IGAPS and UKIDSS GPS. Similarly to \\cite{gIPHAS}, this cross-match of northern Galactic plane surveys (XGAPS) takes into account the different epochs of observations of all surveys and the {\\it Gaia}\\ astrometric information (including proper motions) to achieve sub-arcsecond precision when cross-matching the various surveys. XGAPS provides photometry in up to 11 photometric bands ($U$, $g$, $r$, $i$, H$\\alpha$, $J$, $H$, $K$, $BP$, $RP$ and $G$) for 33,987,180 sources. XGAPS also provides a quality flag indicating the reliability of the {\\it Gaia}\\ astrometric solution for each source, which has been inferred through the use of Random Forests (\\citealt{RF}). 
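The epoch-aware matching underlying XGAPS amounts to propagating the {\it Gaia}\ positions by their proper motions to each survey's observation epoch before applying a sub-arcsecond matching radius. A minimal sketch of the propagation step (epochs and values are illustrative; small-angle approximation, no parallactic motion):

```python
import numpy as np

def propagate(ra_deg, dec_deg, pmra_masyr, pmdec_masyr, dt_yr):
    """Shift positions by proper motion over dt_yr years.

    pmra_masyr is mu_alpha* = mu_alpha * cos(dec), as tabulated by Gaia.
    """
    dec_rad = np.radians(dec_deg)
    ra_new = ra_deg + pmra_masyr * dt_yr / (3.6e6 * np.cos(dec_rad))
    dec_new = dec_deg + pmdec_masyr * dt_yr / 3.6e6
    return ra_new, dec_new

# A source moving at 100 mas/yr in RA*, propagated from J2016.0 back to an
# (illustrative) IGAPS epoch of J2005.5, shifts by about 1.05 arcsec
ra2, dec2 = propagate(100.0, 10.0, 100.0, 0.0, 2005.5 - 2016.0)
```

Without this step, a fast-moving source would fall outside a 1-arcsec matching radius after a decade-long epoch difference.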
Section \\ref{sec:crossMatch} describes our cross-matching procedure, including the preliminary selection cuts applied to all datasets. Section \\ref{sec:RF} describes the machine learning model (using Random Forests) to train and select sources from the XGAPS catalogue which can be considered to have reliable {\\it Gaia}\\ astrometry, while Section \\ref{sec:catalogue} describes a potential application for selecting blue-excess sources for spectroscopic follow-up. Finally conclusions are drawn in Section \\ref{sec:conclusion}, and the catalogue format is summarised in the appendix.\n\n\n\\section{Cross-matching Gaia with IGAPS and UKIDSS} \\label{sec:crossMatch}\n\nThe aim of XGAPS is to cross-match all sources detected in IGAPS (either IPHAS or UVEX) to {\\it Gaia}\\ DR3, and as a second step cross-match those sources to UKIDSS. The cross match is restricted to sources with a significant {\\it Gaia}\\ DR3 parallax detection and IGAPS sources identified as being stellar-like. \n\n\\subsection{Selection cuts} \\label{sec:selection}\nBefore the cross-match some selection cuts are applied to the master catalogues.\n\nFrom {\\it Gaia}\\ DR3 only objects satisfying the following are selected:\n\\begin{itemize}\n\\item Are within an area slightly larger than the IGAPS footprint ($203);\n\\item Have a signal-to-noise parallax measurement above 3 (\\texttt{parallax\\_over\\_error}>3).\n\\end{itemize}\n\\noindent This results in 41,572,231 sources. For reference, the removal of the two signal-to-noise limits would result in 240,725,104 {\\it Gaia}\\ DR3 sources within the IGAPS footprint. The parallax signal-to-noise limit ensures distances up to 1--1.5 kpc are well covered.\n\nBecause IGAPS is already a merge between IPHAS and UVEX, the selection cuts are applied to the individual surveys. 
For IPHAS detections, sources are retained only if the $r$, $i$ and H$\\alpha$\\ detections are not flagged as either saturated, vignetted, contaminated by bad pixels, flagged as noise-like, or truncated at the edge of the CCD. For UVEX the same cut as IPHAS is applied to the $U$, $g$, $r$ detections with the additional constraint that detections are not located in the degraded area of the $g$-band filter. Of the 295.4 million sources in IGAPS, 212,378,160 are retained through the IPHAS selection cuts and 221,495,812 are retained through the UVEX ones.\n\nFinally, the UKIDSS Galactic Plane Survey Point Source Catalogue contains 235,696,744 sources within $203. A more detailed analysis of this effect has already been discussed in \\cite{gIPHAS}. What was found is an upper limit of 0.1\\% on the fraction of mismatches associated with their selection cut of \\texttt{phot\\_g\\_mean\\_flux\\_over\\_error}>5 in some of the most crowded regions of the Galactic plane, mostly affecting the faintest sources. For XGAPS it is expected that the number of mismatches is even lower than the 0.1\\% mismatch fraction quoted in \\cite{gIPHAS}, as the selection cut now includes many more {\\it Gaia}\\ sources. The next section also introduces an additional quality flag that can be used to further clean erroneous matches and\/or targets that have spurious astrometric solutions.\n\n\n\\section{Cleaning XGAPS with Random Forests} \\label{sec:RF}\nThe left panel of Fig. \\ref{fig:cmd1} shows a colour-magnitude diagram (CMD) using the {\\it Gaia}-based colours for all cross-matched targets as described in Section \\ref{sec:crossMatch}. The distances used to convert apparent to absolute magnitudes have been inferred via $M = m+5+5 \\log_{10}(\\varpi\/1000)$, where $M$ and $m$ are the absolute and apparent magnitudes, respectively, and $\\varpi$ the parallax in milliarcseconds provided by {\\it Gaia}\\ DR3. 
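The conversion quoted above can be written as a small helper (no zero-point or extinction correction applied, matching the text):

```python
import numpy as np

def absolute_magnitude(m, parallax_mas):
    """M = m + 5 + 5*log10(parallax/1000), with the parallax in mas."""
    return m + 5.0 + 5.0 * np.log10(np.asarray(parallax_mas) / 1000.0)

# A G = 15 source with a 10 mas parallax (i.e. at 100 pc) has M_G = 10
print(absolute_magnitude(15.0, 10.0))  # -> 10.0
```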
\\cite{gaiaEDR3Plx} provide a correction to the $\\varpi$ measurements to account for the zero-point bias. This correction is not applied here, and neither is extinction, but users of XGAPS can do so through the available code provided by \\cite{gaiaEDR3Plx}. \n\nAs can be seen from the left panel of Fig. \\ref{fig:cmd1}, both CMDs appear to be ``polluted'' by spurious sources. This is particularly evident in the regions between the main sequence and white dwarf tracks, where a low population density of sources is expected. Similar contamination can also be observed in different colour combination CMD plots. Spurious astrometric solutions from {\\it Gaia}\\ can arise for a number of reasons. One of the major causes that produce such spurious parallax measurements is related to the inclusion of outliers in the measured positions. In {\\it Gaia}\\ DR3 this is more likely to occur in regions of high source density (as is the case in the Galactic plane) or for close binary systems (either real or due to sight-line effects) which have not been accounted for. The dependence of spurious parallax measurements on other measured quantities in {\\it Gaia}\\ DR3 is not straightforward to disentangle, and CMDs cannot be easily cleaned through the use of empirical cuts on the available {\\it Gaia}\\ DR3 parameters.\n\n\\begin{figure*}\n \\includegraphics[width=1\\columnwidth]{Figures\/CMDGaia.jpeg}\n \n \\includegraphics[width=1\\columnwidth]{Figures\/CMDneg.jpeg}\n \\caption{Left panel: {\\it Gaia}-based absolute CMD of all cross-matched sources between {\\it Gaia}\\ and IGAPS. Right panel: The recovered negative parallax ``mirror sample'' from the {\\it Gaia}\/IGAPS cross-match. In producing this panel, the absolute value of the {\\it Gaia}\\ parallax measurements is used.}\n \\label{fig:cmd1}\n\\end{figure*}\n\nSeveral methods attempting to identify spurious astrometric sources have been explored in the literature. 
\\cite{gIPHAS} defined both a ``completeness'' and ``purity'' parameter that can be used to clean the resulting CMDs from the previous cross-match between {\\it Gaia}\\ DR2 and IPHAS. More recently, \\cite{GCNS} employed a machine learning classifier based on Random Forests to identify spurious astrometric measurements in the 100 pc sample of {\\it Gaia}\\ EDR3. In both cases, a negative parallax sample had been used to infer common properties of spurious astrometric sources. This was then generalised and applied to the positive parallax sources to identify spurious measurements. \n\nA classifier will only be as good at generalising a given set of properties as the provided training set allows. Here a Random Forest classifier is also used to clean XGAPS from the contamination of bad astrometric measurements. To explore this further, the same cross-matching method as described in Section \\ref{sec:crossMatch} is performed using as master catalogue all {\\it Gaia}\\ sources satisfying the same quality cuts as described in Section \\ref{sec:selection}, but inverting the parallax signal-to-noise selection criterion to be less than $-3$ (\\texttt{parallax\\_over\\_error}<-3). This produces a total of 1,034,661 sources after the cross-matching with the IGAPS catalogue has been performed. The right panel of Fig. \\ref{fig:cmd1} shows the {\\it Gaia}\\ CMD of the recovered negative parallax ``mirror sample'' after being passed through the same cross-matching pipeline as all other XGAPS sources. To obtain ``absolute magnitudes'' for these sources, the absolute value of the negative parallax has been used. It is clear from comparing both panels of Fig. \\ref{fig:cmd1} that the suspected spurious parallax sources and the negative parallax sources occupy similar regions of the CMDs. 
This in turn suggests that the same systematic measurement challenges are affecting both these samples, even though there is no clear parameter combination cut from the {\\it Gaia}\\ astrometric measurements that can be used to exclude spurious sources.\n\n\nIn a similar way to what has been adopted in \\cite{GCNS} to remove spurious sources, a Random Forest (\\citealt{RF}) is trained through the use of XGAPS data to classify all $\\approx 34$ million entries into two categories (good vs. bad astrometric solutions) purely based on astrometric quantity and quality indicators provided by {\\it Gaia}\\ DR3 and augmented by astrometric indicators resulting from XGAPS. To achieve this, a reliable training set of both categories is required. Because XGAPS sources are found in the crowded Galactic plane, and because these sources may suffer from specific systematic errors, a training\/testing set is constructed from XGAPS data alone. The good astrometric solution set is compiled by selecting all sources in XGAPS which have a parallax signal-to-noise measurement above 5. This results in 19,242,307 good astrometric solution sources used for training. Although some bad parallax measurement sources may be expected to have a parallax signal-to-noise measurement above 5, it is reasonable to assume that only a small fraction of sources will fall into this category. The bad astrometric training sources are compiled through the use of the ``negative parallax mirror sample'', for which the CMD is shown in the right panel of Fig. \\ref{fig:cmd1}. This is obtained by selecting sources with a parallax signal-to-noise measurement below $-5$, resulting in 250,069 sources. In total, the set of good and bad astrometric solution targets is 19,492,876. The testing set is created by randomly selecting 20\\% of the least populated class (50,113 from the bad astrometric sources), and randomly selecting the same number of sources from the other class. 
All remaining sources are used as the training set.\n\nThe classification model consists of a trained Random Forest (\\citealt{RF}) using a total of 26 predictor variables listed in Table \\ref{tab:RF}, all of which are purely astrometry based. Each decision tree in the Random Forest is grown using 5 randomly chosen predictor variables, and each tree is grown to its full length. Surrogate splits are used when growing decision trees to take into account missing variables in some of the training samples. Each tree is grown by resampling targets, with replacement, in the training sample while keeping the total number of training samples per tree the same as the total number of targets used for training. Because the number of good astrometric training sources is much larger than the number of bad astrometric sources, each tree is grown using all bad astrometric sources (200,456 after having removed the testing set), and randomly under-sampling the same number of good training sources. This ensures that there is a balance between the two classes for each grown tree. These resampling techniques ensure that each tree is grown using a different subset of the training set and related predictors, which in turn prevents the Random Forest from overtraining (\\citealt{RF}). In total, the Random Forest consists of 1001 decision trees. Final source classifications are assigned to the class voted for by the largest number of trees. The vote ratio between the two classes is also retained in the XGAPS catalogue. We have further attempted to establish the relative predictor importance for each of the 26 predictors used. This is achieved through the same classifier methodology described. However, for computational time purposes, the predictor importance values are obtained by growing each tree using the same good training sources (200,456 randomly selected from the entire population) rather than resampling these for each individual tree. 
The resulting predictor importance using the out-of-bag samples during training is included in Table \\ref{tab:RF}.\n\n\\begin{table}\n\\begin{center}\n\\begin{tabular}{l l}\nPredictor Name & Predictor Importance\\\\\n\\hline\\hline\n pmra & 11.68 \\\\\n pmdec & 9.07 \\\\\n bMJD\\_separation\\_UVEX & 4.30 \\\\\n bMJD\\_separation\\_IPHAS & 4.26 \\\\\n ipd\\_frac\\_multi\\_peak & 4.06 \\\\\n ipd\\_gof\\_harmonic\\_amplitude & 3.61 \\\\\n astrometric\\_n\\_good\\_obs\\_al & 2.67 \\\\\n astrometric\\_n\\_obs\\_al & 2.65 \\\\\n scan\\_direction\\_mean\\_k1 & 2.53 \\\\\n parallax\\_error & 2.42 \\\\\n scan\\_direction\\_mean\\_k2 & 2.24 \\\\\n scan\\_direction\\_mean\\_k3 & 2.22 \\\\\n ruwe & 1.96 \\\\\n astrometric\\_excess\\_noise\\_sig & 1.84 \\\\\n astrometric\\_gof\\_al & 1.81 \\\\\n astrometric\\_excess\\_noise & 1.74 \\\\\n pmdec\\_error & 1.70 \\\\\n redChi2 & 1.64 \\\\\n scan\\_direction\\_strength\\_k1 & 1.57 \\\\\n astrometric\\_sigma5d\\_max & 1.50 \\\\\n ipd\\_frac\\_odd\\_win & 1.49 \\\\\n scan\\_direction\\_mean\\_k4 & 1.49 \\\\\n astrometric\\_n\\_bad\\_obs\\_al & 1.42 \\\\\n astrometric\\_chi2\\_al & 1.36 \\\\\n pmra\\_error & 1.33 \\\\\n astrometric\\_n\\_obs\\_ac & 0.27 \\\\ \n \\hline\n\\end{tabular}\n\\caption{Out-of-bag predictor importance of all predictors used for classification by the Random Forest classifier ordered according to importance. The predictor names used in the table correspond to column names used in the XGAPS catalogue. A short description of each can be found in the Appendix.\n}\n\\end{center}\n\\label{tab:RF} \n\\end{table}\n\nThe Random Forest is robust against variations in the number of trees or candidate predictors, as altering these did not produce substantially different results as evaluated on the test set. 
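The per-tree balanced resampling and voting scheme described above can be sketched in a few lines. The following is a minimal stand-in, not the actual training code: a trivial threshold classifier on a single invented quality statistic replaces each decision tree, and all names and numbers are illustrative.

```python
import random

def train_stump(sample):
    """Toy stand-in for one decision tree: threshold at the midpoint of the
    class means of this tree's bootstrap sample. sample = [(x, label), ...],
    where x is a single invented quality statistic (small = good astrometry)."""
    good = [x for x, y in sample if y == 1]
    bad = [x for x, y in sample if y == 0]
    thr = 0.5 * (sum(good) / len(good) + sum(bad) / len(bad))
    return lambda x: 1 if x < thr else 0

def grow_balanced_forest(good, bad, n_trees=101):
    """Each tree sees all bad sources plus an equal-sized random draw of the
    good sources, then a bootstrap (resampling with replacement) of that
    balanced set -- mirroring the per-tree under-sampling described in the text."""
    forest = []
    for _ in range(n_trees):
        balanced = ([(x, 1) for x in random.sample(good, len(bad))]
                    + [(x, 0) for x in bad])
        boot = [random.choice(balanced) for _ in range(len(balanced))]
        forest.append(train_stump(boot))
    return forest

def classify(forest, x):
    """Majority vote over the trees; the vote ratio (cf. voteRF) is kept
    alongside the binary flag (cf. flagRF)."""
    ratio = sum(tree(x) for tree in forest) / len(forest)
    return int(ratio > 0.5), ratio

# toy data: good sources cluster at low values, bad sources at high values
random.seed(0)
good = [random.gauss(1.0, 0.3) for _ in range(2000)]
bad = [random.gauss(3.0, 0.3) for _ in range(200)]
forest = grow_balanced_forest(good, bad)
flag, vote = classify(forest, 0.9)   # a clearly "good" statistic
```

With 1001 real decision trees grown on the 26 astrometric predictors the structure is the same; only the per-tree model changes.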
It is important to note that although the bad training sources can be considered to be the result of bona fide spurious astrometric measurements, some systems in the good training set are expected to have been mislabeled by the training set selection criteria. Thus, when inspecting the Random Forest classification accuracy on the testing set, only misclassifications of the bad astrometric sources should be considered; these provide a lower limit on the true accuracy of the classifier. The final result on the testing set is summarised by the confusion matrix shown in Figure \\ref{fig:conf}. Overall, 1984 sources with a parallax signal-to-noise measurement above 5 are classified as bad sources. More importantly, 503 out of 50,113 bad astrometric sources (1.0\\%) have been misclassified, and these provide the lower limit on the accuracy of the classifier.\n\n\\begin{figure}\n\t\\includegraphics[width=1\\columnwidth]{Figures\/confusionChart.jpeg}\n \\caption{Confusion matrix between the positive and negative parallax samples computed on the test set. A class value of 0 represents ``bad'' astrometric sources, while a value of 1 represents ``good'' astrometric sources. Details of the definition of the test set and training of the Random Forest can be found in Section \\ref{sec:RF}.}\n \\label{fig:conf}\n\\end{figure}\n\n\\begin{figure}\n\t\\includegraphics[width=1\\columnwidth]{Figures\/voteRF.jpeg}\n \\caption{Distribution of associated votes as computed by the trained Random Forest described in Section \\ref{sec:RF} for all XGAPS targets. The \\texttt{voteRF} value is included in the XGAPS catalogue for each source. Objects with \\texttt{voteRF}>0.5 have a \\texttt{flagRF} value of 1 in XGAPS rather than 0.} \n \\label{fig:voteRF}\n\\end{figure}\n\nHaving trained the classification model, all $\\approx 34$ million sources in XGAPS are parsed through the Random Forest classifier and receive an associated vote (see Fig. 
\\ref{fig:voteRF}) from each tree and an associated flag with the predicted classification. Sources are classified as good astrometric sources if more than 50\\% of the individual trees in the Random Forest classifier have classified them as such, and are assigned \\texttt{flagRF}=1 in the catalogue. Otherwise, the source flags are set to \\texttt{flagRF}=0. This results in 30,927,929 (91\\%) targets with \\texttt{flagRF}=1 and 3,059,251 (9\\%) with \\texttt{flagRF}=0.\n\n\n\\begin{figure*}\n\t\\includegraphics[width=1\\columnwidth]{Figures\/IPHASseparation.jpeg}\n \\includegraphics[width=1\\columnwidth]{Figures\/UVEXseparation.jpeg}\n \\caption{Distributions of the separations between all matched sources in XGAPS for the {\\it Gaia}\/IPHAS targets (left) and {\\it Gaia}\/UVEX targets (right) are shown with black solid lines. Both panels also show the decomposition of the distribution employing the Random Forest classifier to select ``good'' astrometric targets (\\texttt{flagRF}=1, blue solid lines) and ``bad'' astrometric targets (\\texttt{flagRF}=0, red solid lines).}\n \\label{fig:separation}\n\\end{figure*}\n\nFig. \\ref{fig:separation} shows the angular separation between the individual IPHAS and UVEX matches to the epoch-corrected {\\it Gaia}\\ DR3 sources. The bulk of the population has an angular separation of about 0.02 arcseconds, but an additional component of sources is evident at larger separations. Although these sources are correctly matched between {\\it Gaia}\\ and both IPHAS and UVEX, the larger angular separation may in fact be attributed to poor astrometry in {\\it Gaia}\\ DR3. Also shown in the same figure are the distributions of the good vs. bad astrometric sources as classified using the trained Random Forest. It is clear that the classifier has been able to separate those sources with relatively large angular separation when compared to the bulk of the population. \n\nThis split between the good vs. 
bad astrometric sources can also be validated when considering other astrometric predictor variables used by the classifier. Fig. \\ref{fig:predVars} shows the distributions of three additional predictor variables (\\texttt{parallax\\_error}, \\texttt{pmra\\_error}, \\texttt{ruwe}) as well as the parallax signal-to-noise measurement (\\texttt{parallax\\_over\\_error}) which was used to select the training set. In all cases the Random Forest classifier appears to have separated the apparent bimodal distributions observed in the predictor variables. \n\n\\begin{figure*}\n\t\\includegraphics[width=1\\columnwidth]{Figures\/plxOverError.jpeg}\n \\includegraphics[width=1\\columnwidth]{Figures\/plxError.jpeg}\n \\newline\n \\includegraphics[width=1\\columnwidth]{Figures\/pmraError.jpeg}\n \\includegraphics[width=1\\columnwidth]{Figures\/ruwe.jpeg}\n \\caption{Distributions of a subset of astrometric parameters taken from {\\it Gaia}\\ DR3 included in XGAPS (black solid lines). All but the \\texttt{parallax\\_over\\_error} values have been used for training the Random Forest classifier. All panels also show the decomposition of the distribution employing the Random Forest classifier to select ``good'' astrometric targets (\\texttt{flagRF}=1, blue solid lines) and ``bad'' astrometric targets (\\texttt{flagRF}=0, red solid lines).}\n \\label{fig:predVars}\n\\end{figure*}\n\nInspecting the CMDs of the predicted good vs. bad astrometric targets provides additional insight into the Random Forest performance. Fig. \\ref{fig:gaiaRF} displays the {\\it Gaia}\\ CMD of the predicted good vs. bad astrometric targets. Overall, the Random Forest classifies a total of 30,944,717 good astrometric targets ($\\approx$91\\%) and 3,042,463 bad astrometric targets ($\\approx$9\\%). It is clear that most of the bad astrometric sources are correctly removed, as they populate the same region in the CMD as the negative parallax sample used for training (see right panel of Fig. \\ref{fig:cmd1}). 
Although the split has been efficiently achieved, it is also the case that some good astrometric sources have been flagged as bad ones by the classifier, and vice versa. This is particularly evident when inspecting the CMD region for sources classified as having good astrometry (left panel in Fig. \\ref{fig:gaiaRF}), which still appears to be populated with a relatively large number of sources on the blue side of the main sequence. Furthermore, some sources flagged as having bad astrometry by the classifier appear to populate the WD track, and it is possible that some of these have been mislabeled (right panel in Fig. \\ref{fig:gaiaRF}). Overall, however, the bulk of the bad astrometric sources appears to have been removed correctly. \n \n\\begin{figure*}\n\t\\includegraphics[width=1\\columnwidth]{Figures\/GoodGaiaRF.jpeg}\n \\includegraphics[width=1\\columnwidth]{Figures\/BadGaiaRF.jpeg}\n \\caption{{\\it Gaia}-based absolute CMDs for all targets in the XGAPS catalogue. The panel on the left shows all targets with \\texttt{flagRF}=1, while targets with \\texttt{flagRF}=0 are displayed in the right panel. Although all sources displayed have a positive parallax measurement, the ``bad'' astrometric sample in XGAPS as defined by the Random Forest occupies a similar region in CMD space as the negative parallax ``mirror sample'' used for training and shown in the right panel of Fig. \\ref{fig:cmd1}.}\n \\label{fig:gaiaRF}\n\\end{figure*}\n\n\n\\section{Potential applications of the XGAPS catalogue} \\label{sec:catalogue}\n\nCombining broad- and narrow-band photometric measurements for $\\approx 34$ million Galactic plane sources with astrometric information, as well as multi-epoch photometry for many of these, the applications for the XGAPS catalogue can be wide-reaching, especially for the identification of specific source types and related population studies. 
Examples based on the {\\it Gaia}\/IPHAS catalogue (\\citealt{gIPHAS}) include the discovery of new binary systems (\\citealt{carrell22}), the selection and identification of Herbig Ae\/Be systems (\\citealt{vioque20}), planetary nebulae (\\citealt{sabin22}), as well as candidate X-ray emitting binaries (\\citealt{gandhi21}). Further applications may also be found in constructing reliable training sets for classification, as demonstrated by \\cite{gaiaSynth}, who trained a Random Forest for classification of targets based on synthetic photometry.\n\nImportantly, XGAPS provides information that can be efficiently used in selecting targets for large multi-object spectroscopic surveys such as the WHT Enhanced Area Velocity Explorer (WEAVE: \\citealt{weave}) and the 4-metre Multi-Object Spectrograph Telescope (4MOST: \\citealt{4most}). An example of this is the selection of white dwarf candidates in the Galactic plane to be observed with 4MOST as part of the community-selected White Dwarf Binary Survey (PIs: Toloza and Rebassa-Mansergas). This includes a total of 28,102 targets that satisfy the following criteria in XGAPS:\n\\begin{itemize}\n\\item Have a {\\it Gaia}\\ declination $<5$ degrees\n\\item Have the \\texttt{flagRF} set to 1\n\\item Lie within the region $M_{U} > 3.20 \\times (U-g) + 6.42$ and $(U-g)<1.71$\n\\end{itemize}\n\\noindent The resulting CMD using the UVEX colours is shown in the left panel of Fig. \\ref{fig:wdbSelection}. The declination cut was employed to ensure targets are observable from Paranal Observatory, from which the 4MOST survey will be carried out. The \\texttt{flagRF} cut is employed to minimise spurious cross-matches and bad astrometric targets. 
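The catalogue cuts listed above reduce to a boolean mask over catalogue columns. Below is a minimal numpy sketch; the array names (`dec`, `flag_rf`, `m_u`, `u_minus_g`) are illustrative placeholders rather than actual XGAPS column names, and the IPHAS H$\alpha$-excess cut is included for completeness.

```python
import numpy as np

def select_wd_candidates(dec, flag_rf, m_u, u_minus_g):
    """Boolean mask for the 4MOST white dwarf candidate cuts quoted in the
    text: declination < 5 deg, flagRF == 1, and the blue-excess region
    M_U > 3.20*(U-g) + 6.42 with (U-g) < 1.71."""
    blue_excess = (m_u > 3.20 * u_minus_g + 6.42) & (u_minus_g < 1.71)
    return (dec < 5.0) & (flag_rf == 1) & blue_excess

def halpha_excess(r_minus_ha, r_minus_i):
    """IPHAS H-alpha excess cut: (r - Halpha) > 0.56*(r - i) + 0.27."""
    return r_minus_ha > 0.56 * r_minus_i + 0.27

# toy usage: two sources with (U-g) = 0.5, so the blue-excess cut is
# M_U > 8.02; only the first source satisfies it
dec = np.array([2.0, 2.0])
flag_rf = np.array([1, 1])
m_u = np.array([12.0, 8.0])
u_minus_g = np.array([0.5, 0.5])
mask = select_wd_candidates(dec, flag_rf, m_u, u_minus_g)   # [True, False]
```

On the full catalogue the same mask applied to the real columns reproduces the 28,102-target selection quoted above.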
The final colour-magnitude cuts are somewhat ad hoc at this stage (especially as the $U_{RGO}$ band has not yet been photometrically calibrated across the full survey), but attempt to select all blue-excess sources relative to the main sequence as defined in the UVEX passbands (the bluest set of the XGAPS catalogue). Although preliminary and in need of refinement using well-validated and spectroscopically confirmed targets, these colour cuts provide a first attempt to select white dwarf candidates in the plane for the 4MOST survey. A further cut using the IPHAS passbands of $(r-H\\alpha)>0.56 \\times (r-i) + 0.27$ to select H$\\alpha$-excess sources yields 241 likely accreting white dwarf systems (right panel of Fig. \\ref{fig:wdbSelection}). Again, these colour cuts are preliminary and serve only to demonstrate the potential applications of the XGAPS catalogue. For the selection of H$\\alpha$-excess sources specifically, a more refined method based on the local population as defined in absolute colour-magnitude diagrams has been shown to produce more complete samples of objects, although this comes at the expense of purity (e.g. \\citealt{fratta21}).\n\n\\begin{figure*}\n\t\\includegraphics[width=1\\columnwidth]{Figures\/blueExcess_WDBS.jpeg}\n \\includegraphics[width=1\\columnwidth]{Figures\/haExcess_WDBS.jpeg}\n \\caption{{\\it Gaia}\/UVEX CMD (left panel) and corresponding IPHAS-based colour-colour diagram (right panel) demonstrating simple selection cuts to select candidate white dwarf systems to be observed by 4MOST (\\citealt{4most}). Gray points in both panels show all targets in XGAPS with declination smaller than 5 degrees (observable from Paranal) and \\texttt{flagRF}=1. Blue points mark targets selected as blue-excess sources, likely related to white dwarf emission contributing to the UVEX photometry. 
The red points mark blue-excess candidates that also display evidence of H$\\alpha$-excess emission as determined from the IPHAS photometry. The exact cuts are described in Section \\ref{sec:catalogue}.}\n \\label{fig:wdbSelection}\n\\end{figure*}\n\n\n\\section{Conclusion} \\label{sec:conclusion}\n\nWe have presented the XGAPS catalogue, which provides a sub-arcsecond cross-match between {\\it Gaia}\\ DR3, IPHAS, UVEX and UKIDSS. It contains photometric and astrometric measurements for $\\approx 34$ million sources within the northern Galactic plane. In total, XGAPS contains two-epoch photometry in the $r$-band, as well as single-epoch (not simultaneous) photometry in up to 10 broad-band filters ($U_{RGO}$, $g$, $r$, $i$, $J$, $H$, $K$, $G$, $G_{BP}$ and $G_{RP}$) and one narrow-band H$\\alpha$ filter. XGAPS additionally provides a confidence metric inferred using Random Forests, aimed at assessing the reliability of the {\\it Gaia}\\, astrometric parameters for any given source in the catalogue. XGAPS is provided as a catalogue with 111 columns. A description of the columns is presented in Table \\ref{tab:cols}. The full XGAPS catalogue can be obtained through VizieR. As XGAPS only covers the northern Galactic plane, future extensions are planned to include the southern Galactic plane and bulge using data from the VST Photometric H$\\alpha$\\ Survey of the Southern Galactic Plane and Bulge (VPHAS+: \\citealt{vphas}).\n\n\n\\section*{Acknowledgements}\nCross-matching between catalogues has been performed using STILTS, and diagrams were produced using the astronomy-oriented data handling and visualisation software TOPCAT (\\citealt{topcat}). This work has made use of the Astronomy \\& Astrophysics package for Matlab (\\citealt{matlabOfek}). This research has also made extensive use of the SIMBAD database, operated at CDS, Strasbourg. 
This work is based on observations made with the Isaac Newton Telescope operated on the island of La Palma by the Isaac Newton Group of Telescopes in the Spanish Observatorio del Roque de los Muchachos of the Instituto de Astrof\\'{\\i}sica de Canarias. This work is also based in part on data obtained as part of the UKIRT Infrared Deep Sky Survey. This work has made additional use of data from the European Space Agency (ESA) mission {\\it Gaia} (\\url{https:\/\/www.cosmos.esa.int\/gaia}), processed by the {\\it Gaia} Data Processing and Analysis Consortium (DPAC, \\url{https:\/\/www.cosmos.esa.int\/web\/gaia\/dpac\/consortium}). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the {\\it Gaia} Multilateral Agreement.\n\nMM's work was funded by the Spanish MICIN\/AEI\/10.13039\/501100011033 and by ``ERDF A way of making Europe'' by the ``European Union'' through grant RTI2018-095076-B-C21, and the Institute of Cosmos Sciences University of Barcelona (ICCUB, Unidad de Excelencia ``Mar\\'{\\i}a de Maeztu'') through grant CEX2019-000918-M. 
ARM acknowledges support from Grant RYC-2016-20254 funded by MCIN\/AEI\/10.13039\/501100011033 and by ESF Investing in your future, and from MINECO under the PID2020-117252GB-I00 grant.\n\n\\section*{Data Availability}\nThe XGAPS catalogue produced in this paper is available on VizieR.\n\n\n\n\\bibliographystyle{mnras}\n\n\\section{Introduction}{\\label{Introduction}}\nSingle electron transistors (SETs)\n\\cite{IEEE.Transactions.on.Magnetics.1987.1142-5,\nPhysRevLett.59.109} are extremely sensitive electrometers, with demonstrated charge sensitivities of the order of $\\mathrm{\\mu e\/\\sqrt{Hz}}$ \\cite{ApplPhysLett.79.4031, JApplPhys.100.114321}. Due to their high charge sensitivities they have found a large number of applications in research; for example, SETs are used to detect nano-mechanical oscillators \\cite{Science.304.74}, to count electrons \\cite{Nature.423.422, Nature.434.361} and to read out qubits \\cite{PhysRevB.69.140503, PhysRevLett.90.027002,\nNature.421.823}.\n\nThe fundamental limit on the sensitivity of the SET is set by shot noise generated when electrons tunnel across the tunnel barriers \\cite{Korotkov, Nature.406.1039}. Shot noise was observed in a two-junction structure (without a gate) by Birk \\emph{et al.} \\cite{PhysRevLett.75.1610}.\n\nHowever, there are two other types of noise which limit the charge sensitivity in experiments. At high frequencies, above $1\\,\\mathrm{MHz}$, the sensitivity is limited by the amplifier noise. At low frequencies the sensitivity is limited by $1\/f$ noise, which is due to background charge fluctuators near the SET. 
A collection of several fluctuators with different frequencies leads to a $1\/f$ spectrum. In several cases it has been observed that there is a background of $1\/f$ noise plus a single or a few more strongly coupled fluctuators, resulting in random telegraph noise, which in the frequency spectrum leads to Lorentzians superimposed on the $1\/f$ background \\cite{ApplPhysLett.61.237, ApplSupercondIEEETrans.5.3085}.\n\nUnderstanding the nature of the $1\/f$ noise is also very important for solid-state qubits, since $1\/f$ noise strongly limits the decoherence time of these qubits. It has been suggested by Ithier \\emph{et al.} that the charge noise has a cut-off at a frequency of the order of $0.5\\,\\mathrm{MHz}$ \\cite{PhysRevB.72.134519}.\n\nEven though there have been many efforts \\cite{ApplPhysLett.61.237,\nApplSupercondIEEETrans.5.3085, IEEETransInstrMeas.46.303,\nJApplPhys.84.3212, JApplPhys.78.2830, Bouchiat, PhysRevB.56.7675,\nJApplPhys.86.2132, JApplPhys.83.310, JLowTempPhys.123.103} to reveal the physical origin of the background charge fluctuators, their nature is still unknown. It is not even clear where these fluctuators are located: they can sit either in the tunnel barrier or outside the barrier but in close proximity to the junction.\n\nThe role of the substrate has been examined in several experiments\n\\cite{Bouchiat, JApplPhys.84.3212, ApplPhysLett.91.033107}. However, those experiments did not show a strong dependence of the noise on the substrate material. 
The barrier dielectric has been proposed as the location of the charge traps by several groups \\cite{ApplSupercondIEEETrans.5.3085, JApplPhys.84.3212,\nJApplPhys.78.2830, PhysRevB.53.13682}.\n\nSeveral groups have shown that the low frequency noise at the output of the SET varies with the current gain ($\\partial I\/\\partial Q_g$) of the SET and that the maximum noise is found at the bias point with maximum gain \\cite{JApplPhys.78.2830, JApplPhys.86.2132, JApplPhys.83.310}. This indicates that the noise sources act at the input of the device, \\emph{i.e.} as an external fluctuating charge. A detailed comparison of the noise to the gain was done by Starmark \\emph{et al.} \\cite{JApplPhys.86.2132}. All the above-mentioned experiments were performed with conventional SETs by measuring current or voltage noise at relatively low frequencies, \\emph{i.e.} below a few kHz.\n\nIn this work we have measured low frequency noise in a single electron transistor which has demonstrated a very high charge sensitivity\n\\cite{JApplPhys.100.114321}, by using the Radio Frequency Single Electron Transistor (rf-SET) technique \\cite{Science280.1238, Wahlgren}. This allowed us to measure the low frequency noise of the reflected voltage from the rf-SET in the range from a few hertz up to tens of MHz, and due to the high charge sensitivity we were not limited by the amplifier noise. We find two Lorentzians superimposed on a $1\/f$ spectrum, and that the noise in the range $50\\,\\mathrm{kHz}$--$1\\,\\mathrm{MHz}$ is quite different for positive and for negative gain of the transistor. By analyzing the bias and gate dependence of the noise we argue that the noise in this frequency range is dominated by electron tunneling to an aluminum grain, which acts as a single electron box capacitively connected to the SET island.\n\nThe paper is organized as follows. 
In section \\ref{LFN_model} we describe a model for the low frequency noise, which allows us to separate contributions from the different noise sources. In section \\ref{Experimental_detail} we describe the experimental details of our measurements. Section \\ref{Experimental_results} is the main part of this paper and contains the experimental results. Finally, in section \\ref{Discussion} we describe our model for the nature of the low frequency noise in our SETs.\n\n\\section{Low-Frequency Noise Model}{\\label{LFN_model}}\nWe start by analyzing the different contributions to the measured noise. What we actually measure is the reflected voltage from the tank circuit in which the SET is embedded. The rf-SET tank circuit is a series $LC$ circuit with inductance $L$ and capacitance $C$. The power spectral density of the reflected voltage can be decomposed into the following terms originating from charge noise,\nresistance noise, shot noise and amplifier noise \\cite{Korotkov_unpubl, JApplPhys.86.2132}:\n\\begin{equation}{\\label{Noise_decomp}}\nS_{|v_{r}|}=\\left(\\frac{\\partial |v_r|}{\\partial\nQ_g}\\right)^2S_{Q_g}(f)+ \\left(\\frac{\\partial |v_r|}{\\partial\nR_1}\\right)^2S_{R_{1}}(f)+\\left(\\frac{\\partial |v_r|}{\\partial\nR_2}\\right)^2S_{R_{2}}(f)+S_{Shot}+S_{Ampl.},\n\\end{equation}\nwhere $Q_g$ is the charge at the input gate and $R_{1,2}$ are the tunnel resistances of the two junctions. Here we have neglected higher-order terms and possible correlation terms between the charge noise and the resistance noise. In the case when the charge fluctuator is located in the tunnel barrier, the correlation term may not be negligible. 
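Since the decomposition above is linear in the source spectral densities, noise measured at operating points with different sensitivities can in principle be inverted for the individual contributions. The following is a toy numerical sketch only: all sensitivity and noise values are invented, and the shot-noise term is omitted for brevity.

```python
import numpy as np

# Invented squared sensitivities [(d|vr|/dQg)^2, (d|vr|/dR)^2] at three
# operating points, loosely in the spirit of a blockade point (no gain),
# a maximum-gain point and a current-maximum point; real values would come
# from orthodox-theory calculations of the reflected voltage.
G = np.array([
    [0.0, 0.0],   # blockade: insensitive to both charge and resistance noise
    [4.0, 1.0],   # maximum gain: sensitive to both
    [0.1, 0.5],   # current maximum: mostly resistance sensitivity
])
S_ampl = 1e-18                          # amplifier noise floor (invented)
S_sources = np.array([2e-19, 5e-19])    # "true" S_Qg and S_R, unknown in practice

# forward model: the decomposition without the shot-noise term
S_meas = G @ S_sources + S_ampl

# invert: subtract the amplifier floor (calibrated at the blockade point),
# then solve the overdetermined linear system by least squares
S_fit, *_ = np.linalg.lstsq(G, S_meas - S_ampl, rcond=None)
```

With noiseless synthetic data the least-squares solve recovers the two source spectral densities exactly; with real spectra the residual quantifies how well the linear decomposition describes the data.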
By measuring the noise at different bias points with different gains (\\emph{i.e.} different $\\partial |v_r|\/\\partial Q_g$) it is possible to extract information on whether the noise is associated with charge or resistance fluctuators.\n\nWe have designed the matching circuit for the rf-SET to work in the over-coupled regime, in order to have a monotonic dependence of the reflection coefficient as a function of the SET differential conductance. This regime corresponds to the matching condition when the internal quality factor of the rf-SET tank circuit\n($Q_{int}=\\sqrt{L\/C}\/R$) is larger than the external quality factor \n($Q_{ext}=Z_0\/\\sqrt{L\/C}$), where $Z_0=50\\,\\Omega$ is the characteristic impedance of the transmission line, and $R$ is the SET differential resistance.\n\nWe have theoretically analyzed the reflected voltage from the rf-SET as a function of the bias and gate voltages applied to the SET. The reflected voltage characteristic of the rf-SET can be calculated within the orthodox theory \\cite{Averin} using a master equation approach. From this theory we can also calculate the derivatives in eq. (\\ref{Noise_decomp}) of the reflected voltage with respect to variations in gate charge and in the resistances of the SET tunnel junctions.\n\n\\begin{figure}\n\\centering\\epsfig{figure=Figure1.eps,width=7cm}\n\\caption{\\label{Orthodox}\\small(color online) Calculated derivatives from equation (\\ref{Noise_decomp}) as a function of bias voltage $V$ and gate charge $Q_g=C_gV_g$. The derivative of the reflected voltage from the rf-SET with respect to (a) the charge fluctuations $(\\partial |v_r|\/\\partial Q_g)^2$; (b) the resistance fluctuations in the first junction $(\\partial |v_r|\/\\partial R_1)^2$. 
Sensitivity to the resistance fluctuations in the second junction has a mirror symmetry, along the SET open state $(Q_g=0.5\\,e)$, with the sensitivity to the resistance fluctuations in the first barrier.}\n\\end{figure}\n\nFigure \\ref{Orthodox}(a) shows the sensitivity of the rf-SET to charge fluctuations as a function of the bias and gate voltages. The charge sensitivity is a symmetric function around the SET open state ($Q_g=0.5\\,e$), and has maxima close to the onset of the open state.\n\nThe sensitivity of the SET to resistance fluctuations in the first tunnel barrier is shown in figure \\ref{Orthodox}(b). The sensitivity to resistance fluctuations in the second barrier (not shown) is identical to figure \\ref{Orthodox}(b), except that it is mirrored along the SET open state $(Q_g=0.5\\,e)$.\n\nBy operating at different bias and gate voltages we can choose operating points where the noise contributions from the different derivatives dominate, making it possible to distinguish charge noise from resistance noise.\n\n\\section{Experimental details}\n{\\label{Experimental_detail}}\nThe samples were fabricated on oxidized silicon substrates using electron beam lithography and a standard double-angle evaporation technique. The asymptotic resistance of the measured single electron transistor was $25\\,\\mathrm{k\\Omega}$. The charging energy $E_C=e^2\/2C_\\Sigma\\simeq 18\\pm 2\\,\\mathrm{K}$ was extracted from measurements of the SET stability diagram of the reflected signal with frequency $f=350\\,\\mathrm{MHz}$ versus bias and gate voltages. From the asymmetry of the SET stability diagram we could\nalso deduce that the asymmetry in the junction capacitances was $30\\%$.\n\nThe SET was embedded in a resonant circuit and operated in the radio frequency mode \\cite{Science280.1238, Wahlgren}. The bandwidth of the setup was approximately $10\\,\\mathrm{MHz}$, limited by the quality factor of the resonance circuit. 
The radio frequency signal was launched toward the resonance circuit and the reflected signal was amplified by two cold amplifiers, and then downconverted using homodyne mixing. The output signal from the mixer, containing the noise information, was then measured by a spectrum analyzer. The sample was attached to the mixing chamber of a dilution refrigerator which was cooled to a temperature of approximately $25\\,\\mathrm{mK}$. All measurements were performed in the normal (nonsuperconducting) state at a magnetic field of $1.5\\,\\mathrm{T}$.\n\nWe have performed the noise measurements for different gate voltages and different bias voltages. Due to experimental problems we have performed measurements mostly for negative biases. The sample shows a very high charge sensitivity of the order of $1\\,\\mathrm{\\mu e\/\\sqrt{Hz}}$; for a detailed description of the sensitivity with respect to different parameters, see ref. \\cite{JApplPhys.100.114321}. A small sinusoidal charge signal\nof $7.3\\cdot 10^{-4}\\,\\mathrm{e_{rms}}$ with a frequency of $133\\,\\mathrm{Hz}$ was applied to the gate, which allowed us to calibrate the charge sensitivity referred to the input of the SET.\n\n\\begin{figure}\n\\centering\\epsfig{figure=Figure2.eps,width=7cm}\n\\caption{\\label{Points}\\small(color online) (a) The SET stability diagram ($\\partial I\/\\partial Q_g$) measured at a temperature of $25\\,\\mathrm{mK}$. The horizontal black line corresponds to the bias voltage at which the transfer function $I(Q_g)$ (see panel (b)) was obtained. The points \\textbf{A} ($\\circ$ for negative and $+$ for positive bias) correspond to $I=0$ inside the Coulomb blockade region. The points \\textbf{B} and \\textbf{D} ($\\ast\/\\bullet$) correspond to\nmaximum positive\/negative gain $\\partial I\/\\partial Q_g$, that is, where $\\partial^2I\/\\partial Q_g^2=0$. 
The measurement points \\textbf{C}, close to the current maximum, correspond to $\\partial I\/\\partial Q_g=0$, marked with ($+$) at negative bias and with ($\\circ$) at positive bias. (b) SET transfer function $I(Q_g)$ measured for the bias voltage $V=-0.4\\,\\mathrm{mV}$. In the stability diagram this measurement is shown as a black line.}\n\\end{figure}\n\nBefore each measured spectrum, we have employed a charge-locked loop\n\\cite{ApplPhysLett.81.4859}, with the help of a lock-in amplifier. The first ($\\partial I\/\\partial Q_g$) or the second ($\\partial^2 I\/\\partial Q_g^2$) derivative of the current was used as an error signal for stabilization of the gate point, to compensate for the slow drift at the current maximum points (\\textbf{C} in fig. \\ref{Points}(b)) or at the maximum gain points (\\textbf{B} and \\textbf{D} in fig. \\ref{Points}(b)), respectively. Each noise spectrum\nis, however, measured with the feedback loop turned off.\n\n\\section{Experimental Results}\n{\\label{Experimental_results}}\nIn order to separate the contributions from different noise sources we have performed measurements at specific points. The measurement points are shown in fig. \\ref{Points}. At point \\textbf{A} ($Q_g\\approx 0\\,e$) there is almost complete Coulomb blockade with zero current and zero gain ($\\partial |v_r|\/\\partial Q_g=0$). Here the derivatives of the reflected voltage with respect to resistance fluctuations in the tunnel barriers, $\\partial\n|v_r|\/\\partial R_{1,2}$, are also zero (see fig. \\ref{Orthodox}(b)).\nAt this point we see only amplifier noise --- curves for different bias voltages show the same noise level. 
These measurements thus serve to calibrate the noise of the amplifier.\n\nThe measurements at points \\textbf{B} and \\textbf{D} correspond to the requirement of maximum current gain ($\\max|\\partial I\/\\partial Q_g|$) and therefore also high $|\\partial |v_r|\/\\partial Q_g|$ (the diagonals in figure \\ref{Orthodox}(a)). At these points there are contributions from all the noise sources (see fig. \\ref{Orthodox}), but since the absolute gain and the current are very similar at the points \\textbf{B} and \\textbf{D}, these measurements can be compared.\n\nWe have also measured noise at the points \\textbf{C} ($Q_g\\approx 0.5\\,e$) close to the maximum of the current transfer function ($|I(Q_g)|$). At this point the gain of the reflected signal ($\\partial |v_r|\/\\partial Q_g$) is quite low, but the shot noise could be high. The derivatives of the reflected voltage with respect to resistance fluctuations in the tunnel barriers ($\\partial |v_r|\/\\partial R_{1,2}$) are small but not equal to zero (see fig. \\ref{Orthodox}(b)).\n\\begin{figure}\n\\centering\\epsfig{figure=Figure3.eps,width=7cm}\n\\caption{\\label{Noise}\\small (color online) (a) Power spectral density (PSD) of the reflected voltage measured at the points \\textbf{A} (black curve); \\textbf{B} (blue curve); \\textbf{C} (green curve); \\textbf{D} (red curve). (b) Normalized noise at the points \\textbf{B} (blue curve); \\textbf{D} (red curve). The black continuous lines are fits to the measured PSD with a sum of two Lorentzian functions.}\n\\end{figure}\n\nThe noise of the reflected voltage at a fixed bias point and for the different gate points is shown in fig. \\ref{Noise}(a). We start by subtracting the amplifier noise and then we compare the noise measured at the different points. 
Comparing the noise measured at points \\textbf{B} and \\textbf{D}, where the current gain has a maximum, with the noise measured at point \\textbf{C}, close to the maximum of the transfer function, we see that the noise at point\n\\textbf{C} is substantially lower, even though the current is higher. From this we conclude that the difference is not due to shot noise.\n\nWhen comparing the noise spectra measured at points \\textbf{B} and \\textbf{D}, it should be noted that both the currents and the gains are very similar at these points. Both spectra \\textbf{B} and \\textbf{D} have a $1\/f$ dependence at low frequencies, with two Lorentzian shoulders at higher frequencies. At frequencies above $30\\,\\mathrm{kHz}$ the noise at point \\textbf{D} drops well below the noise at point \\textbf{B}. At $300\\,\\mathrm{kHz}$ the difference is a factor of 5.\n\nWe have fitted the noise spectra at points \\textbf{B} and \\textbf{D} to a sum of two Lorentzian functions. The results of these fits are shown in figure~\\ref{Noise}(b). From these fits we can extract the cut-off frequency and the level of each of the two Lorentzians.\n\nThe low-frequency Lorentzian has a cut-off frequency of the order of $1\\,\\mathrm{kHz}$. The cut-off frequency is the same for both slopes ($\\partial I\/\\partial Q_g \\gtrless 0$), and it does not show any bias or gate dependence within the accuracy of our measurements.\n\nIn contrast, the high-frequency Lorentzian has a cut-off frequency $f_{co}>50\\rm\\,kHz$, with a strong dependence on the bias and gate voltages. 
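The two-Lorentzian fit described above can be sketched numerically. The following is a minimal illustration with synthetic data, not the measured spectra; all levels, cut-offs and noise amplitudes are assumed values chosen only to mimic the $\sim 1\,\mathrm{kHz}$ and $\sim 50\,\mathrm{kHz}$ shoulders.

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(f, s0, fco):
    # Single Lorentzian PSD: flat level s0 below the cut-off frequency fco
    return s0 / (1.0 + (f / fco) ** 2)

def two_lorentzians(f, s1, f1, s2, f2):
    # Sum of two Lorentzians, the model used for the spectra at points B and D
    return lorentzian(f, s1, f1) + lorentzian(f, s2, f2)

# Synthetic spectrum with cut-offs near 1 kHz and 50 kHz (illustrative values)
f = np.logspace(2, 6, 400)  # 100 Hz .. 1 MHz
rng = np.random.default_rng(0)
psd = two_lorentzians(f, 1e-3, 1e3, 1e-4, 5e4) * (1 + 0.05 * rng.standard_normal(f.size))

popt, _ = curve_fit(two_lorentzians, f, psd,
                    p0=[1e-3, 1e3, 1e-4, 1e5], bounds=(0, np.inf))
s1, f1, s2, f2 = popt
print(f"low-frequency cut-off:  {f1:.3g} Hz")
print(f"high-frequency cut-off: {f2:.3g} Hz")
```

In practice the fit would be run on the measured PSD after subtraction of the amplifier noise floor calibrated at point A.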
The bias dependence of the Lorentzian cut-off frequency for the positive ($\\partial I\/\\partial Q_g>0$) and negative ($\\partial I\/\\partial Q_g <0$) slopes is shown in figure~\\ref{Gate_Depend}(a).\n\n\\begin{figure}[b]\n\\centering\\epsfig{figure=Figure4.eps,width=12cm}\n\\caption{\\label{Gate_Depend}\\small (color online) (a) The bias dependence of the cut-off frequency for the high-frequency Lorentzian. (b) The gate dependence of the cut-off frequency for the high-frequency Lorentzian. (c) The bias dependence of the cut-off frequency for the low-frequency Lorentzian. (d) The gate dependence of the cut-off frequency for the low-frequency Lorentzian. Blue points correspond to the negative slope $\\partial I\/\\partial Q_g<0$ (see fig.~\\ref{Points}). Red points correspond to the positive slope $\\partial I\/\\partial Q_g>0$ (see fig.~\\ref{Points}). The error bars are extracted from the Lorentzian fits. The continuous lines (blue, red) show the bias and gate dependencies of the Lorentzian cut-off frequency calculated in the described model.}\n\\end{figure}\n\nFor the negative slope $(\\partial I\/\\partial Q_g<0)$ (blue points in figure~\\ref{Gate_Depend}(a)), the cut-off frequency remains practically constant ($f_{co}\\simeq 50\\rm\\, kHz$) for negative biases. Close to zero bias voltage the Lorentzian cut-off frequency switches to a higher frequency ($f_{co}\\simeq 150\\rm\\, kHz$). For the positive slope $(\\partial I\/\\partial Q_g>0)$ (red curve in figure~\\ref{Gate_Depend}(a)), the situation is different. For negative bias voltage the Lorentzian cut-off frequency grows continuously from $f_{co}\\simeq 60\\rm\\, kHz$ and reaches a maximum ($f_{co}\\simeq 120\\rm\\, kHz$) close to zero bias voltage. 
For positive bias voltage it rapidly decreases from the maximum back to the initial value $f_{co}\\simeq 60\\rm\\, kHz$.\nFigure~\\ref{Gate_Depend}(b) shows the gate dependence of the Lorentzian cut-off frequencies for both slopes $(\\partial I\/\\partial Q_g\\gtrless0)$. As is clearly seen in this figure, the gate dependences for the positive and negative slopes are different. The gate dependence for the positive slope $(\\partial I\/\\partial Q_g>0)$ (red curve) has a peak, with a small negative offset in the gate charge from the SET open state. On the negative slope $(\\partial I\/\\partial Q_g<0)$ (blue curve), the Lorentzian cut-off frequency behaves as a step-like function.\n\nBy integrating the Lorentzians in the noise spectra (see figure~\\ref{Noise}(b)) we can calculate the total variation of the induced charge on the SET island for both fluctuators. The variation of the induced charge for the low-frequency fluctuator is of the order of $\\Delta q_{lf}\\approx 6.6\\,\\mathrm{me_{rms}}$. The same estimate for the high-frequency fluctuator gives $\\Delta q_{hf}\\approx 30\\,\\mathrm{me_{rms}}$.\n\n\\section{Discussion}{\\label{Discussion}}\nIn this section we analyze the bias and gate dependence of the cut-off frequency of the high-frequency Lorentzian ($f_{co}\\simeq 50-150\\,\\mathrm{kHz}$). In particular, we try to explain why the cut-off frequency is different for different biasing conditions.\n\nIn our analysis we have assumed that there are in principle two possible sources of the low-frequency noise: resistance fluctuators or charge fluctuators. The physical nature of the resistance fluctuators is not well understood, but they can be related to charge fluctuations. For instance, a charge oscillating in the tunnel barrier may modify both the transparency and the induced island charge. 
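The charge estimate obtained by integrating each Lorentzian can be sketched as follows. The closed form $\int_0^\infty S_0/(1+(f/f_c)^2)\,df = S_0 f_c \pi/2$ gives the total power, whose square root is the rms induced charge. The PSD levels below are illustrative placeholders, chosen only to reproduce the order of magnitude quoted in the text; the real charge-referred levels would come from dividing the voltage PSD by the squared gain $\partial|v_r|/\partial Q_g$.

```python
import numpy as np

def lorentzian_power(s0, fc):
    # Total power of a Lorentzian PSD S(f) = s0 / (1 + (f/fc)^2),
    # integrated over 0 < f < infinity: s0 * fc * pi / 2
    return s0 * fc * np.pi / 2.0

# Illustrative charge-referred PSD levels (units e^2/Hz) -- assumed values,
# not the measured ones
s0_lf, fc_lf = 2.77e-8, 1e3   # low-frequency fluctuator
s0_hf, fc_hf = 1.15e-8, 5e4   # high-frequency fluctuator

dq_lf = np.sqrt(lorentzian_power(s0_lf, fc_lf))  # rms induced charge, in e
dq_hf = np.sqrt(lorentzian_power(s0_hf, fc_hf))
print(f"low-frequency fluctuator:  {1e3 * dq_lf:.1f} me_rms")
print(f"high-frequency fluctuator: {1e3 * dq_hf:.1f} me_rms")
```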
\n\nThe noise from a resistance fluctuator in one of the tunnel barriers would have an asymmetry along the onset of the SET open state, as was shown in section \\ref{LFN_model} (see fig.~\\ref{Orthodox}(b)). In order to explain the bias dependence of the cut-off frequency (see fig.~\\ref{Gate_Depend}(a)) in terms of resistance fluctuators we must assume that there is an individual resistance fluctuator located in each of the SET tunnel barriers. Furthermore, these fluctuators must have the same tunneling rates. Even with this strong assumption, however, it is impossible to explain the sharp drop in the experimentally measured gate dependence of the cut-off frequency (see fig.~\\ref{Gate_Depend}(b)).\n\nThus, in order to explain the results for the high-frequency Lorentzian, we will assume that there are individual charge fluctuators affecting the SET, and that each Lorentzian in the experimentally measured spectra is due to a single fluctuator coupled to the SET island. The microscopic nature of these fluctuators is not well known, but several experimental groups have suggested that they could be traps in the substrate dielectric close to the SET or in the aluminum surface oxide.\n\nHere we will argue that the sources of these two-level fluctuators are located outside the barrier and that they may have a \\textit{mesoscopic} nature. In reference \\cite{PhysRevB.53.13682} it is argued, based on an electrostatic analysis of the tunnel barrier, that such fluctuators could not be localized inside the barrier. There are also other experiments \\cite{PhysRevB.67.205313, JApplPhys.96.6827} where it is argued that the charge fluctuators are most probably localized outside the tunnel barrier.\n\nA typical SET is made from thin aluminum films which are not uniform; they consist of small grains connected to each other. In figure \\ref{SET_SEB}(a) we show an SEM image of a sister sample to the measured one. 
In figure \\ref{SET_SEB}(b) we also show an AFM image of the same sister sample. It can be clearly seen that there are many small grains close to the device. We will assume that some grains are separated from the main film by a thin oxide layer but are also capacitively connected to the SET island.\n\\begin{figure}\n\\centering\\epsfig{figure=Figure5-1.eps,width=10cm}\n\\caption{{\\label{SET_SEB}}\\small (a) An SEM image of a sister sample to the measured one. The black bar shows a $100\\,\\mathrm{nm}$ linear scale. (b) An AFM picture of the edge of an aluminum film on the $\\mathrm{SiO_x}$ surface. (c) Equivalent electrostatic scheme, where the small metallic grain is capacitively coupled to the SET island and has a tunnel connection with a bias lead.}\n\\end{figure}\n\nElectrostatically such a grain can be described as a single-electron box \\cite{ZeitsPhysB.85.327}, capacitively coupled to the SET island with capacitance $C'_g$ and having a tunnel contact, with resistance $R'$ and capacitance $C'$, to one of the bias leads, as indicated in figure\\,\\ref{SET_SEB}(c). The situation is almost equivalent if the grain were tunnel connected to the SET island and capacitively connected to the SET bias lead. For a detailed analysis we should estimate the energy scales for this grain. We assume that the linear dimension of the stray aluminum grain is in the range $1-5\\,\\mathrm{nm}$. The charging energy for this grain is of the order $E'_C \\equiv e^2\/(2 C'_\\Sigma) \\sim 10^{-1}\\,\\mathrm{eV}$, where $C'_\\Sigma=C'+C'_g+C'_{env}$, and $C'_{env}$ is the capacitance to the rest of the environment. This charging energy is substantially larger than the experimental temperature and the charging energy of the SET ($k_\\mathrm{B}T \\ll E_C \\ll E'_C$). In addition there will be further separation of the energy levels due to the small size of the grains. 
\n\nThe ratio of capacitances $C'_g\/C'_\\Sigma$ is given directly by the charge induced on the SET island, which we have already extracted by integrating the Lorentzians. Thus for the high-frequency Lorentzian we have $C'_g\/C'_\\Sigma=0.030$, and for the low-frequency Lorentzian this ratio is 0.0066.\n\nIn our model the single-electron box is capacitively coupled to the SET island, and tunnel coupled to one of the SET leads. The average potential of the SET island $\\phi$ acts, in this system, as a gate potential for the single-electron box and induces the charge $n'_g=C'_g \\phi \/e$ on the grain.\nThe charging dynamics of the electron box can be described by the orthodox theory using a master equation approach \\cite{Averin}. Electron tunneling changes the number of excess electrons $n$ in the grain. The differences in the electrostatic energy, when electrons tunnel to $(+)$ and from $(-)$ the grain, are:\n\\begin{equation}\n\\Delta\\mathcal{E}_c^{\\pm}(n)=2 E'_C \\left( \\pm n \\mp n'_g +1\/2\\right).\n\\end{equation}\n\nThe rate of electron tunneling to or from the grain is a function of the tunnel resistance $R'$ and the electrostatic energy gain $\\Delta \\mathcal{E}_c^{\\pm}(n)$:\n\\begin{equation}{\\label{Tunnel_rates}}\nw^{\\pm}_n=\\frac{1}{e^2 R'}\n\\frac{\\Delta \\mathcal{E}_c^{\\pm}(n)}{1-\\exp\\left(-\\Delta \\mathcal{E}_c^{\\pm}(n)\/(k_\\mathrm{B}T)\\right)}.\n\\end{equation}\n\nThe probability $\\sigma_n$ to have $n$ excess electrons in the grain\nobeys the master equation:\n\\begin{equation}\n\\frac{d\\sigma_n}{dt}=w^+_{n-1}\\sigma_{n-1}+w^-_{n+1}\\sigma_{n+1}- \\sigma_n(w^+_n+w^-_n).\n\\end{equation}\n\nFor our case of low temperature, $k_B T \\ll E'_C$,\nthere are only two nonvanishing probabilities, $\\sigma_n$ and $\\sigma_{n+1}$. 
This simplifies the problem, and it is convenient to define the distance from the grain degeneracy point, \\textit{i.e.} the point where these two nonvanishing states are degenerate, as $\\Delta n'_g=C'_g (\\phi -\\phi_0) \/e$. Here $\\phi_0=e(n+1\/2)\/C'_g$ is the SET island potential needed to reach the grain degeneracy point. \n\nIn the time domain, the dynamics for charging this grain from the lead is a random telegraph process, and the spectrum of this process is a Lorentzian function with a cut-off frequency defined by the sum of charging and escaping rates \\cite{JApplPhys.25.341}:\n\\begin{equation}{\\label{Eq_Cut_off frequency}}\nf_{co} = w^{+}_{n}+w^{-}_{n+1} = \\frac { \\Delta n'_g}{R' C'_\\Sigma} \\coth \\left( \\frac{E'_C}{k_B T} \\Delta n'_g \\right)\n= \\mathcal{A}(\\phi -\\phi_0) \\coth \\left( \\mathcal{B} (\\phi -\\phi_0) \\right),\n\\end{equation}\nwhere $\\mathcal{A}=C'_g\/(e C'_\\Sigma R')$ and $\\mathcal{B}=e C'_g\/(2 k_B T C'_\\Sigma)$.\n\nFrom eq. (\\ref{Eq_Cut_off frequency}) we thus see that the cut-off frequency of the Lorentzian depends directly on $\\Delta n'_g$ and therefore on the potential of the SET island, $\\phi$. When the grain is biased away from its degeneracy point, \\textit{i.e.} when $\\Delta n'_g > 2k_B T\/E'_C$, the cut-off frequency grows linearly with the SET island potential. This means that if we are far from the grain degeneracy point, the cut-off frequency is relatively high and the relative frequency shift due to the change in $\\phi$ will be small. On the other hand if we are close to the grain degeneracy point the cut-off frequency will be close to its minimum and the relative change in frequency due to $\\phi$ can be substantial. The maximum relative frequency change occurs when the potential just barely reaches the grain degeneracy point and can be calculated from eq.\\,\\ref{Eq_Cut_off frequency}. 
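The behavior of this cut-off formula can be sketched numerically. In the snippet below, only the ratio $C'_g/C'_\Sigma = 0.030$ comes from the text; the temperature, grain tunnel resistance $R'$ and total capacitance $C'_\Sigma$ are assumed, illustrative values.

```python
import math

e = 1.602e-19   # elementary charge, C
kB = 1.381e-23  # Boltzmann constant, J/K

# Assumed (illustrative) grain parameters -- not fitted values:
T = 0.3             # temperature, K
Rp = 1e10           # grain tunnel resistance R', ohm
Csig = 1e-18        # grain total capacitance C'_Sigma, F
Cg = 0.030 * Csig   # C'_g, using the ratio extracted for the high-frequency Lorentzian

A = Cg / (e * Csig * Rp)          # Hz per volt
B = e * Cg / (2 * kB * T * Csig)  # 1/volt

def f_co(dphi):
    # Lorentzian cut-off: f_co = A*(phi - phi0)*coth(B*(phi - phi0)),
    # with the finite limit A/B = 2*kB*T/(e^2*R') at the grain degeneracy point
    if abs(B * dphi) < 1e-12:
        return A / B
    return A * dphi / math.tanh(B * dphi)

print(f"f_co at degeneracy: {f_co(0.0):.3g} Hz")
print(f"f_co at phi - phi0 = 0.5 mV: {f_co(5e-4):.3g} Hz")
```

Note that the minimum cut-off $A/B = 2k_BT/(e^2R')$ is independent of the capacitances, and the cut-off grows linearly in $|\phi-\phi_0|$ far from the degeneracy point, as stated above.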
The bounds of the island potential, $-e\/(2C_\\Sigma)<\\phi<e\/(2C_\\Sigma)$, must be taken into account in this estimate.\n\n\\section{Introduction}\nWe consider the problem\n\\begin{align*}\n(\\wp_\\lambda)\\qquad\n\\left\\{\n\\begin{aligned}\n& -\\Delta u=\\lambda u+u^{p},\\quad u>0 \\quad\\text{in } \\Omega,\n\\\\\n& u=0 \\quad\\text{on } \\partial\\Omega,\n\\end{aligned}\n\\right.\n\\end{align*}\nwhere $\\Omega$ is a smooth bounded domain in $\\R^N$, $N\\geq 3$, $p= \\frac{N+2}{N-2}$ and\n $\\lambda$ is a real positive parameter.\n\nIn this article, we are interested in obtaining solutions to this problem, in the special case $N=3$, that concentrate at $k$ different points of $\\Omega$, $k\\geq 2$.\nIn particular, we\nanalyze the role of the Green function of $\\Delta+\\lambda$ in the presence of multi-peak solutions when $\\lambda$ is regarded as a parameter.\n\nSolutions to $(\\wp_\\lambda)$ correspond to critical points of the energy functional\n\\[\nJ_\\lambda(u)=\\frac{1}{2}\\int_{\\Omega}\\vert\\nabla u\\vert^2-\\frac{\\lambda}{2}\\int_{\\Omega}u^2-\\frac{1}{p+1}\\int_{\\Omega}|u|^{p+1}.\n\\]\nAlthough this functional is of class $C^2$ in $H_0^1(\\Omega)$, it does not satisfy the Palais-Smale condition at all energy levels, and hence variational arguments to find solutions are delicate and sometimes fail.\n\nLet $\\lambda_1$ denote the first eigenvalue of $-\\Delta$ with Dirichlet boundary condition.\nIt is well known that\n$(\\wp_\\lambda)$ admits no solutions if\n$\\lambda \\geq\\lambda_1$, which can be verified by\ntesting the equation against a first eigenfunction of the Laplacian. 
Moreover, the classical Pohozaev identity \\cite{pohozaev} guarantees that problem $(\\wp_\\lambda)$ with $\\lambda\\leq 0$ has no solution in a star-shaped domain.\n\n\n\nIn the classical paper \\cite{brezis-nirenberg}, Brezis and Nirenberg showed that\nleast energy\nsolutions\nto this problem exist for $\\lambda \\in (\\lambda^*,\\lambda_1)$, where $\\lambda^* \\in [0,\\lambda_1)$ is a special number depending on the domain.\nThey also showed that if $N\\geq 4$, then $\\lambda^*=0$, and in particular $(\\wp_\\lambda)$ has a solution with minimal energy for all $\\lambda \\in (0,\\lambda_1)$.\n\n\nWhen $N=3$ the situation is strikingly different since, as shown in \\cite{brezis-nirenberg}, $\\lambda^*>0$ and no solutions with minimal energy exist\nwhen $\\lambda \\in (0,\\lambda^*)$.\nIn 2002, Druet \\cite{druet} showed that there is no solution with minimal energy for $\\lambda=\\lambda^*$ either,\nwhich implies that $\\lambda^*$ can be characterized as the critical value such that a solution of $(\\wp_\\lambda)$ with minimal energy exists if and only if $\\lambda\\in (\\lambda^*,\\lambda_1)$.\n\nIn the particular case of the ball in $\\R^3$, Brezis and Nirenberg \\cite{brezis-nirenberg} also proved that $\\lambda^* = \\frac{\\lambda_1}{4}$ and that a solution to $(\\wp_\\lambda)$ exists if and only if $\\lambda \\in ( \\frac{\\lambda_1}{4}, \\lambda_1)$. By the results of Gidas, Ni, Nirenberg \\cite{gidas-ni-nirenberg} and Adimurthi, Yadava \\cite{adimurthi-yadava} this solution is unique and indeed corresponds to the minimum of the energy functional.\n\n\nIn dimension three a characterization of $\\lambda^*$ can be given in terms of the \\emph{Robin function} $g_\\lambda$, defined as follows.\nLet $\\lambda\\in(0,\\lambda_1)$. 
For a given $x\\in\\Omega$ consider the Green function $G_\\lambda(x, y)$, solution of\n\\[\n\\begin{array}{rlll}\n-\\Delta_yG_\\lambda-\\lambda G_\\lambda&=&\\delta_x&y\\in \\Omega,\n\\\\\nG_\\lambda(x, y)&=&0&y\\in \\partial\\Omega,\n\\end{array}\n\\]\nwhere $\\delta_x$ is the Dirac delta at $x$.\nLet $H_{\\lambda}(x,y) = \\Gamma(y-x)-G_\\lambda(x,y)$ with $\\Gamma(z)=\\frac{1}{4\\pi \\vert z\\vert}$, be its regular part,\nand let us define the Robin function of $G_\\lambda$ as\n$g_\\lambda(x) := H_\\lambda(x, x)$.\n\nIt is known that $g_\\lambda(x)$ is a smooth function which goes to $+\\infty$ as $x$ approaches $\\partial \\Omega$. The minimum of $g_\\lambda$ in $\\Omega$ is strictly decreasing in $\\lambda$, is strictly positive when $\\lambda$ is close to $0$ and approaches $-\\infty$ as $\\lambda\\uparrow \\lambda_1$.\n\nIt was conjectured in \\cite{brezis} and proved by Druet \\cite{druet} that $\\lambda^*$ is the largest $\\lambda \\in (0,\\lambda_1)$ such that $\\min_{\\Omega} g_\\lambda \\geq 0$. 
Moreover, Druet also proved that, as $\\lambda\\downarrow \\lambda^*$, least energy solutions to $(\\wp_\\lambda)$ develop a singularity which is located at a point $\\zeta_0\\in \\Omega$ such that $g_{\\lambda^*}(\\zeta_0) = 0$.\nNote that $\\zeta_0$ is a global minimizer of $g_{\\lambda^*}$ and hence a critical point.\nA concentrating family of solutions can exist at other values of $\\lambda$.\nIndeed, del Pino, Dolbeault and Musso \\cite{DPDM}\nproved that if $\\lambda_0 \\in (0,\\lambda_1)$ and $\\zeta_0\\in\\Omega$ are such that\n\\[\ng_{\\lambda_0}(\\zeta_0)=0, \\quad\n\\nabla g_{\\lambda_0}(\\zeta_0)=0,\n\\]\nand either $\\zeta_0$ is a strict local minimum or a nondegenerate critical point of $g_{\\lambda_0}$, then for $\\lambda - \\lambda_0 >0$, there is a solution $u_\\lambda$ of $(\\wp_\\lambda)$ such that\n\\[\nu_\\lambda (x) = w_{\\mu,\\zeta}\\,(1+o(1))\n\\]\nin $\\Omega$ as $\\lambda - \\lambda_0 \\to 0$, where\n\\[\nw_{\\mu,\\zeta}(x) =\n\\frac{\\alpha_3 \\, \\mu^{1\/2}}{(\\mu^2+ | x-\\zeta|^2 )^{1\/2}},\n\\quad \\alpha_3 = 3^{1\/4},\n\\]\n$\\zeta\\to \\zeta_0$ and $\\mu=O(\\lambda - \\lambda_0)$.\n\nThe behavior described above, namely \\emph{bubbling} of a family of solutions, was\nalready studied in higher dimensions.\nHan \\cite{han} proved that if $N\\geq 4$, minimal energy solutions of $(\\wp_\\lambda)$ concentrate at a critical point of the Robin function $g_0$ as $\\lambda\\downarrow0$. See also Rey \\cite{Rey} for an arbitrary family of solutions that concentrates at a single point. 
Conversely, Rey in \\cite{Rey,Rey2} showed that attached to any $C^1$-stable critical point of the Robin function $g_0$ there is a family of solutions of $(\\wp_\\lambda)$ that blows up at this point as $\\lambda\\downarrow 0$.\n\nUnlike the case of dimension three, bubbling behavior with concentration at multiple points as $\\lambda \\downarrow 0$ is known in higher dimensions.\nIndeed, Musso and Pistoia \\cite{Musso-Pistoia2002} constructed multispike solutions in a smooth bounded domain $\\Omega \\subset \\R^N$, $N\\geq 5$.\nTo state precisely their result let us consider an integer $k \\geq 1$,\nlet us write $\\bar\\mu = (\\bar\\mu_1,\\ldots,\\bar\\mu_k)\\in \\R^k $, $\\zeta = (\\zeta_1,\\ldots,\\zeta_k) \\in \\Omega^k$, $\\zeta_i\\not=\\zeta_j$ for $i\\not=j$, and define\n\\[\n\\psi_k(\\bar\\mu,\\zeta) = \\frac{1}{2} ( M(\\zeta) \\,\\bar\\mu^{\\frac{N-2}{2}} , \\bar\\mu^{\\frac{N-2}{2}} ) - \\frac{1}{2}\n\\,B\\sum_{i=1}^k\\bar\\mu_i^2\n\\]\nwhere $\\bar\\mu^{\\frac{N-2}{2}} = (\\bar\\mu_1^{\\frac{N-2}{2}},\\ldots,\\bar\\mu_k^{\\frac{N-2}{2}})$, and $M(\\zeta)$ is the matrix with coefficients\n\\[\nm_{ii}(\\zeta) = g_0(\\zeta_i), \\quad\nm_{ij}(\\zeta) = -G_0(\\zeta_i,\\zeta_j), \\quad \\text{for } i\\not=j.\n\\]\nHere $B>0$ is a constant depending only on the dimension.\nIt is shown in \\cite{Musso-Pistoia2002} that if $\\psi_k$ has a stable critical point $(\\bar\\mu,\\zeta)$ then, for $\\lambda>0$ small, problem $(\\wp_\\lambda)$ has a family of solutions that blow up at the $k$ points $\\zeta_1,\\ldots,\\zeta_k$, with profile near $\\zeta_i$ given by $w_{\\mu_i,\\zeta_i}$ and rates $\\mu_i \\sim \\bar\\mu_i \\,\\lambda^{\\frac{1}{N-4}}$. 
Musso and Pistoia also exhibit classes of domains where such critical points of $\\psi_k$ can be found.\nA related multiplicity result is given by the same authors in \\cite{Musso-Pistoia2003}, where $\\Omega$ is a domain with a sufficiently small hole.\nThey show that for $\\lambda<0$ small there is a family of solutions concentrating at two points.\n\n\n\n\nAs far as we know, there are no works dealing with solutions with multiple concentration in lower dimensions ($N=3$ and $N=4$), and it is not clear what type of finite-dimensional function governs the location and the concentration rate of the bubbling solutions.\n\nIn this work we focus on dimension three. We give conditions on the parameter $\\lambda$ such that solutions with simultaneous concentration at $k$ points exist, and find the finite-dimensional function describing the location and rate of concentration.\nWe remark that the condition on $\\lambda$ that we obtain for solutions with multiple bubbling in dimension three is a non-obvious but natural generalization of the condition given by del Pino, Dolbeault and Musso \\cite{DPDM} for single bubble solutions in dimension three,\nand is somehow related to the result of Musso and Pistoia \\cite{Musso-Pistoia2002} for $\\lambda^*=0$ in higher dimensions.\n\nIn order to state our results we need some notation. 
For a given integer\n$k\\geq2$\nset\n\\[\n\\Omega_k^* =\\{ \\zeta= ( \\zeta_1,\\dots, \\zeta_k) \\in \\Omega^k : \\zeta_i\\not=\\zeta_j \\text{ for all } i\\not=j\\}.\n\\]\nFor $\\zeta= ( \\zeta_1,\\dots, \\zeta_k) \\in \\Omega_k^*$, let us consider the matrix\n\\[\nM_\\lambda(\\zeta):=\n\\begin{pmatrix}\ng_{\\lambda}(\\zeta_1) & -G_{\\lambda}(\\zeta_1,\\zeta_2) & \\ldots & -G_{\\lambda}(\\zeta_1,\\zeta_k) \\\\\n-G_{\\lambda}(\\zeta_1,\\zeta_2) & g_{\\lambda}(\\zeta_2) & \\ldots & -G_\\lambda(\\zeta_2,\\zeta_k)\n\\\\\n\\vdots & & & \\vdots\n\\\\\n-G_\\lambda(\\zeta_1,\\zeta_k) & -G_\\lambda(\\zeta_2,\\zeta_k) & \\ldots & g_\\lambda(\\zeta_k)\n\\end{pmatrix}.\n\\]\nIn other words, $M_\\lambda(\\zeta)$ is the matrix whose\n$ij$ component is given by\n\\[\n\\begin{cases}\ng_\\lambda(\\zeta_i) & \\text{if } i=j\n\\\\\n- G_\\lambda(\\zeta_i,\\zeta_j) & \\text{if } i\\not=j .\n\\end{cases}\n\\]\nDefine the function\n\\begin{align*}\n\\psi_\\lambda(\\zeta) = \\det M_\\lambda(\\zeta) , \\quad \\zeta \\in \\Omega_k^*.\n\\end{align*}\nOur main result is the following.\n\\begin{theorem}\n\\label{thm1}\nAssume that for a number $\\lambda=\\lambda_0 \\in (0,\\lambda_1)$ there is \\,$\\zeta^0 = ( \\zeta_1^0,\\ldots,\\zeta_k^0) \\in\\Omega_k^*$ such that:\n\\begin{itemize}\n\n\\item[(i)]\n$\\psi_{\\lambda_0}(\\zeta^0)=0$ and $M_{\\lambda_0}(\\zeta^0)$ is positive semidefinite,\n\n\\item[(ii)]\n$ D_{\\zeta}\\psi_{\\lambda_0}(\\zeta^0)=0 $,\n\n\\item[(iii)]\n$D_{\\zeta\\zeta} ^2\\psi_{\\lambda_0}(\\zeta^0)$ is non-singular,\n\n\\item[(iv)]\n$\\frac{\\partial \\psi_{\\lambda} }{\\partial \\lambda}\\big|_{\\lambda=\\lambda_0} (\\zeta^0) < 0 $.\n\n\\end{itemize}\nThen for $\\lambda=\\lambda_0+\\varepsilon$, with $\\varepsilon>0$ small, problem $(\\wp_\\lambda)$ has a solution $u$ of the form\n\\[\nu = \\sum_{j=1}^k w_{\\mu_j,\\zeta_j} + O(\\varepsilon^{\\frac{1}{2}})\n\\]\nwhere $\\mu_j = O(\\varepsilon)$, $\\zeta_j\\to \\zeta_j^0$, $j=1,\\ldots,k$, and 
$O(\\varepsilon^{\\frac{1}{2}})$ is uniform in $\\Omega$ as $\\varepsilon\\to 0$.\n\\end{theorem}\n\n\\medskip\n\n\\noindent\nWe remark that\nTheorem~\\ref{thm1} admits some variants. For example, if $\\frac{\\partial \\psi_{\\lambda} }{\\partial \\lambda}\\big|_{\\lambda=\\lambda_0} (\\zeta^0) > 0 $, then a solution with $k$ bubbles can be found for $\\lambda=\\lambda_0-\\varepsilon$, with $\\varepsilon>0$ small.\nWhen $k=2$ the assumption that $M_{\\lambda_0}(\\zeta^0)$ is positive semidefinite is equivalent to $g_{\\lambda_0}(\\zeta_1^0)>0$ or $g_{\\lambda_0}(\\zeta_2^0)>0$.\n\n\nAs an example where the previous theorem can be applied,\nlet us consider the annulus\n\\[\n\\Omega_a = \\{ x\\in \\R^3 \\ : \\ a < |x| < 1 \\} ,\n\\]\nwhere $0<a<1$. We prove that problem $(\\wp_\\lambda)$ in $\\Omega_a$, with $1-a>0$ small, has a solution with $k$ bubbles centered at the vertices of a planar regular polygon for some $\\lambda_0\\in (0,\\lambda_1)$. As a byproduct of the construction we also deduce that\n\\[\n\\lambda_0<\\lambda^*.\n\\]\nA detailed proof of these assertions is given in Section~\\ref{exampleAnnulus}.\nThe ideas developed here can be applied\nto obtain two-bubble solutions in more general thin axially symmetric domains.\n\n\nIn dimension $N\\geq 4$ qualitatively similar solutions were detected by Wang-Willem \\cite{wang-willem} for all $\\lambda$ in an interval almost equal to $(0,\\lambda_1)$ by using variational methods.\nThe existence of this kind of solutions in dimension three was (to the best of our knowledge) not known.\n\nWe should remark that multipeak solutions cannot be constructed in a ball, since the solution of $(\\wp_\\lambda)$ is radial and unique if it exists. This may indicate that if we consider $(\\wp_\\lambda)$ in the annulus $\\Omega_a$ with $a>0$ sufficiently small there are no multipeak solutions.\n\n\n\n\n\n\n\n\nFinally, we mention that several interesting results have been obtained on the existence of sign changing solutions to the Brezis-Nirenberg problem. 
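Hypothesis (i) of Theorem~\ref{thm1} can be illustrated numerically for $k=2$. The snippet below assembles $M_\lambda(\zeta)$ and $\psi_\lambda=\det M_\lambda$ from assumed, purely illustrative values of $g_\lambda(\zeta_i)$ and $G_\lambda(\zeta_1,\zeta_2)$ (a real computation would evaluate the Robin and Green functions of $\Delta+\lambda$ on the domain), and checks positive semidefiniteness via the eigenvalues.

```python
import numpy as np

def M_matrix(g, G):
    # Assemble M_lambda(zeta): diagonal entries g_lambda(zeta_i),
    # off-diagonal entries -G_lambda(zeta_i, zeta_j)
    M = -np.asarray(G, dtype=float)
    np.fill_diagonal(M, g)
    return M

# Illustrative values only (not from any actual domain):
g = [2.0, 2.0]                # g_lambda(zeta_1), g_lambda(zeta_2)
G = [[0.0, 2.0], [2.0, 0.0]]  # G_lambda(zeta_1, zeta_2) = 2

M = M_matrix(g, G)
psi = np.linalg.det(M)        # psi_lambda(zeta) = det M_lambda(zeta)
eigs = np.linalg.eigvalsh(M)

print("psi =", psi)
print("positive semidefinite:", bool(np.all(eigs >= -1e-12)))
```

With these values $\psi = g_1 g_2 - G_{12}^2 = 0$ while both diagonal entries are positive, so $M$ is singular and positive semidefinite, matching the $k=2$ remark above.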
See for instance Ben Ayed, El Mehdi, Pacella \\cite{benayed-elmehdi-pacella}, Iacopetti \\cite{iacopetti}, Iacopetti and Vaira \\cite{iacopetti-vaira} and the references therein. It is in fact foreseeable that the methods developed in this work can also give the existence of multipeak sign changing solutions in dimension 3.\n\n\nThe paper is organized as follows. In Section~\\ref{sectEnergyExpansion} we introduce some notation and give the energy expansion for a multi-bubble approximation.\nSections~\\ref{sectLinear} and~\\ref{sectNonlinear} are respectively devoted to the study of the linear and nonlinear problems involved in the Lyapunov-Schmidt reduction, which is carried out in Section~\\ref{secReduction}.\nTheorem~\\ref{thm1} is proved in Section~\\ref{sectProof}.\nFinally, in Section~\\ref{exampleAnnulus} we give the details for the case of the annulus $\\Omega_a$.\n\n\n\\section{Energy expansion of a multi-bubble approximation}\n\\label{sectEnergyExpansion}\n\n\\noindent\nWe denote by\n\\[\nU(z):=\\frac{\\alpha_3 }{(1+\\vert z\\vert^2)^{1\/2}},\\quad \\alpha_3=3^{1\/4},\n\\]\nthe standard bubble.\nIt is well known that all positive solutions to the Yamabe equation\n\\[\n\\Delta w + w^5 = 0 \\quad\\text{in }\\mathbb{R}^3\n\\]\nare of the form\n\\begin{align*}\nw_{\\mu,\\zeta}(x):\n&=\\mu^{-1\/2}\\,U\\Bigl(\\frac{x-\\zeta}{\\mu}\n\\Bigr)\n=\\frac{\\alpha_3 \\, \\mu^{1\/2}}{\\Bigl(\\mu^2+\\vert x-\\zeta\\vert^2\\Bigr)^{1\/2}},\n\\end{align*}\nwhere $\\zeta$ is a point in $\\mathbb{R}^3$ and $\\mu$ is a positive number.\n\nFrom now on we assume that $0<\\lambda<\\lambda_1(\\Omega)$.\n\nFor a given $k\\geq 2$,\nwe consider $k$ different points $\\zeta_1,\\ldots,\\zeta_k\\in\\Omega$ and\nsmall positive numbers $\\mu_1,\\ldots, \\mu_k$ and denote\n\\[\nw_i:=w_{\\mu_i,\\zeta_i}.\n\\]\nWe are looking for solutions of $(\\wp_\\lambda)$ that at main order are given by $\\sum_{i=1}^k w_i$. 
Since $w_i$ are not zero on $\\partial\\Omega$ it is natural to correct this approximation by terms that provide the Dirichlet boundary condition.\nIn order to do this we introduce, for each $i=1,\\ldots,k$, the function\n$\\pi_i$ defined as the unique solution of the problem\n\\[\n\\begin{array}{rlll}\n\\Delta \\pi_i+\\lambda\\,\\pi_i&=&-\\lambda\\,w_i&\\text{in }\\Omega,\n\\\\\n\\pi_i&=&-w_i&\\text{on }\\partial\\Omega,\n\\end{array}\n\\]\nand then we shall consider as a first approximation of the solution to $(\\wp_\\lambda)$ one of the form\n\\[\nU^0 = U_1+\\ldots+U_k,\n\\]\nwhere\n\\[\nU_i(x) = w_i(x) + \\pi_i(x).\n\\]\nObserve that $U_i\\in H_0^1(\\Omega)$ and satisfies the equation\n\\begin{equation}\n\\label{projection}\n\\left\\{\n\\begin{array}{rlll}\n\\Delta U_i+\\lambda U_i&=&-w_i^5&\\text{in } \\Omega,\n\\\\\nU_i&=&0&\\text{on }\\partial\\Omega.\n\\end{array}\n\\right.\n\\end{equation}\nLet us recall that the energy functional associated to $(\\wp_\\lambda)$ when $N=3$ is given by:\n\\[\nJ_\\lambda(u)=\\frac{1}{2}\\int_{\\Omega}\\vert\\nabla u\\vert^2-\\frac{\\lambda}{2}\\int_{\\Omega}u^2-\\frac{1}{6}\\int_{\\Omega}|u|^{6}.\n\\]\nLet us write $\\zeta= (\\zeta_1,\\ldots,\\zeta_k)$ and $\\mu=(\\mu_1,\\ldots,\\mu_k)$ and note that $U^0 = U^0(\\mu,\\zeta)$.\nSince we are looking for solutions close to $U^0(\\mu,\\zeta)$, formally we expect\n$\nJ_\\lambda( U^0(\\mu,\\zeta) )$\nto be almost critical in the parameters $\\mu,\\zeta$.\nFor this reason it is important to obtain an asymptotic formula of the functional $(\\mu,\\zeta)\\rightarrow J_\\lambda( U^0(\\mu,\\zeta))$ as $\\mu\\to 0$.\n\nFor any $\\delta>0$ set\n\\begin{multline*}\n\\Omega_\\delta^k:=\\{\n\\zeta\\equiv(\\zeta_1,\\ldots,\\zeta_k)\\in\\Omega^k: \\,\\textrm{dist}(\\zeta_i,\\partial \\Omega)>\\delta, \\vert \\zeta_i-\\zeta_j\\vert>\\delta,\\\\\n i=1,\\ldots,k,\\,j=1,\\ldots,k,\\, i\\neq j\n\\} .\n\\end{multline*}\nThe main result in this section is the expansion of the energy in the case of a 
multi-bubble ansatz.\n\n\\begin{lemma}\n\\label{lemmaEnergyExpansion}\nLet $\\delta>0$ be fixed and let $\\zeta\\in\\Omega_\\delta^k$.\nThen as $\\mu_i\\rightarrow 0$, the following expansion holds:\n\\begin{align*}\nJ_\\lambda\\Bigl(\\sum_{i=1}^k U_i\\Bigr)=&\\,k\\,a_0\n+a_1\\sum_{i=1}^k\\Bigl(\\mu_i\\,g_{\\lambda}(\\zeta_i)-\\sum_{j\\neq i}\\mu_i^{1\/2}\\,\\mu_j^{1\/2}\\,G_{\\lambda}(\\zeta_i,\\zeta_j)\\Bigr)+a_2\\,\\lambda\\,\\sum_{i=1}^k \\mu_i^2\n\\\\\n&\n-a_3\\,\\sum_{i=1}^k\\Bigl(\\mu_i\\,g_{\\lambda}(\\zeta_i)-\\sum_{j\\neq i}\\mu_i^{1\/2}\\mu_j^{1\/2}\\,G_{\\lambda}(\\zeta_i,\\zeta_j)\\Bigr)^2\n+ \\theta_\\lambda^{(1)} (\\mu,\\zeta) ,\n\\end{align*}\nwhere $\\theta_\\lambda^{(1)} (\\mu,\\zeta)$ is such that for any $\\sigma>0$ and $\\delta>0$ there is $C$ such that\n\\[\n\\Bigl| \\frac{\\partial^{m+n}}{\\partial\\zeta^m\\partial\\mu^n}\\,\\theta_\\lambda^{(1)} (\\mu,\\zeta) \\Bigr| \\leq C (\\mu_1+\\ldots+\\mu_k)^{3-\\sigma -n} ,\n\\]\nfor $m = 0,1$, $n = 0,1,2$, $m+n\\leq 2$, all small $\\mu_i$, $i=1,2,\\ldots,k$, and all $\\zeta\\in\\Omega_\\delta^k$.\n\\end{lemma}\n\nThe $a_j$'s are\nthe following explicit constants:\n\\begin{align}\n\\label{a0}\na_0:&=\\frac{1}{3}\\int_{\\R^3}U^6\n= \\frac{1}{4}(\\alpha_3\\pi)^2,\n\\\\\n\\label{a1}\na_1:&=2\\pi\\alpha_3\\int_{\\R^3}U^5\n=8(\\alpha_3\\pi)^2,\n\\\\\n\\label{a2}\na_2:&=\\frac{\\alpha_3}{2}\\int_{\\R^3}\\biggl[\\biggl(\\frac{1}{\\vert z\\vert}-\\frac{1}{\\sqrt{1+\\vert z\\vert^2}}\n\\biggr)U+\\frac{1}{2}\\,\\vert z\\vert\\, U^5\\biggr]\\,dz\n=(\\alpha_3\\pi)^2,\n\\\\\n\\label{a3}\na_3:&=\\frac{5}{2}(4\\pi\\alpha_3)^2\\int_{\\R^3}U^4\n=120\\,(\\alpha_3\\pi^2)^2.\n\\end{align}\nTo prove this lemma we need some preliminary results.\nTo begin with, we recall the relationship between the functions $\\pi_{i}(x)$ and the regular part of Green's function, $H_\\lambda(\\zeta_i,x)$.\nLet us consider the (unique) radial solution $\\mathcal{D}_0(z)$ of the following problem in entire 
space\n\\[\n\\begin{array}{rlll}\n\\Delta \\mathcal{D}_0&=&-\\lambda\\,\\alpha_3\\,\n\\Bigl(\n\\frac{1}{(1+\\vert z\\vert^2)^{1\/2}}\n-\\frac{1}{\\vert z\\vert}\n\\Bigr)\n&\\text{in } \\mathbb{R}^3,\n\\\\\n\\mathcal{D}_0&\\rightarrow\n&0&\n\\text{as } \\vert z\\vert\\rightarrow \\infty.\n\\end{array}\n\\]\nThen $\\mathcal{D}_0(z)$ is a $C^{0,1}$\nfunction with $\\mathcal{D}_0(z)\\sim \\vert z\\vert^{-1}\\log \\vert z\\vert$ as $\\vert z\\vert\\rightarrow \\infty$.\n\n\\bigskip\n\n\\begin{lemma}\n\\label{lemma22}\nFor any $\\sigma > 0$ the following expansion holds as $\\mu_i\\rightarrow 0$:\n\\[\n\\mu_i^{-\\frac{1}{2}}\\pi_i(x)=-4\\pi\\alpha_3\\,H_{\\lambda}(x,\\zeta_i)+\\mu_i\\,\\mathcal{D}_0\\Bigl(\\frac{x-\\zeta_i}{\\mu_i}\\Bigr)+\\mu_i^{2-\\sigma}\\,\\theta(\\mu_i,x,\\zeta_i),\n\\]\nwhere for $m=0,1$, $n=0,1,2$, $m+n\\leq 2$, the function\n$\\mu_i^n\\frac{\\partial^{m+n}}{\\partial\\zeta_i^m\\partial\\mu_i^n}\\,\\theta(\\mu_i,x,\\zeta_i)\n$\nis bounded uniformly on $x\\in\\Omega$, all small $\\mu_i$ and $\\zeta_i$ in compact subsets of $\\Omega$.\n\\end{lemma}\n\\begin{proof}\nSee \\cite[Lemma 2.2]{DPDM}.\n\\end{proof}\n\n\n\n\n\n\nFrom Lemma \\ref{lemma22} and the fact that,\naway from $ x=\\zeta_i$,\n\\[\n\\mathcal{D}_0\\Bigl(\\frac{x-\\zeta_i}{\\mu_i}\\Bigr)=O(\\mu_i\\log \\mu_i),\n\\]\nthe following holds true.\n\\begin{lemma}\n\\label{Uiexp}\nLet $\\delta>0$ be given. 
Then for any $\\sigma > 0$ and $x\\in\\Omega\\setminus B_\\delta(\\zeta_i)$ the following expansion holds as $\\mu_i\\rightarrow 0$\n\\[\n\\mu_i^{-\\frac{1}{2}}U_i(x)=4\\pi\\,\\alpha_3\\,G_{\\lambda}(x,\\zeta_i)+\\mu_i^{2-\\sigma}\\,\\hat{\\theta}(\\mu_i,x,\\zeta_i)\n\\]\nwhere for $m=0,1$, $n=0,1,2$, $m+n\\leq 2$, the function\n$\\mu_i^n\\frac{\\partial^{m+n}}{\\partial\\zeta_i^m\\partial\\mu_i^n}\\,\\hat{\\theta}(\\mu_i,x,\\zeta_i)\n$\nis bounded uniformly on $x\\in\\Omega\\setminus B_\\delta(\\zeta_i)$, all small $\\mu_i$ and $\\zeta_i$ in compact subsets of $\\Omega$.\n\\end{lemma}\n\n\n\n\n\n\n\n\n\n\n\nWe also recall the expansion of the energy for the case of a single bubble, which was proved in \\cite{DPDM}.\n\n\n\n\\begin{lemma}\n\\label{lemma21}\nFor any $\\sigma > 0$ the following expansion holds as $\\mu_i\\rightarrow 0$\n\\begin{equation*}\nJ_\\lambda(U_i)=a_0+a_1\\,g_{\\lambda}(\\zeta_i)\\,\\mu_i+\n\\bigl(a_2\\,\\lambda-a_3\\,g_{\\lambda}(\\zeta_i)^2\\bigr)\\,\\mu_i^2+\\mu_i^{3-\\sigma}\\,\n\\theta(\\mu_i,\\zeta_i),\n\\end{equation*}\nwhere for $m=0,1$, $n=0,1,2$, $m+n\\leq 2$, the function\n$\\mu_i^n\\frac{\\partial^{m+n}}{\\partial\\zeta_i^m\\partial\\mu_i^n}\\,\\theta(\\mu_i,\\zeta_i)\n$\nis bounded uniformly on all small $\\mu_i$ and $\\zeta_i$ in compact subsets of $\\Omega$.\nThe $a_j$'s are given in \\eqref{a0}--\\eqref{a3}.\n\\end{lemma}\n\n\n\n\n\n\\bigskip\n\n\\begin{proof}[Proof of Lemma~\\ref{lemmaEnergyExpansion}]\n\\noindent\nWe decompose\n\\begin{align*}\nJ_\\lambda \\Bigl(\\sum_{i=1}^k U_i\\Bigr)\n&=\\frac{1}{2}\\sum_{i=1}^k\\Bigl(\\int_{\\Omega} \\vert\\nabla U_i\\vert^2+\\sum_{j\\neq i}\\int_{\\Omega}\\nabla U_i\\cdot \\nabla U_j\\Bigr)\n\\\\\n&\\quad -\\frac{\\lambda}{2}\\sum_{i=1}^k\\Bigl( \\int_{\\Omega}U_i^2+\\sum_{j\\neq i}\\int_{\\Omega}U_i\\,U_j\\Bigr)-\\frac{1}{6}\\int_{\\Omega}\\Bigl(\\sum_{i=1}^k U_i\\Bigr)^6\n\\\\\n& =\\sum_{i=1}^kJ_\\lambda(U_i)+\\frac{1}{2}\\sum_{i=1}^k\\sum_{j\\neq i}\\int_{\\Omega}[\\nabla U_i\\cdot 
\\nabla U_j-\\lambda \\,U_i\\,U_j]\n\\\\\n& \\quad -\\frac{1}{6}\\int_{\\Omega}\\Bigl[\\Bigl(\\sum_{i=1}^k U_i\\Bigr)^6-\\sum_{i=1}^k U_i^6\\Bigr].\n\\end{align*}\nIntegrating by parts in $\\Omega$ we get\n\\[\n\\int_{\\Omega}\\nabla U_i\\cdot \\nabla U_j=\n\\int_{\\Omega}(-\\Delta U_i)U_j+\n\\int_{\\partial\\Omega}\\frac{\\partial U_i}{\\partial \\eta} U_j=\\int_{\\Omega}(-\\Delta U_i)U_j,\n\\]\nwhere $\\frac{\\partial }{\\partial \\eta}$ denotes the derivative along the outward unit normal at a point of $\\partial \\Omega$.\nFrom \\eqref{projection} one gets\n\\[\n\\int_{\\Omega}\\nabla U_i\\cdot \\nabla U_j=\\int_{\\Omega}(-\\Delta U_i)U_j=\\int_{\\Omega}(\\lambda U_i+w_i^5)U_j,\n\\]\nand so\n\\[\n\\int_{\\Omega}\\nabla U_i\\cdot \\nabla U_j-\\lambda\\int_{\\Omega}U_i\\, U_j=\\int_{\\Omega}w_i^5\\, U_j.\n\\]\nHence,\n\\begin{equation}\nJ_\\lambda \\Bigl(\\sum_{i=1}^k U_i\\Bigr)=\\sum_{i=1}^kJ_\\lambda(U_i)+\\frac{1}{2}\\sum_{i=1}^k \\sum_{j\\neq i}\\int_{\\Omega} w_i^5\\, U_j-\\frac{1}{6}\\int_{\\Omega}\\Bigl[\\Bigl(\\sum_{i=1}^k U_i\\Bigr)^6-\\sum_{i=1}^k U_i^6\\Bigr].\n\\label{Jlambda}\n\\end{equation}\nLet $\\rho\\in(0,\\delta\/2)$ and denote by\n\\[\n\\mathcal{O}_\\rho=\\Omega\\setminus\\cup_{j=1}^kB_{\\rho}(\\zeta_j).\n\\]\nLet us decompose\n\\begin{equation}\n\\label{Ei}\n\\int_{\\Omega}\\Bigl[\\Bigl(\\sum_{i=1}^k U_i\\Bigr)^6-\\sum_{i=1}^k U_i^6\\Bigr]=\\sum_{i=1}^k\\int_{B_{\\rho}(\\zeta_i)}\nE_i+\\sum_{i=1}^k\n\\int_{\\mathcal{O}_\\rho}\nE_i,\n\\end{equation}\nwhere\n\\begin{align}\n\\notag\nE_i:=&\\Bigl[(U_i+Q_i)^6-U_i^6\\Bigr]-\\sum_{j\\neq i} U_j^6\\\\\n=&6\\,(U_i^5\\,Q_i+U_i\\,Q_i^5)+15\\,(U_i^4\\,Q_i^2+U_i^2\\,Q_i^4)\n+20\\,U_i^3\\,Q_i^3+Q_i^6-\\sum_{j\\neq i} U_j^6,\n\\label{U6}\n\\end{align}\nwhere\n$Q_i:=\\sum_{j\\neq i} U_j$.\n\nFrom now on, we write simply $O(\\mu^r)$ to indicate that some function is of the order of $(\\mu_1+\\ldots+\\mu_k)^r$ for any $r>0$.\n\nNotice that, if $s+t=6$,\n\\[\n\\mathcal{R}_{i,j}^{s,t}:=\\int_{\\mathcal{O}_\\rho} 
U_i^s\\,U_j^t=O(\\mu^3).\n\\]\nIf, additionally, $s>t$,\n\\[\n\\tilde{\\mathcal{R}}_{i,j}^{s,t}:=\\int_{B_\\rho(\\zeta_i)} U_i^t\\,U_j^s=O(\\mu^3).\n\\]\nThis implies, in particular, that $\\int_{\\mathcal{O}_\\rho}\nE_i=O(\\mu^3)$ and that $\\int_{B_{\\rho}(\\zeta_i)}U_j^6=O(\\mu^3)$.\n\n\n(i)\\,If $s=5$ and $t=1$, then we have\n\\begin{align}\n\\label{Ui5Uj}\n\\int_{B_\\rho(\\zeta_i)}U_i^5\\,U_j&=\n\\int_{B_\\rho(\\zeta_i)}w_i^5\\,U_j\n+5\\int_{B_\\rho(\\zeta_i)}w_i^4\\,\\pi_i\\,U_j+\n\\mathcal{R}_{i,j}^1,\n\\end{align}\nwhere\n\\[\n\\mathcal{R}_{i,j}^1:=20\n\\int_0^1 d\\tau\\,(1-\\tau)\n\\int_{B_\\rho(\\zeta_i)}(w_i+\\tau\\pi_i)^3\\,\\pi_i^2\\,U_j.\n\\]\nUsing the change of variable $x=\\zeta_i+\\mu_i z$\nand writing $B_{\\mu_i}=B_{\\frac{\\rho}{\\mu_i}}(0)$\nwe find that\n\\[\n\\int_{B_\\rho(\\zeta_i)}w_i^5\\,U_j\\,dx=\n\\mu_i^{\\frac{1}{2}}\\mu_j^{\\frac{1}{2}}\\int_{B_{\\mu_i}}U^5(z)\\,\\mu_j^{-\\frac{1}{2}}\\,U_j(\\zeta_i+\\mu_i z)\\,dz.\n\\]\nBy Lemma \\ref{Uiexp} we have\n\\[\n\\mu_j^{-\\frac{1}{2}}\\,U_j(\\zeta_i+\\mu_i z)=4\\pi\\alpha_3\\,G_{\\lambda}(\\zeta_i+\\mu_i z,\\zeta_j)+\\mu_j^{2-\\sigma}\\hat{\\theta}(\\mu_j,\\zeta_i+\\mu_i z,\\zeta_j).\n\\]\nWe expand\n\\begin{equation}\n\\label{Greenexp}\nG_\\lambda(\\zeta_i+\\mu_i z,\\zeta_j)=\nG_\\lambda(\\zeta_i,\\zeta_j)+\n\\mu_i\\, {\\bf c}\\cdot z+\\theta_2(\\zeta_i+\\mu_i z,\\zeta_j),\n\\end{equation}\nwhere ${\\bf c}=D_1G_\\lambda(\\zeta_i,\\zeta_j)$ and $\\vert\\theta_2(\\zeta_i+\\mu_i z,\\zeta_j)\\vert\\leq C\\mu_i^2\\,\\vert z\\vert^2.$\n\n\n\\noindent\nBy symmetry,\n\\[\n\\int_{B_{\\mu_i}}({\\bf c}\\cdot z)\\,U^5(z)\\,dz=0\n\\]\nand so,\n\\begin{align}\n\\int_{B_\\rho(\\zeta_i)}w_i^5\\,U_j\\,dx=&4\\pi\\alpha_3 
\\,\n\\mu_i^{\\frac{1}{2}}\\,\\mu_j^{\\frac{1}{2}}\\,G_\\lambda(\\zeta_i,\\zeta_j)\\int_{\\R^3}U^5(z)\\,dz+\\mathcal{R}_{i,j}^2\\notag\n\\\\\n=&2a_1\\,\n\\mu_i^{\\frac{1}{2}}\\,\\mu_j^{\\frac{1}{2}}\\,G_\\lambda(\\zeta_i,\\zeta_j)+\\mathcal{R}_{i,j}^2,\n\\label{wi5Uj}\n\\end{align}\nwhere\n$\na_1:= 2\\pi\\,\\alpha_3\\int_{\\R^3}U^5\n$ and\n\\begin{align*}\n\\mathcal{R}_{i,j}^2:=&-4\\pi\\alpha_3 \\,\n\\mu_i^{\\frac{1}{2}}\\,\\mu_j^{\\frac{1}{2}}\\,G_\\lambda(\\zeta_i,\\zeta_j)\\int_{\\R^3\\setminus B_{\\mu_i}}U^5(z)\\,dz\\\\\n&+\n4\\pi\\alpha_3 \\,\n\\mu_i^{\\frac{1}{2}}\\,\\mu_j^{\\frac{1}{2}}\\,\\int_{B_{\\mu_i}}U^5(z)\\,\\theta_2(\\zeta_i+\\mu_i z,\\zeta_j)\\,dz\\\\\n&+\\mu_i^{\\frac{1}{2}}\\,\\mu_j^{\\frac{1}{2}}\\int_{B_{\\mu_i}}\\mu_j^{2-\\sigma}\\,U^5(z)\\,\\hat{\\theta}(\\mu_j,\\zeta_i+\\mu_i z,\\zeta_j)\n\\,dz.\n\\end{align*}\nFrom Lemma~\\ref{lemma22} and \\cite[Appendix]{DPDM}, we have the following expansions, for any $\\sigma > 0$, as $\\mu_i\\rightarrow 0$:\n\\begin{equation*}\n\\mu_i^{-\\frac{1}{2}}\\pi_i(\\zeta_i+\\mu_iz)=-4\\pi\\alpha_3\\,H_{\\lambda}(\\zeta_i+\\mu_iz,\\zeta_i)+\\mu_i\\,\\mathcal{D}_0(z)+\\mu_i^{2-\\sigma}\\,\\theta(\\mu_i,\\zeta_i+\\mu_iz,\\zeta_i)\n\\end{equation*}\n\\begin{equation*}\nH_{\\lambda}(\\zeta_i+\\mu_iz,\\zeta_i)=g_{\\lambda}(\\zeta_i)+\\frac{\\lambda}{8\\pi}\\,\\mu_i\\vert z\\vert\n+\\theta_0(\\zeta_i,\\zeta_i+\\mu_iz)\n\\end{equation*}\nwhere $\\theta_0$ is a function of class $C^2$ with $\\theta_0(\\zeta_i,\\zeta_i)=0$.\n\n\\noindent\nThe above expressions, combined with\nLemma \\ref{Uiexp} and \\eqref{Greenexp}, give\n\\begin{align}\n\\nonumber\n\\int_{B_\\rho(\\zeta_i)}w_i^4\\,\\pi_i\\,U_j=&\n\\mu_i^{\\frac{3}{2}}\\,\\mu_j^{\\frac{1}{2}}\\int_{B_{\\mu_i}}U^4(z)\\,\\mu_i^{-\\frac{1}{2}}\\pi_i(\\zeta_i+\\mu_i z)\\,\\mu_j^{-\\frac{1}{2}}U_j(\\zeta_i+\\mu_i z)\\,dz\n\\\\\n\\notag\n&=-\\mu_i^{\\frac{3}{2}}\\,\\mu_j^{\\frac{1}{2}}(4\\pi\\alpha_3)^2 
\\,g_\\lambda(\\zeta_i)\\,\nG_\\lambda(\\zeta_i,\\zeta_j)\\int_{\\R^3}U^4(z)\\,dz+\\mathcal{R}_{i,j}^3\\\\\n&=-\\frac{2}{5}\\,a_3\\,\\mu_i^{\\frac{3}{2}}\\,\\mu_j^{\\frac{1}{2}}\\, g_\\lambda(\\zeta_i)\\,\nG_\\lambda(\\zeta_i,\\zeta_j)+\\mathcal{R}_{i,j}^3,\n\\label{wi4piiUj}\n\\end{align}\nwhere $a_3:=\\frac{5}{2}(4\\pi\\alpha_3)^2\\int_{\\R^3}U^4.$\n\nFrom \\eqref{Ui5Uj}, \\eqref{wi5Uj} and \\eqref{wi4piiUj}, we get\n\\begin{align}\n\\label{Ui5UjF}\n\\int_{B_\\rho(\\zeta_i)}U_i^5\\,U_j=&\n2\\,a_1\\,\n\\mu_i^{\\frac{1}{2}}\\mu_j^{\\frac{1}{2}}G_\\lambda(\\zeta_i,\\zeta_j)\n-2\\,a_3\\,\\mu_i^{\\frac{3}{2}}\\,\\mu_j^{\\frac{1}{2}} g_\\lambda(\\zeta_i)\\,\nG_\\lambda(\\zeta_i,\\zeta_j)\n\\\\&+\n\\mathcal{R}_{i,j}^1+\\mathcal{R}_{i,j}^2+5\\,\\mathcal{R}_{i,j}^3.\n\\notag\n\\end{align}\n\n(ii)\\, If $s=4$ and $t=2$, we have\n\\begin{align*}\n\\int_{B_\\rho(\\zeta_i)}U_i^4\\,U_j\\,U_m&=\n\\int_{B_\\rho(\\zeta_i)}w_i^4\\,U_j\\,U_m+\n\\mathcal{R}_{i,j,m}^5,\n\\end{align*}\nwhere\n\\[\n\\mathcal{R}_{i,j,m}^5:=4\n\\int_0^1 d\\tau\\,\n\\int_{B_\\rho(\\zeta_i)}(w_i+\\tau\\pi_i)^3\\,\\pi_i\\,U_j\\,U_m.\n\\]\nFrom Lemma \\ref{lemma22}, Lemma \\ref{Uiexp}, and \\eqref{Greenexp}, we get\n\\begin{align*}\n\\int_{B_\\rho(\\zeta_i)}w_i^4\\,U_j\\,U_m=&\n\\mu_i\\,\\mu_j^{1\/2}\\,\\mu_m^{1\/2}\\int_{B_{\\mu_i}}U^4(z)\\,\n\\Bigl(\\mu_j^{-\\frac{1}{2}}\\,U_j(\\zeta_i+\\mu_i z)\\Bigr)\\,\n\\Bigl(\\mu_m^{-\\frac{1}{2}}\\,U_m(\\zeta_i+\\mu_i z)\\Bigr)\\,dz\\\\\n&=\\mu_i\\,\\mu_j^{1\/2}\\,\\mu_m^{1\/2}\\,(4\\pi\\alpha_3)^2 
\\,\nG_\\lambda(\\zeta_i,\\zeta_j)\\,G_\\lambda(\\zeta_i,\\zeta_m)\\int_{\\R^3}U^4(z)\\,dz+\\mathcal{R}_{i,j,m}^6\\\\\n&=\\frac{2}{5}\\,a_3\\,\\mu_i\\,\\mu_j^{1\/2}\\,\\mu_m^{1\/2}\\,\nG_\\lambda(\\zeta_i,\\zeta_j)\\,G_\\lambda(\\zeta_i,\\zeta_m)+\\mathcal{R}_{i,j,m}^6.\n\\end{align*}\nTherefore,\n\\begin{align}\n\\int_{B_\\rho(\\zeta_i)}U_i^4\\,U_j\\,U_m&=\n\\frac{2}{5}\\,a_3\\,\\mu_i\\,\\mu_j^{1\/2}\\,\\mu_m^{1\/2}\\,\nG_\\lambda(\\zeta_i,\\zeta_j)\\,G_\\lambda(\\zeta_i,\\zeta_m)+\\mathcal{R}_{i,j,m}^5\n+\\mathcal{R}_{i,j,m}^6.\n\\label{Ui4UjUm}\n\\end{align}\n\n(iii)\\, If $s=3$ and $t=3$, we have\n\\begin{equation*}\n\\int_{B_\\rho(\\zeta_i)}U_i^3\\,U_j^3=\n\\mathcal{R}_{i,j}^{8},\n\\end{equation*}\nwhere\n\\begin{align*}\n\\mathcal{R}_{i,j}^8:=&\\int_{B_\\rho(\\zeta_i)}w_i^3\\,U_j^3+3\\int_0^1 ds\\,\n\\int_{B_\\rho(\\zeta_i)}(w_i+s\\pi_i)^2\\,\\pi_i\\,U_j^3.\n\\end{align*}\nTo analyse the size of the remainders $\\mathcal{R}_{i,j}^\\ell$ we proceed as in \\cite{DPDM}. We have the following estimate:\n\\begin{equation*}\n\\frac{\\partial^{m+n}}{\\partial\\zeta^m\\partial\\mu^n}\\mathcal{R}_{i,j}^\\ell=O(\\mu^{3-(n+\\sigma)})\n\\end{equation*}\nfor each $m=0,1$, $n=0,1,2$, $m+n\\leq 2$,\n$\\ell=1,\\ldots, 8$,\nuniformly for all small $\\mu_i$ and $\\zeta\\in\\Omega_\\delta^k$.\n\n\n\\noindent\nAnalogous statements hold true for ${\\mathcal{R}}_{i,j}^{s,t}$ and $\\tilde{\\mathcal{R}}_{i,j}^{s,t}$ with $s+t=6$.\n\nFrom \\eqref{U6} and the previous analysis\nwe get that\n\\begin{align*}\n\\int_{B_{\\rho}(\\zeta_i)}\nE_i\n&=6\\,\\int_{B_{\\rho}(\\zeta_i)}U_i^5\\,Q_i+15\\int_{B_{\\rho}(\\zeta_i)}U_i^4\\,Q_i^2+\\mathcal{R}\\\\\n&=\n6\\,\\sum_{j\\neq i}\\int_{B_{\\rho}(\\zeta_i)}U_i^5\\,U_j+15\\sum_{j\\neq i}\\sum_{m\\neq i}\\int_{B_{\\rho}(\\zeta_i)}U_i^4\\, U_j\\, U_m+\\mathcal{R}.\n\\end{align*}\nThis expression together with\n\\eqref{Ui5UjF} and \\eqref{Ui4UjUm} yields\n\\begin{align*}\n\\int_{B_{\\rho}(\\zeta_i)}\nE_i\n&=\n6\\,\\sum_{j\\neq 
i}\\Bigl[\n2\\,a_1\\,\n\\mu_i^{\\frac{1}{2}}\\mu_j^{\\frac{1}{2}}G_\\lambda(\\zeta_i,\\zeta_j)\n-2\\,a_3\\,\\mu_i^{\\frac{3}{2}}\\,\\mu_j^{\\frac{1}{2}} g_\\lambda(\\zeta_i)\\,\nG_\\lambda(\\zeta_i,\\zeta_j)\\Bigr]\\\\\n&+6\\sum_{j\\neq i}\\sum_{m\\neq i}\n\\Bigl[\na_3\\,\\mu_i\\,\\mu_j^{1\/2}\\,\\mu_m^{1\/2}\\,\nG_\\lambda(\\zeta_i,\\zeta_j)\\,G_\\lambda(\\zeta_i,\\zeta_m)\n\\Bigr]+\\mathcal{R}\\\\\n&=\n6\\,\\sum_{j\\neq i}\\Bigl[\n2\\,a_1\\,\n\\mu_i^{\\frac{1}{2}}\\mu_j^{\\frac{1}{2}}G_\\lambda(\\zeta_i,\\zeta_j)\n-2\\,a_3\\,\\mu_i^{\\frac{3}{2}}\\,\\mu_j^{\\frac{1}{2}} g_\\lambda(\\zeta_i)\\,\nG_\\lambda(\\zeta_i,\\zeta_j)\\Bigr]\\\\\n&+6\\,a_3\\,\\mu_i\\Bigl(\\sum_{j\\neq i}\n\\mu_j^{1\/2}\\,\nG_\\lambda(\\zeta_i,\\zeta_j)\n\\Bigr)^2+\\mathcal{R}.\n\\end{align*}\nCombining relations \\eqref{Jlambda}, \\eqref{Ei}, \\eqref{U6},\n\\eqref{wi5Uj}, Lemma \\ref{lemma21} and the above expression we get\nthe conclusion.\nFor the statement of this lemma $\\theta_\\lambda^{(1)}$ is defined as the sum of all remainders.\n\n\nThe formula\n\\[\n\\int_0^\\infty\\Bigl(\\frac{r}{1+r^2} \\Bigr)^q\\frac{dr}{r^{\\alpha+1}}=\\frac{\\Gamma\\bigl(\\frac{q-\\alpha}{2}\\bigr)\\,\\Gamma\\bigl(\\frac{q+\\alpha}{2}\\bigr)}{2\\,\\Gamma(q)}\n\\]\nyields that\n\\[\na_1=8(\\alpha_3\\pi)^2,\n\\quad\na_3=120\\,(\\alpha_3\\pi^2)^2.\n\\]\n\\end{proof}\n\n\n\n\\section{The linear problem}\n\\label{sectLinear}\n\n\n\nLet $u$ be a solution of $(\\wp_\\lambda)$. 
For\n$\\varepsilon>0$, we define\n\\[\nv(y) = \\varepsilon^{1\/2} u(\\varepsilon y).\n\\]\nThen $v$ solves the boundary value problem\n\\begin{equation}\n\\label{maineqreesc}\n\\left\\{\n\\begin{array}{rlll}\n\\Delta v+\\varepsilon^2\\,\\lambda\\, v&=&-v^5&\\text{in } \\Omega_\\varepsilon,\n\\\\\nv&>&0&\\text{in }\\Omega_\\varepsilon,\n\\\\\nv&=&0&\\text{on }\\partial\\Omega_\\varepsilon,\n\\end{array}\n\\right.\n\\end{equation}\nwhere $\\Omega_\\varepsilon=\\varepsilon^{-1}\\,\\Omega$.\nThus finding a solution of $(\\wp_\\lambda)$ which is a small perturbation of $\\sum_{i=1}^k U_i $ is equivalent to finding a solution of \\eqref{maineqreesc} of the form\n\\[\n\\sum_{i=1}^k V_i +\\phi,\n\\]\nwhere\n\\begin{align*}\nV_i(y) &= \\varepsilon^{ \\frac{1}{2}} U_i( \\varepsilon y)\n= w_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}(y)+\\varepsilon^{\\frac{1}{2}}\\pi_i(\\varepsilon\\, y),\n\\quad y \\in \\Omega_\\varepsilon,\n\\end{align*}\nfor $i=1,\\ldots, k$, and $\\phi$ is small in some appropriate sense.\n\n\n\n\n\nNotice that $V_i$ satisfies\n\\begin{equation*}\n\\left\\{\n\\begin{array}{rlll}\n\\Delta V_i+\\varepsilon^2 \\,\\lambda \\,V_i&=&-w_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}^5&\\text{in } \\Omega_\\varepsilon,\n\\\\\nV_i&=&0&\\text{on }\\partial\\Omega_\\varepsilon,\n\\end{array}\n\\right.\n\\end{equation*}\nwhere\n\\begin{align}\n\\label{muiPrime}\n\\mu_i^{\\prime}=\\frac{\\mu_i}{\\varepsilon},\\quad \\zeta_i^{\\prime}=\\frac{\\zeta_i}{\\varepsilon}.\n\\end{align}\nThen solving \\eqref{maineqreesc} is equivalent to finding $\\phi$ such that\n\\begin{equation}\n\\label{linearizedproblem}\n\\left\\{\n\\begin{array}{rlll}\nL(\\phi)&=&-N(\\phi)-E&\\text{in } \\Omega_\\varepsilon,\n\\\\\n\\phi&=&0&\\text{on }\\partial\\Omega_\\varepsilon,\n\\end{array}\n\\right.\n\\end{equation}\nwhere\n\\[\nL(\\phi)=\\Delta\\phi+\\varepsilon^2\\,\\lambda\\, \\phi+5V^4\\,\\phi,\n\\]\n\\[\nN(\\phi)=(V+\\phi)^5-V^5-5V^4\\,\\phi,\n\\]\n\\begin{equation}\n\\label{error}\nE=V^5-\n\\sum_{i=1}^k 
w_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}^5,\n\\end{equation}\nand\n\\begin{align}\n\\label{defV}\nV = \\sum_{i=1}^k V_i.\n\\end{align}\nIn what follows,\nthe canonical basis of $\\R^3$ will be denoted by\n\\[\n\\textrm{e}_1=(1,0,0), \\quad \\textrm{e}_2=(0,1,0),\\quad \\textrm{e}_3=(0,0,1).\n\\]\nLet $z_{i,j}$, $i=1,\\ldots,k$, be given by\n\\begin{equation}\n\\left\\{\n\\label{zij}\n\\begin{array}{rcl}\nz_{i,j}(y)&=&D_{\\zeta_i^{\\prime}} w_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}(y)\\cdot \\textrm{e}_j,\\quad j=1,2,3,\\\\\nz_{i,4}(y)&=&\\frac{\\partial\\,w_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}}{\\partial\\,\\mu_i^{\\prime}}(y).\n\\end{array}\n\\right.\n\\end{equation}\nWe recall that for each $i$, the functions $z_{i,j}$ for $j=1,\\ldots,4$, span the space\nof all bounded solutions of the linearized problem:\n\\[\n \\Delta z+5\\,w_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}^4\\, z=0\\quad\\text{ in } \\R^3.\n \\]\nA proof of this fact can be found for instance in \\cite{Rey}.\n\n\n\nObserve that\n\\[\n\\int_{\\R^3} w_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}^4 z_{i,j}\\,z_{i,l}=0\\quad\\text{if }j\\neq l.\n\\]\nIn order to study the operator $L$, the key idea is that, as $\\varepsilon\\rightarrow 0$, the linear operator $L$ is close to being the sum of the operators\n\\[\n\\Delta +5w_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}^4,\n \\]\n$i=1,\\ldots,k$.\n\n\n\n\n\n\nRather than solving \\eqref{linearizedproblem} directly, we will look for a solution of the following problem first: Find a function $\\phi$ such that for certain constants $c_{i,j}$, $i=1,\\ldots,k$, $j=1,2,3,4$,\n\\begin{equation}\n\\label{linearizedproblemproj}\n\\left\\{\n\\begin{array}{rlll}\nL(\\phi)&=&-N(\\phi)-E+\\sum_{i,j}c_{ij}\\,w_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}^4\\,z_{ij}\n&\\text{in } \\Omega_\\varepsilon,\n\\\\\n\\phi&=&0&\\text{on }\\partial\\Omega_\\varepsilon,\n\\\\\n\\int_{\\Omega_\\varepsilon}w_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}^4\\,z_{ij}\\,\\phi&=&0&\\text{for all }i,j.\n\\end{array}\n\\right.\n\\end{equation}\nAfter this is 
done, the remaining task is to adjust the parameters $\\zeta_i^\\prime, \\mu_i^\\prime$ in such a way that all constants $c_{ij}=0$.\n\nIn order to solve problem \\eqref{linearizedproblemproj} it is necessary to understand its linear part. Given a function $h$ we consider the problem of finding $\\phi$ and real numbers $c_{ij}$ such that\n\\begin{equation}\n\\label{linpart}\n\\left\\{\n\\begin{array}{rlll}\nL(\\phi)&=&h+\\sum_{i,j}c_{ij}\\,w_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}^4\\,z_{ij}\n&\\text{in } \\Omega_\\varepsilon,\n\\\\\n\\phi&=&0&\\text{on }\\partial\\Omega_\\varepsilon,\n\\\\\n\\int_{\\Omega_\\varepsilon}w_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}^4\\,z_{ij}\\,\\phi&=&0&\\text{for all }i,j.\n\\end{array}\n\\right.\n\\end{equation}\nWe would like to show that this problem is uniquely solvable with uniform bounds in suitable functional spaces.\nTo this end, it is convenient to introduce the following weighted norms.\n\n\\noindent\nGiven a fixed number $\\nu\\in(0,1)$,\nwe define\n\\begin{align*}\n\\Vert f\\Vert_{\\ast}\n&=\\sup_{y\\in\\Omega_\\epsilon}\n\\Bigl(\n\\omega(y)^{-\\nu}\\,\\vert f(y)\\vert\n+\\omega(y)^{-\\nu-1}\\,\\vert \\nabla f(y)\\vert\n\\Bigr)\n\\\\\n\\Vert f\\Vert_{\\ast\\ast}\n&=\\sup_{y\\in\\Omega_\\epsilon}\n\\omega(y)^{-(2+\\nu)}\\,\\vert f(y) \\vert,\n\\end{align*}\nwhere\n\\[\n\\omega(y)=\\sum_{i=1}^k\\bigl(1+\\vert y-\\zeta_i^{\\prime}\\vert\\bigr)^{-1}.\n\\]\n\n\\begin{prop}\n\\label{solvability}\nLet $0<\\alpha<1$. 
Let $\\delta>0$ be given.\nThen there exist a positive number $\\varepsilon_0$ and a constant $C>0$ such that if \\,$0<\\varepsilon<\\varepsilon_0$ and\n\\begin{align}\n\\label{parametros}\n\\vert \\zeta_i^{\\prime}-\\zeta_j^{\\prime} \\vert >\\frac{\\delta}{\\varepsilon}, \\ i\\not=j; \\quad\ndist(\\zeta_i^{\\prime},\\partial\\Omega_{\\varepsilon})>\\frac{\\delta}{\\varepsilon}\n\\text{ and }\n\\delta<\\mu_i^{\\prime}<\\delta^{-1},\\ i=1,\\ldots,k,\n\\end{align}\nthen for any $h\\in C^{0,\\alpha}(\\Omega_\\varepsilon)$ with $\\Vert h\\Vert_{\\ast\\ast}<\\infty$, problem \\eqref{linpart} admits a unique solution $\\phi=T(h)\\in C^{2,\\alpha}(\\Omega_\\varepsilon)$. Besides,\n\\begin{equation}\n\\label{cotas}\n\\Vert T(h)\\Vert_{\\ast}\\leq C\\,\\Vert h\\Vert_{\\ast\\ast} \\quad\\text{and}\\quad\n\\vert c_{ij}\\vert\\leq C\\,\\Vert h\\Vert_{\\ast\\ast},\\,\\,i=1,\\ldots,k,\\,\\, j=1,2,3,4.\n\\end{equation}\n\\end{prop}\nHere and in the rest of this paper, we denote by $C$ a positive constant that may change from line to line but is always independent of $\\varepsilon$.\n\n\nFor the proof of the previous proposition\nwe need the following a priori estimate:\n\\begin{lemma}\n\\label{lemmaCotaApriori}\nLet $\\delta>0$ be a given small number.\nAssume the existence of sequences $(\\varepsilon_n)_{n\\in\\mathbb{N}}$, $(\\zeta^{\\prime}_{i,n})_{n\\in\\mathbb{N}}$, $(\\mu^{\\prime}_{i,n})_{n\\in\\mathbb{N}}$ such that $\\varepsilon_n> 0$,\n$\\varepsilon_n\\rightarrow 0$,\n\\[\\vert \\zeta^{\\prime}_{i,n}-\\zeta^{\\prime}_{j,n} \\vert >\\frac{\\delta}{\\varepsilon_n}, \\ i\\not=j;\n\\quad\ndist(\\zeta^{\\prime}_{i,n},\\partial\\Omega_{\\varepsilon_n})>\\frac{\\delta}{\\varepsilon_n}\n\\text{ and }\n\\delta<\\mu^{\\prime}_{i,n}<\\delta^{-1},\\ i=1,\\ldots,k,\n\\]\nand for certain functions $\\phi_n$ and $h_n$ with\n$\\Vert h_n\\Vert_{\\ast\\ast}\\rightarrow 0$ and scalars\n$c_{ij}^n$, $i=1,\\ldots,k$, $j=1,2,3,4$, one 
has\n\\begin{equation}\n\\label{linpartn}\n\\left\\{\n\\begin{array}{rlll}\nL(\\phi_n)&=&h_n+\\sum_{i,j}c_{ij}^n\\,w_{\\mu_{i,n}^{\\prime},\\zeta_{i,n}^{\\prime}}^4\\,z_{ij}^n\n&\\text{in } \\Omega_{\\varepsilon_n},\n\\\\\n\\phi_n&=&0&\\text{on }\\partial\\Omega_{\\varepsilon_n},\n\\\\\n\\int_{\\Omega_{\\varepsilon_n}}w_{\\mu_{i,n}^{\\prime},\\zeta_{i,n}^{\\prime}}^4\\,z_{ij}^n\\,\\phi_n&=&0&\\text{for all }i,j,\n\\end{array}\n\\right.\n\\end{equation}\nwhere the functions $z_{ij}^n$\nare defined as in \\eqref{zij} for $\\zeta_{i,n}^{\\prime}$ and $\\mu_{i,n}^{\\prime}$.\nThen\n\\[\n\\lim_{n\\rightarrow\\infty}\\Vert \\phi_n\\Vert_{\\ast}=0.\n\\]\n\\end{lemma}\n\\begin{proof}\nArguing by contradiction, we may assume that $\\Vert \\phi_n\\Vert_{\\ast}=1$.\nWe shall establish first the weaker assertion that\n\\[\n\\lim_{n\\rightarrow\\infty}\\Vert \\phi_n\\Vert_{\\infty}=0.\n\\]\nLet us assume, for contradiction, that, up to a subsequence,\n\\begin{equation}\n\\label{contra}\n\\lim_{n\\rightarrow\\infty}\\Vert \\phi_n\\Vert_{\\infty}=\\gamma,\\quad\\text{ with }0<\\gamma\\leq 1.\n\\end{equation}\nWe consider a cut-off function $\\eta\\in C^{\\infty}(\\R)$ with\n\\[\n\\eta(s)\\equiv 1 \\quad\\text{for } s\\leq \\frac{\\delta}{2},\\quad\n\\eta(s)\\equiv 0 \\quad\\text{for } s\\geq \\delta.\n\\]\nWe define\n\\begin{equation}\n\\label{zklcortada}\n{\\bf z}_{kl}^n(y):=\\eta(2\\,\\varepsilon_n\\,\\vert y-\\zeta_{k,n}^{\\prime}\\vert)\\,z_{kl}^n(y).\n\\end{equation}\nTesting \\eqref{linpartn} against ${\\bf z}_{kl}^n$ and integrating by parts twice we get the following relation\n\\[\n\\sum_{i,j}c_{ij}^n\\,\n\\int_{\\Omega_{\\varepsilon_n}}\nw_{\\mu_{i,n}^{\\prime},\\zeta_{i,n}^{\\prime}}^4\\,z_{ij}^n\\,{\\bf z}_{kl}^n\n=\n\\int_{\\Omega_{\\varepsilon_n}}\nL({\\bf z}_{kl}^n)\\,\\phi_n-\\int_{\\Omega_{\\varepsilon_n}}h_n\\,{\\bf z}_{kl}^n.\n\\]\nSince $z_{kl}^n$ lies in the kernel of\n\\[\nL_{k}:=\\Delta +5w_{\\mu_k^{\\prime},\\zeta_k^{\\prime}}^4,\n\\]\nwriting 
$L({\\bf z}_{kl}^n)=L({\\bf z}_{kl}^n)-L_k(z_{kl}^n)$,\nit is easy to check that\n\\[\n\\Bigl\\vert\n\\int_{\\Omega_{\\varepsilon_n}}\nL({\\bf z}_{kl}^n)\\,\\phi_n\n\\Bigr\\vert =o(1)\\, \\Vert \\phi_n\\Vert_{\\ast}\\quad\\text{for } l=1,2,3,4.\n\\]\nTo obtain the last estimate, we take into account the effect of the Laplace operator on the cut-off function $\\eta$ which is used to define ${\\bf z}_{kl}^n$ and the effect of the difference between the two potentials $V^4$ and $w_{\\mu_k^{\\prime},\\zeta_k^{\\prime}}^4$ which appear respectively in the definition of $L$ and $L_{k}$.\n\nOn the other hand, a straightforward computation yields\n\\[\n\\Bigl\\vert\n\\int_{\\Omega_{\\varepsilon_n}}h_n\\,{\\bf z}_{kl}^n\n\\Bigr\\vert \\leq C\\, \\Vert h_n\\Vert_{\\ast\\ast}.\n\\]\nFinally, since\n\\[\n\\int_{\\Omega_{\\varepsilon_n}}\nw_{\\mu_{i,n}^{\\prime},\\zeta_{i,n}^{\\prime}}^4\\,z_{ij}^n\\,{\\bf z}_{kl}^n=C\\,\\delta_{i,k}\\,\\delta_{j,l}+o(1) \\quad \\text{with } \\delta_{i,k}=\n\\left\\{\n\\begin{array}{ll}\n1 &\\text{ if }i=k\\\\\n0 &\\text{ if }i\\neq k,\n\\end{array}\n\\right.\n\\]\nwe conclude that\n\\[\n\\lim_{n\\rightarrow\\infty}c_{ij}^n=0, \\quad \\text{for all }i,j.\n\\]\nNow, let $y_n \\in \\Omega_{\\varepsilon_n}$ be such that $\\vert\\phi_n(y_n)\\vert=\\Vert \\phi_n\\Vert_{\\infty}$, so that $\\vert\\phi_n\\vert$ attains its absolute maximum value at this point.\nSince $\\Vert \\phi_n\\Vert_{\\ast}=1,$ there is a radius $R>0$ and $i\\in\\{1,\\ldots,k\\}$ such that, for $n$ large enough,\n\\[\n\\vert y_n-\\zeta_{i,n}^\\prime\\vert\\leq R.\n\\]\nDefining $\\tilde{\\phi}_n(y)=\\phi_n(y+\\zeta_{i,n}^\\prime)$ and using elliptic estimates together with the Arzel\\`a--Ascoli theorem, we have that, up to a subsequence, $\\tilde{\\phi}_n$ converges uniformly on compact sets to a nontrivial bounded solution $\\tilde{\\phi}$ of\n\\begin{equation*}\n\\left\\{\n\\begin{array}{rlll}\n\\Delta \\,\\tilde{\\phi}+5\\,w_{\\mu_{i}^{\\prime},0}^4\\,\\tilde{\\phi}&=&0&\\text{in 
}\\R^3,\n\\\\\n\\int_{\\R^3}w_{\\mu_{i}^{\\prime},0}^4\\,z_{0,j}\\tilde{\\phi}&=&0 &\\text{for }j=1,2,3,4,\n\\end{array}\n\\right.\n\\end{equation*}\nwhich is bounded by a constant times $\\vert y\\vert^{-1}$.\nHere\n$z_{0,j}$ is defined as\nin \\eqref{zij} taking $\\zeta_{i}^{\\prime}=0$ and $\\mu_{i}^{\\prime}:=\\lim_{n\\rightarrow\\infty}\\mu_{i,n}^{\\prime}$ (up to subsequence).\nFrom the assumptions, it follows that $\\delta\\leq \\mu_{i}^{\\prime}\\leq \\delta^{-1}$.\n\nNow, taking into account that\nthe solution $w_{\\mu_{i}^{\\prime},0}$ is nondegenerate, the above implies that\n$\\tilde{\\phi}=\\sum_{j=1}^4 \\alpha_j\\,z_{0,j}(y)$\nand then, from the orthogonality conditions,\nwe can deduce that $\\alpha_j=0$ for $j=1,2,3,4.$\nFrom here we obtain $\\tilde{\\phi}\\equiv 0$, which contradicts \\eqref{contra}.\nThis proves that $\\lim_{n\\rightarrow\\infty}\\Vert \\phi_n\\Vert_{\\infty}=0$.\n\nNext we shall establish that $\\Vert \\phi_n\\Vert_{\\ltimes}\\rightarrow 0$, where\n\\begin{align*}\n\\Vert \\phi \\Vert_{\\ltimes}\n&=\\sup_{y\\in\\Omega_\\varepsilon}\n\\omega(y)^{-\\nu}\\,\\vert \\phi(y)\\vert .\n\\end{align*}\nDefining\n\\[\n\\psi_n(x)=\\frac{1}{\\varepsilon_n^\\nu}\\,\\phi_n\\Bigl(\\frac{x}{\\varepsilon_n}\\Bigr),\\quad x\\in\\Omega,\n\\]\nwe have that $\\psi_n$ satisfies\n\\begin{equation*}\n\\left\\{\n\\begin{array}{rllll}\n\\Delta \\,\\psi_n+\\lambda \\,\\psi_n&=&\\varepsilon_n^{-(2+\\nu)}\\Bigl\\{\n&-5\n\\left( \\varepsilon_n^{1\/2}\n\\sum_{i=1}^k\nU_{\\mu_{i,n},\\zeta_{i,n}}\\right)^4\\,\\varepsilon_n^{\\nu}\\,\\psi_n\\\\\n&&&+\ng_n\n+\\sum_{i,j}c_{ij}^n\\,\\varepsilon_n^{2}\\,w_{\\mu_{i,n},\\zeta_{i,n}}^4\\,Z_{ij}^n\\Bigr\\}\n&\\text{in } \\Omega,\n\\\\\n\\psi_n&=&0&&\\text{on }\\partial\\Omega,\n\\end{array}\n\\right.\n\\end{equation*}\nwhere $\\mu_{i,n}=\\varepsilon_n\\,\\mu_{i,n}^{\\prime}$,\n$\\zeta_{i,n}=\\varepsilon_n\\,\\zeta_{i,n}^{\\prime}$,\n$g_n(x)=h_n\\bigl(\\frac{x}{\\varepsilon_n}\\bigr)$ and 
$Z_{ij}^n(x)=z_{ij}^n\\bigl(\\frac{x}{\\varepsilon_n}\\bigr)$.\n\n\nLet $\\zeta_i\\in\\Omega$ be such that, after passing to a subsequence, $\\vert\\zeta_{i,n}-\\zeta_i\\vert\\leq\\frac{\\delta}{4}$ for all $n\\in\\mathbb{N}$. Notice that, by the assumptions, $B_{\\frac{\\delta}{4}}(\\zeta_i)\\subset\\Omega$ and $B_{\\frac{\\delta}{4}}(\\zeta_i)\\cap B_{\\frac{\\delta}{4}}(\\zeta_j)=\\emptyset$ for $i\\not=j$.\nFrom the assumption $\\Vert \\phi_n\\Vert_{*} = 1$ we deduce that\n\\begin{equation*}\n\\vert \\psi_n(x)\\vert\\leq \\biggl(\n\\sum_{i=1}^k\n\\frac{1}{\\varepsilon_n+\\vert x-\\zeta_{i,n}\\vert} \\biggr)^{\\nu} ,\n\\quad \\forall x\\in \\Omega.\n\\end{equation*}\nSince $\\Vert h_n\\Vert_{\\ast\\ast}\\rightarrow 0$,\n\\[\n\\vert g_n(x)\\vert\\leq o(1)\\,\\varepsilon_n^{2+\\nu}\\,\\biggl(\n\\sum_{i=1}^k\n\\frac{1}{\\varepsilon_n+\\vert x-\\zeta_{i,n}\\vert}\n\\biggr)^{2+\\nu}\n\\quad \\text{for }x\\in \\Omega.\n\\]\nFrom Lemma \\ref{Uiexp} we know that,\naway from\n$\\zeta_{i,n}$,\n\\[\nU_{\\mu_{i,n},\\zeta_{i,n}}(x)=C\\,\\varepsilon_{n}^{1\/2}\\,(1+o(1))\\,G_{\\lambda}(x,\\zeta_{i,n}).\n\\]\nMoreover, it is easy to see that also away from\n$\\zeta_{i,n}$,\n\\[\n\\varepsilon_n^{-\\nu}\\,\n\\sum_{j=1}^4 c_{ij}^n\\,w_{\\mu_{i,n},\\zeta_{i,n}}^4\\,Z_{ij}^n=o(1)\\quad \\text{as } \\varepsilon_n\\rightarrow 0,\n\\]\nand so, a diagonal convergence argument allows us to conclude that $\\psi_n(x)$ converges uniformly on compact subsets of $\\bar{\\Omega}\\setminus\\{\\zeta_1,\\ldots,\\zeta_k\\}$ to $\\psi(x)$, a solution of\n\\[\n\\Delta \\,\\psi+\\lambda \\,\\psi=0\n\\quad\\text{in } \\Omega\\setminus\\{\\zeta_1,\\ldots,\\zeta_k\\},\n\\quad\n\\psi=0\\quad\\text{on }\\partial\\Omega ,\n\\]\nwhich satisfies\n\\begin{equation*}\n\\vert \\psi(x)\\vert\\leq \\biggl(\n\\sum_{i=1}^k\n\\frac{1}{\\vert x-\\zeta_{i}\\vert} \\biggr)^{\\nu} ,\n\\quad \\forall x\\in \\Omega.\n\\end{equation*}\nThus $\\psi$ has a removable singularity at all $\\zeta_i$, 
$i=1,\\ldots,k$, and we conclude that $\\psi(x)=0$. Hence, over compacts of $\\bar{\\Omega}\\setminus\\{\\zeta_1,\\ldots,\\zeta_k\\}$, $\\vert \\psi_n(x)\\vert=o(1).$\nIn particular, this implies that, for all\n$x\\in\\Omega\\setminus\n\\bigl( \\cup_{i=1}^k\nB_{\\frac{\\delta}{4}}(\\zeta_{i,n}) \\bigr)$,\n$\n\\vert \\psi_n(x)\\vert\\leq o(1).\n$\nThus we have\n\\begin{equation}\n\\label{57}\n\\vert \\phi_n(y)\\vert\\leq o(1)\\,\\varepsilon_n^\\nu,\n\\quad \\text{for all } y\\in \\Omega_{\\varepsilon_n}\\setminus\\Bigl(\n\\bigcup_{i=1}^k\nB_{\\frac{\\delta}{4\\varepsilon_n}}(\\zeta_{i,n}^\\prime)\n\\Bigr).\n\\end{equation}\nNow, consider a fixed number $M$, such that $M<\\frac{\\delta}{4\\,\\varepsilon_n}$, for all $n\\in\\mathbb{N}$.\n\n\n\\noindent\nSince $\\Vert \\phi_n\\Vert_\\infty=o(1)$,\n\\begin{equation}\n\\label{58}\n\\bigl(1+|y-\\zeta_{i,n}^\\prime|\\bigr)^{\\nu}\n\\vert \\phi_n(y)\\vert\\leq o(1) \\quad\\text{for all }\ny\\in \\overline{B_{M}(\\zeta_{i,n}^\\prime)}.\n\\end{equation}\nWe claim that\n\\begin{equation}\n\\label{59}\n\\bigl(1+|y-\\zeta_{i,n}^\\prime|\\bigr)^{\\nu}\n\\vert \\phi_n(y)\\vert\\leq o(1) \\quad\\text{for all }\ny\\in A_{\\varepsilon_n,M},\n\\end{equation}\nwhere\n $A_{\\varepsilon_n,M}:=B_{\\frac{\\delta}{4\\,\\varepsilon_n}}(\\zeta_{i,n}^\\prime)\\setminus\\overline{B_{M}(\\zeta_{i,n}^\\prime)}$.\n\nThe proof of this assertion relies on the fact that the operator $L$ satisfies the weak maximum principle in $A_{\\varepsilon_n,M}$ in the following sense: if $u$ is bounded, continuous in $\\overline{A_{\\varepsilon_n,M}}$, $u\\in H^1(A_{\\varepsilon_n,M})$ and satisfies $L(u)\\geq 0$ in $A_{\\varepsilon_n,M}$ and $u\\leq 0$ in $\\partial\\,A_{\\varepsilon_n,M},$ then, choosing a larger $M$ if necessary, $u\\leq 0$ in $A_{\\varepsilon_n,M}$. 
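As an illustrative aside (not part of the argument), the sign of the radial Laplacian of the barrier $\vert y-\zeta_{i,n}^\prime\vert^{-\nu}$ can be checked numerically: in $\R^3$ one has $\Delta r^{-\nu}=-\nu(1-\nu)\,r^{-\nu-2}<0$ for $0<\nu<1$. The snippet below is a minimal sketch; the sample values of $\nu$ and $r$ are chosen arbitrarily.

```python
# Numerical sanity check (not part of the proof): in R^3 the Laplacian of a
# radial function f(r) is f''(r) + (2/r) f'(r); for f(r) = r^{-nu} this equals
# -nu(1-nu) r^{-nu-2}, which is negative for 0 < nu < 1.  The sample values
# nu = 0.3 and r0 = 2.0 are arbitrary.

def radial_laplacian(f, r, h=1e-4):
    """Finite-difference radial Laplacian in R^3: f'' + (2/r) f'."""
    second = (f(r + h) - 2.0 * f(r) + f(r - h)) / h**2
    first = (f(r + h) - f(r - h)) / (2.0 * h)
    return second + (2.0 / r) * first

nu = 0.3
u = lambda r: r**(-nu)

r0 = 2.0
lhs = radial_laplacian(u, r0)
rhs = -nu * (1.0 - nu) * r0**(-nu - 2.0)

assert abs(lhs - rhs) < 1e-5   # the identity holds up to discretization error
assert rhs < 0                 # the sign that makes r^{-nu} a supersolution
```

Once the lower-order terms $\varepsilon_n^2\lambda+5V^4$ are dominated (they are $O(\vert y-\zeta_{i,n}^\prime\vert^{-4})$ in the annulus), this strict negativity is exactly what drives the comparison argument below.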
We remark that this result is just a consequence of the fact that $L(\\vert y-\\zeta_{i,n}^\\prime\\vert^{-\\nu})\\leq 0$ in $A_{\\varepsilon_n,M}$ provided that $M$ is large enough but independent of $n$.\n\n\\noindent\nNext, we shall define an appropriate\nbarrier function. First we\nobserve that there exists $\\eta_n^1\\rightarrow 0$, as $\\varepsilon_n\\rightarrow 0$, such that\n\\begin{equation}\n\\label{Lphi}\n\\vert y-\\zeta_{i,n}^{\\prime}\\vert^{2+\\nu}\\,\\vert L(\\phi_n)\\vert\\leq \\eta_n^1 \\quad\n\\text{in } A_{\\varepsilon_n,M}.\n\\end{equation}\nOn the other hand, from\n\\eqref{57} we deduce the\nexistence of $\\eta_n^2\\rightarrow 0$, as $\\varepsilon_n\\rightarrow 0$, such that\n\\begin{equation}\n\\label{extrad}\n\\varepsilon_n^{-\\nu}\\vert \\phi_n(y)\\vert\\leq \\eta_n^2 \\quad \\text{ if } \\vert y-\\zeta_{i,n}^\\prime\\vert =\\frac{\\delta}{4\\,\\varepsilon_n},\n\\end{equation}\nand from \\eqref{58} we deduce the existence of $\\eta_n^3\\rightarrow 0$, as $\\varepsilon_n\\rightarrow 0$, such that\n\\begin{equation}\n\\label{intrad}\nM^{\\nu}\\vert \\phi_n(y)\\vert\\leq \\eta_n^3, \\quad \\text{if } \\vert y-\\zeta_{i,n}^\\prime\\vert =M.\n\\end{equation}\nSetting $\\eta_n=\\max\\{ \\eta_n^1, \\eta_n^2, \\eta_n^3\\}$ we find that\nthe function\n\\[\n\\varphi_n(y)=\\eta_n\\,\\vert y-\\zeta_{i,n}^{\\prime}\\vert^{-\\nu}\n\\]\ncan be used for the intended comparison argument.\n\nIndeed, for each $i=1,\\ldots,k$ we can write\n\\begin{align*}\nL(\\vert y-\\zeta_{i,n}^{\\prime}\\vert^{-\\nu})\n&=-\\Bigl(\n\\nu\\,(1-\\nu)-\\bigl(\\varepsilon_n^2\\,\\lambda\n+5V^4\\bigr)\\,\\vert y-\\zeta_{i,n}^{\\prime}\\vert^2\n\\Bigr)\n\\,\\vert y-\\zeta_{i,n}^{\\prime}\\vert^{-(2+\\nu)}\\\\\n&\\leq -\\frac{\\nu\\,(1-\\nu)}{2} \\,\\vert y-\\zeta_{i,n}^{\\prime}\\vert^{-(2+\\nu)}\n\\end{align*}\nprovided $\\vert y-\\zeta_{i,n}^{\\prime}\\vert$ is large enough,\nand then\n\\begin{equation*}\nL(\\varphi_n)\\leq -\\frac{\\nu\\,(1-\\nu)}{2} \\,\\,\\eta_n\\,\\vert 
y-\\zeta_{i,n}^{\\prime}\\vert^{-(2+\\nu)} \\quad\n\\text{in } A_{\\varepsilon_n,M}\n\\end{equation*}\nprovided $M$ is fixed large enough (independently of $n$).\nThis together with \\eqref{Lphi} yields that $\n\\vert L(\\phi_n)\\vert\\leq - C L(\\varphi_n)\n$ in $A_{\\varepsilon_n,M}$. Moreover, it follows from\n\\eqref{extrad} and \\eqref{intrad} that\n$\n\\vert \\phi_n(y)\\vert\\leq C \\varphi_n(y)\n$ on $\\partial \\,A_{\\varepsilon_n,M}$\nand thus the maximum principle allows us to conclude that \\eqref{59} holds.\n\nThus, we have shown that $\\|\\phi_n \\|_{\\ltimes} \\to 0$ as $n\\to\\infty$.\nA standard argument using an appropriate scaling and elliptic estimates shows that $\\|\\phi_n\\|_*\\to 0$ as $n\\to \\infty$, which contradicts the assumption $\\Vert \\phi_n \\Vert_*=1$.\n\\end{proof}\n\n\\begin{proof}[Proof of Proposition \\ref{solvability}]\nLet us consider the space\n\\[\nH=\\Bigl\\{\n\\phi\\in H_0^1(\\Omega_\\varepsilon): \\int_{\\Omega_\\varepsilon}w_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}^4\\,z_{ij}\\,\\phi=0, \\, i=1,\\ldots,k,\\, j=1,2,3,4\n\\Bigr\\}\n\\]\nendowed with the inner product\n\\[\n[\\phi,\\psi]=\\int_{\\Omega_\\varepsilon}\n\\nabla \\phi \\cdot\\nabla \\psi\n-\\varepsilon^2\\,\\lambda \\int_{\\Omega_\\varepsilon}\n\\phi \\,\\psi.\n\\]\nProblem \\eqref{linpart} expressed in weak form is equivalent to that of finding a $\\phi\\in H$ such that\n\\[\n[\\phi,\\psi]=\\int_{\\Omega_\\varepsilon}\\Bigl[5V^4 \\phi -h\\Bigr]\\,\\psi\\quad\\text{for all }\\psi\\in H.\n\\]\nWith the aid of Riesz's representation theorem, this equation can be rewritten in $H$ in the operational form $\\phi = K(\\phi) + \\tilde{h}$, for certain $\\tilde{h}\\in H$, where $K$ is a compact operator in $H$.\nFredholm's alternative guarantees unique solvability of this problem for any $\\tilde{h}$ provided that the homogeneous equation $\\phi=K(\\phi)$ has only the zero solution in $H$. 
Let us observe that this last equation is precisely equivalent to \\eqref{linpart} with $h=0$. Thus existence of a unique solution follows.\nEstimate \\eqref{cotas} can be deduced from Lemma~\\ref{lemmaCotaApriori}.\n\\end{proof}\n\nIt is important, for later purposes, to understand the differentiability of the operator\n$T:h\\mapsto \\phi$ with respect to the variables $\\mu_i^{\\prime}$ and $\\zeta_i^{\\prime}$, $i=1,\\ldots,k$,\nfor $\\varepsilon$ fixed. That is, only the parameters $\\mu_i$ and $\\zeta_i$ are allowed to vary.\n\n\\begin{prop}\n\\label{propDerLinearOp}\nLet $\\mu^{\\prime}:=(\\mu_1^{\\prime},\\ldots,\\mu_k^{\\prime})$ and $\\zeta^{\\prime}:=(\\zeta_1^{\\prime},\\ldots,\\zeta_k^{\\prime})$.\nUnder the conditions of Proposition \\ref{solvability}, the map $T$ is of class $C^1$ and the derivative $D_{\\mu^{\\prime},\\,\\zeta^{\\prime}}\\,D_{\\mu^{\\prime}}T$ exists and is a continuous function.\nBesides, we have\n\\[\n\\Vert D_{\\mu^{\\prime},\\,\\zeta^{\\prime}}T(h)\\Vert_{\\ast}\n+\\Vert D_{\\mu^{\\prime},\\,\\zeta^{\\prime}}\\,D_{\\mu^{\\prime}}T(h)\\Vert_{\\ast}\\leq C\n\\Vert h\\Vert_{\\ast\\ast}.\n\\]\n\\end{prop}\n\\begin{proof}\nLet us begin with differentiation with respect to $\\zeta^\\prime$. 
Since $\\phi$ solves problem \\eqref{linpart},\nformal differentiation yields that $X_{n}:=\\partial_{(\\zeta^\\prime)_n}\\phi$, $n=1,\\ldots,3k$, should satisfy\n\\begin{equation*}\nL(X_{n})=\n-5\\,\\bigl[ \\partial_{(\\zeta^\\prime)_n} V^4\\bigr]\\,\\phi\n+\\sum_{i,j}c_{ij}^n\\,w_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}^4\\,z_{ij}\n+\\sum_{i,j}c_{ij}\\,\\partial_{(\\zeta^\\prime)_n}\\bigl[w_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}^4\\,z_{ij}\\bigr]\n\\quad\\text{in } \\Omega_\\varepsilon\n\\end{equation*}\ntogether with\n\\begin{equation}\n\\label{ort}\n\\int_{\\Omega_\\varepsilon}X_{n}\\,w_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}^4\\,z_{ij}+\\int_{\\Omega_\\varepsilon}\\phi\\,\\partial_{(\\zeta^\\prime)_n}\\bigl[w_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}^4\\,z_{ij}\\bigr]=0\\quad\\text{for } i=1,\\ldots,k,\\; j=1,2,3,4 ,\n\\end{equation}\nwhere $c_{ij}^n=\\partial_{(\\zeta^\\prime)_n}c_{ij}$.\n\n\n\n\\noindent\nLet us consider constants $b_{ml}$\nsuch that\n\\[\n\\int_{\\Omega_\\varepsilon}\n\\Bigl(\nX_{n}-\\sum_{m,l}b_{ml} \\,{\\bf z}_{ml}\n\\Bigr)\\,w_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}^4\\,z_{ij}=0,\n\\]\nwhere ${\\bf z}_{ml}$ is defined in \\eqref{zklcortada}.\nFrom \\eqref{ort} we get\n\\begin{equation*}\n\\sum_{m,l}b_{ml}\n\\int_{\\Omega_\\varepsilon}w_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}^4\\,z_{ij}\\, {\\bf z}_{ml}=-\\int_{\\Omega_\\varepsilon}\\partial_{(\\zeta^\\prime)_n}\\bigl[w_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}^4\\,z_{ij}\\bigr]\\,\\phi\n\\end{equation*}\nfor $i=1,\\ldots,k$, $j=1,2,3,4$.\nSince this system is diagonally dominant with uniformly bounded coefficients, we see that it is uniquely solvable and that\n\\[\nb_{ml}=O(\\Vert \\phi\\Vert_\\ast)\n\\]\nuniformly in $\\zeta^\\prime$ and $\\mu^\\prime$.\nOn the other hand, it is not hard to check that\n\\[\n\\bigl\\Vert \\phi\\,\\partial_{(\\zeta^\\prime)_n} V^4 \\bigr\\Vert_{\\ast\\ast}\\leq C\\,\n\\Vert \\phi\\Vert_\\ast.\n\\]\nRecall now that from Proposition \\ref{solvability} $c_{ij}=O(\\Vert 
h\\Vert_{\\ast\\ast})$. Since, in addition,\n\\[\\Bigl\\vert\\partial_{(\\zeta^\\prime)_n}\\bigl[w_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}^4\\, z_{ij}(y)\\bigr]\\Bigr\\vert\\leq C\\, \\bigl\\vert y-\\zeta^\\prime_i\\bigr\\vert^{-7},\\]\nwe get\n\\[\\Bigl\\Vert\n\\sum_{i,j}c_{ij}\\,\\partial_{(\\zeta^\\prime)_n}\\bigl[\nw_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}^4\\, z_{ij}\n\\bigr]\\Bigr\\Vert_{\\ast\\ast}\\leq C\\,\n\\Vert h\\Vert_{\\ast\\ast}.\n\\]\nSetting $X=X_{n}-\\sum_{m,l}b_{ml} \\,{\\bf z}_{ml}$, we have that $X$ satisfies\n\\[\nL(X)=f+\\sum_{i,j}c_{ij}^n\\,w_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}^4\\,z_{ij}\\quad\\text{in }\\Omega_\\varepsilon,\n\\]\nwhere\n\\[\nf=\\sum_{m,l}b_{ml} \\,L({\\bf z}_{ml})-5\\,\\phi\\,\\partial_{(\\zeta^\\prime)_n}V^4\n+\\sum_{i,j}c_{ij}\\,\\partial_{(\\zeta^\\prime)_n}\\bigl[w_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}^4\\,z_{ij}\\bigr].\n\\]\nThe above estimates, together with the fact that $\\Vert \\phi\\Vert_\\ast\\leq C\\,\n\\Vert h\\Vert_{\\ast\\ast}$, imply that\n\\[\\Vert f \\Vert_{\\ast\\ast}\\leq C\\,\n\\Vert h\\Vert_{\\ast\\ast}.\\]\nMoreover, since $X\\in H_0^1(\\Omega_\\varepsilon)$\n and\n\\[\n\\int_{\\Omega_\\varepsilon}\nX\\,w_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}^4\\,z_{ij}=0\n\\quad\\text{for all }i,j,\n\\]\nwe have that $X=T(f)$.\nThis computation is not just formal. Indeed,\narguing directly by definition, one gets that\n\\[\n\\partial_{(\\zeta^\\prime)_n}\\phi=\\sum_{m,l}b_{ml} \\,{\\bf z}_{ml}+T(f)\\quad\\text{and}\\quad\n\\Vert \\partial_{(\\zeta^\\prime)_n}\\phi\\Vert_{\\ast}\\leq C \\,\n\\Vert h\\Vert_{\\ast\\ast}.\n\\]\nThe corresponding result for differentiation with respect to the $\\mu_i$'s follows similarly. 
This concludes the proof.\n\\end{proof}\n\n\n\\section{The nonlinear problem}\n\\label{sectNonlinear}\n\nIn this section we consider the nonlinear problem\n\\eqref{linearizedproblemproj}, namely,\n\\begin{equation}\n\\label{linearizedproblemproj2}\n\\left\\{\n\\begin{array}{rlll}\nL(\\phi)&=&-N(\\phi)-E+\\sum_{i,j}c_{ij}\\,w_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}^4\\,z_{ij}\n&\\text{in } \\Omega_\\varepsilon,\n\\\\\n\\phi&=&0&\\text{on }\\partial\\Omega_\\varepsilon,\n\\\\\n\\int_{\\Omega_\\varepsilon}w_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}^4\\,z_{ij}\\,\\phi&=&0&\\text{for all }i,j ,\n\\end{array}\n\\right.\n\\end{equation}\nand show that it has a small solution $\\phi$ for $\\varepsilon>0$ small enough.\n\n\n\nWe first obtain an estimate of the error $E$ defined in \\eqref{error}.\nAssuming \\eqref{parametros} it is possible to show that $E$ satisfies\n$\n\\|E\\|_{**} \\leq C \\varepsilon.\n$\nHowever, for the proof of the main theorem, we require a stronger estimate. In order to find it, we need to impose certain extra assumptions on the parameters.\n\nLet us use the notation\n\\[\n\\mu^{\\frac{1}{2}} =\n\\left[\n\\begin{matrix}\n\\mu_1^{\\frac{1}{2}}\n\\\\\n\\vdots\n\\\\\n\\mu_k^{\\frac{1}{2}}\n\\end{matrix}\n\\right] \\in \\R^k.\n\\]\n\n\\begin{lemma}\n\\label{lemma-error}\nAssuming that the parameters $\\mu_i,\\zeta_i$ satisfy \\eqref{parametros}, where $\\delta>0$ is fixed small, we have the existence of\n$\\varepsilon_1>0$, $C>0$, such that for all $\\varepsilon \\in (0,\\varepsilon_1)$\n\\[\n\\| E \\|_{**} \\leq C ( \\varepsilon^{\\frac{1}{2}} |M_\\lambda(\\zeta) \\mu^{\\frac{1}{2}}| + \\varepsilon^2) .\n\\]\n\\end{lemma}\n\\begin{proof}\nWe recall that\n\\begin{align*}\nE(y)=\\Bigl(\n\\sum_{i=1}^k\n\\bigl[\nw_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}(y)+\\varepsilon^{\\frac{1}{2}}\\pi_i(\\varepsilon\\, y)\n\\bigr]\n\\Bigr)^5\n- \\sum_{i=1}^k\nw_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}^5(y)\n,\\quad y\\in \\Omega_{\\varepsilon} .\n\\end{align*}\nFirst we note 
that\n\\[\n|E(y)|\\leq C \\varepsilon^5,\\quad \\text{if }\ny \\in \\widetilde \\Omega_\\varepsilon := \\Omega_\\varepsilon \\setminus \\bigcup_{j=1}^k B_{\\delta\/\\varepsilon}(\\zeta_j') ,\n\\]\nand this implies that\n\\begin{align}\n\\label{EcomplementB}\n\\sup_{y \\in \\widetilde \\Omega_\\varepsilon} \\omega(y)^{-(2+\\nu)} |E(y)| \\leq C \\varepsilon^{5-\\nu}.\n\\end{align}\nFor $y \\in B_{\\delta\/\\varepsilon}(\\zeta_i')$ and $j\\not=i$, thanks to Lemma~\\ref{lemma22} we have\n\\[\n\\varepsilon^{\\frac{1}{2}}\\pi_i(\\varepsilon\\, y) = O(\\varepsilon), \\quad\nw_{\\mu_j^{\\prime},\\zeta_j^{\\prime}}(y)+\\varepsilon^{\\frac{1}{2}}\\pi_j(\\varepsilon\\, y)\n= O(\\varepsilon).\n\\]\nHence, using Taylor's theorem and the fact that $\\mu_i = O ( \\varepsilon )$ (which follows from \\eqref{parametros}), we find that\n\\begin{align}\n\\nonumber\nE(y)\n& = 5 w_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}(y)^4\n\\Bigl( \\varepsilon^{\\frac{1}{2}}\\pi_i(\\varepsilon\\, y)\n+ \\sum_{j\\not=i}\\bigl[ w_{\\mu_j^{\\prime},\\zeta_j^{\\prime}}(y)+\\varepsilon^{\\frac{1}{2}}\\pi_j(\\varepsilon\\, y)\\bigr]\n\\Bigr) \\\\\n\\label{expansionE}\n& \\quad + O( w_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}(y)^3 \\varepsilon^2) + O(\\varepsilon^5)\n,\\quad \\text{for } y \\in B_{\\delta\/\\varepsilon}(\\zeta_i').\n\\end{align}\nNow, Lemma~\\ref{lemma22} guarantees that, for $y \\in B_{\\delta\/\\varepsilon}(\\zeta_i')$,\n\\begin{align}\n\\label{pi}\n\\pi_i(\\varepsilon y)\n&= -4\\pi \\alpha_3 \\mu_i^{\\frac{1}{2}} H_\\lambda(\\varepsilon y,\\zeta_i)\n+O( \\mu_i^{\\frac{3}{2}} )\n=-4\\pi \\alpha_3 \\mu_i^{\\frac{1}{2}} g_\\lambda(\\zeta_i)\n+O( \\varepsilon^{\\frac{3}{2}} ) .\n\\end{align}\nSimilarly, Lemma~\\ref{Uiexp} yields that, for $y \\in B_{\\delta\/\\varepsilon}(\\zeta_i')$ and $j\\not=i$,\n\\begin{align}\n\\nonumber\nw_{\\mu_j^{\\prime},\\zeta_j^{\\prime}}(y)+\\varepsilon^{\\frac{1}{2}}\\pi_j(\\varepsilon\\, y)\n&= V_j(y) = \\varepsilon^{ \\frac{1}{2}} U_j( \\varepsilon y)\n\\\\\n\\nonumber\n& = 
4\\pi\\,\\alpha_3 \\varepsilon^{ \\frac{1}{2}} \\mu_j^{\\frac{1}{2}} \\,G_{\\lambda}(\\varepsilon y ,\\zeta_j)+O(\\mu_i^{\\frac{5}{2}-\\sigma} )\n\\\\\n\\label{vJ}\n&= 4\\pi\\,\\alpha_3\\varepsilon^{ \\frac{1}{2}} \\mu_j^{\\frac{1}{2}}\\,G_{\\lambda}(\\zeta_i ,\\zeta_j) + O(\\varepsilon^2).\n\\end{align}\nUsing \\eqref{expansionE}, along with \\eqref{pi} and \\eqref{vJ}, we find that\n\\begin{align}\n\\nonumber\nE(y)\n& = 20\\pi \\alpha_3 \\varepsilon^{\\frac{1}{2}}\nw_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}(y)^4\n\\Bigl(\n-\\mu_i^{\\frac{1}{2}} g_\\lambda(\\zeta_i)\n+\\sum_{j\\not=i} \\mu_j^{\\frac{1}{2}} G_\\lambda(\\zeta_i,\\zeta_j)\n\\Bigr) \\\\\n\\label{expansionE2}\n& \\quad + O( w_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}(y)^3 \\varepsilon^2) + O(\\varepsilon^5) ,\n\\quad \\text{for } y \\in B_{\\delta\/\\varepsilon}(\\zeta_i'),\n\\end{align}\nwhich implies\n\\[\n\\sup_{y \\in B_{\\delta\/\\varepsilon}(\\zeta_i')}\n\\omega(y)^{-(2+\\nu)} |E(y)|\n\\leq C \\varepsilon^{\\frac{1}{2}}\n\\bigl|\n- \\mu_i^{\\frac{1}{2}} g_\\lambda(\\zeta_i) + \\sum_{j\\not=i} \\mu_j^{\\frac{1}{2}} \\,G_{\\lambda}(\\zeta_i ,\\zeta_j)\n\\bigr| + C \\varepsilon^2.\n\\]\nThis together with \\eqref{EcomplementB} yields the desired estimate.\n\\end{proof}\n\nWe note that just assuming that $\\mu_i$, $\\zeta_i$ satisfy \\eqref{parametros} we have $|M_\\lambda(\\zeta) \\mu^{\\frac{1}{2}}| \\leq C \\varepsilon^{\\frac{1}{2}}$ and hence\n\\begin{align}\n\\label{estE1}\n\\| E \\|_{**}\\leq C \\varepsilon.\n\\end{align}\nHowever, this estimate is not sufficient to prove the main theorem. 
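With the sign convention used for the quadratic form of $F_\\lambda$ in Section~\\ref{sectProof}, namely $(M_\\lambda(\\zeta))_{ii}=g_\\lambda(\\zeta_i)$ and $(M_\\lambda(\\zeta))_{ij}=-G_\\lambda(\\zeta_i,\\zeta_j)$ for $j\\neq i$, the bracket appearing in the last estimate of the proof is, up to sign, a component of $M_\\lambda(\\zeta)\\,\\mu^{\\frac{1}{2}}$:\n\\[\n-\\mu_i^{\\frac{1}{2}}\\, g_\\lambda(\\zeta_i)+\\sum_{j\\neq i}\\mu_j^{\\frac{1}{2}}\\, G_\\lambda(\\zeta_i,\\zeta_j)\n=-\\bigl( M_\\lambda(\\zeta)\\,\\mu^{\\frac{1}{2}}\\bigr)_i .\n\\]\nMoreover, since \\eqref{parametros} gives $\\mu_i^{\\frac{1}{2}}=O(\\varepsilon^{\\frac{1}{2}})$ and the entries of $M_\\lambda(\\zeta)$ remain bounded for $\\zeta\\in\\Omega_\\delta^k$, the bound $|M_\\lambda(\\zeta)\\, \\mu^{\\frac{1}{2}}|\\leq C\\, \\varepsilon^{\\frac{1}{2}}$ used above follows.\n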
An essential part of the argument is to work with $\\zeta$ and $\\mu$ so that $|M_\\lambda(\\zeta)\\, \\mu^{\\frac{1}{2}}|$ is of smaller order than $\\varepsilon^{\\frac{1}{2}}$.\n\n\n\\medskip\n\n\n\\begin{lemma}\n\\label{lema2}\nAssume that $\\zeta_i'$, $\\mu_i'$ satisfy \\eqref{parametros} where $\\delta>0$ is fixed small.\nThen there exist\n$\\varepsilon_1>0$, $C>0$, such that for all $\\varepsilon \\in (0,\\varepsilon_1)$ problem \\eqref{linearizedproblemproj2} has a unique solution $\\phi$ that satisfies\n\\begin{align}\n\\label{estPhi}\n\\|\\phi\\|_*\\leq C ( \\varepsilon^{\\frac{1}{2}} |M_\\lambda(\\zeta) \\mu^{\\frac{1}{2}}| + \\varepsilon^2) .\n\\end{align}\n\\end{lemma}\n\\begin{proof}\nIn order to find a solution to problem \\eqref{linearizedproblemproj2} it is sufficient to solve the fixed point problem\n\\[\n\\phi= A(\\phi),\n\\]\nwhere\n\\begin{align}\n\\label{def-A}\nA(\\phi) = - T(N(\\phi)+E) ,\n\\end{align}\nand $T$ is the linear operator defined in Proposition \\ref{solvability}.\n\n\nNow, for a small $\\gamma>0$, let us consider the ball\n$\\mathcal{F}_\\gamma:= \\{\\phi \\in C(\\overline{\\Omega}_{\\varepsilon}) : \\|\\phi\\|_*\\leq \\gamma \\}.$\nWe shall prove that\n$A$ is a contraction in $ \\mathcal{F}_\\gamma$ for small $\\varepsilon>0$.\nFrom Proposition \\ref{solvability}, we get\n\\[ \\|A(\\phi)\\|_*\\leq C\\left[ \\|N(\\phi)\\|_{**}+\\|E\\|_{**} \\right].\\]\nWriting the formula for $N$ as\n\\[\nN(\\phi)=20\\int_{0}^1(1-t)\\,[V+t\\phi]^3\\,dt \\,\\phi^2,\n\\]\nwe get the following estimates which are valid for\n$\\phi_1, \\phi_2\\in \\mathcal{F}_\\gamma$,\n\\[\n\\|N(\\phi_1)\\|_{**}\\leq C \\|\\phi_1\\|_*^2 ,\n\\]\n\\begin{equation}\n\\label{N}\n\\|N(\\phi_1)-N(\\phi_2)\\|_{**}\\leq C \\,\\gamma \\,\\|\\phi_1-\\phi_2\\|_*.\n\\end{equation}\nThus, we can deduce the existence of a constant $C>0$ such that\n\\[\n\\|A(\\phi)\\|_*\\leq C \\left[\\gamma^2 + \\|E\\|_{**} \\right] .\n\\]\nFrom Lemma~\\ref{lemma-error} we obtain the 
basic estimate $\\|E\\|_{**} \\leq C\\varepsilon$ with $C$ independent of the parameters $(\\mu,\\zeta)$ satisfying \\eqref{parametros}.\nChoosing $\\gamma = 2 C \\|E \\|_{**} $ we see that $A$ maps $\\mathcal{F}_\\gamma$ into itself if $\\gamma\\leq \\frac{1}{2C}$, which is true for $\\varepsilon>0$ small.\nUsing now \\eqref{N} we obtain\n\\[\n\\|A(\\phi_1) - A(\\phi_2) \\|_* \\leq C \\gamma \\|\\phi_1-\\phi_2\\|_*\n\\]\nfor $\\phi_1,\\phi_2 \\in \\mathcal{F}_\\gamma$. Therefore $A$ is a contraction in $ \\mathcal{F}_\\gamma$ for small $\\varepsilon>0$ and hence a unique fixed point of $A$ exists in this ball.\nThe solution $\\phi$ satisfies\n\\begin{align}\n\\label{estPhiNonlinear}\n\\|\\phi\\|_* \\leq \\gamma = 2 C \\|E\\|_{**} \\leq C ( \\varepsilon^{\\frac{1}{2}} |M_\\lambda(\\zeta) \\mu^{\\frac{1}{2}}| + \\varepsilon^2) ,\n\\end{align}\nby Lemma~\\ref{lemma-error}.\nThis concludes the proof of the lemma.\n\\end{proof}\n\n\n\n\n\n\\medskip\n\n\nWe shall next analyze the differentiability of the map $(\\zeta',\\mu')\\rightarrow \\phi$.\n\nFirst we claim that:\n\\begin{lemma}\nAssume that the parameters $\\mu_i,\\zeta_i$ satisfy \\eqref{parametros}. 
Then\n\\begin{align}\n\\label{estDerEMu}\n\\| D_{\\mu_i'} E \\|_{**} \\leq C \\varepsilon ,\n\\\\\n\\label{estDerEZeta}\n\\| D_{\\zeta_i^\\prime}\\, E \\|_{**} \\leq C \\varepsilon .\\end{align}\n\\end{lemma}\n\n\\begin{proof}\n\nFirst we observe that\n\\begin{align*}\n\\partial_{\\mu_i^\\prime}w_{\\mu_i',\\zeta_i'}&=\n\\frac{\\alpha_3 \\,\\bigl(|y-\\zeta_i'|^2-\\mu_i'^2\\bigr)}{2\\,\\sqrt{\\mu_i'}\\,\\bigl(|y-\\zeta_i'|^2+\\mu_i'^2\\bigr)^{\\frac{3}{2}}}, \\quad\nD_{\\zeta_i^\\prime}w_{\\mu_i',\\zeta_i'}=\n\\frac{\\alpha_3\\, \\sqrt{\\mu_i'}\\,(y-\\zeta_i')}{\\bigl(|y-\\zeta_i'|^2+\\mu_i'^2\\bigr)^{\\frac{3}{2}}} ,\n\\end{align*}\nand hence\n\\begin{equation}\n\\label{DMUZIE}\n|\\partial_{\\mu_i^\\prime}w_{\\mu_i',\\zeta_i'}|\\leq C\\, w_{\\mu_i',\\zeta_i'} \\quad\\text{and}\\quad\n|D_{\\zeta_i^\\prime}\\,w_{\\mu_i',\\zeta_i'}|\\leq C\\, w_{\\mu_i',\\zeta_i'}^2.\n\\end{equation}\nLet us prove \\eqref{estDerEZeta}; the proof of \\eqref{estDerEMu} is similar.\nLet us assume without loss of generality that $i=1$.\nRecall that\n\\[\nE=V^5-\n\\sum_{i=1}^k w_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}^5,\n\\]\nand so\n\\begin{align*}\nD_{\\zeta_1^\\prime}\\, E\n&=\n5V^4 D_{\\zeta_1^\\prime}\\, V_1\n- 5 w_{\\mu_1',\\zeta_1'}^4 D_{\\zeta_1^\\prime}\\, w_{\\mu_1',\\zeta_1'}\n\\\\\n&=\n5\n\\biggl[\n\\Bigl(\n\\sum_{i=1}^k w_{\\mu_i',\\zeta_i'} + \\varphi_i\n\\Bigr)^4\n-\nw_{\\mu_1',\\zeta_1'}^4 \\biggr]\nD_{\\zeta_1^\\prime}\\,w_{\\mu_1',\\zeta_1'}\n +\n5\\,\\Bigl(\n\\sum_{i=1}^k w_{\\mu_i',\\zeta_i'} + \\varphi_i\n\\Bigr)^4 \\, D_{\\zeta_1^\\prime}\\, \\varphi_1 ,\n\\end{align*}\nwhere $\\varphi_i(y) = \\varepsilon^{1\/2} \\pi_i(\\varepsilon y)$.\nBy \\eqref{DMUZIE}, we have that, for $y \\in B_{\\delta\/\\varepsilon}(\\zeta_1')$,\n\\begin{align}\n\\nonumber\n&\n\\biggl|\n\\biggl(\n\\Bigl(\n\\sum_{i=1}^k w_{\\mu_i',\\zeta_i'} + \\varphi_i\n\\Bigr)^4\n-\nw_{\\mu_1',\\zeta_1'}^4 \\biggr)\\,\nD_{\\zeta_1'} w_{\\mu_1',\\zeta_1'}\n\\biggr|\n\\\\\n\\nonumber\n& \\leq\nC w_{\\mu_1',\\zeta_1'}^3 \\Bigl( | 
\\varphi_1 |+ \\sum_{i=2}^k \\bigl( |w_{\\mu_i',\\zeta_i'}| + | \\varphi_i| \\bigr) \\Bigr) \\, |D_{\\zeta_1'} w_{\\mu_1',\\zeta_1'} |\n\\\\\n\\label{estDerE1}\n& \\leq C \\,\\varepsilon \\, w_{\\mu_1',\\zeta_1'}^5 .\n\\end{align}\nNote that from Lemma~\\ref{lemma22}, $ |D_{\\zeta_1'} \\varphi_1 (y)|\\leq C \\varepsilon^2$. Then,\n for $y \\in B_{\\delta\/\\varepsilon}(\\zeta_1')$,\n\\begin{align}\n\\nonumber\n\\left|\n5 \\Bigl(\n\\sum_{i=1}^k w_{\\mu_i',\\zeta_i'} + \\varphi_i\n\\Bigr)^4 \\,D_{\\zeta_1'} \\varphi_1\n\\right|\n& \\leq C \\left( w_{\\mu_1',\\zeta_1'}^4 + \\varepsilon^4 \\right) \\varepsilon^2\n\\\\\n\\label{estDerE2}\n& \\leq C \\,\\varepsilon^2\\, w_{\\mu_1',\\zeta_1'}^4.\n\\end{align}\nUsing \\eqref{estDerE1} and \\eqref{estDerE2} we find that\n\\[\n\\sup_{y \\in B_{\\delta\/\\varepsilon}(\\zeta_1')}\n\\omega(y)^{-(2+\\nu)} |D_{\\zeta_1'} E(y)| \\leq C \\varepsilon.\n\\]\nThe supremum on the rest of $\\Omega_\\varepsilon$ can be estimated similarly and this yields \\eqref{estDerEZeta}.\n\\end{proof}\n\n\n\n\\begin{lemma}\nAssume that $\\zeta$, $\\mu$ satisfy \\eqref{parametros}.\nThen\n\\begin{align}\n\\label{derPhiZeta}\n\\| D_{\\zeta_i'} \\phi\\|_* & \\leq C ( \\| E\\|_{**} + \\|D_{\\zeta'} E\\|_{**} ) ,\n\\\\\n\\label{derPhiMu}\n\\| D_{\\mu_i'} \\phi\\|_*& \\leq C ( \\| E\\|_{**} + \\|D_{\\mu'} E\\|_{**} ) .\n\\end{align}\n\\end{lemma}\n\\begin{proof}\nTo prove differentiability of the function $\\phi(\\zeta')$ we first\nrecall that $\\phi$ is found solving the fixed point problem\n\\[\n\\phi = A (\\phi; \\mu',\\zeta')\n\\]\nwhere $A$ is given in \\eqref{def-A} but now we emphasize the dependence on $\\mu ',\\zeta '$.\nFormally, differentiating this equation with respect to $\\zeta_i'$ we find\n\\begin{align}\n\\label{fixedDerPhi}\nD_{\\zeta_i'} \\phi =\n\\partial_{\\zeta_i'} A(\\phi;\\mu',\\zeta') +\n\\partial_\\phi A(\\phi;\\mu',\\zeta')\n[D_{\\zeta_i'} \\phi].\n\\end{align}\nThe notation we are using is $D_{\\zeta_i'}$ for the total 
derivative of the corresponding function and $\\partial_{\\zeta_i'}$ for the partial derivative.\nFrom this fixed point problem for $D_{\\zeta_i'} \\phi $ we shall derive an estimate for $\\|D_{\\zeta_i'} \\phi\\|_*$.\n\nSince $A(\\phi;\\mu',\\zeta') = - T( N(\\phi;\\mu',\\zeta') + E;\\mu',\\zeta')$ we get\n\\begin{align*}\n\\partial_{\\zeta_i'}A(\\phi;\\mu',\\zeta')\n&=\n-\\partial_{\\zeta_i'}\nT( N(\\phi;\\mu',\\zeta') + E;\\mu',\\zeta')\n-\nT( \\partial_{\\zeta_i'}N(\\phi;\\mu',\\zeta') ;\\mu',\\zeta')\n\\\\\n& \\quad\n- T( D_{\\zeta_i'} E;\\mu',\\zeta') .\n\\end{align*}\nFrom Proposition \\ref{solvability} we see that\n\\[\n\\| T( D_{\\zeta_i'} E;\\mu',\\zeta') \\|_*\n\\leq\nC \\| D_{\\zeta_i'} E\\|_{**} .\n\\]\nUsing Proposition~\\ref{propDerLinearOp} and estimates \\eqref{estPhiNonlinear} and \\eqref{N}, we find that\n\\[\n\\| \\partial_{\\zeta_i'}\nT\\bigl( N(\\phi;\\mu',\\zeta') + E;\\mu',\\zeta'\\bigr) \\|_*\n\\leq C \\| N(\\phi;\\mu',\\zeta') + E \\|_{**}\n\\leq\nC \\|E \\|_{**} .\n\\]\nSimilarly,\n\\begin{align*}\n\\| T( \\partial_{\\zeta_i'}N(\\phi;\\mu',\\zeta') ;\\mu',\\zeta') \\|_*\n& \\leq\nC\n\\| \\partial_{\\zeta_i'}N(\\phi;\\mu',\\zeta') \\|_{**}\n\\leq\nC\n\\| \\phi \\|_{*}^2\n\\leq C \\|E\\|_{**}^2.\n\\end{align*}\nTherefore,\n\\begin{align}\n\\label{estDerAZeta}\n\\| \\partial_{\\zeta_i'}A(\\phi;\\mu',\\zeta') \\|_* \\leq C \\left( \\|E\\|_{**} + \\| D_{\\zeta_i'} E \\|_{**} \\right).\n\\end{align}\nNext we estimate\n\\begin{align}\n\\nonumber\n\\|\n\\partial_\\phi A(\\phi;\\mu',\\zeta')\n[D_{\\zeta_i'} \\phi] \\|_*\n&=\n\\|\nT ( \\partial_\\phi N(\\phi;\\mu',\\zeta')\n[D_{\\zeta_i'} \\phi] ) \\|_*\n\\\\\n\\nonumber\n& \\leq\nC \\,\\| \\partial_\\phi N(\\phi;\\mu',\\zeta')\n[D_{\\zeta_i'} \\phi] \\|_{**}\n\\\\\n\\nonumber\n& \\leq\nC \\, \\|\\phi\\|_* \\, \\|D_{\\zeta_i'} \\phi\\|_*\n\\\\\n\\label{estDerAPhi}\n& \\leq\nC \\, \\| E \\|_{**} \\, \\|D_{\\zeta_i'} \\phi\\|_* .\n\\end{align}\nFrom \\eqref{estDerAZeta}, \\eqref{estDerAPhi} and the fixed point problem \\eqref{fixedDerPhi} we deduce 
\\eqref{derPhiZeta}.\nThe proof of \\eqref{derPhiMu} is similar.\n\\end{proof}\n\nAs a corollary of the previous lemma and taking into account\n\\eqref{estDerEMu}, \\eqref{estDerEZeta}, and \\eqref{estE1}, we get the following estimate:\n\\begin{align}\n\\label{estDerPhi2}\n\\|D_{\\zeta_i'} \\phi \\|_* + \\|D_{\\mu_i'} \\phi \\|_* \\leq C \\varepsilon .\n\\end{align}\n\n\n\n\\section{The reduced energy}\n\\label{secReduction}\n\nAfter Problem (\\ref{linearizedproblemproj}) has been solved, we will find a solution to the original problem (\\ref{maineqreesc}) if we manage to adjust the pair $(\\zeta',\\mu' )$ in such a way that $c_{ij}(\\zeta',\\mu' )=0$ for $i=1,\\ldots,k$, $j=1,2,3,4$. This is the {\\em reduced problem} and it turns out to be variational, that is, its solutions are critical points of the reduced energy functional\n\\begin{align}\n\\label{defIlambda}\nI_\\lambda(\\zeta',\\mu' )\n=\n\\bar J_\\lambda(V+\\phi)\n\\end{align}\nwhere $\\bar J_\\lambda$ is the energy functional for the problem \\eqref{maineqreesc}, that is,\n\\[\n\\bar J_\\lambda(v)=\\frac{1}{2}\\int_{\\Omega_\\varepsilon}\\vert\\nabla v\\vert^2 - \\varepsilon^2\\, \\frac{ \\lambda}{2}\\int_{\\Omega_\\varepsilon}v^2-\\frac{1}{6}\\int_{\\Omega_\\varepsilon}v^6,\n\\]\nthe function $V$ is the ansatz given in \\eqref{defV} and $\\phi = \\phi(\\zeta',\\mu' )$ is the solution of (\\ref{linearizedproblemproj}) constructed in Lemma~\\ref{lema2} for $\\varepsilon \\in (0,\\varepsilon_1)$.\n\n\n\\begin{lemma}\n\\label{lemmaReduction1}\nAssume that $\\zeta_i'$, $\\mu_i'$ satisfy \\eqref{parametros} where $\\delta>0$ is fixed small and $\\varepsilon_1>0$ is small as in Lemma~\\ref{lema2}. 
Then $I_\\lambda$ is $C^1$ and\n$V+\\phi$ is a solution to \\eqref{maineqreesc} if and only if\n\\begin{align}\n\\label{reducedSystem}\nD_{\\zeta'} I_\\lambda(\\zeta',\\mu' )=0,\n\\quad\nD_{\\mu'} I_\\lambda(\\zeta',\\mu' )=0 .\n\\end{align}\n\\end{lemma}\n\\begin{proof}\nDifferentiating $I_\\lambda$ with respect to $\\mu_n'$\nand using that $\\phi$ solves \\eqref{linearizedproblemproj}\nwe find\n\\begin{align*}\n\\partial_{\\mu_n'}\nI_\\lambda(\\zeta',\\mu' )\n&=\nD \\bar J_\\lambda(V+\\phi)[\\partial_{\\mu_n'} V + \\partial_{\\mu_n'} \\phi]\n\\\\\n&= - \\sum_{i,j} c_{ij} \\int_{\\Omega_\\varepsilon}\nw_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}^4\\,z_{ij}\n\\,( \\partial_{\\mu_n'} V + \\partial_{\\mu_n'} \\phi ) .\n\\end{align*}\nSimilarly\n\\begin{align*}\nD_{\\zeta_{n}'}\nI_\\lambda(\\zeta',\\mu' )\n&= - \\sum_{i,j} c_{ij} \\int_{\\Omega_\\varepsilon}\nw_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}^4\\,z_{ij} \\,\n( D_{\\zeta_{n}'} V + D_{\\zeta_{n}'} \\phi ) .\n\\end{align*}\nSince all terms in these expressions depend continuously on $\\zeta',\\mu'$, we deduce that $I_\\lambda$ is $C^1$.\n\nClearly, if $V+\\phi$ is a solution to \\eqref{maineqreesc} then all $c_{ij}=0$ and hence \\eqref{reducedSystem} holds.\nConversely, if \\eqref{reducedSystem} holds, then\n\\begin{align}\n\\label{systemC}\n\\left\\{\n\\begin{aligned}\n\\sum_{i,j} c_{ij}\n\\int_{\\Omega_\\varepsilon}\nw_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}^4\\,z_{ij} \\,\n( \\partial_{\\mu_n'} V + \\partial_{\\mu_n'} \\phi )\n&=0\n\\\\\n\\sum_{i,j} c_{ij}\n\\int_{\\Omega_\\varepsilon}\nw_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}^4\\,z_{ij} \\,\n( D_{\\zeta_{n}'} V \\cdot e_l + D_{\\zeta_{n}'} \\phi \\cdot e_l)\n&=0,\n\\end{aligned}\n\\right.\n\\end{align}\nfor all $n=1,\\ldots,k$ and $l=1,2,3$. 
Thanks to \\eqref{estDerPhi2} we see that\n\\[\n\\int_{\\Omega_\\varepsilon}\nw_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}^4\\,z_{ij}\\,\n\\partial_{\\mu_n'} \\phi \\to0,\n\\quad\n\\int_{\\Omega_\\varepsilon}\nw_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}^4\\,z_{ij} \\,\nD_{\\zeta_{n}'} \\phi \\to0,\n\\]\nas $\\varepsilon\\to 0$. Also, by \\eqref{zij} and the expansion in Lemma~\\ref{lemma22} we find that\n\\[\n\\int_{\\Omega_\\varepsilon}\nw_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}^4\\,z_{ij}\n\\,\\partial_{\\mu_n'} V\n=\n\\delta_{j4}\\, \\delta_{in}\n\\int_{\\R^3} w_{\\mu',0}^4 (\\partial_\\mu w_{\\mu',0})^2 + o(1)\n\\]\nand\n\\[\n\\int_{\\Omega_\\varepsilon}\nw_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}^4\\,z_{ij}\n\\,D_{\\zeta_{n}'} V \\cdot e_l\n=\n\\delta_{in}\\,\n\\delta_{jl}\n\\int_{\\R^3}\nw_{\\mu',0}^4 (\\nabla w_{\\mu',0}\\cdot e_1)^2\n+o(1)\n\\]\nas $\\varepsilon\\to0$, for some $\\mu' \\in (\\delta,\\frac{1}{\\delta})$.\n\nTherefore the system of equations \\eqref{systemC} is invertible for the $c_{ij}$ when $\\varepsilon>0$ is small, and hence $c_{ij}=0$ for all $i,j$.\n\\end{proof}\n\n\n\nA nice feature of the system of equations \\eqref{reducedSystem} is that it turns out to be equivalent to finding critical points of a functional of the pair $(\\zeta',\\mu')$ which is close, in an appropriate sense, to the energy of $k$ bubbles $U_1 + \\ldots + U_k$.\n\n\n\n\\begin{lemma}\n\\label{lemmaApproxEnergy}\nAssume the same conditions as in Lemma~\\ref{lemmaReduction1}.\nThen\n\\begin{align}\n\\label{expansion2}\nI_\\lambda(\\zeta',\\mu') = J_\\lambda\\Bigl(\\sum_{i=1}^k U_i\\Bigr) + \\theta_\\lambda^{(2)}(\\zeta',\\mu'),\n\\end{align}\nwhere $\\theta_\\lambda^{(2)}$ satisfies\n\\[\n\\theta_\\lambda^{(2)}(\\zeta',\\mu') = - \\int_0^1 s \\left[\\int_{\\Omega_\\varepsilon}\\vert\\nabla \\phi\\vert^2 - \\varepsilon^2 \\lambda \\phi^2-5 (V + s\\phi)^4 \\phi^2\\right]\\,ds ,\n\\]\nwhere $\\phi = \\phi(\\zeta',\\mu') $ is the solution of \\eqref{linearizedproblemproj2} found in 
Lemma~\\ref{lema2}.\n\\end{lemma}\n\\begin{proof}\nFrom Taylor's formula\nwe find that\n\\begin{align*}\nI_\\lambda(\\zeta',\\mu' )\n&=\n\\bar J_\\lambda(V)\n+ D \\bar J_\\lambda(V+\\phi) [\\phi]\n+ \\theta_\\lambda^{(2)}(\\zeta',\\mu'),\n\\end{align*}\nwhere\n\\begin{align}\n\\label{formulaR}\n\\theta_\\lambda^{(2)}(\\zeta',\\mu')\n&= -\\int_0^1 s D^2 \\bar J_\\lambda(V + s\\phi)[\\phi^2] \\,ds .\n\\end{align}\nBut since $\\phi$ satisfies \\eqref{linearizedproblemproj}, we have that\n\\begin{align*}\nD \\bar J_\\lambda(V+\\phi) [\\phi]\n&=- \\sum_{i,j} c_{ij} \\int_{\\Omega_\\varepsilon} w_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}^4\\,z_{ij} \\phi = 0 ,\n\\end{align*}\nwhich implies \\eqref{expansion2}.\n\\end{proof}\n\nWe remark that assuming \\eqref{parametros} we get\n\\[\n\\left|\n\\theta_\\lambda^{(2)}(\\zeta',\\mu') \\right|\\leq C \\varepsilon^2 ,\n\\]\nsince \\eqref{estPhi} holds.\n\n\n\n\n\\section{Critical multi-bubble}\n\\label{sectProof}\n\\noindent\nLet $k\\geq2$ be a given integer.\nFor $\\delta>0$ fixed small we consider the sets\n\\begin{multline*}\n\\Omega_\\delta^k:=\\{\n\\zeta\\equiv(\\zeta_1,\\ldots,\\zeta_k)\\in\\Omega^k: \\,\\textrm{dist}(\\zeta_i,\\partial \\Omega)>\\delta, \\vert \\zeta_i-\\zeta_j\\vert>\\delta,\n i=1,\\ldots,k,\\, j\\neq i\n\\}\n\\end{multline*}\nRecall that the main term in the expansion of $J_\\lambda\\Bigl(\\sum_{i=1}^k U_i\\Bigr)$ is the function\n\\begin{align*}\nF_\\lambda(\\zeta,\\mu)\n&:=\n\\,k\\,a_0\n+a_1\\sum_{i=1}^k\\Bigl(\\mu_i\\,g_{\\lambda}(\\zeta_i)-\\sum_{j\\neq i}\\mu_i^{1\/2}\\,\\mu_j^{1\/2}\\,G_{\\lambda}(\\zeta_i,\\zeta_j)\\Bigr)+a_2\\,\\lambda\\,\\sum_{i=1}^k \\mu_i^2\n\\\\\n&\n-a_3\\,\\sum_{i=1}^k\\Bigl(\\mu_i\\,g_{\\lambda}(\\zeta_i)-\\sum_{j\\neq i}\\mu_i^{1\/2}\\mu_j^{1\/2}\\,G_{\\lambda}(\\zeta_i,\\zeta_j)\\Bigr)^2 ,\n\\end{align*}\nwhere $\\zeta\\in\\Omega_\\delta^k$,\n$\\mu\\equiv(\\mu_1,\\ldots,\\mu_k)\\in(\\R^+)^k$\nand the constants $a_i$ are given in 
\\eqref{a0}--\\eqref{a3}.\n\n\\noindent\n\n\n\n\\begin{proof}[Proof of Theorem~\\ref{thm1}]\n\nBy Lemma~\\ref{lemmaReduction1}, $v= V+\\phi$ solves \\eqref{maineqreesc} if the function $I_\\lambda(\\zeta',\\mu')$ defined in \\eqref{defIlambda} has a critical point.\n\nIn the sequel we will also write $I_\\lambda(\\zeta,\\mu)$ for the same function but depending on $\\zeta$, $\\mu$, which we always assume satisfy the relation \\eqref{muiPrime} with $\\zeta'$, $\\mu'$.\n\nUsing the expansion of $J_\\lambda\\Bigl(\\sum_{i=1}^kU_i\\Bigr)$ given in Lemma~\\ref{lemmaEnergyExpansion}, together with Lemma~\\ref{lemmaApproxEnergy}, we see that\n$I_\\lambda(\\zeta,\\mu)$ has the form\n\\[\nI_\\lambda(\\zeta,\\mu) = F_\\lambda(\\zeta,\\mu) + \\theta_\\lambda(\\zeta,\\mu)\n\\]\nwhere $\\theta_\\lambda(\\zeta,\\mu)= \\theta_\\lambda^{(1)}(\\zeta,\\mu) + \\theta_\\lambda^{(2)}(\\zeta,\\mu) $, $\\theta_\\lambda^{(1)}$ is the remainder that appears in Lemma~\\ref{lemmaEnergyExpansion} and $\\theta_\\lambda^{(2)}$ the remainder in Lemma~\\ref{lemmaApproxEnergy}.\n\n\nIt is convenient to perform the change of variables\n\\begin{align}\n\\label{LambdaMu}\n\\Lambda_i := \\mu_i^{1\/2} ,\n\\end{align}\nwhere now $\\Lambda\\equiv(\\Lambda_1,\\ldots,\\Lambda_k)\\in \\R^k$, and write, with some abuse of notation,\n\\begin{align*}\nF_{\\lambda}(\\zeta,\\Lambda)\n:=&\n\\,k\\,a_0\n+a_1\\sum_{i=1}^k\\Bigl(\\Lambda_i^2\\,g_{\\lambda}(\\zeta_i)-\\sum_{j\\neq i}\\Lambda_i\\,\\Lambda_j\\,G_{\\lambda}(\\zeta_i,\\zeta_j)\\Bigr)+a_2\\,\\lambda\\,\\sum_{i=1}^k \\Lambda_i^4\\\\\n&-a_3\\,\\sum_{i=1}^k\\Bigl(\\Lambda_i^2\\,g_{\\lambda}(\\zeta_i)-\\sum_{j\\neq i}\\Lambda_i \\Lambda_j\\,G_{\\lambda}(\\zeta_i,\\zeta_j)\\Bigr)^2.\n\\end{align*}\nNote that, by the chain rule, $\\partial_{\\mu'_i} I_\\lambda(\\zeta',\\mu')=0$ is equivalent to $\\partial_{\\Lambda_i} I_\\lambda(\\zeta,\\Lambda)=0$, whenever $\\Lambda_i \\not=0$.\n\nThe function $F_\\lambda$ can be expressed in terms of the matrix $M_\\lambda$ 
as\n\\[F_{\\lambda}(\\zeta,\\Lambda)\n=\n\\,k\\,a_0\n+a_1\\,\n\\Lambda^T M_\\lambda(\\zeta) \\Lambda\n+a_2\\,\\lambda\\,\\sum_{i=1}^k \\Lambda_i^4-a_3\\,\\sum_{i=1}^k\\Lambda_i^2\\,( M_\\lambda(\\zeta) \\Lambda )_i^2.\n\\]\nIn what follows we write $\\sigma_1(\\varepsilon,\\zeta)$ for the smallest eigenvalue of $M_\\lambda(\\zeta)$ where $\\lambda = \\lambda_0 + \\varepsilon$.\nBy the Perron--Frobenius theorem, or a direct argument as in \\cite{bahri-li-rey}, the eigenvalue $\\sigma_1(\\varepsilon,\\zeta)$ is simple and has an eigenvector $v_1(\\varepsilon,\\zeta)$ with $|v_1(\\varepsilon,\\zeta)|=1$ whose components are all positive.\nBy a standard application of the implicit function theorem, we have that $\\sigma_1(\\varepsilon,\\zeta)$ and $v_1(\\varepsilon,\\zeta)$ are smooth functions of $\\varepsilon$ and $\\zeta$ in a neighborhood of $(0,\\zeta^0)$.\n\n\\noindent\nWe also have the following properties as a consequence of the hypotheses:\n\\[\nD_\\zeta \\sigma_1(0,\\zeta^0) = 0,\\qquad\nD^2_{\\zeta\\zeta} \\sigma_1(0,\\zeta^0)\\, \\text{is nonsingular},\\qquad\n\\frac{\\partial \\sigma_1 }{\\partial \\lambda} (0,\\zeta^0)<0.\n\\]\nThese assertions can be proved by observing that\n\\[\n\\psi_{\\lambda_0+\\varepsilon}(\\zeta)=\\det M_{\\lambda_0+\\varepsilon}(\\zeta) = \\sigma_1(\\varepsilon,\\zeta) \\sigma_*(\\varepsilon,\\zeta) ,\n\\]\nwhere $\\sigma_*(\\varepsilon,\\zeta)$ is the product of the rest of the eigenvalues of $M_{\\lambda_0+\\varepsilon}(\\zeta)$. 
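For instance, since $\\sigma_1(0,\\zeta^0)=0$ (which, in view of this factorization, corresponds to the hypothesis $\\psi_{\\lambda_0}(\\zeta^0)=0$), differentiating the identity $\\psi_{\\lambda_0+\\varepsilon}=\\sigma_1\\,\\sigma_*$ at $(0,\\zeta^0)$ gives\n\\[\nD_\\zeta \\psi_{\\lambda_0}(\\zeta^0)=\\sigma_*(0,\\zeta^0)\\,D_\\zeta\\sigma_1(0,\\zeta^0)+\\sigma_1(0,\\zeta^0)\\,D_\\zeta\\sigma_*(0,\\zeta^0)\n=\\sigma_*(0,\\zeta^0)\\,D_\\zeta\\sigma_1(0,\\zeta^0) ,\n\\]\nso that, once it is known that $\\sigma_*(0,\\zeta^0)>0$ (as verified next), $D_\\zeta\\sigma_1(0,\\zeta^0)=0$ follows from the assumed criticality of $\\zeta^0$ for $\\psi_{\\lambda_0}$; the remaining two properties are obtained from the analogous computations for $D^2_{\\zeta\\zeta}\\psi_{\\lambda_0}$ and $\\partial_\\lambda \\psi_{\\lambda_0+\\varepsilon}$.\n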
Since $\\sigma_1$ is a simple eigenvalue and $M_{\\lambda_0}(\\zeta^0)$ is positive semidefinite, we have $\\sigma_*(0,\\zeta^0)>0$ and this is still true for $\\varepsilon,\\zeta$ in a neighborhood of $(0,\\zeta^0)$.\nThen the properties stated above for $\\sigma_1$ follow from our assumptions on $\\psi_{\\lambda_0+\\varepsilon}(\\zeta)$.\n\n\nSince $\\frac{\\partial \\sigma_1 }{\\partial \\lambda} (0,\\zeta^0)<0$,\nwe deduce that there are $\\varepsilon_0>0$ and $c_0>0$ such that\n \\begin{align}\n\\label{sigma-negative}\n\\sigma_1(\\varepsilon,\\zeta) <0 ,\n\\quad\n\\text{for } \\varepsilon \\in (0,\\varepsilon_0),\n\\quad\n\\zeta \\in B_{c_0 \\sqrt{\\varepsilon}}(\\zeta^0).\n\\end{align}\nNext we construct a $k\\times k$ matrix $P (\\varepsilon,\\zeta)$ for $\\varepsilon$ and $\\zeta$ in a neighborhood of $(0,\\zeta^0)$ with the following properties:\n\\begin{itemize}\n\\item[a)] the first column of $P$ is $v_1(\\varepsilon,\\zeta)$,\n\\item[b)] columns 2 to $k$ of $P$ are orthogonal to $v_1(\\varepsilon,\\zeta)$,\n\\item[c)] $P (\\varepsilon,\\zeta)$ is smooth for $\\varepsilon$ and $\\zeta$ in a neighborhood of $(0,\\zeta^0)$,\n\\item[d)] $P (0,\\zeta^0)$ is such that $M_{\\lambda_0}(\\zeta^0) = P (0,\\zeta^0) D P (0,\\zeta^0)^T$ with $D$ diagonal,\n\\item[e)] $P (0,\\zeta^0)^T P (0,\\zeta^0) = I$.\n\\end{itemize}\nTo achieve this we let $\\bar v_1,\\ldots,\\bar v_k$ be an orthonormal basis of $\\R^k$ of eigenvectors of $M_{\\lambda_0}(\\zeta^0)$ such that $\\bar v_1 = v_1(0,\\zeta^0)$.\nWe let, for $\\varepsilon>0$ and $\\zeta$ close to $\\zeta^0$,\n\\[\nv_i(\\varepsilon,\\zeta) = \\bar v_i - (\\bar v_i\\cdot v_1(\\varepsilon,\\zeta) ) v_1(\\varepsilon,\\zeta)\n, \\quad 2\\leq i\\leq k,\n\\]\nand $P $ be the matrix whose columns are $v_1(\\varepsilon,\\zeta) ,\\ldots, v_k(\\varepsilon,\\zeta) $.\n\nWe remark that although it would be more natural to consider a matrix $\\tilde P(\\varepsilon,\\zeta)$, which diagonalizes $M_\\lambda(\\zeta)$, this matrix may 
not be differentiable with respect to $\\varepsilon$ and $\\zeta$. For this reason we choose to work with $P$ as defined before.\n\nLet us perform the following change of variables\n\\begin{align}\n\\label{changeLambda}\n\\Lambda=|\\sigma_1|^{1\/2}P(\\varepsilon,\\zeta) \\bar{\\Lambda} .\n\\end{align}\nNote that the quadratic form $\\Lambda^T M_\\lambda(\\zeta) \\Lambda$ can be written as\n\\begin{align*}\n\\Lambda^T M_\\lambda(\\zeta) \\Lambda\n= \\sigma_1(\\varepsilon,\\zeta) |\\sigma_1(\\varepsilon,\\zeta) | \\bar\\Lambda_1^2 + |\\sigma_1(\\varepsilon,\\zeta)| (\\bar\\Lambda')^T Q(\\varepsilon,\\zeta) \\bar\\Lambda',\n\\end{align*}\nwhere\n\\[\n\\bar\\Lambda' =\n\\left[\n\\begin{matrix}\n\\bar\\Lambda_2 \\\\\n\\vdots\\\\\n\\bar\\Lambda_k\n\\end{matrix}\n\\right]\n,\\quad\nQ(\\varepsilon,\\zeta)= P'(\\varepsilon,\\zeta)^T M_{\\lambda_0+\\varepsilon}(\\zeta) P'(\\varepsilon,\\zeta)\n\\]\nand $ P'(\\varepsilon,\\zeta) = [v_2,\\ldots,v_k]$ is the matrix formed by the columns 2 to $k$ of $P(\\varepsilon,\\zeta)$.\n\n\n\n\nThus $I_\\lambda(\\zeta,\\bar\\Lambda) = F_\\lambda(\\zeta,\\bar\\Lambda) + \\theta_\\lambda(\\zeta,\\bar\\Lambda)$ can be written as\n\\begin{align}\n\\nonumber\nI_\\lambda(\\zeta,\\bar{\\Lambda}) & = ka_0\n+ a_1 \\Big[ -\\sigma_1(\\varepsilon,\\zeta)^2 \\bar\\Lambda_1^2 + |\\sigma_1(\\varepsilon,\\zeta)| (\\bar\\Lambda')^T Q(\\varepsilon,\\zeta) \\bar\\Lambda' \\Bigr]\n\\\\\n\\label{formF1lambda}\n& \\quad\n+\\sigma_1(\\varepsilon,\\zeta)^2\\,\n\\mathcal Poly_4(\\varepsilon,\\zeta,\\bar \\Lambda)\n+ \\theta_\\lambda(\\zeta,\\bar \\Lambda),\n\\end{align}\nwhere\n\\begin{align*}\n& \\mathcal Poly_4(\\varepsilon,\\zeta,\\bar \\Lambda)\n:=a_2\\lambda\\sum_{i=1}^k\n\\Bigl(\\sum_{j=1}^k\nP_{ij} (\\varepsilon,\\zeta) \\bar{\\Lambda}_j\\Bigr)^4\n\\\\\n& \\quad\n-a_3\\sum_{i=1}^k\n\\Big[\n\\Bigl(\\sum_{j=1}^k\nP_{ij}(\\varepsilon,\\zeta)\\bar{\\Lambda}_j\\Bigr)^2\n\\Bigl(\n\\sigma_1 (\\varepsilon,\\zeta)v_{1,i}(\\varepsilon,\\zeta) 
\\bar\\Lambda_1\n+\n\\sum_{j=2}^k\n\\sum_{l=1}^k\n(M_{\\lambda_0+\\varepsilon}(\\zeta))_{il}\\, P_{lj}(\\varepsilon,\\zeta) \\,\\bar{\\Lambda}_j\\Bigr)^2\n\\Big] ,\n\\end{align*}\nand $\\theta_\\lambda(\\zeta,\\bar \\Lambda)$ denotes the function $\\theta_\\lambda(\\zeta,\\mu) $ where we have used the transformations \\eqref{LambdaMu} and \\eqref{changeLambda}.\n\nNote that $\\mathcal Poly_4(\\varepsilon,\\zeta,\\bar \\Lambda) $ is a polynomial in the variables $\\bar{\\Lambda}_1,\\ldots,\\bar{\\Lambda}_k$ of degree $4$ whose coefficients are functions of $\\varepsilon$ and $\\zeta$.\n\nWe need to solve the equations $D_{\\zeta} I_{\\lambda}=0$,\n$\\frac{\\partial I_{\\lambda}}{\\partial\\bar{\\Lambda}_1}=0$, $\\ldots,$\n$\\frac{\\partial I_{\\lambda}}{\\partial\\bar{\\Lambda}_k}=0$.\nBecause of the the absolute value of $\\sigma_1$ appearing in \\eqref{formF1lambda} it is a bit more convenient to modify this function by defining\n\\begin{align*}\n\\bar F_{\\lambda}(\\zeta,\\bar{\\Lambda})\n& = ka_0\n- a_1 \\sigma_1(\\varepsilon,\\zeta)^2 \\bar\\Lambda_1^2\n- a_1 \\sigma_1(\\varepsilon,\\zeta) (\\bar\\Lambda')^T Q(\\varepsilon,\\zeta) \\bar\\Lambda'\n\\\\\n& \\quad\n+\\sigma_1(\\varepsilon,\\zeta)^2\\, \\mathcal Poly_4(\\varepsilon,\\zeta,\\bar \\Lambda)\n+ \\theta_\\lambda(\\zeta,\\bar \\Lambda) ,\n\\end{align*}\nwhich coincides with $I_{\\lambda}$ when $\\sigma_1<0$.\n\n\n\n\n\n\nNext we compute\n\\begin{align*}\nD_{\\zeta} \\bar F_{\\lambda}\n& =\n-2a_1\\sigma_1\\,(D_{\\zeta}\\sigma_1)\\bar{\\Lambda}_1^2\n-a_1 (D_\\zeta \\sigma_1) (\\bar\\Lambda')^T Q \\bar\\Lambda'\n- a_1 \\sigma_1 (\\bar\\Lambda')^T ( D_\\zeta Q) \\bar\\Lambda'\n\\\\\n& \\quad\n+2\\,\\sigma_1\\,(D_{\\zeta}\\sigma_1) \\, \\mathcal Poly_4\n+\\sigma_1^2\\, D_{\\zeta} \\mathcal Poly_4\n+D_\\zeta \\theta_\\lambda,\n\\\\\n\\frac{\\partial \\bar F_\\lambda}{\\partial \\bar\\Lambda_1}\n&=\n- 2 a_1 \\sigma_1^2 \\bar\\Lambda_1\n+\\sigma_1^2\\,\n\\frac{\\partial }{\\partial \\bar \\Lambda_1}\\mathcal 
Poly_4\n+ \\frac{\\partial \\theta_\\lambda}{\\partial \\bar \\Lambda_1} ,\n\\\\\n\\frac{\\partial \\bar F_\\lambda}{\\partial \\bar\\Lambda_l}\n&=\n- 2a_1 \\sigma_1 \\sum_{j=2}^k Q_{j-1,l-1} \\bar\\Lambda_j\n+\\sigma_1^2\\,\n\\frac{\\partial }{\\partial \\bar \\Lambda_l} \\mathcal Poly_4\n+ \\frac{\\partial \\theta_\\lambda}{\\partial \\bar \\Lambda_l} ,\n\\end{align*}\nwith $ l=2,\\ldots,k$.\n\n\n\n\n\nObserve that, whenever $\\sigma_1<0$, the equations\n$D_{\\zeta} \\bar F_{\\lambda}=0$,\n$\\frac{\\partial \\bar F_{\\lambda}}{\\partial\\bar{\\Lambda}_n}=0$, $n=1,\\ldots,k$, are equivalent to\n\\begin{align}\n\\nonumber\n0&=\n-2a_1\\bar{\\Lambda}_1^2 (D_\\zeta\\sigma_1)\n- \\frac{a_1}{\\sigma_1} (D_\\zeta \\sigma_1 ) (\\bar\\Lambda')^T Q \\bar\\Lambda'\n- a_1 (\\bar\\Lambda')^T ( D_\\zeta Q ) \\bar\\Lambda'\n\\\\\n\\label{eq1}\n& \\quad\n+2 (D_{\\zeta}\\sigma_1) \\, \\mathcal Poly_4\n+\\sigma_1\\, D_{\\zeta}\\mathcal Poly_4\n+\\frac{1}{\\sigma_1}\t D_\\zeta \\theta_\\lambda,\n\\\\\n\\label{eq2}\n0&=\n- 2 a_1 \\bar\\Lambda_1\n+\\frac{\\partial }{\\partial \\bar \\Lambda_1}\\mathcal Poly_4\n+ \\frac{1}{\\sigma_1^2}\\frac{\\partial \\theta_\\lambda}{\\partial \\bar \\Lambda_1} ,\n\\\\\n\\label{eq3}\n0&=- 2a_1 \\sum_{j=2}^k Q_{j-1,l-1} \\bar\\Lambda_j\n+\\sigma_1\\, \\frac{\\partial }{\\partial \\bar \\Lambda_l}\\mathcal Poly_4\n+ \\frac{1}{\\sigma_1} \\frac{\\partial \\theta_\\lambda}{\\partial \\bar \\Lambda_l} ,\n\\end{align}\nwith $ l=2,\\ldots,k$.\nNote that we have normalized the equations (the first one was divided by $\\sigma_1$, the second by $\\sigma_1^2$ and the last ones by $\\sigma_1$).\n\n\nWe claim that there exists $\\varepsilon_0>0$ such that for each $\\varepsilon\\in (0,\\varepsilon_0)$ the system \\eqref{eq1}, \\eqref{eq2}, \\eqref{eq3}\nhas a solution $(\\zeta(\\varepsilon)$, $\\bar\\Lambda(\\varepsilon))$ such that $\\sigma_1(\\varepsilon,\\zeta(\\varepsilon))<0$, thus yielding a critical point of $I_{\\lambda_0+\\varepsilon}$.\n\n\n\nWe will 
prove that \\eqref{eq1}, \\eqref{eq2}, \\eqref{eq3} has a solution using degree theory in a ball centered at a suitable point $(\\bar\\Lambda^0, \\zeta^0)$ and with a conveniently small radius.\n\nTo find the center of this ball, let us consider a simplified version of equations \\eqref{eq2}, \\eqref{eq3}, by omitting the terms involving $\\theta_\\lambda$ and evaluating at $\\varepsilon=0$, $\\zeta=\\zeta^0$.\nUsing that $Q(0,\\zeta^0)$ is the diagonal matrix with entries $\\sigma_2,\\ldots,\\sigma_k$,\nwhere $0,\\sigma_2,\\ldots,\\sigma_k$ are the eigenvalues of $M_{\\lambda_0}(\\zeta^0)$, we get\n\\begin{align}\n\\label{eq2a}\n0&=\n- 2 a_1 \\bar\\Lambda_1\n+\\frac{\\partial }{\\partial \\bar \\Lambda_1} \\mathcal Poly_4(0,\\zeta_0,\\bar \\Lambda) , \\\\\n\\label{eq3a}\n0&=- 2a_1 \\sigma_l \\bar\\Lambda_l\n ,\\quad l=2,\\ldots,k .\n\\end{align}\nWe note that there is a solution of \\eqref{eq2a}, \\eqref{eq3a} which has the form\n$\\bar\\Lambda^0= ( \\bar\\Lambda^0_1,\\ldots,\\bar\\Lambda^0_k)$ with\n\\[\n\\bar\\Lambda^0_l = 0 \\qquad \\text{for all }l =2,\\ldots, k\n\\]\nand\n\\begin{align}\n\\label{barLambda10}\n\\bar{\\Lambda}^0_1:=\\sqrt{\\frac{a_1}{2 a_2 \\lambda_0 \\sum_{i=1}^k P_{i1}(0,\\zeta^0)^4 }} .\n\\end{align}\nFor later purposes it will be useful to know that the linearization of the functions on the right hand side of \\eqref{eq2a} and \\eqref{eq3a} around $\\bar\\Lambda^0$ define an invertible operator.\nSince the right hand side of \\eqref{eq3a} is a constant times the identity it is sufficient to study\nthe expression\n$ - 2 a_1 \\bar\\Lambda_1 +\\frac{\\partial }{\\partial \\bar \\Lambda_1}\\mathcal Poly_4(0,\\zeta_0,\\bar \\Lambda) $.\nA straightforward computation yields\n\\begin{align}\n\\nonumber\n\\frac{\\partial }{\\partial\\bar{\\Lambda}_1}\n\\left[\n - 2 a_1 \\bar\\Lambda_1 + \\frac{\\partial }{\\partial \\bar \\Lambda_1}\\mathcal Poly_4 (0,\\zeta_0,\\bar 
\\Lambda)\n\\right]\n(\\bar{\\Lambda}^0)\n&=-2a_1\\,+\n12a_2\\lambda_0\\Bigl(\\sum_{i=1}^kP_{i1}(0,\\zeta^0)^4\\Bigr)(\\bar{\\Lambda}_1^0)^2\n\\\\\n\\label{firstCoefficient}\n& =4a_1,\n\\end{align}\nwhich is nonzero.\n\n\nWe now introduce one more change of variables\n\\[\n\\widehat\\Lambda_j = \\bar\\Lambda_j - \\bar\\Lambda_j^0, \\quad 1\\leq j\\leq k .\n\\]\nDefine\n\\[\n\\Upsilon (\\zeta,\\widehat \\Lambda) = A (\\zeta,\\widehat{\\Lambda}) + \\mathcal R (\\zeta,\\widehat{\\Lambda}),\n\\]\nwhere\n\\[\nA (\\zeta,\\widehat{\\Lambda}) = (A_0 (\\zeta,\\widehat{\\Lambda}) ,A_1 (\\zeta,\\widehat{\\Lambda}) ,\\ldots, A_k (\\zeta,\\widehat{\\Lambda}) ))\n\\]\nwith\n\\begin{align*}\nA_0 (\\zeta,\\widehat{\\Lambda}) &=\n- a_1 (\\bar\\Lambda_1^0)^2 D^2_{\\zeta\\zeta}\\sigma_1(0,\\zeta^0) (\\zeta-\\zeta^0),\n\\\\\nA_1 (\\zeta,\\widehat{\\Lambda}) &= 4 a_1 \\widehat\\Lambda_1\n+ \\sum_{j=2}^k \\frac{\\partial^2 }{\\partial \\bar \\Lambda_j \\partial \t\\bar \\Lambda_1}\n\\mathcal Poly_4\n(0,\\zeta^0,\\bar \\Lambda^0)\\widehat \\Lambda_j\n+D_\\zeta \\frac{\\partial }{ \\partial \t\\bar \\Lambda_1}\n\\mathcal Poly_4\n(0,\\zeta^0,\\bar \\Lambda^0)(\\zeta-\\zeta^0),\n\\\\\nA_l (\\zeta,\\widehat{\\Lambda}) &= -2 a_1 \\sigma_l \\widehat \\Lambda_l , \\quad l=2,\\ldots,k,\n\\end{align*}\nand\n\\[\n\\mathcal{R} (\\zeta,\\widehat{\\Lambda}) = (\\mathcal{R}_0 (\\zeta,\\widehat{\\Lambda}) ,\\mathcal{R}_1 (\\zeta,\\widehat{\\Lambda}) ,\\ldots, \\mathcal{R}_k (\\zeta,\\widehat{\\Lambda}) ))\n\\]\nwith\n\\begin{align*}\n\\mathcal R_0 (\\zeta,\\widehat \\Lambda)\n&=\n-a_1 (\\bar \\Lambda_1^0)^2 \\bigl( D_\\zeta\\sigma_1(\\varepsilon,\\zeta) - D^2_{\\zeta\\zeta} \\sigma_1(0,\\zeta^0)(\\zeta-\\zeta^0) \\bigr)\n\\\\\n& \\quad\n-2a_1 (2 \\bar \\Lambda_1^0 \\widehat \\Lambda_1 + \\widehat \\Lambda_1^2) D_\\zeta\\sigma\t_1(\\varepsilon,\\zeta)\n\\\\\n& \\quad\n- \\frac{a_1}{\\sigma_1} D_\\zeta \\sigma_1 (\\widehat \\Lambda')^T Q(\\varepsilon,\\zeta) \\widehat\\Lambda'\n- a_1 
(\\widehat\\Lambda')^T ( D_\\zeta Q(\\varepsilon,\\zeta) ) \\widehat\\Lambda'\n\\\\\n& \\quad\n+2 (D_{\\zeta}\\sigma_1) \\, ( \\mathcal Poly_4(\\varepsilon,\\zeta,\\bar\\Lambda^0 + \\widehat \\Lambda) -\n\\mathcal Poly_4(0,\\zeta^0,\\bar\\Lambda^0 ) )\n+\\sigma_1\\, D_{\\zeta}\\mathcal Poly_4(\\varepsilon,\\zeta,\\bar\\Lambda^0 + \\widehat \\Lambda)\n\\\\\n&\\quad\n+\\frac{1}{\\sigma_1}\t D_\\zeta \\theta_\\lambda(\\zeta,\\bar\\Lambda^0 + \\widehat \\Lambda),\n\\\\\n\\mathcal R_1 (\\zeta,\\widehat \\Lambda)\n&=\n\\frac{\\partial }{\\partial \\bar \\Lambda_1}\\mathcal Poly_4 (\\varepsilon,\\zeta,\\bar\\Lambda^0 + \\widehat \\Lambda)\n-\n\\frac{\\partial }{\\partial \\bar \\Lambda_1}\\mathcal Poly_4 (0,\\zeta^0,\\bar\\Lambda^0)\n- \\sum_{j=1}^k \\frac{\\partial^2 }{\\partial \\bar \\Lambda_j \\partial \t\\bar \\Lambda_1}\\mathcal Poly_4(0,\\zeta^0,\\bar \\Lambda^0)\\widehat \\Lambda_j\n\\\\\n& \\quad\n-D_\\zeta \\frac{\\partial }{ \\partial \t\\bar \\Lambda_1}\n\\mathcal Poly_4\n(0,\\zeta^0,\\bar \\Lambda^0)(\\zeta-\\zeta^0)\n+ \\frac{1}{\\sigma_1^2}\\frac{\\partial \\theta_\\lambda}{\\partial \\bar \\Lambda_1} (\\zeta,\\bar\\Lambda^0 + \\widehat \\Lambda) ,\n\\\\\n\\mathcal R_l (\\zeta,\\widehat \\Lambda)\n&=- 2a_1 \\sum_{j=2}^k ( Q_{j-1,l-1} (\\varepsilon,\\zeta) -\\delta_{jl}) \\widehat \\Lambda_j\n+\\sigma_1(\\varepsilon,\\zeta)\\, \\frac{\\partial }{\\partial \\bar \\Lambda_l} \\mathcal Poly_4 (\\varepsilon,\\zeta,\\Lambda^0 + \\widehat \\Lambda)\n\\\\\n&\\quad\n+ \\frac{1}{\\sigma_1(\\varepsilon,\\zeta)} \\frac{\\partial \\theta_\\lambda}{\\partial \\bar \\Lambda_l} (\\zeta,\\Lambda^0 + \\widehat \\Lambda) ,\\quad l=2,\\ldots k.\n\\end{align*}\nLet us indicate the motivation for the definition of $A_0$. 
In equation \\eqref{eq1} we combine the terms $-2a_1\\bar{\\Lambda}_1^2 (D_\\zeta\\sigma_1) $ and $ 2 (D_{\\zeta}\\sigma_1) \\, \\mathcal Poly_4$ into the expression\n\\begin{align*}\n&\n-2a_1\\bar{\\Lambda}_1^2 (D_\\zeta\\sigma_1) + 2 (D_{\\zeta}\\sigma_1) \\, \\mathcal Poly_4\n\\\\\n&=\n-2a_1 (\\bar{\\Lambda}_1^0)^2(D_\\zeta\\sigma_1)\n-2a_1\\bigl ( 2 \\bar{\\Lambda}_1^0 \\widehat{\\Lambda}_1 + \\widehat{\\Lambda}_1^2 \\Bigr) (D_\\zeta\\sigma_1)\n\\\\\n& \\quad\n+ 2 (D_{\\zeta}\\sigma_1) \\, \\mathcal Poly_4(0,\\zeta^0,\\bar\\Lambda_0)\n+ 2 (D_{\\zeta}\\sigma_1) \\, ( \\mathcal Poly_4 - \\mathcal Poly_4(0,\\zeta^0,\\bar\\Lambda_0) ) .\n\\end{align*}\nIn this expression we combine\n\\begin{align*}\n-2a_1 (\\bar{\\Lambda}_1^0)^2(D_\\zeta\\sigma_1)\n+ 2 (D_{\\zeta}\\sigma_1) \\, \\mathcal Poly_4(0,\\zeta^0,\\bar\\Lambda_0)\n&= 2 (D_{\\zeta}\\sigma_1) \\Bigl[\n-a_1 (\\bar{\\Lambda}_1^0)^2 + \\mathcal Poly_4(0,\\zeta^0,\\bar\\Lambda_0) \\Bigr].\n\\end{align*}\nBut an explicit computation using \\eqref{barLambda10} gives\n\\[\n-a_1( \\bar{\\Lambda}_1^0)^2 + \\mathcal Poly_4(0,\\zeta^0,\\bar\\Lambda^0) =\n-\\frac{1}{2} a_1( \\bar{\\Lambda}_1^0)^2 .\n\\]\nThen\n\\begin{align*}\n&\n-2a_1 (\\bar{\\Lambda}_1^0)^2(D_\\zeta\\sigma_1)\n+ 2 (D_{\\zeta}\\sigma_1) \\, \\mathcal Poly_4(0,\\zeta^0,\\bar\\Lambda_0)\n\\\\\n&= -a_1 (\\bar \\Lambda_1^0)^2 (D_\\zeta\\sigma_1)\n\\\\\n&=\n-a_1 (\\bar \\Lambda_1^0)^2 D^2_{\\zeta\\zeta}\\sigma_1(0,\\zeta^0)(\\zeta-\\zeta^0)\n-a_1 (\\bar \\Lambda_1^0)^2\n\\bigl(\n(D_\\zeta\\sigma_1) - D^2_{\\zeta\\zeta}\\sigma_1(0,\\zeta^0)(\\zeta-\\zeta^0)\n\\bigr) .\n\\end{align*}\nWe define $A_0$ as $-a_1 (\\bar \\Lambda_1^0)^2 D^2_{\\zeta\\zeta}\\sigma_1(0,\\zeta^0)(\\zeta-\\zeta^0)$ and we leave all the others terms in $\\mathcal R_0$.\n\nThen the equations \\eqref{eq1}, \\eqref{eq2} and \\eqref{eq3} for the unknowns $\\widehat\\Lambda_j$, $1\\leq j\\leq k$ and $\\zeta$ are equivalent to\n\\[\n\\Upsilon (\\zeta,\\widehat \\Lambda) = 0 .\n\\]\nWe are 
going to show that this equation has a solution in the ball\n\\[\n\\mathcal B = \\{ (\\zeta,\\widehat\\Lambda) \\in \\R^{3k}\\times \\R^k : | (\\zeta-\\zeta^0,\\widehat\\Lambda) | < \\varepsilon^{1-\\sigma} \\}\n\\]\nwith a fixed and small $\\sigma>0$, using degree theory.\n\n\n\n\n\n\n\nThe map $(\\zeta,\\widehat\\Lambda) \\mapsto A(\\zeta,\\widehat{\\Lambda})$ is linear in $(\\zeta-\\zeta^0,\\widehat{\\Lambda})$ and invertible thanks to hypothesis (iii) in the statement of the theorem and \\eqref{firstCoefficient}. Hence there is a constant $c>0$ such that\n\\[\n|A(\\zeta, \\widehat\\Lambda)|\\geq c\\, |(\\zeta-\\zeta^0, \\widehat\\Lambda)|,\n\\]\nfor $(\\zeta,\\widehat{\\Lambda} ) \\in \\partial \\mathcal B$, if we take $\\varepsilon>0$ sufficiently small.\nTo conclude that the equation $A(\\zeta,\\widehat{\\Lambda}) + \\mathcal R(\\zeta,\\widehat{\\Lambda}) =0$ has a solution in $\\mathcal B$, it suffices to verify that\n\\[\n|\\mathcal R(\\zeta,\\widehat{\\Lambda})| =\no( \\varepsilon^{1-\\sigma})\n\\]\nuniformly for $( \\zeta,\\widehat{\\Lambda} ) \\in \\bar{\\mathcal B}$ as $\\varepsilon\\to 0$.\n\nBefore performing the computations we recall the assumptions we are imposing on $\\mu$, $\\zeta$.\nFrom \\eqref{LambdaMu} and \\eqref{changeLambda} we have\n\\[\n\\mu^{\\frac{1}{2}} = |\\sigma_1(\\varepsilon,\\zeta) |^{\\frac{1}{2}} P(\\varepsilon,\\zeta) \\bar{\\Lambda}.\n\\]\nThen for $( \\zeta,\\widehat{\\Lambda} ) \\in \\bar{\\mathcal B}$,\n\\begin{align}\n\\label{cotaZetaGorro}\n|\\zeta-\\zeta^0|\\leq \\varepsilon^{1-\\sigma}\n\\end{align}\nand\n\\begin{align}\n\\label{cotaLambdaGorro}\n\\bar\\Lambda = \\bar\\Lambda^0 + \\widehat{\\Lambda},\n\\quad\n| \\widehat{\\Lambda} | \\leq \\varepsilon^{1-\\sigma} .\n\\end{align}\nUsing Taylor's theorem we see that, for $|\\zeta-\\zeta^0|\\leq \\varepsilon^{1-\\sigma}$,\n\\begin{align}\n\\label{cotasSigma}\n-c_1\\varepsilon\\leq \\sigma_1(\\varepsilon,\\zeta) \\leq -c_2\\varepsilon\n\\end{align}\nwith $c_1,c_2>0$, and in 
particular\n\\[\n|\\mu_i | \\leq C \\varepsilon ,\\quad i=1,\\ldots,k.\n\\]\nAlso for $|\\zeta-\\zeta^0|\\leq \\varepsilon^{1-\\sigma}$,\n\\begin{align}\n\\label{cotaGradSigma1}\n|D_\\zeta \\sigma_1(\\varepsilon,\\zeta)| \\leq C \\varepsilon^{1-\\sigma} .\n\\end{align}\nWe will also need the following estimates: for $( \\zeta,\\widehat{\\Lambda} ) \\in \\bar{\\mathcal B}$ we have\n\\begin{align}\n\\label{cota1}\n|D_\\zeta \\theta_\\lambda(\\zeta, \\bar\\Lambda^0+\\hat \\Lambda)| & \\leq C \\varepsilon^{3-\\sigma},\n\\\\\n\\label{cota2}\n\\Bigl|\n\\frac{\\partial \\theta_\\lambda}{\\partial \\bar \\Lambda_1} (\\zeta,\\bar\\Lambda^0 + \\widehat \\Lambda) \\Bigr| & \\leq C \\varepsilon^{3-\\sigma\/2},\n\\\\\n\\label{cota3}\n\\Bigl|\n\\frac{\\partial \\theta_\\lambda}{\\partial \\bar \\Lambda_l} (\\zeta,\\bar\\Lambda^0 + \\widehat \\Lambda) \\Bigr| & \\leq C \\varepsilon^{3-\\sigma} , \\quad l=2,\\ldots,k.\n\\end{align}\nWe will prove these estimates later on.\n\nFor $( \\zeta,\\widehat{\\Lambda} ) \\in \\bar{\\mathcal B}$ let us estimate $\\mathcal R_0 (\\zeta,\\widehat \\Lambda) $.\nWe start with\n\\begin{align*}\n& \\big|\n(\\bar \\Lambda_1^0)^2 \\bigl( D_\\zeta\\sigma_1(\\varepsilon,\\zeta) - D^2_{\\zeta\\zeta} \\sigma_1(0,\\zeta^0)(\\zeta-\\zeta^0) \\bigr)\n\\bigr|\n\\\\\n& \\leq C |D_\\zeta\\sigma_1(\\varepsilon,\\zeta)-D_\\zeta\\sigma_1(0,\\zeta)|\n+C\\bigl| D_\\zeta\\sigma_1(0,\\zeta) - D_\\zeta\\sigma_1(0,\\zeta^0) - D^2_{\\zeta\\zeta} \\sigma_1(0,\\zeta^0)(\\zeta-\\zeta^0) \\bigr|\n\\\\\n& \\leq C \\varepsilon + C \\varepsilon^{2-2\\sigma} \\leq C \\varepsilon,\n\\end{align*}\nsince $|\\zeta-\\zeta^0|\\leq \\varepsilon^{1-\\sigma}$.\nNext,\n\\begin{align*}\n\\bigl|\n(2 \\bar \\Lambda_1^0 \\widehat \\Lambda_1 + \\widehat \\Lambda_1^2) D_\\zeta\\sigma\t_1(\\varepsilon,\\zeta)\n\\bigr|\n& \\leq C\\varepsilon^{2-2\\sigma}\n\\end{align*}\nbecause $|\\widehat{\\Lambda}_1|\\leq \\varepsilon^{1-\\sigma}$ and $D_\\zeta \\sigma_1(0,\\zeta^0)=0$.\nTo estimate $ 
\\frac{D_\\zeta \\sigma_1}{\\sigma_1} (\\widehat \\Lambda')^T Q(\\varepsilon,\\zeta) \\widehat\\Lambda' $ we note that \\eqref{cotasSigma} together with \\eqref{cotaGradSigma1} implies\n\\[\n\\Bigl|\n\\frac{D_\\zeta \\sigma_1(\\varepsilon,\\zeta)}{\\sigma_1(\\varepsilon,\\zeta)}\n\\Bigr| \\leq C \\varepsilon^{-\\sigma}\n\\]\nand so\n\\begin{align*}\n\\Bigl|\na_1 \\frac{D_\\zeta \\sigma_1}{\\sigma_1} (\\widehat \\Lambda')^T Q(\\varepsilon,\\zeta) \\widehat\\Lambda' \\Bigr|\n\\leq C \\varepsilon^{2-3\\sigma}.\n\\end{align*}\nNext, using \\eqref{cota1} we estimate\n\\begin{align*}\n\\bigl|\na_1 (\\widehat\\Lambda')^T ( D_\\zeta Q(\\varepsilon,\\zeta) ) \\widehat\\Lambda'\n\\bigr|\n&\\leq C \\varepsilon^{2-2\\sigma},\n\\\\\n\\bigl|\n2 (D_{\\zeta}\\sigma_1) \\, ( \\mathcal Poly_4(\\varepsilon,\\zeta,\\bar\\Lambda^0 + \\widehat \\Lambda) -\n\\mathcal Poly_4(0,\\zeta^0,\\bar\\Lambda^0 ) )\n\\bigr|\n&\\leq C \\varepsilon^{2-2\\sigma} ,\n\\\\\n|\\sigma_1(\\varepsilon,\\zeta) D_\\zeta \\mathcal Poly_4( \\varepsilon,\\zeta,\\bar\\Lambda^0 + \\widehat{\\Lambda})|\n& \\leq C \\varepsilon ,\n\\\\\n\\left|\\frac{1}{\\sigma_1}\t D_\\zeta \\theta_\\lambda(\\zeta,\\bar\\Lambda^0 + \\widehat \\Lambda)\n\\right|\n&\\leq C \\varepsilon^{2-\\sigma}\n\\end{align*}\nThis proves that\n\\begin{align}\n\\label{R0}\n|\\mathcal R_0 (\\zeta,\\widehat \\Lambda) |\\leq C \\varepsilon\n\\end{align}\nfor $( \\zeta,\\widehat{\\Lambda} ) \\in \\bar{\\mathcal B}$, if we have fixed $\\sigma>0$ small.\n\n\nLet us estimate $ |R_1 (\\zeta,\\widehat \\Lambda) | $ for $( \\zeta,\\widehat{\\Lambda} ) \\in \\bar{\\mathcal B}$.\nBy Taylor's theorem we have that\n\\begin{align*}\n& \\Bigl|\\frac{\\partial }{\\partial \\bar \\Lambda_1}\\mathcal Poly_4 (\\varepsilon,\\zeta,\\bar\\Lambda^0 + \\widehat \\Lambda)\n-\n\\frac{\\partial }{\\partial \\bar \\Lambda_1}\\mathcal Poly_4 (0,\\zeta^0,\\bar\\Lambda^0)\n- \\sum_{j=1}^k \\frac{\\partial^2 }{\\partial \\bar \\Lambda_j \\partial \t\\bar \\Lambda_1}\\mathcal 
Poly_4(0,\\zeta^0,\\bar \\Lambda^0)\\widehat \\Lambda_j\n\\\\\n& \\quad\n-D_\\zeta \\frac{\\partial }{ \\partial \t\\bar \\Lambda_1}\n\\mathcal Poly_4\n(0,\\zeta^0,\\bar \\Lambda^0)(\\zeta-\\zeta^0)\n\\Bigr| \\leq C \\varepsilon + C |\\zeta-\\zeta^0|^2 + C |\\widehat{\\Lambda}|^2 \\leq C \\varepsilon.\n\\end{align*}\nOn the other hand by \\eqref{cota2} we have\n\\begin{align*}\n\\left|\n\\frac{1}{\\sigma_1^2}\\frac{\\partial \\theta_\\lambda}{\\partial \\bar \\Lambda_1} (\\zeta,\\bar\\Lambda^0 + \\widehat \\Lambda)\n\\right| \\leq C \\varepsilon^{1-\\sigma\/2} .\n\\end{align*}\nThis shows that\n\\begin{align}\n\\label{R1}\n|\\mathcal R_1 (\\zeta,\\widehat \\Lambda) |\\leq C \\varepsilon^{1-\\sigma\/2}\n\\end{align}\nfor $( \\zeta,\\widehat{\\Lambda} ) \\in \\bar{\\mathcal B}$.\n\n\nFinally, using \\eqref{cota3}, we have that for $( \\zeta,\\widehat{\\Lambda} ) \\in \\bar{\\mathcal B}$ and $l=2,\\ldots,k$, the following holds\n\\begin{align*}\n\\biggl|\n2a_1 \\sum_{j=2}^k ( Q_{j-1,l-1} (\\varepsilon,\\zeta) -\\delta_{jl}) \\widehat \\Lambda_j\n\\biggr|\n\\leq C \\varepsilon^{2-2\\sigma},\n\\end{align*}\n\\begin{align*}\n\\biggl| \\sigma_1(\\varepsilon,\\zeta)\\, \\frac{\\partial }{\\partial \\bar \\Lambda_l} \\mathcal Poly_4 (\\varepsilon,\\zeta,\\Lambda^0 + \\widehat \\Lambda)\n\\biggr|\n\\leq C \\varepsilon^{2-2\\sigma},\n\\end{align*}\nand\n\\begin{align*}\n\\left|\n\\frac{1}{\\sigma_1(\\varepsilon,\\zeta)} \\frac{\\partial \\theta_\\lambda}{\\partial \\bar \\Lambda_l} (\\zeta,\\Lambda^0 + \\widehat \\Lambda)\n\\right|\n\\leq C \\varepsilon^{2-\\sigma}.\n\\end{align*}\nTherefore,\n\\begin{align}\n\\label{R2}\n|\\mathcal R_l (\\zeta,\\widehat \\Lambda) |\\leq C \\varepsilon^{2-2\\sigma}, \\quad l=2,\\ldots,k\n\\end{align}\nfor $( \\zeta,\\widehat{\\Lambda} ) \\in \\bar{\\mathcal B}$.\n\nCombining \\eqref{R0}, \\eqref{R1} and \\eqref{R2} we obtain\n\\[\n|\\mathcal{R}(\\zeta,\\widehat{\\Lambda})|\\leq C \\varepsilon^{1-\\sigma\/2},\n\\quad \\forall ( 
\\zeta,\\widehat{\\Lambda} ) \\in \\bar{\\mathcal B}.\n\\]\nA standard application of degree theory then yields a solution of $\\Upsilon(\\zeta,\\widehat{\\Lambda})=0$ in the ball $\\mathcal{B}$.\nNote that for $(\\zeta,\\widehat{\\Lambda}) \\in \\mathcal B$ we are in the region where \\eqref{sigma-negative} holds, and hence $\\sigma_1(\\zeta , \\bar\\Lambda^0 + \\widehat{\\Lambda})<0$. Therefore we have found a critical point of $I_\\lambda(\\zeta,\\mu)$, which was the desired conclusion.\n\\end{proof}\n\n\n\n\n\\begin{proof}[Proof of \\eqref{cota1}, \\eqref{cota2}, \\eqref{cota3}]\nBy Lemma~\\ref{lemmaEnergyExpansion} (using the satement with $\\frac{\\sigma}{2}$ instead of $\\sigma$) we get directly the estimates\n\\begin{align}\n\\label{theta1a}\n| D_\\zeta \\theta_\\lambda^{(1)} (\\zeta,\\mu) | & \\leq C |\\mu|^{3-\\sigma\/2} \\leq C \\varepsilon^{3-\\sigma\/2} ,\n\\\\\n\\label{theta1b}\n\\Bigl|\n\\frac{\\partial \\theta_\\lambda^{(1)}}{\\partial \\bar\\Lambda_i} (\\zeta,\\mu) \\Bigr|\n&\\leq C |\\mu|^{3-\\sigma\/2} \\leq C \\varepsilon^{3-\\sigma\/2} .\n\\end{align}\nTo estimate $D_\\zeta \\theta_\\lambda^{(2)}$, we recall formula \\eqref{formulaR} which gives\n\\begin{align*}\n\\theta_\\lambda^{(2)}(\\zeta,\\mu)\n&= \\int_0^1 s D^2 \\bar J_\\lambda(V + s\\phi)[\\phi^2] \\,ds .\n\\\\\n&=\n\\int_0^1 s \\left[\\int_{\\Omega_\\varepsilon}\\vert\\nabla \\phi\\vert^2 - \\varepsilon^2 \\lambda \\phi^2-5 (V + s\\phi)^4 \\phi^2\\right]\\,ds\n\\end{align*}\nand therefore\n\\begin{align*}\n|D_\\zeta \\theta_\\lambda^{(2)}(\\zeta,\\mu) |\n\\leq C \\|\\phi\\|_{*} \\|D_\\zeta\\phi\\|_* + \\frac{C}{\\varepsilon}\\|\\phi\\|_*^2.\n\\end{align*}\nWe can compute\n\\begin{align*}\nM_\\lambda(\\zeta) \\mu^{\\frac{1}{2}}\n&=|\\sigma_1|^{\\frac{1}{2}} M_\\lambda P \\bar\\Lambda\n=|\\sigma_1|^{\\frac{1}{2}} \\Bigl( \\sigma_1 v_1 \\bar\\Lambda_1 + \\sum_{l=2}^k \\bar v_l \\bar \\Lambda_l \\Bigr) ,\n\\end{align*}\nand thanks to \\eqref{cotasSigma} we see 
that\n\\[\n|M_\\lambda(\\zeta) \\mu^{\\frac{1}{2}}|\\leq C \\varepsilon^{\\frac{3}{2}-\\sigma} ,\n\\]\nwhich in turn implies\n\\[\n\\| E \\|_{**} \\leq C \\varepsilon^{2-\\sigma}, \\quad\n\\| \\phi \\|_{*} \\leq C \\varepsilon^{2-\\sigma} .\n\\]\nFrom this we deduce\n\\[\n| \\theta_\\lambda^{(2)}(\\zeta,\\mu) |\\leq C \\varepsilon^{4-2\\sigma}.\n\\]\nWe can write \\eqref{expansionE2} in the form (near $\\zeta_i'$)\n\\begin{align*}\nE(y)\n& = - 20\\pi \\alpha_3 \\varepsilon^{\\frac{1}{2}}\nw_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}(y)\nM_\\lambda \\mu^{\\frac{1}{2}} + O( w_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}(y) \\varepsilon^2) + O(\\varepsilon^5)\n\\\\\n&=\n- 20\\pi \\alpha_3 \\varepsilon^{\\frac{1}{2}}\nw_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}(y)\n|\\sigma_1|^{\\frac{1}{2}} \\Bigl( \\sigma_1 v_1 \\bar\\Lambda_1 + \\sum_{l=2}^k \\bar v_l \\bar \\Lambda_l \\Bigr)\n+ O( w_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}(y)^3 \\varepsilon^2) + O(\\varepsilon^5) .\n\\end{align*}\nThe $O(\\cdot )$ terms are bounded together with their derivatives with respect to $\\zeta'$, $\\mu'$.\nDifferentiating $E$ with respect to $\\zeta'$ and $\\bar\\Lambda_l$, taking into account the last expression, and thanks to\n\\eqref{cotaZetaGorro}, \\eqref{cotaLambdaGorro}, \\eqref{cotasSigma} and \\eqref{cotaGradSigma1},\nwe find that for $|y-\\zeta_i|\\leq \\frac{\\delta}{\\varepsilon}$ the following hold\n\\begin{align*}\nD_{\\zeta'} E\n&= O(\\varepsilon^{2-\\sigma} )w_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}(y)^4\n+O( w_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}(y)^3 \\varepsilon^2) + O(\\varepsilon^5),\n\\end{align*}\n\\begin{align*}\nD_{\\bar\\Lambda_1} E\n&= O(\\varepsilon^{ 2-\\sigma } )w_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}(y)^4\n+O( w_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}(y)^3 \\varepsilon^2) + O(\\varepsilon^5),\n\\end{align*}\nand for $l=2,\\dots,k$\n\\begin{align*}\nD_{\\bar\\Lambda_l} E\n&= O(\\varepsilon )w_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}(y)^4\n+O( 
w_{\\mu_i^{\\prime},\\zeta_i^{\\prime}}(y)^3 \\varepsilon^2) + O(\\varepsilon^5) .\n\\end{align*}\nFrom this and analogous estimates outside of all the balls $B_{\\delta\/\\varepsilon}(\\zeta_i')$ it follows that\n\\begin{align*}\n\\|D_{\\zeta'} \\phi\\|_* \\leq \\varepsilon^{ 2-\\sigma } , \\quad\n\\|D_{\\bar\\Lambda_1} \\phi\\|_* \\leq \\varepsilon^{ 2-\\sigma}, \\quad\n\\|D_{\\bar\\Lambda_l} \\phi\\|_* \\leq \\varepsilon, \\quad l=2,\\ldots,k.\n\\end{align*}\nAs a consequence,\n\\begin{align}\n\\label{theta2}\n|D_\\zeta\\theta_\\lambda^{(2)}(\\zeta,\\mu) |\\leq C \\varepsilon^{3-\\sigma},\n\\quad\n|D_{\\bar\\Lambda_1} \\theta_\\lambda^{(2)}(\\zeta,\\mu) |\\leq C \\varepsilon^{4-2\\sigma},\n\\quad\n|D_{\\bar\\Lambda_l}\\theta_\\lambda^{(2)}(\\zeta,\\mu) |\\leq C \\varepsilon^{3-\\sigma} ,\n\\end{align}\nfor $l=2,\\ldots,k$.\n(Here we are assuming $\\sigma>0$ small so that $3-\\sigma < 4 - 2 \\sigma$).\n\nCombining \\eqref{theta1a}, \\eqref{theta1b} and \\eqref{theta2} we obtain the estimates\n\\eqref{cota1}, \\eqref{cota2}, \\eqref{cota3}.\n\\end{proof}\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{The case of the annulus}\n\\label{exampleAnnulus}\n\nLet $00$ and a solution of $(\\wp_\\lambda)$ with $k$ concentration points.\n\\end{prop}\n\nExplicit values of $a_k$ seem difficult to get, but one can obtain estimates that show that for a low number of peaks the annulus does not need to be so thin. 
In particular for two bubbles we have the following estimate.\n\n\n\\begin{prop}\n\\label{propA2}\nFor $a\\in (\\frac{1}{49},1)$ there is $\\lambda>0$ and a solution of $(\\wp_\\lambda)$ with $2$ concentration points.\n\\end{prop}\n\nLet us first give a lemma about the behavior of the Green function for a thin domain.\nFor this we write now $G_0^a(x,y)$, $H_0^a(x,y)$, $g_0^a(x) = H_0^a(x,x)$ for the Green function, its regular part and the Robin function respectively for $\\lambda=0$ in the domain $\\Omega_a$.\n\\begin{lemma}\n\\label{lemma81}\nLet $x_0, y_0$ be fixed so that $|x_0|=|y_0|=1$ and $y_0 \\not= x_0$. Then\n\\begin{align}\n\\label{convergenceGreen}\nG_0^a(y,x) \\to 0\n\\end{align}\nas $a \\to 1$ uniformly for $y = r y_0$ with $r\\in (a,1)$ and $x = r' x_0$ with $r'\\in (a,1)$.\nMoreover,\n\\begin{align}\n\\label{convergenceRobin}\n\\min_{\\Omega_a} g_0^a \\to \\infty\n\\end{align}\nas $a \\to 1$.\n\\end{lemma}\n\\begin{proof}\nTo prove \\eqref{convergenceGreen} let us write $\\varepsilon= 1 -a>0$ and let $\\varepsilon\\to 0$. We also change the notation $G_0^a$ to $G_0^\\varepsilon$, $\\Omega_a$ to $\\Omega_\\varepsilon$ and shift coordinates so that the annulus is centered at $-e_1$:\n\\[\n\\Omega_\\varepsilon = \\{ z\\in \\R^3 : 1-\\varepsilon<|z + e_1|<1\\},\n\\]\nwhere $e_1=(1,0,0)$.\nWithout loss of generality we can assume that $y_0 = 0$.\nOur assumption now is that $|x_0+e_1|=1$ and $x_0\\not=0$.\n\n\nBy the maximum principle\n\\[\n0\\leq G_0^\\varepsilon(y,x) \\leq \\Gamma(y-x) , \\quad \\forall y\\in \\Omega_\\varepsilon\\setminus\\{x\\},\n\\]\nfor any $x\\in \\Omega_\\varepsilon$.\nLet $\\rho =\\frac{|x_0-y_0|}{4} >0$. 
Then there is $C$ such that\n\\[\n0\\leq G_0^\\varepsilon(y,x) \\leq C , \\quad \\forall y\\in \\Omega_\\varepsilon \\cap B_\\rho(0),\n\\]\nfor any $x$ in the segment $\\{ t x_0 + (1-t)(-e_1) : t\\in (a,1)\\}$.\n\nLet\n\\[\n\\tilde G^\\varepsilon(y') = G_0^\\varepsilon(\\varepsilon y',x)\n\\quad y' \\in \\frac{1}{\\varepsilon}( \\Omega_\\varepsilon \\cap B_\\rho(0)) .\n\\]\nThen $\\tilde G^\\varepsilon$ is harmonic and bounded in $\\frac{1}{\\varepsilon}( \\Omega_\\varepsilon \\cap B_\\rho(0)) $. By standard elliptic estimates, up to a subsequence, $\\tilde G^\\varepsilon \\to \\tilde G$ which is harmonic and bounded on the slab $S = \\{ (x_1,x_2,x_3) : -1<x_1<0\\}$ and vanishes on $\\partial S$. It follows that $\\tilde G \\equiv 0$, which proves \\eqref{convergenceGreen}.\n\nTo prove \\eqref{convergenceRobin} one argues similarly, comparing $g_0^a$ with the corresponding quantity for a slab of width $1-a$; this yields a constant $c>0$ such that $g_0^a \\geq \\frac{c}{1-a}$ in $\\Omega_a$ for $a$ close to $1$. This implies that $\\min_{\\Omega_a} g_0^a \\geq \\frac{c}{1-a}\\to \\infty$ as $a\\to 1$.\n\\end{proof}\n\n\n\n\nTo prove Propositions~\\ref{propA1} and \\ref{propA2} we consider a configuration of points in the $xy$ plane at equal distance from the origin and spaced at uniform angles, that is,\n\\[\n\\zeta_j(r) = ( r e^{2 \\pi i \\frac{j-1}{k}} , 0 ) \\in \\R^3 , \\quad j=1,\\ldots, k,\n\\]\nwhere the notation we are using for $z\\in {\\mathbb C}$ and $t\\in \\R$, is $(z,t) = (Re(z),Im(z),t)$.\n\nDefine then the matrix $M_\\lambda$ restricted to this configuration as\n\\[\n\\tilde M_\\lambda(r) = M_\\lambda(\\zeta(r) ),\n\\]\nwhere $\\zeta(r) = (\\zeta_1(r) , \\ldots, \\zeta_k(r) )$.\nSimilarly we define\n\\[\n\\tilde \\psi_\\lambda(r) = \\psi_\\lambda( \\zeta(r)) ,\n\\]\nand denote by $\\tilde \\sigma_j(\\lambda,r)$ the eigenvalues of $\\tilde M_\\lambda(r)$ with $\\tilde \\sigma_1$ the smallest one.\n\n\n\\begin{proof}[Proof of Proposition~\\ref{propA1}]\nLet $k\\geq 2$ be given.\nBy Lemma~\\ref{lemma81}, if $1-a>0$ is small, we have\n\\begin{align}\n\\label{positiveSigma1Lambda0}\n& \\tilde \\sigma_j(0,r) >0 , \\quad \\forall r\\in (a,1), \\ j=1,\\ldots, k,\n\\\\\n\\label{positivePairLambda0}\n& g_0(\\zeta_1(r))^2 - G_0(\\zeta_1(r), \\zeta_j(r))^2>0\\quad \\forall r\\in (a,1), \\ j=2,\\ldots, 
k.\n\\end{align}\nNow we define\n\\begin{align}\n\\label{lambda0}\n\\lambda_0 = \\sup\\, \\{\\lambda \\in (0,\\lambda_1): \\tilde \\sigma_j(\\lambda',r) >0 \\quad \\forall r\\in (a,1), \\ j=1,\\ldots, k , \\ \\lambda'\\in (0,\\lambda) \\}.\n\\end{align}\nThen $\\lambda_0$ is well defined by continuity and \\eqref{positiveSigma1Lambda0}. We will need the following properties:\n\\begin{align}\n\\label{positivePair}\n& g_\\lambda(\\zeta_1(r))^2 - G_\\lambda(\\zeta_1(r), \\zeta_j(r))^2>0\\quad \\forall\n\\lambda \\in [0,\\lambda_0),\nr\\in (a,1), \\ j=2,\\ldots, k,\n\\\\\n\\label{lambdaLess}\n& \\lambda_0 <\\lambda_1,\n\\\\\n\\label{sigma1}\n& \\tilde \\sigma_1(\\lambda_0,r) \\geq 0 \\quad \\text{and there exists $r_0 \\in (a,1)$ such that $\\tilde \\sigma_1(\\lambda_0,r_0)=0$},\n\\\\\n\\label{simpleEigenvalue}\n& \\tilde \\sigma_j(\\lambda_0,r) >0 \\quad \\text{for all $r\\in (a,1)$ and $ j=2,\\ldots, k$},\n\\\\\n\\label{sigma1Decreasing}\n& \\frac{ \\partial \\tilde \\sigma_1 }{\\partial \\lambda}(\\lambda_0,r)<0, \\quad \\forall r\\in (a,1).\n\\end{align}\nLet us prove \\eqref{positivePair}. If this fails, then for some $\\lambda \\in [0,\\lambda_0)$, some $r \\in (a,1)$, and some $ j=2,\\ldots, k $, we have $g_\\lambda(\\zeta_1(r))^2 - G_\\lambda(\\zeta_1(r), \\zeta_j(r))^2 \\leq 0$.\nThis condition implies that the matrix $\\tilde M_\\lambda(r)$ has a nonpositive eigenvalue. This follows from Sylvester's criterion, which asserts that a symmetric matrix $A=(a_{i,j})_{1\\leq i,j\\leq k}$ is positive definite if and only if the submatrices $(a_{i,j})_{1\\leq i,j\\leq m}$ are positive definite for $ m=1,\\ldots, k$ (we apply this to $\\tilde M_\\lambda(r)$ after the permutation of the rows 2 and $j$, and the columns 2 and $j$).\nBut this contradicts the definition \\eqref{lambda0} of $\\lambda_0$.\n\nLet us prove \\eqref{lambdaLess}. For this we recall that $\\min_{\\Omega_a} g_0>0$ and $\\min_{\\Omega_a} g_\\lambda \\to -\\infty$ as $\\lambda\\uparrow\\lambda_1$. 
Therefore there exists $r \\in (a,1) $ and $\\lambda \\in (0,\\lambda_1)$ such that $g_\\lambda(\\zeta_1(r)) = 0$.\nThis implies that $ g_\\lambda(\\zeta_1(r))^2 - G_\\lambda(\\zeta_1(r), \\zeta_j(r))^2<0$\nfor any $ j=2,\\ldots, k$.\nBy \\eqref{positivePair} this value of $\\lambda$ is greater or equal than $\\lambda_0$. It follows that $\\lambda_0<\\lambda_1$.\n\nSince $\\lambda_0<\\lambda_1$ by continuity we deduce the validity of \\eqref{sigma1}.\nWe also deduce from this and the way we have arranged the eigenvalues that $\\sigma_j(\\lambda_0,r)\\geq 0$ for all $ j=2,\\ldots, k$ and for all $r\\in (a,1)$.\n\nTo continue the proof of the stated properties we need a formula for the eigenvalues of a circulant matrix. We recall that a matrix $A$ of $k\\times k$ is circulant if it has the form\n\\[\nA =\n\\left[\n\\begin{matrix}\na_0 & a_{k-1} & a_{k-2} & \\ldots & a_{2} & a_{1}\n\\\\\na_1 & a_0 & a_{k-1} & \\ldots & a_{3} & a_{2}\n\\\\\na_2 & a_1 & a_0 & \\ldots & a_{4} & a_{3}\n\\\\\n\\vdots & \\vdots& \\vdots& &\\vdots & \\vdots\n\\\\\na_{k-1} & a_{k-2} & a_{k-3} & \\ldots & a_{1} & a_{0}\n\\end{matrix}\n\\right]\n\\]\nfor some complex numbers $a_0,\\ldots,a_{k-1}$.\n(This means each column is obtained from the previous one by a rotation in the components).\nWe note that the matrix $\\tilde M_\\lambda(r)$ has this structure with\n\\begin{align*}\na_0 &= g_\\lambda(\\zeta_1(r)),\n\\\\\na_j &= -G_\\lambda( \\zeta_1(r), \\zeta_{j+1}(r)), \\quad j=1,\\ldots, k-1 ,\n\\end{align*}\nsince $G_\\lambda (\\zeta_l(r), \\zeta_j(r) ) = G_\\lambda (\\zeta_{l+1}(r), \\zeta_{j+1}(r) ) $.\n\nIt is known that the eigenvalues $\\nu_l$ ($l=0,\\ldots, k-1$) of the circulant matrix $A$ are given by\n\\[\n\\nu_l = \\sum_{j=0}^{k-1} a_j e^{\\frac{2\\pi i}{k} j l } , \\quad l=0,\\ldots, k-1.\n\\]\nThese numbers coincide up to relabeling the indices with the numbers $\\tilde\\sigma_j(\\lambda,r)$.\nWe note that since $\\tilde M_\\lambda(r)$ is symmetric, the eigenvalues are real.\nWe 
claim that\n\[\n\nu_0 < \nu_l , \quad l=1,\ldots, k-1.\n\]\nIndeed, since the $\nu_l$ are real,\n\begin{align*}\n\nu_l & = g_\lambda(\zeta_1(r)) - \sum_{j=1}^{k-1} Re\left[ G_\lambda(\zeta_1(r), \zeta_{j+1}(r)) e^{\frac{2\pi i}{k} j l }\right]\n\\\n& > g_\lambda(\zeta_1(r)) - \sum_{j=1}^{k-1} G_\lambda(\zeta_1(r), \zeta_{j+1}(r)) = \nu_0,\n\end{align*}\nwhere the strict inequality holds because, for $l=1,\ldots,k-1$, the points $e^{\frac{2\pi i}{k} j l }$ appearing in the sum are not all equal to $1$ and $G_\lambda(\zeta_1(r), \zeta_{j+1}(r)) >0$.\nThis proves \eqref{simpleEigenvalue} and also that\n\begin{align}\n\label{formulaSigma1}\n\tilde \sigma_1(\lambda,r) = g_\lambda(\zeta_1(r)) - \sum_{j=1}^{k-1} G_\lambda(\zeta_1(r), \zeta_{j+1}(r)) ,\n\end{align}\nfor all $\lambda \in [0,\lambda_0]$, because for this range of $\lambda$ we know that the eigenvalues $\tilde \sigma_j$ are nonnegative.\nFrom this formula we obtain\n\begin{align*}\n\frac{\partial \tilde \sigma_1}{\partial \lambda}(\lambda,r)\n&=\n\frac{\partial g_\lambda}{\partial \lambda}\n(\zeta_1(r)) - \sum_{j=1}^{k-1}\n\frac{\partial G_\lambda }{\partial \lambda} (\zeta_1(r), \zeta_{j+1}(r)) < 0\n\end{align*}\nfor $\lambda \in [0,\lambda_0]$, which proves \eqref{sigma1Decreasing}.\n\nLet us see that we are almost in a situation where Theorem~\ref{thm1} can be applied.\nLet $r_0$ be the number found in property \eqref{sigma1}.\nThe eigenvalue $\tilde \sigma_1(\lambda_0,r_0)$ is zero and $\tilde M_{\lambda_0}(r_0)$ is positive semidefinite (assumption (i)); moreover, we have $D_\zeta \sigma_1(\lambda_0,\zeta(r_0))=0$ because $\zeta(r_0)$ is a global minimum for $\sigma_1(\lambda_0,\cdot)$. Condition (iv) follows from \eqref{sigma1Decreasing}.\n\n\n\nThe only hypothesis in Theorem~\ref{thm1} which has not been verified is the nondegeneracy of $\zeta(r_0)$ as a critical point of $\sigma_1(\lambda_0,\cdot)$. 
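Before turning to the nondegeneracy issue, note that the circulant eigenvalue formula used above is easy to check numerically. The following sketch (not part of the paper; the entries are arbitrary illustrative numbers, unrelated to the actual Green function values) builds a symmetric circulant matrix with first column $a_0,\ldots,a_{k-1}$ and compares its spectrum with $\nu_l=\sum_{j=0}^{k-1} a_j e^{2\pi i jl/k}$:

```python
import numpy as np

# Sanity check of the circulant eigenvalue formula nu_l = sum_j a_j e^{2 pi i j l / k}.
# Symmetry of the matrix requires a_j = a_{k-j}, which also makes the nu_l real.
k = 6
rng = np.random.default_rng(0)
half = rng.normal(size=k // 2 + 1)
a = np.array([half[min(j, k - j)] for j in range(k)])  # first column, a_j = a_{k-j}

# Build the circulant matrix A[i, j] = a[(i - j) mod k], as in the display above.
A = np.array([[a[(i - j) % k] for j in range(k)] for i in range(k)])

nu = np.array([sum(a[j] * np.exp(2j * np.pi * j * l / k) for j in range(k))
               for l in range(k)])

# The spectrum of A coincides, up to ordering, with the nu_l.
assert np.allclose(sorted(np.linalg.eigvalsh(A)), sorted(nu.real))
```

The assertion confirms that the eigenvalues of a symmetric circulant matrix are the discrete Fourier transform of its first column, which is the identity exploited in the proof.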
In fact this nondegeneracy does not hold, because the problem is invariant under rotations about the $z$ (or $x_3$) axis.\nWe could impose a symmetry condition on the functions involved so that degeneracy by rotation is eliminated, but still we do not know whether we have nondegeneracy in the radial direction.\nInstead of this assumption, we will see that a slight modification of the argument in the proof of Theorem~\ref{thm1} yields the desired conclusion. Basically, the nature of the critical point of $F_\lambda$ in this case is stable with respect to $C^1$ perturbations.\n\nWe recall from Section~\ref{secReduction} that to construct a solution it is sufficient to find a critical point of the function\n$\n\bar J_\lambda( \sum_{j=1}^k V_j + \phi)\n$,\nand that\n\[\n\bar J_\lambda\Bigl(\sum_{j=1}^k V_j + \phi\Bigr)\n= J_\lambda\Bigl(\sum_{j=1}^k U_j \Bigr ) + o(\varepsilon^{2})\n\]\nwhere $o(\varepsilon^{2})$ is in $C^1$ norm.\nTherefore it is enough to ensure that $J_\lambda(\sum_{j=1}^k U_j ) $ has a critical point that is stable under $C^1$ perturbations.\n\n\n\n\n\nIn the case when $\Omega_a$ is an annulus and $\zeta_j(r) = (re^{2\pi i\frac{j-1}{k}},0)$, using that $g_\lambda(\zeta_j(r))$ only depends on $r$ and considering $\mu = \mu_1 = \ldots = \mu_k $, by Lemma~\ref{lemmaEnergyExpansion} we have that\n\[\nJ_\lambda\Bigl(\sum_{j=1}^k U_j \Bigr) = F_\lambda(\mu,r) + R_\lambda(\mu,r),\n\]\nwhere\n\[\nF_\lambda(\mu,r) = k a_0 + 2 a_1 \mu f_\lambda(r) + k a_2 \lambda \mu^2 - a_3 \mu^2 f_\lambda(r)^2\n\]\nwith\n\begin{align*}\nf_\lambda(r) &= k g_\lambda(\zeta_1(r)) - k \sum_{j=2}^k G_\lambda(\zeta_1(r), \zeta_j(r))\n\end{align*}\nand\n\[\nR_\lambda(\mu,r) = O(\mu^{3-\sigma})\n\]\nfor some $\sigma\in(0,1)$.\n\nAs was observed previously, for $\lambda \in [0,\lambda_0]$, $f_\lambda(r)$ is precisely $k$ times the eigenvalue $\tilde \sigma_1(\lambda,r)$ (see \eqref{formulaSigma1}).\nTherefore 
\eqref{sigma1} gives $f_{\lambda_0}(r)\geq 0$, and then there exists $r_0\in (a,1)$ such that $f_{\lambda_0}(r_0)=0$.\n\nSince we have \eqref{sigma1Decreasing}, we deduce that for $\lambda = \lambda_0 +\varepsilon$ with $\varepsilon>0$ small enough and $r$ close to $r_0$,\nwe have $f_\lambda(r)<0$ and so the equation\n\[\n\frac{\partial}{\partial \mu}\nF_\lambda(\mu,r) = 0\n\]\nhas a solution given explicitly by\n\[\n\mu_0(\lambda,r) = \frac{- a_1 f_\lambda(r)}{k a_2 \lambda - a_3 f_\lambda(r)^2} > 0.\n\]\nWe consider this expression only for $r$ in a neighborhood of $r_0$, so that $f_\lambda(r) \leq 0$.\nThen\n\[\n\frac{\partial}{\partial \mu} ( F_\lambda(\mu,r) +R_\lambda(\mu,r) ) = 0\n\]\nhas a solution $\mu(\lambda,r) $ close to $\mu_0(\lambda,r) $.\nNote that since $\frac{\partial}{\partial \mu} R_\lambda(\mu,r) = O(\mu^{2-\sigma})$, we have\n\[\n|\mu(\lambda,r)- \mu_0(\lambda,r)|\leq C | f_\lambda(r)|^{2-\sigma}.\n\]\nReplacing $\mu(\lambda,r)$ in $F_\lambda$ we find\n\[\nF_\lambda( \mu(\lambda,r), r ) + R_\lambda( \mu(\lambda,r), r ) =\nk a_0 -\frac{a_1^2 f_\lambda(r)^2}{k a_2 \lambda-a_3 f_\lambda(r)^2} + O( | f_\lambda(r)|^{3-\sigma} ).\n\]\nFrom this formula, \eqref{sigma1Decreasing} and the property\n\[\nf_\lambda(r) \to \infty \quad \text{ as \, $r\to a$ or $r\to 1$},\n\]\nwe get that $F_\lambda( \mu(\lambda,r), r ) + R_\lambda( \mu(\lambda,r), r ) $ has a critical point $r_\lambda$ for which $f_\lambda(r_\lambda)<0$.\n\end{proof}\n\n\n\n\n\n\n\begin{proof}[Proof of Proposition~\ref{propA2}]\nThe argument is the same as in Proposition~\ref{propA1}, except that for this result we claim that properties \eqref{positiveSigma1Lambda0} and \eqref{positivePairLambda0} hold for $a\in (\frac{1}{49},1)$.\nIn the case $k=2$ both properties actually follow from the following claim: if $a \in (\frac{1}{49},1)$ then\n\begin{align}\n\label{g1}\ng_0(x) > G_0(x,-x) , 
\\quad \\forall x\\in \\Omega_a.\n\\end{align}\nTo prove this we use an explicit formula for the Green function in the annulus $ \\Omega_a$, which can be found in \\cite{grossi-vujadinoic}, to obtain that:\n\\[\ng_0(x) = \\frac{1}{\\omega_{2}}\n\\sum_{m=0}^\\infty\nP_m(x)\n\\quad \\text{and}\\quad\nG_0(x,-x) = \\frac{1}{\\omega_{2}}\\left[\n\\frac{1}{2\\vert x\\vert}-\n\\sum_{m=0}^\\infty\n(-1)^m P_m(x)\\right],\n\\]\nwhere\n\\[\nP_m(x):=\\frac{a^{2m+1}-2a^{2m+1} |x|^{2m+1} +|x|^{2(2m+1)} }{(2m+1) |x|^{2(m+1)}(1-a^{2m+1}) } .\n\\]\nNotice that $P_m(x)$ is nonnegative for all $m\\geq 0$, and therefore,\n\\begin{align*}\ng_0(x) - G_0(x,-x) &= \\frac{1}{\\omega_{2}}\n\\left[-\n\\frac{1}{2\\vert x\\vert}+\n\\sum_{m=0}^\\infty\n[1+(-1)^m] \\,P_m(x)\\right]\\\\\n&\\geq\n\\frac{1}{\\omega_{2}}\n\\left[-\n\\frac{1}{2\\vert x\\vert}+\n2P_0(x)\\right]\\qquad \\forall x\\in \\Omega_a.\n\\end{align*}\nA sufficient condition to have \\eqref{g1} is then\n\\[\n4 \\frac{a-2a|x|+|x|^2}{|x|^2(1-a)}> \\frac{1}{|x|}, \\quad \\forall x\\in\\Omega_a.\n\\]\nThis in turn holds if $a\\in(\\frac{1}{49},1)$.\n\\end{proof}\n\n\n\n\\bigskip \\noindent \\textbf{Acknowledgement.}\nThe research of M. Musso has been partly supported by FONDECYT Grant 1160135 and Millennium Nucleus Center for Analysis of PDE, NC130017.\nD. Salazar was partially funded by grant Hermes 35454 from Universidad Nacional de Colombia sede Medell\\'\\i n and\nMillennium Nucleus Center for Analysis of PDE, NC130017.\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nIn recent years, multimode optical fibers (MMFs) are at the focus of numerous studies aiming at enhancing the capacity of optical communications and endoscopic imaging systems \\cite{Richardson2013, Ploschner2015}. \nIdeally, one would like to utilize the transverse modes of the fiber to deliver information via multiple channels, simultaneously. 
However, inter-modal interference and coupling between the guided modes of the fiber result in scrambling between channels. \nOne of the most promising approaches for unscrambling the transmitted information is by shaping the optical wavefront at the proximal end of the fiber in order to get a desired output at the distal end. \nDemonstrations include compensation of modal dispersion \cite{Shen2005, Alon2014,Wen2016}, focusing at the distal end \cite{DiLeonardo2011,Papadopoulos2012, Caravaca-Aguirre2013,Papadopoulos2013,BoonzajerFlaes2018}, and delivering images \cite{Cizmar2012, Bianchi2012, Choi2012} or an orthogonal set of modes \cite{Carpenter:14, Carpenter2015} through the fiber. \n\nTypically in wavefront shaping, the incident wavefront is controlled using spatial light modulators (SLMs), digital micromirror devices (DMDs) or nonlinear crystals.\nIn all cases, the shaped wavefront sets the superposition of guided modes that is coupled into the fiber. For a fixed transmission matrix (TM) of the fiber, this superposition determines the field at the output of the fiber, as depicted in Fig. \ref{fig:matrix}(a). Hence, in a fiber that supports $N$ guided modes, wavefront shaping provides at most $N$ complex degrees of control. \nHowever, many applications require the number of degrees of control to be larger than the number of modes. \nFor example, one of the key ingredients for spatial division multiplexing are mode converters, which require simultaneous control over the output field of multiple incident wavefronts.\nTo this end, complex multimode transformations were previously demonstrated by applying phase modulations at multiple planes \cite{Morizur:10, Labroille:14, Fontaine2019, Fickler2019}. However, this requires free-space propagation between the modulators, thus limiting the stability of the system and increasing its footprint. 
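To make the counting argument above concrete, here is a toy numerical sketch (the dimensions and values are illustrative, not from the paper): for a fixed TM $T$, the output field is $y = Tx$, so shaping the input exposes $N$ complex parameters, while controlling $T$ itself exposes up to $N^2$.

```python
import numpy as np

# Degrees-of-control counting for y = T x (illustrative sizes and random values).
rng = np.random.default_rng(1)
N = 6  # number of guided modes

T = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))  # fixed transmission matrix
x = rng.normal(size=N) + 1j * rng.normal(size=N)            # shaped input wavefront
y = T @ x                                                   # output field

# Shaping x exposes N complex knobs; shaping T itself exposes N^2.
assert x.size == N and T.size == N ** 2
```

This is only a bookkeeping illustration; in a real fiber the physically accessible perturbations of $T$ span a subset of all $N \times N$ matrices.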
\n\nIn this work we propose and demonstrate a new method for controlling light at the output of an MMF, which does not rely on shaping the incident light and can be implemented in an all-fiber configuration. \nInspired by the ongoing efforts to generate on-chip mode converters by manipulating modal interference in multimode interferometers \cite{Piggott2015, Bruck2016, Harris:18}, we directly control the light propagation inside the fiber to manipulate its TM, allowing us to generate a desired field at its output (Fig. \ref{fig:matrix}(b)). Since the TM is determined by $O\left(N^{2}\right)$ complex parameters, TM-shaping provides access to many more degrees of control than shaping the incident wavefront. \n\nTo control the fiber's TM, we apply computer-controlled bends at multiple positions along the fiber. \nSince the stress induced by the bends changes the boundary conditions of the system, it modifies the TM such that different bends yield different speckle patterns at the distal end (Fig. \ref{fig:matrix}(c)). \nWe can therefore obtain a desired field at the output of the fiber by imposing a set of controlled bends, without modifying the incident wavefront. \nSince in this approach the input field is fixed, it does not require an SLM or any other free-space component. Such an all-fiber configuration is especially attractive for MMF-based applications that require high throughput and efficient control over the field at the output of the fiber. As a proof-of-concept demonstration of TM-shaping, we show focusing at the distal end of the fiber and conversion between the fiber modes. \n\n\begin{figure}[!tbh]\n\begin{centering}\n\includegraphics[width=\columnwidth]{Figures\/fig1.png}\n\par\end{centering}\n\caption{\textbf{Shaping the transmission matrix of multimode optical fibers.}\n(a) The conventional method for wavefront shaping in complex media, performed e.g. 
by using an SLM and free space optics to tailor the incoming wavefront at the proximal end of the multimode fiber. (b) Proposed method for light modulation, in which the transmission matrix of the medium is altered, e.g. by performing perturbations on the fiber itself. (c) Illustration of the sensitivity of the output pattern to the fiber geometry. Three different configurations of the fiber (depicted by red, green and blue curves) correspond to three different speckle patterns at the output of the fiber. Since the input field coupled into the fiber is fixed, the different output patterns correspond to different transmission matrices of the fiber.}\n\label{fig:matrix}\n\end{figure}\n\n\n\n\section{Experimental Techniques}\n\n\subsection{Principle}\n\nOur method relies on applying controlled weak local bends along the fiber. To this end, we use an array of computer-controlled piezoelectric actuators to locally apply pressure on the fiber at multiple positions \cite{Golubchik2015, regelman2016method}. \nThe TM of the fiber depends on the curvatures of the bends, which are determined by the travel of each actuator. \nTo obtain a target pattern at the distal end, we compare the intensity pattern recorded at the output of the fiber with the desired target pattern. Using an iterative algorithm, we search for the optimal configuration of the actuators, i.e. the optimal travel of each actuator, that maximizes the overlap of the output and target patterns. \n \n\subsection{Experimental Setup}\nThe experimental setup is depicted in Fig. \ref{fig:setup}. A HeNe laser (wavelength of $\lambda=632.8 \hspace{2pt} nm$) is coupled to an optical fiber, overfilling its core. We placed 37 piezoelectric actuators along the fiber. By applying a set of computer-controlled voltages to each actuator, we controlled the vertical displacement of the actuators. 
\nEach actuator bends the fiber by a three-point contact, creating a bell-shaped local deformation of the fiber, with a curvature that depends on the vertical travel of the actuator (see Figs. \ref{fig:setup}(b,c)). \nFor the maximal curvature we applied (radius of curvature $R\approx10 \hspace{2pt} mm$), we measured an attenuation of a few percent per actuator due to bending loss. The spacing between nearby actuators was set to be at least $3 \hspace{2pt} cm$, which is larger than $\frac{d^2}{\lambda}$, where $d$ is the core diameter, such that the interference patterns at two adjacent actuators are uncorrelated. At the distal end, a CMOS camera records the intensity distribution of both the horizontally and vertically polarized light. \n\nWe used two types of multimode fibers: a fiber supporting a few modes for demonstrating mode conversion, and a fiber supporting numerous modes for demonstrating focusing. For the focusing experiment, we used a 2 meter-long graded-index (GRIN) multimode optical fiber with numerical aperture (NA) of 0.275 and core diameter of $d_{MMF}=62.5 \hspace{2pt} \mu m$ (InfiCor OM1, Corning). The fiber supports approximately 900 transverse modes per polarization at $\lambda=632.8 \hspace{2pt} nm$ ($V\approx85$), yet we used weak focusing at the fiber's input facet to excite only $\approx 280$ modes.\nFor the experiments with the few mode fiber (FMF), we used a 5 meter-long step-index (SI) fiber, with an NA of 0.1 and core diameter of $d_{FMF}=10 \hspace{2pt} \mu m$ (FG010LDA, Thorlabs). In principle, at our wavelength the fiber supports 6 modes per polarization ($V \approx5$).\n\n\n\begin{figure}[!tbh]\n\begin{centering}\n\includegraphics[width=\columnwidth]{Figures\/fig2.png}\n\par\end{centering}\n\caption{\textbf{Experimental setup for controlling the transmission matrix of optical fibers.} \n(a) The laser beam is coupled into the optical fiber, which is fixed to a metal bar. 
37 actuators are placed above the fiber, applying local vertical bends. The light that is emitted from the distal facet of the fiber travels through a polarizing beamsplitter, and both horizontal and vertical polarizations are recorded by a CMOS camera. (b) Top view of five actuators, bending the fiber from above. (c) The fiber is pressed by two pins that are attached to each actuator, and one pin which is placed below it, creating a three-point contact. A computer-controlled voltage that is applied to each actuator sets its travel and defines the curvature of the local deformation it imposes on the fiber. L, lens; M, mirror; PBS, polarizing beamsplitter; CMOS, camera.}\n\label{fig:setup}\n\end{figure}\n\n\n\subsection{Optimization Process}\n\nThe curvature of the bends, set by the travel of each actuator, modifies how light propagates through the fiber and thus determines the speckle pattern that is received at the distal end. \nWe can therefore define an optimization problem: finding the voltages that should be applied to the actuators to obtain a given target pattern at the output of the fiber. \nThe distance between the target and each measured pattern is quantified by a cost function, which the algorithm iteratively attempts to minimize. \n\nFor $M$ actuators, the solution space is an $M$-dimensional space, defined by the voltage range and the algorithm's step intervals, and can be searched using an optimization algorithm. While the optical system is linear in the optical field, the response of the actuators, i.e. the modulation they impose on the complex light field, is not linear in the voltages.\n\nMoreover, since a change in the curvature of an actuator at one point along the fiber affects the interference pattern at all of the following actuators' positions, the actuators cannot be regarded as independent degrees of control. 
\nSimilar nonlinear dependence between degrees of control is obtained, for example, for wave control in chaotic microwave cavities \\cite{Geoffroy2018}. \nOut of the wide range of iterative optimization algorithms that can efficiently find a solution to such nonlinear optimization problems, we chose to use Particle Swarm Optimization (PSO) \\cite{PSO}, as on average it achieved the best results out of the algorithms we tested (See the Supplementary Material for more details regarding the use of PSO). \n\n\n\n\\section{Results}\n\\subsection{Focusing at the Distal End of the Fiber}\n\nTo illustrate the concept of shaping the intensity patterns at the output of the fiber by controlling its TM, we first demonstrate focusing the light to a sharp spot at the distal end of the fiber. We excite a subset of the fiber modes by weakly focusing the input light on the proximal end of the fiber. Due to inter-modal interference and mode mixing, at the output of the fiber the modes interfere in a random manner, exhibiting a fully developed speckle pattern (Fig. \\ref{fig:MMF}(a)). Based on the number of speckle grains in the output pattern, we estimate that we excite the first 280 guided fiber modes.\n\nTo focus the light to some region of interest (ROI) in the recorded image, we run the optimization algorithm to enhance the total intensity at the target area. We define the enhancement factor $\\eta$ by the total intensity in the ROI after the optimization, divided by the ensemble average of the total intensity in the ROI before the optimization. The ensemble average is computed by averaging the output intensity over random configurations of the actuators, and applying an additional azimuthal integration to improve the averaging.\n\nWe start by choosing an arbitrary spot in the output speckle pattern of one of the polarizations. 
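Before detailing the focusing results, the optimization loop described in the previous section can be sketched generically. The following particle-swarm sketch is illustrative only: the cost function, the voltage normalization, and the swarm parameters are stand-ins, not the authors' implementation, in which the cost is computed from camera frames.

```python
import numpy as np

rng = np.random.default_rng(2)
M = 8  # number of active actuators (illustrative)

def cost(v):
    """Stand-in for the measured cost, e.g. minus the intensity in the ROI.
    A toy quadratic with its optimum at v = 0.3; in the experiment this value
    would come from the recorded output pattern."""
    return float(np.sum((v - 0.3) ** 2))

n_particles, n_iters = 20, 200
w, c1, c2 = 0.7, 1.5, 1.5                      # inertia / acceleration coefficients
pos = rng.uniform(0.0, 1.0, (n_particles, M))  # normalized actuator voltages
vel = np.zeros_like(pos)
pbest, pbest_cost = pos.copy(), np.array([cost(p) for p in pos])
gbest = pbest[np.argmin(pbest_cost)].copy()

for _ in range(n_iters):
    r1, r2 = rng.random((2, n_particles, M))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, 1.0)         # stay within the voltage range
    costs = np.array([cost(p) for p in pos])
    better = costs < pbest_cost
    pbest[better], pbest_cost[better] = pos[better], costs[better]
    gbest = pbest[np.argmin(pbest_cost)].copy()
```

Each particle is one candidate voltage configuration; `gbest` tracks the best configuration found so far, which is what would be applied to the actuators.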
\nWe define a small ROI surrounding the chosen position, in an area that is roughly the area of a single speckle grain, and run the optimization scheme to maximize the total intensity of that area.\nFig. \ref{fig:MMF} depicts the output speckle pattern of the horizontal polarization before (Fig. \ref{fig:MMF}(a)) and after (Fig. \ref{fig:MMF}(b)) the optimization, using all 37 actuators. The enhanced speckle grain is clearly visible and has a much higher intensity than its surroundings, corresponding to an enhancement factor of $\eta=25$. \n\nWe repeat the focusing experiment described above with a varying number of actuators $M$. When a subset of actuators is used, the remaining ones are left idle throughout the optimization. Fig. \ref{fig:MMF}(d) summarizes the results of this set of experiments, showing that the obtained enhancement factor $\eta$ grows linearly with the number of active actuators $M$. \nIt is instructive to compare this linear scaling with the well-known results for focusing light through random media using SLMs or DMDs.\nVellekoop and Mosk have shown that when the number of degrees of control (i.e. independent SLM or DMD pixels) is small compared to the effective number of transverse modes of the sample, the enhancement scales linearly with the number of degrees of control. The slope of the linear scaling $\alpha$ depends on the speckle statistics and on the modulation mode \cite{Vellekoop2007,Vellekoop2008,Vellekoop:15}. For Rayleigh speckle statistics, as in our system (see Supplementary Material), the slopes predicted by theory are $\alpha=1$ for perfect amplitude and phase modulation, and $\alpha=\frac{\pi}{4}\approx0.78$ for phase-only modulation \cite{Vellekoop:15}. Experimentally measured slopes, however, are typically smaller, mainly due to technical limitations such as finite persistence time of the system, unequal contribution of the degrees of control, and statistical dependence between them. 
\nInterestingly, we measure a slope of $\alpha\approx 0.71$, which is close to the theoretical value for phase-only modulation for Rayleigh speckles, and higher than typical experimentally measured slopes (e.g. $\alpha\approx 0.57$ in \cite{VellekoopPhdThesis}). Naively, one could expect a lower slope in our system, since, as discussed above, in our configuration the degrees of control are not independent. The large slope values we obtain may indicate that the bends change not only the relative phases between the guided modes (corresponding to phase modulation), but also their relative amplitudes (corresponding to amplitude modulation), via mode-mixing and polarization rotation.\n\nTo further study the linear scaling, we performed a set of numerical simulations. We used a simplified scalar model for the light propagating in a GRIN fiber, in which the fiber is composed of multiple sections, where each section is made of a curved and a straight segment. The curved segments simulate the bend induced by an actuator, and the straight segments simulate the propagation between actuators (see Supplementary Material for more details). As in the experiment, we use the PSO algorithm to focus the light at the distal end of the fiber. The numerical results exhibit a clear linear scaling, with slopes in the range of 0.57-0.64 (see Fig. S3 in Supplementary Material). Simulations for fibers supporting $N=280$ modes, roughly the number of modes we excite in our experiment, exhibit a slope of $\alpha\approx0.64$, slightly lower than the experimentally measured slope. \n\n\begin{figure}[!tbh]\n\begin{centering}\n\includegraphics[width=\columnwidth]{Figures\/fig3.png}\n\par\end{centering}\n\caption{\textbf{Focusing at the output of a multimode fiber}. (a) Image of the speckle pattern at the output of the fiber, before the optimization process. 
\n(b, c) The output intensity pattern, after optimizing the travel of the 37 actuators to focus the light to a single target (b), and two foci simultaneously (c). (d) The average enhancement as a function of the number of active actuators. Each data point (blue circles) was obtained by averaging the enhancement over several experiments, where the error bars indicate the standard error.\nA linear fit yields a slope of $\alpha\approx0.71$, which is close to the theoretical slope for phase-only modulation. The fit intersects the $\hat{y}$ axis at $M_{0}\approx1.5$, matching our observation that about $4-5$ actuators are required to overcome the inherent noise of the system. Numerical simulations for a GRIN fiber with $NA=0.275$ and $a=17.1 \hspace{2pt} \mu m$ (red circles) exhibit a linear scaling with a slope of $\alpha\approx 0.64$. The slope increases with the number of guided modes assumed in the simulation. Here we chose the number of modes ($N=280$) according to the number of excited modes in the experiment. \n}\n\label{fig:MMF}\n\end{figure}\n\nAs in experiments with SLMs, focusing is not limited to a single spot. To illustrate this, we used the optimization algorithm to simultaneously maximize the intensity at two target areas. Fig. \ref{fig:MMF}(c) shows a typical result, exhibiting an enhancement which is half of the enhancement obtained when focusing to a single spot, as expected by theory \cite{Vellekoop2008}. In principle, it is possible to focus the light to an arbitrary number of spots, yet in practice we are limited by the number of available actuators. \n\n\subsection{Mode Conversion in a Few Mode Fiber}\n\nIn the previous section, we demonstrated the possibility of using our system as an all-fiber SLM, i.e. to shape an output complex field by modifying the relative complex weight of the propagating modes. 
In the following, we go further and study the feasibility of using TM-shaping to tailor the output patterns in the few-mode regime, where the number of fiber modes is comparable with the number of actuators. Specifically, we are interested in converting an arbitrary superposition of guided modes to one of the linearly-polarized (LP) modes supported by the fiber. To this end, we utilize the PSO optimization algorithm to find the configuration of actuators that maximizes the overlap between the output intensity pattern and the desired LP mode.\nThe target LP modes of the step-index fiber were computed numerically for the parameters of our fiber, and scaled to match the transverse dimensions of the fiber image. \nFig. \ref{fig:FMF} presents a few examples of conversions between LP modes using 33 and 12 actuators. A mixture of $LP_{01}$ and $LP_{11}$ at two different polarizations can be converted to $LP_{11}$ in one polarization (Fig. \ref{fig:FMF}(a)). Alternatively, a horizontally polarized $LP_{11}$ mode can be converted to a superposition of a horizontally polarized $LP_{01}$ and a vertically polarized $LP_{11}$ (Fig. \ref{fig:FMF}(b)). The Pearson correlation between the target and final patterns in these examples is $0.93$ on average. Similar results are obtained when we run the optimization with fewer active actuators, with a negligible reduction in the correlation between the target and final pattern. For example, with 12 actuators we observe correlations of $0.90$ for the conversion presented in Fig. \ref{fig:FMF}(c). Optimization with fewer than 12 actuators shows poorer performance, as the number of actuators becomes comparable with the number of guided modes. 
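The Pearson correlation used here to quantify the match between the output and target patterns can be computed directly from the two intensity images. A minimal sketch, with analytic stand-ins for the $LP$ patterns (the shapes below are illustrative, not the numerically computed fiber modes):

```python
import numpy as np

def pearson2d(img, target):
    """2D Pearson correlation coefficient between two intensity patterns."""
    a = img - img.mean()
    b = target - target.mean()
    return float(np.sum(a * b) / np.sqrt(np.sum(a ** 2) * np.sum(b ** 2)))

# Toy LP01-like (single lobe) and LP11-like (two lobes) intensity patterns.
y, x = np.mgrid[-1:1:64j, -1:1:64j]
r2 = x ** 2 + y ** 2
lp01 = np.exp(-r2 / 0.2)
lp11 = r2 * np.cos(np.arctan2(y, x)) ** 2 * np.exp(-r2 / 0.2)

same = pearson2d(lp01, lp01)   # identical patterns: correlation of 1
cross = pearson2d(lp01, lp11)  # distinct modes: substantially lower
```

A correlation near 1 indicates a successful conversion; comparing against the wrong target mode yields a markedly lower value.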
\n\n\n\\begin{figure}[!tbh]\n\\begin{centering}\n\\includegraphics[width=0.8\\columnwidth]{Figures\/fig4.png}\n\\par\\end{centering}\n\\caption{\\textbf{Conversion between transverse fiber modes.} \nIntensity patterns recorded at the output of the fiber, before (left column) and after (middle column) the optimization, exhibiting conversion between the $LP$ fiber modes at orthogonal polarizations. The PSO algorithm iteratively minimizes the $\\ell_{1}$ distance between the measured pattern and the target mask (right column). (a) and (b) are obtained using 33 actuators (with Pearson correlation of $0.94$ and $0.92$ respectively), (c) is obtained with 12 actuators (with correlation of $0.90$).}\n\\label{fig:FMF}\n\\end{figure}\n\n\n\n\\section{Discussion}\nControlling the transmission matrix of a multimode fiber, rather than the wavefront that is coupled to it, opens the door for an unprecedented control over the light at the output of the fiber. Since the number of degrees of control, the number of actuators in our implementation, is not limited by the number of fiber modes $N$, it can allow simultaneous control for orthogonal inputs and\/or spectral components. \nIn fact, if $O(N^2)$ degrees of control are available, one can expect generating arbitrary $N \\times N$ transformations between the input and output modes. Over the past two decades there is an ever-growing interest in realizing reconfigurable multimode transformations, for a wide range of applications, such as quantum photonic circuits \\cite{Reck:1994, Carolan2015, Taballione2018, Harris:18, Gigan2019} optical communications \\cite{Miller2015, Fontaine2019}, and nanophotonic processors \\cite{Piggott2015, Annoni2017}. \nThese realizations require strong mixing of the input modes, as the output modes are arbitrary superpositions of the input modes. 
The mixing can be achieved, for example, by diffraction in free-space propagation between carefully designed phase plates \cite{Morizur:10, Labroille:14, Fontaine2019, Fickler2019}, a mesh of Mach-Zehnder interferometers with integrated modulators \cite{Harris:18}, engineered scattering elements in multimode interferometers \cite{Piggott2015, Bruck2016}, or scattering by complex media \cite{Geoffroy2018, Maxime:19}. In our implementation, we rely on the natural mode mixing and inter-modal interference in multimode fibers, allowing implementation using standard commercially available fibers. \n\nThe main limitation of our current proof-of-concept is the achievable modulation rate, which is set by the piezo-based implementation. The response time of the system to abrupt changes of the piezos is approximately $30 \hspace{2pt} ms$ (see Supplementary Material), allowing in principle for modulation rates as high as 30 Hz. In practice, our system works at slower rates ($\approx$5 Hz), mainly due to the latency of the piezoelectric actuators and the camera. The total optimization time for the focusing experiments is $50$ minutes, and $12-15$ minutes for the mode conversion experiments.\nFaster electronics and development of a stiffer and more efficient bending mechanism will allow higher modulation rates, limited by the resonance frequency of the piezo benders ($\approx$ 300-500 Hz). To achieve even faster rates, a different technology should be used for applying perturbations to the fibers, e.g. utilizing all-fiber acousto-optical modulators \cite{acousto-optic-book} or the 'smart fibers' technology with integrated modulators \cite{Fink12}. Optical fibers with built-in modulators can also be utilized for a scalable implementation of our method. 
\n\n\n\n\\section{Conclusions and Outlook}\n\nIn this work we proposed a novel technique for controlling light in multimode optical fibers by modulating their TM using controlled perturbations. We presented proof-of-principle demonstrations of focusing light at the distal end of the fiber, and of conversion between guided modes, without utilizing any free-space components. \nSince our approach to modulating the TM of the fiber is general and not limited to mechanical perturbations, it could be directly transferred to other types of actuators, e.g. in-fiber electro-optical or acousto-optical modulators, to achieve all-fiber, lossless, fast, and scalable implementations. \nThe all-fiber configuration and the possibility to control more degrees of freedom than the number of guided modes make our method attractive for fiber-based applications that require control over multiple inputs and\/or wavelengths. \nMoreover, the possibility to achieve high-dimensional complex operations opens the way to the implementation of optical neural networks. \nOur system can provide an important building block for linear reconfigurable transformations, which can be further used in combination with fibers and lasers that exhibit strong gain and\/or nonlinearity for deep learning applications.\n\n\\section*{Funding Information}\n\nThis research was supported by the Zuckerman STEM Leadership Program, the ISRAEL SCIENCE FOUNDATION (grant No. 1268\/16), the Ministry of Science \\& Technology, Israel and the French \\textit{Agence Nationale pour la Recherche} (grant No. ANR-16-CE25-0008-01 MOLOTOF), and Laboratoire international associ\u00e9 Imaginano.\n\n\\section*{Acknowledgments}\nWe thank Daniel Golubchik and Yehonatan Segev for invaluable help.
\n\n\\section*{Disclosures}\nThe authors declare no conflicts of interest.\n\n\\printbibliography\n\n\\renewcommand{\\thefigure}{S\\arabic{figure}}\n\\setcounter{figure}{0}\n\\newpage\n\\section{Supplementary Material}\n\n\\subsection{Typical Time Scales of the Optical System}\n\n\\subsubsection{Response Time}\n \nTo measure the typical response time of the system, we introduced abrupt changes to the voltages applied to a subset of the piezoelectric actuators, and recorded the speckle pattern obtained at the distal end of the fiber.\nWe then calculated the 2D Pearson correlation coefficient between each of the captured frames and the first frame. The measurements were repeated using different subsets of piezos. Examples of a few of these measurements, for subsets that include between one and four actuators, are shown in Fig. \\ref{fig:response}(a). The abrupt voltage change causes a fast change to the recorded speckle pattern, yielding a sharp decrease in the computed correlation coefficient. As expected, the bigger the subset of the piezos, the stronger the correlation drop. This sharp decrease is the result of the change in the actuators' configuration (the bend they impose), which manifests as a change in the captured speckle pattern. \nOnce the actuators' positions stabilized, the correlation settled at a lower value. To ensure that the patterns with lower correlation with respect to the first frame are correlated with one another (thus ensuring that the plateau is not a result of the statistical properties of speckles), we also calculated the 2D correlation coefficient of each frame with the last acquired frame. These results are shown in Fig. \\ref{fig:response}(b) for the same groups of actuators.
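The per-frame correlation analysis described above can be sketched in a few lines. This is a minimal illustration with synthetic frames; the function names and the random stand-in "speckle" data are ours, not the actual acquisition code:

```python
import numpy as np

def pearson2d(a, b):
    """2D Pearson correlation coefficient between two equally sized frames."""
    a = a.ravel().astype(float)
    b = b.ravel().astype(float)
    a -= a.mean()
    b -= b.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def correlation_trace(frames, reference=0):
    """Correlate every frame against one reference frame (e.g. first or last)."""
    ref = frames[reference]
    return np.array([pearson2d(f, ref) for f in frames])

# Synthetic stand-in for the camera stream: the pattern changes abruptly
# after frame 5, mimicking an abrupt change of the actuator voltages.
rng = np.random.default_rng(0)
before = rng.random((32, 32))
after = rng.random((32, 32))
frames = np.stack([before] * 5 + [after] * 5)

drop = correlation_trace(frames, reference=0)      # sharp decrease after frame 5
plateau = correlation_trace(frames, reference=-1)  # verifies the plateau is stable
```

Correlating against the first frame reveals the drop, while correlating against the last frame verifies that the post-change frames are mutually correlated, as done in the text.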
The high correlation after the configuration change indeed verifies that the speckle pattern did not change further.\nBased on such measurements we estimate the response time of the system as $30 \\hspace{2pt} ms$, which corresponds to a modulation rate of 33 Hz.\n\n\\begin{figure*}[htb]\n\\begin{centering}\n\\includegraphics[width=0.7\\linewidth]{Figures\/sup_fig1.png}\n\\par\\end{centering}\n\\caption{\\textbf{Response time of the experimental system.}\n The 2D correlation coefficient of each frame with (a) the first and (b) the last of the acquired frames, when a configuration of actuators (the voltages applied to these piezos) is changed. Blue lines show a change of configuration of a single actuator, red of two actuators, yellow of three, and purple of four.} \n\\label{fig:response}\n\\end{figure*}\n\n\n\\subsubsection{Decorrelation Time}\nTo estimate the stability of the system, we calculated the 2D correlation coefficient of the speckle pattern at the distal end of the fiber over time when the system is idle, i.e. no changes are performed to the states of the actuators. This loss of correlation is attributed to the sensitivity of bare optical fibers to thermal fluctuations in the room and to pressure changes due to air flow. \nWith the GRIN MMF, we found that the system remained highly correlated ($corr \\geq 0.99$) for $\\simeq10$ minutes. The correlation decreased slowly and linearly for 55 minutes, reaching $corr=0.976$. The correlation then decreased faster, reaching $corr=0.883$ after two hours. With the SI FMF, the system remained stable and highly correlated ($corr \\geq 0.996$) over the course of 15 hours.
The theoretical values reported in the main text are derived for Rayleigh intensity statistics [1]. It is therefore important to compare the intensity statistics of the speckle patterns we obtain in our system with the predictions of Rayleigh statistics. Such a comparison is depicted in Fig. \\ref{fig:rayleigh}, which shows excellent agreement with theory. \n\n\\begin{figure}[htb]\n\\begin{centering}\n\\includegraphics[width=\\columnwidth]{Figures\/sup_fig2.png}\n\\par\\end{centering}\n\\caption{\\textbf{Intensity distribution of the speckle patterns at the end of the fiber.}\n The natural logarithm of the probability distribution function (PDF) of the speckle pattern intensities, as a function of normalized pixel intensity (dots), and a linear fit (line). The data points correspond to experimental readouts, without background noise subtraction.}\n\\label{fig:rayleigh}\n\\end{figure}\n\n\n\\subsection{Optimization Technique}\nAs described in the main text, the results were obtained by finding solutions to optimization problems. These problems rely on a feedback loop: at each iteration, the speckle pattern at the distal end of the fiber was recorded using the CMOS camera.\nThis pattern was evaluated according to its similarity to a target pattern, and this score was given to the optimization algorithm as a cost, which it tried to minimize by changing the configuration of bends applied to the fiber segments. Lower costs were obtained for bend configurations which yielded patterns with high similarity to the target.\n\nThe optimization algorithm we chose is Particle Swarm Optimization (PSO), a population-based stochastic optimization algorithm. It randomly initializes a population of points (referred to as particles) in an \\textit{M}-dimensional search space, representing the voltages assigned to the \\textit{M} actuators. These positions are iteratively improved according to their local and global memory from previous iterations.
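A minimal PSO loop of this kind can be sketched as follows, with a toy $\ell_1$ cost standing in for the camera feedback and the inertia/learning coefficients taken from the experiments ($w=1$ damped by $0.99$, $c_1=1.5$, $c_2=2$); the function names, bounds, and toy target are our own illustrative choices, not the actual control code:

```python
import numpy as np

def pso_minimize(cost, dim, n_particles=30, n_iter=100, bounds=(0.0, 1.0), seed=0):
    """Minimal particle swarm optimizer (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, dim))   # positions (e.g. actuator voltages)
    v = np.zeros_like(x)                               # velocities
    pbest = x.copy()                                   # personal best positions ("local memory")
    pbest_cost = np.array([cost(p) for p in x])
    gbest = pbest[pbest_cost.argmin()].copy()          # global best position ("global memory")
    w, w_damp, c1, c2 = 1.0, 0.99, 1.5, 2.0            # inertia and learning coefficients
    for _ in range(n_iter):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        c = np.array([cost(p) for p in x])
        improved = c < pbest_cost
        pbest[improved], pbest_cost[improved] = x[improved], c[improved]
        gbest = pbest[pbest_cost.argmin()].copy()
        w *= w_damp                                    # inertia damping
    return gbest, float(pbest_cost.min())

# Toy stand-in for the experimental feedback: l1 distance to a target "pattern".
target = np.linspace(0.0, 1.0, 8)
best, best_cost = pso_minimize(lambda p: float(np.abs(p - target).sum()), dim=8)
```

In the experiment the cost function is not an analytic expression but a camera measurement, which is why such a derivative-free, population-based search is a natural fit.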
Its stochastic nature helps avoid local extrema in non-convex problems.\nAn open-source implementation of PSO [2] was modified to fit our experimental setup and simulation. We defined a single run as a single instance of the optimization process, i.e. achieving a single optimized target speckle pattern, such as the example shown in Fig. 3(b) in the main text. With the GRIN MMF, each such run consisted of 80 iterations, with the following hyper-parameters: population size of 120, inertia weight of $w=1$, inertia damping ratio of $w_{damp}=0.99$, personal learning coefficient of $c_1=1.5$, global learning coefficient of $c_2=2$. With the SI FMF, each run used 86-108 iterations, with a population of size 50. The values of the other hyper-parameters were not changed.\n\n\n\\subsection{Simulation}\n\nSince our system is linear in the optical field, it is natural to describe the propagation of light in it with matrix formalism. We divided the fiber into multiple segments, calculated the transmission matrix (TM) of each segment, and computed the total TM of the fiber by multiplying them. To represent our experimental system, we composed bent segments (which mimic the effect of actuators) and straight segments (for the propagation between actuators). A bent segment was approximated by a circular arc with a defined curvature. To find the guided modes and propagation constants of different segments, we used a numerical module [3] which solves the scalar Helmholtz equation under the weakly guiding approximation [4]. We used 10 radii of curvature, to simulate 10 different vertical positions of the actuators, which impose 10 different perturbations. These radii were linearly spaced between a maximal and a minimal value, which we estimated from the experimental system. \n\nMode-mixing in short GRIN fibers mostly occurs within groups of degenerate modes.
To mimic this phenomenon, we introduced unitary block matrices, whose block sizes were determined according to the mode degeneracy, as expressed in the propagation constants, allowing mixing between modes with the same propagation constants. It is noteworthy that without introducing this feature, we were unable to achieve focusing. \n\nWe used the same discrete set of possible curvatures for all of the actuators in all runs, and the same optimization mechanism as the experimental setup to achieve a focus. The optimization process assigned one of the possible curvatures to each of the bent fiber segments. In runs where not all of the actuators were used, the remaining ones were set as straight segments (with no curvature) to maintain the same propagation distance in all runs.\nFig. \\ref{fig:simulation} shows the enhancement factor $\\eta$ as a function of the number of actuators whose curvatures were optimized for simulated fibers. The enhancement factor scales linearly with the number of simulated actuators, with a slope ranging between 0.57 and 0.64 for the displayed fiber parameters, at a wavelength of $\\lambda=632.8 \\hspace{2pt} nm$.\n\n\\begin{figure}[h]\n\\begin{centering}\n\\includegraphics[width=\\linewidth]{Figures\/sup_fig3.png}\n\\par\\end{centering}\n\\caption{\\textbf{Simulation of focusing in a multimode optical fiber.}\nThe average enhancement factor that was achieved (circles) and the standard error (bars), as a function of the number of actuators whose curvatures were modified as part of the optimization process in the simulation.
Several results obtained for different fiber parameters are shown in different colors, along with a linear curve (gray dashed line).}\n\\label{fig:simulation}\n\\end{figure}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section*{Introduction}\\label{sec:introduction}\nMentorship is fundamental in many professions\\cite{scandura1992mentorship,de2003mentor,payne2005longitudinal,janosov2020elites}. In science, successful mentoring is crucial not only for a mentee's growth and success, but also for the career advancement of the mentor\\cite{kram1988mentoring,lee2007nature,bhattacharjee2007nsf}. In mentoring relationships, mentees learn scientific values, skills, and build their scientific network\\cite{green1991professional}. Also, mentorship has been shown to play a prominent role on a researcher's first job placement\\cite{enders2005border,hilmer2007dissertation,wright2008mentoring,clauset2015systematic,way2019productivity}. At the same time, mentors obtain benefits from training mentees, like higher productivity, job satisfaction, and a broader social network in the long run\\cite{allen2004career,astrove2017mentors}. A mentor can have multiple mentees over a career, and their number and success can improve the mentor's institutional recognition\\cite{rossi2017genealogical,semenov2020network}. Yet, despite the important role of mentees and of junior researchers in the scientific ecosystem, we witness a large fraction of early-stage researchers exiting academia at an alarming rate\\cite{roach2010taste,petersen2012persistence,moss2012science,ghaffarzadegan2015note,milojevic2018changing,xing2019strong,woolston2019phds,huang2020historical,levine2020covid,davis2022pandemic}, and we still have a limited quantitative understanding of the impact of mentors and their research group on the survival rate and career evolution. 
Given also the increasing reports of unhealthy working environments experienced by graduate students and early career researchers in academia\\cite{levecque2017work, guthrie2018understanding, woolston2019phds, gonzalez2020risk, murguia2022navigating}, it is of fundamental importance to understand which kind of mentorship minimizes the dropout rate, supports junior researchers' well-being, and enables talent diffusion.\n\nThe success of a mentor-mentee relationship is characterized by the complex interaction of different factors, like institutional environment, country of origin of the mentor and mentee, or funding for PhD research\\cite{sugimoto2011academic,baruffaldi2016productivity,brostrom2019academic,way2019productivity}. Previous research on mentorship has been primarily based on anecdotal studies and self-report surveys, and supports the hypothesis that both mentees and mentors benefit from the mentoring relationship\\cite{lewis1992carl,payne2005longitudinal}. More recently, large-scale studies have provided a quantitative understanding of the interplay between mentor and mentee performance \\cite{malmgren2010role, liu2018understanding, fortunato2018science}. For example, in Mathematics a mentor's fecundity, that is the number of mentees that a mentor supervises, is positively correlated with the number of the mentor's publications, and mathematicians have an academic fecundity similar to that of their mentors\\cite{malmgren2010role}. Mentees in STEM fields not only learn technical skills and traditional knowledge\\cite{liu2018understanding} but also inherit hidden capabilities, displaying a higher propensity for producing prize-winning research, becoming a member of the National Academy of Science, and achieving ``superstardom''\\cite{ma2020mentorship}.
Researchers have a higher probability of continuing in academia if they can better synthesize intelligence between their graduate and postdoctoral mentors\\cite{lienard2018intellectual,wuestman2020genealogical}. Moreover, graduate mentors are less instrumental to their mentees' survival and fecundity than postdoctoral mentors\\cite{lienard2018intellectual}. However, mentees who show independence from the mentor's research topics after graduation have a higher tendency to be part of the academic elite\\cite{ma2020mentorship}. Early-career investigators who coauthor with high-impact scientists have a long-lasting competitive advantage over those who do not have these collaboration opportunities\\cite{li2019early}. Mentorship is also connected to the chaperone effect in scientific publishing\\cite{sekara2018chaperone}: publishing with a senior mentor in a journal is crucial for becoming a corresponding author in a later publication in the same journal, and this effect is particularly pronounced for high-impact publishing venues. \n\nThese prior works have clearly demonstrated the positive association between the success of mentors and mentees. However, they have a major limitation: they mainly focus on the career success of those surviving in academia. As such, they are affected by survival bias and fail to capture why a mentee does not continue their academic career. Indeed, in a mentorship relation, a mentee can benefit from a mentor's broad vision and valuable research experience, especially when working with academically successful mentors. However, the mentee may face strong competition for the mentor's limited time, since successful mentors tend to supervise more mentees, work with more collaborators\\cite{johnson2002toward}, perform more academic service, such as peer reviewing or serving in editorial roles\\cite{ma2020mentorship}, and manage scientific groups of large size \\cite{luckhaupt2005mentorship,malmgren2010role,brostrom2019academic}.
Therefore, mentees in large groups have to compete for the mentor's attention and have, on average, fewer chances to interact with the mentor than mentees in small groups, entailing potential risks for the mentees' career evolution. \n\nGiven this premise, here we ask a fundamental question: What are the advantages and disadvantages of working with successful mentors, especially in relation to their scientific group size? To address this question, we construct a dataset combining mentor-mentee relations and their academic records. This dataset can capture their academic proliferation and publication performance and can be used to explore the potential drivers of mentee success in academia\\cite{ke2022dataset,david2012neurotree,sinha2015overview,wang2019review}. Most importantly, we can perform a survival analysis accounting for survivor bias, and understand not only the factors associated with success, but also those that lead to dropout. We further apply a coarsened exact matching regression model to uncover the causal relationship between mentees' group size and academic performance, which rules out potential confounding factors and uncovers alternative predictors\\cite{iacus2009cem,iacus2012causal}. \n\n\\section*{Results}\\label{sec:result}\n\\subsection*{Data and data curation.}\nOur analysis is based on two distinct data sets. The first one is curated from the Academic Family Tree (AFT, Supplementary S1.1), an online website (\\url{Academictree.org}) for collecting mentor-mentee relationships in a crowd-sourced fashion. AFT initially focused on Neuroscience and later expanded to span more than 50 disciplines. The second data set is the Microsoft Academic Graph (MAG, \\url{https:\/\/aka.ms\/msracad}, Supplementary S1.2), a bibliographic database containing entities about authors, doctypes (journals, conferences, etc.), affiliations, and citations.
One advantage of MAG over other publication databases is that all entities have been disambiguated and associated with identifiers. These two data sets have been connected by matching the same scientists in each data set, and this matching has been validated with extensive and strict procedures\\cite{ke2022dataset}. From this combined dataset, we extract the genealogical data of 309,654 scientists who published 9,248,726 papers in Chemistry, Neuroscience, and Physics between 1900 and 2021 (Methods and Supplementary Note 1 for data curation).\n\n\\begin{figure}[!bt]\n \\centering\n \\includegraphics[width=1\\textwidth]{figs\/fig1.png}\n \\caption{\\textbf{Illustration of two academic family networks and mentor-mentee collaboration networks.} \\textbf{a.} Genealogy network of a mentee. The network is built around a focal mentee $p_1$ (the large blue circle). The purple node corresponds to $p_1$'s mentor. A directed link between two nodes indicates the mentorship relation, where the mentor is the node the link departs from. $G_1$ indicates the set of nodes co-mentored by $p_1$'s mentor 5 years before and after $p_1$'s graduation. This time window is denoted as $d$. The square nodes in $G_2$ are the mentees mentored by the nodes of $G_1$ during the first 25 years after their graduation. The blue node $p_1$ and the green node $p_2$ in $G_1$ have their mentees in $G_2$, whereas the grey node $p_3$ has no offspring, hence no mentee-node in $G_2$. Therefore, $p_1$ and $p_2$ are survived mentees, according to our definition, while $p_3$ drops out. Because of the small number of nodes in $G_1$, $p_1$ is a mentee in a small group. \\textbf{b.} Genealogy network of the mentee $p_4$. The difference with respect to panel a is that the number of nodes in $G_1$, i.e. the group size, is among the top 25\\% in the dataset, meaning that the mentee is mentored in a big group.
\\textbf{c.} Mentor-mentee co-authored publications and the corresponding weighted collaboration network among the mentor and mentees in the small group of panel a. Here the mentor and the $G_1$ mentees have co-authored three papers during the training period, resulting in a collaboration network where each node corresponds to an author and the edge weights represent the number of co-authored papers. \\textbf{d.} Mentor-mentee co-authored publications and the corresponding weighted collaboration network among the mentor and their mentees in the big group of panel b.}\n \\label{fig:schematic}\n\\end{figure}\n\n\\subsection*{Genealogy networks, mentee generations, and group size}\nThese curated datasets allow us to construct for each researcher $p$ their academic genealogy network, that is a temporal directed network where each node represents a researcher and a directed link is a mentorship relation pointing from a mentor to their mentee (Fig. \\ref{fig:schematic}a,b). Each node has a time attribute, corresponding to their doctoral or postdoctoral graduation year. The nodes included in this network are: (i) the node corresponding to the researcher $p$, (ii) the mentor of $p$, (iii) the set of nodes that are mentored by $p$'s mentor 5 years before and after $p$'s graduation, denoted as generation $G_1$, and (iv) the set of nodes mentored by the nodes of $G_1$ during the first 25 years after graduation, denoted as generation $G_2$. For example, in Fig. \\ref{fig:schematic}a, we show the academic genealogy network of researcher $p_1$.\nTo account for the longitudinal limits of the dataset, for each researcher we only consider two generations of nodes and include in $G_2$ only the mentees mentored during the first 25 years after a researcher's graduation (Supplementary Fig. S2 and Table S2).
Also, we consider only researchers that graduated between 1900 and 1995, in order to have at least 25 years of career after graduation and to avoid right-censoring issues\\cite{leung1997censoring}.\n\nIn order to understand the relation between the mentees' academic performance and the mentorship environment they were trained in, we introduce the concept of \\textit{group size} and provide measures of \\textit{academic performance}.\nThe group size of a given mentee is defined as the total number of nodes in $G_1$, that is the number of mentees that were supervised by the same mentor 5 years before and after the mentee's graduation. For example, in Fig. \\ref{fig:schematic}a, the node $p_1$ is mentored with two other mentees during the five years before and after $p_1$'s graduation, whereas in Fig. \\ref{fig:schematic}b, $p_4$ is mentored with four other mentees five years before and after $p_4$'s graduation. The group size is thus 3 for $p_1$ and 5 for $p_4$. Notice that the group size associated with a mentee is fixed, but a mentor can lead a group whose size can change over time and is equal to the number of mentees mentored in any 10-year window. The use of this time window ($d$ in Fig. \\ref{fig:schematic}) to quantify the group size is motivated by previous work \\cite{lienard2018intellectual}. We also show that group size has the same distribution when using different time windows (Supplementary Fig. S3), indicating that our findings do not depend on the choice of $d$.\n\nNext, we define small groups and big groups: we first identify all the mentees who graduated in a given year, then we rank them in descending order according to their group size. Mentees in the top 25\\% are in \\textit{big} groups, while those in the bottom 25\\% are in \\textit{small} groups.
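The year-dependent quartile labeling described above can be computed directly from a table of (graduation year, group size) pairs. The sketch below uses pandas and toy data; the column names and the helper function are our own assumptions, not the authors' code:

```python
import pandas as pd

def label_group_size(df: pd.DataFrame) -> pd.DataFrame:
    """Label each mentee 'big' (top 25% of group size among same-year graduates),
    'small' (bottom 25%), or 'middle' otherwise."""
    out = df.copy()
    grouped = out.groupby("grad_year")["group_size"]
    # Year-specific quartiles, broadcast back to every row of that year.
    q25 = grouped.transform(lambda s: s.quantile(0.25))
    q75 = grouped.transform(lambda s: s.quantile(0.75))
    out["group_label"] = "middle"
    out.loc[out["group_size"] >= q75, "group_label"] = "big"
    out.loc[out["group_size"] <= q25, "group_label"] = "small"
    return out

# Toy cohort: group sizes grow over time, so the thresholds are year-specific.
mentees = pd.DataFrame({
    "grad_year":  [1970, 1970, 1970, 1970, 1990, 1990, 1990, 1990],
    "group_size": [1, 2, 3, 8, 2, 5, 9, 20],
})
labelled = label_group_size(mentees)
```

Note that a 1990 graduate with group size 2 is labeled "small" even though the same size would be unremarkable in a smaller-group era, which is exactly the time-dependence the text motivates.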
Since the average and the 75\\% quantile of group size are generally increasing over time \\cite{wuchty2007increasing,wu2019large}, the threshold separating big groups and small groups is time-dependent, and accounts for the increasing size effect (Supplementary Fig. S3 and Table S3). \n\n\\begin{figure}[!bt]\n \\centering\n \\includegraphics[width=1\\textwidth]{figs\/fig2.pdf}\n \\caption{\\label{fig:survival_rate}\\textbf{Survival rate, fecundity, and yearly citations.} \\textbf{a-c.} The evolution of the total number of mentees (dark grey bars) and the number of survived mentees (light grey bars). Survived mentees are those that had at least one mentee themselves. \\textbf{d-f.} The evolution of the survival rate of mentees from small groups (blue lines) and from big groups (orange lines), compared with the overall average survival rate (grey lines). \\textbf{g-i.} Fecundity during the first 25 years of career of all mentees (left two bars in each subplot) and of survived mentees (right two bars in each subplot) from small groups and big groups. \\textbf{j-l.} Average yearly citations of all mentees (left two bars in each subplot) and of survived mentees (right two bars in each subplot) from small groups and big groups. The result of the Mann-Whitney significance test comparing distributions is reported at the top of each paired bars (*p < 0.05; **p < 0.01; ***p < 0.001).}\n\\end{figure}\n\n\\subsection*{Academic performance measures}\nTo quantify the academic performance of a mentee, we use three kinds of widely used measures \\cite{malmgren2010role,milojevic2018changing,xing2019strong}: \\textit{fecundity}, \\textit{survival}, and \\textit{publishing performance indicators}.\nThe \\textit{fecundity} of a node is defined as the number of their mentees 25 years after graduation, that is the number of their neighbors in $G_2$. For example, the node $p_1$ trains two mentees in $G_2$ (Fig. \\ref{fig:schematic}a), while $p_3$ has no trainees (Fig. 
\\ref{fig:schematic}a), hence $p_1$ and $p_3$ have fecundity equal to two and zero, respectively.\n\nLike previous work\\cite{lienard2018intellectual}, we define \\textit{survival} as having at least one mentee during the first 25 years, which is equivalent to having at least one neighbor in $G_2$. These researchers are ``surviving'' because they eventually establish themselves and build a scientific group. In contrast, mentees that do not have mentees themselves are considered dropouts, as they have not built a scientific group of their own, although they might continue publishing. In Fig. \\ref{fig:schematic}a and b, the blue and green circles are survived mentees while grey circles are dropouts. We then define the \\textit{survival rate} of the group, which is the fraction of nodes in $G_1$ that survive. For example, in Fig. \\ref{fig:schematic}a, 2 out of 3 mentees in $G_1$ survive, as they have a mentee in $G_2$, so the survival rate associated with this (small) group is 0.67. Similarly, in Fig. \\ref{fig:schematic}b the survival rate of the (big) group is 0.4. We also use alternative definitions of survival\\cite{milojevic2018changing,xing2019strong}, based on the mentee's publication record 10 years after graduation, and obtain findings similar to those shown in the following sections (Supplementary S2.2 and Fig. S6).\n\nIn addition to measures of academic survival and fecundity, we focus on publishing performance, as captured by \\textit{productivity}, that is the number of papers published during the mentee's career, and the \\textit{average number of yearly citations} acquired by these papers. \nFinally, we also use the publication record to construct the \\textit{collaboration} network between the mentor and all the mentees in $G_1$, to understand if this has an effect on the future career of mentees. In Fig.
\\ref{fig:schematic}c and \\ref{fig:schematic}d, we provide two examples of collaboration networks, derived from the shown publications, where a node represents an author and two nodes are linked if they co-author at least one publication. The link weight corresponds to the number of co-authored publications. \n\n\n\n\\subsection*{Mentees trained in big groups have a lower survival rate}\nGiven that fecundity and publications are both widely used as proxies of success\\cite{malmgren2010role,wuestman2020genealogical,rossi2017genealogical,clauset2017data}, we ask: what are the success differences between mentees from big groups and small groups? Who will perform better in their future academic career: Those trained together with many other mentees in supposedly high-profile large groups or those trained with just a few other mentees? Apart from group size, are there any other confounding factors associated with the development of a mentee's career?\n\nTo answer these questions, we first investigate the evolution of the total number of mentees (dark grey bars) and survived mentees (light grey bars) (Fig. \\ref{fig:survival_rate}a-c). The total number of mentees has overall significantly increased from 1910 to 2000. In particular, after a temporary slowdown soon after World War II, the second half of the 20th century witnessed a striking increase in both the total number of mentees and survived mentees, which continued until today. However, the number of survived mentees grew more slowly than the total number of mentees, as indicated by the increasing gap between the dark grey and light grey bars. Indeed, the survival rate (grey lines in Fig.
\\ref{fig:survival_rate}d-f), (i) is relatively stable for Chemistry until the 60s, (ii) for Physics suffered a temporary decrease during World War II, followed by an increase, probably because of the renewed post-war prosperity, which provided a large number of academic positions in universities and research institutes, and (iii) for Neuroscience had many ups and downs before and during World War II, followed by an increase until the early 70s. However, for all three disciplines the survival rate exhibits a striking declining trend after the 70s, which is still ongoing. When we split the survival rate into big groups and small groups, we find a pronounced difference (Fig. \\ref{fig:survival_rate}d-f): Mentees from big groups have a significantly lower survival rate than those from small groups starting in the 1940s (Chemistry) or 1950s (Physics and Neuroscience), indicating that mentees from big groups were much less likely to continue in academia. In the 90s, the survival rate of mentees trained in big groups is between 30\\% and 40\\% lower than that of mentees from small groups. The differences in survival rates between small and big groups do not depend on the time-dependent threshold identifying small groups and big groups (Supplementary Table S3, Fig. S6-S7).
One possibility leading to this exception is that Chemistry is a predominantly experimental discipline requiring a large workforce; therefore, the mentees from big groups inherit from their mentoring groups a much larger fecundity than those from small groups. Interestingly, when we compare the academic achievements of only survived mentees, that is, mentees with fecundity of at least one, the observed performance differences in fecundity and average yearly citations reverse between groups (right two bars in each panel of Fig. \\ref{fig:survival_rate}g-l). In other words, across all mentees, those from small groups tend to do better in terms of average fecundity and yearly citations than those from big groups; however, if we consider only the mentees that manage to survive and establish a group, those from big groups have an advantage, since they tend to have higher fecundity and yearly citations. Taken together, this advantage reversal happens because of the low survival rate in big groups, which lowers the average fecundity and citations of mentees from big groups. These findings are not a trivial consequence of dividing the data into small groups and big groups, as shown by a null model where we randomize the mentor-mentee relationships while keeping the group size constant. In this null model, we do not see significant differences between mentees from big groups and small groups (Supplementary Fig. S4). The findings about survival suggest that being mentored in a big scientific group can have long-term competitive advantages in academic performance, but these are conditional on the lower odds of surviving.
The orange (blue) line displays the fecundity distribution of $G_1$ mentees from big (small) scientific groups. The green dashed line marks the point where the two probabilities are equal and the two distributions cross (only for Chemistry and Physics). \\textbf{Inset:} The evolution of the equal probability point, $k$, for each decade since the 1960s. \\textbf{d-f.} Relative representation $Rr$ (see Methods) of $G_1$ mentees from small and big groups in the top 5\\%, 10\\%, 25\\%, and 50\\% of the average annual citations (AAC) ranking. \\textbf{g-i.} The evolution of $Rr$ in the top 5\\% and top 10\\% of the AAC ranking.}\n \\label{fig:evolution}\n\\end{figure}\n\n\\subsection*{Researchers trained in small groups have small groups, researchers trained in big groups have big groups}\nThe results in Fig. \\ref{fig:survival_rate}g-l imply that big groups and small groups have different advantages depending on the chosen success metric, namely survival probability, future fecundity, and average citations. Here we further explore the respective advantages of the two kinds of groups according to the academic aim of a mentee. We investigate the complementary cumulative distribution function (CCDF) of the mentees' fecundity, that is, the probability that a researcher has at least $k$ mentees in their career (Fig. \\ref{fig:evolution}a-c), and focus on the value of $k$ where the probabilities for researchers trained in big groups and small groups are equal.\nWe observe that for Chemistry and Physics there is one point where the two probabilities are equal and the two distributions cross. The crossover is at $k=5$ in the period 1990-1995, meaning that the likelihood to survive and have 5 mentees or fewer is higher for researchers trained in small groups. On the other hand, researchers trained in big groups have a higher likelihood to mentor 5 or more mentees in their careers, despite their lower odds of surviving. 
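The crossover point described here can be located numerically. The following is a minimal sketch (the function names and the synthetic fecundity samples are ours, for illustration only): compute the two empirical CCDFs and return the smallest $k$ at which their difference changes sign.

```python
import numpy as np

def ccdf(samples, k_values):
    """Empirical CCDF: P(fecundity >= k) for each k."""
    samples = np.asarray(samples)
    return np.array([(samples >= k).mean() for k in k_values])

def crossover_point(fec_big, fec_small):
    """Smallest k at which the big-group CCDF overtakes the small-group CCDF."""
    k_values = np.arange(1, max(max(fec_big), max(fec_small)) + 1)
    diff = ccdf(fec_big, k_values) - ccdf(fec_small, k_values)
    sign_change = np.where(np.diff(np.sign(diff)) != 0)[0]
    # no crossover (e.g. the Neuroscience case) -> None
    return k_values[sign_change[0] + 1] if len(sign_change) else None
```

With real data, `fec_big` and `fec_small` would hold the fecundity of every mentee trained in big and small groups, respectively.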
The two distributions do not cross over in Neuroscience and display only minor, although statistically significant, differences, indicating that researchers trained in small groups have a slightly higher probability of having $k$ mentees, for all values of $k$ in the period 1990-1995. The point of equal probability $k$ identifies two different regimes: In one regime, fecundity is smaller than $k$ and is associated with a higher likelihood for mentees trained in small groups; in the other regime, fecundity is larger than $k$ and is associated with a higher likelihood for mentees trained in big groups. This opposite role of small and big groups regarding fecundity suggests two different strategies: a big group is to be preferred if a mentee aims at high fecundity, while a small group is to be preferred if the aim of a mentee is to avoid dropout, although the expected fecundity will be smaller. We calculate the points of equal probability for each decade since the 1960s (Fig. \\ref{fig:evolution}a-c insets, and Supplementary Fig. S5) and find an increasing trend with time. This trend indicates that researchers trained in a big group face a high risk of dropout if they do not aim at a high fecundity.\n\n\\begin{figure}[!b]\n \\centering\n \\includegraphics[width=1\\textwidth]{figs\/fig4.pdf}\n \\caption{\\textbf{Results of coarsened exact matching regressions.} \\textbf{a-c.}\n Logistic regression of the odds of surviving in academia. The green (pink) bars indicate the positive (negative) regression coefficients for the corresponding variables on the y-axis. The numbers next to the bars indicate the value of the regression coefficient. The statistical significance of each variable is indicated above its value (* p < 0.05; **p < 0.01; ***p < 0.001). The error bars indicate the standard error of each regression coefficient. \\textbf{d-f.} Linear regression of the fecundity of $G_1$ mentees. 
\\textbf{g-i.} Logistic regression of the odds of being in the top 5\\% of the AAC rank. Note that we show only the statistically significant variables of the regression models after coarsened exact matching of the data.}\n \\label{fig:regression}\n\\end{figure}\n\\subsection*{Big groups are more likely to nurture future top-cited researchers} \nApart from academic fecundity, citation impact is one of the most popular and widely recognized metrics of a researcher's success. \nWe measure a mentee's citation success by the probability of being a top-cited scientist. \nFollowing previous work \\cite{ma2020mentorship}, we measure the average annual citations (denoted by AAC) of each mentee during their career, and use it to create a ranking for each decade, based on the mentee's graduation year, from 1960 to 1995. We then define the relative representation $Rr_{X\\%}$, which captures how many mentees we observe in the top $X\\%$ of the ranking compared to a random model where group size has no effect on the ranking (see Methods). In general, $Rr>0$ means that the mentees of a given group are more represented than expected. Conversely, $Rr<0$ indicates that mentees are underrepresented compared to the expectation. In Fig. \\ref{fig:evolution}d, we consider the mentees trained in big groups and small groups in the period 1990-1995 and study their relative representation in the top 5\\%, 10\\%, 25\\% and 50\\% of the AAC ranking. We find that mentees from big groups, if surviving, are over-represented among top-cited scientists. Moreover, the result is more pronounced in the top 10\\%, and the pattern is consistent across different research fields. Taken together, Fig. \\ref{fig:survival_rate}g-l and Fig. \\ref{fig:evolution}d-f show that survivors from big scientific groups are not only likely to have a better average academic performance, but also have a competitive advantage in being top-cited scientists. 
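The relative representation reduces to a one-line computation once the counts are in hand; a minimal sketch (the counts in the usage example are hypothetical, not from our data):

```python
def relative_representation(n_bg_top, n_sg_top, n_bg_total, n_sg_total):
    """Rr of big-group mentees in the top X% of the AAC ranking:
    (observed share - expected share) / expected share."""
    observed = n_bg_top / (n_bg_top + n_sg_top)   # share in the top X%
    expected = n_bg_total / (n_bg_total + n_sg_total)  # size-blind expectation
    return (observed - expected) / expected
```

For instance, if big groups supply 200 of 1,000 mentees overall but 30 of the top 100, $Rr = (0.3-0.2)/0.2 = 0.5$, i.e. a 50\% over-representation; swapping the big-group and small-group counts gives $Rr_{SG}$.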
We additionally study how the relative representation evolved over time (Fig. \\ref{fig:evolution}g-i). Big groups have become less dominant in raising top-cited scientists in Chemistry and Physics in recent decades, as indicated by the orange line decreasing from 0.35 to 0.16 in Fig. \\ref{fig:evolution}g and from 0.38 to 0.18 in Fig. \\ref{fig:evolution}h. The same trend was present from 1960 to 1990 in Neuroscience, but seems to have changed in the 1990s (Fig. \\ref{fig:evolution}i).\nSurviving mentees from small groups have become more represented than previously, even though they are still underrepresented compared to those surviving from big groups. However, we do not find clear trends with respect to the top 25\\% and top 50\\% of the AAC rank (Supplementary Fig. S8, S9 and S10 for details). Our results imply that the candidates surviving in a big group perform better in terms of impact than those from small groups. \n\n\\subsection*{Controlling for confounding factors}\nIn order to understand the role of potential confounding factors, we use coarsened exact matching (CEM) coupled with regression models to study the relation between scientific group size and predictors of future academic performance. CEM regression consists of running a separate regression model on matched groups of mentees, resulting in a more stringent way of controlling for confounding factors than regression alone (Methods)\\cite{iacus2009cem,iacus2012causal}. In Fig. \\ref{fig:regression}a-c, the logistic regression applied to the CEM datasets \nshows that the most significant variable, with a negative weight, in predicting survival is the \\textit{MenteeFromBigGroup} variable, confirming the finding that being trained in a big group lowers the odds of future survival in academia. 
The positive regression coefficient of the variable \\textit{First5YearPubsOfMentee} indicates that a mentee's early productivity is associated with survival, supporting previous findings \\cite{milojevic2018changing,xing2019strong}. Moreover, working with senior supervisors (larger \\textit{CareerAgeOfMentor}) rather than with junior supervisors gives a slight yet significant survival advantage in Chemistry and Neuroscience, while it seems to have a slight negative effect on mentee survival in Physics. Also, the regression coefficient of the \\textit{YearlyPubsOfMentor} variable indicates that the mentor's yearly productivity, i.e. the average number of papers published in a year, has a negative effect on the mentee's survival probability. Taken together, a possible explanation for the results observed in Fig. \\ref{fig:regression}a-c is that busy mentors, such as those leading big groups and with a high publishing rate, typically have little time to spend on supervising each mentee, affecting their future academic career. \nThe negative association between mentor productivity during the mentee's training and mentee success is further confirmed when we study the distribution of the number of the mentor's papers, split by group size (Fig. \\ref{fig:regression_support}a-c). The CCDF for mentors supervising big groups is always larger than that for mentors supervising small groups, indicating that mentors leading big groups tend to have more publications on average than those leading small groups. At the same time, their mentees have a lower survival probability than mentees trained in small groups (Fig. \\ref{fig:survival_rate}d-f).\n\\begin{figure}[!bt]\n \\centering\n \\includegraphics[width=1\\textwidth]{figs\/fig5.pdf}\n \\caption{Mentors' productivity and collaboration with their mentees during the mentees' training period. \\textbf{a-c.} The complementary cumulative distribution function (CCDF) of the productivity of mentors leading small groups (blue) and big groups (orange). 
Here productivity is the number of publications published by the mentor during the training period $d$, highlighted in Fig. \\ref{fig:schematic}a. \\textbf{d-f.} CCDF of the number of papers co-authored with the mentor by surviving mentees (green pluses) and by dropped-out mentees (grey diamonds). These data refer only to mentees trained in big groups. \\textbf{Inset:} CCDF of the number of papers co-authored with the mentor by surviving mentees (green pluses) and dropped-out mentees (grey diamonds) trained in small groups only.}\n \\label{fig:regression_support}\n\\end{figure}\n\nWe also use a regression approach on the CEM datasets to control for confounding factors in the prediction of fecundity and citation performance, and confirm our previous observations: Group size is a significant factor, being positively associated with future fecundity and citation performance, captured by being among the top 5\\% scientists for yearly citations (\\textit{Top5\\%YearlyCitations}, Fig. \\ref{fig:regression}d-i). The only exception is Neuroscience, where group size is not significant in predicting a top-cited scientist (Fig. \\ref{fig:regression}i).\nOverall, the regression analysis confirms that, if surviving, a mentee from a big group has long-term competitive advantages compared to one from a small group.\n\nApart from group size, we find one more variable, the number of papers co-authored with the mentor during training (\\textit{CollaPubsWithMentor}, see the schematic illustration in Fig. \\ref{fig:schematic}c-d), which is positively associated with fecundity and citation performance. This is compatible with the hypothesis that the mentees who receive more supervision from their mentors, as signalled by the higher number of coauthored papers, have higher chances of future success.\nThis is further confirmed by the observed statistical disparities of the mentor-mentee collaboration in big groups and small groups (Fig. 
\\ref{fig:regression_support}d-f): in big groups, mentees who will survive tend to have more co-authored papers with their mentor during training than those who will drop out. For mentees trained in small groups, there is no noticeable difference between the distributions for surviving mentees and dropped-out mentees (Fig. \\ref{fig:regression_support}d-f, insets). Mentees working in small groups can receive more evenly distributed attention from the mentor in the same period because there are fewer trainees. Finally, our regression models reach between $66\\%$ and $73\\%$ prediction accuracy for mentee survival and reveal another main factor (Fig. \\ref{fig:regression} and Supplementary Table S5, Fig. S15): Surprisingly, the more productive a mentor is, the smaller the probability that their mentee will stay in academia. Taken together, our findings quantitatively support the hypothesis that the attention received from the mentor plays a key role in the survival and success of mentees in academia \\cite{ma2020mentorship}.\n\n\\section*{Discussion, limitations, and conclusions}\\label{sec:discussion}\nOur findings about the effects of group size and mentor productivity support our hypothesis that the mentor's allocation of attention affects the future academic success of mentees: A highly productive mentor supervising a big group tends to provide fewer supervision opportunities to each mentee, which results in a higher dropout rate. In big groups, this tendency is counterbalanced only when there are frequent collaborations, and hence more supervision opportunities, between mentee and mentor. Taken together, based on large-scale data in scientific genealogy and scientometrics, we offer empirical evidence for both the potential benefits and the risks of working with successful mentors. \n\nOur study has some limitations. First, some mentorship relations might not be reported in the AFT dataset, which could affect the actual group size measure. 
To mitigate this issue, we analyzed only the three most represented fields in the data: Chemistry, Neuroscience, and Physics (Supplementary S1.2). Our CEM and regression analysis should also mitigate reporting bias due to the different visibility of mentors, since we control for individual productivity and citations. Also, prior literature has widely investigated the AFT dataset\\cite{lienard2018intellectual,ma2020mentorship,schwartz2021impact,david2012neurotree,ke2022dataset}, and has not found obvious biases that could affect our findings. Second, the AFT dataset only reports formal mentorship relations. In academia, graduate students receive informal mentorship from many researchers, including postdocs, teachers, other faculty and academic staff \\cite{acuna2020some}. These relations are not captured by this data set and, to our knowledge, by no other openly available data set. Yet, while information about informal mentorship could provide a more causal explanation of our findings on career evolution, the reported relation between the group size of the official mentor and survival, fecundity, and academic achievements would still hold. Third, in this study we use a narrow definition of academic achievement, based on survival, fecundity, and average annual citations. These measures are oblivious to other dimensions of success that are not quantifiable in our data, and do not fully represent a successful academic career in all its aspects. Yet, since decisions in the academic enterprise are increasingly driven by quantitative measures like those used in this paper, we believe that it is important to study the properties of and relations between these indicators. \n\nOur findings indicate that a simple characteristic such as the size of the mentor's group can help predict the long-lasting achievements of a researcher's career. 
Our work also raises important questions: Should research policies balance the number of mentees per mentor, given the association with a higher dropout rate? Or should they promote the excellence of future careers, as arguably nurtured in big successful groups, despite the higher risk of dropout? \nThere are also open questions that we have not tackled here but that offer important future directions of inquiry. \nAn important one is: what is the effect of group structure on the mentees' success? We have shown that a strong collaboration between mentee and mentor counterbalances the lower odds of survival in a big group. However, we have not explored the role of the inner group structure, as captured by collaboration ties between the supervised mentees or between a mentee and other junior academics. These collaborations could provide mutual support, mitigating future dropout risk.\nOther important open questions concern how our findings change when differentiating the data based on gender, country of origin, or ethnicity. Indeed, previous research shows the existence of strong biases in mentorship and in science in general \\cite{moss2012science,lariviere2013bibliometrics,dutt2016gender,dennehy2017female,schwartz2021impact,hernandez2020inspiration}, which could intersect in problematic ways with the big-group effect. Answering these questions could not only offer a better understanding of the fundamental mechanisms that underpin a scientific career from its beginning but might also substantially improve our ability to retain young researchers, to improve workplace quality, and to nurture high-impact scientists.\n\n\\clearpage\n\n\\section*{Methods}\\label{sec:methods}\n\\subsection*{Data Preparation}\nThe Academic Family Tree (AFT, \\url{Academictree.org}) records formal mentorship, mainly based on the training relationships of graduate students, postdocs and research assistants from 1900 to 2021. AFT includes 743,176 mentoring relationships among 738,989 scientists across 112 fields. 
The data can be linked to the Microsoft Academic Graph (MAG, \\url{https:\/\/aka.ms\/msracad}), one of the largest multidisciplinary bibliographic databases. The combined data contains the publication records of mentors and mentees, which we use to calculate the publication-related performance measures in our analysis (Supplementary Note 2 and Note 3). The combined AFT and MAG data is taken from \\cite{ke2022dataset}. In this paper, we conduct our analysis on researchers in Chemistry, Physics, and Neuroscience, amounting to 350,733 mentor-mentee pairs, and to 309,654 scientists who published 9,248,726 papers. We motivate our choice of the studied fields in Supplementary Note 1 (Data and preprocessing). \n\n\\subsection*{Relative Representation (Rr)}\nGiven a time window, we rank the mentees who graduated in this window according to their average annual citations (AAC), calculated over their whole career. Then we compute the observed representation of big-group mentees, $R_{BG}(X)$, in the top X\\% of the AAC ranking as:\n\\begin{equation}\n\\label{eq1}\n R_{BG} (X) = \\frac{N_{BG}(X)}{N_{BG}(X) + N_{SG}(X)} \n\\end{equation}\nwhere $N_{BG}(X)$ and $N_{SG}(X)$ are the numbers of mentees from big groups and small groups, respectively, found in the top $X\\%$ of the AAC ranking.\nWe compare this observed representation with the expected representation if the position in the ranking were independent of the group size a mentee is from. 
The expected representation is:\n\\begin{equation}\n\\label{eq2}\n R^{exp}_{BG}=\\frac{N_{BG}}{N_{BG}+N_{SG}}\n\\end{equation}\nwhere $N_{BG}$ and $N_{SG}$ are the total numbers of mentees from big groups and small groups, respectively.\nThe relative representation in the top $X\\%$ ranking, $Rr_{BG}(X\\%)$, is obtained by subtracting (\\ref{eq2}) from (\\ref{eq1}) and dividing by (\\ref{eq2}):\n\\begin{equation}\n\\label{eq3}\n Rr_{BG}(X\\%) = \\frac{R_{BG} (X) - R^{exp}_{BG}}{R^{exp}_{BG}} \n\\end{equation}\nSimilarly, the relative representation for small groups is defined as:\n\\begin{equation}\n\\label{eq4}\n Rr_{SG}(X\\%) = \\frac{R_{SG}(X) - R^{exp}_{SG}}{R^{exp}_{SG}} \n\\end{equation}\nwhere $R_{SG}(X)$ and $R^{exp}_{SG}$ are obtained by swapping $N_{BG}(X)$ and $N_{SG}(X)$ in (\\ref{eq1}) and (\\ref{eq2}). \n\n\\subsection*{Coarsened Exact Matching (CEM) Regression}\nIn causal inference, analyzing a matched data set is generally less model-dependent (i.e., less prone to modeling assumptions) than analyzing the original full data\\cite{iacus2012causal,ho2007matching}. For this reason, we use a matching approach before applying regression models to our datasets. With a matching approach\\cite{iacus2009cem,iacus2012causal}, two groups can be balanced, resulting in similar empirical distributions of the covariates. There are many approaches to matching: one is exact matching, which is the most accurate but rarely usable in practice, as it returns too few observations in the matched samples. Here, we use Coarsened Exact Matching (CEM): this method first coarsens the data into bins, then matches elements of the two groups that fall within the same bin. This approach returns approximately balanced data and allows one to control for covariates. \nTaken together, the CEM approach involves four steps:\n\\begin{itemize}\n \\item [1.] 
For each mentee $i$, we define a vector $\\mathbf{V}_i$ where each element of the vector corresponds to an individual variable, like the number of publications or the number of collaborators. \n \\item [2.] We coarsen each control variable, creating bins for each quantile of the distribution\\cite{iacus2012causal} (Supplementary S3.2). Then, for each $i$, we convert the vector $\\mathbf{V}_i$ into a coarsened vector $\\mathbf{V^C}_i$, where each element maps the individual variable to the corresponding bin of the coarsened variable.\n \\item [3.] We then perform an exact matching of the coarsened vectors, that is, for each $i$ in one group we find a $j$ in the other group such that $\\mathbf{V^C}_i=\\mathbf{V^C}_j$.\n\\item [4.] We discard all elements $i$ that we are not able to match.\n\\end{itemize}\nThis procedure returns the CEM datasets. After creating these datasets, we apply regression models to estimate the effect of the independent variable (group size) on the outcome variables (survival, fecundity, and yearly citations). \nSpecifically, we use a logistic regression, a linear regression, and a logistic regression model to study the effects of group size, respectively, on the mentee's survival, fecundity, and being among the top 5\\% cited researchers. 
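The coarsening-and-matching steps above can be sketched as follows. This is a simplified illustration with hypothetical covariate arrays, not the implementation used in our analysis (which follows the cited CEM literature):

```python
import numpy as np

def coarsen(values, n_bins=4):
    """Map each value to its quantile bin (step 2 of the CEM procedure)."""
    edges = np.quantile(values, np.linspace(0, 1, n_bins + 1)[1:-1])
    return np.searchsorted(edges, values, side="right")

def cem_match(covariates_a, covariates_b, n_bins=4):
    """Exact matching on coarsened covariate vectors (steps 3-4).

    covariates_a/b: (n_subjects, n_covariates) arrays for the two groups.
    Returns index pairs (i, j) whose coarsened vectors coincide;
    unmatched subjects are simply discarded.
    """
    pooled = np.vstack([covariates_a, covariates_b])
    binned = np.column_stack(
        [coarsen(pooled[:, c], n_bins) for c in range(pooled.shape[1])]
    )
    bins_a, bins_b = binned[: len(covariates_a)], binned[len(covariates_a):]
    used_b, pairs = set(), []
    for i, row in enumerate(bins_a):
        for j, other in enumerate(bins_b):
            if j not in used_b and np.array_equal(row, other):
                pairs.append((i, j))
                used_b.add(j)
                break
    return pairs
```

The regression models are then fitted only on the matched pairs, which is what makes the CEM approach more stringent than regression on the full data.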
See Supplementary Note 3 and Note 4 for details on the variables, the CEM regression models, the Chi-square test and cross-validation.\n\n\\subsection*{Regression variables}\nThe regression includes controls for the following variables: \n\\begin{itemize}\n \\item [$\\bullet$]\\textit{YearlyPubsOfMentor} -- number of yearly publications over a mentor's career.\n \\item [$\\bullet$]\\textit{TotalPubsOfMentor} -- number of total publications over a mentor's career.\n \\item [$\\bullet$]\\textit{YearlyCitationOfMentor} -- number of yearly citations over a mentor's career.\n \\item [$\\bullet$]\\textit{TotalCitationOfMentor} -- number of total citations over a mentor's career.\n \\item [$\\bullet$]\\textit{YearlyCollaOfMentor} -- number of yearly coauthors over a mentor's career.\n \\item [$\\bullet$]\\textit{TotalCollaOfMentor} -- number of total coauthors over a mentor's career.\n \\item [$\\bullet$]\\textit{PubsOfMentorInTraining} -- number of the mentor's papers during a given mentee's training period.\n \\item [$\\bullet$]\\textit{CareerAgeOfMentorInTraining} -- the mentor's career stage at the mentee's graduation.\n \\item [$\\bullet$]\\textit{First5YearPubsOfMentee} -- number of publications in the first 5 years of a mentee's career.\n \\item [$\\bullet$]\\textit{First5YearCitationOfMentee} -- total number of citations in the first 5 years of a mentee's career.\n \\item [$\\bullet$]\\textit{First5YearCollaOfMentee} -- number of coauthors in the first 5 years of a mentee's career.\n \\item [$\\bullet$]\\textit{CollaPubsWithMentor} -- number of papers co-authored with the mentor during the training period.\n \\item [$\\bullet$]\\textit{MenteeFromBigGroup} -- the independent variable: 1 if the mentee graduated from a big group, 0 if from a small group.\n \\item [$\\bullet$]\\textit{survival} -- a binary dependent variable.\n \\item [$\\bullet$]\\textit{fecundity} -- a discrete dependent variable.\n \\item 
[$\\bullet$]\\textit{Top5\\%YearlyCitations} -- a binary dependent variable indicating whether the mentee is among the top 5\\% scientists for yearly citations.\n\\end{itemize}\nMore information about the variables of the regressions can be found in Supplementary Note 3.\n\n\n\\section{Introduction}\n\nInvariants under local unitary transformations are tightly related to the discussions on nonlocality, a fundamental phenomenon in quantum mechanics, as well as to quantum entanglement and the classification of quantum states under local transformations. In recent years many approaches have been presented to construct invariants of local unitary transformations. One method is developed in terms of polynomial invariants \\cite{Rains,Grassl}, which in principle allows one to compute all the invariants of local unitary transformations, though it is not easy to carry out operationally. In Ref. \\cite{makhlin}, a complete set of 18 polynomial invariants is presented for the local unitary equivalence of two-qubit mixed states. Partial results have been obtained for three-qubit states \\cite{linden}, tripartite pure and mixed states \\cite{SCFW}, and some generic mixed states \\cite{SFG, SFW, SFY}. Recently the local unitary equivalence problem for multiqubit \\cite{mqubit} and general multipartite \\cite{B. Liu} pure states has been solved.\n\nHowever, one still has no operational way in general to judge the equivalence of two arbitrary-dimensional bipartite or multipartite mixed states under local unitary transformations. An effective way to deal with the local equivalence of quantum states is to find a complete set of invariants under local unitary transformations. Nevertheless, these invariants usually depend on the detailed expressions of the pure state decompositions of a state. 
For a given state, such pure state decompositions are infinitely many. The problem becomes particularly complicated when the density matrices are degenerate, since in this case even the eigenvector decompositions of a given state are not unique.\n\nIn this note, we give a way of constructing invariants under local unitary transformations such that the invariants obtained in this way are independent of the detailed pure state decompositions of a given state. They give rise to operational necessary conditions for the equivalence of quantum states under local unitary transformations. We show that the hyperdeterminants, the generalized determinants for higher dimensional matrices \\cite{gel}, can be used to construct such invariants. The hyperdeterminant is in fact closely related to entanglement measures like the concurrence \\cite{Hill,wott,uhlm,rungta,ass} and the 3-tangle \\cite{coff}. It has also been used in the classification of multipartite pure states \\cite{miy,Luq,vie}. By employing hyperdeterminants, we construct some trace invariants that are independent of the detailed pure state decompositions of a given state. These trace invariants are a priori invariant under local unitary transformations.\n\n\\section{State decomposition independent local invariants}\n\nLet $H_1$ and $H_2$ be $n$- and $m$-dimensional complex Hilbert spaces, with $\\{\\vert i\\rangle\\}_{i=1}^n$ and $\\{\\vert j\\rangle\\}_{j=1}^m$ the orthonormal bases of the spaces $H_1$ and $H_2$, respectively. Let $\\rho$ be an arbitrary mixed state defined on $H_1\\otimes H_2$,\n\\begin{eqnarray}\\label{general decomposition-1}\n\\rho=\\sum_{i=1}^I p_i|v_i\\ra\\la v_i|,\n\\end{eqnarray}\nwhere $|v_i\\ra$ is a normalized bipartite pure state of the form:\n$$\n|v_i\\ra=\\sum_{k,l=1}^{n,m}a_{kl}^{(i)}|kl\\ra,\\ \\ \\sum_{k,l=1}^{n,m}a_{kl}^{(i)}a_{kl}^{(i)\\ast}=1,\\ \\ a_{kl}^{(i)}\\in \\Cb,\n$$\nwhere $\\ast$ denotes complex conjugation. 
Denote by $A_i$ the matrix with entries given by the coefficients of the vector $\\sqrt{p_{i}}|v_i\\ra$, i.e. $(A_i)_{kl}=(\\sqrt{p_{i}}a_{kl}^{(i)})$ for all $i=1,\\cdots,I$. Define the $I\\times I$ matrix $\\Omega$ such that $(\\Omega)_{ij}=tr(A_iA_{j}^{\\dag}), ~~~ i,j=1,\\cdots,I,$ \\ where $\\dag$ stands for transpose and complex conjugation.\n\n\nThe pure state decomposition (\\ref{general decomposition-1}) of a given mixed state $\\rho$ is not unique. For another decomposition:\n\\begin{eqnarray}\\label{general decomposition-2}\n\\rho=\\sum_{i=1}^I q_i|\\psi_i\\ra\\la\\psi_i|,\n\\end{eqnarray}\nwith\n$$\n|\\psi_i\\rangle=\\sum_{k,l=1}^{n,m}b_{kl}^{(i)}|kl\\ra,\\ \\ \\sum_{k,l=1}^{n,m}b_{kl}^{(i)}b_{kl}^{(i)\\ast}=1,\\ \\ b_{kl}^{(i)}\\in\\Cb,\n$$\none similarly has matrices $B_i$ with entries $(B_i)_{kl}=(\\sqrt{q_{i}}b_{kl}^{(i)})$, $i=1,\\cdots,I$, and an $I\\times I$ matrix $\\Omega^\\prime $ with entries\n$$\n(\\Omega^\\prime )_{ij}=tr(B_iB_{j}^{\\dag}),~~~ \\ i,j=1,\\cdots,I.\n$$\n\nA quantity $F(\\rho)$ is said to be invariant under local unitary transformations if $F(\\rho)=F((u_1\\otimes u_2)\\rho (u_1\\otimes u_2)^\\dag)$ for any unitary operators $u_1\\in SU(n)$ and $u_2\\in SU(m)$. In general $F(\\rho)$ may depend on the detailed pure state decomposition. We investigate invariants $F(\\rho)$ that are independent of the detailed decompositions of $\\rho$. That is, the expressions in Eq. (\\ref{general decomposition-1}) and Eq. (\\ref{general decomposition-2}) give the same value of $F(\\rho)$ for a given state $\\rho$. These kinds of invariants are of special significance in determining the equivalence of two density matrices under local unitary transformations.\n\nTwo density matrices $\\rho$ and $\\tilde{\\rho}$ are said to be equivalent under local unitary transformations if there exist unitary operators $u_1$ (resp. $u_2$) on the first (resp. 
second) space of $H_1\\otimes H_2$ such that \\be\\label{lu} \\tilde{\\rho}= (u_1\\otimes u_2)\\rho(u_1\\otimes u_2)^\\dag. \\ee\n\nA necessary condition for (\\ref{lu}) to hold is that the local invariants have the same values, $F(\\rho)=F(\\tilde{\\rho})$. Therefore, if the expressions of the invariants $F(\\rho)$ do not depend on the detailed pure state decomposition, one can easily compare the values of $F(\\rho)$ and $F(\\tilde{\\rho})$. Otherwise one has to verify $F(\\rho)=F(\\tilde{\\rho})$ by surveying all the possible pure state decompositions of $\\rho$ and $\\tilde{\\rho}$. In particular, when $\\rho$ is degenerate, even the eigenvector decomposition is not unique, which is usually the main obstacle to finding an operational criterion for the local equivalence of quantum states. In fact, we have presented a complete set of invariants in \\cite{ZGFJL}. However, these invariants depend on the eigenvectors of a state $\\rho$. When the state is degenerate, this set of invariants is no longer efficient as a criterion of local equivalence.\n\nWe set out to discuss how to find parametrization-independent local unitary invariants. 
First of all, we give an elementary result showing that the determinant can be used to construct invariants that are independent of the choice of the pure state decomposition.\n\n\\noindent{ \\bf Theorem 1:} The coefficients $F_i(\\Omega)$, $i=1,2,...,I$, of the characteristic polynomial of the matrix $\\Omega$,\n\\begin{eqnarray}\\label{thm}\n\\det(\\Omega-\\lambda\\,E)= (-\\lambda)^I + (-\\lambda)^{I-1} F_1(\\Omega) + \\cdots + (-\\lambda) F_{I-1}(\\Omega)+ F_{I}(\\Omega) = \\Sigma_{i=0}^{I}(-\\lambda)^{I-i}F_{i}(\\Omega),\n\\end{eqnarray}\nwhere $E$ is the $I \\times I$ unit matrix, $F_{0}(\\Omega)=1$, and $\\det$ denotes the determinant, have the following properties:\n\n(i) $F_{i}(\\Omega)$ are independent of the pure state decompositions of $\\rho$;\n\n(ii) $F_{i}(\\Omega)$ are invariant under local unitary transformations, $i=1,\\cdots, I$.\n\n\\noindent{ \\bf Proof:} (i) If Eq. (\\ref{general decomposition-1}) and Eq. (\\ref{general decomposition-2}) are two different representations of a given mixed state $\\rho$, we have $B_{i}=\\Sigma_{j} U_{ij}A_{j}$ for some unitary operator $U$ \\cite{nie}. Consequently,\n$$\n\\ba{rcl} \\Omega_{ij}^{\\prime}&=& \\displaystyle tr(B_{i}B_{j}^{\\dag}) = tr\\left[\\sum_{k,l}U_{ik}A_{k}U_{jl}^\\ast A_{l}^{\\dag}\\right]\\\\[5mm]\n&=&\\displaystyle \\sum_{k,l}U_{ik}U_{jl}^\\ast\\, tr(A_{k}A_{l}^{\\dag}) =\\sum_{k,l}U_{ik}U_{jl}^\\ast \\Omega_{kl} =(U\\Omega U^{\\dag})_{ij},\n\\ea\n$$\ni.e. $\\Omega^\\prime =U\\Omega U^{\\dag}$. Therefore $\\det(\\Omega^\\prime -\\lambda\\,E)=\\det( U\\Omega U^{\\dag}-\\lambda\\,E)=\\det(\\Omega-\\lambda\\,E)$. Thus the matrices $\\Omega$ and $\\Omega^\\prime $ have the same characteristic polynomial, namely $F_{i}(\\Omega)=F_{i}(\\Omega^\\prime )$. Therefore the $F_{i}(\\Omega)$ are invariant under changes of the pure state decomposition.\n\n(ii) Let $P\\otimes Q\\in SU(n)\\otimes SU(m)$. 
Under the local unitary\ntransformations one has\n$$\\tilde{\\rho}=(P\\otimes Q)\\rho (P\\otimes\nQ)^{\\dag}=\\sum_{i=1}^{I}p_i(P\\otimes Q)|v_i\\ra\\la v_i| (P\\otimes\nQ)^{\\dag}=\\sum_{i=1}^{I}p_i|w_i\\ra\\la w_i|,$$ with\n$$|w_i\\ra=P\\otimes Q|v_i\\ra=\\sum_{k,l=1}^{n,m}a_{kl}^{(i)\\prime\n}|kl\\ra,\\ \\ \\sum_{k,l=1}^{n,m}a_{kl}^{(i)\\prime }a_{kl}^{{(i)\\prime\n}\\ast}=1,\\ \\ a_{kl}^{(i)\\prime }\\in \\Cb.$$ Denote\n$(A_i^{\\prime})_{kl}=\\sqrt{p_i}a_{kl}^{(i)\\prime}$. We have \\be\nA_{i}^{\\prime}=PA_iQ^{T}. \\ee Therefore\n$tr(A_iA_{j}^{\\dag})=tr(A_i^{\\prime}A_{j}^{\\prime \\dag})$ and\n$\\Omega(\\rho)=\\Omega(\\tilde{\\rho})$. Hence\n$F_{i}(\\Omega(\\rho))=F_{i}(\\Omega(\\tilde{\\rho}))$, and\n$F_{i}(\\Omega)$, $i=1,\\cdots, I$, are invariant under local unitary\ntransformations. \\qed\n\nIn particular, the invariants $F_1= \\sum_i tr(A_iA_{i}^{\\dag})$\nand $F_I = \\det (\\Omega)$. For the case $I=2$, one has\n$$\\Omega=\\left(\n\\begin{array}{cc}\ntr(A_1A_1^{\\dag}) & tr(A_1A_2^{\\dag}) \\\\\ntr(A_2A_1^{\\dag}) & tr(A_2A_2^{\\dag}) \\\\\n\\end{array}\n\\right)$$ and $F_1=tr(A_1A_1^{\\dag})+tr(A_2A_2^{\\dag})$,\n$F_2=tr(A_1A_1^{\\dag})tr(A_2A_2^{\\dag})-tr(A_1A_2^{\\dag})tr(A_2A_1^{\\dag})$.\n\n\\noindent{ \\bf Remark:} The number of local invariants $F_i$ is\nuniquely determined by the rank $r$ of the mixed state $\\rho$, i.e.\n$I=r$. Therefore we only need to calculate the invariants\ncorresponding to the eigenvector decomposition, because for an\narbitrary pure state decomposition $\\rho=\\Sigma_{j=1}^{J}\nq_j|\\psi_j\\rangle\\langle\\psi_j|$ with $J>r$, the above determinant\nis the same as that of the eigenvector decomposition of\n$\\rho=\\Sigma_{i=1}^r p_i|\\phi_i\\rangle\\langle\\phi_i|$ after adding\n$J-r$ zero vectors. 
The determinant $\\det(\\Omega^\\prime\n-\\lambda\\,E)$ of the eigenvector decomposition of $\\rho$ after\nadding $J-r$ zero vectors and $\\det(\\Omega-\\lambda\\,E)$ of\n$\\rho=\\Sigma_{i=1}^{r} p_i|\\phi_i\\rangle\\langle\\phi_i|$ without\n$J-r$ zero vectors have the relation: $\\det(\\Omega^\\prime\n-\\lambda\\,E)=\\lambda^{J-r}\\det(\\Omega-\\lambda\\,E)$. This means that\nthe number of independent local invariants given by (\\ref{thm}) does\nnot depend on the number of pure states in the ensemble of a given\n$\\rho$. Therefore if two mixed states $\\rho$ and $\\tilde{\\rho}$ have\ndifferent ranks, they are not equivalent under local unitary\ntransformations. If their\nranks are the same, one only needs to calculate the corresponding\ninvariants with respect to the same number $I$ of pure states in\nthe pure state decompositions.\n\nIn fact, for a quantum state $\\rho$ in its eigenvector decomposition\n$\\rho=\\sum_i \\lambda_i |\\psi_i\\rangle\\langle\\psi_i|$, the\ncorresponding matrix $\\Omega$ is diagonal with $\\rho$'s\neigenvalues $ \\lambda_i$ as the diagonal entries. In this case the\nlocal invariants from Theorem 1 are just the coefficients of the\ncharacteristic polynomial of the quantum state $\\rho$. Theorem 1\nshows that these coefficients are local invariants and independent\nfrom the detailed pure state decompositions. Moreover, the simple approach\nemployed in Theorem 1 can be generalized to construct more local\ninvariants that are independent of the detailed pure state\ndecompositions by using the hyperdeterminant \\cite{gel}.\n\nIn order to derive more parametrization independent quantities we\nconsider the multilinear form $f_A: \\underbrace{V\\otimes \\cdots\n\\otimes V}_\\text{$2s$}\\mapsto \\mathbb C$ given by\n\\begin{equation}\\label{eq:form}\nf_A(e_{i_1}, \\cdots, e_{i_s}, e_{j_1}, \\cdots,\ne_{j_s})=tr(A_{i_1}A_{j_1}^{\\dagger}\\cdots\nA_{i_s}A_{j_s}^{\\dagger}),\n\\end{equation}\nwhere $e_i$ ($1\\leq i\\leq I$) are standard basis elements in\n$V={\\mathbb C}^I$. 
The multilinear form $f_A$ can also be written as a\ntensor in $V^*\\otimes\\cdots \\otimes V^*$:\n\\begin{equation}\\label{eq:form2}\nf_A=\\sum_{\\underline{i},\n\\underline{j}}tr(A_{i_1}A_{j_1}^{\\dagger}\\cdots\nA_{i_s}A_{j_s}^{\\dagger})e_{i_1}^*\\otimes\\cdots\\otimes\ne_{i_s}^*\\otimes e_{j_1}^*\\otimes\\cdots\\otimes e_{j_s}^*,\n\\end{equation}\nwhere $e_i^*$ are standard $1$-forms on ${\\mathbb C}^I$ such that\n$e_i^*(e_j)=\\delta_{ij}$, and $\\underline{i}=(i_1, \\cdots, i_s),\n\\underline{j}=(j_1, \\cdots, j_s), 1\\leq i_p, j_p\\leq I$. In general\nwe call the $2s$-dimensional matrix or hypermatrix\n$A=(A_{\\underline{i}\\underline{j}})=(tr(A_{i_1}A_{j_1}^{\\dagger}\\cdots\nA_{i_s}A_{j_s}^{\\dagger}))$ formed by the coefficients of\n(\\ref{eq:form2}) the hypermatrix of the multilinear form $f_A$\nrelative to\nthe standard basis. \n\nThe Cayley hyperdeterminant Det($A$) \\cite{gel} is defined to be the\nresultant of the multilinear form $f_A$, that is, Det($A$) is, up to\nnormalization, the irreducible polynomial whose vanishing\ncharacterizes those multilinear forms $f_A$ whose associated\nhypersurface has a singular point. It is known that \\cite{gel} the\nhyperdeterminant exists for a given format and is unique up to a\nscalar factor, if and only if the largest number in the format is\nless than or equal to the sum of the other numbers in the format.\nHyperdeterminants enjoy many of the properties of determinants. One\nof the most familiar properties of determinants, the multiplication\nrule $det(AB) = det(A) det(B)$, can be generalized to the situation\nof hyperdeterminants as follows. Given a multilinear form\n$f(x^{(1)}, ..., x^{(r)})$, suppose that a linear transformation\nacts on one of its components via an $n\\times n$ matrix $B$, $y^{(r)}\n= B x^{(r)}$. Then\n\\begin{equation}\\label{eq:det-action}\nDet(f.B) = Det(f) det(B)^{N\/n},\n\\end{equation}\nwhere $N$ is the degree of the hyperdeterminant. 
Therefore we have\nthe following result.\n\n\\begin{lemma} The hyperdeterminant of format $(k_1,\\ldots,k_r)$ is an invariant under\nthe action of the group $SL(k_1) \\otimes \\cdots \\otimes SL(k_r)$,\nand consequently also invariant under $SU(k_1) \\otimes \\cdots\n\\otimes SU(k_r)$.\n\\end{lemma}\n\\noindent{ \\bf Proof:} For $(A, B, \\cdots, C)\\in SL(k_1) \\otimes\n\\cdots \\otimes SL(k_r)$, it follows from Eq. (\\ref{eq:det-action})\nthat\n\n\\begin{align}\\label{eq:det-eq}\nDet((A_{(1)}\\cdot B_{(2)}\\cdot\\cdots C_{(r)}\\cdot)f) = Det(f)\ndet(A)^{N\/k_1}det(B)^{N\/k_2}\\cdots det(C)^{N\/k_r} =Det(f).\n\\end{align}\n\\qed\n\nThe three-dimensional hyperdeterminant of the format $2\\times\n2\\times 2$ is known as Cayley's hyperdeterminant \\cite{Ca}. In\nthis case the hyperdeterminant of a hypermatrix $A$ with components\n$a_{ijk}$, $i,j,k \\in \\{0, 1\\}$, is given by\n\\begin{eqnarray}\nDet(A) &=& a_{000}^2a_{111}^2 + a_{001}^2a_{110}^2 +\na_{010}^2a_{101}^2 + a_{100}^2a_{011}^2 -\n2a_{000}a_{001}a_{110}a_{111}\\\\\\nonumber\n&&-2a_{000}a_{010}a_{101}a_{111}-2a_{000}a_{011}a_{100}a_{111} -\n2a_{001}a_{010}a_{101}a_{110}\n\\\\\\nonumber &&-2a_{001}a_{011}a_{110}a_{100}- 2a_{010}a_{011}a_{101}a_{100} +\n4a_{000}a_{011}a_{101}a_{110}\\\\\\nonumber && +\n4a_{001}a_{010}a_{100}a_{111}.\n\\end{eqnarray}\nThis hyperdeterminant can be written in a more compact form by using\nthe Einstein convention and the Levi-Civita symbol\n$\\varepsilon^{ij}$, with $\\varepsilon^{00} =\\varepsilon^{11} = 0,\n\\varepsilon^{01} = -\\varepsilon^{10} = 1$: setting $b_{kn} =\n(1\/2)\\varepsilon^{il}\\varepsilon^{jm}a_{ijk}a_{lmn}$, one has $ Det(A)\n=(1\/2)\\varepsilon^{il}\\varepsilon^{jm}b_{ij}b_{lm}$. The\nfour-dimensional hyperdeterminant of the format $2\\times 2\\times 2\n\\times 2$ has been given in Ref. \\cite{Luq}.\n\nFor the general mixed state $\\rho$ in Eq. 
(\\ref{general\ndecomposition-1}), we can define a hypermatrix $\\Omega_{s}$ with\nentries\n\\begin{eqnarray}\n(\\Omega_{s})_{i_1i_2\\cdots i_sj_1j_2\\cdots j_s}\n=tr(A_{i_1}A_{j_1}^{\\dag}A_{i_2}A_{j_2}^{\\dag}\\cdots\nA_{i_s}A_{j_s}^{\\dag}),\n\\end{eqnarray}\nfor $i_k,j_k=1,\\cdots,I$ and $s \\geq 1$. The\nformat of $\\Omega_{s}$ is $I\\times \\cdots \\times I$.\n\n\\noindent{ \\bf Theorem 2:} $Det (\\Omega_s-\\lambda\\,E)$, with\n$E=(E_{i_1,i_2,\n\\cdots,i_s,j_1,j_2,\\cdots,j_s})=(\\delta_{i_1j_1}\\delta_{i_2j_2}\n\\cdots \\delta_{i_sj_s})$, is independent of the pure state\ndecompositions of $\\rho$. It is also invariant under local unitary\ntransformations of $\\rho$. In particular, all coefficients of the\npolynomial $Det (\\Omega_s-\\lambda\\,E)$ are local invariants\nindependent of the pure state decompositions and invariant\nunder local unitary transformations.\n\n\\noindent{ \\bf Proof:} We first show that it is independent of the\npure state decomposition of $\\rho$. Let Eq. (\\ref{general\ndecomposition-1}) and Eq. (\\ref{general decomposition-2}) be two\ndifferent representations of a given mixed state $\\rho$. We have\n\\begin{eqnarray}\n(\\Omega_s^\\prime )_{i_1i_2\\cdots i_sj_1j_2\\cdots\nj_s}&=&tr(B_{i_1}B_{j_1}^{\\dag}B_{i_2}B_{j_2}^{\\dag}\\cdots\n B_{i_s}B_{j_s}^{\\dag})\\\\\\nonumber\n&=&tr\\left[\\Sigma_{i_1^\\prime j_1^\\prime, \\cdots, i_s^\\prime\n j_s^\\prime} U_{i_1i_1^\\prime}A_{i_1^{\\prime}}U_{j_1j_1^\\prime}^\\ast\nA_{j_1^{\\prime}}^{\\dag}\\cdots\nU_{i_si_s^\\prime}A_{i_s^{\\prime}}U_{j_sj_s^\\prime}^\\ast\nA_{j_s^{\\prime}}^{\\dag}\\right]\\\\\\nonumber &=&((U \\otimes U \\otimes\n\\cdots \\otimes U) (\\Omega_{s})(U^{\\dag} \\otimes U^{\\dag} \\otimes\n\\cdots \\otimes U^{\\dag}))_{i_1i_2\\cdots i_sj_1j_2\\cdots j_s}.\n\\end{eqnarray}\nTherefore $\\Omega^\\prime _s=(U \\otimes U \\otimes \\cdots \\otimes U)\n\\Omega_{s} (U^{\\dag} \\otimes U^{\\dag} \\otimes \\cdots \\otimes\nU^{\\dag})$. 
Under this action, the associated multilinear form\n$f_{\\omega}$ is acted upon by $U \\otimes U \\otimes \\cdots \\otimes U$\nand $U^{\\dag} \\otimes U^{\\dag} \\otimes \\cdots \\otimes U^{\\dag}$ as\nfollows:\n\\begin{equation*}\n(U_{(1)}\\cdot \\cdots U_{(s)}\\cdot U^{\\ast}_{(1)}\\cdot \\cdots\nU^{\\ast}_{(s)}\\cdot) f_{\\omega}.\n\\end{equation*}\nUsing formula (\\ref{eq:det-eq}) for this action we get $Det\n(\\Omega^\\prime _s-\\lambda\\,E)=Det (\\Omega_s-\\lambda\\,E)$, and thus\n$Det (\\Omega_s-E\\,\\lambda)$ does not depend on the detailed pure\nstate decompositions of a given $\\rho$. Note that in general we\ndon't know the exact formula for the hyperdeterminant, but we can\nstill derive its invariance abstractly.\n\nOn the other hand, under local unitary transformations\n$\\tilde{\\rho}=(P\\otimes Q)\\rho(P\\otimes Q)^\\dag$ for some local\nunitary operators $P\\otimes Q\\in SU(n)\\otimes SU(m)$, similarly to the\nproof of the second part of Theorem 1 and using Eq.\n(\\ref{eq:det-action}), it is easy to get\n$\\Omega_s=\\Omega^\\prime _s$. Therefore $Det(\\Omega_s-\\lambda\\,E)$ is\ninvariant under local unitary transformations. Moreover,\nfollowing \\cite{Luq}, the coefficients of these invariant polynomials are invariant under\nlocal unitary transformations. \\qed\n\nAs applications of our theorems we now give two interesting\nexamples.\n\nExample 1: Consider two mixed states $\\rho_1=diag\\{1\/2,1\/2,0,0\\}$\nand $\\rho_2=diag\\{1\/2,0,1\/2,0\\}$. 
$\\rho_1$ has a pure state\ndecomposition with\n$$\nA_0=\\left(\n\\begin{array}{cc}\n\\frac{1}{\\sqrt{2}} & 0 \\\\\n0 & 0 \\\\\n\\end{array}\n\\right),~~~~ A_1=\\left(\n\\begin{array}{cc}\n 0 & \\frac{1}{\\sqrt{2}} \\\\\n 0 & 0\n \\end{array}\n \\right).\n$$\nWhile $\\rho_2$ has a pure state decomposition with\n$$\nB_0=\\left(\n \\begin{array}{cc}\n \\frac{1}{\\sqrt{2}} & 0 \\\\\n 0 & 0 \\\\\n \\end{array}\n \\right),~~~~\n B_1=\\left(\n \\begin{array}{cc}\n 0 & 0 \\\\\n \\frac{1}{\\sqrt{2}} & 0\n \\end{array}\n \\right).\n$$\nWe have the corresponding matrices\n$(\\Omega(\\rho_1))_{i,j}=tr(A_iA_{j}^{\\dag})$ and $(\\Omega\n(\\rho_2))_{i,j}=tr(B_iB_{j}^{\\dag})$, $i,j=0,1$. From Theorem 1 one\ncan find that these two states have the same values of the\ninvariants in Eq. (\\ref{thm}), $F_i(\\Omega(\\rho_1))\n=F_i(\\Omega(\\rho_2))$.\n\nWe now consider further the four-dimensional hyperdeterminant of the\nformat $2\\times 2\\times 2 \\times 2$ \\cite{Luq}. Let\n$(\\Omega(\\rho_1))_{ijkl}=tr(A_iA_{j}^{\\dag}A_kA_{l}^{\\dag})\\equiv\na_r$, $r=0,\\cdots,15$, where $r=8i+4j+2k+l$. From Ref. \\cite{Luq},\none invariant of degree $4$ is given by\n$$\nN(\\rho_1)=det\\left(\n\\begin{array}{cccc}\na_{0} & a_{1} & a_{8} & a_{9} \\\\\n a_{2} & a_{3} & a_{10} & a_{11} \\\\\n a_{4} & a_{5} & a_{12} & a_{13} \\\\\n a_{6} & a_{7} & a_{14} & a_{15}\n \\end{array}\n\\right)=\\frac{1}{256}.\n$$\nHowever for $\\rho_{2}$ we have $N(\\rho_{2})=0$. Therefore $\\rho_1 $\nand $\\rho_{2}$ are not equivalent under local unitary\ntransformations.\n\nIn Ref. \\cite{che}, the Ky Fan norm of the realignment matrix of the\nquantum states $\\cal{N}(\\rho)$ was proved to be invariant under local\nunitary operations. By calculation we find\n${\\cal{N}}(\\rho_1)={\\cal{N}}(\\rho_2)=\\frac{1}{\\sqrt{2}}$. This means\nthe Ky Fan norm of the realignment matrix cannot detect that\n$\\rho_1$ and $\\rho_2$ are not equivalent under local unitary\ntransformations. 
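The claims of Example 1 are straightforward to check numerically. The following sketch (the helper names are ours, introduced only for illustration; numpy assumed) builds $\Omega$ and the degree-$4$ invariant $N$ directly from the decompositions above:

```python
import numpy as np

# Pure-state decompositions of rho_1 and rho_2 from Example 1.
s2 = 1 / np.sqrt(2)
A = [np.array([[s2, 0], [0, 0]]), np.array([[0, s2], [0, 0]])]   # rho_1
B = [np.array([[s2, 0], [0, 0]]), np.array([[0, 0], [s2, 0]])]   # rho_2

def omega(mats):
    """Matrix Omega_{ij} = tr(A_i A_j^dagger) of Theorem 1."""
    return np.array([[np.trace(X @ Y.conj().T) for Y in mats] for X in mats])

def invariant_N(mats):
    """Degree-4 invariant: det of the 4x4 matrix built from
    a_r = tr(A_i A_j^+ A_k A_l^+) with r = 8i + 4j + 2k + l."""
    a = np.zeros(16)
    for i in range(2):
        for j in range(2):
            for k in range(2):
                for l in range(2):
                    a[8 * i + 4 * j + 2 * k + l] = np.trace(
                        mats[i] @ mats[j].conj().T @ mats[k] @ mats[l].conj().T).real
    M = np.array([[a[0], a[1], a[8], a[9]],
                  [a[2], a[3], a[10], a[11]],
                  [a[4], a[5], a[12], a[13]],
                  [a[6], a[7], a[14], a[15]]])
    return np.linalg.det(M)

print(np.allclose(omega(A), omega(B)))   # True: the Theorem 1 invariants coincide
print(invariant_N(A))                    # ~ 0.00390625 = 1/256
print(invariant_N(B))                    # ~ 0
```

Both $\Omega$ matrices equal $diag\{1\/2,1\/2\}$, so all $F_i$ coincide, while the degree-$4$ invariant separates the two states, as claimed.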
Thus, for these two states, Theorem 2 is superior to the Ky Fan\nnorm criterion.\n\n\nExample 2: Consider the two mixed states $\\sigma_1=\\left(\n\\begin{array}{cccc}\n\\frac{1}{3} & 0 & 0 & 0 \\\\\n 0 & \\frac{1}{3} & \\frac{1}{3} & 0 \\\\\n 0 & \\frac{1}{3} & \\frac{1}{3} & 0 \\\\\n 0 & 0 & 0 & 0\n \\end{array}\n\\right)$ and $\\sigma_2=diag\\{2\/3,0,0,1\/3\\}$. Then $\\sigma_1$ has a\npure state decomposition with\n$$\nC_0=\\left(\n\\begin{array}{cc}\n\\frac{1}{\\sqrt{3}} & 0 \\\\\n0 & 0 \\\\\n\\end{array}\n\\right),~~~~ C_1=\\left(\n\\begin{array}{cc}\n 0 & \\frac{1}{\\sqrt{3}} \\\\\n \\frac{1}{\\sqrt{3}} & 0\n \\end{array}\n \\right).\n$$\nWhile $\\sigma_2$ has a pure state decomposition with\n$$\nD_0=\\left(\n \\begin{array}{cc}\n \\frac{\\sqrt{2}}{\\sqrt{3}} & 0 \\\\\n 0 & 0 \\\\\n \\end{array}\n \\right),~~~~\n D_1=\\left(\n \\begin{array}{cc}\n 0 & 0 \\\\\n 0 & \\frac{1}{\\sqrt{3}}\n \\end{array}\n \\right).\n$$\nWe have the corresponding matrices\n$(\\Omega(\\sigma_1))_{i,j}=tr(C_iC_{j}^{\\dag})$ and $(\\Omega\n(\\sigma_2))_{i,j}=tr(D_iD_{j}^{\\dag})$, $i,j=0,1$. From Theorem 1\none can find that these two states have the same values of the\ninvariants in Eq. (\\ref{thm}), $F_i(\\Omega(\\sigma_1))\n=F_i(\\Omega(\\sigma_2))$.\n\nWe further consider the four-dimensional hyperdeterminant of\nthe format $2\\times 2\\times 2 \\times 2$. Let\n$(\\Omega(\\sigma_1))_{ijkl}=tr(C_iC_{j}^{\\dag}C_kC_{l}^{\\dag})\\equiv\na_s$, $s=0,\\cdots,15$, where $s=8i+4j+2k+l$. From Ref. \\cite{Luq},\nanother invariant of degree $4$ is given by\n$$\nM(\\sigma_1)=det\\left(\n\\begin{array}{cccc}\na_{0} & a_{8} & a_{2} & a_{10} \\\\\n a_{1} & a_{9} & a_{3} & a_{11} \\\\\n a_{4} & a_{12} & a_{6} & a_{14} \\\\\n a_{5} & a_{13} & a_{7} & a_{15}\n \\end{array}\n\\right)=\\frac{1}{6561}.\n$$\nHowever for $\\sigma_{2}$ we have $M(\\sigma_{2})=0$. Therefore\n$\\sigma_1 $ and $\\sigma_{2}$ are not equivalent under local unitary\ntransformations. 
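As a complementary sanity check (a numpy sketch with our own helper names; we do not recompute the degree-$4$ hyperdeterminant invariant $M$ here), one can verify that the Theorem 1 invariants of $\sigma_1$ and $\sigma_2$ coincide and that their one-qubit reductions have identical spectra, so a finer invariant is genuinely needed:

```python
import numpy as np

t = 1 / np.sqrt(3)
C = [np.array([[t, 0], [0, 0]]), np.array([[0, t], [t, 0]])]               # sigma_1
D = [np.array([[np.sqrt(2) * t, 0], [0, 0]]), np.array([[0, 0], [0, t]])]  # sigma_2

def char_poly_invariants(mats):
    """Coefficients of det(Omega - lambda E), Omega_{ij} = tr(A_i A_j^dagger)."""
    om = np.array([[np.trace(X @ Y.conj().T) for Y in mats] for X in mats])
    return np.poly(om)

def density_matrix(mats):
    """Reassemble rho = sum_i |v_i><v_i| with (v_i)_{kl} = (A_i)_{kl}."""
    return sum(np.outer(m.reshape(-1), m.reshape(-1).conj()) for m in mats)

def reduced_spectra(rho):
    r = rho.reshape(2, 2, 2, 2)
    rho_a = np.einsum('ikjk->ij', r)   # trace out the second qubit
    rho_b = np.einsum('kikj->ij', r)   # trace out the first qubit
    return np.sort(np.linalg.eigvalsh(rho_a)), np.sort(np.linalg.eigvalsh(rho_b))

print(np.allclose(char_poly_invariants(C), char_poly_invariants(D)))  # True
spec_C = reduced_spectra(density_matrix(C))
spec_D = reduced_spectra(density_matrix(D))
print(np.allclose(spec_C, spec_D))  # True: both reductions have spectrum {1/3, 2/3}
```

Here $\Omega(\sigma_1)=diag\{1\/3,2\/3\}$ and $\Omega(\sigma_2)=diag\{2\/3,1\/3\}$ share the same characteristic polynomial, so the $F_i$ cannot separate the states.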
From Example 2 one can see that the reduced one-qubit density\nmatrices of $\\sigma_1$ and $\\sigma_2$ have the same spectra.\nTherefore the spectra of the reduced one-qubit density\nmatrices alone cannot decide the equivalence of given states.\n\nOur results can be generalized to the multipartite case. Let\n$H_1,H_2,\\cdots,H_m$ be $n_1,n_2, \\cdots, n_m$-dimensional complex\nHilbert spaces with $\\{\\vert k_1\\rangle\\}_{k_1=1}^{n_1}$, $\\{\\vert\nk_2\\rangle\\}_{k_2=1}^{n_2}$, $\\cdots $, $\\{\\vert\nk_m\\rangle\\}_{k_m=1}^{n_m}$ the orthonormal bases of $H_1, H_2,\n\\cdots, H_m$, respectively. Let $\\rho$ be an arbitrary mixed state\ndefined on $H_1\\otimes H_2\\otimes\\cdots\\otimes H_m$,\n$\\rho=\\sum_{i=1}^Ip_i|v_i\\ra\\la v_i|$, where $|v_i\\ra$ is a\nmultipartite pure state of the form:\n$|v_i\\ra=\\sum_{k_1,k_2,\\cdots,k_m=1}^{n_1,n_2,\\cdots,n_m}a_{k_1k_2\\cdots\nk_m}^{(i)}|k_1k_2\\cdots k_m\\ra,\\ \\ a_{k_1k_2\\cdots k_m}^{(i)}\\in\n\\Cb$. Now we view $|v_i\\ra$ as a bipartite pure state under the\npartition between the first $l$ subsystems and the rest, $1\\leq l\n0$ is a small parameter,\n$\\vec f_\\alpha \\in \\cont{2}(U,\\rR^{ n}),$ $\\alpha=1,\\dots,d$ are the\ncomponents of an advective flux and\n$\\vec g_\\alpha \\in \\cont{2}(U \\times \\rR^{d \\times n},\\rR^{ n}),$\n$\\alpha=1,\\dots,d$ are the\ncomponents of a diffusive flux. 
Moreover,\nthe so-called state space $U \\subset \\rR^n$ is an open set; $T>0$ is\nsome finite time and $\\rT^d$ denotes the $d$-dimensional flat torus.\nBy presenting the arguments in this form we are able to simultaneously\nanalyse the NSF system as well as various other examples.\n\nWe are interested in the case of $\\varepsilon$ being very small, so that on\nlarge parts of the computational domain the inviscid model\n\\begin{equation}\\label{sim} \\partial_t \\vec u + \\sum_\\alpha \\partial_{x_\\alpha} \\vec f_\\alpha (\\vec u) = 0 \\text{ on } \\rT^d \\times (0,T)\\end{equation}\nis a reasonable approximation of \\eqref{cx}; numerical\nmethods for \\eqref{sim} are also computationally cheaper.\n\nWe will show later how the NSF equations fit into the framework\n\\eqref{cx} and that the Euler equations have the form \\eqref{sim}.\nIndeed, we will present our analysis first for a scalar model problem,\nwhere we are able to derive fully computable a posteriori estimators,\nbefore turning to \\eqref{sim} and \\eqref{cx}. Pairs of models \\eqref{sim} and\n\\eqref{cx} also exist in applications beyond compressible fluid flows,\nfor example in traffic modelling.\n\nWe will assume throughout this work that \\eqref{cx} and \\eqref{sim}\nare endowed with an (identical) convex entropy\/entropy flux pair.\nThis assumption is true for the Euler and NSF model hierarchy, and\nexpresses that both models are compatible with the second law of\nthermodynamics. For the scalar model problem it is trivially\nsatisfied.\n\nThis entropy pair gives rise to a stability framework via the\nso-called relative entropy, which we will exploit in this work. The\nrelative entropy framework for the NSF equations has received a lot of\ninterest in recent years, \\eg \\cite{FJN12,LT13}. 
It enables us to\nstudy modelling errors for inviscid approximations of\nviscous, compressible flows in an a posteriori fashion.\nIn the case $n=1$ and general $d$, which\nis the scalar case, we are able to prove rigorous a\nposteriori estimates. All constants are explicitly computable and we\ncall such quantities \\emph{estimators}. When $n>1$ a non-computable\nconstant appears in the argument which we were not able to\ncircumvent. The resulting a posteriori bounds are called\n\\emph{indicators} in the sequel.\nBased on this a posteriori estimator\/indicator we\nconstruct a model adaptive algorithm which we have implemented for\nmodel problems.\n\nThere are many applications, \\eg aeroacoustics \\cite{USDM06}, where it\nwould be desirable to consider the pair of models consisting of NSF\nand the {\\it linearised} Euler equations for computational efficiency.\nWe are currently unable to deal with this setting since our analysis\nrelies heavily on \\eqref{cx} and \\eqref{sim} having the same entropy\nfunctional, while the linearised Euler system has a different entropy.\n\n\n\nThe outline of this paper is as follows: In \\S \\ref{sec:mees} we\npresent the general framework of designing a posteriori estimators\nbased on the study of the abstract equations (\\ref{cx})--(\\ref{sim})\nin the scalar case. In \\S \\ref{sec:meesy} we describe how this\nframework can be extended to the specific example of the NSF\/Euler\nproblem through the study of generic systems of the form (\\ref{cx})\nand (\\ref{sim}). A posteriori analysis is always based on the\nstability framework of the underlying problem. For the class of\nproblems considered here we make use of the relative entropy\nframework. For dG spatial approximations of these operators the a\nposteriori argument requires a \\emph{reconstruction} approach. This\nis because the discrete solution itself is not smooth enough to be used\nin the relative entropy stability analysis. 
In \\S \\ref{sec:recon} we\ngive an overview of the reconstruction approach presented in\n\\cite{GMP_15,DG_15}. We also take the opportunity to extend the\noperators to 2 spatial dimensions for use in the numerical experiments\npresented. Finally, we conclude the presentation with \\S \\ref{sec:num}\nwhere we summarise extensive numerical experiments aimed at testing\nthe performance of the indicator as a basis for model adaptivity.\n\n\\section{Estimating the modelling error for generic reconstructions - the scalar case}\\label{sec:mees}\nOur goal in this section is to show how the entropic structure can be used to obtain estimates for the difference between a (sufficiently regular) solution to \\eqref{cx} and\n the solution $\\vec v_h$ of a numerical scheme which is a discretisation of \\eqref{sim} on part of the (space-time) domain and \nof \\eqref{cx} everywhere else.\nWe will pay particular attention to the {\\it model adaptation error}, \\ie the part of the error which is due to approximating not \\eqref{cx} but \\eqref{sim} on part of the domain.\nIn this section we present the arguments in the scalar case, where all arguments can be given in a tangible way, see Section \\ref{subs:sca}.\n\n\\subsection{Relative Entropy}\nBefore treating the scalar case we will review the classical concept of entropy\/entropy flux pair which will be used in this Section and in Section \\ref{sec:meesy}.\nWe will show how it can be employed in the relative entropy stability framework.\nWe will also show explicitly that the relative entropy in the NSF model satisfies the conditions which are necessary for our analysis.\nThroughout this exposition we will use $d$ to denote the spatial dimension of the problem and $n$ as the number of equations in the system.\n\n\\begin{definition}[Entropy pair]\nWe call a tuple $(\\eta,{\\bf q}) \\in \\cont{1}(U,\\rR) \\times C^1(U,\\rR^d)$ an entropy pair to \\eqref{cx}\n provided the following compatibility conditions are 
satisfied:\n\\begin{equation}\\label{cc1}\n \\D \\eta \\D \\vec f_\\alpha = \\D q_\\alpha \\quad \\text{for } \\alpha = 1,\\dots, d \\end{equation}\n and\n \\begin{equation}\\label{cc2}\n \\partial_{x_\\alpha} (\\D \\eta (\\vec y)) : \\vec g_\\alpha(\\vec y, \\nabla \\vec y) \\geq 0 \\quad \\text{ for any } \\vec y \\in \\cont{1}(\\rT^d,U) \\text{ and } \\alpha =1,\\dots,d\n \\end{equation}\nwhere $\\D$ denotes the Jacobian of functions defined on $U$ (the state space).\nWe call an entropy pair strictly convex if $\\eta$ is strictly convex.\n\\end{definition}\n\n\\begin{remark}[Entropy equality]\n Note that every solution $\\vec u \\in \\sobh{1}(\\rT^d \\times (0,T), U)$ of \\eqref{cx} satisfies the additional companion balance law\n \\[\\partial_t \\eta(\\vec u) + \\sum_\\alpha \\partial_{x_\\alpha} \\Big( q_\\alpha (\\vec u) - \\varepsilon \\vec g_\\alpha(\\vec u, \\nabla \\vec u) \\D \\eta(\\vec u)\\Big) =\n - \\varepsilon \\sum_\\alpha \\vec g_\\alpha (\\vec u, \\nabla \\vec u)\\partial_{x_\\alpha} \\D \\eta(\\vec u).\\]\n We refer to $\\sum_\\alpha \\vec g_\\alpha (\\vec u, \\nabla \\vec u)\\partial_{x_\\alpha} \\D \\eta(\\vec u)$ as entropy dissipation.\n\\end{remark}\n\n\n\\begin{Rem}[Entropic structure]\nWe restrict our attention to systems \\eqref{cx} which are endowed with at least one strictly convex entropy pair.\nSuch a convex entropy pair gives rise to a stability theory based on the relative entropy which we recall in Definition \\ref{def:red}.\nWe will make the additional assumption that $\\eta \\in \\cont{3}(U,\\rR).$\nWhile this last assumption is not standard in relative entropy estimates, it does not exclude any important cases.\n\\end{Rem}\n\n\\begin{remark}[Commutation property]\n Note that the existence of $q_\\alpha$ means that for each $\\alpha$ the vector field $\\vec u \\mapsto \\D \\eta(\\vec u) \\D \\vec f_\\alpha(\\vec u)$ has a potential and, thus,\n gives rise to the following commutativity property:\n 
\\begin{equation}\\label{eq:dfdeta}\n \\Transpose{(\\D \\vec f_\\alpha)} \\D^2 \\eta = \\D^2 \\eta \\D \\vec f_\\alpha \\quad \\text{ for } \\alpha=1,\\dots, d.\n \\end{equation}\n\\end{remark}\n\n\n\\begin{definition}[Relative entropy]\\label{def:red}\n Let \\eqref{cx} be endowed with an entropy pair $(\\eta, \\vec q).$ Then the relative entropy and relative entropy flux between the states\n $\\vec u,\\vec v \\in U$ are defined by\n \\begin{equation}\\label{red}\n \\begin{split}\n \\eta(\\vec u|\\vec v) &:= \\eta(\\vec u) - \\eta(\\vec v) - \\D \\eta(\\vec v) (\\vec u - \\vec v)\\\\\n \\vec q(\\vec u|\\vec v) &:= \\vec q(\\vec u) - \\vec q(\\vec v) - \\D \\eta(\\vec v) (\\vec f(\\vec u) - \\vec f(\\vec v)).\n \\end{split}\n \\end{equation}\n\\end{definition}\n\n\n\n\n\n\n\\begin{Hyp}[Existence of reconstruction]\\label{hyp:recon}\n In the remainder of this section we will assume existence of a reconstruction $\\widehat{\\vec v} $ of a numerical solution $\\vec v_h$ which weakly\nsolves\n\\begin{equation}\\label{interm} \\partial_t \\widehat{\\vec v} + \\div \\vec f(\\widehat{\\vec v}) = \\div \\qp{ \\widehat \\varepsilon \\vec g(\\widehat{\\vec v}, \\nabla \\widehat{\\vec v})} + \\cR_H + \\cR_P\\end{equation}\nwith explicitly computable residuals $\\cR_H,\\cR_P$ and $\\widehat \\varepsilon: \\rT^d \\times [0,T] \\rightarrow [0,\\varepsilon]$ being a function \nwhich is a consequence of the model adaptation procedure and determines in which part of the space-time domain which model is discretised.\nWe assume that the residual can be split into a part denoted $\\cR_H \\in \\leb{2}(\\rT^d \\times (0,T), \\rR^n)$ and a part proportional \nto $\\widehat \\varepsilon,$ denoted $\\cR_P,$ which is an element of $ \\leb{2}(0,T;\\sobh{-1}(\\rT^d, \\rR^n)).$ \n\nWe also assume that an explicitly computable bound for $\\norm{\\widehat{\\vec v} - \\vec v_h}$ is available.\n\\end{Hyp}\n\n\\begin{Rem}[Reconstructions]\n We present some reconstructions in Section \\ref{sec:recon} 
and point out that different choices of reconstructions will lead to different behaviours of the residuals\n$\\cR_H,\\cR_P$ with respect to the mesh width $h.$ However, our main interest in the work at hand is a rigorous \nestimation of the modelling error, and not the derivation of optimal order discretisation error estimates,\nwhich is a challenging task.\n\\end{Rem}\n\n\n\n\nThroughout this paper we will make the following assumption on the exact solution.\n\\begin{Hyp}[Values in a compact set]\n We assume that we have {\\it a priori} knowledge of a pair $(T,\\mathfrak{O}),$\n where $T>0$ and $\\mathfrak{O} \\subset U$ is compact and convex, such that the exact solution $\\vec u$ of \\eqref{cx} takes values in \n $\\mathfrak{O}$ up to time $T,$ \\ie \n\\[\\vec u({\\vec x},t) \\in \\mathfrak{O} \\quad \\forall \\ ({\\vec x},t) \\in \\rT^d \\times [0,T].\\]\n\\end{Hyp}\n\n\\subsection{A posteriori analysis of the scalar case}\\label{subs:sca}\nIn the scalar setting, \\ie $n=1,$ we restrict ourselves to the ``complex'' model being given by \n\\begin{equation}\\label{vb}\n \\partial_t u + \\div \\qp{\\vec f(u)} = \\div (\\varepsilon \\nabla u) \n\\end{equation}\nand the simple model by \n\\begin{equation}\\label{ivb}\n \\partial_t u + \\div \\qp{\\vec f(u)} = 0.\n\\end{equation}\nTherefore, our model adaptive algorithm can be viewed as a numerical discretisation of\n\\begin{equation}\\label{intb}\n \\partial_t u + \\div \\qp{\\vec f(u)} = \\div (\\widehat \\varepsilon \\nabla u) \n\\end{equation}\nwith a space and time dependent function $\\widehat \\varepsilon$ taking values in $[0,\\varepsilon].$\nThe spatial distribution of $\\widehat \\varepsilon$ determines which model is solved on which part of the domain.\nThe reconstruction $\\widehat{v}$ of the numerical solution is a Lipschitz continuous weak solution to the perturbed problem\n\\begin{equation}\\label{vb_R}\n \\partial_t \\widehat{v} + \\div \\qp{\\vec f(\\widehat{v})} = \\div (\\widehat \\varepsilon 
\\nabla \\widehat{v}) + \\cR_H + \\cR_P,\n\\end{equation}\nwhere $\\widehat \\varepsilon$ is a space dependent function with values in $[0,\\varepsilon],$ $\\cR_H$ is the ``hyperbolic'' part of the discretisation residual\nwhich is in $\\leb{2}(\\rT^d \\times (0,T),\\rR^n)$, and \n$\\cR_P$ is the ``parabolic'' part of the discretisation residual.\nNote that $\\cR_P$ is not in $\\leb{2}(\\rT^d \\times (0,T),\\rR^n)$ but in $\\leb{2}(0,T;\\sobh{-1}(\\rT^d,\\rR^n))$ and that $\\Norm{\\cR_P}_{\\leb{2}(\\sobh{-1})}$ is proportional to $\\widehat \\varepsilon.$\nSee Hypothesis \\ref{hyp:recon} for our general assumption and \\eqref{eq:dres} for such a splitting in case of a specific reconstruction.\n\nIn what follows we will use the relative entropy stability framework to derive a bound for the difference between the solution $u$\nof \\eqref{vb} and the reconstruction $\\widehat{v}$ of the numerical solution.\nIn the scalar case every strictly convex $\\eta \\in \\cont{2}(U,\\rR)$ is an entropy for \\eqref{vb}, because to each such $\\eta$ a consistent entropy flux may be defined by \n\\begin{equation}\n q_\\alpha (u):= \\int^u \\eta'(v) f_\\alpha'(v) \\d v \\text{ for } \\alpha =1,\\dots, d\n\\end{equation}\nand the compatibility with the diffusive term boils down to\n\\begin{equation}\n - (\\partial_{x_\\alpha } y ) \\eta''(y) (\\partial_{x_\\alpha } y ) \\leq 0 \\quad \\text{ for all } y \\in \\cont{1}(U,\\rR) \\text{ and } \\alpha =1,\\dots,d,\n\\end{equation}\nwhich is satisfied as a consequence of the convexity of $\\eta.$\n\nWe choose \n$\\eta(u)=\\tfrac{1}{2} u^2$\nas this simplifies the subsequent calculations.\nIn particular,\n\\[ \\eta(u|v)= \\frac{1}{2} ( u- v)^2\\]\nfor all $u, v\\in U.$\n\n\\begin{remark}[Stability framework]\nNote that in the scalar case we might also use Kruzhkov's $\\leb{1}$ stability framework \\cite{Kru70} instead of the relative entropy.\nHowever, the exposition at hand is supposed to serve as a blueprint for what we are going to do 
for systems in Section \\ref{sec:meesy}.\n\\end{remark}\n\n\\begin{Rem}[Bounds on flux]\n Due to the regularity of $\\vec f$ and the compactness of $\\mathfrak{O}$ there exists a\n constant $0 < C_{\\overline{\\vec f}} < \\infty$ such\n that\n \\begin{equation}\n \\label{eq:consts1d}\n\\norm{ \\vec f''( u) } \n \\leq C_{\\overline{\\vec f}} \\quad \\forall u \\in \\mathfrak{O} , \n \\end{equation}\n where $\\norm{\\cdot}$ is the Euclidean norm for vectors.\n Note that $C_{\\overline{\\vec f}}$ can be explicitly computed from $\\mathfrak{O}$ and $\\vec f.$\n\\end{Rem}\nThe main result of this Section is then the following Theorem:\n\\begin{The}[a posteriori modelling error control]\\label{the:1}\n Let \n $u \\in \\sobh{1}(\\rT^d \\times (0,T),\\rR)$ be a solution to \\eqref{vb} and let $\\widehat{v} \\in\\sob{1}{\\infty}(\\rT^d\\times (0,T),\\rR)$ solve \\eqref{vb_R}. \n Then, for almost all $t \\in (0,T)$ the following bound for the difference between $u$ and $\\widehat{v}$ holds:\n \\begin{multline}\\label{eq:the1}\n \\Norm{ u(\\cdot,t) - \\widehat{v} (\\cdot,t) }_{\\leb{2}(\\rT^d)}^2 + \\int_0^t \\varepsilon \\norm{u(\\cdot,s) - \\widehat{v} (\\cdot,s)}_{\\sobh{1}(\\rT^d)}^2 \\d s \n \\\\ \\leq\n \\Big( \\Norm{ u(\\cdot,0) - \\widehat{v} (\\cdot,0) }_{\\leb{2}(\\rT^d)}^2 + \\cE_M + \\cE_D \n \\Big)\\\\ \n \\times \\exp\\big( (\\Norm{ \\nabla \\widehat{v}}_{\\leb{\\infty}(\\rT^d\\times (0,t))} C_{\\overline{\\vec f}} + 1) t\\big),\n \\end{multline}\n with\n \\begin{equation}\n \\begin{split}\n \\cE_M :=& \\Norm{\\qp{\\varepsilon - \\widehat \\varepsilon}^{1\/2} \\nabla \\widehat{v}}_{\\leb{2}(\\rT^d \\times (0,t))}^2,\\\\\n \\cE_D :=& \\Norm{\\cR_H}_{\\leb{2}(\\rT^d \\times (0,t))}^2 +\\frac{1}{\\varepsilon} \\Norm{\\cR_P}_{\\leb{2}( 0,t; \\sobh{-1}(\\rT^d) )}^2.\n \\end{split}\n \\end{equation}\n\n\\end{The}\n\n\\begin{remark}[Dependence of the estimator on $\\varepsilon.$]\nWe expect that the modelling residual part of the estimator in \\eqref{eq:the1}, 
$\\cE_M$,\nbecomes small in large parts of the computational domain in case $\\varepsilon$ is sufficiently small, even if $\\widehat \\varepsilon$ vanishes everywhere.\nThis means that \\eqref{sim} is a reasonable approximation of \\eqref{cx}.\nIt should be\nnoted that $\\Norm{\\cR_P}_{\\leb{2}( 0,t; \\sobh{-1}(\\rT^d) )} \\sim \\cO(\\varepsilon)$ (as $\\widehat \\varepsilon$ only takes the values $0$ and $\\varepsilon$),\ntherefore we expect $\\frac{1}{\\varepsilon} \\Norm{\\cR_P}_{\\leb{2}( 0,t; \\sobh{-1}(\\rT^d) )}^2 \\sim \\cO(\\varepsilon).$ \n\\end{remark}\n\n\\begin{proof}[Proof of Theorem \\ref{the:1}]\n Testing \\eqref{vb} and \\eqref{vb_R} with $(u - \\widehat{v})$ and subtracting both equations we obtain\n \\begin{multline}\\label{eq:re1}\n \\int_{\\rT^d}\\frac{1}{2} \\pdt ((u- \\widehat{v})^2) - \\nabla \\widehat{v} \\big(\\vec f(u) - \\vec f(\\widehat{v}) - \\vec f'(\\widehat{v})(u - \\widehat{v}) \\big) + \\varepsilon | \\nabla (u - \\widehat{v})|^2 \\\\\n =\\int_{\\rT^d} (\\widehat \\varepsilon - \\varepsilon) \\nabla \\widehat{v} \\cdot \\nabla (u - \\widehat{v}) + \\cR_H (u - \\widehat{v}) + \\cR_P (u - \\widehat{v}),\n \\end{multline}\nwhere we have used that $\\rT^d$ has no boundary and \n\\begin{equation}\n \\div (\\vec q(u|\\widehat{v})) - \\nabla \\widehat{v} \\big(\\vec f(u) - \\vec f(\\widehat{v}) - \\vec f'(\\widehat{v})(u - \\widehat{v}) \\big) = \\div( \\vec f(u) - \\vec f(\\widehat{v})) (u - \\widehat{v}).\n\\end{equation}\nApplying Young's inequality in \\eqref{eq:re1} we obtain\n \\begin{multline}\\label{eq:re2}\n \\d_t \\left(\\frac{1}{2} \\Norm{u- \\widehat{v}}_{\\leb{2}(\\rT^d)}^2\\right) + \\varepsilon \\norm{ (u - \\widehat{v})}_{\\sobh{1}(\\rT^d)}^2 \\\\\n \\leq \\Norm{\\qp{\\varepsilon - \\widehat \\varepsilon}^{1\/2} \\nabla \\widehat{v}}_{\\leb{2}(\\rT^d)}^2 + \\frac{\\varepsilon}{4} \\norm{u - \\widehat{v}}_{\\sobh{1}(\\rT^d)}^2 + \\Norm{\\cR_H}_{\\leb{2}(\\rT^d)}^2 + \\Norm{u - \\widehat{v}}_{\\leb{2}(\\rT^d)}^2\\\\\n 
+\\frac{1}{\\varepsilon} \\Norm{\\cR_P}_{\\sobh{-1}(\\rT^d)}^2 + \\frac{\\varepsilon}{4} \\norm{u - \\widehat{v}}_{\\sobh{1}(\\rT^d)}^2 + \\norm{ \\widehat{v}}_{\\sob{1}{\\infty}(\\rT^d)} C_{\\overline{\\vec f}} \\Norm{u - \\widehat{v}}_{\\leb{2}(\\rT^d)}^2 .\n \\end{multline}\n Several terms in \\eqref{eq:re2} cancel each other and we obtain\n \\begin{multline}\\label{eq:re3}\n \\d_t \\left(\\frac{1}{2} \\Norm{u- \\widehat{v}}_{\\leb{2}(\\rT^d)}^2\\right) +\\frac{ \\varepsilon}{2} \\norm{ (u - \\widehat{v})}_{\\sobh{1}(\\rT^d)}^2\\\\\n \\leq \\Norm{\\qp{\\varepsilon - \\widehat \\varepsilon}^{1\/2} \\nabla \\widehat{v}}_{\\leb{2}(\\rT^d)}^2 + \\Norm{\\cR_H}_{\\leb{2}(\\rT^d)}^2 \\\\\n +\\frac{1}{\\varepsilon} \\Norm{\\cR_P}_{\\sobh{-1}(\\rT^d)}^2 + (\\norm{ \\widehat{v}}_{\\sob{1}{\\infty}(\\rT^d)} C_{\\overline{\\vec f}} + 1) \\Norm{u - \\widehat{v}}_{\\leb{2}(\\rT^d)}^2 .\n \\end{multline}\n Integrating \\eqref{eq:re3} in time implies the assertion of the theorem.\n\\end{proof}\n\n\n\\section{Estimating the modelling error for generic reconstructions -- the systems case}\\label{sec:meesy}\n\nIn this section we set up a more abstract framework allowing for the analysis of systems of equations. This framework will be built in such a way that it is applicable to two special cases motivated by compressible fluid dynamics, see Remarks \\ref{rem:vhins} and \\ref{rem:vhnsf}. 
We will make additional assumptions which are not as classical as the existence of a strictly convex entropy.\nThey essentially impose a compatibility of the dissipative flux $\\vec g$ and the parabolic part of the residual $\\cR_P$\n with the {\\it relative} entropy.\n For clarity of exposition we do not explicitly track the constants, but rather denote a generic constant $C$ which may depend on $\\mathfrak{O},\\vec f, \\vec g, \\eta$ but is independent of mesh size and solution.\n \n \\begin{remark}[Guiding examples]\\label{rem:gex}\n Note that in this Remark and in Remarks \\ref{rem:vhins} and \\ref{rem:vhnsf} the notation differs from that in the rest of the paper so that the physical quantities \n under consideration are denoted as is standard in the fluid mechanics literature.\n The hypotheses we will make are such that they are satisfied for, at least, two specific cases of great practical relevance.\n The first case is the isothermal Navier-Stokes system (INS), where we replace the Navier-Stokes stress by $\\nabla \\mathbf v$ for simplicity,\n which reads:\n \\begin{equation}\\label{isoNS}\n \\begin{split}\n \\partial_t \\rho + \\div(\\rho \\mathbf v) &=0\\\\\n \\partial_t (\\rho \\mathbf v) + \\div(\\rho \\mathbf v \\otimes \\mathbf v) + \\nabla (p(\\rho)) &= \\div (\\mu \\nabla \\mathbf v) \n \\end{split}\n \\end{equation}\n where $\\rho$ denotes the density, $\\mathbf v$ the velocity and $p=p(\\rho)$ the pressure, given by a constitutive relation as a monotone function of the density,\nand $\\mu\\geq0$ is the viscosity parameter.\n In this case the simple model is given by the isothermal Euler equations, which are obtained from \\eqref{isoNS} by setting $\\mu=0.$\n For these models the vector of conserved quantities is $\\vec u=\\Transpose{(\\rho, \\rho \\mathbf v)}$ and the mathematical entropy is given by\n \\[ \\eta(\\rho, \\rho \\mathbf v) = W(\\rho) + \\frac{|\\rho \\mathbf v|^2 }{2\\rho},\\]\nwhere the Helmholtz energy $W$ and the pressure $p$ are related by the 
Gibbs-Duhem relation\n\\[ p'(\\rho)= \\rho W''(\\rho).\\]\\medskip\nThe second case is given by the Navier-Stokes-Fourier (NSF) equations for an ideal gas, where we again replace the Navier-Stokes stress by $\\nabla \\mathbf v:$\n \\begin{equation}\\label{NSF}\n \\begin{split}\n \\partial_t \\rho + \\div(\\rho \\mathbf v) &=0\\\\\n \\partial_t (\\rho \\mathbf v) + \\div(\\rho \\mathbf v \\otimes \\mathbf v) + \\nabla (p(\\rho,\\epsilon)) &= \\div (\\mu \\nabla \\mathbf v) \\\\\n \\partial_t e + \\div((e+ p)\\mathbf v ) &= \\div (\\mu(\\nabla \\mathbf v) \\cdot \\mathbf v + \\kappa \\nabla T),\n \\end{split}\n \\end{equation}\n where we understand $(\\nabla \\mathbf v) \\cdot \\mathbf v$ by $((\\nabla \\mathbf v) \\cdot \\mathbf v)_\\alpha =\\sum_\\beta v_\\beta \\partial_{x_\\alpha } v_\\beta.$\n The variables and parameters which appear in \\eqref{NSF} and did not appear before are the temperature $T,$ the specific internal energy $\\epsilon,$\n and the heat conductivity $\\kappa >0$.\n For an ideal gas it holds that\n \\begin{equation}\n \\begin{split}\n e&= \\rho \\epsilon + \\frac{1}{2} \\rho |\\mathbf v|^2,\\\\\n p&=(\\gamma - 1) \\rho \\epsilon = \\rho R T,\n \\end{split}\n \\end{equation}\n where $R$ is the specific gas constant and $\\gamma$ is the adiabatic coefficient.\n For air, $R=287\\, \\mathrm{J\\,kg^{-1}\\,K^{-1}}$ and $\\gamma=1.4.$\nIn this case the simple model is given by the Euler equations, which are obtained from \\eqref{NSF} by setting $\\mu, \\kappa =0.$\nThe vector of conserved quantities is $\\mathbf u=\\Transpose{(\\rho, \\rho \\mathbf v,e)}$ and the (mathematical) entropy is given by \n\\[ \\eta(\\mathbf u)= - \\rho \\ln \\left( \\frac{p}{\\rho^\\gamma}\\right).\\]\nWe will impose in both cases that the state space $\\mathfrak{O}$ enforces positive densities.\n \\end{remark}\n \n\n \\begin{Hyp}[Compatibilities]\\label{Hyp:new}\n We impose that we can find a function $\\cD : U \\times \\rR^{n \\times d} \\times U \\times \\rR^{n \\times d} \\rightarrow [0,\\infty)$ \nand a constant $k>0$ 
such that for all $\\vec w , \\widetilde{ \\vec w} \\in \\sob{1}{\\infty}(\\rT^d)$ taking values in $\\mathfrak{O}$ the following holds\n \\begin{multline}\\label{ca1}\n\\sum_\\alpha ( \\vec g_\\alpha(\\vec w, \\nabla \\vec w) - \\vec g_\\alpha(\\widetilde{ \\vec w}, \\nabla \\widetilde{ \\vec w}) ): \\partial_{x_\\alpha} (\\D \\eta(\\vec w) - \\D \\eta(\\widetilde {\\vec w}))\\\\ \\geq\n\\frac{1}{k} \\cD ( \\vec w, \\nabla \\vec w, \\widetilde {\\vec w}, \\nabla \\widetilde {\\vec w}) - k (\\norm{\\vec w}_{\\sob{1}{\\infty}}^2 + \\norm{\\widetilde {\\vec w}}_{\\sob{1}{\\infty}}^2) \\eta(\\vec w| \\widetilde {\\vec w})\n \\end{multline}\n and \n \\begin{multline}\\label{ca2}\n \\norm{ \\nabla(\\D \\eta(\\vec w) - \\D \\eta(\\widetilde {\\vec w}) ) \\cdot \\vec g(\\widetilde {\\vec w}, \\nabla \\widetilde {\\vec w}) }\\\\\n \\leq\n k^2 (\\norm{\\vec w}_{\\sob{1}{\\infty}}^2 + \\norm{\\widetilde {\\vec w}}_{\\sob{1}{\\infty}}^2 +1) \\eta(\\vec w| \\widetilde {\\vec w})+ \\frac{1}{2k} \\cD(\\vec w, \\nabla \\vec w, \\widetilde{\\vec w}, \\nabla \\widetilde{\\vec w}) \n + k^2 \\sum_\\alpha \\vec g_\\alpha (\\widetilde{\\vec w}, \\nabla \\widetilde{\\vec w}) \\partial_{x_\\alpha} \\D \\eta(\\widetilde {\\vec w})\n \\end{multline}\n and \n \\begin{multline}\\label{rp:comp}\n \\int_{\\rT^d} \\cR_P (\\D \\eta (\\vec w) - \\D \\eta(\\widetilde {\\vec w}) ) \\\\\n \\leq k \\Norm{\\cR_P}_{\\sobh{-1}(\\rT^d)} \\left((\\norm{\\vec w}_{\\sob{1}{\\infty}} \n + \\norm{\\widetilde {\\vec w}}_{\\sob{1}{\\infty}} +1) \\sqrt{\\eta( \\vec w | \\widetilde {\\vec w} ) }\n + \\sqrt{\\cD ( \\vec w, \\nabla \\vec w, \\widetilde {\\vec w}, \\nabla \\widetilde {\\vec w}) }\\right).\n \\end{multline}\n \\end{Hyp}\n\n \\begin{remark}[Validity of Hypothesis \\ref{Hyp:new} for INS]\\label{rem:vhins}\n In case of the isothermal Navier-Stokes equations \\eqref{isoNS} the diffusive fluxes are given by\n \\begin{equation}\n \\label{bdd1}\n \\vec g_\\alpha = \\Transpose{(0, \\partial_{x_\\alpha} 
\\mathbf v)}\n \\end{equation}\n so that, in order to understand the relations in Hypothesis \\ref{Hyp:new}, it is sufficient to \n compute\n \\begin{equation}\n \\label{bdd2}\n \\frac{\\partial \\eta}{\\partial (\\rho \\mathbf v) } = \\frac{\\rho \\mathbf v}{\\rho} = \\mathbf v.\n \\end{equation}\n Thus, in this case the entropy dissipation is given by\n \\[ \\mu \\sum_\\alpha \\vec g_\\alpha (\\vec w, \\nabla \\vec w) \\partial_{x_\\alpha} \\D \\eta(\\vec w)= \\mu \\norm{ \\nabla \\mathbf v}^2, \\]\n where $\\norm{\\cdot}$ denotes the Frobenius norm and $\\vec w= \\Transpose{( \\rho, \\rho \\mathbf v)} .$\n With $\\widetilde{\\vec w} = \\Transpose{(\\widetilde \\rho,\\widetilde \\rho \\widetilde \\mathbf v)}$ we define\n \\begin{equation}\n \\cD ( \\vec w, \\nabla \\vec w, \\widetilde {\\vec w}, \\nabla \\widetilde {\\vec w})\\\\\n := \\norm{ \\nabla \\mathbf v - \\nabla \\widetilde \\mathbf v}^2 .\n \\end{equation}\nLet us now verify that we find a constant $k$ such that the assumptions from Hypothesis \\ref{Hyp:new} are valid.\nMaking use of the definitions \\eqref{bdd1} and \\eqref{bdd2} we obtain\n \\begin{multline}\\label{ca1t1}\n ( \\vec g(\\vec w, \\nabla \\vec w) - \\vec g(\\widetilde{ \\vec w}, \\nabla \\widetilde{ \\vec w}) ): \\nabla (\\D \\eta(\\vec w) - \\D \\eta(\\widetilde {\\vec w}))\\\\\n= \\left( \\nabla \\mathbf v - \\nabla \\widetilde \\mathbf v \\right) : (\\nabla \\mathbf v - \\nabla \\widetilde \\mathbf v ) = \\cD ( \\vec w, \\nabla \\vec w, \\widetilde {\\vec w}, \\nabla \\widetilde {\\vec w}),\n\\end{multline}\nso that \\eqref{ca1} is satisfied for any $k \\geq 1.$\nNow, again using the definitions, we find for any $k \\geq 1$\n\\begin{multline}\\label{ca2t1}\n \\norm{ \\nabla(\\D \\eta(\\vec w) - \\D \\eta(\\widetilde {\\vec w}) ) \\cdot \\vec g(\\widetilde {\\vec w}, \\nabla \\widetilde {\\vec w}) }\n = \\norm{ \\nabla (\\mathbf v - \\widetilde \\mathbf v ) : \\nabla \\widetilde \\mathbf v}\\\\\n \\leq \\frac{1}{2k} \\norm{ \\nabla (\\mathbf v - \\widetilde 
\\mathbf v )}^2 + k \\norm{ \\nabla \\widetilde \\mathbf v}^2\n =\n \\frac{1}{2k} \\cD(\\vec w, \\nabla \\vec w, \\widetilde{\\vec w}, \\nabla \\widetilde{\\vec w}) \n + k \\sum_\\alpha \\vec g_\\alpha (\\widetilde{\\vec w}, \\nabla \\widetilde{\\vec w}) \\partial_{x_\\alpha} \\D \\eta(\\widetilde {\\vec w}),\n \\end{multline}\n \\ie \\eqref{ca2} is satisfied for any $k \\geq 1.$\n If a reasonable discretisation of the isothermal Navier-Stokes equations is used there is only a parabolic residual in the momentum balance equations, \\ie\n \\begin{multline}\n \\int_{\\rT^d} \\cR_P (\\D \\eta (\\vec w) - \\D \\eta(\\widetilde {\\vec w}) ) \\leq \\Norm{\\cR_P}_{\\sobh{-1}} \\Norm{\\mathbf v - \\widetilde \\mathbf v }_{\\sobh{1}}\\\\\n \\leq \\Norm{\\cR_P}_{\\sobh{-1}} \\qp{k \\Norm{\\vec w - \\widetilde{\\vec w} }_{\\leb{2}} + \\sqrt{\\cD(\\vec w, \\nabla \\vec w, \\widetilde{\\vec w}, \\nabla \\widetilde{\\vec w})} }\n \\end{multline}\nwhich shows that \\eqref{rp:comp} is satisfied with a constant $k$ depending only on $\\mathfrak{O}$.\n \\end{remark}\n \n \n \\begin{remark}[Validity of Hypothesis \\ref{Hyp:new} for NSF]\\label{rem:vhnsf}\n In case of the Navier-Stokes-Fourier equations there are two parameters $\\mu,\\kappa$ scaling dissipative mechanisms.\n We will identify $\\mu$ with the small parameter in \\eqref{cx} and keep the ratio $\\tfrac{\\kappa}{\\mu}$, which we will treat as a constant, of order $1.$\n Then, \n $\\vec g_\\alpha = \\Transpose{(0, \\partial_{x_\\alpha} \\mathbf v, \\mathbf v \\cdot \\partial_{x_\\alpha} \\mathbf v + \\tfrac{\\kappa}{\\mu} \\partial_{x_\\alpha} T )}.$\n Therefore, it is sufficient to \n compute\n \\begin{equation}\\label{NSF1} \\frac{\\partial \\eta}{\\partial (\\rho \\mathbf v) } = \\frac{\\gamma-1}{R} \\frac{\\mathbf v}{T}; \\quad \\frac{\\partial \\eta}{\\partial e} =- \\frac{\\gamma-1}{R} \\frac{1}{T}\\end{equation}\n for understanding relations in Hypothesis \\ref{Hyp:new}. 
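The derivatives in \\eqref{NSF1} are easy to get wrong by hand; as a sanity check they can be verified numerically by writing $\\eta$ in the conserved variables $(\\rho, \\rho \\mathbf v, e)$. The following sketch (illustrative only; the sample state and step size are arbitrary) compares a central finite difference of $\\eta$ with the closed-form momentum derivative:

```python
import math

gamma, R = 1.4, 287.0  # adiabatic coefficient and specific gas constant (air)

def eta(rho, m, e):
    """Mathematical entropy eta = -rho*log(p/rho^gamma) in conserved
    variables (rho, m, e) with m = rho*v and e = rho*eps + m^2/(2*rho)."""
    eps = (e - m * m / (2.0 * rho)) / rho   # specific internal energy
    p = (gamma - 1.0) * rho * eps           # ideal-gas pressure
    return -rho * math.log(p / rho**gamma)

# Arbitrary admissible sample state (positive density and pressure).
rho, m, e = 1.2, 0.6, 2.5
p = (gamma - 1.0) * (e - m * m / (2.0 * rho))
T = p / (rho * R)

h = 1e-6
fd = (eta(rho, m + h, e) - eta(rho, m - h, e)) / (2.0 * h)  # d eta / d(rho v)
exact = (gamma - 1.0) / R * (m / rho) / T                   # (gamma-1)/R * v/T

assert abs(fd - exact) < 1e-6
```

The analogous check for the derivative with respect to $e$ works the same way.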
\n From equation \\eqref{NSF1} we may compute entropy dissipation:\n \\begin{equation}\n \\mu \\sum_\\alpha \\vec g_\\alpha (\\vec u, \\nabla \\vec u) \\partial_{x_\\alpha} \\D \\eta(\\vec u) \n = \\frac{\\gamma-1}{R} \\frac{\\mu}{T} \\norm{\\nabla \\mathbf v}^2 + \n \\kappa \\frac{\\gamma-1}{R} \\frac{|\\nabla T|^2}{T^2}\n \\end{equation}\nand we define\n \\begin{equation}\n \\cD ( \\vec w, \\nabla \\vec w, \\widetilde {\\vec w}, \\nabla \\widetilde {\\vec w})\\\\\n := \\frac{1}{\\widetilde T} \\norm{ \\nabla \\mathbf v - \\nabla \\widetilde \\mathbf v}^2 + \\frac{\\kappa}{\\mu} \\frac{\\norm{\\nabla T - \\nabla \\widetilde T}^2}{\\widetilde T^2} .\n \\end{equation}\n Note that $\\cD ( \\vec w, \\nabla \\vec w, \\widetilde {\\vec w}, \\nabla \\widetilde {\\vec w})$ is not symmetric in $\\vec w$ and $\\widetilde{\\vec w}.$\nLet us now verify that we find a constant $k$ such that the assumptions from Hypothesis \\ref{Hyp:new} are valid.\nBy inserting the definitions we obtain\n \\begin{multline}\\label{ca1t2}\n ( \\vec g(\\vec w, \\nabla \\vec w) - \\vec g(\\widetilde{ \\vec w}, \\nabla \\widetilde{ \\vec w}) ): \\nabla (\\D \\eta(\\vec w) - \\D \\eta(\\widetilde {\\vec w}))\\\\\n= \\frac{\\gamma-1}{R}\\Big[ \\nabla\\qp{ \\frac{\\mathbf v}{T} - \\frac{\\widetilde \\mathbf v}{T}} : \\nabla (\\mathbf v - \\widetilde \\mathbf v)\n- \\nabla\\qp{ \\frac{1}{T} - \\frac{1}{\\widetilde T}} ( \\nabla \\mathbf v \\cdot \\mathbf v - \\nabla \\widetilde \\mathbf v \\cdot \\widetilde \\mathbf v)\\\\\n- \\frac{\\kappa}{\\mu} \\nabla\\qp{ \\frac{1}{T} - \\frac{1}{\\widetilde T}} \\nabla (T - \\widetilde T)\n\\Big].\n \\end{multline}\n After a lengthy but straightforward computation we arrive at\n \\begin{multline}\\label{ca1t3}\n \\frac{R}{\\gamma-1}( \\vec g(\\vec w, \\nabla \\vec w) - \\vec g(\\widetilde{ \\vec w}, \\nabla \\widetilde{ \\vec w}) ): \\nabla (\\D \\eta(\\vec w) - \\D \\eta(\\widetilde {\\vec w}))\\\\\n= \\frac{1}{\\widetilde T} \\norm{\\nabla (\\mathbf v - 
\\widetilde \\mathbf v)}^2 - \\frac{\\nabla \\widetilde T}{\\widetilde T^2} (\\nabla \\mathbf v - \\nabla \\widetilde \\mathbf v) (\\mathbf v - \\widetilde \\mathbf v) \n\\\\\n+ \\qp{ \\frac{1}{T} - \\frac{1}{\\widetilde T}} \\nabla \\mathbf v : \\nabla (\\mathbf v - \\widetilde \\mathbf v) \n- \\qp{ \\frac{1}{T^2} - \\frac{1}{\\widetilde T^2}}\\nabla T \\nabla \\widetilde \\mathbf v (\\mathbf v - \\widetilde \\mathbf v) \n\\\\\n+ \\frac{\\nabla (T - \\widetilde T)}{\\widetilde T^2} \\nabla \\widetilde \\mathbf v (\\mathbf v - \\widetilde \\mathbf v) \n+ \\frac{\\kappa}{\\mu} \\frac{\\norm{\\nabla (T - \\widetilde T)}^2}{\\widetilde T^2}\n+ \\frac{\\kappa}{\\mu} \\qp{ \\frac{1}{T^2} - \\frac{1}{\\widetilde T^2}}\\nabla T \\nabla (T - \\widetilde T).\n \\end{multline}\n Note that the two summands of $\\cD$ both appear on the right hand side of \\eqref{ca1t3}.\n Applying Young's inequality to the other terms on the right hand side of \\eqref{ca1t3} shows that \\eqref{ca1}\n is true for some $k$ only depending on $\\mathfrak{O}.$\n \n By inserting we find\n \\begin{multline}\\label{ca2t2}\n \\frac{R}{\\gamma-1}\\norm{ \\nabla(\\D \\eta(\\vec w) - \\D \\eta(\\widetilde {\\vec w}) ) \\cdot \\vec g(\\widetilde {\\vec w}, \\nabla \\widetilde {\\vec w}) }\\\\\n \\leq\n \\nabla\\qp{ \\frac{\\mathbf v}{T} - \\frac{\\widetilde \\mathbf v}{T}} \\nabla \\widetilde \\mathbf v - \\nabla\\qp{ \\frac{1}{T} - \\frac{1}{\\widetilde T}} \\nabla \\widetilde \\mathbf v \\cdot\\widetilde \\mathbf v \n - \\frac{\\kappa}{\\mu}\\nabla\\qp{ \\frac{1}{T} - \\frac{1}{\\widetilde T}} \\nabla \\widetilde T.\n \\end{multline}\n We may rewrite this as \n \\begin{multline}\\label{ca2t3}\n \\frac{R}{\\gamma-1}\\norm{ \\nabla(\\D \\eta(\\vec w) - \\D \\eta(\\widetilde {\\vec w}) ) \\cdot \\vec g(\\widetilde {\\vec w}, \\nabla \\widetilde {\\vec w}) }\\\\\n \\leq\n \\frac{\\nabla \\mathbf v - \\nabla \\widetilde \\mathbf v}{T} : \\nabla \\widetilde \\mathbf v - \\frac{\\nabla T}{T^2} \\nabla 
\\widetilde \\mathbf v (\\mathbf v - \\widetilde \\mathbf v) + \\qp{ \\frac{1}{T} - \\frac{1}{\\widetilde T}} \\nabla \\widetilde \\mathbf v : \\nabla \\widetilde \\mathbf v\n \\\\\n + \\frac{\\kappa}{\\mu} \\frac{1}{\\widetilde T^2} \\nabla (T - \\widetilde T) \\cdot \\nabla \\widetilde T + \\frac{\\kappa}{\\mu}\\qp{ \\frac{1}{T^2} - \\frac{1}{\\widetilde T^2}} \\nabla T \\cdot \\nabla \\widetilde T.\n \\end{multline}\n Using Young's inequality we may infer from \\eqref{ca2t3} that \\eqref{ca2}\n is true for some $k$ only depending on $\\mathfrak{O}.$\n \n If a reasonable discretisation of the Navier-Stokes-Fourier equations is used there is no parabolic residual\n in the mass conservation equation, \\ie\n \\begin{multline}\n \\frac{R}{\\gamma-1} \\int_{\\rT^d} \\cR_P (\\D \\eta (\\vec w) - \\D \\eta(\\widetilde {\\vec w}) ) \\leq \\Norm{\\cR_P}_{\\sobh{-1}}\\qp{ \\Norm{\\frac{\\mathbf v}{T} - \\frac{\\widetilde \\mathbf v}{\\widetilde T} }_{\\sobh{1}} \n + \\Norm{\\frac{1}{T} - \\frac{1}{\\widetilde T} }_{\\sobh{1}} }\\\\\n \\leq \\Norm{\\cR_P}_{\\sobh{-1}} \\qp{C ( \\norm{\\vec w}_{\\sob{1}{\\infty}} + \\norm{\\widetilde {\\vec w}}_{\\sob{1}{\\infty}} +1) \\Norm{\\vec w - \\widetilde{\\vec w} }_{\\leb{2}} + \\sqrt{\\cD(\\vec w, \\nabla \\vec w, \\widetilde{\\vec w}, \\nabla \\widetilde{\\vec w})} }\n \\end{multline}\nwhich shows that \\eqref{rp:comp} is satisfied with a constant $k$ depending only on $\\mathfrak{O}$.\n \\end{remark}\n\n\nNow we return to the general setting \\eqref{cx},\\eqref{sim}. 
In particular,\n$\\vec v$ denotes states in $\\mathfrak{O}$ and not fluid velocities.\n\n\\begin{Rem}[Bounds on flux and entropy]\n Due to the regularity of $\\vec f$ and $\\eta,$ and the compactness of $\\mathfrak{O}$ there are\n constants $0 < C_{\\overline{\\vec f}} < \\infty$ and $0< C_{\\underline{\\eta}} < C_{\\overline{\\eta}} < \\infty$ such\n that\n \\begin{equation}\n \\label{eq:consts}\n \\norm{\\Transpose{\\vec v} \\D^2 \\vec f(\\vec u) \\vec v} \n \\leq C_{\\overline{\\vec f}} \\norm{\\vec v}^2, \n \\qquad \n C_{\\underline{\\eta}} \n \\norm{\\vec v}^2\n \\leq \n \\Transpose{\\vec v} \n \\D^2 \\eta(\\vec u)\n \\vec v\n \\leq C_{\\overline{\\eta}} \\norm{\\vec v}^2\n \\Foreach \\vec v \\in \\reals^n, \\vec u \\in \\mathfrak{O},\n \\end{equation}\n and \n \\begin{equation}\\label{eq:constt}\n \\norm{ \\D^3 \\eta (\\vec u)} \\leq C_{\\overline{\\eta}} \\Foreach \\vec u \\in \\mathfrak{O},\n \\end{equation}\n where $\\norm{\\cdot}$ is the Euclidean norm for vectors in \\eqref{eq:consts} and the Frobenius norm for 3-tensors in \\eqref{eq:constt}.\n Note that $C_{\\overline{\\vec f}}$, $C_{\\underline{\\eta}}$ and $C_{\\overline{\\eta}}$ can be explicitly computed from $\\mathfrak{O}$, $\\vec f$ and $\\eta.$\n\\end{Rem}\n\n\nNow we are in a position to state the main result of this section:\n\\begin{The}[A posteriori modelling error control]\\label{the:2}\n Let \n $\\vec u \\in \\sob{1}{\\infty}(\\rT^d \\times (0,T),\\rR^n)$ be a weak solution to \\eqref{cx} and let $\\widehat{\\vec v} \\in\\sob{1}{\\infty}(\\rT^d\\times (0,T),\\rR^n)$ weakly solve \\eqref{interm}.\n Let \\eqref{cx} be\n endowed with a strictly convex entropy pair.\n Let Hypothesis \\ref{Hyp:new} be satisfied and let\n $\\vec u$ and $\\widehat{\\vec v}$ only take values in $\\mathfrak{O}.$\n Then, the following a posteriori error estimate holds:\n \\begin{multline}\\label{eq:the2}\n \\Norm{ \\vec u(\\cdot,t) - \\widehat{\\vec v} (\\cdot,t) }_{\\leb{2}(\\rT^d)}^2 + \\int_{\\rT^d 
 \\times (0,t)} \\frac{\\varepsilon}{4k} \\cD (\\vec u,\\nabla \\vec u, \\widehat{\\vec v} , \\nabla \\widehat{\\vec v} ) \n \\\\ \n \\leq C \\left( \\Norm{ \\vec u(\\cdot,0) - \\widehat{\\vec v} (\\cdot,0) }_{\\leb{2}(\\rT^d)}^2 + \\cE_D + \\cE_M \\right)\\\\\n \\times \\exp\\left( C ( \\Norm{\\vec u}_{\\sob{1}{\\infty}}^2 + \\Norm{\\widehat{\\vec v}}_{\\sob{1}{\\infty}}^2 +1) t \\right)\n \\end{multline}\n with $C,k$ being constants depending on $(\\mathfrak{O}, \\vec f, \\vec g, \\eta)$ and\n\\begin{equation}\\label{def:ce}\n\\begin{split}\n \\cE_M &:= \\Norm{\\widehat \\varepsilon \\vec g(\\widehat{\\vec v}, \\nabla \\widehat{\\vec v})}_{\\leb{2}(\\rT^d \\times (0,t))}^2 +\\int_{\\rT^d \\times (0,t)} (\\varepsilon - \\widehat \\varepsilon) k^2 \\sum_\\alpha \\vec g_\\alpha (\\widehat{\\vec v}, \\nabla \\widehat{\\vec v}) \\partial_{x_\\alpha} \\D \\eta(\\widehat{\\vec v}) ,\\\\\n \\cE_D &:= \\frac{k^2}{\\varepsilon} \\Norm{\\cR_P}_{\\leb{2}(0,t;\\sobh{-1}(\\rT^d))}^2 \n + \\Norm{\\cR_H}_{\\leb{2}(\\rT^d \\times (0,t))}^2 .\n \\end{split}\n\\end{equation}\n\n\\end{The}\n\n\\begin{remark}[Energy dissipation degeneracy]\n It should be noted that the estimate in Theorem \\ref{the:2} contains certain assumptions, \\ie $\\vec u\\in\\sob{1}{\\infty}$, which are not verifiable in an a posteriori fashion. \n In particular, we assume more regularity than can be expected for solutions of systems of the form \\eqref{cx}; see \\cite{FNS11,FJN12} for existence results for systems of this type.\n However, the weak solutions defined in those references are not unique and only weak-strong uniqueness results are available, cf. \\cite{FJN12}.\n Thus, convergent a posteriori error estimators can only be expected if the problem \\eqref{cx} has a more regular solution than \n is guaranteed analytically. 
Note that the corresponding term, \\ie $\\Norm{\\nabla \\vec u}_{\\leb{\\infty}}$, does not appear in the scalar case and is a consequence of the dissipation only being present in the equations for $\\mathbf v$ and $T$ but not in that for $\\rho$. This leads to a form of degeneracy of the energy dissipation governing the underlying system.\n\\end{remark}\n\n\\begin{remark}[Structure of the estimator]\nNote that the first factor in the estimator consists of three parts. The first part is the error in the discretisation and reconstruction of the initial data.\nThe second part $\\cE_D$ is due to the residuals caused by the discretisation error. The third part $\\cE_M$ consists of residuals caused by the model approximation error.\n\\end{remark}\n\n\\begin{remark}[Structure of modelling error residual]\nRecall that we are interested in the case of $\\varepsilon$ being (very) small. In this case the term \n\\[ \\int_{\\rT^d \\times (0,t)} (\\varepsilon - \\widehat \\varepsilon) k^2 \\sum_\\alpha \\vec g_\\alpha (\\widehat{\\vec v}, \\nabla \\widehat{\\vec v}) \\partial_{x_\\alpha} \\D \\eta(\\widehat{\\vec v}) \\]\nis the dominant part of the modelling error residual $\\cE_M$ and, thus, letting $\\widehat \\varepsilon$ be $\\varepsilon$ in larger parts of the domain will usually reduce the\nmodelling error residual.\n\\end{remark}\n\n\n\\begin{proof}[Proof of Theorem \\ref{the:2}]\n We test \\eqref{cx} by $\\D \\eta(\\vec u) - \\D \\eta (\\widehat{\\vec v})$ and \\eqref{interm} by $\\D^2 \\eta(\\widehat{\\vec v}) (\\vec u - \\widehat{\\vec v})$ and subtract both equations.\n By rearranging terms and using \\eqref{eq:dfdeta} we obtain\n \\begin{multline}\\label{remd1}\n \\int_{\\rT^d} \\partial_t \\eta(\\vec u| \\widehat{\\vec v}) + \\sum_\\alpha \\partial_{x_\\alpha} \\vec q_\\alpha (\\vec u, \\widehat{\\vec v}) + \\varepsilon( \\vec g(\\vec u, \\nabla \\vec u) - \\vec g(\\widehat{\\vec v},\\nabla \\widehat{\\vec v}) ): \\nabla (\\D \\eta(\\vec u) - \\D \\eta(\\widehat{\\vec v})) \n 
\\\\ = E_1 + E_2 + E_3,\n \\end{multline}\n with\n \\begin{equation}\\label{remd2}\n \\begin{split}\n E_1 &:= \\int_{\\rT^d}\\widehat \\varepsilon \\vec g(\\widehat{\\vec v}, \\nabla \\widehat{\\vec v}) : \\nabla (\\D^2 \\eta(\\widehat{\\vec v}) (\\vec u - \\widehat{\\vec v}))\n -\\varepsilon \\vec g(\\widehat{\\vec v}, \\nabla \\widehat{\\vec v}) : \\nabla (\\D \\eta(\\vec u) - \\D \\eta(\\widehat{\\vec v})) ,\n \\\\\n E_2 &:=-\\int_{\\rT^d} \\sum_\\alpha \\partial_{x_\\alpha}\\widehat{\\vec v} \\D^2 \\eta(\\widehat{\\vec v}) (\\vec f_\\alpha(\\vec u) - \\vec f_\\alpha (\\widehat{\\vec v}) - \\D \\vec f_\\alpha (\\widehat{\\vec v}) (\\vec u - \\widehat{\\vec v})),\\\\\n E_3 &:= \\int_{\\rT^d} (\\cR_H + \\cR_P) \\D^2 \\eta(\\widehat{\\vec v}) (\\vec u - \\widehat{\\vec v}).\n \\end{split}\n \\end{equation}\nAs $\\rT^d$ does not have a boundary and because of \\eqref{ca1} we obtain\n \\begin{equation}\\label{remd3}\n \\int_{\\rT^d} \\partial_t \\eta(\\vec u| \\widehat{\\vec v}) + \\frac{\\varepsilon}{k} \\cD (\\vec u, \\nabla \\vec u,\\widehat{\\vec v},\\nabla \\widehat{\\vec v}) \n \\leq E_1 + E_2 + E_3 + \\varepsilon k (\\norm{\\vec u}_{\\sob{1}{\\infty}}^2 + \\norm{\\widehat{\\vec v}}_{\\sob{1}{\\infty}}^2)\\int_{\\rT^d} \\eta(\\vec u| \\widehat{\\vec v}).\n \\end{equation}\nWe are now going to derive estimates for the terms $E_1,E_2,E_3$ on the right hand side of \\eqref{remd3}.\nWe may rewrite $E_1$ as\n\\begin{equation}\\label{e1}\n E_1 = E_{11} + E_{12}\n\\end{equation}\nwith\n\\begin{multline}\\label{e11}\n \\abs{E_{11}} :=\\norm{ \\int_{\\rT^d}\\widehat \\varepsilon \\vec g(\\widehat{\\vec v}, \\nabla \\widehat{\\vec v}): \\nabla (\\D^2 \\eta (\\widehat{\\vec v}) (\\vec u - \\widehat{\\vec v} ) \n - \\D \\eta(\\vec u) + \\D \\eta (\\widehat{\\vec v}) ) }\\\\\n \\leq \\Norm{\\widehat \\varepsilon \\vec g(\\widehat{\\vec v}, \\nabla \\widehat{\\vec v}) }_{\\leb{2}}^ 2\n + C_{\\overline{\\eta}} ( \\Norm{\\vec u}_{\\sob{1}{\\infty}}^2 + \\Norm{\\widehat{\\vec 
v}}_{\\sob{1}{\\infty}}^2) \\Norm{\\vec u - \\widehat{\\vec v}}_{\\leb{2}}^2,\n\\end{multline}\nwhere we used \\eqref{eq:constt} and \n\\begin{equation}\\label{e12a}\n E_{12} := \n -\\int_{\\rT^d} (\\varepsilon - \\widehat \\varepsilon) \\vec g(\\widehat{\\vec v} , \\nabla \\widehat{\\vec v}) : \\nabla ( \\D \\eta(\\vec u) - \\D \\eta (\\widehat{\\vec v})).\n \\end{equation}\n Using \\eqref{ca2} we find \n\\begin{multline}\\label{e12} \n| E_{12}| \\leq \n \\varepsilon k^2 (\\norm{\\vec u}_{\\sob{1}{\\infty}}^2 + \\norm{\\widehat{\\vec v}}_{\\sob{1}{\\infty}}^2 +1) \\eta(\\vec u| \\widehat{\\vec v})\\\\\n + \\frac{\\varepsilon}{2k} \\cD(\\vec u, \\nabla \\vec u, \\widehat{\\vec v}, \\nabla \\widehat{\\vec v}) \n + (\\varepsilon - \\widehat \\varepsilon) k^2 \\sum_\\alpha \\vec g_\\alpha (\\widehat{\\vec v},\\nabla \\widehat{\\vec v}) \\partial_{x_\\alpha} \\D \\eta(\\widehat{\\vec v}).\n\\end{multline}\nConcerning $E_2$ we note\n\\begin{equation}\\label{e2}\n|E_2| \\leq C_{\\overline{\\eta}} C_{\\overline{\\vec f}} \\norm{\\widehat{\\vec v}}_{\\sob{1}{\\infty}} \\Norm{\\vec u - \\widehat{\\vec v}}_{\\leb{2}}^2.\n\\end{equation}\nWe decompose $E_3$ into two terms\n\\begin{equation}\\label{e3}\n E_3 = \\int_{\\rT^d} \\cR_H \\D^2 \\eta(\\widehat{\\vec v}) (\\vec u - \\widehat{\\vec v}) + \\cR_P \\D^2 \\eta(\\widehat{\\vec v}) (\\vec u - \\widehat{\\vec v}) =: E_{31} + E_{32}.\n\\end{equation}\nWe have\n\\begin{equation}\\label{e31}\n |E_{31} |\\leq \\Norm{\\cR_H}_{\\leb{2}}^2+ C_{\\overline{\\eta}}^2 \\Norm{\\vec u - \\widehat{\\vec v}}_{\\leb{2}}^2.\n\\end{equation}\nWe rewrite $E_{32}$ as\n\\begin{equation}\n E_{32} = \\int_{\\rT^d} \\cR_P \\left( - \\D \\eta (\\vec u ) + \\D \\eta (\\widehat{\\vec v}) + \\D^2 \\eta(\\widehat{\\vec v}) (\\vec u - \\widehat{\\vec v}) \\right) + \\cR_P \\left( \\D \\eta (\\vec u ) - \\D \\eta (\\widehat{\\vec v})\\right)\n\\end{equation}\nsuch that we get the following estimate\n\\begin{multline}\\label{e32b}\n |E_{32} |\\leq 
\\Norm{\\cR_P}_{\\sobh{-1}(\\rT^d)} \\Norm{\\D \\eta (\\vec u ) - \\D \\eta (\\widehat{\\vec v}) - \\D^2 \\eta(\\widehat{\\vec v}) (\\vec u - \\widehat{\\vec v}) }_{\\sobh{1}(\\rT^d)} \\\\\n + k \\Norm{\\cR_P}_{\\sobh{-1}(\\rT^d)} \\left((\\norm{\\vec u}_{\\sob{1}{\\infty}} + \\norm{\\widehat{\\vec v}}_{\\sob{1}{\\infty}} +1) \\Norm{ \\vec u- \\widehat{\\vec v} }_{\\leb{2}(\\rT^d)} \n + \\sqrt{\\cD ( \\vec u, \\nabla \\vec u, \\widehat{\\vec v}, \\nabla \\widehat{\\vec v}) }\\right)\n\\end{multline}\ndue to \\eqref{rp:comp}. We have \n\\begin{equation}\\label{e32c}\n \\Norm{\\D \\eta (\\vec u ) - \\D \\eta (\\widehat{\\vec v}) - \\D^2 \\eta(\\widehat{\\vec v}) (\\vec u - \\widehat{\\vec v}) }_{\\sobh{1}} \\leq C_{\\overline{\\eta}} ( \\Norm{\\vec u}_{\\sob{1}{\\infty}} + \\Norm{\\widehat{\\vec v}}_{\\sob{1}{\\infty}}) \\Norm{\\vec u- \\widehat{\\vec v}}_{\\leb{2}},\n\\end{equation}\nsuch that \\eqref{e32b} becomes\n\\begin{multline}\\label{e32d}\n | E_{32}| \\leq \\frac{k^2}{\\varepsilon} \\Norm{\\cR_P}_{\\sobh{-1}}^2 + 2 \\varepsilon C_{\\overline{\\eta}}^2 ( \\Norm{\\vec u}_{\\sob{1}{\\infty}}^2 + \\Norm{\\widehat{\\vec v}}_{\\sob{1}{\\infty}}^2 + 1) \\Norm{\\vec u- \\widehat{\\vec v}}_{\\leb{2}}^2\\\\\n + \\frac{\\varepsilon}{4k} \\int_{\\rT^d}\\cD ( \\vec u, \\nabla \\vec u, \\widehat{\\vec v}, \\nabla \\widehat{\\vec v}).\n\\end{multline}\nCombining \\eqref{e31} and \\eqref{e32d} we obtain \n\\begin{multline}\\label{e3f}\n | E_3 | \\leq \\frac{k^2}{\\varepsilon} \\Norm{\\cR_P}_{\\sobh{-1}}^2 + 2 C_{\\overline{\\eta}}^2 (\\varepsilon \\Norm{\\vec u}_{\\sob{1}{\\infty}}^2 + \\varepsilon \\Norm{\\widehat{\\vec v}}_{\\sob{1}{\\infty}}^2 + 2) \\Norm{\\vec u- \\widehat{\\vec v}}_{\\leb{2}}^2\\\\\n + \\Norm{\\cR_H}_{\\leb{2}}^2 + \\frac{\\varepsilon}{4k} \\int_{\\rT^d}\\cD ( \\vec u, \\nabla \\vec u, \\widehat{\\vec v}, \\nabla \\widehat{\\vec v}).\n\\end{multline}\nUpon inserting \\eqref{e11}, \\eqref{e12}, \\eqref{e2} and \\eqref{e3f} into \\eqref{remd3} we obtain 
for $\\varepsilon <1$\n \\begin{multline}\\label{remd4}\n \\int_{\\rT^d} \\partial_t \\eta(\\vec u| \\widehat{\\vec v}) + \\frac{\\varepsilon}{4k} \\cD (\\vec u, \\nabla \\vec u,\\widehat{\\vec v},\\nabla \\widehat{\\vec v}) \\\\\n \\leq \n\\Norm{\\widehat \\varepsilon \\vec g(\\widehat{\\vec v}, \\nabla \\widehat{\\vec v}) }_{\\leb{2}}^ 2\n + C (\\norm{\\vec u}_{\\sob{1}{\\infty}}^2 + \\norm{\\widehat{\\vec v}}_{\\sob{1}{\\infty}}^2 +1) \\int_{\\rT^d} \\eta(\\vec u| \\widehat{\\vec v})\\\\\n + (\\varepsilon - \\widehat \\varepsilon) k^2 \\sum_\\alpha \\vec g_\\alpha (\\widehat{\\vec v}, \\nabla \\widehat{\\vec v}) \\partial_{x_\\alpha} \\D \\eta(\\widehat{\\vec v})\n + \\frac{k^2}{\\varepsilon} \\Norm{\\cR_P}_{\\sobh{-1}}^2\n + \\Norm{\\cR_H}_{\\leb{2}}^2 \n \\end{multline}\n where we have used that $\\Norm{\\vec u - \\widehat{\\vec v}}_{\\leb{2}}^2$ is bounded in terms of the relative entropy.\nUsing Gronwall's Lemma we obtain\n\\begin{multline}\n \\Norm{ \\vec u(\\cdot,t) - \\widehat{\\vec v} (\\cdot,t) }_{\\leb{2}(\\rT^d)}^2 +\n \\frac{\\varepsilon}{4k} \\int_{\\rT^d \\times (0,t)} \\cD (\\vec u,\\nabla \\vec u, \\widehat{\\vec v}, \\nabla \\widehat{\\vec v}) \\d s \\\\\n \\leq C_{\\overline{\\eta}} \\left( \\Norm{ \\vec u(\\cdot,0) - \\widehat{\\vec v} (\\cdot,0) }_{\\leb{2}(\\rT^d)}^2 + \\cE_D + \\cE_M\\right)\\\\\n \\times \\exp\\left( C(\\norm{\\vec u}_{\\sob{1}{\\infty}}^2 + \\norm{\\widehat{\\vec v}}_{\\sob{1}{\\infty}}^2 +1) t \\right)\n\\end{multline}\nwith $\\cE_D, \\cE_M$ defined in \\eqref{def:ce}.\n\\end{proof}\n\n\n\n\\section{Reconstructions}\\label{sec:recon}\nIn Sections \\ref{sec:mees} and \\ref{sec:meesy} we have assumed existence of reconstructions of numerical solutions whose residuals are computable, see Hypothesis \\ref{hyp:recon}. We have also assumed a certain regularity of these reconstructions. 
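Once such a reconstruction and its residuals are available, evaluating, \\eg, the scalar bound \\eqref{eq:the1} is plain arithmetic on computable quantities. A minimal sketch (the residual norms below are hypothetical placeholders, not values from Section \\ref{sec:num}):

```python
import math

def scalar_bound(init_err, E_M, E_D, grad_v_inf, C_f, t):
    """Right-hand side of the scalar a posteriori estimate:
    (||u(0)-v(0)||^2 + E_M + E_D) * exp((||grad v||_inf * C_f + 1) * t).
    Every input is computable from the reconstruction v-hat alone."""
    return (init_err**2 + E_M + E_D) * math.exp((grad_v_inf * C_f + 1.0) * t)

# Hypothetical residual norms for illustration only.
bound = scalar_bound(init_err=1e-3, E_M=1e-4, E_D=5e-5,
                     grad_v_inf=2.0, C_f=1.0, t=0.5)
```

Since each input enters monotonically, shrinking any residual norm immediately tightens the bound.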
In this Section we will describe one way to obtain such reconstructions for semi-(spatially)-discrete dG schemes.\n\nIn previous works reconstructions for dG schemes have mainly been used for deriving a posteriori bounds of {\\it discretisation errors} for hyperbolic problems, cf. \\cite{DG_15,GMP_15,GeorgoulisHallMakridakis:2014}.\nIn these works the main idea is to compare the numerical solution $\\vec v_h$ and the exact solution $\\vec u$ not directly,\nbut to introduce an intermediate quantity, the reconstruction $\\widehat{\\vec v}$ of the numerical solution.\nThis reconstruction must have two crucial properties:\n\\begin{itemize}\n\\item Explicit a posteriori bounds for the difference $\\Norm{\\widehat{\\vec v} -\\vec v_h}_\\cX$ for some appropriate $\\cX$ need to be \n available, and\n\\item the reconstruction $\\widehat{\\vec v}$ needs to be globally smooth enough to apply the appropriate stability theory of the underlying PDE.\n\\end{itemize}\nThese two properties allow the derivation of an a posteriori bound for the difference $\\Norm{\\vec u - \\vec v_h}_\\cX.$\n\nIn the sequel we will provide a methodology for the explicit computation of $\\widehat{\\vec v}$ \\emph{only} from the numerical solution $\\vec v_h$. 
This means trivially that the difference $\\Norm{\\widehat{\\vec v} - \\vec v_h}_\\cX$ can be controlled explicitly.\n\nAs Sections \\ref{sec:mees} and \\ref{sec:meesy} show, the stability theory we advocate is that of the relative entropy, and we have extended the classical approach \nsuch that not only \\emph{discretisation} but also \\emph{modelling errors} are accounted for.\nNote also that for our results from Sections \\ref{sec:mees} and \\ref{sec:meesy} to be applicable we require $\\widehat{\\vec v} \\in \\sob{1}{\\infty}(\\rT^d \\times [0,T]).$\n\nIn this Section we describe how to obtain reconstructions $\\widehat{\\vec v}$ of numerical solutions $\\vec v_h$ \nwhich are obtained by solving \\eqref{cx} on part of the space-time domain and \\eqref{sim} on the rest of the space-time domain.\nFor brevity we will focus on numerical solutions obtained by semi-(spatially)-discrete dG schemes,\nwhich are a frequently used tool for the numerical simulation of models of the forms \\eqref{cx} and \\eqref{sim} alike.\nWe will view $\\vec v_h$ as a discretisation of the ``intermediate'' problem \n\\begin{equation}\n \\label{interm-c}\n \\partial_t \\vec v + \\div \\vec f(\\vec v) = \\div \\qp{ \\widehat \\varepsilon \\vec g(\\vec v, \\nabla \\vec v)},\\end{equation}\nwhere $\\widehat \\varepsilon$ is the model adaptation function, which will be chosen as part of the numerical method.\n\n\\begin{remark}[Alternative types of reconstruction]\nIf \\eqref{cx} were a parabolic problem, there would be a strong argument in favour of using elliptic reconstruction, see \\cite{MN03},\nbut this would make the residuals scale with $\\frac{1}{\\varepsilon}.$ Recall that we are interested in the case of $\\varepsilon$ being small.\nAs important examples, \\eg the Navier-Stokes-Fourier equations, are not parabolic, we will describe a reconstruction approach here\nwhich was developed for semi-discrete dG schemes for hyperbolic problems in one space dimension in \\cite{GMP_15}.\nAn extension 
to fully discrete methods can be found in \\cite{DG_15}.\n\\end{remark}\n\nNote that we state reconstructions in this paper to keep it self-contained and to describe how we proceed in our numerical experiments in Section \\ref{sec:num}.\nIt is, however, beyond the scope of this work to derive optimal reconstructions for \\eqref{cx}, \\eqref{sim}.\nFor all of these problems the derivation of optimal reconstructions of the numerical solution is a problem in its own right.\nNote that in this framework {\\it optimality} of a reconstruction means that the error estimator, which is obtained\nbased on this reconstruction, is of the same order as the (true) error of the numerical\nscheme.\n\nWe will first outline the reconstructions for \\eqref{sim}, proposed in \\cite{GMP_15}, and investigate in which sense they lead to reconstructions of \nnumerical solutions to\n\\eqref{cx} or \\eqref{interm-c}.\nAfterwards we describe how the reconstruction approach can be extended to dG methods on Cartesian meshes in two space dimensions.\nWe choose Cartesian meshes because they lend themselves to an extension of the approach from \\cite{GMP_15}.\nWe are not able to show the optimality of $\\cR_H$ in this case, though.\nFinding suitable (optimal) reconstructions for non-linear hyperbolic systems on unstructured meshes is the topic of ongoing research.\n\n\n\\subsection{A reconstruction approach for dG approximations of hyperbolic conservation laws}\\label{subs:rec:claw}\nIn this section we recall a reconstruction approach for semi-(spatially)-discrete dG schemes for systems of hyperbolic conservation laws \\eqref{sim}\ncomplemented with initial data $\\vec u(\\cdot,0) = \\vec u_0 \\in L^\\infty(\\rT).$\nWe consider the one-dimensional case.
Let $\\cT$ be a set of open intervals such that\n\\begin{equation}\n \\bigcup_{S \\in \\cT} \\bar S = \\rT \\text{ (the 1d torus)}, \\text{ and } \\text{ for all } S_1,S_2 \\in \\cT \\text{ it holds } S_1=S_2 \\text{ or } S_1 \\cap S_2 = \\emptyset.\n\\end{equation}\nBy $\\cE$ we denote the set of interval boundaries.\n\nThe space of piecewise polynomial functions of degree $q \\in \\rN$ is defined by\n\\begin{equation}\n \\fes_q := \\{ \\vec w : \\rT \\rightarrow \\rR^n \\, : \\, \\vec w|_S \\in \\rP_q(S,\\rR^n) \\ \\forall \\, S \\in \\cT\\},\n\\end{equation}\nwhere $ \\rP_q(S,\\rR^n)$ denotes the space of polynomials of degree $\\leq q$ on $S$ with values in $\\rR^n.$\n\nFor defining our scheme we also need jump and average operators which require the definition of a broken Sobolev space:\n\\begin{definition}[Broken Sobolev space]\n The broken Sobolev space $\\sobh{1}(\\cT,\\rR^n)$ is defined by\n \\begin{equation}\n \\sobh{1}(\\cT,\\rR^n) := \\{ \\vec w : \\rT \\rightarrow \\rR^n \\, : \\, \\vec w|_S \\in \\sobh{1}(S,\\rR^n) \\, \\forall \\, S \\in \\cT\\}.\n\\end{equation}\n\\end{definition}\n\n\n\n\\begin{definition}[Traces, jumps and averages]\n For any $\\vec w \\in \\sobh{1}(\\cT,\\rR^n)$ we define \n \\begin{itemize}\n \\item $\\vec w^\\pm : \\cE \\rightarrow \\rR^n $ by $ \\vec w^\\pm (\\cdot):= \\lim_{ s \\searrow 0} \\vec w(\\cdot \\pm s) $ ,\n \\item $\\avg{\\vec w} : \\cE \\rightarrow \\rR^n $ by $ \\avg{\\vec w} = \\frac{\\vec w^- + \\vec w^+}{2},$\n \\item $\\jump{\\vec w} : \\cE \\rightarrow \\rR^n $ by $ \\jump{\\vec w} = \\vec w^- - \\vec w^+.$\n \\end{itemize}\n\n\\end{definition}\n\nNow we are in position to state the numerical schemes under consideration:\n\\begin{definition}[Numerical scheme for \\eqref{sim}]\n The numerical scheme is to seek $\\vec v_h \\in \\cont{1}((0,T), \\fes_q)$ such that:\n\\begin{equation}\\label{eq:sddg}\n\\begin{split}\n &\\vec v_h(0)= \\cP_q [\\vec u_0] \\\\\n &\\int_{\\cT} \\partial_t \\vec v_h \\cdot \\vec \\phi 
- \\vec f(\\vec v_h) \\cdot \\partial_x \\vec \\phi + \\int_{\\cE} \\vec F (\\vec v_h^-,\\vec v_h^+) \\jump{\\vec \\phi} =0 \\quad \\text{for all } \\vec \\phi \\in \\fes_q,\n \\end{split}\n\\end{equation}\nwhere $\\int_{\\cT}$ is an abbreviation for $\\sum_{S \\in \\cT} \\int_S$, $\\cP_q$ denotes $\\leb{2}$-orthogonal projection into $\\fes_q,$\nand $\\vec F: U \\times U \\rightarrow \\mathbb{R}^n$ is a numerical flux function.\nWe impose that the numerical flux function satisfies the following condition:\nThere exist $L>0$ and $\\vec w : U \\times U \\rightarrow U$ such that\n\\begin{equation}\n \\label{eq:repf}\n \\vec F (\\vec u, \\vec v) = \\vec f(\\vec w(\\vec u, \\vec v)) \\quad \\text{ for all } \\vec u, \\vec v \\in U,\n\\end{equation}\nand\n\\begin{equation}\n \\label{eq:condw}\n \\norm{\\vec w(\\vec u, \\vec v) - \\vec u} + \\norm{\\vec w(\\vec u, \\vec v) - \\vec v} \\leq L \\norm{\\vec u - \\vec v} \\quad \\text{ for all } \\vec u, \\vec v \\in \\mathfrak{O}.\n\\end{equation}\n\\end{definition}\n\n\n\\begin{remark}[Conditions on the flux]\nNote that conditions \\eqref{eq:repf} and \\eqref{eq:condw} imply the consistency and local Lipschitz continuity conditions usually imposed on numerical fluxes in the convergence \nanalysis of dG approximations of hyperbolic conservation laws.\nThe conditions do not make the flux monotone nor do they ensure stability of \\eqref{eq:sddg}. 
\nThey do, however, ensure that the right hand side of\n\\eqref{eq:sddg} is Lipschitz continuous and, therefore, \\eqref{eq:sddg} has unique solutions for small times.\nObviously, practical interest is restricted to numerical fluxes leading to reasonable stability properties of \\eqref{eq:sddg} at least as long as the exact solution \nto \\eqref{sim} is Lipschitz continuous.\nFluxes of Richtmyer and Lax-Wendroff type lead to stable numerical schemes (as long as the exact solution is smooth) and \nsatisfy a relaxed version of conditions \\eqref{eq:repf} and \\eqref{eq:condw}.\nIt was shown in \\cite{DG_15} that these relaxed conditions (see \\cite[Rem. 3.6]{DG_15}) are sufficient for obtaining optimal a posteriori error estimates.\n\\end{remark}\n\nLet us now return to the main purpose of this section: the definition of a reconstruction operator.\nIn addition, we present a reconstruction of the numerical flux which will be used for splitting the residual into a parabolic and a hyperbolic part, in Section \\ref{subs:rec:hp}.\nThey are based on information from the numerical scheme:\n\\begin{definition}[Reconstructions]\n For each $t \\in [0,T]$ we define the flux reconstruction $\\widehat{\\vec f}(\\cdot,t) \\in \\fes_{q+1}$ through\n \\begin{equation}\\label{eq:rf1}\n \\begin{split}\n \\int_{\\cT} \\partial_x \\widehat{\\vec f}(\\cdot,t) \\cdot \\vec \\phi &= -\\int_{\\cT} \\vec f(\\vec v_h(\\cdot,t)) \\cdot \\partial_x \\vec \\phi + \\int_{\\cE} \\vec F (\\vec v_h^-(\\cdot,t),\\vec v_h^+(\\cdot,t)) \\jump{\\vec \\phi} \\quad \\text{for all } \\vec \\phi \\in \\fes_q\\\\\n \\widehat{\\vec f}^+(\\cdot,t) &= \\vec F (\\vec v_h^-(\\cdot,t),\\vec v_h^+(\\cdot,t)) \\quad \\text{ on } \\cE.\n \\end{split}\n\\end{equation}\nFor each $t \\in [0,T]$ we define the reconstruction $\\widehat{\\vec v}(\\cdot,t) \\in \\fes_{q+1}$ through\n \\begin{equation}\\label{eq:rf2}\n \\begin{split}\n \\int_{\\rT} \\widehat{\\vec v}(\\cdot,t) \\cdot \\vec \\psi &= \\int_{\\rT} \\vec v_h 
(\\cdot,t) \\cdot \\vec \\psi \\quad \\text{for all } \\vec \\psi \\in \\fes_{q-1}\\\\\n \\widehat{\\vec v}^\\pm(\\cdot,t) &= \\vec w (\\vec v_h^-(\\cdot,t),\\vec v_h^+(\\cdot,t)) \\quad \\text{ on } \\cE.\n \\end{split}\n\\end{equation}\n\\end{definition}\n\n\\begin{remark}[Properties of reconstruction]\n It was shown in \\cite{GMP_15} that these reconstructions are well-defined, explicitly and locally computable, and Lipschitz continuous in space.\n Due to the Lipschitz continuity of $\\vec w$ they are also Lipschitz continuous in time.\n Recall from Sections \\ref{sec:mees} and \\ref{sec:meesy} that the Lipschitz continuity of $\\widehat{\\vec v}$ in space was crucial for our arguments.\n\\end{remark}\n\nDue to the Lipschitz continuity of $\\widehat{\\vec v}$ the discretisation residual is well-defined and satisfies\n\\begin{equation}\\label{eq:hypres}\n \\cR := \\partial_t \\widehat{\\vec v} + \\partial_x \\vec f(\\widehat{\\vec v}) \\in \\leb{\\infty}.\n\\end{equation}\nAt this point the reader might ask why we have defined $\\widehat{\\vec f}$, as it is not present in \\eqref{eq:hypres}\nand is not needed for computing the residual $\\cR$ either.\nWe will use $\\widehat{\\vec f}$ in Section \\ref{subs:rec:hp} to split the residual into a parabolic and a hyperbolic part.\nIn preparation for this, let us note that upon combining \n\\eqref{eq:sddg} and \\eqref{eq:rf1} we obtain\n\\[ \\partial_t \\vec v_h + \\partial_x \\widehat{\\vec f} =0\\]\npointwise almost everywhere.
Thus, we may split the residual as follows:\n\\[ \\cR = \\partial_t ( \\widehat{\\vec v} - \\vec v_h) + \\partial_x (\\vec f(\\widehat{\\vec v}) - \\widehat{\\vec f}).\\]\nThis splitting was used in \\cite{GMP_15} to argue that the residual is of optimal order.\n\n\n\n\\subsection{A reconstruction approach for dG approximations of hyperbolic\/parabolic problems}\\label{subs:rec:hp}\nWe will describe in this Section how the reconstruction methodology described above can be used in case of dG semi- (spatial) discretisations\nof \\eqref{interm-c} in one space dimension following the local dG methodology.\n\n\\begin{definition}[Discrete gradients]\n By $\\partial_x^-, \\partial_x^+ : \\sobh{1}(\\cT,\\rR^m) \\rightarrow \\fes_q$ we denote discrete gradient operators defined through\n\\begin{equation}\n \\int_{\\rT} \\partial_x^\\pm \\vec y \\cdot \\vec \\phi = - \\int_{\\cT} \\vec y \\cdot \\partial_x \\vec \\phi + \\int_{\\cE} \\vec y^\\pm \\jump{\\vec \\phi} \\quad \\text{ for all } \\vec y, \\vec \\phi \\in \\fes_q.\n\\end{equation}\n\\end{definition}\n\\begin{lemma}[Discrete integration by parts]\\label{lem:dibp}\nThe operators $\\partial_x^\\pm$ satisfy the following duality property:\nFor any $\\vec \\phi, \\vec \\psi \\in \\fes_q$ it holds\n\\[ \\int_{\\rT} \\vec \\phi \\partial_x^- \\vec \\psi = - \\int_{\\rT} \\vec \\psi \\partial_x^+ \\vec \\phi .\\]\n\\end{lemma}\nThe proof of Lemma \\ref{lem:dibp} can be found in \\cite{DE12}. 
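To make Lemma \\ref{lem:dibp} concrete, the following minimal sketch (an illustration only, not part of the analysis) checks the duality numerically in the piecewise constant case $q=0$ on a uniform periodic mesh, where one can check that $\\partial_x^-$ and $\\partial_x^+$ reduce to backward and forward difference quotients:

```python
import numpy as np

# For piecewise constant dG functions (q = 0) on a uniform periodic mesh,
# the discrete gradients d_x^- and d_x^+ reduce to backward and forward
# difference quotients.  We verify the duality of Lemma (discrete
# integration by parts):
#   sum_j h * phi_j * (D^- psi)_j  ==  - sum_j h * psi_j * (D^+ phi)_j.

rng = np.random.default_rng(0)
N = 64
h = 1.0 / N

phi = rng.standard_normal(N)
psi = rng.standard_normal(N)

def Dm(y):  # d_x^- : backward difference (periodic wrap-around)
    return (y - np.roll(y, 1)) / h

def Dp(y):  # d_x^+ : forward difference (periodic wrap-around)
    return (np.roll(y, -1) - y) / h

lhs = h * np.sum(phi * Dm(psi))
rhs = -h * np.sum(psi * Dp(phi))
assert abs(lhs - rhs) < 1e-12
```

The identity holds exactly (up to rounding) for any pair of periodic grid functions; it is a discrete summation-by-parts formula.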
\nRewriting \\eqref{interm-c} as \n\\begin{equation}\n \\begin{split}\n \\vec s &= \\partial_x \\vec v\\\\\n \\partial_t \\vec v + \\partial_{x} \\vec f (\\vec v) &= \\partial_{x} (\\widehat \\varepsilon \\vec g(\\vec v , \\vec s)) \n \\end{split}\n\\end{equation}\n motivates the following semi-discrete numerical scheme:\n\\begin{definition}[Numerical scheme]\n The numerical solution $(\\vec v_h, \\vec s_h ) \\in \\qb{\\cont{1}((0,T), \\fes_q)}^2$\n is given as the solution to \n \\begin{equation}\\label{eq:sddg2}\n\\begin{split}\n \\vec v_h(0)&= \\cP_q [\\vec u_0] \\\\\n \\vec s_h &= \\partial_x^- \\vec v_h\\\\\n \\int_{\\cT} \\partial_t \\vec v_h \\cdot \\vec \\phi - \\vec f(\\vec v_h) \\cdot \\partial_x \\vec \\phi + \\widehat \\varepsilon \\vec g(\\vec v_h, \\vec s_h) \\partial_x^- \\vec \\phi + \\int_{\\cE} \\vec F (\\vec v_h^-,\\vec v_h^+) \\jump{\\vec \\phi} \n &=0 \\quad \\text{for all } \\vec \\phi \\in \\fes_q.\n \\end{split}\n\\end{equation}\n\\end{definition}\n\nDefining $\\widehat{\\vec f}$ as in \\eqref{eq:rf1} allows us to rewrite \\eqref{eq:sddg2}$_3$, using Lemma \\ref{lem:dibp}, as\n\\begin{equation}\\label{nssf}\n \\partial_t \\vec v_h + \\partial_x \\widehat{\\vec f} - \\partial_x^+ \\cP_q[ \\widehat \\varepsilon \\vec g(\\vec v_h, \\partial_x^- \\vec v_h)]=0.\n\\end{equation}\n\nDue to the arguments given in Section \\ref{subs:rec:claw} the reconstruction $\\widehat{\\vec v}$ is an element of $ \\sob{1}{\\infty}(\\rT \\times (0,T), \\rR^n)$ such that the following \nresidual makes sense in $\\leb{2}(0,T;\\sobh{-1}(\\rT)):$\n\\begin{equation}\n \\cR := \\partial_t \\widehat{\\vec v} + \\partial_x \\vec f(\\widehat{\\vec v}) - \\partial_x \\qp{ \\widehat \\varepsilon \\vec g(\\widehat{\\vec v}, \\partial_x \\widehat{\\vec v})}.\n\\end{equation}\nUsing \\eqref{nssf} we may rewrite the residual as\n\\begin{equation}\\label{eq:dres}\n \\cR = \\underbrace{\\partial_t (\\widehat{\\vec v} - \\vec v_h) + \\partial_x (\\vec f(\\widehat{\\vec v})- 
\\widehat{\\vec f})}_{=: \\cR_H} + \\underbrace{\\partial_x^+ \\cP_q[ \\widehat \\varepsilon \\vec g(\\vec v_h, \\partial_x^- \\vec v_h)] - \\partial_x (\\widehat \\varepsilon \\vec g(\\widehat{\\vec v}, \\partial_x \\widehat{\\vec v})) }_{=: \\cR_P},\n\\end{equation}\n\\ie we have a decomposition of the residual as assumed in previous Sections, see \\eqref{interm} in particular.\n\n\n\n\\subsection{Extension of the reconstruction to $2$ space dimensions}\n\\label{subs:rec:2d}\nIn this section we present an extension of the reconstruction approach described before to semi-(spatially)-discrete dG schemes for systems of hyperbolic conservation laws \\eqref{sim}\ncomplemented with initial data $\\vec v(\\cdot,0) = \\vec v_0 \\in L^\\infty(\\rT^2)$ using Cartesian meshes in two space dimensions.\nThe extension to Cartesian meshes in more than two dimensions is straightforward.\n\nWe consider a system of hyperbolic conservation laws in two space dimensions\n\\begin{equation}\\label{claw}\n \\partial_t \\vec v + \\partial_{x_1} \\vec f_1(\\vec v) + \\partial_{x_2} \\vec f_2(\\vec v) =0,\n\\end{equation}\nwhere $\\vec f_{1,2} \\in \\cont{2}(U , \\rR^n).$ \n\nWe discretise $\\rT^2$ using partitions \n\\[-1=x_0 < x_1 < \\dots < x_N =1, \\quad -1=y_0 < y_1 < \\dots < y_M =1.\\] \nWe consider a Cartesian mesh $\\cT$ such that each element satisfies $K=[x_i,x_{i+1}] \\times [y_j,y_{j+1}]$ \nfor some $(i,j)\\in \\{0,\\dots,N-1\\} \\times \\{0,\\dots,M-1\\}.$\nFor any $p,q \\in \\rN$ and $K \\in \\cT$ let\n\\[ \\rP_q \\otimes \\rP_p (K) := \\rP_q([x_i,x_{i+1}]) \\otimes \\rP_p ([y_j,y_{j+1}]).\\]\nBy $\\fes_{p,q}$ we denote the space of trial and test functions, \\ie\n\\[ \\fes_{p,q}:= \\{ \\Phi : \\rT^2 \\rightarrow \\rR^m \\, : \\, \\Phi|_K \\in (\\rP_p \\otimes \\rP_q (K))^m \\ \\forall K \\in \\cT\\}.\\]\nNote that our dG space has a tensorial structure on each element.\n\nAs before $\\cE$ denotes the set of all edges, which can be decomposed into the sets of horizontal and 
vertical edges $\\cE^h, \\cE^v,$ respectively.\nLet us define the following jump operators:\nFor $\\Phi \\in \\sobh{1}(\\cT,\\rR^n)$ we define\n\\begin{align*}\n \\jump{\\Phi}^h &: \\cE^v \\rightarrow \\rR^n, \\quad \\jump{\\Phi}^h(\\cdot) := \\lim_{s \\searrow 0} \\Phi (\\cdot - s \\vec e_1) - \\lim_{s \\searrow 0} \\Phi (\\cdot + s\\vec e_1)\\\\\n \\jump{\\Phi}^v & : \\cE^h \\rightarrow \\rR^n, \\quad \\jump{\\Phi}^v(\\cdot) := \\lim_{s \\searrow 0} \\Phi ( \\cdot - s \\vec e_2) - \\lim_{s \\searrow 0} \\Phi (\\cdot + s\\vec e_2).\n\\end{align*}\n\n\nLet $\\vec F_{1,2}$ be numerical flux functions which satisfy conditions \\eqref{eq:repf} and \\eqref{eq:condw} with functions $\\vec w_1, \\vec w_2: U \\times U \\rightarrow U,$\n\\ie\n\\[ \\vec F_i (\\vec u, \\vec v) = \\vec f_i(\\vec w_i(\\vec u, \\vec v)) \\quad \\text{ for all } \\vec u, \\vec v \\in U \\text{ and } i=1,2.\\]\nThen, we consider semi-(spatially)-discrete discontinuous Galerkin schemes given as follows:\nSearch for $\\vec v_h\\in \\cont{1}([0,\\infty) , \\fes_{q,q})$ satisfying\n\\begin{multline}\\label{sch1}\n \\int_{\\T{}} \\partial_t \\vec v_h \\Phi - \\vec f_1(\\vec v_h)\\partial_{x_1} \\Phi - \\vec f_2(\\vec v_h)\\partial_{x_2} \\Phi\\\\\n + \\int_{\\E^v} \\vec F_1(\\vec v_h^-,\\vec v_h^+) \\jump{\\Phi}^h + \\int_{\\E^h} \\vec F_2(\\vec v_h^-,\\vec v_h^+) \\jump{\\Phi}^v =0 \\quad \\forall \\Phi \\in \\fes_{q,q}.\n\\end{multline}\n\n\n\nWhile we have avoided choosing particular bases of our dG spaces we will do so now, as we believe that it makes the presentation of our reconstruction approach more concise.\nWe choose so-called \\emph{nodal basis functions} consisting of Lagrange polynomials, see \\cite{HW08}, and as we use a Cartesian mesh we may use tensor-products of one-dimensional Lagrange polynomials\nto this end.
\nWe associate the Lagrange polynomials with Gauss points, as in this way the nodal basis functions form an orthogonal basis of our dG space $\\fes_{q,q},$ due to\nthe exactness properties of Gauss quadrature, see \\cite[e.g.]{Hin12}.\nWe will introduce some notation now:\nLet $\\{ \\xi_0,\\dots,\\xi_{q}\\} $ denote the Gauss points on $[-1,1].$ \nFor any element $K= [x_i,x_{i+1}] \\times [y_j,y_{j+1}] \\in \\cT$ let $\\{ \\xi_0^{K,1},\\dots,\\xi_{q}^{K,1}\\} $ and $\\{ \\xi_0^{K,2},\\dots,\\xi_{q}^{K,2}\\} $ denote their image\nunder the linear bijections $[-1,1] \\rightarrow [x_i,x_{i+1}]$ and $[-1,1] \\rightarrow [y_j,y_{j+1}].$\nFor $i=1,2$ we denote by $\\mathit{l}^{K,i}_j$ the Lagrange polynomial satisfying $\\mathit{l}^{K,i}_j(\\xi_k^{K,i})=\\delta_{jk}$. \n\n\\begin{definition}[Flux reconstruction]\nLet $\\widehat{\\vec f}_1 \\in \\fes_{q+1,q} $ satisfy\n\\begin{equation}\n \\label{recon1a}\n \\int_{\\T{}} (\\partial_{x_1} \\widehat{\\vec f}_1) \\Phi = - \\int_{\\T{}} \\vec f_1(\\vec v_h)\\partial_{x_1} \\Phi \n + \\int_{\\E^v} \\cP_q[\\vec F_1(\\vec v_h^-,\\vec v_h^+)] \\jump{\\Phi}^h \\quad \\forall \\Phi \\in \\fes_{q,q},\n\\end{equation}\nwhere $\\cP_q$ denotes $\\leb{2}$-orthogonal projection in the space of piece-wise polynomials of degree $\\leq q$ on $\\cE^v,$ and \n\\begin{equation}\n \\label{recon1b} \\widehat{\\vec f}_1(x_i,\\xi^{K,2}_k)^+ := \\lim_{s \\searrow 0} \\widehat{\\vec f}_1(x_i+s,\\xi^{K,2}_k) = \\cP_q[\\vec F_1(\\vec v_h^-,\\vec v_h^+)](x_i,\\xi^{K,2}_k)\n\\end{equation}\nfor $k=0,\\dots,q$ and all $K \\in \\cT.$\nThe definition of $\\widehat{\\vec f}_2 \\in \\fes_{q,q+1} $ is analogous.\n\\end{definition}\n\n\\begin{remark}[Regularity of flux reconstruction]\n Note that in order to split the residual in two space dimensions in a way analogous to what we did in \\eqref{eq:dres} we require that for each $\\alpha=1,\\dots,d$ the components of \n the flux reconstruction\n $\\widehat{\\vec f}_\\alpha $ are Lipschitz continuous in 
$x_\\alpha$-direction.\n This is exactly what is needed such that $\\partial_{x_\\alpha} \\widehat{\\vec f}_\\alpha$ makes sense in $\\leb{\\infty}.$\n\\end{remark}\n\n\\begin{lemma}[Properties of flux reconstruction]\n The flux reconstructions $\\widehat{\\vec f}_1$, $\\widehat{\\vec f}_2$ are well defined; and $\\widehat{\\vec f}_1$ is Lipschitz continuous in $x_1$-direction and $\\widehat{\\vec f}_2$ is Lipschitz continuous in $x_2$-direction.\n\\end{lemma}\n\n\\begin{proof}\nWe will give the proof for $\\widehat{\\vec f}_1$. \nFor every $K \\in \\T{}$ the restriction $\\widehat{\\vec f}_1|_K$ is determined by \\eqref{recon1a} up to a linear combination (in each component) of\n\\[ 1 \\otimes \\mathit{l}^{K,2}_0, \\dots, 1 \\otimes \\mathit{l}^{K,2}_{q},\\]\nwhere $1$ denotes the polynomial having the value $1$ everywhere.\nPrescribing \\eqref{recon1b} obviously fixes these degrees of freedom.\nTherefore, $\\widehat{\\vec f}_1$ exists, is uniquely determined, and locally computable.\n\nFor showing that $\\widehat{\\vec f}_1$ is Lipschitz in the $x_1$-direction\nit suffices to prove that $\\widehat{\\vec f}_1$ is continuous along the 'vertical' faces.\nLet $K= [x_i,x_{i+1}] \\times [y_j,y_{j+1}] \\in \\cT$ then we define \n\\[ \\chi_K^k := 1_{[x_i,x_{i+1}]} \\otimes (l^{K,2}_k \\cdot 1_{[y_j,y_{j+1}]})\\]\nwhere for any interval $I$ we denote the characteristic function of that interval by $1_I.$\nFor any $k \\in \\{0,\\dots, q\\}$ we have on the one hand\n\\begin{equation}\\label{recon1c} \\int_{\\T{}} \\partial_{x_1} \\widehat{\\vec f}_1 \\chi_K^k\n = \\omega_k h^y_j \\int_{x_i}^{x_{i+1}} \\partial_{x_1} \\widehat{\\vec f}_1(\\cdot, \\xi^{K,2}_k) = \\omega_k h^y_j \\big(\\widehat{\\vec f}_1(x_{i+1}, \\xi^{K,2}_k)^- - \\widehat{\\vec f}_1(x_{i}, \\xi^{K,2}_k)^+\\big)\n \\end{equation}\n where $h^y_j = y_{j+1} - y_j$ and $\\omega_k$ is the Gauss quadrature weight associated to $\\xi_k$.\nOn the other hand we find, using 
\\eqref{recon1a},\n\\begin{equation}\\label{recon1d} \\int_{\\T{}} \\partial_{x_1} \\widehat{\\vec f}_1 \\chi_K^k \\\\\n = \\omega_k h^y_j \\left( \\cP_q[\\vec F_1(\\vec v_h^-,\\vec v_h^+)](x_{i+1},\\xi^{K,2}_k ) - \\cP_q[\\vec F_1(\\vec v_h^-,\\vec v_h^+)](x_{i},\\xi^{K,2}_k )\\right).\n \\end{equation}\nCombining \\eqref{recon1c}, \\eqref{recon1d} and \\eqref{recon1b} \nwe obtain\n \\begin{equation}\\label{recon1f} \\widehat{\\vec f}_1(x_{i+1}, \\xi^{K,2}_k)^- \n = \\cP_q[\\vec F_1(\\vec v_h^-,\\vec v_h^+)](x_{i+1},\\xi^{K,2}_k ) = \\widehat{\\vec f}_1(x_{i+1}, \\xi^{K,2}_k)^+ \\quad \\text{for } k=0,\\dots,q.\n \\end{equation}\nAs $\\widehat{\\vec f}_1(x_{i+1}, \\cdot)^\\pm|_{[y_j,y_{j+1}]}$ is a polynomial of degree $q$ and $k$ is arbitrary in equation \\eqref{recon1f} we find \n\\begin{equation}\\label{recon1g}\n \\widehat{\\vec f}_1(x_{i+1}, \\cdot)^+|_{[y_j,y_{j+1}]} = \\widehat{\\vec f}_1(x_{i+1}, \\cdot)^-|_{[y_j,y_{j+1}]}.\n\\end{equation}\nAs $i,j$ were arbitrary this implies Lipschitz continuity of $\\widehat{\\vec f}_1$ in $x_1$-direction.\n\n\\end{proof}\n\n\nFrom equations \\eqref{sch1} and \\eqref{recon1a} we obtain\nthe following pointwise equation almost everywhere:\n\\begin{equation}\\label{sch4}\n\\partial_t \\vec v_h + \\partial_{x_1} \\widehat{\\vec f}_1 + \\partial_{x_2} \\widehat{\\vec f}_2=0 .\n\\end{equation}\n\n\n\n\n\\begin{remark}[Main idea of a $2$ dimensional reconstruction]\nRecalling the arguments presented in previous Sections our main priority is to make $\\widehat{\\vec v} $ Lipschitz continuous.\n The particular reconstruction we describe is based on the following principles inspired by Section \\ref{subs:rec:claw}.\nWe wish $\\widehat{\\vec v}|_K - \\vec v_h|_K$ to be orthogonal to polynomials on $K$ of degree $q-1$ which is ensured by imposing them to coincide on the tensor product Gauss points.\nWe wish $\\widehat{\\vec f}_1 $ and $\\vec f_1(\\widehat{\\vec v})$ to be similar on vertical faces which is ensured by fixing the 
values of $\\widehat{\\vec v} $ on points of the form $(x_i, \\xi^{K,2}_l)$ when $K= [x_i,x_{i+1}]\\times [y_j,y_{j+1}].$\nImposing the conditions described above on a reconstruction in $\\fes_{q+1,q+1}$ is impossible because it does not have\nenough degrees of freedom.\nThus, we define a reconstruction $\\widehat{\\vec v} \\in \\fes_{q+2,q+2}.$ For such a function imposing the degrees of freedom described above\nleaves four degrees of freedom per cell undefined. Thus, we may prescribe values in corners.\nTo this end let us fix an averaging operator $\\bar {\\vec w}: U^4 \\rightarrow U.$\n\\end{remark}\n\n\\begin{definition}[Solution reconstruction]\nWe define (at each time) the reconstruction $\\widehat{\\vec v} \\in \\fes_{q+2,q+2}$ of $\\vec v_h \\in \\fes_{q,q}$ by prescribing for every $K= [x_i,x_{i+1}]\\times [y_j,y_{j+1}] \\in \\cT$\n\\begin{equation}\\label{urec}\n \\begin{split}\n \\widehat{\\vec v}|_K (\\xi^{K,1}_k, \\xi^{K,2}_l) &= \\vec v_h (\\xi^{K,1}_k, \\xi^{K,2}_l) \\quad \\text{ for } k,l=0,\\dots,q \\\\\n \\widehat{\\vec v}|_K (x_i,\\xi^{K,2}_l ) &= \\vec w_1 ( \\vec v_h (x_i, \\xi^{K,2}_l)^-, \\vec v_h (x_i, \\xi^{K,2}_l)^+) \\quad \\text{ for } l=0,\\dots,q \\\\\n \\widehat{\\vec v}|_K (x_{i+1}, \\xi^{K,2}_l) &=\\vec w_1 ( \\vec v_h (x_{i+1}, \\xi^{K,2}_l)^-, \\vec v_h (x_{i+1}, \\xi^{K,2}_l)^+) \\quad \\text{ for } l=0,\\dots,q \\\\\n \\widehat{\\vec v}|_K ( \\xi^{K,1}_k, y_j) &=\\vec w_2 ( \\vec v_h ( \\xi^{K,1}_k, y_j)^-, \\vec v_h ( \\xi^{K,1}_k, y_j)^+) \\quad \\text{ for } k=0,\\dots,q \\\\\n \\widehat{\\vec v}|_K ( \\xi^{K,1}_k, y_{j+1}) &=\\vec w_2 ( \\vec v_h ( \\xi^{K,1}_k, y_{j+1})^-, \\vec v_h ( \\xi^{K,1}_k, y_{j+1})^+) \\quad \\text{ for } k=0,\\dots,q \\\\\n \\widehat{\\vec v}|_K ( x_i, y_j) &=\\bar{\\vec w} ( \\lim_{s \\searrow 0} \\vec v_h (x_i +s, y_j +s), \n \\lim_{s \\searrow 0} \\vec v_h (x_i -s, y_j +s),\\\\\n & \\quad \\qquad \\lim_{s \\searrow 0} \\vec v_h (x_i +s, y_j -s),\n \\lim_{s \\searrow 0} \\vec v_h (x_i -s, y_j 
-s))\n \\end{split}\n\\end{equation}\nand analogous prescriptions for the remaining three corners of $K$.\n\\end{definition}\n\n\n\n\\begin{lemma}[Properties of $\\widehat{\\vec v}$]\n The reconstruction $\\widehat{\\vec v}$ is well-defined, locally computable and Lipschitz continuous.\n Moreover, for $q \\geq 1$ the following local conservation property is satisfied:\n \\begin{equation}\\label{eq:consrec} \\int_{K} \\widehat{\\vec v} - \\vec v_h =0 \\quad \\forall \\ K \\in \\cT.\\end{equation}\n\\end{lemma}\n\n\\begin{proof}\nWe will only prove the Lipschitz continuity and the conservation property. As $\\widehat{\\vec v}$ is piecewise polynomial it is sufficient to prove continuity to show Lipschitz continuity.\nLet $K=[x_i,x_{i+1}]\\times [y_j,y_{j+1}]$ and $K'= [x_{i-1},x_{i}]\\times [y_j,y_{j+1}]$; then\n$\\widehat{\\vec v}|_K $ and $\\widehat{\\vec v}|_{K'}$ coincide on $(x_i,y_j)$, $(x_i, \\xi^{K,2}_k)_{k=0,\\dots,q}$ and $(x_i,y_{j+1})$.\nTherefore, $\\widehat{\\vec v}|_K $ and $\\widehat{\\vec v}|_{K'}$ coincide on $\\{x_i\\} \\times [y_j,y_{j+1}]$.
\nAnalogous arguments hold for the other edges, such that $\\widehat{\\vec v}$ is indeed (Lipschitz) continuous.\n\nAs the nodal points on each element have tensor structure we can use the exactness properties of one-dimensional Gauss quadrature.\nThe conservation property \\eqref{eq:consrec} follows from the fact that one-dimensional Gauss quadrature with $q+1$ Gauss points is exact for polynomials\nof degree up to $2q+1$, which is larger than or equal to $q+2$ provided $q \\geq 1.$\n\\end{proof}\n\n\\begin{remark}[Reconstructions for hyperbolic\/parabolic problems in $2$ dimensions]\nIn order to obtain reconstructions and splittings of residuals into hyperbolic and parabolic parts for numerical discretisations of \\eqref{interm-c}\n the reconstructions $\\widehat{\\vec v}, \\widehat{\\vec f}_\\alpha$ described in this section can be used in the same way the reconstructions from Section \\ref{subs:rec:claw} were used in Section \\ref{subs:rec:hp}.\n In particular, $\\widehat{\\vec v}$ described above is already regular enough to serve as a reconstruction in case of a numerical scheme\n for \\eqref{interm-c}.\n The flux reconstructions $(\\widehat{\\vec f}_\\alpha)_{\\alpha=1,\\dots,d}$ can be used to obtain a splitting analogous to \\eqref{eq:dres} by making use of \\eqref{sch4}.\n\\end{remark}\n\n\n\\section{Numerical experiments}\n\n\\label{sec:num}\n\n\nIn this section we study the numerical behaviour of the error\nindicators $\\cE_M$ and $\\cE_D$ presented in the previous Sections and compare them with the ``error'', which we quantify as the difference between the numerical approximation of the adaptive model and the numerical approximation of the full model, on some test problems.\n\n\\subsection{Test 1 : The scalar case - the 1d viscous and inviscid Burgers' equation}\n\\label{sec:Burger1d}\nWe conduct an illustrative experiment using Burgers' equation.
In this\ncase the ``complex'' model which we want to approximate is given by\n\\begin{equation}\n \\label{eq:viscburger}\n \\pdt u_\\varepsilon + \\pd{x}{\\qp{\\frac{u_\\varepsilon^2}{2}}} = \\varepsilon \\pd{xx} u_\\varepsilon\n\\end{equation}\nfor fixed $\\varepsilon = 0.005$ with homogeneous Dirichlet boundary data, and the simple model we will use in the majority of the domain is given by \n\\begin{equation}\n \\label{eq:burger}\n \\pdt u + \\pd{x}{\\qp{\\frac{u^2}{2}}} = 0.\n\\end{equation}\n\nWe discretise the problem (\\ref{eq:burger}) using a piecewise linear dG scheme\n(\\ref{eq:sddg}) together with Richtmyer-type fluxes given by\n\\begin{equation}\n \\label{eq:Richtmyer}\n \\vec F (\\vec v_h^-,\\vec v_h^+)\n =\n \\vec f\\qp{\\frac{1}{2}\\qp{\\vec v_h^- + \\vec v_h^+} - \\frac{\\tau}{h}\\qp{\\vec f(\\vec v_h^+) - \\vec f(\\vec v_h^-)}}.\n\\end{equation}\nNote that these\nfluxes satisfy a relaxed version of the assumptions (\\ref{eq:repf})--(\\ref{eq:condw}), which is sufficient for optimal a posteriori error estimates, see \\cite[Rem. 3.6]{DG_15}. In this case our dG formulation is given by \\eqref{eq:sddg} under a first-order IMEX temporal discretisation. We take $\\tau = 10^{-4}$ and $h = \\pi\/500$ uniformly over the space-time domain.
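As an aside, the Richtmyer flux \\eqref{eq:Richtmyer} is explicitly of the form \\eqref{eq:repf}, $\\vec F(\\vec u,\\vec v) = \\vec f(\\vec w(\\vec u,\\vec v))$, with intermediate state $\\vec w(\\vec u,\\vec v) = \\tfrac12(\\vec u+\\vec v) - \\tfrac{\\tau}{h}(\\vec f(\\vec v)-\\vec f(\\vec u))$. A minimal sketch for the scalar Burgers flux, using the values of $\\tau$ and $h$ from this test, illustrates this structure and the consistency $\\vec F(\\vec u,\\vec u) = \\vec f(\\vec u)$:

```python
import math

# Richtmyer-type numerical flux (eq:Richtmyer) for the scalar Burgers flux
# f(u) = u^2 / 2.  The flux has the form F(u, v) = f(w(u, v)) of (eq:repf)
# with intermediate state w(u, v) = (u + v)/2 - (tau/h) * (f(v) - f(u)).
# tau and h are the time step and mesh width used in Test 1.

tau = 1e-4
h = math.pi / 500

def f(u):
    return 0.5 * u * u

def w(u, v):  # intermediate state of the Richtmyer flux
    return 0.5 * (u + v) - (tau / h) * (f(v) - f(u))

def F(u, v):  # numerical flux F(u, v) = f(w(u, v))
    return f(w(u, v))

# Consistency: for u = v the flux reduces to the exact flux f(u).
for u in (-1.0, 0.0, 0.7):
    assert abs(F(u, u) - f(u)) < 1e-14
```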
In the region where the ``complex'' model (\\ref{eq:viscburger}) is implemented for the discretisation of the diffusion term we use an interior penalty discretisation with piecewise constant $\\widehat \\varepsilon$, that is\n\\begin{equation}\n \\widehat \\varepsilon =\n \\begin{cases}\n 0.005 \\text{ over cells where the a posteriori model error bound is large}\n \\\\\n 0 \\text{ otherwise.}\n \\end{cases}\n\\end{equation}\nThis means the discretisation becomes\n\\begin{equation}\\label{eq:sddg2n}\n\\begin{split}\n &\\vec v_h(0)= \\cP_q [\\vec u_0] \\\\\n &\\int_{\\cT} \\partial_t \\vec v_h \\cdot \\vec \\phi - \\vec f(\\vec v_h) \\cdot \\partial_x \\vec \\phi + \\int_{\\cE} \\vec F (\\vec v_h^-,\\vec v_h^+) \\jump{\\vec \\phi}\n +\n \\bih{\\widehat \\varepsilon \\vec v_h}{\\vec \\phi} = 0\n \\quad \\text{for all } \\vec \\phi \\in \\fes_q,\n \\end{split}\n\\end{equation}\nwhere\n\\begin{equation}\n \\bih{\\vec w_h}{\\vec\\phi}\n =\n \\int_{\\T{}} \\pd x {\\vec w_h} \\cdot \\pd x {\\vec \\phi}\n -\n \\int_\\E\n \\jump{\\vec w_h}\\cdot \\avg{\\pd x {\\vec \\phi}}\n +\n \\jump{\\vec\\phi}\\cdot \\avg{\\pd x {\\vec w_h}}\n -\n \\frac{\\sigma}{h} \\jump{\\vec w_h}\\cdot\\jump{\\vec \\phi}\n\\end{equation}\nand $\\sigma = 10$ is the penalty parameter.\n\nThe model adaptive algorithm we employ is described by the following pseudocode:\n\\subsection{{$\\Algoname{Model Adaptation}$}}\n\\label{alg:model-adapt}\n\\begin{algorithmic}\n \\Require\n $(\\tau,t_0,T,\\vec u^0,\\tol,\\tol_c,\\varepsilon)$\n \\Ensure $(\\vec u_h^n)_{\\rangefromto n1N}$, model adaptive solution\n \\State $\\widehat \\varepsilon(x,0):=0$\n \\State $t = \\tau, n=1$\n \\While{$t\\leq T$}\n \\State\n $(\\vec u_h^n) := \\Algoname{Solve one timestep of dg scheme} (\\vec u_h^{n-1},\\widehat \\varepsilon)$\n \n \\State compute $\\cE_D, \\cE_M$\n \n \\If{$\\cE_D + \\cE_M > \\tol$}\n \n \\State Mark a subset of elements, $\\{ J \\}$ where $\\cE_D + \\cE_M$ is large\n \n \\For{$K\\in \\{ J \\}$}\n \n \\State 
Set $\\widehat \\varepsilon(x,t)|_K := \\varepsilon$\n \n \\EndFor\n \n \\ElsIf{$\\cE_D + \\cE_M < \\tol$}\n \n \\State Do nothing\n \n \\EndIf\n\n \\For{$K\\in \\T{}$}\n \n \\If{$\\cE_M|_K < \\abs{K} \\, \\tol_c \\, \\tol\/\\varepsilon$}\n \n \\State $\\widehat \\varepsilon(x,t)|_K = 0$\n \n \\EndIf\n\n \\EndFor\n \n \\State $t := t + \\tau, n := n+1$\n \n \\EndWhile\n \n \\State return $(\\vec u_h^n)_{\\rangefromto n1N}$,\n\\end{algorithmic}\n\nFor this test the parameters are given by $\\tol = 10^{-2} \\AND \\tol_c = 10^{-3}$.\n\n\\begin{Rem}[Coupling to other adaptive procedures]\n \n The a posteriori bound given in Theorem \\ref{the:2} has a structure which allows for both model and mesh adaptivity. This means that Algorithm \\ref{alg:model-adapt} could be easily coupled with other mechanisms employing $h$-$p$ spatial adaptivity in addition to local timestep control. As can be seen from the pseudocode we use the complex model even in the case where the discretisation error $\\cE_D$ is large and the modelling error $\\cE_M$ is small. This is due to the fact that mesh adaptation is beyond the scope of this paper and we focus on model adaptation.\n\\end{Rem}\nInitial conditions are chosen as\n\\begin{equation}\n u(x,0) := \\sin{x}\n\\end{equation}\nover the interval $[-\\pi,\\pi]$. The results are summarised in Figures \\ref{fig:burger} and \\ref{fig:burger-err}.\n\n\n\\begin{figure}[!ht]\n \\caption[]\n {\\label{fig:burger}\n \n A numerical experiment testing model adaptivity on Burgers' equation. The simulation is described in \\S\\ref{sec:Burger1d}.
\n Here we display the solution at various times (top) together with a representation of the model adaptation parameter $\\widehat \\varepsilon$ (bottom).\n Blue is the region $\\widehat \\varepsilon=0$, where the simplified (inviscid Burgers') problem is being computed and red is where $\\widehat \\varepsilon \\neq 0$, where the full (viscous Burgers') problem is being computed.\n We see that initially only the simplified model is computed but as time progresses the full model is solved in a region around where the steep layer forms. As this forms the domain where the complex model is solved collapses and eventually is very localised around the layer.\n \n }\n \\begin{center}\n \\subfigure[{\n \n $t=0$\n \n }]{\n \\includegraphics[scale=\\figscale,width=0.47\\figwidth]{figures\/burger-solution-t-0.png}\n }\n \n \\hfill\n \\subfigure[{\n \n $t=0.5375$\n \n }]{\n \\includegraphics[scale=\\figscale,width=0.47\\figwidth]{figures\/burger-solution-t-0-5375.png}\n }\n \n \\hfill\n \\subfigure[{\n \n $t=1.1625$\n \n }]{\n \\includegraphics[scale=\\figscale,width=0.47\\figwidth]{figures\/burger-solution-t-1-1625.png}\n }\n \n \\hfill\n \\subfigure[{\n \n $t=1.3$\n \n }]{\n \\includegraphics[scale=\\figscale,width=0.47\\figwidth]{figures\/burger-solution-t-1-3.png}\n }\n \n \\hfill\n \\subfigure[{\n \n $t=1.55$\n \n }]{\n \\includegraphics[scale=\\figscale,width=0.47\\figwidth]{figures\/burger-solution-t-1-55.png}\n }\n \n \\hfill\n \\subfigure[{\n \n $t=2.5$\n \n }]{\n \\includegraphics[scale=\\figscale,width=0.47\\figwidth]{figures\/burger-solution-t-2-5.png}\n }\n \n \\hfill\n \\end{center}\n \\end{figure}\n\n\\begin{figure}[!ht]\n \\caption[]\n {\\label{fig:burger-err}\n \n A numerical experiment testing model adaptivity on Burgers equation. The simulation is described in \\S\\ref{sec:Burger1d}. 
Here we display the error $\\norm{u_h-u_{\\varepsilon,h}}$, that is, the difference between the approximation of the full expensive model and that of the adaptive approximation at the same times as in Figure \\ref{fig:burger} together with a representation of $\\widehat \\varepsilon$ (bottom).\n An interesting phenomenon is the propagation of dispersive waves emanating from the interface between the region where $\\widehat \\varepsilon=0$ and that of $\\widehat \\varepsilon\\neq 0$.\n \n }\n \\begin{center}\n \\subfigure[{\n \n $t=0$\n \n }]{\n \\includegraphics[scale=\\figscale,width=0.47\\figwidth]{figures\/burger-err-t-0.png}\n }\n \n \\hfill\n \\subfigure[{\n \n $t=0.5375$\n \n }]{\n \\includegraphics[scale=\\figscale,width=0.47\\figwidth]{figures\/burger-err-t-0-5375.png}\n }\n \n \\hfill\n \\subfigure[{\n \n $t=1.1625$\n \n }]{\n \\includegraphics[scale=\\figscale,width=0.47\\figwidth]{figures\/burger-err-t-1-1625.png}\n }\n \n \\hfill\n \\subfigure[{\n \n $t=1.3$\n \n }]{\n \\includegraphics[scale=\\figscale,width=0.47\\figwidth]{figures\/burger-err-t-1-3.png}\n }\n \n \\hfill\n \\subfigure[{\n \n $t=1.55$\n \n }]{\n \\includegraphics[scale=\\figscale,width=0.47\\figwidth]{figures\/burger-err-t-1-55.png}\n }\n \n \\hfill\n \\subfigure[{\n \n $t=2.5$\n \n }]{\n \\includegraphics[scale=\\figscale,width=0.47\\figwidth]{figures\/burger-err-t-2-5.png}\n }\n \n \\hfill\n \\end{center}\n \\end{figure}\n\n\\subsection{Test 2 : The scalar case - the 2d viscous and inviscid Burgers' equation}\n\\label{sec:Burger2d}\nIn this test we examine how the adaptive procedure extends into the multi-dimensional setting again using Burgers' equation as an illustrative example. In this\ncase the ``complex'' model which we want to approximate is given by\n\\begin{equation}\n \\label{eq:viscburger2d}\n \\pdt u_\\varepsilon + \\qp{u \\one} \\cdot \\nabla u_\\varepsilon = \\varepsilon \\Delta u_\\varepsilon,\n\\end{equation}\nwhere $\\one = \\Transpose{\\qp{1,1}}$. 
The simple model we will use in the majority of the domain is given by \n\\begin{equation}\n \\label{eq:burger2d}\n \\pdt u + \\qp{u \\one} \\cdot \\nabla u = 0.\n\\end{equation}\nAs in Test 1 we make use of a 1st order IMEX, piecewise linear dG scheme together with Richtmyer fluxes and an IP method for the viscosity. We pick an initial condition\n\\begin{equation}\n u(\\vec x,0) = \\exp\\qp{-10\\norm{\\vec x}^2}\n\\end{equation}\nand use the parameters $\\varepsilon = 0.01$, $h = \\sqrt{2}\/50$, $\\tau = \\sqrt{2}\/400$, $\\tol = 10^{-2}$ and $\\tol_c = 10^{-3}$ in Algorithm \\ref{alg:model-adapt}. The results are summarised in Figures \\ref{fig:burger2d} and \\ref{fig:burger2d-err}.\n\n\\begin{figure}[!ht]\n \\caption[]\n {\\label{fig:burger2d}\n \n A numerical experiment testing model adaptivity on Burgers' equation. The simulation is described in \\S\\ref{sec:Burger2d}. Here we display the solution at various times (top) together with a representation of $\\widehat \\varepsilon$ (bottom). 
Blue is the region $\\widehat \\varepsilon=0$, where the simplified (inviscid Burgers') problem is being computed and red is where $\\widehat \\varepsilon \\neq 0$, where the full (viscous Burgers') problem is being computed.\n \n }\n \\begin{center}\n \\subfigure[{\n \n $t=0.0025$\n \n }]{\n \\includegraphics[scale=\\figscale,width=0.47\\figwidth]{figures\/burger2d-solution-t-0-0025.png}\n }\n \n \\hfill\n \\subfigure[{\n \n $t=0.25$\n \n }]{\n \\includegraphics[scale=\\figscale,width=0.47\\figwidth]{figures\/burger2d-solution-t-0-25.png}\n }\n \n \\hfill\n \\subfigure[{\n \n $t=0.5$\n \n }]{\n \\includegraphics[scale=\\figscale,width=0.47\\figwidth]{figures\/burger2d-solution-t-0-5.png}\n }\n \n \\hfill\n \\subfigure[{\n \n $t=1.$\n \n }]{\n \\includegraphics[scale=\\figscale,width=0.47\\figwidth]{figures\/burger2d-solution-t-1.png}\n }\n \n \\hfill\n \\subfigure[{\n \n $t=1.25$\n \n }]{\n \\includegraphics[scale=\\figscale,width=0.47\\figwidth]{figures\/burger2d-solution-t-1-25.png}\n }\n \n \\hfill\n \\subfigure[{\n \n $t=1.5$\n \n }]{\n \\includegraphics[scale=\\figscale,width=0.47\\figwidth]{figures\/burger2d-solution-t-1-5.png}\n }\n \n \\hfill\n \\end{center}\n \\end{figure}\n\n\\begin{figure}[!ht]\n \\caption[]\n {\\label{fig:burger2d-err}\n \n A numerical experiment testing model adaptivity on Burgers' equation. The simulation is described in \\S\\ref{sec:Burger2d}. 
Here we display the error $\\norm{u_h-u_{\\varepsilon,h}}$, that is, the difference between the approximation of the full expensive model and that of the adaptive approximation at the same times as in Figure \\ref{fig:burger2d} together with a representation of $\\widehat \\varepsilon$ (bottom).\n \n }\n \\begin{center}\n \\subfigure[{\n \n $t=0.0025$\n \n }]{\n \\includegraphics[scale=\\figscale,width=0.47\\figwidth]{figures\/burger2d-err-t-0-0025.png}\n }\n \n \\hfill\n \\subfigure[{\n \n $t=0.25$\n \n }]{\n \\includegraphics[scale=\\figscale,width=0.47\\figwidth]{figures\/burger2d-err-t-0-25.png}\n }\n \n \\hfill\n \\subfigure[{\n \n $t=0.5$\n \n }]{\n \\includegraphics[scale=\\figscale,width=0.47\\figwidth]{figures\/burger2d-err-t-0-5.png}\n }\n \n \\hfill\n \\subfigure[{\n \n $t=1.$\n \n }]{\n \\includegraphics[scale=\\figscale,width=0.47\\figwidth]{figures\/burger2d-err-t-1.png}\n }\n \n \\hfill\n \\subfigure[{\n \n $t=1.25$\n \n }]{\n \\includegraphics[scale=\\figscale,width=0.47\\figwidth]{figures\/burger2d-err-t-1-25.png}\n }\n \n \\hfill\n \\subfigure[{\n \n $t=1.5$\n \n }]{\n \\includegraphics[scale=\\figscale,width=0.47\\figwidth]{figures\/burger2d-err-t-1-5.png}\n }\n \n \\end{center}\n \\end{figure}\n\n\n\\subsection{Test 3 : The Navier-Stokes-Fourier system}\n\\label{sec:NSF}\nIn this test we examine the application of the adaptive procedure to the flow around a NACA aerofoil. 
We simulate the Navier-Stokes-Fourier system which is given by seeking $\\qp{\\rho_\\mu, \\rho_\\mu\\mathbf v_\\mu, e_\\mu}$ such that\n\\begin{equation}\n \\label{eq:NSF3}\n \\begin{split}\n \\partial_t \\rho_\\mu + \\div(\\rho_\\mu \\mathbf v_\\mu) &=0\\\\\n \\partial_t (\\rho_\\mu \\mathbf v_\\mu) + \\div(\\rho_\\mu \\mathbf v_\\mu \\otimes \\mathbf v_\\mu) + \\nabla p &= \\div (\\mu \\nabla \\mathbf v_\\mu) \\\\\n \\partial_t e_\\mu + \\div((e_\\mu+ p)\\mathbf v_\\mu ) &= \\div (\\mu(\\nabla \\mathbf v_\\mu) \\cdot \\mathbf v_\\mu + \\mu \\nabla T_\\mu),\n \\end{split}\n\\end{equation}\nwith a Reynolds number of $1000$. In this case our ``complex'' model is given by (\\ref{eq:NSF3}) and the approximation is given by\n\\begin{equation}\n \\label{eq:EF}\n \\begin{split}\n \\partial_t \\rho + \\div(\\rho \\mathbf v) &=0\\\\\n \\partial_t (\\rho \\mathbf v) + \\div(\\rho \\mathbf v \\otimes \\mathbf v) + \\nabla p &= 0 \\\\\n \\partial_t e + \\div((e+ p)\\mathbf v ) &= 0.\n \\end{split}\n\\end{equation}\nWe again make use of a 1st order IMEX, piecewise linear dG scheme with the Richtmyer fluxes (\\ref{eq:Richtmyer}) and an IP discretisation of both dissipative terms. The numerical domain we use is a triangulation around a NACA aerofoil, shown in Figure 5, where the discretisation parameters are detailed. We use Algorithm \\ref{alg:model-adapt} where the parameter $\\varepsilon$ and adaptive parameter $\\widehat \\varepsilon$ have been replaced with $\\mu = \\frac{U_r D}{Re}$ with $U_r = 1$ denoting the reference velocity, $D = 1$ denoting the length of the aerofoil and $Re = 1000$ as the Reynolds number. We fix $\\tau = 0.0001$ and take $\\tol = 10^{-2}$ and $\\tol_c = 10^{-3}$. \nSome results are shown in Figure \\ref{fig:naca-vel}, Figure \\ref{fig:naca-temp} and Figure \\ref{fig:naca-pres}.\n\n\\begin{Rem}[Coupling of viscosity and heat conduction]\n In our experiment we choose $\\mu$ as both the coefficient of viscosity and of heat conduction. 
In practical situations these parameters may scale differently and by splitting the adaptation estimator the model adaptivity can be conducted independently for both the viscous and heat conduction terms. \n\\end{Rem}\n\n\\begin{figure}[!ht]\n \\caption[]\n {\\label{fig:naca-mesh}\n \n The underlying mesh of the NACA simulation. Here the minimum mesh size is around the aerofoil where $h \\approx 0.0008$ and the maximum is away from the aerofoil where $h \\approx 0.3$.\n \n }\n \\subfigure[{\n \n Global view of the mesh.\n \n }]{\n \\includegraphics[scale=\\figscale,width=0.47\\figwidth]{figures\/naca-mesh.png}\n }\n \n \\hfill\n \\subfigure[{\n \n Zoom of the NACA aerofoil.\n \n }]{\n \\includegraphics[scale=\\figscale,width=0.47\\figwidth]{figures\/naca-mesh-zoom.png}\n }\n\\end{figure}\n\n\\begin{figure}[!ht]\n \\caption[]\n {\\label{fig:naca-vel}\n \n A numerical experiment testing model adaptivity on the Navier-Stokes-Fourier system. The simulation is described in \\S\\ref{sec:NSF}. Here we display the velocity of the solution at various times (top) together with a representation of both $\\mu$ and $\\kappa$ (bottom). 
Blue is the region $\\mu=\\kappa=0$, where the simplified problem (the Euler system) is being computed, and red is where $\\mu=\\kappa \\neq 0$, where the full problem (the Navier-Stokes-Fourier system) is being computed.\n \n }\n \\begin{center}\n \\subfigure[{\n \n $t=0.0002$\n \n }]{\n \\includegraphics[scale=\\figscale,width=0.47\\figwidth]{figures\/naca-velocity-0-02.png}\n }\n \n \\hfill\n \\subfigure[{\n \n $t=0.01$\n \n }]{\n \\includegraphics[scale=\\figscale,width=0.47\\figwidth]{figures\/naca-velocity-1.png}\n }\n \\subfigure[{\n \n $t=0.02$\n \n }]{\n \\includegraphics[scale=\\figscale,width=0.47\\figwidth]{figures\/naca-velocity-2.png}\n }\n \n \\hfill\n \\subfigure[{\n \n $t=0.05$\n \n }]{\n \\includegraphics[scale=\\figscale,width=0.47\\figwidth]{figures\/naca-velocity-5.png}\n }\n \\subfigure[{\n \n $t=0.24$\n \n }]{\n \\includegraphics[scale=\\figscale,width=0.47\\figwidth]{figures\/naca-velocity-24.png}\n }\n \n \\hfill\n \\subfigure[{\n \n $t=5.24$\n \n }]{\n \\includegraphics[scale=\\figscale,width=0.47\\figwidth]{figures\/naca-velocity-524.png}\n }\n \\end{center}\n\\end{figure}\n\n\\begin{figure}[!ht]\n \\caption[]{\\label{fig:naca-temp}\n \n A numerical experiment showing the temperature around the aerofoil at short time scales. Notice the high temperature initially diffused around the leading edge of the aerofoil. 
This is where the model adaptive parameter is not zero, compare with Figure \\ref{fig:naca-vel}.\n \n }\n \\subfigure[{\n \n $t=0.02$\n \n }]\n {\n \\includegraphics[scale=\\figscale,width=0.47\\figwidth]{figures\/naca-temp-002.png}\n }\n \\subfigure[{\n \n $t=0.06$\n \n }]\n {\n \\includegraphics[scale=\\figscale,width=0.47\\figwidth]{figures\/naca-temp-006.png}\n }\n \\subfigure[{\n \n $t=0.1$\n \n }]\n {\n \\includegraphics[scale=\\figscale,width=0.47\\figwidth]{figures\/naca-temp-010.png}\n }\n \\subfigure[{\n \n $t=0.15$\n \n }]\n {\n \\includegraphics[scale=\\figscale,width=0.47\\figwidth]{figures\/naca-temp-015.png}\n }\n\\end{figure}\n\n\\begin{figure}[!ht]\n \\caption[]{\\label{fig:naca-pres}\n \n A plot of the pressure around the aerofoil and associated contours at $t=5.24$.\n \n }\n \\includegraphics[scale=\\figscale,width=0.7\\figwidth]{figures\/naca-pressure-524.png}\n\\end{figure}\n\n\n\n\n \\bibliographystyle{alpha}\n ","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
{"text":"\\section{Introduction}\\label{sec:intro}\n\\IEEEPARstart{P}{erception} of obstacles' 3D information via different types of sensors is a fundamental task in the field of computer vision and robotics. This topic has been extensively studied with the development of Autonomous Driving (AD) and intelligent transportation systems. Though LiDAR sensors have the advantage of providing distance information about the obstacles, the texture and color information is totally lost due to the sparse scanning. Therefore, False Positive (FP) detections and wrong category classifications often happen for LiDAR-based object detection frameworks. 
In particular, with the development of deep learning techniques on point-cloud-based representations, {LIDAR-based approaches} can be generally divided into point-based \\cite{shi2019pointrcnn}, voxel-based \\cite{zhou2018voxelnet, lang2019pointpillars}, and hybrid-point-voxel-based methods \\cite{shi2020pv}.\n\n\\begin{figure}[t!]\n\t\\centering\n\t\\includegraphics[width=0.45\\textwidth]{Figs\/Head_chart.pdf}\n\t\\centering\n\t\\caption{The proposed \\textit{Multi-Sem Fusion (MSF)} is a general multi-modal fusion framework which can be employed for different 3D object detectors. a) illustrates the improvements on three different baselines. b) gives the performance of \\textit{CenterPoint} \\cite{yin2021center} with the proposed modules on the public nuScenes benchmark. c) gives the improvements on the different categories respectively. In addition, ``w\/o'' stands for ``without''.} \n\t\\label{Fig:head_figure}\n\\end{figure}\n\nOn the contrary, the camera sensors can provide detailed texture and color information {while the distance information is totally lost during the perspective projection.}\nThe {fusion} of the two types of data is a promising way for boosting the perception performance of AD. {Generally, multi-modal-based object detection approaches} can be divided into early fusion-based \\cite{dou2019seg, vora2020pointpainting}, deep fusion-based \\cite{zhang2020maff, xu2018pointfusion, tian2020adaptive} and late fusion-based approaches \\cite{pang2020clocs}. Early fusion-based approaches aim at creating a new type of data by combining the raw data directly before sending them into the detection framework. Usually, these kinds of methods require pixel-level correspondence between the different types of sensor data. Different from the early fusion-based methods, late fusion-based approaches {fuse the detection results at the bounding box level. 
In contrast,} deep fusion-based methods usually extract the features with different types of deep neural networks first and then fuse them at the feature level. {\\textit{PointPainting} \\cite{vora2020pointpainting} is an early fusion method which} achieves superior detection results on different benchmarks. {Specifically, the network takes point clouds together with the 2D image semantic predictions as inputs and outputs the detection results with any LiDAR-based 3D object detector.} {More importantly, it} can be incorporated into any 3D object detector, {regardless of whether it is} point-based or voxel-based.\n\\begin{figure}[H]\n\t\\centering\n\t\\includegraphics[width=0.45\\textwidth]{Figs\/head.pdf}\n\t\\centering\n\t\\caption{(a) is the point cloud painted with the 2D segmentation results. The frustum within the blue line shows the misclassified area due to the blurring effect at the object's boundary; (b) is the point cloud painted by the 3D segmentation results (misclassified points are demonstrated in red color); (c) and (d) give the object detection results based on the 2D painted point cloud (with an obvious False Positive (FP) detection) and on the proposed 2D\/3D fusion framework respectively. }\n\t\\label{Fig:2DSeg_in_3d}\n\\end{figure}\n\nHowever, the {blurring effect at the object's boundary} happens inevitably in image-based semantic segmentation methods. This effect becomes much worse when reprojecting the 2D semantic predictions onto the 3D point clouds. An example of this effect is shown in Fig. \\ref{Fig:2DSeg_in_3d}. Taking the big truck at the left bottom of sub-fig. \\ref{Fig:2DSeg_in_3d}-(a) as an example, we can find that a large frustum area of the background (e.g., points in orange color) has been misclassified as foreground due to the inaccurate projection results. In addition, the re-projection of 3D points to 2D image pixels is not exactly one-to-one because of the digital quantization and many-to-one projection issues. 
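The many-to-one issue is easy to reproduce numerically: after perspective division and pixel quantization, 3D points at very different depths can land on the same integer pixel, so a distant background point inherits the semantic label of a foreground pixel. The following is a minimal NumPy sketch with a hypothetical pinhole intrinsic matrix (not the calibration of any particular dataset):

```python
import numpy as np

# Hypothetical pinhole intrinsics (fx = fy = 700, principal point (320, 240)).
K = np.array([[700.0, 0.0, 320.0],
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])

def project_to_pixels(points_cam):
    """Project Nx3 camera-frame points to integer pixel coordinates."""
    uvw = points_cam @ K.T            # homogeneous image coordinates
    uv = uvw[:, :2] / uvw[:, 2:3]     # perspective division
    return np.floor(uv).astype(int)   # digital quantization to pixels

# A foreground point at 10 m and a background point at 50 m lying on almost
# the same viewing ray: after quantization they share one pixel, so the far
# point would be painted with the near object's 2D label.
points = np.array([[1.0, 0.5, 10.0],
                   [5.001, 2.5, 50.0]])
pixels = project_to_pixels(points)
print(pixels)  # both rows quantize to the same pixel (390, 275)
```

This is precisely the mechanism behind the frustum-shaped misclassified regions discussed above: every point inside the frustum of a foreground pixel receives that pixel's label, regardless of depth.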
An interesting phenomenon is that the segmentation from the 3D point clouds (e.g., sub-fig. \\ref{Fig:2DSeg_in_3d}-(b)) performs much better at the boundary of obstacles. However, compared to the 2D image, the category classification from the 3D point cloud often gives worse results (e.g., points in red color) because the detailed texture information has been lost in the point clouds.\n\n\n\n\n\n\n\n{PointPainting, with 2D image} semantic information, has been proven to be effective for the 3D object detection task even with some semantic mistakes. An intuitive idea is to ask whether the final detection performance can be further improved if both the 2D and 3D semantic results are fused together. Based on this idea, we propose a general multi-modal fusion framework \\textit{Multi-Sem Fusion} and try to fuse different types of sensors at the semantic level to improve the final 3D object detection performance. First of all, {we obtain the 2D\/3D semantic information through 2D\/3D parsing approaches which take the images and raw point clouds as inputs. After projecting the point clouds onto the 2D semantic images based on the intrinsic and extrinsic calibration parameters, each point in the point cloud carries two types of semantic information.} Since conflicts between the two semantic results usually happen for a certain point, rather than concatenating the two types of information directly, an AAF strategy is proposed to fuse the different kinds of semantic information at the point or voxel level adaptively based on the learned context features in a self-attention style. Specifically, an attention score is learned for each point or voxel to balance the importance of the two different semantic results.\n \nFurthermore, {in order to detect obstacles with different sizes in an efficient way}, a DFF module is proposed here to fuse the features {at multi-scale receptive fields and to apply a channel attention network to gather related channel information in the feature map}. 
Then the fused features are passed to the following classification and regression heads to generate the final detections. As the results on the nuScenes dataset given in \\figref{Fig:head_figure} (a) show, the proposed modules can robustly boost the detection performance of different baselines. {The results in \\figref{Fig:head_figure} (b) illustrate the contribution of each proposed module,} and from this figure we can find that the detection results are consistently improved as more modules are added gradually. \n\\figref{Fig:head_figure} (c) shows the improvements on the different categories, and from this figure we can easily find that all the classes have been improved, with ``Bicycle'' obtaining the largest improvement (22.3 points) on the nuScenes dataset.\n \n\nThis manuscript is an extension of the previously published conference paper \\cite{xu2021fusionpainting} with a new review of the relevant state-of-the-art methods, new theoretical developments, and new experimental results. Compared to the previous conference paper, the 3D object detection accuracy has been improved by a large margin on the nuScenes 3D object detection benchmark by the submission of this manuscript. Furthermore, we also evaluate the proposed framework on the KITTI 3D object detection dataset and the experimental results on both public datasets prove the superiority of our framework. 
In general, the contributions of this work can be summarized as follows: \n\\begin{enumerate}\n \\item A general multi-modal fusion framework \\textit{Multi-Sem Fusion} is proposed to fuse the different types of sensors at the semantic level to boost the 3D object detection performance.\n \\item Rather than combining different semantic results directly, an adaptive attention-based fusion module is proposed to fuse the different kinds of semantic information at the point or voxel level by learning fusion attention scores.\n \\item Furthermore, a deep feature fusion module is also proposed to fuse deep features at different levels in order to handle objects of various sizes.\n \\item Finally, the proposed framework has been thoroughly evaluated on two public large-scale 3D object detection benchmarks and the experimental results show the superiority of the proposed fusion framework. The proposed method achieves SOTA results on the nuScenes dataset and outperforms other approaches by a large margin. Taking the proposed approach as the baseline, we won the fourth nuScenes detection challenge held at ICRA 2021 on the 3D Object Detection open track \\footnote{\\url{https:\/\/eval.ai\/web\/challenges\/challenge-page\/356\/leaderboard\/1012}}. \n\\end{enumerate}\n\n\n\n\n\n\n \n \n\n\n\n\n\n\n\n\\section{Related Work}\\label{sec:related_work}\n\n\n\nGenerally, 3D object detection methods can be categorized as LiDAR-based, image-based \\cite{ku2019monocular} and multi-modal fusion-based, which take LiDAR point clouds, images and multi-sensor captured data as inputs, respectively. 
\n\n\\subsection{{LiDAR-based 3D Object Detection}}\nThe existing LiDAR-{based} 3D object detection methods {can be generally categorized into three main groups:} projection-based~\\cite{ku2018joint,luo2018fast,yang2018pixor,liang2021rangeioudet,hu2020you}, voxel-based~\\cite{zhou2018voxelnet,yan2018second,kuang2020voxel, yin2020lidar,zhou2019iou} and point-based~\\cite{zhou2020joint,lang2019pointpillars,qi2018frustum}. \nRangeRCNN \\cite{liang2021rangeioudet} proposed a range-{image-based} 3D object detection framework {where} the anchors could be generated on the BEV (bird's-eye-view) map. VoxelNet \\cite{zhou2018voxelnet} is the first {voxel-based} framework, which uses a VFE (voxel feature encoding) layer {to extract the point-level features for each voxel.} {To accelerate the inference speed,} SECOND \\cite{yan2018second} presents a new framework which {employs} sparse convolution \\cite{graham20183d} to replace the heavy 3D convolution operation. {Inspired by the 2D anchor-free object detector CenterNet \\cite{duan2019centernet}, CenterPoint \\cite{yin2021center} presented an anchor-free 3D object detector on LiDAR point clouds and achieved SOTA (state-of-the-art) performance on the nuScenes benchmark recently. To further improve the efficiency,} PointPillars \\cite{lang2019pointpillars} divides the points into vertical columns (e.g., pillars) and extracts features from each pillar with the PointNet \\cite{qi2017pointnet} module. Then, {the feature map} can be regarded as a pseudo image and all the 2D object detection pipelines can be applied for the 3D object detection task. Different from regular representations like voxels or pseudo images, {PointRCNN \\cite{shi2019pointrcnn} directly extracts features from the raw points and generates the 3D proposals from the foreground points. 
Based on PointRCNN, PV-RCNN \\cite{shi2020pv} takes advantage of both the voxel and point representations and learns more discriminative features by employing both a 3D voxel CNN and PointNet-based networks together.}\n\n\n\\begin{figure*}[ht!]\n\t\\centering\n\t\\includegraphics[width=0.85\\textwidth]{Figs\/frame.pdf}\n\t\\centering\n\t\\caption{Overview of the proposed \\textit{Multi-Sem Fusion} framework. We first process the input point clouds and 2D image with 2D and 3D parsing approaches to obtain the semantic {information}. Then, the proposed AAF module is adopted to fuse the two types of data at the semantic level. Furthermore, a DFF module is also proposed to fuse the deep features at different spatial levels to boost the detection of objects of various sizes. Finally, the fused features are sent to the detection heads for producing the final detection results.\n}\n\t\\label{Fig:PointPainting}\n\\end{figure*}\n\n\n\\subsection{{Camera-based 3D Object Detection}}\n3D object detection can be achieved from stereo rigs \\cite{zhou2014modeling} by detecting the 2D object from the image first and recovering the distance {with traditional geometric} stereo matching techniques \\cite{Efficient2010}. {Currently, with sufficient training data}, the depth can be recovered from a single image \\cite{song2021mlda}. Generally, the image-based 3D object detection methods {with deep learning techniques} can be roughly divided into CAD-model-guided, depth-estimation-based and direct-regression-based methods respectively. MANTA (Many-Tasks) \\cite{chabot2017deep} is proposed for many-task vehicle analysis from a given image, which {can simultaneously} output vehicle detection, part localization, visibility characterization, 3D dimension estimation, etc. {In \\cite{song2019apollocar3d}, a large-scale dataset ApolloCar3D is provided for instance-level 3D car understanding. 
Based on this dataset, a key-points-based framework has been designed by extracting the key points first, and then the vehicle pose is obtained using a PnP solver.} Furthermore, a {novel} work \\cite{ku2019monocular} leverages {object} proposals and shape reconstruction {with an end-to-end deep neural network.} With the rapid development of depth estimation technology from {a} single image, {leveraging} the depth information for 3D object detection has become popular. \\cite{wang2019pseudo} and \\cite{qian2020end} {compute the disparity map from a stereo image pair and} convert the depth maps to pseudo-LiDAR point clouds first, and then any point-cloud-based detector can be employed for the 3D object detection task. A similar idea has been employed in \\cite{weng2019monocular}, \\cite{ma2019accurate} and \\cite{cai2020monocular}. Instead of using CAD models or depth estimation, ``SMOKE'' \\cite{liu2020smoke} proposes to regress the 3D bounding box directly from the image and achieves promising performance on the KITTI benchmark. Based on this, \\cite{zhou2020iafa} proposes a plug-and-play module IAFA to aggregate useful foreground information in an instance-aware manner for further improving the detection accuracy.\n\n\n\n\n\\subsection{{Multi-sensors Fusion-based 3D Object Detection}}\n{The LiDAR can provide accurate distance information while the texture information is lost. The camera sensor is just the opposite. In order to utilize the advantages of both sensors, many multi-sensor fusion-based approaches have been proposed. As mentioned in \\cite{nie2020multimodality}, the fusion approaches can be divided into model-based \\cite{muresan2020stabilization} and data-based approaches based on the way of fusing the sensor data. 
Generally, some model-based approaches have been used for tracking \\cite{muresan2019multi} while data-based approaches usually focus on modular environment perception tasks such as object detection \\cite{du2018general,pang2020clocs,chen2017multi}. All fusion-based object detection approaches} can be generally divided into three types: early fusion \\cite{du2018general}, late fusion \\cite{pang2020clocs} and deep fusion \\cite{chen2017multi}. F-PointNet \\cite{qi2018frustum}, PointFusion \\cite{xu2018pointfusion} and \\cite{du2018general} are proposed to generate the object proposals in the 2D image first and then fuse the image features and point cloud for 3D BBox generation. \nPointPainting \\cite{vora2020pointpainting} is proposed to fuse the semantic information from the 2D image into the point cloud to boost 3D point cloud object detectors. For achieving the parsing results, any SOTA approach can be used. AVOD \\cite{ku2018joint}, which is a typical late fusion framework, generates features that can be shared by the RPN (region proposal network) and the second stage refinement network.\nMulti-task fusion \\cite{he2017mask} \\cite{liang2019multi} is also an effective technology, e.g., \\cite{zhou2020joint} joins the semantic segmentation task with the 3D object detection task to further improve the performance. Furthermore, Radar and HDMaps are also employed for improving the detection accuracy. CenterFusion \\cite{nabati2021centerfusion} focuses on the fusion of radar and camera sensors; it associates the radar detections with objects on the image using a frustum-based association method and creates radar-based feature maps to complement the image features in a middle-fusion approach. 
In \\cite{yang2018hdnet} and \\cite{fang2021mapfusion}, the HDMaps are also taken as strong prior information for moving object detection in AD scenarios.\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\\section{Multi-Sem Fusion Framework} \\label{sec:method}\nAn overview of the proposed \\textit{Multi-Sem Fusion} framework is illustrated in \\figref{Fig:PointPainting}. To fully explore the information from different sensors, we advocate fusing them at two different levels. First, the two types of information are early fused with the \\textit{PointPainting} technique by painting the point cloud with both the 2D and 3D semantic parsing results. To handle the {inaccurate segmentation} results, an AAF module is proposed to learn an attention score for the different sensors {for the following fusion step}. By taking the points {together with the fused semantic information} as inputs, deep features can be extracted from the backbone network. Considering that objects of different sizes require features at different levels, a novel DFF module is proposed to enhance the features at different levels, exploring the global context information and the local spatial details in a deep way. As shown in \\figref{Fig:PointPainting}, the proposed framework consists of three main modules: a multi-modal semantic segmentation module, an AAF module, and a DFF module. First of all, any off-the-shelf 2D and 3D scene parsing approaches can be employed to obtain the semantic segmentation from the RGB image and LiDAR point clouds respectively. Then the 2D and 3D semantic information are fused by the AAF module. 
By passing the points {together with the fused semantic labels} into the backbone network, the DFF module is used to {further improve the results by aggregating the features within different receptive fields and applying a channel attention module.}\n \n\n\n\n\\begin{figure*}[ht]\n \\centering\n\t\\includegraphics[width=1\\textwidth]{Figs\/Attention.pdf}\n\t\\caption{The architecture of the proposed AAF module for 2D\/3D semantic fusion. The input points and 2D\/3D semantic results are first utilized to learn attention scores. Then, the raw points or voxels are painted with the 2D\/3D semantic labels adaptively by using the learned attention scores.}\n\t\\label{Fig:attention_module}\n\\end{figure*}\n\\subsection{2D\/3D Semantic Parsing}\n\n\\textit{{2D Image Parsing.}} We can directly get rich texture and color information from 2D images, which can provide complementary information for the 3D point clouds. For acquiring 2D semantic labels, a modern semantic segmentation network is employed for generating pixel-wise segmentation results. More importantly, the proposed framework is agnostic to the specific segmentation model, and any state-of-the-art segmentation approach can be employed here (e.g., \\cite{choudhury2018segmentation,zhao2017pyramid,chen2019hybrid,shelhamer2017fully,Howard_2019_ICCV}, etc). {We employ Deeplabv3+ \\cite{choudhury2018segmentation} for generating the semantic results here.} The network takes 2D images as input and outputs pixel-wise semantic class scores for both the foreground instances and the background. {Assuming that the obtained semantic map is $S \\in \\mathbb{R}^{w\\times h \\times m}$, where $(w, h)$ is the image size and $m$ is the number of categories, the 2D semantic information can be easily re-projected into the 3D point cloud by employing the intrinsic and extrinsic matrices. 
Specifically, by assuming} the intrinsic matrix is $\\mathbf{K} \\in \\mathbb{R}^{3\\times 3}$, the extrinsic matrix is $\\mathbf{M}\\in \\mathbb{R}^{3\\times 4}$ and the original 3D point cloud is $\\mathbf{P}\\in \\mathbb{R}^{N\\times 3}$, the projection of the 3D point cloud from the LiDAR to the camera can be obtained as Eq.\\eqref{eq.projection} shows:\n\\begin{equation} \\label{eq.projection}\n\t {\\mathbf{P}}^{'} = \\text{Proj}(\\mathbf{K}, \\mathbf{M}, \\mathbf{P}),\n\\end{equation} \nwhere $\\mathbf{P}^{'}$ contains the points in the {camera} coordinate frame and ``Proj'' represents the projection process. {By this projection, the parsing results} from the 2D image can be assigned to their corresponding 3D points, which are denoted by $\\mathbf{P}_{2D}\\in \\mathbb{R}^{N\\times m}$.\n\n\n\n\n\n\\textit{{3D Point Cloud Parsing.}} As we have mentioned above, {parsing results from the point clouds can well overcome the boundary blurring effect while keeping the distance information. Similar to the 2D image segmentation, any SOTA 3D parsing approach can be employed here \\cite{zhang2020polarnet,cheng20212,xu2021rpvnet}. We employ Cylinder3D \\cite{cong2021input} for generating the semantic results because of its impressive performance on AD scenarios.}\n{More importantly, the ground truth point-wise semantic annotations can be generated roughly from the 3D object bounding boxes as in \\cite{shi2019pointrcnn}, so no extra semantic annotations are necessary.} Specifically, for foreground instances, the points inside a 3D bounding box are directly assigned the corresponding class label while the points outside all the 3D bounding boxes are taken as the background. From this point of view, the proposed framework can work on any 3D detection benchmark directly without any extra point-wise semantic annotations. After training the network, we obtain the parsing results, which are denoted by $\\mathbf{P}_{3D}\\in \\mathbb{R}^{N\\times m}$. 
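Putting the two parsing streams together, the painting step amounts to projecting each LiDAR point into the image via Eq.\eqref{eq.projection}, gathering that pixel's 2D class scores, and concatenating them with the point's 3D class scores to form the painted point cloud. The following NumPy sketch is a simplified illustration under assumed toy calibration matrices and $m=3$ classes (out-of-view points are only handled by clipping here, which a real implementation would treat more carefully):

```python
import numpy as np

def paint_points(points, sem2d_map, sem3d_scores, K, M):
    """Attach 2D and 3D semantic score vectors to each LiDAR point.

    points:       (N, 3) LiDAR coordinates
    sem2d_map:    (h, w, m) per-pixel class scores from the image segmentor
    sem3d_scores: (N, m) per-point class scores from the 3D segmentor
    K: (3, 3) intrinsic matrix,  M: (3, 4) extrinsic matrix
    """
    N = points.shape[0]
    homog = np.hstack([points, np.ones((N, 1))])  # (N, 4) homogeneous points
    cam = homog @ M.T                             # LiDAR frame -> camera frame
    uvw = cam @ K.T                               # camera frame -> image plane
    uv = (uvw[:, :2] / uvw[:, 2:3]).astype(int)   # quantized pixel coordinates
    h, w, _ = sem2d_map.shape
    u = np.clip(uv[:, 0], 0, w - 1)
    v = np.clip(uv[:, 1], 0, h - 1)
    sem2d_scores = sem2d_map[v, u]                # (N, m) gathered pixel scores
    # Painted point: (x, y, z) plus 2m semantic channels.
    return np.hstack([points, sem2d_scores, sem3d_scores])

# Toy example: identity-like calibration, m = 3 classes.
K = np.array([[100.0, 0, 8.0], [0, 100.0, 8.0], [0, 0, 1.0]])
M = np.hstack([np.eye(3), np.zeros((3, 1))])
pts = np.array([[0.0, 0.0, 5.0]])
sem2d = np.zeros((16, 16, 3)); sem2d[8, 8] = [0.1, 0.7, 0.2]
sem3d = np.array([[0.6, 0.3, 0.1]])
painted = paint_points(pts, sem2d, sem3d, K, M)
print(painted.shape)  # (1, 9): 3 coordinates + 2 * 3 semantic channels
```

The output width $3 + 2m$ matches the fused point cloud of size $N \times (2m+3)$ that enters the AAF module below only in spirit; the actual pipeline performs the fusion at the voxel level as described next.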
\n\n\n\\subsection{Adaptive Attention-based 2D\/3D Semantic Fusion} \\label{subsec:shallow_fusion}\n{As mentioned in the previous work PointPainting, 2D semantic segmentation networks have achieved impressive performance;} however, the {blurring effect at the shape boundary} is also serious due to the limited resolution of the feature map (e.g., $\\frac{1}{4}$ of the original image size). Therefore, {the point clouds painted with 2D semantic segmentation} usually have misclassified regions around the objects' boundaries. For example, the frustum region behind the big truck illustrated in sub-fig.~\\ref{Fig:2DSeg_in_3d}(a) {has been totally misclassified}. On the contrary, the {parsing results} from the 3D point clouds can usually produce a clear and accurate object boundary, e.g., sub-fig.~\\ref{Fig:2DSeg_in_3d}(b). However, the disadvantages of the 3D segmentor are also obvious. One drawback is that, without color and texture information, it is difficult for the 3D segmentor to distinguish categories with similar shapes from the point cloud alone. In order to boost the advantages while suppressing the disadvantages, an AAF module has been proposed to adaptively combine the 2D\/3D semantic segmentation results. Then the fused semantic information is sent to the following 3D object detector's backbone to extract enhanced features and improve the final detection results.\n\n \\begin{figure*}[h]\n\t\\centering\n \t\\includegraphics[width=0.95\\textwidth]{Figs\/DFF.pdf}\n\t\\centering\n\t\\caption{An illustration of the proposed Deep Feature Fusion (DFF) module, which includes a \\textit{fusion} module and a \\textit{channel attention} module. The \\textit{fusion} module includes two branches for producing features with different fields of view.}\n\t\\label{fig:deep_feature_fusion}\n\\end{figure*}\n\n\\textbf{AAF Module.} The detailed architecture of the proposed AAF module is illustrated in Fig.~\\ref{Fig:attention_module}. 
By defining the input point cloud as $\\{\\mathbf{P}_i, i=1,2,\\dots,N\\}$, each point $\\mathbf{P}_i$ contains the $(x, y, z)$ coordinates and other optional information such as intensity. For simplicity, only the coordinates $(x, y, z)$ are considered in the following context. Our goal is to find {an} efficient strategy to fuse the semantic results from {the 2D images and 3D point clouds}. Here, we propose to utilize a learned attention score to {adaptively} fuse the two types of results. Specifically, we first combine the point coordinate attributes $(x, y, z)$ and the 2D\/3D labels with a concatenation operation to obtain a fused point cloud of size $N\\times (2m+3)$. To save memory, the following fusion is executed at the voxel level rather than the point level. Specifically, the point cloud is divided into evenly distributed voxels and each voxel contains a fixed number of points. {Here, we represent the voxels as $\\{ V_i, i=1,2,\\dots,E\\}$, where $E$ is the total number of voxels and each voxel $V_{i} = (\\textbf{P}_i, \\textbf{V}^{i}_{2D}, \\textbf{V}^{i}_{3D}) \\in \\mathbb{R} ^{M \\times(2m+3)}$ contains a fixed number of $M$ points with $2m+3$ features. Here, $\\textbf{P}_i$, $\\textbf{V}^{i}_{2D}$ and $\\textbf{V}^{i}_{3D}$ are the point coordinates and the predicted 2D and 3D semantic vectors at the voxel level, respectively.} A sampling technique is employed to keep the same number of points in each voxel. \nThen, local and global feature aggregation strategies are applied to estimate an attention weight for each voxel, which determines the relative importance of the 2D and 3D semantic labels.\n\nIn order to get local features, a PointNet~\\cite{qi2017pointnet}-like module is adopted to extract the voxel-wise information inside each non-empty voxel. 
For the $i$-th voxel, the local feature can be represented as: \n\\begin{equation} \\label{local_feature}\n V_i = f(p_1, p_2, \\cdots , p_M) =\n \\max_{i^{'}=1,...,M} \\{\\text{MLP}_{l}(p_{i^{'}})\\} \\in \\mathbb{R}^{C_1},\n\\end{equation}\n{where $\\{p_{i^{'}}, i^{'}=1,2,\\dots,M\\}$ indicates the points inside each voxel}. $\\text{MLP}_{l}(\\cdot)$ and $\\max$ denote the local multi-layer perceptron (MLP) network and the max-pooling operation, respectively. Specifically, $\\text{MLP}_{l}(\\cdot)$ consists of a linear layer, a batch normalization layer and an activation layer, and outputs local features with $C_1$ channels. To obtain global feature information, we aggregate information over the $E$ voxels. In particular, we first use $\\text{MLP}_{g}(\\cdot)$ to map each voxel feature from $C_1$ to $C_2$ dimensions. Then, another PointNet-like module is applied on all the voxels as:\n\\begin{equation} \\label{global_feature}\nV_{global} = f(V_1, V_2, \\cdots ,V_E) = \\max_{i=1,...,E} \\{\\text{MLP}_{g}(V_i)\\} \\in \\mathbb{R}^{C_2}.\n\\end{equation}\nFinally, the global feature vector $V_{global}$ is expanded to the number of voxels and concatenated with each local feature $V_i$ to obtain the fused local-global features $V_{gl}\\in \\mathbb{R}^{E \\times{(C_1 + C_2)}}$.\n \n\n{After obtaining} the fused features $V_{gl}$ from the network, we need to estimate an attention score for each point in the voxel. This can be achieved by applying another MLP module $\\text{MLP}_{att}(\\cdot)$ on $V_{gl}$ \nand a Sigmoid activation function $\\sigma(\\cdot)$. Afterwards, we multiply the confidence score by the corresponding one-hot semantic vectors for each voxel, as denoted in Eq. (\\ref{attention_2d}) and Eq. 
(\\ref{attention_3d}):\n\\begin{equation} \\label{attention_2d}\n\t \\mathbf{V}^{i}_{a.S} = \\mathbf{V}^{i}_{2D} \\times \\sigma(\\text{MLP}_{att}(V^{i}_{gl})),\n\\end{equation} \n\\begin{equation}\\label{attention_3d}\n\t \\mathbf{V}^{i}_{a.T} = \\mathbf{V}^{i}_{3D} \\times (1-\\sigma(\\text{MLP}_{att}(V^{i}_{gl}))),\n\\end{equation} \nwhere $\\mathbf{V}^{i}_{2D}$ and $\\mathbf{V}^{i}_{3D}$ are the 2D and 3D semantic segmentation voxel labels, which are encoded in {one-hot} format. {The final semantic vector $V^{i}_{final}$ of each voxel can be obtained by concatenation or element-wise addition of $\\mathbf{V}^{i}_{a.T}$ and $\\mathbf{V}^{i}_{a.S}$.} \n\n\n\n\n\\begin{table*}[ht]\n\\centering\n\\renewcommand\\arraystretch{1.1}\n\\resizebox{0.85\\textwidth}{!}\n{\n\\begin{tabular}{r|c|lll|lll|lll}\n\\hline\n\\multicolumn{1}{r|}{\\multirow{2}{*}{$\\textbf{Methods}$}} &\\multicolumn{1}{c|}{\\multirow{2}{*}{$\\textbf{mAP}$ (Mod.)(\\%)}} &\n\\multicolumn{3}{c|}{$\\textbf{Car}$ $AP_{70}(\\%)$} & \n\\multicolumn{3}{c|}{$\\textbf{Pedestrian}$ $AP_{50}(\\%)$} & \n\\multicolumn{3}{c}{$\\textbf{Cyclist}$ $AP_{50}(\\%)$} \\\\\n{} & {} & {Easy} & {Mod.} & {Hard} &\n{Easy} & {Mod.} & {Hard} &\n{Easy} & {Mod.} & {Hard} \\\\ \\hline \\hline\n{SECOND} & {66.64} & \n{90.04} & {81.22} & {78.22} &\n{56.34} & {52.40} & {46.52} &\n{83.94} & {66.31} & {62.37} \\\\\n{SECOND$^{\\star}$} & {68.11} & \n{91.04} & {82.31} & {79.31} &\n{59.28} & {54.18} & {50.20} &\n{85.11} & {67.52} & {63.36} \\\\\n{Improvement} & \\R{+1.47} & \n\\R{+1.00} & \\R{+1.09} & \\R{+1.09} &\n\\R{+2.94} & \\R{+1.78} & \\R{+3.68} &\n\\R{+1.17} & \\R{+1.54} & \\R{+0.99} \\\\ \\hline \n{PointPillars} & {62.90} &\n{87.59} & {78.17} & {75.23} &\n{53.58} & {47.58} & {44.04} &\n{82.21} & {62.95} & {58.66} \\\\\n{PointPillars$^{\\star}$} & {65.78} &\n{89.58} & {78.60} & {75.63} &\n{60.22} & {54.23} & {49.49} &\n{84.83} & {64.50} & {60.17} \\\\\n{Improvement} & \\R{+2.88} & \n\\R{+1.99} & \\R{+0.43} & \\R{+0.40} &\n\\R{+6.64} & \\R{+6.65} & \\R{+5.45} &\n\\R{+2.62} & \\R{+1.55} & \\R{+1.51} \\\\ \\hline\n{PV-RCNN} & {71.82} &\n{92.23} & {83.10} & {82.42} &\n{65.68} & {59.29} & {53.99} &\n{91.57} & {73.06} & {69.80} \\\\\n{PV-RCNN$^{\\star}$} & {73.95} & \n{91.85} & {84.59} & {82.66} & \n{69.12} & {61.61} & {55.96} & \n{94.90} & {75.65} & {71.03} \\\\ \n{Improvement} & \\R{+2.13} & \n\\G{-0.38} & \\R{+1.49} & \\R{+0.24} &\n\\R{+3.44} & \\R{+2.32} & \\R{+1.97} &\n\\R{+3.33} & \\R{+2.59} & \\R{+1.23} \\\\ \\hline\n\\end{tabular}\n}\n\\caption{3D object detection evaluation on KITTI 
``val'' split on different baseline approaches, where $^{\\star}$ represents the boosted baseline obtained by adding the proposed fusion modules. ``Easy'', ``Mod.'' and ``Hard'' represent the three difficulty levels defined by the official benchmark, and $\\textbf{mAP}$ (Mod.) represents the average $\\textbf{AP}$ of ``Car'', ``Pedestrian'' and ``Cyclist'' on the ``Mod.'' level. For easy understanding, we also highlight the improvements with different colors, where red represents an increase and green represents a decrease compared to the baseline method. This table is best viewed in color.}\n\\label{tab:kitti_3D_detection}\n\\end{table*}\n\n\\begin{table*}[!ht]\n\\centering\n\\renewcommand\\arraystretch{1.1}\n\\resizebox{0.85\\textwidth}{!}\n{\n \\begin{tabular}{r|c| ccc | ccc| ccc}\n \\hline\n \\multirow{2}{*}{$\\textbf{Methods}$} &\\multirow{2}{*}{$\\textbf{mAP}$ (Mod.)(\\%)} &\n \\multicolumn{3}{c|}{$\\textbf{Car}$ $AP_{70}(\\%)$} &\n \\multicolumn{3}{c|}{$\\textbf{Pedestrian}$ $AP_{50}(\\%)$} & \n \\multicolumn{3}{c}{$\\textbf{Cyclist}$ $AP_{50}(\\%)$} \\\\\n {} & {} & {Easy} & {Mod.} & {Hard} & \n {Easy} & {Mod.} & {Hard} & \n {Easy} & {Mod.} & {Hard} \\\\ \\hline \\hline \n {SECOND} & {71.95} \n & {92.31} & {88.99} & {86.59} \n & {60.5} & {56.21} & {51.25} \n & {87.30} & {70.65} & {66.63} \\\\\n {SECOND$^{\\star}$} & {73.72} \n & {94.70} & {91.62} & {88.35} \n & {63.95} & {59.66} & {55.81} \n & {91.95} & {73.28} & {67.75} \\\\\n {Improvement} & \\RED{{+2.90}} \n & \\RED{+2.39} & \\RED{+2.63} & \\RED{+1.76} \n & \\RED{+3.45} & \\RED{+3.45} & \\RED{+4.56} \n & \\RED{+4.65} & \\RED{+2.63} & \\RED{+1.12} \\\\ \\hline\n {PointPillars} & {69.18} \n & {92.50} & {87.80} & {87.55} \n & {58.58} & {52.88} & {48.30} \n & {86.77} & {66.87} & {62.46} \\\\\n {PointPillars$^{\\star}$} & {72.13} \n & {94.39} & {87.65} & {89.86} \n & {64.84} & {59.57} & {55.16} \n & {89.55} & {69.18} & {64.65} \\\\\n {Improvement} & \\RED{{+2.95}} \n & \\RED{+1.89} & \\GRE{-0.15} & \\RED{+2.31} 
\n & \\RED{+6.26} & \\RED{+6.69} & \\RED{+6.86} \n & \\RED{+2.78} & \\RED{+2.31} & \\RED{+2.19} \\\\ \\hline\n {PV-RCNN} & {76.23} \n & {94.50} & {90.62} & {88.53} \n & {68.67} & {62.49} & {58.01} \n & {92.76} & {75.59} & {71.06} \\\\\n {PV-RCNN$^{\\star}$} & {78.17} \n & {94.86} & {90.87} & {88.88} \n & {71.99} & {64.71} & {59.01} \n & {96.35} & {78.93} & {74.51} \\\\\n {Improvement} & \\RED{{+1.94}} \n & \\RED{+0.36} & \\RED{+0.25} & \\RED{+0.35}\n & \\RED{+3.32} & \\RED{+2.22} & \\RED{+1.00} \n & \\RED{+3.59} & \\RED{+3.34} & \\RED{+3.45} \\\\ \\hline\n \\end{tabular}\n}\n\\caption{Evaluation of bird's-eye view object detection on KITTI ``val'' split with different baselines. Similar to the 3D detection, red represents an increase and green represents a decrease compared to the baseline method. This table is also best viewed in color.}\n\\label{tab:KITTI_BEV_detection}\n\\end{table*}\n\n\n\n\n\\subsection{Deep Feature Fusion Module} \\label{subsec:deep_fusion}\nIn AD scenarios, we not only need to know what an object is but also where it is, because both are very important for the downstream planning and control modules. In typical object detection frameworks, these two aspects correspond to the classification and regression branches respectively. Empirically, global context information is important for recognizing the specific class attributes. On the contrary, the regression branch, which estimates the object's attributes (e.g., dimension, orientation, and precise location), pays more attention to detailed spatial information around the ROI (region of interest) in a relatively small range. Therefore, receptive fields of various scales are necessary for accurately detecting objects of different sizes. This issue has been considered in most object detection frameworks. 
However, how to exploit various receptive fields in an efficient way is vitally important.\n\nTo address this issue, a specific DFF module has been proposed to aggregate both the high-level and low-level features with different receptive fields. An illustration of the DFF module is shown in \\figref{fig:deep_feature_fusion}. First, the backbone features $\\textbf{F}^{0}_B$ from the feature extractor pass through \\textit{Conv\\_block1} with several convolution layers to obtain $\\textbf{F}^{1}_B$ as a basic feature.\nHere, \\textit{Conv\\_block1} has four $conv$ modules: the first takes $C$ channels as input and outputs 128 channels, and the following three share the same input and output channels. Each $conv$ module in \\figref{fig:deep_feature_fusion} consists of one $Conv2d$, a batch normalization layer, and a Rectified Linear Unit (ReLU) activation layer. For easy understanding, the stride and kernel size of each $conv$ operation are given at the bottom of \\figref{fig:deep_feature_fusion}. Then, the feature $\\textbf{F}^{1}_B$ passes through two branches to obtain features with different receptive fields. In one branch, the features are first down-sampled to $1\/2$ size with \\textit{Conv\\_block2} and then pass the $Conv2$ operation; finally, the outputs are up-sampled into the feature map $F_{R}^{L}\\in \\mathbb{R}^{H \\times W \\times C' }$ with $Deconv2$. In the other branch, $F^{1}_{B}$ passes $Conv3$ and $Conv4$ consecutively to obtain the features $F_{R}^{S}$. The shape of the output $F_{R}^{L}$ is the same as that of $F_{R}^{S}$; in particular, $C'$ can be equal to $C$.\nAfter adding the high-level and low-level features element-wise, a channel-attention (CA) module \\cite{fu2019dual} is employed to further fuse them. The CA module selectively emphasizes interdependent channel maps by integrating relative features among all channel maps, so the fused feature $\\textbf{F}^{F}_E$ can be refined in a deep way. 
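The two-branch data flow above can be sketched at the shape level as follows. Average pooling and nearest-neighbour upsampling stand in for the down-sampling block and $Deconv2$, and the channel attention is reduced to a softmax weighting over per-channel means, so this is only a structural sketch of the DFF computation, not the actual convolutional implementation:

```python
import numpy as np

def down2(x):
    # stride-2 average pooling stands in for the down-sampling conv block
    h, w, c = x.shape
    return x.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

def up2(x):
    # nearest-neighbour upsampling stands in for Deconv2
    return x.repeat(2, axis=0).repeat(2, axis=1)

def channel_attention(x):
    # toy stand-in for the CA module: softmax weights from channel means
    w = np.exp(x.mean(axis=(0, 1)))
    return x * (w / w.sum())

def dff(F1_B):
    """Structural sketch of the DFF module on an H x W x C feature map."""
    F_L = up2(down2(F1_B))               # large receptive-field branch
    F_S = F1_B                           # small receptive-field branch
    return channel_attention(F_L + F_S)  # element-wise add, then CA
```

The sketch only preserves the shapes: both branches produce $H \times W$ maps that are added element-wise before the channel weighting.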
Finally, $\\textbf{F}^{G}_E$ is taken as the input for the following classification and regression heads. \n\n\n\n \n\n\n\n\n\n\n\n\n\\subsection{3D Object Detection Framework} \\label{subsec:3d_detector}\n\n{The proposed AAF and DFF modules are }detector-independent, and any off-the-shelf 3D object detector can be directly employed as the baseline of our proposed framework. The 3D detector receives the points or voxels produced by the AAF module as inputs and can keep its backbone structure unchanged to {obtain the backbone features. Then, the backbone features are boosted by passing through the proposed DFF module. Finally, the detection results are generated from the classification and regression heads.}\n\n\n\n\n\n\n\n\n\n\\section{Experimental Results} \\label{sec:experiments}\n\nTo verify the effectiveness of our proposed framework, we evaluate it on two large-scale 3D object detection datasets for AD scenarios, KITTI \\cite{geiger2012we} and nuScenes \\cite{caesar2020nuscenes}. Furthermore, {the proposed modules are also evaluated on different kinds of baselines} to verify their generalizability, including SECOND \\cite{yan2018second}, PointPillars \\cite{lang2019pointpillars} and PV-RCNN \\cite{shi2020pv}.\n\n\\subsection{Evaluation on KITTI Dataset} \\label{subsec:eval_kitti}\n \n\\textit{KITTI} is one of the most popular benchmarks for 3D object detection in AD, which contains 7481 samples for training and 7518 samples for testing. The objects in each class are divided into three difficulty levels, ``easy'', ``moderate'', and ``hard'', according to the object size, the occlusion ratio, and the truncation level. Since the ground truth annotations of the test samples are not available and the access to the test server is limited, we follow the idea in~\\cite{chen2017multi} and split the training data into ``train'' and ``val'' sets, which contain 3712 and 3769 samples respectively. 
In this dataset, both the LiDAR point clouds and the RGB images have been provided. In addition, both the intrinsic parameters and extrinsic parameters between different sensors have been given.\n\n\n\n\\begin{table*}[ht!]\n\\centering\n\\normalsize\n\\resizebox{1\\textwidth}{!}\n{\n\t\\begin{tabular}{l | c | c | c c c c c c c c c c}\n\t\\hline\n {\\multirow{2}{*}{Methods}} &\n {\\multirow{2}{*}{\\textbf{NDS} (\\%)}} &\n {\\multirow{2}{*}{\\textbf{mAP} (\\%)}} & \n \\multicolumn{10}{c}{\\textbf{AP} (\\%)} \\\\\n {} & {} & {} & {Car} & {Pedestrian} &{Bus} &{Barrier} & {T.C.} &\n {Truck} & {Trailer} & {Moto.} & {Cons.} &\n {Bicycle} \\\\ \\hline \\hline\n \n \n\tSECOND \\cite{yan2018second} &61.96 &50.85 &81.61 &77.37 & 68.53 &57.75 &56.86 &51.93 &38.19 & 40.14 &17.95 & 18.16 \\\\\n\tSECOND$^{\\ast}$ & 67.61 & 62.61 & 84.77 & 83.36 &72.41 &63.67 &74.99 &60.32 &42.89 &66.50 &23.59 &54.62 \\\\\n\tImprovement $\\uparrow$ & \\textcolor{red}{\\textbf{+5.65}} & \\textcolor{red}{\\textbf{+11.76}} & \\textcolor{red}{+3.16} & \\textcolor{red}{+5.99} & \\textcolor{red}{+3.88} & \\textcolor{red}{+5.92} & \\textcolor{red}{+18.13} & \\textcolor{red}{+8.39} & \\textcolor{red}{+4.70} & \\textcolor{red}{+26.36} & \\textcolor{red}{+5.64}& \\textcolor{red}{+36.46} \\\\\n \\hline\n\tPointPillars \\cite{lang2019pointpillars} & 57.50 &43.46 &80.67 &70.80 &62.01 &49.23 &44.00 &49.35 &34.86 &26.74 &11.64 &5.27 \\\\\n\tPointPillars$^{\\ast}$ &66.43 & 61.75 &84.79 &83.41 &70.52 &59.42 &75.43 &57.50 &42.32 &66.97 &22.43 &54.68 \\\\\n\tImprovement $\\uparrow$ & \\textcolor{red} {\\textbf{+8.93}} & \\textcolor{red}{\\textbf{+18.29}} & \n\t\\textcolor{red}{+4.12} & \\textcolor{red}{+12.61} & \\textcolor{red}{+8.51 } & \\textcolor{red}{ +10.19} & \\textcolor{red}{+31.43} & \\textcolor{red}{+8.15} & \\textcolor{red}{+7.46} & \\textcolor{red}{+40.23} & \\textcolor{red}{+10.79} & \\textcolor{red}{+49.41} \\\\\n \\hline\n\tCenterPoint \\cite{yin2021center} & 64.82 &56.53 &84.73 &83.42 &66.95 &64.56 
&64.76 &54.52 &36.13 &56.81 &15.81 &37.57\\\\\n\tCenterPoint$^{\\ast}$ &69.91 &65.85 &87.00 &87.95 & 71.53 &67.14 &79.89 &61.91 & 41.22 & 73.85 &23.97 &64.06\\\\\n\tImprovement $\\uparrow$ & \\textcolor{red}{\\textbf{+5.09}} & \\textcolor{red}{\\textbf{+9.32}} & \\textcolor{red}{+2.27} & \\textcolor{red}{+4.53} & \\textcolor{red}{+4.58} & \\textcolor{red}{+2.58} & \\textcolor{red}{+15.13} & \\textcolor{red}{+7.39} & \\textcolor{red}{+5.09} & \\textcolor{red}{+17.04} & \\textcolor{red}{+8.16}&\\textcolor{red}{+26.49} \\\\\n \\hline\n\t\\end{tabular}\n}\n\\caption{\\normalfont Evaluation results on the nuScenes validation dataset. ``NDS'' and ``mAP'' mean nuScenes detection score and mean Average Precision. ``T.C.'', ``Moto.'' and ``Cons.'' are short for ``traffic cone'', ``motorcycle'', and ``construction vehicle'' respectively. ``$\\ast$'' denotes the improved baseline obtained by adding the proposed fusion modules. The red color represents an increase compared to the baseline. This table is best viewed in color.}\n\\label{tab:eval_on_nuscenes_val}\n\\end{table*}\n\n\n\n\n\\noindent\\textbf{Evaluation Metrics.} {We follow the official metrics provided by KITTI for evaluation. ${AP}_{70}$ is used for the ``Car'' category while ${AP}_{50}$ is used for ``Pedestrian'' and ``Cyclist''. }\nSpecifically, before the publication of \\cite{simonelli2019disentangling}, the KITTI official benchmark used 11 recall positions for comparison. After that, the official benchmark changed the evaluation criterion from 11 points to 40 points because the latter has been proven more stable than the former \\cite{simonelli2019disentangling}. Therefore, we use the 40-point criterion for all the experiments here. In addition, similar to \\cite{vora2020pointpainting}, the average AP (mAP) of the three classes on the ``Moderate'' level is also taken as an indicator for evaluating the average performance over all three classes. 
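The 40-point criterion averages the interpolated precision at the recall positions $1/40, 2/40, \dots, 1$; a minimal sketch of this computation (the function name is ours) could look as follows:

```python
import numpy as np

def ap_r40(recall, precision):
    """AP over 40 recall positions: at each r in {1/40, ..., 40/40} take
    the interpolated precision max over all PR points with recall >= r."""
    ap = 0.0
    for r in np.linspace(1.0 / 40, 1.0, 40):
        mask = recall >= r
        ap += (precision[mask].max() if mask.any() else 0.0) / 40.0
    return ap
```

A perfect precision-recall curve yields an AP of 1, and a constant precision of $p$ yields an AP of $p$.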
\n\n\\noindent\\textbf{Baselines.} Three different baselines have been used for evaluation on KITTI: \n\\begin{enumerate}\n \\item \\textit{SECOND \\cite{yan2018second}} is the first to employ sparse convolution in a voxel-based 3D object detection framework to accelerate LiDAR-based 3D object detection. \n \n \\item \\textit{PointPillars \\cite{lang2019pointpillars}} {further improves the detection efficiency by dividing the point cloud into vertical pillars rather than voxels. For each pillar, a \\textit{Pillar Feature Net} is applied to extract the point-level features.}\n \n \\item\\textit{PV-RCNN \\cite{shi2020pv}} is a hybrid point-voxel-based 3D object detector, which can utilize the advantages of both the point and voxel representations.\n\\end{enumerate}\n\n\\noindent\\textbf{Implementation Details.} DeeplabV3+ \\cite{deeplabv3plus2018} and Cylinder3D \\cite{zhu2021cylindrical} are employed for 2D and 3D scene parsing respectively. Specifically, DeeplabV3+ is pre-trained on Cityscapes\\footnote{\\url{https:\/\/www.cityscapes-dataset.com}}, and Cylinder3D is trained on the KITTI point clouds by taking the points inside the 3D ground truth bounding boxes as foreground annotations. For the AAF module, we set $m = 4$, $C_1 = 64$ and $C_2 = 128$. {The voxel sizes for PointPillars and SECOND are $0.16m \\times 0.16m \\times 4m$ and $0.05m \\times 0.05m \\times 0.1m$} respectively. In addition, both baselines use the same optimizer (e.g., AdamW) and learning strategy (e.g., one-cycle) for all the experiments.\n{In the DFF module, we set $C = 256$ and $C' = 512$ for the \\textit{SECOND} and \\textit{PV-RCNN} frameworks and $C = 64$ and $C' = 384$ for the \\textit{PointPillars} network. 
The kernel size and stride are represented by $k$ and $s$ in Fig.~\\ref{fig:deep_feature_fusion}, respectively.}\n\n{The proposed approach is implemented with PaddlePaddle \\cite{PaddlePaddle_2019} and all the methods are trained on 8 NVIDIA Tesla V100 GPUs. AdamW is taken as the optimizer and the one-cycle learning strategy is adopted for training the network. For \\textit{SECOND} and \\textit{PointPillars}, the batch size is 4 per GPU and the maximum learning rate is 0.003, while the batch size is 2 per GPU and the maximum learning rate is 0.001 for \\textit{PV-RCNN}.}\n\n\n\n \n\n\\noindent\\textbf{Quantitative Evaluation.} We illustrate the results on 3D detection and bird's-eye view detection in \\tabref{tab:kitti_3D_detection} and \\tabref{tab:KITTI_BEV_detection} respectively.\n{From the tables, we can clearly see that remarkable improvements have been achieved on the three baselines across all the categories.}\nTaking PointPillars as an example, the proposed method achieves improvements of 0.43, 6.65 and 1.55 points on ``Car'', ``Pedestrian'' and ``Cyclist'' on the ``Moderate'' level respectively. Interestingly, ``Pedestrian'' and ``Cyclist'' benefit much more from the fusion painting modules than ``Car''. 
\n\n\\begin{table*}[ht!]\n \\centering\n \\Large\n \n\\resizebox{0.9\\textwidth}{!}{\n\n\\begin{tabular}{ l| c| c| c| c c c c c c c c c c}\n \\hline\n {\\multirow{2}{*}{Methods}} & \n {\\multirow{2}{*}{Modality}} & {\\multirow{2}{*}{\\textbf{NDS}(\\%)}} & \n {\\multirow{2}{*}{\\textbf{mAP}(\\%)}} & \n \\multicolumn{10}{c}{\\textbf{AP} (\\%)} \\\\ \n {} & {} & {} & {} & {Car} & {Truck} & {Bus} & {Trailer} & {Cons} & {Ped} & {Moto} & {Bicycle} & {T.C} & {Barrier} \\\\ \\hline \\hline\n PointPillars \\cite{vora2020pointpainting} & L & 55.0 & 40.1 & 76.0 & 31.0 & 32.1 & 36.6 & 11.3 & 64.0 & 34.2 & 14.0 & 45.6 & 56.4 \\\\\n 3DSSD \\cite{yang20203dssd} & L & 56.4 & 46.2 & 81.2 & 47.2 & 61.4 & 30.5 & 12.6 & 70.2 & 36.0 & 8.6 & 31.1 & 47.9 \\\\\n CBGS \\cite{zhu2019class} & L & 63.3 & 52.8 & 81.1 & 48.5 & 54.9 & 42.9 & 10.5 & 80.1 & 51.5 & 22.3 & 70.9 & 65.7 \\\\\n HotSpotNet \\cite{chen2020object} & L & 66.0 & 59.3 & 83.1 & 50.9 & 56.4 & 53.3 & {23.0} & 81.3 & {63.5} & 36.6 & 73.0 & 71.6 \\\\\n CVCNET\\cite{chen2020every} & L & 66.6 & 58.2 & 82.6 & 49.5 & 59.4 & 51.1 & 16.2 & {83.0} & 61.8 & {38.8} & 69.7 & 69.7 \\\\\n CenterPoint \\cite{yin2021center} & L & {67.3} & {60.3} & {85.2} & {53.5} & {63.6} & {56.0} & 20.0 & 54.6 & 59.5 & 30.7 & {78.4} & 71.1 \\\\\n PointPainting \\cite{vora2020pointpainting} & L \\& C & 58.1 & 46.4 & 77.9 & 35.8 & 36.2 & 37.3 & 15.8 & 73.3 & 41.5 & 24.1 & 62.4 & 60.2 \\\\\n 3DCVF \\cite{DBLP:conf\/eccv\/YooKKC20} & L \\& C & 62.3 & 52.7 & 83.0 & 45.0 & 48.8 & 49.6 & 15.9 & 74.2 & 51.2 & 30.4 & 62.9 & 65.9 \\\\ \\hline\n \n {Our proposed} & L \\& C & \\textbf{71.6} & \\textbf{68.2} & \\textbf{87.1} & \\textbf{60.8} & \\textbf{66.5} & \\textbf{61.7} & \\textbf{30.0} & \\textbf{88.3} & \\textbf{74.7} & \\textbf{53.5} & \\textbf{85.0} & \\textbf{71.8} \\\\ \n \n \\hline\n \\end{tabular}\n\n}\n\\caption{Comparison with other SOTA methods on the nuScenes 3D object detection testing benchmark. 
``L'' and ``C'' in the modality column represent the LiDAR and camera sensors respectively. For easy understanding, the highest score in each column is shown in bold font. To be clear, only results with publications are listed here.}\n\\label{tab:test_tabel}\n\\end{table*}\n\n\n\\subsection{Evaluation on the nuScenes Dataset} \\label{subsec:nuScenes}\nnuScenes \\cite{caesar2020nuscenes} is a recently released large-scale AD benchmark (with a total of 1,000 scenes) that provides different kinds of information including LiDAR point clouds, radar points, camera images, and high-definition maps. For a fair comparison, the dataset has been officially divided into three subsets, ``train'', ``val'', and ``test'', which include 700 scenes (28130 samples), 150 scenes (6019 samples), and 150 scenes (6008 samples) respectively. Objects are annotated in the LiDAR coordinate system and projected into different sensors' coordinates with pre-calibrated intrinsic and extrinsic parameters. For the point cloud stream, only the keyframes (2 fps) are annotated. With a 32-beam LiDAR scanner, each frame contains about {300,000} points covering a 360-degree viewpoint. For the object detection task, the obstacles have been categorized into 10 classes, such as ``car'', ``truck'', ``bicycle'' and ``pedestrian''. Besides the point clouds, the corresponding RGB images are also provided for each keyframe, captured by six cameras covering a 360-degree field of view.\n\n\n\\noindent\\textbf{Evaluation Metrics.} The evaluation metrics for nuScenes are totally different from KITTI's: mean Average Precision (mAP) and the nuScenes detection score (NDS) are used as the main metrics. Different from the original mAP defined in \\cite{everingham2010pascal}, nuScenes considers the BEV center distance with thresholds of \\{0.5, 1, 2, 4\\} meters, instead of the IoUs of bounding boxes. 
NDS is a weighted sum of mAP and other metric scores, such as the average translation error (ATE) and average scale error (ASE). For more details about the evaluation metrics, please refer to \\cite{caesar2020nuscenes}.\n\n\\noindent\\textbf{Baselines.} We have integrated the proposed modules into three different SOTA baselines to verify their effectiveness. Similar to the KITTI dataset, both \\textit{SECOND} and \\textit{PointPillars} are employed, and the third detector is \\textit{CenterPoint} \\cite{yin2021center}, which is the first anchor-free 3D object detector for LiDAR-based 3D object detection. \n\n\n\\noindent\\textbf{Implementation Details.} HTCNet \\cite{chen2019hybrid} and Cylinder3D \\cite{zhu2021cylindrical} are employed here for obtaining the 2D and 3D semantic segmentation results respectively. We use the HTCNet model trained on the nuImages\\footnote{\\url{https:\/\/www.nuscenes.org\/nuimages}} dataset directly for generating the semantic labels. For Cylinder3D, we train it directly on the nuScenes 3D object detection dataset, where the point cloud semantic labels are produced by taking the points inside each bounding box as foreground. In the AAF module, we set $m = 11$, $C_1 = 64$ and $C_2 = 128$. \n{In the DFF module, $C = 256$ and $C' = 512$ for \\textit{SECOND} and \\textit{CenterPoint}, while $C = 64$ and $C' = 384$ for \\textit{PointPillars}. The settings for kernel size $k$ and stride $s$ are given in Fig.~\\ref{fig:deep_feature_fusion} and are the same for all three detectors.} The voxel sizes for PointPillars, SECOND and CenterPoint are $0.2m \\times 0.2m \\times 8 m$, $0.1m \\times 0.1m \\times 0.2m$ and $0.075m \\times 0.075m \\times 0.075m$, respectively. We use AdamW \\cite{ loshchilov2019decoupled} as the optimizer with a maximum learning rate of 0.001. 
Following \\cite{caesar2020nuscenes}, 10 previous LiDAR sweeps are stacked into the keyframe to make the point clouds denser.\n\n\n\n{All the baselines are trained on NVIDIA Tesla V100 (8 GPUs) with a batch size of 4 per GPU for 20 epochs. AdamW is taken as the optimizer and the one-cycle learning strategy is adopted for training the network, with a maximum learning rate of 0.001.} \n\n\n\n\n\n\n\\noindent\\textbf{Quantitative Evaluation.} \n{The proposed framework has been evaluated on the nuScenes benchmark for both the ``val'' and ``test'' splits.} {The comparison results with the three baselines are given in Tab.~\\ref{tab:eval_on_nuscenes_val}. From this table, we can see that significant improvements have been achieved on both $\\text{mAP}$ and $\\text{NDS}$ across all three baselines. For \\textit{PointPillars}, the proposed modules give \\textbf{8.93} and \\textbf{18.29} points improvements on $\\text{NDS}$ and $\\text{mAP}$ respectively. For \\textit{SECOND}, the two values are \\textbf{5.65} and \\textbf{11.76} respectively. Even for the strong baseline \\textit{CenterPoint}, the proposed modules can still give \\textbf{5.09} and \\textbf{9.32} points improvements.}\nIn addition, we also find that the {categories with small sizes} such as ``Traffic Cone'', ``Moto'' and ``Bicycle'' receive more improvements than other categories. Taking ``Bicycle'' as an example, the $\\text{mAP}$ has been improved by {36.46}, {49.41} and {26.49} points compared to \\textit{SECOND}, \\textit{PointPillars} and \\textit{CenterPoint} respectively. {This can be explained by the fact that categories with small sizes are hard to recognize in point clouds because only a few LiDAR points fall on them. In this case, the semantic information from the 2D\/3D parsing results is extremely helpful for improving the 3D object detection performance. It should be emphasized that the result in Tab. 
\\ref{tab:eval_on_nuscenes_val} are obtained without any test-time augmentation (TTA) strategies.}\n\n\nTo compare the proposed framework with other SOTA methods, we submit our results (adding the fusion modules to \\textit{CenterPoint}) to the nuScenes evaluation server\\footnote{\\url{https:\/\/www.nuscenes.org\/object-detection\/}} for the test split. The detailed results are given in Tab. \\ref{tab:test_tabel}. From this table, we can find that the proposed method achieves the best performance on both the $\\text{mAP}$ and $\\text{NDS}$ scores. Compared to the baseline \\textit{CenterPoint}, 4.3 and 7.9 points of improvement have been achieved by adding our proposed fusion modules. For easy understanding, we have highlighted the best performance in bold in each column. It should be noted that for the results in Tab. \\ref{tab:test_tabel}, a test-time augmentation (TTA) strategy including multiple flip and rotation operations is applied during inference, which yields 1.69 points more than our original method without TTA.\n\n \n \n\n\n\\subsection{Ablation Studies} \\label{subsec:ablation_study}\n\n\n{To verify the effectiveness of the different modules, a series of ablation experiments have been designed to analyze the contribution of each component to the final detection results. 
Specifically, three types of experiments are presented: \\textit{different semantic representations}, \\textit{different fusion modules} and \\textit{effectiveness of channel attention}.\n}\n\n\\begin{table}[ht!]\n\\centering\n\\small\n\\renewcommand\\arraystretch{1.1}\n\\resizebox{0.4\\textwidth}{!}\n{ \n \\begin{tabular}{r | cc}\n \\hline\n $\\textbf{Representations}$ & $\\textbf{mAP}$ (\\%) & $\\textbf{NDS}$ (\\%) \\\\ \\hline\n {PointPillars \\cite{lang2019pointpillars}} & 43.46 & 57.50 \\\\ \n {Semantic ID} & 50.96 (+7.50) & 60.69 (+3.19) \\\\ \n {Onehot Vector} & 52.18 (+8.72) & 61.59 (+4.09) \\\\ \n {Semantic Score} & 53.10 (+9.64) & 62.20 (+4.70) \\\\ \\hline\n \\end{tabular}\n}\n\\caption{Ablation studies on the nuScenes \\cite{caesar2020nuscenes} dataset for fusing the semantic results with different representations.}\n\\label{tab:ablation_dif_semantic_reps}\n\\end{table}\n\n\\noindent\\textbf{Different Semantic Representations.} First of all, we investigate the influence of different representations of the semantic results on the final detection performance. Three representations, ``Semantic ID'', ``Onehot Vector'' and ``Semantic Score'', are considered here. For ``Semantic ID'', the numeric class ID is used directly, and for ``Semantic Score'', the predicted probabilities after the Softmax operation are used. To convert the semantic scores to a ``Onehot Vector'', we set the class with the highest score to ``1'' and all other classes to ``0''. Here, we simply concatenate the semantic feature to the original $(x, y, z)$ coordinates, and \\textit{PointPillars} is taken as the baseline on the nuScenes dataset due to its fast model iteration. From the results in \\tabref{tab:ablation_dif_semantic_reps}, we can easily find that the 3D semantic information significantly boosts the final object detection performance regardless of the representation. 
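For concreteness, the three representations above can be sketched in a few lines of NumPy (a minimal illustration under our own naming; `logits` stands for the per-point class scores produced by the segmentation network):

```python
import numpy as np

# Sketch: three ways of attaching per-point semantic results to the
# (x, y, z) coordinates, as compared in the ablation.
def encode(points_xyz, logits, mode):
    # softmax over classes -> the "Semantic Score" probabilities
    probs = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs /= probs.sum(axis=1, keepdims=True)
    ids = probs.argmax(axis=1)
    if mode == "id":          # "Semantic ID": the class index itself
        feat = ids[:, None].astype(np.float32)
    elif mode == "onehot":    # "Onehot Vector": 1 for the top class only
        feat = np.eye(logits.shape[1], dtype=np.float32)[ids]
    else:                     # "Semantic Score": full probability vector
        feat = probs.astype(np.float32)
    return np.concatenate([points_xyz, feat], axis=1)
```

The ``Semantic Score'' variant keeps the confidence of every class rather than collapsing to a single label, which is consistent with the observation that it carries the most information.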
More specifically, ``Semantic Score'' achieves the best performance among the three, giving 9.64 and 4.70 points of improvement on $\\textbf{mAP}$ and $\\textbf{NDS}$ respectively. We conjecture that the semantic score provides more information than the other two representations because it encodes not only the class ID but also the confidence for all classes. \n\n\\iffalse\n\\begin{table}[ht!]\n\\centering\n\\resizebox{0.5\\textwidth}{!}\n{%\n \\begin{tabular}{r | cccc}\n \\hline\n $\\textbf{Strategies}$ & $\\textbf{Car} (Mod.)$ & $\\textbf{Pedestrian}$ (Mod.) & $\\textbf{Cyclist}$ (Mod.) \\\\\n {PointPillar \\cite{lang2019pointpillars}} & 78.17 & 47.58 & 62.95 \\\\ \n {Semantic ID.} & 78.42(+0.25) & 48.94(+1.36) & 57.88(-5.07) \\\\ \n {Onehot Vector} & 78.55(+0.35) & 50.15(+2.57) & 54.14(-8.81) \\\\ \n {Semantic Score} & 79.08(+0.91) & 51.41(+3.93) & 61.25(-1.70) \\\\ \\hline\n \\end{tabular}\n}\n\\caption{Ablation studies on the public KITTI \\cite{geiger2012we} dataset. The way use 3D semantic info. To be clear, only the results of the ``Moderate'' category have been given and the mAP represents the mean AP of ``Car'', ``Pedestrian'' and ``Cyclist''.}\n\\label{tab:ablation_kitti_table}\n\\end{table}\n\\fi\n\n\n\\noindent\\textbf{Different Fusion Modules.} We also conduct experiments to analyze the performance of each fusion module on both the KITTI and nuScenes datasets. \\textit{SECOND} is chosen as the baseline for KITTI, while all three detectors are verified on nuScenes. \nThe results are given in \\tabref{tab:ablation_kitti_table} and \\tabref{tab:ablation_nuScene_table} for KITTI and nuScenes respectively. To be clear, ``3D Sem.'' and ``2D Sem.'' represent the 3D and 2D parsing results, respectively. 
``AAF'' represents fusing the 2D and 3D semantic information with the proposed adaptive attention module, and ``AAF \\& DFF'' represents using both fusion strategies together.\n\n\\begin{table}[hb!]\n\\centering\n\\small\n\\resizebox{0.5\\textwidth}{!}\n{%\n \\begin{tabular}{r | ccc|l}\n \\hline\n\\multirow{2}{*}{Strategies} & \\multicolumn{3}{c|}{AP(\\%)} & \\multirow{2}{*}{mAP(\\%)} \\\\\n & \\multicolumn{1}{l}{Car(Mod.)} & \\multicolumn{1}{l}{Ped.(Mod.)} & \\multicolumn{1}{l|}{Cyc.(Mod.)} & \\\\ \\hline\n {SECOND \\cite{yan2018second}} & 88.99 & 56.21 & 70.65 & 71.95 \\\\\n {3D Sem.} & + 0.81 & + 0.70 & + 0.63 & + 0.71 \\\\ \n {2D Sem.} & + 1.09 & + 1.35 & + 1.20 & + 1.41 \\\\ \n {AAF} & + 1.62 & + 1.78 & + 1.91 & + 1.77 \\\\ \n {AAF \\& DFF} & + 2.63 & + 3.45 & + 2.63 & + 2.90 \\\\ \\hline\n \\end{tabular}\n}\n\\caption{Evaluation of ablation studies on the public KITTI \\cite{geiger2012we} dataset. Similar to PointPainting \\cite{vora2020pointpainting}, we provide the 2D BEV detection results here. 
To be clear, only the results at the ``Moderate'' difficulty are given and \\textbf{mAP} is the average over the three categories.}\n\\label{tab:ablation_kitti_table}\n\\end{table}\n\n\n\\begin{table}[ht!]\n\\centering\n\\setlength{\\tabcolsep}{2pt}\n\\renewcommand\\arraystretch{1.1}\n\\resizebox{0.5\\textwidth}{!}\n{%\n \\begin{tabular}{r | cc | cc |cc}\n \\hline\n \\multicolumn{1}{r|}{\\multirow{2}{*}{\\textbf{Strategies}}} & \n \\multicolumn{2}{c|}{\\textbf{SECOND}} & \n \\multicolumn{2}{c|}{\\textbf{PointPillars}} & \n \\multicolumn{2}{c}{\\textbf{CenterPoint}} \\\\\n {} & \\textbf{mAP}(\\%) & \\textbf{NDS}(\\%) & \\textbf{mAP}(\\%) & \\textbf{NDS}(\\%)& \\textbf{mAP}(\\%)& \\textbf{NDS}(\\%) \n \\\\ \\hline \n {Baseline} & 50.85 & 61.96 & 43.46 & 57.50 & 56.53 & 64.82\\\\\n {3D Sem.} & + 3.60& + 1.45 & + 8.72 & + 4.09 & + 4.59 & + 1.86 \\\\ \n {2D Sem.} & + 8.55& + 4.07 & +15.64 &+ 7.55 & + 6.28 & + 3.67 \\\\ \n {AAF} & + 11.30 & + 5.45 & +17.31 & + 8.54 & + 8.46& + 4.56 \\\\ \n {AAF \\& DFF} & + 11.76 & + 5.65 & + 18.19 & + 8.93 & + 9.32 & + 5.09\\\\ \\hline\n \\end{tabular}\n}\n\\caption{Ablation studies for different fusion strategies on the nuScenes benchmark. The first row is the performance of the baseline method and the following values are the gains obtained by adding different modules.}\n\\label{tab:ablation_nuScene_table}\n\\end{table}\n\nIn \\tabref{tab:ablation_kitti_table}, the first row is the performance of the baseline method and the following values are the gains obtained by adding different modules. From the table, we can clearly see that all the modules have positive effects on all three categories. Averaged over the categories, the proposed modules give 2.90 points of improvement, with ``Pedestrian'' achieving the largest gain of 3.45 points. Furthermore, we find that deep fusion gives the largest improvement among the evaluated configurations on this dataset. 
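The ``AAF'' rows above correspond to attention-based fusion of the two semantic sources. The paper's exact module is not reproduced here; purely as a hypothetical sketch, such a fusion can be viewed as a learned soft gate between the 2D and 3D semantic features:

```python
import numpy as np

# Hypothetical gated fusion of 2D and 3D semantic features (NOT the
# paper's AAF definition; this only illustrates the general idea of
# learned soft weighting between the two sources).
def gated_fuse(sem2d, sem3d, gate_logits):
    # gate_logits: learned per-point scores; alpha lies in (0, 1)
    alpha = (1.0 / (1.0 + np.exp(-gate_logits)))[:, None]
    return alpha * sem2d + (1.0 - alpha) * sem3d
```

With large positive gate logits the 2D source dominates, with large negative ones the 3D source dominates; the attention thus lets the network pick the more reliable source per point.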
\n\n\\tabref{tab:ablation_nuScene_table} gives the results on the nuScenes dataset. \n{To be clear, a stronger baseline than that in \\cite{xu2021fusionpainting} is presented here, obtained by fixing the sync-bn issue in the source code.} \nFrom this table, we can see that both the 2D and 3D {semantic information} significantly boost the performance, while the 2D information provides larger improvements than the 3D information. This phenomenon can be explained by the fact that the 2D texture information greatly benefits category classification, which is very important for the final \\textbf{mAP} computation. In addition, the 2D information improves recall, especially for objects at long distance with only a few scanned points. The advantage of the 3D semantic information, in contrast, is that point clouds can well handle occlusions among objects, which are very common in AD scenarios and hard to deal with in 2D. After fusion, all the detectors achieve much better performance than with a single source of semantic information (2D or 3D).\n\nFurthermore, the deep fusion module is proposed to aggregate the backbone features to further improve the performance. From \\tabref{tab:ablation_kitti_table} and \\tabref{tab:ablation_nuScene_table}, we find that the deep fusion module can further improve the results for all three baseline detectors. Interestingly, compared to \\textit{SECOND} and \\textit{PointPillars}, \\textit{CenterPoint} benefits much more from the deep fusion module. This can be explained by the fact that the large backbone network in \\textit{CenterPoint} produces much deeper features that are more suitable for the proposed deep feature fusion module. 
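The channel attention (CA) evaluated in the next ablation follows the common squeeze-and-excitation pattern. Since the exact layer definition is not spelled out in this section, the sketch below is a generic version under our own assumptions (`W1` and `W2` are placeholders for the learned bottleneck weights):

```python
import numpy as np

# Generic squeeze-and-excitation channel attention over a BEV feature
# map F of shape (C, H, W); a hypothetical sketch, not the exact module.
def channel_attention(F, W1, W2):
    z = F.mean(axis=(1, 2))                 # squeeze: global average pool
    h = np.maximum(W1 @ z, 0.0)             # bottleneck + ReLU
    s = 1.0 / (1.0 + np.exp(-(W2 @ h)))     # excite: per-channel weight
    return F * s[:, None, None]             # re-weight channels
```

In the multi-scale variant of the following ablation, both the $F_{R}^{L}$ and $F_{R}^{S}$ features are involved rather than $F_{R}^{L}$ alone.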
\n\n\\iffalse\n\\begin{table}[ht!]\n\\centering\n\\resizebox{0.5\\textwidth}{!}\n{%\n \\begin{tabular}{r | cc | cc }\n \\hline\n \\multicolumn{1}{r|}{\\multirow{2}{*}{\\textbf{Strategies}}} & \n \\multicolumn{2}{c|}{\\textbf{SECOND}} & \n \\multicolumn{2}{c}{\\textbf{CenterPoint}} \\\\\n {} & \\textbf{NDS}(\\%) & \\textbf{mAP}(\\%) & \\textbf{NDS}(\\%) & \\textbf{mAP}(\\%) \\\\ \\hline \n {Baselines} & 50.85 & 61.96 & 59.04 & 66.60\\\\\n {Without CA.} & 52.24(\\R{+1.39})& 63.18(\\R{+0.85}) &67.45(\\R{+0.8}) &59.84(\\R{+1.08}) \\\\ \n {Baseline with DFF.} &52.96(\\R{+2.11})&63.75(\\R{+1.79}) &67.84(\\R{+1.24}) &60.20(\\R{+1.16}) \\\\ \\hline\n \\end{tabular}\n}\n\\caption{Ablation with\/without channel attention on nuScenes dataset.}\n\\label{tab:ablation_channel_attention_nuscenes}\n\\end{table}\n\\fi\n\n \n\n\\begin{table}[ht!]\n\\centering\n\\renewcommand\\arraystretch{1.1}\n\\setlength{\\tabcolsep}{1pt}\n\\resizebox{0.5\\textwidth}{!}\n{%\n\\begin{tabular}{r|ccc|c} \n\\hline\n\\multirow{2}{*}{\\textbf{Strategies}} & \\multicolumn{3}{c|}{\\textbf{AP(\\%)}} & \\multirow{2}{*}{\\textbf{mAP(\\%)}} \\\\\n & \\multicolumn{1}{l}{Car(Mod.)} & \\multicolumn{1}{l}{Ped.(Mod.)} & \\multicolumn{1}{l|}{Cyc.(Mod.)} & \\\\ \n\\hline\nSECOND & ~88.99~ & 56.21 & 70.65 & 71.95 \\\\\nOne scale CA & ~89.68 (\\textcolor{red}{+0.69})~ & 57.77 (\\textcolor{red}{+1.56}) & ~71.39 (\\textcolor{red}{+0.74})~ & 73.70 (\\textcolor{red}{+1.75}) \\\\\nMulti-scale CA & 90.22 (\\textcolor{red}{+1.23}) & ~58.11 (\\textcolor{red}{+1.90})~ & ~71.77 (\\textcolor{red}{+1.12})~ & 74.41 (\\textcolor{red}{+2.46}) \\\\\n\\hline\n\\end{tabular}\n}\n\\caption{Ablation with\/without multi-scale channel attention (CA) on the KITTI dataset. \\textit{One-scale CA} means that only the fused $F_{R}^{L}$ feature is fed into the \\textit{CA} module. 
\\textit{Multi-scale CA} means that both the fused $F_{R}^{L}$ and $F_{R}^{S}$ features are passed to the next module.}\n\\label{tab:ablation_channel_attention_kitti}\n\\end{table}\n\n\n\\noindent\\textbf{Effectiveness of Multi-scale CA.} {In addition, a small experiment is designed to test the effectiveness of multi-scale attention in the DFF module. \\textit{SECOND} is taken as the baseline and tested on the KITTI dataset. From the results given in Tab. \\ref{tab:ablation_channel_attention_kitti}, we can see that employing the one-scale features improves the \\textbf{mAP} by 1.75 points over the baseline. Adding the multi-scale operation yields a further 0.71 points of improvement.}\n\n\n\n\\iffalse\n\\begin{table}[ht!]\n\\centering\n\\resizebox{0.5\\textwidth}{!}\n{%\n \\begin{tabular}{r | cccc}\n \\hline\n $\\textbf{Strategies}$ & $\\textbf{Car} (Mod.)$ & $\\textbf{Pedestrian}$ (Mod.) & $\\textbf{Cyclist}$ (Mod.) & $\\textbf{mAP }$ (\\%) \\\\ \\hline \\hline\n {PointPillar \\cite{lang2019pointpillars}} & & & & \\\\ \n {Semantic ID.} & & & & \\\\ \n {Onehot Vector} & & & & \\\\ \n {Semantic Score} & & & & \\\\ \\hline\n \\end{tabular}\n}\n\\caption{Ablation studies on the public KITTI \\cite{geiger2012we} dataset. The way use 3D semantic info. 
To be clear, only the results of the ``Moderate'' category have been given and the mAP represents the mean AP of ``Car'', ``Pedestrian'' and ``Cyclist''.}\n\\label{tab:ablation_kitti_table}\n\\end{table}\n\\begin{table}[ht!]\n\\centering\n\\resizebox{0.35\\textwidth}{!}\n{%\n \\begin{tabular}{r | cccc}\n \\hline\n $\\textbf{Strategies}$ & $\\textbf{mAP }$ (\\%) & $\\textbf{NDS}$ (\\%) \\\\ \\hline \\hline\n {PointPillar \\cite{lang2019pointpillars}} & 44.46 & 57.50 \\\\ \n {Semantic ID.} & 50.96 (+7.50) & 60.69 (+3.19) \\\\ \n {Onehot Vector} & 52.18 (+8.72) & 61.59 (+4.09) \\\\ \n {Semantic Score} & 53.10 (+9.64) & 62.20 (+4.70) \\\\ \\hline\n \\end{tabular}\n}\n\\caption{Ablation studies on the public nuScenes \\cite{geiger2012we} dataset for different semantic representations. }\n\\label{tab:ablation_dif_semantic_reps}\n\\end{table}\n\\fi\n\n\n\\iffalse\n\\begin{table}[ht!]\n\\centering\n\\resizebox{0.5\\textwidth}{!}{\n\\begin{tabular}{l | c c c c l l }\n\\hline\n\\textbf{Baselines} & \\textbf{3D-Sem} & \\textbf{2D-Sem} & \\textbf{ShallowFusion} & \\textbf{DeepFusion} & \\textbf{mAP(\\%)} & \\textbf{NDS(\\%)} \\\\ \\hline\n\\multirow{4}{*}{\\textbf{SECOND}} & -- & -- & -- & -- &50.85 (baseline) &61.96 (baseline) \\\\\n &\\checkmark & & & &54.45 (\\textcolor{red}{+3.60}) &63.41 (\\textcolor{red}{+1.45}) \\\\\n & & \\checkmark & & &59.40 (\\textcolor{red}{+8.55}) &66.03 (\\textcolor{red}{+4.07}) \\\\\n \n & \\checkmark & \\checkmark & \\checkmark & &62.15 (\\textcolor{red}{+11.30}) & 67.41 (\\textcolor{red}{+5.45}) \\\\ \n & \\checkmark & \\checkmark & \\checkmark & \\checkmark &62.86 (\\textcolor{red}{+12.01}) & 68.05 (\\textcolor{red}{+6.09}) \\\\ \\hline\n\\multirow{4}{*}{\\textbf{PointPillars}} & -- & -- & -- & -- &43.46 (baseline) &57.50 (baseline) \\\\\n & \\checkmark & & & &52.18 (\\textcolor{red}{+8.72}) &61.59 (\\textcolor{red}{+4.09}) \\\\\n & & \\checkmark & & &59.10 (\\textcolor{red}{+15.64}) & 65.05 (\\textcolor{red}{+7.55}) \\\\\n \n & \\checkmark & 
\\checkmark & \\checkmark & & 60.77 (\\textcolor{red}{+17.31}) &66.04 (\\textcolor{red}{+8.54}) \\\\ \n & \\checkmark & \\checkmark & \\checkmark & \\checkmark &61.58 (\\textcolor{red}{+18.12}) & 66.78 (\\textcolor{red}{+9.28}) \\\\ \\hline\n\\multirow{4}{*}{\\textbf{CenterPoint}} & -- & -- & -- & -- &59.04 (baseline) & 66.60 (baseline) \\\\\n & \\checkmark & & & & 61.12 (\\textcolor{red}{+2.08}) &67.68 (\\textcolor{red}{+1.08}) \\\\\n & & \\checkmark & & & 62.81 (\\textcolor{red}{+3.77}) & 68.49 (\\textcolor{red}{+1.89})\\\\\n \n & \\checkmark & \\checkmark & \\checkmark & &64.99 (\\textcolor{red}{+5.95}) & 69.38 (\\textcolor{red}{+2.78}) \\\\ \n & \\checkmark & \\checkmark & \\checkmark & \\checkmark &65.85 (\\textcolor{red}{+6.81}) & 69.91 (\\textcolor{red}{+3.31}) \\\\ \\hline\n \n\\multirow{1}{*}{\\textbf{CenterPoint*}} &\\checkmark &\\checkmark &\\checkmark &\\checkmark&67.84 (\\textcolor{red}{+8.80}) & 71.64 (\\textcolor{red}{+5.04}) \n\n\\\\ \\hline\n\\end{tabular}\n}\n\\caption{An ablation study for different fusion strategies. ``Att-Mod'' is short for Adaptive Attention Module, ``3D-P'' and ``2D-P'' are short for ``3D Painting Module'' and ``2D Painting Module'', respectively. Especially, CenterPoint* means Centerpoint with Double Flip and DCN. }\n\n\\label{tab:ablation_table}\n\\end{table}\n\\fi \n\n\\iffalse\n\\begin{table}[ht!]\n\\centering\n\\resizebox{0.4\\textwidth}{!}\n{%\n\\begin{tabular}{|l|c|c|}\n\\hline\n & PointPillars (ms) & SECOND (ms) \\\\ \\hline\nBaseline & 53.5 & 108.2 \\\\ \\hline\n+3D Sem & 56.5\\textcolor{red}{(+3.0)} & 108.7\\textcolor{red}{(+0.5)} \\\\ \\hline\n+2D Sem & 56.9\\textcolor{red}{(+3.4)} & 108.4\\textcolor{red}{(+0.2)} \\\\ \\hline\n+AAF+DFF & 62.5\\textcolor{red}{(+9.0)} & 113.3\\textcolor{red}{(+4.6)} \\\\ \\hline\n\\end{tabular}\n}\n\\caption{Running time of differ modules. 
Here, +3D Sem, +2D Sem, +AAF+DFF denote the inference time of use PointCloud segmentation, image semantic segmentataion and both of two type segmentaion and AAF \\& DFF module. }\n\\label{tab:differ running time}\n\\end{table}\n\\\\\n\\begin{table}[ht!]\n\\centering\n\\resizebox{0.4\\textwidth}{!}\n{%\n\\begin{tabular}{|c|c|c|c|}\n\\hline\nType & 2D Sem & 3D Sem & Projection \\\\ \\hline\nTime(ms) & 32ms & 140ms & 3ms \\\\ \\hline\n\\end{tabular}\n}\n\\caption{The type of 2D Sem and 3D Sem denote that the time of we got 2D semantic segmentation and 3D semantic segmentation respectively. And the Projection is the time of project pointcloud on semantic image to get the score of prediction result}\n\\label{tab:seg time}\n\\end{table}\n\\fi\n\n{\\subsection{Computation Time}} \\label{subsec:comp_time}\n\\begin{table}[ht!]\n\\centering\n\\setlength{\\tabcolsep}{5pt}\n\\renewcommand\\arraystretch{1.1}\n\\resizebox{0.35\\textwidth}{!}\n{ \n \\begin{tabular}{r | cccc}\n \\hline\n $\\textbf{Types}$ & $\\textbf{PointPillars }$ & $\\textbf{SECOND}$ \\\\ \\hline \n {Baseline} & 53.5 (ms) & 108.2 (ms) \\\\\n {+3D Sem} & +3.0 (ms) & +0.5 (ms) \\\\ \n {+2D Sem} & +3.4 (ms) & +0.2 (ms) \\\\ \n {The Proposed} & 62.5 (ms) & 113.3 (ms) \\\\ \\hline\n \\end{tabular}\n}\n\\caption{Inference time of different modules. Here, \\textit{+3D Sem} and \\textit{+2D Sem} denote the time of adding 3D\/2D semantic information to input data. 
\\textit{The Proposed} denotes the proposed framework with both types of semantic information and the \\textit{AAF} \\& \\textit{DFF} modules.}\n\\label{tab:inference_time}\n\\end{table}\n\n\n\\begin{figure*}[ht!]\n\t\\centering\n\t\\includegraphics[width=0.975\\textwidth]{Figs\/det_res.pdf}\n\t\\centering\n\t\\caption{Visualization of detection results with Open3d \\cite{zhou2018open3d}, where (a) is the ground-truth, (b) is the baseline method based only on the point cloud, and (c), (d) and (e) are the detection results based on the 2D semantic, 3D semantic and fused semantic information, respectively. Especially, the yellow and red dashed ellipses show some false positive and false negative detections. For the nuScenes dataset, the baseline detector used here is CenterPoint, and for KITTI it is SECOND.}\n\t\\label{Fig:detection result}\n\\end{figure*}\n\nBesides performance, the computation time is also very important. Here, we test the computation time of each module based on {\\textit{PointPillars} and \\textit{SECOND} on the KITTI dataset in \\tabref{tab:inference_time}.} On a single Nvidia V100 GPU, \\textit{PointPillars} takes 53.5 ms per frame. Adding the 3D and 2D semantic information increases the inference time by 3.0 ms and 3.4 ms respectively, while adding both types of semantic information together with the \\textit{AAF} \\& \\textit{DFF} modules increases it by about 9 ms. The inference time for \\textit{SECOND} is given in the right column of \\tabref{tab:inference_time}. Compared with the baseline, the 2D\/3D semantic segmentation information adds almost no extra computational burden. We explain this by the fact that \\textit{SECOND} employs a simple \\textit{mean} operation to extract the features of each voxel, and the computation time of this operation changes little as the feature dimension grows. 
For \\textit{PointPillars}, an MLP is employed for feature extraction in each pillar, so the computation time increases considerably as the feature dimension grows.\n\n{In addition, we also record the time used for obtaining the 2D\/3D semantic results. For Deeplab V3+, the inference time is about 32 ms per frame, while Cylinder3D takes about 140 ms per frame. Furthermore, the projection of the 2D semantic results onto the 3D point clouds takes about 3 ms for each frame. Most of the time consumption thus comes from the 2D\/3D segmentation operations. In practical use, however, only an extra segmentation head after the detection network backbone is needed; in other words, a multi-head network serves both the detection and segmentation tasks. This takes only a few milliseconds when model inference acceleration is applied, e.g., with the C++ inference library TensorRT.}\n\\\\\n\n\n\\subsection{Qualitative Detection Results}\\label{sub:Qualitative_Res}\nWe show some qualitative detection results on the nuScenes and KITTI datasets in \\figref{Fig:detection result}, in which (a) is the ground truth, (b), (c) and (d) are the detection results of the baseline (CenterPoint) without any extra information, with 2D semantic information and with 3D semantic information respectively, and (e) shows the final results with all the fusion modules. From these figures, we can easily find that there are some false positive detections caused by the frustum blurring effect in 2D painting, while the 3D {semantic results} give a relatively clear object boundary but worse class classification. More importantly, the proposed framework, which combines the complementary information from 2D and 3D segmentation, gives much more accurate detection results.\n\n\\subsection{Ablation Studies} \\label{subsec:ablation_study}\n\nIn this part, we design several ablation experiments to analyze the contribution of different components to the final detection results. 
Specifically, we have designed two series of experiments for the KITTI and nuScenes datasets respectively. \n\n\n\n\n\\textbf{KITTI Benchmark}: we choose the SECOND detector for the ablation study on the KITTI benchmark. \n \n\n\\begin{table}[ht!]\n\\centering\n\\resizebox{0.5\\textwidth}{!}\n{%\n \\begin{tabular}{r | cccc}\n \\hline\n $\\textbf{Strategies}$ & $\\textbf{Car} (Mod.)$ & $\\textbf{Pedestrian}$ (Mod.) & $\\textbf{Cyclist}$ (Mod.) & $\\textbf{mAP }$ (\\%) \\\\ \\hline \\hline\n {SECOND \\cite{yan2018second}} & 88.99 & 56.21 & 70.65 & 71.95 \\\\ \n {3D Sem.} & + 0.81 & + 0.70 & + 0.63 & + 0.71 \\\\ \n { 2D Sem.} & + 1.09 & + 1.35 & + 1.20 & + 1.41 \\\\ \n { Shallow Fusion} & + 1.62 & + 1.78 & + 1.91 & + 1.77 \\\\ \n { Deep Fusion } & + 2.63 & + 3.45 & + 2.63 & + 2.90 \\\\ \\hline\n \\end{tabular}\n}\n\\caption{Ablation studies on the public KITTI \\cite{geiger2012we} dataset. Here, we take SECOND \\cite{yan2018second} as the baseline method. Similar to PointPainting \\cite{vora2020pointpainting}, we provide the detection results on the bird's-eye view here. To be clear, only the results of the ``Moderate'' category have been given and the mAP represents the mean AP of ``Car'', ``Pedestrian'' and ``Cyclist''.}\n\\label{tab:ablation_kitti_table}\n\\end{table}\n\n\n\\textbf{nuScenes Benchmark}: three different detectors have been employed for ablation here. 
The experimental results are given in \\tabref{tab:ablation_nuScene_table}.\n\n\\begin{table}[ht!]\n\\centering\n\\resizebox{0.5\\textwidth}{!}\n{%\n \\begin{tabular}{r | cc | cc |cc}\n \\hline\n \\multicolumn{1}{r|}{\\multirow{2}{*}{\\textbf{Strategies}}} & \n \\multicolumn{2}{c|}{\\textbf{SECOND}} & \n \\multicolumn{2}{c|}{\\textbf{PointPillars}} & \n \\multicolumn{2}{c}{\\textbf{CenterPoint}} \\\\\n {} & \\textbf{mAP} (\\%) & \\textbf{NDS} (\\%) & \\textbf{mAP} (\\%) & \\textbf{NDS} (\\%) & \\textbf{mAP} (\\%) & \\textbf{NDS} (\\%) \n \\\\ \\hline \\hline\n {Baseline} & 50.85 & 61.96 & 43.46 & 57.50 & 59.04 & 66.60\\\\ \n {3D Sem.} & + 3.60& + 1.45 & + 8.72 & + 4.09 & + 0.84 & + 0.58 \\\\ \n {2D Sem.} & + 8.55& + 4.07 & + 15.64 &+ 7.55 & + 7.29& + 3.69 \\\\ \n { ShallowFusion} & + 11.30 & + 5.45 & + 17.31 & + 8.54 & + 10.00& + 5.86 \\\\ \n { DeepFusion } & + 12.01 & + 6.09 & + 18.12 & + 9.28 & +xxxxx & + xxxxx\\\\ \\hline\n \\end{tabular}\n}\n\\caption{An ablation study for different fusion strategies. ``Att-Mod'' is short for Adaptive Attention Module, ``3D-P'' and ``2D-P'' are short for ``3D Painting Module'' and ``2D Painting Module'', respectively. 
Especially, CenterPoint* means CenterPoint with Double Flip and DCN.}\n\\label{tab:ablation_nuScene_table}\n\\end{table}\n\nFor the SECOND and PointPillars detectors, which are trained and evaluated on the whole dataset, both the 3D and 2D segmentation information clearly improves the detection accuracy, as shown in Tab. \\ref{tab:ablation_table} (1st and 2nd rows). PointPillars receives the largest improvements in mAP (+17.31) and NDS (+8.54) compared with its baseline, while CenterPoint achieves mAP (+10.49) and NDS (+5.29) improvements when trained on a quarter of the training split and evaluated on the whole validation set. Note that we train all the models with the same configuration (8 GPUs with a batch size of 4 each), so the baselines may differ slightly from the official figures.\n\nTo verify the effectiveness of the different modules, a series of ablation studies have been designed here. All the experiments are executed on the validation split and the settings are kept the same as in Sec. \\ref{subsec:dataset}. All the results are given in Tab. \\ref{tab:ablation_table}, from which we can figure out the impact of each module. \n\nAs demonstrated in Tab. \\ref{tab:ablation_table}, the 3D semantic segmentation information alone contributes about 9.81\\% mAP and 3.92\\% NDS on average, while the 2D semantic segmentation information contributes about 21.90\\% and 8.46\\% on average. The higher improvement from the 2D semantic information benefits from the better recall ability, especially for objects in the distance where the point cloud is very sparse. The advantage of the 3D semantic information is that the point clouds can well handle the occlusion among objects, which is very common in AD scenarios and hard to deal with in 2D. 
However, the largest improvement comes from the combination of 3D Painting, 2D Painting and the Adaptive Attention Module, amounting to 26.58\\% mAP and 10.90\\% NDS on average over the three detectors we used.\n\n\\begin{table}[ht!]\n\\centering\n\\resizebox{0.5\\textwidth}{!}{\n\\begin{tabular}{l | c c c c l l }\n\\hline\n\\textbf{Baselines} & \\textbf{3D-Sem} & \\textbf{2D-Sem} & \\textbf{ShallowFusion} & \\textbf{DeepFusion} & \\textbf{mAP(\\%)} & \\textbf{NDS(\\%)} \\\\ \\hline\n\\multirow{4}{*}{\\textbf{SECOND}} & -- & -- & -- & -- &50.85 (baseline) &61.96 (baseline) \\\\\n &\\checkmark & & & &54.45 (\\textcolor{red}{+3.60}) &63.41 (\\textcolor{red}{+1.45}) \\\\\n & & \\checkmark & & &59.40 (\\textcolor{red}{+8.55}) &66.03 (\\textcolor{red}{+4.07}) \\\\\n \n & \\checkmark & \\checkmark & \\checkmark & &62.15 (\\textcolor{red}{+11.30}) & 67.41 (\\textcolor{red}{+5.45}) \\\\ \n & \\checkmark & \\checkmark & \\checkmark & \\checkmark &62.86 (\\textcolor{red}{+12.01}) & 68.05 (\\textcolor{red}{+6.09}) \\\\ \\hline\n\\multirow{4}{*}{\\textbf{PointPillars}} & -- & -- & -- & -- &43.46 (baseline) &57.50 (baseline) \\\\\n & \\checkmark & & & &52.18 (\\textcolor{red}{+8.72}) &61.59 (\\textcolor{red}{+4.09}) \\\\\n & & \\checkmark & & &59.10 (\\textcolor{red}{+15.64}) & 65.05 (\\textcolor{red}{+7.55}) \\\\\n \n & \\checkmark & \\checkmark & \\checkmark & & 60.77 (\\textcolor{red}{+17.31}) &66.04 (\\textcolor{red}{+8.54}) \\\\ \n & \\checkmark & \\checkmark & \\checkmark & \\checkmark &61.58 (\\textcolor{red}{+18.12}) & 66.78 (\\textcolor{red}{+9.28}) \\\\ \\hline\n\\multirow{4}{*}{\\textbf{CenterPoint}} & -- & -- & -- & -- &59.04 (baseline) & 66.60 (baseline) \\\\\n & \\checkmark & & & & 61.12 (\\textcolor{red}{+0.84}) &67.68 (\\textcolor{red}{+0.58}) \\\\\n & & \\checkmark & & & 61.12 (\\textcolor{red}{+7.29}) & 67.68 (\\textcolor{red}{+3.69})\\\\\n \n & \\checkmark & \\checkmark & \\checkmark & &64.89 (\\textcolor{red}{+10.00}) & 69.38 (\\textcolor{red}{+5.86}) \\\\ \n & \\checkmark & \\checkmark & 
\\checkmark & \\checkmark &65.85 (\\textcolor{red}{+10.00}) & 69.91 (\\textcolor{red}{+5.86}) \\\\ \\hline\n \n\n\\\\ \\hline\n\\end{tabular}\n}\n\\caption{An ablation study for different fusion strategies. ``Att-Mod'' is short for Adaptive Attention Module, ``3D-P'' and ``2D-P'' are short for ``3D Painting Module'' and ``2D Painting Module'', respectively. Especially, CenterPoint* means CenterPoint with Double Flip and DCN. }\n\n\\label{tab:ablation_table}\n\\end{table}\n\n\n\n\n\n\n\n\n\n\\subsection{Qualitative Results on KITTI and nuScenes Dataset}\\label{sub:Qualitative_Res}\n\nMore qualitative detection results are illustrated in Fig. \\ref{Fig:detection result}. Fig. \\ref{Fig:detection result} (a) shows the annotated ground truth, and (b) shows the detection results for CenterPoint based on the raw point cloud without using any painting strategy. (c) and (d) show the detection results with 2D-painted and 3D-painted point clouds, respectively, while (e) shows the results of our proposed framework. \nAs shown in the figure, there are false positive detection results caused by 2D painting due to the frustum blurring effect, while the 3D painting method produces worse foreground class segmentation compared to 2D image segmentation. In contrast, our FusionPainting combines the complementary information from 2D and 3D segmentation and detects objects more accurately.\n\n\\begin{figure*}[ht!]\n\t\\centering\n\t\\includegraphics[width=0.975\\textwidth]{FusionPainting\/TITS\/Figs\/results_rviz.pdf}\n\t\\centering\n\t\\caption{Visualization of detection results, where (a) is the ground-truth, (b) shows the detection results based on the raw point cloud, and (c), (d) and (e) are the detection results based on the projected points with 2D semantic information, 3D semantic information and fused semantic information, respectively. Especially, the yellow and red dashed ellipses show some of the false and missed detections, respectively. 
The baseline detector used here is CenterPoint \\cite{yin2020center}.}\n\t\\label{Fig:detection result}\n\\end{figure*}\n\n\n\\section{Experimental Results} \\label{sec:experiments}\n\nTo verify the effectiveness of our proposed framework, we evaluate it on two large-scale autonomous driving 3D object detection datasets: KITTI \\cite{geiger2012we} and nuScenes \\cite{nuscenes2019}. Furthermore, we also evaluate the proposed modules on different kinds of 3D detectors to verify their generalizability, such as SECOND \\cite{yan2018second}, PointPillars \\cite{lang2019pointpillars}, PointRCNN \\cite{shi2019pointrcnn} and PV-RCNN \\cite{shi2020pv}.\n\n \n \n\n\n\\subsection{Evaluation on KITTI Dataset} \\label{subsec:eval_kitti}\nWe test the proposed framework on the KITTI dataset first. \n\n\\textit{KITTI \\cite{geiger2012we}}, one of the most popular benchmarks for 3D object detection in AD, contains 7481 samples for training and 7518 samples for testing. The objects in each class are divided into three difficulty levels, ``easy'', ``moderate'', and ``hard'', according to the object size, the occlusion ratio, and the truncation level. Since the ground truth annotations of the test samples are not available and access to the test server is limited, we follow the idea in~\\cite{chen2017multi} and split the source training data into training and validation sets containing 3712 and 3769 samples, respectively. In this dataset, both the point cloud and the RGB image are provided. In addition, both the intrinsic camera parameters and the extrinsic transformation parameters between the different sensors are well-calibrated. For the object detection task, stereo image pairs are provided, while only a single image from the left camera at the current frame is used in our approach.\n\n\\textit{Evaluation Metrics:} we follow the metrics provided by the official KITTI benchmark for comparison here. 
Specifically, for the ``Car'' class, we use the metric $\\textbf{AP}_\\text{Car}$\\textbf{70} for all detection results. $\\textbf{AP}_\\text{Car}$\\textbf{70} means that a detection is counted as positive only if the overlap (IoU) of its bounding box with the ground truth exceeds 70\\%. The threshold for the ``Pedestrian'' class is 0.5 (i.e., $\\textbf{AP}_\\text{Ped}$\\textbf{50}) due to its smaller size. Similar to \\cite{vora2020pointpainting}, the mean AP (mAP) of the three classes at the ``Moderate'' level is also taken as an indicator of the average performance over all three classes. \n\nFor the KITTI dataset, four different types of baselines are evaluated, as listed below: \n\\begin{enumerate}[wide, labelwidth=!, labelindent=0pt]\n \\item \\textit{SECOND \\cite{yan2018second}:} the first method to employ sparse convolution for the voxel-based 3D object detection task, greatly improving detection efficiency over previous work such as VoxelNet \\cite{zhou2018voxelnet}. \n \\item \\textit{PointPillars \\cite{lang2019pointpillars}} divides the point cloud into pillars rather than voxels, and a ``Pillar Feature Net'' is applied to each pillar to extract point features. A 2D convolutional network is then applied to the bird's-eye-view feature map for object detection. PointPillars offers a trade-off between efficiency and performance.\n \\item \\textit{PointRCNN \\cite{shi2019pointrcnn}} is a pure point-cloud-based two-stage 3D object detector. To extract multi-scale features, PointNet++ \\cite{qi2017pointnet++} is taken as the backbone. 
In the first stage, region proposals are generated with a binary foreground\/background classifier; in the second stage, each proposal is refined using the cropped points and features.\n \\item\\textit{PV-RCNN \\cite{shi2020pv}} is a hybrid point-voxel-based 3D object detector, which utilizes the advantages of both the point and voxel representations.\n\\end{enumerate}\n\n\\textbf{Evaluation Results on KITTI Dataset}\n\n\n\n\\begin{table*}[ht!]\n\\centering\n\n\\resizebox{0.75\\textwidth}{!}\n{\n\\begin{tabular}{r|c|lll|lll|lll}\n\\hline\n\\multicolumn{1}{r|}{\\multirow{2}{*}{\\textbf{Methods}}} &\\multicolumn{1}{c|}{\\multirow{2}{*}{\\textbf{mAP} (Mod)}} &\n\\multicolumn{3}{c|}{\\textbf{Car}} &\n\\multicolumn{3}{c|}{\\textbf{Pedestrian}} & \n\\multicolumn{3}{c}{\\textbf{Cyclist}} \\\\\n\n{} & {} & {Easy} & {Mod.} & {Hard} &\n{Easy} & {Mod.} & {Hard} &\n{Easy} & {Mod.} & {Hard} \\\\ \\hline \\hline\n{PointPillars} & {62.90} \n& {87.59} & {78.17} & {75.23} &\n {53.58} & {47.58} & {44.04} &\n{82.21} & {62.95} & {58.66} \\\\\n\n{PointPillars$^{\\star}$} & {65.78} &\n{89.58} & {78.60} & {75.63} &\n{60.22} & {54.23} & {49.49} &\n84.83 & 64.50 & {60.17} \\\\\n\n{Improvement} & \\RED{+2.88} & \n\\RED{+1.99} & \\RED{+0.43} & \\RED{+0.4} &\n\\RED{+6.64} & \\RED{+6.65} & \\RED{+5.45} &\n\\RED{+2.62} & \\RED{+1.55} & \\RED{+1.51} \\\\ \\hline\n\n\n{SECOND} & {66.64} & \n{90.04} & {81.22} & {78.22} &\n{56.34} & {52.40} & {46.52} &\n83.94 & 66.31 & {62.37} \\\\\n{SECOND$^{\\star}$} & {67.17} & \n{91.19} & {82.13} & {79.25} &\n {56.75} & {52.80} & {48.48} &\n84.46 & 66.57 & {62.51} \\\\\n{Improvement} & \\RED{+0.53} & \n\\RED{+1.15} & \\RED{+0.91} & \\RED{+1.03} &\n\\RED{+0.41} & \\RED{+0.40} & \\RED{+1.96} &\n\\RED{+0.52} & \\RED{+0.26} & \\RED{+0.14} \\\\ \\hline \n\n{PointRCNN} & {69.41} & \n{89.68} & {80.45} & {77.91} &\n{65.60} & {56.69} & {49.92} &\n91.01 & 71.10 & {66.61} \\\\\n{PointRCNN$^{\\star}$} & {70.46} & \n{89.91} & {82.44} & {78.35} &\n{66.19} & 
{56.78} & {50.34} &\n94.72 & 72.16 & {67.62} \\\\\n{Improvement} & \\RED{+1.05} & \n\\RED{+0.23} & \\RED{+1.99} & \\RED{+0.44} &\n\\RED{+0.59} & \\RED{+0.09} & \\RED{+0.42} &\n\\RED{+3.71} & \\RED{+1.06} & \\RED{+1.01} \\\\ \\hline\n\n\n{PV-RCNN} & {71.82} &\n{92.23} & {83.10} & {82.42} &\n{65.68} & {59.29} & {53.99} &\n91.57 & 73.06 & {69.80} \\\\\n{PV-RCNN$^{\\star}$} & {73.95} & \n{91.54} & {84.59} & {82.66} &\n{69.12} & {61.61} & {55.96} &\n92.82 & 75.65 & {71.03} \\\\\n{Improvement} & \\RED{+2.13} & \n\\GRE{-0.69} & \\RED{+1.49} & \\RED{+0.24} &\n\\RED{+3.44} & \\RED{+2.32} & \\RED{+1.97} &\n\\RED{+1.25} & \\RED{+2.59} & \\RED{+1.23} \\\\ \\hline\n\\end{tabular}\n}\n\\caption{\\normalfont Evaluation of 3D object detection on the KITTI ``val'' split for different baseline approaches. $^{\\star}$ denotes the baseline with the proposed fusion modules added. }\n\\label{tab:kitti_3D_detection}\n\\end{table*}\n\n\n\n\n\\begin{table*}[ht!]\n\\centering\n\\renewcommand\\arraystretch{1.1}\n\\resizebox{0.90\\textwidth}{!}\n{\n\\begin{tabular}{cllllllllll}\n\\hline\n\\multicolumn{1}{|c|}{\\multirow{2}{*}{Method}} & \\multicolumn{1}{c||}{mAP} & \\multicolumn{3}{c||}{Car ($\\text{AP}_{70}$)} & \\multicolumn{3}{c||}{Pedestrian ($\\text{AP}_{70}$)} & \\multicolumn{3}{c|}{Cyclist ($\\text{AP}_{70}$)} \\\\ \\cline{2-11} \n\n\\multicolumn{1}{|c|}{} & \\multicolumn{1}{c||}{Mod.} & \\multicolumn{1}{c|}{Easy} & \\multicolumn{1}{c|}{Mod.} & \\multicolumn{1}{c||}{Hard} & \\multicolumn{1}{c|}{Easy} & \\multicolumn{1}{c|}{Mod.} & \\multicolumn{1}{c||}{Hard} & \\multicolumn{1}{c|}{Easy} & \\multicolumn{1}{c|}{Mod.} & \\multicolumn{1}{c|}{Hard} \\\\ \\hline \\hline\n\n\\multicolumn{1}{|c|}{PointPillars} & \\multicolumn{1}{l||}{69.18} & \\multicolumn{1}{l|}{92.50} & \\multicolumn{1}{l|}{87.80} & \\multicolumn{1}{l||}{87.55} & \\multicolumn{1}{l|}{58.58} & \\multicolumn{1}{l|}{52.88} & \\multicolumn{1}{l||}{48.30} & \\multicolumn{1}{l|}{86.77} & \\multicolumn{1}{l|}{66.87} & 
\\multicolumn{1}{l|}{62.46} \\\\\n\n\\multicolumn{1}{|c|}{PointPillars(ours)} & \\multicolumn{1}{l||}{72.13} & \\multicolumn{1}{l|}{94.39} & \\multicolumn{1}{l|}{87.65} & \\multicolumn{1}{l||}{89.86} & \\multicolumn{1}{l|}{64.84} & \\multicolumn{1}{l|}{59.57} & \\multicolumn{1}{l||}{55.16} & \\multicolumn{1}{l|}{89.55} & \\multicolumn{1}{l|}{69.18} & \\multicolumn{1}{l|}{64.65} \\\\\n\\multicolumn{1}{|c|}{$\\uparrow$} & \\multicolumn{1}{l||}{+2.95} & \\multicolumn{1}{l|}{+1.89} & \\multicolumn{1}{l|}{-0.15} & \\multicolumn{1}{l||}{+2.31} & \\multicolumn{1}{l|}{+6.26} & \\multicolumn{1}{l|}{+6.69} & \\multicolumn{1}{l||}{+6.86} & \\multicolumn{1}{l|}{+2.78} & \\multicolumn{1}{l|}{+2.31} & \\multicolumn{1}{l|}{+2.19} \\\\ \\hline\n\n\n\\multicolumn{1}{|c|}{SECOND} & \\multicolumn{1}{l||}{71.95} & \\multicolumn{1}{l|}{92.31} & \\multicolumn{1}{l|}{88.99} & \\multicolumn{1}{l||}{86.59} & \\multicolumn{1}{l|}{60.5} & \\multicolumn{1}{l|}{56.21} & \\multicolumn{1}{l||}{51.25} & \\multicolumn{1}{l|}{87.30} & \\multicolumn{1}{l|}{70.65} & \\multicolumn{1}{l|}{66.63} \\\\\n\\multicolumn{1}{|c|}{SECOND(ours)} & \\multicolumn{1}{l||}{73.72} & \\multicolumn{1}{l|}{94.25} & \\multicolumn{1}{l|}{90.61} & \\multicolumn{1}{l||}{88.19} & \\multicolumn{1}{l|}{62.37} & \\multicolumn{1}{l|}{57.99} & \\multicolumn{1}{l||}{54.23} & \\multicolumn{1}{l|}{89.41} & \\multicolumn{1}{l|}{72.56} & \\multicolumn{1}{l|}{67.38} \\\\\n\\multicolumn{1}{|c|}{$\\uparrow$} & \\multicolumn{1}{l||}{+1.77} & \\multicolumn{1}{l|}{+1.94} & \\multicolumn{1}{l|}{+1.62} & \\multicolumn{1}{l||}{+1.6} & \\multicolumn{1}{l|}{+1.87} & \\multicolumn{1}{l|}{+1.78} & \\multicolumn{1}{l||}{+2.98} & \\multicolumn{1}{l|}{+2.11} & \\multicolumn{1}{l|}{+1.91} & \\multicolumn{1}{l|}{+0.75} \\\\ \\hline\n\n\n\\multicolumn{1}{|c|}{PointRCNN} & \\multicolumn{1}{l||}{74.13} & \\multicolumn{1}{l|}{93.04} & \\multicolumn{1}{l|}{88.70} & \\multicolumn{1}{l||}{86.64} & \\multicolumn{1}{l|}{68.25} & \\multicolumn{1}{l|}{59.41} & 
\\multicolumn{1}{l||}{53.71} & \\multicolumn{1}{l|}{94.39} & \\multicolumn{1}{l|}{74.27} & \\multicolumn{1}{l|}{69.70} \\\\\n\\multicolumn{1}{|c|}{PointRCNN(ours)} & \\multicolumn{1}{l||}{74.92} & \\multicolumn{1}{l|}{93.33} & \\multicolumn{1}{l|}{89.13} & \\multicolumn{1}{l||}{86.92} & \\multicolumn{1}{l|}{68.63} & \\multicolumn{1}{l|}{59.92} & \\multicolumn{1}{l||}{54.08} & \\multicolumn{1}{l|}{95.69} & \\multicolumn{1}{l|}{75.70} & \\multicolumn{1}{l|}{71.09} \\\\\n\\multicolumn{1}{|c|}{$\\uparrow$} & \\multicolumn{1}{l||}{+0.79} & \\multicolumn{1}{l|}{+0.29} & \\multicolumn{1}{l|}{+0.43} & \\multicolumn{1}{l||}{+0.28} & \\multicolumn{1}{l|}{+0.38} & \\multicolumn{1}{l|}{+0.51} & \\multicolumn{1}{l||}{+0.37} & \\multicolumn{1}{l|}{+1.30} & \\multicolumn{1}{l|}{+1.43} & \\multicolumn{1}{l|}{+1.39} \\\\ \\hline\n\n\\multicolumn{1}{|c|}{PV-RCNN} & \\multicolumn{1}{l||}{76.23} & \\multicolumn{1}{l|}{94.50} & \\multicolumn{1}{l|}{90.62} & \\multicolumn{1}{l||}{88.53} & \\multicolumn{1}{l|}{68.67} & \\multicolumn{1}{l|}{62.49} & \\multicolumn{1}{l||}{58.01} & \\multicolumn{1}{l|}{92.76} & \\multicolumn{1}{l|}{75.59} & \\multicolumn{1}{l|}{71.06} \\\\\n\\multicolumn{1}{|c|}{PV-RCNN(ours)} & \\multicolumn{1}{l||}{78.17} & \\multicolumn{1}{l|}{94.86} & \\multicolumn{1}{l|}{90.87} & \\multicolumn{1}{l||}{88.88} & \\multicolumn{1}{l|}{71.99} & \\multicolumn{1}{l|}{64.71} & \\multicolumn{1}{l||}{59.01} & \\multicolumn{1}{l|}{96.35} & \\multicolumn{1}{l|}{78.93} & \\multicolumn{1}{l|}{74.51} \\\\\n\\multicolumn{1}{|c|}{$\\uparrow$} & \\multicolumn{1}{l||}{+1.94} & \\multicolumn{1}{l|}{+0.36} & \\multicolumn{1}{l|}{+0.25} & \\multicolumn{1}{l||}{+0.35} & \\multicolumn{1}{l|}{+3.32} & \\multicolumn{1}{l|}{+2.22} & \\multicolumn{1}{l||}{+1.00} & \\multicolumn{1}{l|}{+3.59} & \\multicolumn{1}{l|}{+3.34} & \\multicolumn{1}{l|}{+3.45} \\\\ \\hline\n\\end{tabular}\n}\n\\caption{\\normalfont Evaluation of BEV object detection on the KITTI ``val'' split. ``(ours)'' denotes the baseline with the proposed fusion modules added.}\n\\label{tab:kitti BEV detection}\n\\end{table*}\n\n\\textit{Performance on Baseline 
Methods:}\n\n\\textit{Comparison with other SOTA methods:}\n\n\\textbf{Qualitative Results on KITTI Dataset}\n\n\n\\subsection{Evaluation on the nuScenes Dataset} \\label{subsec:nuScenes}\n\\textit{Dataset:} the nuScenes 3D object detection benchmark~\\cite{nuscenes2019} is employed for evaluation here; it is a large-scale dataset with a total of 1,000 scenes. For a fair comparison, the dataset has been officially divided into training, validation, and test sets of 700 scenes (28,130 samples), 150 scenes (6,019 samples), and 150 scenes (6,008 samples), respectively. For each video, only the keyframes (every 0.5s) are annotated with a 360-degree view. With the 32-beam LiDAR used by nuScenes, each frame contains about {300,000} points. For the object detection task, 10 kinds of obstacles are considered, including ``car'', ``truck'', ``bicycle'', and ``pedestrian''. Besides the point clouds, the corresponding RGB images are also provided for each keyframe; each keyframe has 6 images covering the full 360-degree field of view.\n\n\\textit{Evaluation Metrics:} For 3D object detection, \\cite{nuscenes2019} proposes mean Average Precision (mAP) and the nuScenes detection score (NDS) as the main metrics. Different from the original mAP defined in \\cite{everingham2010pascal}, nuScenes considers the BEV center distance with thresholds of \\{0.5, 1, 2, 4\\} meters instead of the IoUs of bounding boxes. NDS is a weighted sum of mAP and other metric scores, such as the average translation error (ATE) and the average scale error (ASE).\n\n\\textit{Baselines:} in addition, to verify its universality, we have implemented the proposed module on three different state-of-the-art 3D object detectors:\n\\begin{itemize}\n\\item \\textit{SECOND} \\cite{yan2018second}, which is the first to employ sparse convolution for the voxel-based 3D object detection task, greatly improving detection efficiency over previous work such as VoxelNet. 
\n\\item \\textit{PointPillars} \\cite{lang2019pointpillars}, which divides the point cloud into pillars and applies a ``Pillar Feature Net'' to each pillar to extract point features. A 2D convolutional network is then adopted on the bird's-eye-view feature map for object detection. PointPillars offers a trade-off between efficiency and performance. \n\\item \\textit{CenterPoint} \\cite{yin2021center}, the first anchor-free 3D object detector, which is well suited to small object detection. \n\\end{itemize}\n\n\\textit{Implementation Details:} HTCNet \\cite{chen2019hybrid} and Cylinder3D \\cite{zhou2020cylinder3d} are employed as the 2D segmentor and 3D segmentor, respectively, due to their outstanding semantic segmentation ability. The 2D segmentor is pretrained on the nuImages\\footnote{\\url{https:\/\/www.nuscenes.org\/nuimages}} dataset, and the 3D segmentor is pretrained on the nuScenes detection dataset, where the point cloud semantic labels are generated by extracting the points inside each obstacle's ground-truth bounding box.\n\nFor the Adaptive Attention Module, we set $m=11$, $C_1=64$, and $C_2=128$. \nAll experiments share the same settings for each baseline: the voxel sizes for SECOND, PointPillars and CenterPoint are $0.2m \\times 0.2m \\times 8m$, $0.1m \\times 0.1m \\times 0.2m$ and $0.075m \\times 0.075m \\times 0.075m$, respectively. We use AdamW \\cite{loshchilov2019decoupled} with a maximum learning rate of 0.001 as the optimizer. 
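The ``painting'' step that these pipelines build on can be sketched in a few lines. This is an illustrative sketch only (the function name, toy inputs, and shapes are ours, not the paper's code): each LiDAR point is projected into the image with a calibrated projection matrix, and the per-pixel class scores of a 2D segmentor are appended to the point's features.

```python
import numpy as np

def paint_points(points, seg_scores, proj_mat, img_h, img_w):
    """Append per-pixel semantic scores to LiDAR points.

    points:     (N, 3) xyz coordinates in the LiDAR frame
    seg_scores: (img_h, img_w, C) softmax scores from a 2D segmentor
    proj_mat:   (3, 4) LiDAR-to-image projection matrix
    Returns (N, 3 + C) painted points; points projecting outside
    the image keep all-zero semantic scores.
    """
    n, c = points.shape[0], seg_scores.shape[2]
    homo = np.hstack([points, np.ones((n, 1))])          # homogeneous coords (N, 4)
    uvw = homo @ proj_mat.T                              # image-plane coords (N, 3)
    uv = uvw[:, :2] / np.clip(uvw[:, 2:3], 1e-6, None)   # perspective divide
    u, v = uv[:, 0].astype(int), uv[:, 1].astype(int)
    valid = (uvw[:, 2] > 0) & (u >= 0) & (u < img_w) & (v >= 0) & (v < img_h)
    painted = np.zeros((n, c))
    painted[valid] = seg_scores[v[valid], u[valid]]      # gather per-pixel scores
    return np.hstack([points, painted])
```

The painted points can then be voxelized and fed to any of the baseline detectors unchanged, which is why the approach is detector-independent.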
Following \\cite{nuscenes2019}, 10 previous LiDAR sweeps are stacked into each keyframe to densify the point clouds.\n\n\n\n\\begin{table*}[ht!]\n\\centering\n\\resizebox{0.95\\textwidth}{!}\n{\n\t\\begin{tabular}{l | c | c | c c c c c c c c c c}\n\t\\hline\n \\multicolumn{1}{c|}{\\multirow{2}{*}{Methods}} &\n \\multicolumn{1}{c|}{\\multirow{2}{*}{\\textbf{NDS}(\\%)}} &\n \\multicolumn{1}{c|}{\\multirow{2}{*}{\\textbf{mAP}(\\%)}} & \n \\multicolumn{10}{c}{\\textbf{AP} (\\%)} \\\\\n \n \\multicolumn{1}{c|}{} & \\multicolumn{1}{c|}{} & \\multicolumn{1}{l|}{} & \\multicolumn{1}{l}{Car} & \\multicolumn{1}{l}{Pedestrian} & \\multicolumn{1}{l}{Bus} & \\multicolumn{1}{l}{Barrier} & \\multicolumn{1}{l}{T.C.} &\n \\multicolumn{1}{l}{Truck} & \\multicolumn{1}{l}{Trailer} & \\multicolumn{1}{l}{Moto.} & \\multicolumn{1}{l}{Cons.} &\n \\multicolumn{1}{l}{Bicycle} \\\\ \n \\hline \\hline\n\tSECOND \\cite{yan2018second} &61.96 &50.85 &81.61 &77.37 & 68.53 &57.75 &56.86 &51.93 &38.19 & 40.14 &17.95 & 18.16 \\\\\n\tSECOND$^{\\ast}$ & 67.41 & 62.15 & 83.98 & 82.86 &71.13 &63.29 &74.66 &59.97 &41.53 &66.85 &22.90 &54.35 \\\\\n\tImprovement $\\uparrow$ & \\textcolor{red}{{+5.45}} & \\textcolor{red}{{+11.30}} & \\textcolor{red}{+2.37} & \\textcolor{red}{+5.49} & \\textcolor{red}{+2.60} & \\textcolor{red}{+5.54} & \\textcolor{red}{+17.80} & \\textcolor{red}{+8.04} & \\textcolor{red}{+3.34} & \\textcolor{red}{+26.71} & \\textcolor{red}{+5.05}& \\textcolor{red}{+36.19} \\\\\n \\hline\n\tPointPillars \\cite{lang2019pointpillars} & 57.50 &43.46 &80.67 &70.80 &62.01 &49.23 &44.00 &49.35 &34.86 &26.74 &11.64 &5.27 \\\\\n\tPointPillars$^{\\ast}$ &66.04 & 60.77&83.51 &82.90&69.96 &58.46&74.50&56.90&39.25&66.40&21.68&54.10 \\\\\n\tImprovement $\\uparrow$ & \\textcolor{red}{{+8.54}} & \\textcolor{red}{{+17.31}} & \n\t\\textcolor{red}{+2.84} & \\textcolor{red}{+12.10} & \\textcolor{red}{+7.95} & \\textcolor{red}{+9.23} & \\textcolor{red}{+30.50} & 
\\textcolor{red}{+7.55} & \\textcolor{red}{+4.39} & \\textcolor{red}{+39.66} & \\textcolor{red}{+10.04} & \\textcolor{red}{+48.83} \\\\\n \\hline\n\tCenterPoint \\cite{yin2021center} & 64.82 &56.53 &84.73 &83.42 &66.95 &64.56 &64.76 &54.52 &36.13 &56.81 &15.81 &37.57\\\\\n\tCenterPoint$^{\\ast}$ &70.68 &66.53 &87.04 &88.44 & 70.66 &67.26 &79.57 &62.98 & 45.09 & 74.65&25.36&64.41\\\\\n\tImprovement $\\uparrow$ & \\textcolor{red}{\\textbf{+5.86}} & \\textcolor{red}{\\textbf{+10.00}} & \\textcolor{red}{+2.31} & \\textcolor{red}{+5.02} & \\textcolor{red}{+3.71} & \\textcolor{red}{+2.70} & \\textcolor{red}{+14.81} & \\textcolor{red}{+8.46} & \\textcolor{red}{+8.96} & \\textcolor{red}{+17.84} & \\textcolor{red}{+9.55}&\\textcolor{red}{+26.84} \\\\\n \\hline\n\t\\end{tabular}\n}\n\\caption{\\normalfont Evaluation results on the nuScenes validation set. ``NDS'' and ``mAP'' denote the nuScenes detection score and mean Average Precision. ``T.C.'', ``Moto.'' and ``Cons.'' are short for ``traffic cone'', ``motorcycle'', and ``construction vehicle'', respectively. $^{\\ast}$ denotes the improved baseline with the proposed ``FusionPainting'' added. To highlight the effectiveness of our method, the improvements over each baseline are shown in red.}\n\\label{tab:eval_on_nuscenes_val}\n\\end{table*}\n\n\\textbf{Evaluation Results on nuScenes Dataset} \\label{subsubsec:evaluation}\nWe evaluate the proposed framework on the nuScenes benchmark for both the validation and test splits.\n\n\\textit{Performance on Baseline Methods:} First, we integrate the proposed fusion module into three different baseline methods and achieve consistent improvements on both $\\text{mAP}$ and $\\text{NDS}$ over all baselines. Detailed results are given in Tab. \\ref{tab:eval_on_nuscenes_val}. 
From this table, we can see that the proposed module brings more than 10 points of improvement on $\\text{mAP}$ and more than 5 points on $\\text{NDS}$ for all three baselines. The improvements reach 17.31 and 8.54 points for PointPillars on $\\text{mAP}$ and $\\text{NDS}$, respectively. In addition, we find that the ``Traffic Cone'', ``Moto'' and ``Bicycle'' classes receive surprisingly large improvements compared to the other classes. For the ``Bicycle'' category specifically, the AP improves by about \\textbf{36.19\\%}, \\textbf{48.83\\%} and \\textbf{26.84\\%} on SECOND, PointPillars and CenterPoint, respectively. Interestingly, all these categories are small objects that are hard to detect because only a few LiDAR points fall on them; by adding the prior semantic information, the category classification becomes considerably easier. Note that all experiments here share the same settings, so the baseline figures may differ slightly from the official ones.\n\n\\textit{Comparison with other SOTA methods:} To compare the proposed framework with other SOTA methods, we submit our best results (the proposed module on CenterPoint \\cite{yin2021center}) to the nuScenes evaluation server\\footnote{\\url{https:\/\/www.nuscenes.org\/object-detection\/}}. The detailed results are listed in Tab. \\ref{tab:test_tabel}. To be clear, only methods with publications are compared here due to space limitations. From this table, we can see that ``NDS'' and ``mAP'' are improved by 3.1 and 6.0 points, respectively, compared with the baseline method CenterPoint \\cite{yin2021center}. 
More importantly, our algorithm outperforms the previous multi-modal method 3DCVF \\cite{DBLP:conf\/eccv\/YooKKC20} by a large margin, improving NDS and mAP by 8.1 and 13.6 points, respectively.\n\n\\textbf{Qualitative Results on nuScenes Dataset}\n\nMore qualitative detection results are illustrated in Fig. \\ref{Fig:detection result}. Fig. \\ref{Fig:detection result} (a) shows the annotated ground truth, (b) shows the detection results of CenterPoint on the raw point cloud without any painting strategy, (c) and (d) show the detection results with 2D-painted and 3D-painted point clouds, respectively, and (e) shows the results of our proposed framework. \nAs shown in the figure, 2D painting causes false positive detections due to the frustum blurring effect, while the 3D painting method produces worse foreground segmentation than 2D image segmentation. 
Instead, our FusionPainting combines the complementary information from the 2D and 3D segmentation and detects objects more accurately.\n\n\\begin{table*}[ht!]\n \\centering\n \\renewcommand\\arraystretch{1.2}\n\\resizebox{0.95\\textwidth}{!}{\n\n\\begin{tabular}{ l| c| c| c| c c c c c c c c c c}\n \\hline\n \\multicolumn{1}{c|}{\\multirow{2}{*}{Methods}} & \n \\multicolumn{1}{c|}{\\multirow{2}{*}{Modality}} & \\multicolumn{1}{c|}{\\multirow{2}{*}{\\textbf{NDS}(\\%)}} & \n \\multicolumn{1}{c|}{\\multirow{2}{*}{\\textbf{mAP}(\\%)}} & \n \\multicolumn{10}{c}{\\textbf{AP} (\\%)} \\\\ \n \\multicolumn{1}{c|}{} & \\multicolumn{1}{c|}{} & \\multicolumn{1}{c|}{} & \\multicolumn{1}{c|}{} & \\multicolumn{1}{l}{Car} & \\multicolumn{1}{l}{Truck} & \\multicolumn{1}{l}{Bus} & \\multicolumn{1}{l}{Trailer} & \\multicolumn{1}{l}{Cons} & \\multicolumn{1}{l}{Ped} & \\multicolumn{1}{l}{Moto} & \\multicolumn{1}{l}{Bicycle} & \\multicolumn{1}{l}{T.C} & \\multicolumn{1}{l}{Barrier} \\\\ \\hline\n PointPillars \\cite{vora2020pointpainting} & L & 55.0 & 40.1 & 76.0 & 31.0 & 32.1 & 36.6 & 11.3 & 64.0 & 34.2 & 14.0 & 45.6 & 56.4 \\\\\n 3DSSD \\cite{yang20203dssd} & L & 56.4 & 46.2 & 81.2 & 47.2 & 61.4 & 30.5 & 12.6 & 70.2 & 36.0 & 8.6 & 31.1 & 47.9 \\\\\n CBGS \\cite{zhu2019class} & L & 63.3 & 52.8 & 81.1 & 48.5 & 54.9 & 42.9 & 10.5 & 80.1 & 51.5 & 22.3 & 70.9 & 65.7 \\\\\n HotSpotNet \\cite{chen2020object} & L & 66.0 & 59.3 & 83.1 & 50.9 & 56.4 & 53.3 & {23.0} & 81.3 & {63.5} & 36.6 & 73.0 & \\textbf{71.6} \\\\\n CVCNET \\cite{chen2020every} & L & 66.6 & 58.2 & 82.6 & 49.5 & 59.4 & 51.1 & 16.2 & {83.0} & 61.8 & {38.8} & 69.7 & 69.7 \\\\\n CenterPoint \\cite{yin2021center} & L & {67.3} & {60.3} & {85.2} & {53.5} & {63.6} & {56.0} & 20.0 & 54.6 & 59.5 & 30.7 & \\underline{78.4} & 71.1 \\\\\n PointPainting \\cite{vora2020pointpainting} & L \\& C & 58.1 & 46.4 & 77.9 & 35.8 & 36.2 & 37.3 & 15.8 & 73.3 & 41.5 & 24.1 & 62.4 & 60.2 \\\\\n 3DCVF \\cite{DBLP:conf\/eccv\/YooKKC20} & L \\& C & 62.3 
& 52.7 & 83.0 & 45.0 & 48.8 & 49.6 & 15.9 & 74.2 & 51.2 & 30.4 & 62.9 & 65.9 \\\\ \\hline\n {FusionPainting(Ours)} & L \\& C & \\textbf{70.4} & \\textbf{66.3} & \\textbf{86.3} & \\textbf{58.5} & \\textbf{66.8} & \\textbf{59.4} & \\textbf{27.7} & \\textbf{87.5} & \\textbf{71.2} & \\textbf{51.7} & \\textbf{84.2} & {70.2} \\\\ \\hline\n \\end{tabular}\n}\n\\caption{Comparison with other SOTA methods on the nuScenes 3D object detection benchmark. ``L'' and ``C'' in the modality column represent LiDAR and camera, respectively. In each column, the best result among the other methods is underlined. Due to space limitations, only published methods are listed.}\n\\label{tab:test_tabel}\n\\end{table*}\n\\section{Introduction}\n\n\\input{Files\/01_introduction}\n\\input{Files\/02_related_work}\n\\input{Files\/03_methods}\n\\input{Files\/04_experiments}\n\\input{Files\/05_ablation_study}\n\n\\section{Conclusion and Future Works}\n\n{In this work, we proposed an effective framework, \\textit{Multi-Sem Fusion}, to fuse RGB images and LiDAR point clouds at two different levels. At the first level, the proposed AAF module adaptively aggregates the semantic information from the 2D image and 3D point cloud segmentation results with learned weight scores. At the second level, a DFF module is proposed to further fuse the boosted feature maps with different receptive fields via a channel attention module, so that the features can cover objects of different sizes. More importantly, the proposed modules are detector-independent and can be seamlessly employed in different frameworks. The effectiveness of the proposed framework has been evaluated on public benchmarks, where it outperforms state-of-the-art approaches. 
However, the current framework also has an obvious limitation: both the 2D and 3D parsing results are obtained by offline approaches, which prevents the application of our approach in real-time AD scenarios. An interesting research direction is to share the backbone features for both object detection and segmentation and to treat segmentation as an auxiliary task.} \n\n\\ifCLASSOPTIONcaptionsoff\n \\newpage\n\\fi\n\n\\bibliographystyle{ieeetr}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
{"text":"\\section{Discussion and Conclusion}\nWe presented a counterfactual-based attribution method to explain changes in a large-scale ad system's output metric. Using the computational structure of the system, the method provides attribution scores that are more accurate than prior methods.\n\n\\bibliographystyle{ACM-Reference-Format}\n\n\\section{Defining the attribution problem}\nFor a system's outcome metric $Y$, let $Y=y_t$ be a value that needs to be explained (e.g., an extreme value). Our goal is to explain the value by \\textit{attributing} it to a set of input variables, $\\bm{X}$. Can we rank the variables by their contribution in \\textit{causing} the outcome? \n\nFor example, consider a system that crashes whenever its load crosses 0.9 units. The system's crash metric can be described by the following structural equations: $Y=I_{Load \\geq 0.9}$; $Load=0.5X_1+ 0.4X_2+0.9X_3$; $X_i \\sim Bernoulli(0.5)\\ \\forall i$. The corresponding graph for the system has the following edges: $\\{X_1, X_2, X_3\\} \\rightarrow Load; Load \\rightarrow Y$. The value of each input $X_i$ is affected by the independent error terms through the Bernoulli distribution. Suppose the initial reference value was $(X_1=0,X_2=0, X_3=0, Y=0)$ and the next observed value is $(X_1=1,X_2=1, X_3=1, Y=1)$. Given that the system crashed ($Y=1$), how do we attribute it to $X_1, X_2, X_3$? 
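The example can be checked directly in code. This minimal sketch hard-codes the deterministic part of the structural equations (the Bernoulli error terms are already fixed by the observed world, where all inputs are 1) and evaluates a few counterfactuals by resetting inputs to their reference value 0:

```python
def load(x1, x2, x3):
    # Structural equation for Load
    return 0.5 * x1 + 0.4 * x2 + 0.9 * x3

def crash(x1, x2, x3):
    # Y = I_{Load >= 0.9}
    return int(load(x1, x2, x3) >= 0.9)

# Observed world: all inputs 1, and the system crashed.
assert crash(1, 1, 1) == 1

# Counterfactuals under the same error terms:
print(crash(1, 1, 0))  # X3 -> reference: load = 0.9, still crashes
print(crash(0, 1, 0))  # X1 and X3 -> reference: load = 0.4, no crash
print(crash(1, 0, 0))  # X2 and X3 -> reference: load = 0.5, no crash
print(crash(0, 0, 1))  # only X3 at observed value: load = 0.9, crashes
```

Note that resetting $X_3$ alone does not undo this particular crash (the load stays at exactly $0.9$), even though $X_3$ by itself is sufficient to cause one; the crash is undone only when at least one of $X_1, X_2$ is also reset.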
\nIntuitively, $X_3$ is a sufficient cause of the crash since setting $X_3=1$ would lead to the crash irrespective of the values of the other variables. At the same time, $X_1$ and $X_2$ can equally be a cause of this particular crash since their coefficients sum to $0.9$. However, if either $X_1$ or $X_2$ is observed to be zero, then the other one cannot explain the crash. \nThis example indicates that the attribution for any input variable depends on the equations of the data-generating process and also on the values of the other variables.\n\\subsection{Attribution score for system metric change}\nWe now define the \\textit{attribution score} for explaining an observed value with respect to a reference value. While system inputs can be continuous, we utilize the fact that system metrics are measured and compared over time. That is, we are often interested in attribution for a metric value compared to a reference timestamp. Reference values are typically chosen from previous values that are expected to be comparable (e.g., the metric value at the last hour or the last week). \nBy comparing to a reference timestamp, we simplify the problem by considering only two values of a continuous variable:\nits \\textit{observed} value and its value at the \\textit{reference} timestamp. \n\nFormally, we express the problem of attribution for an outcome metric $Y=y_t$ as explaining the change in the metric with respect to a reference, $\\Delta Y = y_t - y'$: why did the outcome value change from $y'$ to $y_t$? \n\n\\begin{definition}\\label{def:attr-problem}\n\\textbf{Attribution Score. }\nLet $Y=y_t$ and $Y=y'$ be the observed and reference values, respectively, of a system metric. Let $\\bm{V}$ be the set of input variables. 
Then, an \\textit{attribution score} for $X \\in \\bm{V}$ provides the contribution of $X$ in causing the change from $y'$ to $y_t$.\n\\end{definition}\n\n\\subsection{The need for SCM and counterfactuals}\nTo estimate the \\textit{causal} contribution, we need to model the\ndata-generating process from input variables to the outcome. This is usually done by a structural causal model (SCM) $M$, which consists of a causal graph and structural equations describing the generating functions for each variable. \n\n\\noindent \\textbf{SCM. } Formally, a structural causal model~\\cite{pearl2009book} is defined by a tuple $\\langle \\bm{V}, \\bm{U}, \\bm{F}, P(\\bm{U}) \\rangle$ where $\\bm{V}$ is the set of observed variables, $\\bm{U}$ refers to the unobserved variables, $\\bm{F}$ is a set of functions, and $P(\\bm{U})$ is a strictly positive probability measure for $\\bm{U}$. For each $V\\in \\bm{V}$, $f_V \\in \\bm{F}$ determines its data-generating process, \n$V=f_V(\\operatorname{Pa}_V, U_V)$ where $\\operatorname{Pa}_V \\subseteq \\bm{V} \\setminus \\{V\\}$ denotes the \\textit{parents} of $V$ and $U_V\\subseteq \\bm{U}$. We consider a non-linear, additive noise SCM such that $\\forall V \\in \\bm{V}$, $f_V$ can be written as an additive combination of some $f^*_V(\\operatorname{Pa}_V)$ and the unobserved variables (error terms). We assume a Markovian SCM such that the unobserved variables (corresponding to error terms) are mutually independent; thus the SCM corresponds to a directed acyclic graph (DAG) over $\\bm{V}$ with edges to each node from its parents. Note that a specific realization of the unobserved variables, $\\bm{U}=\\bm{u}$, determines the values of all other variables.\n\n\\noindent \\textbf{Counterfactual. 
}Given an SCM, values of the unobserved variables $\\bm{U}=\\bm{u}$, a target variable $Y\\in \\bm{V}$ and a subset of inputs $\\bm{X} \\subseteq \\bm{V}\\setminus \\{Y\\}$, a counterfactual corresponds to the query, \\textit{``What would have been the value of $Y$ (under $\\bm{u}$), had $\\bm{X}$ been $\\bm{x}$?''} It is written as $Y_{\\bm{x}}(\\bm{u})$.\n\nUsing counterfactuals, we can formally express the attribution question in the above example. Suppose the observed values are $Y=y_t$ and $X_i=x_i$ for some input $X_i$, under $\\bm{U}=\\bm{u}$. At an earlier reference timestamp with a different value of the unobserved variables, $\\bm{U}=\\bm{u}'$, the values are $Y=y'$ and $X_i=x'_i$. Starting from the observed value ($\\bm{U}=\\bm{u}$), the attribution for $X_i$ is characterized by the change in $Y$ after changing $X_i$ to its reference value, $Y_{x_i}(\\bm{u}) - Y_{x'_i}(\\bm{u})= y_t - Y_{x'_i}(\\bm{u})$.\nThat is, given that $Y$ is $y_t$ with $X_i=x_i$ and all other variables at their observed values, how much would $Y$ change if $X_i$ is set to $x'_i$?\nSimilarly, we can ask for $Y_{x_i, x'_1}(\\bm{u}) - Y_{x'_i, x'_1}(\\bm{u})$ ($i \\neq 1$), denoting the change in $Y$'s value upon setting $X_i=x_i$ when $X_1$ is set to its reference value. Thus, there can be multiple expressions for the counterfactual impact of $X_i$, depending on the values of the other variables. \n\n\\section{Attribution using CF-Shapley value}\nTo develop an attribution score, we propose a way to average over the different possible counterfactual impacts. 
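For the running crash example, the quantities being averaged can be enumerated explicitly. This is an illustrative sketch (variable names and the enumeration loop are ours): for each witness set $\bm{S}$ of other inputs already reset to their reference values, it records how much additionally resetting $X_3$ changes the output.

```python
from itertools import combinations

COEF = {"x1": 0.5, "x2": 0.4, "x3": 0.9}

def crash(vals):
    # Deterministic part of the example SCM: Y = I_{Load >= 0.9}
    return int(sum(COEF[k] * v for k, v in vals.items()) >= 0.9)

obs = {k: 1 for k in COEF}   # observed world
ref = {k: 0 for k in COEF}   # reference world
target = "x3"
others = [k for k in COEF if k != target]

for r in range(len(others) + 1):
    for s in combinations(others, r):
        # Witness set S is already at its reference values
        base = dict(obs, **{k: ref[k] for k in s})
        impact = crash(base) - crash(dict(base, **{target: ref[target]}))
        print(sorted(s), impact)
```

The impact of $X_3$ is 0 when no other input has been reset, but 1 for every non-empty witness set, which is exactly the ambiguity that averaging over witness sets resolves.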
First, we posit desirable axioms that an attribution score should satisfy, as in~\\cite{lundberg2017unified,pmlr-v162-jung22a}.\n\\subsection{Desirable axioms for an attribution score} \\label{sec:axioms}\n\\begin{axiom*} Given two values of the metric, observed, $Y(\\bm{u})$ and reference, $Y(\\bm{u}')$, corresponding to unobserved variables, $\\bm{u}$ and $\\bm{u}'$ respectively, following properties are desirable for an attribution score $\\bm{\\phi}$ that measures the causal contribution of inputs $V \\in \\bm{V}$.\n\\begin{enumerate}\n \\item \\textbf{CF-Efficiency. } The sum of attribution scores for all $V \\in \\bm{V}$ equals the counterfactual change in output from reference to observed value, $Y(\\bm{u}) -Y_{\\bm{v}'}(\\bm{u})=Y_{\\bm{v}}(\\bm{u}') -Y(\\bm{u}')=\\sum_V \\phi_V$.\n \\item \\textbf{CF-Irrelevance}. If a variable $X$ has no effect on the counterfactual value of output under all witnesses, $Y_{x',\\bm{s}'}(\\bm{u})=Y_{\\bm{s}'}(\\bm{u}) \\forall \\bm{S} \\subseteq \\bm{V} \\setminus \\{X\\}$, then $\\phi_X=0$.\n \\item \\textbf{CF-Symmetry. } If two variables have the same effect on counterfactual value of output $Y_{\\bm{s}'}(\\bm{u})-Y_{x'_1,\\bm{s}'}(\\bm{u})=Y_{\\bm{s}'}(\\bm{u})-Y_{x'_2,\\bm{s}'}(\\bm{u}) \\forall \\bm{S} \\subseteq \\bm{V} \\setminus \\{X_1, X_2\\}$, then their attribution scores are same, $\\phi_{X_1}=\\phi_{X_2}$. \n \\item \\textbf{CF-Approximation.} For any subset of variables $\\bm{S} \\subseteq \\bm{V}$ set to their reference values $\\bm{s}'$, the sum of attribution scores approximates the counterfactual change from observed value. I.e., there exists a weight $\\omega(\\bm{S})$ s.t. the vector $\\bm{\\phi}_{\\bm{S}}$ is the solution to the weighted least squares, $\\arg \\min_{\\bm{\\phi}^*_{\\bm{S}}} \\sum_{\\bm{S} \\subseteq \\bm{V}} \\omega(\\bm{S}) ((Y(\\bm{u}) - Y_{\\bm{s}'}(\\bm{u})) -\\sum_{S \\in \\bm{S}} \\phi^*_{S})^2$. 
\n\\end{enumerate}\n\\end{axiom*}\nSimilar to the Shapley value axioms, these axioms convey intuitive properties that a \\textit{counterfactual} attribution score should satisfy. \\textit{CF-Efficiency} states that the sum of attribution scores for the inputs should equal the difference between the observed metric and the counterfactual metric when all inputs are set to their reference values. \\textit{CF-Irrelevance} states that if changing the value of an input $X$ has no effect on the output counterfactual under all values of the other variables, then the attribution score of $X$ should be zero. \\textit{CF-Symmetry} states that if changing the value of two inputs has the same effect on the counterfactual output under all values of the other variables, then both variables should have an identical attribution score. And finally, \\textit{CF-Approximation} states that the difference between the observed output and the counterfactual output due to a change in any subset of variables is roughly equal to the sum of attribution scores for those variables.\n\nNote that \\textit{CF-Efficiency} does not necessarily imply that the sum of attribution scores is equal to the actual difference between the observed value and reference value. This is because the actual difference is a combination of the input variables' contribution and statistical noise (error terms). That is, $y_t - y' = Y_{\\bm{v}}(\\bm{u}) - Y_{\\bm{v}'}(\\bm{u}') = \\sum_V \\phi_V + (Y_{\\bm{v}'}(\\bm{u})- Y_{\\bm{v}'}(\\bm{u}'))$, where we used the CF-Efficiency property for a desirable attribution score $\\bm{\\phi}$. 
The second term corresponds to the difference in the metric with the same input variables but different noise corresponding to the observed and reference timestamps.\nThis is the unavoidable noise component since we are explaining the change due to a \\textit{single} observation.\nTherefore, for any counterfactual attribution score to meaningfully explain the observed difference, it is useful to select a reference timestamp that minimizes the difference over exogenous factors (e.g., using a previous value of the metric on the same day of the week or the same hour). Given the true structural equations and an attribution score that satisfies the axioms, if the scores do sum to the observed difference in a metric, then it implies that the reference timestamp was well-selected.\n\n\\subsection{The CF-Shapley value}\nWe now define the {\\textit{CF-Shapley}}\\xspace value that satisfies all four axioms.\n\\begin{definition}\nGiven an observed output metric $Y=y_t$ and a reference value $y'$, the CF-Shapley value for the contribution by input $X$ is given by, \n\\begin{equation} \\label{eq:cf-shap}\n \\phi_X = \\sum_{\\bm{S} \\subseteq \\bm{V} \\setminus \\{X\\}}\\frac{Y_{\\bm{s}'}(\\bm{u})- Y_{x',\\bm{s}'}(\\bm{u}) }{n C(n-1, |\\bm{S}|)}\n\\end{equation}\nwhere $n$ is the number of input variables $\\bm{V}$, $\\bm{S}$ is the subset of variables set to their reference values $\\bm{s}'$, and $\\bm{U}=\\bm{u}$ is the value of unobserved variables such that $Y(\\bm{u})=y_t$.\n\\end{definition}\n\\begin{proposition}\n{\\textit{CF-Shapley}}\\xspace value satisfies all four axioms: CF-Efficiency, CF-Irrelevance, CF-Symmetry, and CF-Approximation.\n\\end{proposition}\n\\begin{proof}\n\\textit{CF-Efficiency.} Following ~\\cite{pmlr-v162-jung22a,vstrumbelj2014explaining}, the {\\textit{CF-Shapley}}\\xspace value for an input $V_i$ can be written as, \n\\begin{equation}\n \\phi_{V_i}= \\frac{1}{n!} \\sum_{\\pi \\in \\Pi(n)} Y_{\\bm{w}'_{pre}(\\pi)}(\\bm{u}) - Y_{v'_i, \\bm{w}'_{pre}(\\pi)}(\\bm{u}) 
\n\\end{equation}\nwhere $\\Pi$ is the set of all permutations over the $n$ variables and $\\bm{W}_{pre}(\\pi)$ is the subset of variables that precede $V_i$ in the permutation $\\pi \\in \\Pi$. The sum is, \n\\begin{equation}\n \\begin{split}\n &\\sum_{i=1}^{n} \\phi_{V_i} = \\frac{1}{n!} \\sum_{\\pi \\in \\Pi(n)} \\sum_{i=1}^{n} Y_{\\bm{w}'_{pre}(\\pi)}(\\bm{u}) - Y_{v'_i, \\bm{w}'_{pre}(\\pi)}(\\bm{u}) \\\\\n \n &=\\frac{1}{n!} \\sum_{\\pi \\in \\Pi(n)} Y_{\\emptyset}(\\bm{u}) - Y_{\\bm{v}'}(\\bm{u}) \\\\\n &= Y(\\bm{u}) - Y_{\\bm{v}'}(\\bm{u})\n \\end{split}\n\\end{equation}\nwhere the inner sum telescopes for each permutation. We can show the claim analogously under $\\bm{U}=\\bm{u}'$. \n\\\\\n\\textit{CF-Irrelevance.}\nIf $Y_{x', \\bm{s}'}(\\bm{u}) = Y_{\\bm{s}'}(\\bm{u}) \\ \\forall \\bm{S} \\subseteq \\bm{V} \\setminus \\{X\\}$, then the numerator in Eqn.~\\ref{eq:cf-shap} for $\\phi_X$ will be zero and the result follows. \n\\\\\n\\textit{CF-Symmetry.}\nAssuming the same effect on the counterfactual value, we write the {\\textit{CF-Shapley}}\\xspace value for $V_i$ and show it is the same for $V_j$. 
\n\\begin{equation*}\n\\begin{split}\n &\\phi_{V_i} = \\sum_{\\bm{W}\\subseteq \\bm{V} \\setminus \\{V_i\\}}\\frac{Y_{\\bm{w}'}(\\bm{u})- Y_{v'_i,\\bm{w}'}(\\bm{u}) }{n C(n-1, |\\bm{W}|)}\\\\\n &= \\sum_{\\bm{W}\\subseteq \\bm{V} \\setminus \\{V_i, V_j\\}}\\frac{Y_{\\bm{w}'}(\\bm{u})- Y_{v'_i,\\bm{w}'}(\\bm{u})}{n C(n-1, |\\bm{W}|)} + \\sum_{\\bm{Z}\\subseteq \\bm{V} \\setminus \\{V_i, V_j\\}} \\frac{Y_{v'_j,\\bm{z}'}(\\bm{u})- Y_{v'_i,v'_j,\\bm{z}'}(\\bm{u})}{n C(n-1, |\\bm{Z}|+1)} \\\\ \n &= \\sum_{\\bm{W}\\subseteq \\bm{V} \\setminus \\{V_i, V_j\\}}\\frac{Y_{\\bm{w}'}(\\bm{u})- Y_{v'_j,\\bm{w}'}(\\bm{u})}{n C(n-1, |\\bm{W}|)} + \\sum_{\\bm{Z}\\subseteq \\bm{V} \\setminus \\{V_i, V_j\\}} \\frac{Y_{v'_i,\\bm{z}'}(\\bm{u})- Y_{v'_i,v'_j,\\bm{z}'}(\\bm{u})}{n C(n-1, |\\bm{Z}|+1)} \\\\\n &= \\sum_{\\bm{W}\\subseteq \\bm{V} \\setminus \\{V_j\\}}\\frac{Y_{\\bm{w}'}(\\bm{u})- Y_{v'_j,\\bm{w}'}(\\bm{u})}{n C(n-1, |\\bm{W}|)} = \\phi_{V_j}\n\\end{split}\n\\end{equation*}\nwhere the third equality uses $Y_{v'_i,\\bm{s}'}(\\bm{u})=Y_{v'_j,\\bm{s}'}(\\bm{u}) \\text{\\ } \\forall \\bm{S} \\subseteq \\bm{V} \\setminus \\{V_i, V_j\\}$.\n\\\\\n\\textit{CF-Approximation.} Here we use a property~\\cite{lundberg2017unified} of value functions of standard Shapley values. There exist specific weights $\\omega(\\bm{S})$ such that the Shapley value is the solution to $\\arg \\min_{\\bm{\\phi}^*} \\sum_{\\bm{S}\\subseteq \\bm{V}}\\omega(\\bm{S}) (\\nu(\\bm{S}) -\\sum_{S \\in \\bm{S}} \\phi^*_{S})^2$ where $\\nu(\\bm{S})$ is the value function of any subset $\\bm{S} \\subseteq \\bm{V}$. The result follows by selecting $\\nu(\\bm{S})=Y(\\bm{u})-Y_{\\bm{s}'}(\\bm{u})$. \n\\end{proof}\n\n\n\\noindent \\textbf{Comparison to do-shapley. }\nUnlike {\\textit{CF-Shapley}}\\xspace, the do-shapley value~\\cite{pmlr-v162-jung22a} takes the expectation over all values of the unobserved $\\bm{u}$, $\\bm{E}_{\\bm{u}}[Y|do(\\bm{S})] - \\bm{E}_{\\bm{u}}[Y]$. 
Thus, it measures the \\textit{average} causal effect over values of $\\bm{u}$, whereas for attributing a single observed value, we want to know the contributions of inputs under the same $\\bm{u}$.\n\n\\subsection{Estimating {\\textit{CF-Shapley}}\\xspace values}\n\\label{sec:est-cf}\nEqn.~\\ref{eq:cf-shap} requires estimating the counterfactual output at different (hypothetical) values of the inputs, which in turn requires both the causal graph and the structural equations of the SCM. Using knowledge of how the system computes its metric, the first step is to construct its computational graph. Then for each node in the graph, we fit its generating function as a predictive model over its parents, yielding a fitted SCM that we treat as the data-generating process.\n\nTo fit the SCM equations, a common way is to use supervised learning to build, for each node $V$, a model $\\hat{f}_V$ estimating its value from the values of its parent nodes at the same timestamp.\nHowever, such a model will have high variance due to natural temporal variation in the node's value over time. Since including variables predictive of the outcome reduces the variance of an estimate in general~\\cite{bottou2013counterfactual}, we utilize auto-correlation in time-series data to include the previous values of the node as predictive features. Thus, the final model is expressed as, $\\forall V \\in \\bm{V}$,\n\\begin{equation}\\label{eq:catden-model}\n \\hat{v}_t = \\hat{f}_V(\\operatorname{Pa}(V), v_{t-1},v_{t-2}, \\cdots, v_{t-r})\n\\end{equation}\nwhere $r$ is the number of lagged (auto-correlated) features that we include. 
\nThe model can be trained using a supervised time-series prediction algorithm with auxiliary features, such as DeepAR~\\cite{salinas2020deepar}.\n\n\n\n\nWe then use the fitted SCM equations to estimate the counterfactual with the 3-step algorithm from\nPearl~\\cite{pearl2009book}, assuming additive error.\nTo compute $Y_{\\bm{s}'}(\\bm{u})$ for any subset $\\bm{S} \\subseteq \\bm{V}$, the three steps are,\n\\begin{enumerate}\n \\item \\textbf{Abduction.} Infer the error of the structural equations on all observed variables. For each $V \\in \\bm{V}$, $\\hat{\\epsilon}_{v,t} = v_t - \\hat{f}_V(\\operatorname{Pa}(V), v_{t-1}, \\cdots, v_{t-r})$ where $v_t$ is the observed value at timestamp $t$.\n \\item \\textbf{Action.} Set the value of $\\bm{S}\\leftarrow \\bm{s}'$, ignoring any parents of $\\bm{S}$.\n \\item \\textbf{Prediction. } Use the inferred error term and the new value $\\bm{s}'$ to estimate the new outcome, by proceeding step-wise for each level of the graph~\\cite{pearl2009book,dash2022evaluating} (i.e., following a topological sort of the graph), starting with $\\bm{S}$'s children and proceeding downstream until the $Y$ node's value is obtained. For each $X \\in \\bm{V}$ ordered by the topological sort of the graph (after $\\bm{S}$), $x'=\\hat{f}_X(\\operatorname{Pa}'(X), \\cdots) +\\hat{\\epsilon}_{x,t}$. And finally, we obtain $Y_{\\bm{s}'}(\\bm{u})=\\hat{f}_Y(\\operatorname{Pa}'(Y), \\cdots) +\\hat{\\epsilon}_{y,t}$.\n\\end{enumerate}\n Thus, the {\\textit{CF-Shapley}}\\xspace score for any input is obtained by repeatedly applying the above algorithm and aggregating the required counterfactuals in Eqn.~\\ref{eq:cf-shap}; we use a common Monte Carlo approximation to sample a fixed number ($M=1000$) of values of $\\bm{S}$~\\cite{castro2009polynomial,fatima2008shapleycompute}. 
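As a concrete illustration, the abduction--action--prediction steps and the aggregation of Eqn.~\ref{eq:cf-shap} can be sketched on a toy two-input SCM with additive error. The structural function and all numerical values below are illustrative assumptions, not the fitted model from our system; with only $n=2$ inputs, the subsets can be enumerated exactly instead of sampled.

```python
from itertools import combinations
from math import comb

# Toy fitted SCM with two inputs and additive error, Y = f(X) + eps.
# f and all numbers are illustrative assumptions, not our system's model.
def f(x):
    return 2.0 * x[0] + x[0] * x[1]

x_obs = {0: 3.0, 1: 2.0}   # observed inputs at timestamp t
x_ref = {0: 1.0, 1: 1.0}   # reference inputs (e.g., same day last week)
y_obs = 12.5               # observed metric Y(u)

# Step 1 (Abduction): infer the error term from the observation.
eps_hat = y_obs - f([x_obs[0], x_obs[1]])

# Steps 2-3 (Action + Prediction): counterfactual Y_{s'}(u), subset S at reference.
def cf(S):
    x = [x_ref[j] if j in S else x_obs[j] for j in (0, 1)]
    return f(x) + eps_hat

# CF-Shapley value (Eqn. eq:cf-shap), by exact enumeration since n=2.
def cf_shapley(i, n=2):
    others = [j for j in range(n) if j != i]
    phi = 0.0
    for k in range(n):
        for S in combinations(others, k):
            phi += (cf(set(S)) - cf(set(S) | {i})) / (n * comb(n - 1, k))
    return phi

phi = [cf_shapley(i) for i in range(2)]
# CF-Efficiency: the scores sum to Y(u) - Y_{v'}(u).
assert abs(sum(phi) - (y_obs - cf({0, 1}))) < 1e-9
```

For larger $n$, the inner enumeration is replaced by the Monte Carlo sampling over $M$ subsets mentioned above.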
\n\n\n\\section{Case study on ad matching system}\nWe now apply the {\\textit{CF-Shapley}}\\xspace attribution method on data logs of a real-world ad matching system from July 6 to Dec 28, 2021. For each query, we have log data on the number of ads matched by the system. In addition, each query is marked with its category. The category {query volume}\\xspace is measured as the number of queries issued by users for each category. This allows us to calculate the ground-truth matching density on each day, category-wise and aggregate. Separately, to compute the category-wise \\textit{input} ad demand for a day, we fetch each ad listing available on the day and assign it to a category if any query from that category contains a word that is present in its keywords. This is the total number of ad listings that are potentially relevant to the queries of that category for the exact matching algorithm (which matches the full query exactly to the full ad keyword phrase).\n\n\\subsection{Implementing {\\textit{CF-Shapley}}\\xspace: Fitting the SCM}\n\\label{sec:fit-scm}\nWe follow the method outlined in Section~\\ref{sec:est-cf}. The main task is to estimate the structural equations for category-wise ad density.\nThere are over 250 categories; fitting a separate model for each is not efficient. Besides, it may be beneficial to exploit the common patterns across the different time series. We therefore consider a deep learning-based model, DeepAR~\\cite{salinas2020deepar}, that fits a single recurrent network for multiple time series (we also tried a transformer-based model, the temporal fusion transformer (TFT)~\\cite{lim2021temporal}, but found it hard to tune to obtain comparable accuracy). As specified in Equation~\\ref{eq:catden-model}, for each category, the DeepAR model is given {ad demand}\\xspace, {query volume}\\xspace and the autoregressive values of density for the past 14 days. 
Note that rather than predicting over a range of days (which can be inaccurate), we fit the time-series model separately for each day $t$ using data up to day $t-1$, to utilize the additional information available from the previous day. To implement DeepAR, we used the open-source GluonTS library. \n\nWe compare the DeepAR model to three baselines. As simple baselines that capture the weekly pattern, we consider \\textbf{1)} category density on the same day a week before; and \\textbf{2)} the average density over the last four weeks. We also consider a 3-layer feed-forward network that uses the same features as DeepAR. Table~\\ref{tab:deepar-acc} shows the prediction error. The DeepAR model obtains the lowest error on the validation set according to all three metrics: mean absolute percentage error (MAPE), median APE, and the symmetric MAPE~\\cite{makridakis2020m4}.\nFor our results, we choose DeepAR as the estimated SCM equation and apply {\\textit{CF-Shapley}}\\xspace on data from Nov 15 to Dec 28. We chose Nov 15 to allow sufficient days of training data.\n\n\\noindent \\textbf{Choosing reference timestamp. }\nThe {\\textit{CF-Shapley}}\\xspace method requires specifying a reference day that provides the ``expected\/usual'' density value. Common ways to choose it are the last day's value or the value last week on the same day. We choose the latter due to known weekly patterns in the density metric. 
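The feature construction of Equation~\ref{eq:catden-model}, the per-day abduction of the residual, and the choice of a same-weekday reference can be sketched as follows. All series here are synthetic, and ordinary least squares stands in for the DeepAR estimator actually used; only the structure of the computation mirrors our setup.

```python
import numpy as np

rng = np.random.default_rng(0)
T, r = 200, 14  # days of history; r = 14 lagged density features, as in the text

# Synthetic series for one category: parents (ad demand, query volume) and density.
demand = rng.uniform(50, 100, T)
volume = rng.uniform(10, 20, T)
density = 0.3 * demand / volume + rng.normal(0.0, 0.05, T)

# Design matrix per Equation (eq:catden-model): parents at day t plus lags t-1..t-r.
X = np.column_stack(
    [np.ones(T - r), demand[r:], volume[r:]]
    + [density[r - k : T - k] for k in range(1, r + 1)]
)
y = density[r:]

# Linear least squares is a simple stand-in for DeepAR; following the text,
# the model for day t is fit only on data up to day t-1.
beta, *_ = np.linalg.lstsq(X[:-1], y[:-1], rcond=None)
eps_hat = y[-1] - X[-1] @ beta   # abduction: residual (noise term) on day t

# Reference timestamp: same weekday one week earlier, per the weekly pattern.
t_ref = (T - 1) - 7
```

The inferred residual `eps_hat` is then held fixed in the action and prediction steps of the counterfactual computation.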
\n\n\\begin{table}[tb]\n \\centering\n \\resizebox{0.7\\columnwidth}{!}{%\n \\begin{tabular}{lccc}\n \\toprule\n Model &Mean APE (\\%) & Median APE (\\%) & sMAPE\\\\\n \\midrule\n LastWeek & 21.2 & 11.5 & 0.20\\\\\n Avg4Weeks & 25.1 & 10.6 & 0.17 \\\\\n FeedForward & 20.0 & 10.8 & 0.20\\\\\n DeepAR & \\textbf{15.6} & \\textbf{7.8} & \\textbf{0.13} \\\\\n \\bottomrule\n \\end{tabular}}\n \\caption{Accuracy of category-wise density prediction models.}\n \\label{tab:deepar-acc}\n \\vspace{-3em}\n\\end{table}\n\n\\begin{figure}[tb]\n \\centering\n \\includegraphics[width=0.7\\columnwidth]{figures\/shapley_sum_validate.pdf}\n \\caption{Comparison of actual percentage change in daily density with sum of estimated {\\textit{CF-Shapley}}\\xspace attribution scores. }\n \\label{fig:shapley-valid}\n\\end{figure}\n\n\n\n\n\n\\subsection{Validating the CF-Efficiency axiom}\n We first check whether the obtained {\\textit{CF-Shapley}}\\xspace scores sum up to the observed percentage change in the daily density metric (Figure~\\ref{fig:shapley-valid}). 
The difference between the sum of {\\textit{CF-Shapley}}\\xspace scores and the actual change is less than 0.10\\% for all days, indicating that our choice of reference timestamp is appropriate (Sec.~\\ref{sec:axioms}) and that the Shapley value computation by approximation is capturing the relevant signal.\n\n\\subsection{Choosing dates for evaluation}\nWhile we computed attribution scores for all days, typically one is interested in attribution of unexpected values of daily density.\n\nTo discover unusual days for attribution, we fit a standard time-series model to the aggregate daily density data.\nWe use four candidate models: \\textbf{1)} daily density on the same day last week; \\textbf{2)} mean density of the last 4 weeks; \\textbf{3)} a feed-forward network; and \\textbf{4)} a DeepAR model. As for the category-wise prediction, all neural network models are provided the last 14 days of daily density. Table~\\ref{tab:agg-acc} shows the mean APE, median APE, and sMAPE. The feed-forward model obtains the lowest error. While DeepAR is a more expressive model than FeedForward, a potential reason for its lower accuracy is the number of training samples (there are only as many data points as the number of days for daily density prediction, unlike category-wise prediction). \nGiven its simplicity, we use the FeedForward network for detecting outlier days. Its predictions for different days and the outliers detected can be seen in Figure~\\ref{fig:ff-preds}. 
Like DeepAR, the feedforward model is implemented as a Bayesian probabilistic model, so it outputs prediction samples rather than a point prediction.\n\n\n\\begin{table}[tb]\n \\centering\n \\resizebox{0.7\\columnwidth}{!}{%\n \\begin{tabular}{lccc}\n \\toprule\n Model & Mean APE (\\%) & Median APE (\\%) & sMAPE \\\\\n \\midrule\n LastWeek & 4.6 & 3.1 & 0.047\\\\\n Last4Weeks & 4.5 & 3.3 & 0.047\\\\\n FeedForward & \\textbf{3.0} & \\textbf{2.2} & \\textbf{0.031} \\\\\n DeepAR & 3.4 & 2.4 & 0.035\\\\\n \n \\bottomrule\n \\end{tabular}}\n \\caption{Prediction error for daily density models.}\n \\label{tab:agg-acc}\n \\vspace{-2em}\n\\end{table}\n\n\n\\begin{figure}[tb]\n \\centering\n \\includegraphics[width=0.65\\columnwidth]{figures\/ff_pred_pattern.pdf}\n \\caption{Outliers through FeedForward model's prediction. Shaded region represents the 95\\% prediction interval.}\n \\label{fig:ff-preds}\n\\end{figure}\n\n\\begin{figure}[tb]\n \\centering\n \\includegraphics[width=0.8\\columnwidth]{figures\/cat_contrib_Dec4.pdf}\n \\caption{Attribution scores of different categories by {ad demand}\\xspace and {query volume}\\xspace on December 4.}\n \\label{fig:cat-contrib}\n\\end{figure}\n\n\\begin{table}\n \\centering\n \\resizebox{0.85\\columnwidth}{!}{%\n \\begin{tabular}{lcc}\n \\toprule\n Category & AdDemandAttrib & QueryVolumeAttrib \\\\\n \\midrule\n \\textit{Sort by AdDemandAttrib} & &\\\\\n Internet \\& Telecom & \\textbf{0.0450} & 0.00850 \\\\\n Apparel & -0.00843 & 0.00151 \\\\\n Arts \\& Entertainment & 0.00663 & -0.00928 \\\\\n Hobbies \\& Leisure & 0.00646 & -0.0168 \\\\\n Travel \\& Tourism & -0.00584 & -0.000287 \\\\\n \\midrule\n \\textit{Sort by QueryVolumeAttrib} & &\\\\\n Hobbies \\& Leisure & 0.00646 & \\textbf{-0.0168} \\\\\n Arts \\& Entertainment & 0.00633 & -0.00928 \\\\\n Internet \\& Telecom & 0.0450 & 0.00850 \\\\ \n Law \\& Government & 0.000203 & -0.00743 \\\\\n Health & 0.000161 & -0.00645 \\\\\n \\bottomrule\n \\end{tabular}}\n \\caption{Ad 
demand and {query volume}\\xspace attribution on Dec 4. }\n \\label{tab:dec4-attr}\n \\vspace{-3em}\n\\end{table}\n\n\nDays on which the daily density goes beyond the 95\\% prediction interval are chosen for attribution. A visual inspection shows two clusters, Thanksgiving\/Black Friday and Christmas, which are expected due to their significance in the US. We also find an extreme value on Dec. 4. In all three cases, the daily density increases. Intuitively, one may have expected the opposite for holidays: density would decrease since people are expected to spend less time online. \n\n\\subsection{Qualitative analysis}\nWe now use {\\textit{CF-Shapley}}\\xspace to explain these observed changes. \n\n\\noindent \\textbf{December 4. }\nFigure~\\ref{fig:cat-contrib} shows the attribution by different categories, aggregated up to obtain 22 high-level categories. The \\textit{Internet \\& Telecom} (IT) category has the biggest positive attribution score while the \\textit{Hobbies \\& Leisure} (HL) category has the biggest negative attribution score. That is, daily density decreased on the day due to the HL category. \n\nTo understand why, we look at the attribution scores separately for {ad demand}\\xspace and {query volume}\\xspace for each category in Table~\\ref{tab:dec4-attr}. The attribution score reflects the percentage change in daily density compared to last week, due to {ad demand}\\xspace or {query volume}\\xspace of a category. The only categories to have an attribution score greater than 1\\% are \\textit{IT} and \\textit{HL}, agreeing with the category-wise analysis. Specifically, the change in {ad demand}\\xspace due to \\textit{IT} leads to a 4.5\\% increase in daily density. The {query volume}\\xspace change in \\textit{HL}, on the other hand, leads to a 1.7\\% decrease in daily density. Considering all categories together, {ad demand}\\xspace change leads to a 6.5\\% increase in daily density and {query volume}\\xspace change leads to a 5.6\\% decrease. 
The net result is a 1\\% improvement over the last week. While a 1\\% increase in daily density may look small, note that the value last week was already inflated due to it being a Black Friday week. This is why we detect outliers using the expected time-series pattern rather than simply the difference from last week. On such days, one may also consider an alternative baseline, e.g., two weeks before.\n\nAre the attributions meaningful? In the absence of ground-truth, we dive deeper into the query logs to check for evidence. We do find a significant increase in queries for the \\textit{HL} category. In fact, more than 70\\% of the increase in {query volume}\\xspace for \\textit{HL} is due to cheetah-related queries. On manual inspection, we find that December 4 is \\textit{International Cheetah Day}. Cheetah-related queries also contribute 86\\% of the {ad demand}\\xspace increase for the \\textit{HL} category. \nGiven that the category density of \\textit{HL} is much lower than the daily density, this increase in {query volume}\\xspace causes a \\textit{decrease} in daily density, leading to the negative attribution score. Due to the {ad demand}\\xspace increase (perhaps in anticipation of Cheetah Day), \\textit{HL} also leads to an increase of 0.6\\% in daily density (see Table~\\ref{tab:dec4-attr}). On the other hand, the \\textit{IT} category's main contribution is from an increase in ad demand. Logs show a substantial (14\\%) increase in ads compared to last week for the category on Dec 4, which explains its high attribution score for {ad demand}\\xspace.\nThis increase is sustained across queries, possibly indicating a shift for the first Saturday after the holiday weekend. \n\n\\noindent \\textbf{Nov 25 and 26 (Thanksgiving).} On the Thanksgiving holiday (Nov 25), we may have expected density to drop since many people in the US are expected to spend more time with their family and less time online. 
At the same time, online shopping on Black Friday (Nov 26) may increase density. Instead, we find that the density increases significantly on \\textit{both} days (see Figure~\\ref{fig:ff-preds}). Specifically, compared to last week, daily density on Nov 26 increased by 18.3\\%, out of which 13.5\\% is contributed by {query volume}\\xspace change and 4.8\\% by {ad demand}\\xspace. How can we explain this result? Using the {\\textit{CF-Shapley}}\\xspace method, for {query volume}\\xspace change, we find that the categories \\textit{Health}, \\textit{Law \\& Government} and \\textit{Business \\& Industrial} are the top-ranked categories. Each contributes more than 2\\% of the density increase, leading to a cumulative 7\\% increase. From the logs, we see that {query volume}\\xspace for these categories decreased as people spent less time on work- or health-related queries. Since these categories tend to have low density, the daily density increased as a result. On the {ad demand}\\xspace side, \\textit{Online Media \\& Ecommerce} contributed a nearly 3\\% increase in daily density, perhaps due to increased demand for Black Friday shopping. \nNov 25 exhibits similar patterns for {query volume}\\xspace. \n\n\\noindent \\textbf{Dec 24 and Dec 25 (Christmas).} On the Christmas days too, there is a significant increase in density. As on the Thanksgiving days, health and work-related queries are issued less often, leading to an overall increase in daily density (all three categories have attribution scores >1\\%). However, we find that the top categories by {query volume}\\xspace change are \\textit{Hobbies \\& Leisure} and \\textit{Arts \\& Entertainment}. Both categories experience a surge in query volume and, being high-density categories, cause 2.1\\% and 1.8\\% increases in daily density respectively. 
To explain this, we look at the query logs and find that the rise in \\textit{Hobbies \\& Leisure} queries is fueled by the \\textit{toys \\& games} subcategory, which aligns with expectations for the holidays. On Dec 25, \\textit{Hobbies \\& Leisure} is also the category with the highest attribution score by {ad demand}\\xspace (2.7\\%).\nOverall, the category contributes a 4.8\\% increase, nearly one-third of the total density increase on Christmas day, signifying the importance of the \\textit{toys \\& games} subcategory for Christmas.\n\n\n\n\n\\section{Introduction}\nIn large-scale systems, a common problem is to explain the reasons for a change in the output, especially for unexpected and large changes. Explaining the reasons or attributing the change to input factors can help isolate the cause and debug it if the change is undesirable, or suggest ways to amplify the change if desirable. For example, in a distributed system, system failures~\\cite{zhang2019inflection} or performance anomalies~\\cite{nagaraj2012structured,attariyan2012xray} are important undesirable outcomes. In online platforms such as e-commerce websites or search websites, a desirable outcome is an increase in revenue, and it is important to understand why the revenue increased or decreased~\\cite{singal2022shapley,dalessandro2012causally}.\n\nTechnically, this problem can be framed as an attribution problem~\\cite{efron2020prediction,yamamoto2012understanding, dalessandro2012causally}. Given a set of candidate factors, which of them can best explain the observed change in output? Methods include statistical analysis based on conditional probabilities~\\cite{berman2018beyondmta,ji2016probabilistic-mta,shao2011mta} or computation of game-theoretic attribution scores like the Shapley value~\\cite{lundberg2017unified,singal2022shapley,dalessandro2012causally}. 
However, most past work assumes that the output can be written as a function of the inputs, ignoring any structure in the computation of the output. \n\n\\begin{figure}\n \\centering\n \\includegraphics[width=0.8\\columnwidth]{figures\/graph-atttribution-crop.pdf}\n \\caption{Causal graph for an ad matching system that reflects computation of the matching density metric. For each query category, the number of queries (query volume) and ads (ad demand) determine the categorical density for each day. Different categorical densities combine to yield the daily density. The goal is to attribute daily density to the ad demand and query volume of different categories. The category density is directly affected by category-wise ad demand and query volume and thus has a relatively more stable relationship with the inputs than overall daily density. \n }\n \\label{fig:causal-graph}\n \\vspace{-3em}\n\\end{figure}\n\nIn this paper, we consider large-scale systems such as search or ad systems where output metrics are aggregated over different kinds of inputs or composed over multiple pipeline stages, leading to a natural computational structure (instead of a single function of the inputs). For example, in an ad system, the number of ads that are matched per query is a composite measure, composed of an analogous metric over each query category (see Figure~\\ref{fig:causal-graph}). While the overall matching density may fluctuate, the matching density per category is expected to be more stably associated with the input queries and ads. As another example, the output metric may be a result of a series of modules in a pipeline, e.g., recommendations that are finally shown to a user may be a result of multiple pipeline stages where each stage filters some items. Our key insight is that utilizing the computational structure of a real-world system can break up the system into smaller sub-parts that stay stable over time and thus can be modelled accurately. 
In other words, the system's computation can be modelled as a set of independent, causal mechanisms~\\cite{peters2017elements} over a structural causal model (SCM)~\\cite{pearl2009book}. \n\nModeling the system's computation as an SCM also provides a principled way to define an attribution score. Specifically, we show that attribution can be defined in terms of \\textit{counterfactuals} on the SCM. Following recent work on causal Shapley values~\\cite{heskes2020causal,pmlr-v162-jung22a}, we posit four axioms that any desirable attribution method for an output metric should satisfy. We then propose a counterfactual variant of the Shapley value that satisfies all these properties. Thus, given the computational structure, our proposed {\\textit{CF-Shapley}}\\xspace method has the following steps: \\textbf{1)} utilize machine learning algorithms to fit the SCM and compute counterfactual values of the metric under any input, and \\textbf{2)} use the estimated counterfactuals to construct an attribution score to rank the contribution of different inputs. On simulated data, our results show that the proposed method is significantly more accurate for explaining inputs' contributions to an observed change in a system metric, compared to the Shapley value~\\cite{lundberg2017unified} or its recent causal variants~\\cite{heskes2020causal,pmlr-v162-jung22a}.\n\n\n\n\nWe apply the proposed method, {\\textit{CF-Shapley}}\\xspace attribution, to a large-scale ad matching system that outputs relevant ads for each search query issued by a user. The key outcome is \\textit{matching density}, the number of ads matched per query. This density is roughly proportional to the revenue generated, since only the queries for which ads are selected contribute to revenue. There are two main causes for a change in matching density: a change in query volume or a change in demand from advertisers. 
Given that queries are typically organized by categories, the attribution problem is to understand which of these two is driving an observed change in matching density, and from which categories. \n\nTo do so, we construct a causal graph representing the system's computation pipeline (Figure~\\ref{fig:causal-graph}). Given six months of the system's log data, we repurpose time-series prediction models to learn the structural equation for category-wise density as a function of query volume and ad demand, its parents in the graph. \n\nFor this system, we find that category-wise attribution is possible with minimal assumptions, while attribution between query volume and ad demand requires knowledge of the structural equations that generate category-wise density.\nIn both cases, we show how the {\\textit{CF-Shapley}}\\xspace method can be used to estimate the system's counterfactual outputs and the resultant attribution scores.\nAs a sanity check, {\\textit{CF-Shapley}}\\xspace attribution scores satisfy the \\textit{efficiency} property for attributing the matching density metric: their sum matches the observed change in density. We then use {\\textit{CF-Shapley}}\\xspace scores to explain density changes on five outlier days from November to December 2021, uncovering insights on how changes in {query volume}\\xspace or {ad demand}\\xspace for different categories affect the density metric. 
We validate the results through an analysis of external events during the time period.\n\nTo summarize, our contributions include: \n\\begin{itemize}\n \\item A method for attributing metrics in a large-scale system utilizing its computational structure as a causal graph, which outperforms recent Shapley value-based methods on simulated data.\n \n \\item A case study on estimating counterfactuals in a real-world ad matching system, providing a principled way to attribute changes in its output metric.\n\\end{itemize}\n\n\n\n\n\\section{Related Work} \nOur work considers a causal interpretation of the attribution problem. Unlike attribution methods on predictions from a (deterministic) machine learning model~\\cite{lundberg2017unified,ide2021anomaly}, here we are interested in attributing real-world outcomes where the data-generating process includes noise. Since the attribution problem concerns explaining a single outcome or event, we focus on causality on individual outcomes~\\cite{halpern2016book} rather than \\textit{general} causality that deals with the average effect of a cause on the outcome over a (sub)population~\\cite{pearl2009book}. In other words, we are interested in estimating the counterfactual, given that we already have an observed event. Counterfactuals are the hardest problem in Pearl's ladder of causation, compared to observation and intervention~\\cite{pearl2019seven}. \n\nWhile counterfactuals have been applied in feature attribution for machine learning models~\\cite{kommiya2021towards,verma2020counterfactual}, less work has been done for attributing real-world outcomes in systems using formal counterfactuals. Recent work uses the \\textit{do}-intervention to propose do-shapley values~\\cite{heskes2020causal,pmlr-v162-jung22a} that attribute the interventional quantity $P(Y|do(\\bm{V}))$ across different inputs $V \\in \\bm{V}$. 
While do-shapley values are useful for calculating the average effect of different inputs on the output $Y$, they are not applicable for attributing an \\textit{individual} change in the output. For attributing individual changes, \\cite{janzing2019causaloutliers} analyze root cause identification for outliers in a structural causal model, and find that attribution conditional on the parents of a node is more effective than global attribution. They quantify the attribution using information theoretic scores, but do not provide any axiomatic characterization of the resulting attribution score.\nIn this work, we propose four axioms that characterize desirable properties for an attribution score for explaining individual change in output and present the CF-Shapley value that satisfies those axioms.\n\n\\noindent \\textbf{Attribution in ad systems. }\nMulti-touch attribution is the most common attribution problem studied in online ad systems. Given an ad click, the goal is to assign credit to the different preceding exposures of the same item to the user, e.g., previous ad exposures, emails, or other media. Multiple methods have been proposed to estimate the attribution such as attributing all to the last exposure~\\cite{berman2018beyondmta}, an average over all exposures, or using probabilistic models to model the click data as a function of the input exposures~\\cite{shao2011mta,ji2016probabilistic-mta}. Recent methods utilize the game-theoretic attribution score using Shapley values that summarizes the attribution over multiple simulations of input variables, with~\\cite{dalessandro2012causally} or without a causal interpretation~\\cite{singal2022shapley}. \nMulti-touch attribution can be considered as a one-level SCM problem, where there is an output node being affected by all input nodes. It does not cover more complex systems where there is a computational structure.\n\n\\noindent \\textbf{Performance Anomaly Attribution. 
}\nComputational structure (e.g., specific system capabilities or logs) has been considered in the systems literature to root-cause performance anomalies~\\cite{attariyan2012xray} or system failures~\\cite{zhang2019inflection}. Some methods use causal reasoning to motivate their attribution algorithm, but they do so informally.\nOur work provides a formal analysis of the system attribution problem.\n\n\\section{Evaluation}\nOur goal is to attribute observed changes in the output metric of an ad matching system. We first describe the system and conduct a simulation study to evaluate {\\textit{CF-Shapley}}\xspace scores.\n\n\\subsection{Description of the ad matching system}\nWe consider an ad matching system where the goal is to retrieve all the relevant ads for a particular web search query by a user (these ads are later ranked to show only the top ones to the user). The outcome variable is the average number of ads matched for each query, called the \\textit{``matching density''} (or simply \\textit{density}). This outcome can be affected by multiple factors, including the availability of ads by advertisers, the distribution and volume of user queries issued on the system, any algorithm changes, or any other system bug or unknown factors. For simplicity, we consider a matching algorithm based on matching \\textit{exact} full text between a query and provided keyword phrases for an ad. This algorithm remains stable over time due to its simplicity. Thus, we can safely assume that there are no algorithm changes or code bugs for the matching algorithm under study. Given an extreme or unexpected value of density, our goal then is to attribute it between changes in ads and changes in queries.\n\nSince there are millions of queries and ads, we categorize the data by nearly 250 semantic query categories. 
Examples of query categories are ``Fashion Apparel'', ``Health Devices'', ``Internet'', and so on.\nA naive solution may be to simply compare the magnitude of observed change in ad demand or query volume across categories. That is, given a change in density on day $t$, choose a reference day $r$ (e.g., same day last week) and compare the values of {ad demand}\xspace and {query volume}\xspace. We may conclude that the factor with the highest percentage change is causing the overall change in density. However, the limitation is that\nthe factor with the highest percentage change may be neither necessary nor sufficient to cause the change because its effect depends on the values of other factors. For example, an increase in {query volume}\xspace for a category can have a positive, negative, or no effect on the daily density depending on its ad demand compared to other categories. This is because the density is computed as a query volume-weighted average of category density; an increase in {query volume}\xspace for a low-demand (and hence low-density) category \\textit{decreases} the aggregate density (see Eqn.~\\ref{eq:dailyden-eqn}). \n\n\n\\subsection{Constructing an SCM for the ad density metric}\nTo apply the {\\textit{CF-Shapley}}\xspace method for attributing a matching density value, we define a causal graph based on how the metric is computed, as shown in \nFigure~\\ref{fig:causal-graph}. The number of queries for a category is measured by the number of search result page views (SRPV). The number of ads is measured by the number of listings posted by advertisers. For simplicity, we call them \\textit{{query volume}\xspace} and \\textit{{ad demand}\xspace}. We assume that given a category, the {ad demand}\xspace and {query volume}\xspace are independent of each other since they are driven by the advertiser and user goals respectively. 
The combination of {ad demand}\xspace and {query volume}\xspace for a category determines its category-wise density, which is then aggregated to yield the \\textit{daily density}. As we are interested in attribution over days as a time unit, we refer to the aggregate density as daily density, $y$. \nThus, the variables $\\{ad^{c1}, qv^{c1}, ad^{c2}, qv^{c2} \\cdots ad^{ck}, qv^{ck} \\}$ are the $2k$ inputs to the system where $ci$ is the category, ${ad}$ refers to {ad demand}\xspace, ${qv}$ refers to {query volume}\xspace, and $k$ is the number of categories. \n\nThe structural equation from category-wise densities to daily density is known. It is simply a weighted average of the category-wise densities, weighted by the {query volume}\xspace.\n\\begin{equation}\\label{eq:dailyden-eqn}\n y_t = f({den}^{c1}_t, {qv}^{c1}_t, \\cdots {den}^{ck}_t, {qv}^{ck}_t) = \\frac{\\sum_c {den}^c_t {qv}^c_t}{\\sum_c {qv}^c_t}\n\\end{equation}\nwhere $den^c_t$ is the density of category $c$ on day $t$ and ${qv}^c_t$ is the {query volume}\xspace for the category on day $t$.\nHowever, the equation from category-wise {ad demand}\xspace and {query volume}\xspace to category density is infeasible to obtain. This would involve ``replay'' of a computationally expensive matching algorithm on real-time queries and ad listings, but the ad listings are not available (only a daily snapshot of the ads inventory is stored in the logs). We will show how to estimate the structural equation for category density in Section~\\ref{sec:fit-scm}.\n\n\n\n\n\\subsection{Evaluating {\\textit{CF-Shapley}}\xspace on simulated data}\nBefore applying {\\textit{CF-Shapley}}\xspace on the ad matching system, we first evaluate the method on simulated data motivated by the causal graph of the system. 
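The weighted-average structure of Eqn.~\\ref{eq:dailyden-eqn} also explains the earlier observation that shifting {query volume}\xspace toward a low-density category lowers the daily density. A minimal numeric sketch, with hypothetical category names and values (not from the system's data):

```python
# Daily density per Eqn. (dailyden-eqn): the query-volume-weighted
# average of the category-wise densities.
def daily_density(den, qv):
    """den[c]: density of category c, qv[c]: query volume of category c."""
    total_qv = sum(qv.values())
    return sum(den[c] * qv[c] for c in den) / total_qv

# Hypothetical high-density and low-density categories.
den = {"fashion": 8.0, "internet": 2.0}
qv = {"fashion": 1000, "internet": 100}

base = daily_density(den, qv)
# Doubling query volume for the low-density category, holding category
# densities fixed, pulls the weighted average down.
more_internet = daily_density(den, {"fashion": 1000, "internet": 200})
print(base, more_internet)  # the second value is lower
```

This is exactly why comparing raw percentage changes across categories can mislead: the sign of a category's effect on the aggregate depends on how its density compares to the current daily density.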
This is because it is impossible to know the ground-truth attribution using data from a real-world system, since we do not know how the change in input variables led to the observed metric value and which inputs were the most important. \n\nWe construct simulated data based on the causal structure of Figure~\\ref{fig:causal-graph}. For each category, we model {ad demand}\xspace and {query volume}\xspace as independent Gaussian random variables (we simulate real-world variation in {query volume}\xspace using a Beta prior). The category-wise density is constructed as a monotonic function of ad demand and has a biweekly periodicity. The SCM equations are,\n\\begin{align} \\label{eq:sim-gen}\n \\gamma &\\sim \\mathcal{B}(0.5,0.5); \\text{\\ } {qv}^c_t \\sim \\mathcal{N}(1000\\gamma, 100); \\text{\\ } {ad}^c_t \\sim \\mathcal{N}(10000, 100) \\nonumber \\\\\n den^c_t &= g({ad}^c_t,{qv}^c_t, den^c_{t-1}) + \\mathcal{N}(0,\\sigma^2) \\nonumber \\\\\n &= \\kappa \\, {ad}^c_t \/ {qv}^c_t + \\beta \\, a \\, den^c_{t-1} + \\mathcal{N}(0,\\sigma^2) \\\\\n y_t &= \\frac{\\sum_c den^c_t {qv}^c_t}{\\sum_c {qv}^c_t} \\label{eq:dailyden}\n\\end{align}\nwhere ${qv}^c_t$ and ${ad}^c_t$ are the query volume and ad demand respectively for category $c$ at time $t$. They combine to produce the ad matching density $den^c_t$ based on a function $g$ and additive normal noise. The variance of the noise, $\\sigma^2$, determines the stochastic variation in the system. For the simulation, we construct $g$ based on two observations about the category density: \\textbf{1)} it is roughly a ratio of the \\textit{relevant} ads and the number of queries; and \\textbf{2)} it exhibits auto-correlation with its previous value and periodicity over a longer time duration. 
We use $\\kappa$ to denote the fraction of relevant ads and add a second term with parameter $a$ to simulate a biweekly pattern: $a = 1$ if $\\lfloor t\/7 \\rfloor$ is even and $a = -1$ otherwise.\n$\\beta$ is the relative importance of the previous value in determining the current category density. \n Finally, all the category-wise densities are weighted by their query volume ${qv}^c_t$ and averaged to produce the daily density metric, $y_t$.\n \nEach dataset generated using these equations has 1000 days and 10 categories; we set $\\kappa=0.85, \\beta=0.15$ for simplicity. We intervene on the {ad demand}\xspace or {query volume}\xspace of the 1000th point to construct an outlier metric that needs to be attributed. Given the biweekly pattern, the reference date is chosen 14 days before the 1000th point.\n\n\\noindent \\textbf{Setting ground-truth attribution.} Even with simulated data, setting ground-truth attribution can be tricky. For example, if there is an increase in {ad demand}\xspace for one category and an increase in {query volume}\xspace for another, it is not clear which one would have the bigger impact on the daily density. That depends on their {query volume}\xspace and {ad demand}\xspace respectively, and on any changes in other categories. To evaluate attribution methods, therefore, we consider simple interventions where objective ground-truth can be obtained. Specifically, for ease of interpretation, we intervene on only two categories at a time such that the first has a substantially higher chance of affecting the outcome metric than the second. \n\nWe consider two configurations: change in 1) {ad demand}\xspace and 2) {query volume}\xspace. For changing {ad demand}\xspace (\\textit{Config 1}), we choose two categories such that the first has the highest {query volume}\xspace and the second has the lowest {query volume}\xspace. 
We double the {ad demand}\xspace for both categories with a slight difference (x2 for the first category, x2.1 for the second). Since the category-wise densities are weighted by {query volume}\xspace to obtain the daily density metric, for the same (or similar) change in demand, it is natural that the first category has a higher impact on daily density (even though the two may have similar impacts on their own category-wise densities). For \\textit{Config 1}, thus, the ground-truth attribution is the first category. For changing {query volume}\xspace (\\textit{Config 2}), we choose two categories such that the first has the most extreme density and the second has density equal to the reference daily density. Then, we change {query volume}\xspace as above: x2 for the first category and x2.1 for the second. Following Eqn.~\\ref{eq:dailyden}, a {query volume}\xspace change in a category having the same density as the daily density is expected to have low impact on daily density (keeping other categories constant, if category density is not impacted by {query volume}\xspace, an increase in {query volume}\xspace for a category with density equal to the daily density causes zero change in daily density). Thus, the ground-truth attribution (the category with the highest impact on the output metric) is again the first category. Note that {query volume}\xspace has higher variation across categories, so a higher multiplicative factor does not necessarily mean a higher absolute difference.\n\n\n\\noindent \\textbf{Baseline attribution methods.} We compare {\\textit{CF-Shapley}}\xspace to the standard Shapley value (as implemented in SHAP~\\cite{lundberg2017unified}, {\\texttt{Shapley}}\xspace) and the do-shapley value ({\\texttt{DoShapley}}\xspace)~\\cite{pmlr-v162-jung22a}. The {\\texttt{Shapley}}\xspace method ignores the structure and fits a model directly predicting daily density $y_t$ using (category-wise) {ad demand}\xspace and {query volume}\xspace features. 
It uses the predictions of this model for computing the Shapley score. For the {\\texttt{DoShapley}}\xspace method, we notice that our causal graph corresponds to the Direct-Causal graph structure in their paper and use the estimator from Eq.~(5) of \\cite{pmlr-v162-jung22a}, which depends on the same daily density predictor as the standard Shapley value. \nWe also evaluate on three intuitive baselines based on absolute change in inputs: \\textbf{1)} the category with the biggest change in {ad demand}\xspace ({\\texttt{AdDemandDelta}}\xspace); \\textbf{2)} {query volume}\xspace ({\\texttt{QVolumeDelta}}\xspace); or \\textbf{3)} density multiplied by {query volume}\xspace ({\\texttt{ProductDelta}}\xspace), since this product is used in the daily density equation.\n\nFor the {\\textit{CF-Shapley}}\xspace algorithm, we fit the structural equation for category density using the following features: {ad demand}\xspace, {query volume}\xspace, $den_{t-1}, den_{t-7}, den_{t-14}$. For both the {\\textit{CF-Shapley}}\xspace category density prediction model and the {\\texttt{Shapley}}\xspace daily density prediction model, we use a 3-layer feed-forward network. We use all data up to the 999th day for training and validation for all models. \n\n\\begin{figure}[tb]\n \\centering\n \\includegraphics[width=0.85\\columnwidth]{figures\/attr_truem_category.pdf}\n \\caption{Category attribution accuracy for various methods.}\n \\label{fig:sim-cat-attr}\n\\end{figure}\n\n\\noindent \\textbf{Results. }\nFor each attribution method, we measure accuracy compared to the ground-truth as we increase the noise ($\\sigma$) in the true data-generating process (SCM) ($\\sigma=\\{0.1, 1, 10\\}$). 
As noise in the generating process increases, we expect higher error for fitting structural equations and thus the attribution task becomes harder.\n\\textit{Attribution accuracy} is defined as the fraction of times a method outputs the highest attribution score for the correct category (first category), over 20 simulations.\n\nFigure~\\ref{fig:sim-cat-attr} shows the results. {\\textit{CF-Shapley}}\\xspace obtains the highest attribution accuracy for both Config 1 and 2. In general, attribution for {ad demand}\\xspace is easier than {query volume}\\xspace because both the category density and daily density are monotonic functions of the {ad demand}\\xspace. That is why we observe near 100\\% accuracy for {\\textit{CF-Shapley}}\\xspace under \\textit{Config 1}, even with high noise. The attribution accuracy for \\textit{Config 2} is 70-80\\%, decreasing as more noise is introduced.\n\nIn comparison, none of the baselines achieve more than 50\\% (random-guess) accuracy. Note that the {\\texttt{Shapley}}\\xspace and {\\texttt{DoShapley}}\\xspace methods obtain similar attribution accuracies. While their attribution scores are different, the highest ranked category often turns out to be the same since they rely on the same daily density model (but use different formulae). Inspecting the predictive accuracy of the daily density model offers an explanation: error on the daily density prediction is higher than that for category-wise density prediction (and it increases as the noise is increased). This indicates the value of computing an individualized counterfactual using the full graph structure, rather than focusing on the average (causal) effect. 
Finally, the other intuitive baselines fail on both tasks since they only look at the change in the input variables.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\\label{sec:intro}\n\n\\citet[page 74]{AngristPischke2015} argue that ``careful reasoning about OVB [omitted variables bias] is an essential part of the 'metrics game.'' Largely for this reason, researchers have eagerly adopted new tools that let them quantitatively assess the impact of omitted variables on their results. In particular, researchers now widely use the sensitivity analysis methods developed in \\cite{AltonjiElderTaber2005} and \\cite{Oster2019}. These methods have been extremely influential, with about 3600 and 2500 Google Scholar citations as of June 2022, respectively. Looking beyond citations, researchers are actively using these methods. Every top 5 journal in economics is now regularly publishing papers which use the methods in \\cite{Oster2019}. Aggregating across these five journals, from the three year period starting when \\cite{Oster2019} was published, 2019--2021, 33 published papers have used these methods, and often quite prominently.\n\nThese methods, however, rely on an assumption called \\emph{exogenous controls}. This assumption states that the omitted variables of concern are uncorrelated with all included covariates. For example, consider a classic regression of wages on education and controls like parental education. Typically we are worried that the coefficient on education in this regression is a biased measure of the returns to schooling because unobserved ability is omitted. 
To apply Oster's \\citeyearpar{Oster2019} method for assessing the importance of this unobserved variable, we must assume that unobserved ability is uncorrelated with parent's education, along with all other included controls.\\footnote{Appendix A.1 in \\cite{Oster2019} briefly describes one approach to relaxing the exogenous controls assumption in her setting. We show that this approach changes the interpretation of her sensitivity parameter in a way that may change researchers' conclusions about the robustness of their results. We discuss this in detail in our appendix \\ref{sec:OsterRedefinition}.}\n\nSuch exogeneity assumptions on the control variables are usually thought to be very strong and implausible. For example, \\cite{AngristPischke2017} discuss how\n\\begin{quote}\n``The modern distinction between causal and control variables on the right-hand side of a regression equation requires more nuanced assumptions than the blanket statement of regressor-error orthogonality that's emblematic of the traditional econometric presentation of regression.'' (page 129)\n\\end{quote}\nPut differently: We usually do not expect the included control variables to be uncorrelated with the omitted variables; instead we merely hope that the treatment variable is uncorrelated with the omitted variables after adjusting for the included controls. These control variables are therefore usually thought to be endogenous.\n\nIn this paper we provide a new approach to sensitivity analysis that allows the included control variables to be endogenous, unlike \\cite{AltonjiElderTaber2005} and \\cite{Oster2019}. Like these previous papers, however, we measure the importance of unobserved variables by comparing them to the included covariates. 
Thus researchers can still measure how strong selection on unobservables must be relative to selection on observables in order to overturn their baseline findings.\n\n\\subsubsection*{Overview of Our Approach}\n\nIn section \\ref{sec:model} we describe the baseline model. The parameter of interest is $\\beta_\\text{long}$, the coefficient on a treatment variable $X$ in a long OLS regression of an outcome variable $Y$ on a constant, treatment $X$, the observed covariates $W_1$, and some unobserved covariates $W_2$. In section \\ref{sec:causalModels} we discuss three causal models which allow us to interpret this parameter causally, based on three different identification strategies: unconfoundedness, difference-in-differences, and instrumental variables. Since $W_2$ is unobserved, we cannot compute the long regression of $Y$ on $(1,X,W_1,W_2)$ in the data. Instead, we can only compute the medium regression of $Y$ on $(1,X,W_1)$. We begin by considering a baseline model with a no selection on unobservables assumption, which implies that the coefficients on $X$ in the long and medium regressions are the same. Importantly, this baseline model does \\emph{not} assume the controls $W_1$ are exogenous. They are included solely to aid in identification of the coefficient on $X$ in the long regression. \n\nWe then move to assess the importance of the no selection on unobservables assumption. In section \\ref{sec:MainNewAnalysis} we develop a sensitivity analysis that does not rely on the exogenous controls assumption, while still allowing researchers to compare the magnitude of selection on observables with the magnitude of selection on unobservables. Our results use either one, two, or three sensitivity parameters; not all of our results require all three parameters. The first sensitivity parameter compares the relative magnitude of the coefficients on the observed and unobserved covariates in a treatment selection equation. 
This parameter thus measures the magnitude of selection on unobservables by comparing it with the magnitude of selection on observables. The second sensitivity parameter compares the relative magnitude of the coefficients on the observed and unobserved covariates in the outcome equation. The third sensitivity parameter controls the relationship between the observed and the unobserved covariates; this parameter thus measures the magnitude of control endogeneity.\n\nWe provide three main identification results. Our first result (Theorem \\ref{cor:IdsetRyANDcFree}) characterizes the identified set for $\\beta_\\text{long}$, the coefficient on treatment in the long regression of the outcome on the treatment and the observed and unobserved covariates. This theorem only requires that researchers make an assumption about a single sensitivity parameter---the relative magnitudes of selection on observables and unobservables. In contrast, \\cite{Oster2019} requires that researchers reason about two different sensitivity parameters. Moreover, our result allows for arbitrarily endogenous controls, unlike existing results in the literature. We provide a closed-form, analytical expression for the identified set, which makes this result easy to use in practice. Using this result, we show how to do breakdown analysis: to find the largest magnitude of selection on unobservables relative to selection on observables needed to overturn a specific baseline finding. This value is called a breakdown point, and can be used to measure the robustness of one's baseline results. We provide a simple expression for the breakdown point and recommend that researchers report estimates of it along with their baseline estimates. 
This estimated breakdown point provides a scalar summary of a study's robustness to selection on unobservables while allowing for arbitrarily endogenous controls.\n\nIf researchers are willing to partially restrict the magnitude of control endogeneity, then their results will be more robust to selection on unobservables. Our second result (Theorem \\ref{thm:IdsetRyFree}) therefore characterizes the identified set for $\\beta_\\text{long}$ when researchers make an assumption about two sensitivity parameters: the relative magnitude of selection on unobservables and the magnitude of control endogeneity. We again provide a simple closed form expression for the identified set, and then show how to use this result to do breakdown analysis. Finally, if researchers are willing to restrict the impact of unobservables on outcomes, then they can again obtain results that are more robust to selection on unobservables. In this case, the identified set is more difficult to characterize (see Theorem \\ref{thm:mainMultivariateResult} in the appendix). However, our third main result (Theorem \\ref{cor:BFCalculation3D}) shows that we can nonetheless easily numerically compute objects like breakdown points.\n\nIn section \\ref{sec:empirical} we show how to use our results in empirical practice. We use data from \\citet[\\emph{Econometrica}]{BFG2020} who studied the effect of historical American frontier life on modern cultural beliefs. Specifically, they test a well known conjecture that living on the American frontier cultivated individualism and antipathy to government intervention. They heavily rely on Oster's (2019) method to argue against the importance of omitted variables. Using our results, we obtain more nuanced conclusions about robustness. In particular, when allowing for endogenous controls, we find that effects obtained from questionnaire based outcomes are no longer robust, but effects from election and property tax outcomes remain robust. 
This analysis highlights that previous empirical findings of robustness based on \\cite{Oster2019}, for example, may no longer be robust once the controls are allowed to be endogenous.\n\n\\subsubsection*{Related Literature}\n\nWe conclude this section with a brief review of the literature. We focus on two literatures: The literature on endogenous controls and the literature on sensitivity analysis in linear or parametric models.\n\nThe idea that the treatment variable and the control variables should be treated asymmetrically in the assumptions goes back to at least \\cite{BarnowCainGoldberger1980}. They developed the ``linear control function estimator'', which is based on an early parametric version of the now standard unconfoundedness assumption. \\citet[page 190]{HeckmanRobb1985}, \\cite{HeckmanHotz1989}, and \\citet[page 5035]{HeckmanVytlacil2007part2} all provide detailed discussions of this estimator. It was also one of the main estimators used in \\cite{LaLonde1986}. \\cite{StockWatson2011} provide a textbook description of it on pages 230--233 and pages 250--251. \\cite{AngristPischke2009} also discuss it at the end of their section 3.2.1. Also see \\cite{Frolich2008}. Note that this earlier analysis was based on mean independence assumptions, while the analysis in our paper only uses linear projections. We do this so that our baseline model is not falsifiable, which allows us to avoid complications that arise in falsifiable models (e.g., see \\citealt{MastenPoirier2021FF}). More recently, \\cite{HuenermundLouw2020} remind researchers that most control variables are likely endogenous and hence their coefficients should not be interpreted as causal. \n\nAlthough control variables are often thought to be endogenous, the literature on sensitivity analysis generally assumes the controls are exogenous. As mentioned earlier, this includes \\cite*{AltonjiElderTaber2005, AltonjiElderTaber2008} and \\cite{Oster2019}. 
However, Appendix A.1 of \\cite{Oster2019} describes one approach for relaxing the exogenous controls assumption by redefining her sensitivity parameter $\\delta$. We discuss this in detail in appendix \\ref{sec:OsterRedefinition}. There we show that such a redefinition implies that $\\delta = 1$ is no longer a natural reference point. In particular, we show that this redefinition can change researchers' conclusions about the robustness of their results. \\cite{Krauth2016} explicitly allows for endogenous controls, but he relies on a similar redefinition approach which implies that his sensitivity parameter $\\lambda$ measures the magnitude of selection on unobservables as well as the endogeneity of the controls at the same time. In particular, like our results for Oster's $\\delta$, when the exogenous controls assumption does not hold, $\\lambda = 1$ is no longer a natural reference point for Krauth's $\\lambda$. Thus it too does not solely measure selection on unobservables when the controls are endogenous. See Appendix \\ref{subsec:KrauthDisc} for more discussion. \\cite{Imbens2003} starts from the standard unconfoundedness assumption which allows endogenous controls, but in his parametric assumptions (see his likelihood equation on page 128) he assumes that the unobserved omitted variable is independent of the observed covariates. \n\\cite{CinelliHazlett2020} develop an alternative to \\cite{Oster2019} that allows researchers to compare the relative strength of the observed and unobserved covariates on outcomes and on treatment selection. Their approach also imposes the exogenous controls assumption (see the last paragraph of their page 53). \\cite{AET2019} propose an approach to allow for endogenous controls based on imposing a factor model on all covariates, observable and unobservable. Their approach and ours are not nested; in particular, their results require the number of covariates to go to infinity, while we suppose the number of covariates is fixed. 
This difference arises because they explicitly model the covariate selection process. We instead take the covariates as given and impose assumptions directly on these covariates, rather than deriving such assumptions from a model of covariate selection. Our results also allow researchers to be fully agnostic about the relationship between the observed and unobserved covariates.\n\nThere are several other related papers on sensitivity analysis. The sensitivity parameters we use are defined based on the relative magnitude of different coefficients. That is similar to previous work by \\cite{Chalak2019}, who shows how to use relative magnitude constraints to assess sensitivity to omitted variables when a proxy for the omitted variable is observed. \\citet[section 7.3]{ZhangCinelliChenPearl2021} discuss a sensitivity analysis that uses constraints on the relative magnitude of coefficients in a setting with exogenous controls. Finally, note that the standard unconfoundedness assumption (for example, chapter 12 in \\citealt{ImbensRubin2015}) allows for endogenous controls. For this reason, several papers that assess sensitivity to unconfoundedness also allow for endogenous controls. This includes \\cite{Rosenbaum1995, Rosenbaum2002}, \\cite{MastenPoirier2018}, and \\cite{MastenPoirierZhang2020}. These methods, however, do not provide formal results for calibrating their sensitivity parameters based on comparing selection on observables with selection on unobservables. These methods also focus on binary or discrete treatments, whereas the analysis in our paper can be used for continuous treatments as well.\n\n\\subsubsection*{Notation Remark}\n\nFor a scalar random variable $A$ and a random column vector $B$, we define $A^{\\perp B} = A - [\\var(B)^{-1} \\cov(B,A)]' B$. This is the sum of the residual from a linear projection of $A$ onto $(1,B)$ and the intercept in that projection. 
Many of our equations therefore do not include intercepts because they are absorbed into $A^{\\perp B}$ by definition. Note also that $A^{\\perp B}$ is uncorrelated with each component of $B$, by definition. Let $R_{A \\sim B \\mathrel{\\mathsmaller{\\bullet}} C}^2$ denote the R-squared from a regression of $A^{\\perp C}$ on $(1,B^{\\perp C})$. This is sometimes called the partial R-squared.\n\n\\section{The Baseline Model}\\label{sec:model}\n\nLet $Y$ and $X$ be observed scalar variables. Let $W_1$ be a vector of observed covariates of dimension $d_1$. Let $W_2$ be an unobserved scalar covariate; we discuss vector $W_2$ in appendix \\ref{sec:vectorW2}. Let $W = (W_1,W_2)$. Consider the OLS estimand of $Y$ on $(1,X,W_1,W_2)$. Let $(\\beta_\\text{long}, \\gamma_1, \\gamma_2)$ denote the coefficients on $(X,W_1,W_2)$. The following assumption ensures these coefficients and other OLS estimands we consider are well defined.\n\n\\begin{Aassump}\\label{assump:posdefVar}\nThe variance matrix of $(Y,X,W_1,W_2)$ is finite and positive definite.\n\\end{Aassump}\n\nWe can write\n\\begin{equation}\\label{eq:outcome}\n\tY = \\beta_\\text{long} X + \\gamma_1' W_1 + \\gamma_2 W_2 + Y^{\\perp X,W}\n\\end{equation}\nwhere $Y^{\\perp X,W}$ is defined to be the OLS residual plus the intercept term, and hence is uncorrelated with each component of $(X,W)$ by construction. Suppose our parameter of interest is $\\beta_\\text{long}$. In section \\ref{sec:causalModels} we discuss three causal models that lead to this specific OLS estimand as the parameter of interest, using either unconfoundedness, difference-in-differences, or instrumental variables as an identification strategy. Alternatively, it may be that we are simply interested in $\\beta_\\text{long}$ as a descriptive statistic. The specific motivation for interest in $\\beta_\\text{long}$ does not affect our technical analysis. \n\nNext consider the OLS estimand of $X$ on $(1,W_1,W_2)$. 
Let $(\\pi_1,\\pi_2)$ denote the coefficients on $(W_1,W_2)$. Then we can write\n\\begin{equation}\\label{eq:XprojectionW1W2}\n\tX = \\pi_1' W_1 + \\pi_2 W_2 + X^{\\perp W}\n\\end{equation}\nwhere $X^{\\perp W}$ is defined to be the OLS residual plus the intercept term, and hence is uncorrelated with each component of $W$ by construction. Although equation \\eqref{eq:XprojectionW1W2} is not necessarily causal, we can think of the value of $\\pi_1$ as representing ``selection on observables'' while $\\pi_2$ represents ``selection on unobservables.'' The following is thus a natural baseline assumption.\n\n\\begin{Aassump}[No selection on unobservables]\\label{assump:baselineEndogControl1}\n\t$\\pi_2 = 0$.\n\\end{Aassump}\n\nLet $\\beta_\\text{med}$ denote the coefficient on $X$ in the OLS estimand of $Y$ on $(1,X,W_1)$. With no selection on unobservables, we have the following result.\n\n\\begin{theorem}\\label{thm:baselineEndogControl1}\nSuppose the joint distribution of $(Y,X,W_1)$ is known. Suppose A\\ref{assump:posdefVar} and A\\ref{assump:baselineEndogControl1} hold. Then the following hold.\n\\begin{enumerate}\n\\item $\\beta_\\text{long} = \\beta_\\text{med}$. Consequently, $\\beta_\\text{long}$ is point identified.\n\n\\item The identified set for $\\gamma_1$ is $\\ensuremath{\\mathbb{R}}^{d_1}$.\n\\end{enumerate}\n\\end{theorem}\n\nThis result allows for endogenous controls, in the sense that the observed covariates $W_1$ can be arbitrarily correlated with the unobserved covariate $W_2$. But it restricts the relationship between $(X,W_1,W_2)$ in such a way that we can still point identify $\\beta_\\text{long}$ even though $W_1$ and $W_2$ are arbitrarily correlated. The coefficient $\\gamma_1$ on the observed controls, however, is completely unidentified. 
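Theorem \\ref{thm:baselineEndogControl1} is easy to verify numerically. The sketch below (all covariance values and coefficients are invented for illustration) builds the population covariance matrix implied by the model with $\pi_2 = 0$ but strongly correlated $W_1$ and $W_2$, then checks that the feasible medium regression recovers $\beta_\text{long}$ while its coefficients on $W_1$ do not recover $\gamma_1$:

```python
import numpy as np

# Invented population: W = (W11, W12, W2) with correlated components, so
# the observed controls W1 = (W11, W12) are endogenous (cov(W1, W2) != 0).
var_W = np.array([[1.0, 0.3, 0.6],
                  [0.3, 1.0, 0.5],
                  [0.6, 0.5, 1.0]])
pi1 = np.array([0.8, -0.5])               # selection on observables
beta_long = 2.0                           # target coefficient on X
gamma = np.array([1.0, 1.0, 0.7])         # (gamma1, gamma2)
var_V = 1.0                               # var(X^{perp W})

# Covariance matrix of Z = (X, W11, W12, W2) implied by
# X = pi1'W1 + V with pi2 = 0 (assumption A2):
cov_XW = pi1 @ var_W[:2, :]
var_X = pi1 @ var_W[:2, :2] @ pi1 + var_V
Sigma = np.block([[np.array([[var_X]]), cov_XW[None, :]],
                  [cov_XW[:, None], var_W]])

# Y = beta_long*X + gamma'W + U, with U uncorrelated with (X, W):
cov_ZY = Sigma @ np.concatenate(([beta_long], gamma))

# Feasible medium regression of Y on (1, X, W1), dropping W2:
coef_med = np.linalg.solve(Sigma[:3, :3], cov_ZY[:3])
print(coef_med[0])    # recovers beta_long = 2 exactly (part 1)
print(coef_med[1:])   # differs from gamma1 = (1, 1) (part 2)
```

Because the check works directly with population covariance matrices, the equality $\beta_\text{med} = \beta_\text{long}$ holds exactly rather than only approximately.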
The difference between the roles of $X$ and $W_1$ in Theorem \\ref{thm:baselineEndogControl1} reflects the sentiment of \\cite{AngristPischke2017} that we discussed in the introduction.\n\nIn practice, we are often worried that the no selection on unobservables assumption A\\ref{assump:baselineEndogControl1} does not hold. In section \\ref{sec:MainNewAnalysis} we develop a new approach to assess the importance of this assumption. \n\n\\section{Sensitivity Analysis}\\label{sec:MainNewAnalysis}\n\nWe have argued that, in practice, most controls are endogenous in the sense that they are potentially correlated with the omitted variables of concern. Consequently, methods for assessing the sensitivity of identifying assumptions for the treatment variable of interest should allow for the controls to be endogenous to some extent. In this section, we develop such a method. In section \\ref{sec:sensitivityParameters} we first describe the three sensitivity parameters that we use; note that we do not use all of these parameters in all of our results. In section \\ref{sec:mainIdentification} we then state our main identification results. In section \\ref{sec:interpretation} we make several remarks regarding interpretation of the sensitivity parameters. Finally, in section \\ref{sec:estimationInference} we briefly discuss estimation and inference.\n\n\\subsection{The Sensitivity Parameters}\\label{sec:sensitivityParameters}\n\nRecall from section \\ref{sec:model} that our parameter of interest is $\\beta_\\text{long}$, the OLS coefficient on $X$ in the long regression of $Y$ on $(1,X,W_1,W_2)$. We cannot compute this regression from the data, since $W_2$ is not observed. Instead, we can compute $\\beta_\\text{med}$, the OLS coefficient on $X$ in the medium regression of $Y$ on $(1,X,W_1)$. The difference between these two regression coefficients is given by the classic omitted variable bias formula. 
We can write this formula as a function of the coefficient $\\gamma_2$ on $W_2$ in the long regression outcome equation \\eqref{eq:outcome} and the coefficient $\\pi_2$ on $W_2$ in the selection equation \\eqref{eq:XprojectionW1W2} as follows:\n\\begin{equation}\\label{eq:ourOVBformula}\n\t\\beta_{\\text{med}} - \\beta_\\text{long}\n\t=\n\t\\frac{\\gamma_2 \\pi_2 (1 - R^2_{W_2 \\sim W_1})}{\\pi_2^2 \\left(1 - R^2_{W_2 \\sim W_1}\\right) + \\var(X^{\\perp W})}\n\\end{equation}\nwhere $R_{W_2 \\sim W_1}^2$ denotes the population $R^2$ in a linear regression of the unobservable $W_2$ on the observables $(1,W_1)$. This bias is a function of the coefficient $\\pi_2$. Hence $\\pi_2$ is a natural sensitivity parameter. Using $\\pi_2$ as a sensitivity parameter, however, would require researchers to make judgment calls about the absolute magnitude of this coefficient. This may be difficult. So, similar to the definition of Oster's $\\delta$ (which we review in appendix \\ref{subsec:osterredef}), we instead define a \\emph{relative} sensitivity parameter. Specifically, let $\\| \\cdot \\|_{\\Sigma_{\\text{obs}}}$ denote the weighted Euclidean norm on $\\ensuremath{\\mathbb{R}}^{d_1}$: $\\|a\\|_{\\Sigma_{\\text{obs}}} = \\left(a'\\Sigma_{\\text{obs}}a\\right)^{1\/2}$ where $\\Sigma_{\\text{obs}} \\equiv \\var(W_1)$. 
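Since equation \\eqref{eq:ourOVBformula} is an algebraic identity in second moments, it holds exactly in any sample when every term is computed from the same empirical covariances. A quick numerical sketch (the data generating process below is invented for illustration):

```python
import numpy as np

# Simulated data; all coefficients are invented for illustration.
rng = np.random.default_rng(0)
n = 5_000
W1 = rng.normal(size=(n, 2))
W2 = 0.6 * W1[:, 0] - 0.3 * W1[:, 1] + rng.normal(size=n)
W2 = (W2 - W2.mean()) / W2.std()      # impose the normalization var(W2) = 1
X = W1[:, 0] + 0.5 * W1[:, 1] + 0.7 * W2 + rng.normal(size=n)
Y = 2.0 * X + W1[:, 0] - W1[:, 1] + 0.8 * W2 + rng.normal(size=n)

def ols(y, Z):
    """Slopes and residuals from an OLS regression of y on (1, Z)."""
    Z1 = np.column_stack([np.ones(len(y)), Z])
    c = np.linalg.lstsq(Z1, y, rcond=None)[0]
    return c[1:], y - Z1 @ c

long_coefs, _ = ols(Y, np.column_stack([X, W1, W2]))
beta_long, gamma2 = long_coefs[0], long_coefs[3]
(beta_med, *_), _ = ols(Y, np.column_stack([X, W1]))
sel_coefs, x_res = ols(X, np.column_stack([W1, W2]))
pi2 = sel_coefs[2]
R2 = 1 - ols(W2, W1)[1].var()         # R^2 of W2 on (1, W1); var(W2) = 1

bias = gamma2 * pi2 * (1 - R2) / (pi2**2 * (1 - R2) + x_res.var())
print(np.isclose(beta_med - beta_long, bias))   # True
```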
\n\nWe then consider the following assumption.\n\n\\begin{Aassump}\\label{assump:rx}\n$| \\pi_2 | \\leq \\bar{r}_X \\| \\pi_1 \\|_{\\Sigma_{\\text{obs}}}$ for a known value of $\\bar{r}_X \\geq 0$.\n\\end{Aassump}\n\nTo interpret assumption A\\ref{assump:rx}, we first normalize the variance of the unobserved $W_2$ to 1.\n\n\\begin{Aassump}\\label{assump:normalizeVarianceW2}\n$\\var(W_2) = 1$.\n\\end{Aassump}\n\nUsing this normalization A\\ref{assump:normalizeVarianceW2}, assumption A\\ref{assump:rx} can be written as\n\\[\n\t\\sqrt{ \\var(\\pi_2 W_2) }\n\t\\leq\n\t\\bar{r}_X \\cdot \\sqrt{\\var(\\pi_1' W_1)}.\n\\]\nSo assumption A\\ref{assump:rx} says that the association between treatment $X$ and a one standard deviation increase in the index of unobservables is at most $\\bar{r}_X$ times the association between treatment and a one standard deviation increase in the index of observables. Finally, note that $\\| \\pi_1 \\|_{\\Sigma_{\\text{obs}}}$ is invariant to linear transformations of $W_1$, including rescalings, since the index $\\pi_1'W_1$ is invariant with respect to linear transformations. This invariance ensures that $\\bar{r}_X$ is a unit-free sensitivity parameter.\n\nThe baseline model of section \\ref{sec:model} corresponds to the case $\\bar{r}_X = 0$, since it implies $\\pi_2 = 0$. We relax the baseline model by considering values $\\bar{r}_X > 0$. Our first main result (Theorem \\ref{cor:IdsetRyANDcFree}) describes the identified set using only A\\ref{assump:rx}. Researchers may, however, be willing to impose additional restrictions, so we also consider two additional sensitivity parameters. These parameters are also motivated by the omitted variable bias formula in equation \\eqref{eq:ourOVBformula}. The bias is a function of $\\gamma_2$, so it is natural to also consider assumptions that restrict this parameter. 
Like our assumption on $\\pi_2$, we consider a restriction on the relative magnitudes of $\\gamma_1$ and $\\gamma_2$, the coefficients of $W_1$ and $W_2$ in the outcome equation \\eqref{eq:outcome}.\n\n\\begin{Aassump}\\label{assump:ry}\n$| \\gamma_2 | \\leq \\bar{r}_Y \\| \\gamma_1 \\|_{\\Sigma_{\\text{obs}}}$ for a known value of $\\bar{r}_Y \\geq 0$.\n\\end{Aassump}\n\nMaintaining the normalization A\\ref{assump:normalizeVarianceW2}, A\\ref{assump:ry} has a similar interpretation to A\\ref{assump:rx}: It says that the association between the outcome and a one standard deviation increase in the index of unobservables is at most $\\bar{r}_Y$ times the association between the outcome and a one standard deviation increase in the index of observables. $\\| \\gamma_1 \\|_{\\Sigma_{\\text{obs}}}$ is also invariant to linear transformations of $W_1$ and hence $\\bar{r}_Y$ is likewise a unit-free sensitivity parameter.\n\nFinally, the omitted variable bias in equation \\eqref{eq:ourOVBformula} is a function of $R_{W_2 \\sim W_1}^2$. So we also consider restrictions directly on the relationship between the observed and unobserved covariates.\n\n\\begin{Aassump}\\label{assump:corr}\n$R_{W_2 \\sim W_1} \\leq \\bar{c}$ for a known value of $\\bar{c} \\in [0,1]$.\n\\end{Aassump}\n\nAssumption A\\ref{assump:corr} allows researchers to constrain the magnitude of control endogeneity. In particular, the exogenous controls assumption is equivalent to $R_{W_2 \\sim W_1}^2 = 0$ and hence can be obtained by setting $\\bar{c} = 0$. Values $\\bar{c} > 0$ allow for partially endogenous controls. Note that $R_{W_2 \\sim W_1}^2$ is invariant to linear transformations of $W_1$ as well as to the normalization on $W_2$. Finally, it will sometimes be useful to note that $R_{W_2 \\sim W_1}^2 = \\| \\cov(W_1,W_2) \\|_{\\Sigma_{\\text{obs}}^{-1}}^2$.\n\n\\subsection{Identification}\\label{sec:mainIdentification}\n\nIn this section we state our main results. 
For simplicity, we first normalize the treatment variable so that $\\var(X) = 1$ and the covariates so that $\\var(W_1) = I$. All of the results below can be rewritten without these normalizations, at the cost of additional notation. With these normalizations, $\\| \\cdot \\|_{\\Sigma_{\\text{obs}}^{-1}} =\\| \\cdot \\|_{\\Sigma_{\\text{obs}}}$. We use $\\| \\cdot \\|$ to refer to this norm throughout.\n\n\\subsubsection*{Identification Using Restrictions on $\\bar{r}_X$ Only}\n\nLet $\\mathcal{B}_I(\\bar{r}_X)$ denote the identified set for $\\beta_\\text{long}$ under the positive definite variance assumption A\\ref{assump:posdefVar}, the normalization assumption A\\ref{assump:normalizeVarianceW2}, and the restriction A\\ref{assump:rx} on $\\pi_2$. In particular, this identified set does not impose the restriction A\\ref{assump:ry} on $\\gamma_2$ or the restriction A\\ref{assump:corr} on $R_{W_2 \\sim W_1}^2$.\nLet\n\\[\n\t\\underline{B}(\\bar{r}_X) = \\inf \\mathcal{B}_I(\\bar{r}_X)\n\t\\qquad \\text{and} \\qquad\n\t\\overline{B}(\\bar{r}_X) = \\sup \\mathcal{B}_I(\\bar{r}_X)\n\\]\ndenote its smallest and largest values. Our first main result, Theorem \\ref{cor:IdsetRyANDcFree} below, provides simple, closed form expressions for these sharp bounds. Similarly, let $\\mathcal{B}_I(\\bar{r}_X,\\bar{c})$ denote the identified set for $\\beta_\\text{long}$ if we also impose A\\ref{assump:corr}. Let\n\\[\n\t\\underline{B}(\\bar{r}_X, \\bar{c}) = \\inf \\mathcal{B}_I(\\bar{r}_X, \\bar{c})\n\t\\qquad \\text{and} \\qquad\n\t\\overline{B}(\\bar{r}_X, \\bar{c}) = \\sup \\mathcal{B}_I(\\bar{r}_X, \\bar{c})\n\\]\ndenote its smallest and largest values. Our second main result, Theorem \\ref{thm:IdsetRyFree} below, similarly provides simple, closed form expressions for these sharp bounds. \n\nTo state these results, we first define some additional notation. 
For any random vectors $A$ and $B$, let $\\sigma_{A,B} = \\cov(A,B)$, a matrix of dimension $\\text{dim}(A) \\times \\text{dim}(B)$. Let\n\\begin{align*}\n \t\\bar{z}_X(\\bar{r}_X) \n\t&= \n\t\\begin{cases}\n \t\\dfrac{\\bar{r}_X \\|\\sigma_{W_1,X}\\|}{\\sqrt{1 - \\bar{r}_X^2}} & \\text{ if } \\bar{r}_X < 1 \\\\\n \t+\\infty & \\text{ if } \\bar{r}_X \\geq 1.\n\t\\end{cases}\n\\end{align*}\nThe sensitivity parameter $\\bar{r}_X$ will affect the bounds only via this function. Note that this function also depends on the data, via the covariance between treatment $X$ and the covariates $W_1$. The bounds also depend on the data via three constants:\n\\begin{align*}\n k_0 &= 1 - \\|\\sigma_{W_1, X}\\|^2 = \\var(X^{\\perp W_1}) > 0\\\\\n k_1 &= \\sigma_{X,Y} - \\sigma_{W_1, X}'\\sigma_{W_1, Y} = \\cov(Y,X^{\\perp W_1})\\\\\n k_2 &= \\sigma_Y^2 - \\|\\sigma_{W_1, Y}\\|^2 = \\var(Y^{\\perp W_1}) > 0.\n\\end{align*}\nThe inequalities here follow from A\\ref{assump:posdefVar}, which implies positive definiteness of $\\var(Y,X,W_1)$. Note that the coefficient on $X$ in the medium OLS regression of $Y$ on $(1,X,W_1)$ can be written as $\\beta_{\\text{med}} = k_1\\/k_0$ by the FWL theorem. \n\nWe also use the following assumption.\n\n\\begin{Aassump}\\label{assump:noKnifeEdge}\n$\\sigma_{W_1,Y} \\ne \\sigma_{X,Y}\\sigma_{W_1,X}$ and $\\sigma_{W_1,X} \\neq 0$.\n\\end{Aassump}\n\nThis assumption is not necessary, but it simplifies the proofs. A sufficient condition for A\\ref{assump:noKnifeEdge} is $\\beta_\\text{short} \\neq \\beta_{\\text{med}}$, where $\\beta_\\text{short}$ is the coefficient on $X$ in the short OLS regression of $Y$ on $(1,X)$. This follows from $\\beta_{\\text{med}} - \\beta_\\text{short} = -\\sigma_{W_1,X}'(\\sigma_{W_1,Y} - \\sigma_{X,Y}\\sigma_{W_1,X})\\/k_0$. We can now state our first main result.\n\n\\begin{theorem}\\label{cor:IdsetRyANDcFree}\nSuppose the joint distribution of $(Y,X,W_1)$ is known. 
Suppose A\\ref{assump:posdefVar}, A\\ref{assump:rx}, A\\ref{assump:normalizeVarianceW2}, and A\\ref{assump:noKnifeEdge} hold. Normalize $\\var(X) = 1$ and $\\var(W_1) = I$. If $\\bar{r}_X^2 < k_0$, then\n\\[\n\t\\underline{B}(\\bar{r}_X) = \\beta_{\\text{med}} - \\text{dev}(\\bar{r}_X) \\qquad \\text{and} \\qquad\n \\overline{B}(\\bar{r}_X) = \\beta_{\\text{med}} + \\text{dev}(\\bar{r}_X)\n\\]\nwhere\n\\[\n\t\\text{dev}(\\bar{r}_X) = \\bar{r}_X \\|\\cov(X,W_1)\\| \\sqrt{\\frac{k_2(1 - R^2_{Y \\sim X \\mathrel{\\mathsmaller{\\bullet}} W_1})}{k_0(k_0 - \\bar{r}_X^2)}}.\n\\]\nOtherwise, $\\underline{B}(\\bar{r}_X) = -\\infty$ and $\\overline{B}(\\bar{r}_X) = +\\infty$.\n\\end{theorem}\n\nTheorem \\ref{cor:IdsetRyANDcFree} characterizes the largest and smallest possible values of $\\beta_\\text{long}$ when some selection on unobservables is allowed, the observed covariates are allowed to be arbitrarily correlated with the unobserved covariate, and we make no restrictions on the coefficients in the outcome equation. In fact, with the exception of at most three extra elements, the interval $[\\underline{B}(\\bar{r}_X), \\overline{B}(\\bar{r}_X)]$ is the identified set for $\\beta_\\text{long}$ under these assumptions. Here we focus on the smallest and largest elements to avoid technical digressions that are unimportant for applications.\n\nThere are two important features of Theorem \\ref{cor:IdsetRyANDcFree}: First, it only requires researchers to reason about \\emph{one} sensitivity parameter, unlike some existing approaches, including \\cite{Oster2019} and \\cite{CinelliHazlett2020}. Second, and also unlike those results, it allows for arbitrarily endogenous controls. 
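Because the bounds of Theorem \\ref{cor:IdsetRyANDcFree} are in closed form, they can be computed from a handful of sample moments. A minimal sketch on simulated data (invented coefficients; the variables are first normalized as in the theorem):

```python
import numpy as np

# Simulated data; all coefficients are invented for illustration.
rng = np.random.default_rng(1)
n = 10_000
W1 = rng.normal(size=(n, 3))
X = W1 @ np.array([0.8, -0.4, 0.2]) + rng.normal(size=n)
Y = 1.5 * X + W1 @ np.array([1.0, 0.5, -0.5]) + rng.normal(size=n)

# Impose the normalizations var(X) = 1 and var(W1) = I (sample analogs).
X = (X - X.mean()) / X.std()
W1 = W1 - W1.mean(axis=0)
W1 = W1 @ np.linalg.inv(np.linalg.cholesky(W1.T @ W1 / n)).T
Yc = Y - Y.mean()

s = W1.T @ X / n                        # cov(W1, X)
wy = W1.T @ Yc / n                      # cov(W1, Y)
k0 = 1 - s @ s                          # var(X^{perp W1})
k1 = X @ Yc / n - s @ wy                # cov(Y, X^{perp W1})
k2 = Yc @ Yc / n - wy @ wy              # var(Y^{perp W1})
beta_med = k1 / k0
R2 = k1**2 / (k0 * k2)                  # R^2 of Y ~ X after partialling W1

def dev(r_x):
    """Half-length of the bounds for sensitivity value r_x."""
    if r_x**2 >= k0:
        return np.inf
    return r_x * np.sqrt(s @ s) * np.sqrt(k2 * (1 - R2) / (k0 * (k0 - r_x**2)))

for r in (0.0, 0.1, 0.3):
    print(r, beta_med - dev(r), beta_med + dev(r))
```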
So this result allows researchers to examine the impact of selection on unobservables on their baseline results without also having to reason about the magnitude of endogenous controls.\n\nSince Theorem \\ref{cor:IdsetRyANDcFree} provides explicit expressions for the bounds, we can immediately derive a few of their properties. First, when $\\bar{r}_X = 0$, the bounds collapse to $\\beta_\\text{med}$, the point estimand from the baseline model with no selection on unobservables. So we recover the baseline model as a special case. For small values of $\\bar{r}_X > 0$, the bounds are no longer a singleton, but their length increases continuously as $\\bar{r}_X$ increases away from zero. The rate at which they increase depends on just a few features of the data: The relationship between treatment and the observed covariates, $\\cov(X,W_1)$, the variance in outcomes after adjusting for the covariates, $\\var(Y^{\\perp W_1})$, the variance in treatments after adjusting for the covariates, $\\var(X^{\\perp W_1})$, and the R-squared from the regression of $Y^{\\perp W_1}$ on a constant and $X^{\\perp W_1}$. We also see that the bounds are symmetric around $\\beta_\\text{med}$. Finally, the bounds can only be finite if $\\bar{r}_X < 1$. We discuss interpretation of the magnitude of $\\bar{r}_X$ in detail in section \\ref{sec:interpretation}.\n\nIn practice, empirical researchers often do breakdown analysis. In this context, they ask:\n\\begin{quote}\nHow strong does selection on unobservables have to be relative to selection on observables in order to overturn our baseline findings?\n\\end{quote}\nWe can use Theorem \\ref{cor:IdsetRyANDcFree} to answer this question. Suppose in the baseline model we find $\\beta_\\text{med} \\geq 0$. We are concerned, however, that $\\beta_\\text{long} \\leq 0$, in which case our positive finding is driven solely by selection on unobservables. 
Define\n\\[\n\t\\bar{r}_X^\\text{bp} \n\t= \\inf \\{ \\bar{r}_X \\geq 0 : b \\in \\mathcal{B}_I(\\bar{r}_X) \\text{ for some $b \\leq 0$} \\}.\n\\]\nThis value is called a \\emph{breakdown point}. It is the largest amount of selection on unobservables we can allow for while still concluding that $\\beta_\\text{long}$ is nonnegative. Note that the breakdown point when $\\beta_\\text{med} \\leq 0$ can be defined analogously.\n\n\\begin{corollary}\\label{corr:breakdownPointRXonly}\nSuppose the assumptions of Theorem \\ref{cor:IdsetRyANDcFree} hold. Then\n\\[\n\t\\bar{r}_X^\\text{bp} = \\left(\\frac{\\beta_\\text{med}^2\\var(X^{\\perp W_1})^2}{\\|\\cov(X,W_1)\\|^2 \\var(Y^{\\perp W_1}) + \\beta_\\text{med}^2 \\var(X^{\\perp W_1})^2}\\right)^{1\/2}.\n\\]\n\\end{corollary}\n\nThe breakdown point described in Corollary \\ref{corr:breakdownPointRXonly} characterizes the magnitude of selection on unobservables relative to selection on observables needed to overturn one's baseline findings. One of our main recommendations is that researchers present estimates of this point as a scalar measure of the robustness of their results. We illustrate this recommendation in our empirical application in section \\ref{sec:empirical}.\n\n\\subsubsection*{Identification Using Restrictions on $\\bar{r}_X$ and $\\bar{c}$}\n\nIn some applications, the bounds in Theorem \\ref{cor:IdsetRyANDcFree} may be quite large, even for small values of $\\bar{r}_X$. In this case, researchers may be willing to restrict the relationship between the observed covariates and the omitted variable. So next we present a similar result, but now imposing A\\ref{assump:corr}. 
Generalize the $\\bar{z}_X(\\cdot)$ function as follows:\n\\begin{align*}\n \t\\bar{z}_X(\\bar{r}_X,\\bar{c}) &= \\begin{cases}\n \t\\dfrac{\\bar{r}_X \\|\\sigma_{W_1,X}\\| \\sqrt{1 - \\min\\{\\bar{c}, \\bar{r}_X\\}^2}}{1 - \\bar{r}_X \\min\\{\\bar{c}, \\bar{r}_X\\}} & \\text{ if } \\bar{r}_X\\bar{c} < 1 \\\\\n \t+\\infty & \\text{ if } \\bar{r}_X\\bar{c} \\ge 1.\n\t\\end{cases}\n\\end{align*}\nNote that for $\\bar{c} = 1$, $\\bar{z}_X(\\bar{r}_X,1) = \\bar{z}_X(\\bar{r}_X)$ for all $\\bar{r}_X \\geq 0$. Also, $\\bar{z}_X(\\bar{r}_X,\\bar{c}) = \\bar{z}_X(\\bar{r}_X)$ when $\\bar{r}_X \\leq \\bar{c}$. As before, the sensitivity parameters will only affect the bounds via this function.\n\n\\begin{theorem}\\label{thm:IdsetRyFree}\nSuppose the joint distribution of $(Y,X,W_1)$ is known. Suppose A\\ref{assump:posdefVar}, A\\ref{assump:rx}, A\\ref{assump:normalizeVarianceW2}, A\\ref{assump:corr}, and A\\ref{assump:noKnifeEdge} hold. Normalize $\\var(X) = 1$ and $\\var(W_1) = I$. If $\\bar{z}_X(\\bar{r}_X, \\bar{c})^2 < k_0$, then\n\\[\n\t\\underline{B}(\\bar{r}_X, \\bar{c}) = \\beta_{\\text{med}} - \\text{dev}(\\bar{r}_X, \\bar{c})\n\t\\qquad \\text{and} \\qquad\n\t\\overline{B}(\\bar{r}_X, \\bar{c}) = \\beta_{\\text{med}} + \\text{dev}(\\bar{r}_X,\\bar{c})\t\n\\]\nwhere\n\\[\n\t\\text{dev}(\\bar{r}_X, \\bar{c}) = \\sqrt{\\frac{\\bar{z}_X(\\bar{r}_X,\\bar{c})^2\\left(\\frac{k_2}{k_0} - \\beta_{\\text{med}}^2\\right)}{k_0 - \\bar{z}_X(\\bar{r}_X, \\bar{c})^2}}.\n\\]\nOtherwise, $\\underline{B}(\\bar{r}_X, \\bar{c}) = -\\infty$ and $\\overline{B}(\\bar{r}_X, \\bar{c}) = +\\infty$.\n\\end{theorem}\n\nThe interpretation of Theorem \\ref{thm:IdsetRyFree} is similar to that of our earlier result, Theorem \\ref{cor:IdsetRyANDcFree}. It characterizes the largest and smallest possible values of $\\beta_\\text{long}$ when some selection on unobservables is allowed and the controls are allowed to be partially but not arbitrarily endogenous. 
We also make no restrictions on the coefficients in the outcome equation. As before, with the exception of at most three extra elements, the interval $[\\underline{B}(\\bar{r}_X, \\bar{c}), \\overline{B}(\\bar{r}_X, \\bar{c})]$ is the identified set for $\\beta_\\text{long}$ under these assumptions.\n\nEarlier we saw that $\\bar{r}_X < 1$ is necessary for the bounds of Theorem \\ref{cor:IdsetRyANDcFree} to be finite. Theorem \\ref{thm:IdsetRyFree} shows that, if we are willing to restrict the value of $\\bar{c}$, then we can allow for $\\bar{r}_X > 1$ while still obtaining finite bounds. Thus there is a trade-off between the magnitude of selection on unobservables we can allow for and the magnitude of control endogeneity. One way to summarize this trade-off is to use \\emph{breakdown frontiers} (\\citealt{MastenPoirier2019BF}). Specifically, when $\\beta_\\text{med} \\geq 0$, define\n\\[\n\t\\bar{r}_X^\\text{bf}(\\bar{c}) \n\t= \\inf \\{ \\bar{r}_X \\geq 0 : b \\in \\mathcal{B}_I(\\bar{r}_X, \\bar{c}) \\text{ for some $b \\leq 0$} \\}.\n\\]\nFor any fixed $\\bar{c}$, $\\bar{r}_X^\\text{bf}(\\bar{c})$ is a breakdown point: It is the largest magnitude of selection on unobservables relative to selection on observables that we can allow for while still concluding that our parameter of interest is nonnegative. As we vary $\\bar{c}$, this breakdown point changes: It increases as $\\bar{c}$ gets smaller, because we can allow for more selection on unobservables if we impose stronger restrictions on exogeneity of the observed covariates. Conversely, it decreases as $\\bar{c}$ gets larger, because we can allow for less selection on unobservables if we allow for more endogeneity of the observed covariates. In particular, $\\bar{r}_X^\\text{bf}(1) = \\bar{r}_X^\\text{bp}$, the breakdown point of Corollary \\ref{corr:breakdownPointRXonly}. 
As with that corollary, we can derive an analytical characterization of the function $\\bar{r}_X^\\text{bf}(\\cdot)$, but we omit this for brevity.\n\n\\subsubsection*{Identification Using Restrictions on $\\bar{r}_X$, $\\bar{c}$, and $\\bar{r}_Y$}\n\nFinally, in some empirical settings the results may not be robust even if we impose exogenous controls ($\\bar{c} = 0$). In these cases, we might be willing to restrict the impact of unobservables on outcomes; that is, we may be willing to impose A\\ref{assump:ry}. Let $\\mathcal{B}_I(\\bar{r}_X,\\bar{r}_Y,\\bar{c})$ denote the identified set for $\\beta_\\text{long}$ under A\\ref{assump:posdefVar} and A\\ref{assump:rx}--A\\ref{assump:corr}. Unlike the two identified sets we considered above, this set is less tractable. We provide a precise characterization in appendix \\ref{sec:generalIdentSet}. Here we instead use our characterization to show how to do breakdown analysis.\n\nSuppose we are interested in the robustness of the conclusion that $\\beta_\\text{long} \\geq \\underline{b}$ for some known scalar $\\underline{b}$. For example, $\\underline{b} = 0$. Define the function\n\\[\n\t\\bar{r}_Y^\\text{bf}(\\bar{r}_X, \\bar{c}, \\underline{b}) \n\t= \\inf \\{ \\bar{r}_Y \\geq 0 : b \\in \\mathcal{B}_I(\\bar{r}_X, \\bar{r}_Y, \\bar{c}) \\text{ for some $b \\leq \\underline{b}$} \\}.\n\\]\nThis is a three-dimensional breakdown frontier. In particular, we can use it to define the set\n\\[\n\t\\text{RR} = \\{ (\\bar{r}_X, \\bar{r}_Y, \\bar{c}) : \\bar{r}_Y \\leq \\bar{r}_Y^\\text{bf}(\\bar{r}_X, \\bar{c}, \\underline{b}) \\}.\n\\]\n\\cite{MastenPoirier2019BF} call this the \\emph{robust region} because the conclusion of interest, $\\beta_\\text{long} \\geq \\underline{b}$, holds for any combination of sensitivity parameters in this region. 
The size of this region is therefore a measure of the robustness of our baseline conclusion.\n\nAlthough we do not have a closed form expression for the smallest and largest elements of $\\mathcal{B}_I(\\bar{r}_X, \\bar{r}_Y, \\bar{c})$, our next main result shows that we can still easily compute the breakdown frontier numerically. To state the result, define the sets\n\\begin{align*}\n\t\\mathcal{D} &= \\mathbb{R} \\times \\{c \\in \\mathbb{R}^{d_1}: \\|c\\| < 1\\} \\times \\mathbb{R} \\\\\n\t\\mathcal{D}^0 &= \\{(z,c,b) \\in \\mathcal{D}: z\\sqrt{1 - \\|c\\|^2}(\\sigma_{W_1,Y} - b\\sigma_{W_1,X}) - (k_1 - k_0b)c \\ne 0 \\}\n\\end{align*} \nand define the functions\n\\begin{align*}\n \\text{devsq}(z) &= \\frac{z^2(k_2\\/k_0 - \\beta_\\text{med}^2)}{k_0 - z^2} \\\\\n \\underline{r}_Y(z,c,b) &= \\begin{cases}\n 0 & \\text{ if } b = \\beta_{\\text{med}} \\\\ \n \\dfrac{|k_1 - k_0b|}{\\|z\\sqrt{1 - \\|c\\|^2}(\\sigma_{W_1,Y} - b\\sigma_{W_1,X}) - (k_1 - k_0b)c\\|} & \\text{ if } (z,c,b) \\in \\mathcal{D}^0 \\text{ and } b \\neq \\beta_{\\text{med}} \\\\ \n +\\infty & \\text{ otherwise} \n \\end{cases} \\\\\n p(z,c;\\bar{r}_X) &= \\bar{r}^2_X\\|\\sigma_{W_1,X}\\sqrt{1 - \\|c\\|^2} - cz\\|^2 - z^2.\n\\end{align*}\n\nWe can now state our last main result.\n\n\\begin{theorem}\\label{cor:BFCalculation3D}\nSuppose the joint distribution of $(Y,X,W_1)$ is known. Suppose A\\ref{assump:posdefVar} and A\\ref{assump:rx}--A\\ref{assump:noKnifeEdge} hold. Normalize $\\var(X) = 1$ and $\\var(W_1) = I$. 
\n\\begin{enumerate}\n\\item If $\\underline{b} \\ge \\beta_{\\text{med}}$ then $\\bar{r}_Y^\\text{bf}(\\bar{r}_{X}, \\bar{c}, \\underline{b}) = 0$.\n\n\\item If $\\underline{B}(\\bar{r}_X,\\bar{c}) > \\underline{b}$, then $\\bar{r}_Y^\\text{bf}(\\bar{r}_{X}, \\bar{c}, \\underline{b}) = +\\infty$.\n \n\\item If $\\underline{B}(\\bar{r}_X,\\bar{c}) \\le \\underline{b} < \\beta_\\text{med}$, then\n\\begin{align*}\n \\bar{r}^\\text{bf}_Y(\\bar{r}_X,\\bar{c}, \\underline{b}) = \\min_{(z,c_1,c_2,b) \\in (-\\sqrt{k_0},\\sqrt{k_0}) \\times \\ensuremath{\\mathbb{R}} \\times \\ensuremath{\\mathbb{R}} \\times (-\\infty, \\underline{b}]}& \\ \\ \\underline{r}_Y(z,c_1\\sigma_{W_1,Y} + c_2 \\sigma_{W_1,X},b) \\\\\n \\text{subject to }& \\ \\ p(z, c_1\\sigma_{W_1,Y} + c_2 \\sigma_{W_1,X}; \\bar{r}_X) \\ge 0 \\\\\n &\\ \\ (b - \\beta_{\\text{med}})^2 < \\text{devsq}(z) \\\\\n &\\ \\ \\|c_1\\sigma_{W_1,Y} + c_2 \\sigma_{W_1,X}\\| \\leq \\bar{c}.\n\\end{align*}\n\\end{enumerate}\n\\end{theorem}\n\nTheorem \\ref{cor:BFCalculation3D} shows that the three dimensional breakdown frontier can be computed as the solution to a smooth optimization problem. Importantly, this problem only requires searching over a 4-dimensional space. In particular, this dimension does not depend on the dimension of the covariates $W_1$. Consequently, it remains computationally feasible even with a large number of observed covariates, as is often the case in empirical practice. For example, the results for our empirical application take about 15 seconds to compute.\n\n\\subsection{Interpreting the Sensitivity Parameters}\\label{sec:interpretation}\n\nThus far we have introduced the sensitivity parameters (section \\ref{sec:sensitivityParameters}) and described their implications for identification (section \\ref{sec:mainIdentification}). 
Next we make several remarks regarding how to interpret the magnitudes of these parameters.\n\n\\subsubsection*{Which Covariates to Calibrate Against?}\n\nAs we discuss below, one of the main benefits of using \\emph{relative} sensitivity parameters like $\\bar{r}_X$ is that $\\bar{r}_X = 1$ is a natural reference point of ``equal selection.'' However, the interpretation of this reference point depends on the choice of covariates that we calibrate against. Put differently, when we say that we compare ``selection on unobservables to selection on observables,'' \\emph{which} observables do we mean?\n\nTo answer this, we split the observed covariates into two groups: (1) The control covariates, which we label $W_0$, and (2) The calibration covariates, which we continue to label $W_1$. Write equation \\eqref{eq:outcome} as\n\\[\n\tY = \\beta_\\text{long} X + \\gamma_0' W_0 + \\gamma_1' W_1 + \\gamma_2 W_2 + Y^{\\perp X,W}\n\t\\tag{\\ref{eq:outcome}$^\\prime$}\n\\]\nwhere $W = (W_0,W_1,W_2)$. Likewise, write equation \\eqref{eq:XprojectionW1W2} as\n\\[\n\tX = \\pi_0' W_0 + \\pi_1' W_1 + \\pi_2 W_2 + X^{\\perp W}.\n\t\\tag{\\ref{eq:XprojectionW1W2}$^\\prime$}\n\\]\nThe key difference from our earlier analysis is that, like in assumption A\\ref{assump:rx}, we will continue to only compare $\\pi_1$ with $\\pi_2$. That is, we only compare the omitted variable to the observed variables in $W_1$; we do not use $W_0$ for calibration. A similar remark applies to A\\ref{assump:ry}.\n\nThis distinction between control and calibration covariates is useful because in many applications we do not necessarily think the omitted variables have similar explanatory power as \\emph{all} of the observed covariates included in the model. For example, in our empirical application in section \\ref{sec:empirical}, we include state fixed effects as control covariates, but we do not use them for calibration. 
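As a quick numerical check of this split (simulated data with invented coefficients): residualizing $X$ and $W_1$ on $(1, W_0)$ before regressing leaves the coefficients on $(X, W_1)$ unchanged, so the control covariates can be partialled out once and then set aside:

```python
import numpy as np

# Simulated data; all coefficients are invented for illustration.
rng = np.random.default_rng(2)
n = 2_000
W0 = rng.normal(size=(n, 3))                     # control covariates
W1 = 0.5 * W0[:, :2] + rng.normal(size=(n, 2))   # calibration covariates
X = W1 @ np.array([1.0, -0.5]) + 0.2 * W0.sum(axis=1) + rng.normal(size=n)
Y = (2.0 * X + W1 @ np.array([1.0, 1.0])
     + W0 @ np.array([0.3, -0.3, 0.1]) + rng.normal(size=n))

def ols(y, Z):
    """Slopes and residuals from an OLS regression of y on (1, Z)."""
    Z1 = np.column_stack([np.ones(len(y)), Z])
    c = np.linalg.lstsq(Z1, y, rcond=None)[0]
    return c[1:], y - Z1 @ c

# Full regression of Y on (1, X, W1, W0): coefficients on (X, W1).
full, _ = ols(Y, np.column_stack([X, W1, W0]))

# Residualize X and W1 on (1, W0), then regress Y on the residuals only.
x_r = ols(X, W0)[1]
W1_r = np.column_stack([ols(W1[:, j], W0)[1] for j in range(W1.shape[1])])
fwl, _ = ols(Y, np.column_stack([x_r, W1_r]))

print(np.allclose(full[:3], fwl))   # True: coefficients on (X, W1) agree
```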
\n\nWe next briefly describe how to generalize our results in section \\ref{sec:mainIdentification} to account for this distinction. By the FWL theorem, a linear projection of $Y$ onto $(1,X^{\\perp W_0}, W_1^{\\perp W_0}, W_2^{\\perp W_0})$ has the same coefficients as equation \\eqref{eq:outcome}. Likewise for a linear projection of $X$ onto $(1,W_1^{\\perp W_0}, W_2^{\\perp W_0})$. Hence we can write\n\\begin{align*}\n\tY &= \\beta_\\text{long} X^{\\perp W_0} + \\gamma_1'W_1^{\\perp W_0} + \\gamma_2 W_2^{\\perp W_0} + \\widetilde{U}\\\\\n\tX &= \\pi_1' W_1^{\\perp W_0} + \\pi_2 W_2^{\\perp W_0} + \\widetilde{V}\n\\end{align*}\nwhere \n\\begin{align*}\n\t\\widetilde{U} &= Y^{\\perp X^{\\perp W_0}, W_1^{\\perp W_0}, W_2^{\\perp W_0}} \\qquad \\text{ and} \\qquad\t\\widetilde{V} = X^{\\perp W_1^{\\perp W_0}, W_2^{\\perp W_0}}.\n\\end{align*}\nBy construction, $\\widetilde{U}$ has zero covariance with $(X^{\\perp W_0},W_1^{\\perp W_0},W_2^{\\perp W_0})$ and $\\widetilde{V}$ has zero covariance with $(W_1^{\\perp W_0}, \\allowbreak W_2^{\\perp W_0})$. Therefore our earlier results continue to hold when $(X,W_1,W_2)$ are replaced with $(X^{\\perp W_0}, \\allowbreak W_1^{\\perp W_0}, \\allowbreak W_2^{\\perp W_0})$. Finally, this change implies that $\\bar{c}$ should be interpreted as an upper bound on $R_{W_2 \\sim W_1 \\mathrel{\\mathsmaller{\\bullet}} W_0}$, the square root of the R-squared from the regression of $W_2$ on $W_1$ after partialling out $W_0$.\n\n\\subsubsection*{What is a Robust Result?}\n\nHow should researchers determine what values of $\\bar{r}_X$ and $\\bar{r}_Y$ are large and what values are small? Like in \\cite{AltonjiElderTaber2005} and \\cite{Oster2019}, these are \\emph{relative} sensitivity parameters. Consequently, the values $\\bar{r}_X = 1$ and $\\bar{r}_Y = 1$ are natural reference points. 
Specifically, when $\\bar{r}_X < 1$, the magnitude of the coefficient on the unobservable $W_2$ in the selection equation \\eqref{eq:XprojectionW1W2} is smaller than the norm of the coefficients on the observable controls $W_1$ in that same equation. This is one way to formalize the idea that ``selection on unobservables is smaller than selection on observables.'' Likewise, when $\\bar{r}_X > 1$, we can think of this as formalizing the idea that ``selection on unobservables is larger than selection on observables.'' A similar interpretation applies to $\\bar{r}_Y$. These interpretations do \\emph{not}, however, imply that the value 1 should be thought of as a universal, context-independent cutoff between ``small'' and ``large'' values of these two sensitivity parameters.\n\nWhy? As we described above, researchers must choose which of their observed covariates should be used to calibrate against. Consequently, the choice of $W_0$ (and hence $W_1$) affects the interpretation of the magnitude of $\\bar{r}_X$. One way in which this choice manifests itself is via its impact on the breakdown point: Including more relevant variables in $W_1$ will tend to shrink the breakdown point $\\bar{r}_X^\\text{bp}$, because the explanatory power of the observables we're calibrating against increases when we move variables from $W_0$ to $W_1$. This observation does \\emph{not} necessarily imply that the result is becoming less robust, but rather that the standard by which we are measuring sensitivity is changing. If the calibration variables $W_1$ have a large amount of explanatory power, then even an apparently small value of $\\bar{r}_X^\\text{bp}$ like 0.3 could be considered robust. 
Conversely, when the calibration variables $W_1$ do not have much explanatory power, then even an apparently large value of $\\bar{r}_X^\\text{bp}$ like 3 could be considered sensitive.\\n\\nThis discussion can be summarized by the following relationship:\\n\\begin{equation}\\label{eq:pseudoEq}\\n\\t\\text{Selection on Unobservables} = r \\cdot (\\text{Selection on Observables}).\\n\\end{equation}\\nThe left hand side is the absolute magnitude of selection on unobservables, while the right hand side is the proportion $r$ of the absolute magnitude of selection on observables. $\\bar{r}_X^\\text{bp}$ is a bound on $r$. Our discussion above merely states that even if the bound $\\bar{r}_X^\\text{bp}$ on $r$ seems small, the magnitude of selection on unobservables allowed for can be very large if the magnitude of selection on observables is large. And conversely, even if $\\bar{r}_X^\\text{bp}$ seems large, the amount of unobserved selection allowed for must be small if the magnitude of selection on observables is also small.\\n\\nOverall, the value of using relative sensitivity parameters like $\\bar{r}_X$ is not that they allow us to obtain a universal threshold for what is or is not a robust result. Instead, they give researchers a \\emph{unit free} measurement of sensitivity that is \\emph{interpretable} in terms of the effects of observed variables. Finally, note that the issues we've raised in this discussion apply equally to the existing methods in the literature, including \\cite{Oster2019}; they are not unique to our analysis.\\n\\n\\subsubsection*{Assessing Exogenous Controls}\\n\\nThus far we have discussed the interpretation of $\\bar{r}_X$ and $\\bar{r}_Y$. Next consider $\\bar{c}$. This is a constraint on the covariance between the observed calibration covariates and the unobserved covariate. In particular, recall that it is a bound on $R_{W_2 \\sim W_1 \\mathrel{\\mathsmaller{\\bullet}} W_0}$. 
So what values of this parameter should be considered large, and what values should be considered small? One way to calibrate this parameter is to compute\\n\\[\\n\\tc_k = R_{W_{1k} \\sim W_{1,-k} \\mathrel{\\mathsmaller{\\bullet}} W_0}\\n\\]\\nfor each covariate $k$ in $W_1$. That is, compute the square root of the population R-squared from the regression of $W_{1k}$ on the rest of the calibration covariates $W_{1,-k}$, after partialling out the control covariates $W_0$. These numbers tell us two things. First, if many of these values are nonzero and large, we may worry that the exogenous controls assumption fails. That is, if $W_2$ is in some way similar to the observed covariates $W_1$, then we might expect that $R_{W_2 \\sim W_1 \\mathrel{\\mathsmaller{\\bullet}} W_0}$ is similar to some of the $c_k$'s. So this gives us one method for assessing the plausibility of exogenous controls. Second, we can use the magnitudes of these values to calibrate our choice of $\\bar{c}$, in analyses based on Theorems \\ref{thm:IdsetRyFree} or \\ref{cor:BFCalculation3D}. For example, one could choose the largest value of $c_k$; a less conservative approach would be to select the median value.\\n\\n\\subsection{Estimation and Inference}\\label{sec:estimationInference}\\n\\nThus far we have described population level identification results. In practice, we only observe finite sample data. Our identification results depend on the observables $(Y,X,W_1)$ solely through their covariance matrix. In our empirical analysis in section \\ref{sec:empirical}, we apply our identification results by using sample analog estimators that replace $\\var(Y,X,W_1)$ with a consistent estimator $\\widehat{\\var}(Y,X,W_1)$. For example, we let $\\widehat{\\beta}_\\text{med}$ denote the OLS estimator of $\\beta_\\text{med}$, the coefficient on $X$ in the medium regression of $Y$ on $(1,X,W_1)$. 
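To make these sample analog computations concrete, here is a minimal numpy sketch on synthetic data (the data generating process, dimensions, and helper names are our own illustration, not taken from this paper). It partials out $W_0$, computes $\\widehat{\\beta}_\\text{med}$, and computes the calibration statistics $\\widehat{c}_k$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000

# Synthetic data: W0 (control covariates), W1 (two calibration covariates).
W0 = rng.normal(size=(n, 1))
W1 = rng.normal(size=(n, 2)) + 0.5 * W0          # W1 correlated with W0
X = W1 @ np.array([1.0, -0.5]) + rng.normal(size=n)
Y = 2.0 * X + W1 @ np.array([0.3, 0.7]) + rng.normal(size=n)

def residualize(A, B):
    """Residual of A after OLS projection on (1, B): the 'perp' operation."""
    Z = np.column_stack([np.ones(len(B)), B])
    return A - Z @ np.linalg.lstsq(Z, A, rcond=None)[0]

# Partial out W0 everywhere, matching the X^{perp W0} notation in the text.
Yp, Xp, W1p = residualize(Y, W0), residualize(X, W0), residualize(W1, W0)

# beta_med_hat: coefficient on X in the medium regression of Y on (1, X, W1).
D = np.column_stack([np.ones(n), Xp, W1p])
beta_med_hat = np.linalg.lstsq(D, Yp, rcond=None)[0][1]

def c_k(k):
    """sqrt of the R-squared from regressing W1k on the other calibration
    covariates, after partialling out W0."""
    y = W1p[:, k]
    resid = residualize(y, np.delete(W1p, k, axis=1))
    return np.sqrt(1 - resid.var() / y.var())

print(beta_med_hat, [c_k(k) for k in range(W1.shape[1])])
```

With more calibration covariates, `c_k` simply loops over each column of the partialled-out $W_1$, exactly as in the display equation above.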
We expect the corresponding asymptotic theory for estimation and inference on the bound functions to be straightforward, but for brevity we do not develop it in this paper. Inference on the breakdown points and frontiers could also be done as in \\cite{MastenPoirier2019BF}.\\n\\n\\section{Causal Models}\\label{sec:causalModels}\\n\\nIn this section we describe three different causal models in which the parameter $\\beta_\\text{long}$ in equation \\eqref{eq:outcome} has a causal interpretation. These models are based on three different kinds of identification strategies: unconfoundedness, difference-in-differences, and instrumental variables. Here we focus on simple models, but keep in mind that our analysis can be used whenever the causal parameter of interest can be written as the coefficient on a treatment variable in a long regression of the form in equation \\eqref{eq:outcome}.\\n\\n\\subsection{Unconfoundedness}\\n\\nRecall that $Y$ denotes the realized outcome, $X$ denotes treatment, $W_1$ denotes the observed covariates, and $W_2$ denotes the unobserved variables of concern. Let $Y(x)$ denote potential outcomes, where $x$ is any logically possible value of treatment. Assume this potential outcome has the following form:\\n\\begin{equation}\\label{eq:OsterOutcomeEq}\\n\\tY(x) = \\beta_c x + \\gamma_1' W_1 + \\gamma_2 W_2 + U\\n\\end{equation}\\nwhere $(\\beta_c,\\gamma_1,\\gamma_2)$ are unknown constants. The parameter of interest is $\\beta_c$, the causal effect of treatment on the outcome. $U$ is an unobserved random variable. Suppose the realized outcome satisfies $Y = Y(X)$. Consider the following assumption.\\n\\begin{itemize}\\n\\item[] Linear Latent Unconfoundedness: $\\text{corr}(X^{\\perp W_1,W_2}, U^{\\perp W_1,W_2}) = 0$.\\n\\end{itemize}\\nThis assumption says that, after partialling out the observed covariates $W_1$ and the unobserved variables $W_2$, treatment is uncorrelated with the unobserved variable $U$. 
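To illustrate, consider a small simulation (a stylized data generating process of our own, with scalar $W_1$ and $W_2$; it is not from the paper). Since $U$ is drawn independently of everything else, linear latent unconfoundedness holds: the long regression recovers the causal coefficient, while the medium regression that omits the confounder $W_2$ is biased:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20000
beta_c, gamma1, gamma2 = 1.5, 0.8, -0.6

# Confounder W2 is correlated with the observed covariate W1 (endogenous control).
W1 = rng.normal(size=n)
W2 = 0.7 * W1 + rng.normal(size=n)

# Treatment loads on both W1 and W2; U is independent noise, so
# linear latent unconfoundedness holds by construction.
X = 0.5 * W1 + 0.9 * W2 + rng.normal(size=n)
U = rng.normal(size=n)
Y = beta_c * X + gamma1 * W1 + gamma2 * W2 + U   # Y = Y(X)

def ols(y, *regressors):
    """OLS coefficients of y on (1, regressors)."""
    Z = np.column_stack([np.ones(len(y))] + list(regressors))
    return np.linalg.lstsq(Z, y, rcond=None)[0]

beta_long_hat = ols(Y, X, W1, W2)[1]   # long regression: recovers beta_c
beta_med_hat = ols(Y, X, W1)[1]        # medium regression: omitted-W2 bias
print(beta_long_hat, beta_med_hat)
```

The gap between the two estimates is the omitted variable bias that the sensitivity analysis is designed to bound.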
This model has two unobservables, which are treated differently via this assumption. We call $W_2$ the \\emph{confounders} and $U$ the \\emph{non-confounders}. $W_2$ are the unobserved variables which, when omitted, may cause bias. In contrast, as long as we adjust for $(W_1,W_2)$, omitting $U$ does not cause bias. Note that, given equation \\eqref{eq:OsterOutcomeEq}, linear latent unconfoundedness can be equivalently written as $\\text{corr}(X^{\\perp W_1,W_2}, Y(x)^{\\perp W_1,W_2}) = 0$ for all logically possible values of treatment $x$.\n\nLinear latent unconfoundedness is a linear parametric version of the nonparametric latent unconfoundedness assumption\n\\begin{equation}\\label{eq:nonparaLatentUnconf}\n\tY(x) \\indep X \\mid (W_1,W_2)\n\\end{equation}\nfor all logically possible values of $x$. In particular, with the linear potential outcomes assumption of equation \\eqref{eq:OsterOutcomeEq}, nonparametric latent unconfoundedness (equation \\eqref{eq:nonparaLatentUnconf}) implies linear latent unconfoundedness. We use the linear parametric version to avoid overidentifying restrictions that can arise from the combination of linearity and statistical independence.\n\nThe following result shows that, in this model, the causal effect of $X$ on $Y$ can be obtained from $\\beta_\\text{long}$, the coefficient on $X$ in the long regression described in equation \\eqref{eq:outcome}.\n\n\\begin{proposition}\\label{prop:unconfoundednessBaseline}\nConsider the linear potential outcomes model \\eqref{eq:OsterOutcomeEq}. Suppose linear latent unconfoundedness holds. Suppose A\\ref{assump:posdefVar} holds. Then $\\beta_c = \\beta_\\text{long}$.\n\\end{proposition}\n\nSince $W_2$ is unobserved, however, this result cannot be used to identify $\\beta_c$. Instead, suppose we believe the no selection on unobservables assumption A\\ref{assump:baselineEndogControl1}. 
Recall that this assumption says that $\\pi_2 = 0$, where $\\pi_2$ is the coefficient on $W_2$ in the OLS estimand of $X$ on $(1,W_1,W_2)$. Under this assumption, we obtain the following result. Recall that $\\beta_\\text{med}$ denotes the coefficient on $X$ in the medium regression of $Y$ on $(1,X,W_1)$. \\n\\n\\begin{corollary}\\label{corr:SelectionOnObsCausal}\\nSuppose the assumptions of Proposition \\ref{prop:unconfoundednessBaseline} hold. Suppose A\\ref{assump:baselineEndogControl1} holds ($\\pi_2 = 0$). Then $\\beta_c = \\beta_\\text{med}$.\\n\\end{corollary}\\n\\nThe selection on observables assumption A\\ref{assump:baselineEndogControl1} is usually thought to be quite strong, however. Nonetheless, since $\\beta_c = \\beta_\\text{long}$, our results in section \\ref{sec:MainNewAnalysis} can be used to assess sensitivity to selection on unobservables.\\n\\n\\subsection{Difference-in-differences}\\n\\nLet $Y_t(x_t)$ denote potential outcomes at time $t$, where $x_t$ is a logically possible value of treatment. For simplicity we do not consider models with dynamic effects of treatment or of covariates. Also suppose $W_{2t}$ is a scalar. Suppose\\n\\begin{equation}\\label{eq:DiDoutcomeEq}\\n\\tY_t(x_t) = \\beta_c x_t + \\gamma_1' W_{1t} + \\gamma_2 W_{2t} + V_t\\n\\end{equation}\\nwhere $V_t$ is an unobserved random variable and $(\\beta_c,\\gamma_1,\\gamma_2)$ are unknown parameters that are constant across units. The classical two way fixed effects model is a special case where\\n\\begin{equation}\\label{eq:TWFE}\\n\\tV_t = A + \\delta_t + U_t\\n\\end{equation}\\nwhere $A$ is an unobserved random variable that is constant over time, $\\delta_t$ is an unobserved constant, and $U_t$ is an unobserved random variable.\\n\\nSuppose there are two time periods, $t \\in \\{1,2\\}$. Let $Y_t = Y_t(X_t)$ denote the observed outcome at time $t$. For any time varying random variable like $Y_t$, let $\\Delta Y = Y_2 - Y_1$. 
Then taking first differences of the observed outcomes yields\n\\[\n\t\\Delta Y = \\beta _c \\Delta X + \\gamma_1' \\Delta W_1 + \\gamma_2 \\Delta W_2 + \\Delta V.\n\\]\nLet $\\beta_\\text{long}$ denote the OLS coefficient on $\\Delta X$ from the long regression of $\\Delta Y$ on $(1,\\Delta X, \\Delta W_1, \\Delta W_2)$.\n\n\\begin{proposition}\\label{prop:DiD}\nConsider the linear potential outcome model \\eqref{eq:DiDoutcomeEq}. Suppose the following exogeneity assumption holds:\n\\begin{itemize}\n\\item $\\cov(\\Delta X, \\Delta V) = 0$, $\\cov(\\Delta W_2, \\Delta V) = 0$, and $\\cov(\\Delta W_1, \\Delta V) = 0$.\n\\end{itemize}\nThen $\\beta_c = \\beta_\\text{long}$.\n\\end{proposition}\n\nThe exogeneity assumption in Proposition \\ref{prop:DiD} says that $\\Delta V$ is uncorrelated with all components of $(\\Delta X, \\Delta W_1, \\Delta W_2)$. A sufficient condition for this is the two way fixed effects assumption \\eqref{eq:TWFE} combined with the assumption that the $U_t$ are uncorrelated with $(X_s,W_{1s}, W_{2s})$ for all $t$ and $s$. Given this exogeneity assumption, the only possible identification problem is that $\\Delta W_2$ is unobserved. Hence we cannot adjust for this trend variable. If we assume, however, that treatment trends $\\Delta X$ are not related to the unobserved trend $\\Delta W_2$, then we can point identify $\\beta_c$. Specifically, consider the linear projection of $\\Delta X$ onto $(1,\\Delta W_1, \\Delta W_2)$:\n\\[\n\t\\Delta X = \\pi_1' (\\Delta W_1) + \\pi_2 (\\Delta W_2) + (\\Delta X)^{\\perp \\Delta W_1, \\Delta W_2}.\n\\]\nUsing this equation to define $\\pi_2$, we now have the following result. Here we let $\\beta_\\text{med}$ denote the coefficient on $\\Delta X$ in the medium regression of $\\Delta Y$ on $(1, \\Delta X, \\Delta W_1)$.\n\n\\begin{corollary}\\label{corr:DiD}\nSuppose the assumptions of Proposition \\ref{prop:DiD} hold. Suppose A\\ref{assump:baselineEndogControl1} holds ($\\pi_2 = 0$). 
Then $\\beta_c = \\beta_\\text{med}$.\\n\\end{corollary}\\n\\nThis result implies that $\\beta_c$ is point identified when $\\pi_2 = 0$. This assumption is a version of common trends, because it says that the unobserved trend $\\Delta W_2$ is not related to the trend in treatments, $\\Delta X$. Our results in section \\ref{sec:MainNewAnalysis} allow us to analyze the impacts of failure of this common trends assumption on conclusions about the causal effect of $X$ on $Y$, $\\beta_c$. In particular, our results allow researchers to assess the failure of common trends by comparing the impact of observed time varying covariates with the impact of unobserved time varying confounders. In this context, allowing for endogenous controls means allowing for the trend in observed covariates to correlate with the trend in the unobserved covariates.\\n\\n\\subsection{Instrumental variables}\\n\\nLet $Z$ be an observed variable that we want to use as an instrument. Let $Y(z)$ denote potential outcomes, where $z$ is any logically possible value of the instrument. Assume\\n\\[\\n\\tY(z) = \\beta_c z + \\gamma_1' W_1 + \\gamma_2 W_2 + U\\n\\]\\nwhere $U$ is an unobserved scalar random variable and $(\\beta_c, \\gamma_1, \\gamma_2)$ are unknown constants. Thus $\\beta_c$ is the causal effect of $Z$ on $Y$. In an instrumental variables analysis, this is typically called the reduced form causal effect, and $Y(z)$ are reduced form potential outcomes. Suppose $\\cov(Z,U) = 0$, $\\cov(W_2, U) = 0$, and $\\cov(W_1, U) = 0$. Then $\\beta_c$ equals the OLS coefficient on $Z$ from the long regression of $Y$ on $(1,Z,W_1,W_2)$. In this model, Theorem \\ref{thm:baselineEndogControl1} implies that $\\beta_c$ is also obtained as the coefficient on $Z$ in the medium regression of $Y$ on $(1,Z,W_1)$, and thus is point identified. In this case, assumption A\\ref{assump:baselineEndogControl1} is an instrument exogeneity assumption. 
Our results in section \\ref{sec:MainNewAnalysis} thus allow us to analyze the impacts of instrument exogeneity failure on conclusions about the reduced form causal effect of $Z$ on $Y$, $\\beta_c$.\\n\\nIn a typical instrumental variable analysis, the reduced form causal effect of the instrument on outcomes is not the main effect of interest. Instead, we usually care about the causal effect of a treatment variable on outcomes. The reduced form is often just an intermediate tool for learning about that causal effect. Our analysis in this paper can be used to assess the sensitivity of conclusions about this causal effect to failures of instrument exclusion or exogeneity as well. This analysis is somewhat more complicated, however, and so we develop it in a separate paper. Nonetheless, empirical researchers do sometimes examine the reduced form directly to study the impact of instrument exogeneity failure. For example, see section D7 and table D15 of \\cite{Tabellini2020}.\\n\\n\\section{Empirical Application: The Frontier Experience and Culture}\\label{sec:empirical}\\n\\nWhere does culture come from? \\cite{BFG2020} study the origins of people's preferences for or against government redistribution, intervention, and regulation. They provide the first systematic empirical analysis of a famous conjecture that living on the American frontier cultivated individualism and antipathy to government intervention. The idea is that life on the frontier was hard and dangerous, had little to no infrastructure, and required independence and self-reliance to survive. It was far from the federal government. And it was an opportunity for upward mobility through effort, rather than luck. These features then created cultural change, in particular leading to ``more pervasive individualism and opposition to redistribution''. 
Overall, \\cite{BFG2020} find evidence supporting this frontier life conjecture.\\n\\nThe main results in \\cite{BFG2020} are based on an unconfoundedness identification strategy and use linear models. They note that ``the main threat to causal identification of $\\beta$ lies in omitted variables'' and hence they rely heavily on Oster's \\citeyearpar{Oster2019} method to ``show that unobservables are unlikely to drive our results'' (page 2344). As we have discussed, however, this approach is based on the exogenous controls assumption. In this section, we apply our methods to examine the impact of allowing for endogenous controls on Bazzi et al.'s empirical conclusions. Overall, we come to a more nuanced conclusion about robustness: Whereas they found that all of their analyses were robust to the presence of omitted variables, we find that their analysis using questionnaire based outcomes is quite sensitive, while their analysis using property tax levels and voting patterns is robust. We also find suggestive evidence that the controls are endogenous, which highlights the value of sensitivity analysis methods that allow for endogenous controls. We discuss all of these findings in more detail below.\\n\\n\\subsection{Data}\\n\\nWe first describe the variables and data sources. The main units of analysis are counties in the U.S., although we will also use some individual level data. The treatment $X$ is the ``total frontier experience'' (TFE). This is defined as the number of years between 1790 and 1890 a county spent ``on the frontier'', divided by 10. A county is ``on the frontier'' if it had a population density less than 6 people per square mile and was within 100 km of the ``frontier line''. The frontier line is a line that divides sparse counties (less than or equal to 2 people per square mile) from less sparse counties. 
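For concreteness, this construction of TFE can be sketched in code. The sketch is purely illustrative: the decade-level data structure and the example county history are hypothetical, and only the two thresholds come from the definition above.

```python
# Stylized construction of "total frontier experience" (TFE) for one county.
DENSITY_CUTOFF = 6.0      # people per square mile
FRONTIER_DIST_KM = 100.0  # within 100 km of the frontier line

def on_frontier(density, dist_km):
    """A county-decade is 'on the frontier' if it is sparse enough and
    close enough to the contemporaneous frontier line."""
    return density < DENSITY_CUTOFF and dist_km <= FRONTIER_DIST_KM

def total_frontier_experience(decades):
    """decades: list of (density, dist_km) pairs, one per census decade
    between 1790 and 1890 (a simplifying assumption for illustration).
    Each qualifying decade contributes 10 years; TFE is total years on
    the frontier divided by 10."""
    years = sum(10 for density, dist in decades if on_frontier(density, dist))
    return years / 10

# Hypothetical county: on the frontier for the first three decades.
history = [(1.5, 20), (3.0, 50), (5.9, 99), (8.0, 40), (12.0, 300)]
print(total_frontier_experience(history))  # 3.0
```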
By definition, the frontier line changed over time in response to population patterns, but it did so unevenly, resulting in some counties being ``on the frontier'' for longer than others. Figure 3 in \\cite{BFG2020} shows the spatial distribution of treatment.\n\nThe outcome variable $Y$ is a measure of modern culture. They consider 8 different outcome variables. Since data is not publicly available for all of them, we only look at 5 of these. They can be classified into two groups. The first are questionnaire based outcomes:\n\\begin{enumerate}\n\\item \\emph{Cut spending on the poor}. This variable comes from the 1992 and 1996 waves of the American National Election Study (ANES), a nationally representative survey. In those waves, it asked\n\\begin{itemize}\n\\item[] ``Should federal spending be increased, decreased, or kept about the same on poor people?''\n\\end{itemize}\nLet $Y_{1i} = 1$ if individual $i$ answered ``decreased'' and 0 otherwise.\n\n\\item \\emph{Cut welfare spending}. This variable comes from the Cooperative Congressional Election Study (CCES), waves 2014 and 2016. In those waves, it asked\n\\begin{itemize}\n\\item[] ``State legislatures must make choices when making spending decisions on important state programs. Would you like your legislature to increase or decrease spending on Welfare? 1. Greatly Increase 2. Slightly Increase 3. Maintain 4. Slightly Decrease 5. Greatly Decrease.''\n\\end{itemize}\nLet $Y_{2i} = 1$ if individual $i$ answered ``slightly decrease'' or ``greatly decrease'' and 0 otherwise.\n\n\\item \\emph{Reduce debt by cutting spending}. This variable also comes from the CCES, waves 2000--2014 (biannual). It asked\n\\begin{itemize}\n\\item[] ``The federal budget deficit is approximately [\\$ year specific amount] this year. If the Congress were to balance the budget it would have to consider cutting defense spending, cutting domestic spending (such as Medicare and Social Security), or raising taxes to cover the deficit. 
Please rank the options below from what would you most prefer that Congress do to what you would least prefer they do: Cut Defense Spending; Cut Domestic Spending; Raise Taxes.''\\n\\end{itemize}\\nLet $Y_{3i} = 1$ if individual $i$ chooses ``cut domestic spending'' as a first priority, and 0 otherwise.\\n\\end{enumerate}\\nThese surveys also collected data on individual demographics, specifically age, gender, and race. The second group of outcome variables is based on behavior rather than questionnaire responses:\\n\\begin{enumerate}\\n\\setcounter{enumi}{3}\\n\\n\\item $Y_{4i}$ is the average effective \\emph{property tax rate} in county $i$, based on 2010--2014 data from the National Association of Home Builders (NAHB), which itself draws on the American Community Survey (ACS) waves 2010--2014.\\n\\n\\item $Y_{5i}$ is the average \\emph{Republican vote share} over the five presidential elections from 2000 to 2016 in county $i$, using data from Leip's Atlas of U.S. Presidential Elections.\\n\\end{enumerate}\\n\\nNext we describe the observed covariates. We partition these covariates into $W_1$ and $W_0$ by following the implementation of Oster's \\citeyearpar{Oster2019} approach in \\cite{BFG2020}. $W_1$, the calibration covariates which \\emph{are} used to calibrate selection on unobservables, is a set of geographic and climate controls: Centroid Latitude, Centroid Longitude, Land area, Average rainfall, Average temperature, Elevation, Average potential agricultural yield, and Distance from the centroid to rivers, lakes, and the coast. $W_0$, the control covariates which are \\emph{not} used to calibrate selection on unobservables, includes state fixed effects. The questionnaire based outcomes use individual level data. For those analyses, we also include age, age-squared, gender, race, and survey wave fixed effects in $W_0$. In \\cite{BFG2020}, they were included in $W_1$. 
We instead include them in $W_0$ to keep the set of calibration covariates $W_1$ constant across the five main specifications.\\n\\n\\subsection{Baseline Model Results}\\n\\n\\cite{BFG2020} present a variety of analyses. We focus on the subset of their main results for which replication data is publicly available. These are columns 1, 2, 4, 6, and 7 of their table 3. Panel A in our table \\ref{table:mainTable1} replicates those results. From columns (1)--(3) we see that individuals who live in counties with more exposure to the frontier prefer cutting spending on the poor, on welfare, and to reduce debt by spending cuts. Moreover, these point estimates are statistically significant at conventional levels. From columns (4) and (5), we see that counties with more exposure to the frontier have lower property taxes and are more likely to vote for Republicans. As \\cite{BFG2020} argue, these baseline results therefore support the conjecture that frontier life led to opposition to government intervention and redistribution.\\n\\n\\begin{table}[t]\\n\\centering\\n\\SetTblrInner[talltblr]{rowsep=0pt}\\n\\resizebox{.98\\textwidth}{!}{\\n\\begin{talltblr}[\\n  caption = {The Effect of Frontier Life on Opposition to Government Intervention and Redistribution.\\label{table:mainTable1}},\\n  remark{Note} = {Panel A and the first row of Panel B replicate columns 1, 2, 4, 6, and 7 of table 3 in \\cite*{BFG2020}, while the second row of Panel B and Panel C are new. 
As in \\cite{BFG2020}, Panel B uses Oster's rule of thumb choice $R_\\text{long}^2 = 1.3 R_\\text{med}^2$.},\\n]{\\np{0.25\\textwidth}\\n*{3}{>{\\centering \\arraybackslash}p{0.15\\textwidth}} |\\n*{2}{>{\\centering \\arraybackslash}p{0.15\\textwidth}}\\n}\\n \\toprule\\n\\rule{0pt}{1.25\\normalbaselineskip}\\n& Prefers Cut Public Spending on Poor & Prefers Cut Public Spending on Welfare & Prefers Reduce Debt by Spending Cuts & County Property Tax Rate & Republican Presidential Vote Share \\\\[4pt]\\n & (1) & (2) & (3) & (4) & (5) \\\\[4pt]\\n \\hline\\n\\multicolumn{6}{l}{Panel A. Baseline Results} \\rule{0pt}{1.25\\normalbaselineskip} \\\\[0.5em]\\n\\hline\\n\\rule{0pt}{1.25\\normalbaselineskip}\\nTotal Frontier Exp. & 0.010 & 0.007 & 0.014 & -0.034 & 2.055 \\\\\\n& {\\footnotesize (0.004) } & {\\footnotesize (0.003) } & {\\footnotesize (0.002) } & {\\footnotesize (0.007)} & {\\footnotesize (0.349)} \\\\[2pt]\\n \\rule{0pt}{1.25\\normalbaselineskip}\\nMean of Dep Variable & 0.09 & 0.40 & 0.41 & 1.02 & 60.04 \\\\ \\nNumber of Individuals & 2322 & 53,472 & 111,853 & - & - \\\\\\nNumber of Counties & 95 & 1863 & 1963 & 2029 & 2036 \\\\[2pt]\\n \\rule{0pt}{1.25\\normalbaselineskip}\\n Controls: & & & & & \\\\[2pt]\\n \\ \\ Survey Wave FEs & X & X & X & - & - \\\\\\n \\ \\ Ind.\\ Demographics & X & X & X & - & - \\\\\\n \\ \\ State Fixed Effects & X & X & X & X & X \\\\\\n \\ \\ Geographic\/Climate & X & X & X & X & X \\\\[2pt]\\n \\hline\\n\\multicolumn{6}{l}{Panel B. Sensitivity Analysis (Exogenous Controls; Oster 2019)} \\rule{0pt}{1.25\\normalbaselineskip} \\\\[0.5em]\\n\\hline\\n\\rule{0pt}{1.25\\normalbaselineskip}\\n$\\delta^\\text{bp}$ (as reported) & 16.01 & 3.1 & 5.9 & -27.4 & -8.5 \\\\\\n$\\delta^\\text{bp}$ (corrected) & 2.28 & 3.05 & 2.58 & 90.7 & -23.3 \\\\[2pt]\\n \\hline\\n\\multicolumn{6}{l}{Panel C. 
Sensitivity Analysis (Endogenous Controls)} \\rule{0pt}{1.25\\normalbaselineskip} \\\\[0.5em]\\n\\hline\\n\\rule{0pt}{1.25\\normalbaselineskip}\\n$\\bar{r}_X^\\text{bp}$ ($\\times 100$) & 2.83 & 3.05 & 5.85 & 72.0 & 80.4 \\\\[2pt]\\n\\bottomrule\\n\\end{talltblr}\\n}\\n\\end{table}\\n\\n\\subsection{Assessing Selection on Observables}\\n\\nThe baseline results in table \\ref{table:mainTable1} rely on a selection on observables assumption, that treatment $X$ is exogenous after adjusting for the observed covariates $(W_0,W_1)$. How plausible is this assumption? \\cite{BFG2020} say\\n\\begin{itemize}\\n\\item[] ``The main threat to causal identification of $\\beta$ lies in omitted variables correlated with both contemporary culture and TFE. We address this concern in four ways. First, we rule out confounding effects of modern population density. Second, we augment [the covariates] to remove cultural variation highlighted in prior work. \\emph{Third, we show that unobservables are unlikely to drive our results.} Finally, we use an IV strategy that isolates exogenous variation in TFE due to changes in national immigration flows over time.'' (page 2344, emphasis added)\\n\\end{itemize}\\nTheir first two approaches continue to rely on selection on observables, simply by including additional control variables. We focus on their third strategy: using a formal econometric method to assess the importance of omitted variables. \\n\\n\\subsubsection*{Sensitivity Analysis Assuming Exogenous Controls}\\n\\nWe start by summarizing the sensitivity analysis based on \\cite{Oster2019} (hereafter Oster), as used in \\cite{BFG2020}. Oster's analysis uses two sensitivity parameters: (i) $\\delta$, which we define in equation \\eqref{eq:defOfDelta} in appendix \\ref{subsec:osterredef} and (ii) $R_\\text{long}^2$, the R-squared from the long regression of $Y$ on $(1,X,W_0,W_1,W_2)$, including the omitted variable of concern $W_2$. 
For any choice of $(\\delta,R_\\text{long}^2)$, Oster's Proposition 2 derives the identified set for $\\beta_\\text{long}$. Oster's Proposition 3 derives the breakdown point for $\\delta$, as a function of $R_\\text{long}^2$, for the conclusion that the identified set does not contain zero. Denote this point by $\\delta^\\text{bp}(R_\\text{long}^2)$. This is the smallest value of $\\delta$ such that the identified set contains zero. Put differently: For any $\\delta < \\delta^\\text{bp}(R_\\text{long}^2)$, the identified set does not contain zero, and hence we can still conclude that $\\beta_\\text{long} \\neq 0$.\\n\\nThe second row of Panel B of table \\ref{table:mainTable1} shows sample analog estimates of this breakdown point, which is commonly referred to as \\emph{Oster's delta}. As in \\cite{BFG2020}, we use Oster's rule of thumb choice $R_\\text{long}^2 = 1.3 R_\\text{med}^2$. $R_\\text{med}^2$ is the R-squared from the medium regression of $Y^{\\perp W_0}$ on $(1,X^{\\perp W_0},W_1^{\\perp W_0})$, which can be estimated from the data. Thus the table shows estimates of $\\delta^\\text{bp}(1.3 R_\\text{med}^2)$. The first row of Panel B shows the values of Oster's delta as reported in table 2 of \\cite{BFG2020}. These were incorrectly computed. It appears to us that, rather than using the correct expression in Proposition 3 of \\cite{Oster2019}, they set the first displayed equation on page 193 of that paper equal to zero and solved for $\\delta$. That does not give the correct breakdown point. 
Note that we noticed this same mistake in several other papers published in top 5 economics journals.\\n\\n\\cite{BFG2020} conclude:\\n\\begin{itemize}\\n\\item[] ``Oster (2019) suggests $| \\delta | > 1$ leaves limited scope for unobservables to explain the results'' and therefore, based on their $\\delta^\\text{bp}$ estimates, ``unobservables are unlikely to drive our results'' (page 2344).\\n\\end{itemize}\\nThis conclusion remains unchanged if the same rule is applied to the correctly computed $\\delta^\\text{bp}$ estimates.\\n\\n\\subsubsection*{Assessing Exogenous Controls}\\n\\nAs we have discussed, Oster's method combined with the $\\delta = 1$ cutoff rule relies on the exogenous controls assumption. Is the exogenous controls assumption plausible in this application? The answer depends on which omitted variables $W_2$ we are concerned about. \\cite{BFG2020} do not specifically describe the unmeasured omitted variables of concern, nor do they discuss the plausibility of exogenous controls. However, in their extra robustness checks they consider the following variables:\\n\\begin{table}[h]\\n\\centering\\n\\begin{tabular}{l l}\\nContemporary population density & Sex ratio \\\\\\nConflict with Native Americans & Rainfall risk \\\\\\nEmployment share in manufacturing & Portage sites \\\\\\nMineral resources & Prevalence of slavery \\\\\\nImmigrant share & Scotch-Irish settlement \\\\\\nTiming of railroad access & Birthplace diversity \\\\\\nRuggedness &\\n\\end{tabular}\\n\\end{table}\\n\\n\\noindent The additional omitted variables of concern might therefore be similar to these variables. Thus the question is: Are \\emph{all} of the geographic\/climate variables in $W_1$ uncorrelated with variables like these? This seems unlikely, especially since many of these additional variables are also geographic\/climate type variables. 
Moreover, although this assumption is not falsifiable---since $W_2$ is unobserved---we can assess its plausibility by examining the correlation structure of the observed covariates. Specifically, we compute the parameters $c_k$ defined in section \\ref{sec:interpretation}. These are square roots of R-squareds from regressing each element of $W_1$ on the other elements, after partialling out $W_0$. Table \\ref{table:ck_calib} shows sample analog estimates of these $c_k$'s.\\n\\nThe estimates in table \\ref{table:ck_calib} show a substantial range of correlation between the observed covariates in $W_1$. Recall that the exogenous controls assumption says that each element of $W_1$ is uncorrelated with $W_2$, after partialling out $W_0$. Thus if $W_2$ were included in this table, it would have a value of zero. Therefore, if $W_2$ is a variable similar to the components of $W_1$ then we would expect exogenous controls to fail. This suggests that it is important to use sensitivity analysis methods that allow for endogenous controls.\\n\\n\\begin{table}[h]\\n\\caption{Correlations Between Observed Covariates. 
\\label{table:ck_calib}}\\n\\centering\\n\\begin{tabular}{l c}\\n\\toprule\\n$W_{1k}$ & $\\widehat{R}_{W_{1k} \\sim W_{1,-k} \\mathrel{\\mathsmaller{\\bullet}} W_0}$ \\\\[4pt]\\n \\hline\\n \\rule{0pt}{1.25\\normalbaselineskip}\\nAverage temperature & 0.945 \\\\\\nCentroid Latitude & 0.936 \\\\\\nElevation & 0.825 \\\\\\nAverage potential agricultural yield & 0.805 \\\\\\nAverage rainfall & 0.748 \\\\\\nDistance from centroid to the coast & 0.698 \\\\\\nCentroid Longitude & 0.659 \\\\\\nDistance from centroid to rivers & 0.367 \\\\\\nDistance from centroid to lakes & 0.316 \\\\\\nLand area & 0.313 \\\\\\n\\bottomrule\\n\\end{tabular}\\n\\end{table}\\n\\n\\subsubsection*{Sensitivity Analysis Allowing For Endogenous Controls}\\n\\nNext we present the findings from the sensitivity analysis that we developed in section \\ref{sec:MainNewAnalysis}, which allows for endogenous controls. We begin with our simplest result, Theorem \\ref{cor:IdsetRyANDcFree}, which uses only a single sensitivity parameter $\\bar{r}_X$. Panel C of table \\ref{table:mainTable1} reports sample analog estimates of the breakdown point $\\bar{r}_X^\\text{bp}$ described in Corollary \\ref{corr:breakdownPointRXonly}. This is the largest amount of selection on unobservables, as a percentage of selection on observables, allowed for until we can no longer conclude that $\\beta_\\text{long}$ is nonzero. Recall that, since those results allow for arbitrarily endogenous controls, Theorem \\ref{cor:IdsetRyANDcFree} implies that $\\bar{r}_X^\\text{bp} < 1$. As discussed in section \\ref{sec:interpretation}, however, this does not imply that these results should always be considered non-robust. Instead, when the calibration covariates $W_1$ are a set of variables that are important for treatment selection, researchers should interpret large values of $\\bar{r}_X^\\text{bp}$ as indicating that their baseline results are robust. 
For example, in columns (4) and (5) of Panel C we see that the breakdown point estimates for the two behavior based outcomes are 72\\% and 80.4\\%. For instance, for the average Republican vote share outcome, we can conclude $\\beta_\\text{long} > 0$ as long as selection on unobservables is at most 80.4\\% as large as selection on observables. In contrast, the breakdown point estimates in columns (1)--(3) are substantially smaller: between about 3\\% and 6\\%. For these outcomes, we therefore only need selection on unobservables to be at least 3 to 6\\% as large as selection on observables to overturn our conclusion that $\\beta_\\text{long} > 0$. Thus, unlike the conclusions based on Oster's method, we find that the analysis using questionnaire based outcomes is highly sensitive to selection on unobservables. In contrast, the analysis using behavior based outcomes is quite robust to selection on unobservables. This contrast continues to hold after considering restrictions on the magnitude of endogenous controls and the impact of unobservables on outcomes too. We present these analyses next.\n\n\\begin{figure}[t]\n\\caption{Sensitivity Analysis for Average Republican Vote Share. See body text for discussion.\\label{fig:republican}}\n\\centering\n\\includegraphics[width=0.40\\textwidth]{images\/bazzi-Bi5-idset-1d}\n\\hspace{0.05\\textwidth}\n\\includegraphics[width=0.40\\textwidth]{images\/bazzi-Bi5-idset-2d} \\\\[6pt]\n\\includegraphics[width=0.40\\textwidth]{images\/bazzi-Bi5-breakfront-2d}\n\\hspace{0.05\\textwidth}\n\\includegraphics[width=0.40\\textwidth]{images\/bazzi-Bi5-breakfront-3d}\n\\end{figure}\n\nFor brevity we discuss just one questionnaire based outcome, cut spending on the poor, and one behavior based outcome, average Republican vote share. Figure \\ref{fig:republican} shows the results for average Republican vote share. 
The top left plot shows the estimated identified set for $\\beta_\\text{long}$ as a function of $\\bar{r}_X$, allowing for arbitrarily endogenous controls and no restrictions on the outcome equation. This is the set described by Theorem \\ref{cor:IdsetRyANDcFree}. The horizontal intercept is the breakdown point $\\bar{r}_X^\\text{bp} = 80.4\\%$, as reported in Panel C, column (5) of table \\ref{table:mainTable1}. This result allows for arbitrarily endogenous controls.\n\nIf we are willing to somewhat restrict the magnitude of control endogeneity, we can allow for more selection on unobservables. The top right figure shows a sequence of estimated identified sets for $\\beta_\\text{long}$ as a function of $\\bar{r}_X$ on the horizontal axis, as described in Theorem \\ref{thm:IdsetRyFree}. It starts at the darkest line, $\\bar{c} = 1$ (arbitrarily endogenous controls), and then as the shading of the bound functions becomes lighter, we get closer to exogenous controls ($\\bar{c} = 0$). Put differently: For any fixed value of $\\bar{r}_X$, imposing stronger assumptions on exogeneity of the controls results in a smaller identified set. The bottom left picture shows the impact of assuming partially exogenous controls on the breakdown point for selection on unobservables. Specifically, this plot shows the estimated breakdown frontier $\\bar{r}_X^\\text{bf}(\\bar{c})$. This function shows the horizontal intercept in the top right figure, as a function of $\\bar{c}$. At $\\bar{c} = 1$, we recover the breakdown point 80.4\\% that allows for arbitrarily endogenous controls. If we impose exogenous controls, however, and set $\\bar{c} = 0$, we obtain a breakdown point around 135\\%. That is, under exogenous controls, we can allow for selection on unobservables of up to 135\\% as large as selection on observables before our baseline results break down. 
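Numerically, a breakdown point of this kind is just the horizontal intercept of the lower-bound function of the identified set, and the breakdown frontier traces that intercept over values of $\bar{c}$. A minimal sketch, assuming a continuous lower bound that is decreasing in $\bar{r}_X$ (the toy linear bounds in the usage below are ours, not estimates from the application):

```python
def breakdown_point(lower_bound, r_max=3.0, tol=1e-9):
    """Smallest r_x in [0, r_max] with lower_bound(r_x) <= 0, assuming
    lower_bound is continuous and decreasing (bisection search).
    Returns r_max if the bound stays positive on the whole interval."""
    lo, hi = 0.0, r_max
    if lower_bound(hi) > 0:
        return r_max
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if lower_bound(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def breakdown_frontier(lower_bound_2d, c_grid, r_max=3.0):
    """r_x^bf(c): the breakdown point as a function of the bound c on
    control endogeneity, traced over a grid of c values."""
    return [breakdown_point(lambda r: lower_bound_2d(r, c), r_max)
            for c in c_grid]
```

For a stylized bound `lambda r, c: 1.35 - 0.55 * c - r`, the frontier runs from 1.35 at $\bar{c}=0$ down to 0.80 at $\bar{c}=1$, mimicking the shape described in the text.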
In fact, we only need $\\bar{c}$ less than or equal to about $0.3$ to obtain a breakdown point at or above 100\\%.\n\nAll of the analysis thus far has left the impact of unobservables on outcomes unrestricted. So in our final analysis we also consider the effect of restricting the impact of unobservables on outcomes. The bottom right plot in figure \\ref{fig:republican} shows the three-dimensional breakdown frontier $\\bar{r}_Y^\\text{bf}(\\bar{r}_X, \\bar{c})$ described in Theorem \\ref{cor:BFCalculation3D}. Any combination of sensitivity parameters $(\\bar{r}_X, \\bar{r}_Y, \\bar{c})$ below this three-dimensional function leads to an identified set that allows us to conclude $\\beta_\\text{long} > 0$. This includes, for example, $\\bar{r}_X = \\bar{r}_Y = 110\\%$ and $\\bar{c} = 0.7$. Note that $0.7$ is around the middle of the distribution of $c_k$ values in table \\ref{table:ck_calib}, and hence might be considered a moderate or slightly conservative value of the magnitude of control endogeneity. For this value, our baseline finding is robust to omitted variables that have up to 110\\% of the effect on treatment and outcomes as the observables. If we impose exogenous controls ($\\bar{c} = 0$) then we can allow the impact of the omitted variable on outcomes to be 200\\% as large as the observables and the impact of the omitted variable on treatment to be up to about 240\\% as large as the observables, and yet still conclude that $\\beta_\\text{long} > 0$.\n\n\\begin{figure}[t]\n\\caption{Sensitivity Analysis for Cut Spending on Poor. See body text for discussion.\\label{fig:cutpoor}}\n\\centering\n\\includegraphics[width=0.45\\textwidth]{images\/bazzi-Aii1-idset-2d}\n\\hspace{0.025\\textwidth}\n\\includegraphics[width=0.45\\textwidth]{images\/bazzi-Aii1-breakfront-3d}\n\\end{figure}\n\nThese findings suggest that the empirical conclusions for average Republican vote share are quite robust to failures of the selection on observables assumption. 
In contrast, we next consider the analysis for the cut spending on the poor outcome. Figure \\ref{fig:cutpoor} shows the results. The left plot shows the estimated identified sets for $\\beta_\\text{long}$ as a function of $\\bar{r}_X$ on the horizontal axis, as described in Theorem \\ref{thm:IdsetRyFree}. For $\\bar{c} = 1$, the horizontal intercept gives an estimated value for $\\bar{r}_X^\\text{bp}$ of 2.83\\%, as reported in Panel C, column (1) of table \\ref{table:mainTable1}. Moreover, as shown in the figure, even if we impose exogenous controls, $\\bar{c} = 0$, the identified sets do not change much, and hence the breakdown point does not change much. The breakdown frontier $\\bar{r}_X^\\text{bf}(\\bar{c})$ is essentially flat and hence we do not report it. These conclusions do not change substantially if we also impose restrictions on how omitted variables affect the outcomes. The right plot in figure \\ref{fig:cutpoor} shows the estimated three-dimensional breakdown frontier. It shows that we can allow for larger amounts of selection on unobservables if we are willing to greatly restrict the impact of unobservables on outcomes. For example, if we allow for arbitrarily endogenous controls ($\\bar{c} = 1$) then we can allow for the effect of omitted variables on the treatment and outcomes to be as much as 50\\% that of the effect of the observables while still concluding that $\\beta_\\text{long} > 0$. Alternatively, if we restrict the effect of omitted variables on outcomes to be at most 25\\% that of observables, then we can allow the omitted variables to affect treatment by more than 100\\% of the effect of the observables while still concluding that $\\beta_\\text{long} > 0$.\n\nOverall, we see that there are some cases where the results for cutting spending on the poor could be considered robust. But there are also many cases where these results could be considered sensitive. 
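Reading a three-dimensional breakdown frontier plot amounts to checking whether a candidate triple of sensitivity parameters lies below the frontier surface. A sketch of that check, with a purely hypothetical frontier shape (ours, chosen only to be decreasing in both arguments, as in the plots):

```python
def concludes_positive(r_x, r_y, c, frontier):
    """True when (r_x, r_y, c) lies strictly below the three-dimensional
    breakdown frontier r_y^bf(r_x, c): the relaxation is small enough
    that the identified set still excludes zero."""
    return r_y < frontier(r_x, c)

# a hypothetical frontier, decreasing in both r_x and c
toy_frontier = lambda r_x, c: max(1.5 - r_x - 0.5 * c, 0.0)
```

With such a frontier, shrinking $\bar{c}$ (restricting control endogeneity) enlarges the set of $(\bar{r}_X, \bar{r}_Y)$ pairs for which the conclusion survives, which is the trade-off discussed above.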
In contrast, the results for average Republican vote share are robust across a wide range of relaxations of the baseline model. Similar findings hold for the other three outcome variables: The three results using the questionnaire based outcomes tend to be much more sensitive than the two results using behavior based outcomes.\n\n\\subsubsection*{The Effect of the Choice of Calibration Covariates}\n\n\\begin{figure}[t]\n\\caption{Effect of Calibration Covariates on Analysis For Republican Vote Share. See body text for discussion.\\label{fig:calibrationCovars}}\n\\centering\n\\includegraphics[width=0.40\\textwidth]{images\/bazzi-Bi5-now0-idset-2d}\n\\hspace{0.05\\textwidth}\n\\includegraphics[width=0.4\\textwidth]{images\/bazzi-Bi5-now0-breakfront-3d}\n\\end{figure}\n\nIn section \\ref{sec:interpretation} we discussed the importance of choosing which variables to calibrate against (the variables in $W_1$) versus which variables to use as controls only (the variables in $W_0$). We next briefly illustrate this in our empirical application. The results in table \\ref{table:mainTable1} and figures \\ref{fig:republican} and \\ref{fig:cutpoor} all include state fixed effects as controls, but do not use them for calibration; that is, these variables are in $W_0$. Next we consider the impact of instead putting them into $W_1$ and calibrating the magnitude of selection on unobservables against them, in addition to the geographic and climate controls already in $W_1$.\n\nFigure \\ref{fig:calibrationCovars} shows figures corresponding to the top right and bottom right plots in figure \\ref{fig:republican}, but now also using state fixed effects for calibration. We first see that the identified sets for $\\beta_\\text{long}$ (left plot) are larger, for any fixed $\\bar{r}_X$. This makes sense because the \\emph{meaning} of $\\bar{r}_X$ has changed with the change in calibration controls. 
In particular, the breakdown point $\\bar{r}_X^\\text{bp}$ is now about 30\\%, whereas previously it was about 80\\%. This change can be understood as a consequence of equation \\eqref{eq:pseudoEq}. By including state fixed effects---which have a large amount of explanatory power---in our calibration controls, we have increased the magnitude of selection on observables. Holding selection on unobservables fixed, this implies that $r$ must decrease. This discussion reiterates the point that the magnitude of $\\bar{r}_X$ must always be interpreted as dependent on the set of calibration controls. For example, our finding in figure \\ref{fig:calibrationCovars} that the estimated $\\bar{r}_X^\\text{bp}$ is about 30\\% should not be interpreted as saying that the results are sensitive; in fact, an effect about 30\\% as large as these calibration covariates is substantial, and so it may be that we do not expect the omitted variable to have such a large additional impact.\n\nThe right plot in figure \\ref{fig:calibrationCovars} shows the estimated three-dimensional breakdown frontiers. The frontiers have all shifted inward, compared to the bottom right plot of figure \\ref{fig:republican} which did not use state fixed effects for calibration. Consequently, a superficial reading of this plot may suggest that the results for average Republican vote share are no longer robust. However, as we just emphasized in our discussion of the left plot, by including state fixed effects in the calibration covariates $W_1$, we are changing the meaning of all three sensitivity parameters. Since the expanded set of calibration covariates has substantial explanatory power, even a relaxation like $(\\bar{r}_Y, \\bar{r}_X, \\bar{c}) = (50\\%, 50\\%, 1)$---which is below the breakdown frontier and hence allows us to conclude that $\\beta_\\text{long}$ is positive---could be considered a large impact of omitted variables. 
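The rescaling logic can be seen with stylized numbers (ours, purely illustrative; they are not estimates from the application): holding the absolute amount of selection on unobservables fixed, enlarging the calibration set inflates the denominator of the relative parameter and mechanically shrinks it.

```python
def relative_selection(unobs_selection, obs_selection):
    # r-style sensitivity parameter: selection on unobservables
    # expressed relative to selection on observables
    return unobs_selection / obs_selection

# same absolute selection on unobservables, two calibration sets
geo_only = relative_selection(0.40, 0.50)       # geography/climate only
with_fe = relative_selection(0.40, 0.50 * 2.7)  # plus state fixed effects
```

The same unobservable thus corresponds to a much smaller relative value once the high-explanatory-power fixed effects enter the calibration set, mirroring the drop from roughly 80\% to roughly 30\% discussed above.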
So these figures do not change our overall conclusions about the robustness of the analysis for average Republican vote share.\n\nFinally, as we emphasized in section \\ref{sec:interpretation}, our points about the choice of calibration covariates are not unique to our analysis; they apply equally to all other methods that use covariates to calibrate the magnitudes of sensitivity parameters in some way.\n\n\\subsection{Empirical Conclusions}\n\nOverall, a sensitivity analysis based on our new methods leads to a more nuanced empirical conclusion than originally obtained by \\cite{BFG2020}. We found that their analysis using questionnaire based outcomes is quite sensitive to the presence of omitted variables, while their analysis using property tax levels and voting patterns is robust. This has several empirical implications. \n\nFirst, the questionnaire based outcomes are the most easily interpretable as measures of opposition to redistribution, regulation, and preferences for small government. In contrast, it is less clear that property taxes and Republican presidential vote share alone should be interpreted as direct measures of opposition to redistribution. So the fact that the questionnaire based outcomes are sensitive to the presence of omitted variables suggests that Bazzi et al.'s overall conclusion in support of the ``frontier thesis'' should be considered more tentative than previously stated. Second, it suggests that the impact of frontier life may occur primarily through broader behavior based channels like elections, rather than through individuals' more specific policy preferences and behavior in their personal lives. It may be useful to explore this difference in future empirical work.\n\nFinally, note that \\cite{BFG2020} perform a wide variety of additional supporting analyses that we have not examined here. 
It would be interesting to apply our methods to these additional analyses, to see whether allowing for endogenous controls affects the sensitivity of these other analyses. In particular, their figure 5 considers another set of outcome variables: Republican vote share in each election from 1900 to 2016. In contrast, our analysis above looked only at one election outcome: the average Republican vote share over the five elections from 2000 to 2016. They use these additional baseline estimates along with a qualitative discussion of the evolution of Republican party policies over time to argue that the average Republican vote share outcome between 2000--2016 can be interpreted as a measure of opposition to redistribution. It would be interesting to see how these supporting results hold up to a sensitivity analysis that allows for endogenous controls.\n\n\\section{Conclusion}\\label{sec:conclusion}\n\nAs \\cite{AngristPischke2017} emphasize, most researchers do not expect to identify causal effects for many variables at the same time. Instead, we target a single variable, called the treatment, while the other variables are called controls, and are included solely to aid identification of causal effects for the treatment variable. These control variables are therefore usually thought to be endogenous. And yet most of the available methods for doing sensitivity analysis rely on an assumption that these controls are exogenous. This raises the question of whether these methods for assessing sensitivity are themselves sensitive to allowing the controls to be endogenous. In this paper we provide a new approach to assessing the sensitivity of selection on observables assumptions in linear models. Our results have two key features that distinguish them from existing methods: First, they allow the controls to be endogenous. Second, our first main result only requires researchers to pick a single sensitivity parameter. 
In contrast, many existing methods rely on exogenous controls \\emph{and} require researchers to pick or reason about at least two different sensitivity parameters. Our results are also simple to implement in practice, via an accompanying Stata package \\texttt{regsensitivity}. Finally, in our empirical application to Bazzi et al.'s \\citeyearpar{BFG2020} study of the impact of frontier life on modern culture, we showed that allowing for endogenous controls does matter in practice, leading to more nuanced empirical conclusions than those obtained in \\cite{BFG2020}.\n\n\\subsection*{\\emph{Internal and External Calibrations of Sensitivity Parameters}}\n\nOur analysis raises several open questions for the broader literature on sensitivity analysis. A typical method specifies continuous relaxations or deviations from one's baseline assumptions and then asks: How much can we relax or deviate from our baseline assumptions until our conclusions break down? Answering this question requires \\emph{calibrating} the sensitivity parameters: How do we know when these sensitivity parameters are `large' in some sense? A key insight of \\cite*{AltonjiElderTaber2005} was that we could answer this question by performing an \\emph{internal} calibration, by comparing the magnitude of the sensitivity parameters to the magnitude of other parameters in the model. However, as we have discussed in this paper, the value of such internal calibrations is to provide (1) a unit free sensitivity parameter which (2) can be interpreted in terms of the effects of observed variables. It does \\emph{not} provide a single universal threshold for what is or is not a robust result. In particular, the choice of which observed variables to calibrate against will change the scale and interpretation of the sensitivity parameter. 
Consequently, the value 1 should be considered a unit free reference point, not a threshold for robustness.\n\nThis observation leads to several questions: How should researchers choose the covariates against which they calibrate? And for any given choice of covariates, if 1 is not the threshold for robustness, what is the threshold? The difficulty of answering these questions speaks to the difficulty of using a single dataset to assess sensitivity and to calibrate those sensitivity parameters. An alternative approach is \\emph{external} calibration, where a secondary dataset is used to calibrate the sensitivity parameters. This approach uses sensitivity parameters that are not defined relative to other parameters in the model, and does not require researchers to pick a set of covariates to calibrate against. Such absolute sensitivity parameters are common in the literature on nonparametric sensitivity analysis (e.g., \\citealt{Rosenbaum1995, Rosenbaum2002} or \\citealt{MastenPoirier2018}). This external calibration approach is also common in the literature on measurement error or missing data, where secondary datasets are used to assess the extent of measurement error or the strength of violations of missing at random assumptions. It is possible that some combination of both internal and external calibration approaches will lead to the most robust set of methods for assessing the role of selection on unobservables in empirical work.\n\n\\singlespacing\n\\bibliographystyle{econometrica}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nGiven a local Artin $\\res$-algebra $A=R\/I$, with $R=\\res[\\![x_1,\\dots,x_n]\\!]$, an interesting problem is to find how far is it from being Gorenstein. In \\cite{Ana08}, Ananthnarayan introduces for the first time the notion of Gorenstein colength, denoted by $\\operatorname{gcl}(A)$, as the minimum of $\\ell(G)-\\ell(A)$ among all Gorenstein Artin $\\res$-algebras $G=R\/J$ mapping onto $A$. 
Two natural questions arise immediately:\n\n\\medskip\n\n{\\sc Question A}: How can we explicitly compute the Gorenstein colength of a given local Artin $\\res$-algebra $A$?\n\n\\medskip\n\n{\\sc Question B}: Which are its minimal Gorenstein covers, that is, all Gorenstein rings $G$ reaching the minimum $\\operatorname{gcl}(A)=\\ell(G)-\\ell(A)$?\n\n\\medskip\nAnanthnarayan generalizes some results by Teter \\cite{Tet74} and Huneke-Vraciu \\cite{HV06} and provides a characterization of rings of $\\operatorname{gcl}(A)\\leq 2$ in terms of the existence of certain self-dual ideals $\\mathfrak{q}\\subset A$ with respect to the canonical module $\\omega_A$ of $A$ satisfying $\\ell(A\/\\mathfrak{q})\\leq 2$. For more information on this, see \\cite{Ana08} or \\cite[Section 4]{EH18}, for a reinterpretation in terms of inverse systems. Later on, Elias and Silva (\\cite{ES17}) address the problem of the colength from the perspective of Macaulay's inverse systems. In this setting, the goal is to find polynomials $F\\in S$ such that $I^\\perp\\subset \\langle F\\rangle$ and $\\ell(\\langle F\\rangle)-\\ell(I^\\perp)$ is minimal. Then the Gorenstein $\\res$-algebra $G=R\/\\operatorname{Ann_R} F$ is a minimal Gorenstein cover of $A$. A precise characterization of such polynomials $F\\in S$ is provided for $\\operatorname{gcl}(A)=1$ in \\cite{ES17} and for $\\operatorname{gcl}(A)=2$ in \\cite{EH18}.\n\nHowever, the explicit computation of the Gorenstein colength of a given ring $A$ is not an easy task even for low colength - meaning $\\operatorname{gcl}(A)$ equal to or less than 2 - in the general case. 
For examples of computation of colength of certain families of rings, see \\cite{Ana09} and \\cite{EH18}.\n\nOn the other hand, if $\\operatorname{gcl}(A)=1$, the Teter variety introduced in \\cite[Proposition 4.2]{ES17} is precisely the variety of all minimal Gorenstein covers of $A$ and \\cite[Proposition 4.5]{ES17} already suggests that a method to compute such covers is possible.\n\n\\bigskip\nIn this paper we address questions A and B by extending the previous definition of Teter variety of a ring of Gorenstein colength 1 to the variety of minimal Gorenstein covers $MGC(A)$ where $A$ has arbitrary Gorenstein colength $t$. We use a constructive approach based on the integration method to compute inverse systems proposed by Mourrain in \\cite{Mou96}.\n\nIn section 2 we recall the basic definitions of inverse systems and introduce the notion of integral of an $R$-submodule $M$ of $S$ with respect to an ideal $K$ of $R$, denoted by $\\int_K M$. Section 3 links generators $F\\in S$ of inverse systems $J^\\perp$ of Gorenstein covers $G=R\/J$ of $A=R\/I$ with elements in the integral $\\int_{\\mathfrak m^t} I^\\perp$, where $\\mathfrak m$ is the maximal ideal of $R$ and $t=\\operatorname{gcl}(A)$. This relation is described in \\Cref{F}, and \\Cref{propint} sets the theoretical background to compute a $\\res$-basis of the integral of a module extending Mourrain's integration method.\n\nIn section 4, \\Cref{ThMGC} proves the existence of a quasi-projective sub-variety $MGC^n(A)$ whose closed points are associated to polynomials $F\\in S$ such that $G=R\/\\operatorname{Ann_R} F$ is a minimal Gorenstein cover of $A$. Section 5 is devoted to algorithms: explicit methods to compute a $\\res$-basis of $\\int_{\\mathfrak m^t}I^\\perp$ and $MGC(A)$ for colengths 1 and 2. 
Finally, in section 6 we provide several examples of the minimal Gorenstein covers variety and list the computation times of $MGC(A)$ for all analytic types of $\\res$-algebras with $\\operatorname{gcl}(A)\\leq 2$ appearing in Poonen's classification in \\cite{Poo08a}.\n\nAll algorithms appearing in this paper have been implemented in \\emph{Singular}, \\cite{DGPS}, and the library \\cite{E-InvSyst14} for inverse systems has also been used.\n\n\\medskip\n{\\sc Acknowledgements:} The second author wants to thank the third author for the opportunity to stay at INRIA Sophia Antipolis - M\\'editerran\\'ee (France) and his hospitality during her visit in the fall of 2017, where part of this project was carried out. This stay was financed by the Spanish Ministry of Economy and Competitiveness through the Estancias Breves programme (EEBB-I-17-12700).\n\n\\section{Integrals and inverse systems}\n\nLet us consider the regular local ring $R=\\res[\\![x_1,\\dots,x_n]\\!]$ over an arbitrary field $\\res$, with maximal ideal $\\mathfrak m$. Let $S=\\res[y_1,\\dots,y_n]$ be the polynomial ring over the same field $\\res$. Given $\\alpha=(\\alpha_1,\\dots,\\alpha_n)$ in $\\mathbb{N}^n$, we denote by $x^\\alpha$ the monomial $x_1^{\\alpha_1}\\cdots x_n^{\\alpha_n}$ and set $\\vert\\alpha\\vert=\\alpha_1+\\dots+\\alpha_n$. Recall that $S$ can be given an $R$-module structure by contraction:\n$$\n\\begin{array}{cccc}\n R\\times S & \\longrightarrow & S & \\\\\n (x^\\alpha, y^\\beta) & \\mapsto & x^\\alpha \\circ y^\\beta = &\n \\left\\{\n \\begin{array}{ll}\n y^{\\beta-\\alpha}, & \\beta \\ge \\alpha; \\\\\n 0, & \\mbox{otherwise.}\n \\end{array}\n \\right.\n\\end{array}\n$$\n\nThe Macaulay inverse system of $A=R\/I$ is the sub-$R$-module $I^\\perp=\\lbrace G\\in S\\mid I\\circ G=0\\rbrace$ of $S$. This provides the order-reversing bijection between $\\mathfrak m$-primary ideals $I$ of $R$ and finitely generated sub-$R$-modules $M$ of $S$ described in Macaulay's duality. 
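The contraction action is straightforward to implement on monomials; the sketch below (ours, unrelated to the \emph{Singular} implementation used in the paper) represents an element of $S$ as a dictionary from exponent tuples to coefficients and extends the action linearly:

```python
def contract(alpha, poly):
    """Contraction x^alpha o F for F in S, with F stored as a dict
    mapping exponent tuples beta to coefficients: y^beta is sent to
    y^(beta - alpha) when beta >= alpha componentwise, and to 0 otherwise."""
    out = {}
    for beta, coeff in poly.items():
        if all(b >= a for a, b in zip(alpha, beta)):
            gamma = tuple(b - a for a, b in zip(alpha, beta))
            out[gamma] = out.get(gamma, 0) + coeff
    return {g: c for g, c in out.items() if c != 0}
```

For instance, $x_1\circ(y_1^2y_2)=y_1y_2$ corresponds to `contract((1, 0), {(2, 1): 1})`, while $x_2^2$ annihilates the same monomial.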
As for the reverse correspondence, given a sub-$R$-module $M$ of $S$, the module $M^\\perp$ is the ideal $\\operatorname{Ann_R} M=\\lbrace f\\in R\\mid f\\circ G=0\\,\\mbox{ for any } G\\in M\\rbrace$ of $R$. Moreover, it characterizes zero-dimensional Gorenstein rings $G=R\/J$ as those with cyclic inverse system $J^\\perp=\\langle F\\rangle$, where $\\langle F\\rangle$ is the $\\res$-vector space $\\langle x^\\alpha\\circ F:\\vert\\alpha\\vert\\leq \\deg F\\rangle_\\res$. For more details on this construction, see \\cite{ES17} and \\cite{EH18}.\n\nConsider an Artin local ring $A=R\/I$ of socle degree $s$ and inverse system $I^\\perp$. We are interested in finding Artin local rings $R\/\\operatorname{Ann_R} F$ that cover $R\/I$, that is $I^\\perp\\subset \\langle F\\rangle$, but we also want to control how far apart these two inverse systems are. In other words, given an ideal $K$, we want to find a Gorenstein cover $\\langle F\\rangle$ such that $K\\circ \\langle F\\rangle\\subset I^\\perp$. Therefore it makes sense to think of an inverse operation to contraction.\n\n\\begin{definition}[Integral of a module with respect to an ideal] Consider an $R$-submodule $M$ of $S$. We define the integral of $M$ with respect to the ideal $K$, denoted by $\\int_K M$, as $$\\int_K M=\\lbrace G\\in S\\mid K\\circ G\\subset M\\rbrace.$$\n\n\\end{definition}\n\nNote that the set $N=\\lbrace G\\in S\\mid K\\circ G\\subset M\\rbrace$ is, in fact, an $R$-submodule $N$ of $S$ endowed with the contraction structure. Indeed,\ngiven $G_1,G_2\\in N$ then $K\\circ (G_1+G_2)=K\\circ G_1+K\\circ G_2\\subset M$, hence $G_1+G_2\\in N$.\nFor all $a\\in R$ and $G\\in N$ we have $K\\circ (a\\circ G)=aK\\circ G=a\\circ (K\\circ G)\\subset M$, hence $a\\circ G\\in N$.\n\n\\begin{proposition}\n\\label{integral}\nLet $K$ be an $\\mathfrak m$-primary ideal of $R$ and let $M$ be a finitely generated sub-$R$-module of $S$. 
Then $$\\int_K M=\\left(KM^\\perp\\right)^\\perp.$$\n\\end{proposition}\n\\begin{proof}\nLet $G\\in\\left(KM^\\perp\\right)^\\perp$. Then $\\left(KM^\\perp\\right)\\circ G=0$, so $M^\\perp\\circ\\left(K\\circ G\\right)=0$. Hence $K\\circ G\\subset M$, i.e. $G\\in\\int_K M$. We have proved that $\\left(KM^\\perp\\right)^\\perp\\subseteq\\int_K M$.\nNow let $G\\in\\int_K M$. By definition, $K\\circ G\\subset M$, so $M^\\perp\\circ\\left(K\\circ G\\right)=0$ and hence $\\left(M^\\perp K\\right)\\circ G=0$. Therefore, $G\\in\\left(M^\\perp K\\right)^\\perp$.\n\\end{proof}\n\nOne of the key results of this paper is the effective computation of $\\int_K M$ (see \\Cref{AlINT}).\nThe last result gives a method for computing this module via two Macaulay dual computations. However, since computing Macaulay duals is expensive, \\Cref{AlINT} avoids such computations.\n\n\\begin{remark}\\label{incl} The following properties hold:\n\\noindent\n$(i)$ If $K\\subset L$ are ideals of $R$ and $M$ is an $R$-module, then $\\int_L M\\subset\\int_K M.$\n\\noindent\n$(ii)$\nIf $K$ is an ideal of $R$ and $M\\subset N$ are $R$-modules, then $\\int_K M\\subset\\int_K N.$\n\\noindent\n$(iii)$\nFor any $R$-module $M$, $\\int_R M=M$.\n\\end{remark}\n\nThe inclusion $K\\circ \\int_K M\\subset M$ follows directly from the definition of integral. However, the equality does not hold:\n\n\\begin{example}\\label{Ex1}\nLet us consider $R=\\res[\\![x_1,x_2,x_3]\\!]$, $K=(x_1,x_2,x_3)$, $S=\\res[y_1,y_2,y_3]$ and $M=\\langle y_1y_2,y_3^3\\rangle$. \nWe can compute Macaulay duals with the \\emph{Singular} library \\textbf{Inverse-syst.lib}, see \\cite{E-InvSyst14}. We get $\\int_K M=\\langle y_1^2,y_1y_2,y_1y_3,y_2^2,y_2y_3,y_3^4 \\rangle$ by \\Cref{integral} and hence $K\\circ \\int_K M=\\langle y_1,y_2,y_3^3\\rangle\\subsetneq M.$\n\\end{example}\n\nWe also have the inclusion $M\\subset\\int_K K\\circ M$. 
Indeed, for any $F\\in M$, $K\\circ F\\subset K\\circ M$ and hence $F\\in\\int_K K\\circ M=\\lbrace G\\in S\\mid K\\circ G\\subset K\\circ M\\rbrace$. Again, the equality does not hold.\n\n\\begin{example} Using the same example as in \\Cref{Ex1}, we get\n$K\\circ M=\\mathfrak m\\circ \\langle y_1y_2,y_3^3\\rangle=\\langle y_1,y_2,y_3^2\\rangle,$ and\n$\\int_K( K\\circ M)=\\left(K (K\\circ M)^\\perp\\right)^\\perp=\\langle y_1^2,y_1y_2,y_1y_3,y_2^2,y_2y_3,y_3^2\\rangle\\nsubseteq M.$\n\\end{example}\n\n\\begin{remark}\nNote that if we integrate with respect to a principal ideal $K=(f)$ of $R$, then \n$\\int_K M=\\lbrace G\\in S\\mid f\\circ G\\in M\\rbrace$. Hence in this case we will denote it by $\\int_f M$.\n\\end{remark}\n\nIn particular, if we consider a principal monomial ideal $K=(x^\\alpha)$, then the expected equality for integrals $$x^{\\alpha}\\circ\\int_{x^{\\alpha}}M=M$$ holds. Indeed, for any $m\\in M$, take $G=y^{\\alpha}m$. Since $x^{\\alpha}\\circ y^{\\alpha}=1$, then $x^\\alpha\\circ y^{\\alpha}m=m$ and the equality is reached.\n\n\\begin{remark}\nIn general, $\\int_{x^{\\alpha}} x^{\\alpha}\\circ M\\neq x^{\\alpha}\\circ\\int_{x^{\\alpha}} M$, hence the inclusion $M\\subset\\int_K K\\circ M$ is not an equality even for principal monomial ideals. See \\Cref{r1}.\n\\end{remark}\n\nLet us now consider an even more particular case: the integral of a cyclic module $M=\\langle F\\rangle$ with respect to the variable $x_i$. Since the equality $x_i\\circ \\int_{x_i}M=M$ holds, there exists $G\\in S$ such that $x_i\\circ G=F$. This polynomial $G$ is not unique because it can have any constant term with respect to $x_i$, that is $G=y_iF+p(y_1,\\dots,\\hat{y}_i,\\dots,y_n)$. 
However, if we restrict to the non-constant polynomial we can define the following:\n\n\\begin{definition}[$i$-primitive]\nThe $i$-primitive of a polynomial $f\\in S$ is the polynomial $g\\in S$, denoted by $\\int_i f$, such that\n\\begin{enumerate}\n\\item[(i)] $x_i\\circ g=f$,\n\\item[(ii)] $g\\vert_{y_i=0}=0$.\n\\end{enumerate}\n\\end{definition}\n\nIn \\cite{EM07}, Elkadi and Mourrain proposed a definition of $i$-primitive of a polynomial in a zero-characteristic setting using the derivation structure instead of contraction. Therefore, we can think of the integral of a module with respect to an ideal as a generalization of their $i$-primitive.\n\nSince we are considering the $R$-module structure given by contraction, the $i$-primitive is precisely\n$$\\int_i f=y_if.$$\n\nIndeed, $x_i\\circ(y_if)=f$ and $(y_if)\\mid_{y_i=0}=0$, hence (i) and (ii) hold.\nUniqueness can be easily proved. Consider $g_1,g_2$ to be $i$-primitives of $f$. Then $x_i\\circ (g_1-g_2)=0$ and hence $g_1-g_2=p(y_1,\\dots,\\hat{y}_i,\\dots,y_n)$. Clearly $(g_1-g_2)\\vert_{y_i=0}=p(y_1,\\dots,\\hat{y}_i,\\dots,y_n)$. On the other hand, $(g_1-g_2)\\vert_{y_i=0}=g_1\\vert_{y_i=0}-g_2\\vert_{y_i=0}=0$. Hence $p=0$ and $g_1=g_2$.\n\n\\begin{remark}\\label{r1} Note that, by definition, $x_k\\circ \\int_k f=f$. Any $f$ can be decomposed in $f=f_1+f_2$, where the first term is a multiple of $y_k$ and the second has no appearances of this variable. Then\n$$\\int_k x_k\\circ f=\\int_k x_k\\circ f_1+\\int_k x_k\\circ f_2=f_1+\\int_k 0=f_1.$$\n\\noindent\nTherefore, in general, $$f_1=\\int_k x_k\\circ f\\neq x_k\\circ \\int_k f=f.$$\nHowever, for all $l\\neq k$, $$\\int_l x_k\\circ f=\\frac{y_lf_1}{y_k}=x_k\\circ \\int_l f.$$\n\\end{remark}\n\n\\bigskip\n\nLet us now recall Theorem 7.36 of Elkadi-Mourrain in \\cite{EM07}, which describes the elements of the inverse system $I^\\perp$ up to a certain degree $d$. 
We define $\\mathcal{D}_d=I^\\perp\\cap S_{\\leq d}$, for any $1\\leq d\\leq s$, where $s=\\operatorname{socdeg}(A)$. Since $\\mathcal{D}_s=I^\\perp$, this result leads to an algorithm proposed by the same authors to obtain a $\\res$-basis of an inverse system. We rewrite the theorem using the contraction setting instead of derivation.\n\n\\begin{theorem}[Elkadi-Mourrain]\\label{EM}\nGiven an ideal $I=(f_1,\\dots,f_m)$ and $d>1$. Let $\\lbrace b_1,\\dots,b_{t_{d-1}}\\rbrace$ be a $\\res$-basis of $\\mathcal{D}_{d-1}$. The polynomials of $\\mathcal{D}_d$ with no constant term, i.e. no terms of degree zero, are of the form\n\\begin{equation}\\label{thm}\n\\Lambda=\\sum_{j=1}^{t_{d-1}}\\lambda_j^1\\int_1 b_j\\vert_{y_2=\\cdots=y_n=0}+\\sum_{j=1}^{t_{d-1}}\\lambda_j^2\\int_2 b_j\\vert_{y_3=\\cdots=y_n=0}+\\dots+\\sum_{j=1}^{t_{d-1}}\\lambda_j^n\\int_n b_j,\\quad\\lambda_j^k\\in\\res,\n\\end{equation}\nsuch that\n\\begin{equation}\\label{cond1}\n\\sum_{j=1}^{t_{d-1}}\\lambda_j^k (x_l\\circ b_j)-\\sum_{j=1}^{t_{d-1}}\\lambda_j^l(x_k\\circ b_j)=0, \\quad 1\\leq k<l\\leq n.\n\\end{equation}\n\\end{theorem}\n\n\\subsection{Gorenstein colength 2}\n\nBy \\cite{EH18}, we know that an Artin ring $A$ of socle degree $s$ is of Gorenstein colength 2 if and only if there exists a polynomial $F$ of degree $s+1$ or $s+2$ such that $K_F\\circ F=I^\\perp$, where $K_F=(L_1,\\dots,L_{n-1},L_n^2)$ and $L_1,\\dots,L_n$ are suitable independent linear forms.\n\nObserve that a completely analogous characterization to the one we did for Teter rings is not possible. If $A=R\/I$ has Gorenstein colength 2, by \\Cref{cor}, there exists $F=\\sum_{i=1}^2\\sum_{j=1}^{h_i}a_j^iF_j^i\\in\\int_{\\mathfrak m^2}I^\\perp$, where $\\lbrace\\overline{F^i_j}\\rbrace_{1\\leq i\\leq 2,1\\leq j\\leq h_i}$ is a $\\res$-basis of $\\mathcal{L}_{A,2}$, that generates a minimal Gorenstein cover of $A$ and then trivially $I^\\perp\\subset\\langle F\\rangle$. 
However, the reverse implication is not true.\n\n\\begin{example} Consider $A=R\/\\mathfrak m^3$, where $R$ is the ring of power series in 2 variables, and consider $F=y_1^2y_2^2$. It is easy to see that $F\\in\\int_{\\mathfrak m^2}I^\\perp=S_{\\leq 4}$ and $I^\\perp\\subset\\langle F\\rangle$. However, it can be proved that $\\operatorname{gcl}(A)=3$ using \\cite[Corollary 3.3]{Ana09}. Note that $K_F=\\mathfrak m^2$ and hence $\\ell(R\/K_F)=3$.\n\\end{example}\n\nTherefore, given $F\\in\\int_{\\mathfrak m^2}I^\\perp$, the condition $I\\subset\\langle F\\rangle$ is not sufficient to ensure that $\\operatorname{gcl}(A)=2$. We must require that $\\ell(R\/K_F)=2$ as well.\n\n\\begin{proposition}\\label{gcl2} Given a non-Gorenstein non-Teter local Artin ring $A=R\/I$, $\\operatorname{gcl}(A)=2$ if and only if there exist a polynomial $F=\\sum_{i=1}^2\\sum_{j=1}^{h_i} a_j^iF_j^i\\in\\int_{\\mathfrak m^2}I^\\perp$ such that $\\lbrace\\overline{F_j^i}\\rbrace_{1\\leq i\\leq 2,1\\leq j\\leq h_i}$ is an adapted $\\res$-basis of $\\mathcal{L}_{A,2}$ and $(L_1,\\dots,L_{n-1},L_n^2)\\circ F=I^\\perp$ for suitable independent linear forms $L_1,\\dots,L_n$.\n\\end{proposition}\n\n\\begin{proof} We will only prove that if $F$ satisfies the required conditions, then $\\operatorname{gcl}(A)=2$. By definition of $K_F$, if $(L_1,\\dots,L_{n-1},L_n^2)\\circ F=I^\\perp$, then $(L_1,\\dots,L_{n-1},L_n^2)\\subseteq K_F$. Again by \\cite{EH18}, $\\operatorname{gcl}(A)\\leq \\ell(R\/K_F)$ and hence $\\operatorname{gcl}(A)\\leq \\ell\\left(R\/(L_1,\\dots,L_{n-1},L_n^2)\\right)=2$. Since $\\operatorname{gcl}(A)\\geq 2$ by hypothesis, then $\\operatorname{gcl}(A)=2$. The converse implication follows from \\Cref{KF}.\n\\end{proof}\n\n\\begin{example} Recall the ring $A=R\/I$ in \\Cref{Ex3}. 
Since
$$\int_{\mathfrak m^2}I^\perp=\langle y_1^3,y_1^2y_2,y_1y_2^2,y_2^3,y_1^2y_3,y_1y_2y_3,y_2^2y_3,y_1y_3^2,y_2y_3^3,y_3^5\rangle$$
and $\operatorname{gcl}(A)>1$, its Gorenstein colength is 2 if and only if there exists some
$$F\in\langle y_1^2,y_1y_2,y_1y_3,y_2^2,y_2y_3,y_3^4,y_1^3,y_1^2y_2,y_1y_2^2,y_2^3,y_1^2y_3,y_1y_2y_3,y_2^2y_3,y_1y_3^2,y_2y_3^3,y_3^5\rangle_\res$$
such that $(L_1,\dots,L_{n-1},L_n^2)\circ F=I^\perp$. Consider $F=y_3^4+y_1^2y_2$; then
$$(x_1,x_2^2,x_3)\circ F=\langle x_1\circ F,x_2^2\circ F,x_3\circ F\rangle=\langle y_1y_2,y_3^3\rangle$$
and hence $\operatorname{gcl}(A)=2$.
\end{example}

\section{Minimal Gorenstein cover varieties}

We are now interested in providing a geometric interpretation of the set of all minimal Gorenstein covers $G=R/J$ of a given local Artin $\res$-algebra $A=R/I$. From now on, we assume that $\res$ is an algebraically closed field. The following result is well known and is an easy linear algebra exercise.

\begin{lemma}
\label{semic}
Let $\varphi_i:\res^a \longrightarrow \res^b$, $i=1,\dots,r$, be a family of Zariski continuous maps.
Then the function $\varphi^*:\res^a\longrightarrow \mathbb N$ defined by
$\varphi^*(z)=\dim_{\res} \langle \varphi_1(z),\dots, \varphi_r(z)\rangle_{\res}$ is lower semicontinuous, i.e. 
for all $z_0 \\in \\res^a$ there is a Zariski open set\n$z_0\\in U \\subset \\res^a$ such that for all $z\\in U$ it holds\n$\\varphi^*(z)\\geq \\varphi^*(z_0)$.\n\\end{lemma}\n\n\\begin{theorem}\\label{ThMGC}\nLet $A=R\/I$ be an Artin ring of Gorenstein colength $t$.\nThere exists a quasi-projective sub-variety $MGC^n(A)$, $n=\\dim(R)$, of\n$\\mathbb P_{\\res}\\left(\\mathcal{L}_{A,t}\\right)$\nwhose set of closed points are the points $[\\overline{F}]$, $\\overline{F}\\in \\mathcal{L}_{A,t}$,\nsuch that $G=R\/\\operatorname{Ann_R} F$ is a minimal Gorenstein cover of $A$.\n\\end{theorem}\n\\begin{proof}\nLet $E$ be a sub-$\\res$-vector space of $\\int_{\\mathfrak m^t}I^\\perp$ such that\n$$\n\\int_{\\mathfrak m^t}I^\\perp \\cong E\\oplus I^{\\perp},\n$$\nwe identify $\\mathcal{L}_{A,t}$ with $E$. From \\Cref{F}, for all minimal Gorenstein cover $G=R\/\\operatorname{Ann_R} F$ we may assume that $F\\in E$. Given $F\\in E$, the quotient $G=R\/\\operatorname{Ann_R} F$ is a minimal cover of $A$ if and only if the following two numerical conditions hold:\n\\begin{enumerate}\n\\item $\\dim_{\\res}(\\langle F\\rangle)= \\dim_{\\res}A+t$, and\n\\item $\\dim_{\\res}(I^{\\perp}+ \\langle F \\rangle) =\\dim_{\\res}\\langle F \\rangle$.\n\\end{enumerate}\n\n\\noindent\nDefine the family of Zariski continuous maps $\\lbrace\\varphi_{\\alpha}\\rbrace_{\\vert\\alpha\\vert\\leq\\deg F}$, $\\alpha\\in\\mathbb{N}^n$, where\n$$\\begin{array}{rrcl}\n\\varphi_{\\alpha}: & E & \\longrightarrow & E\\\\\n& F & \\longmapsto & x^\\alpha\\circ F\\\\\n\\end{array}$$\n\\noindent\nIn particular, $\\varphi_0=Id_R$.\nWe write\n$$\\begin{array}{rrcl}\n\\varphi^\\ast: & E & \\longrightarrow & \\mathbb{N}\\\\\n& F & \\longmapsto & \\dim_\\res\\langle x^\\alpha\\circ F,\\vert\\alpha\\vert\\leq\\deg F\\rangle_\\res\n\\end{array}$$\n\n\\noindent\nNote that $\\varphi^\\ast(F)=\\dim_\\res \\langle F\\rangle$ and, by \\Cref{semic}, $\\varphi^\\ast$ is a lower semicontinuous map. 
Hence $U_1=\\lbrace F\\in E\\mid \\dim_\\res\\langle F\\rangle\\geq\\dim_\\res A+t\\rbrace$ is an open Zariski set in $E$. Using the same argument,\n$U_2=\\lbrace F\\in E\\mid \\dim_\\res\\langle F\\rangle\\geq\\dim_\\res A+t+1\\rbrace$\nis also an open Zariski set in $E$ and hence $Z_1=E\\backslash U_2$ is a Zariski closed set such that $\\dim_\\res\\langle F\\rangle\\leq\\dim_\\res A+t$ for any $F\\in Z_1$.\nThen $Z_1\\cap U_1=\\lbrace F\\in E\\mid \\dim_\\res\\langle F\\rangle=\\dim_\\res A+t\\rbrace$ is a locally closed set.\n\nLet $G_1,\\cdots,G_e$ be a $\\res$-basis of $I^{\\perp}$ and consider the constant map\n$$\\begin{array}{rrcl}\n\\psi_{i}: & E & \\longrightarrow & E\\\\\n& F & \\longmapsto & G_i\\\\\n\\end{array}$$\nfor any $i=1,\\cdots,e$.\nBy \\Cref{semic},\n\n$$\\begin{array}{rrcl}\n\\psi^\\ast: & E & \\longrightarrow & \\mathbb{N}\\\\\n& F & \\longmapsto & \\dim_\\res \\langle\\lbrace x^{\\alpha}\\circ F\\rbrace_{\\vert\\alpha\\vert\\leq\\deg F}, G_1,\\dots, G_e\\rangle_\\res=\\dim_\\res\\left(\\langle F\\rangle+I^\\perp\\right)\n\\end{array}$$\n\n\\noindent\nis a lower semicontinuous map. Using an analogous argument, we can prove that $T=\\lbrace F\\in E\\mid \\dim_\\res(I^\\perp+\\langle F\\rangle)=\\dim_\\res A+t\\rbrace$ is a locally closed set. Therefore,\n$$W=(Z_1\\cap U_1)\\cap T=\\lbrace F\\in E\\mid \\dim_\\res A+t=\\dim_\\res(I^\\perp+\\langle F\\rangle)=\\dim_\\res\\langle F\\rangle\\rbrace$$\n\\noindent\nis a locally closed subset of $E$ whose set of closed points are all the $F$ in $E$ satisfying $(1)$ and $(2)$, i.e. 
defining a minimal Gorenstein cover $G=R\/\\operatorname{Ann_R} F$ of $A$.\n\nMoreover, since $\\langle F\\rangle=\\langle \\lambda F\\rangle$ for any $\\lambda\\in\\res^\\ast$, conditions $(1)$ and $(2)$ are invariant under the multiplicative action of $\\res^*$ on $F$ and hence\n$MGC^n(A)=\\mathbb P_{\\res}(W)\\subset \\mathbb P_{\\res}(E)=\\mathbb P_{\\res}\\left(\\mathcal{L}_{A,t}\\right)$.\n\\end{proof}\n\nRecall that the embedding dimension of $A$ is $\\operatorname{emb dim}(A)=\\dim_\\res\\mathfrak m\/(\\mathfrak m^2+I)$.\n\n\\begin{proposition}\nLet $G$ be a minimal Gorenstein cover of $A$.\nThen\n$$\n\\operatorname{emb dim}(G)\\le \\tau(A)+\\operatorname{gcl}(A)-1.\n$$\n\\end{proposition}\n\\begin{proof}\nSet $A=R\/I$ such that $\\operatorname{emb dim}(A)=\\dim R=n$. Consider the power series ring $R'$ of dimension $n+t$ over $\\res$ for some $t\\geq 0$ such that $G=R'\/J'$ with $\\operatorname{emb dim}(G)=\\dim R'$. See \\cite{EH18} for more details on this construction. We denote by $\\mathfrak m$ and $\\mathfrak m'$ the maximal ideals of $R$ and $R'$, respectively, and consider $K_{F'}=(I^\\perp:_{R'} F')$. \nFrom \\Cref{KF}$.(i)$, it is easy to deduce that $K_{F'}\/(\\mathfrak m K_{F'}+J')\\simeq I^\\perp\/(\\mathfrak m\\circ I^\\perp)$. Hence $\\tau(A)=\\dim_\\res K_{F'}\/(\\mathfrak m K_{F'}+J')$ by \\cite[Proposition 2.6]{ES17}. Then\n$$\\operatorname{emb dim}(G)+1=\\dim_\\res R'\/(\\mathfrak m')^2\\leq \\dim_\\res R'\/(\\mathfrak m K_{F'}+J')=\\operatorname{gcl}(A)+\\tau(A),$$\nwhere the last equality follows from \\Cref{KF}$.(ii)$.\n\\end{proof}\n\n\\begin{definition}\\label{DefMGC}\nGiven an Artin ring $A=R\/I$, the variety $MGC(A)=MGC^n(A)$, with $n=\\tau(A)+\\operatorname{gcl}(A)-1$, is called the minimal Gorenstein cover variety associated to $A$.\n\\end{definition}\n\n\\begin{remark}\\label{RemMGC}\nLet us recall that in \\cite{EH18} we proved that for low Gorenstein colength of $A$, i.e. 
$\\operatorname{gcl}(A)\\le 2$, $\\operatorname{emb dim}(G)=\\operatorname{emb dim}(A)$ for any minimal Gorenstein cover $G$ of $A$. In this situation we can consider $MGC(A)$ as the variety $MGC^n(A)$ with $n=\\operatorname{emb dim}(A)$.\n\\end{remark}\n\nObserve that this notion of minimal Gorenstein cover variety generalizes the definition of Teter variety introduced in \\cite{ES17}, which applies only to rings of Gorenstein colength 1, to any arbitrary colength.\n\n\\section{Computing $MGC(A)$ for low Gorenstein colength}\\label{s5}\n\nIn this section we provide algorithms and examples to compute the variety of minimal Gorenstein covers of a given ring $A$ whenever its Gorenstein colength is 1 or 2. These algorithms can also be used to decide whether a ring has colength greater than 2, since it will correspond to empty varieties.\n\nTo start with, we provide the auxiliar algorithm to compute the integral of $I^\\perp$ with respect to the $t$-th power of the maximal ideal of $R$. If there exist polynomials defining minimal Gorenstein covers of colength $t$, they must belong to this integral.\n\n\\subsection{Computing integrals of modules}\n\nConsider a $\\res$-basis $\\mathbf{b}=(b_1,\\dots,b_t)$ of a finitely generated sub-$R$-module $M$ of $S$ and consider $x_k\\circ b_i=\\sum_{j=1}^t a_j^i b_j$, for any $1\\leq i\\leq t$ and $1\\leq k\\leq n$. Let us define matrices $U_k=(a_j^i)_{1\\leq j,i\\leq t}$ for any $1\\leq k\\leq n$. Note that\n\n$$\\left(x_k\\circ b_1 \\cdots x_k\\circ b_t\\right)=\n\\left(b_1 \\cdots b_t\\right)\n\\left(\\begin{array}{ccc}\na_1^1 & \\dots & a_1^t\\\\\n\\vdots & & \\vdots\\\\\na_t^1 & \\dots & a_t^t\n\\end{array}\\right)\n.$$\n\n\\noindent\nNow consider any element $h\\in M$. 
Then\n$$x_k\\circ h=x_k\\circ\\sum_{i=1}^t h_ib_i=\\sum_{i=1}^t (x_k\\circ h_ib_i)=\\sum_{i=1}^t (x_k\\circ b_i)h_i=$$\n$$=\\left(x_k\\circ b_1 \\cdots x_k\\circ b_t\\right)\n\\left(\\begin{array}{c}\nh_1\\\\\n\\vdots\\\\\nh_t\n\\end{array}\\right)=\\left(b_1 \\cdots b_t\\right)U_k\n\\left(\\begin{array}{c}\nh_1\\\\\n\\vdots\\\\\nh_t\n\\end{array}\\right),$$\n\\noindent\nwhere $h_1,\\dots,h_t\\in\\res$.\n\n\\begin{definition} Let $U_k$, $1\\leq k\\leq n$, be the square matrix of order $t$ such that\n$$x_k\\circ h=\\mathbf{b}\\,U_k\\,\\mathbf{h}^t,$$\nwhere $\\mathbf{h}=(h_1,\\dots,h_t)$ for any $h\\in M$, with $h=\\sum_{i=1}^t h_ib_i$. We call $U_k$ the contraction matrix of $M$ with respect to $x_k$ associated to a $\\res$-basis $\\mathbf{b}$ of $M$.\n\\end{definition}\n\n\\begin{remark} Since $x_kx_l\\circ h=x_lx_k\\circ h$ for any $h\\in M$, we have $U_kU_l=U_lU_k$, with $1\\leq k1$.\n\\end{proof}\n\n\\begin{algorithm}[H]\n\\caption{Compute the Teter variety of $A=R\/I$ with $n\\geq 2$}\n\\label{AlMGC1}\n\\begin{algorithmic}\n\\REQUIRE\n$s$ socle degree of $A=R\/I$;\\\\\n$b_1,\\dots,b_t$ $\\res$-basis of the inverse system $I^\\perp$;\\\\\n$F_1,\\dots,F_h$ adapted $\\res$-basis of $\\mathcal{L}_{A,1}$;\\\\\n$U_1,\\dots,U_n$ contraction matrices of $\\int_\\mathfrak m I^\\perp$.\n\\ENSURE\nideal $\\mathfrak{a}$ such that $MGC(A)=\\mathbb{P}_\\res^{h-1}\\backslash\\mathbb{V}_+(\\mathfrak{a})$.\n\\RETURN\n\\begin{enumerate}\n\\item Set $F=a_1F_1+\\dots+a_hF_h$ and $\\mathbf{F}=(a_1,\\dots,a_h)^t$, where $a_1,\\dots,a_h$ are variables in $\\res$.\n\\item Build matrix $A=\\left(\\mu^\\alpha_j\\right)_{1\\leq\\vert\\alpha\\vert\\leq s+1,1\\leq j\\leq t}$, where $$U^\\alpha \\textbf{F}=\\sum_{j=1}^t\\mu^\\alpha_jb_j,\\quad U^\\alpha=U_1^{\\alpha_1}\\cdots U_n^{\\alpha_n}.$$\n\\item Compute the ideal $\\mathfrak{a}$ generated by all minors of order $t$ of the matrix $A$.\n\\end{enumerate}\n\\end{algorithmic}\n\\end{algorithm}\n\nWith the following example we show how to 
apply and interpret the output of the algorithm:
\begin{example}
Consider $A=R/I$, with $R=\res[\![x_1,x_2]\!]$ and $I=\mathfrak m^2$ \cite[Example 4.3]{ES17}. From \Cref{ExAlINT} we gather all the information we need for the input of \Cref{AlMGC1}:
{\sc Input}:
$b_1=1,b_2=y_1,b_3=y_2$ $\res$-basis of $I^\perp$; $F_1=y_2^2,F_2=y_1y_2,F_3=y_1^2$ adapted $\res$-basis of $\mathcal{L}_{A,1}$; $U_1'$,$U_2'$ contraction matrices of $\int_\mathfrak m I^\perp$.\\
{\sc Output}: $\operatorname{rad}(\mathfrak{a})=a_2^2-a_1a_3$.\\
Then $MGC(A)=\mathbb{P}^2\backslash\lbrace a_2^2-a_1a_3=0\rbrace$ and any minimal Gorenstein cover $G=R/\operatorname{Ann_R} F$ of $A$ is given by a polynomial $F=a_1y^2_2+a_2y_1y_2+a_3y_1^2$ such that $a_2^2-a_1a_3\neq 0$.
\end{example}

\bigskip

\subsection{Computing $MGC(A)$ in colength 2}

Consider a $\res$-basis $b_1,\dots,b_t$ of $I^\perp$ and an adapted $\res$-basis $\overline{F}_1,\dots,\overline{F}_{h_1},\overline{G}_1,\dots,\overline{G}_{h_2}$ of $\mathcal{L}_{A,2}$ (see \Cref{adapted}) such that
\begin{itemize}
\item $b_1,\dots,b_t,F_1,\dots,F_{h_1}$ is a $\res$-basis of $\int_\mathfrak m I^\perp$,
\item $b_1,\dots,b_t,F_1,\dots,F_{h_1},G_1,\dots,G_{h_2}$ is a $\res$-basis of $\int_{\mathfrak m^2} I^\perp$.
\end{itemize}

Throughout this section, we will consider local Artin rings $A=R/I$ such that $\operatorname{gcl}(A)>1$. If a minimal Gorenstein cover $G=R/\operatorname{Ann_R} H$ of colength 2 exists, then, by \Cref{cor}, we can assume that $H$ is a polynomial of the form
$$H=\sum_{i=1}^{h_1}\alpha_iF_i+\sum_{i=1}^{h_2}\beta_iG_i,\quad \alpha_i,\beta_i\in\res.$$
We want to obtain conditions on the $\alpha$'s and $\beta$'s under which $H$ actually generates a minimal Gorenstein cover of colength 2. 
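Before deriving such conditions in closed form, note that any single candidate $H$ can already be tested numerically against the two conditions from the proof of \Cref{ThMGC}: $\dim_\res\langle H\rangle=\dim_\res A+2$ and $\dim_\res(I^\perp+\langle H\rangle)=\dim_\res\langle H\rangle$. The following Python sketch (an illustration with exponent-tuple dictionaries, not the paper's \emph{Singular} code) checks these for $I=(x_1^2,x_1x_2^2,x_2^4)$, whose inverse system $I^\perp=\langle 1,y_1,y_2,y_2^2,y_1y_2,y_2^3\rangle_\res$ has dimension 6, and for the candidate $H=y_1^2y_2+y_2^5$.

```python
from fractions import Fraction
from itertools import product

def contract(alpha, f):
    """Contraction x^alpha ∘ f on dicts mapping exponent tuples to coefficients."""
    out = {}
    for beta, c in f.items():
        if all(b >= a for a, b in zip(alpha, beta)):
            g = tuple(b - a for a, b in zip(alpha, beta))
            out[g] = out.get(g, 0) + c
    return {m: c for m, c in out.items() if c != 0}

def span_dim(polys):
    """Dimension of the k-span of a list of polynomials (sparse Gaussian elimination)."""
    pivots = {}  # leading monomial -> polynomial with that leading monomial
    for f in polys:
        f = {m: Fraction(c) for m, c in f.items() if c}
        while f:
            m = max(f)  # leading monomial in lex order
            if m not in pivots:
                pivots[m] = f
                break
            g = pivots[m]
            c = f[m] / g[m]
            f = {k: f.get(k, Fraction(0)) - c * g.get(k, Fraction(0))
                 for k in set(f) | set(g)}
            f = {k: v for k, v in f.items() if v}
    return len(pivots)

def principal_system(F):
    """All nonzero contractions x^alpha ∘ F; they span <F> = R ∘ F."""
    n = len(next(iter(F)))
    d = max(sum(m) for m in F)
    alphas = (a for a in product(range(d + 1), repeat=n) if sum(a) <= d)
    return [g for g in (contract(a, F) for a in alphas) if g]

# I = (x1^2, x1*x2^2, x2^4): I_perp has k-basis 1, y1, y2, y2^2, y1*y2, y2^3.
I_perp = [{(0, 0): 1}, {(1, 0): 1}, {(0, 1): 1},
          {(0, 2): 1}, {(1, 1): 1}, {(0, 3): 1}]
H = {(2, 1): 1, (0, 5): 1}  # H = y1^2*y2 + y2^5
print(span_dim(principal_system(H)))           # 8 = dim A + 2: condition (1) holds
print(span_dim(principal_system(H) + I_perp))  # 8: I_perp ⊂ <H>, condition (2) holds
```

Both dimensions agree, so this $H$ defines a Gorenstein cover of colength 2; by contrast, $H=y_1^2y_2$ alone gives $\dim_\res\langle H\rangle=6$ while $\dim_\res(I^\perp+\langle H\rangle)=8$, so condition (2) fails and it is not a cover.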
By definition, $H\\in\\int_{\\mathfrak m^2}I^\\perp$, hence $x_k\\circ H\\in\\mathfrak m\\circ\\int_\\mathfrak m\\left(\\int_\\mathfrak m I^\\perp\\right)\\subseteq\\int_{\\mathfrak m}I^\\perp$ and\n$$x_k\\circ H=\\sum_{j=1}^t\\mu^j_kb_j+\\sum_{j=1}^{h_1}\\rho^j_kF_j,\\quad \\mu_k^j,\\rho_k^j\\in\\res.$$\nSet matrices $A_H=(\\mu_k^j)$ and $B_H=(\\rho_k^j)$. Let us describe matrix $B_H$ explicitly. We have\n$$x_k\\circ H=\\sum_{i=1}^{h_1}\\alpha_i(x_k\\circ F_i)+\\sum_{i=1}^{h_2}\\beta_i(x_k\\circ G_i).$$\nNote that each $x_k\\circ G_i$, for any $1\\leq i\\leq h_2$, is in $\\int_{\\mathfrak m}I^\\perp$ and hence it can be decomposed as\n$$x_k\\circ G_i=\\sum_{j=1}^t\\lambda^{k,i}_jb_j+\\sum_{j=1}^{h_1}a^{k,i}_jF_j,\\quad \\lambda^{k,i}_j,a_j^{k,i}\\in\\res.$$\nThen\n$$x_k\\circ H=\\sum_{i=1}^{h_1}\\alpha_i(x_k\\circ F_i)+\\sum_{i=1}^{h_2}\\beta_i\\left(\\sum_{j=1}^t\\lambda^{k,i}_jb_j+\\sum_{j=1}^{h_1}a^{k,i}_jF_j\\right)=b+\\sum_{j=1}^{h_1}\\left(\\sum_{i=1}^{h_2}\\beta_ia^{k,i}_j\\right)F_j,$$\nwhere $b:=\\sum_{i=1}^{h_1}\\alpha_i(x_k\\circ F_i)+\\sum_{i=1}^{h_2}\\beta_i\\left(\\sum_{j=1}^t\\lambda_j^{k,i}b_j\\right)\\in I^\\perp$.\nObserve that\n\\begin{equation}\\label{rho}\n\\rho_k^j=\\sum_{i=1}^{h_2}a^{k,i}_j\\beta_i,\n\\end{equation}\nhence the entries of matrix $B_H$ can be regarded as polynomials in variables $\\beta_1,\\dots,\\beta_{h_2}$ with coefficients in $\\res$.\n\n\\begin{lemma}\\label{rk(B)} Consider the matrix $B_H=(\\rho_k^j)$ as previously defined and let $B'_H=(\\varrho^j_k)$ be the matrix of the coefficients of $\\overline{L_k\\circ H}=\\sum_{j=1}^{h_1}\\varrho_k^j\\overline{F}_j\\in\\mathcal{L}_{A,1}$ where $L_1,\\dots,L_n$ are independent linear forms. 
Then,\n\\begin{enumerate}\n\\item[(i)] $\\operatorname{rk} B_H=\\dim_\\res\\left(\\displaystyle\\frac{\\mathfrak m\\circ H+I^\\perp}{I^\\perp}\\right)$,\n\\item[(ii)] $\\operatorname{rk} B'_H=\\operatorname{rk} B_H$.\n\\end{enumerate}\n\\end{lemma}\n\n\\begin{proof}\nSince $\\overline{x_k\\circ H}=\\sum_{j=1}^{h_1}\\rho_k^j\\overline{F}_j$ and $\\overline{F}_1,\\dots,\\overline{F}_{h_1}$ is a $\\res$-basis of $\\mathcal{L}_{A,1}$, then $\\operatorname{rk} B_H=\\dim_\\res\\langle\\overline{x_1\\circ H},\\dots,\\overline{x_n\\circ H}\\rangle_\\res$. Note that $\\langle\\overline{x_1\\circ H},\\dots,\\overline{x_n\\circ H}\\rangle_\\res=(\\mathfrak m\\circ H+I^\\perp)\/I^\\perp\\subseteq\\mathcal{L}_{A,1}$, hence (i) holds.\n\n\\noindent\nFor (ii) it will be enough to prove that $\\langle \\overline{x_1\\circ H},\\dots,\\overline{x_n\\circ H}\\rangle_\\res=\\langle \\overline{L_1\\circ H},\\dots,\\overline{L_n\\circ H}\\rangle_\\res$.\nIndeed, since $L_i=\\sum_{j=1}^n\\lambda^i_jx_j$ for any $1\\leq i\\leq n$, then $\\overline{L_i\\circ H}=\\sum_{j=1}^n\\lambda_j^i(\\overline{x_j\\circ H})\\in\\langle\\overline{x_1\\circ H},\\dots,\\overline{x_n\\circ H}\\rangle_\\res$. The reverse inclusion comes from the fact that $L_1,\\dots,L_n$ are linearly independent and hence $(L_1,\\dots,L_n)=\\mathfrak m$.\n\\end{proof}\n\n\\begin{lemma}\\label{BH0} With the previous notation, consider a polynomial $H\\in\\int_{\\mathfrak m^2}I^\\perp$ with coefficients $\\beta_1,\\dots,\\beta_{h_2}$ of $G_1,\\dots,G_{h_2}$, respectively, and its corresponding matrix $B_H$. Then the following are equivalent:\n\\begin{enumerate}\n\\item[(i)] $B_H\\neq 0$,\n\\item[(ii)] $\\mathfrak m\\circ H\\nsubseteq I^\\perp$,\n\\item[(iii)] $(\\beta_1,\\dots,\\beta_{h_2})\\neq (0,\\dots,0)$.\n\\end{enumerate}\n\\end{lemma}\n\n\\begin{proof}\n$(i)$ implies $(ii)$. 
If $B_H\\neq 0$, by \\Cref{rk(B)}, $(\\mathfrak m\\circ H+I^\\perp)\/I^\\perp\\neq 0$ and hence $\\mathfrak m\\circ H\\nsubseteq I^\\perp$.\n\n\\noindent\n$(ii)$ implies $(iii)$. If $\\mathfrak m\\circ H\\nsubseteq I^\\perp$, by definition $H\\notin \\int_\\mathfrak m I^\\perp$ and hence $H\\in\\int_{\\mathfrak m^2} I^\\perp\\backslash\\int_\\mathfrak m I^\\perp$. Therefore, some $\\beta_i$ must be non-zero.\n\n\\noindent\n$(iii)$ implies $(i)$. Since $G_i\\in\\int_{\\mathfrak m^2}I^\\perp\\backslash\\int_\\mathfrak m I^\\perp$ for any $1\\leq i\\leq h_2$ and, by hypothesis, there is some non-zero $\\beta_i$, we have that $H\\in\\int_{\\mathfrak m^2}I^\\perp\\backslash \\int_\\mathfrak m I^\\perp$. We claim that $x_k\\circ H\\in\\int_\\mathfrak m I^\\perp\\backslash I^\\perp$ for some $k\\in\\lbrace 1,\\dots,n\\rbrace$. Suppose the claim is not true. Then $x_k\\circ H\\in I^\\perp$ for any $1\\leq k\\leq n$, or equivalently, $\\mathfrak m\\circ H\\subseteq I^\\perp$ but this is equivalent to $H\\in\\int_\\mathfrak m I^\\perp$, which is a contradiction. Since\n$$x_k\\circ H=b+\\sum_{j=1}^{h_1}\\rho_k^jF_j\\in\\int_\\mathfrak m I^\\perp\\backslash I^\\perp,\\quad b\\in I^\\perp,$$\n\\noindent\nfor some $k\\in\\lbrace 1,\\dots,n\\rbrace$, then $\\rho_k^j\\neq 0$, for some $j\\in\\lbrace 1,\\dots,h_1\\rbrace$. Therefore, $B_H\\neq 0$.\n\\end{proof}\n\n\\begin{lemma}\\label{lemaB} Consider the previous setting. If $B_H=0$, then either $\\operatorname{gcl}(A)=0$ or $\\operatorname{gcl}(A)=1$ or $R\/\\operatorname{Ann_R} H$ is not a cover of $A$.\n\\end{lemma}\n\n\\begin{proof}\nIf $B_H=0$, then $\\mathfrak m\\circ H\\subseteq I^\\perp$ and hence $\\ell(\\langle H\\rangle)-1\\leq \\ell(I^\\perp)$. If $I^\\perp\\subseteq \\langle H\\rangle$, then $G=R\/\\operatorname{Ann_R} H$ is a Gorenstein cover of $A$ such that $\\ell(G)-\\ell(A)\\leq 1$. 
Therefore, either $\\operatorname{gcl}(A)\\leq 1$ or $G$ is not a cover.\n\\end{proof}\n\n\\bigskip\nSince we already have techniques to check whether $A$ has colength 0 or 1, we can focus completely on the case $\\operatorname{gcl}(A)>1$. Then, according to \\Cref{lemaB}, if $G=R\/\\operatorname{Ann_R} H$ is a Gorenstein cover of $A$, then $B_H\\neq 0$.\n\n\\begin{proposition}\\label{rk B} Assume that $B_H\\neq 0$. Then $\\operatorname{rk} B_H=1$ if and only if $(L_1,\\dots,L_{n-1},L_n^2)\\circ H\\subseteq I^\\perp$\nfor some independent linear forms $L_1,\\dots,L_n$.\n\\end{proposition}\n\n\\begin{proof}\nSince $B_H\\neq 0$, there exists $k$ such that $x_k\\circ H\\notin I^\\perp$. Without loss of generality, we can assume that $x_n\\circ H\\notin I^\\perp$. If $\\operatorname{rk} B_H=1$, then any other row of $B_H$ must be a multiple of row $n$. Therefore, for any $1\\leq i\\leq n-1$, there exists $\\lambda_i\\in\\res$ such that $(x_i-\\lambda_ix_n)\\circ H\\in I^\\perp$. Take $L_n:=x_n$ and $L_i:=x_i-\\lambda_ix_n$. Then $L_1,\\dots,L_n$ are linearly independent and $L_i\\circ H\\in I^\\perp$ for any $1\\leq i\\leq n-1$. Moreover, $L_n^2\\circ H\\in \\mathfrak m^2\\circ\\int_{\\mathfrak m^2}I^\\perp\\subseteq I^\\perp$.\n\nConversely, let $B'_H=(\\varrho^j_k)$ be the matrix of the coefficients of $\\overline{L_k\\circ H}=\\sum_{j=1}^{h_1}\\varrho_k^j\\overline{F_j}\\in\\mathcal{L}_{A,1}$. By \\Cref{rk(B)}, since $B_H\\neq 0$, then $B'_H\\neq 0$. By hypothesis, $\\overline{L_1\\circ H}=\\dots=\\overline{L_{n-1}\\circ H}=0$ in $\\mathcal{L}_{A,1}$ but, since $B'_H\\neq 0$, then $\\overline{L_n\\circ H}\\neq 0$. Then $\\operatorname{rk} B'_H=1$ and hence, again by \\Cref{rk(B)}, $\\operatorname{rk} B_H=1$.\n\\end{proof}\n\nRecall that $\\langle H\\rangle=\\langle \\lambda H\\rangle$ for any $\\lambda\\in\\res^\\ast$. 
Therefore, as pointed out in \\Cref{ThMGC}, for any $H\\neq 0$, a Gorenstein ring $G=R\/\\operatorname{Ann_R} H$ can be identified with a point $[H]\\in\\mathbb{P}_\\res\\left(\\mathcal{L}_{A,2}\\right)$ by taking coordinates $(\\alpha_1:\\dots:\\alpha_{h_1}:\\beta_1:\\dots:\\beta_{h_2})$. Observe that $\\mathbb{P}_\\res\\left(\\mathcal{L}_{A,2}\\right)$ is a projective space over $\\res$ of dimension $h_1+h_2-1$, hence we will denote it by $\\mathbb{P}_\\res^{h_1+h_2-1}$.\n\nOn the other hand, by \\Cref{rho}, any minor of $B_H=(\\rho_k^j)$ is a homogeneous polynomial in variables $\\beta_1,\\dots,\\beta_{h_2}$. Therefore, we can consider the homogeneous ideal $\\mathfrak{b}$ generated by all order-2-minors of $B_H$ in $\\res[\\alpha_1,\\dots,\\alpha_{h_1},\\beta_1,\\dots,\\beta_{h_2}]$. Hence $\\mathbb{V}_+(\\mathfrak{b})$ is the projective variety consisting of all points $[H]\\in\\mathbb{P}_\\res^{h_1+h_2-1}$ such that $\\operatorname{rk} B_H\\leq 1$.\n\n\\begin{remark} In this section we will use the notation $MGC_2(A)$ to denote the set of points $[H]\\in\\mathbb{P}_\\res^{h_1+h_2-1}$ such that $G=R\/\\operatorname{Ann_R} H$ is a Gorenstein cover of $A$ with $\\ell(G)-\\ell(A)=2$. Since we are considering rings such that $\\operatorname{gcl}(A)>1$, we can characterize rings of higher colength than 2 as those such that $MGC_2(A)=\\emptyset$. On the other hand, $\\operatorname{gcl}(A)=2$ if and only if $MGC_2(A)\\neq\\emptyset$, hence in this case $MGC_2(A)=MGC(A)$, see \\Cref{DefMGC} and \\Cref{RemMGC}.\n\\end{remark}\n\n\\begin{corollary}\\label{MGC1} Let $A=R\/I$ be an Artin ring such that $\\operatorname{gcl}(A)=2$. Then\n$$MGC_2(A)\\subseteq\\mathbb{V}_+(\\mathfrak{b})\\subseteq\\mathbb{P}_\\res^{h_1+h_2-1}.$$\n\\end{corollary}\n\n\\begin{proof}\nBy \\Cref{KF}$.(ii)$, points $[H]\\in MGC_2(A)$ correspond to Gorenstein covers $G=R\/\\operatorname{Ann_R} H$ of $A$ such that $I^\\perp=(L_1,\\dots,L_{n-1},L_n^2)\\circ H$ for some $L_1,\\dots,L_n$. 
Since $B_H\\neq 0$ by \\Cref{lemaB}, then we can apply \\Cref{rk B} to deduce that $\\operatorname{rk} B_H=1$.\n\\end{proof}\n\nNote that the conditions on the rank of $B_H$ do not provide any information about which particular choices of independent linear forms $L_1,\\dots,L_n$ satisfy the inclusion $(L_1,\\dots,L_{n-1},L_n^2)\\circ H\\subseteq I^\\perp$. In fact, it will be enough to understand which are the $L_n$ that meet the requirements. To that end, we fix $L_n=v_1x_1+\\dots+v_nx_n$, where $v=(v_1,\\dots,v_n)\\neq 0$. We can choose linear forms $L_i=\\lambda_1^ix_1+\\dots+\\lambda_n^ix_n$, where $\\lambda_i=(\\lambda_1^i,\\dots,\\lambda_n^i)\\neq 0$, for $1\\leq i\\leq n-1$, such that $L_1,\\dots,L_n$ are linearly independent and $\\lambda_i\\cdot v=0$. Observe that the $k$-vector space generated by $L_1,\\dots,L_{n-1}$ can be expressed in terms of $v_1,\\dots,v_n$, that is,\n$$\\langle L_1,\\dots,L_{n-1}\\rangle_\\res=\\langle v_lx_k-v_kx_l:\\,1\\leq k1$. Consider a point $([H],[v])\\in\\mathbb{V}_+(\\mathfrak{c})\\subset\\mathbb{P}^{h_1+h_2-1}\\times\\mathbb{P}^{n-1}$. Then\n$$[H]\\in MGC_2(A)\\Longleftrightarrow ([H],[v])\\notin \\mathbb{V}_+(\\mathfrak{a}),$$\n\\end{proposition}\n\n\\begin{proof}\nFrom \\Cref{MGC1} we deduce that if $[H]$ is a point in $MGC_2(A)$, then $\\operatorname{rk} B_H\\leq 1$. The same is true for any point $([H],[v])\\in\\mathbb{V}_+(\\mathfrak{c})$. Let us consider these two cases:\n\n\\emph{Case $B_H=0$}. Since $\\operatorname{gcl}(A)>1$, then $R\/\\operatorname{Ann_R} H$ is not a Gorenstein cover of $A$ by \\Cref{lemaB}, hence $[H]\\notin MGC_2(A)$. On the other hand, as stated in the proof of \\Cref{proj}, $([H],[v])\\in\\mathbb{V}_+(\\mathfrak{c})$ for any $v\\neq 0$. 
By \\Cref{BH0} and $\\operatorname{gcl}(A)\\neq 1$, it follows that\n$$(L_1,\\dots,L_{n-1},L_n^2)\\circ H\\subseteq \\mathfrak m\\circ H\\subsetneq I^\\perp$$\n\\noindent\nfor any $L_1,\\dots,L_n$ linearly independent linear forms, where $L_n=v_1x_1+\\dots+v_nx_n$. Therefore, the rank of matrix $U_{H,v}$ is always strictly smaller than $\\dim_\\res I^\\perp$. Hence $([H],[v])\\in\\mathbb{V}_+(\\mathfrak{a})$ for any $v\\neq 0$.\n\n\\emph{Case $\\operatorname{rk} B_H=1$}. If $[H]\\in MGC_2(A)$, then there exist $L_1,\\dots,L_n$ such that $(L_1,\\dots,L_{n-1},L_n^2)\\circ H=I^\\perp$. Take $v$ as the vector of coefficients of $L_n$, it is an admissible vector by definition. By \\Cref{proj}, $([H],[v])\\in\\mathbb{V}_+(\\mathfrak{c})$ is unique and $\\operatorname{rk} U_{H,v}=\\dim_\\res I^\\perp$. Therefore, $([H],[v])\\notin\\mathbb{V}_+(\\mathfrak{a})$.\n\nConversely, if $([H],[v])\\in\\mathbb{V}_+(\\mathfrak{c})\\cap\\mathbb{V}_+(\\mathfrak{a})$, then $\\operatorname{rk} U_{H,v}<\\dim_\\res I^\\perp$ and hence $(L_1,\\dots,L_{n-1},L_n^2)\\circ H\\subsetneq I^\\perp$, where $L_n=v_1x_1+\\dots+v_nx_n$. By unicity of $v$, no other choice of $L_1,\\dots,L_n$ satisfies the inclusion $(L_1,\\dots,L_{n-1},L_n^2)\\circ H\\subset I^\\perp$, hence $[H]\\notin MGC_2(A)$.\n\\end{proof}\n\n\\begin{corollary} Assume $\\operatorname{gcl}(A)>1$. With the previous definitions,\n$$MGC_2(A)=\\mathbb{V}_+(\\mathfrak{b})\\backslash \\pi_1\\left(\\mathbb{V}_+(\\mathfrak{c})\\cap\\mathbb{V}_+(\\mathfrak{a})\\right).$$\n\\end{corollary}\n\n\\begin{proof}\nIt follows from \\Cref{proj} and \\Cref{MGC2}.\n\\end{proof}\n\nFinally, let us recall the following result for bihomogeneous ideals:\n\n\\begin{lemma} Let ideals $\\mathfrak{a},\\mathfrak{c}$ be as previously defined, $\\mathfrak{d}=\\mathfrak{a}+\\mathfrak{c}$ the sum ideal and $\\pi_1:\\mathbb{P}_\\res^{h_1+h_2-1}\\times \\mathbb{P}_\\res^{n-1}\\longrightarrow \\mathbb{P}_\\res^{h_1+h_2-1}$ be the projection map. 
Let $\\widehat{\\mathfrak{d}}$ be the projective elimination of the ideal $\\mathfrak{d}$ with respect to variables $v_1,\\dots,v_n$. Then,\n$$\\pi_1(\\mathbb{V}_+(\\mathbb{\\mathfrak{a}})\\cap\\mathbb{V}_+(\\mathbb{\\mathfrak{c}}))=\\mathbb{V}_+(\\widehat{\\mathfrak{d}}).$$\n\\end{lemma}\n\\begin{proof}\nSee \\cite[Section 8.5, Exercise 16]{CLO97}.\n\\end{proof}\n\n\\bigskip\n\nWe end this section by providing an algorithm to effectively compute the set $MGC_2(A)$ of any ring $A=R\/I$ such that $\\operatorname{gcl}(A)>1$.\n\n\\begin{algorithm}[H]\n\\caption{Compute $MGC_2(A)$ of $A=R\/I$ with $n\\geq 2$ and $\\operatorname{gcl}(A)>1$}\n\\label{AlMGC2}\n\\begin{algorithmic}\n\\REQUIRE\n$s$ socle degree of $A=R\/I$;\n$b_1,\\dots,b_t$ $\\res$-basis of the inverse system $I^\\perp$;\n$F_1,\\dots,F_{h_1},G_1,\\dots,G_{h_2}$ an adapted $\\res$-basis of $\\mathcal{L}_{A,2}$;\n$U_1,\\dots,U_n$ contraction matrices of $\\int_{\\mathfrak m^2} I^\\perp$.\n\\ENSURE\nideals $\\mathfrak{b}$ and $\\widehat{\\mathfrak{d}}$ such that\n$MGC_2(A)=\\mathbb{V}_+(\\mathfrak{b})\\backslash\\mathbb{V}_+(\\widehat{\\mathfrak{d}})$.\n\\RETURN\n\\begin{enumerate}\n\\item Set $H=\\alpha_1F_1+\\dots+\\alpha_{h_1}F_{h_1}+\\beta_1G_1+\\dots+\\beta_{h_2}G_{h_2}$, where $\\alpha,\\beta$ are variables in $\\res$. 
Set column vectors $\\textbf{H}=(0,\\dots,0,\\alpha,\\beta)^t$ and $v=(v_1,\\dots,v_n)^t$ in $R=\\res[\\alpha,\\beta,v]$, where the first $t$ components of $\\textbf{H}$ are zero.\n\\item Build matrix $B_H=(\\rho_i^j)_{1\\leq i\\leq n,\\,1\\leq j\\leq h_1}$, where\n$U_i\\textbf{H}$ is the column vector $(\\mu_i^1,\\dots,\\mu_i^t,\\rho_i^1,\\dots,\\rho_i^{h_1},0,\\dots,0)^t$.\n\\item Build matrix $C_{H,v}=\\left(\\begin{array}{c|c}\nB_H & v\\\\\n\\end{array}\\right)$ as an horizontal concatenation of $B_H$ and the column vector $v$.\n\\item Compute the ideal $\\mathfrak{c}\\subseteq R$ generated by all minors of order 2 of $B_H$.\n\\item Build matrix $U_{H,v}$ as a vertical concatenation of matrices $(\\varrho^j_{l,k})_{1\\leq j\\leq h_1,\\,1\\leq l2$.\n\n\\begin{example}\nConsider $A=R\/I$, with $R=\\res[\\![x_1,x_2]\\!]$ and $I=(x_1^2,x_1x_2^2,x_2^4)$. Applying \\Cref{AlINT} twice we get the necessary input for \\Cref{AlMGC2}:\\\\\n{\\sc Input}: $b_1=1,b_2=y_1,b_3=y_2,b_4=y_2^2,b_5=y_1y_2,b_6=y_2^3$ $\\res$-basis of $I^\\perp$; $F_1=y_2^4,F_2=y_1y_2^2,F_3=y_1^2,G_1=y_1^2y_2,G_2=y_1y_2^3,G_3=y_2^5,G_4=y_1^3$ adapted $\\res$-basis of $\\mathcal{L}_{A,2}$; $U_1, U_2$ contraction matrices of $\\int_{\\mathfrak m^2}I^\\perp$.\\\\\n{\\sc Output}: $\\mathfrak{b}=(b_3b_4,b_2b_4)$, $\\mathfrak{\\widehat{d}}=(b_3b_4,b_2b_4,b_2^2-b_1b_3)$.\\\\\n$MGC_2(A)=\\mathbb{V}_+(b_3b_4,b_2b_4)\\backslash\\mathbb{V}_+ (b_3b_4,b_2b_4,b_2^2-b_1b_3)=\\mathbb{V}_+(b_3b_4,b_2b_4)\\backslash\\mathbb{V}_+ (b_2^2-b_1b_3).$\nNote that if $b_3b_4=b_2b_4=0$ and $b_4\\neq 0$, then both $b_2$ and $b_3$ are zero and the condition $b_2^2-b_1b_3=0$ always holds. Therefore, $\\operatorname{gcl}(A)=2$ and hence\n$$MGC(A)=\\mathbb{V}_+(b_4)\\backslash\\mathbb{V}_+ (b_2^2-b_1b_3)\\simeq \\mathbb{P}^5\\backslash\\mathbb{V}_+ (b_2^2-b_1b_3),$$\n\\noindent\nwhere $(a_1:a_2:a_3:b_1:b_2:b_3)$ are the coordinates of the points in $\\mathbb{P}^5$. 
Moreover, any minimal Gorenstein cover is of the form $G=R/\operatorname{Ann_R} H$, where
$$H=a_1y_2^4+a_2y_1y_2^2+a_3y_1^2+b_1y_1^2y_2+b_2y_1y_2^3+b_3y_2^5$$
\noindent
satisfies $b_2^2-b_1b_3\neq 0$. All such covers admit $(x_1,x_2^2)$ as the corresponding $K_H$.
\end{example}

\section{Computations}

The first aim of this section is to provide a wide range of examples of the computation of the minimal Gorenstein cover variety of a local ring $A$. In \cite{Poo08a}, Poonen provides a complete classification of local algebras over an algebraically closed field of length at most 6. Note that, for higher lengths, the number of isomorphism classes is no longer finite. We will go through all the algebras in Poonen's list and restrict, for the sake of simplicity, to fields of characteristic zero.

On the other hand, we also intend to test the efficiency of the algorithms by recording their computation times. We have implemented Algorithms 1, 2 and 3 of \Cref{s5} in the commutative algebra software \emph{Singular} \cite{DGPS}. The computations were performed on a Surface Pro 3 running Microsoft Windows 10 Pro, with a 1.90 GHz Intel Core i5-4300U processor (3 MB SmartCache) and 4 GB of 1600 MHz DDR3 memory.

\subsection{Teter varieties}

In this first part of the section we are interested in the computation of Teter varieties, that is, the variety $MGC(A)$ for local $\res$-algebras $A$ of Gorenstein colength 1. All the results are obtained by running \Cref{AlMGC1} in \emph{Singular}.

\begin{example}
Consider $A=R/I$, with $R=\res[\![x_1,x_2,x_3]\!]$ and $I=(x_1^2,x_1x_2,x_1x_3,x_2x_3,x_2^3,x_3^3)$. Note that $\operatorname{HF}_A=\lbrace 1,3,2\rbrace$ and $\tau(A)=3$. 
The output provided by our implementation of the algorithm in \\emph{Singular} \\cite{DGPS} is the following:\n{\\scriptsize{\n\\begin{verbatim}\nF;\na(4)*x(2)^3+a(1)*x(3)^3+a(6)*x(1)^2+a(5)*x(1)*x(2)+a(3)*x(1)*x(3)+a(2)*x(2)*x(3)\nradical(a);\na(1)*a(4)*a(6)\n\\end{verbatim}}}\n\\noindent\nWe consider points with coordinates $(a_1:a_2:a_3:a_4:a_5:a_6)\\in\\mathbb{P}^5$. Therefore, $MGC(A)=\\mathbb{P}^5\\backslash\\mathbb{V}_+(a_1a_4a_6)$ and any minimal Gorenstein cover is of the form $G=R\/\\operatorname{Ann_R} H$, where $H=a_1y_3^3+a_2y_2y_3+a_3y_1y_3+a_4y_2^3+a_5y_1y_2+a_6y_1^2$ with $a_1a_4a_6\\neq 0$.\n\\end{example}\n\nIn \\Cref{table:TMGC1} below we show the computation time (in seconds) of all isomorphism classes of local $\\res$-algebras $A$ of $\\operatorname{gcl}(A)=1$ appearing in Poonen's classification \\cite{Poo08a}. In this table, we list the Hilbert function of $A=R\/I$, the expression of the ideal $I$ up to linear isomorphism, the dimension $h-1$ of the projective space $\\mathbb{P}^{h-1}$ where the variety $MGC(A)$ lies and the computation time. 
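The cover produced in the example above can be checked independently with Macaulay's inverse systems: for an admissible point, the principal inverse system $\langle H\rangle$ generated by $H$ under contraction must contain $I^\perp$ and have dimension $\ell(A)+1=7$. The following pure-Python sketch performs this check; it is our own illustrative code (not the \emph{Singular} implementation used in the paper), it fixes the representative coefficients $a_1=\dots=a_6=1$, and it takes contraction in the divided-power convention, so that $x_i$ simply lowers the $i$-th exponent.

```python
from fractions import Fraction

NV = 3  # variables y1, y2, y3; polynomials are dicts: exponent tuple -> coefficient

def contract(i, f):
    """Contraction x_i o f: lower the i-th exponent by one (divided-power convention)."""
    out = {}
    for e, c in f.items():
        if e[i] > 0:
            k = e[:i] + (e[i] - 1,) + e[i + 1:]
            out[k] = out.get(k, Fraction(0)) + c
    return out

class Span:
    """Row-echelon span of polynomials over Q, keyed by leading monomial."""
    def __init__(self):
        self.rows = {}  # pivot monomial -> polynomial normalized to 1 at its pivot
    def reduce(self, f):
        f = {m: c for m, c in f.items() if c}
        for piv in sorted(self.rows, reverse=True):
            c = f.get(piv)
            if c:
                for m, rc in self.rows[piv].items():
                    v = f.get(m, Fraction(0)) - c * rc
                    if v:
                        f[m] = v
                    else:
                        f.pop(m, None)
        return f
    def add(self, f):
        r = self.reduce(f)
        if not r:
            return False
        piv = max(r)
        self.rows[piv] = {m: c / r[piv] for m, c in r.items()}
        return True
    def contains(self, f):
        return not self.reduce(f)
    def dim(self):
        return len(self.rows)

def inverse_system(H):
    """Span of H and all its iterated contractions; its dimension is len(R/Ann H)."""
    S, todo = Span(), [H]
    while todo:
        g = todo.pop()
        if g and S.add(g):
            todo += [contract(i, g) for i in range(NV)]
    return S

one = Fraction(1)
# H = y3^3 + y2*y3 + y1*y3 + y2^3 + y1*y2 + y1^2  (all a_i = 1, so a1*a4*a6 != 0)
H = {(0, 0, 3): one, (0, 1, 1): one, (1, 0, 1): one,
     (0, 3, 0): one, (1, 1, 0): one, (2, 0, 0): one}
S = inverse_system(H)
# I^perp for I = (x1^2, x1x2, x1x3, x2x3, x2^3, x3^3): length 6, HF = {1,3,2}
Iperp = [{(0, 0, 0): one}, {(1, 0, 0): one}, {(0, 1, 0): one},
         {(0, 0, 1): one}, {(0, 2, 0): one}, {(0, 0, 2): one}]
assert S.dim() == 7                       # ell(G) = ell(A) + 1: a colength-1 cover
assert all(S.contains(f) for f in Iperp)  # G really covers A
# Dropping the y2^3 term (a4 = 0) breaks the cover, as a1*a4*a6 != 0 predicts:
Hbad = {m: c for m, c in H.items() if m != (0, 3, 0)}
assert not inverse_system(Hbad).contains({(0, 2, 0): one})
```

The final assertion illustrates the complement condition $a_1a_4a_6\neq 0$: setting $a_4=0$ removes $y_2^2$ from the inverse system, so $R/\operatorname{Ann}_R H$ no longer covers $A$.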
Note that our implementation of \\Cref{AlMGC1} also includes the computation of the $\\res$-basis of $\\int_\\mathfrak m I^\\perp$; hence the times reported correspond to the total computation.\n\nNote that \\Cref{AlMGC1} also allows us to prove that all the other non-Gorenstein local rings appearing in Poonen's list have Gorenstein colength at least 2.\n\n{\\footnotesize{\n\\begin{table}[H]\n\\begin{tabular}{|c|c|c|c|}\n\\hline\n$\\operatorname{HF}(R\/I)$ & $I$ & $h-1$ & t(s)\\\\\n\\hline\n$1,2$ & $(x_1,x_2)^2$ & 2 & 0.06\\\\\n$1,2,1$ & $x_1x_2,x_2^2,x_1^3$ & 2 & 0.06\\\\\n$1,3$ & $(x_1,x_2,x_3)^2$ & 5 & 0.13\\\\\n$1,2,1,1$ & $x_1^2,x_1x_2,x_2^4$ & 2 & 0.23\\\\\n$1,2,2$ & $x_1x_2,x_1^3,x_2^3$ & 2 & 0.11\\\\\n & $x_1x_2^2,x_1^2,x_2^3$ & 2 & 0.05\\\\\n$1,3,1$ & $x_1x_2,x_1x_3,x_2x_3,x_2^2,x_3^2,x_1^3$ & 5 & 0.16\\\\\n$1,4$ & $(x_1,x_2,x_3,x_4)^2$ & 9 & 2.30\\\\\n$1,2,1,1,1$ & $x_1x_2,x_1^5,x_2^2$ & 2 & 0.17\\\\\n$1,2,2,1$ & $x_1x_2,x_1^3,x_2^4$ & 2 & 0.09\\\\\n & $x_1^2+x_2^3,x_1x_2^2,x_2^4$ & 2 & 0.10\\\\\n$1,3,1,1$ & $x_1x_2,x_1x_3,x_2x_3,x_2^2,x_3^2,x_1^4$ & 5 & 3.05\\\\\n$1,3,2$ & $x_1^2,x_1x_2,x_1x_3,x_2^2,x_2x_3^2,x_3^3$ & 5 & 0.33\\\\\n & $x_1^2,x_1x_2,x_1x_3,x_2x_3,x_2^3,x_3^3$ & 5 & 0.23\\\\\n$1,4,1$ & $x_1x_2,x_1x_3,x_1x_4,x_2x_3,x_2x_4,x_3x_4,x_2^2,x_3^2,x_4^2,x_1^3$ & 9 & 3.21\\\\\n$1,5$ & $(x_1,x_2,x_3,x_4,x_5)^2$ & 14 & 1.25\\\\\n\\hline\n\\end{tabular}\n\\medskip\n\\caption[MGC1]{Computation times of $MGC(A)$ of local rings $A=R\/I$ with $\\ell(A)\\leq 6$ and $\\operatorname{gcl}(A)=1$.}\n\\label{table:TMGC1}\n\\end{table}}}\n\n\\subsection{Minimal Gorenstein cover variety in colength 2}\nNow we want to compute $MGC(A)$ for $\\operatorname{gcl}(A)=2$. All the examples are obtained by running \\Cref{AlMGC2} in \\emph{Singular}.\n\\begin{example}\nConsider $A=R\/I$, with $R=\\res[\\![x_1,x_2,x_3]\\!]$ and $I=(x_1^2,x_2^2,x_3^2,x_1x_2,x_1x_3)$. Note that $\\operatorname{HF}_A=\\lbrace 1,3,1\\rbrace$ and $\\tau(A)=2$. 
The output provided by our implementation of the algorithm in \\emph{Singular} \\cite{DGPS} is the following:\n{\\tiny{\n\\begin{multicols}{2}\n\\begin{verbatim}\nH;\nb(10)*x(1)^3+b(7)*x(1)^2*x(2)+\n+b(8)*x(1)*x(2)^2+b(9)*x(2)^3+\n+b(1)*x(1)^2*x(3)+b(2)*x(1)*x(2)*x(3)+\n+b(3)*x(2)^2*x(3)+b(4)*x(1)*x(3)^2+\n+b(6)*x(2)*x(3)^2+b(5)*x(3)^3+\n+a(5)*x(1)^2+a(4)*x(1)*x(2)+\n+a(3)*x(2)^2+a(2)*x(1)*x(3)+\n+a(1)*x(3)^2\nradical(b);\n_[1]=b(8)^2-b(7)*b(9)\n_[2]=b(7)*b(8)-b(9)*b(10)\n_[3]=b(6)*b(8)-b(4)*b(9)\n_[4]=b(3)*b(8)-b(2)*b(9)\n_[5]=b(2)*b(8)-b(1)*b(9)\n_[6]=b(1)*b(8)-b(3)*b(10)\n_[7]=b(7)^2-b(8)*b(10)\n_[8]=b(6)*b(7)-b(4)*b(8)\n_[9]=b(4)*b(7)-b(6)*b(10)\n_[10]=b(3)*b(7)-b(1)*b(9)\n_[11]=b(2)*b(7)-b(3)*b(10)\n_[12]=b(1)*b(7)-b(2)*b(10)\n_[13]=b(3)*b(6)-b(5)*b(9)\n_[14]=b(2)*b(6)-b(5)*b(8)\n_[15]=b(1)*b(6)-b(5)*b(7)\n_[16]=b(2)*b(5)-b(4)*b(6)\n_[17]=b(4)^2-b(1)*b(5)\n_[18]=b(3)*b(4)-b(5)*b(8)\n_[19]=b(2)*b(4)-b(5)*b(7)\n_[20]=b(1)*b(4)-b(5)*b(10)\n_[21]=b(2)*b(3)-b(4)*b(9)\n_[22]=b(1)*b(3)-b(4)*b(8)\n_[23]=b(2)^2-b(4)*b(8)\n_[24]=b(1)*b(2)-b(6)*b(10)\n_[25]=b(1)^2-b(4)*b(10)\n_[26]=b(3)*b(5)*b(10)-b(6)^2*b(10)\n_[27]=b(3)^2*b(10)-b(6)*b(9)*b(10)\n_[28]=b(4)*b(6)^2-b(5)^2*b(8)\n_[29]=b(6)^3*b(10)-b(5)^2*b(9)*b(10)\nradical(d);\n_[1]=b(8)^2-b(7)*b(9)\n_[2]=b(7)*b(8)-b(9)*b(10)\n_[3]=b(6)*b(8)-b(4)*b(9)\n_[4]=b(3)*b(8)-b(2)*b(9)\n_[5]=b(2)*b(8)-b(1)*b(9)\n_[6]=b(1)*b(8)-b(3)*b(10)\n_[7]=b(7)^2-b(8)*b(10)\n_[8]=b(6)*b(7)-b(4)*b(8)\n_[9]=b(4)*b(7)-b(6)*b(10)\n_[10]=b(3)*b(7)-b(1)*b(9)\n_[11]=b(2)*b(7)-b(3)*b(10)\n_[12]=b(1)*b(7)-b(2)*b(10)\n_[13]=b(3)*b(6)-b(5)*b(9)\n_[14]=b(2)*b(6)-b(5)*b(8)\n_[15]=b(1)*b(6)-b(5)*b(7)\n_[16]=b(2)*b(5)-b(4)*b(6)\n_[17]=b(4)^2-b(1)*b(5)\n_[18]=b(3)*b(4)-b(5)*b(8)\n_[19]=b(2)*b(4)-b(5)*b(7)\n_[20]=b(1)*b(4)-b(5)*b(10)\n_[21]=b(2)*b(3)-b(4)*b(9)\n_[22]=b(1)*b(3)-b(4)*b(8)\n_[23]=b(2)^2-b(4)*b(8)\n_[24]=b(1)*b(2)-b(6)*b(10)\n_[25]=b(1)^2-b(4)*b(10)\n_[26]=b(3)*b(5)*b(10)-b(6)^2*b(10)\n_[27]=b(3)^2*b(10)-b(6)*b(9)*b(10)\n_[28]=b(4)*b(6)^2-b(5)^2*b(
8)\n_[29]=a(5)*b(3)*b(5)-a(5)*b(6)^2\n_[30]=a(5)*b(3)^2-a(5)*b(6)*b(9)\n_[31]=b(6)^3*b(10)-b(5)^2*b(9)*b(10)\n_[32]=a(5)*b(6)^3-a(5)*b(5)^2*b(9)\n\\end{verbatim}\n\\end{multicols}}}\n\\noindent\nWe can simplify the output by using the primary decomposition of the ideal $\\mathfrak{b}=\\bigcap_{i=1}^k \\mathfrak{b}_i$. Then,\n$$MGC(A)=\\left(\\bigcup_{i=1}^k \\mathbb{V}_+(\\mathfrak{b}_i)\\right)\\backslash\\mathbb{V}_+(\\mathfrak{\\widehat{d}})=\\bigcup_{i=1}^k \\left(\\mathbb{V}_+(\\mathfrak{b}_i)\\backslash\\mathbb{V}_+(\\mathfrak{\\widehat{d}})\\right).$$\n\\noindent\n\\emph{Singular} \\cite{DGPS} provides a primary decomposition $\\mathfrak{b}=\\mathfrak{b}_1\\cap \\mathfrak{b}_2$ that satisfies $\\mathbb{V}_+(\\mathfrak{b}_2)\\backslash\\mathbb{V}_+(\\mathfrak{\\widehat{d}})=\\emptyset$. Therefore, we get\n$$MGC(A)=\\mathbb{V}_+(b_1,b_2,b_4,b_7,b_8,b_{10},b_3b_6-b_5b_9)\\backslash\\left(\\mathbb{V}_+(a_5)\\cup\\mathbb{V}_+ (-b_6^3+b_5^2b_9,b_3b_5-b_6^2,b_3^2-b_6b_9)\\right).$$\n\\noindent\nin $\\mathbb{P}^{14}$. 
We can eliminate some of the variables and consider $MGC(A)$ to be the following variety:\n$$MGC(A)=\\mathbb{V}_+(b_3b_6-b_5b_9)\\backslash\\left(\\mathbb{V}_+(a_5)\\cup\\mathbb{V}_+ (b_5^2b_9-b_6^3,b_3b_5-b_6^2,b_3^2-b_6b_9)\\right)\\subset \\mathbb{P}^8.$$\n\\noindent\nTherefore, any minimal Gorenstein cover is of the form $G=R\/\\operatorname{Ann_R} H$, where\n$$H=a_1y_3^2+a_2y_1y_3+a_3y_2^2+a_4y_1y_2+a_5y_1^2+b_3y_2^2y_3+b_5y_3^3+b_6y_2y_3^2+b_9y_2^3$$\n\\noindent\nsatisfies $b_3b_6-b_5b_9=0$, $a_5\\neq 0$ and at least one of the following conditions: $b_5^2b_9-b_6^3\\neq 0, b_3b_5-b_6^2\\neq 0, b_3^2-b_6b_9\\neq 0$.\n\n\\noindent\nMoreover, note that $\\mathbb{V}_+(\\mathfrak{c})\\backslash\\mathbb{V}_+(\\mathfrak{a})=\\mathbb{V}_+(\\mathfrak{c_1})\\backslash\\mathbb{V}_+(\\mathfrak{a})$, where $\\mathfrak{c}=\\mathfrak{c}_1\\cap \\mathfrak{c}_2$ is the primary decomposition of $\\mathfrak{c}$ and $\\mathfrak{c}_1=\\mathfrak{b}_1+(v_1,v_2b_5-v_3b_6,v_2b_3-v_3b_9)$. Hence, any $K_H$ such that $K_H\\circ H=I^\\perp$ will be of the form $K_H=(L_1,L_2,L_3^2)$, where $L_1,L_2,L_3$ are independent linear forms in $R$ such that $L_3=v_2x_2+v_3x_3$, with $v_2b_5-v_3b_6=v_2b_3-v_3b_9=0$.\n\\end{example}\n\n\\begin{example}\nConsider $A=R\/I$, with $R=\\res[\\![x_1,x_2,x_3]\\!]$ and $I=(x_1x_2,x_1x_3,x_2x_3,x_2^2,x_3^2-x_1^3)$. Note that $\\operatorname{HF}_A=\\lbrace 1,3,1,1\\rbrace$ and $\\tau(A)=2$. 
The output provided by our implementation of the algorithm in \\emph{Singular} \\cite{DGPS} is the following:\n{\\tiny{\n\\begin{multicols}{2}\n\\begin{verbatim}\nH;\n-b(10)*x(1)^4+b(9)*x(1)^2*x(2)+\n+b(7)*x(1)*x(2)^2+b(8)*x(2)^3+\n+b(6)*x(1)^2*x(3)+b(1)*x(1)*x(2)*x(3)+\n+b(2)*x(2)^2*x(3)+b(3)*x(1)*x(3)^2+\n+b(4)*x(2)*x(3)^2+b(5)*x(3)^3+\n+a(5)*x(1)*x(2)+a(4)*x(2)^2+\n+a(3)*x(1)*x(3)+a(2)*x(2)*x(3)+\n+a(1)*x(3)^2\nradical(b);\n_[1]=b(8)*b(10)\n_[2]=b(7)*b(10)\n_[3]=b(4)*b(10)\n_[4]=b(2)*b(10)\n_[5]=b(1)*b(10)\n_[6]=b(6)*b(8)-b(2)*b(9)\n_[7]=b(7)^2-b(8)*b(9)\n_[8]=b(6)*b(7)-b(1)*b(9)\n_[9]=b(4)*b(7)-b(3)*b(8)\n_[10]=b(3)*b(7)-b(4)*b(9)\n_[11]=b(2)*b(7)-b(1)*b(8)\n_[12]=b(1)*b(7)-b(2)*b(9)\n_[13]=b(4)*b(6)-b(5)*b(9)\n_[14]=b(2)*b(6)-b(4)*b(9)\n_[15]=b(1)*b(6)-b(3)*b(9)\n_[16]=b(4)^2-b(2)*b(5)\n_[17]=b(3)*b(4)-b(1)*b(5)\n_[18]=b(2)*b(4)-b(5)*b(8)\n_[19]=b(1)*b(4)-b(5)*b(7)\n_[20]=b(3)^2-b(5)*b(6)+b(3)*b(10)\n_[21]=b(2)*b(3)-b(5)*b(7)\n_[22]=b(1)*b(3)-b(5)*b(9)\n_[23]=b(2)^2-b(4)*b(8)\n_[24]=b(1)*b(2)-b(3)*b(8)\n_[25]=b(1)^2-b(4)*b(9)\n_[26]=b(5)*b(9)*b(10)\n_[27]=b(3)*b(9)*b(10)\nradical(d);\n_[1]=b(8)*b(10)\n_[2]=b(7)*b(10)\n_[3]=b(4)*b(10)\n_[4]=b(2)*b(10)\n_[5]=b(1)*b(10)\n_[6]=b(6)*b(8)-b(2)*b(9)\n_[7]=b(7)^2-b(8)*b(9)\n_[8]=b(6)*b(7)-b(1)*b(9)\n_[9]=b(4)*b(7)-b(3)*b(8)\n_[10]=b(3)*b(7)-b(4)*b(9)\n_[11]=b(2)*b(7)-b(1)*b(8)\n_[12]=b(1)*b(7)-b(2)*b(9)\n_[13]=b(4)*b(6)-b(5)*b(9)\n_[14]=b(2)*b(6)-b(4)*b(9)\n_[15]=b(1)*b(6)-b(3)*b(9)\n_[16]=b(4)^2-b(2)*b(5)\n_[17]=b(3)*b(4)-b(1)*b(5)\n_[18]=b(2)*b(4)-b(5)*b(8)\n_[19]=b(1)*b(4)-b(5)*b(7)\n_[20]=b(3)^2-b(5)*b(6)+b(3)*b(10)\n_[21]=b(2)*b(3)-b(5)*b(7)\n_[22]=b(1)*b(3)-b(5)*b(9)\n_[23]=b(2)^2-b(4)*b(8)\n_[24]=b(1)*b(2)-b(3)*b(8)\n_[25]=b(1)^2-b(4)*b(9)\n_[26]=b(5)*b(9)*b(10)\n_[27]=b(3)*b(9)*b(10)\n_[28]=a(4)*b(5)*b(10)\n_[29]=a(4)*b(3)*b(10)\n\\end{verbatim}\n\\end{multicols}}}\n\\noindent\n\\emph{Singular} \\cite{DGPS} provides a primary decomposition $\\mathfrak{b}=\\mathfrak{b}_1\\cap \\mathfrak{b}_2\\cap\\mathfrak{b}_3$ 
such that $\\mathbb{V}_+(\\mathfrak{b})\\backslash\\mathbb{V}_+(\\mathfrak{\\widehat{d}})=\\mathbb{V}_+(\\mathfrak{b}_2)\\backslash\\mathbb{V}_+(\\mathfrak{\\widehat{d}})$. Therefore, we get\n$$MGC(A)=\\mathbb{V}_+(b_1,b_2,b_4,b_7,b_8,b_9,b_3^2-b_5b_6+b_3b_{10})\\backslash\\left(\\mathbb{V}_+(a_4)\\cup\\mathbb{V}_+(b_{10})\\cup\\mathbb{V}_+ (b_3,b_5)\\right).$$\n\\noindent\nin $\\mathbb{P}^{14}$. We can eliminate some of the variables and consider $MGC(A)$ to be the following variety:\n$$MGC(A)=\\mathbb{V}_+(b_3^2-b_5b_6+b_3b_{10})\\backslash\\left(\\mathbb{V}_+(a_4)\\cup\\mathbb{V}_+(b_{10})\\cup\\mathbb{V}_+ (b_3,b_5)\\right)\\subset \\mathbb{P}^8.$$\n\\noindent\nTherefore, any minimal Gorenstein cover is of the form $G=R\/\\operatorname{Ann_R} H$, where\n$$H=a_1y_3^2+a_2y_2y_3+a_3y_1y_3+a_4y_2^2+a_5y_1y_2+b_3y_1y_3^2+b_5y_3^3+b_6y_1^2y_3-b_{10}y_1^4$$\n\\noindent\nsatisfies $b_3^2-b_5b_6+b_3b_{10}=0$, $a_4\\neq 0$, $b_{10}\\neq 0$ and either $b_3\\neq 0$ or $b_5\\neq 0$ (or both).\n\n\\noindent\nMoreover, note that $\\mathbb{V}_+(\\mathfrak{c})\\backslash\\mathbb{V}_+(\\mathfrak{a})=\\mathbb{V}_+(\\mathfrak{c_2})\\backslash\\mathbb{V}_+(\\mathfrak{a})$, where $\\mathfrak{c}=\\mathfrak{c}_1\\cap \\mathfrak{c}_2\\cap \\mathfrak{c}_3$ is the primary decomposition of $\\mathfrak{c}$ and $\\mathfrak{c}_2=\\mathfrak{b}_2+(v_2,v_1b_5-v_3b_3-v_3b_{10},v_1b_3-v_3b_6)$. Hence, any $K_H$ such that $K_H\\circ H=I^\\perp$ will be of the form $K_H=(L_1,L_2,L_3^2)$, where $L_1,L_2,L_3$ are independent linear forms in $R$ such that $L_3=v_1x_1+v_3x_3$, with $v_1b_5-v_3b_3-v_3b_{10}=v_1b_3-v_3b_6=0$.\n\\end{example}\n\n\\begin{example}\nConsider $A=R\/I$, with $R=\\res[\\![x_1,x_2,x_3]\\!]$ and $I=(x_1^2,x_2^2,x_3^2,x_1x_2)$. Note that $\\operatorname{HF}_A=\\lbrace 1,3,2\\rbrace$ and $\\tau(A)=2$. 
Doing analogous computations to the previous examples, \\emph{Singular} provides the following variety:\n$$MGC(A)=\\mathbb{P}^7\\backslash\\mathbb{V}_+(b_2^2-b_1b_3)$$\n\\noindent\nThe coordinates of points in $MGC(A)$ are of the form $(a_1:\\dots:a_4:b_1:b_2:b_3:b_4)\\in \\mathbb{P}^7$ and they correspond to a polynomial\n$$H=b_1y_1^2y_3+b_2y_1y_2y_3+b_3y_2^2y_3+b_4y_3^3+a_1y_3^2+a_2y_2^2+a_3y_1y_2+a_4y_1^2$$\n\\noindent\nsuch that $b_2^2-b_1b_3\\neq 0$. Any $G=R\/\\operatorname{Ann_R} H$ is a minimal Gorenstein cover of colength 2 of $A$ and all such covers admit $(x_1,x_2,x_3^2)$ as the corresponding $K_H$.\n\\end{example}\n\n\\begin{example}\nConsider $A=R\/I$, with $R=\\res[\\![x_1,x_2,x_3,x_4]\\!]$ and $I=(x_1^2,x_2^2,x_3^2,x_4^2,x_1x_2,x_1x_3,x_1x_4,x_2x_3,x_2x_4)$. Note that $\\operatorname{HF}_A=\\lbrace 1,4,1\\rbrace$ and $\\tau(A)=3$. Doing analogous computations to the previous examples, \\emph{Singular} provides the following variety:\n$$MGC(A)=\\mathbb{V}_+(b_6b_{10}-b_9b_{16})\\backslash\\left(\\mathbb{V}_+(d_1)\\cup\\mathbb{V}_+(d_2)\\right)\\subset \\mathbb{P}^{12},$$\n\\noindent\nwhere $d_1=(a_7a_9-a_8^2)$ and $d_2=(b_9^2b_{16}-b_{10}^3,b_6b_9-b_{10}^2,b_6^2-b_{10}b_{16})$. The coordinates of points in $MGC(A)$ are of the form $(a_1:\\dots:a_9:b_6:b_9:b_{10}:b_{16})\\in \\mathbb{P}^{12}$ and they correspond to a polynomial\n$$H=b_{16}y_3^3+b_6y_3^2y_4+b_{10}y_3y_4^2+b_9y_4^3+a_9y_1^2+a_8y_1y_2+a_7y_2^2+$$\n$$+a_6y_1y_3+a_5y_2y_3+a_4y_3^2+a_3y_1y_4+a_2y_2y_4+a_1y_4^2$$\n\\noindent\nsuch that $G=R\/\\operatorname{Ann_R} H$ is a minimal Gorenstein cover of colength 2 of $A$. 
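As an independent consistency check of the $\mathbb{P}^7$ example above (the algebra $A=R/(x_1^2,x_2^2,x_3^2,x_1x_2)$), one can again compare inverse systems: a point yields a colength-2 cover precisely when $I^\perp\subseteq\langle H\rangle$ and $\dim_\res\langle H\rangle=\ell(A)+2=8$. Below is a compact pure-Python sketch of ours (not the paper's \emph{Singular} code), with our own choice of representative coefficients $b_1=1$, $b_2=0$, $b_3=-1$, $b_4=1$ and $a_i=0$, for which $b_2^2-b_1b_3=1\neq 0$; contraction is taken in the divided-power convention.

```python
from fractions import Fraction

one = Fraction(1)

def contract(i, f):
    # contraction x_i o f: lower the i-th exponent (divided-power convention)
    out = {}
    for e, c in f.items():
        if e[i] > 0:
            k = e[:i] + (e[i] - 1,) + e[i + 1:]
            out[k] = out.get(k, Fraction(0)) + c
    return out

def echelon_reduce(rows, f):
    # reduce f against an echelon set of rows; empty result means f is in the span
    f = {m: c for m, c in f.items() if c}
    for piv in sorted(rows, reverse=True):
        c = f.get(piv)
        if c:
            for m, rc in rows[piv].items():
                v = f.get(m, Fraction(0)) - c * rc
                if v:
                    f[m] = v
                else:
                    f.pop(m, None)
    return f

def inverse_system(H, nv=3):
    # echelon basis of the span of H and all of its iterated contractions
    rows, todo = {}, [H]
    while todo:
        r = echelon_reduce(rows, todo.pop())
        if r:
            piv = max(r)
            rows[piv] = {m: c / r[piv] for m, c in r.items()}
            todo += [contract(i, rows[piv]) for i in range(nv)]
    return rows

# H = y1^2*y3 - y2^2*y3 + y3^3  (b1=1, b2=0, b3=-1, b4=1, all a_i = 0)
H = {(2, 0, 1): one, (0, 2, 1): -one, (0, 0, 3): one}
rows = inverse_system(H)
# I^perp for I = (x1^2, x2^2, x3^2, x1x2): span{1, y1, y2, y3, y1y3, y2y3}
Iperp = [{(0, 0, 0): one}, {(1, 0, 0): one}, {(0, 1, 0): one},
         {(0, 0, 1): one}, {(1, 0, 1): one}, {(0, 1, 1): one}]
assert len(rows) == 8                                   # ell(G) = 6 + 2: colength 2
assert all(not echelon_reduce(rows, f) for f in Iperp)  # G covers A
# A choice with b2^2 - b1*b3 = 0 (here b1=b2=b3=1, b4=0) fails to cover A:
Hbad = {(2, 0, 1): one, (1, 1, 1): one, (0, 2, 1): one}
assert echelon_reduce(inverse_system(Hbad), {(1, 0, 1): one})  # y1y3 not reached
```

With this particular degenerate choice, the monomial $y_1y_3\in I^\perp$ never appears in the inverse system, so no cover is obtained, in line with the complement condition $b_2^2-b_1b_3\neq 0$ above.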
Moreover, any $K_H$ such that $K_H\\circ H=I^\\perp$ will be of the form $K_H=(L_1,L_2,L_3,L_4^2)$, where $L_1,L_2,L_3,L_4$ are independent linear forms in $R$ such that $L_4=v_3x_3+v_4x_4$, with $v_3b_9-v_4b_{10}=v_3b_6-v_4b_{16}=0$.\n\\end{example}\n\n\nAs in the case of colength 1, we now provide a table for the computation times of $MGC(A)$ of all isomorphism classes of local $\\res$-algebras $A$ of length equal or less than 6 such that $\\operatorname{gcl}(A)=2$.\n\n{\\small{\n\\begin{table}[h]\n\\begin{tabular}{|c|c|c|}\n\\hline\n$\\operatorname{HF}(R\/I)$ & $I$ & t(s)\\\\\n\\hline\n$1,3,1 $ & $x_1x_2,x_1x_3,x_1^2,x_2^2,x_3^2$ & 0,42\\\\\n$1,2,2,1 $ & $x_1^2,x_1x_2^2,x_2^4$ & 0,18\\\\\n$1,3,1,1 $ & $x_1x_2,x_1x_3,x_2x_3,x_2^2,x_3^2-x_1^3$ & 3,56\\\\\n$1,3,2 $ & $x_1x_2,x_2x_3,x_3^2,x_2^2-x_1x_3,x_1^3$ & 4,4\\\\\n& $x_1x_2,x_3^2,x_1x_3-x_2x_3,x_1^2+x_2^2-x_1x_3$ & 1254,34\\\\\n& $x_1x_2,x_1x_3,x_2^2,x_3^2,x_1^3$ & 3,33\\\\\n& $x_1x_2,x_1x_3,x_2x_3,x_1^2+x_2^2-x_3^2$ & 4,61\\\\\n& $x_1^2,x_1x_2,x_2x_3,x_1x_3+x_2^2-x_3^2$ & 4,09\\\\\n& $x_1^2,x_1x_2,x_2^2,x_3^2$ & 0,45\\\\\n$1,4,1 $ & $x_1^2,x_2^2,x_3^2,x_4^2,x_1x_2,x_1x_3,x_1x_4,x_2x_3,x_2x_4$ & 242,28\\\\\n\\hline\n\\end{tabular}\n\\medskip\n\\caption[AlMGC2]{Computation times of $MGC(A)$ of local rings $A=R\/I$ with $\\ell(A)\\leq 6$ and $\\operatorname{gcl}(A)=2$.}\n\\label{table:TMGC2}\n\\end{table}}}\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzeqtg b/data_all_eng_slimpj/shuffled/split2/finalzzeqtg new file mode 100644 index 0000000000000000000000000000000000000000..6077840ae1ed9efbb2df03d246dbcf7546cbad72 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzeqtg @@ -0,0 +1,5 @@ +{"text":"\\section{Introduction}\n\n\nExtracting phenomenological predictions from string theory is a subtle task.\nChief among the complications is the question of \nfinding a suitable vacuum.\nWithout solving this problem, one is limited to making 
generic statements that might hold across\nbroad classes of string theories.\nBut even within the context of specific string models with certain favorable characteristics,\nmost attempts at extracting the corresponding phenomenological predictions follow a common path. First, one tallies the massless states that arise in such models. Then, one constructs a field-theoretic Lagrangian which describes the dynamics of these states. Finally, one proceeds to analyze this Lagrangian using all of the regular tools of quantum field theory\nwithout further regard for the origins of these states within string theory. \n\n\nAlthough such a treatment may be sufficient for certain purposes, calculations performed in this manner have a serious shortcoming: \nby disregarding the infinite towers of string states that necessarily accompany these low-lying modes within the \nfull string theory, \nsuch calculations implicitly disregard many of the underlying string symmetries that ultimately \nendow string theory with a plethora of remarkable properties that transcend our field-theoretic expectations. \nAt first glance, it may seem that these extra towers of states cannot play an important role for \nlow-energy physics because these states typically have masses which are set by \nthe string scale (generically assumed near the Planck scale) or by the scales associated with the compactification geometry. \nFor this reason it would seem that these heavy states can legitimately be integrated out of the theory, thereby \njustifying a treatment based on a Lagrangian description of the low-lying modes alone, along with possible \nhigher-order operators suppressed by powers of these heavier scales. However, it is \ndifficult to justify integrating out {\\it infinite}\\\/ towers of states, \nmuch less towers whose state degeneracies at each mass level grow {\\it exponentially}\\\/ with mass. 
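The exponential growth just mentioned can be made concrete. The left-moving oscillator sum of a heterotic string involves $1/\eta^{24}$, and the coefficients of $\prod_{n\geq 1}(1-q^n)^{-24}$, which count the oscillator degeneracies level by level, grow roughly like $e^{4\pi\sqrt{n}}$. The short pure-Python illustration below is our own (it is not taken from any paper's code); it builds the series by repeated multiplication with the geometric series $1/(1-q^n)$, i.e., cumulative sums of stride $n$.

```python
import math

# Coefficients d[n] of prod_{n>=1} (1 - q^n)^(-24), truncated at order N.
N = 60
d = [0] * (N + 1)
d[0] = 1
for n in range(1, N + 1):
    for _ in range(24):              # 24 factors of 1/(1 - q^n)
        for k in range(n, N + 1):    # in-place cumulative sum with stride n
            d[k] += d[k - n]

# Low-order terms of 1/Delta = q^(-1) + 24 + 324 q + 3200 q^2 + ...
assert d[1] == 24 and d[2] == 324 and d[3] == 3200
# The degeneracies grow monotonically, and log d[n] grows like sqrt(n):
assert all(d[n] < d[n + 1] for n in range(N))
growth = [math.log(d[n]) / math.sqrt(n) for n in (15, 30, 60)]
assert all(5 < g < 4 * math.pi for g in growth)  # consistent with e^{4 pi sqrt(n)}
```

The last check only brackets the growth rate loosely; the asymptotic coefficient $4\pi$ is approached slowly because of power-law prefactors.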
\nYet this is precisely the situation we face in string theory.\nIndeed, these infinite towers of states particularly affect \nthose operators (such as those associated with the Higgs mass \nand the cosmological constant) which have positive\ndimension and are therefore sensitive to all mass scales in the theory.\n\nMany of the string symmetries that rely on these infinite towers of states go beyond what can be \nincorporated within the framework of an effective field theory (EFT).~ \nFor example, strong\/weak coupling duality relations intrinsically rely on the presence of \nthe full towers of string states, both perturbative and non-perturbative. \nBut there also exist stringy symmetries that operate purely within the perturbative weak-coupling regime. \nA prime example of this is T-duality, under which the physics of closed strings compactified \non small compactification volumes is indistinguishable from the physics associated with strings \ncompactified on large compactification volumes. This sort of \nequivalence between ultraviolet (UV) and \ninfrared (IR) physics cannot be incorporated within an\nEFT-based approach in which we integrate out heavy states while treating light states as dynamical.\n\n\nBoth strong\/weak coupling duality and T-duality are spacetime symmetries.\nAs such, like all spacetime physics, they are merely the {\\it consequences}\\\/ of an underlying string theory.\nBut closed string theories have another symmetry of this sort which is even more fundamental and which \nmust be imposed for consistency \ndirectly on the worldsheet.\nThis is worldsheet modular invariance, which will be the focus of this\npaper. 
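As a small numerical illustration of the modular transformations just introduced: under the generators $T:\tau\to\tau+1$ and $S:\tau\to -1/\tau$, the Dedekind eta function $\eta(\tau)=q^{1/24}\prod_{n=1}^\infty(1-q^n)$ (with $q=e^{2\pi i\tau}$) picks up the standard weight-$1/2$ factors, and combinations such as $\tau_2\,|\eta(\tau)|^4$ are exactly invariant. The following self-contained check is our own sketch; the truncation order and sample point are arbitrary.

```python
import cmath

def eta(tau, nmax=80):
    """Dedekind eta via its q-product, q = exp(2*pi*i*tau), truncated at nmax."""
    q = cmath.exp(2j * cmath.pi * tau)
    prod = complex(1.0)
    for n in range(1, nmax + 1):
        prod *= 1 - q ** n
    return cmath.exp(2j * cmath.pi * tau / 24) * prod

tau = complex(0.31, 1.07)  # arbitrary point in the upper half plane

# T: tau -> tau + 1 multiplies eta by the phase exp(i*pi/12)
lhs_T = eta(tau + 1)
rhs_T = cmath.exp(1j * cmath.pi / 12) * eta(tau)
assert abs(lhs_T - rhs_T) < 1e-10

# S: tau -> -1/tau multiplies eta by sqrt(-i*tau) (principal branch)
lhs_S = eta(-1 / tau)
rhs_S = cmath.sqrt(-1j * tau) * eta(tau)
assert abs(lhs_S - rhs_S) < 1e-10

# The combination tau_2 * |eta(tau)|^4 is invariant under both generators
inv = lambda t: t.imag * abs(eta(t)) ** 4
assert abs(inv(tau) - inv(tau + 1)) < 1e-12
assert abs(inv(tau) - inv(-1 / tau)) < 1e-10
```

The invariant combination mirrors the interplay between the $\tau_2$ prefactor and the eta-function powers that appears in closed-string partition functions.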
\nWorldsheet modular invariance is crucial\nsince it lies at the heart of many of the finiteness properties for which string theory is famous.\nMoreover, since modular invariance is an exact symmetry of all perturbative closed-string vacua, it provides \ntight constraints on the spectrum of string states at all mass scales as well as on their interactions. \nIndeed, this symmetry is the ultimate ``UV-IR mixer'',\noperating over all scales and enforcing a delicate balancing between low-scale and high-scale physics. \nThere is no sense in which its breaking can be confined to low energies, and likewise\nthere is no sense in which it can be broken by a small amount. \nAs an exact symmetry governing string dynamics,\nworldsheet modular invariance is preserved even as the theory passes through phase transitions such as the \nStandard-Model electroweak or QCD phase transitions, as might occur \nunder cosmological evolution. Indeed, any shifts in the low-energy degrees of freedom induced by such phase transitions \nare automatically accompanied by corresponding shifts in the high-scale theory such that modular invariance is maintained \nand finiteness is preserved. Yet this entire structure is missed if we integrate out the heavy states and \nconcentrate on the light states alone. \n\nWhile certain phenomenological questions are not likely to depend on such symmetries, this need not always be the case.\nFor example, these symmetries are likely to be critical for addressing fundamental questions connected with finiteness and\/or \nthe stability of (or even the coexistence of) \ndifferent scales under radiative corrections. Chief among these questions are hierarchy problems, \nwhich provide clues as to the UV theory \nand its potential connections to IR physics. 
Indeed, two of the most pressing mysteries in physics are the hierarchy problems \nassociated with the cosmological constant and with the masses of scalar fields such as the Higgs field.\nHowever, integrating out the heavy string states eliminates all of the stringy physics that may \nprovide alternative ways of addressing such problems. \nThe lesson, then, is clear: If we are to take string theory literally as a theory of physics, \nthen we should perform our calculations within the full framework of string theory, incorporating all of \nthe relevant symmetries and infinite towers of states that string theory provides.\n\nWith this goal in mind, \nwe begin this paper by \nestablishing a fully string-theoretic framework for calculating\none-loop Higgs masses directly from first principles in perturbative closed string theories.\nThis is the subject of Sect.~\\ref{sec2}.~\nOur framework will make no assumptions other than worldsheet modular invariance \nand will therefore be applicable to\nall closed strings, regardless of the specific string \nconstruction utilized. \nOur results will thus have a generality that extends beyond individual string models.\nAs we shall see, this framework operates independently of spacetime supersymmetry, and can \nbe employed even when spacetime supersymmetry is broken \n(or even when the string model has no spacetime supersymmetry to begin with at any scale).\nLikewise, our framework can be utilized for all scalar Higgs fields, regardless of the particular gauge symmetries they break. \nThis therefore includes the Higgs field responsible for electroweak symmetry breaking in the Standard Model. \n\nOne of the central results emerging from our framework is a relationship \nbetween the Higgs mass and the one-loop cosmological constant. 
\nThis connection arises as the result of a gravitational modular anomaly,\nand is thus generic for all closed string theories.\nThis then provides a string-theoretic connection between the two fundamental quantities which are \nknown to suffer from hierarchy problems in the absence of spacetime supersymmetry. \nFrom the perspective of ordinary quantum field theory, such a relation \nbetween the Higgs mass and the cosmological constant would be entirely unexpected. \nIndeed, quantum field theories are insensitive to the zero of energy.\nString theory, by contrast, unifies\ngauge theories with gravity.\nThus, it is only within a string context that such a relation could ever arise. \nAs we shall see, this relationship does not require supersymmetry in any form. \nIt holds to one-loop order, but its direct emergence as \nthe result of a fundamental string symmetry leads us to believe that \nit actually extends more generally. We stress that it is not the purpose of this paper to \nactually solve either of these hierarchy problems\n(although we shall return to this issue briefly in Sect.~\\ref{sec:Conclusions}). 
\nHowever, we now see that these two hierarchies are connected in a deep way within a string context.\n\nAs we shall find, the Higgs mass receives contributions from all of the states \nthroughout the string spectrum which couple to the Higgs in specified ways.\nThis includes the physical (level-matched) string states as well as the unphysical (non-level-matched) string states.\nDepending on the string model in question, \nwe shall also find that our expression for the total Higgs mass can be divergent; ultimately this will depend on the charges carried by the massless states.\nAccordingly,\nwe shall then proceed to develop a set of regulators \nwhich can tame the Higgs-mass divergences while at the same time allowing\nus to express the Higgs mass as a weighted supertrace over only the physical string states.\nDeveloping these regulators is the subject of Sect.~\\ref{sec3}.~\nTo do this, \nwe shall begin by reviewing prior results in the mathematics literature which will form the basis for\nour work.\nBuilding on these results, we will then proceed to develop\na set of regulators which are completely general, \nwhich preserve modular invariance, and which can be used in a wide variety of contexts even beyond\ntheir role in regulating the Higgs mass.\n\nIn Sect.~\\ref{sec4}, we shall then use these modular-invariant regulators in order to \nrecast our results for the Higgs mass in a form that is closer to what we might expect in field theory.\nThis will also allow us to develop\nan understanding of how the Higgs mass ``runs'' in string theory\nand to develop a physical ``renormalization'' prescription that can operate at all scales.\nTowards this end, we begin in Sect.~\\ref{UVIRequivalence}\nwith a general discussion of how (and to what extent) one can meaningfully extract an effective field theory \nfrom UV\/IR-mixed theories such as modular-invariant string theories.\nThis issue is surprisingly subtle, since modular invariance relates UV and IR divergences to 
each other\nwhile at the same time softening both. \nFor example, we shall demonstrate that while the Higgs mass is quadratically divergent\nin field theory, modular invariance renders the Higgs mass at most logarithmically divergent in string theory.\nWe shall then apply our regulators from Sect.~\\ref{sec3} to our Higgs-mass\nresults in Sect.~\\ref{sec2}\nand thereby demonstrate how the Higgs mass ``runs''\nas a function of an energy scale $\\mu$.\nThe results of our analysis are highlighted in Fig.~\\ref{anatomy}, which \nnot only exhibits features which might be expected in \nan ordinary effective field theory but also includes features which clearly transcend traditional quantum field-theoretic\nexpectations. The latter include the existence of a ``dual'' infrared region at high energy scales as well as an invariance under \nan intriguing ``scale duality'' transformation {\\mbox{$\\mu\\to M_s^2\/\\mu$}}, where $M_s$ denotes the string scale.\nThis scale-inversion duality symmetry in turn implies the existence of a fundamental limit on the extent to which a modular-invariant theory such as string\ntheory can exhibit UV-like behavior.\n\nAll of our results in Sects.~\\ref{sec2} through \\ref{sec4} are formulated in a fashion that assumes that our \nmodular-invariant string theories can be described through charge lattices.\nHowever, it turns out that our results can be recast in a completely general fashion\nthat does not require the existence of a charge lattice.\nThis is the subject of Sect.~\\ref{sec5}.~\nMoreover, we shall find that this reformulation has an added benefit, allowing us to extract a modular-invariant stringy effective\npotential for the Higgs from which the Higgs mass can be obtained through a modular-covariant double derivative with respect\nto fluctuations of the Higgs field. 
\nThis potential therefore sits at the core of our string-theoretic calculations\nand allows us to understand not only the behavior of the Higgs mass but also the overall stability of the string theory\nin a very compact form.\nIndeed, in some regions this potential exhibits explicitly string-theoretic behavior. However, in other regions,\nthis potential --- despite its string-theoretic origins --- exhibits a number of features which are\nreminiscent of the traditional Coleman-Weinberg Higgs potential.\n\nFinally, in Sect.~\\ref{sec:Conclusions},\nwe provide an overall discussion of our results and outline some possibilities for future research. \nWe also provide an additional bird's-eye perspective on the \nmanner in which modular invariance induces UV\/IR mixing \nand the reason why the passage from a full string-theoretic result to an EFT description \nnecessarily breaks the modular symmetry. \nWe will also discuss some of the possible implications of our results for addressing\nthe hierarchy problems associated with the cosmological constant and the Higgs mass.\nThis paper also has two Appendices which provide the details of calculations \nwhose results are quoted in Sects.~\\ref{chargeinsertions} and \\ref{Lambdasect} respectively.\n\nOur overarching goal in this paper is to provide a fully string-theoretic framework for the calculation of the Higgs\nmass --- a framework in which modular invariance is baked into the formalism from the very beginning.\nOur results can therefore potentially serve as the launching point for a rigorous investigation of \nthe gauge hierarchy problem in string theory.\nHowever, our methods are quite general and can easily be adapted to other quantities of phenomenological interest,\nincluding not only the masses of all particles in the theory but\nalso the gauge couplings, quartic couplings, and indeed the couplings associated with all allowed interactions.\n\nAs already noted, much of the inspiration for this work stems from our 
conviction that \nit is not an accident or phenomenological irrelevancy \nthat string theories contain not only low-lying modes but also infinite towers of massive states.\nTogether, all of these states conspire to enforce many of the unique symmetries for which\nstring theory is famous, and thus \ntheir effects are an intrinsic part of the predictions of string theory.\nIn this spirit, one might even view our work as a continuation of the line\noriginally begun in the classic 1987 paper of Kaplunovsky~\\cite{Kaplunovsky:1987rp},\nwhich established a framework for calculating string threshold corrections in which \nthe contributions of the infinite towers of string states were included.\nIndeed, as discussed in Sect.~\\ref{higgsmin},\nsome of our results for the Higgs mass \neven resemble results obtained in Ref.~\\cite{Kaplunovsky:1987rp} for threshold corrections.\nOne chief difference in our work,\nhowever, is our insistence on maintaining modular invariance at all steps \nin the calculation, {\\it including the regulators}\\\/,\nespecially when seeking to understand the behavior of dimensionful operators.\nIt is this extra ingredient which is critical for ensuring consistency with the underlying\nstring symmetries, and which allows us to probe the unique effects \nof such symmetries (such as those induced by UV\/IR mixing) in a rigorous manner.\n\n\n\n\\section{Modular invariance and the Higgs mass: A general framework\\label{sec2}}\n\nIn this section we develop a framework for calculating the Higgs mass in any four-dimensional\nmodular-invariant string theory.\nOur framework incorporates modular invariance in a fundamental way, and ultimately leads\nto a completely general expression for the one-loop Higgs mass.\nOur results can therefore easily be applied to any \nfour-dimensional \nclosed-string model.\nThroughout most of this paper, our analysis will focus on heterotic string models and will proceed under the assumption \nthat the string model in question can 
be described through a corresponding charge lattice.\nAs we shall see, the existence of a charge lattice provides a very direct way of performing\nour calculations and illustrating our main points.\nHowever, as we shall discuss in Sect.~\\ref{sec5}, our results \nare ultimately more general than this, and apply even for closed-string models\nthat transcend a specific charge-lattice construction.\n\n\n\\subsection{Preliminaries: String partition functions, charge lattices, and modular invariance}\n\nWe begin by reviewing basic facts about string partition\nfunctions, charge lattices, and modular invariance,\nestablishing our notation and normalizations along the way.\nThe one-loop partition function for any closed heterotic string in four spacetime dimensions is\na statistics-weighted trace over the Fock space of closed-string states, and thus takes the general form\n\\begin{equation}\n {\\cal Z} (\\tau,{\\overline{\\tau}}) ~\\equiv~ \\tau_2^{-1} \\frac{1}{\\overline{\\eta}^{12} \\eta^{24}} \n \\, \\sum_{m,n} \\, (-1)^F \\, {\\overline{q}}^m q^n~.\n\\label{Zform}\n\\end{equation}\nHere $\\tau$ is the one-loop (torus) modular parameter, \n{\\mbox{$\\tau_2\\equiv {\\rm Im}\\,\\tau$}}, \n{\\mbox{$q\\equiv \\exp(2\\pi i\\tau)$}}, \n$F$ is the spacetime fermion number, \nand the Dedekind eta-function is {\\mbox{$\\eta(\\tau)\\equiv q^{1\/24}\\prod_{n=1}^\\infty (1-q^n)$}}.\nIn this expression, the $\\overline{\\eta}$- and $\\eta$-functions represent the \ncontributions from the string oscillator states [which include\nappropriate right- and left-moving vacuum energies $(-1\/2,-1)$ respectively],\nwhile the $(m,n)$ sum tallies the contributions from\nthe Kaluza-Klein (KK) and winding excitations of the heterotic-string worldsheet fields --- \nexcitations which result from the compactification\nof the heterotic string to four dimensions from its critical spacetime dimensions ($=10$ for the \nright-movers and $26$ for the left-movers), \nwith $(m,n)$ representing the 
corresponding right- and left-moving worldsheet energies.\nThese KK\/winding contributions can be written in terms of the charge vectors {\\mbox{${\\bf Q}\\equiv \\lbrace {\\bf Q}_R,{\\bf Q}_L\\rbrace $}} of \na $(10,22)$-dimensional Lorentzian charge lattice ---\nor equivalently the KK\/winding momenta $\\lbrace {\\bf p}_R,{\\bf p}_L\\rbrace$ of a corresponding momentum lattice of the\nsame dimensionality ---\nvia\n\\begin{equation}\n m= {{\\bf Q}_R^2\\over 2} = {\\alpha' {\\bf p}_R^2\\over 2} ~,\n ~~~~~ n= {{\\bf Q}_L^2\\over 2} = {\\alpha' {\\bf p}_L^2\\over 2} ~,\n\\label{mnQ}\n\\end{equation}\nwhere {\\mbox{$\\alpha'\\equiv 1\/M_s^2$}} with $M_s$ denoting the string scale.\nThus the partition function in Eq.~(\\ref{Zform}) can be written as a sum over charge vectors ${\\bf Q}_L, {\\bf Q}_R$:\n\\begin{equation}\n {\\cal Z}(\\tau,{\\overline{\\tau}}) ~=~ \\tau_2^{-1} {1\\over \\overline{\\eta}^{12} \\eta^{24}} \\, \\sum_{{\\bf Q}_L,{\\bf Q}_R} \n (-1)^F {\\overline{q}}^{{\\bf Q}_R^2\/2} \n q^{{\\bf Q}_L^2\/2}~.\n\\label{preZQdef}\n\\end{equation}\n\nIn general, the spacetime mass $M$ of the resulting string state is given by \n{\\mbox{$\\alpha' M^2 = 2(m+n) + 2(\\Delta_L+\\Delta_R) + 2(a_L+a_R)$}}\nwhere $\\Delta_{R,L}$ are the contributions from the oscillator excitations and {\\mbox{$(a_R,a_L)=(-1\/2, -1)$}} are the corresponding\nvacuum energies.\nIdentifying individual left- and right-moving contributions\nto $M^2$ through the convention \n\\begin{equation}\n M^2 ~=~ {1\\over 2} (M_L^2 + M_R^2)\n\\label{masssum}\n\\end{equation}\nthen yields {\\mbox{$\\alpha' M_R^2 = 4 (m+ \\Delta_R +a_R) $}} \nand {\\mbox{$\\alpha' M_L^2 = 4 (n+ \\Delta_L +a_L) $}}.\nWriting these masses in terms of the lattice charge vectors then yields\n\\begin{eqnarray}\n {\\alpha'\\over 2} M_R^2 ~&=&~ {\\bf Q}_R^2 + 2 \\Delta_R + 2 a_R~,\\nonumber\\\\\n {\\alpha'\\over 2} M_L^2 ~&=&~ {\\bf Q}_L^2 + 2 \\Delta_L + 2 a_L~.\n\\label{eq:massRL}\n\\end{eqnarray}\nStates are level-matched 
(physical) if {\\mbox{$M_R^2 = M_L^2$}} and unphysical otherwise.\nIndeed, with these conventions, \ngauge bosons in the left-moving non-Cartan algebra are massless, with {\\mbox{${\\bf Q}_L^2=2$}} and {\\mbox{$\\Delta_L=0$}},\nwhile those in the left-moving Cartan algebra are massless, with {\\mbox{${\\bf Q}_L^2 =0$}} and {\\mbox{$\\Delta_L=1$}}.\n(Indeed, such results apply to all left-moving simply-laced gauge groups with level-one affine realizations; more\ncomplicated situations, such as necessarily arise for the right-moving gauge groups, are discussed in Ref.~\\cite{Dienes:1996yh}.)\nNote that the CPT conjugate \nof any state with charge vector $\\lbrace {\\bf Q}_R,{\\bf Q}_L\\rbrace$ has charge vector \n$-\\lbrace {\\bf Q}_R,{\\bf Q}_L\\rbrace$.\nThus CPT invariance requires that all states in the string spectrum come\nin $\\pm\\lbrace {\\bf Q}_R,{\\bf Q}_L\\rbrace$ pairs.\nBy contrast, \nsince the right-moving gauge group is necessarily non-chiral as a result of superconformality constraints,\nthe chiral conjugate of \nany state with charge vector $\\lbrace {\\bf Q}_R,{\\bf Q}_L\\rbrace$ has charge vector \n$\\lbrace {\\bf Q}_R,-{\\bf Q}_L\\rbrace$.\n \nOne important general property of the partition functions in Eq.~(\\ref{Zform}) ---\nand indeed the partition functions of {\\it all}\\\/ closed strings in any spacetime dimension --- \nis that they must be {\\it modular invariant}\\\/, {\\it i.e.}\\\/, invariant\nunder all transformations of the form\n{\\mbox{$\\tau \\to (a\\tau+b)\/(c\\tau+d)$}} where {\\mbox{$a,b,c,d\\in \\mathbb{Z}$}} and {\\mbox{$ad-bc=1$}}\n(with the same transformation for ${\\overline{\\tau}}$).\nModular invariance is thus an exact symmetry underpinning all heterotic strings,\nand in this paper we shall be exploring its consequences for the masses of the Higgs fields in such theories.\nFor these purposes, it will be important to understand the manner in which these\npartition functions achieve their modular invariance.\nIn general, 
the partition functions for heterotic strings in four dimensions\ncan be rewritten in the form\n\\begin{equation}\n {\\cal Z} (\\tau,{\\overline{\\tau}}) ~\\equiv~ \\tau_2^{-1} \\frac{1}{\\overline{\\eta}^{12} \\eta^{24}}\\, \n \\sum_{{\\overline{\\imath}},i} N_{{\\overline{\\imath}} i} ~\\overline{g_{\\overline{\\imath}} (\\tau)} f_i(\\tau)~\n\\label{partfZ}\n\\end{equation}\nwhere each $({\\overline{\\imath}},i)$ term represents the contribution from a different sector of the theory\nand where the left-moving holomorphic $f_i$ functions (and the corresponding right-moving antiholomorphic\n$g_{\\overline{\\imath}}$ functions) \ntransform covariantly under modular transformations according to\nrelations of the form\n\\begin{equation} \n f\\left( \\frac{a\\tau+b}{c\\tau+d}\\right) ~\\sim~ (c\\tau+d)^{k} \\,f(\\tau)\n\\label{fis}\n\\end{equation}\nwhere $k$ is the so-called modular weight of the $f_i$ functions\n(with an analogous weight ${\\overline{k}}$ for the $g_{\\overline{\\imath}}$ functions)\nand where the $\\sim$ notation allows for the possibility of overall \n$\\tau$-independent phases which\nwill play no future role in our arguments.\nWe likewise have \n\\begin{equation}\n \\eta\\left( \\frac{a\\tau+b}{c\\tau+d}\\right) ~\\sim~ (c\\tau+d)^{1\/2} \\, \\eta(\\tau)~.\n\\label{modfis}\n\\end{equation}\nThus, since {\\mbox{$\\tau_2\\to \\tau_2\/|c\\tau+d|^2$}} as {\\mbox{$\\tau\\to (a\\tau+b)\/(c\\tau+d)$}}, we immediately\nsee that modular invariance of the entire partition function in Eq.~(\\ref{partfZ}) requires not only that\nthe $N_{{\\overline{\\imath}} i}$ coefficients in Eq.~(\\ref{partfZ}) be chosen correctly but also that\n{\\mbox{$k=11$}} and {\\mbox{${\\overline{k}} =5$}}. 
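\nAs a quick consistency check on these weights, note that under {\\mbox{$\\tau\\to (a\\tau+b)\/(c\\tau+d)$}} the individual factors within Eq.~(\\ref{partfZ}) scale as\n\\begin{equation}\n \\tau_2^{-1} ~\\to~ (c\\tau+d)\\,\\overline{(c\\tau+d)}\\, \\tau_2^{-1}~,~~~~~\n \\eta^{-24} ~\\sim~ (c\\tau+d)^{-12}\\, \\eta^{-24}~,~~~~~\n \\overline{\\eta}^{-12} ~\\sim~ \\overline{(c\\tau+d)}^{\\,-6}\\, \\overline{\\eta}^{-12}~,\n\\end{equation}\nso that cancellation of the holomorphic and anti-holomorphic prefactors against those arising from the $f_i$ and $g_{\\overline{\\imath}}$ functions requires {\\mbox{$1-12+k=0$}} and {\\mbox{$1-6+{\\overline{k}}=0$}}, whereupon we indeed recover {\\mbox{$k=11$}} and {\\mbox{${\\overline{k}}=5$}}.\n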
\nIn general, \nfor strings realizable through free-field constructions, \nthese $f_i$ and $g_{\\overline{\\imath}}$ functions produce the lattice sum in Eq.~(\\ref{preZQdef}) because\nthey can be written in the factorized forms \n\\begin{equation}\n f_i~\\sim~ \\prod_{\\ell=1}^{22} \\vartheta\n \\begin{bmatrix} \\alpha_\\ell^{(i)} \\\\ \\beta_\\ell^{(i)} \\end{bmatrix}~,~~~~~\n g_{\\overline{\\imath}}~\\sim~ \\prod_{\\ell=1}^{10} \\vartheta\n \\begin{bmatrix} \\alpha_\\ell^{({\\overline{\\imath}})} \\\\ \\beta_\\ell^{({\\overline{\\imath}})} \\end{bmatrix}~,~~\n\\label{partft}\n\\end{equation}\nwhere each $\\vartheta$-function factor \nis the trace over the $\\ell^{\\rm th}$ direction $Q_\\ell$ of the charge lattice:\n\\begin{equation}\n \\vartheta_\\ell(\\tau) \n ~\\equiv~ \\vartheta \\begin{bmatrix} \\alpha_\\ell \\\\ \\beta_\\ell \\end{bmatrix}(\\tau)\n ~\\equiv~\n \\sum_{Q_\\ell\\in \\mathbb{Z}+\\alpha_\\ell} e^{2\\pi i \\beta_\\ell Q_\\ell} \\,q^{Q_\\ell^2\/2}~.~~\n\\label{thetadef}\n\\end{equation}\nIndeed, the $\\vartheta_\\ell$-functions transform \nunder modular transformations as in Eq.~(\\ref{fis}),\nwith modular weight $1\/2$.\nThe modular invariance of the underlying string theory\nthen ensures that there exists a special\n$(10,22)$-dimensional ``spin-statistics vector'' ${\\bf S}$ \nsuch that we may identify \nthe spacetime fermion number $F$ within Eq.~(\\ref{Zform})\nas {\\mbox{${F = 2 {\\bf Q}\\cdot {\\bf S}}$}}~(mod~2)\nfor any state with charge ${\\bf Q}$,\nwhere the dot notation `$\\cdot$' signifies the Lorentzian \n(left-moving minus right-moving) dot product.\nModular invariance \nalso implies \nthat the shifted charges ${\\bf Q} -{\\bf S}$ \nassociated with the allowed string states\ntogether form a Lorentzian lattice which is both odd and self-dual.\nIt is with this understanding that \nwe refer to the charges ${\\bf Q}$ themselves as populating a ``lattice''.\nIndeed, it is the self-duality property of the \nshifted charge lattice 
$\\lbrace {\\bf Q}-{\\bf S}\\rbrace$ which guarantees \nthat the $f_i$ and $g_{\\overline{\\imath}}$ functions in Eq.~(\\ref{partft})\ntransform covariantly \nunder the modular group,\nas in Eq.~(\\ref{fis}).\n\nFor later purposes, we simply observe that the\ngeneral structure given in Eq.~(\\ref{partfZ}) is \ntypical of the modular-invariant quantities that arise\nas heterotic-string Fock-space traces.\nIndeed, a general quantity of the form\n\\begin{equation}\n \\tau_2^\\kappa~\n \\frac{1}{\\overline{\\eta}^{12} \\eta^{24}}\\, \n \\sum_{{\\overline{\\imath}},i} N_{{\\overline{\\imath}} i} ~\\overline{g_{\\overline{\\imath}} (\\tau)} f_i(\\tau)~\n\\label{genexp}\n\\end{equation}\ncannot be modular invariant unless \nthe $N_{{\\overline{\\imath}} i}$ are chosen correctly\nand the corresponding $f_i$ and $g_{\\overline{\\imath}}$ functions transform\nas in Eq.~(\\ref{fis})\nwith \n\\begin{equation}\n k-12 ~=~ {\\overline{k}}-6 ~=~ \\kappa~.\n\\label{genexp2}\n\\end{equation}\nWhile {\\mbox{$\\kappa = -1$}} for the partition \nfunctions of four-dimensional heterotic strings, as\ndescribed above,\nwe shall see that other important Fock-space traces can have different\nvalues of $\\kappa$.\nFor example, the partition functions of heterotic strings\nin $D$ spacetime dimensions have {\\mbox{$\\kappa = 1-D\/2$}}, with\ncorresponding changes to the dimensionalities of \ntheir associated charge lattices.\n\n\n\\subsection{Higgsing and charge-lattice deformations} \\label{latdeform}\n\n\nIn general, different string models exhibit different spectra and thus have different charge lattices.\nHowever, Higgsing a theory changes its spectrum in certain dramatic ways, \nsuch as by giving mass to formerly massless gauge bosons and thereby breaking the associated gauge symmetries.\nThus, in string theory, Higgsing \ncan ultimately be viewed as a process of \ntransforming the charge \nlattice from one configuration to another. 
\n\nOf course, \nmodular invariance must \nbe maintained throughout the Higgsing process. \nIndeed, it is only in this way that\nwe can regard the Higgsing process as a fully string-theoretic \noperation that \nshifts the string vacuum state within the space of self-consistent string vacua.\nHowever, modular invariance then implies that\nthe charge-lattice transformations induced by Higgsing are not arbitrary. \nInstead, they must preserve those charge-lattice properties, \nas described above, which guarantee the modular invariance of the theory.\n\nThis in turn tells us that the process of Higgsing is likely to be far more complicated\nin string theory than it is in ordinary quantum field theory.\nIn general, \nthe charge lattice receives contributions from all sectors of the theory,\nand modular transformations mix these different contributions in highly non-trivial ways.\nThus the process of Higgsing a given gauge symmetry within a given sector of a string\nmodel generally involves not only the physics associated with that gauge symmetry \nbut also the physics of all of the {\\it other}\\\/ sectors of the theory as well, both twisted and untwisted,\nand the properties of the {\\it other}\\\/ gauge symmetries, including gravity, that might also be present \nin the string model --- even if these other gauge symmetries are apparently completely disjoint from \nthe symmetry being Higgsed. \nWe shall see explicit examples of this below.\nMoreover, in string theory the dynamics of the Higgs VEV --- and indeed the dynamics of all string moduli ---\nis generally governed by an effective potential \nwhich is nothing but the vacuum energy of the theory, expressed as a function of this VEV.~\nThus the overall dynamics associated with Higgsing can be rather subtle: \nthe Higgs VEV determines the deformations of the charge lattice, and these deformations\nalter the vacuum energy which in turn determines the VEV. 
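\nSchematically, this self-consistency structure can be summarized as\n\\begin{equation}\n \\phi ~\\longrightarrow~ \\lbrace {\\bf Q}(\\phi) \\rbrace ~\\longrightarrow~ \\Lambda(\\phi)~,~~~~~\n \\left. {d\\Lambda\\over d\\phi}\\right|_{\\langle\\phi\\rangle} ~=~ 0~,\n\\end{equation}\nwhere $\\Lambda(\\phi)$ denotes this vacuum energy, viewed as a function of the field value: the field value deforms the charge lattice, the deformed lattice determines the vacuum energy, and stationarity of the vacuum energy in turn selects the VEV.\n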
\n\n\n\nIn this paper, our goal is to calculate the mass of the physical Higgs scalar field that emerges \nin the Higgsed phase ({\\it i.e.}\\\/, after the \ntheory has already been Higgsed). We shall therefore assume that our theory contains a scalar Higgs field\nwhich has already settled into the new minimum of its potential.\nThis will allow us to \nsidestep the (rather complex) model-dependent issue \nconcerning the manner in which the Higgsing itself occurs, and instead\nfocus on the {\\it perturbations}\\\/ of the field around this new minimum.\nIn this way we will be able to determine \nthe curvature of the scalar potential at this local minimum, and thereby obtain the corresponding Higgs mass. \n\nTo do this, we shall begin by \nexploring the manner in which a general charge lattice is deformed\nas we vary a scalar Higgs field away from the minimum of its potential.\nOur discussion will be completely general, and we shall defer to Sect.~\\ref{sec:EWHiggs} \nany assumptions that\nmight be specific to the particular Higgs field responsible for electroweak symmetry breaking.\nFor concreteness, we shall let\nour scalar field have\na value $\\langle \\phi \\rangle + \\phi$, where $\\langle \\phi\\rangle$ is the Higgs VEV \nat the minimum of its potential and where $\\phi$ describes the fluctuations away from this point.\nIf $\\lbrace {\\bf Q}_L,{\\bf Q}_R\\rbrace$ are the charge vectors associated with a\ngiven string state in the Higgsed phase ({\\it i.e.}\\\/, at the minimum of the potential, when {\\mbox{$\\phi=0$}}),\nthen turning on $\\phi$ corresponds to deforming these charge vectors. \nIn general, for {\\mbox{$\\phi\/\\langle \\phi\\rangle\\ll 1$}}, we shall parametrize these deformations\naccording to\n\\begin{eqnarray} \n {\\bf Q}_L &\\to& ~{\\bf Q}_L + \\sqrt{\\alpha'} \\phi {\\bf Q}_a + {1\\over 2} \\alpha' \\phi^2 {\\bf Q}_b~+~... 
~~~~~ \\nonumber\\\\\n {\\bf Q}_R &\\to& ~{\\bf Q}_R + \\sqrt{\\alpha'} \\phi \\tilde {\\bf Q}_a + {1\\over 2} \\alpha' \\phi^2 \\tilde {\\bf Q}_b~+~...~,\n\\label{Qdeform}\n\\end{eqnarray}\nwhere ${\\bf Q}_a$, ${\\bf Q}_b$, $\\tilde{\\bf Q}_a$, and $\\tilde {\\bf Q}_b$ are deformation charge vectors of \ndimensionalities $22$, $22$, $10$, and $10$ respectively.\nIndeed, the forms of these vectors are closely correlated with the specific gauge symmetries\nbroken by the Higgsing process,\nand as such these vectors continue to \ngovern the fluctuations\nof the Higgs scalar around this Higgsed minimum.\n\n\nIn this paper, we shall keep our analysis as general as possible. \nAs such, we shall not make any specific assumptions regarding\nthe forms of these vectors.\nHowever, as discussed above, we know that the Higgsing process --- and even the fluctuations around the Higgsed minimum \nof the potential --- should not break modular invariance. In particular,\nthe corresponding charge-lattice deformations in Eq.~(\\ref{Qdeform}) should not \ndisturb level-matching. This means that the value of the difference ${\\bf Q}_L^2 - {\\bf Q}_R^2$ should not be \ndisturbed when $\\phi$ is taken to non-zero values, which in turn means that \nthis difference should be independent of $\\phi$.\nThis then constrains the choices for \nthe vectors ${\\bf Q}_a$, ${\\bf Q}_b$, $\\tilde{\\bf Q}_a$, and $\\tilde {\\bf Q}_b$.\n\nTo help simplify the notation, let us assemble a single 32-dimensional charge vector \n{\\mbox{${\\bf Q} \\equiv ({\\bf Q}_L,{\\bf Q}_R)^t$}} (where `$t$' signifies the transpose). 
\nRecalling that the dot notation `$\\cdot$' signifies \nthe Lorentzian (left-moving minus right-moving) contraction of vector indices,\nas appropriate for a Lorentzian charge lattice,\nwe therefore require that {\\mbox{${\\bf Q}^2 \\equiv {\\bf Q}^t\\cdot {\\bf Q}$}} be $\\phi$-independent for all $\\phi$.\nGiven the above shifts, we find that terms within {\\mbox{${\\bf Q}^t\\cdot {\\bf Q}$}} \nwhich are respectively linear and quadratic in $\\phi$ will cancel provided \n\\begin{eqnarray}\n ({\\bf Q}^t_a, \\tilde {\\bf Q}^t_a) \\cdot {\\bf Q} ~&=&~ 0~,~~~~~\\nonumber\\\\\n ({\\bf Q}_b^t, \\tilde {\\bf Q}^t_b) \\cdot {\\bf Q} + ({\\bf Q}_a^t, \\tilde {\\bf Q}_a^t ) \\cdot \n \\begin{pmatrix}\n {\\bf Q}_a\\\\\n \\tilde {\\bf Q}_a\n \\end{pmatrix}\n ~&=&~0~.\n\\label{levelmatching}\n\\end{eqnarray}\nThese are thus modular-invariance\nconstraints on the allowed choices for the shift vectors ${\\bf Q}_a$, ${\\bf Q}_b$, $\\tilde{\\bf Q}_a$, and $\\tilde {\\bf Q}_b$.\n\nWe can push these constraints one step further if we write these shift vectors in terms of $\\bf Q$ itself via relations of the form\n\\begin{equation}\n \\begin{pmatrix}\n {\\bf Q}_a\\\\\n \\tilde {\\bf Q}_a\n \\end{pmatrix}\n = {\\cal T} \\cdot {\\bf Q} ~,~~~~~\n \\begin{pmatrix}\n {\\bf Q}_b\\\\ \\tilde {\\bf Q}_b\n \\end{pmatrix}\n = {\\cal N} \\cdot {\\bf Q} ~\n\\label{shiftQ}\n\\end{equation}\nwhere ${\\cal T}$ and ${\\cal N}$ are {\\mbox{$(32\\times 32)$}}-dimensional matrices \nand where `$\\cdot$' retains its Lorentzian signature for the index contraction that underlies matrix multiplication. \nThe first constraint equation above then tells us that {\\mbox{${\\bf Q}^t \\cdot {\\cal T}\\cdot {\\bf Q}=0$}}, which implies that ${\\cal T}$ must be antisymmetric,\nwhile the second constraint equation tells us that {\\mbox{${\\bf Q}^t \\cdot ( {\\cal N}+ {\\cal T}^t\\cdot {\\cal T})\\cdot {\\bf Q} =0$}}, which implies that\n{\\mbox{${\\cal N}+ {\\cal T}^t \\cdot {\\cal T}$}} must also be antisymmetric. 
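\nIndeed, writing the deformations of Eq.~(\\ref{Qdeform}) in the matrix form of Eq.~(\\ref{shiftQ}), the expansion of the Lorentzian norm reads\n\\begin{eqnarray}\n {\\bf Q}(\\phi)^t \\cdot {\\bf Q}(\\phi) ~&=&~ {\\bf Q}^t \\cdot {\\bf Q} ~+~ 2 \\sqrt{\\alpha'}\\, \\phi~ {\\bf Q}^t \\cdot {\\cal T} \\cdot {\\bf Q} \\nonumber\\\\\n && ~+~ \\alpha' \\phi^2~ {\\bf Q}^t \\cdot \\left( {\\cal N} + {\\cal T}^t \\cdot {\\cal T} \\right) \\cdot {\\bf Q} ~+~ {\\cal O}(\\phi^3)~,\n\\end{eqnarray}\nfrom which we see directly that the antisymmetry of ${\\cal T}$ eliminates the ${\\cal O}(\\phi)$ term, while the antisymmetry of {\\mbox{${\\cal N}+ {\\cal T}^t \\cdot {\\cal T}$}} eliminates the ${\\cal O}(\\phi^2)$ term, as claimed.\n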
\nIt turns out that the precise value of {\\mbox{${\\cal N} + {\\cal T}^t \\cdot {\\cal T}$}} will have no bearing on the Higgs mass. We will therefore\nset it to zero (which is indeed antisymmetric), implying that {\\mbox{${\\cal N} = - {\\cal T}^t \\cdot {\\cal T}$}}.~ \nThus, while ${\\cal T}$ is antisymmetric, ${\\cal N}$ is symmetric.\nIndeed, if we write our ${\\cal T}$-matrix in terms of left- and right-moving submatrices ${\\cal T}_{ij}$ in the form\n\\begin{equation}\n {\\cal T} ~=~ \\begin{pmatrix}\n {\\cal T}_{11} & {\\cal T}_{12} \\\\\n {\\cal T}_{21} & {\\cal T}_{22} \n \\end{pmatrix}~,\n\\end{equation}\nthen we must have {\\mbox{${\\cal T}_{11}^t = -{\\cal T}_{11}$}}, {\\mbox{${\\cal T}_{22}^t = -{\\cal T}_{22}$}}, and {\\mbox{${\\cal T}_{12}^t = -{\\cal T}_{21}$}}.\nLikewise, we then find that\n\\begin{eqnarray}\n {\\cal N}_{11} ~&=&~ - {\\cal T}_{11}^t {\\cal T}_{11} + {\\cal T}_{21}^t {\\cal T}_{21}\\nonumber\\\\ \n {\\cal N}_{12} ~&=&~ - {\\cal T}_{11}^t {\\cal T}_{12} + {\\cal T}_{21}^t {\\cal T}_{22}\\nonumber\\\\ \n {\\cal N}_{21} ~&=&~ - {\\cal T}_{12}^t {\\cal T}_{11} + {\\cal T}_{22}^t {\\cal T}_{21}\\nonumber\\\\ \n {\\cal N}_{22} ~&=&~ - {\\cal T}_{12}^t {\\cal T}_{12} + {\\cal T}_{22}^t {\\cal T}_{22}~.\n\\label{NTrelations}\n\\end{eqnarray}\n \n\n\\subsection{Example: The Standard-Model Higgs\\label{sec:EWHiggs}}\n\nIn general, within any given string model, \nthe deformation vectors \n${\\bf Q}_a$, ${\\bf Q}_b$, $\\tilde{\\bf Q}_a$, and $\\tilde {\\bf Q}_b$ in Eq.~(\\ref{Qdeform}) \ndepend on the particular charge vector $({\\bf Q}_R,{\\bf Q}_L)$ being deformed.\nHowever the ${\\cal T}$- and ${\\cal N}$-matrices in Eq.~(\\ref{shiftQ}) are universal for all charge vectors within the model.\nIt is therefore these matrices which carry all of the relevant information concerning the response of the theory \nto fluctuations of the particular Higgs field under study.\nIn general, these matrices depend on how the gauge groups and corresponding 
Higgs field are embedded\nwithin the charge lattice.\nThus the precise forms of these matrices depend on the particular string model under study and the Higgs field\nto which it gives rise.\n\nTo illustrate this point, it may be helpful to consider the special case of the Standard-Model (SM) Higgs.\nFor concreteness, we shall work within the framework of heterotic string models \nin which the Standard Model itself is realized at affine level {\\mbox{$k=1$}} through a standard level-one $SO(10)$ embedding.\nIn the following we shall adhere to the conventions in Ref.~\\cite{Dienes:1995sq}.\nSince $SO(10)$ has rank $5$, this group can be minimally embedded within a \nfive-dimensional sublattice $\\lbrace Q_1,Q_2,Q_3,Q_4,Q_5\\rbrace$ \nwithin the full 22-dimensional left-moving lattice $\\lbrace {\\bf Q}_L\\rbrace$. \nWithin this sublattice, we shall take\nthe {\\mbox{$\\ell=1,2$}} directions as corresponding to \nthe {\\mbox{$U(2) = SU(2)\\times U(1)$}} electroweak subgroup of $SO(10)$, \nwhile the {\\mbox{$\\ell =3,4,5$}} directions will correspond to the {\\mbox{$U(3)= SU(3)\\times U(1)$}} color subgroup.\nBy convention we will take the $SU(2)_L$ representations to lie along the line perpendicular to $(1,1,0,0,0)$\nwithin the two-dimensional $U(2)$ sublattice,\nand the $SU(3)_c$ representations to lie within the two-dimensional plane perpendicular to $(0,0,1,1,1)$\nwithin the three-dimensional $U(3)$ sublattice.\nIt then follows that any state with charge vector ${\\bf Q}_L$ has $SU(2)$ quantum numbers determined by projecting ${\\bf Q}_L$ onto the \n$SU(2)$ line [thereby yielding an $SU(2)$ weight in the corresponding $SU(2)$ weight system]\nand $SU(3)$ quantum numbers determined by projecting ${\\bf Q}_L$ onto the $SU(3)$ plane [thereby yielding an $SU(3)$ weight within the \ncorresponding $SU(3)$ weight system].\nLikewise, the $SO(10)$-normalized hypercharge $Y$ of any state with left-moving charge vector\n${\\bf Q}_L$ is given by {\\mbox{$Y=\\sum_{\\ell=1}^5 
a_{Y}^{(\\ell)} Q_\\ell$}} where \n\\begin{equation}\n {\\bf a}_{Y}~=~ (\\, {\\textstyle{1\\over 2}},\\, {\\textstyle{1\\over 2}}, ~ -\\mbox{\\small ${\\frac{1}{3}}$},\\, -\\mbox{\\small ${\\frac{1}{3}}$},\\, -\\mbox{\\small ${\\frac{1}{3}}$}\\, )~\n\\end{equation}\n(with all other components vanishing).\nThus, {\\mbox{$Y\\equiv {\\bf a}_{Y} \\cdot {\\bf Q}_L$}}. \nIndeed we see that {\\mbox{$k_Y\\equiv 2 {\\bf a}_{Y}\\cdot {\\bf a}_{Y} = 5\/3$}}, as appropriate for the standard $SO(10)$ embedding\n(as well as other non-standard $SO(10)$ embeddings~\\cite{Dienes:1995sq}).\nIn a similar way, the electromagnetic charge $q_{\\rm EM}$ of any state with charge vector ${\\bf Q}_L$ is given by \n{\\mbox{$q_{\\rm EM}= {\\bf a}_{\\rm EM} \\cdot {\\bf Q}_L$}}, where\n\\begin{equation}\n {\\bf a}_{\\rm EM}~=~ (\\, 0, ~ 1, ~ -\\mbox{\\small ${\\frac{1}{3}}$}, \\, -\\mbox{\\small ${\\frac{1}{3}}$},\\, -\\mbox{\\small ${\\frac{1}{3}}$} )~\n\\end{equation}\n(with all other components vanishing).\nAs a check we verify that {\\mbox{$T_3 = {\\bf a}_{T_3} \\cdot {\\bf Q}_L$}},\nwhere\n{\\mbox{${\\bf a}_{T_3} = {\\bf a}_{\\rm EM} - {\\bf a}_{Y} = (-{\\textstyle{1\\over 2}}, {\\textstyle{1\\over 2}}, 0,0,0) = {\\textstyle{1\\over 2}} {\\bf Q}_{T^+}$}}\nwhere ${\\bf Q}_{T^+}$ is the charge vector (or root vector) associated with \nthe $SU(2)$ gauge boson with positive $T_3$ charge.\n \nThus far, we have focused on the gauge structure of the theory. As we have seen, the corresponding charge vectors\nfollow our usual group-theoretic expectations, just as they would in ordinary quantum field theory.\nHowever, the charge vectors associated with the SM matter states \nin string theory are far more complex than would be expected in quantum field theory\nand actually spill beyond the $SO(10)$ sublattice. 
\n\n\nTo see why this is so, it is perhaps easiest to \nconsider the original $SO(10)$ theory {\\it prior}\\\/ to electroweak breaking.\nIn this phase of the theory, the SM matter content consists of massless fermion and Higgs fields transforming \nin the ${\\bf 16}$ and ${\\bf 10}$ representations of $SO(10)$, respectively.\nThe former representation has charge vectors \nwith $SO(10)$-sublattice components {\\mbox{$Q^{(f)}_\\ell =\\pm 1\/2$}} for each $\\ell$ (with an odd net number of minus signs),\nwhile the latter has\n{\\mbox{$Q^{(\\phi)}_\\ell =\\pm \\delta_{\\ell k}$}} where {\\mbox{$k=1,2,...,5$}}.\nThus, the ${\\bf 16}$ and ${\\bf 10}$ representations have \nconformal dimensions {\\mbox{$h_{\\bf 16}= 5\/8$}} and {\\mbox{$h_{\\bf 10}=1\/2$}}.\nIndeed, according to the gauge embeddings discussed above,\nthe particular Higgs states which are electrically neutral have {\\mbox{$Q_\\ell =\\pm \\delta_{\\ell 1}$}}. \nHowever, as a result of the non-trivial left-moving heterotic-string\nvacuum energy {\\mbox{$E_L= -1$}},\nany massless string state must correspond to worldsheet excitations contributing a total \nleft-moving conformal dimension {\\mbox{$h_L=1$}}.\nThus, even within the $SO(10)$ embedding specified above, string consistency constraints require \nthat the SM fermion and Higgs states carry non-trivial \ncharges not only {\\it within}\\\/ the $SO(10)$ sublattice $\\lbrace Q_1,Q_2,...,Q_5\\rbrace$ {\\it but also beyond it} --- {\\it i.e.}\\\/, \nelsewhere in the 17 remaining left-moving lattice directions {\\mbox{${\\bf Q}_{\\rm int} \\equiv \\lbrace Q_6,...,Q_{\\rm 22}\\rbrace$}}\nwhich {\\it a priori}\\\/ correspond to gauge symmetries beyond those of the SM (such as those of potential hidden sectors).\nIndeed, these additional excitations must contribute additional left-moving conformal dimensions $3\/8$ and $1\/2$ for\nthe SM matter and Higgs fields respectively, corresponding to\n{\\mbox{$[{\\bf Q}^{(f)}_{\\rm int}]^2 = 3\/4$}} for the fermions and 
{\\mbox{$[{\\bf Q}^{(\\phi)}_{\\rm int}]^2 = 1$}} for the Higgs.\n\n \nA similar phenomenon also occurs within the 10-dimensional {\\it right-moving}\\\/ charge lattice, with \ncomponents $\\lbrace \\tilde Q_1,..., \\tilde Q_{10}\\rbrace$.\nThe component associated with the non-zero component of the {\\bf S}-vector discussed\nbelow Eq.~(\\ref{thetadef}) --- henceforth chosen as $\\tilde Q_1$ --- \ndescribes the spacetime spin-helicity of the state. \nAs such, we must have {\\mbox{$\\tilde Q_1^{(f)}=\\pm 1\/2$}} for the SM fermions\nand {\\mbox{$\\tilde Q_1^{(\\phi)}=0$}} for the scalar Higgs. \nOf course, the right-moving side of the heterotic string has \n{\\mbox{$E_R= -1\/2$}}, requiring that all massless string states have total right-moving conformal dimensions {\\mbox{$h_R=1\/2$}}. \nWe thus find that the SM fermion and Higgs fields \nmust have additional nine-dimensional\ncharge vectors {\\mbox{$\\tilde {\\bf Q}_{\\rm int}\\equiv \\lbrace\\tilde Q_2,..., \\tilde Q_{10}\\rbrace$}}\n(presumably corresponding to additional {\\it right-moving}\\\/ gauge symmetries)\nsuch that \n{\\mbox{$[\\tilde {\\bf Q}^{(f)}_{\\rm int}]^2 = 3\/4$}}\nand \n{\\mbox{$[\\tilde {\\bf Q}^{(\\phi)}_{\\rm int}]^2 = 1$}}.\n\nWe see, then, that the electrically neutral Higgs field prior to electroweak symmetry breaking\nmust have a total $32$-dimensional charge vector of the form\n\\begin{eqnarray}\n {\\bf Q}_\\phi ~&\\equiv&~ \n ({\\bf Q}_L^{(\\phi)}\\,|\\, {\\bf Q}_R^{(\\phi)}) \\nonumber\\\\ \n ~&=&~ \n (1,0,0,0,0, \\, {\\bf Q}_{\\rm int}^{(\\phi)} \\,|\\, 0 , \\, \\tilde {\\bf Q}_{\\rm int}^{(\\phi)} ) \n\\label{Higgsform}\n\\end{eqnarray}\nwhere {\\mbox{$[{\\bf Q}_{\\rm int}^{(\\phi)}]^2 = [\\tilde {\\bf Q}_{\\rm int}^{(\\phi)}]^2 = 1$}}. 
\nIn general, the specific forms of\n${\\bf Q}_{\\rm int}^{(\\phi)}$\nand \n$\\tilde {\\bf Q}_{\\rm int}^{(\\phi)}$\ndepend on the specific string model and the spectrum beyond the Standard Model.\nHowever, those components which are specified within Eq.~(\\ref{Higgsform}) are guaranteed \nby the underlying $SO(10)$ structure \nand by the requirement that the Higgs be electrically neutral.\nOf course, the process of electroweak symmetry breaking can in principle alter the form of this vector.\nHowever, we know that $U(1)_{\\rm EM}$ necessarily remains unbroken.\nThus, even if the forms of the particular ``internal'' vectors ${\\bf Q}_{\\rm int}^{(\\phi)}$ and \n$\\tilde {\\bf Q}_{\\rm int}^{(\\phi)}$ \nare shifted under electroweak symmetry breaking,\nthe zeros in the charge vector in Eq.~(\\ref{Higgsform})\n ensure the electric neutrality of the Higgs field and must therefore be preserved.\nThis remains true not only for the physical Higgs field after electroweak symmetry breaking, but also\nfor its quantum fluctuations in the Higgsed phase.\n\nThis observation immediately allows us to constrain the form of the ${\\cal T}$-matrices which parametrize\nthe response of the charge lattice to small fluctuations of the Higgs field around its minimum.\nBecause the zeros in the charge vector in Eq.~(\\ref{Higgsform}) must remain vanishing --- and\nindeed because the electromagnetic charges and spin-statistics of {\\it all}\\\/ string states must remain\nunaltered \nunder such fluctuations ---\nwe see that the (necessarily anti-symmetric) ${\\cal T}$-matrix can at most have the general form\n\\begin{equation}\n{\\cal T} ~\\sim~ \\left(\n \\begin{array}{cccccc|cc}\n ~ & ~ & ~ & ~ & ~ & {\\bf t} & 0 & \\tilde{\\bf t} \\\\ \n ~ & ~ & ~ & ~ & ~ & {\\bf 0} & 0 & \\tilde{\\bf 0} \\\\ \n ~ & ~ & {\\bf 0}_{5\\times 5} & ~ & ~ & {\\bf 0} & 0 & \\tilde {\\bf 0} \\\\ \n ~ & ~ & ~ & ~ & ~ & {\\bf 0} & 0 & \\tilde {\\bf 0} \\\\ \n ~ & ~ & ~ & ~ & ~ & {\\bf 0} & 0 & \\tilde {\\bf 0} 
\\\\ \n -{\\bf t}^t & {\\bf 0}^t & {\\bf 0}^t & {\\bf 0}^t & {\\bf 0}^t & {\\bf t}_{11} & {\\bf 0}^t & {\\bf t}_{12} \\\\ \n \\hline \n \\rule{0pt}{2.5ex} 0 & 0 & 0 & 0 & 0 & {\\bf 0} & 0 & \\tilde{\\bf 0} \\\\ \n -\\tilde \n {\\bf t}^t & \\tilde {\\bf 0}^t & \\tilde {\\bf 0}^t & \\tilde{\\bf 0}^t & \\tilde{\\bf 0}^t & -{\\bf t}_{12}^t & \\tilde {\\bf 0}^t & \n {\\bf t}_{22} \n\\end{array}\n\\right)\n\\label{Tmatrixform}\n\\end{equation}\nwhere \n${\\bf t}$ is an arbitrary \n 17-dimensional row vector;\nwhere $\\tilde {\\bf t}$ is an arbitrary \n9-dimensional row vector;\nwhere ${\\bf t}_{11}$, ${\\bf t}_{12}$, and ${\\bf t}_{22}$\nare arbitrary \nmatrices of dimensionalities {\\mbox{$17\\times 17$}}, {\\mbox{$9\\times 17$}}, and {\\mbox{$9\\times 9$}}, respectively, with\n${\\bf t}_{11}$ and ${\\bf t}_{22}$ antisymmetric;\nand where ${\\bf 0}$ and $\\tilde {\\bf 0}$ are respectively $17$- and $9$-dimensional\nzero row vectors. \nIndeed, as we have seen,\nonly this form of the ${\\cal T}$-matrix can preserve the electromagnetic charges and spin-statistics\nof the string states \nunder small shifts in the Higgs field around its new minimum, assuming a heterotic string model\nwith a standard level-one $SO(10)$ embedding.\nThe precise forms of ${\\bf t}$, $\\tilde {\\bf t}$, \n${\\bf t}_{11}$, ${\\bf t}_{12}$, and ${\\bf t}_{22}$ then depend on \nmore model-specific details of how the Higgs is realized within the theory --- details which go beyond\nthe $SO(10)$ embedding.\n\n\nAs indicated above, this is only one particular example of the kinds of ${\\cal T}$-matrices that can occur.\nHowever, all of the results of this paper will be completely general,\nand will not rest on this particular example.\n\n\n\\subsection{Calculating the Higgs mass}\n\nWe can now use the general results in Sect.~\\ref{latdeform}\nto calculate the mass of $\\phi$.\nIn general, this mass can be defined as\n\\begin{equation}\n m_\\phi^2 ~\\equiv~ {d^2 \\Lambda(\\phi)\\over d \\phi^2 } 
\\biggl|_{\\phi=0}\n\\label{higgsdef}\n\\end{equation}\nwhere\n\\begin{equation}\n \\Lambda(\\phi) ~\\equiv~ -{{\\cal M}^4\\over 2} \\int_{\\cal F} \\dmu \\, {\\cal Z}(\\tau,{\\overline{\\tau}},\\phi)~.\n\\label{Lambdaphi}\n\\end{equation}\nIndeed, $\\Lambda(\\phi)$ is the vacuum energy that governs the dynamics of $\\phi$.\nHere\n$ d^2\\tau \/\\tau_2^2$ is the modular-invariant integration measure, \n${\\cal F}$ is the fundamental domain of the modular group\n\\begin{equation}\n {\\cal F} ~\\equiv~ \\lbrace \\tau :\\, -{\\textstyle{1\\over 2}} <\\tau_1\\leq {\\textstyle{1\\over 2}}, \\, \\tau_2>0,\\, |\\tau|\\geq 1 \\rbrace~,\n\\label{Fdef}\n\\end{equation}\nand {\\mbox{${\\cal M}\\equiv M_s\/(2\\pi)$}} \nis the reduced string scale.\nIn this expression, following Eq.~(\\ref{preZQdef}), the shifted partition function is given by\n\\begin{equation}\n {\\cal Z}(\\tau,{\\overline{\\tau}},\\phi) ~=~ \\tau_2^{-1} {1\\over \\overline{\\eta}^{12} \\eta^{24}} \\, \\sum_{{\\bf Q}_L,{\\bf Q}_R} \n (-1)^F {\\overline{q}}^{{\\bf Q}_R^2\/2} \n q^{{\\bf Q}_L^2\/2}~\n\\label{ZQdef}\n\\end{equation}\nwhere the left- and right-moving charge vectors ${\\bf Q}_L$ and ${\\bf Q}_R$ are now deformed as in Eq.~(\\ref{Qdeform})\nand thus depend on $\\phi$.\n\nGiven this definition, we begin by evaluating the leading contribution to the Higgs mass by taking \npartial derivatives of ${\\cal Z}$, {\\it i.e.}\\\/,\n\\begin{equation}\n {\\partial^2{\\cal Z} \\over \\partial\\phi^2} ~=~ \n \\tau_2^{-1} {1\\over \\overline{\\eta}^{12} \\eta^{24}} \\, \\sum_{{\\bf Q}_L,{\\bf Q}_R\\in L} \n (-1)^F \\,X\\, \\,{\\overline{q}}^{{\\bf Q}_R^2\/2} q^{{\\bf Q}_L^2\/2}~\n\\label{eq:Zexp}\n\\end{equation}\nwhere the summand insertion $X$ is given by\n\\begin{equation}\n X ~\\equiv~ \\pi i \n {\\partial^2\\over \\partial\\phi^2} (\\tau {\\bf Q}_L^2 - {\\overline{\\tau}} {\\bf Q}_R^2 ) \n - \\pi^2 \\left\\lbrack {\\partial\\over \\partial\\phi} (\\tau {\\bf Q}_L^2 - {\\overline{\\tau}} {\\bf 
Q}_R^2)\\right\\rbrack^2~.\n\\label{stuff}\n\\end{equation}\nNote that it is the {\\it partial}\\\/ derivative $\\partial^2\/\\partial\\phi^2$ in Eq.~(\\ref{eq:Zexp}) which provides the \nleading contribution to the Higgs mass;\nwe shall return to this point shortly.\nExpanding $X$ in powers of $\\tau_1$ and $\\tau_2$ and then setting {\\mbox{$\\phi= 0$}} yields\n\\begin{equation}\n X\\bigl|_{\\phi=0} ~=~ \n A \\,\\tau_1 + B \\,\\tau_2 + C \\,\\tau_1^2 + D \\,\\tau_2^2 + E \\,\\tau_1\\tau_2~,\n\\end{equation}\nwhere\n\\begin{eqnarray} \n A~&=&~0~,\\nonumber\\\\\n B~&=&~ -2\\pi\\alpha' (\n {\\bf Q}_a^2 + \\tilde {\\bf Q}_a^2 + {\\bf Q}_b^t {\\bf Q}_L + \\tilde {\\bf Q}_b^t {\\bf Q}_R )~,\\nonumber\\\\ \n C~&=&~0~,\\nonumber\\\\\n D~&=&~ 4\\pi^2 \\alpha' \\,({\\bf Q}_a^t {\\bf Q}_L + \\tilde {\\bf Q}_a^t {\\bf Q}_R)^2~ , \\nonumber\\\\\n E~&=&~0~.\n\\label{intermed}\n\\end{eqnarray} \nNote that $A$, $C$, and $E$ each vanish as the result of the constraints in \nEq.~(\\ref{levelmatching}). This is consistent, as these are the quantities which are proportional \nto powers of $\\tau_1$, which multiplies ${\\bf Q}_L^2-{\\bf Q}_R^2$ within Eq.~(\\ref{stuff}). 
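The algebraic structure of the insertion $X$ in Eq.~(\ref{stuff}) is simply that of the chain rule: writing each summand as $e^{f(\phi)}$ with {\mbox{$f = \pi i\, (\tau {\bf Q}_L^2 - {\overline{\tau}} {\bf Q}_R^2)$}}, we have {\mbox{$\partial^2_\phi e^f = (f'' + f'^2)\, e^f = X\, e^f$}}. The following Python sketch verifies this numerically for a purely hypothetical $\phi$-dependence of a single pair of charges (illustrative only, not the actual deformation of Eq.~(\ref{Qdeform})):

```python
import cmath

tau = complex(0.2, 1.3)

# Hypothetical (illustrative) phi-dependence of one left- and one right-moving
# charge; the true deformation is fixed by the T-matrix and is not used here.
def QL(phi): return 1.0 + 0.5 * phi
def QR(phi): return 0.5 - 0.3 * phi

def summand(phi):
    # q^{QL^2/2} * qbar^{QR^2/2} = exp[ pi*i*(tau*QL^2 - conj(tau)*QR^2) ]
    return cmath.exp(1j * cmath.pi * (tau * QL(phi)**2
                                      - tau.conjugate() * QR(phi)**2))

def X(phi, h=1e-2):
    # X = pi*i*(tau QL^2 - taubar QR^2)'' - pi^2 [(tau QL^2 - taubar QR^2)']^2
    g = lambda p: tau * QL(p)**2 - tau.conjugate() * QR(p)**2
    g1 = (g(phi + h) - g(phi - h)) / (2 * h)          # exact: g is quadratic
    g2 = (g(phi + h) - 2 * g(phi) + g(phi - h)) / h**2
    return 1j * cmath.pi * g2 - cmath.pi**2 * g1**2

# second phi-derivative of the summand itself, computed directly
h = 1e-4
d2 = (summand(h) - 2 * summand(0) + summand(-h)) / h**2
# d2 agrees with X * summand to finite-difference accuracy
```

The agreement holds for any smooth deformation, since only the chain rule is being tested.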
\n\nUsing Eqs.~(\\ref{shiftQ}) and (\\ref{NTrelations}),\nwe can now express the shift vectors within Eq.~(\\ref{intermed}) directly in terms \nof ${\\bf Q}_L$ and ${\\bf Q}_R$.\nFor convenience we define\n\\begin{equation}\n {\\bf Q}_h \\equiv {\\cal T}_{21} {\\bf Q}_L~,~~~~~\n \\tilde {\\bf Q}_h \\equiv {\\cal T}_{12} {\\bf Q}_R~,\n\\label{Qhdef}\n\\end{equation}\nand likewise define \n\\begin{equation}\n {\\bf Q}_j \\equiv {\\cal T}_{11} {\\bf Q}_L~,~~~~~\n \\tilde {\\bf Q}_j \\equiv {\\cal T}_{22} {\\bf Q}_R~.\n\\label{Qjdef}\n\\end{equation}\nWe then find\n\\begin{eqnarray}\n B ~&=&~ -4\\pi \\alpha' ({\\bf Q}_h^2 + \\tilde {\\bf Q}_h^2 - \\tilde {\\bf Q}_j^t {\\bf Q}_h - {\\bf Q}_j^t \\tilde {\\bf Q}_h)~,\\nonumber\\\\\n D ~&=&~ 4\\pi^2 \\alpha' ({\\bf Q}_R^t {\\bf Q}_h - {\\bf Q}_L^t \\tilde {\\bf Q}_h)^2~.\n\\label{eq:finalb}\n\\end{eqnarray}\nNote the identity {\\mbox{${\\bf Q}_R^t {\\bf Q}_h = - {\\bf Q}_L^t \\tilde {\\bf Q}_h$}}, as a result of which\nour expression for $D$ can actually be collapsed into one term.\nHowever, we have retained this form for $D$ in order to make manifest the symmetry between left- and right-moving contributions.\nOur overall insertion into the partition function is then given by\n{\\mbox{$X|_{\\phi=0} \\equiv {{\\cal X}}\/{\\cal M}^2$}}, where\n\\begin{eqnarray}\n {\\cal X} ~&=&~ \n \\tau_2^2 \\, ({\\bf Q}_R^t {\\bf Q}_h - {\\bf Q}_L^t \\tilde {\\bf Q}_h)^2 \\nonumber\\\\\n && ~~ - {\\tau_2\\over \\pi } \\left({\\bf Q}_h^2 + \\tilde {\\bf Q}_h^2 - \\tilde {\\bf Q}_j^t {\\bf Q}_h - {\\bf Q}_j^t \\tilde {\\bf Q}_h\\right)~.~~~~~~\n\\label{Xdef}\n\\end{eqnarray}\n\n\n\\subsection{Modular completion and additional Higgs-mass contributions \\label{sec:completion}}\n\n\nThus far, we have calculated the leading contribution to the Higgs mass \nby evaluating $\\partial^2 {\\cal Z}\/\\partial \\phi^2$.\nHowever, the full contribution $d^2 {\\cal Z}\/d\\phi^2$ (with full rather than \npartial $\\phi$-derivatives) also includes\nvarious 
additional effects on the partition function ${\\cal Z}$ \nthat come from fluctuations of the Higgs field. For example, such fluctuations deform \nthe background moduli fields (such as the metric that contracts compactified components of ${\\bf Q}_L^2$ and ${\\bf Q}_R^2$ within the \ncharge lattice).\nSuch effects produce additional contributions to the total Higgs mass.\n\nIt turns out that \nwe can calculate all of these extra contributions in a completely general way \nthrough the requirement of modular invariance. \nIndeed, because modular invariance remains unbroken even when the theory is Higgsed,\nthe final expression for the total Higgs mass must not only be modular invariant but also arise\nthrough a modular-covariant sequence of calculational operations.\nAs we shall demonstrate, the above expression for the insertion ${\\cal X}$ \nin Eq.~(\\ref{Xdef}) does not have this property. We shall therefore determine the additional contributions\nto the Higgs mass by performing the ``modular completion'' of ${\\cal X}$ --- {\\it i.e.}\\\/, by determining the\nadditional contribution to ${\\cal X}$ which will render \nthis insertion consistent with modular invariance.\n \nIn general, prior to the insertion of ${\\cal X}$, the partition-function trace in Eq.~(\\ref{preZQdef}) \n[or equivalently the trace in Eq.~(\\ref{ZQdef}) evaluated at {\\mbox{$\\phi=0$}}]\nis presumed to already be modular invariant, as required for the consistency of the underlying string.\nIn order to determine the modular completion of the quantity ${\\cal X}$ in Eq.~(\\ref{Xdef}),\nwe therefore need to understand the modular-invariance effects \nthat arise when ${\\cal X}$ is inserted into this partition-function trace.\nBecause ${\\cal X}$ involves various combinations of components of charge vectors,\nlet us begin by investigating the effect of inserting powers of a single charge vector \ncomponent $Q_\\ell$ (associated with the $\\ell^{\\rm th}$ lattice direction) \ninto our 
partition-function trace.\nWithin the partition functions described in Eqs.~(\\ref{partfZ}) and (\\ref{partft}),\ninserting $Q_\\ell^n$ for any power $n$\nis tantamount to replacing\n\\begin{equation}\n \\vartheta_\\ell ~\\to~ \\sum_{Q_\\ell\\in \\mathbb{Z} +\\alpha_\\ell} e^{2\\pi i \\beta_\\ell Q_\\ell} ~Q_\\ell^n~ q^{Q_\\ell^2\/2}~.\n\\label{Qinsert}\n\\end{equation}\nHowever, one useful way to proceed is to recognize that this latter sum\ncan be rewritten as\n\\begin{equation}\n {1\\over (2\\pi i)^n} \\, \\frac{\\partial^n }{\\partial z_\\ell^n} \\vartheta_\\ell(z_\\ell|\\tau) \\biggl|_{z_\\ell=0}\n\\label{zderiv}\n\\end{equation}\nwhere the generalized $\\vartheta_\\ell (z_\\ell|\\tau)$ \nfunction is defined as\n\\begin{equation}\n \\vartheta_\\ell(z_\\ell|\\tau) \n ~\\equiv~\n \\sum_{Q_\\ell\\in \\mathbb{Z} +\\alpha_\\ell} e^{2\\pi i (\\beta_\\ell +z_\\ell) Q_\\ell} \\,q^{Q_\\ell^2\/2}~.~~\n\\label{thetadefz}\n\\end{equation}\nIndeed, we see that $\\vartheta_\\ell(\\tau)$ is nothing but $\\vartheta_\\ell (z_\\ell|\\tau)$\nevaluated at {\\mbox{$z_\\ell=0$}}.\nHowever, for arbitrary $z$, these \ngeneralized $\\vartheta(z|\\tau)$ functions have the schematic modular-transformation properties\n\\begin{equation} \n \\vartheta_\\ell\\left(z\\biggl|\\frac{a\\tau+b}{c\\tau+d}\\right) ~\\sim~ \n (c\\tau+d)^{1\/2} \\, e^{ \\pi i c(c\\tau+d) z^2}\\, \n \\vartheta_{\\ell} ((c\\tau+d) z | \\tau)~.\n\\label{modtransthetaz}\n\\end{equation}\nIt then follows that \n\\begin{equation}\n \\vartheta\\left(z\\biggl|\\frac{a\\tau+b}{c\\tau+d}\\right) \n \\biggl|_{z=0} ~\\sim~ (c\\tau+d)^{1\/2} \\,\n \\vartheta(z|\\tau)\\biggl|_{z=0}~,\n\\label{firstderiv}\n\\end{equation}\nand likewise \n\\begin{equation}\n \\frac{\\partial}{\\partial z} \n \\vartheta\\left(z\\biggl|\\frac{a\\tau+b}{c\\tau+d}\\right) \n \\biggl|_{z=0} ~\\sim~ (c\\tau+d)^{3\/2} \\,\\frac{\\partial}{\\partial z} \n \\vartheta(z|\\tau)\\biggl|_{z=0}~.\n\\label{secondderiv}\n\\end{equation}\nThis indicates that while 
the function {\\mbox{$\\vartheta(z|\\tau)|_{z=0}$}} transforms covariantly with modular weight $1\/2$,\nits first derivative {\\mbox{$[\\partial\\vartheta(z|\\tau)\/\\partial z]|_{z=0}$}} transforms covariantly with modular weight $3\/2$.\n\nAt first glance, one might expect this pattern to continue, with the second derivative\n{\\mbox{$[\\partial^2 \\vartheta(z|\\tau)\/\\partial z^2]|_{z=0}$}} transforming\ncovariantly with modular weight $5\/2$. However, this is not what happens. Instead, \nfrom Eq.~(\\ref{modtransthetaz}) we find\n\\begin{eqnarray}\n \\frac{\\partial^2}{\\partial z^2} \n \\vartheta\\left(z\\biggl|\\frac{a\\tau+b}{c\\tau+d}\\right) \n\\biggl|_{z=0} &\\sim&~ (c\\tau+d)^{5\/2} \\,\\frac{\\partial^2}{\\partial z^2} \n \\vartheta(z|\\tau)\\biggl|_{z=0} \\nonumber\\\\\n && ~ + ~2\\pi i \\, c\\, (c\\tau+d)^{3\/2}\\, \\vartheta(\\tau)~.~~~~~~\\nonumber\\\\\n\\end{eqnarray}\nWhile the term on the first line is the expected result,\nthe term on the second line represents a {\\it modular anomaly} which destroys the \nmodular covariance of the second derivative.\n\nSince modular covariance must be preserved, we must perform\na {\\it modular completion}. 
In this simple case, this means that\nwe must replace \n${\\partial^2 \/ \\partial z^2}$ with a {\\it modular-covariant second derivative}\\\/ $D^2_z$\nsuch that $D^2_z $ not only contains\n$\\partial^2 \/\\partial z^2$ but also has the property that {\\mbox{$D^2_z \\vartheta(z|\\tau)|_{z=0}$}} transforms covariantly with weight $5\/2$.\nIt is straightforward to show that the only such modular-covariant derivative is\n\\begin{equation}\n D^2_z ~\\equiv~ \\frac{\\partial^2}{\\partial z^2} + \\frac{\\pi}{\\tau_2}~,\n\\label{modcovderiv}\n\\end{equation}\nand with this definition one indeed finds\n\\begin{equation}\n \\left\\lbrace \\left[ D^2_z \\vartheta( z|\\tau)\\right]_{\\tau\\to \\frac{a\\tau+b}{c\\tau+d}} \\right\\rbrace\\biggl|_{z=0} \n ~\\sim~ (c\\tau+d)^{5\/2} \\, D^2_z \\vartheta(z|\\tau)\\biggl|_{z=0}~,\n\\end{equation}\nthereby continuing the pattern set by Eqs.~(\\ref{firstderiv}) and (\\ref{secondderiv}).\nIt turns out that this modular-covariant second $z$-derivative \nis equivalent to the\nmodular-covariant $\\tau$-derivative \n\\begin{equation} D_\\tau ~\\equiv ~ {\\partial \\over \\partial \\tau} ~-~ {ik\\over 2\\tau_2}~ \n\\end{equation}\nwhich preserves the modular covariance of any modular function of weight $k$.\nIndeed, \n our $\\vartheta(z|\\tau)$ functions have {\\mbox{$k=1\/2$}} and satisfy \nthe heat equation \n$\\partial^2 \\vartheta(z|\\tau) \/\\partial z^2 = 4\\pi i \\, \\partial \\vartheta(z|\\tau) \/\\partial \\tau$.\nIn this sense, the $z$-derivative serves as a ``square root'' of the $\\tau$-derivative and gives us a precise\nmeans of extracting the individual charge insertions (rather than their squares).\n In this connection, we emphasize that there is a tight correspondence between the Higgs field and the\n $z$-parameter. 
Specifically, when we deform a theory through a continuous change in the value of the Higgs VEV, \n its partition function deforms through a corresponding continuous change in the $z$-parameter.\n \n\n\n \nIn principle we could continue to examine higher $z$-derivatives \n(all of which will also suffer from modular anomalies),\nbut the results we have thus far will be sufficient for our purposes.\nRecalling the equivalence between the expressions in Eqs.~(\\ref{Qinsert}) and (\\ref{zderiv}),\nwe thus see that the insertion of a single power of any given $Q_\\ell$ does not disturb the\nmodular covariance of the corresponding holomorphic (or anti-holomorphic) factor in the \npartition-function trace, but the insertion of a quadratic term $Q_\\ell^2$\nalong the $\\ell^{\\rm th}$ lattice direction\ndoes {\\it not}\\\/ lead to a modular-covariant result and must, according to Eq.~(\\ref{modcovderiv}),\nbe replaced by the modular-covariant insertion \n$Q_\\ell^2 - 1\/(4\\pi\\tau_2)$.\nThus, our rules for modular completion \nthrough second-order in charge-vector components are given by\n\\begin{equation}\n\\begin{cases} \n ~{\\bf Q}_\\ell &\\to~~ {\\bf Q}_\\ell \\\\\n ~{\\bf Q}_\\ell{\\bf Q}_{\\ell'} &\\to~~ {\\bf Q}_\\ell {\\bf Q}_{\\ell'} - \\frac{1}{4\\pi \\tau_2} \\delta_{\\ell,\\ell'}~.\n\\end{cases}\n\\label{completionrules}\n\\end{equation}\nThese general results hold for all lattice directions $(\\ell,\\ell')$ regardless of whether \nthey correspond to left- or right-moving lattice components.\nSuch modular completions have also arisen in other contexts, \nsuch as within string-theoretic threshold corrections~\\cite{Kiritsis:1994ta,Kiritsis:1996dn,Kiritsis:1998en}. 
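As a sanity check of Eq.~(\ref{modcovderiv}), the covariance of the completed insertion can be verified numerically in the simplest case, namely the unshifted theta function with {\mbox{$\alpha_\ell=\beta_\ell=0$}} under {\mbox{$\tau\to -1/\tau$}}, for which the schematic phases of Eq.~(\ref{modtransthetaz}) become the exact factor $(-i\tau)^{1/2}$. A truncated-series sketch in Python (the cutoff $N$ and the test point $\tau$ are arbitrary choices):

```python
import cmath

def theta(tau, N=30):
    # theta(0|tau) = sum_n q^{n^2/2}, with q = exp(2*pi*i*tau)
    return sum(cmath.exp(1j * cmath.pi * tau * n * n) for n in range(-N, N + 1))

def theta_zz(tau, N=30):
    # (d^2/dz^2) theta(z|tau) at z=0: each term picks up a factor (2*pi*i*n)^2
    return sum((2j * cmath.pi * n) ** 2 * cmath.exp(1j * cmath.pi * tau * n * n)
               for n in range(-N, N + 1))

def completed(tau, N=30):
    # modular-covariant combination D_z^2 theta|_{z=0} = theta'' + (pi/tau_2)*theta
    return theta_zz(tau, N) + (cmath.pi / tau.imag) * theta(tau, N)

tau = complex(0.3, 0.8)
S = -1 / tau   # the modular transformation tau -> -1/tau

# completed insertion transforms with definite weight 5/2
# (exact phase for this transformation: (-i*tau)^{1/2} * tau^2)
covariant_gap = abs(completed(S) - cmath.sqrt(-1j * tau) * tau**2 * completed(tau))

# the bare second derivative fails by the modular anomaly term
anomaly_gap = abs(theta_zz(S) - cmath.sqrt(-1j * tau) * tau**2 * theta_zz(tau))
```

Here `covariant_gap` vanishes to machine precision while `anomaly_gap` remains of order one: precisely the modular anomaly that the $\pi/\tau_2$ term within $D^2_z$ is designed to cancel.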
\n\nWith these modular-completion rules in hand, we can now investigate the modular completion of the \nexpression for ${\\cal X}$ in Eq.~(\\ref{Xdef}).\nIt is simplest to begin by focusing on\nthe quartic terms, {\\it i.e.}\\\/, the terms in the top line of Eq.~(\\ref{Xdef}).\nGiven the identity just below Eq.~(\\ref{eq:finalb}),\nthese terms are proportional to $({\\bf Q}_L^t \\tilde {\\bf Q}_h)^2$.\nWith $Q_{L\\ell}$ denoting the $\\ell^{\\rm th}$ component of ${\\bf Q}_L$, {\\it etc.}\\\/, \nwe find \n\\begin{eqnarray}\n ({\\bf Q}_L^t \\tilde {\\bf Q}_h)^2\n &=& \\left( \\sum_{\\ell=1}^{22} \\sum_{m=1}^{10} Q_{L\\ell} ({\\cal T}_{12})_{\\ell m} Q_{Rm} \\right)^2 \\nonumber\\\\\n &=& \\sum_{\\ell,\\ell' =1}^{22} \\sum_{m,m'=1}^{10} \n ({\\cal T}_{12})_{\\ell m} ({\\cal T}_{12})_{\\ell' m'} \\nonumber\\\\ \n && ~~~~~~~~~~~\\times~ Q_{Rm} Q_{Rm'} Q_{L\\ell} Q_{L\\ell'}~.~~~~~~~~~~~~~~\n\\label{intterm}\n\\end{eqnarray}\nFollowing the rules in Eq.~(\\ref{completionrules}), we can readily \nobtain the modular completion of this expression by replacing the final line in Eq.~(\\ref{intterm})\nwith\n\\begin{equation}\n \\left( Q_{Rm} Q_{Rm'} - \\frac{1}{4\\pi \\tau_2} \\delta_{mm'} \\right) \n \\left( Q_{L\\ell} Q_{L\\ell'} - \\frac{1}{4\\pi \\tau_2} \\delta_{\\ell\\ell'} \\right)~. 
\n\\label{substi}\n\\end{equation}\nSubstituting Eq.~(\\ref{substi}) into Eq.~(\\ref{intterm}) and recalling \nthat {\\mbox{${\\cal T}_{12}^t = -{\\cal T}_{21}$}}, \nwe thus find that the modular completion \nof the quartic term $({\\bf Q}_L^t \\tilde {\\bf Q}_h)^2$\nwithin ${\\cal X}$\nis given by\n\\begin{equation}\n ({\\bf Q}_L^t \\tilde {\\bf Q}_h)^2 - \\frac{1}{4\\pi \\tau_2} \\left( {\\bf Q}_h^2 + \\tilde {\\bf Q}_h^2\\right) + \\frac{\\xi}{(4\\pi\\tau_2)^2}~~~ \n\\label{quadr1}\n\\end{equation}\nwhere \n\\begin{eqnarray}\n \\xi ~&\\equiv&~ \n {\\rm Tr} ({\\cal T}_{12}^t {\\cal T}_{12})\n ~=~ {\\rm Tr} ({\\cal T}_{21}^t {\\cal T}_{21})~\\nonumber\\\\\n && ~~~=~ - {\\rm Tr} ({\\cal T}_{12} {\\cal T}_{21}) ~=~ - {\\rm Tr} ({\\cal T}_{21} {\\cal T}_{12})~.~~~~~~~~\n\\end{eqnarray}\n\nRemarkably, the quadratic terms ${\\bf Q}_h^2 + \\tilde {\\bf Q}_h^2$ that are generated within Eq.~(\\ref{quadr1}) \nalready appear on the second line of Eq.~(\\ref{Xdef}).\nIn other words, even if we had not already known of these quadratic terms, we could have deduced their existence\nthrough the modular completion of our quartic terms!\nConversely, we could have generated the quartic terms through a modular completion of these quadratic terms --- {\\it i.e.}\\\/,\neach set of terms provides the modular completion of the other.\nThus, the only remaining terms within Eq.~(\\ref{Xdef}) that might require modular completion are \nthe final quadratic terms on the second line of Eq.~(\\ref{Xdef}), namely\n$\\tilde {\\bf Q}_j^t {\\bf Q}_h + {\\bf Q}_j^t \\tilde {\\bf Q}_h$.\nHowever, \n${\\bf Q}_h$ and ${\\bf Q}_j$ involve only left-moving components of the lattice while\n$\\tilde {\\bf Q}_h$ and $\\tilde {\\bf Q}_j$ involve only right-moving components.\nThus \n$\\tilde {\\bf Q}_j^t {\\bf Q}_h + {\\bf Q}_j^t \\tilde {\\bf Q}_h$\nis already modular complete.\nPutting all the pieces together, we therefore find that the total expression for ${\\cal X}$ in Eq.~(\\ref{Xdef}) has\na simple 
(and in fact universal) modular completion:\n\\begin{equation}\n {\\cal X} ~\\to~ \n {\\cal X} ~+~ \\frac{\\xi}{4\\pi^2} ~. \n\\label{Xmodcomplete}\n\\end{equation}\nIndeed, this sole remaining extra term generated by the modular completion stems from \nthe final term in Eq.~(\\ref{quadr1}).\nIt is noteworthy that this extra term \nis entirely independent of the charge vectors.\nThis is consistent with our expectation that such additional terms\nrepresent the contributions from the deformations of the moduli fields under Higgs fluctuations --- deformations\nwhich act in a universal (and hence $Q$-independent) manner.\n\nSome remarks are in order regarding\nthe uniqueness of the completion\nin Eq.~(\\ref{Xmodcomplete}).\nIn particular, at first glance one might wonder how the modular completion of \nthe quadratic terms $\\tilde {\\bf Q}_j^t {\\bf Q}_h + {\\bf Q}_j^t \\tilde {\\bf Q}_h$ could \nuniquely determine the quartic terms in ${\\cal X}$, given that the modular-completion rules \nwithin Eq.~(\\ref{completionrules}) only seem to generate extra terms \nwhich are of lower powers in charge-vector components.\nHowever, the important point is that the rules in Eq.~(\\ref{completionrules}) only ensure\nthe modular covariance of the individual (anti-)holomorphic components\nof the partition-function trace. \nIn particular, these rules do not, in and of themselves, ensure that \nwe continue to satisfy \nthe additional constraint in Eq.~(\\ref{genexp2}) \nthat arises when stitching these holomorphic and anti-holomorphic components together\nas in Eq.~(\\ref{genexp}). 
\nHowever, given that ${\\bf Q}_h^2$ increases the modular weight of the \nholomorphic component by two units without increasing the modular weight of the\nanti-holomorphic component,\nand given that \n$\\tilde {\\bf Q}_h^2$ does the opposite, the only way to properly modular-complete their sum\nis by ``completing the square'' and realizing these terms as the off-diagonal terms that are generated\nthrough a factorized modular completion as in Eq.~(\\ref{substi}).\nThis then compels the introduction of the appropriate quartic diagonal terms, as seen above.\n\nIn this connection, it is also important to note that modular completion involves\nmore than simply demanding that our final result be modular invariant.\nAfter all, we have seen in Eq.~(\\ref{Xmodcomplete}) that the modular completion of ${\\cal X}$ \ninvolves the addition of a pure number, {\\it i.e.}\\\/, the addition of \na quantity which is intrinsically modular-invariant on its own (or more precisely, \na quantity whose\ninsertion into the partition-function summand automatically preserves\nthe modular invariance of the original partition function). 
\nHowever, \nas we have stated above, modular completion \nensures more than the\nmere modular invariance of our final result --- it also ensures that\nthis result is obtainable\nthrough a modular-covariant sequence of calculational operations.\nAs we have seen, the extra additive constant that forms the modular completion\nof ${\\cal X}$ in Eq.~(\\ref{Xmodcomplete}) is crucial in allowing us to ``complete the square'' \nand thereby cast our results into the factorized form\nof Eq.~(\\ref{substi}) --- a form which itself emerged as a consequence of \nour underlying modular-covariant\n$z$-derivatives $D^2_z$.\nAs such, the constant appearing in Eq.~(\\ref{Xmodcomplete}) is an intrinsic part \nof our resulting expression for $m_\\phi^2$.\n\n\n\\subsection{Classical stability condition\\label{stability}}\n\nThus far, we have focused on deriving an expression for the Higgs mass, as defined in Eq.~(\\ref{higgsdef}). \nHowever, our results presuppose that we are discussing a classically stable \nparticle. 
In other words, while we are identifying the mass with the second $\\phi$-derivative of the\nclassical potential, we are implicitly assuming that the first $\\phi$-derivative vanishes so that we are sitting\nat a minimum of the Higgs potential.\nThus, there is an extra condition that we need to impose, namely \n\\begin{equation}\n {d \\Lambda(\\phi)\\over d \\phi } \\biggl|_{\\phi=0} ~=~ 0~.\n\\label{linearcond}\n\\end{equation}\nThis condition must be satisfied for the particular vacuum state \nwithin which our Higgs-mass calculation has been performed.\n\nIt is straightforward to determine the ramifications of this condition.\nProceeding exactly as above, we find in analogy with\nEq.~(\\ref{stuff})\nthat {\\mbox{$\\partial{\\cal Z}\/\\partial \\phi|_{\\phi=0}$}} corresponds to an insertion given by\n{\\mbox{$Y|_{\\phi=0} = {{\\cal Y}\/{\\cal M}}$}}, where\n\\begin{equation}\n {{\\cal Y}} ~\\sim~ \\tau_2 \\left( {\\bf Q}_R^t {\\bf Q}_h - {\\bf Q}_L^t \\tilde {\\bf Q}_h\\right) ~\\sim~ \\tau_2 \\,({\\bf Q}_R^t {\\cal T}_{21} {\\bf Q}_L)~. \n\\label{Ydef}\n\\end{equation}\nGiven this result, there are {\\it a priori}\\\/ three distinct ways in which \nthe condition in Eq.~(\\ref{linearcond}) can be satisfied \nwithin a given string vacuum.\nFirst, ${\\cal Y}$ might vanish for each state in the corresponding string spectrum.\nSecond, ${\\cal Y}$ might not vanish for each state in the string spectrum \nbut may vanish in the {\\it sum}\\\/ over the string states\n(most likely in a pairwise fashion between chiral and anti-chiral states with opposite charge vectors).\nHowever, there is also a third possibility:\nthe entire partition-function trace may be non-zero, even with the ${\\cal Y}$ insertion,\nbut nevertheless vanish when integrated over the fundamental domain of the modular group, as in Eq.~(\\ref{Lambdaphi}). 
\nIn general, very few mathematical examples are known of situations in which this latter\nphenomenon occurs~{\\mbox{\\cite{Moore:1987ue,Dienes:1990qh,Dienes:1990ij}}},\nalthough the fact that this would involve an integrand with vanishing modular weight offers\nunique possibilities.\n\n\nTwo further comments regarding this condition are in order.\nFirst, it is easy to verify that this condition respects modular invariance, as it must.\nIndeed, the quantity ${\\cal Y}$, as defined above, is already modular complete.\nAt first glance, this might seem surprising, given that the quartic terms within ${\\cal X}$ are nothing but the square of ${\\cal Y}$,\nand we have already seen that these quartic terms are not modular complete by themselves.\nHowever, it is the squaring of ${\\cal Y}$ \nthat introduces the higher powers of charge-vector components\nwhich in turn induce the modular anomaly. \nSecond, if ${\\cal Y}$ vanishes when summed over all of the string states,\nthen it might be tempting to hope that the quartic terms within ${\\cal X}$ \nalso vanish when summed over the string states.\nUnfortunately, this hope is not generally realized, since important sign information is lost when these quantities\nare squared.\nOf course, if ${\\cal Y}$ vanishes for each individual state in the string spectrum,\nthen the quartic terms within ${\\cal X}$ will also evaluate to zero in any calculation of the corresponding Higgs mass.\nThis would then simplify the explicit evaluation of ${\\cal X}$ for such a string vacuum.\n\n\n\\subsection{A relation between the Higgs mass and the cosmological constant}\n\nLet us now collect our results for the Higgs mass.\nFor notational simplicity we define \n\\begin{equation}\n \\langle A \\rangle ~\\equiv~ \\int_{\\cal F} \\dmu \n ~\\frac{\\tau_2^{-1}}{\\overline{\\eta}^{12} \\eta^{24}} \\, \\sum_{{\\bf Q}_L,{\\bf Q}_R} \n (-1)^F A~ {\\overline{q}}^{{\\bf Q}_R^2\/2} q^{{\\bf Q}_L^2\/2}~\n\\label{expvalue}\n\\end{equation}\nwhere the charge 
vectors $\\lbrace {\\bf Q}_L,{\\bf Q}_R\\rbrace$ in the sum over states are henceforth understood as \nunperturbed ({\\it i.e.}\\\/, with {\\mbox{$\\phi=0$}}) and thus correspond directly to the charges that arise at the minimum of the Higgs potential. \nOur results then together imply that \n\\begin{equation} \n m_\\phi^2 ~=~ -\\frac{{\\cal M}^2}{2} \\langle {\\cal X}\\rangle ~-~ \\frac{{\\cal M}^2}{2} \\frac{\\xi}{4\\pi^2} \\langle {\\bf 1} \\rangle\n\\label{preresult}\n\\end{equation}\nwhere ${\\cal X}$ is given in Eq.~(\\ref{Xdef}).\nAs indicated above, these results implicitly assume that\n{\\mbox{$\\langle {\\cal Y}\\rangle=0$}}, \nwhere ${\\cal Y}$ is defined in Eq.~(\\ref{Ydef}).\nHowever, we immediately recognize that the quantity $\\langle {\\bf 1}\\rangle$ within Eq.~(\\ref{preresult}) is nothing other than \nthe one-loop zero-point function (cosmological constant) $\\Lambda$!~\nMore precisely, we may identify $\\Lambda$ as {\\mbox{$\\Lambda(\\phi)|_{\\phi=0}$}} \n[where $\\Lambda(\\phi)$ is given in Eq.~(\\ref{Lambdaphi})], or equivalently\n\\begin{equation}\n \\Lambda ~=~ -\\frac{{\\cal M}^4}{2} \\,\\langle {\\bf 1}\\rangle~.\n\\label{lambdadeff}\n\\end{equation}\nWe thus obtain the relation\n\\begin{equation}\n m_\\phi^2 ~=~ \\frac{\\xi}{4\\pi^2} \\, \\frac{\\Lambda}{{\\cal M}^2} ~-~ \\frac{{\\cal M}^2}{2} \\, \\langle {\\cal X} \\rangle~.\n\\label{relation1}\n\\end{equation}\nIndeed, retracing our steps in arbitrary spacetime dimension $D$,\nwe obtain the analogous relation\n\\begin{equation}\n m_\\phi^2 ~=~ \\frac{\\xi}{4\\pi^2} \\, \\frac{\\Lambda}{{\\cal M}^{D-2}} ~-~ \\frac{{\\cal M}^2}{2} \\, \\langle {\\cal X} \\rangle~\n\\label{relation1b}\n\\end{equation}\nwhere the cosmological constant $\\Lambda$ now has mass dimension $D$. \n\nRemarkably, this is a general relation between the Higgs mass and the one-loop cosmological constant! 
\nBecause this relation rests on nothing but modular invariance,\nit holds generally for {\\it any}\\\/ perturbative \nclosed string in any spacetime dimension $D$. \nThe cosmological-constant term in Eq.~(\\ref{relation1}) is universal, \nemerging as \nthe result of a modular anomaly that required a modular completion, or equivalently\nas the result of a universal shift in the moduli.\n By contrast, the second term depends on the particular charges that are inserted into the partition-function trace.\n\nFor weakly coupled heterotic strings,\nwe can push this relation one step further.\nIn such theories the string scale {\\mbox{$M_s\\equiv 2\\pi {\\cal M}$}} and Planck scale $M_P$ are connected through the \nrelation {\\mbox{$M_s= g_s M_P$}} where\n$g_s$ is the string coupling \nwhose value is set by the vacuum expectation value of the dilaton.\nDepending on the particular string model, $g_s$ in turn sets the values of the individual gauge couplings.\nLikewise, the canonically normalized scalar field is \n{\\mbox{$\\widehat \\phi \\equiv \\phi\/g_s$}}. \nWe thus find that our relation in Eq.~(\\ref{relation1}) equivalently takes the form\n\\begin{equation}\n m_{\\widehat \\phi}^2 ~=~ \\frac{\\xi}{M_P^2} \\, \\Lambda ~-~ \\frac{g_s^2 {\\cal M}^2}{2} \\, \\langle {\\cal X} \\rangle~.\n\\label{relation2}\n\\end{equation}\n\n \nIn quantum field theory, we would not expect to find a relation between a Higgs mass\nand a cosmological constant. Indeed, quantum field theories do not involve gravity and are thus\ninsensitive to the absolute zero of energy.\nEven worse, in quantum field theory, the one-loop zero-point function is badly divergent. 
\nString theory, by contrast, not only unifies \ngauge theories with gravity but also yields a {\\it finite}\\\/ $\\Lambda$ (the latter\noccurring as yet another byproduct of modular invariance).\nThus, it is only within a string context that such a relation could ever arise, and indeed\nEqs.~(\\ref{relation1}) and (\\ref{relation2}) are precisely the relations \nthat arise for all weakly-coupled four-dimensional heterotic strings. \nWe expect that this is but the tip of the iceberg, and that other modular-invariant string constructions\nlead to similar results.\nIt is intriguing that such relations join together precisely the two quantities ($m_\\phi$ and $\\Lambda$) whose\nvalues lie at the heart of the two most pressing hierarchy problems in modern physics.\n\n \n\n\n\n\n\n\\section{Regulating the Higgs mass:~ From amplitudes to supertraces \\label{sec3}}\n\n\n\n\nIn Eq.~(\\ref{relation1}) we obtained a result in which the Higgs mass, via the definition in Eq.~(\\ref{expvalue}),\n is expressed in terms of certain one-loop string amplitudes\nconsisting of \nmodular integrals of various traces over the entire string spectrum.\nAs discussed below Eq.~(\\ref{eq:massRL}),\nthese traces include the contributions of not only {\\it physical}\\\/ ({\\it i.e.}\\\/, level-matched) string states with {\\mbox{$M_L^2=M_R^2$}},\nbut also {\\it unphysical}\\\/ ({\\it i.e.}\\\/, non level-matched) string states with {\\mbox{$M_L^2 \\not= M_R^2$}}.\nThis distinction between physical and unphysical string states is important because only \nthe physical string states can serve as {\\it bona-fide}\\\/ in- and out-states.\nBy contrast, the unphysical states are intrinsically stringy and have no field-theoretic analogues.\n\n\nWe now wish to push our calculation several steps further.\nIn particular, there are three aspects to our result \nin Eq.~(\\ref{relation1}) which we will need to understand \nin order to allow us to make contact with traditional quantum-field-theoretic 
expectations. \nThe first concerns the fact that while the one-loop vacuum energy $\\Lambda$ which appears in these\nresults is finite for all tachyon-free string models --- even without spacetime supersymmetry --- the\nremaining amplitude $\\langle {\\cal X}\\rangle$ which appears in these \nexpressions is generically divergent.\nNote that this is not in conflict with string-theoretic expectations; \nin particular, as we shall discuss in Sect.~\\ref{UVIRequivalence},\nstring theory generally softens various field-theoretic divergences \nbut need not remove them entirely. \nThus, our expression for the Higgs mass is formally divergent and requires some sort of regulator \nin order to extract finite results. \nSecond, while these results are expressed in terms of sums over the entire string spectrum,\nwe would like to be able to express\nthe Higgs mass directly in terms of supertraces over only the {\\it physical}\\\/ string states --- {\\it i.e.}\\\/, the states\nwith direct field-theoretic analogues.\nThis will ultimately allow us to express the Higgs mass in a form that might be recognizable within ordinary quantum field\ntheory, and thereby extract an effective field theory (EFT) description of the Higgs mass\nin which our Higgs mass experiences an effective renormalization-group ``running''. \nThis will also allow us\nto extract a stringy effective potential for the Higgs field. \nFinally, as a byproduct, we would also like to implicitly perform the stringy modular integrations \ninherent in Eq.~(\\ref{expvalue}). 
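The projection onto physical states can be made concrete numerically: at fixed $\tau_2$, integrating a single term ${\overline{q}}^m q^n$ over {\mbox{$\tau_1\in (-{\textstyle{1\over 2}}, {\textstyle{1\over 2}}]$}} annihilates every non-level-matched term with {\mbox{$m\not=n$}}, leaving only the Boltzmann factor of a physical state. A minimal Python sketch (integer $m$, $n$ for simplicity; midpoint quadrature):

```python
import cmath

def strip_projection(m, n, tau2, N=2000):
    # midpoint approximation to  int_{-1/2}^{1/2} dtau1 conj(q)^m q^n,
    # with q = exp(2*pi*i*tau).  The integrand equals
    # exp(2*pi*i*(n-m)*tau1) * exp(-2*pi*(m+n)*tau2),
    # so the tau1 integral vanishes unless m = n (a level-matched term).
    total = 0j
    for k in range(N):
        tau = complex(-0.5 + (k + 0.5) / N, tau2)
        q = cmath.exp(2j * cmath.pi * tau)
        total += (q.conjugate() ** m) * (q ** n) / N
    return total
```

Only the {\mbox{$m=n$}} term survives, reproducing $e^{-2\pi (m+n)\tau_2}$; this is exactly the mechanism by which the strip integral reduces to a trace over physical states in the Rankin-Selberg discussion below.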
\n\nAs it turns out, these three issues are intimately related.\nHowever, appreciating these connections requires \na deeper understanding of the properties of the modular functions\non which our Higgs-mass calculations rest.\nIn this section, we shall therefore outline the \nmathematical procedures which will enable us to address all three of our goals.\nMany of these methods originated in the classic mathematics papers of Rankin~{\\mbox{\\cite{rankin1a,rankin1b}}} and Selberg~\\cite{selberg1} \nfrom the late 1930s, and were later extended in an important way by Zagier~\\cite{zag} in the early 1980s.\nSome of the Rankin-Selberg results also later independently found their way \ninto the string literature in various forms~{\\mbox{\\cite{McClain:1986id,OBrien:1987kzw,Kutasov:1990sv}}},\nand have occasionally been studied and further developed (see, {\\it e.g.}\\\/, Refs.~\\cite{Dienes:1994np, Dienes:1995pm,\nDienes:2001se,\nAngelantonj:2010ic,\nAngelantonj:2011br,\nAngelantonj:2012gw,\nAngelantonj:2013eja,\nPioline:2014bra,\nFlorakis:2016boz}).\nOur purpose in recounting these results here is not only to pull them all together and explain their logical connections\nin relatively non-technical terms, but also to extend them in certain directions which will be important for our work in Sect.~\\ref{sec4}.~\nThis conceptual and mathematical groundwork\nwill thus form the underpinning for our further analysis of the Higgs mass in Sect.~\\ref{sec4}.\n\n\n\\subsection{The Rankin-Selberg technique\\label{sec:RStechnique}}\n\nWe are interested in \nmodular integrals such as those in Eq.~(\\ref{expvalue}) \nwhich generically take the form\n\\begin{equation}\n I~\\equiv ~\\int_{\\mathcal{F}}\\dmu \\,F(\\tau,{\\overline{\\tau}})~,\n\\label{eq:I}\n\\end{equation}\nwhere ${\\cal F}$ is the modular-group fundamental domain given in Eq.~(\\ref{Fdef}),\nwhere $d\\tau_1 d\\tau_2\/\\tau_2^2$ is the modular-invariant integration measure\n(with {\\mbox{$\\tau\\equiv 
\\tau_1+i\\tau_2$}}, {\\mbox{$\\tau_i\\in\\mathbb{R}$}}), \nand where the integrand $F$ is modular invariant.\nIn general the integrands $F$ take the form\n\\begin{equation}\n F~\\equiv~ \\tau_2^k\\, \\sum_{m,n} a_{mn} {\\overline{q}}^m q^n\n\\label{integrand}\n\\end{equation}\nwhere {\\mbox{$q\\equiv e^{2\\pi i\\tau}$}}\nand where $k$\nis the modular weight of the holomorphic and anti-holomorphic modular functions whose products contribute\nto $F$.\nNote that integrands of this form include those in Eq.~(\\ref{Zform}): we simply power-expand\nthe $\\eta$-function denominators and absorb these powers into \n$m$ and $n$.\nThus, with string integrands written as in Eq.~(\\ref{integrand}) we can now directly identify\n{\\mbox{$m= \\alpha' M_R^2\/4$}} and\n{\\mbox{$n= \\alpha' M_L^2\/4$}}.\nThe quantity $a_{mn}$ then tallies the number of bosonic minus fermionic string degrees of freedom\ncontributing to each $(M_R^2,M_L^2)$ term.\n\n\nInvariance under {\\mbox{$\\tau\\to \\tau+1$}} guarantees that every term within $F$ has {\\mbox{$m-n\\in\\mathbb{Z}$}}.\nThe {\\mbox{$m=n$}} terms represent the contributions from physical string states with spacetime masses {\\mbox{$\\alpha' M^2 = 2(m+n)= 4n$}}, \nwhile the {\\mbox{$m\\not=n$}} terms represent the contributions from off-shell ({\\it i.e.}\\\/, unphysical) string states.\nWithin the {\\mbox{$\\tau_2\\geq 1$}} integration subregion within ${\\cal F}$,\nthe {\\mbox{$m\\not=n$}} terms make no contribution to the integral $I$ \nbecause these contributions are eliminated when we perform the \n$\\int_{-1\/2}^{1\/2} d\\tau_1$ integral.\n[Indeed, within this subregion of ${\\cal F}$\nexpressions such as Eq.~(\\ref{eq:I}) come\nwith an implicit instruction \nthat we are to perform the $\\tau_1$ integration prior to performing the $\\tau_2$ integration.]\nHowever, the full integral $I$ does receive {\\mbox{$m\\not=n$}} contributions \nfrom the {\\mbox{$\\tau_2<1$}} subregion within ${\\cal F}$.\nThus, in general, both physical 
and unphysical string states contribute to amplitudes such as $I$.\n\nOur goal is to express $I$ in terms of contributions from the physical string states alone.\nClearly this could be done if we could somehow transform the region of integration within $I$\nfrom the fundamental domain ${\\cal F}$ to the positive half-strip \n\\begin{equation}\n {\\cal S} ~\\equiv~ \\lbrace \\tau :\\, -{\\textstyle{1\\over 2}} <\\tau_1\\leq {\\textstyle{1\\over 2}}, \\, \\tau_2>0 \\rbrace~,\n\\label{Sdef}\n\\end{equation}\nfor we would then have\n\\begin{equation}\n \\int_{\\cal S} \\dmu \\,F(\\tau,{\\overline{\\tau}}) ~=~ \\int_0^\\infty {d\\tau_2 \\over \\tau_2^2} \\, g(\\tau_2)\n\\end{equation}\nwhere $g(\\tau_2)$ is our desired trace over only the physical string states:\n\\begin{equation}\n g(\\tau_2) ~=~ \\int_{-1\/2 }^{1\/2} d\\tau_1 \\,F(\\tau,{\\overline{\\tau}})~=~ \\tau_2^k\\, \\sum_{n} a_{nn} \\,e^{-4\\pi\\tau_2 n}~.~~~~\n\\label{gtrace}\n\\end{equation} \n\nFortunately, there exists a well-known method for ``unfolding'' ${\\cal F}$ into ${\\cal S}$.\nWhile ${\\cal F}$ is the fundamental domain of the modular group $\\Gamma$ generated by both {\\mbox{$\\tau\\to -1\/\\tau$}} and {\\mbox{$\\tau\\to \\tau+1$}},\nthe strip ${\\cal S}$ is the fundamental domain of the modular {\\it subgroup}\\\/ {\\mbox{$\\Gamma_\\infty$}} generated solely by {\\mbox{$\\tau\\to \\tau+1$}}.\n(Indeed, this is the subgroup that preserves the cusp at {\\mbox{$\\tau=i\\infty$}}.) 
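The level-matching projection behind Eq.~(\\ref{gtrace}) is easy to verify numerically: since every term in $F$ has integer $m-n$, the $\\tau_1$ integral of ${\\overline{q}}^m q^n$ is proportional to $\\delta_{mn}$, so only the physical {\\mbox{$m=n$}} states survive in $g(\\tau_2)$. A minimal Python sketch (the exponents below are toy values chosen purely for illustration):

```python
import cmath

def tau1_integral(m, n, steps=20000):
    """Integrate the tau_1-dependent phase of qbar^m q^n over
    tau_1 in [-1/2, 1/2] by the midpoint rule.  The overall
    e^{-2 pi tau_2 (m+n)} factor is tau_1-independent and omitted."""
    h = 1.0 / steps
    total = 0.0 + 0.0j
    for k in range(steps):
        t1 = -0.5 + (k + 0.5) * h
        total += cmath.exp(2j * cmath.pi * (n - m) * t1) * h
    return total

# Level-matched (m = n) terms survive with unit weight ...
assert abs(tau1_integral(3, 3) - 1.0) < 1e-9
# ... while off-shell terms (m != n, with m - n an integer) project out.
assert abs(tau1_integral(3, 5)) < 1e-9
assert abs(tau1_integral(2.5, 1.5)) < 1e-9
```

In other words, the $\\tau_1$ integration acts as a Kronecker delta on the level mismatch $m-n$.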
\nThus the strip ${\\cal S}$ can be realized as the sum of the images of ${\\cal F}$ transformed under all modular transformations $\\gamma$ (including the identity) in \nthe coset {\\mbox{$\\Gamma_\\infty \\backslash \\Gamma$}}:\n\\begin{equation}\n {\\cal S} ~=~ \\bigcup\\limits_{\\gamma\\in \\Gamma_\\infty \\backslash \\Gamma} \\gamma\\cdot {\\cal F} ~.\n\\label{stripF}\n\\end{equation}\nIt then follows \nfor any integrand $\\widetilde F(\\tau,{\\overline{\\tau}})$ \nthat\n\\begin{equation}\n \\int_{\\cal S} \\dmu \\, \\widetilde F(\\tau,{\\overline{\\tau}}) ~=~ \\int_{\\cal F} \\dmu \n \\sum_{\\gamma\\in \\Gamma_\\infty \\backslash \\Gamma} \\widetilde F_\\gamma(\\tau,{\\overline{\\tau}})~,\n\\label{unfold}\n\\end{equation}\nwhere $\\widetilde F_\\gamma(\\tau,{\\overline{\\tau}})$ is the $\\gamma$-transform of $\\widetilde F(\\tau,{\\overline{\\tau}})$.\nMoreover, if $\\widetilde F(\\tau,{\\overline{\\tau}})$ is invariant under {\\mbox{$\\tau\\to\\tau+1$}}, then the total integrand on\nthe right side of Eq.~(\\ref{unfold}) is modular invariant.\n\nAt this stage, armed with the result in Eq.~(\\ref{unfold}), \nwe see that we are halfway towards our goal.\nHowever, two fundamental problems remain.\nFirst, while choosing $\\widetilde F$ as our original integrand $F$ would allow us to express the left side of Eq.~(\\ref{unfold}) \ndirectly in terms of the desired trace in Eq.~(\\ref{gtrace}), our need to relate this to the original integral $I$ in \nEq.~(\\ref{eq:I}) would instead seem to require choosing $\\tilde F$ such that \n{\\mbox{$F= \\sum_{\\gamma\\in \\Gamma_\\infty \\backslash \\Gamma} \\widetilde F_\\gamma$}}.\nSecond, the manipulations underlying Eq.~(\\ref{unfold}), such as the exchanging of sums and regions of integration, implicitly \nassumed that the integrand on the right side of Eq.~(\\ref{unfold}) converges sufficiently rapidly as {\\mbox{$\\tau_2\\to\\infty$}} \n[or equivalently that the integrand on the left side of Eq.~(\\ref{unfold}) 
converges sufficiently rapidly as {\\mbox{$\\tau_2\\to 0$}}]\nso that all relevant integrals are absolutely convergent.\nHowever this is generally not the case for the physical situations that will interest us.\n\nIt turns out that these problems together motivate a unique choice for $\\widetilde F$.\nNote that $g(\\tau_2)$ generally has a form resembling that in Eq.~(\\ref{gtrace}), consisting of an infinite sum multiplied by a power of $\\tau_2$. As {\\mbox{$\\tau_2\\to 0$}}, \nthe successive terms in this sum are less and less suppressed by the exponential factor $e^{-4\\pi n\\tau_2}$. We therefore expect the infinite sum within $g(\\tau_2)$ to experience an increased tendency to diverge as {\\mbox{$\\tau_2\\to 0$}}. Let us assume for the moment that the divergence of this infinite sum grows no faster than some inverse power of $\\tau_2$ as {\\mbox{$\\tau_2\\to 0$}}. In this case, \nthe divergence of the sum within $g(\\tau_2)$ will cause \n$g(\\tau_2)$ itself to diverge as {\\mbox{$\\tau_2\\to 0$}} unless $g(\\tau_2)$ also includes a prefactor consisting of sufficiently many powers of $\\tau_2$ to hold the divergence of the sum in check. We can therefore {\\it regulate}\\\/ our calculation by introducing sufficiently many \nextra powers of $\\tau_2$ into $g(\\tau_2)$. 
In other words, in such cases we shall take\n\\begin{equation}\n \\widetilde F(\\tau,{\\overline{\\tau}}) ~=~ \\tau_2^s \\, F(\\tau,{\\overline{\\tau}})\n\\label{extratau2}\n\\end{equation}\nwhere $s$ is chosen sufficiently large (typically requiring {\\mbox{$s>1$}}) so as to guarantee convergence.\nIndeed, since the number of powers of $\\tau_2$ within $g(\\tau_2)$ is generally correlated in string theory\nwith the number of uncompactified spacetime dimensions,\nwe may view this insertion of extra powers of $\\tau_2$ as a stringy version of dimensional regularization,\ntaking {\\mbox{$D\\to D_{\\rm eff} \\equiv D-2s$}}.\nHowever, since our original integrand $F(\\tau,{\\overline{\\tau}})$ is presumed modular invariant,\nthe choice in Eq.~(\\ref{extratau2}) in turn implies that the integrand on the right side of Eq.~(\\ref{unfold}) must\nbe taken as\n\\begin{equation}\n \\sum_{\\gamma \\in \\Gamma_\\infty \\backslash \\Gamma} ({\\rm Im}\\, \\gamma\\cdot \\tau )^{s} \\,F_\\gamma(\\tau,{\\overline{\\tau}}) ~=~\n E(\\tau,{\\overline{\\tau}},s) \\,F(\\tau,{\\overline{\\tau}})\n\\end{equation}\nwhere $E(\\tau,{\\overline{\\tau}},s)$ is the {\\it non-holomorphic Eisenstein series}, often simply denoted $E(\\tau,s)$ \nand defined by \n\\begin{equation}\n E(\\tau,s) ~\\equiv~ \\sum_{\\gamma\\in \\Gamma_\\infty \\backslash \\Gamma} \n [{\\rm Im}\\, (\\gamma\\cdot \\tau) ]^{s} ~=~ \n {\\textstyle{1\\over 2}} \\, \\sum_{(c,d)=1} \\frac{\\tau_2^s}{|c\\tau+d|^{2s}}~\n\\label{Eisenstein}\n\\end{equation}\nwith the second sum in Eq.~(\\ref{Eisenstein}) restricted to integer, relatively prime values of $c,d$.\nThus, with these choices, we now have\n\\begin{equation}\n \\int_{\\cal F} \\dmu \\, E(\\tau,s) F(\\tau,{\\overline{\\tau}})~=~ \\int_0^\\infty d\\tau_2 \\,\\tau_2^{s-2} g(\\tau_2) ~ \n\\label{stepone}\n\\end{equation}\nwhere the expression on the right side depends on only the physical string states.\n\nThe Eisenstein series $E(\\tau,s)$ \nhas a number of important 
properties. \nIt is convergent for all {\\mbox{$s>1$}}, \nbut can be analytically continued to all values of $s$.\nIt is not only modular invariant (consistent with ${\\cal F}$ as the corresponding region of \nintegration), but its insertion on the left side of Eq.~(\\ref{stepone}) relative to our original starting point in \nEq.~(\\ref{eq:I}) softens the divergence as {\\mbox{$\\tau_2\\to\\infty$}}, as required. \nMost importantly for our purposes,\nhowever, this function has a simple pole at {\\mbox{$s=1$}}, with a $\\tau$-independent residue $3\/\\pi$.\nThe fact that this residue is $\\tau$-independent means that we can formally extract \nour original integral $I$ in Eq.~(\\ref{eq:I}) by taking the {\\mbox{$s=1$}} residue of both sides of Eq.~(\\ref{stepone}):\n\\begin{equation}\n I ~=~ \\frac{\\pi}{3}\\, \\oneRes\\, \\int_0^\\infty d\\tau_2 \\,\\tau_2^{s-2} \\,g(\\tau_2) ~. \n\\label{RSresult}\n\\end{equation}\nWe have therefore succeeded in expressing our original modular integral $I$ in terms of only the contributions\nfrom the physical states.\nThe result in Eq.~(\\ref{RSresult}) was originally obtained by Rankin and Selberg in 1939 \n(see, {\\it e.g.}\\\/, Refs.~{\\mbox{\\cite{rankin1a,rankin1b,selberg1}}}), \nand has proven useful for a number of applications in both physics and pure mathematics. \n\nAt this stage,\nthree important comments are in order. \nFirst, it may seem that the result in Eq.~(\\ref{RSresult}) implies that the unphysical states \nultimately make no contributions to the amplitude $I$. 
However, this is untrue: the result in Eq.~(\\ref{RSresult}) was derived under\nthe supposition that our original integrand $F(\\tau,{\\overline{\\tau}})$ is modular invariant, and this modular invariance \ndepends crucially on the existence of both physical and unphysical states in the full string spectrum.\nFor example, through the requirement of modular invariance,\nthe distribution of unphysical states in the string spectrum has a profound effect~{\\mbox{\\cite{Dienes:1994np,Dienes:1995pm}}} \non the values of the physical-state degeneracies $\\lbrace a_{nn}\\rbrace$ \nwhich appear in Eq.~(\\ref{gtrace}).\n\nAs our second comment, we point out that the above results can be reformulated in a manner which\neliminates the $\\tau_2$ integration completely and which depends directly on the integrand $g(\\tau_2)$.\nTo see this, we note that if we define $I(s)$ as the term on the left\nside of Eq.~(\\ref{stepone}),\nthen the relation in \nEq.~(\\ref{stepone}) simply states that \n$I(s)$ is nothing but the\nMellin transform of $g(\\tau_2)\/\\tau_2$. \nOne can therefore use the inverse Mellin transform to write $g(\\tau_2)\/\\tau_2$ directly in terms of $I(s)$.\nWhile such an inverse relation is useful in many contexts, \nfor our purposes it will be sufficient to note that such an inverse relation implies a direct connection\nbetween the poles of $I(s)$ and the asymptotic behavior of $g(\\tau_2)$ as {\\mbox{$\\tau_2\\to 0$}}.\nSpecifically, one finds a correlation\n\\begin{eqnarray}\n && {\\rm poles~of}~ I(s) ~{\\rm at}~ s=s_n ~{\\rm with~residues} ~ c_n \\nonumber\\\\\n && ~~~~\\Longrightarrow~~ g(\\tau_2)\\sim \\sum_n c_n \\tau_2^{1-s_n} ~~{\\rm as}~~\\tau_2\\to 0~.~~~~~~~ \n\\label{correlations}\n\\end{eqnarray}\nAs we have seen, $I(s)$ has a single pole along the real axis at {\\mbox{$s=1$}}, with residue $3I\/\\pi$. 
\nHowever, $I(s)$ also has an infinite number of poles at locations {\\mbox{$s_n= \\rho_n\/2$}}, where\n$\\rho_n$ are the non-trivial zeros of the Riemann $\\zeta$-function $\\zeta(s)$. \nAccording to the Riemann hypothesis, these zeros all have the form {\\mbox{$\\rho_n = {\\textstyle{1\\over 2}} \\pm i\\gamma_n $}}\nwhere {\\mbox{$\\gamma_n\\in\\mathbb{R}$}}.\nThe fact that {\\mbox{${\\rm Re}\\\/(s_n)<1$}} for all of these additional poles of $I(s)$ \nthen implies that the amplitude $I$ dominates the leading behavior of $g(\\tau_2)$ as {\\mbox{$\\tau_2\\to 0$}},\nallowing us to write~{\\mbox{\\cite{zag,Kutasov:1990sv}}}\n\\begin{equation}\n I~=~ \\frac{\\pi}{3}\\, \\lim_{\\tau_2\\to 0} g(\\tau_2)~.\n\\label{reformulation}\n\\end{equation}\nOf course, from Eq.~(\\ref{correlations}) we see that the {\\mbox{$\\tau_2\\to 0$}} limit of $g(\\tau_2)$ also contains\nsubleading oscillatory terms~\\cite{zag} corresponding to the non-trivial zeros of the $\\zeta$-function.\nThis suggests, through Eq.~(\\ref{gtrace}), that the $a_{nn}$ coefficients tend to oscillate in sign\nas {\\mbox{$n\\to \\infty$}}. This oscillating sign is in fact a consequence of the so-called \n``misaligned supersymmetry''~{\\mbox{\\cite{Dienes:1994np,Dienes:1995pm,Dienes:2001se}}} \nwhich is a generic property of all tachyon-free non-supersymmetric string models ---\na property whose existence is a direct consequence\nof modular invariance in general situations where $I$ is finite and {\\mbox{$F(\\tau,{\\overline{\\tau}})\\not=0$}}.\n\nOur final comment, however, is perhaps the most crucial.\nAs we have seen, the results in Eqs.~(\\ref{RSresult}) and (\\ref{reformulation}) were derived under the assumption,\nas stated within the above derivation,\nthat the infinite sum within the definition of $g(\\tau_2)$ in Eq.~(\\ref{gtrace}) \ndiverges no more rapidly than some inverse power of $\\tau_2$ as {\\mbox{$\\tau_2\\to 0$}}. 
This requirement was needed \nso that the introduction of sufficiently many $\\tau_2$ prefactors could suppress this divergence and render a finite result.\nUndoing the modular transformations involved in Eq.~(\\ref{unfold}),\nwe see that this is equivalent to demanding that our original integrand $F(\\tau,{\\overline{\\tau}})$ either fall, remain constant,\nor grow less rapidly than $\\tau_2$ as {\\mbox{$\\tau_2\\to \\infty$}}.\nIndeed, these are the conditions under which the Rankin-Selberg analysis is valid.\nNot surprisingly, these are also the conditions under which any integrand $F$ lacking terms with {\\mbox{$m=n<0$}} will produce a finite value for $I$.\n \n\n\\subsection{Regulating divergences\\label{Regulators}}\n\nThe techniques discussed in Sect.~\\ref{sec:RStechnique} are completely adequate\nfor situations in which the original amplitude $I$ is finite, with \nan integrand $F(\\tau,{\\overline{\\tau}})$ remaining finite or diverging less rapidly than $\\tau_2$ as {\\mbox{$\\tau_2\\to \\infty$}}.\nHowever, many physical situations of interest\n(including those we shall ultimately need to consider in this paper) \nlead to integrands $F(\\tau,{\\overline{\\tau}})$ which diverge more rapidly than this as {\\mbox{$\\tau_2\\to\\infty$}}.\nAs a result, the corresponding integral $I$ formally diverges and must be regulated.\n\nIn this section we shall discuss three different methods of regulating such amplitudes.\nThese methods are appropriate for cases \n--- such as we shall ultimately face ---\nin which \nthe integrand experiences a power-law divergence $\\sim \\tau_2^p$ with {\\mbox{$p\\geq 1$}} as {\\mbox{$\\tau_2\\to\\infty$}}.\nAs we shall see, these regulators each have different strengths and weaknesses,\nand thus it will prove useful to have all three at our disposal.\nIn particular, two of these regulators will explicitly break modular invariance,\nbut are closer in spirit to those that are traditionally\nemployed in ordinary quantum field 
theory.\nBy contrast, the third regulator will be fully modular invariant.\nBy comparing the results we will then be able \nto discern the novel effects \nthat emerge through a fully modular-invariant regularization procedure\nand understand the reasons why such a regulator is greatly superior to the others.\n\nAll three of these regulators proceed from the same fundamental observation.\nLet us suppose that $F(\\tau,{\\overline{\\tau}})$ diverges\nat least as quickly as $\\tau_2$ as {\\mbox{$\\tau_2\\to\\infty$}}.\nClearly, this behavior \nwill cause the integral $I$ to diverge on the left side \nof Eq.~(\\ref{RSresult}).\nHowever, this behavior will also cause\n$g(\\tau_2)$ to diverge as {\\mbox{$\\tau_2\\to \\infty$}}, which means that the\nright side of Eq.~(\\ref{RSresult}) will also diverge.\nThus, in principle, a relation such as that in Eq.~(\\ref{RSresult}) will be rendered meaningless.\nHowever, if there were a consistent way of {\\it subtracting}\\\/ or {\\it regulating}\\\/ the appropriate divergence on each \nside of Eq.~(\\ref{RSresult}),\nwe can imagine that we might then obtain an analogous relation between a finite regulated \nintegral $\\widetilde I$\nand a corresponding finite regulated physical-state trace $\\widetilde g(\\tau_2)$.\nAs we shall see, all three of the regulators we shall discuss have this property and \nlead to results which are analogous to the result in Eq.~(\\ref{RSresult}) and relate regulated integrals to\nregulated physical-state supertraces.\n\n\n\\subsubsection{Minimal regulator\\label{sec:minimal}}\n\nPerhaps the simplest and most minimal regulator that can be envisioned~\\cite{zag} is one in which we\ndirectly excise the divergence from the integral $I$ without disturbing the rest of the integral. 
\nBecause the divergences on both sides of Eq.~(\\ref{RSresult}) arise as {\\mbox{$\\tau_2\\to \\infty$}}, \nwe can formally excise this region of integration\nfrom ${\\cal F}$ by defining a truncated region ${\\cal F}_t$ to be the same as ${\\cal F}$ but with the additional\nrestriction that {\\mbox{$\\tau_2\\leq t$}}.\nBecause the divergence arises entirely within the {\\mbox{$\\tau_2> t$}} region (specifically as {\\mbox{$\\tau_2\\to\\infty$}}),\nwe can \n``undo'' the {\\mbox{$\\tau_2\\to \\infty$}} limit and alternatively define\n\\begin{equation}\n \\widetilde I(t) ~\\equiv~ \\int_{{\\cal F}_t} \\dmu \\, F + \\int_{{\\cal F}-{\\cal F}_t} \\dmu \\, [ F- \\Phi(\\tau_2)]~\n\\label{Itdef}\n\\end{equation} \nwhere we shall continue to assume that {\\mbox{$F(\\tau,{\\overline{\\tau}})\\sim \\Phi(\\tau_2)+...$}} as {\\mbox{$\\tau_2\\to\\infty$}}.\n Note that $\\widetilde I(t)$ is convergent for all finite $t$, as we desire. Moreover,\nbecause the second term in Eq.~(\\ref{Itdef}) has an integrand which is convergent throughout \nthe integration region ${\\cal F}-{\\cal F}_t$, taking the {\\mbox{$t\\to\\infty$}} limit eliminates the second term and\n$\\widetilde I(t)$ reproduces our original unregulated integral $I$ in Eq.~(\\ref{eq:I}).\nThus $\\widetilde I(t)$ represents an alternative, $t$-dependent method of regulating our original\nintegral $I$, one which is distinct from the minimal regularization $\\widetilde I$ \nin Eq.~(\\ref{tildeIdef}). 
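The bookkeeping in Eq.~(\\ref{Itdef}) can be explored with a one-dimensional toy model. Below we keep only the $\\tau_2\\geq 1$ part of the integration region (with measure $d\\tau_2\/\\tau_2^2$ and unit $\\tau_1$ width), choose a hypothetical integrand with divergence profile $\\Phi(\\tau_2)=c_0+c_1\\tau_2$, and verify numerically that the $t$-dependence of $\\widetilde I(t)$ enters only through the subtraction term: shifting $t$ shifts $\\widetilde I(t)$ by the corresponding difference of the anti-derivative of $\\Phi(\\tau_2)\/\\tau_2^2$. The coefficients and integrand here are invented for illustration, not taken from an actual string amplitude:

```python
import math

def integrate(f, lo, hi, steps=50000):
    """Simple midpoint-rule quadrature."""
    h = (hi - lo) / steps
    return sum(f(lo + (k + 0.5) * h) for k in range(steps)) * h

c0, c1 = 0.7, 1.3                  # hypothetical divergence coefficients

def F(t2):                         # toy integrand with F -> Phi as t2 -> infinity
    return c0 + c1 * t2 + math.exp(-t2)

def Phi(t2):
    return c0 + c1 * t2

def I_reg(t, cutoff=60.0):
    """tau_2 >= 1 part of the t-dependent regulated integral:
    unsubtracted up to tau_2 = t, with Phi subtracted beyond it
    (the convergent tail, F - Phi ~ e^{-tau_2}, truncated at cutoff)."""
    inner = integrate(lambda x: F(x) / x**2, 1.0, t)
    outer = integrate(lambda x: (F(x) - Phi(x)) / x**2, t, cutoff)
    return inner + outer

def Phi_I(t2):                     # anti-derivative of Phi(tau_2)/tau_2^2
    return -c0 / t2 + c1 * math.log(t2)

# Shifting t shifts the regulated integral only by the subtraction term.
d_numeric = I_reg(5.0) - I_reg(2.0)
d_analytic = Phi_I(5.0) - Phi_I(2.0)
assert abs(d_numeric - d_analytic) < 1e-4
```

In the full string setting the same bookkeeping goes through, with the anti-derivative of $\\Phi(\\tau_2)\/\\tau_2^2$ playing exactly this role.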
\n\n\nThese two regularizations are deeply connected, however \n --- a fact which will also enable us to express $\\widetilde I(t)$ in terms of supertraces,\njust as we did for $\\widetilde I$.\nNote that the only $t$-dependence within $\\widetilde I(t)$ arises from the \nintegration of the subtraction term $\\Phi(\\tau_2)$\nalong the $t$-dependent lower boundary \nof the integration region ${\\cal F}-{\\cal F}_t$.\nWe thus see that the subtraction term $\\Phi(\\tau_2)$\nwhich regularizes $\\widetilde I(t)$ \nintroduces a non-trivial dependence on $t$ such that\n\\begin{equation}\n \\widetilde I(t)~=~ \\Phi_I(t) + {\\cal C}\n\\label{tdependence}\n\\end{equation}\n where we recall that $\\Phi_I(\\tau_2)$ is the anti-derivative of $\\Phi(\\tau_2)\/\\tau_2^2$\nand where ${\\cal C}$ is an as-yet unknown $t$-independent quantity.\nHowever, it is easy to solve for ${\\cal C}$.\nGiven that {\\mbox{${\\cal C}= \\widetilde I(t) - \\Phi_I(t)$}}, we immediately see \nby taking the {\\mbox{$t\\to \\infty$}} limit of both sides and comparing with Eq.~(\\ref{tildeIdef}) that\n{\\mbox{$\\lim_{t\\to \\infty}{\\cal C} = \\widetilde I$}}.\nHowever, ${\\cal C}$ is independent of $t$, which means that {\\mbox{${\\cal C}=\\widetilde I$}}\nfor {\\it any}\\\/ value of {\\mbox{$t\\geq 1$}}.\nWe thus obtain a general relation, valid for all {\\mbox{$t\\geq 1$}},\nbetween our two regulators: \n\\begin{equation}\n \\widetilde I(t) ~=~ \\widetilde I +\\Phi_I(t)~.\n \\end{equation}\nOur previous result for $\\tilde I$ in \nEq.~(\\ref{Zresult}) then yields~\\cite{zag} \n\\begin{equation}\n \\widetilde I(t) ~=~ \\frac{\\pi}{3}\\, \\oneRes \\, \\int_0^\\infty d\\tau_2 \\,\\tau_2^{s-2}\\, \\widetilde g(\\tau_2) ~+~ \n \\Phi_I(t)~+~ \\widetilde\\Phi~.\n\\end{equation}\n\nThus, just as with our minimal regulator, we find that our $t$-dependent regulator\nproduces a finite integral $\\widetilde I(t)$ which continues to be expressible\nin terms of a physical-state supertrace.\nIndeed, for the 
divergence structure {\\mbox{$\\Phi(\\tau_2)= c_0+c_1\\tau_2$}},\nwe find that\nour $t$-dependent regularized integral \n\\begin{equation}\n \\widetilde I(t) ~\\equiv~ \\int_{{\\cal F}_t} \\dmu \\, F + \\int_{{\\cal F}-{\\cal F}_t} \\dmu \\, ( F- c_0 - c_1\\tau_2)~\n\\label{I2}\n\\end{equation} \nis given by\n\\begin{eqnarray}\n \\widetilde I(t) \n ~&=&~ \\frac{\\pi}{3}\\, \\oneRes \\, \\int_0^\\infty d\\tau_2 \\,\\tau_2^{s-2}\\, \n \\biggl\\lbrack g(\\tau_2) -c_0-c_1 \\tau_2 \\biggr\\rbrack ~~~~\\nonumber\\\\\n &&~ ~ + \\left( \\frac{\\pi}{3} - {1\\over t}\\right) \\, c_0 + \n \\log\\left( 4\\pi \\,t\\, e^{-\\gamma}\\right) c_1~.\n\\label{Zagierresult2}\n\\end{eqnarray}\n\nOnce again, $c_0$ plays a special role in this result\nbecause the presence of the $c_0$ term within $\\Phi(\\tau_2)$ does not lead\nto a divergence.\nIndeed, given that the region ${\\cal F}-{\\cal F}_t$ has volume $1\/t$ with respect to the $\\dmu$ measure, \nwe see that the subtraction of $c_0$ within Eq.~(\\ref{I2}) simply removes\na finite quantity $c_0\/t$ from the value of $\\widetilde I(t)$.\nFor integrands having this divergence structure\nwe can therefore define a {\\it modified}\\\/ (or {\\it improved}\\\/) non-minimal regulator\n\\begin{eqnarray}\n \\widehat I(t) ~&\\equiv&~ \\widetilde I(t) + c_0\/t \\nonumber\\\\\n ~&=&~ \\int_{{\\cal F}_t} \\dmu \\, F + \\int_{{\\cal F}-{\\cal F}_t} \\dmu \\, ( F - c_1\\tau_2)~,~~~~\n\\label{I3}\n\\end{eqnarray} \nwhereupon we find from Eq.~(\\ref{Zagierresult2}) that\n\\begin{eqnarray}\n \\widehat I(t) \n ~&=&~ \\frac{\\pi}{3}\\, \\oneRes \\, \\int_0^\\infty d\\tau_2 \\,\\tau_2^{s-2}\\, \n \\biggl\\lbrack g(\\tau_2) -c_0-c_1 \\tau_2 \\biggr\\rbrack ~~~~~\\nonumber\\\\\n &&~ ~ + \\frac{\\pi}{3} \\, c_0 + \\log\\left( 4\\pi \\,t\\, e^{-\\gamma}\\right) c_1~.\n\\label{Zagierresult3}\n\\end{eqnarray}\nIn other words, the $1\/t$-dependence on the right side of Eq.~(\\ref{Zagierresult2})\nwas in some sense spurious, reflecting a corresponding 
$1\/t$-dependence that was\nneedlessly inserted into the regulator definition in Eq.~(\\ref{I2}) and which has\nnow been removed from Eqs.~(\\ref{I3}) and (\\ref{Zagierresult3}).\nThe right side of Eq.~(\\ref{Zagierresult3}) is then independent of $c_0$ in the manner discussed\nbelow Eq.~(\\ref{Zagierresult}) for the minimal regulator.\n\n\n\\subsubsection{Modular-invariant regulators \\label{sec:modinvregs}}\n\nAlthough our results in Eqs.~(\\ref{Zagierresult}), (\\ref{Zagierresult2}), and (\\ref{Zagierresult3}) \nwere each derived in a manner that remained true to the modular-invariant unfolding procedure,\nneither side of these relations is modular invariant \nby itself. \nIn other words, even though \nthese relations correctly\nallow us to \nexpress our regulated\nintegrals $\\widetilde I$, $\\widetilde I(t)$, and $\\widehat I(t)$ in terms of a corresponding \nregulated physical-state supertrace $\\widetilde g(\\tau_2)$,\nneither $\\widetilde I$, $\\widetilde I(t)$, nor $\\widehat I(t)$ is itself a modular-invariant quantity. \nThis is an important observation because \nthese latter quantities will ultimately correspond to physical observables \nwithin the modular-invariant string context.\nWe must therefore additionally require that these observables themselves be modular invariant.\n\nThe issue, of course, is that \nneither $\\widetilde I$, $\\widetilde I(t)$, nor $\\widehat I(t)$ \nincorporates a modular-invariant way of eliminating the associated divergences as {\\mbox{$\\tau_2\\to\\infty$}}. 
\nHowever, it is possible to design regulators in which such divergences are indeed eliminated in a \nfully modular-invariant way.\nIn this work we shall present a particular set of modular-invariant regulators which will have several useful properties for our purposes.\n\nIn order to define these regulators, let us first recall that the partition function of\na bosonic worldsheet field compactified on a circle of radius $R$ is given by\n\\begin{eqnarray}\n Z_{\\rm circ}(a,\\tau) &=&\n \\sqrt{ \\tau_2}\\,\n \\sum_{m,n\\in\\mathbb{Z}} \\,\n \\overline{q}^{(ma-n\/a)^2\/4} \\,q^{(ma+n\/a)^2\/4}\\nonumber\\\\\n &=& \\sqrt{ \\tau_2}\\,\n \\sum_{m,n\\in\\mathbb{Z}} \\,\n e^{-\\pi \\tau_2 (m^2 a^2 + n^2 \/a^2)} \\, e^{2\\pi i mn \\tau_1}~~~\n\\nonumber\\\\\n\\label{Zcircdef}\n\\end{eqnarray}\nwhere we have defined the dimensionless inverse radius\n{\\mbox{$a\\equiv \\sqrt{\\alpha'}\/R$}}.\nHere the sum over $m$ and $n$ represents \nthe sum over all possible KK momentum and winding modes, respectively.\nNote that {\\mbox{$Z_{\\rm circ}\\to 1\/a$}} as {\\mbox{$a\\to 0$}},\nwhile {\\mbox{$Z_{\\rm circ}\\to a$}} as {\\mbox{$a\\to \\infty$}}.\nAs expected, $Z_{\\rm circ}(a,\\tau)$ is modular invariant for any $a$.\nUsing $Z_{\\rm circ}(a,\\tau)$, we shall then \nregulate any divergent integral of the form in Eq.~(\\ref{eq:I}) \nby defining\na corresponding series of regulated integrals $\\widetilde I_\\rho (a)$:\n\\begin{equation}\n \\widetilde I_\\rho (a) ~\\equiv~ \\int_{{\\cal F}} \\dmu \\, F(\\tau) \\, {\\cal G}_\\rho(a,\\tau)\n\\label{Iadef}\n\\end{equation}\nwhere our regulator functions \n${\\cal G}_\\rho(a,\\tau)$ are defined for any {\\mbox{$\\rho\\in \\mathbb{R}^+$}}, {\\mbox{$\\rho\\not=1$}}, as\n\\begin{equation}\n {\\cal G}_\\rho(a,\\tau) ~\\equiv~ \n A_\\rho\\, a^2 \\frac{\\partial}{\\partial a} \\biggl\\lbrack Z_{\\rm circ}( \\rho a,\\tau) - Z_{\\rm circ}(a,\\tau)\\biggr\\rbrack~ \n\\label{regG}\n\\end{equation} \nwhere {\\mbox{$A_\\rho\\equiv 
\\rho\/(\\rho-1)$}} is an overall normalization factor.\nNote that ${\\cal G}_\\rho(a,\\tau)$ inherits its modular invariance from $Z_{\\rm circ}$, thereby rendering the regulated\nintegral $\\widetilde I_\\rho(a)$ in Eq.~(\\ref{Iadef}) fully modular invariant for any $a$ and $\\rho$. \nWe further note that ${\\cal G}_\\rho(a,\\tau)$ satisfies the identity\n\\begin{equation}\n {\\cal G}_\\rho(a,\\tau) ~=~ {\\cal G}_{1\/\\rho}(\\rho a,\\tau)~.\n\\label{rhoflipidentity}\n\\end{equation}\nWe can therefore take {\\mbox{$\\rho>1$}} without loss of generality. \n\n\n\n\nThese functions ${\\cal G}_\\rho(a,\\tau)$ have two important properties which \nmake them suitable as regulators\nwhen {\\mbox{$a\\ll 1$}}.\nFirst, as {\\mbox{$a\\to 0$}}, we find that {\\mbox{${\\cal G}_\\rho(a,\\tau)\\to 1$}} for all $\\tau$.\nThus the {\\mbox{$a\\to 0$}} limit restores our original unregulated theory.\nSecond, for any {\\mbox{$a>0$}}, we find that {\\mbox{${\\cal G}_\\rho(a,\\tau)\\to 0$}} {\\it exponentially rapidly}\\\/ as {\\mbox{$\\tau_2\\to\\infty$}}.\nThus the insertion of ${\\cal G}_\\rho (a,\\tau)$ into the integrand of Eq.~(\\ref{Iadef})\nsuccessfully eliminates whatever power-law divergence \nmight have otherwise arisen from the original integrand $F(\\tau)$. 
\nIndeed, we see that this now happens in a smooth, fully modular-invariant way rather than through\na sharp, discrete subtraction.\nMotivated by these two properties, we shall therefore focus on situations in which {\\mbox{$a\\ll 1$}}, \nas these are the situations in which our regulator preserves as much of the original theory as possible (as we expect\nof a good regulator) while simultaneously eliminating all power-law divergences as {\\mbox{$\\tau_2\\to\\infty$}}.\nIn fact, for the special case {\\mbox{$\\rho=2$}} and for the specific values \n{\\mbox{$a= 1\/\\sqrt{k+2}$}} \n the insertion of this regulator even has a direct physical interpretation, arising through a procedure in which \nthe various fields in the background string geometry are turned on in such \na way that the CFT associated with the flat four-dimensional spacetime\nis replaced by that associated with an $SU(2)_k$ WZW model~\\cite{Kiritsis:1994ta,Kiritsis:1996dn,Kiritsis:1998en}. \n\n\nThat said, there is one further property of these regulator functions ${\\cal G}_\\rho(a,\\tau)$ \nwhich will prove useful for our purposes.\nWhen {\\mbox{$a\\ll 1$}}, the contributions from all non-zero winding modes \nwithin $Z_{\\rm circ}$ (and ultimately within ${\\cal G}$)\nare exponentially suppressed relative to those of the KK momentum modes.\nIn other words, when {\\mbox{$a\\ll 1$}} we can effectively restrict our summation in Eq.~(\\ref{Zcircdef})\nto cases with {\\mbox{$n=0$}}.\nWe then find that ${\\cal G}_\\rho(a,\\tau)$ loses its dependence on $\\tau_1$,\nrendering ${\\cal G}_\\rho(a,\\tau)$ a function of $\\tau_2$ alone.\nIn such cases we shall simply denote our regulator function as ${\\cal G}_\\rho(a,\\tau_2)$.\n\n \nIn Fig.~\\ref{regulator_figure}, we have plotted \nthe regulator function ${\\cal G}_2(a,\\tau_2)$ within the $(a,\\tau_2)$ plane for {\\mbox{$\\tau_2\\geq 1$}} (left panel)\nand as a function of {\\mbox{$\\tau_2\\geq 1$}} for various discrete values of {\\mbox{$a\\ll 1$}} (right 
panel).\nWe see, as promised, that\n{\\mbox{${\\cal G}_2(a,\\tau_2) \\to 0$}} for all {\\mbox{$a>0$}} as {\\mbox{$\\tau_2\\to\\infty$}},\nwhile {\\mbox{${\\cal G}_2(a,\\tau_2)\\to 1$}} for all {\\mbox{$\\tau_2\\geq 1$}} as {\\mbox{$a\\to 0$}}.\nWe also note that this suppression for large $\\tau_2$ is quite pronounced,\neven for {\\mbox{$a\\ll 1$}}.\n\n\n\\begin{figure*}[t!]\n\\centering\n\\includegraphics[keepaspectratio, width=0.51\\textwidth]{KKregulator3D.pdf}\n\\hskip 0.12 truein\n\\includegraphics[keepaspectratio, width=0.45\\textwidth]{KKregulator.pdf}\n\\caption{\n{\\it Left panel}\\\/: The modular-invariant regulator function ${\\cal G}_2(a,\\tau_2)$, \nplotted within the $(a,\\tau_2)$ plane for {\\mbox{$a\\ll 1$}} and {\\mbox{$\\tau_2\\geq 1$}}.\n{\\it Right panel}\\\/: The modular-invariant regulator ${\\cal G}_2(a,\\tau_2)$, \nplotted as a function of $\\tau_2$ for\n{\\mbox{$a=0.05$}} (blue), {\\mbox{$a=0.1$}} (orange), and {\\mbox{$a=0.3$}} (green).\nIn all cases we see that {\\mbox{${\\cal G}_2(a,\\tau_2) \\to 0$}} for all {\\mbox{$a>0$}} as {\\mbox{$\\tau_2\\to\\infty$}},\nwhile {\\mbox{${\\cal G}_2(a,\\tau_2)\\to 1$}} for all {\\mbox{$\\tau_2\\geq 1$}} as {\\mbox{$a\\to 0$}}.\nIndeed, for {\\mbox{$a=0.05$}}, we see that {\\mbox{${\\cal G}_2(a,\\tau_2)\\approx 1$}} for all {\\mbox{$\\tau_2\\lsim 100$}}.\nThus for small non-zero $a$ this regulator succeeds in suppressing the divergences\nthat might otherwise arise as {\\mbox{$\\tau_2\\to\\infty$}} while nevertheless \nhaving little effect throughout the rest of\nthe fundamental domain.}\n\\label{regulator_figure}\n\\end{figure*}\n\nFor any $a$ and $\\rho$, we see \nfrom Fig.~\\ref{regulator_figure}\nthat there is a corresponding value $\\tau_2^\\ast$ \nwhich can be taken as characterizing the approximate $\\tau_2$-location of the transition between\nthe unregulated region with {\\mbox{${\\cal G}_\\rho(a,\\tau_2)\\approx 1$}} and \nthe regulated region with {\\mbox{${\\cal G}_\\rho(a,\\tau_2)\\approx 
0$}}.\nFor example, we might define $\\tau_2^\\ast$ as the critical \nvalue corresponding to the top of the ``ridge'' in the left panel \nof Fig.~\\ref{regulator_figure} (or equivalently the maximum in the right panel of Fig.~\\ref{regulator_figure}).\nAlternatively, given the shapes of the functions in the right panel of Fig.~\\ref{regulator_figure},\nwe might define $\\tau_2^\\ast$ as the location \nat which ${\\cal G}_\\rho(a,\\tau_2)$ experiences an inflection from being concave-down to concave-up.\nFinally, a third possibility might be to define $\\tau_2^\\ast$ as\nthe value of $\\tau_2$ at which {\\mbox{${\\cal G}_\\rho(a,\\tau_2) =1\/2$}},\nrepresenting the ``midpoint'' between {\\mbox{${\\cal G}=1$}} and {\\mbox{${\\cal G}=0$}}.\nFor the {\\mbox{$\\rho=2$}} case shown in Fig.~\\ref{regulator_figure},\nwe then find for {\\mbox{$a\\ll 1$}} that each of these has a rather straightforward \nscaling behavior with $a^{-2}$:\n\\begin{eqnarray}\n \\hbox{ridge top:}~~~ && ~~ \\tau_2^\\ast ~\\approx~ \\frac{3}{2\\pi a^2} ~\\approx~ \\frac {0.477}{a^2}~\\nonumber\\\\\n \\hbox{inflection:}~~~ && ~~ \\tau_2^\\ast ~\\approx~ \\frac{3+\\sqrt{6}}{2\\pi a^2} ~\\approx~ \\frac {0.867}{a^2}~\n ~~~~~~\\nonumber\\\\\n \\hbox{{\\mbox{${\\cal G}=1\/2$}}:}~~~ && ~~ \\tau_2^\\ast ~\\approx~ \\frac{1.411}{a^2}~.\n\\label{tau2ast}\n\\end{eqnarray}\nIndeed, each of these results becomes increasingly accurate as {\\mbox{$a\\to 0$}}.\nMoreover, although the \nnumerical coefficient in the third case depends significantly on $\\rho$,\nthe numerical coefficients in the first two cases are actually independent of $\\rho$.\nIn all cases, however, \nwe see that ${\\cal G}_\\rho(a,\\tau_2)$ suppresses the contributions from regions of the fundamental domain\nwith {\\mbox{$\\tau_2\\gg \\tau_2^\\ast$}} while preserving the contributions from regions with {\\mbox{$1< \\tau_2\\ll \\tau_2^\\ast$}}.\nIndeed, this property holds regardless of our precise definition for $\\tau_2^\\ast$. 
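These limiting behaviors and the ridge-top scaling are easy to confirm numerically. In the {\\mbox{$n=0$}} (KK-only) approximation valid for {\\mbox{$a\\ll 1$}}, the circle partition function reduces to $\\sqrt{\\tau_2}\\sum_m e^{-\\pi\\tau_2 a^2 m^2}$, and the $a$-derivative defining ${\\cal G}_\\rho$ can be taken analytically. A Python sketch for {\\mbox{$\\rho=2$}} (truncated $m$-sum; an illustrative check under these approximations, not the exact regulator):

```python
import math

def dZ_da(a, t2, mmax=200):
    """a-derivative of the KK-only (n = 0) circle partition function
    sqrt(t2) * sum_m exp(-pi t2 a^2 m^2), valid for a << 1."""
    s = sum(-2.0 * math.pi * t2 * m * m * a
            * math.exp(-math.pi * t2 * a * a * m * m)
            for m in range(-mmax, mmax + 1))
    return math.sqrt(t2) * s

def G(a, t2, rho=2.0):
    """Regulator G_rho(a, tau_2) in the n = 0 approximation;
    note d/da Z(rho a) = rho Z'(rho a) by the chain rule."""
    A = rho / (rho - 1.0)
    return A * a * a * (rho * dZ_da(rho * a, t2) - dZ_da(a, t2))

a = 0.05
assert abs(G(a, 40.0) - 1.0) < 0.05   # tau_2 << tau_2^*: G ~ 1
assert G(a, 1600.0) < 1e-2            # tau_2 >> tau_2^*: exponential suppression
# ridge top: the argmax over tau_2 should sit near 3/(2 pi a^2) ~ 0.477/a^2
peak = max(range(100, 400), key=lambda t2: G(a, t2))
assert abs(peak * a * a - 3.0 / (2.0 * math.pi)) < 0.05
```

Since ${\\cal G}_\\rho$ depends on $a$ and $\\tau_2$ only through the combination $\\tau_2 a^2$ in this approximation, the $a^{-2}$ scaling of $\\tau_2^\\ast$ is automatic.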
\n\nArmed with these regulator functions ${\\cal G}_\\rho(a,\\tau_2)$, \nwe now wish to express the integral in Eq.~(\\ref{Iadef})\nin terms of an appropriately regulated supertrace over physical string states.\nHowever, given that $\\widetilde I_\\rho(a)$ is fully modular invariant and convergent as {\\mbox{$\\tau_2\\to\\infty$}},\nwe can simply use the original Rankin-Selberg result in Eq.~(\\ref{RSresult}).\nWe thus have\n\\begin{equation}\n \\widetilde I_\\rho (a) ~=~ \\frac{\\pi}{3}\\, \\oneRes \\, \\int_0^\\infty d\\tau_2 \\,\\tau_2^{s-2} \\,\\widetilde g_\\rho(a,\\tau_2) ~ \n\\label{Irhoa}\n\\end{equation}\nwhere, in analogy with Eq.~(\\ref{gtrace}), we have \n\\begin{equation}\n \\widetilde g_\\rho(a,\\tau_2) ~\\equiv~ \\int_{-1\/2}^{1\/2} d\\tau_1 \\, F(\\tau) \\, {{\\cal G}}_\\rho(a,\\tau)~.\n\\label{gFGdef}\n\\end{equation}\n\nIn general, for arbitrary $a$,\nthe regulator\n${\\cal G}_\\rho(a,\\tau)$ will have a traditional $(q,{\\overline{q}})$ power-expansion\nof the form {\\mbox{${\\cal G}\\sim \\sum_{r,s} b_{rs} {\\overline{q}}^r q^s$}}, just as we have\n{\\mbox{$F\\sim \\sum_{m,n} a_{mn}{\\overline{q}}^m q^n$}} in Eq.~(\\ref{integrand}). 
\nGiven this, \nwe find that the $\\tau_1$ integral in Eq.~(\\ref{gFGdef}) projects onto those states for which {\\mbox{$n-m= r-s$}}.\nHowever, ${\\cal G}_\\rho(a,\\tau)$ generally receives contributions from states with many different values of $r-s$.\nAs a result, $\\widetilde g_\\rho(a,\\tau_2)$ will generally receive contributions from not only the {\\it physical}\\\/ \n{\\mbox{$m=n$}} states within $F(\\tau)$ but also \nsome of the {\\it unphysical}\\\/ {\\mbox{$m\\not=n$}} states.\nIn other words, for general $a$, our regulator function ${\\cal G}_\\rho(a,\\tau)$ becomes entangled\nwith the physical-state trace in a way that allows unphysical states to contribute.\n\nAs we have seen, it is useful for practical purposes\nthat ${\\cal G}_\\rho(a,\\tau)$ loses its dependence on $\\tau_1$ when {\\mbox{$a\\ll 1$}}.\nIn other words, \nfor {\\mbox{$a\\ll 1$}}\nwe find that the contributions from terms with {\\mbox{$r\\not=s$}} within ${\\cal G}$ are suppressed.\nThe $\\tau_1$ integral in Eq.~(\\ref{gFGdef}) then projects onto only the {\\mbox{$m=n$}} physical states, as desired,\nand to a good approximation our expression for $\\widetilde g_\\rho(a,\\tau_2)$ in Eq.~(\\ref{gFGdef}) \nsimplifies to\n\\begin{equation}\n \\widetilde g_\\rho(a,\\tau_2) ~\\approx~ g(\\tau_2) \\, {{\\cal G}}_\\rho(a,\\tau_2)~\n\\label{gFGdef2}\n\\end{equation}\nwhere $g(\\tau_2)$ is our original unregulated physical-state trace in Eq.~(\\ref{gtrace}).\nThus, for {\\mbox{$a\\ll 1$}}, the same regulator function ${\\cal G}_\\rho(a,\\tau_2)$ which smoothly softens the \n{\\mbox{$\\tau_2\\to\\infty$}} divergence\nin the integrand $\\widetilde I_\\rho(a)$ also smoothly softens the \n{\\mbox{$\\tau_2\\to\\infty$}} divergence\nin the physical-state trace $\\widetilde g_\\rho(a,\\tau_2)$ --- all without introducing contributions from unphysical states.\nHowever, we shall later demonstrate that the integral \nin Eq.~(\\ref{Iadef}) can actually be performed exactly, yielding\nan expression in terms 
of purely physical states for all values of $a$.\n\n\nWhile these regulator functions ${\\cal G}_\\rho(a,\\tau_2)$ are suitable for many applications,\nit turns out that we can use these functions in order to \nconstruct additional modular-invariant regulators\nwhose symmetry properties transcend even those of ${\\cal G}_\\rho(a,\\tau_2)$.\nTo do this, we first observe from the modular invariance of ${\\cal G}_\\rho(a,\\tau_2)$ that \n\\begin{equation}\n {\\cal G}_\\rho(a, 1\/\\tau_2) ~=~ {\\cal G}_\\rho(a, \\tau_2) ~\n\\label{StransG}\n\\end{equation}\nfor any $\\rho$, $a$, and $\\tau_2$. Indeed, invariance under {\\mbox{$\\tau_2\\to 1\/\\tau_2$}} follows directly from\ninvariance under the modular transformation {\\mbox{$\\tau\\to -1\/\\tau$}} for {\\mbox{$\\tau_1=0$}}.\nSecond, the identity in Eq.~(\\ref{rhoflipidentity}) tells us that the parameters $(\\rho,a)$\nwhich define our ${\\cal G}$-functions\nhave a certain redundancy, such that the ${\\cal G}$-function with $(\\rho,a)$ is the same as\nthe ${\\cal G}$-function with $(1\/\\rho, \\rho a)$. \nIndeed, only the combination {\\mbox{$a'\\equiv \\sqrt{\\rho} a$}} is invariant under this redundancy. \n Thus, while Eq.~(\\ref{StransG}) provides a symmetry under reciprocal flips in $\\tau_2$,\nEq.~(\\ref{rhoflipidentity}) provides a symmetry under reciprocal flips in $\\rho$.\n\nGiven these two symmetries, it is natural to wonder\nwhether ${\\cal G}_\\rho(a,\\tau)$ also exhibits a reciprocal flip symmetry in\nthe one remaining variable {\\mbox{$a'\\equiv \\sqrt{\\rho} a$}}.\nThis would thus be a symmetry under {\\mbox{$a\\to 1\/\\rho a$}}, or equivalently under {\\mbox{$\\rho a^2\\to 1\/(\\rho a^2)$}}. \nIndeed, we shall find in Sect.~\\ref{sec:alignment} \nthat such an additional symmetry will be very useful and \nrender the modular symmetry manifest in certain cases where it would otherwise have been obscure. 
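\n\nThe parameter redundancy just described is easy to check at the level of the parameters alone: under {\\mbox{$(\\rho,a)\\to(1\/\\rho,\\rho a)$}} we have {\\mbox{$\\sqrt{1\/\\rho}\\,(\\rho a)=\\sqrt{\\rho}\\,a$}}, so $a'$ is indeed invariant, while the further flip {\\mbox{$a\\to 1\/(\\rho a)$}} inverts $\\rho a^2$. A minimal sketch:

```python
import math

def a_prime(rho, a):
    """The invariant combination a' = sqrt(rho) * a."""
    return math.sqrt(rho) * a

rho, a = 2.0, 0.05
# (rho, a) -> (1/rho, rho*a) leaves a' unchanged:
print(a_prime(rho, a), a_prime(1.0 / rho, rho * a))
# while a -> 1/(rho*a) inverts the combination rho * a^2:
print(rho * a**2, rho * (1.0 / (rho * a))**2)
```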
\nUnfortunately, ${\\cal G}_\\rho(a,\\tau)$ does not exhibit such a symmetry.\nOne might nevertheless wonder whether it is possible to modify this regulator\nfunction in such a way that it might exhibit this additional symmetry as well.\n\n \nIt turns out that this enhanced symmetry structure is relatively easy to arrange.\nFirst, we observe that $Z_{\\rm circ}(a,\\tau)$ is itself invariant under\n{\\mbox{$a\\to 1\/a$}} for any $\\tau$; indeed, this is the symmetry underlying T-duality for closed strings.\nGiven this, it is then straightforward to verify that \nthe functions\n\\begin{equation}\n \\widehat {\\cal G}_\\rho(a,\\tau) ~\\equiv~ \\frac{1}{1+ \\rho a^2} \\, {\\cal G}_\\rho(a,\\tau)~\n\\label{hatGdef}\n\\end{equation}\nnot only inherit all of the regulator properties and symmetries \ndiscussed above for ${\\cal G}_\\rho(a,\\tau)$ when {\\mbox{$a\\ll 1$}},\nbut are also manifestly invariant under {\\mbox{$a'\\to 1\/a'$}}, or equivalently {\\mbox{$a\\to 1\/(\\rho a)$}}, for any $\\tau$: \n\\begin{equation}\n \\widehat {\\cal G}_\\rho(a,\\tau) ~=~ \\widehat{\\cal G}_\\rho ( 1\/\\rho a, \\tau)~.\n\\label{newest}\n\\end{equation}\n We shall therefore take these $\\widehat {\\cal G}$-functions as defining our enhanced modular-invariant regulators.\nWe shall likewise define\ncorresponding enhanced regularized \nintegrals $\\widehat I_\\rho(a)$ as in Eq.~(\\ref{Iadef}), but with ${\\cal G}_\\rho(a,\\tau)$\nreplaced by $\\widehat {\\cal G}_\\rho(a,\\tau)$.\nWe then find that we can express $\\widehat I_\\rho(a)$ in terms of corresponding\nphysical-state traces $\\widehat g_\\rho(a,\\tau_2)$ as in\nEqs.~(\\ref{Irhoa}) through (\\ref{gFGdef2}),\nexcept with ${\\cal G}_\\rho(a,\\tau_2)$ replaced by $\\widehat {\\cal G}_\\rho(a,\\tau_2)$\nthroughout. 
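\n\nAt the level of the prefactor in Eq.~(\\ref{hatGdef}), the symmetry in Eq.~(\\ref{newest}) is easy to track: under {\\mbox{$a\\to 1\/(\\rho a)$}} the combination {\\mbox{$x\\equiv\\rho a^2$}} maps to $1\/x$, so the prefactor $1\/(1+x)$ maps to $x\/(1+x)$. Eq.~(\\ref{newest}) thus requires that ${\\cal G}_\\rho$ itself rescale by a compensating factor of $1\/x$ under this flip, a property inherited from the {\\mbox{$a\\to 1\/a$}} invariance of $Z_{\\rm circ}$. The sketch below verifies only this prefactor bookkeeping, with the rescaling of ${\\cal G}$ imposed by hand as an assumption (since the explicit form of $Z_{\\rm circ}$ is not reproduced here):

```python
def prefactor(rho, a):
    """The 1/(1 + rho*a^2) prefactor appearing in Eq. (hatGdef)."""
    return 1.0 / (1.0 + rho * a**2)

rho, a = 2.0, 0.05
x = rho * a**2
G_a = 0.7                  # placeholder value for G_rho(a, tau) at some tau
G_flipped = G_a / x        # ASSUMED rescaling: G_rho(1/(rho*a)) = G_rho(a)/x

# With this rescaling, G-hat = prefactor * G is invariant under a -> 1/(rho*a):
lhs = prefactor(rho, a) * G_a
rhs = prefactor(rho, 1.0 / (rho * a)) * G_flipped
print(lhs, rhs)            # equal, confirming the prefactor bookkeeping
```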
\n\nThe enhanced regulators in Eq.~(\\ref{hatGdef})\ncan also be understood in a completely different way, through analogy with what we\nhave already observed for our non-minimal regulators in Sect.~\\ref{sec:nonminimal}.~\nAs discussed after Eq.~(\\ref{Zagierresult3}),\nthe quantity $\\widetilde I(t)$ defined through our non-minimal regulator\nultimately contained \na spurious $t$-dependence that could be removed without disturbing\nthe suitability of the regulator itself.\nIt is for this reason that \nwe were able to transition from our original\nnon-minimal regulator in Eq.~(\\ref{I2}) to \nour improved non-minimal regulator in Eq.~(\\ref{I3}) in which\nsuch spurious terms were eliminated.\n\nIt turns out that a similar situation arises for our original modular-invariant \nregulators ${\\cal G}_\\rho(a,\\tau_2)$.\nIndeed, as we shall find in Sect.~\\ref{sec4}, use of these regulators would have led to results with analogously spurious terms --- {\\it i.e.}\\\/,\nterms which obscure the underlying symmetries of the theory.\nHowever, just as with Eq.~(\\ref{I3}), it is possible to define\nimproved modular-invariant regulators \nin which such spurious effects are eliminated.\nIndeed, these improved modular-invariant regulators \nare nothing but the regulators $\\widehat {\\cal G}_\\rho(a,\\tau)$\nintroduced in Eq.~(\\ref{hatGdef}).\nAdditional reasons for adopting the $\\widehat {\\cal G}_\\rho(a,\\tau)$ regulators\nwill be discussed in Sect.~\\ref{sec:Conclusions}.~\nThese improved regulators will therefore be our main interest in this paper.\n\n\n\\subsubsection{Aligning the non-minimal and modular-invariant regulators\\label{sec:alignment}}\n\nNeedless to say, the most important feature of\nour modular-invariant regulators is precisely that they are modular invariant.\nUse of these regulators therefore provides a way of controlling the divergences that might\nappear in string amplitudes while simultaneously preserving the modular invariance that rests at the 
heart\nof all that we are doing in this paper.\n\nThis becomes especially apparent upon comparing these modular-invariant regulators with\nthe non-minimal regulators of Sect.~\\ref{sec:nonminimal}. Recall that the non-minimal regulators operate by isolating those terms\nwithin the partition function $F(\\tau)$ which would have led to a divergence as {\\mbox{$\\tau_2\\to \\infty$}}, and then performing\na brute-force subtraction of those terms over the entire region of the fundamental domain ${\\cal F}$\nwith {\\mbox{$\\tau_2\\geq t$}}.\nIn so doing, modular invariance is broken twice: first, in artificially separating those terms \nwithin the partition function which would have led to a divergence from those \nwhich do not; and second, in then selecting a particular sharp location {\\mbox{$\\tau_2=t$}} at which \nto perform the subtraction of these divergence-inducing terms, essentially multiplying these terms \nby $\\Theta(t-\\tau_2)$ where $\\Theta$ is the Heaviside function.\nBy contrast, our modular-invariant regulator keeps the entire partition function $F$ intact\nand then multiplies $F$\nby a single modular-invariant regulator function $\\widehat {\\cal G}_\\rho(a,\\tau)$.\nAs such it does not induce a sharp Heaviside-like subtraction at any particular location within the fundamental domain,\nbut rather (as illustrated in the right panel of Fig.~\\ref{regulator_figure})\ninduces a smooth damping which operates most strongly for {\\mbox{$\\tau_2\\gg \\tau_2^\\ast$}} and which can be removed (or pushed off\ntowards greater and greater values of $\\tau_2^\\ast$) as {\\mbox{$a\\to 0$}}. 
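\n\nThe contrast between the two behaviors can be caricatured numerically. The smooth profile below is a purely illustrative stand-in with the stated limiting behavior ({\\mbox{$\\approx 1$}} for {\\mbox{$\\tau_2\\ll\\tau_2^\\ast$}}, {\\mbox{$\\approx 0$}} for {\\mbox{$\\tau_2\\gg\\tau_2^\\ast$}}); it is {\\it not}\\\/ the actual function $\\widehat{\\cal G}_\\rho$:

```python
import math

def sharp_subtraction(tau2, t):
    """Non-minimal regulator: divergence-inducing terms are kept only for
    tau2 < t, i.e. multiplied by the Heaviside factor Theta(t - tau2)."""
    return 1.0 if tau2 < t else 0.0

def smooth_damping(tau2, tau2_star, width=0.5):
    """Illustrative stand-in (a logistic profile in log tau2, NOT the actual
    G-hat): ~1 for tau2 << tau2*, ~0 for tau2 >> tau2*, smooth in between."""
    return 1.0 / (1.0 + math.exp(math.log(tau2 / tau2_star) / width))

t = tau2_star = 100.0      # aligned transition points
for tau2 in (1.0, 10.0, 100.0, 1000.0):
    print(f"tau2 = {tau2:6.0f}:  sharp = {sharp_subtraction(tau2, t):.0f}   "
          f"smooth = {smooth_damping(tau2, tau2_star):.3f}")
```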
\nAll of these crucial differences are induced by the modular invariance of the regulator\nand render our modular-invariant regulators \nwholly different from the non-minimal regulator of Sect.~\\ref{sec:nonminimal}.\n\n\nThese two regulators do share one common feature, however: they both introduce suppressions \ninto the integrands of our string amplitudes.\nWithin the non-minimal regulator this takes the form of a \nsharp subtraction that occurs at {\\mbox{$\\tau_2=t$}},\nwhile the modular-invariant regulator\ngives rise to a smoother suppression, a transition from\n{\\mbox{$\\widehat{\\cal G}\\approx 1$}} to {\\mbox{$\\widehat{\\cal G}\\approx 0$}} \n that occurs near {\\mbox{$\\tau_2\\approx \\tau_2^\\ast$}}.\nTo the extent that these two regulators share this single common feature, it is therefore possible \nto ``align''\nthem by choosing a particular definition for $\\tau_2^\\ast$\nwithin the modular-invariant regulator and then identifying\n\\begin{equation}\n t ~=~ \\tau_2^\\ast~.\n\\label{prealignment}\n\\end{equation} \nIn general, we have seen in Eq.~(\\ref{tau2ast})\nthat $\\tau_2^\\ast$ takes the general form\n\\begin{equation}\n \\tau_2^\\ast ~=~ \\frac{\\xi}{a^2} ~=~ \\frac{\\xi \\rho}{\\rho a^2}~,\n\\label{tau2asta}\n\\end{equation}\nwhere $\\xi$ is a numerical coefficient which depends on the particular definition of $\\tau_2^\\ast$ that is chosen.\nThus, for any value of $t$,\nwe can correspondingly tune our choices for {\\mbox{$\\rho>1$}} and $\\rho a^2$ in order to enforce Eq.~(\\ref{prealignment}) \nand in this sense bring our regulators into alignment.\n\nThis alone is sufficient to align our regulators.\nHowever, in keeping with the spirit of symmetry-enhancement that motivated our transition\nfrom ${\\cal G}$ to $\\widehat {\\cal G}$,\nwe can push this one step further.\nWe have already seen that our $\\widehat{\\cal G}$ regulator has a symmetry\nunder {\\mbox{$\\tau_2\\to 1\/\\tau_2$}} (and thus under {\\mbox{$\\tau_2^\\ast \\to 
1\/\\tau_2^\\ast$}})\nas well as a symmetry under {\\mbox{$a\\to 1\/\\rho a$}} [or equivalently under {\\mbox{$\\rho a^2 \\to 1\/(\\rho a^2)$}}].\nAlthough these are independent symmetries,\nthe fact that $\\tau_2^\\ast$ and $\\rho a^2$ are related through Eq.~(\\ref{tau2asta}) suggests\nthat we can align these \ntwo symmetries as well by further demanding that {\\mbox{$\\xi \\rho=1$}}.\n\nFor either of the first two $\\tau_2^\\ast$ definitions in Eq.~(\\ref{tau2ast}), this \nis a very easy condition to enforce: we simply take {\\mbox{$\\rho = 1\/\\xi$}}. \nThis is possible because the ``ridge-top'' and ``inflection'' definitions \nlead to values of $\\xi$ which are independent of $\\rho$.\nBy contrast, for the {\\mbox{${\\cal G} = 1\/2$}} condition (where {\\mbox{${\\cal G}\\approx \\widehat{\\cal G}$}} in the {\\mbox{$\\rho a^2 \\ll 1$}} limit),\nthe value of $\\xi$ is itself highly $\\rho$-dependent and it turns out that the constraint {\\mbox{$\\rho \\xi(\\rho)=1$}} has\nno solution for $\\rho$.\n\nIt is easy to understand why these different $\\tau_2^\\ast$ definitions lead to such different outcomes for $\\rho$.\nFor {\\mbox{$a\\ll 1$}} and {\\mbox{$\\rho>1$}}, the contributions to $\\widehat {\\cal G}$ from $Z_{\\rm circ}(\\rho a,\\tau)$ are hugely \nsuppressed compared with those from $Z_{\\rm circ}(a,\\tau)$.\nAs a result,\nany defining condition for $\\tau_2^\\ast$ which depends on the \nactual values of $\\widehat {\\cal G}$ will carry a sensitivity to $\\rho$ only through\nthe $\\widehat {\\cal G}$-prefactor {\\mbox{$A_\\rho = \\rho\/(\\rho-1)$}}. \nBy contrast, any defining condition for $\\tau_2^\\ast$ which depends on the \nvanishing of {\\it derivatives}\\\/ of $\\widehat {\\cal G}$ becomes insensitive to the overall\nscale factor $A_\\rho$ \nand thus independent of $\\rho$.\nIndeed, the ``ridge-top'' and ``inflection'' definitions depend on the vanishing of\nthe first and second $\\widehat {\\cal G}$-derivatives respectively. 
\nSuch conditions therefore lead to a vastly simpler algebraic structure for $\\tau_2^\\ast$ as\na function of $\\rho$.\n\nThus, pulling the pieces together, we see that we can align our modular-invariant regulator with\nour non-minimal regulator by adopting a particular definition for $\\tau_2^\\ast$\nand then choosing the values of the $(\\rho,a)$ parameters \nwithin our modular-invariant regulator such that\n\\begin{equation}\n t ~=~ \\frac{\\xi \\rho}{\\rho a^2}~.\n\\label{identification}\n\\end{equation}\nMoreover, we can further enhance the symmetries underlying\nthis identification by restricting our attention to $(\\rho,a)$ choices \nfor which {\\mbox{$\\xi \\rho=1$}}.\nHowever, because modular invariance essentially smoothes out the\nsharp transition at {\\mbox{$\\tau_2=t$}}\nthat otherwise existed within the non-minimal regulator, \nwe face an inevitable uncertainty in how we define $\\tau_2^\\ast$.\nIn the following, we shall therefore adopt \n\\begin{equation}\n t ~=~ \\frac{1}{\\rho a^2}~\n\\label{alignment}\n\\end{equation}\nas our alignment condition.\nDirectly enforcing this condition \nenables us to sidestep\nthe issues associated with choosing a particular value of $\\xi$ or a \nparticular definition for $\\tau_2^\\ast$.\nHowever, in enforcing this condition we should remain mindful of our regulator condition \nthat {\\mbox{$a\\ll 1$}}.\nLikewise, whenever needed, our choices for $\\rho$ should lie within\na range that is sensibly close to the approximate values of $\\xi^{-1}$ that characterize\nthe transition from {\\mbox{$\\widehat {\\cal G}\\approx 1$}} to {\\mbox{$\\widehat{\\cal G}\\approx 0$}}.\nFor example, when needed for the purposes of illustration, we shall \nchoose the fiducial value {\\mbox{$\\rho=2$}}.\nIndeed, such a value is very close to the value that would be required for the ``ridge-top''\ndefinition, yielding {\\mbox{$\\xi \\rho = 0.954$}}.\nHowever, by enforcing Eq.~(\\ref{alignment}) directly, we will be able to maintain 
alignment\nwithout needing to identify a particular definition \nfor $\\tau_2^\\ast$. Moreover, as already noted, the combination $\\rho a^2$ is\ninvariant under the symmetry in Eq.~(\\ref{rhoflipidentity}). \nThis combination will therefore appear naturally in many of our future calculations,\nthereby largely freeing us from the need to specify $\\rho$ and $a$ individually.\n\nOf course, we see from Eq.~(\\ref{alignment}) that choosing $\\rho$ within this range and taking {\\mbox{$a\\ll 1$}} will be possible\nonly if {\\mbox{$t\\gg 1$}}. Thus, although the choice of $t$ is completely arbitrary\nwithin the non-minimal regulator, only those non-minimal regulators with {\\mbox{$t\\gg 1$}} can\nbe aligned with our modular-invariant regulators in a meaningful way.\n\n\n\n\n \n\n\n\n\n\\section{Towards a field-theoretic interpretation: \n The Higgs mass as a supertrace over physical string states\n \\label{sec4}}\n\n\nEquipped with the mathematical machinery from Sect.~\\ref{sec3}, we now \nseek to express\nour result for the Higgs mass given in Eq.~(\\ref{relation1}) \nin terms of the supertraces over only the physical string states.\nIn so doing we will be \ndeveloping an \nunderstanding of our results from a field-theory perspective\n--- indeed, as a string-derived effective field theory (EFT) valid at low energies.\nAll of these results will be crucial for allowing us to understand how\nthe Higgs mass ``runs'' within such an EFT, and\nultimately allowing us to extract \na corresponding ``stringy'' effective Higgs potential in Sect.~\\ref{sec5}.\n\n\n\n\\subsection{Modular invariance, UV\/IR equivalence, and the passage to an EFT \\label{UVIRequivalence}}\n\n\nOur first task is to understand the manner through which\none may extract an EFT description\nof a theory with modular invariance.\nThis is a subtle issue because such theories, as we shall see, possess a certain\nUV\/IR equivalence. 
However, understanding this issue is ultimately crucial for the physical interpretations that we will be providing\nfor our results in the rest of this section, especially as they relate to the effects\nof the mathematical regulators\nwe have presented in Sect.~\\ref{sec3}.\n\nIn this paper, our interest has thus far focused on performing a fully string-theoretic calculation\nof the Higgs mass. Given that modular invariance is a fundamental symmetry of perturbative closed strings,\nwe have taken great care to preserve modular invariance at every step of our calculations (or to note the extent\nto which this symmetry has occasionally been violated, such as for two of the three possible regulators discussed in\nSect.~\\ref{sec3}).~ \nHowever, modular transformations\nmix the contributions of individual string states into each other in \nhighly non-trivial ways across the entire string spectrum.\nIndeed, we shall see that modular invariance even leads to a fundamental \nequivalence between ultraviolet (UV) and infrared (IR) divergences.\nThus a theory such as string theory can be modular invariant\nonly if all of its states across the {\\it entire}\\\/ string spectrum \nare carefully balanced against each other~\\cite{Dienes:1994np} and treated similarly, as a coherent whole.\nEFTs, by contrast, are predicated on an approach that treats UV physics and IR physics\nin fundamentally different ways, retaining the dynamical degrees of freedom associated with the IR physics \nwhile simultaneously ``integrating out'' the degrees of freedom associated with the UV physics.\nAs a result, any attempt to develop\na true EFT description of a modular-invariant \ntheory such as string theory inherently breaks modular invariance.\n\nIt is straightforward to see that modular invariance leads to an equivalence between UV and IR divergences.\nIn general, one-loop closed-string amplitudes are typically \nexpressed in terms of modular-invariant integrands $F(\\tau)$ which\nare then 
integrated over the fundamental domain ${\\cal F}$ of the modular group.\nIf such an amplitude diverges, this divergence will arise from the {\\mbox{$\\tau_2\\to \\infty$}}\nregion within ${\\cal F}$.\nGiven that the contributions from the heavy string states \nwithin the integrand are naturally suppressed as {\\mbox{$\\tau_2\\to\\infty$}},\nit would be natural to interpret this divergence\nas an IR divergence involving low-energy physics.\n\nHowever, such an interpretation would be inconsistent\nwithin a modular-invariant theory.\nIn any modular-invariant theory with a modular-invariant integrand $F(\\tau)$, \nwe can always rewrite\nour amplitude through the identity\n\\begin{equation} \n \\int_{\\cal F} \\dmu ~ F(\\tau) = \n \\int_{\\cal F} \\dmu ~ F(\\gamma \\cdot \\tau) = \n \\int_{\\gamma\\cdot {\\cal F}} \\dmu ~ F(\\tau)~~~\n\\label{anychoice}\n\\end{equation}\nwhich holds for any modular transformation $\\gamma$.\nFrom Eq.~(\\ref{anychoice}) we see that \nchoosing ${\\cal F}$ as our region of integration\nis mathematically equivalent to choosing\nany of its images {\\mbox{$\\gamma\\cdot {\\cal F}$}} under any modular transformation $\\gamma$.\nOne of these equivalent choices is {\\mbox{${\\cal F}'\\equiv \\gamma_S\\cdot {\\cal F}$}} \nwhere $\\gamma_S$ is the {\\mbox{$\\tau\\to -1\/\\tau$}} modular transformation.\nThis region is explicitly given as\n\\begin{eqnarray}\n {\\cal F}' ~&\\equiv&~ \\lbrace \\tau :\\, \\tau_2>0,\\, |\\tau|\\leq 1, \\,\n (\\tau_1 +1)^2 + \\tau_2^2 \\geq 1, ~~~~~~~~\\nonumber\\\\\n && ~~~~~~~~~~~~~~ (\\tau_1 -1)^2 + \\tau_2^2 \\geq 1\\, \\rbrace~,~~~~~\n\\end{eqnarray}\nand as such includes the {\\mbox{$\\tau_2\\to 0$}} region but no longer includes the {\\mbox{$\\tau_2\\to\\infty$}} region.\nIndeed, via the identity in Eq.~(\\ref{anychoice}) we see that the\ndivergence of our amplitude now appears as {\\mbox{$\\tau_2\\to 0$}}.\nHowever, \nthere is no suppression of the contributions from the heavy string states \nwithin the 
integrand\nas {\\mbox{$\\tau_2\\to 0$}}.\nInstead, any divergence as {\\mbox{$\\tau_2\\to 0$}} arises through the accumulating contributions of the heavy\nstring states and would therefore naturally be interpreted as a UV divergence. \nThus, by trading ${\\cal F}$ for ${\\cal F}'$ through Eq.~(\\ref{anychoice}), we see that we can \nalways mathematically recast what would naively appear \nto be an IR divergence as {\\mbox{$\\tau_2\\to \\infty$}} \ninto what would naively appear to be a UV divergence as {\\mbox{$\\tau_2\\to 0$}} --- all without\ndisturbing the integrand of our amplitude in any way.\nA similar conclusion holds for the many other ${\\cal F}''$ domains that could equivalently have been chosen for\nother choices of the modular transformation $\\gamma$.\n\n\n\n\n\n\nThis is a fundamental observation.\nWhen we calculate an amplitude in string theory, \nwe are equipped with an integrand which reflects\nthe spectrum of string states but we must\nchoose an appropriate fundamental domain of the modular group.\nThis choice is not something dictated within the theory itself, but instead\namounts to a {\\it convention}\\\/ which is adopted for the sake of performing a calculation.\nIt is possible, of course, that the amplitude in question diverges.\nAs we have seen, if we choose the fundamental domain ${\\cal F}$ as defined in Eq.~(\\ref{Fdef}) \nthen this divergence will manifest itself\nas an IR divergence.\nHowever, if we choose ${\\cal F}'$ as our fundamental domain, this same divergence of\nthe amplitude will manifest itself as a UV divergence.\nBoth interpretations are equally valid \nbecause the divergence \nof a one-loop modular-invariant string amplitude\nis neither intrinsically UV nor intrinsically IR.~\nIndeed, such a divergence \nis a property {\\it of the amplitude itself}\\\/ and is not intrinsically tied\nto any particular value of $\\tau$.\nSuch a divergence is then merely {\\it represented}\\\/ as a UV or IR divergence depending on our choice 
of\na region of integration.\n\nThis observation can also be understood through a comparison with our expectations from quantum field theory.\nAs we have seen in Eq.~(\\ref{unfold}), there is a tight relation between\nthe fundamental domain ${\\cal F}$ and the strip ${\\cal S}$ defined in Eq.~(\\ref{Sdef}): \nessentially ${\\cal F}$ is a ``folded'' version of ${\\cal S}$. Likewise, the modular-invariant integrand $F(\\tau)$ \nthat is integrated over ${\\cal F}$ is nothing but the sum of the images of the {\\it non}\\\/-invariant integrand which \nwould be integrated over ${\\cal S}$.\nThus, through the unfolding procedure in Eq.~(\\ref{unfold}), we have two equivalent representations\nfor the same physics. These are often called the ${\\cal F}$- and ${\\cal S}$-representations.\n\nIt is through the ${\\cal S}$-representation that we can most directly\nmake contact with the results that would come from a quantum field theory based on point particles. \nWithin the ${\\cal S}$-representation, we can identify $\\tau_2$ as the Schwinger proper-time\nparameter, with {\\mbox{$\\tau_2\\to\\infty$}} corresponding to the field-theoretic IR limit\nand with {\\mbox{$\\tau_2\\to 0$}} corresponding to the field-theoretic UV limit.\nIndeed, within field theory these limits are physically distinct, just as they are geometrically\ndistinct within the strip.\nHowever, \nupon folding the strip ${\\cal S}$ \ninto the fundamental domain ${\\cal F}$, we see that {\\it both}\\\/ the UV {\\it and}\\\/ IR field-theoretic regions\nwithin ${\\cal S}$ are together mapped onto the {\\mbox{$\\tau_2\\to\\infty$}} region within ${\\cal F}$.\nIndeed, the distinct UV and IR regions of the strip ${\\cal S}$ are now ``folded'' so as \nto lie directly on top of each other within ${\\cal F}$.\nThus, within the ${\\cal F}$-representation, the {\\mbox{$\\tau_2\\to\\infty$}} limit in some sense\nrepresents {\\it both}\\\/ the UV and IR field-theory limits simultaneously --- limits\nwhich would have been 
viewed as distinct within field theory but which are now related to each other in\nstring theory through modular invariance. \nAn identical argument also holds for the {\\mbox{$\\tau_2\\to 0$}} region within ${\\cal F}'$.\n\nWe can therefore summarize the situation as follows.\nFor a modular-invariant string-theoretic amplitude there is only one kind of divergence. \nIt can be represented as either a UV divergence or an IR divergence depending on our choice\nof fundamental domain (region of integration). However, in either case, this single \nstring-theoretic divergence can be mapped back to what can \nbe considered a modular-invariant {\\it combination}\\\/ of UV and IR field-theoretic divergences in \nfield theory ({\\it i.e.}\\\/, on the strip ${\\cal S}$). Indeed, we may schematically write\n\\begin{equation}\n \\underbrace{ \n {\\rm IR}_{{\\cal F}} \\,=\\, {\\rm UV}_{{\\cal F}'} }_{\\hbox{string theory}}\n ~\\Longleftrightarrow~\n \\underbrace{ {\\rm IR}_{{\\cal S}} \\,\\oplus\\, {\\rm UV}_{{\\cal S}} }_{\\hbox{field theory}}~~\n\\label{UVIR}\n\\end{equation}\nwhere `$\\oplus$' signifies a modular-invariant combination.\nWe shall obtain an explicit example of such a combination below.\nIt is ultimately in this way, through Eq.~(\\ref{UVIR}), that \nour modular-invariant string theory loses its ability to distinguish between UV and IR physics.\nWe will discuss these issues further in Sect.~\\ref{sec:Conclusions}.\n\n\n\nOur discussion in this paper has thus far been formulated with ${\\cal F}$ chosen\nas our fundamental domain. In this way we have been implicitly casting \nour string-theoretic divergences as infrared. In the following, we shall therefore continue along this line and attach corresponding \nphysical interpretations to our mathematical results as far as possible. 
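\n\nThe ${\\cal F}\\leftrightarrow{\\cal F}'$ recasting is simple to verify numerically: the image under {\\mbox{$\\tau\\to -1\/\\tau$}} of a point deep within the {\\mbox{$\\tau_2\\to\\infty$}} region of ${\\cal F}$ lands within the {\\mbox{$\\tau_2\\to 0$}} region of ${\\cal F}'$. A small sketch, using the standard definition of ${\\cal F}$ (with non-strict inequalities at the boundaries) together with the explicit definition of ${\\cal F}'$ given above:

```python
def in_F(tau):
    """Standard fundamental domain F: tau_2 > 0, |tau_1| <= 1/2, |tau| >= 1."""
    return tau.imag > 0 and abs(tau.real) <= 0.5 and abs(tau) >= 1.0

def in_F_prime(tau):
    """F' = gamma_S . F as given in the text: tau_2 > 0, |tau| <= 1,
    and (tau_1 +/- 1)^2 + tau_2^2 >= 1."""
    t1, t2 = tau.real, tau.imag
    return (t2 > 0 and abs(tau) <= 1.0
            and (t1 + 1.0)**2 + t2**2 >= 1.0
            and (t1 - 1.0)**2 + t2**2 >= 1.0)

tau = complex(0.3, 5.0)    # deep in the tau_2 -> infinity region of F
tau_S = -1.0 / tau         # image under the S-transformation tau -> -1/tau
print(in_F(tau), in_F_prime(tau_S), round(tau_S.imag, 3))
# the would-be IR region of F maps into the tau_2 -> 0 region of F'
```

In this sense the same divergence is merely relocated, not altered, by the change of fundamental domain.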
\nHowever, we shall also \noccasionally\nindicate how our results might alternatively \nappear within the ${\\cal F}'$-representation, or within the \nunfolded ${\\cal S}$-representation of ordinary quantum field theory. \nThis will ultimately be important for extracting an EFT for the Higgs mass,\nfor understanding how our Higgs mass ``runs'' within such an EFT,\nand for eventually interpreting our results in terms of a stringy effective potential.\n\n\nOne further comment regarding the nature of these divergences is in order.\nThe above discussion has focused on the manner in which modular invariance mixes\nUV and IR divergences\nwhen passing from field theory to string theory.\nHowever, it is also important to remember that modular invariance likewise affects\nthe {\\it strengths}\\\/ of these divergences.\nTo understand this, we recall from Eq.~(\\ref{stripF}) that the strip ${\\cal S}$, which serves as the field-theoretic region of integration,\nis nothing but the sum of the images of ${\\cal F}$, a string-theoretic region of integration,\nunder each of the modular transformations $\\gamma$ in the coset {\\mbox{$\\Gamma_\\infty\\backslash \\Gamma$}}.\nHowever, there are an infinite number of such modular transformations within this coset.\nThis means, in essence, that our string-theoretic divergences (if any) are added together\nan infinite number of times when ${\\cal F}$ is unfolded into ${\\cal S}$,\nimplying that the resulting field-theoretic divergences are far more severe than those of the string.\nPhrased somewhat differently, we see that modular invariance softens a given\nfield-theoretic divergence by allowing us to reinterpret part of this divergence as resulting from an infinity\nof identical copies of a weaker (modular-invariant) string divergence, whereupon we are authorized to\nselect only one such copy. 
\n\nThis observation is completely analogous to what happens within field theory in the presence of a gauge symmetry.\nIf we were to disregard the gauge symmetry when calculating a field-theoretic amplitude, \nwe would integrate over an infinite number of gauge slices when performing our path integrals. \nThis would result in divergences which are spuriously severe.\nHowever, modular invariance is similar to gauge symmetry in the sense that both represent redundancies of \ndescription. \n(In the case of modular invariance, the redundancy arises from the fact that all values of {\\mbox{$\\gamma \\cdot \\tau$}}\nfor {\\mbox{$\\gamma\\in\\Gamma$}} correspond to the same worldsheet torus.)\nIn a modular-invariant theory, we therefore divide out by the (infinite) volume of the \nredundancy coset {\\mbox{$\\Gamma_\\infty\\backslash \\Gamma$}} and consider only one modular-invariant ``slice''. \nIndeed, this is precisely what is happening when we pass from ${\\cal S}$ to ${\\cal F}$ (or any of its images)\nas the appropriate region of integration in a modular-invariant theory, where the particular choice of image\nis nothing but the particular choice of slice.\nThis passage from ${\\cal S}$ to a particular modular-invariant slice\nthen softens our field-theoretic divergences and in some cases even eliminates them entirely.\n\nWe have already seen one example of this phenomenon: the one-loop vacuum energy (cosmological\nconstant) $\\Lambda$ is badly divergent in quantum field theory, yet finite in any tachyon-free closed string theory.\nIndeed, it is modular invariance which is alone responsible for this phenomenon.\nAs we shall see, a similar softening of divergences also occurs for the Higgs mass.\n\n\nWe conclude this discussion with one additional comment.\nIt is a common assertion that string theory lacks UV divergences.\nThe rationale usually provided for this is that string theory intrinsically has a minimum length \nscale, namely $M_s^{-1}$, and that this provides 
a ``cutoff'' that eliminates \nall physics from arbitrarily short length scales and thereby eliminates the associated UV divergences. \nHowever, this argument fails to acknowledge that IR divergences may still remain,\nand of course in a modular-invariant theory the UV and IR divergences are mixed.\nIndeed, as we have explained, there is no modular-invariant way of disentangling these\ntwo kinds of divergences. Thus string theory is not free of divergences.\nThese divergences are simply softer than they would have been in field theory.\n \n\n\\subsection{The divergence structure of the Higgs mass}\n\n\nWith the above comments in mind, we now consider the divergence structure of the Higgs mass.\nWe begin by recalling from Eq.~(\\ref{relation1}) that \nthe Higgs mass $m_\\phi$ \nwithin any four-dimensional heterotic string \nis given by \n\\begin{equation}\n m_\\phi^2 ~=~ -\\frac{{\\cal M}^2}{2} \\biggl(\n \\langle {\\cal X}_{1a}\\rangle \n + \\langle {\\cal X}_{1b}\\rangle \n + \\langle {\\cal X}_2\\rangle \\biggr)\n + \\frac{\\xi}{4\\pi^2} \\frac{\\Lambda}{{\\cal M}^2}~~~\n\\label{Higgsmass}\n\\end{equation}\nwhere\n\\begin{eqnarray}\n {\\cal X}_{1a} ~&\\equiv &~\n \\frac{\\tau_2}{\\pi} \n \\left( \\tilde {\\bf Q}_j^t {\\bf Q}_h + {\\bf Q}_j^t \\tilde {\\bf Q}_h \\right) \\nonumber\\\\ \n {\\cal X}_{1b} ~&\\equiv &~\n - \\frac{\\tau_2}{\\pi} \n \\left( {\\bf Q}_h^2 + \\tilde {\\bf Q}_h^2 \\right) \\nonumber\\\\ \n {\\cal X}_2 ~&\\equiv &~ \n \\tau_2^2 \\, \\left({\\bf Q}_R^t {\\bf Q}_h - {\\bf Q}_L^t \\tilde {\\bf Q}_h \\right)^2 \\nonumber\\\\\n && ~~~ = ~ \n 4 \\tau_2^2\\, ({\\bf Q}_R^t {\\bf Q}_h)^2 ~=~ 4 \\tau_2^2 \\, ({\\bf Q}_L^t \\tilde {\\bf Q}_h )^2 ~~~~~~\n\\label{Xidef}\n\\end{eqnarray}\nand where $\\Lambda$ is the one-loop cosmological constant.\nNote that we have explicitly separated those terms ${\\cal X}_{1a}$ and ${\\cal X}_{1b}$\nwhich are quadratic in charge\ninsertions \nfrom those terms ${\\cal X}_2$ which are quartic, as these will shortly play 
very different roles.\nMoreover, within the quadratic terms, we have further distinguished \nthose insertions ${\\cal X}_{1a}$ within which each term consists of a paired contribution \nof a left-moving charge with a right-moving charge\nfrom those insertions ${\\cal X}_{1b}$\nin which each term consists of two charges which are \nboth either left- or right-moving.\nIndeed, we recall from Sect.~\\ref{sec2} that only \n$\\langle {\\cal X}_{1a}\\rangle$ \nand the sum \n$\\langle {\\cal X}_{1b}\\rangle + \\langle {\\cal X}_2\\rangle$ \nare modular invariant;\nin particular, \n$\\langle {\\cal X}_{1b}\\rangle$\nand \n $\\langle {\\cal X}_2\\rangle$ \nare the modular completions of each other\nand thus \nneither \nis modular invariant by itself.\nThat said, it will prove convenient in this section to \nsimply define\n\\begin{eqnarray}\n {\\cal X}_{1} ~&\\equiv &~ {\\cal X}_{1a} + {\\cal X}_{1b}\\nonumber\\\\\n &= & ~\n \\frac{\\tau_2}{\\pi} \n \\left( \\tilde {\\bf Q}_j^t {\\bf Q}_h + {\\bf Q}_j^t \\tilde {\\bf Q}_h \n - {\\bf Q}_h^2 - \\tilde {\\bf Q}_h^2 \\right) ~,\n\\label{Xidef2}\n\\end{eqnarray}\nso long as we remember that only the \nfull combination $\\langle {\\cal X}_1\\rangle +\\langle {\\cal X}_2\\rangle$ is modular invariant.\n\nAs discussed in Sect.~\\ref{sec2}, these results are completely general and apply to any\nscalar $\\phi$ whose VEV determines the vacuum structure of the theory.\nIndeed, the various charge insertions \n${\\bf Q}_h$, $ \\tilde {\\bf Q}_h$, $ {\\bf Q}_j$, and $\\tilde {\\bf Q}_j$\nin Eq.~(\\ref{Xidef})\nare defined in Eqs.~(\\ref{Qhdef}) and (\\ref{Qjdef}) in terms of the\n${\\cal T}$-matrices which encapsulate the relevant information concerning the\nspecific scalar under study.\n\nUnlike the other terms in Eq.~(\\ref{Higgsmass}), the final term $\\Lambda$ \nemerges as the result of a universal shift in the background moduli.\nAs such, this quantity is wholly independent of the specific ${\\cal T}$-matrices,\nand merely provides a 
uniform shift to the masses of all scalars \nin the theory regardless of the specific roles these scalars might play in breaking gauge symmetries \nor otherwise affecting the vacuum state of the theory.\nIn other words, $\\Lambda$ provides what is essentially a mere ``background'' contribution\nto our scalar masses. Moreover, as the one-loop cosmological constant of the theory,\n$\\Lambda$ is an independent physical observable unto itself.\nFor this reason, we shall defer our discussion of $\\Lambda$ to Sect.~\\ref{Lambdasect} \nand focus instead on the effects coming from the ${\\cal X}_i$ insertions\nin Eq.~(\\ref{Higgsmass}).\n\nIn order to make use of the machinery in Sect.~\\ref{sec3}, we must\nfirst understand the divergence structure that can arise from each of these ${\\cal X}_i$\ninsertions as {\\mbox{$\\tau_2\\to \\infty$}}.\nFor any string model in four spacetime dimensions,\nthe original partition function prior to any ${\\cal X}_i$ insertions has\nthe form indicated in Eq.~(\\ref{Zform}),\nwith an overall factor of $\\tau_2^{-1}$. \nThus, the insertion of \n${\\cal X}_1$ leads to \nintegrands without a leading factor of $\\tau_2$,\nwhile the insertion of\n${\\cal X}_2$ leads to integrands with a \nleading factor of $\\tau_2^{+1}$. 
\n\nDetermining the possible divergences \nas {\\mbox{$\\tau_2\\to\\infty$}} requires that we\nalso understand the spectrum of low-lying states \nthat contribute to these integrands.\nWe shall, of course, assume that our string model\nis free of physical (on-shell) tachyons.\nThus, expanding the partition function ${\\cal Z}$ of our string model as \nin Eq.~(\\ref{integrand}) with {\\mbox{$k= -1$}}, \nwe necessarily have {\\mbox{$a_{nn}=0$}} for all {\\mbox{$n< 0$}} in Eq.~(\\ref{integrand}).\n\nThere is, however,\nan {\\it off-shell}\\\/ tachyonic state which must always appear within the spectrum of any self-consistent heterotic string model:\nthis is the so-called {\\it proto-graviton}~\\cite{Dienes:1990ij} with {\\mbox{$(m,n)=(0,-1)$}}, and no possible GSO projection can eliminate this state \nfrom the spectrum.\nAlthough this state is necessarily a singlet under all of the gauge symmetries of the model,\nit transforms as a vector under the spacetime Lorentz group. Consequently, \nthe degrees of freedom that compose this state have non-vanishing charge vectors of the form\n\\begin{equation}\n {\\bf Q}_{\\hbox{\\scriptsize proto-graviton}} ~=~ ({\\bf 0}_{22} \\, | \\pm 1, {\\bf 0}_9 )\n\\label{protocharge}\n\\end{equation}\nwhere we have written this charge vector in the same basis as used in Eq.~(\\ref{Tmatrixform}),\nwith the non-zero charge component in Eq.~(\\ref{protocharge}) lying along the spacetime-statistics direction\ndiscussed in Sect.~\\ref{sec:EWHiggs}.\n\nBecause of this non-zero charge component, the proto-graviton state has the possibility of contributing to one-loop string\namplitudes even when certain charge insertions occur.\nHowever, we have seen in Eq.~(\\ref{Tmatrixform})\nthat the ${\\cal T}$-matrices appropriate for shifts in the Higgs VEV do not disturb the spin-statistics of the states\nin the spectrum, and thus necessarily have zeros along the corresponding columns and rows. 
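As a sanity check of this zero-pattern argument, the toy snippet below (pure Python; the basis dimension of 32 and the position of the spacetime-statistics index are arbitrary illustrative choices, not taken from any actual string model) verifies that a matrix with zeros along a given row and column annihilates any charge vector supported only along that direction, so that every contraction of such a ${\cal T}$-matrix with the proto-graviton charge vector vanishes identically.

```python
import random

# Toy check: a symmetric "T-matrix" with zeros along the spacetime-statistics
# row and column annihilates a charge vector supported only along that
# direction.  The dimension (32) and index (22) are illustrative choices only.
random.seed(0)
dim, stat_idx = 32, 22

T = [[random.gauss(0.0, 1.0) for _ in range(dim)] for _ in range(dim)]
for i in range(dim):
    for j in range(i):
        T[i][j] = T[j][i]          # symmetrize, as for a quadratic form
for i in range(dim):
    T[stat_idx][i] = 0.0           # zero out the row ...
    T[i][stat_idx] = 0.0           # ... and the column

Q = [0.0] * dim
Q[stat_idx] = 1.0                  # proto-graviton-like charge vector

TQ = [sum(T[i][j] * Q[j] for j in range(dim)) for i in range(dim)]
assert all(x == 0.0 for x in TQ)                       # T annihilates Q ...
assert sum(Q[i] * TQ[i] for i in range(dim)) == 0.0    # ... and Q^t T Q = 0
```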
\nIndeed, these zeros are a general feature which would apply to all such ${\\cal T}$-matrices\nregardless of the specific Higgs scalar under study or its particular gauge embedding.\nAs a result, the would-be contributions from the proto-graviton state\ndo not survive either of the ${\\cal X}_i$ insertions in Eq.~(\\ref{Xidef}).\nIndeed, similar arguments also apply to potential contributions from the proto-gravitino states (such as would appear in the\nspectra of string models exhibiting spacetime supersymmetry).\n\nIn general, a heterotic string model can also contain other off-shell tachyonic $(m,n)$ states with {\\mbox{$m\\not=n$}} but {\\mbox{$m+n<0$}}. Unlike the proto-graviton state with {\\mbox{$(m,n)=(0,-1)$}}, such states would generally have {\\mbox{$(m,n)= (k+1,k)$}} where {\\mbox{$-1<k<0$}}.\n\nBy contrast, states with masses {\\mbox{$M>0$}} have a built-in Boltzmann-like \nsuppression $\\sim e^{-\\pi \\alpha' M^2 \\tau_2}$ as {\\mbox{$\\tau_2\\to\\infty$}}, while massless states\ndo not. \nThus massless states are unprotected by the Boltzmann suppression factor as {\\mbox{$\\tau_2\\to\\infty$}},\nwhich is why their contributions are subtracted as part of the regularization procedure.\n\nWithin the {\\it non}\\\/-minimal regulator, however, we distinguish between\ntwo different ranges for $\\tau_2$: one range with {\\mbox{$1\\leq \\tau_2 \\leq t$}}, and a\nsecond range with {\\mbox{$t\\leq \\tau_2 < \\infty$}}.\nOnly within the second range do we subtract the contributions from the massless states;\nindeed, massless states are considered ``safe'' within the first range.\nBut for any finite $t$, it is possible that there are many light states which do not have\nappreciable Boltzmann suppression factors at {\\mbox{$\\tau_2=t$}}.\nSuch light (or ``effectively massless'') states are therefore essentially indistinguishable from \ntruly massless states as far\nas their Boltzmann suppression factors are concerned.\nIndeed, it is only as {\\mbox{$\\tau_2\\to\\infty$}} that we can distinguish the truly 
massless\nstates relative to all the others. \n\nThis suggests that for any finite value of $t$, we can assess whether a given state \nof mass $M$ is effectively light or heavy\naccording to the magnitude of its corresponding Boltzmann suppression factor at {\\mbox{$\\tau_2=t$}} within the \npartition function.\nRecalling that the contribution from a physical string state of \nmass $M$ to the string partition function scales as $e^{-\\pi \\alpha' M^2 \\tau_2}$,\nwe can establish an arbitrary criterion for the magnitude of the Boltzmann suppression \nof a state with mass {\\mbox{$M=\\mu$}} at the cutoff $t$:\n\\begin{equation}\n e^{-\\pi \\alpha' \\mu^2 t} ~\\sim~ e^{-\\varepsilon}\n\\label{massscale}\n\\end{equation}\nwhere {\\mbox{$\\varepsilon\\geq 0$}} \nis an arbitrarily chosen dimensionless parameter.\nAccording to this criterion, states whose Boltzmann factors at {\\mbox{$\\tau_2= t$}} \nexceed $e^{-\\varepsilon}$ have not experienced significant Boltzmann suppression and \ncan then be considered light relative to that choice of $t$, \nwhile all others can be considered heavy.\nWe thus find that our division between light and heavy states can be demarcated by\na running mass scale $\\mu (t)$ defined as\n\\begin{equation}\n \\mu^2(t) ~\\equiv~ \\frac{\\varepsilon}{\\pi \\alpha' t}~.\n\\label{mudefeps}\n\\end{equation}\nNote, as expected, that {\\mbox{$\\mu(t)\\to 0$}} as {\\mbox{$t\\to\\infty$}}. Thus, as expected, \nthe only states that can be considered light as {\\mbox{$t\\to\\infty$}} are those which are exactly massless. 
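This demarcation is simple enough to verify directly. The following sketch (with $\alpha'$ set to one in illustrative units) confirms that a state of mass $M=\mu(t)$ has a Boltzmann factor of exactly $e^{-\varepsilon}$ at $\tau_2=t$, for any choice of cutoff $t$:

```python
import math

# A state of mass M = mu(t) has Boltzmann factor exactly e^{-eps} at tau_2 = t.
# Here alpha' is set to 1 in illustrative units; eps is the convention parameter.
alpha_p = 1.0
eps = math.pi   # with this choice, mu^2(t) reduces to 1/(alpha' t)

for t in (1.0, 10.0, 100.0):
    mu_sq = eps / (math.pi * alpha_p * t)                # mu^2(t)
    boltzmann = math.exp(-math.pi * alpha_p * mu_sq * t)
    assert math.isclose(boltzmann, math.exp(-eps))
    assert math.isclose(mu_sq, 1.0 / (alpha_p * t))      # the eps = pi convention
```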
\n\nUltimately, the choice of $\\varepsilon$ \ndetermines an overall scale for the mapping between $t$ and $\\mu$ and is thus a matter of convention.\nFor the sake of simplicity within Eq.~(\\ref{mudefeps}) and our subsequent expressions, we shall henceforth \nchoose {\\mbox{$\\varepsilon = \\pi$}}, whereupon Eq.~(\\ref{mudefeps}) reduces to\n\\begin{equation}\n \\mu^2(t) ~=~ \\frac{1}{\\alpha' t}~.\n\\label{mut}\n\\end{equation}\n \nWith these adaptations, our result for the regulated Higgs mass $\\widehat m_\\phi^2 (t)$ in Eq.~(\\ref{nonminresult}) \ncan be rewritten as\n\\begin{eqnarray}\n \\widehat m_\\phi^2(\\mu) \\,&=&\\, \\frac{\\xi}{4\\pi^2} \\frac{\\Lambda}{{\\cal M}^2} \n +{\\textstyle{1\\over 2}} {\\cal M}^2 \\, \\biggl[\n - \\frac{\\pi}{3} \\, {\\rm Str}\\, {\\mathbb{X}}_1 \\nonumber\\\\\n && ~~~~~~ + (\\,\\newzStr \\,{\\mathbb{X}}_2) \\log \\left(\\frac{\\mu^2}{{\\cal M}_\\ast^2}\\right) \\biggr]~~~~~~~~~~\n\\label{nonminresultFT}\n\\end{eqnarray}\nwhere we have defined \n\\begin{equation}\n {\\cal M}_\\ast^2 ~\\equiv~ 4\\pi \\,e^{-\\gamma} \\,M_s^2 ~=~ 16\\pi^3\\, e^{-\\gamma}\\, {\\cal M}^2~\n\\end{equation}\nand where we have restored the additional universal $\\Lambda$-term in Eq.~(\\ref{nonminresultFT}).\nWe thus see that while the first two terms in Eq.~(\\ref{nonminresultFT}) are independent of $\\mu$ and \ntogether constitute what may be considered an overall threshold term, \nthe logarithmic $\\mu$-dependence within $\\widehat m_\\phi^2(\\mu)$ for any $\\mu$ arises from those\nphysical string states which are charged under ${\\mathbb{X}}_2$ with masses {\\mbox{$M\\leq \\mu$}}.\n\n\nAs we have seen, the enhanced non-minimal regulator we are using here \noperates by explicitly subtracting the contributions \nof the ${\\mathbb{X}}_2$-charged massless states from all regions {\\mbox{$\\tau_2\\geq t$}}.\nThis is a sharp cutoff, and it is natural to wonder \nhow such a cutoff actually maps back onto the strip under the unfolding process.\nIndeed, 
answering this question will give us some idea about how this sort of cutoff might be interpreted \nin field-theoretic language.\nAs expected, imposing a sharp cutoff {\\mbox{$\\tau_2\\leq t$}} within the ${\\cal F}$ representation\nproduces both an IR cutoff as well as a UV cutoff on the strip.\nThe IR cutoff is inherited directly from the string-theory cutoff and takes the same form {\\mbox{$\\tau_2\\leq t$}}, thereby excising\nall parts of the strip with $\\tau_2$ exceeding $t$, independent of $\\tau_1$.\nHowever, the corresponding UV cutoff is highly non-trivial and is actually sensitive to $\\tau_1$ as well ---\na degree of freedom that does not have a direct interpretation in the field theory.\nMathematically, this UV cutoff excises from the strip \nthat portion of the region~\\cite{zag}\n\\begin{equation}\n \\bigcup_{(a,c)=1} \\, S_{a\/c}\n\\label{circles}\n\\end{equation}\nwhich lies within the range {\\mbox{$-1\/2\\leq \\tau_1 \\leq 1\/2$}},\nwhere $S_{a\/c}$ denotes the disc\nof radius $(2c^2 t)^{-1}$ \nwhich is tangent to the {\\mbox{$\\tau_2=0$}} axis at\n{\\mbox{$\\tau_1= a\/c$}} and where the union in Eq.~(\\ref{circles}) includes all such disks\nfor all relatively prime integers $(a,c)$.\nThus, as one approaches the {\\mbox{$\\tau_2=0$}} axis of the strip from above, the excised region \nconsists of an infinite series of smaller and smaller discs which are all tangential to this axis\nin an almost fractal-like pattern.\nClearly, \nall points which actually lie along the {\\mbox{$\\tau_2=0$}} axis with {\\mbox{$\\tau_1\\in \\mathbb{Q}$}} are excised\nfor any finite $t$ \n(and strictly speaking the other points along the {\\mbox{$\\tau_2=0$}} axis with {\\mbox{$\\tau_1\\not\\in \\mathbb{Q}$}} are \nnot even part of the strip).\nThus, through this highly unusual UV regulator, all UV divergences on the strip are indeed eliminated \nfor any finite $t$.\nOf course, this excised UV region is nothing but the image of the IR-excised region 
{\\mbox{$\\tau_2\\geq t$}} under\nall of the modular transformations (namely those within the coset {\\mbox{$\\Gamma_\\infty\\backslash \\Gamma$}}) \nthat play a role in building the strip from ${\\cal F}$.\nHowever, in field-theoretic language this amounts to a highly unusual UV regulator indeed!\n\n\n\n\n\\subsection{Results using the modular-invariant regulator}\n\n\nFinally, we turn to the results for the Higgs mass \nthat are obtained using the fully modular-invariant regulator \n$\\widehat {\\cal G}_\\rho(a,\\tau_2)$ in Eq.~(\\ref{hatGdef}).\nAs we have stressed, only such results can be viewed as faithful to the modular symmetry\nthat underlies closed string theory, and therefore only such results can be viewed\nas truly emerging from closed string theories.\n\nWe have seen in Eq.~(\\ref{Higgsmass}) that the string-theoretic Higgs mass \n$m_\\phi^2$ has two contributions: one of these stems from the ${\\cal X}_i$\ninsertions and requires regularization, while the other --- namely the\ncosmological-constant term --- is finite within any tachyon-free modular-invariant\ntheory and hence does not.\nWhen discussing the possible regularizations of the Higgs mass using\nthe minimal and non-minimal regulators in Sects.~\\ref{higgsmin} and \\ref{higgsnonmin},\nwe simply carried the cosmological-constant \nterm along within our calculations and focused\non applying our regulators to the contributions with ${\\cal X}_i$ insertions.\nThis was adequate for the minimal and non-minimal regulators because these regulators\ninvolve the explicit subtraction of divergences \nand thus have no effect on quantities which are already finite and therefore lack\ndivergences to be subtracted.\nOur modular-invariant regulator, by contrast, operates by deforming the theory.\nIndeed, this deformation has the effect of\nmultiplying the partition function of the theory\nwith a new factor $\\widehat {\\cal G}_\\rho(a,\\tau)$.\nAs such, this regularization procedure \ncan be expected to 
have an effect even when acting on finite quantities such as $\\Lambda$.\nWhen regularizing the Higgs mass in this manner,\nwe must therefore consider how this regulator affects both classes of Higgs-mass contributions --- \nthose involving non-trivial ${\\cal X}_i$ insertions, and those coming from the \ncosmological constant.\nIndeed, with\n $\\widehat m_\\phi^2(\\rho,a)\\bigl|_{{\\cal X},\\Lambda}$\nrespectively denoting these two classes of contributions \nto the $\\widehat {\\cal G}$-regulated version \n $\\widehat m_\\phi^2(\\rho,a)$\nof the otherwise-divergent string-theoretic Higgs mass in Eq.~(\\ref{Higgsmass}),\nwe can write \n\\begin{eqnarray}\n \\widehat m_\\phi^2(\\rho,a) \n ~&\\equiv&~ \\widehat m_\\phi^2(\\rho,a)\\Bigl|_{\\cal X} + ~ \\widehat m_\\phi^2(\\rho,a)\\Bigl|_\\Lambda~ \\nonumber\\\\ \n ~&\\equiv&~ \\widehat m_\\phi^2(\\rho,a)\\Bigl|_{\\cal X} + ~ \\frac{\\xi}{4\\pi^2 {\\cal M}^2} \\,\\widehat \\Lambda(\\rho,a)~.~~~~~~ \n\\label{twocontributions}\n\\end{eqnarray}\nWe shall now consider each of these contributions in turn.\n\n\n\\subsubsection{Contributions from terms with charge insertions\\label{chargeinsertions}}\n\nOur first contribution in Eq.~(\\ref{twocontributions})\nis given by \n\\begin{equation}\n \\widehat m_\\phi^2(\\rho,a)\\Bigl|_{\\cal X} ~\\equiv~ -\\frac{{\\cal M}^2}{2} \n \\Bigl\\langle {\\cal X}_1+ {\\cal X}_2\\Bigr\\rangle_{\\cal G} \n \\label{Higgsmass1}\n\\end{equation}\n where\n\\begin{equation}\n \\langle A \\rangle_{\\cal G} ~\\equiv~ \\int_{\\cal F} \\dmu\n \\left\\lbrace \n \\left\\lbrack \n \\tau_2^{-1} \\sum_{m,n} (-1)^F A~ {\\overline{q}}^{m} q^{n} \n \\right\\rbrack\n \\widehat {\\cal G}_\\rho(a,\\tau)\\right\\rbrace ~\n\\label{AG}\n\\end{equation}\nwith {\\mbox{$m\\equiv \\alpha' M_R^2\/4$}}, {\\mbox{$n\\equiv \\alpha' M_L^2\/4$}}. 
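As a small consistency check on these definitions, note that for a level-matched physical state ($m=n$, i.e. $M_L=M_R=M$) the factor $\overline{q}^m q^n$ in Eq.~(\ref{AG}) has modulus $e^{-\pi\alpha' M^2\tau_2}$, which is precisely the Boltzmann suppression invoked earlier. A minimal numerical sketch, with $\alpha'$ set to one for illustration:

```python
import math

# For a level-matched state (m = n, so M_L = M_R = M) and q = e^{2 pi i tau},
# the factor qbar^m q^n has modulus |q|^{2n} = e^{-4 pi n tau_2}, which with
# n = alpha' M^2 / 4 is the Boltzmann factor e^{-pi alpha' M^2 tau_2}.
alpha_p = 1.0   # alpha' in illustrative units

for M_sq in (0.5, 1.0, 4.0):
    for tau2 in (0.5, 1.0, 3.0):
        n = alpha_p * M_sq / 4.0
        q_mod = math.exp(-2.0 * math.pi * tau2)   # |q| on the upper half-plane
        assert math.isclose(q_mod ** (2.0 * n),
                            math.exp(-math.pi * alpha_p * M_sq * tau2))
```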
\nIndeed, the insertion of $\\widehat {\\cal G}_\\rho(a,\\tau)$ into the integrand of \nEq.~(\\ref{AG}) is what tames the logarithmic divergence.\nFollowing the result in Eq.~(\\ref{Irhoa}) we then find that \n$\\widehat m_\\phi^2 (\\rho,a)$ can be expressed as\n\\begin{equation}\n \\widehat m_\\phi^2(\\rho,a)\\Bigl|_{{\\cal X}} ~=~ \n \\frac{\\pi}{3}\\, \\oneRes \\, \\int_0^\\infty d\\tau_2 \\,\\tau_2^{s-2} \\,\\widehat g_\\rho(a,\\tau_2) ~ \n\\label{Irhoa2}\n\\end{equation}\nwhere \n\\begin{eqnarray}\n && \\widehat g_\\rho(a,\\tau_2) \\,\\equiv\\, \n -\\frac{{\\cal M}^2}{2}\n \\int_{-1\/2}^{1\/2} d\\tau_1 \\nonumber\\\\ \n && ~~~~~~~~~ \n \\times\\, \\left\\lbrace \n\\left\\lbrack \n \\sum_{m,n} (-1)^F \\,({\\mathbb{X}}_1 + \\tau_2 {\\mathbb{X}}_2)\\, {\\overline{q}}^{m} q^{n} \\right\\rbrack \n \\,\\widehat {\\cal G}_\\rho(a,\\tau)\\right\\rbrace ~~~\\nonumber\\\\\n&& ~~~~~ \\approx ~\n -\\frac{{\\cal M}^2}{2}\n \\left\\lbrack {\\rm Str} \\,({\\mathbb{X}}_1 + \\tau_2 {\\mathbb{X}}_2)\\, e^{-\\pi \\alpha' M^2 \\tau_2} \\right\\rbrack \n \\widehat {\\cal G}_\\rho(a,\\tau_2) ~.~~~ \\nonumber\\\\\n\\label{gFGdef3}\n\\end{eqnarray}\nNote that in passing to the approximate factorized form in the final expression of Eq.~(\\ref{gFGdef3}), we \nhave followed the result in Eq.~(\\ref{gFGdef2}) \nand explicitly restricted our attention to those cases\nwith {\\mbox{$a\\ll 1$}}, as appropriate for the \nregulator function $\\widehat {\\cal G}_\\rho(a,\\tau)$.\nIndeed, the term within square brackets in the second line of \nEq.~(\\ref{gFGdef3}) is our desired supertrace over physical string states,\nwhile the regulator function $\\widehat{\\cal G}_\\rho(a,\\tau_2)$ --- an example of which \nis plotted in the right panel of Fig.~\\ref{regulator_figure} --- generally eliminates the\ndivergence that would otherwise have arisen as {\\mbox{$\\tau_2\\to \\infty$}} for any {\\mbox{$a>0$}}.\nMoreover, we learn that\nas a consequence of the\nidentity in Eq.~(\\ref{StransG})\n--- an 
identity which holds\nfor $\\widehat {\\cal G}$ as well\nas for ${\\cal G}$ itself ---\nthe behavior shown in\nthe right panel of Fig.~\\ref{regulator_figure} \ncan be symmetrically ``reflected'' through {\\mbox{$\\tau_2=1$}}, resulting in the same \nsuppression behavior as {\\mbox{$\\tau_2\\to 0$}}. \n\n\nThe next step is to substitute Eq.~(\\ref{gFGdef3}) back into Eq.~(\\ref{Irhoa2})\nand evaluate the residue at {\\mbox{$s=1$}}. \nIn general, the presence of the regulator function $\\widehat{\\cal G}_\\rho(a,\\tau_2)$ within \nEq.~(\\ref{gFGdef3}) renders this calculation somewhat intricate. However, we \nknow that {\\mbox{$\\widehat {\\cal G}_\\rho(a,\\tau_2)\\to 1$}} as {\\mbox{$a\\to 0$}}.\nIndeed, having already exploited our regulator in \nallowing us to pass from Eq.~(\\ref{Higgsmass1}) to Eq.~(\\ref{Irhoa2}),\nwe see that taking {\\mbox{$a\\to 0$}} corresponds to the limit in which we subsequently remove our regulator.\nLet us first\nfocus on the contributions from massive states.\nIn the {\\mbox{$a\\to 0$}} limit, we then obtain\n\\begin{eqnarray}\n&& \\oneRes \\int_0^\\infty d\\tau_2 \\, \\tau_2^{s-2}\\, \\widehat g_\\rho(a,\\tau_2) \\nonumber\\\\\n&& ~~\\,=\\, -{\\textstyle{1\\over 2}} {\\cal M}^2 \\, \\oneRes \\, \\bigl\\lbrack\n \\Gamma(s-1) \\, \\pStr {\\mathbb{X}}_1 \\,(\\pi \\alpha' M^2)^{1-s} ~~~~\\nonumber\\\\ \n && ~~~~~~~~~~~~~~~~~~~~~~+ \\Gamma(s) \\, \\pStr {\\mathbb{X}}_2 \\,(\\pi \\alpha' M^2)^{-s}\\bigr\\rbrack \\nonumber\\\\ \n&& ~~\\,=\\, -{\\textstyle{1\\over 2}} {\\cal M}^2 \\, \\pStr {\\mathbb{X}}_1~,\n\\label{cheat}\n\\end{eqnarray}\nwhereupon we find that the contribution from massive states yields\n\\begin{equation}\n M>0:~~~\\lim_{a\\to 0} \\widehat m_\\phi^2 (\\rho,a)\\Bigl|_{\\cal X} \\,=\\, \n - \\frac{\\pi}{6} {\\cal M}^2 \\, \\pStr {\\mathbb{X}}_1~.\n\\label{prelimitingcase}\n\\end{equation}\n This result is independent of $\\rho$.\nMoreover, as expected for massive states, this contribution is finite.\nOf course, there will 
also be contributions from massless states.\nIn general, these contributions are more subtle to evaluate, and we know\nthat as {\\mbox{$a\\to 0$}} the effective removal of \nthe regulator will lead to divergences coming from \npotentially non-zero values of $\\zStr {\\mathbb{X}}_2$ (since\nit is the massless states which are charged under ${\\mathbb{X}}_2$ which\ncause the Higgs mass to diverge).\nHowever, massless states charged under ${\\mathbb{X}}_1$ --- like the massive\nstates --- do not lead to divergences. We might therefore imagine restricting our\nattention to cases with {\\mbox{$\\zStr {\\mathbb{X}}_2=0$}}, and deforming\nour theory slightly so that these massless ${\\mathbb{X}}_1$-charged states accrue small non-zero masses.\nIn that case, the calculation in Eq.~(\\ref{cheat}) continues to apply.\nWe can imagine removing this deformation without encountering any divergences.\nThis suggests that the full result for the regulated Higgs mass in the \n{\\mbox{$a\\to 0$}} limit should be the same as in Eq.~(\\ref{prelimitingcase}), but\nwith massless ${\\mathbb{X}}_1$-charged states also included.\nWe therefore expect\n\\begin{equation}\n \\lim_{a\\to 0} \\,\\widehat m_\\phi^2 (\\rho,a)\\Bigl|_{\\cal X} ~=~\n - \\frac{\\pi}{6} {\\cal M}^2 \\, {\\rm Str}\\, {\\mathbb{X}}_1~\n\\label{limitingcase}\n\\end{equation}\nin cases for which {\\mbox{$\\zStr {\\mathbb{X}}_2=0$}}.\nWe shall rigorously confirm this result below.\n\nAs discussed in Sect.~\\ref{sec:modinvregs},\nthe two quantities $(\\rho, a)$ that parametrize our modular-invariant regulator\nare analogous to the quantity $t$ that parametrized our non-minimal regulator.\nIndeed, these quantities effectively specify the value of the ``cutoff'' imposed by these regulators,\nand as such we can view these quantities as corresponding to a floating physical mass scale $\\mu$.\nThis scale $\\mu$ is defined in terms of $t$ for the non-minimal regulator\nin Eq.~(\\ref{mut}), \nand we have already seen 
that\nmaintaining alignment between this regulator and our modular-invariant regulator requires\nthat we enforce the condition in Eq.~(\\ref{alignment}).\nWe shall therefore identify a physical scale $\\mu$ for our modular-invariant regulator as \n\\begin{equation}\n \\mu^2(\\rho,a) ~\\equiv ~ \\frac{\\rho a^2}{\\alpha'}~.\n\\label{mudef}\n\\end{equation}\nSince {\\mbox{$\\rho \\sim {\\cal O}(1)$}}, \nthe {\\mbox{$a\\ll 1$}} region \nfor our regulator corresponds to the restricted region {\\mbox{$\\mu \\ll M_s$}}.\n\nThe identification in Eq.~(\\ref{mudef})\nenables us to rewrite\nour result in Eq.~(\\ref{limitingcase}) in the more suggestive form\n\\begin{equation}\n \\lim_{\\mu \\to 0} \\,\\widehat m_\\phi^2 (\\mu) \\Bigl|_{\\cal X} ~=~\n - \\frac{\\pi}{6} {\\cal M}^2 \\,{\\rm Str}\\,{\\mathbb{X}}_1~\n\\label{limitingcasemu}\n\\end{equation}\nin cases for which {\\mbox{$\\zStr{\\mathbb{X}}_2=0$}}.\nIn EFT language, we can therefore regard this result as holding in the deep infrared.\n\nThe natural question that arises, then, is to determine how our regulated Higgs \nmass $\\widehat m_\\phi^2(\\mu)$\n{\\it runs}\\\/ as a function of the scale $\\mu$.\nIn order to do this,\nwe need to evaluate \n$\\widehat m_\\phi^2 (\\rho,a)$ \nas a function of $a$ for small {\\mbox{$a\\ll 1$}} {\\it without}\\\/ taking the full {\\mbox{$a\\to 0$}} limit.\n\n\nAs indicated above, this calculation is somewhat intricate and is presented in Appendix~\\ref{higgsappendix}.~\nThe end result, given in Eq.~(\\ref{finalhiggsmassa}),\nis an expression for $\\widehat m_\\phi^2(\\rho,a)$ \nwhich is both {\\it exact}\\\/ and valid for all $a$.\nUsing the identification in Eq.~(\\ref{mudef}) and henceforth taking the benchmark value {\\mbox{$\\rho=2$}},\nthe result in Eq.~(\\ref{finalhiggsmassa}) can then be expressed in terms of the scale $\\mu$,\nyielding\n\\begin{eqnarray}\n && \\widehat m_\\phi^2(\\mu)\\Bigl|_{\\cal X} \\,=\\, \\frac{{\\cal M}^2}{1+\\mu^2\/M_s^2} \\Biggl\\lbrace 
\\nonumber\\\\ \n && ~~\\phantom{+} \n \\, \\zStr {\\mathbb{X}}_1 \\left\\lbrack - \\frac{\\pi}{6}\\left(1+\\mu^2\/M_s^2\\right) \\right\\rbrack \\nonumber\\\\\n && ~+ \\, \\zStr {\\mathbb{X}}_2 \\left\\lbrack \n \\log\\left( \\frac{ \\mu}{2\\sqrt{2} e M_s}\\right) \n \\right\\rbrack \\nonumber\\\\\n && ~+ \\, \\pStr {\\mathbb{X}}_1 \\, \\Biggl\\lbrace - \\frac{\\pi}{6} \n - \\frac{1}{2\\pi} \\left(\\frac{M}{{\\cal M}}\\right)^2 \\times \\nonumber\\\\\n && ~~~~~~~~~~~~ \n \\times \\left\\lbrack \n {\\cal K}_0^{(0,1)}\\!\\left( \\frac{2\\sqrt{2}\\pi M}{\\mu} \\right) + \n {\\cal K}_2^{(0,1)}\\!\\left( \\frac{2\\sqrt{2}\\pi M}{\\mu} \\right) \n \\right\\rbrack \\Biggr\\rbrace \\nonumber\\\\\n && ~+ \\, \\pStr {\\mathbb{X}}_2 \\, \n \\Biggl \\lbrack \n 2{\\cal K}_0^{(0,1)}\\!\\left( \\frac{2\\sqrt{2}\\pi M}{\\mu} \\right) \n - {\\cal K}_1^{(1,2)}\\!\\left( \\frac{2\\sqrt{2}\\pi M}{\\mu}\\right) \\Biggr\\rbrack \n \\Biggr\\rbrace\n\\nonumber\\\\\n \\label{finalhiggsmassmu}\n\\end{eqnarray}\nwhere we have defined the Bessel-function combinations\n\\begin{equation}\n {\\cal K}_\\nu^{(n,p)} (z) ~\\equiv~ \\sum_{r=1}^\\infty ~ (rz)^{n} \\Bigl\\lbrack \n K_\\nu(rz\/\\rho) - \\rho^p K_\\nu(rz) \\Bigr\\rbrack~, \n\\label{Besselcombos}\n\\end{equation}\nwith $K_\\nu(z)$ denoting\nthe modified Bessel function of the second kind.\nWe see, then, that \nthe contributions to \nthe running of \n$\\widehat m_\\phi^2(\\mu)\\bigl|_{\\cal X}$ \nfrom the different\nstates in our theory \ndepend rather non-trivially on their masses and on their various ${\\mathbb{X}}_1$ \nand ${\\mathbb{X}}_2$ charges, with\nthe contributions\nfrom each string state with non-zero mass $M$\ngoverned by various combinations of Bessel functions $K_\\nu(z)$ with\narguments {\\mbox{$z\\sim M\/\\mu$}}.\n\nThere is a plethora of physics wrapped within Eq.~(\\ref{finalhiggsmassmu}), and we shall\nunpack this result in several stages.\nFirst, it is straightforward to take the {\\mbox{$\\mu\\to 0$}} limit of 
Eq.~(\\ref{finalhiggsmassmu}) \nin order to verify our expectation in Eq.~(\\ref{limitingcase}).\nIndeed, in the {\\mbox{$\\mu\\to 0$}} limit, we have {\\mbox{$z\\to \\infty$}} for all {\\mbox{$M>0$}}.\nSince\n\\begin{equation}\n {\\cal K}_\\nu^{(n,p)}(z) ~\\sim~ \\sqrt{\\frac{\\pi \\rho}{2}} \\,z^{n-1\/2} \\,e^{-z\/\\rho} ~~~~{\\rm as}~ z\\to \\infty~,\n\\label{asymptoticform}\n\\end{equation}\nit then follows that\nall of the terms \ninvolving Bessel functions in Eq.~(\\ref{finalhiggsmassmu})\nvanish exponentially in the {\\mbox{$\\mu\\to 0$}} limit.\nFor cases in which {\\mbox{$\\zStr {\\mathbb{X}}_2 =0$}} [{\\it i.e.}\\\/, cases in which the original Higgs mass $m_\\phi^2$\nis finite, with no massless states charged under ${\\mathbb{X}}_2$], \nwe thus reproduce the result in Eq.~(\\ref{limitingcase}).\n\n\n \nUsing the result in Eq.~(\\ref{finalhiggsmassmu}),\nwe can also study the running of $\\widehat m_\\phi^2(\\mu)$ as a function of {\\mbox{$\\mu>0$}}.\nOf course, given that our $\\widehat {\\cal G}$-function acts as a regulator only for {\\mbox{$a\\ll 1$}},\nour analysis is restricted to the {\\mbox{$\\mu\\ll M_s$}} region.\nLet us first concentrate on the contributions from the terms within \nEq.~(\\ref{finalhiggsmassmu}) that do not involve Bessel functions.\nThese contributions are given by\n \\begin{equation}\n {{\\cal M}^2} \\,\\biggl\\lbrace \n - \\frac{\\pi}{6}\\, {\\rm Str}\\, {\\mathbb{X}}_1 \n + \\, \\zStr {\\mathbb{X}}_2 \\log\\left( \\frac{ \\mu}{2\\sqrt{2} e M_s}\\right)\\biggr\\rbrace~.\n\\label{finalhiggsmasslimit}\n\\end{equation}\nFrom this we see \nthat our deep-infrared \ncontribution to $\\widehat m_\\phi^2$ in Eq.~(\\ref{limitingcase}) \nactually persists as an essentially constant contribution for all scales {\\mbox{$\\mu\\ll M_s$}}. 
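The Bessel-function combinations in Eq.~(\ref{Besselcombos}) are straightforward to evaluate numerically. The stdlib-only sketch below builds $K_\nu$ from its integral representation $K_\nu(x)=\int_0^\infty e^{-x\cosh t}\cosh(\nu t)\,dt$ and truncates the sum over $r$ (the quadrature and truncation parameters are illustrative choices), then checks the large-$z$ asymptotic form of Eq.~(\ref{asymptoticform}) at the benchmark value $\rho=2$:

```python
import math

def bessel_k(nu, x, tmax=12.0, steps=2000):
    """K_nu(x) from K_nu(x) = int_0^oo exp(-x cosh t) cosh(nu t) dt (Simpson)."""
    h = tmax / steps
    total = 0.0
    for i in range(steps + 1):
        t = i * h
        w = 1.0 if i in (0, steps) else (4.0 if i % 2 else 2.0)
        total += w * math.exp(-x * math.cosh(t)) * math.cosh(nu * t)
    return total * h / 3.0

def K_combo(nu, n, p, z, rho=2.0, rmax=40):
    """The combination K_nu^{(n,p)}(z), truncated at r = rmax."""
    return sum((r * z) ** n *
               (bessel_k(nu, r * z / rho) - rho ** p * bessel_k(nu, r * z))
               for r in range(1, rmax + 1))

# Large-z check of the asymptotic form at the benchmark rho = 2:
rho, z = 2.0, 30.0
lhs = K_combo(0, 0, 1, z, rho)                    # nu = 0, (n, p) = (0, 1)
rhs = math.sqrt(math.pi * rho / 2.0) * z ** (-0.5) * math.exp(-z / rho)
assert abs(lhs / rhs - 1.0) < 0.05                # agrees up to O(1/z) terms
```

The same routine evaluates the other $(\nu,n,p)$ combinations appearing in the Higgs-mass expression.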
\nWe also see from Eq.~(\\ref{finalhiggsmasslimit}) that each massless string state \nalso contributes an additional logarithmic running \nwhich is proportional to its ${\\mathbb{X}}_2$ charge and which\npersists all the way into the deep infrared.\nGiven that massless ${\\mathbb{X}}_2$-charged states are\nprecisely the states that led to the original logarithmic \ndivergence in the {\\it unregulated}\\\/ Higgs mass $m_\\phi^2$,\nthis logarithmic running is completely expected.\nIndeed, it formally leads to a divergence in our regulated\nHiggs mass $\\widehat m_\\phi^2(\\mu)$ in the full {\\mbox{$\\mu\\to 0$}} limit\n(at which our regulator is effectively removed),\nbut otherwise produces a finite contribution for all other {\\mbox{$\\mu>0$}}.\nThe issues connected with this logarithm are actually no different from those\nthat arise in an ordinary field-theoretic calculation. \nWe shall discuss these issues in more detail in Sect.~\\ref{sec:Conclusions}\nbut in the meantime this term will not concern us further.\n\n\n\n\n\n\n\nThe remaining contributions are those arising from the terms\nwithin Eq.~(\\ref{finalhiggsmassmu}) involving supertraces over Bessel functions.\nAlthough our analysis is restricted to the {\\mbox{$\\mu\\ll M_s$}} region,\nour supertraces receive contributions from the entire string spectrum.\nThis necessarily includes states with masses {\\mbox{$M\\gsim M_s$}}, but may also include\npotentially light states with non-zero masses far below $M_s$.\nThe existence of such light states depends on our string construction and\non the specific string model in question. 
\nIndeed, such states are particularly relevant for the kinds of string models that motivate\nour analysis, namely (non-supersymmetric) string models \nin which the Standard Model is realized directly within the low-energy spectrum.\n\nThe Bessel functions \ncorresponding to states with masses {\\mbox{$M\\gsim M_s$}}\nhave arguments {\\mbox{$z\\sim M\/\\mu \\gg 1$}} when {\\mbox{$\\mu\\ll M_s$}}.\nAs a result,\nin accordance with Eqs.~(\\ref{finalhiggsmassmu}) and (\\ref{asymptoticform}), \nthe contributions from these states to the running of $\\widehat m_\\phi^2(\\mu)\\bigl|_{\\cal X}$ \nare exponentially suppressed.\nIt then follows that the dominant contributions to\nthe Bessel-function running of $\\widehat m_\\phi^2(\\mu)$ \nwithin the {\\mbox{$\\mu\\ll M_s$}} region \ncome from the correspondingly light states, {\\it i.e.}\\\/, states with masses {\\mbox{$M\\ll M_s$}}.\nHowever, for states with masses {\\mbox{$M\\ll M_s$}},\nwe see from Eq.~(\\ref{finalhiggsmassmu}) \nthat the corresponding Bessel-function contributions which are proportional to their ${\\mathbb{X}}_1$ charges\nare all suppressed by a factor $(M\/{\\cal M})^2$.\nWe thus conclude that\nthe contribution from a state of non-zero mass {\\mbox{$M\\ll M_s$}} within the string spectrum \nis sizable only when this state carries a non-zero ${\\mathbb{X}}_2$ charge.\nIndeed, we see from Eq.~(\\ref{finalhiggsmassmu}) that this contribution\nfor each bosonic degree of freedom of mass $M$ is given by\n\\begin{equation}\n 2{\\cal K}_0^{(0,1)}\\!\\left(z\\right) - {\\cal K}_1^{(1,2)}\\!\\left(z\\right) \n\\label{lightcontribution}\n\\end{equation}\nper unit of ${\\mathbb{X}}_2$ charge, \nwhere {\\mbox{$z\\equiv 2\\sqrt{2}\\pi M\/\\mu$}}.\n\n\n\n\n\nIn Fig.~\\ref{transientfigure}, we plot this contribution\nas a function of $\\mu\/M$.\nAs expected, we see that states with {\\mbox{$M\\gg \\mu$}} produce no running and can be ignored --- essentially\nthey have been ``integrated out'' of our theory at the scale 
$\\mu$ and leave behind only an exponential tail.\nBy contrast, states with {\\mbox{$M\\lsim \\mu$}} are still dynamical at the scale $\\mu$. \nWe see from Fig.~\\ref{transientfigure} that their contributions are then effectively {\\it logarithmic}\\\/.\nIndeed, as {\\mbox{$z\\to 0$}}, one can show that~\\cite{Paris}\n\\begin{eqnarray}\n {\\cal K}_0^{(0,1)}(z)~&\\sim &~ - {\\textstyle{1\\over 2}} \\log\\,z + {\\textstyle{1\\over 2}}\\left[ \\log\\,(2\\pi) - \\gamma\\right] \\nonumber\\\\ {\\cal K}_1^{(1,2)}(z)~&\\sim &~ 1~\n\\label{Kasymp}\n\\end{eqnarray}\nwhere $\\gamma$ is the Euler-Mascheroni constant.\nThis leads to an \nasymptotic logarithmic running of the form \n\\begin{equation}\n \\log\\left[ \\frac{1}{\\sqrt{2}}\\,e^{-(\\gamma+1)} \\frac{\\mu}{M}\\right]\n\\label{loglimit}\n\\end{equation}\nfor {\\mbox{$\\mu\\gg M$}} in Fig.~\\ref{transientfigure}.\nFinally, between these two behaviors, we see that the expression in Eq.~(\\ref{lightcontribution})\ninterpolates smoothly and even gives rise to a transient ``dip''.\nThis is a uniquely string-theoretic behavior resulting from the \nspecific combination of Bessel functions in Eq.~(\\ref{lightcontribution}).\nOf course, the statistics factor $(-1)^F$ within the supertrace\nflips the sign of this contribution for degrees of freedom which are fermionic. 
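The small-$z$ limits quoted in Eq.~(\ref{Kasymp}), the logarithm of Eq.~(\ref{loglimit}), and the sign of the transient dip can all be checked numerically. The stdlib-only sketch below re-implements the Bessel-function combinations of Eq.~(\ref{Besselcombos}) via the integral representation of $K_\nu$ (truncation and quadrature parameters are illustrative choices) and evaluates the combination of Eq.~(\ref{lightcontribution}) at $\rho=2$:

```python
import math

def bessel_k(nu, x, tmax=12.0, steps=800):
    # K_nu(x) = int_0^oo exp(-x cosh t) cosh(nu t) dt, via Simpson's rule
    h = tmax / steps
    total = 0.0
    for i in range(steps + 1):
        t = i * h
        w = 1.0 if i in (0, steps) else (4.0 if i % 2 else 2.0)
        total += w * math.exp(-x * math.cosh(t)) * math.cosh(nu * t)
    return total * h / 3.0

def K_combo(nu, n, p, z, rho=2.0, rmax=800):
    # the Bessel-function combination, truncated at r = rmax
    return sum((r * z) ** n *
               (bessel_k(nu, r * z / rho) - rho ** p * bessel_k(nu, r * z))
               for r in range(1, rmax + 1))

gamma_E = 0.5772156649015329       # Euler-Mascheroni constant

# Small-z limits of the two combinations:
z = 0.1
k0 = K_combo(0, 0, 1, z)
k1 = K_combo(1, 1, 2, z)
assert abs(k0 + 0.5 * math.log(z)
           - 0.5 * (math.log(2.0 * math.pi) - gamma_E)) < 0.05
assert abs(k1 - 1.0) < 0.05

# For mu >> M the combination 2 K_0^{(0,1)} - K_1^{(1,2)} runs logarithmically ...
mu_over_M = 2.0 * math.sqrt(2.0) * math.pi / z
target = math.log(math.exp(-(gamma_E + 1.0)) * mu_over_M / math.sqrt(2.0))
assert abs((2.0 * k0 - k1) - target) < 0.1

# ... while for mu < M (large z) the same combination dips below zero:
dip = 2.0 * K_combo(0, 0, 1, 16.0, rmax=40) - K_combo(1, 1, 2, 16.0, rmax=40)
assert dip < 0.0
```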
\n\n\n\\begin{figure}[t!]\n\\centering\n\\includegraphics[keepaspectratio, width=0.48\\textwidth]{transient.pdf}\n\\caption{\nThe expression in Eq.~(\\ref{lightcontribution}), plotted as a function of $\\mu\/M$.\nThis quantity is the Bessel-function contribution per unit ${\\mathbb{X}}_2$ charge \nto the running of the regulated Higgs mass $\\widehat m_\\phi^2(\\mu)\\bigl|_{{\\cal X}}\/{\\cal M}^2$\nfrom a bosonic state of non-zero mass $M$.\nThis contribution is universal for all $\\mu\/M$ and assumes only that {\\mbox{$\\mu \\ll M_s$}}.\nWhen {\\mbox{$\\mu\\gg M$}}, the state is fully dynamical and produces a running which is effectively logarithmic.\nBy contrast, when {\\mbox{$\\mu \\ll M$}}, the state is heavier than the scale $\\mu$ and is effectively\nintegrated out, thereby suppressing any contributions to the running. Finally, within the\nintermediate {\\mbox{$\\mu\\approx M$}} region, \nthe Bessel-function expression in Eq.~(\\ref{lightcontribution}) \nprovides a smooth connection between \nthese two asymptotic behaviors and even gives rise to a transient ``dip''\nin the overall running.\nNote that for a fixed scale $\\mu$, adjusting the mass $M$ of the relevant state\nupwards or downwards simply corresponds to shifting this curve\nrigidly to the right or left, respectively.\nIn this way one can imagine summing over all such contributions to the running\nas one takes the supertrace over the entire ${\\mathbb{X}}_2$-charged string spectrum.} \n\\label{transientfigure}\n\\end{figure}\n\n\n\nThus far we have focused on the \nHiggs-mass running, as shown in Fig.~\\ref{transientfigure},\n from a single massive string degree of freedom of mass $M$.\nHowever, the contribution from another string state with a different mass $M'$ can be\nsimply obtained by rigidly sliding this curve towards the left or right (corresponding to \ncases with {\\mbox{$M'<M$}} or {\\mbox{$M'>M$}}, respectively).\nThe complete supertrace contribution in Eq.~(\\ref{finalhiggsmassmu})\nis then 
obtained by summing over all of these curves, each with its appropriate horizontal displacement and each weighted by \nthe corresponding net (bosonic minus fermionic) number of degrees of freedom.\nThe resulting net running from the final term within \nEq.~(\\ref{finalhiggsmassmu}) is therefore highly sensitive to the properties of the \nmassive ${\\mathbb{X}}_2$-charged part of the string spectrum. \nThis will be discussed further in Sect.~\\ref{seehow}.~\nOf course, as discussed above, the contributions from states with {\\mbox{$M'\\gg \\mu$}} are\nexponentially suppressed. Thus, for any $\\mu$, the only states which contribute meaningfully to\nthis Bessel-function running of the Higgs mass are those with {\\mbox{$M\\lsim \\mu$}}. \n\n\n\nThus, combining these Bessel-function contributions with those from Eq.~(\\ref{finalhiggsmasslimit})\nand keeping only those (leading) terms which dominate when {\\mbox{$M\\ll \\mu\\ll M_s$}},\nwe see that we can approximate the exact result in Eq.~(\\ref{finalhiggsmassmu}) as\n\\begin{eqnarray}\n&& \\widehat m_\\phi^2(\\mu)\\Bigl|_{\\cal X} \\,\\approx\\,\n - \\frac{\\pi}{6}\\, {\\cal M}^2\\, {\\rm Str}\\, {\\mathbb{X}}_1 \n + {\\cal M}^2 \\, \\zStr {\\mathbb{X}}_2 \\,\\log\\left( \\frac{ \\mu}{2\\sqrt{2} e M_s}\\right)\\nonumber\\\\\n && ~~~~~~~ + {\\cal M}^2 \\,\\effStr \n {\\mathbb{X}}_2 \\,\\log\\left[ \\frac{1}{\\sqrt{2}}\\,e^{-(\\gamma+1)} \\frac{\\mu}{M}\\right]~.~~~~~~~~\n\\label{approxhiggsmassmu}\n\\end{eqnarray}\nInterestingly, we see that to leading order,\nthe ${\\mathbb{X}}_1$ charges of the string states\nonly contribute to an overall constant term in Eq.~(\\ref{approxhiggsmassmu}),\nand they do this for all states regardless of their masses.\nBy contrast, it is the ${\\mathbb{X}}_2$ charges \nof the states which induce a corresponding running,\nand this only occurs for those\nstates within the EFT at the scale $\\mu$ --- {\\it i.e.}\\\/, those light \nstates with masses {\\mbox{$M\\lsim \\mu$}}.\n\n\nThe net 
running produced by the final term in Eq.~(\\ref{approxhiggsmassmu})\ncan exhibit a variety of behaviors.\nTo understand this, let us consider the behavior of this term\nas we increase $\\mu$ from the deep infrared.\nOf course, this term does not produce any running at all \nuntil we reach {\\mbox{$\\mu \\sim M_{\\rm lightest}$}},\nwhere $M_{\\rm lightest}$ is the mass of the lightest massive string state\ncarrying a non-zero ${\\mathbb{X}}_2$ charge.\nThis state then contributes a logarithmic\nrunning which persists for all higher $\\mu$.\nHowever, as $\\mu$ increases still further,\nadditional ${\\mathbb{X}}_2$-charged string states \nenter the EFT and contribute their own individual logarithmic contributions. \nOf course, if these additional states \nhave masses {\\mbox{$M\\gg M_{\\rm lightest}$}},\nthe logarithmic nature of the running shown in Fig.~\\ref{transientfigure}\nfrom the state with mass $M_{\\rm lightest}$\nwill survive intact until {\\mbox{$\\mu \\sim M$}}.\nHowever, if the spectrum of states is relatively dense \nbeyond $M_{\\rm lightest}$, the logarithmic contributions from each of these states\nmust be added together, leading to a far richer behavior.\n \nOne important set of string models exhibiting the latter property\nare those involving a relatively large compactification radius $R$.\nIn such cases, we can identify {\\mbox{$M_{\\rm lightest}\\sim 1\/R$}},\nwhereupon we expect an entire tower of corresponding Kaluza-Klein (KK) states\nof masses {\\mbox{$M_k\\sim k\/R$}}, {\\mbox{$k\\in\\mathbb{Z}^+$}}, each sharing a common charge ${\\mathbb{X}}_2$ and\na common degeneracy of states $g$.\nFor any scale $\\mu$, the final term in Eq.~(\\ref{approxhiggsmassmu}) \nthen takes the form\n\\begin{eqnarray}\n && {\\cal M}^2 \\,g \\,{\\mathbb{X}}_2 \\,\\sum_{k=1}^{\\mu R}\n \\,\\log\\left[ \\frac{1}{\\sqrt{2}}\\,e^{-(\\gamma+1)} \\, \\frac{\\mu R}{k}\\right]~\\nonumber\\\\\n && ~~=~ {\\cal M}^2 \\,g \\,{\\mathbb{X}}_2 \\,\\left\\lbrace \n \\mu R 
\\,\\log\\left[ \\frac{1}{\\sqrt{2}}\\,e^{-(\\gamma+1)} \\, \\mu R\\right] - \\log \\,(\\mu R)!\\right\\rbrace\\nonumber\\\\ \n && ~~=~ {\\cal M}^2 \\,g \\,{\\mathbb{X}}_2 \\,\n \\, \\left\\lbrace \n \\log\\left[ \\frac{1}{\\sqrt{2}}\\,e^{-(\\gamma+1)} \\right] +1 \\right\\rbrace \\, \\mu R\n\\label{newpower}\n\\end{eqnarray}\nwhere in passing to the third line we have used Stirling's approximation\n{\\mbox{$\\log N! \\approx N \\log N - N$}}.\nWe thus see that in such cases our sum over logarithms actually produces a {\\it power-law}\\\/ \nrunning! In this case the running is linear, \nbut in general the KK states associated with $d$ large toroidally-compactified dimensions\ncollectively yield a regulated Higgs mass whose running scales as $\\mu^d$.\n\nThis phenomenon whereby a sum over KK states deforms a running from logarithmic to power-law\nis well known from phenomenological studies of theories with large extra dimensions,\nwhere it often plays a crucial role \n(see, {\\it e.g.}\\\/, Refs.~\\cite{Dienes:1998vh, Dienes:1998vg, Dienes:1998qh}).\nThis phenomenon can ultimately be understood from \nthe observation that a large compactification radius\neffectively increases the overall spacetime dimensionality of the theory,\nthereby shifting the mass dimensions of quantities such \nas gauge couplings and Higgs masses and simultaneously shifting their corresponding runnings.\nIndeed, as discussed in detail in Appendices~A and B \nof Ref.~\\cite{Dienes:1998vg}\n(and as illustrated in Fig.~11 therein),\nthe emergence of power-law running from logarithmic running is surprisingly robust. \n\nOf course, it may happen that \nthe spectrum of light states not only has a lightest mass $M_{\\rm lightest}$\nbut also a heaviest mass $M_{\\rm heaviest}$, with a significant mass gap beyond this\nbefore reaching even heavier scales. 
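As a quick numerical check of this Stirling step, the following sketch (plain Python, purely illustrative; the value {\\mbox{$\\mu R = 2000$}} is an arbitrary choice) compares the exact sum over KK logarithms with the linear form in Eq.~(\\ref{newpower}):

```python
import math

# Euler-Mascheroni constant (note: math.gamma is the Gamma function, not this)
EULER_GAMMA = 0.5772156649015329

# per-state constant c = e^{-(gamma+1)} / sqrt(2) appearing in each logarithm
C = math.exp(-(EULER_GAMMA + 1.0)) / math.sqrt(2.0)

def kk_log_sum(N):
    """Exact sum over KK levels k = 1..N of log[c * N / k], with N = mu*R."""
    return sum(math.log(C * N / k) for k in range(1, N + 1))

def linear_form(N):
    """Stirling-approximated (linear) form: {log c + 1} * N."""
    return (math.log(C) + 1.0) * N

N = 2000  # i.e., mu*R = 2000
exact, approx = kk_log_sum(N), linear_form(N)
rel_err = abs(exact - approx) / abs(exact)
print(exact, approx, rel_err)  # agreement improves as mu*R grows
```

The residual discrepancy is the subleading ${\\textstyle{1\\over 2}}\\log(2\\pi N)$ correction to Stirling's formula, which becomes negligible relative to the linear term as $\\mu R$ grows.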
\nIf such a situation were to arise (but clearly does not within the large extra-dimension\nscenario described above),\nthen the corresponding running of $\\widehat m_\\phi^2(\\mu)\\bigl|_{\\cal X}$\nwould only be power-law within the range {\\mbox{$M_{\\rm lightest}\\lsim \\mu\\lsim M_{\\rm heaviest}$}}.\nFor {\\mbox{$\\mu > M_{\\rm heaviest}$}}, by contrast, the running would then revert back to logarithmic. \n\nIn summary, we see that while the first term within\nEq.~(\\ref{approxhiggsmassmu}) \nrepresents an overall constant contribution arising from the entire spectrum of ${\\mathbb{X}}_1$-charged states,\nthe second term represents an overall logarithmic contribution from precisely the massless ${\\mathbb{X}}_2$-charged states\nwhich were the source of the original divergence of the unregulated Higgs mass $m_\\phi^2$.\nBy contrast, the final term\nin Eq.~(\\ref{approxhiggsmassmu}) \nrepresents the non-trivial contribution to the running from\nthe massive ${\\mathbb{X}}_2$-charged states.\nAs we have seen,\nthis latter contribution can exhibit a variety of behaviors, ranging from logarithmic (in cases with\nrelatively large mass splittings between the lightest massive ${\\mathbb{X}}_2$-charged\nstates) to power-law (in cases with relatively small uniform mass splittings between\nsuch states).\nOf course, \ndepending on the details of the underlying string spectrum,\nmixtures between these different behaviors are also possible. 
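The interplay between these regimes can be illustrated with a toy numerical sketch (again purely illustrative: the spectra below are hypothetical and are not derived from any actual string model). Each state lighter than $\\mu$ contributes its asymptotic logarithm from Eq.~(\\ref{approxhiggsmassmu}), while heavier states are dropped as exponentially suppressed:

```python
import math

EULER_GAMMA = 0.5772156649015329
C = math.exp(-(EULER_GAMMA + 1.0)) / math.sqrt(2.0)

def running(mu, masses):
    """Sum of per-state logarithms log[c * mu / M] over states with M <= mu;
    heavier states are treated as integrated out (their Bessel tails dropped)."""
    return sum(math.log(C * mu / M) for M in masses if M <= mu)

# Dense KK-like tower M_k = k/R: the summed logarithms grow like a power of mu
# (here linearly, so doubling mu roughly doubles the running term).
tower = [k / 100.0 for k in range(1, 10001)]   # hypothetical tower, R = 100
ratio = running(2.0, tower) / running(1.0, tower)

# Truncated spectrum with a heaviest state: once mu > M_heaviest, the change
# per e-fold of mu is constant (= number of states), i.e. ordinary log running.
gapped = [k / 100.0 for k in range(1, 11)]     # ten states, M_heaviest = 0.1
slope = running(2.0, gapped) - running(2.0 / math.e, gapped)
print(ratio, slope)
```

For the dense tower the summed logarithms grow linearly in $\\mu$ (power-law running), whereas for the truncated spectrum the slope per e-fold of $\\mu$ saturates at the number of contributing states once {\\mbox{$\\mu > M_{\\rm heaviest}$}}, i.e., the running reverts to logarithmic.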
\n \n\n\\subsubsection{Contribution from the cosmological constant \\label{Lambdasect}} \n\nLet us now turn to the \nsecond term in Eq.~(\\ref{twocontributions}).\nThis contribution lacks ${\\cal X}_i$ insertions\nand arises from the cosmological-constant term in Eq.~(\\ref{Higgsmass}).\nAlthough this contribution is the result of a universal shift in the background moduli\nand is thus independent of the specific ${\\cal T}$-matrices,\nwe shall now demonstrate that it too can be expressed as a supertrace over the physical string spectrum.\nIt also develops a scale dependence when subjected to our modular-invariant regulator.\n\nWithin the definition of $\\Lambda$ in Eq.~(\\ref{lambdadeff}),\nthe integrand function ${\\cal F}(\\tau,{\\overline{\\tau}})$ is simply $(-{\\cal M}^4\/2) {\\cal Z}(\\tau,{\\overline{\\tau}})$\nwhere ${\\cal Z}(\\tau,{\\overline{\\tau}})$ is the partition function of the string in the Higgsed phase.\nOf course, if this theory exhibits unbroken spacetime supersymmetry, \nthe contributions from the bosonic states in the spectrum cancel level-by-level against\nthose from their fermionic superpartners. In such cases we then have {\\mbox{${\\cal Z}=0$}}, implying {\\mbox{$\\Lambda=0$}}.\nOtherwise, for heterotic strings, we necessarily have {\\mbox{${\\cal Z}\\not =0$}}.\nIndeed, it is a theorem (first introduced in Ref.~\\cite{Dienes:1990ij} and discussed more recently, {\\it e.g.}\\\/, in \nRef.~\\cite{Abel:2015oxa})\nthat any non-supersymmetric heterotic string model in $D$ spacetime dimensions must contain an off-shell\ntachyonic {\\it proto-graviton}\\\/ state\nwhose contribution to the partition function remains uncancelled. 
This then results in a string partition function\nwhose power-series expansion has the leading behavior {\\mbox{${\\cal Z}=(D-2)\/q + ...$}}.\n\nIn principle this proto-graviton contribution would appear to introduce \nan exponential divergence as {\\mbox{$\\tau_2\\to\\infty$}}, thereby taking us beyond\nthe realm of validity for the mathematical techniques presented in Sect.~\\ref{sec:RStechnique}.~\nHowever, this tachyonic state is off-shell and thus does not appear in the actual physical string spectrum.\nIndeed, as long as there are no additional {\\it on-shell}\\\/ tachyons present in the theory,\nthe corresponding integral $\\Lambda$ is fully convergent\nbecause the integral over the fundamental domain ${\\cal F}$ comes\nwith an explicit instruction that we are to integrate across $\\tau_1$ in the {\\mbox{$\\tau_2>1$}} region of ${\\cal F}$\n{\\it before}\\\/ integrating over $\\tau_2$.\nThis integration therefore prevents the proto-graviton state from contributing to $\\Lambda$ \nwithin the {\\mbox{$\\tau_2>1$}} region of integration, and likewise prevents this state from contributing to $g(\\tau_2)$. 
\n\n\nAssuming, therefore, that we can disregard the proto-graviton contribution to ${\\cal Z}$ as {\\mbox{$\\tau_2\\to\\infty$}},\nwe find that {\\mbox{${\\cal Z}\\sim \\tau_2^{-1}$}} as {\\mbox{$\\tau_2\\to\\infty$}}.\nThus ${\\cal Z}$ is effectively of rapid decay and we can use the \noriginal Rankin-Selberg results in Eq.~(\\ref{RSresult}).\nIn this connection, we note that this assumption regarding the proto-graviton contribution\nfinds additional independent \nsupport through the arguments presented in Ref.~\\cite{Kutasov:1990sv} which \ndemonstrate that any contributions from the proto-graviton beyond those in Eq.~(\\ref{RSresult}) \nare suppressed by an infinite volume factor in all spacetime dimensions {\\mbox{$D>2$}}.\nA similar result is also true in string models\nwith exponentially suppressed cosmological constants~\\cite{Abel:2015oxa}.\n \nWith ${\\cal Z}$ taking the form in Eq.~(\\ref{integrand})\nand with the mass $M$ of each physical string state \nidentified via {\\mbox{$\\alpha' M^2=2(m+n)=4m$}},\nwe then have\n\\begin{equation}\n g(\\tau_2) ~=~ -\\frac{{\\cal M}^4}{2} \\, \\tau_2^{-1}\\, {\\rm Str}\\, e^{-\\pi \\alpha' M^2 \\tau_2}~.\n\\label{glambda}\n\\end{equation}\nInserting this result into Eq.~(\\ref{RSresult}) and performing the $\\tau_2$ integral\nthen yields~\\cite{Dienes:1995pm}\n\\begin{eqnarray}\n \\Lambda ~&=&~ - \\frac{{\\cal M}^4}{2}\\, \\frac{\\pi}{3} \\,\\oneRes\n \\biggl[\\pi^{2-s} \\,\\Gamma(s-2) \\,{\\rm Str}\\, (\\alpha' M^2)^{2-s}\\biggr\\rbrack\\nonumber\\\\\n ~&=&~ \\frac{{\\cal M}^4}{2} \\frac{\\pi^2}{3} {\\rm Str}\\, (\\alpha' M^2) \\nonumber\\\\\n ~&=&~ \\frac{1}{24} {\\cal M}^2 \\, {\\rm Str}\\, M^2.\n\\label{eq:lamlam}\n\\end{eqnarray} \nWe thus see that $\\Lambda$ is given as a universal supertrace \nover {\\it all}\\\/ physical string states,\nand not only those with specific charges relative to the Higgs field.\n\nAs evident from the form of the final supertrace in Eq.~(\\ref{eq:lamlam}), \nmassless states do not 
ultimately contribute within this expression for $\\Lambda$.\nStrictly speaking, our derivation in Eq.~(\\ref{eq:lamlam}) already implicitly assumed\nthis, given that the intermediate steps in Eq.~(\\ref{eq:lamlam}) are valid only for {\\mbox{$M>0$}}.\nHowever, it is easy to see that the contributions\nfrom massless states lead to a $\\tau_2$-integral whose divergence has no residue at {\\mbox{$s=1$}}.\nThus, massless states make no contribution to this expression, \nand the result in Eq.~(\\ref{eq:lamlam}) stands.\n\nThis does {\\it not}\\\/ mean that massless states do not contribute to $\\Lambda$, however.\nRather, this just means that the constraints from modular invariance \nso tightly connect \nthe contributions to $\\Lambda$ from the massless states to those from the \nmassive states \n(and also those from the unphysical string states of any mass)\nthat an expression for $\\Lambda$ as in Eq.~(\\ref{eq:lamlam}) becomes possible.\n \n\n \nFor further insight into this issue,\nit is instructive to obtain this same result through Eq.~(\\ref{reformulation}). 
\nWe then have\n\\begin{equation}\n \\Lambda~=~ -\\frac{\\pi}{3} \\frac{{\\cal M}^4}{2} \\lim_{\\tau_2\\to 0} \n \\biggl\\lbrack \\tau_2^{-1} \\,{\\rm Str}\\, \\exp\\left( -\\pi \\alpha' M^2 \\tau_2\\right)\\biggr\\rbrack~.\n\\end{equation}\nExpanding the exponential {\\mbox{$e^{-x}\\approx 1 -x + ...$}} and taking the {\\mbox{$\\tau_2\\to 0$}} limit of each term separately, \nwe find that the linear term leads directly to the result in Eq.~(\\ref{eq:lamlam}) while the contributions from all\nof the higher terms vanish.\nInterestingly, the constant term would {\\it a priori}\\\/ appear to lead to a divergence for $\\Lambda$.\nThe fact that $\\Lambda$ is finite in such theories then additionally tells us that~\\cite{Dienes:1995pm}\n\\begin{equation} \n {\\rm Str}\\, {\\bf 1} ~=~0~.\n\\label{eq:lamlam0}\n\\end{equation}\nAs apparent from our derivation, this constraint must hold for any \ntachyon-free modular-invariant theory \n({\\it i.e.}\\\/, any modular-invariant theory in which $\\Lambda$ is finite). 
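Written out explicitly, the expansion underlying these statements reads
\\begin{equation}
 \\tau_2^{-1} \\,{\\rm Str}\\, e^{-\\pi \\alpha' M^2 \\tau_2} ~=~ \\tau_2^{-1}\\, {\\rm Str}\\, {\\bf 1} ~-~ \\pi \\alpha' \\,{\\rm Str}\\, M^2 ~+~ {\\cal O}(\\tau_2)~,
\\end{equation}
whereupon we see that the finiteness of $\\Lambda$ requires {\\mbox{${\\rm Str}\\, {\\bf 1}=0$}}, while the surviving constant term reproduces the supertrace result in Eq.~(\\ref{eq:lamlam}).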
\nIndeed, this is one of the additional constraints from modular invariance which\nrelates the contributions of the physical string states\nwhich are massless to the contributions of those which are massive.\nThus, we may regard the result in Eq.~(\\ref{eq:lamlam}) --- like all of the results of this paper --- \nas holding within a modular-invariant context \nin which other constraints such as that in Eq.~(\\ref{eq:lamlam0}) are also simultaneously \nsatisfied.\nWe also see from this analysis that \nour supertrace definition in Eq.~(\\ref{supertracedef})\nmay be more formally defined as~\\cite{Dienes:1995pm}\n\\begin{equation}\n {\\rm Str} \\, A ~\\equiv~ \n \\lim_{y\\to 0} \\, \\sum_{{\\rm physical}~ i} (-1)^{F_i} \\, A_i \\, e^{- y \\alpha' M_i^2}~.\n\\label{supertracedef2}\n\\end{equation}\n\nThe supertrace results in Eqs.~(\\ref{eq:lamlam}) and (\\ref{eq:lamlam0}) were first derived in \nRef.~\\cite{Dienes:1995pm}.\nAs discussed in Refs.~{\\mbox{\\cite{Dienes:1995pm,Dienes:2001se}}}, \nthese results hold for all tachyon-free heterotic strings in four dimensions,\nand in fact similar results hold in all spacetime dimensions {\\mbox{$D>2$}}.\nFor theories exhibiting spacetime supersymmetry,\nthese relations are satisfied rather trivially.\nHowever, even if the spacetime supersymmetry is broken\n--- and even if the scale of supersymmetry breaking is relatively large or at the Planck scale ---\nthese results nevertheless continue to hold.\nIn such cases, these supertrace relations do not arise as the results of pairwise cancellations\nbetween the contributions of bosonic and fermionic string states.\nRather, these relations emerge as the results of conspiracies that occur across the {\\it entire}\\\/\nstring spectrum, with the bosonic and fermionic string states \nalways carefully arranging themselves \nat all string mass levels \nso as to exhibit a so-called ``misaligned supersymmetry''~{\\mbox{\\cite{Dienes:1994np,Dienes:2001se}}}.\nNo pairing of bosonic and 
fermionic states \noccurs within misaligned supersymmetry,\nyet misaligned supersymmetry ensures that these supertrace relations are always satisfied.\nThese results therefore constrain the extent to which supersymmetry can be broken in tachyon-free string theories\nwhile remaining consistent with modular invariance. \n\n\nThe results that we have obtained thus far pertain to the cosmological constant $\\Lambda$.\nAs such, they would be sufficient if we were aiming to understand this quantity unto itself,\nsince $\\Lambda$ is finite in any tachyon-free modular-invariant\ntheory and hence requires no regulator.\nHowever, in this paper our interest in this quantity stems from the fact that $\\Lambda$ is\nan intrinsic contributor to the total Higgs mass in Eq.~(\\ref{relation1}),\nand we already have seen that the Higgs mass requires regularization.\nAt first glance, one might imagine regulating the terms with non-zero ${\\cal X}_i$ insertions\nwhile leaving the $\\Lambda$-term alone.\nHowever, it is ultimately inappropriate to regularize only a subset of terms that contribute\nto the Higgs mass --- for consistency we must apply the same regulator to the entire expression\nat once.\nIndeed, we recall from Sect.~\\ref{sec2} that the entire Higgs-mass expression including $\\Lambda$ forms\na modular-invariant unit, with $\\Lambda$ emerging from the modular completion\nof some of the terms with non-trivial ${\\cal X}_i$ insertions.\nFor this reason,\nwe shall now study the analogously regulated \ncosmological constant\n\\begin{equation}\n \\widehat \\Lambda (\\rho,a) ~\\equiv~ \\int_{\\cal F} \\dmu~ {\\cal Z}(\\tau) \\, \\widehat{\\cal G}_\\rho(a,\\tau)~\n\\label{Lambdahatdef}\n\\end{equation}\nand determine the extent to which this regularized cosmological constant\ncan also be expressed in terms of supertraces over the physical string states.\n\nOur discussion proceeds precisely as for the terms involving the ${\\cal X}_i$ insertions.\nFollowing the result in 
Eq.~(\\ref{Irhoa}) we find that \n$\\widehat \\Lambda (\\rho,a)$ can be expressed as\n\\begin{equation}\n \\widehat \\Lambda(\\rho,a) ~=~ \n\\frac{\\pi}{3}\\, \\oneRes \\, \\int_0^\\infty d\\tau_2 \\,\\tau_2^{s-3} \\,\\widehat g_\\rho(a,\\tau_2) ~ \n\\label{Irhoa3}\n\\end{equation}\nwhere \n\\begin{eqnarray}\n && \\widehat g_\\rho(a,\\tau_2) \\,\\equiv\\, \n -\\frac{{\\cal M}^4}{2}\n \\int_{-1\/2}^{1\/2} d\\tau_1 \\nonumber\\\\ \n && ~~~~~~~~~ \n \\times\\, \\left\\lbrace \n\\left\\lbrack \n \\sum_{m,n} (-1)^F \\,{\\overline{q}}^{m} q^{n} \\right\\rbrack \n \\,\\widehat {\\cal G}_\\rho(a,\\tau)\\right\\rbrace ~~~\\nonumber\\\\\n&& ~~~~~ = ~\n -\\frac{{\\cal M}^4}{2}\n \\left\\lbrack {\\rm Str} \\, e^{-\\pi \\alpha' M^2 \\tau_2} \\right\\rbrack \n \\widehat {\\cal G}_\\rho(a,\\tau_2) ~.~~~~~ \n\\label{gFGdef4}\n\\end{eqnarray}\nIn the second line the sum over $(m,n)$ indicates a sum over the entire spectrum \nof the theory, while\nin passing to the factorized form in the third line of Eq.~(\\ref{gFGdef4}) we \nhave again followed the result in Eq.~(\\ref{gFGdef2}) \nand explicitly restricted our attention to those cases\nwith {\\mbox{$a\\ll 1$}}, as appropriate for the \nregulator function $\\widehat {\\cal G}_\\rho(a,\\tau)$.\nIndeed, the term within square brackets in the third line of \nEq.~(\\ref{gFGdef4}) is our desired supertrace over physical string states,\nwhile the regulator function $\\widehat{\\cal G}_\\rho(a,\\tau_2)$ \nprovides a non-trivial $\\tau_2$-dependent weighting to the different\nterms within $\\widehat g_\\rho(a,\\tau_2)$. \n\n\nOnce again, \nthe next step is to substitute Eq.~(\\ref{gFGdef4}) back into Eq.~(\\ref{Irhoa3})\nand evaluate the residue at {\\mbox{$s=1$}}. \nIn general, the presence of the regulator function $\\widehat{\\cal G}_\\rho(a,\\tau_2)$ within \nEq.~(\\ref{gFGdef4}) renders this calculation somewhat intricate. 
However, \njust as for the terms with non-trivial ${\\cal X}_i$ insertions,\nwe know that {\\mbox{$\\widehat {\\cal G}_\\rho(a,\\tau_2)\\to 1$}} as {\\mbox{$a\\to 0$}}.\nIn this limit, we therefore expect to obtain our original (finite) unregulated $\\Lambda$:\n\\begin{equation}\n \\lim_{a\\to 0} \\,\\widehat \\Lambda(\\rho,a) ~=~ \\Lambda ~=~ \n \\frac{1}{24} {\\cal M}^2 \\, {\\rm Str}\\, M^2~\n\\label{limitingcase2}\n\\end{equation}\nwhere in the final equality we have utilized the result in Eq.~(\\ref{eq:lamlam}).\nEquivalently, upon identifying the physical scale $\\mu$ as in Eq.~(\\ref{mut}),\nwe thus expect\n\\begin{equation}\n \\lim_{\\mu\\to 0} \\,\\widehat \\Lambda (\\mu) ~=~ \\Lambda~.\n\\label{lambdamulimit}\n\\end{equation}\n\nLet us now determine how $\\widehat\\Lambda(\\mu)$ runs\nas a function of the scale $\\mu$.\nTo do this, we need to evaluate $\\widehat \\Lambda(\\rho,a)$ explicitly as a function of $\\rho$ and $a$.\nThis question is tackled in Appendix~\\ref{lambdaappendix}, yielding the exact result\nin Eq.~(\\ref{lambdaresult}). 
Written in terms of the physical scale $\\mu$ in Eq.~(\\ref{mudef}) \nthis result then takes the form\n\\begin{eqnarray}\n \\widehat\\Lambda (\\mu ) ~&=&~ \\frac{1}{1+\\mu^2\/M_s^2} \\, \\Biggl\\lbrace\n \\frac{{\\cal M}^2}{24} \\, {\\rm Str}\\,M^2 \\nonumber\\\\\n && ~~ - \\frac{7}{960 \\pi^2} \\,(n_B-n_F)\\, \\mu^4 \n~\\nonumber\\\\\n && ~~ - \\frac{1}{2\\pi^2}\\,\n \\pStr M^4 \\,\\Biggl[ {\\cal K}_1^{(-1,0)}\\!\\left( \\frac{2\\sqrt{2} \\pi M}{\\mu} \\right) ~~~~~~\\nonumber\\\\\n && ~~~~~~~~~ + 4\\, {\\cal K}_2^{(-2,-1)}\\!\\left( \\frac{2\\sqrt{2} \\pi M}{\\mu}\\right)\\nonumber\\\\\n && ~~~~~~~~~ + {\\cal K}_3^{(-1,0)}\\!\\left( \\frac{2\\sqrt{2} \\pi M}{\\mu} \\right)\n \\Biggr] ~ \\Biggr\\rbrace~\n\\label{lambdamuresult}\n\\end{eqnarray}\nwhere we have again taken {\\mbox{$\\rho=2$}} as our benchmark value, \nwhere ${\\cal K}_\\nu^{(n,p)}(z)$ are the Bessel-function combinations defined in Eq.~(\\ref{Besselcombos}),\nand where $n_{B}$ and $n_F$ are the numbers of massless bosonic and fermionic degrees of freedom in the theory\nrespectively (so that {\\mbox{$\\zStr {\\bf 1} = n_B-n_F$}}).\n\n\n\nIt is straightforward to verify that \nthis result is consistent \nwith the result in Eq.~(\\ref{lambdamulimit}) in the {\\mbox{$\\mu\\to 0$}} limit.\nBecause all of the Bessel-function combinations within Eq.~(\\ref{lambdamuresult}) vanish\nexponentially rapidly as their arguments grow to infinity, \nonly the first term in Eq.~(\\ref{lambdamuresult}) survives in this limit.\nWe therefore find that the {\\mbox{$\\mu\\to 0$}} limit of Eq.~(\\ref{lambdamuresult}) \nyields the anticipated result in Eq.~(\\ref{lambdamulimit}). 
\n\nFrom Eq.~(\\ref{lambdamuresult}) \nwe can also understand the manner in which $\\widehat \\Lambda(\\mu)$ runs as a function of $\\mu$ for all {\\mbox{$0<\\mu\\ll M_s$}}.\nLet us first focus on the Bessel-function terms within the square brackets in Eq.~(\\ref{lambdamuresult}).\nBy themselves, these terms behave in much the same way as shown in Fig.~\\ref{transientfigure}, except without the \ntransient dip and with the asymptotic behavior for {\\mbox{$\\mu\\gsim M$}} scaling as a power (rather than logarithm) of $\\mu$.\nMore specifically, to leading order in $\\mu\/M$ and for {\\mbox{$\\mu\\gsim M$}}, we find using the techniques developed\nin Ref.~\\cite{Paris} that\n\\begin{eqnarray}\n && {\\cal K}_1^{(-1,0)}\\!\\left( z \\right) \n + 4 \\,{\\cal K}_2^{(-2,-1)}\\!\\left( z\\right) + {\\cal K}_3^{(-1,0)}\\!\\left( z \\right) ~~~~~~~\\nonumber\\\\ \n && ~~~~~~~~~~~~~~~~~~~~~~~\\sim~ \\frac{7}{480} \\left( \\frac{\\mu}{M}\\right)^4~\n\\end{eqnarray}\nwhere {\\mbox{$z\\equiv 2\\sqrt{2}\\pi M\/\\mu$}}.\nBy contrast, for {\\mbox{$\\mu \\lsim M$}}, this quantity is exponentially suppressed.\nThus, \nrecalling \nthe result in Eq.~(\\ref{eq:lamlam})\nfor our original unregulated (but nevertheless finite) cosmological constant $\\Lambda$ \nand once again keeping only those (leading) running terms which dominate for {\\mbox{$M\\ll \\mu \\ll M_s$}},\nwe find that Eq.~(\\ref{lambdamuresult}) simplifies to take the approximate form\n \\begin{eqnarray}\n \\widehat\\Lambda (\\mu ) ~&\\approx&~ \\Lambda - \\frac{7}{960 \\pi^2} \\left[ \\left(\\zStr {\\bf 1} \\right) +\n \\left( \\effStr {\\bf 1} \\right) \\right]\\! \\mu^4 ~\\nonumber\\\\\n &\\approx&~ \\Lambda - \\frac{7}{960 \\pi^2 } \\left( \\zeffStr {\\bf 1} \\right) \\mu^4 ~.\n\\label{lambdamuresult2}\n\\end{eqnarray}\nWe once again emphasize that we have retained the second term (scaling as $\\mu^4$) as this is the \nleading $\\mu$-dependent term when {\\mbox{$M\\ll \\mu \\ll M_s$}}. 
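Indeed, since the contributions from states with {\\mbox{$\\mu\\lsim M$}} are exponentially suppressed, inserting this asymptotic form into the Bessel-function terms of Eq.~(\\ref{lambdamuresult}) converts the supertrace over massive states into an effective supertrace over the light states:
\\begin{equation}
 -\\frac{1}{2\\pi^2} \\cdot \\frac{7}{480}\\, \\effStr \\left[ M^4 \\left(\\frac{\\mu}{M}\\right)^4 \\right]
 ~=~ -\\frac{7}{960\\pi^2}\\, \\left( \\effStr {\\bf 1}\\right) \\mu^4~,
\\end{equation}
which, combined with the $(n_B-n_F)$ contribution from the massless states, yields Eq.~(\\ref{lambdamuresult2}).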
\nJust as for $\\widehat m_\\phi^2(\\mu)\\bigl|_{\\cal X}$, there also generally \nexist additional running terms \nwhich scale as $\\mu^2$ and $\\log\\,\\mu$, but these terms are subleading\n relative to the above $\\mu^4$ term when {\\mbox{$M\\ll \\mu \\ll M_s$}}. \nWe shall discuss these subleading terms further in Sect.~\\ref{sec5}.~\nMoreover, just as we saw for $\\widehat m_\\phi^2(\\mu)\\bigl|_{\\cal X}$, the $\\mu^4$ scaling\nbehavior can be enhanced to an even greater effective power $\\mu^n$ with {\\mbox{$n>4$}} if the spectrum of light states\nis sufficiently dense when taking the supertrace over string states.\nHowever, even this leading $\\mu^n$ scaling is generally subleading compared with the constant term $\\Lambda$.\nThus the regulated quantity $\\widehat \\Lambda(\\mu)$ --- unlike $\\widehat m_\\phi^2(\\mu)\\bigl|_{\\cal X}$ \nin Eq.~(\\ref{approxhiggsmassmu}) ---\nis dominated by a constant term and exhibits at most a highly suppressed running relative to this constant.\n\n\n\n\n\\FloatBarrier\n\\subsubsection{The Higgs mass in string theory: See how it runs!\\label{seehow}}\n\n\nWe now finally combine both contributions \n$\\widehat m_\\phi^2(\\mu)\\bigl|_{{\\cal X},\\Lambda}$\nas in Eq.~(\\ref{twocontributions})\nin order to obtain our final result for the \ntotal modular-invariant regulated Higgs mass $\\widehat m_\\phi^2(\\mu)$.\nThe exact result, of course, is given by the sum of Eqs.~(\\ref{finalhiggsmassmu})\nand (\\ref{lambdamuresult}), with the \nlatter first multiplied by $\\xi\/(4\\pi^2 {\\cal M}^2)$.\nHowever, once again taking the corresponding approximate forms in\nEqs.~(\\ref{approxhiggsmassmu}) and (\\ref{lambdamuresult2})\nwhich are valid for {\\mbox{$M\\ll \\mu\\ll M_s$}},\nwe see that the $\\mu^4$ running within \nEq.~(\\ref{lambdamuresult2})\nis no longer the dominant running for \n$\\widehat m_\\phi^2(\\mu)$ as a whole, \nas it is extremely suppressed compared with the running coming from\nEq.~(\\ref{approxhiggsmassmu}).\nWe 
thus find that to leading order,\nthe net effect of adding \nEqs.~(\\ref{approxhiggsmassmu}) and (\\ref{lambdamuresult2})\nis simply to add the overall constant $\\xi \\Lambda\/(4\\pi^2 {\\cal M}^2)$ to \nthe result in Eq.~(\\ref{approxhiggsmassmu}).\nWe therefore find that the total regulated Higgs mass has the leading\nrunning behavior\n\\begin{eqnarray}\n && \\widehat m_\\phi^2(\\mu) ~\\approx~\n \\frac{\\xi}{4\\pi^2} \\frac{\\Lambda}{{\\cal M}^2}\n - \\frac{\\pi}{6}\\, {\\cal M}^2\\, {\\rm Str}\\, {\\mathbb{X}}_1\\nonumber\\\\\n && ~~~~~~~~~~ \n + {\\cal M}^2 \\, \\zStr {\\mathbb{X}}_2 \\,\\log\\left( \\frac{ \\mu}{2\\sqrt{2} e M_s}\\right)\\nonumber\\\\\n && ~~~~~~~~~~ \n + {\\cal M}^2 \\,\\effStr \n {\\mathbb{X}}_2 \\,\\log\\left[ \\frac{1}{\\sqrt{2}}\\,e^{-(\\gamma+1)} \\frac{\\mu}{M}\\right]~~~~~~~~~\n\\label{totalhiggsrunning}\n\\end{eqnarray}\nwhere we have retained only the terms \nthat are leading for {\\mbox{$M\\ll \\mu\\ll M_s$}}.\nOnce again, just as for $\\widehat m_\\phi^2(\\mu)\\bigl|_{\\cal X}$,\nwe see that to this order the ${\\mathbb{X}}_2$ charges of the string states\nlead to non-trivial running while their ${\\mathbb{X}}_1$ charges only contribute\nto an overall additive constant. \nIndeed, in the {\\mbox{$\\mu\\to 0$}} limit, we find\n\\begin{equation}\n \\lim_{\\mu\\to 0} \\widehat m_\\phi^2(\\mu) ~=~\n \\frac{\\xi}{4\\pi^2} \\frac{\\Lambda}{{\\cal M}^2}\n - \\frac{\\pi}{6}\\, {\\cal M}^2\\, {\\rm Str}\\, {\\mathbb{X}}_1\n\\label{asymplimit}\n\\end{equation}\nwhen {\\mbox{$\\zStr {\\mathbb{X}}_2=0$}}.\nOf course, when {\\mbox{$\\zStr {\\mathbb{X}}_2\\not=0$}}, the {\\mbox{$\\mu\\to 0$}} limit diverges, as expected from the \nfact that the massless ${\\mathbb{X}}_2$-charged states are precisely the states that led\nto a divergence in the original unregulated Higgs mass $m_\\phi^2$. 
\nAs discussed in Sect.~\\ref{chargeinsertions},\nwe nevertheless continue to obtain a finite result for the regulated Higgs mass $\\widehat m_\\phi^2(\\mu)$ \nfor all {\\mbox{$\\mu>0$}} even when\n{\\mbox{$\\zStr {\\mathbb{X}}_2\\not=0$}}.\n\nIn order to understand what the running in Eq.~(\\ref{totalhiggsrunning}) looks like for {\\mbox{$0<\\mu\\ll M_s$}}, \nlet us begin by considering the contribution from \na single ${\\mathbb{X}}_2$-charged string state with a given mass {\\mbox{$0<M\\ll M_s$}},\nand let $M_{\\rm lightest}$ denote the mass \nof the lightest ${\\mathbb{X}}_2$-charged states.\nIndeed, as discussed previously, whether an effective power-law running emerges depends on the density of states in the theory\nwith masses {\\mbox{$M\\gsim M_{\\rm lightest}$}}. \nIt is for this reason that we have indicated in Fig.~\\ref{anatomy} \nthat the net running within the {\\mbox{$\\mu> M_{\\rm lightest}$}} region can be either logarithmic\nor power-law.\nHowever, as we progress to lower scales {\\mbox{$\\mu\\sim M_{\\rm lightest}$}}, \nwe enter the ``dip region''\nwhere this logarithmic\/power-law running shuts off.\nFinally, for {\\mbox{$\\mu\\ll M_{\\rm lightest}$}}, all such contributions are exponentially suppressed\nand the running shuts off entirely.\n\nWe now verify that the second $\\phi$-derivative of $P_\\Lambda(a)$ in Eq.~(\\ref{Pacos}) correctly reproduces\n$P_{\\cal X}(a)$ in Eq.~(\\ref{finalPa}).\nIn what follows, $\\beta_0$ denotes the $\\phi$-independent part of each mass function $M^2(\\phi)$,\nso that massive states have {\\mbox{$\\beta_0>0$}} while\nmassless states have {\\mbox{$\\beta_0=0$}}. However, we see that even massless \nstates introduce a $\\phi$-dependence prior to the truncation to {\\mbox{$\\phi=0$}}.\n\nFocusing initially on the first term within $P_\\Lambda(a)$ in Eq.~(\\ref{Pacos}), we immediately see that \n\\begin{eqnarray}\n && \\partial_\\phi^2 \\left. \\left[ \\frac{{\\cal M}^2}{24 a} \\, {\\rm Str}\\, M^2 \\right] \\right|_{\\phi=0} \n = ~ \\left. 
\\frac{{\\cal M}^2}{24a} \\, {\\rm Str}\\, (\\partial_\\phi^2 M^2) \\right|_{\\phi=0} \\nonumber\\\\\n && ~~~~~~~~~~~~~~~ = ~ - \\frac{{\\cal M}^2}{2} \\,\\left[ \\frac{\\pi}{3a} \\,{\\rm Str}\\, {\\mathbb{X}}_1\\right]~\n\\end{eqnarray}\nwhere we have used the result in Eq.~(\\ref{newXi}) in passing to the final expression.\nThis successfully reproduces the initial terms within $P_{\\cal X}(a)$ in Eq.~(\\ref{finalPa}).\n Next, we evaluate the second $\\phi$-derivative of\nthe Bessel-function terms within $P_\\Lambda(a)$ in Eq.~(\\ref{Pacos}).\nTo do this, we note the mathematical identity \n\\begin{eqnarray}\n && \\partial_\\phi^2 \\left[ M^2 K_2{ \\left( \\frac{r M}{a \\calM} \\right) }\\right] ~=~ \n - \\frac{r M}{2a{\\cal M}} \\left( \\partial_\\phi^2 M^2\\right) K_1{ \\left( \\frac{r M}{a \\calM} \\right) } \\nonumber\\\\\n &&~~~~~~~~~~~~~~~~~ + \\frac{r^2}{4 a^2 {\\cal M}^2} \\left(\\partial_\\phi M^2\\right)^2 K_0{ \\left( \\frac{r M}{a \\calM} \\right) }~\n\\end{eqnarray} \nwhich follows from standard results for Bessel-function derivatives along with a judicious \nrepackaging of terms.\nGiven this, and given the relations in Eq.~(\\ref{newXi}), we then find that \n\\begin{eqnarray}\n&& \\partial_\\phi^2 \\left. 
\\left\\lbrace \n \\frac{a}{\\pi^2 } \n \\, \\bpStr \\!\\left\\lbrack M^2 \\sum_{r=1}^\\infty\n \\frac{1}{r^2} K_2\\left(\\frac{ r M}{ a {\\cal M}}\\right)\\right\\rbrack\\right\\rbrace \\right|_{\\phi=0} \\nonumber\\\\\n&& ~~~~~~~~~ =~ \n \\frac{2}{\\pi}\\, \\pStr {\\mathbb{X}}_1 \\,\\left\\lbrack \\sum_{r=1}^\\infty \\left(\\frac{M}{r{\\cal M}}\\right)\n \\,K_1\\left( \\frac{r M}{a{\\cal M}} \\right)\\right\\rbrack \\nonumber\\\\\n&& ~~~~~~~~~ \\phantom{=}~ + \n \\frac{4}{a} \\,\\pStr {\\mathbb{X}}_2 \\left\\lbrack \\sum_{r=1}^\\infty \n K_0\\left( \\frac{ r M}{a{\\cal M}} \\right) \\right\\rbrack~.~~~~~~~~~~~~~~ \n\\label{besselderivs}\n\\end{eqnarray}\nwhere the supertrace in the first line is over all states whose mass functions $M^2(\\phi)$ have {\\mbox{$\\beta_0>0$}}. \nWe thus see that the result in Eq.~(\\ref{besselderivs}) likewise successfully reproduces \nthe Bessel-function terms within $P_{\\cal X}(a)$ in Eq.~(\\ref{finalPa}).\n\nOur final task is to evaluate $\\partial_\\phi^2$ acting on the second term in Eq.~(\\ref{Pacos}).\nAt first glance, it would appear that this term does not yield any contribution since \nit is wholly independent of the mass $M$ and would thus not lead to any $\\phi$-dependence.\nIndeed, as evident from Eq.~(\\ref{P2cosa}), this term represents a contribution to $P_\\Lambda(a)$ \nfrom purely massless states, and as such the identification {\\mbox{$M=0$}} has already been implemented\nwithin this term.
This is why no factors of the mass $M$ remain within this term.\nHowever, as discussed above, for the purposes of the present calculation we are to regard the masses $M$ \nas functions of $\\phi$ before taking the $\\phi$-derivatives.\n Thus, when attempting to take $\\phi$-derivatives of the second term in Eq.~(\\ref{Pacos}),\nwe should properly go back one step to the original derivation of this term that appears in Eq.~(\\ref{P2cosa}) \nand reinsert a non-trivial mass function $M^2(\\phi)$ with {\\mbox{$\\beta_0=0$}} into the derivation.\nThe remaining derivation of this term then algebraically mirrors the derivation of the {\\it massive}\\\/ \nBessel-function term in Eq.~(\\ref{P2cosb}), only with $M^2$ now replaced by $M^2(\\phi)$ with {\\mbox{$\\beta_0=0$}}.\nIn other words, for the purposes of our current calculation, we should formally identify\n\\begin{eqnarray}\n&& \\frac {{\\cal M}^4}{2}\\, \\frac{\\pi^2}{45}\\,(n_B-n_F) \\,{a^3} \\nonumber\\\\\n&& ~~~=~ \n \\frac{{\\cal M}^2}{2}\\left. \\frac{a}{\\pi^2 } \n \\, \\bzStr \\left\\lbrack M^2 \\sum_{r=1}^\\infty\n \\frac{1}{r^2} K_2\\left(\\frac{ r M}{ a {\\cal M}}\\right) \\right\\rbrack \\right|_{\\phi=0}~~~~~~~\n\\label{substitution}\n\\end{eqnarray}\nand then evaluate the $\\phi$-derivatives before \ntruncating to {\\mbox{$\\phi=0$}}.\nAside from the overall factor of $- {\\cal M}^2\/2$, acting with $\\partial^2_\\phi$ and then truncating to {\\mbox{$\\phi=0$}} yields \nthe same result as on the right side of Eq.~(\\ref{besselderivs}),\nexcept with each supertrace over massive states \nreplaced with a supertrace over massless states.\nWe thus need to evaluate these Bessel-function expressions at zero argument.\nHowever, for small arguments {\\mbox{$z\\ll 1$}}, the Bessel functions have the leading asymptotic behaviors\n\\begin{equation}\n K_\\nu(z) ~\\sim~ \\begin{cases}\n ~-\\log (z\/2) - \\gamma +...& {\\rm for}~~\\nu=0 \\\\\n ~\\phantom{-}{\\textstyle{1\\over 2}} \\,\\Gamma(\\nu) \\,(z\/2)^{-\\nu} + ...
& {\\rm for}~~\\nu>0 \n \\end{cases}\n\\label{Bessellimits}\n\\end{equation}\nwhere $\\gamma$ is the Euler-Mascheroni constant.\nAnalyzing the ${\\rm Str} \\,{\\mathbb{X}}_1$ term,\nwe thus see that\n\\begin{eqnarray}\n && \\frac{2}{\\pi}\\, \\lim_{M\\to 0} \\,\\pStr {\\mathbb{X}}_1 \\left\\lbrack \\sum_{r=1}^\\infty \\left(\\frac{M}{r{\\cal M}}\\right)\n \\,K_1\\left( \\frac{r M}{a{\\cal M}} \\right)\\right\\rbrack \\nonumber\\\\\n && ~~~~~~~=~ \\frac{2}{\\pi}\\, \\zStr {\\mathbb{X}}_1 \\,\\lim_{M\\to 0} \\left\\lbrack \\sum_{r=1}^\\infty \\left(\\frac{M}{r{\\cal M}}\\right)\n \\left(\\frac{a {\\cal M}}{r M} \\right)\\right\\rbrack \\nonumber\\\\\n && ~~~~~~~=~ \\frac{2a}{\\pi} \\,\\zStr {\\mathbb{X}}_1 \\,\\sum_{r=1}^\\infty \\,\\frac{1}{r^2} ~= ~ \\frac{\\pi}{3}\\, a \\,\\zStr {\\mathbb{X}}_1~,\n\\label{hidden}\n\\end{eqnarray} \nthereby successfully reproducing the corresponding term which appears in $P_{\\cal X}(a)$.\nIndeed, we see that the {\\mbox{$M\\to 0$}} limit in Eq.~(\\ref{hidden}) is convergent and continuous with the exact {\\mbox{$M=0$}} result.\n\nFor theories in which {\\mbox{$\\zStr {\\mathbb{X}}_2=0$}}, there are no further terms to consider.\nThe results of this analysis are then clear:\nwithin such theories, we have found that\n\\begin{equation}\n P_{\\cal X}(a) ~=~ \\partial_\\phi^2 \\, P_\\Lambda(a,\\phi) \\bigl|_{\\phi=0}~.\n\\end{equation}\nThrough Eq.~(\\ref{mtoP}), this then implies that \n\\begin{equation}\n \\widehat m_\\phi^2(\\rho,a)\\bigl|_{\\cal X} ~=~ \n \\partial_\\phi^2 \\, \\widehat\\Lambda( \\rho,a,\\phi) \\bigl|_{\\phi=0}~,\n\\end{equation}\nwhereupon use of Eq.~(\\ref{twocontributions}) tells us that\n\\begin{equation}\n \\widehat m_\\phi^2(\\rho,a) ~=~ \\left.\\left( \\partial_\\phi^2 + \\frac{\\xi}{4\\pi^2 {\\cal M}^2} \\right) \n \\, \\widehat\\Lambda( \\rho,a,\\phi) \\right|_{\\phi=0}~,\n\\end{equation}\nor equivalently\n\\begin{eqnarray}\n \\widehat m_\\phi^2(\\mu) ~&=&~ \n \\left.\\left( \\partial_\\phi^2 + \\frac{\\xi}{4\\pi^2 {\\cal M}^2} \\right) \n \\, 
\\widehat\\Lambda( \\mu,\\phi) \\right|_{\\phi=0} ~~~~~\\nonumber\\\\\n &=&~ \\left. D_\\phi^2 ~\\widehat\\Lambda( \\mu,\\phi) \\right|_{\\phi=0}~,\n\\label{finalCWresult}\n\\end{eqnarray}\nwhere we have defined the modular-covariant derivative \n\\begin{equation}\n D_\\phi^2 ~\\equiv~ \\partial_\\phi^2 + \\frac{\\xi}{4\\pi^2 {\\cal M}^2}~.\n\\label{Dphi2}\n\\end{equation}\nOf course, for theories with {\\mbox{$\\zStr {\\mathbb{X}}_2=0$}}, our original unregulated Higgs mass \nwas already finite and {\\it a priori}\\\/ there was no \nneed for a regulator. However, even within such theories, it is the use of our modular-invariant\nregulator for both $\\Lambda$ and $m_\\phi^2$ which enabled us to extract EFT descriptions \nof these quantities and to analyze their runnings as functions of an effective scale $\\mu$.\n\nThe result in Eq.~(\\ref{finalCWresult}) is both simple and profound.\nIndeed, comparing this result with our starting point in Eq.~(\\ref{higgsdef})\nand recalling the subsequent required modular completion in Eq.~(\\ref{Xmodcomplete}),\nwe see that we have in some sense come full circle.\nHowever, as stressed above, we have now demonstrated this result using only the general \nexpressions for ${\\mathbb{X}}_1$ and ${\\mathbb{X}}_2$ in Eq.~(\\ref{newXi}) and thus \n{\\it entirely without the assumption of a charge lattice}\\\/.\nThis result therefore holds for {\\it any}\\\/ modular-invariant string theory with {\\mbox{$\\zStr{\\mathbb{X}}_2=0$}}. \nIndeed, as indicated above, we can view $D_\\phi^2$ as a modular-covariant\nderivative, in complete analogy with the lattice-derived\ncovariant derivative $D_z^2$ in Eq.~(\\ref{modcovderiv}).\n\nBut more importantly, we see from Eq.~(\\ref{finalCWresult}) that \nwithin such theories\nwe can now identify $\\widehat \\Lambda(\\mu,\\phi)$ as {\\it an effective potential for the Higgs}\\\/. 
Strictly speaking, this is not the entire effective potential --- it does\nnot, for example, allow us to survey different minima \nas a function of $\\phi$ in order to select the global and local minima, as would be needed in order\nto determine the ground states of the theory in different possible phases (with unbroken and\/or broken symmetries).\nHowever, we see that $\\widehat \\Lambda(\\mu,\\phi)$ does provide a {\\it piece}\\\/ of the full potential, namely \nthe portion of the potential in the immediate vicinity of the assumed minimum (around which \n$\\phi$ parametrizes the fluctuations, as always).\nWith this understanding, we shall nevertheless simply refer to $\\widehat \\Lambda(\\mu,\\phi)$ as the Higgs effective potential.\nIndeed, as expected, we see from Eq.~(\\ref{finalCWresult}) that \nthe Higgs mass is related to the curvature of this potential around this minimum. \nOne can even potentially imagine repeating the calculations in this paper {\\it without}\\\/ implicitly assuming \nthe stability condition in Eq.~(\\ref{linearcond}), thereby dropping the implicit assumption that we are sitting\nat a stable vacuum of the theory. In that case, the first and second $\\phi$-derivatives of $\\widehat \\Lambda(\\mu,\\phi)$\nwould describe the slope and curvature of the potential for arbitrary values of $\\phi$, whereupon the methods in this paper\ncould provide a method of ``tracing out'' the \nshape of the full potential. 
However, at best this would appear to be a challenging undertaking.\n\nAs remarked above, the form of Eq.~(\\ref{finalCWresult}) makes sense from the perspective \nof Eq.~(\\ref{higgsdef}), in conjunction with the subsequent modular completion.\nAt first glance, it may seem surprising that such a result would continue to survive \neven after imposing our modular-invariant regulator \nin order to generate our regulated expressions\nfor $\\widehat m_\\phi^2(\\mu)$ and $\\widehat \\Lambda(\\mu,\\phi)$,\nand perhaps even more surprising after \nthe Rankin-Selberg techniques and their generalizations in Sect.~\\ref{sec3}\nare employed in order to express these regulated quantities in terms of supertraces over purely physical (level-matched)\nstring states.\nUltimately, however, \nthe result in Eq.~(\\ref{finalCWresult})\nconcerns the $\\phi$-structure of the theory and the response of the theory to fluctuations in the Higgs field.\nIn theories with {\\mbox{$\\zStr{\\mathbb{X}}_2=0$}},\nthese properties are essentially ``orthogonal'' to the manipulations that occurred in Sects.~\\ref{sec3} and \\ref{sec4},\nwhich ultimately concern the regulators and the resulting behavior of these quantities as functions of $\\mu$.\nIn other words, in such theories\nthe process of $\\phi$-differentiation in some sense ``commutes''\nwith all of these other manipulations.\nThus the relation in Eq.~(\\ref{finalCWresult}) holds not only for our original unregulated Higgs mass\nand cosmological constant, but also for their regulated counterparts as well as for the running which describes\ntheir dependence on the variables defining the regulator.\n\nIt is also intriguing that we are able to identify a modular-covariant \nderivative $D_\\phi^2$ within the results in Eq.~(\\ref{finalCWresult}).\nOf course, this is the {\\it second}\\\/ $\\phi$-derivative.\nBy contrast, the {\\it first}\\\/ $\\phi$-derivative does not require modular completion.\nWe have already seen this in 
Sect.~\\ref{stability}, where we found\nthat $\\partial_\\phi$ acting on the partition function ${\\cal Z}$ corresponds to insertion\nof the factor ${\\cal Y}$, which was already modular invariant.\nIn this sense, $\\phi$-derivatives are similar to the $z$-derivatives \ndiscussed in Sect.~\\ref{sec:completion}.~\n\nThe result in Eq.~(\\ref{finalCWresult}) holds only for theories in which {\\mbox{$\\zStr{\\mathbb{X}}_2=0$}}.\nHowever, when {\\mbox{$\\zStr{\\mathbb{X}}_2\\not=0$}}, there is an additional term \nto consider within $P_\\Lambda$.\nTaking the {\\mbox{$M\\to 0$}} limit of the $\\pStr\\,{\\mathbb{X}}_2$ result in \nEq.~(\\ref{besselderivs}) in conjunction with the limiting behavior in Eq.~(\\ref{Bessellimits}),\nwe formally obtain\n\\begin{eqnarray}\n && \\frac{4}{a} \\,\\lim_{M\\to 0}\\, \\pStr {\\mathbb{X}}_2 \\left\\lbrack \\sum_{r=1}^\\infty \n K_0\\left( \\frac{ r M}{a{\\cal M}} \\right) \\right\\rbrack\\nonumber\\\\\n && ~~~~=~ \\frac{4}{a} \\,\\zStr {\\mathbb{X}}_2 \\sum_{r=1}^\\infty \\left[ -\\log\\left( \\frac{rM}{2a{\\cal M}}\\right) -\\gamma \\right]~.~~~~~\n\\label{badstuff}\n\\end{eqnarray}\nUnfortunately, this infinite $r$-summation is not convergent. \nIt also does not correspond to what is presumably the \nexact {\\mbox{$M=0$}} result within $P_{\\cal X}$.\nWe stress that these complications arise only when {\\mbox{$\\zStr {\\mathbb{X}}_2\\not=0$}}, \nwhich is precisely the condition under which the original unregulated Higgs mass is divergent. 
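The non-convergence of the $r$-summation in Eq.~(\ref{badstuff}) is easy to see numerically. In the sketch below (all inputs are hypothetical placeholder values: $a={\cal M}=1$, $M=0.01$, and a unit net ${\mathbb{X}}_2$ charge for the massless sector) each term behaves as $-\log r + {\rm const}$, so the partial sums first grow, then turn over and decrease without bound:

```python
import math

# Partial sums of the r-summation in Eq. (badstuff), for hypothetical values
# a = calM = 1, M = 0.01, and unit net X_2 charge of the massless sector.
# Each term is -log(r*M/(2*a*calM)) - gamma ~ -log(r) + const, so the partial
# sums rise at first but eventually decrease without bound: no limit exists.

gamma = 0.5772156649015329   # Euler-Mascheroni constant

def partial_sum(N, M=0.01, a=1.0, calM=1.0):
    """Partial sum of the (4/a) * sum_r [-log(r M / (2 a calM)) - gamma] series."""
    return (4.0 / a) * sum(-math.log(r * M / (2 * a * calM)) - gamma
                           for r in range(1, N + 1))

S = {N: partial_sum(N) for N in (100, 1000, 10000)}
# S[100] is positive, while S[1000] and S[10000] are increasingly negative,
# exhibiting the divergence of the formal M -> 0 expression.
```

By Stirling's approximation the partial sum behaves as $N(C+1-\log N)$ with $C$ the constant per-term piece, which makes the eventual unbounded decrease explicit.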
\n\n\n\n\n\nIn order to better understand this phenomenon,\nwe can perform a more sophisticated analysis \nby analytically performing the $r$-summation \nin complete generality before taking the {\\mbox{$M\\to 0$}} limit.\nWe begin by defining the Bessel-function combinations\n\\begin{equation}\n {\\mathbb{K}}_\\nu(z) ~\\equiv~ 2\\, \\sum_{r=1}^\\infty \\, (r z)^{-\\nu} K_\\nu (r z)~.\n\\label{bbK}\n\\end{equation}\nThese Bessel-function combinations are relevant for both $P_{\\cal X}$ and $P_\\Lambda$\nin the same way that the combinations ${\\cal K}^{(n,p)}_\\nu(z)$ \nin Eq.~(\\ref{Besselcombos})\nwere relevant for $\\widehat m_\\phi^2\\bigl|_{\\cal X}$ and $\\widehat \\Lambda$, and indeed\n\\begin{equation}\n {\\cal K}^{(-\\nu,p)}_\\nu (z) ~=~ {\\textstyle{1\\over 2}} \\bigl[\n \\rho^{-\\nu} \\,{\\mathbb{K}}_\\nu(z\/\\rho) - \\rho^p \\,{\\mathbb{K}}_\\nu(z) \\bigr]~.\n\\end{equation} \nUsing the techniques in Ref.~\\cite{Paris}, it is then straightforward (but exceedingly tedious) \nto demonstrate that ${\\mathbb{K}}_\\nu(z)$ for {\\mbox{$z\\ll 1$}} has a \nMaclaurin-Laurent series representation given by\n\\begin{widetext}\n\\begin{eqnarray}\n{\\mathbb{K}}_{\\nu}(z) ~&=&~ \n \\sum_{p=1}^{\\nu}\\, \n 2^{-\\nu}\\pi^{p}\n \\frac{(-1)^{\\nu-p}}{\\left(\\nu-p\\right)!}\n \\zeta^{\\ast}(2p)\n \\left(\\frac{z}{2}\\right)^{-2p} \n ~+~ 2^{-\\nu} \\sqrt{\\pi} \\,\\Gamma\\left( {\\textstyle{1\\over 2}}-\\nu\\right)\\, \\frac{1}{z} \\nonumber\\\\\n&&~~~~+~ \n \\frac{(-2)^{-\\nu}}{\\nu!}\\left[\\gamma-\\frac{H_\\nu}{2}+\\log\\left(z\/4\\pi\\right)\\right]\n ~+~ \\sum_{p=1}^{\\infty} \\,2^{-\\nu}\\pi^{-p}\n \\frac{(-1)^{\\nu+p}}{\\left(\\nu+p\\right)!}\n \\zeta^{\\ast}(2p+1)\n \\left(\\frac{z}{2}\\right)^{2p}\n\\label{lyon}\n\\end{eqnarray}\n where {\\mbox{$H_n\\equiv \\sum_{k=1}^{n}1\/k$}} is the $n^{\\rm th}$ harmonic number \nand where {\\mbox{$\\zeta^\\ast(s) \\equiv \\pi^{-s\/2} \\Gamma(s\/2) \\zeta(s) = \\zeta^\\ast(1-s)$}} \nis the ``completed'' Riemann 
$\\zeta$-function. \nThe representation in Eq.~(\\ref{lyon}) \nis particularly useful for {\\mbox{$z\\ll 1$}}, allowing us to extract\nthe leading behaviors \n\\begin{eqnarray}\n{\\mathbb{K}}_{0}(z) ~&=&~\\frac{\\pi}{z}+\\left[\\gamma+\\log\\left(\\frac{z}{4\\pi}\\right)\\right]\n -\\frac{\\zeta(3)z^2}{8\\pi^{2}}+\\frac{3\\zeta(5)z^4}{128\\pi^{4}}+\\ldots\\nonumber \\\\\n{\\mathbb{K}}_{1}(z) ~&=&~ \\frac{\\pi^{2}}{3z^{2}}-\\frac{\\pi}{z}\n -\\frac{1}{2}\\left[\\gamma-\\frac{1}{2} +\\log\\left(\\frac{z}{4\\pi}\\right)\\right]\n +\\frac{\\zeta(3)z^2}{32\\pi^{2}} -\\frac{\\zeta(5)z^4}{256\\pi^{4}}+\\ldots\\nonumber \\\\\n{\\mathbb{K}}_{2}(z) ~&=&~ \\frac{2\\pi^{4}}{45z^{4}}-\\frac{\\pi^{2}}{6z^{2}}+\\frac{\\pi}{3z}\n +\\frac{1}{8}\\left[\\gamma-\\frac{3}{4}+\\log\\left(\\frac{z}{4\\pi}\\right)\\right]\n -\\frac{\\zeta(3)z^2}{192\\pi^{2}}+\\frac{\\zeta(5)z^4}{2048\\pi^{4}}+\\ldots\n\\label{Kseries}\n\\end{eqnarray}\n\\end{widetext}\nIndeed, use of the expression for ${\\mathbb{K}}_1(z)$ confirms our result in Eq.~(\\ref{hidden}).\n\nArmed with the expression for ${\\mathbb{K}}_2(z)$ in Eq.~(\\ref{Kseries}), we can now rigorously evaluate the leading terms within\n$P_\\Lambda(a)$ ---\nand by extension within $\\widehat \\Lambda(\\rho,a)$ --- in complete generality, even when massless states\nare included.\nStarting from Eq.~(\\ref{Pacos}) in conjunction with the replacement in Eq.~(\\ref{substitution}),\nwe now have\n\\begin{eqnarray}\n P_\\Lambda(a) ~&=&~ \\frac{{\\cal M}^2}{24a} \\,{\\rm Str}\\, M^2 \n - \\frac{1}{4\\pi^2 a} \\,{\\rm Str}\\, M^4 \\,{\\mathbb{K}}_2\\!\\left( \\frac{M}{a{\\cal M}}\\right)~\\nonumber\\\\\n ~&\\approx&~ \\frac{{\\cal M}^2}{24a} \\,{\\rm Str}\\, M^2 \n - \\frac{1}{4\\pi^2 a} ~\\aMeffStr\\, M^4 \\,{\\mathbb{K}}_2\\!\\left( \\frac{M}{a{\\cal M}}\\right)~\\nonumber\\\\\n\\label{Plam}\n\\end{eqnarray}\nwhere the final supertrace on the first line is over {\\it all}\\\/ states in the theory, including those that are massless,\nand where in 
passing to the second line we have recognized that ${\\mathbb{K}}_2(z)$ is exponentially suppressed unless {\\mbox{$z\\ll 1$}}.\nThe fact that ${\\mathbb{K}}_2(z)$ is now explicitly restricted to the {\\mbox{$z\\ll 1$}} regime implies that it is legitimate\nto insert the series expansion for ${\\mathbb{K}}_2(z)$ from Eq.~(\\ref{Kseries}) within Eq.~(\\ref{mtoP}).\nIdentifying the physical scale $\\mu$ as in Eq.~(\\ref{mudef}) \nand retaining only the leading terms for {\\mbox{$\\mu\\ll M_s$}},\nwe then obtain\n\\begin{eqnarray}\n \\widehat \\Lambda(\\mu,\\phi) \\,&=&~ \\frac{1}{1+\\mu^2\/M_s^2} \\Biggl\\lbrace \\nonumber\\\\\n && \\phantom{-}\\frac{1}{24}{\\cal M}^2 \\,{\\rm Str}\\, M^2 + \\zeffStr \\left( \\frac{M^2 \\mu^2}{96\\pi^2} \n - \\frac{7\\mu^4}{960\\pi^2}\\right) \\nonumber\\\\ \n && - \\frac{1}{32\\pi^2} \\,\\zeffStr M^4 \\log\\left( \\sqrt{2}\\, e^{\\gamma+1\/4} \\frac{M}{\\mu}\\right)+...\\Biggr\\rbrace ~\\nonumber\\\\\n ~&=&~ \\frac{1}{24}{\\cal M}^2 \\,{\\rm Str}\\, M^2 \n -{\\rm Str}\\, \\frac{M^2 \\mu^2}{96\\pi^2} \\nonumber\\\\\n && + \\zeffStr \\left( \\frac{M^2 \\mu^2}{96\\pi^2} \n - \\frac{7\\mu^4}{960\\pi^2} \\right) \\nonumber\\\\ \n && - \\frac{1}{32\\pi^2} \\,\\zeffStr M^4 \\log\\left( \\sqrt{2}\\, e^{\\gamma+1\/4} \\frac{M}{\\mu}\\right)+... 
\\nonumber\\\\\n\\label{intermed2}\n\\end{eqnarray}\nwhere we have continued to adopt our benchmark value {\\mbox{$\\rho=2$}}\nand where we recall that each factor of $M$ carries a $\\phi$-dependence through Eq.~(\\ref{TaylorM}).\nNote that in passing to the final expression in Eq.~(\\ref{intermed2}) we have Taylor-expanded the overall \nprefactor and kept only those terms of the same order as those already shown.\nHowever, we now see that the $\\mu^2$ term from expanding the prefactor cancels the corresponding $\\mu^2$\nterm from ${\\mathbb{K}}_2$, leaving behind a net $\\mu^2$ term which scales as the $M^2$ supertrace \nof only those states whose masses {\\it exceed}\\\/ $\\mu$.\nWe thus obtain our final result\n\\begin{eqnarray}\n \\widehat \\Lambda(\\mu,\\phi) \\,&=&\\, \n \\frac{1}{24}{\\cal M}^2 \\,{\\rm Str}\\, M^2 \n -\\antieffStr \\frac{M^2 \\mu^2}{96\\pi^2} \n - \\zeffStr \\frac{7\\mu^4}{960\\pi^2} \\nonumber\\\\ \n && - \\frac{1}{32\\pi^2} \\,\\zeffStr M^4 \\log\\left( \\sqrt{2}\\, e^{\\gamma+1\/4} \\frac{M}{\\mu}\\right) \\,+\\, ... \\nonumber\\\\\n\\label{Lambdafull}\n\\end{eqnarray}\n Indeed, this result provides the leading approximation to the exact expression\nin Eq.~(\\ref{lambdamuresult}).\n\nThe first and third terms in this result are consistent with \nthose in Eq.~(\\ref{lambdamuresult2}), and indeed the $\\mu^4$ term is the contribution\nfrom the massless states within the second supertrace in Eq.~(\\ref{Plam}). \nHowever, to this order, we now see that there \nare two additional terms.\nThe first is a term scaling as $\\mu^2$ which depends on the spectrum of states with masses {\\mbox{$M\\gsim \\mu$}}.\nThis contribution lies outside the range {\\mbox{$M\\ll \\mu$}} studied in Eq.~(\\ref{lambdamuresult2})\nbut nevertheless generally appears for {\\mbox{$M\\gsim \\mu$}}.\nThe second is a logarithmic term. 
\nThis term is subleading when compared to the other terms shown, and \nmassless states make no contribution to this term (divergent or otherwise) when evaluated at {\\mbox{$\\phi=0$}} because\nof its $M^4$ prefactor.\n\n \n\nThis logarithmic term is nevertheless of critical importance \nwhen we consider the corresponding Higgs mass. \nAs we have seen in Eq.~(\\ref{finalCWresult}), the Higgs mass $\\widehat m_\\phi^2(\\mu)$\nreceives a contribution which scales as $\\partial_\\phi^2 \\widehat \\Lambda(\\mu)$.\nOf course, all of the dependence on $\\phi$ is carried within the masses $M$ which\nappear in Eq.~(\\ref{Lambdafull}), and as expected $\\widehat\\Lambda(\\mu)$ depends not\non these masses directly but on their squares.\nHowever, for any function $f(M^2)$ we have the algebraic identity\n\\begin{equation}\n \\partial_\\phi^2 f(M^2) \n ~=~ (\\partial_\\phi^2 M^2) \\frac{\\partial f}{\\partial M^2} +\n (\\partial_\\phi M^2)^2 \n \\frac{\\partial^2 f}{(\\partial M^2)^2}~.~~\n\\end{equation} \nThus, identifying {\\mbox{$f\\sim \\widehat\\Lambda(\\mu)$}} \nand recalling Eq.~(\\ref{newXi}), we obtain\n\\begin{eqnarray}\n \\partial_\\phi^2 \\,\\widehat \\Lambda(\\mu) \\Bigl|_{\\phi=0} &=& \n \\left. -4\\pi \\,{\\rm Str} \\,{\\mathbb{X}}_1 \\,\\frac{\\partial \\widehat \\Lambda(\\mu)}{\\partial M^2}\\right|_{\\phi=0} \\nonumber\\\\\n && ~~~+ \\left. 
16\\pi^2 {\\cal M}^2 \\,{\\rm Str} \\,{\\mathbb{X}}_2 \\,\\frac{\\partial^2 \\widehat \\Lambda(\\mu)}{(\\partial M^2)^2}\\right|_{\\phi=0}~\n ~~~~~~~~\n\\label{derivs}\n\\end{eqnarray}\nwhere we have implicitly used the fact that only non-negative powers of $\\phi$ appear within $\\widehat\\Lambda(\\mu)$,\nthereby ensuring that our truncation to {\\mbox{$\\phi=0$}} factorizes within each term.\n\nIn principle, both supertraces in Eq.~(\\ref{derivs}) include massless states.\nMoreover, we see that the ${\\rm Str} \\, {\\mathbb{X}}_1$ term is proportional to the single \n$M^2$-derivative of $\\widehat\\Lambda(\\mu)$, and when acting on the logarithm term within Eq.~(\\ref{Lambdafull})\nwe find that massless states continue to be harmless, yielding no contribution (and therefore no divergences).\nBy contrast, we see that the ${\\rm Str} \\, {\\mathbb{X}}_2$ term is proportional \nto the {\\it second}\\\/ $M^2$-derivative of $\\widehat \\Lambda(\\mu)$.\nThis derivative therefore leaves behind a logarithm with no leading $M^2$ factors remaining.\nThus, for {\\mbox{$M= 0$}}, we obtain a logarithmic divergence for the Higgs mass --- as expected ---\nso long as {\\mbox{$\\zStr \\,{\\mathbb{X}}_2\\not=0$}}.\nIndeed, all of this information is now directly encoded within the effective potential\n$\\widehat\\Lambda(\\mu)$ for this theory, as given in Eq.~(\\ref{Lambdafull}). 
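The chain-rule identity invoked above is straightforward to verify numerically with finite differences. In the sketch below, both the quadratic mass function $M^2(\phi)$ and the test function $f$ are arbitrary invented choices (not taken from any string spectrum); the check is purely of the algebraic identity itself:

```python
import math

# Finite-difference check of the identity
#   d^2/dphi^2 f(M^2) = (d^2 M^2/dphi^2) f'(M^2) + (d M^2/dphi)^2 f''(M^2)
# for an arbitrary smooth test function f and a made-up mass function M^2(phi).

m0, b, c = 2.0, 0.7, 0.3           # hypothetical Taylor coefficients of M^2(phi)

def M2(phi):                       # M^2(phi) = m0 + b*phi + c*phi^2
    return m0 + b * phi + c * phi**2

def f(u):                          # arbitrary smooth test function of u = M^2
    return u**2 * math.log(1.0 + u)

def fp(u):                         # analytic f'(u)
    return 2 * u * math.log(1.0 + u) + u**2 / (1.0 + u)

def fpp(u):                        # analytic f''(u)
    return (2 * math.log(1.0 + u) + 2 * u / (1.0 + u)
            + (u**2 + 2 * u) / (1.0 + u)**2)

phi0, h = 0.3, 1e-3
g = lambda p: f(M2(p))
lhs = (g(phi0 + h) - 2 * g(phi0) + g(phi0 - h)) / h**2   # numerical d^2/dphi^2
u0 = M2(phi0)
rhs = 2 * c * fp(u0) + (b + 2 * c * phi0)**2 * fpp(u0)   # chain-rule identity
```

Here `2*c` and `b + 2*c*phi0` play the roles of $\partial_\phi^2 M^2$ and $\partial_\phi M^2$; the two sides agree to the accuracy of the central-difference stencil.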
\n\nThis situation is analogous to the behavior \nof the traditional Coleman-Weinberg potential $V(\\varphi_c)$ as originally given in Refs.~{\\mbox{\\cite{Coleman:1973jx,Weinbergpost}}}.\nIn that case, it was shown that $V(\\varphi_c)$ contains a term\nscaling as \n\\begin{equation}\n V(\\varphi_c) ~\\sim~ \\varphi_c^4 \\,\\log \\varphi_c^2\n\\end{equation}\nwhere $\\varphi_c$ are the fluctuations of the classical Higgs field around its VEV\nand where one has assumed a $U(1)$-charged scalar field subject to a $\\lambda \\phi^4$ interaction.\nThe Higgs mass (which goes as the second derivative $\\partial^2 V\/\\partial \\varphi_c^2$)\ntherefore remains finite even as {\\mbox{$\\varphi_c\\to 0$}}, whereas the {\\it fourth}\\\/ derivative $\\partial^4 V\/\\partial \\varphi_c^4$\nactually has a logarithmic singularity as {\\mbox{$\\varphi_c\\to 0$}}.\nIndeed, this fourth derivative describes the behavior of the coupling $\\lambda$.\nThe cure for this disease, as suggested in Refs.~{\\mbox{\\cite{Coleman:1973jx,Weinbergpost}}}, is to\nmove away from the {\\mbox{$\\varphi_c=0$}} origin, and instead define the coupling $\\lambda$ at this shifted point.\n\n\nOf course, in our more general string context, we see that our potential scales like $M^4 \\log M$. Moreover, within the ${\\mathbb{X}}_2$ term,\nit is not the fourth derivative with respect to $M$ which leads to difficulties --- rather, it is the {\\it second}\\\/ derivative\nwith respect to $M^2$. 
As a consequence, this logarithmic divergence shows up in the Higgs mass rather than in a four-point coupling.\nThat said, it is possible that the cure for this disease may be similar to that discussed in \nRefs.~{\\mbox{\\cite{Coleman:1973jx,Weinbergpost}}}.\nIn particular, this suggests that in string theories for which {\\mbox{$\\zStr {\\mathbb{X}}_2\\not=0$}}, a cure both for our logarithmically divergent\nHiggs mass and for the fact that\nthe radiative potential is not twice-differentiable there \nmay similarly be found by avoiding the sharp {\\mbox{$\\phi=0$}} truncation that originally appears in Eq.~(\\ref{higgsdef}),\nand by instead deforming our theory away from the {\\mbox{$\\phi=0$}} origin in $\\phi$-space.\n\nFinally, given that we are now equipped with our effective Higgs potential $\\widehat \\Lambda(\\mu,\\phi)$, \nwe can revisit our classical stability condition, as originally discussed in Sect.~\\ref{stability}.~\nIn general, our theory will be sitting at an extremum of the potential as long as \n\\begin{equation}\n \\partial_\\phi \\widehat\\Lambda(\\mu,\\phi)\\Bigl|_{\\phi=0} ~=~0~.\n\\label{stabcond}\n\\end{equation}\nThis, then, is a supplementary condition that we have been implicitly assuming to be satisfied within our analysis.\nNote that \n\\begin{eqnarray}\n \\partial_\\phi \\widehat\\Lambda(\\mu,\\phi)\\Bigl|_{\\phi=0} \n ~&=&~ \\left.
(\\partial_\\phi M^2) \\frac{\\partial \\widehat \\Lambda(\\mu)}{\\partial M^2} \\right|_{\\phi=0}\\nonumber\\\\ \n ~&=&~ 4\\pi {\\cal M} \\,{\\rm Str} \\,{\\mathbb{Y}} \\left.\\frac{\\partial \\widehat \\Lambda(\\mu)}{\\partial M^2} \\right|_{\\phi=0} ~\n\\label{connecttoY}\n\\end{eqnarray}\nwhere\n\\begin{equation}\n {\\mathbb{Y}} ~\\equiv~ \\frac{1}{ 4\\pi{\\cal M}} \\,(\\partial_\\phi M^2)\\Bigl|_{\\phi=0}~.\n\\label{Ydef2}\n\\end{equation}\nIndeed, for the case of theories with an underlying charge lattice, we have {\\mbox{${\\mathbb{Y}} = \\tau_2^{-1} {\\cal Y}$}}\nwhere ${\\cal Y}$ is given in Eq.~(\\ref{Ydef}).\nThus the stability of the theory (and the possible existence of a destabilizing $\\phi$-tadpole) is closely \nrelated to the values of ${\\mathbb{Y}}$ across the string spectrum, as already anticipated in Sect.~\\ref{stability}.~\nSubstituting the exact expression in Eq.~(\\ref{lambdamuresult})\ninto Eq.~(\\ref{connecttoY}),\nwe find\n\\begin{eqnarray}\n && \\partial_\\phi \\widehat\\Lambda(\\mu,\\phi)\\Bigl|_{\\phi=0} \n \\,=~ \\frac{{\\cal M}^3}{1+\\mu^2\/M_s^2} \n \\,{\\rm Str} \\,{\\mathbb{Y}} \\,\\Biggl\\lbrace \n \\frac{\\pi}{6}\n + \\frac{1}{2\\pi} \\left(\\frac{M}{{\\cal M}}\\right)^2 \\times \\nonumber\\\\ \n && ~~~~~~~~\\times \n \\left\\lbrack \n {\\cal K}_0^{(0,1)}\\!\\left( \\frac{2\\sqrt{2}\\pi M}{\\mu} \\right) + \n {\\cal K}_2^{(0,1)}\\!\\left( \\frac{2\\sqrt{2}\\pi M}{\\mu} \\right) \n \\right\\rbrack \\Biggr\\rbrace \\nonumber\\\\ \n && ~~~=~ \\frac{{\\cal M}^3}{1+\\mu^2\/M_s^2} ~\\Biggl\\lbrace\n \\zStr {\\mathbb{Y}} \\left\\lbrack \\frac{\\pi}{6}\\left(1+\\mu^2\/M_s^2\\right) \\right\\rbrack \\nonumber\\\\\n && ~~~~~~~~+ \\pStr {\\mathbb{Y}} \\,\\Biggl\\lbrace \n \\frac{\\pi}{6}\n + \\frac{1}{2\\pi} \\left(\\frac{M}{{\\cal M}}\\right)^2 \\times \\nonumber\\\\ \n && ~~~~~~~~\\times \n \\left\\lbrack \n {\\cal K}_0^{(0,1)}\\!\\left( \\frac{2\\sqrt{2}\\pi M}{\\mu} \\right) + \n {\\cal K}_2^{(0,1)}\\!\\left( \\frac{2\\sqrt{2}\\pi 
M}{\\mu} \\right) \n \\right\\rbrack \\Biggr\\rbrace \\Biggr\\rbrace~ \\nonumber\\\\ \n\\label{explicitBessel}\n\\end{eqnarray}\nwhere in passing to the second expression we have explicitly separated the\ncontributions from the massless and massive string states, \nand where \n${\\cal K}_\\nu^{(n,p)} (z)$ continue to denote\nthe combinations of Bessel functions in Eq.~(\\ref{Besselcombos}).\n\nInterestingly (but not unexpectedly), the terms multiplying ${\\rm Str}\\,{\\mathbb{Y}}$ \nin Eq.~(\\ref{explicitBessel}) are the same as the terms multiplying ${\\rm Str}\\,{\\mathbb{X}}_1$ \nin Eq.~(\\ref{finalhiggsmassmu}).\nEquivalently, we can view the quantity in Eq.~(\\ref{explicitBessel}) as the coefficient\nof the tadpole term (linear in $\\phi$) within the effective potential\n$\\widehat\\Lambda(\\mu,\\phi)$ in Eq.~(\\ref{lambdamuresult}) when the masses \nare Taylor-expanded as in Eq.~(\\ref{TaylorM}).\n\nThere are several ways in which the expressions in Eq.~(\\ref{explicitBessel})\nmight vanish for all $\\mu$, as required for a stable vacuum.\nIn principle, for a given value of $\\mu$, there might exist a spectrum of states\nwith particular masses $M$ such that the contributions from the Bessel and\nnon-Bessel terms together happen to cancel when tallied across the spectrum. \nAny continuous change in the \nvalue of $\\mu$ might then induce a corresponding continuous change in the \nspectrum such that this cancellation is maintained.
This is not unlike what\nhappens in the traditional field-theoretic Coleman-Weinberg potential, where changing the\nscale $\\mu$ can change the vacuum state and the spectrum of excitations built upon it.\nOf course in the present case we are \nworking within the context of string theory rather than field theory.\nAs such, we are dealing with an infinite tower of \nstring states and simultaneously maintaining modular invariance as the\nspectrum is deformed.\n\nAnother possibility is to simply demand stability in the deep infrared region,\nas {\\mbox{$\\mu\\to 0$}}. From Eq.~(\\ref{explicitBessel}) \nwe see that this would then require simply that \n\\begin{equation}\n {\\rm Str} \\,{\\mathbb{Y}} ~=~0~\n\\label{fullsutrace}\n\\end{equation}\nwhere the supertrace is over all string states, both massless and massive.\n\nA final possibility is to guarantee stability for every value of $\\mu$ by demanding\nthe somewhat stronger condition\n\\begin{equation}\n ~~~~~{\\rm Str} \\,{\\mathbb{Y}} \\,=\\,0~ ~~{\\rm for~each~mass~level~individually}.~~\n\\label{bestcondition}\n\\end{equation}\nOf course, Eq.~(\\ref{bestcondition}) implies Eq.~(\\ref{fullsutrace}), but \nthe fact that ${\\rm Str}\\,{\\mathbb{Y}}$ vanishes for each mass level {\\it individually}\\\/ ensures\nthat stability no longer rests on any $\\mu$-dependent cancellations involving the\nBessel functions.\n\nComparing Eq.~(\\ref{Ydef2}) with Eq.~(\\ref{newXi}), we see that {\\mbox{${\\mathbb{X}}_2 = {\\mathbb{Y}}^2$}}.\nHowever,\nas discussed below Eq.~(\\ref{Ydef}),\nconstraints on ${\\mathbb{Y}}$ do not necessarily become constraints on ${\\mathbb{X}}_2$, even if the values of\n${\\mathbb{Y}}$ happen to cancel pairwise amongst degenerate states across the string spectrum\n[which would guarantee Eq.~(\\ref{bestcondition})]. \nThus the requirement of stability does not necessarily lead \nto any immediate constraints on the supertraces of ${\\mathbb{X}}_2$. 
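These statements are simple to illustrate with a toy spectrum. In the sketch below the mass levels, statistics signs, and ${\mathbb{Y}}$-charges are entirely invented; the structural point is that pairwise-cancelling ${\mathbb{Y}}$ values enforce ${\rm Str}\,{\mathbb{Y}}=0$ level-by-level, as in Eq.~(\ref{bestcondition}), while leaving ${\rm Str}\,{\mathbb{X}}_2={\rm Str}\,{\mathbb{Y}}^2$ nonzero:

```python
# Toy spectrum illustrating the stability conditions above.  All mass levels,
# statistics signs, and Y-charges below are invented placeholder values; the
# structural point is that Y values cancelling pairwise among degenerate states
# force Str Y = 0 level-by-level, while Str X_2 = Str Y^2 need not vanish.

# each state: (mass_level, sign, Y) with sign = +1 for bosons, -1 for fermions
spectrum = [
    (0, +1, +0.5), (0, +1, -0.5),    # massless bosonic pair with opposite Y
    (1, +1, +1.0), (1, +1, -1.0),    # bosonic pair at mass level 1
    (1, -1, +0.25), (1, -1, -0.25),  # fermionic pair at the same level
]

def str_per_level(states, weight):
    """Supertrace of weight(Y), tallied separately at each mass level."""
    out = {}
    for level, sign, Y in states:
        out[level] = out.get(level, 0.0) + sign * weight(Y)
    return out

strY = str_per_level(spectrum, lambda Y: Y)        # vanishes level-by-level
strX2 = str_per_level(spectrum, lambda Y: Y**2)    # X_2 = Y^2: does not vanish
```

Since each $\pm{\mathbb{Y}}$ pair contributes equal values of ${\mathbb{Y}}^2$, the cancellation that guarantees stability says nothing about the supertraces of ${\mathbb{X}}_2$, exactly as noted above.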
\n\n\n\n \n\n\nIn summary, then, we have shown that for theories with {\\mbox{$\\zStr\\,{\\mathbb{X}}_2=0$}} there exists an effective Higgs potential\n$\\widehat\\Lambda(\\mu,\\phi)$ from which the Higgs mass can be obtained through the modular-covariant double-derivative $D^2_\\phi$,\nas in Eq.~(\\ref{finalCWresult}).\nThis effective potential is given exactly in Eq.~(\\ref{lambdamuresult}), with\nthe leading terms given in Eq.~(\\ref{Lambdafull}).\nBy contrast, for theories with {\\mbox{$\\zStr \\,{\\mathbb{X}}_2\\not=0$}} we have found\nthat the effective potential $\\widehat \\Lambda(\\mu,\\phi)$ \npicks up an additional contribution whose second \nderivative is discontinuous at {\\mbox{$\\phi=0$}}.\nIn this sense, the Higgs mass is not well defined at {\\mbox{$\\phi=0$}}.\nOf course, one option is to retain the expression obtained in Eq.~(\\ref{finalhiggsmassmu});\nthis expression is not the second derivative of $\\widehat\\Lambda(\\mu,\\phi)$ when {\\mbox{$\\zStr\\,{\\mathbb{X}}_2\\not =0$}},\nbut it is indeed finite except as {\\mbox{$\\mu\\to 0$}}.\nAn alternative option is to define our Higgs mass away from the {\\mbox{$\\phi=0$}} origin.\nEither way, these features exactly mirror those seen within the traditional\nColeman-Weinberg potential. \n\n\n\n\n\n\n\n\n\\section{Pulling it all together: Discussion, top-down perspectives, and future directions \\label{sec:Conclusions}}\n\nA central question when analyzing any string theory is to understand the properties of \nits ubiquitous scalars --- its Higgs fields, its moduli fields, its axions, and so forth. \nTo a great extent the behavior of a scalar is dominated by its \nmass, and in this paper we have developed a completely general \nframework for understanding the masses of such scalars at one-loop order in\nclosed string theories.\nOur framework can be applied at all energy scales, is independent of any supersymmetry, and \nmaintains worldsheet modular invariance and hence finiteness at all times. 
\nMoreover, our framework is entirely string-based and does not rely on establishing any \nparticular low-energy effective field theory. Indeed, the notion of an effective \nfield theory at a given energy scale ends up being an {\it output}\\\/ of our analysis,\nand we have outlined the specific conditions and approximations under which such an EFT emerges\nfrom an otherwise completely string-theoretic calculation.\n\nBeyond the crucial role played by the scalar mass, another \nmotivation for studying this quantity is its special status as \nthe ``canary in the coal mine'' for UV completion. \nThe scalar mass term is virtually the only operator that \nis both highly UV-sensitive and also IR-divergent when coupled to massless states. \nThus, once we understand this operator, we understand much of the entire structure of the theory. \n\nWe can appreciate the special status of this operator if we think about a typical EFT.~\nWithin such an EFT, the familiar result for the one-loop contributions to the Higgs mass \ntakes the general form \n\\begin{equation}\nm_{\\phi}^{2}~=~\\frac{M_{\\text{UV}}^{2}}{32\\pi^{2}}\\,\\eftStr\\,\\partial_{\\phi}^{2}M^{2}\\,-\\eftStr\\,\\partial_{\\phi}^{2}\\left[\\frac{M^{4}}{64\\pi^{2}}\\log\\left(c\\frac{M^{2}}{M_{\\text{UV}}^{2}}\\right)\\right]~\n\\label{eq:CW}\n\\end{equation}\nwhere $M_{\\rm UV}$ is an ultraviolet cutoff, where $c$ is a constant,\nand where $\\eftStr$ denotes a supertrace over the states in the effective theory.\nThis expression has both\na quadratic UV divergence, which we would normally subtract by a counter-term, and a logarithmic\ncutoff dependence which would normally be indicative of RG running. Thus any UV-completion such as string theory\nhas to resolve two issues within this \nexpression at once: not only must it make the quadratic term finite, but it must also\nbe able to give us specific information about the running. 
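As a purely illustrative numerical sketch of the structure of Eq.~(\\ref{eq:CW}), the following snippet evaluates both terms for an invented two-state toy spectrum at a symmetric point where {\\mbox{$\\partial_\\phi M^2 = 0$}}. The spectrum, couplings, and all numerical values here are our own inventions and carry no string-theoretic significance.

```python
import math

# Toy sketch of Eq. (eq:CW).  The spectrum below is invented purely for
# illustration; it is not drawn from any actual string model.
# Each entry: (signed degeneracy (-1)^F, M^2 at phi=0, d^2 M^2/dphi^2 at phi=0)
STATES = [(+1, 0.25, 1.0),   # a hypothetical boson
          (-2, 1.00, 0.4)]   # two hypothetical fermions
C = 1.0                      # stand-in for the scheme-dependent constant c

def quadratic_term(m_uv_sq):
    """First term of Eq. (eq:CW): grows linearly with the cutoff M_UV^2."""
    return m_uv_sq / (32 * math.pi**2) * sum(s * d2 for s, _, d2 in STATES)

def log_term(m_uv_sq):
    """Second term of Eq. (eq:CW) at a symmetric point (dM^2/dphi = 0),
    where d^2/dphi^2 [M^4 log(c M^2/M_UV^2)] reduces to
    M^2 (d^2 M^2/dphi^2) [2 log(c M^2/M_UV^2) + 1]."""
    return -sum(s * m2 * d2 * (2 * math.log(C * m2 / m_uv_sq) + 1)
                for s, m2, d2 in STATES) / (64 * math.pi**2)

def higgs_mass_sq(m_uv_sq):
    return quadratic_term(m_uv_sq) + log_term(m_uv_sq)
```

Varying the cutoff shifts the first term quadratically and the second only logarithmically; a UV completion must therefore both render the quadratic piece finite and fix the endpoint of the logarithmic running.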
In particular, \nto what value does the Higgs mass actually run in the IR?~\nSuch information is critical in order to nail down the logarithmic running, anchoring \nit firmly as a function of scale. \n\nPrior to our work, such questions remained unanswered.\nIn retrospect, one clue could already be found in the earlier work of Ref.~\\cite{Dienes:1995pm}, which in turn\n rested on previous results in Ref.~\\cite{Kutasov:1990sv}.\nIn Ref.~\\cite{Dienes:1995pm}, it was shown that\nthe one-loop cosmological constant $\\Lambda$ \nfor any non-tachyonic closed string \ncan be expressed as a supertrace over\nthe entire infinite spectrum of level-matched physical string states:\n\\begin{equation}\n \\Lambda~=~ \\frac{1}{24}\\,{\\cal M}^2\\, {\\rm Str}\\, M^2~.\n\\label{eq:lam-rep}\n\\end{equation}\nThis result, which we have rederived in Eq.~(\\ref{eq:lamlam}),\nimmediately suggests two things.\nThe first is that it might be possible to derive an analogous spectral supertrace formula\nfor the one-loop Higgs mass within such strings which, like that in Eq.~(\\ref{eq:lam-rep}), depends on only the physical states in the theory.\nThe second, stemming from a comparison between the result in Eq.~(\\ref{eq:lam-rep}) and \nthe first term in Eq.~(\\ref{eq:CW}), is that there might exist a possible\nderivative-based connection between the one-loop Higgs mass and the one-loop cosmological constant.\n\n \nIn this study, we have addressed all of these issues.\nIndeed, one of the central results of our study is an\nequivalent spectral supertrace formula for the one-loop Higgs mass.\nLike the calculation of the cosmological constant,\nour calculation for the Higgs mass relies on nothing more than worldsheet modular invariance ---\nan exact symmetry which maintains string finiteness and is preserved, even today.\nAnother of our central results is a deep connection between \nthe Higgs mass and the cosmological constant.\nHowever, we also found that unlike the cosmological constant, the 
Higgs mass may\nactually have a leading logarithmic divergence.\nIndeed, this issue depends on the particular string model under study,\nand in particular on the presence of massless states carrying specific charges.\nAs a result of this possible divergence,\nand as a result of the extreme sensitivity of the Higgs mass to physics at all scales, arriving at a fully consistent treatment of the Higgs mass \nrequired us to broach several delicate issues. \nThese encompassed varied aspects of regularization and renormalization and touched \non the very legitimacy of extracting an effective field theory from a UV\/IR-mixed theory. \nThe scope of our study was therefore quite broad, with a number of \nimportant insights and techniques developed along the way.\n\n\nOur first step was to understand how the Higgs and similar scalars\nreside within a typical modular-invariant string theory. \nIn particular, for closed string theories with charge lattices, \nwe began by examining the manner in which fluctuations\nof the Higgs field deform these charge lattices, all the while bearing in mind that these\ndeformations must preserve modular invariance. 
\nWe were then able to express the contributions to the Higgs mass in terms of \none-loop modular integrals with specific charge insertions ${\\cal X}_i$ incorporated into the\nstring partition-function traces.\nHowever, we found that these insertions have an immediate consequence, producing\na modular anomaly which then requires us to \nperform a ``modular completion'' of the theory.\n This inevitably introduces an additional term into the Higgs mass, one which \nis directly related to the one-loop cosmological constant.\nOur derivation of this term rests solely on considerations of modular invariance and thereby\nendows this result with a generality \nthat holds across the space of perturbative closed-string models.\nIn this way we arrived at one of the central conclusions of our work, namely the existence\nof a universal relation between scalar masses and the cosmological constant in any tachyon-free closed string theory.\nThis relation is given in Eq.~(\\ref{relation1}) for four-dimensional theories,\nand in Eq.~(\\ref{relation1b}) for theories in arbitrary spacetime dimensions $D$. \nStemming only from modular invariance, this result is exact and holds regardless of \nother dynamics that the theory may experience.\n\nHaving established the generic structure of one-loop contributions to the Higgs mass,\nwe then pushed our calculation one step further with the aim of expressing our \nresult for the Higgs mass as a supertrace over the purely physical level-matched spectrum of the theory. \nIndeed, we demonstrated that the requirements of modular covariance so deeply constrain \nthe contributions to the Higgs mass from the unphysical states that these latter\ncontributions can be expressed in terms of contributions from the physical states\nalone. However, part of this calculation required dealing with the \nlogarithmic divergences which can arise.\nThis in turn required that we somehow {\\it regularize}\\\/ the Higgs mass. 
\n\nFor this reason, we devoted a large portion of our study to establishing\na general formalism for regulating quantities such as the Higgs mass that\nemerge in string theory. We initially considered two forms of \nwhat could be called ``standard'' regulators.\nThe ``minimal'' regulator is essentially a subtraction of the contributions\nof the massless states. We referred to this as a minimal regulator because\nit does not introduce any additional parameters into the theory. Thus, for\nany divergent quantity, there is a single corresponding regulated quantity.\nWe also discussed what we referred to as a ``non-minimal'' regulator,\nbased on a regularization originally introduced in the mathematics\nliterature~\\cite{zag}. \nThis regulator introduces a new dimensionless parameter $t$, so that for any divergent quantity\nthere exists a set of corresponding regularized quantities parametrized by $t$, with the limit\n{\\mbox{$t\\to \\infty$}} corresponding to the removal of the regulator and the restoration of the original unregulated quantity.\nThis regulator is essentially the one used in Ref.~\\cite{Kaplunovsky:1987rp} and later in Ref.~\\cite{Dienes:1996zr}.\n\nAs we have explained in Sect.~\\ref{sec3}, both of these regulators yield finite quantities which can be expressed in terms of supertraces over only those string states which are physical ({\\it i.e.}\\\/, level-matched). Indeed, in each of these cases,\nthe relation between the regulated quantities and the corresponding supertraces respects modular invariance. Thus, the regulated quantity and the corresponding supertrace in each case transform identically under modular transformations. However, for both the minimal and non-minimal regulators, neither the regulated quantity nor the corresponding supertrace expression is modular invariant on its own. 
While this additional criterion \nwas not important for the purposes that led to the original development of these regulators in the mathematics literature,\nthis criterion is critical for us because we now wish these regulated quantities to correspond to physical observables (such as our regulated Higgs mass). Each of these regulated quantities must therefore be modular invariant on its own. \n\nWe therefore presented a third set of regulators --- those based on the functions $\\widehat{\\cal G}_\\rho(a,\\tau)$.\nThese are our modular-invariant regulators, and they depend on two free parameters $(\\rho,a)$.\nUnlike the minimal and non-minimal regulators, \nthese regulators do not operate by performing a sharp, brute-force subtraction of particular contributions \nwithin the integrals associated with one-loop string amplitudes.\nInstead, we simply multiply the integrand of any one-loop string amplitude by the regulator function $\\widehat {\\cal G}_\\rho(a,\\tau)$. These functions have two important properties which \nmake them suitable as regulators when {\\mbox{$a\\ll 1$}}.\nFirst, as {\\mbox{$a\\to 0$}}, we find that {\\mbox{$\\widehat{\\cal G}_\\rho(a,\\tau)\\to 1$}} for all $\\tau$.\nThus the {\\mbox{$a\\to 0$}} limit restores our original unregulated theory.\nSecond, {\\mbox{$\\widehat{\\cal G}_\\rho(a,\\tau)\\to 0$}} exponentially quickly as {\\mbox{$\\tau\\to i\\infty$}} for all {\\mbox{$a>0$}}. 
\nThese functions thereby suppress all relevant divergences\nwhich might appear in this limit.\nBut most importantly for our purposes, $\\widehat {\\cal G}_\\rho(a,\\tau)$ is completely modular invariant.\nIn particular, this function is completely smooth, with no sharp transitions in its behavior.\nAs a result, multiplying the integrand of any one-loop string amplitude by $\\widehat{\\cal G}_\\rho(a,\\tau)$ \ndoes not simply excise certain problematic contributions within the corresponding string amplitude, \nbut rather provides a smooth, modular-invariant way of deforming (and thereby regulating) the entire theory.\nThis function even has a physical interpretation in the {\\mbox{$\\rho=2$}} special case, arising as \nthe result of the geometric deformations discussed in Refs.~\\cite{Kiritsis:1994ta, Kiritsis:1996dn, Kiritsis:1998en}. \n\n\nArmed with this regulator, we then demonstrated that \nour regulated Higgs mass can be expressed as the \nsupertrace over only the physical string states.\nOur result for $\\widehat m_\\phi^2(\\rho,a)$ is given in Eq.~(\\ref{twocontributions}),\nwhere $\\widehat m_\\phi^2(\\rho,a)\\bigl|_{\\cal X}$ is given in \nEq.~(\\ref{finalhiggsmassa})\nand where $\\widehat \\Lambda(\\rho,a)$ is given in \nEq.~(\\ref{lambdaresult}).\nWe stress that this is the exact string-theoretic result for the\nregulated Higgs mass, expressed as a function of the regulator parameters $(\\rho,a)$.\nMoreover $\\widehat \\Lambda(\\rho,a)$ by itself is the corresponding regulated\ncosmological constant. 
As discussed in the text, \nthe one-loop cosmological constant $\\Lambda$ requires regularization in this context\neven though it is already finite in all tachyon-free closed string theories.\n\nWe originally derived these results under the assumption that our underlying\nstring theory could be formulated with an associated charge lattice.\nThis assumption gave our calculations a certain concreteness,\nallowing us to see exactly which states \nwith which kinds of charges ultimately contribute to the Higgs mass.\nHowever, we then proceeded to demonstrate that many of our results are actually more\ngeneral than this, and do not require a charge lattice at all.\nThis lattice-free reformulation also had an added benefit, allowing us to demonstrate a second\ndeep connection between the Higgs mass and the cosmological constant beyond that in Eq.~(\\ref{relation1}). \nIn particular, we were able to demonstrate that each of these quantities can be expressed\nin terms of a common underlying quantity $\\widehat \\Lambda(\\rho,a,\\phi)$ via\nrelations of the form\n\\begin{equation}\n \\begin{cases}\n ~\\widehat \\Lambda(\\rho,a) &=~ \\widehat \\Lambda(\\rho,a,\\phi)\\bigl|_{\\phi=0} \\\\\n ~\\widehat m_\\phi^2 (\\rho,a) &=~ D_\\phi^2 \\,\\widehat \\Lambda(\\rho,a,\\phi)\\bigl|_{\\phi=0}\n \\end{cases}\n\\label{effpotl}\n\\end{equation}\nwhere $D_\\phi^2$ is the modular-covariant second $\\phi$-derivative given in Eq.~(\\ref{Dphi2}). 
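The structure of Eq.~(\\ref{effpotl}) can be caricatured in a few lines of code: a vacuum energy read off at {\\mbox{$\\phi=0$}}, and a scalar mass read off as a second $\\phi$-derivative there. We stress that this toy uses the ordinary derivative $\\partial_\\phi^2$ and an invented potential, whereas the string-theoretic result involves the modular-covariant derivative $D_\\phi^2$ of Eq.~(\\ref{Dphi2}), complete with its anomaly term, which is not modeled here.

```python
def d2_at_zero(V, h=1e-4):
    """Central finite-difference estimate of V''(0)."""
    return (V(h) - 2.0 * V(0.0) + V(-h)) / (h * h)

# An invented toy potential: a constant piece (the "cosmological constant")
# plus quadratic and quartic pieces; all coefficients are arbitrary choices.
LAM0, M2 = 0.30, 1.44
V_toy = lambda phi: LAM0 + 0.5 * M2 * phi**2 + 0.1 * phi**4

vacuum_energy = V_toy(0.0)    # analogue of the phi = 0 truncation
mass_sq = d2_at_zero(V_toy)   # analogue of the double phi-derivative
```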
\nThese relations allow us to interpret\n$\\widehat\\Lambda(\\rho,a,\\phi)$ as a stringy effective potential for the Higgs.\nIndeed, these relations are ultimately the fulfillment of our original suspicion\nthat the Higgs mass might be related to the cosmological constant through a double $\\phi$-derivative,\nas discussed below Eq.~(\\ref{eq:lam-rep}).\nHowever, we now see from Eq.~(\\ref{Dphi2}) that this is not just an ordinary $\\phi$-derivative $\\partial_\\phi^2$, but\nrather a {\\it modular-covariant}\\\/ derivative, complete with anomaly term. \nThe second relation in Eq.~(\\ref{effpotl}) thereby {\\it subsumes}\\\/\nour original relation between the Higgs mass and the cosmological constant,\nas expressed in Eq.~(\\ref{relation1}). Moreover, we see that \nthe regulated\ncosmological constant $\\widehat\\Lambda(\\rho,a)$ is nothing but the {\\mbox{$\\phi=0$}} truncation of \nthe same effective potential $\\widehat\\Lambda(\\rho,a,\\phi)$.\nIn this way, $\\widehat \\Lambda(\\rho,a,\\phi)$ emerges as the central object \nfrom which our other relevant quantities can be obtained.\n\nAt no step in the derivation of these results was modular invariance broken.\nThus all of these results are completely consistent with modular invariance, as required.\nMoreover, expressed as functions of the worldsheet regulator parameters $(\\rho,a)$, all of our\nquantities are purely string-theoretic and there are no ambiguities in their definitions.\n\nOur next goal was to interpret these regulated quantities in terms of a physical cutoff scale $\\mu$.\nOf course, if we had been working within a field-theoretic context, \nall of our regulator parameters would have had direct spacetime interpretations in terms of a spacetime scale $\\mu$.\nAs a result, varying the values of these regulator parameters would have \nled us directly to a renormalization-group flow with an associated RGE.~\nString theories, by contrast, are formulated \nnot in spacetime but on the worldsheet --- for 
such strings, spacetime is nothing but a derived quantity.\nAs a result, although we were able to express our regulated quantities as functions of the two regulator parameters $(\\rho,a)$,\nthe only way to extract an EFT description from these otherwise complete string-theoretic\nexpressions was to develop\na mapping between the worldsheet parameters $(\\rho,a)$ and a physical spacetime scale $\\mu$.\n\nAs we have seen, this issue of connecting $(\\rho,a)$ to $\\mu$ is surprisingly subtle,\nand {\\it it is at this step that we must make certain choices that break modular invariance.}\\\/\nWe already discussed some of the issues surrounding IR\/UV equivalence in Sect.~\\ref{UVIRequivalence} ---\nindeed, these issues already suggested that the passage to an EFT would be highly non-trivial\nand involve the breaking of modular invariance.\nBut now, with our complete results in hand, we can take a bird's-eye view and finally map out the full structure of the problem.\n\n\n\\begin{figure*}[t!]\n\\centering\n\\includegraphics[keepaspectratio, width=0.95\\textwidth]{mappings.pdf}\n\\caption{\nThe scale structure of physical quantities in a modular-invariant string theory.\nFor the modular-invariant regulator function \n$\\widehat {\\cal G}_\\rho(a,\\tau_2)$ discussed in this paper, \nthe mapping between the regulator parameter $\\rho a^2$ and the physical spacetime scale $\\mu$\nhas two distinct branches.\nThe traditional branch (shown in blue) identifies {\\mbox{$\\mu^2\/M_s^2 = \\rho a^2$}}, \nbut modular invariance implies \nthe existence of an invariance under the scale-inversion duality symmetry \n{\\mbox{$\\mu\\to M_s^2\/\\mu$}}. 
This in turn implies \nthe existence of a second branch \n(shown in green) on which we can alternatively identify\n{\\mbox{$\\mu^2\/M_s^2= 1\/(\\rho a^2)$}}.\nAlthough $\\widehat {\\cal G}_\\rho(a,\\tau_2)$ functions as a regulator for {\\mbox{$\\rho a^2<1$}},\nits symmetry under {\\mbox{$\\rho a^2 \\to 1\/(\\rho a^2)$}} \nimplies that this function also acts as a regulator when \nextended into the {\\mbox{$\\rho a^2>1$}} region.\nThis then allows us to see the full four-fold modular structure of the theory.\nThe Higgs-mass plot shown in Fig.~\\ref{anatomy} can now be understood as\nfollowing \nthe {\\mbox{$\\mu^2\/M_s^2 = \\rho a^2$}} branch from the lower-left corner of this figure \ninward toward the central location at which {\\mbox{$\\mu=M_s$}},\nafter which it then follows the \n{\\mbox{$\\mu^2\/M_s^2 = 1\/(\\rho a^2)$}} branch outward towards the upper-left corner.\nHowever, in a modular-invariant theory, all four quadrants of this figure are equivalent and describe the same physics.\nLikewise, in such theories there is no distinction between IR and UV.~\nThus one can exchange ``IR'' $\\leftrightarrow$ ``UV'' within all labels of this sketch,\nand we have simply chosen to show \nthose labels\nthat have the most natural interpretations\nwithin the lower-left quadrant. \nFinally, \nregions with beige shading indicate locations where EFT descriptions exist, \nwhile stringy effects dominate in the yellow central region.\nAs a result, focusing on any one of the four EFT regions by itself\nnecessarily breaks modular invariance\nbecause the choice of EFT region is tantamount to picking a preferred direction for the flow of the scale $\\mu$ relative to \nthe underlying string-theoretic regulator parameter $\\rho a^2$. 
\nHowever, even within the EFT regions, string states {\\it at all mass scales}\\\/ contribute non-trivially.\nThus even these EFT regions differ from what might be expected within quantum field theory.} \n\\label{mappings_figure}\n\\end{figure*}\n\n\nOur understanding of this issue is summarized in Fig.~\\ref{mappings_figure}. \nAlthough the specific situation sketched in Fig.~\\ref{mappings_figure} corresponds \nto our modular-invariant regulator function $\\widehat{\\cal G}_\\rho(a,\\tau)$,\nthe structure of this diagram is general. \nUltimately, the connection between worldsheet physics and spacetime physics\nfollows from the one-loop partition function, \nwhich for physical string states of spacetime masses $M_i$ takes the general form \n\\begin{equation}\n {\\cal Z}\\\/ ~\\sim~ \\tau_2^{-1} \\, \\sum_i \\,e^{- \\pi \\alpha' M_i^2 \\tau_2}~.\n\\label{pfZ}\n\\end{equation}\nThese $M_i$ are precisely the masses \nwhich ultimately appear in our physical supertrace formulas.\nHowever, our regulator function $\\widehat {\\cal G}_\\rho(a,\\tau)$ \ncannot regulate the divergences that might arise from light and\/or massless states \nas {\\mbox{$\\tau\\to i\\infty$}} unless it suppresses contributions\nto the partition function within the region {\\mbox{$\\tau_2\\gsim \\tau_2^\\ast$}} for some $\\tau_2^\\ast$. 
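In other words, once the {\\mbox{$\\tau_2\\gsim \\tau_2^\\ast$}} region is suppressed, the exponential in Eq.~(\\ref{pfZ}) is what separates the light states from the heavy ones:\n\\begin{equation}\n e^{-\\pi \\alpha' M_i^2 \\tau_2^\\ast} ~\\sim~ {\\cal O}(1)\n \\quad \\Longleftrightarrow \\quad\n M_i^2 ~\\lsim~ \\frac{1}{\\alpha' \\tau_2^\\ast}~.\n\\end{equation}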
\nWe thus see that whether a given state contributes significantly to one-loop amplitudes\nin the presence of the regulator\ndepends on the magnitude of $\\alpha' M_i^2 \\tau_2^\\ast$.\nThis immediately leads us to identify a corresponding spacetime physical RG scale \n{\\mbox{$\\mu^2\\sim 1\/(\\alpha' \\tau^\\ast_2)$}}.\nIndeed, this was precisely the logic that originally led us to Eq.~(\\ref{mut}).\nMoreover, for our specific regulator function $\\widehat{\\cal G}_\\rho(a,\\tau)$, we have \n{\\mbox{$\\tau_2^\\ast \\sim 1\/(\\rho a^2)$}},\nthus leading to the natural identification {\\mbox{$\\mu^2\/M_s^2=\\rho a^2$}}.\n\nHowever, modular invariance does not permit us to \nidentify just one special\npoint {\\mbox{$\\tau_2= \\tau_2^\\ast$}} along the {\\mbox{$\\tau_1=0$}} axis within the fundamental domain. \nIndeed, for every such special point, the corresponding point with {\\mbox{$\\tau_2= 1\/\\tau_2^\\ast$}} is equally special,\nsince these two points along the {\\mbox{$\\tau_1=0$}} axis are related by the {\\mbox{$\\tau\\to -1\/\\tau$}} modular transformation.\nIn other words, although our $\\widehat{\\cal G}_\\rho(a,\\tau)$ regulator function suppresses contributions\nfrom the {\\mbox{$\\tau_2\\gsim \\tau_2^\\ast$}} region, the modular invariance of this function requires that it also\nsimultaneously suppress contributions from the region with {\\mbox{$\\tau_2 \\lsim 1\/\\tau_2^\\ast$}}.\n(In general a modular-invariant regulator function equally suppresses the contributions \nfrom the regions that approach {\\it any}\\\/ of the modular cusps, but for the purposes of mapping to a physical\nspacetime scale $\\mu$ our concern is limited to the cusps along the {\\mbox{$\\tau_1=0$}} axis.)\nWe thus see that for any value of the spacetime scale $\\mu$ that we attempt to identify as corresponding\nto $\\rho a^2$, there always exists a second scale $M_s^2\/\\mu$ which we might equally validly identify as\ncorresponding to $\\rho a^2$.\nThis is the implication of the 
{\\mbox{$\\mu\\to M_s^2\/\\mu$}} scale-inversion symmetry discussed in Sect.~\\ref{sec4}.\n\nThe upshot of this discussion is that the mapping from $\\rho a^2$ to $\\mu$\nin any modular-invariant theory actually\nhas {\\it two branches}\\\/, as shown in Fig.~\\ref{mappings_figure}.\nAlong the first branch we identify {\\mbox{$\\mu^2\/M_s^2=1\/\\tau_2^\\ast= \\rho a^2$}}, but along the second\nbranch we identify \n{\\mbox{$\\mu^2\/M_s^2=\\tau_2^\\ast= 1\/(\\rho a^2)$}}. \nThese branches contain the same physics, but the choice of either branch breaks modular invariance.\nIn this respect modular invariance is much like another description-redundancy symmetry, namely gauge invariance:\nall physical quantities must be gauge invariant, but the choice of \na particular gauge slice (which is tantamount to the choice of a particular branch) \nnecessarily breaks the underlying symmetry.\n\nIn most of our discussions in this paper,\nwe focused on the behavior of our \nregulator functions within the {\\mbox{$a\\ll 1$}} regime.\nHowever, as we have seen in Eq.~(\\ref{newest}),\nthese functions exhibit a symmetry under {\\mbox{$\\rho a^2 \\to 1\/(\\rho a^2)$}}.\nThe logical necessity for this extra symmetry will be discussed below,\nbut this symmetry implies that $\\widehat {\\cal G}_\\rho(a,\\tau)$ also acts as a regulator when \nextended into the {\\mbox{$a\\gg 1$}} region.\nThis then allows us to see the full four-fold modular structure of the theory,\nas shown in Fig.~\\ref{mappings_figure}.\nGiven this structure,\nwe can also revisit the Higgs-mass plot shown in Fig.~\\ref{anatomy}. 
We now see that\nwe can interpret this plot as following \nthe {\\mbox{$\\mu^2\/M_s^2 = \\rho a^2$}} branch from the lower-left corner of Fig.~\\ref{mappings_figure}\ntoward the central location at which {\\mbox{$\\mu=M_s$}},\nand then \nfollowing the \n{\\mbox{$\\mu^2\/M_s^2 = 1\/(\\rho a^2)$}} branch outward towards the upper-left corner.\n\nGiven the sketch in Fig.~\\ref{mappings_figure},\nwe can also understand more precisely how the passage from string theory \nto an EFT breaks modular invariance. Within this sketch,\nregions with beige shading indicate locations where EFT descriptions exist\n(and where our regulators are designed to function most effectively, with {\\mbox{$a\\ll 1$}} or {\\mbox{$a\\gg 1$}}). \nBy contrast, stringy effects dominate in the yellow central region, which is\nthe only region that locally exhibits the full modular symmetry, lying on both branches simultaneously.\nAs a result, we necessarily break modular invariance by choosing to focus on any one of the four \nEFT regions alone.\nIndeed, each EFT region intrinsically exhibits a certain direction for the flow of the scale $\\mu$\nrelative to the flow of the underlying worldsheet parameter $\\rho a^2$.\nHowever, the relative direction of this flow is not modular invariant,\nas evidenced from the fact that this flow is reversed in switching from one branch\nto the other.\n\nAt first glance, the fact that the EFT regions appear only at the extreme ends of each branch \nin Fig.~\\ref{mappings_figure} might lead \none to believe that only extremely light states contribute within the EFT\nand that the infinite towers of heavy string states can be ignored within such regions.\nHowever, as we have repeatedly stressed throughout this paper, even this seemingly mild \nassertion would be incorrect. 
For example, even within the {\\mbox{$\\mu\\to 0$}} limit,\nwe have seen in Eq.~(\\ref{asymplimit}) that\nthe Higgs mass receives contributions from ${\\mathbb{X}}_1$-charged states of all masses across\nthe entire string spectrum. Likewise, $\\Lambda$ receives contributions from {\\it all}\\\/ string states, \nregardless of their mass. \nWe have also seen that the Higgs mass accrues a $\\mu$-dependence which transcends\nour field-theoretic expectations, even for {\\mbox{$\\mu\\ll M_s$}}. A particularly \nvivid example of this is the unexpected ``dip'' region shown in Fig.~\\ref{anatomy} --- an\neffect which is the direct consequence of the stringy Bessel functions whose form is dictated by\nmodular invariance.\nThus modular invariance continues to govern the behavior of the Higgs mass at all scales,\neven within the EFT regions.\n\nLikewise, within such theories there is no distinction between IR and UV.~\nWe can already see this within Fig.~\\ref{mappings_figure},\nwhere the points near the upper end of the figure ({\\it i.e.}\\\/, with large $\\mu$) are \ndesignated not as ``UV'' but as ``dual IR'', since they are the images of the IR regions\nwith small $\\mu$ under the duality-inversion symmetry.\nBut even this labeling is not truly consistent with \nmodular invariance, since there is no reason to adopt the \nlanguage of the small-$\\mu$ \nregion in asserting that the bottom part of the figure corresponds to the IR.~\nThanks to the equivalence under {\\mbox{$\\mu\\to M_s^2\/\\mu$}}, we might as well have decided to label the upper\nportion of the figure as ``UV'' and the lower portion of the figure as ``dual UV''.\nIn that case, the center of the figure would represent the most IR-behavior that is possible,\nrather than the most UV.~\nThe upshot is that the mere distinction between ``IR'' and ``UV'' \nitself breaks modular invariance.\nIn a modular-invariant theory, what we would normally call a UV divergence \nis not distinct from an IR divergence --- they 
are one and the same. \nIndeed, we have seen that the quadratic divergences normally associated with the Higgs mass in field theory \nare softened to mere logarithmic divergences --- such is the power of modular invariance ---\nbut in string theory there is no deeper physical interpretation\nto this remaining divergence as either UV or IR in nature until we decide to introduce one. \n\n\nIn this connection, we note that it \nmight have seemed tempting to look at the EFT expression in Eq.~(\\ref{eq:CW}) and suppose that\nin a UV-complete theory one could have set about the calculation in a piecemeal\nmanner, dividing the contributions into a UV contribution\nand a much less lethal logarithmically divergent IR contribution\nand then evaluating each one separately.\nThis is certainly the kind of reasoning that is suggested by the notion\nof softly broken symmetries, for example. \nHowever, because there is no intrinsic notion of\nUV and IR in a modular-invariant theory, no such separation can exist.\nInstead, all we have in string theory \nare amplitudes which may be divergent,\nand the question as to whether such divergences are most naturally interpreted as UV or IR in nature\nultimately boils down to a {\\it convention}\\\/ as to which modular-group fundamental domain \nis selected as our region of integration.\nAlthough these arguments are expressed in terms of one-loop amplitudes, \nsimilar arguments extend to higher loops as well. 
\nOf course, most standard textbook recipes for evaluating one-loop modular\nintegrals in string theory adopt the \nfundamental domain which includes the cusp at {\\mbox{$\\tau\\to i\\infty$}}.\nThis choice then leads to an IR interpretation for the divergence.\nBut when we derived\nour supertrace expressions involving only the physical string states,\nour calculations required that we sum over an infinite number of such fundamental domains which \nare all related to each other under modular transformations, as in Eq.~(\\ref{stripF}).\nIt is only in this way that we were able to transition from the fundamental domain to the strip\nand thereby obtain supertraces involving only the physical string states.\nThus UV and IR physics are inextricably mixed within such supertrace expressions.\n\n\nGiven this bird's-eye view, we can now also understand in a deeper way why it was necessary \nfor us to switch from our original modular-invariant regulator functions ${\\cal G}_\\rho(a,\\tau)$ \nin Eq.~(\\ref{regG}) to our enhanced modular-invariant functions $\\widehat {\\cal G}_\\rho(a,\\tau)$ in Eq.~(\\ref{hatGdef})\nwhich exhibited the additional symmetry under {\\mbox{$a\\to 1\/(\\rho a)$}}.\nAt the level of the string worldsheet,\nour original functions ${\\cal G}_\\rho(a,\\tau)$ would have been suitable, since they already \nsatisfied the two critical criteria\nwhich made them suitable as regulators:\n\\begin{itemize}\n\\item {\\mbox{${\\cal G}_\\rho(a,\\tau)\\to 1$}} for all $\\tau$ as {\\mbox{$a\\to 0$}}, so that the {\\mbox{$a\\to 0$}} \n limit restores our original unregulated theory; and\n\\item {\\mbox{$ {\\cal G}_\\rho(a,\\tau)\\to 0$}} sufficiently rapidly for any {\\mbox{$a>0$}} as\n $\\tau_2$ approaches the appropriate cusps ({\\mbox{$\\tau\\to i\\infty$}}, or equivalently {\\mbox{$\\tau\\to 0$}}), so that\n these functions are capable of regulating our otherwise-divergent integrands for all {\\mbox{$a>0$}}.\n\\end{itemize}\nIndeed, for any divergent string-theoretic 
quantity $I$, these functions would have led to\na corresponding set of finite quantities $\\widetilde I_\\rho(a)$ for each value of $(\\rho,a)$.\nWe further saw that these ${\\cal G}$-functions had a redundancy under {\\mbox{$(\\rho,a)\\to (1\/\\rho,\\rho a)$}},\nso that only the combination $\\rho a^2$ was invariant.\n\nHowever, while such functions would have been suitable at the level of the string worldsheet, there is one additional\ncondition that must also be satisfied if we want to be able to interpret our results \nin {\\it spacetime}\\\/, with the invariant combination $\\rho a^2$ identified as a running spacetime scale $\\mu^2\/M_s^2$.\nAs we have argued below Eq.~(\\ref{pfZ}),\nmodular-invariant string theories necessarily exhibit an invariance\nunder {\\mbox{$\\mu\\to M_s^2\/\\mu$}}; indeed,\nthis scale-duality symmetry rests on very solid foundations.\nHowever, given this scale-inversion symmetry, we see that we \nwould not have been able to consistently identify $\\rho a^2$ with the spacetime scale\n$\\mu^2\/M_s^2$ unless our regulator function itself also exhibited such an inversion symmetry, with an invariance under\n{\\mbox{$\\rho a^2 \\to 1\/(\\rho a^2)$}} [or equivalently under {\\mbox{$a\\to 1\/(\\rho a)$}}].\nThis was ultimately the reason we transitioned from the ${\\cal G}$-functions to the $\\widehat {\\cal G}$-functions,\nas in Eq.~(\\ref{hatGdef}). This not only preserved the first two properties itemized above, \nbut also ensured a third:\n\\begin{itemize}\n\\item {\\mbox{$\\widehat {\\cal G}_\\rho(a,\\tau) = \\widehat {\\cal G}_\\rho(1\/\\rho a,\\tau)$}} for all $(\\rho,a)$.\n\\end{itemize}\nIn other words, while our first two conditions ensured proper behavior for our regulator functions on the string worldsheet,\nit was the third condition which allowed us to endow our regulated string theory with an interpretation\nin terms of a renormalization flow with a spacetime mass scale $\\mu$. 
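As a crude caricature of these three conditions --- and nothing more, since the function below is an invented stand-in which is neither modular invariant nor smooth at {\\mbox{$\\rho a^2=1$}}, unlike our actual $\\widehat{\\cal G}_\\rho(a,\\tau)$ --- one can consider a profile depending on $(\\rho,a)$ only through the combination $\\rho a^2$, symmetrized under {\\mbox{$\\rho a^2\\to 1\/(\\rho a^2)$}}:

```python
import math

RHO = 2.0  # the benchmark value of rho used in the text

def toy_regulator(a, tau2, rho=RHO):
    """Invented stand-in exhibiting the three itemized properties:
    (1) -> 1 for all tau2 as a -> 0;
    (2) -> 0 exponentially as tau2 -> infinity for any a > 0;
    (3) invariant under a -> 1/(rho*a), i.e. rho*a^2 -> 1/(rho*a^2)."""
    x = rho * a * a
    return math.exp(-min(x, 1.0 / x) * tau2)
```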
\nIndeed, we see from Fig.~\\ref{mappings_figure} that in some sense this extra symmetry was forced on \nus the moment we identified {\\mbox{$\\mu^2\/M_s^2 = \\rho a^2$}} and recognized the existence of the scale-duality\nsymmetry under {\\mbox{$\\mu\\to M_s^2\/\\mu$}}.\nA similar symmetry structure would also need to hold for any alternative regulator functions that might be chosen.\n \n\nGiven these insights, we then proceeded to derive expressions for\nour regulated Higgs mass $\\widehat m_\\phi^2(\\mu)$ and regulated cosmological constant\n(effective potential) $\\widehat \\Lambda(\\mu)$ \nas functions of $\\mu$.\nThe exact results for these quantities are given in Eqs.~(\\ref{finalhiggsmassmu}) \nand (\\ref{lambdamuresult}), respectively. \nOnce again, we stress that these results are fully modular invariant except for the fact\nthat we have implicitly chosen to work within the lower-left branch of Fig.~\\ref{mappings_figure}.\nFor {\\mbox{$\\mu\\ll M_s$}},\nwe were then able to derive the corresponding approximate EFT running \nfor these quantities in Eqs.~(\\ref{approxhiggsmassmu}) \nand (\\ref{lambdamuresult2}).\nIndeed, as we have seen in Eq.~(\\ref{Lambdafull}),\n our final result for the running \neffective potential $\\widehat \\Lambda(\\mu,\\phi)$ takes \nthe general form\n\\begin{eqnarray}\n \\widehat \\Lambda(\\mu,\\phi) \\,&=&\\, \n \\frac{1}{24}{\\cal M}^2 \\,{\\rm Str}\\, M^2 \n -c' \\antieffStr M^2 \\mu^2 \\nonumber\\\\ \n && - \\zeffStr \\left[ \\frac{M^4}{64\\pi^2} \\log\\left( c \\frac{M^2}{\\mu^2}\\right) \n + c''\\mu^4\\right] ~~~~~~~\n\\label{eq:lambdaconclusions} \n\\end{eqnarray}\nwhere \n{\\mbox{$c= 2e^{2\\gamma+1\/2}$}},\n{\\mbox{$c'=1\/(96\\pi^2)$}}, and\n{\\mbox{$c''= 7 c'\/10$}},\nand where of course we regard the masses $M^2$ as functions of $\\phi$ as in Eq.~(\\ref{TaylorM}).\nThese specific values of $\\lbrace c,c',c''\\rbrace$\nwere calculated with our regulator function taken as $\\widehat {\\cal 
G}_\\rho(a,\\tau)$\nassuming the benchmark value {\\mbox{$\\rho=2$}},\nand with $\\mu$ defined along the lower-left branch in Fig.~\\ref{mappings_figure}.\nHowever, in general these constants depend on the precise profile of our regulator function.\nFinally, given our effective potential, we also discussed the general conditions under which \nour theory is indeed sitting at a stable minimum as a function of $\\phi$.\n\n\nWith the results in Eq.~(\\ref{eq:lambdaconclusions}) in conjunction with\nthe relations in Eq.~(\\ref{effpotl}),\nwe have now obtained an understanding of the Higgs mass \nas emerging from $\\phi$-derivatives \nof an infinite spectral supertrace of regulated effective potentials.\nWe can now also perceive the critical similarities and differences relative to the \nEFT expectations in Eq.~(\\ref{eq:CW}) and thereby address the questions posed\nat the beginning of this section.\nFor example, \nfrom the first term within Eq.~(\\ref{eq:lambdaconclusions})\nwe see that the Higgs mass within the full modular-invariant theory \ncontains a term of the form $\\frac{1}{24} {\\cal M}^2 {\\rm Str}\\, \\partial_\\phi^2 M^2$.\nComparing this term \nwith the first term within Eq.~(\\ref{eq:CW}), \nwe might be tempted to identify\n{\\mbox{$M_{\\rm UV}= \\sqrt{3\/2} \\pi {\\cal M}$}}.\nHowever, despite the superficial resemblance between these terms,\nwe see that the full string-theoretic term is very different\nbecause the relevant supertrace is over the {\\it entire}\\\/ spectrum of states in the theory\nand not just the light states in the EFT.~\n\nIt is also possible to compare the logarithmic terms within\nEqs.~(\\ref{eq:CW})\nand (\\ref{eq:lambdaconclusions}).\nOf course, as in the standard treatment, the logarithmic term in Eq.~(\\ref{eq:CW}) can be regulated by subtracting \na term of the form $\\log(M_{\\rm UV}\/\\mu)$, thereby obtaining an effective running.\nWe then see that both logarithmic terms actually agree.\nWhile it is satisfying to see this agreement, it 
is nevertheless \nremarkable that we have obtained such a logarithmic EFT-like running \nfrom our string-theoretic result.\nAs we have seen, our full string results in \nEqs.~(\\ref{finalhiggsmassmu}) and (\\ref{lambdamuresult}) did\nnot contain logarithms --- they contained Bessel functions.\nMoreover, unlike the term discussed above, their contributions were {\\it not}\\\/ truncated\nto only the light states with {\\mbox{$M\\lsim \\mu$}} --- they involved supertraces over {\\it all}\\\/\nof the states in the string spectrum, as expected for a modular-invariant theory.\nHowever, the behavior of the Bessel functions themselves \nsmoothly and automatically suppressed the contributions from states with {\\mbox{$M\\gsim \\mu$}}.\nThus, we did not need to {\\it impose}\\\/\nthe {\\mbox{$M\\lsim \\mu$}} restriction on the supertrace of the logarithm\nterm in Eq.~(\\ref{eq:lambdaconclusions})\nbased on a prior EFT-based expectation, as in Eq.~(\\ref{eq:CW});\nthis restriction, and thus an EFT-like interpretation, \nemerged naturally from the Bessel functions themselves.\nIt is, of course, possible to verify the appearance of such a term directly within the\ncontext of a given compactification through a direct calculation of the two-point function of the Higgs field\n(and indeed we verified this explicitly for various compactification choices),\nbut of course the expression in \nEq.~(\\ref{eq:lambdaconclusions}) \nis completely general\nand thus holds regardless of the specific compactification.\n\n\nWe can also now answer the final question posed at the beginning of this section:\n to what value does the Higgs mass actually run as {\\mbox{$\\mu\\to 0$}}?\nAssuming {\\mbox{$\\zStr {\\mathbb{X}}_2=0$}}, the answer is clear from Eq.~(\\ref{asymplimit}):\n\\begin{eqnarray}\n \\lim_{\\mu\\to 0} \\widehat m_\\phi^2(\\mu) \\, &=& \\,\n \\frac{\\xi}{4\\pi^2} \\,\\frac{\\Lambda}{{\\cal M}^2}\n - \\frac{\\pi}{6} \\,{\\cal M}^2\\, {\\rm Str}\\, {\\mathbb{X}}_1 \\nonumber\\\\\n 
&=&\\, \n \\frac{\\xi}{96\\pi^2} \\,{\\rm Str}\\, M^2 \n + \\frac{1}{24}\\, {\\cal M}^2\\, {\\rm Str}\\, \\partial_\\phi^2 M^2 \\Bigl|_{\\phi=0} \\nonumber\\\\\n &=&\\, \\frac{{\\cal M}^2 }{24}\\,D_\\phi^2 \\,{\\rm Str}\\, M^2\\Bigl|_{\\phi=0}~. \n\\label{asymplimit2}\n\\end{eqnarray}\nFrom a field-theory perspective, this is a remarkable result: all running actually\nstops as {\\mbox{$\\mu\\to0$}}, and the Higgs mass approaches a constant whose value is set by\na supertrace over {\\it all}\\\/ of the states in the string spectrum.\nThis behavior is clearly not EFT-like.\nHowever, the underlying reason for this has to do with UV\/IR equivalence and the scale-inversion\nsymmetry under {\\mbox{$\\mu\\to M_s^2\/\\mu$}}.\nRegulating our Higgs mass ensures that our theory no longer diverges as {\\mbox{$\\mu\\to \\infty$}}; rather,\nthe Higgs mass essentially ``freezes'' to a constant in this limit.\nIt is of course natural that in this limit the relevant constant includes contributions from\nall of the string states.\nThe scale-inversion symmetry then implies that the Higgs mass must also ``freeze'' \nto exactly the same value as {\\mbox{$\\mu\\to 0$}}.\nWe thus see that although a {\\it portion}\\\/ of the running of the Higgs mass is EFT-like\nwhen {\\mbox{$\\mu\\ll M_s$}}, this EFT-like behavior does not persist all the way to {\\mbox{$\\mu=0$}} because\nthe scale-inversion symmetry forces \nthe behavior as {\\mbox{$\\mu\\to 0$}} to mirror\nthe behavior as {\\mbox{$\\mu\\to \\infty$}}.\nIndeed, the ``dip'' region is nothing but the stringy transition between these two\nregimes.\n\n\nGiven the results in Eq.~(\\ref{asymplimit2})\nwe also observe that we can now write\n\\begin{equation}\n m_\\phi^2 ~=~ \\left. 
\\frac{1}{24} {\\cal M}^2 \\,{\\rm Str} \\left[ D_\\phi^2 M^2(\\phi)\\right]\\,\\right|_{\\phi=0}~~~\n\\end{equation}\nThis result is thus the Higgs-mass analogue \nof the $\\Lambda$-result in Eq.~(\\ref{eq:lam-rep}).\nWe can also take the $a\\to 0$ (or equivalently $\\mu\\to 0$) limit of Eq.~(\\ref{effpotl}),\nyielding the simple relations\n\\begin{equation}\n \\begin{cases}\n ~ \\Lambda &=~ \\Lambda(\\phi)\\bigl|_{\\phi=0} \\\\\n ~ m_\\phi^2 &=~ D_\\phi^2 \\,\\Lambda(\\phi)\\bigl|_{\\phi=0}~.\n \\end{cases}\n\\label{effpotl2}\n\\end{equation}\nIndeed, for theories with {\\mbox{$\\zStr {\\mathbb{X}}_2=0$}}, \nthese are exact relations amongst finite quantities.\n\n\nThe final results of our analysis \nare encapsulated within Fig.~\\ref{anatomy}.\nIndeed, this figure graphically illustrates many of the most important conclusions of this paper.\nIn Fig.~\\ref{anatomy}, we have dissected the anatomy of the Higgs-mass running,\nillustrating how this running\npasses through different distinct stages as $\\mu$ increases. \nStarting from the ``deep IR\/UV''\nregion near {\\mbox{$\\mu\\approx 0$}}, the Higgs mass passes through the ``dip'' region and the ``EFT'' region\nbefore ultimately reaching the ``turnaround'' region. \nBeyond this, the theory enters the \n``dual EFT'' region, followed by the ``dual dip'' region\nand ultimately the ``dual deep IR\/UV'' region.\nAbove all else, this figure clearly illustrates\nhow in a modular-invariant theory our normal understanding\nof ``running'' is turned on its head. 
The Higgs mass does not somehow\nget ``born'' in the UV and then run to some possibly undesirable\nvalue in the IR.~ \nInstead, we may more properly consider the Higgs mass to be ``born'' at {\\mbox{$\\mu=M_s$}}.\nIt then runs symmetrically towards both lesser and greater values of $\\mu$\nuntil it eventually asymptotes to a constant as {\\mbox{$\\mu\\to 0$}} and as {\\mbox{$\\mu\\to \\infty$}}.\n\n\nWe conclude this discussion with two comments regarding technical points.\nFirst, as discussed in Sect.~\\ref{sec4}, we have freely assumed throughout this paper \nthat the residue of a supertrace sum is equivalent to the supertrace sum of the individual residues. In other words, as discussed below Eq.~(\\ref{integrateg}), we have assumed that the supertrace sum does not introduce any additional divergences beyond those already encapsulated within our assertion that the four-dimensional Higgs mass is at most logarithmically divergent, or equivalently that the level-matched integrand has a divergence structure\n{\\mbox{$g(\\tau)\\sim c_0+c_1\\tau_2$}} as {\\mbox{$\\tau_2\\to\\infty$}}. Indeed, this assumption is justified because we are working within the presence of a regulator which is sufficiently powerful to render our modular integrals finite, given this divergence structure. Moreover, the divergence structure of our original unregulated Higgs mass is completely general for theories in four spacetime dimensions, since only a change in spacetime dimension can alter the numbers of $\\tau_2$ prefactors which emerge. Of course, four-dimensional string models generically contain many moduli, and some of these moduli may correspond to the radii associated with possible geometric compactifications from our original underlying 10- and\/or 26-dimensional worldsheet theories. If those moduli are extremely large or small, one approaches a decompactification limit in which our theory becomes effectively higher-dimensional. 
For any finite or non-zero value of these moduli, our results still hold as before. However, in the full limit as these moduli become infinite or zero, new divergences may appear which are related to the fact that the effective dimensionality of the theory has changed. Indeed, extra spacetime dimensions generally correspond to extra factors of $\\tau_2$, thereby increasing the strengths of the potential divergences. Although all of our results in Sects.~\\ref{sec2} and \\ref{sec3} are completely general for all spacetime dimensions, our results in Sect.~\\ref{sec4} are focused on the case of four-dimensional string models \nfor which {\\mbox{$g(\\tau_2)\\sim c_0+c_1\\tau_2$}} as {\\mbox{$\\tau_2\\to\\infty$}}. \nAs a result, the supertrace-summation and residue-extraction procedures will not commute in the decompactification limit, and additional divergences can arise. However, this does not pose a problem for us --- we simply use the same regulators we have already outlined in Sect.~\\ref{sec3}, but instead work directly in a higher-dimensional framework in which $g(\\tau_2)$ as \n{\\mbox{$\\tau_2\\to\\infty$}} \ntakes a form appropriate for the new effective spacetime dimensionality. \nOnce this is done, we are once again free to exchange the orders of residue-extraction and supertrace-summation, knowing that our results must once again be finite.\n\n\nOur second technical point relates to the concern that has occasionally been expressed in the prior literature\nabout the role played by the off-shell tachyons which necessarily appear \nwithin the spectra of all heterotic strings, and the exponential one-loop divergences\nthey might seem to induce in the absence of supersymmetry\nas {\\mbox{$\\tau\\to i\\infty$}}. \nIn this paper, we discussed this issue briefly in the paragraph surrounding\nEq.~(\\ref{protocharge}). 
\nUltimately, however, we believe that this concern is spurious.\nFirst, as discussed below Eq.~(\\ref{protocharge}), such states typically lack the non-zero charges \nneeded in order to contribute to the relevant one-loop string amplitudes. \nSecond, \nwithin such one-loop amplitudes,\nour modular integrations \ncome with an implicit instruction \nthat within the {\\mbox{$\\tau_2>1$}} region of the fundamental domain\nwe are to perform the $\\tau_1$ integration prior to performing the $\\tau_2$ integration.\nThis then eliminates the contributions from the off-shell tachyons in the {\\mbox{$\\tau\\to i\\infty$}} limit.\nThis integration-ordering prescription \nis tantamount to replacing the divergence as {\\mbox{$\\tau\\to i\\infty$}} with its average along the line segment {\\mbox{$-1\/2\\leq \\tau_1\\leq 1\/2$}},\nwhich makes sense in the {\\mbox{$\\tau_2\\to\\infty$}} limit as this line segment moves infinitely far up the fundamental domain.\nAnother way to understand this is to realize that under a modular transformation no information can be lost, yet this entire\nline segment as {\\mbox{$\\tau_2\\to \\infty$}} is mapped to the single point with {\\mbox{$\\tau_1=\\tau_2=0$}} under the modular transformation {\\mbox{$\\tau\\to -1\/\\tau$}}.\nFinally, through the compactification\/decompactification argument presented in Ref.~\\cite{Kutasov:1990sv}, \none can see directly that this off-shell tachyon makes no contribution for all spacetime dimensions {\\mbox{$D>2$}}. \nThus no exponential divergence arises.\nHowever, we note that even if \nan exponential divergence were to survive, it would also be automatically regulated through \nour modular-invariant regulator $\\widehat {\\cal G}_\\rho(a,\\tau)$ --- or sufficiently many higher powers thereof --- given\nthat $\\widehat {\\cal G}_\\rho(a,\\tau)$ itself exhibits an exponential suppression as {\\mbox{$\\tau\\to i\\infty$}}.\n\n\nThe results in this paper have touched on many different topics. 
Accordingly, there are \nseveral directions that future work may take. \n\n\nFirst, although we have focused in this paper on the mass of the\nHiggs, it is clear that this UV\/IR-mixed picture of running provides a general paradigm\nfor how one should think about the behavior of a modular-invariant theory as a whole. \n For example, one question that naturally arises from \nour discussion concerns the renormalization of the dimensionless couplings. \nThis was the subject of the seminal work in Ref.~\\cite{Kaplunovsky:1987rp}. \nEven though a regulator was chosen in Ref.~\\cite{Kaplunovsky:1987rp} which \nwas not consistent with modular invariance,\nthis was one of the first calculations in which the contributions from the full \ninfinite towers of string states were incorporated within a calculation of gauge couplings and their behavior.\nIt would therefore be interesting to revisit these issues and analyze the running and beta functions\nof the dimensionless gauge couplings that would emerge in the presence of a fully modular-invariant regulator. 
\nThe first steps in this direction have already been taken in Refs.~\\cite{Kiritsis:1994ta, Kiritsis:1996dn, Kiritsis:1998en}.\nHowever, using the techniques we have developed in this paper, it is now possible to\nextend these results to obtain full scale-dependent RG\nflows for the gauge couplings\n as functions of $\\mu$, and in a continuous way that simultaneously incorporates both UV and IR physics and which does not artificially separate the results into a field-theoretic running with a string-theoretic threshold correction.\nMoreover, due to the {\\mbox{$\\mu\\to M_s^2\/\\mu$}} symmetry\nwe expect that the coefficients of {\\it all}\\\/ operators in the theory \nshould experience symmetric runnings with vanishing gradients at {\\mbox{$\\mu=M_s$}}.\nFor operators with zero engineering dimension, this then translates to a vanishing\nbeta function at {\\mbox{$\\mu=M_s$}}, suggesting the existence of an unexpected (and ultimately unstable) ``UV'' \nfixed point at that location.\n\nIn the same vein, it would also be interesting to study the behavior of scattering amplitudes\nwithin a full modular-invariant context.\nWe once again expect significant deviations from our field-theoretic expectations at all scales ---\nincluding those at energies relatively far below the string scale --- but it would be interesting\nto obtain precise information about how this occurs and what shape the deviations take.\n\n \n\nGiven our results thus far,\nperhaps the most important and compelling avenue to explore concerns the gauge hierarchy problem.\nAs discussed in the Introduction,\nit remains our continuing hope that modular symmetries might provide a new perspective on this problem,\none that transcends our typical field-theoretic expectations.\nSome ideas in this direction were already sketched in Ref.~\\cite{Dienes:2001se},\nalong with suggestions \nthat the gauge hierarchy problem\nmight be connected with the cosmological-constant problem,\nand that these both might be 
closely connected with the question of vacuum stability.\nIt was also advocated in the Conclusions section of Ref.~\\cite{Dienes:2001se} \nthat these insights might be better understood through calculational frameworks that did not involve\ndiscarding the contributions of the infinite towers of string states, but which instead\nincorporated all of these contributions in order \nto preserve modular invariance and the string finiteness that follows.\n\n\nThe results of this paper \nenable us to begin the process of \nfulfilling these ambitions.\nIn particular, the effective potential\nin Eq.~(\\ref{eq:lambdaconclusions}) is a powerful first step because\nthis result \nprovides a ``UV-complete'' effective potential \nwhich yields the raw expressions\nfor radiative corrections written in terms of the spectrum of whatever theory\none may be interested in studying. Moreover it is an expression that\nis applicable at all energy scales, including the scales associated with the \ncosmological constant and the electroweak physics \nwhere such results are critical.\n\nGiven our results, we can develop a string-based reformulation of both \nof these hierarchy problems.\nOur expression for the cosmological constant in Eq.~(\\ref{eq:lam-rep})\n[or equivalently taking {\\mbox{$\\lim_{\\mu \\to 0} \\widehat \\Lambda(\\mu)$}}]\nimplicitly furnishes us with a constraint \nof the form {\\mbox{${\\rm Str}\\,M^2 \\sim 24 M_\\Lambda^4\/{\\cal M}^2$}}\nwhere {\\mbox{$M_\\Lambda \\sim \\Lambda^{1\/4}\\approx 2.3\\times 10^{-3}\\,$}}{\\rm eV} is the\nmass scale associated with the cosmological constant.\nLikewise, we see that\n{\\mbox{$\\Lambda\\ll 4\\pi^{2}M_{\\rm EW}^{2}\\mathcal{M}^{2}$}} where\n{\\mbox{$M_{\\rm EW}\\sim {\\cal O}(100)~{\\rm GeV}$}} denotes the electroweak scale.\nThus, \nwith $\\phi$ representing the Standard-Model Higgs \nand roughly identifying the physical Higgs mass as\n{\\mbox{$\\lim_{\\mu \\to 0} \\widehat m_\\phi^2 (\\mu) \\sim M_{\\rm EW}^2$}},\nwe see from 
Eq.~(\\ref{asymplimit2}) that we can obtain a second constraint\nof the form\n{\\mbox{$\\partial_\\phi^2 {\\rm Str}\\, M^2\\bigl|_{\\phi=0}\\sim 24 M_{\\rm EW}^2\/{\\cal M}^2$}}.\nWe therefore see that our two hierarchy conditions now respectively take the forms\n\\begin{equation}\n \\begin{cases}\n ~\\phantom{\\partial_\\phi^2\\,} {\\rm Str}\\, M^2 \\Big|_{\\phi=0} \\!\\! &\\sim~ 24\\, M_\\Lambda^4\/{\\cal M}^2 \\\\\n ~\\partial_\\phi^2\\, {\\rm Str}\\, M^2 \\Big|_{\\phi=0} \\!\\! &\\sim~ 24\\, M_{\\rm EW}^2\/{\\cal M}^2 \n \\end{cases}\n\\label{stringVeltman}\n\\end{equation}\nwhere we continue to regard our masses $M^2$ as functions of the \nHiggs fluctuations $\\phi$, as in Eq.~(\\ref{TaylorM}).\nTo one-loop order, these are the hierarchy conditions that must be satisfied by the\nspectrum of any modular-invariant string theory. \nIndeed, substituting the masses in Eq.~(\\ref{TaylorM}), these two conditions reduce to\nthe forms\n\\begin{equation}\n\\begin{cases}\n ~{\\rm Str}\\, \\beta_0 \\!\\! &\\sim~ 24\\, M_\\Lambda^4\/{\\cal M}^4 \\\\\n ~{\\rm Str}\\, \\beta_2 \\!\\! &\\sim~ 24\\, M_{\\rm EW}^2\/{\\cal M}^2 ~. \n\\end{cases}\n\\label{stringVeltman2}\n\\end{equation}\nAlthough every massive string state has a non-zero $\\beta_0$ and therefore\ncontributes to the first constraint, only those string states \nwhich couple to the Higgs field\nhave a non-zero $\\beta_2$ and thereby contribute to the second.\nOf course, given the form of Eq.~(\\ref{TaylorM}), \nthe non-zero $\\beta_i$'s for each state are still expected to \nbe $\\sim {\\cal O}(1)$, which is precisely why these constraints are so\ndifficult to satisfy. 
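To convey just how stringent these conditions are, the following back-of-the-envelope estimate (not taken from the analysis above) evaluates the right-hand sides of Eq.~(\\ref{stringVeltman2}) for illustrative scales; in particular, the value chosen for ${\\cal M}$ is purely an assumption made for the sake of the exercise:

```python
# Rough size of the supertrace targets in the two hierarchy conditions,
# Str beta_0 ~ 24 M_Lambda^4 / Mcal^4  and  Str beta_2 ~ 24 M_EW^2 / Mcal^2.
# Mcal below is an ASSUMED, purely illustrative value for the reduced string scale.
Mcal = 5e17          # GeV -- hypothetical reduced string scale
M_Lambda = 2.3e-12   # GeV (~2.3e-3 eV), mass scale of the cosmological constant
M_EW = 1e2           # GeV, electroweak scale

str_beta0_target = 24 * (M_Lambda / Mcal) ** 4
str_beta2_target = 24 * (M_EW / Mcal) ** 2

# With each individual beta_i expected to be O(1), these targets demand
# cancellations across the full string spectrum to roughly 116 and 30
# orders of magnitude, respectively.
print(f"Str beta_0 target ~ {str_beta0_target:.2e}")   # ~1e-116
print(f"Str beta_2 target ~ {str_beta2_target:.2e}")   # ~1e-30
```

The point of the exercise is simply that the supertraces must cancel to an extraordinary degree even though each state contributes at ${\\cal O}(1)$.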
\nMoreover, as we know in the case of string models exhibiting charge lattices,\nthese $\\beta_i$-coefficients are related to the charges of the individual string states\nand therefore can be discrete in nature.\n\n\n\n Given the constraints in Eq.~(\\ref{stringVeltman2}), \n it is natural to wonder why there is no hierarchy condition \n corresponding to ${\\rm Str}\\,\\beta_1$.\n Actually, such a condition exists, although this is not normally treated as a hierarchy \n constraint. This is nothing but our stability condition \n {\\mbox{$\\partial_\\phi \\widehat \\Lambda(\\mu,\\phi)\\bigl|_{\\phi=0}=0$}} in\n Eq.~(\\ref{stabcond2}), which can be considered on the same footing as\n the other two relations in Eq.~(\\ref{effpotl}). As we have seen,\n this leads directly to the relations {\\mbox{${\\rm Str}\\,{\\mathbb{Y}}=0$}} or equivalently\n {\\mbox{$\\partial_\\phi {\\rm Str}\\,M^2\\bigl|_{\\phi=0}=0$}}, which can be considered\n alongside the relations in Eq.~(\\ref{stringVeltman}).\n This then leads to the constraint {\\mbox{${\\rm Str} \\,\\beta_1=0$}}.\n Of course, it is always possible that there exists a \n non-zero Higgs tadpole, as long as this tadpole is sufficiently small as to have\n remained unobserved ({\\it e.g.}\\\/, at colliders, or cosmologically), \n leading to string models which are not truly stable but only metastable. 
\n Such models would be analogous to non-supersymmetric\n string models in which the {\\it dilaton}\\\/ tadpole is non-vanishing\n but exponentially suppressed to a sufficient degree that the theory \n is essentially stable on cosmological timescales~\\cite{Abel:2015oxa}.\n In such cases involving a non-zero Higgs potential, \n we can define an associated mass scale $M_{\\rm stab}$ \n which characterizes the maximum possible Higgs instability we can tolerate\n experimentally and\/or observationally.\n Our corresponding ``hierarchy'' condition would then take the form\n\\begin{equation}\n{\\rm Str}\\, \\beta_1 ~\\lsim~ {M_{\\rm stab}}\/{{\\cal M}}~.\n\\label{stabcond2}\n\\end{equation}\n Of course, this condition differs from\n the others in that it does not describe a phenomenological constraint on a particular\n vacuum but rather helps to determine whether that vacuum even exists.\n All conditions nevertheless determine whether a given value of $\\langle \\phi\\rangle$ (in this case\n defined as {\\mbox{$\\langle \\phi\\rangle =0$}}) is viable. \n In general, such ``hierarchies'' exist for each scalar $\\phi$ in the theory.\n\nDespite their fundamentally different natures, these two types of hierarchies can actually\nbe connected to each other.\nIn the case that $\\phi$ represents the Standard-Model Higgs, \nthis connection will then allow us to relate $M_{\\rm stab}$ to $M_{\\rm EW}$.\nThe fundamental reason for this connection is that \na tadpole corresponds to a linear term in an effective potential for the Higgs.\nThis is in addition to the quadratic mass term.\nHowever, we can eliminate the linear term by completing the square, which\nof course simply shifts the corresponding Higgs VEV.~\nThe maximum size of this tadpole diagram is therefore \nalso bounded by $M_{\\rm EW}$. 
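The completing-the-square step can be made explicit. Keeping only the linear (tadpole) and quadratic pieces of the Higgs potential, with $\\lambda_1$ a hypothetical tadpole coefficient introduced purely for illustration, we have

```latex
V(\\phi) \\;\\supset\\; \\lambda_1\\, \\phi \\,+\\, \\tfrac{1}{2}\\, m_\\phi^2\\, \\phi^2
 \\;=\\; \\tfrac{1}{2}\\, m_\\phi^2 \\left(\\phi + \\frac{\\lambda_1}{m_\\phi^2}\\right)^{2}
 \\,-\\, \\frac{\\lambda_1^2}{2\\, m_\\phi^2}~,
```

so the tadpole merely shifts the VEV by {\\mbox{$\\delta\\phi \\sim \\lambda_1\/m_\\phi^2$}}. Demanding {\\mbox{$\\delta\\phi \\lsim M_{\\rm EW}$}} with {\\mbox{$m_\\phi^2 \\sim M_{\\rm EW}^2$}} then bounds the tadpole coefficient by {\\mbox{$\\lambda_1 \\lsim M_{\\rm EW}^3$}}, which is the origin of the cubic scaling in $M_{\\rm EW}$ that follows.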
More precisely,\nwe find for the Standard-Model Higgs that\n\\begin{equation}\n M_{\\rm stab} ~\\sim~ 24\\, M_{\\rm EW}^3\/{\\cal M}^2~,\n\\end{equation}\nwhereupon Eq.~(\\ref{stabcond2}) takes the form\n\\begin{equation}\n{\\rm Str}\\, \\beta_1 ~\\lsim~ 24\\, M_{\\rm EW}^3\/{\\cal M}^3~.\n\\label{stabcond2alt}\n\\end{equation}\nIndeed, in this form Eq.~(\\ref{stabcond2alt}) \nmore closely resembles the relations in \nEq.~(\\ref{stringVeltman2}).\n\n\n\n \n\n\nIt is remarkable that in string theory\nthe constraints from the cosmological-constant problem\nand the gauge hierarchy problem in Eq.~(\\ref{stringVeltman2}) take such similar algebraic forms.\nIndeed in some sense $\\beta_0$ and $\\beta_2$ measure the responses of our \nindividual string states to mass (or gravity) and to \nfluctuations of the Higgs field, respectively,\nwith $\\beta_2$ related to the {\\it charges}\\\/ of these states\nwith respect to Higgs couplings.\nIt is also noteworthy that these conditions \neach resemble the so-called ``Veltman condition''~\\cite{Veltman:1980mj}\nof field theory.\nRecall that the Veltman condition for addressing the gauge hierarchy\nin an effective field theory such as the Standard Model\ncalls for cancelling the quadratic divergence of the Higgs mass\nby requiring the vanishing of the (mass)$^2$ supertrace ${\\rm Str} \\,M^2$ \nwhen summed over all light EFT states which couple to the Higgs.\nHowever, we now see that in string theory \nthe primary difference is that the supertraces ${\\rm Str}\\,M^2$ in Eq.~(\\ref{stringVeltman2}) \nare evaluated over \nthe {\\it entire}\\\/ spectrum of string states\nand not merely the light states within the EFT.~ \nThis is an important difference because the vanishing of this supertrace\nwhen restricted to the EFT generally tells us nothing\nabout its vanishing in the full theory, or vice versa.\nThese are truly independent conditions, and we see that\nstring theory requires the latter, not the former.\n \n\nOne of the virtues 
of modular invariance --- and indeed an indication of\nits overall power as a robust, unbroken symmetry --- is that the string naturalness\nconditions in Eqs.~(\\ref{stringVeltman}) and (\\ref{stringVeltman2}) \nnecessarily include the effects of {\\it all}\\\/ physics occurring \nat intermediate scales. This includes, for example,\nthe effects of a possible GUT phase transition.\nAs discussed earlier in this paper,\nthis is true because modular invariance is an exact symmetry governing\nnot only all of the states in the string spectrum but also their interactions.\nThus all intermediate-scale physics --- even including phase transitions ---\nmust preserve \nmodular invariance. This in turn implies that as the masses and degrees of\nfreedom within the theory evolve,\nthey all evolve together in a carefully balanced way such that modular invariance is preserved. \nThus, given that relations such as that in Eq.~(\\ref{asymplimit2}) are general and rest solely on \nmodular invariance, they too will remain intact. \nRelations such as those in Eqs.~(\\ref{stringVeltman}) and (\\ref{stringVeltman2}) \nthen remain valid.\n\n\n\n\nThus far we have reformulated the constraints associated\nwith the cosmological-constant and gauge hierarchy problems,\nproviding what may be viewed as essentially ``stringy'' versions of the traditional Veltman condition.\nHowever our results also suggest new stringy mechanisms by which such constraints might actually be satisfied ---\nmechanisms by which such hierarchies might\nactually emerge within a given theory. 
\nGiven the general running behavior of the Higgs mass \nin Fig.~\\ref{anatomy},\nwe observe two interesting features that may be\nrelevant for hierarchy problems.\nFirst, let us imagine that we apply our formalism for the running of the Higgs mass in the \noriginal {\\it unbroken}\\\/ phase of the theory.\nWe will then continue to obtain a result for the Higgs running \nwith the same shape as that shown in Fig.~\\ref{anatomy}, \nonly with the relevant quantities $\\Lambda$, ${\\mathbb{X}}_1$, and ${\\mathbb{X}}_2$ evaluated\nin the unbroken phase.\nConcentrating on the region with {\\mbox{$\\mu\\leq M_s$}},\nwe see that there is a relatively slow (logarithmic) running which stretches all the way from\nthe string scale $M_s$ down to the energy scales associated with the lightest massive string states,\nfollowed by a transient ``dip'' region within which the Higgs mass experiences\na sudden local minimum.\nThis therefore provides a natural scenario in which electroweak symmetry breaking\nmight be triggered at an energy scale hierarchically below the fundamental high energy scales\nin the theory.\nNote that the dip region indeed produces a {\\it minimum}\\\/ for the Higgs mass only if {\\mbox{${\\rm Str} \\,{\\mathbb{X}}_2>0$}}; \notherwise the logarithmic running changes \nsign and the Higgs mass would already be tachyonic \nat high energy scales near the string scale,\nsignifying (contrary to assumptions) that our theory was not sitting at a stable minimum in $\\phi$-space\nat high energies. (We also note that \neven though {\\mbox{${\\mathbb{X}}_2\\geq 0$}},\nthe supertrace ${\\rm Str}\\,{\\mathbb{X}}_2$ \ncan have either sign \ndepending on how these ${\\mathbb{X}}_2$-charges are distributed between\nbosonic and fermionic states.) 
\nHowever, with {\\mbox{${\\rm Str}\\,{\\mathbb{X}}_2>0$}},\nthis transient minimum in Fig.~\\ref{anatomy} will cause the Higgs to become tachyonic as long as\n\\begin{equation}\n \\frac{\\pi}{6}\\, {\\rm Str}\\,{\\mathbb{X}}_1 + \n \\frac{3}{10}\\, {\\rm Str}\\,{\\mathbb{X}}_2\n ~\\gsim~ \\frac{\\xi}{4\\pi^2} \\frac{\\Lambda}{{\\cal M}^4}~\n\\end{equation}\nwhere the factor of $3\/10$ represents the\napproximate value $\\approx 0.3$ parametrizing the ``dip depth'' from Fig.~\\ref{anatomy}.~\nIt is remarkable that this condition \nlinks the scale of electroweak symmetry breaking with the value of the one-loop\ncosmological constant.\nJust as with our other conditions, this condition can also be expressed as a constraint \non the values of our $\\beta_i$ coefficients:\n\\begin{equation}\n \\frac{9}{5} \\, {\\rm Str}\\,\\beta_1^2\n -4\\pi^2\\, {\\rm Str}\\,\\beta_2 \n ~\\gsim~ \\xi\\, {\\rm Str}\\,\\beta_0~.\n\\end{equation}\nThis is then our condition for triggering electroweak symmetry breaking at small scales\nhierarchically below ${\\cal M}$.\nOf course, after this breaking occurs, we would need to work in the broken phase\nwherein $\\phi$ returns to representing the Higgs fluctuations relative to the new broken-phase vacuum.\n\nThe second feature illustrated within Fig.~\\ref{anatomy} that may be relevant for the\nhierarchy problems concerns the scale-duality symmetry {\\mbox{$\\mu\\to M_s^2\/\\mu$}}.\nAs we have discussed at numerous points throughout this paper,\nthis symmetry implies an equivalence between UV physics and IR physics --- an observation\nwhich already heralds a major disruption of our understanding of the relationship between\nhigh and low energy\nscales compared with field-theoretic expectations. 
\nGiven that hierarchy problems not only emerge \nwithin the context of low-energy EFTs but also assume traditional field-theoretic relationships between UV and IR physics,\nit is possible to speculate that such hierarchy problems are not fundamental\nand do not survive in string theory in the manner we normally assume.\nFurthermore, we have already seen that modular invariance not only leads to this UV\/IR mixing but\nalso softens divergences so dramatically that certain otherwise-divergent amplitudes\n(such as the cosmological constant)\nare rendered finite.\nTaken together, these observations suggest that modular invariance may hold the key to an entirely new way of thinking \nabout hierarchy problems --- a point originally made in Ref.~\\cite{Dienes:2001se}\nand which we will develop further in upcoming work~\\cite{SAAKRDinprep}.\n\n\nThe results of this paper also prompt a number of additional lines of research.\nFor example, although most of our results are completely general and hold across all\nmodular-invariant string theories, much of our analysis in this paper has been restricted to one-loop order.\nIt would therefore be interesting to understand what occurs at higher loops.\nIn this connection, we note that it is often asserted in the string literature \nthat modular invariance is only a one-loop symmetry, seeming to imply that it should no longer apply at higher loops.\nHowever, this is incorrect: modular invariance is an {\\it exact}\\\/ worldsheet symmetry of (perturbative) closed strings,\nand thus holds at all orders. 
This symmetry is merely {\\it motivated}\\\/ \nby the need to render one-loop string amplitudes consistent with the underlying conformal invariance of the string worldsheet.\nOnce imposed, however, this symmetry affects the entire string model --- all masses and interactions, to any order.\nLikewise, one might wonder whether there are {\\it multi-loop}\\\/ versions of modular invariance \nwhich could also be imposed, similarly motivated by considerations of higher-loop amplitudes.\nHowever, it has been shown~\\cite{Kawai:1987ew}\nthat within certain closed string theories, amplitude factorization and \nphysically sensible state projections\ntogether ensure that one-loop modular invariance automatically implies multi-loop modular invariance.\nThus one-loop modular invariance is sufficient, and no additional \nsymmetries of this sort are needed.\n\nBecause modular invariance is an {\\it exact}\\\/ worldsheet symmetry,\nwe expect that certain features we have discussed \nin this paper (such as the existence of the scale-duality symmetry under {\\mbox{$\\mu\\to M_s^2\/\\mu$}})\nwill remain valid to all orders.\nWe believe that the same is true of other consequences of modular invariance,\nsuch as our supertrace relations and the ``misaligned supersymmetry''~{\\mbox{\\cite{Dienes:1994np,Dienes:1995pm,Dienes:2001se}}}\nfrom which they emerge.\n\n\nThat said, modular invariance is a symmetry of closed strings.\nFor this reason, we do not expect modular invariance to hold for Type~I strings, which contain\nboth closed-string and open-string sectors.\nHowever, within Type~I strings there are tight relations between the closed-string and open-string\nsectors, and certain remnants of modular invariance survive even into the open-string sectors.\nFor example, certain kinds of misaligned supersymmetry have been found \nto persist even within open-string sectors~\\cite{Cribiori:2020sct}.\nIt will therefore be interesting to determine the extent to which the results and 
techniques of this paper\nmight extend to open strings.\n\n\nThe results described in this paper have clearly covered a lot of territory, stretching \nfrom the development of new techniques for calculating Higgs masses to the development \nof modular-invariant methods \nof regulating divergences. \nWe have also tackled critical questions concerning UV\/IR mixing and the extent to which one can extract\neffective field theories from modular-invariant string theories, complete with Higgs masses and a cosmological\nconstant that run as functions of a spacetime mass scale.\nWe have demonstrated that there are unexpected relations between the Higgs mass and the one-loop cosmological\nconstant in any modular-invariant string model, and that it is possible to extract \nan entirely string-based effective potential for the Higgs. \nMoreover, as indicated in the Introduction,\nour results apply to {\\it all}\\\/ scalars in the theory --- even beyond the Standard-Model Higgs --- and apply\nwhether or not spacetime supersymmetry is present. \nAs such, we anticipate that there exist numerous areas of exploration that may be prompted by these developments.\nBut perhaps most importantly for phenomenological purposes, we believe that \nthe results of this paper can ultimately serve as the launching point for a rigorous investigation of \nthe gauge hierarchy problem in string theory.\nMuch work therefore remains to be done.\n\n\n\n\n\n\n\\begin{acknowledgments}\n\nWe are happy to thank Carlo Angelantonj, Athanasios Bouganis, and Jens Funke for insightful discussions. 
\nThe research activities of SAA were supported by the STFC grant \nST\/P001246\/1 and partly by a CERN Associateship and \nRoyal-Society\/CNRS International Cost Share Award IE160590.\nThe research activities of KRD were supported in part by the U.S.\\ Department of Energy\nunder Grant DE-FG02-13ER41976 \/ DE-SC0009913, and also \nby the U.S.\\ National Science Foundation through its employee IR\/D program.\nThe opinions and conclusions\nexpressed herein are those of the authors, and do not represent any funding agencies.\n\n\\end{acknowledgments}\n\n\n\n\n\n\n\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nLet $X$ be a compact K\\\"ahler manifold such that the anticanonical bundle $-K_X$ is nef, and \nlet \\holom{\\pi}{X}{T} be the Albanese map. By the work of Zhang \\cite{Zha96}, P\\v aun \\cite{Pau12}\nand the first named author \\cite{Cao13} we know that $\\pi$ is a fibration, i.e. $\\pi$ is surjective and has connected fibres.\nThe aim of this paper is to give evidence for the following:\n\n\\begin{conjecture} \\cite{DPS94} \\label{conjecturealbanese}\nLet $X$ be a compact K\\\"ahler manifold such that $-K_X$ is nef, and \nlet \\holom{\\pi}{X}{T} be the Albanese map.\nThen the fibration $\\pi$ is smooth. \nIf the general $\\pi$-fibre is simply connected, the fibration $\\pi$ is locally trivial in the analytic topology.\n\\end{conjecture}\n\nThis conjecture has been proven under the \nstronger assumption that $T_X$ is nef or $-K_X$ is hermitian \nsemipositive \\cite{CP91, DPS93, DPS94, DPS96, CDP12}, but the general case is \nvery much open: so far it is only known in the case where $q(X)=\\dim X$, i.e. the Albanese map is birational \\cite{Zha96, Fan06}. 
If $X$ is projective we also know that $\\pi$ is equidimensional and has reduced fibres \\cite{LTZZ10}.\nIn low dimension explicit computations based on the minimal model program (MMP) allow us to say more:\n\n\\begin{theorem} \\label{theoremPS} \\cite[Thm.]{PS98} \nLet $X$ be a projective manifold such that $-K_X$ is nef, and \nlet \\holom{\\pi}{X}{T} be the Albanese map. If $\\dim X \\leq 3$, then $\\pi$ is smooth.\n\\end{theorem}\n\nWe prove Conjecture \\ref{conjecturealbanese} when the general fibre is a weak Fano manifold:\n\n\\begin{theorem} \\label{theoremmain}\nLet $X$ be a compact K\\\"ahler manifold such that $-K_X$ is nef, and \nlet \\holom{\\pi}{X}{T} be the Albanese map.\nLet $F$ be a general $\\pi$-fibre. If $-K_F$ is nef and big, then $\\pi$ is locally trivial\nin the analytic topology.\n\\end{theorem}\n\nLet us explain the strategy of proof under the stronger assumption that $-K_X$ is $\\pi$-ample:\nfor $m \\gg 0$ we have an embedding \n$$\nX \\hookrightarrow \\ensuremath{\\mathbb{P}}(\\pi_* (\\omega_X^{\\otimes -m})).\n$$\nThe main technical point is to \nshow that the direct image sheaf $\\pi_* (\\omega_X^{\\otimes -m})$ is a nef vector bundle and $(-K_X)^{\\dim X-\\dim T+1}=0$.\nIf $X$ is projective this is not very difficult; the non-algebraic case needs substantially more effort and should be\nof independent interest. \nCombining these two facts, an intersection computation shows that $\\pi_* (\\omega_X^{\\otimes -m})$ is actually numerically flat, i.e.\nnef and antinef (the dual bundle is also nef). Yet a numerically flat vector bundle is a rather special \nlocal system \\cite[Sect.3]{Sim92}, so an argument from \\cite{Cao12} \nallows us to show that the equations of the fibres $X_t \\subset \\ensuremath{\\mathbb{P}}(\\pi_* (\\omega_X^{\\otimes -m})_t)$ do \nnot depend on $t \\in T$. 
In particular all the fibres are isomorphic, so $\\pi$ is locally trivial.\nIf $-K_X$ is only nef and $\\pi$-big, the same considerations show that the relative anticanonical fibration $X' \\rightarrow T$\nis locally trivial. We then use birational geometry to deduce that $X \\rightarrow T$ is also locally trivial.\nTheorem \\ref{theoremmain} immediately implies:\n\n\\begin{corollary} \\label{corollarymain}\nLet $X$ be a compact K\\\"ahler manifold such that $-K_X$ is nef. \nConjecture \\ref{conjecturealbanese} holds if $q(X) = \\dim X-1$. \n\\end{corollary}\n\nThis also settles the problem in low dimension.\n\n\\begin{corollary} \\label{corollarykaehler}\nLet $X$ be a compact K\\\"ahler manifold such that $-K_X$ is nef. \nConjecture \\ref{conjecturealbanese} holds if $\\dim X \\leq 3$.\n\\end{corollary}\n\nIn the second part of the paper we turn our attention to the case where the positivity of $-K_X$ is not strict, even\nalong the general $\\pi$-fibre. We use the MMP to prove Conjecture \\ref{conjecturealbanese} for fibres of low dimension.\n\n\\begin{theorem} \\label{theoremmaintwo}\nLet $X$ be a projective manifold such that $-K_X$ is nef.\nConjecture \\ref{conjecturealbanese} holds if $q(X) = \\dim X-2$. 
\n\\end{theorem}\n\nThe basic idea of the proof is very simple: find a Mori contraction $\\mu: X \\rightarrow Y$ onto \na projective manifold $Y \\rightarrow T$ such that $-K_{Y}$ is nef and relatively big.\nThen $-K_X-\\mu^* K_Y$ is nef and relatively big; using the birational morphism\n$$\nX \\rightarrow X' \\subset \\ensuremath{\\mathbb{P}}(\\pi_* (\\omega_X^{\\otimes -m} \\otimes \\mu^* \\omega_Y^{\\otimes -m}))\n$$\nwe can prove as in Theorem \\ref{theoremmain} that $X \\rightarrow T$ is locally trivial.\nUnfortunately it is a priori not clear that such a contraction $X \\rightarrow Y$ exists.\nIn fact the second named author constructed an example of a \n(rationally connected) projective threefold $M$ such that $-K_M$ is nef and not big\nand $M=\\mbox{Bl}_B M'$ with $B$ a smooth rational curve such that $-K_{M'} \\cdot B<0$ \\cite[Ex.4.8]{a8}. \nThis problem already appeared in the work of Peternell and Serrano, and we follow\nthe same strategy to overcome this difficulty: let $X'\/T$ be a Mori fibre space birational to $X\/T$,\nthen try to prove that $-K_{X'}$ is nef. Once this property is established one can\ndescribe precisely all the steps of the MMP $X \\dashrightarrow X'$.\nThe contribution of this paper is to introduce a new method to establish this kind of statement:\nour proof is based on the idea that if we restrict the MMP to some (pluri-)anticanonical divisor $D' \\subset X'$,\nthe numerical dimension of $-K_{X'}|_{D'}$ is zero or one. This observation quickly leads to strong\nrestrictions on the MMP in a neighbourhood of $D'$, cf. Lemma \\ref{lemmanu2}.\nThe main point is thus to show the existence of global sections of $-m K_X$ for some $m \\in \\ensuremath{\\mathbb{N}}$: \nthis can be done on threefolds, but is completely open in higher dimension. \n \n\n\n{\\bf Acknowledgements.} We thank I.~Biswas, S.~Boucksom, T.~Dinh, V.~Lazi\\'c, W.~Ou, T.~Sano and C.~Simpson for helpful communications.\nWe thank J-P.~
Demailly for helpful discussions and numerous suggestions.\nA. H\\\"oring was partially supported by the A.N.R. project CLASS\\footnote{ANR-10-JCJC-0111}.\n\n\n\\begin{center}\n{\\bf\nNotation and terminology\n}\n\\end{center}\n\nFor general definitions - at least in the algebraic context - we refer to Hartshorne's book \\cite{Har77}.\nWe will frequently use standard terminology and results \nof the minimal model program (MMP) as explained in \\cite{KM98} or \\cite{Deb01}.\nManifolds and varieties will always be supposed to be irreducible.\nA fibration is a proper surjective map \\holom{\\varphi}{X}{Y} with connected fibres between normal varieties.\n\nLet us recall the various positivity concepts that will be used in this paper.\n\n\\begin{definition} \\label{definitionnef} \\cite{Dem12}\nLet $(X, \\omega_{X} )$ be a compact K\\\"ahler manifold, and let $\\alpha \\in H^{1,1}(X) \\cap H^2(X, \\ensuremath{\\mathbb{R}})$ be a real cohomology class \nof type $(1,1)$. We say that $\\alpha$ is nef if for every $\\epsilon> 0$, there is a smooth $(1,1)$-form $\\alpha_{\\epsilon}$\nin the same class as $\\alpha$ such that $\\alpha_{\\epsilon}\\geq -\\epsilon\\omega_{X}$.\n\nWe say that $\\alpha$ is pseudoeffective if there exists a $(1, 1)$-current $T\\geq 0$ in the same class as $\\alpha$.\nWe say that $\\alpha$ is big if there exists an $\\epsilon> 0$ such that $\\alpha-\\epsilon \\omega_{X}$ is pseudoeffective.\n\\end{definition}\n\n\\begin{definition} \\label{definitionrelativebig}\nLet $\\alpha$ be a nef class on a compact K\\\"ahler manifold $X$, and let $\\pi: X\\rightarrow T$ be a fibration.\nWe say that $\\alpha$ is $\\pi$-big if for a general fibre $F$, the restriction $\\alpha|_F$ is big.\n\\end{definition}\n\n\\begin{definition} \\cite[Def 6.20]{Dem12} \\label{definitionnumericaldimension}\nLet $X$ be a compact K\\\"ahler manifold, and let $\\alpha \\in H^{1,1}(X) \\cap H^2(X, \\ensuremath{\\mathbb{R}})$ be a real cohomology class \nof type $(1,1)$.\n
Suppose that $\\alpha$ is nef.\nWe define the numerical dimension of $\\alpha$ by \n$$\n\\nd (\\alpha) :=\n\\max \\{k \\in \\ensuremath{\\mathbb{N}} \\ | \\ \\alpha^{k}\\neq 0 \\mbox{ in } H^{2k}(X,\\mathbb{R})\\}.\n$$\n\\end{definition}\n\n\\begin{remark} \\label{remarknumericaldimension} In the situation above, set $m=\\nd (\\alpha)$.\nBy \\cite[Prop 6.21]{Dem12} the cohomology class $\\alpha^{m}$ can be represented \nby a non-zero closed positive $(m,m)$-current $T$.\nTherefore \n$\\int_X \\alpha^{m}\\wedge\\omega_{X}^{\\dim X - m}\\neq 0$ for any K\\\"ahler class $\\omega_{X}$.\n\\end{remark}\n\n\n\\begin{definition} \\label{definitionnefcodimone}\nLet $M$ be a projective variety, and let $L$ be a $\\ensuremath{\\mathbb{Q}}$-Cartier divisor on $M$. We say that $L$\nis nef in codimension one if $L$ is pseudoeffective and for every prime divisor $D \\subset M$,\nthe restriction $L|_D$ is pseudoeffective.\n\\end{definition}\n\n\\begin{remark} \\label{remarknefcodimone}\nIf $M$ is a normal projective variety and $L$ a $\\ensuremath{\\mathbb{Q}}$-Cartier divisor which is nef \nin codimension one, then $L^2$ is a pseudoeffective cycle, i.e. a limit of effective cycles of codimension two.\nIndeed if \n$L= \\sum \\lambda_j D_j +N$ is the divisorial Zariski decomposition \\cite{Bou04, Nak04}, we have\n$$\nL^2 = \\sum \\lambda_j L|_{D_j} + L \\cdot N.\n$$\nBy hypothesis the restriction $L|_{D_j}$ is pseudoeffective, so a limit of effective divisors on $D_j$.\nThe class $N$ is modified nef in the sense of \\cite{Bou04}, so its intersection with any pseudoeffective divisor\ngives a pseudoeffective cycle. \n\\end{remark}\n\n\n\\begin{definition} \\cite{Miy87} \\label{definitiongenericallynef}\nLet $X$ be a normal, projective variety of dimension $n$, and let \n$\\sF$ be a torsion free coherent sheaf on $X$. 
We say that $\\sF$ is generically nef with\nrespect to a polarisation $A$ on $X$ \nif $\\sF|_C$ is nef where\n\\[\nC := D_1 \\cap \\ldots \\cap D_{n-1}\n\\]\nwith $D_j \\in | m_j A |$ general and $m_j \\gg 0$. \n\\end{definition}\n\n\n\\section{Numerical dimension} \\label{sectionnumericaldimension}\n\nIn this section we give an upper bound for the numerical dimension of $-K_X$:\n\n\\begin{proposition} \\label{propositionnumericaldimension}\nLet $X$ be a compact K\\\"ahler manifold of dimension $n$ such that $-K_X$ is nef, and \nlet \\holom{\\pi}{X}{T} be the Albanese map. Set $r:=\\dim T$.\nIf $-K_X$ is $\\pi$-big, we have $\\nd (-K_X)=n-r$.\n\\end{proposition}\n\n\\begin{remark} \\label{remarkprojective} If the torus $T$ is projective, this statement is well-known: in this case the manifold $X$ is \nalso projective, so\nif $\\nd(-K_X)>n-r$ we can apply Kawamata-Viehweg vanishing \\cite[6.13]{Dem00} to see that \n\\begin{equation}\\label{projkawaview}\nH^{r}(X, \\sO_X) = H^{r}(X, K_X+(-K_X))=0.\n\\end{equation}\nThe pull-back of a non-zero holomorphic $r$-form from $T$ gives an immediate contradiction. \nWe will also use the following special case of \\cite[Thm.5.1]{AD11}:\n\\end{remark}\n\n\\begin{lemma} \\label{lemmanumericaldimensionprojective}\nLet $X$ be a normal projective variety and $\\Delta$ a boundary divisor on $X$ such that\nthe pair $(X, \\Delta)$ is klt. Let \\holom{\\varphi}{X}{C} be a fibration onto a smooth curve such \nthat $-(K_{X\/C}+\\Delta)$ is nef and $\\varphi$-big. 
\nThen we have\n$$\n(K_{X\/C}+\\Delta)^{\\dim X} = 0.\n$$\n\\end{lemma}\n\\vspace{5pt}\n\nTo prove Proposition \\ref{propositionnumericaldimension} for arbitrary K\\\"ahler manifolds,\nwe first prove that if $\\nd (-K_X)\\geq n-r+1$, then $\\pi_* ((-K_X)^{n-r+1})$ is non-trivial.\nMore precisely, we have\n\n\\begin{lemma} \\label{lemmanumericaldimension}\nLet $X$ be a compact K\\\"ahler manifold of dimension $n$, and let\n$\\pi: X\\rightarrow T$ be a surjective morphism onto a compact K\\\"ahler manifold $(T, \\omega_T)$ of dimension $r$.\nLet $L$ be a line bundle on $X$ that is nef and $\\pi$-big.\nIf $\\nd(L)\\geq n-r+1$, we have\n$$\n\\int_{X}L^{n-r+1}\\wedge(\\pi^{*}\\omega_{T})^{r-1} > 0.\n$$\n\\end{lemma}\n\n\\begin{proof}\nWe suppose that $\\nd (L)=n-r+k$ for some $k \\in \\ensuremath{\\mathbb{N}}^*$.\nSince $L$ is nef and $\\pi$-big, the class \n\\begin{equation} \\label{equationone}\n\\alpha=L+C\\cdot\\pi^{*}(\\omega_{T})\n\\end{equation}\nis a nef class for any fixed constant $C>0$, and\n$\\int_{X}\\alpha^{n} > 0$.\nThanks to \\cite[Thm.\n
0.5]{DP04},\nthere exists $\\epsilon> 0$ such that $\\alpha-\\epsilon\\omega_{X}$ is a pseudoeffective class.\nCombining this with the fact that $L$ is nef,\nwe have\n\\begin{equation}\\label{degrelast}\n\\int_{X} L^{n-r+k}\\wedge \\alpha^{r-k}\n\\geq \\epsilon\\int_{X} L^{n-r+k}\\wedge \\alpha^{r-k-1}\\wedge\\omega_{X}\n\\end{equation}\n$$\n\\geq \\epsilon^2\\int_{X} L^{n-r+k}\\wedge \\alpha^{r-k-2}\\wedge\\omega^{2}_{X}\\geq\n\\cdots\\geq \\epsilon^{r-k}\\int_{X}L^{n-r+k}\\wedge\\omega_{X}^{r-k}> 0,$$\nwhere the last inequality comes from Remark \\ref{remarknumericaldimension}.\nBy the definition of numerical dimension and $\\eqref{equationone}$, we have\n\\begin{equation}\\label{degrefirst}\nC^{r-k}\\cdot\\int_{X} L^{n-r+k}\\wedge (\\pi^{*}\\omega_{T})^{r-k} = \\int_{X} L^{n-r+k}\\wedge \\alpha^{r-k}.\n\\end{equation}\nNow \\eqref{degrelast} and \\eqref{degrefirst} imply that\n\\begin{equation} \\label{equationtwo}\n\\int_{X} L^{n-r+k}\\wedge (\\pi^{*}\\omega_{T})^{r-k} > 0.\n\\end{equation}\nOn the other hand, since $L$ is $\\pi$-big, we have\n\\begin{equation}\\label{equationthree}\n\\int_{X} L^{n-r}\\wedge (\\pi^{*}\\omega_{T})^{r}> 0.\n\\end{equation}\nUsing the Hovanskii-Teissier inequality in the K\\\"{a}hler case (cf. Appendix \\ref{appendixinequality}),\nthe inequalities $\\eqref{equationtwo}$ and $\\eqref{equationthree}$ imply\n$\\int_{X}L^{n-r+1}\\wedge(\\pi^{*}\\omega_{T})^{r-1}> 0$.\n\\end{proof}\n\nWe recall a vanishing theorem proved in \\cite[Prop. 2.4]{Cao12}:\n\n\\begin{lemma}\\label{keyvanishing1}\nLet $L$ be a line bundle on a compact K\\\"{a}hler manifold $(X,\\omega)$ of dimension $n$, and \nlet $\\varphi$ be a metric on $L$ with analytic singularities. 
\nLet \n$ \\lambda_{1}(x)\\leq \\lambda_{2}(x)\\leq\\cdots\\leq\\lambda_{n}(x)$\nbe the eigenvalues of $\\frac{i}{2\\pi}\\Theta_{\\varphi}(L)$ with respect to $\\omega$.\nIf\n\\begin{equation}\\label{equation1}\n\\sum_{i=1}^{p} \\lambda_{i}(x)\\geq c\n\\end{equation}\nfor some constant $c> 0$ independent of $x \\in X$,\nthen\n$$H^{q}(X, K_{X}\\otimes L\\otimes \\mathcal{I}(\\varphi))=0 \n\\qquad \\forall \\ q\\geq p.$$\n\\end{lemma}\n\nThe following vanishing property plays an important role in the proof of Proposition \\ref{propositionnumericaldimension}.\nAlthough it was essentially proved in \\cite[Prop.5.3]{Cao12}, we give the proof since\nthe situation here is slightly more general.\n\n\\begin{proposition}\\label{lemmavanishing}\nLet $(X, \\omega_{X})$ be a compact K\\\"ahler manifold of dimension $n$, and\n$L$ be a nef line bundle on $X$.\nSuppose that the following holds:\n\\begin{enumerate}[(i)]\n\\item $X$ admits\na two-step tower of fibrations \n$$\\begin{CD} X @>\\pi >> T @> \\pi_{1}>> S \\end{CD}$$ \nwhere $\\pi$ is a fibration onto a compact K\\\"ahler manifold $(T, \\omega_T)$ of dimension $r$,\nand $\\pi_{1}$ is a smooth fibration onto a smooth curve $S$.\n\\item $L$ is $\\pi$-big and satisfies\n$$\\pi_{*}(c_{1}(L)^{n-r+1})=\\pi_{1}^{*}(\\omega_{S})$$ \nfor a K\\\"ahler metric $\\omega_{S}$ on $S$.\n\\end{enumerate}\nThen we have\n$$ \nH^{q}(X,K_{X}\\otimes L)=0 \\qquad \\forall \\ q\\geq r.\n$$\n\\end{proposition}\n\n\\begin{proof}\nBy \\cite[Lemma 5.1]{Cao12}\\footnote{Note that the proof of \\cite[Lemma 5.1]{Cao12} works well in our case.} the class\n$L-d\\cdot \\pi^* \\pi_{1}^* \\omega_{S}$\nis pseudoeffective for some $d> 0$.\nTherefore there exists a singular metric $h_{1}$ on $L$\nsuch that \n$$i\\Theta_{h_{1}}(L)\\geq d\\cdot\\pi^{*}\\pi_{1}^*\\omega_{S}.$$\nSince $c_{1}(L)+ \\pi^{*} \\omega_{T}$ is nef and \n$\\int_{X}(c_{1}(L)+\\pi^{*} \\omega_{T})^{n}> 0$,\n\\cite[Thm.\n
0.5]{DP04}\nimplies the existence of a singular metric $h_{2}$ on $L$\nsuch that\n$$i\\Theta_{h_{2}}(L)\\geq c\\cdot \\omega_{X}- \\pi^{*} \\omega_{T}$$\nin the sense of currents for some constant $c > 0$.\nThanks to a standard regularization theorem \\cite{Dem92},\nwe can suppose moreover that $h_{1}, h_{2}$ have analytic singularities.\nSince $L$ is nef we know that for any $\\epsilon> 0$, there exists a smooth metric $h_{\\epsilon}$ on $L$\nsuch that $i\\Theta_{h_{\\epsilon}}(L)\\geq -\\epsilon\\omega_{X}$.\nNow we define a new metric $h$ on $L$:\n$$h=\\epsilon_{1} h_{1}+\\epsilon_{2} h_{2}+ (1-\\epsilon_{1}-\\epsilon_{2})h_{\\epsilon}$$\nfor some $1\\gg \\epsilon_{1}\\gg \\epsilon_{2} \\gg \\epsilon> 0$.\nBy construction, we have\n\\begin{equation}\\label{equation2}\ni\\Theta_{h}(L)=\n\\epsilon_{1}i\\Theta_{h_{1}}(L)+\\epsilon_{2}i\\Theta_{h_{2}}(L)+(1-\\epsilon_{1}-\\epsilon_{2})i\\Theta_{h_{\\epsilon}}(L)\n\\end{equation}\n$$\n\\geq d\\cdot\\epsilon_{1} \\pi^{*}\\pi_{1}^{*}(\\omega_{S})-\\epsilon_{2}\\pi^{*}(\\omega_{T})+(c \\cdot\\epsilon_{2}-\\epsilon)\\omega_{X}.\n$$\nSet\n$\\omega_{\\tau}=\\tau\\cdot\\omega_{X}+\\pi^{*}(\\omega_{T})$ for some $\\tau> 0$.\n\nWe now check that $( i\\Theta_{h}(L), \\omega_{\\tau} )$ satisfies the condition \\eqref{equation1} in Lemma \\ref{keyvanishing1} \nfor $p=r$ when $\\tau$ is small enough (i.e., we consider the eigenvalues of $i\\Theta_{h}(L)$ with respect to $\\omega_{\\tau}$,\nwhere $\\tau\\ll \\epsilon$).\nLet $x\\in X$ and let $V$ be an $r$-dimensional subspace of $(T_X)_x$.\nBy an elementary estimate, we \nhave\\footnote{In fact, since $\\pi_1$ is a submersion, $\\omega_T$ decomposes the tangent bundle of $T$ as $T_{T\/S}\\oplus \\pi_1^* (T_S)$ \nin the sense of $C^{\\infty}$. 
Observing that $\\pi_1$ is smooth and $\\epsilon_1\\gg \\epsilon_2$,\nwe have\n\\begin{equation}\\label{addremark}\nd\\cdot\\epsilon_{1} \\pi_1^{*}(\\omega_{S})(t,t)-\\epsilon_{2}\\omega_{T}(t, t)\\geq \\frac{d\\cdot\\epsilon_{1}}{2}\\omega_{T}(t,t)\n\\qquad\\text{for any }t\\in \\pi_1^* (T_S).\n\\end{equation}\nSince $\\dim V=r$, there exists an non zero element $v\\in V$ such that $\\pi_* (v)\\in \\pi_1^* (T_S)$.\nBy \\eqref{equation2} and \\eqref{addremark}, we obtain\n\\begin{equation}\n\\frac{ i\\Theta_h (L) (v, v )}{\\langle v, v\\rangle_{\\omega_{\\tau}}}\\geq \n\\frac{(c\\epsilon_2 -\\epsilon )\\langle v, v\\rangle_{\\omega_X}+ \\frac{d\\cdot\\epsilon_1}{2}\\langle \\pi_* (v), \\pi_* (v)\\rangle_{\\omega_T}}{\\tau\\langle v, v\\rangle_{\\omega_X}+\\langle \\pi_* (v), \\pi_* (v)\\rangle_{\\omega_T}}\n\\geq \\min\\{ \\frac{c \\epsilon_2 -\\epsilon}{\\tau}, \\frac{d\\cdot \\epsilon_1}{2}\\}.\n\\end{equation}\n}\n$$\\sup\\limits_{v\\in V}\\frac{ i\\Theta_h (L) (v, v )}{\\langle v, v\\rangle_{\\omega_{\\tau}}}\\geq \n \\min \\{ \\frac{c \\epsilon_2 -\\epsilon}{\\tau}, \\frac{d\\cdot \\epsilon_1}{2}\\}\\gg (r-1)\\cdot \\epsilon_2 $$\nby the choice of $\\tau , \\epsilon_1, \\epsilon_2$.\nMoreover, since $\\epsilon_{2}\\ll \\epsilon_{1}$, \n\\eqref{equation2} implies that $i\\Theta_{h}(L)$ has at most $(r-1)$-negative eigenvectors\nand their eigenvalues with respect to $\\omega_{\\tau}$ are larger than $ - \\epsilon_{2}$.\nBy the minimax principle, \n$( i\\Theta_{h}(L), \\omega_{\\tau} )$ satisfies the condition \\eqref{equation1} in Lemma \\ref{keyvanishing1}.\nThus we have\n$$H^{q}(X,K_{X}\\otimes L\\otimes\\mathcal{I} (h))=0\n\\qquad \\forall \\ q \\geq r.$$\nSince $\\epsilon_{1}, \\epsilon_{2}$ are small enough, \nwe have $\\mathcal{I}(h)=\\mathcal{O}_{X}$.\nTherefore we get\n$$\nH^{q}(X,K_{X}\\otimes L)=0\\qquad \\forall \\ q\\geq r.\n$$\n\\end{proof}\n\nTo prove the main theorem in this section, \nwe need another vanishing lemma. 
\nThe idea of the proof is essentially the same as Proposition \\ref{lemmavanishing}.\n\n\\begin{lemma}\\label{lemmavanishing3}\nLet $(X, \\omega_{X})$ be a compact K\\\"ahler manifold of dimension $n$\nwhich admits a fibration $\\pi: X\\rightarrow T$ onto a compact K\\\"ahler manifold $(T, \\omega_T)$ of dimension $r$.\nLet $L$ be a line bundle on $X$ that is nef and $\\pi$-big, and let $A$ be a line bundle on $T$\nthat is semiample. If $\\nd (A)=s$, then we have \n$$\nH^q (X, K_X \\otimes L \\otimes \\pi^* (A) )=0 \\qquad \\forall \\ q\\geq r-s+1.\n$$\n\\end{lemma}\n\n\\begin{proof}\nSince $A$ is semiample of numerical dimension $s$,\nthere exists a smooth metric $h_A$ on $A$ such that $i \\Theta_{h_A} (A)$ is semipositive\nand has $s$ strictly positive eigenvalues which admit a positive lower bound that does not depend on \nthe point $t \\in T$.\nBy the proof of Proposition \\ref{lemmavanishing}, \nthere exists a metric $h_{2}$ on $L$ with analytic singularities\nsuch that\n$$i\\Theta_{h_{2}}(L)\\geq c\\cdot \\omega_{X}- \\pi^{*} \\omega_{T}$$\nin the sense of currents for some constant $c > 0$.\nNote that $L$ is nef. 
Then for any $\\epsilon> 0$, there exists a smooth metric $h_{\\epsilon}$ on $L$\nsuch that $i\\Theta_{h_{\\epsilon}}(L)\\geq -\\epsilon\\omega_{X}$.\nNow we define a new metric $h$ on $L$:\n$$h=\\epsilon_{2} h_{2}+ (1-\\epsilon_{2})h_{\\epsilon}$$\nfor some $\\epsilon_{2}\\ll 1$ and $\\epsilon\\ll c\\cdot \\epsilon_2$.\nBy construction, we have\n\\begin{equation}\\label{equationadd2}\ni\\Theta_{h\\cdot h_A}(L+\\pi^* (A) )=\n\\epsilon_{2}i\\Theta_{h_{2}}(L)+(1-\\epsilon_{2})i\\Theta_{h_{\\epsilon}}(L)+\\pi^* ( i\\Theta_{h_A}(A) )\n\\end{equation}\n$$\\geq -\\epsilon_{2}\\pi^{*}(\\omega_{T})+(c \\cdot\\epsilon_{2}-\\epsilon)\\omega_{X} + \\pi^* (i\\Theta_{h_A}(A) )\n= (c \\cdot\\epsilon_{2}-\\epsilon)\\omega_{X} + \\pi^* (i\\Theta_{h_A}(A)-\\epsilon_{2}\\pi^{*}\\omega_{T}) .$$\nSince $i\\Theta_{h_A}(A)$ is fixed, we can let \n$\\epsilon_2$ small enough with respect to the smallest strictly positive eigenvalues of $i\\Theta_{h_A}(A)$.\nSet $\\omega_{\\tau}=\\tau\\omega_X+\\pi^* (\\omega_T)$ for $\\tau> 0$.\nSince the semipositive $(1,1)$-form $i\\Theta_{h_A}(A)$ contains $s$ strictly positive directions,\nby the same argument as in Proposition \\ref{lemmavanishing}, we know that the pair\n$$( i\\Theta_{h\\cdot h_A}(L \\otimes A) , \\omega_{\\tau})$$\nsatisfies the condition \\eqref{equation1} in Lemma \\ref{keyvanishing1} \nfor $p=r-s+1$ when $\\tau$ is small enough.\nUsing Lemma \\ref{keyvanishing1}, we obtain that \n$$H^q (X, K_X\\otimes L\\otimes A\\otimes \\mathcal{I}(h\\cdot h_A))=0 \\qquad\\text{for }q\\geq n-s+1 .$$\nSince $\\epsilon_2\\ll 1$, we have $\\mathcal{I}(h\\cdot h_A)=\\mathcal{O}_X$.\n\\end{proof}\n\n\n\nWe can now prove the main theorem in this section:\n\n\\begin{theorem}\\label{KVvanishing}\nLet $X$ be a compact K\\\"{a}hler manifold of dimension $n$. 
\nSuppose that there exists a fibration \n$\\pi: X\\rightarrow T$ onto a torus of dimension $r$.\nLet $L$ be a line bundle on $X$ that is nef and $\\pi$-big.\nIf $\\nd (L) \\geq n-r+1$, then we have\n$$\nH^{q}(X, K_{X} \\otimes L)=0 \\qquad \\forall \\ q \\geq r.\n$$\n\\end{theorem}\n\n\\begin{remark}\nBy the same argument as in Proposition \\ref{lemmavanishing}, we can easily prove that \nif $\\nd (L) =n-r$, then \n$$\nH^{q}(X, K_{X} \\otimes L)=0 \\qquad \\forall \\ q> r.\n$$\n\\end{remark}\n\n\n\\begin{proof}[Proof of Theorem \\ref{KVvanishing}]\nSince $\\nd (L)\\geq n-r+1$, \nLemma \\ref{lemmanumericaldimension} implies that \n\\begin{equation} \\label{eqna}\n\\int_{T}\\pi_{*}(c_{1}(L)^{n-r+1})\\wedge\\omega_{T}^{r-1}> 0\n\\end{equation}\nfor any K\\\"ahler class $\\omega_{T}$.\nUsing the assumption that $T$ is a torus, \nwe can represent the cohomology class $\\pi_{*}(c_{1}(L)^{n-r+1})$ by\na constant $(1,1)$-form \n$\\sum_{i=1}^{r}\\lambda_{i}d z_{i}\\wedge d\\overline{z}_{i}$\non $T$, after a suitable unitary change of the flat coordinates.\nSince \\eqref{eqna} is valid for any K\\\"ahler class $\\omega_{T}$, an elementary computation shows that \n$\\lambda_{i}\\geq 0$ for any $i$. Thus \n$\\pi_{*}(c_{1}(L)^{n-r+1})$ is a semipositive \n(non-trivial) class in $H^{1,1} (T) \\cap H^2(T, \\mathbb{Q})$.\nUsing \\cite[Prop.\n
2.2]{Cao12}, we get a smooth fibration\n$\\varphi: T \\rightarrow S$ \nwhere $S$ is an abelian variety of dimension $s$, and\n$$\\pi_{*}(c_{1}(L)^{n-r+1})=\\lambda \\ \\varphi^* A$$\nfor some $\\lambda>0$ and a very ample divisor $A$ on $S$.\n\nFor every $p \\in \\{ 0, \\ldots, s-1 \\}$ let $S_{p}$ be a complete intersection of $p$ general divisors in \nthe linear system $|A|$, and set $X_p:=\\fibre{(\\varphi \\circ \\pi)}{S_p}$ and $T_p:=\\fibre{\\varphi}{S_p}$.\nThen we get a tower of fibrations\n$$\\begin{CD}\n X_{p} @>\\pi|_{X_p}>> T_{p} @>\\varphi|_{T_p}>> S_{p}\n \\end{CD}\n$$\nand $X_{p}$ is smooth by Bertini's theorem.\nMoreover, we also have the equality\n$$\n(\\pi|_{X_p})_{*}(c_{1}(L)^{n-r+1})=\\lambda\\cdot (\\varphi|_{T_p})^{*} A|_{S_p}. \n$$\nNote that $(\\varphi|_{T_p})^{*} A|_{S_p}$ is semiample of numerical dimension $\\dim S_p$, so\nwe have\n\\begin{equation}\\label{intervanishing}\nH^q (X_p, K_{X_p} \\otimes (L \\otimes \\pi^* \\varphi^* A)|_{X_p})=0 \\qquad \\forall \\ q\\geq \\dim T- \\dim S +1\n\\end{equation}\nby Lemma \\ref{lemmavanishing3}.\nUsing \\eqref{intervanishing} and the exact sequence \n$$\n0 \\rightarrow K_{X_p} \\otimes L|_{X_{p}} \\rightarrow K_{X_p} \\otimes (L \\otimes \\pi^* \\varphi^* A)|_{X_p} \\rightarrow K_{X_{p+1}} \\otimes L|_{X_{p+1}} \\rightarrow 0,\n$$\nan easy induction shows that\n\\begin{equation}\\label{surjective}\nH^{q-(s-1)}(X_{s-1}, K_{X_{s-1}} \\otimes L|_{X_{s-1}}) \\twoheadrightarrow\nH^{q}(X, K_X \\otimes L) \\qquad \\forall \\ q \\geq r. 
\n\\end{equation}\nApplying Proposition \\ref{lemmavanishing} to $X_{s -1}$ and the line bundle $L|_{X_{s-1}}$, \nwe get\n$$\nH^{q}(X_{s -1}, K_{X_{s -1}} \\otimes L|_{X_{s-1}})=0 \\qquad \\forall \\ q \\geq \\dim T_{s -1}.\n$$\nSince $\\dim T_{s-1} = r-(s-1)$ we conclude by \\eqref{surjective}.\n\\end{proof}\n\n\n\\begin{proof}[Proof of Proposition \\ref{propositionnumericaldimension}]\nIf $\\nd (-K_X)\\geq n-r+1$, by taking $L=-K_X$ in Theorem \\ref{KVvanishing}, \nwe obtain $H^{r}(X, \\sO_X)=0$. \nThe pull-back of a non-zero holomorphic $r$-form from $T$ gives a contradiction.\n\\end{proof}\n\n\n\\section{Positivity of direct image sheaves} \\label{sectionpositivity}\n\nIn this section we will prove that, under the conditions of Theorem \\ref{theoremmain}, the direct image sheaves $\\pi_* (\\omega_X^{\\otimes -m})$ are numerically flat vector bundles for $m \\gg 0$. If the torus $T$ is projective this can be done by proving\nthat $\\pi_* (\\omega_X^{\\otimes -m})$ is nef and numerically trivial on a general complete intersection curve. If $T$ is non-algebraic such a curve\ndoes not exist, so we have to refine the construction.\n
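\nLet us first indicate heuristically why numerical flatness is to be expected; this is only a sketch, and it assumes that the higher direct images $R^{q}\\pi_{*}(\\omega_X^{\\otimes -m})$ vanish for $m\\gg 0$, so that the Grothendieck-Riemann-Roch formula computes the Chern character of the direct image. Since $K_T=0$, the leading term in $m$ of the first Chern class is\n$$\nc_{1}\\big(\\pi_{*}(\\omega_X^{\\otimes -m})\\big)=\\frac{m^{n-r+1}}{(n-r+1)!}\\,\\pi_{*}\\big(c_{1}(-K_X)^{n-r+1}\\big)+O(m^{n-r}),\n$$\nwhere $n=\\dim X$ and $r=\\dim T$; by Proposition \\ref{propositionnumericaldimension} this leading term vanishes. Since a nef vector bundle with numerically trivial determinant is numerically flat (cf. \\cite[Thm 1.18]{DPS94}), the remaining issues are to establish the nefness of $\\pi_* (\\omega_X^{\\otimes -m})$ and to control $c_{1}$ exactly, not only to leading order.\n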
In Subsection \\ref{subsectionsemistable} we introduce the tools necessary\nto deal with the non-algebraic case; Subsection \\ref{subsectionanticanonical} contains the core of our proof, the direct image argument.\n\n\\subsection{Semistable filtration on compact K\\\"ahler manifold} \\label{subsectionsemistable}\n\nLet us recall the following terminology:\n\n\\begin{definition} \\cite[Defn.1.5.1]{HL97} \\label{definitionjordanhoelder}\nLet $(X, \\omega_X)$ be a compact K\\\"ahler manifold, \nand let $\\sG$ be a reflexive sheaf that is semistable with respect to $\\omega_X$.\nA Jordan-H\\\"older filtration is a filtration\n$$\n0 = \\sG_0 \\subset \\sG_1 \\subset \\ldots \\subset \\sG_l = \\sG\n$$ \nsuch that the graded pieces $\\sG_{i}\/\\sG_{i-1}$ are stable for all $i \\in \\{ 1, \\ldots, l\\}$.\n\\end{definition}\n\nEvery semistable sheaf admits a Jordan-H\\\"older filtration \\cite[Prop.1.5.2]{HL97}. Moreover, given a torsion-free sheaf $E$\nwith Harder-Narasimhan filtration\n$$\n0 = \\sE_0 \\subset \\sE_1 \\subset \\ldots \\subset \\sE_m = E\n$$\nwe can use the Jordan-H\\\"older filtration of every graded piece $\\sE_{j}\/\\sE_{j-1}$ to obtain a refined filtration\n\\begin{equation} \\label{stablefiltration}\n0 = \\sF_0 \\subset \\mathcal{F}_{1}\\subset \\mathcal{F}_{2} \\subset \\ldots \\subset \\mathcal{F}_{k}=E\n\\end{equation}\nsuch that the graded pieces $\\sF_{i}\/\\sF_{i-1}$ are stable for all $i \\in \\{ 1, \\ldots, k \\}$.\nWe call $\\sF_\\bullet$ the {\\em stable filtration} of $E$ with respect to $\\omega_X$.\n\n\\begin{lemma} \\label{lemmafiltration}\nLet $(X, \\omega_X)$ be a compact K\\\"ahler manifold of dimension $n$, \nand let $\\pi: X\\rightarrow Y$ be a smooth fibration onto a curve $Y$.\nLet $E$ be a nef vector bundle on $X$.\n
Suppose that\n$$\nc_{1}(E)= M\\cdot\\pi^{*}\\omega_{Y}\n$$ \nfor some constant $M$ and $\\omega_Y$ a K\\\"ahler form on $Y$.\nLet $\\sF_\\bullet$ be the stable filtration \\eqref{stablefiltration} of $E$\nwith respect to $\\pi^{*}\\omega_{Y}+\\epsilon\\omega_{X}$ \nfor some $0<\\epsilon \\ll 1$. \nThen \n$$ \nc_{1}(\\mathcal{F}_{1})=a_{1}\\cdot\\pi^{*}(\\omega_{Y}) \n$$\nfor some $a_{1}\\geq 0$.\n\\end{lemma}\n\n\\begin{proof}\nIf $E$ is stable the statement is obvious, so suppose that $k \\geq 2$.\nFor $y \\in Y$ an arbitrary point, we denote by $X_y$ the fibre over $y$.\nSince $c_{1}(E)= M \\cdot\\pi^{*}(\\omega_{Y})$,\nwe have $c_{1}(E|_{X_y})=0$.\nThen $E|_{X_y}$ is numerically flat, and by the proof of \\cite[Thm 1.18]{DPS94},\nfor any reflexive subsheaf $\\mathcal{F}\\subset E$, we have\n$$c_{1}(\\mathcal{F}|_{X_y})\\wedge (\\omega_{X}|_{X_y})^{n-2}\\leq 0 .$$\nTherefore we get\n\\begin{equation} \\label{equationfourminus}\nc_{1}(\\mathcal{F})\\wedge \\pi^{*} (\\omega_{Y}) \\wedge\\omega_{X}^{n-2}\\leq 0 \\qquad \\forall \\ \\mathcal{F}\\subset E.\n\\end{equation}\nArguing as in \\cite[Lemma 1.1]{Cao13}, we see that\n\\begin{equation} \\label{equationfour}\n\\sup\\{ c_{1}(\\mathcal{F})\\wedge \\pi^* (\\omega_{Y}) \\wedge(\\omega_{X})^{n-2} \n\\mid \\mathcal{F}\\subset E\\text{ and }c_{1}(\\mathcal{F})\\wedge \\pi^* (\\omega_{Y}) \\wedge(\\omega_{X})^{n-2}< 0\\} < 0.\n\\end{equation}\nWe claim that \n\\begin{equation} \\label{equationfive}\nc_{1}(\\mathcal{F}_{1})\\wedge\\pi^{*}(\\omega_{Y})\\wedge \\omega_{X}^{n-2}=0 \\qquad\\text{and}\\qquad\nc_{1}(\\mathcal{F}_{1})\\wedge(\\omega_{X})^{n-1}\\geq 0.\n\\end{equation}\nTo prove the claim, we first notice that the nefness of $E$ implies that\n\\begin{equation} \\label{equationstar}\nc_{1}(\\mathcal{F}_{1})\\wedge(\\pi^{*}\\omega_{Y}+\\epsilon\\omega_{X})^{n-1}\\geq 0. 
\n\\end{equation} \nThe base $Y$ being a curve we have\n$$(\\pi^{*}\\omega_{Y}+\\epsilon\\omega_{X})^{n-1}=\n(n-1)\\epsilon^{n-2}\\pi^{*}(\\omega_{Y})\\wedge(\\omega_{X})^{n-2}+\\epsilon^{n-1}(\\omega_{X})^{n-1}.$$\nNote also that $c_{1}(\\mathcal{F})\\wedge(\\omega_{X})^{n-1}$ is uniformly bounded from above for \nany $\\mathcal{F}\\subset E$, cf. \\cite[Lemma 7.16]{Kob87}.\nThen \\eqref{equationstar} implies that \n$$c_{1}(\\mathcal{F}_{1})\\wedge\\pi^{*}(\\omega_{Y})\\wedge(\\omega_{X})^{n-2}\\geq -\\epsilon\\cdot C$$\nfor a constant $C$ independent of $\\epsilon$. \nSince $\\epsilon$ is sufficiently small, the uniform estimates \\eqref{equationfour} and \\eqref{equationfourminus} imply that \n$$\nc_{1}(\\mathcal{F}_{1})\\wedge\\pi^{*}(\\omega_{Y})\\wedge \\omega_{X}^{n-2}=0 .\n$$\nUsing \\eqref{equationstar} we deduce that $c_{1}(\\mathcal{F}_{1})\\wedge(\\omega_{X})^{n-1}\\geq 0$.\nThis proves the claim.\n\nCombining \\eqref{equationfive} with the assumption that $c_{1}(E)= M\\cdot\\pi^{*}\\omega_{Y}$,\nwe get\n$$c_{1}(E\/\\mathcal{F}_{1})\\wedge\\pi^{*}(\\omega_{Y})\\wedge \\omega_{X}^{n-2}=0 .$$\nCombining this with the fact that $c_{1}(E\/\\mathcal{F}_{1})$ is nef and $\\omega_{Y}^{2}=0$, \nwe obtain\n\\begin{equation} \\label{equationsix}\nc_{1}(E\/\\mathcal{F}_{1})= b \\cdot\\pi^{*}(\\omega_{Y}) \n\\end{equation}\nfor some $b \\geq 0$ by the Hodge index theorem \n(cf. Remark \\ref{corHT} of Appendix \\ref{appendixinequality}).\nThus we have $c_{1}(\\sF_{1})= a_1 \\cdot\\pi^{*}(\\omega_{Y})$ for some real number $a_1$ and\n\\eqref{equationfive} implies that $a_1 \\geq 0$. 
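In terms of the constants above the coefficient $a_{1}$ can be made explicit: combining the hypothesis $c_{1}(E)= M\\cdot\\pi^{*}\\omega_{Y}$ with \\eqref{equationsix} yields\n$$\nc_{1}(\\mathcal{F}_{1})=c_{1}(E)-c_{1}(E\/\\mathcal{F}_{1})=(M-b)\\cdot\\pi^{*}(\\omega_{Y}),\n$$\nso that $a_{1}=M-b$.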
\n\\end{proof}\n\nWe come to the main result of this subsection:\n\n\\begin{proposition} \\label{propositionfiltration}\nIn the situation of Lemma \\ref{lemmafiltration}\nthe reflexive sheaves $\\sF_i$ are subbundles of $E$, in particular they and the\ngraded pieces $\\sF_{i}\/\\sF_{i-1}$ are locally free.\nMoreover each of the graded pieces $\\sF_{i}\/\\sF_{i-1}$ is projectively flat, and \nthere exists a smooth metric $h_{i}$ on $\\sF_{i}\/\\sF_{i-1}$ such that\n$$i\\Theta_{h_{i}}(\\mathcal{F}_{i}\/\\mathcal{F}_{i-1})\n=a_{i}\\pi^{*}(\\omega_{Y})\\cdot \\Id_{\\mathcal{F}_{i}\/\\mathcal{F}_{i-1}}, $$\nfor some constant $a_{i}\\geq 0$.\n\\end{proposition}\n\n\\begin{proof}\n\n{\\em Step 1. Proof of the first statement.}\nWe first prove the statement for $i=1$.\nBy \\cite[Lemma 1.20]{DPS94} it is sufficient to prove that the induced morphism\n$$\n\\det \\sF_1 \\rightarrow \\bigwedge^{\\ensuremath{rk} \\ \\sF_1} E\n$$\nis injective as a morphism of vector bundles. Note now that the set $Z \\subset X$ where\n$\\sF_1 \\subset E$ is not a subbundle has codimension at least two: it is contained \nin the union of the loci where the torsion-free sheaves $\\sF_{j}\/\\sF_{j-1}$ are not locally free. In particular $Z$ does not\ncontain any fibre $X_y:=\\fibre{\\pi}{y}$ with $y \\in Y$. Thus for every $y \\in Y$ the restricted morphism\n\\begin{equation} \\label{inclusiondet}\n(\\det \\sF_1)|_{X_y} \\rightarrow (\\bigwedge^{\\ensuremath{rk} \\ \\sF_1} E)|_{X_y}\n\\end{equation}\nis not zero. Yet by Lemma \\ref{lemmafiltration} the line bundle $(\\det \\sF_1)|_{X_y}$ is numerically trivial\nand the vector bundle $(\\bigwedge^{\\ensuremath{rk} \\ \\sF_1} E)|_{X_y}$ is numerically flat. Thus the inclusion \\eqref{inclusiondet}\nis injective as a morphism of vector bundles.\nThen $\\sF_1$ is a subbundle of $E$ \\cite[Prop.1.16]{DPS94}.\n\nNow $E\/\\sF_1$ is a nef vector bundle on $X$. 
Moreover, Lemma \\ref{lemmafiltration} implies that \n$c_1 (E\/\\sF_1)=M'\\cdot \\pi^* (\\omega_Y)$ for some constant $M'$.\nThen we can argue by induction on $E\/\\sF_1$, and the first statement is proved.\n\n{\\em Step 2. The graded pieces are projectively flat.}\nApplying Lemma \\ref{lemmafiltration} to $E\/\\sF_{i-1}$, we obtain that $c_1(\\sF_{i}\/\\sF_{i-1})=a_{i}\\cdot \\pi^* (\\omega_Y)$ for some constant $a_i$, in particular\n$c_1^2 (\\sF_{i}\/\\sF_{i-1} )=0$.\nSince $\\sF_{i}\/\\sF_{i-1}$ is $(\\pi^{*}\\omega_{Y}+\\epsilon\\omega_{X})$-stable,\nto prove that $\\sF_{i}\/\\sF_{i-1}$ is projectively flat, by \\cite[Thm.4.7]{Kob87} it is sufficient to prove\nthat \n$$\nc_2(\\sF_{i}\/\\sF_{i-1}) \\cdot (\\pi^{*}\\omega_{Y}+\\epsilon\\omega_{X} )^{n-2}=0 .\n$$\nSince $c_1(\\sF_{i}\/\\sF_{i-1})$ is a pull-back from the curve $Y$ for every $i \\in \\{ 1, \\ldots, k\\}$, it is easy to see that\n$$\nc_2(E) = \\sum_{i=1}^{k} c_2(\\sF_{i}\/\\sF_{i-1}).\n$$\nSince we have $c_2(\\sF_{i}\/\\sF_{i-1}) \\cdot (\\pi^{*}\\omega_{Y}+\\epsilon\\omega_{X})^{n-2} \\geq 0$ \nfor every $i \\in \\{ 1, \\ldots, k\\}$ by \\cite[Thm.4.7]{Kob87}, \nit remains to show that $c_2(E) \\cdot (\\pi^{*}\\omega_{Y}+\\epsilon\\omega_{X})^{n-2}=0$. \nYet $E$ is nef with $c_1(E)^2=0$, \nso this follows immediately from the Chern class inequalities\nfor nef vector bundles \\cite[Cor.2.6]{DPS94}.\n\\end{proof}\n\n\\subsection{Positivity of $\\pi_*(\\omega_X^{\\otimes -m})$}\n\\label{subsectionanticanonical}\n\nLet $X$ be a normal compact K\\\"ahler space with at most terminal Gorenstein singularities, \nand let \\holom{\\pi}{X}{T} be a fibration such that $-K_X$ is $\\pi$-nef and $\\pi$-big, that\nis $-K_X$ is nef on every fibre and big on the general fibre. In this case\nthe relative base-point free theorem holds \\cite[Thm.3.3]{Anc87}, i.e. for every $m \\gg 0$ the natural map\n$$\\pi^{*}\\pi_{*}(\\omega_X^{\\otimes -m})\\rightarrow \\omega_X^{\\otimes -m}$$\nis surjective. 
Thus $\\omega_X^{\\otimes -m}$ is $\\pi$-globally generated and induces a bimeromorphic morphism\n\\begin{equation} \\label{eqnrelativemodel}\n\\holom{\\mu}{X}{X'}\n\\end{equation}\nonto a normal compact K\\\"ahler space $X'$. Standard arguments from the MMP show that the bimeromorphic\nmap $\\mu$ is crepant, that is $K_{X'}$ is Cartier and we have\n$$\nK_X \\simeq \\mu^* K_{X'}.\n$$\nIn particular $X'$ has at most canonical Gorenstein singularities. The fibration $\\pi$ factors through the morphism $\\mu$,\nso we obtain a fibration\n\\begin{equation} \\label{eqnrelativefibration}\n\\holom{\\pi'}{X'}{T}\n\\end{equation}\nsuch that $-K_{X'}$ is $\\pi'$-ample. Therefore we call \\holom{\\mu}{X}{X'} the relative anticanonical model of $X$ and\n\\holom{\\pi'}{X'}{T} the relative anticanonical fibration.\n\nWe start with an elementary computation:\n\n\\begin{lemma} \\label{lemmaelementary}\nLet $V$ be a nef vector bundle over a smooth curve $C$, and let $A \\subset V$ be an ample subbundle.\nLet $Z \\subset \\ensuremath{\\mathbb{P}}(V)$ be a subvariety such that $Z \\not\\subset \\ensuremath{\\mathbb{P}}(V\/A)$. Then we have\n$$\nZ \\cdot \\sO_{\\ensuremath{\\mathbb{P}}(V)}(1)^{\\dim Z}>0.\n$$\n\\end{lemma}\n\n\\begin{proof} \nLet $\\holom{f}{\\ensuremath{\\mathbb{P}}(V)}{C}$ and $\\holom{g}{\\ensuremath{\\mathbb{P}}(A)}{C}$ be the canonical projections, and let \n$\\holom{\\mu}{X}{\\ensuremath{\\mathbb{P}}(V)}$ be the blow-up along the subvariety $\\ensuremath{\\mathbb{P}}(V\/A)$. 
\nThe restriction of $\\mu$ to any $f$-fibre \\fibre{f}{c} is the blow-up of a projective space $\\ensuremath{\\mathbb{P}}(V_c)$\nalong the linear subspace $\\ensuremath{\\mathbb{P}}(V_c\/A_c)$, so we see that we have a fibration\n$\\holom{h}{X}{\\ensuremath{\\mathbb{P}}(A)}$ which makes $X$ into a projective bundle over $\\ensuremath{\\mathbb{P}}(A)$.\n$$\n\\xymatrix{\nZ'\\ar[d]\\ar[r]& X \\ar[d]^{\\mu}\\ar[r]^{h} & \\ensuremath{\\mathbb{P}}(A)\\ar[ldd]^{g}\\\\\nZ\\ar[r] & \\ensuremath{\\mathbb{P}}(V)\\ar[d]^{f} & \\\\\n & C\n}\n$$\nSince $Z \\not\\subset \\ensuremath{\\mathbb{P}}(V\/A)$, the strict transform $Z'$ is well-defined and we have\n$$\n\\sO_{\\ensuremath{\\mathbb{P}}(V)}(1)^{\\dim Z} \\cdot Z = \\mu^* \\sO_{\\ensuremath{\\mathbb{P}}(V)}(1)^{\\dim Z} \\cdot Z'.\n$$\nWe claim that\n\\begin{equation} \\label{decompoone}\n\\mu^* \\sO_{\\ensuremath{\\mathbb{P}}(V)}(1) \\simeq h^* \\sO_{\\ensuremath{\\mathbb{P}}(A)}(1) + E,\n\\end{equation}\nwhere $E$ is the $\\mu$-exceptional divisor. Indeed we can write\n$$\n\\mu^* \\sO_{\\ensuremath{\\mathbb{P}}(V)}(1) \\simeq a h^* \\sO_{\\ensuremath{\\mathbb{P}}(A)}(1) + b E + c F,\n$$\nwhere $F$ is a $f \\circ \\mu$-fibre and $a,b,c \\in \\ensuremath{\\mathbb{Q}}$. By restricting to $F$ one easily sees \nthat we have $a=1, b=1$. 
Note now (for example by looking at the relative Euler sequence) that we have\n$\nN_{\\ensuremath{\\mathbb{P}}(V\/A)\/\\ensuremath{\\mathbb{P}}(V)} \\simeq f^* A^* \\otimes \\sO_{\\ensuremath{\\mathbb{P}}(V\/A)}(1)$.\nSince the exceptional divisor $E$ is the projectivisation of $N_{\\ensuremath{\\mathbb{P}}(V\/A)\/\\ensuremath{\\mathbb{P}}(V)}^*$ we deduce that\n$$\n- E|_E \\simeq \\sO_{\\ensuremath{\\mathbb{P}}(f^* A \\otimes \\sO_{\\ensuremath{\\mathbb{P}}(V\/A)}(-1))}(1) \\simeq (h^* \\sO_{\\ensuremath{\\mathbb{P}}(A)}(1))|_E + \\mu|_E^* \\sO_{\\ensuremath{\\mathbb{P}}(V\/A)}(-1).\n$$\nSince $\\mu^* \\sO_{\\ensuremath{\\mathbb{P}}(V)}(-1)|_E \\simeq \\mu|_E^* \\sO_{\\ensuremath{\\mathbb{P}}(V\/A)}(-1)$ we obtain $c=0$.\n\nIn order to simplify the notation, set $\\xi_V:=\\mu^* \\sO_{\\ensuremath{\\mathbb{P}}(V)}(1)$ and $\\xi_A:=h^* \\sO_{\\ensuremath{\\mathbb{P}}(A)}(1)$.\nBy \\eqref{decompoone} we have\n$$\n\\xi_V^{\\dim Z} \\cdot Z' = \\xi_V^{\\dim Z-1} \\cdot \\xi_A \\cdot Z'\n+ \\xi_V^{\\dim Z-1} \\cdot E \\cdot Z'.\n$$\nSince $Z'$ is not contained in $E$, the two terms on the right hand side are non-negative. If $\\xi_V^{\\dim Z-1} \\cdot E \\cdot Z'>0$\nwe are obviously finished, so suppose that this is not the case. Let $e$ be the dimension of $h(E \\cap Z')$. \nSince $\\sO_{\\ensuremath{\\mathbb{P}}(A)}(1)$ is ample and $\\xi_V$ is $h$-ample, we have\n$$\n\\xi_V^{\\dim Z-e-1} \\cdot \\xi_A^{e} \\cdot E \\cdot Z'>0.\n$$\nThus\n$$\nl := \\min \\{ j \\in \\ensuremath{\\mathbb{N}} \\ | \\ \\xi_V^{\\dim Z-j-1} \\cdot \\xi_A^j \\cdot E \\cdot Z'>0 \\}\n$$\nis well defined. 
An easy induction now shows that\n$$\n\\xi_V^{\\dim Z} \\cdot Z' = \\xi_V^{\\dim Z-l-1} \\cdot \\xi_A^{l+1} \\cdot Z' +\n\\xi_V^{\\dim Z-l-1} \\cdot \\xi_A^l \\cdot E \\cdot Z'>0.\n$$\n\\end{proof}\n\n\n\\begin{lemma} \\label{lemmaflat}\nLet $X$ be a compact K\\\"ahler manifold of dimension $n$ such that $-K_X$ is nef.\nLet $\\pi: X \\rightarrow T$ be the Albanese fibration, and suppose that $-K_X$ is $\\pi$-big. \nLet $\\holom{\\pi'}{X'}{T}$ be the relative anticanonical fibration. \n\nThen $\\pi'$ is flat and $E_{m}:=(\\pi')_{*}(\\omega_{X'}^{\\otimes -m})$ is locally free for $m \\in \\ensuremath{\\mathbb{N}}$.\n\\end{lemma}\n\n\\begin{proof}\nThe variety $X'$ has at most canonical singularities, so it is Cohen-Macaulay. \nThe base $T$ being smooth it is sufficient to prove that $\\pi'$ is equidimensional \\cite[III,Ex.10.9]{Har77}. \nSet $r:=\\dim T$. \nBy Proposition \\ref{propositionnumericaldimension} we know that \n$$\n(-K_X)^{n-r+1}=(-K_{X'})^{n-r+1}=0.\n$$\nIf $F' \\subset X'$ is an irreducible component of a $\\pi'$-fibre, we have\n$(-K_{X'}|_{F'})^{\\dim F'} \\neq 0$\nsince $-K_{X'}|_{F'}$ is ample. By the preceding equation we see that $\\dim F' \\leq n-r$.\nSince every irreducible component of a fibre has dimension at least $n-r$, the fibration $\\pi'$ is equidimensional, hence flat.\n\nSince $X'$ has at most canonical singularities, \nthe relative Kawamata-Viehweg theorem applies and shows that\n$R^j (\\pi')_{*}(\\omega_{X'}^{\\otimes -m})=0$ for all $j>0$.\nThe fibration $\\pi'$ being flat, the local freeness follows by cohomology and base change.\n\\end{proof}\n\n\n\\begin{lemma} \\label{lemmanef}\nIn the situation of Lemma \\ref{lemmaflat}, the vector bundle $E_{m}$ is nef for $m \\gg 1$.\n\\end{lemma}\n\n\\begin{remark}\nIf the fibration is smooth and the torus $T$ is abelian, \nthe nefness is proved in \\cite[Lemma 3.21]{DPS94}.\nSince $T$ is an arbitrary torus and $\\pi$ is a priori not necessarily smooth, \nwe use \\cite[Thm. 0.5]{DP04} and the standard regularization method (cf. \\cite[Ch.13]{Dem12}, \\cite[Sect.3]{Dem92})\nto overcome these difficulties. 
\n\\end{remark}\n\n\\begin{proof}[Proof of Lemma \\ref{lemmanef}]\n\nWe first notice that $-K_X =\\mu^* (-K_{X'})$ by construction.\nTherefore $E_{m}=\\pi_{*}(\\omega_X^{\\otimes -m})$.\nWe first fix a Stein cover $\\mathcal{U}=\\{U_{i}\\}$ of $T$ as constructed in \\cite[13.B]{Dem12}\n\\footnote{We keep the notations of \\cite[13.B]{Dem12}, which can also be found in \\cite[Sect. 3]{Dem92}.}, \nsuch that the $U_{i}$ are simply connected balls of a fixed radius $2\\delta$.\nLet $U_i'\\Subset U_i''\\Subset U_{i}$ be the balls constructed in \\cite[13.B]{Dem12}, of radius \n$\\delta$, $\\frac{3}{2}\\delta$, $2\\delta$ respectively,\nsuch that $\\{ U_i' \\}$ also covers $T$.\nLet $\\theta_{j}$ be a smooth cut-off function with support in $U_{j}''$ as constructed in \\cite[Lemma 13.11]{Dem12}.\nLet $\\varphi_{k}: T\\rightarrow T$ be an isogeny of degree $2^k$ of the torus $T$, and set $X_k :=T\\times_{\\varphi_k} X$.\nLet $L=-(m+1) K_{X_k\/T}$ and set $E_{m, k} :=\\widehat{\\pi}_{*}(K_{X_k}+L)$.\nWe have the commutative diagram\n$$\\xymatrix{\n&X_k\\ar[r]^{\\widetilde{\\varphi}_k}\\ar[d]_{\\widehat{\\pi}}& X\\ar[d]_{\\pi}\n\\\\ \\ensuremath{\\mathbb{P}} (E_{m, k})\\ar[r]^{\\pi_{1}} &T\\ar[r]^{\\varphi_k} & T \n\\\\ & U_i\\ar@{^{(}->}[u] &}$$\nNote that the cover $\\mathcal{U}=\\{U_{i}\\}$ and the cut-off functions $\\theta_{i}$ are independent of $k$.\nWe now prove that there exists a smooth metric $h$ on $\\mathcal{O}_{\\ensuremath{\\mathbb{P}}(E_{m, k})} (1)$, \nsuch that \n$$i\\Theta_{h}(\\mathcal{O}_{\\ensuremath{\\mathbb{P}}(E_{m, k})} (1))\\geq -C\\cdot \\pi_1 ^{*}( \\omega_{T} )$$\nfor a constant $C$ independent of $k$\n\\footnote{ All the constants $C, C_{1},\\cdots, C_{i}$ below are independent of $k$.}.\n\nWe fix a K\\\"ahler metric $\\omega_{X_k}$ on $X_k$.\nSince $L$ is nef and $\\widehat{\\pi}$-big, \n\\cite[Thm. 
0.5]{DP04} implies the existence of a singular metric $h_{\\widetilde{\\epsilon}_k }$ on $L$\nsuch that \n$$i\\Theta_{h_{\\widetilde{\\epsilon}_k}}(L)\\geq \\widetilde{\\epsilon}_k \\omega_{X_k}- C_{1}\\widehat{\\pi}^*( \\omega_{T} ),$$\nwhere the constant $C_{1}$ is independent of $k$, but $\\widetilde{\\epsilon}_k > 0$ depends on $k$.\nSince $L$ is nef, for any $\\epsilon > 0$, there exists a metric $h_{\\epsilon}$ such that\n$$i\\Theta_{h_{\\epsilon}}(L)\\geq -\\epsilon\\omega_{X_k} .$$\nBy combining these two metrics, we can easily construct a new metric $h_{\\epsilon_k}$ on $L$,\nsuch that\\footnote{We just need to take $h_{\\epsilon_k}=h_{\\widetilde{\\epsilon}_k }^{r_k}\\cdot h_{\\epsilon}^{1-r_k}$\nfor some $r_k$ small enough, and $\\epsilon\\ll r_k\\cdot \\widetilde{\\epsilon}_k$.}\n\\begin{equation}\\label{firstsingularmetric}\ni\\Theta_{h_{\\epsilon_k}}(L)\\geq \\epsilon_k \\omega_{X_k}- 2\\cdot C_{1}\\widehat{\\pi}^*(\\omega_{T})\\qquad\n\\text{and }\\qquad \\mathcal{I} (h_{\\epsilon_k})=\\mathcal{O}_{X_k}\n\\end{equation}\nfor some $\\epsilon_k > 0$.\nSince $\\mathcal{I} (h_{\\epsilon_k})=\\mathcal{O}_{X_k}$ and the $U_i$ are simply connected Stein varieties, \nwe can suppose that the $L^2$-bounded (with respect to $h_{\\epsilon_k}$) \nelements of $H^{0}(\\widehat{\\pi}^{-1}(U_{i}), K_{X_k}+L)$\ngenerate $E_{m,k}$ over $U_i$. 
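For completeness we note that the curvature bound in \\eqref{firstsingularmetric} follows directly from the interpolation in the footnote: for $0< r_k\\leq 1$ we have\n$$\ni\\Theta_{h_{\\epsilon_k}}(L)= r_k\\, i\\Theta_{h_{\\widetilde{\\epsilon}_k}}(L)+(1-r_k)\\, i\\Theta_{h_{\\epsilon}}(L)\n\\geq (r_k \\widetilde{\\epsilon}_k-\\epsilon)\\,\\omega_{X_k}- C_{1}\\widehat{\\pi}^*(\\omega_{T}),\n$$\nso that \\eqref{firstsingularmetric} holds with $\\epsilon_k := r_k\\widetilde{\\epsilon}_k-\\epsilon> 0$.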
\n\nLet $\\{\\widehat{e}_{i , j}\\}_{j}$ be an orthonormal basis of $H^{0}(\\widehat{\\pi}^{-1}(U_{i}), K_{X_k}+L)$ \nwith respect to $h_{\\epsilon_k}$,\ni.e., $\\int_{\\widehat{\\pi}^{-1}(U_{i})} \\langle\\widehat{e}_{i , j}, \\widehat{e}_{i , j'}\\rangle_{h_{\\epsilon_k}}=\\delta_{j, j'}$.\nThen $\\widehat{e}_{i , j}$ induces an element $e_{i,j}\\in H^{0}(\\pi_{1}^{-1}(U_i), \\O_{\\ensuremath{\\mathbb{P}}(E_{m,k})} (1))$.\nWe now define a smooth metric $h_{i}$ on $\\O_{\\ensuremath{\\mathbb{P}}(E_{m,k})} (1) $ over $\\pi_{1}^{-1}(U_i)$ by \n$$\\|\\cdot\\|_{h_i}^2=\\frac{\\|\\cdot\\|_{h_{0}}^2}{\\sum\\limits_{j}\\|e_{i, j}\\|_{h_{0}}^2} ,$$\nwhere $h_{0}$ is a fixed metric on $\\O_{\\ensuremath{\\mathbb{P}}(E_{m,k})} (1) $.\nThanks to the construction,\n$h_i$ is smooth and $i\\Theta_{h_i}(\\O_{\\ensuremath{\\mathbb{P}}(E_{m,k})} (1) )$ is semi-positive on $\\pi_{1}^{-1}(U_i)$.\n\nWe claim that \n\\begin{equation}\\label{equation75}\n\\frac{1}{C_{2}}\\leq \\frac{\\sum\\limits_{j} \\|e_{i, j}\\|_{h_{0}}^2 (z)}{\\sum\\limits_{j} \\|e_{i', j}\\|_{h_{0}}^2 (z)}\\leq C_{2}\n\\qquad \\text{for }z\\in \\pi_{1}^{-1}(U_{i}''\\cap U_{i'}'') ,\n\\end{equation}\nfor some $C_{2}> 0$ independent of $z, k, i, i'$.\nThe proof is almost the same as in \\cite[Lemma 13.10]{Dem12}, except that we use the metric \n$\\epsilon_k\\cdot\\omega_{X_k}+\\widehat{\\pi}^{*}\\omega_T$\ninstead of $\\omega_{X_k}$ in the estimate.\nWe postpone the proof of the claim \\eqref{equation75} to Lemma \\ref{gluing}\nand first finish the proof of Lemma \\ref{lemmanef}.\n\nWe now define a global metric $h$ on $\\mathcal{O}_{\\ensuremath{\\mathbb{P}}(E_{m,k})} (1)$ by\n$$\\|\\cdot\\|_{h}^{2}=\\|\\cdot\\|_{h_{0}}^{2} e^{- \\sum\\limits_{i} (\\pi_1 ^{*}(\\theta_{i}'))^{2}\\cdot\\ln (\\sum\\limits_{j} \\|e_{i, j}\\|_{h_{0}}^2)} ,\n\\qquad\\text{where }(\\theta_{i}')^{2}=\\frac{\\theta_{i}^{2}}{\\sum\\limits_{l} \\theta_{l}^{2}} .$$\nNote that\n$$i (\\theta_{j}'\\partial\\overline{\\partial} 
\\theta_{j}'-\\partial\\theta_{j}'\\wedge \\overline{\\partial} \\theta_{j}' )\n\\geq -C_{3}\\cdot\\omega_{T}$$\nby construction.\nCombining this with \\eqref{equation75} and \napplying the Legendre identity as in the proof of \n\\cite[Lemma 13.11]{Dem12}\\footnote{Although in the proof of \\cite[Lemma 13.11]{Dem12}, $\\theta_{i}'$ is supposed to be constant on $U_i'$,\na uniform strictly positive lower bound for $\\theta_{i}'$ on $U_{i}'$ is sufficient for the proof.},\nwe obtain that\n$$i\\Theta_{h}(\\mathcal{O}_{\\ensuremath{\\mathbb{P}}(E_{m,k})} (1))\\geq -C\\cdot \\pi_1 ^{*}(\\omega_{T})$$\nfor a constant $C$ independent of $k$.\n\nBy \\cite[Prop. 1.8]{DPS94}, the metric $h$ on $\\mathcal{O}_{\\ensuremath{\\mathbb{P}}(E_{m,k})} (1)$ induces a smooth metric $h_k$\non $\\mathcal{O}_{\\ensuremath{\\mathbb{P}}(E_{m})} (1)$ such that\n$$i\\Theta_{h_k}(\\mathcal{O}_{\\ensuremath{\\mathbb{P}}(E_{m})} (1))\\geq -\\frac{C}{2^{k-1}}\\omega_{T}.$$\nThe lemma is proved by letting $k\\rightarrow +\\infty$.\n\\end{proof}\n\nWe now prove the claim \\eqref{equation75} in Lemma \\ref{lemmanef}, \nwhich is in some sense a relative gluing estimate.\n\n\\begin{lemma}\\label{gluing} In the situation of the proof of Lemma \\ref{lemmanef},\nwe have\n\\begin{equation}\\label{equationlemmagluing}\n\\frac{1}{C_{2}}\\leq \\frac{\\sum\\limits_{j} \\|e_{i, j}\\|_{h_{0}}^2 (z)}{\\sum\\limits_{j} \\|e_{i', j}\\|_{h_{0}}^2 (z)}\\leq C_{2}\n\\qquad \\text{for }z\\in \\pi_{1}^{-1}(U_{i}''\\cap U_{i'}'').\n\\end{equation} \n\\end{lemma}\n\n\\begin{proof}\n\nRecall that $U_i'\\Subset U_i''\\Subset U_{i}$ are the balls of radius $\\delta$, $\\frac{3}{2}\\delta$, $2\\delta$ respectively\nas constructed in \\cite[13.B]{Dem12}.\nLet $z$ be a fixed point in $\\pi_{1}^{-1}(U_{i}''\\cap U_{i'}'')$.\nSince $e_{i,j }$ is a section of a line bundle, we have\n$$\\sum\\limits_{j} \\|e_{i, j}\\|_{h_{0}}^2 (z)= \n\\sup\\limits_{\\sum\\limits_{j} \\mid a_j \\mid^2 =1} \\|\\sum\\limits_{j} 
a_{j} e_{i, j}\\|_{h_{0}}^{2}(z) .$$\nTherefore, there exists $\\widehat{e}_{i}\\in H^{0}(\\widehat{\\pi}^{-1}(U_{i}), K_{X_k}+L)$\nsuch that\n\\begin{equation}\\label{estimateonepiece}\n\\int_{\\widehat{\\pi}^{-1}(U_{i})} \\|\\widehat{e}_{i}\\|_{h_{\\epsilon_k}}^{2}=1\\qquad \\text{and}\\qquad\n\\|e_{i}\\|_{h_{0}}^{2} (z) = \\sum\\limits_{j} \\|e_{i, j}\\|_{h_{0}}^2 (z) ,\n\\end{equation}\nwhere $e_{i}\\in H^{0}(\\pi_{1}^{-1}(U_i), \\O_{\\ensuremath{\\mathbb{P}}(E_{m,k})} (1))$ is induced by $\\widehat{e}_{i}$.\nOur goal is to construct an element of $H^{0}(\\widehat{\\pi}^{-1}(U_{i'}), K_{X_k}+L)$ with controlled norm\nwhich is equal to $\\widehat{e}_{i}$ on $\\widehat{\\pi}^{-1}(\\pi_{1} (z))$.\n\nLet $\\theta$ be a cut-off function with support in the ball of radius $ \\frac{\\delta}{4}$ \ncentered at $\\pi_{1} ( z )$ (thus supported in $U_{i}\\cap U_{i'}$), \nand equal to $1$ on the ball of radius $\\frac{\\delta}{8}$ centered at $\\pi_{1} ( z )$.\nBy construction, $(\\widehat{\\pi}^{*}(\\theta)\\cdot\\widehat{e}_{i})$ is supported in $\\widehat{\\pi}^{-1} (U_{i}\\cap U_{i'})$, \nthus it is well defined on $\\widehat{\\pi}^{-1}(U_{i'})$.\nTherefore we can solve the $\\overline{\\partial}$-equation for $\\overline{\\partial} (\\widehat{\\pi}^{*}(\\theta)\\cdot\\widehat{e}_{i})$ on \n$\\widehat{\\pi}^{-1}(U_{i'})$\nwith respect to the metric \n\\begin{equation}\\label{newmetric}\n\\omega_{X_k,\\epsilon_k}=\\epsilon_k\\cdot\\omega_{X_k}+\\widehat{\\pi}^{*}\\omega_T \n\\end{equation}\nby choosing a good metric on $L$. \nBefore giving the good metric on $L$, we first give some estimates.\n\nSince $\\theta$ is defined on $T$, we have\n$$\\|\\overline{\\partial}\\widehat{\\pi}^{*}(\\theta)\\|_{\\omega_{X_k,\\epsilon_k}}\\leq C_4$$\nfor some constant $C_{4}$ independent of $k, \\epsilon_k$\n\\footnote{$C_4$ depends on $\\delta$. 
But by construction, the radius $\\delta$ is independent of $k$.}.\nTherefore we have\n\\begin{equation}\\label{equation5}\n\\int_{\\widehat{\\pi}^{-1}(U_{i'})}\\|\\overline{\\partial}(\\widehat{\\pi}^{*}(\\theta)\\cdot\\widehat{e}_{i})\\|_{h_{\\epsilon_k}, \\omega_{X_k,\\epsilon_k}}^{2}\n =\\int_{\\widehat{\\pi}^{-1}(U_{i})}\\|\\overline{\\partial}(\\widehat{\\pi}^{*}(\\theta)\\cdot\\widehat{e}_{i})\\|_{h_{\\epsilon_k}, \\omega_{X_k,\\epsilon_k}}^{2}\n\\end{equation}\n$$\n\\leq C_4^2 \\int_{\\widehat{\\pi}^{-1}(U_{i})}\\|\\widehat{e}_{i}\\|_{h_{\\epsilon_k}}^{2}=C_{4}^2 ,\n$$\nwhere the first equality comes from the fact that $(\\widehat{\\pi}^{*}(\\theta)\\cdot\\widehat{e}_{i})$ is supported in \n$\\widehat{\\pi}^{-1} (U_{i}\\cap U_{i'})$.\nBy \\eqref{firstsingularmetric} and \\eqref{newmetric}, we have\n\\begin{equation}\\label{equation6}\ni\\Theta_{h_{\\epsilon_k}}(L)\\geq \\epsilon_k\\omega_{X_k}-2\\cdot C_1 \\widehat{\\pi}^{*}(\\omega_{T} )\\geq \n\\omega_{X_k,\\epsilon_k}-(2\\cdot C_1 +1) \\widehat{\\pi}^{*}(\\omega_{T} ). 
\n\\end{equation}\n\nWe now define a metric $\\widetilde{h}_{\\epsilon_k} = h_{\\epsilon_k}\\cdot e^{-(n+1) \\widehat{\\pi}^{*}(\\ln |t-\\pi_{1} (z)|)-\\widehat{\\pi}^{*}\\psi_{i'}(t)}$ \non $ L $ over $\\widehat{\\pi}^{-1}(U_{i'})$,\nwhere $\\psi_{i'}(t)$ is a uniformly bounded function on $U_{i'}$ satisfying \n$$dd^c \\psi_{i'}(t)\\geq (2 C_1 +1) \\omega_{T} .$$\nThen \\eqref{equation6} implies that \n$$i\\Theta_{\\widetilde{h}_{\\epsilon_k}}(L)\\geq \\omega_{X_k,\\epsilon_k} \\qquad \\text{on }\\widehat{\\pi}^{-1}(U_{i'}).$$\nBy solving the $\\overline{\\partial}$-equation for $\\overline{\\partial} (\\widehat{\\pi}^{*}(\\theta)\\cdot\\widehat{e}_{i})$ \nwith respect to $(\\widetilde{h}_{\\epsilon_k} , \\omega_{X_k,\\epsilon_k} )$ on $\\widehat{\\pi}^{-1}(U_{i'})$, \nwe can find $g_{i'}\\in L^{2}(\\widehat{\\pi}^{-1}(U_{i'}), K_{X_k}+L)$ such that\n\\begin{equation}\\label{solutionpartial}\n\\overline{\\partial}g_{i'}=\\overline{\\partial}(\\widehat{\\pi}^{*}(\\theta)\\widehat{e}_{i})\n\\end{equation}\nand\n\\begin{equation}\\label{equation7}\n\\int_{\\widehat{\\pi}^{-1}(U_{i'})}\\|g_{i'}\\|_{\\widetilde{h}_{\\epsilon_k}}^{2}\\leq \nC_5\\int_{\\widehat{\\pi}^{-1}(U_{i'})} \\|\\overline{\\partial}(\\widehat{\\pi}^{*}(\\theta)\n\\cdot\\widehat{e}_{i})\\|_{\\widetilde{h}_{\\epsilon_k}, \\omega_{X_k,\\epsilon_k}}^{2}\n\\leq C_{6},\n\\end{equation}\nwhere the second inequality comes from \\eqref{equation5} \nand the fact that\n$$\n\\overline{\\partial}(\\widehat{\\pi}^{*}(\\theta)\\cdot\\widehat{e}_{i})=0 \\qquad \\text{on }\\widehat{\\pi}^{-1}(B_{\\frac{\\delta}{8}}(\\pi_{1}(z))),\n$$\nwhere $B_{\\frac{\\delta}{8}}(\\pi_{1}(z))$ is the ball of radius $\\frac{\\delta}{8}$ centered at $\\pi_{1}(z)$.\nBy \\eqref{solutionpartial}, we obtain a holomorphic section \n$$\\widehat{e}_{i'} := (\\widehat{\\pi}^{*}(\\theta)\\cdot \\widehat{e}_{i}- g_{i'} )\\in H^{0}(\\widehat{\\pi}^{-1}(U_{i'}), K_{X_k}+L) .$$\nBy the definition of the metric 
$\\widetilde{h}_{\\epsilon_k}$ and \\eqref{equation7}, \n$g_{i'}=0$ on $\\widehat{\\pi}^{-1}(\\pi_{1}(z))$.\nTherefore \n$\\widehat{e}_{i'} = \\widehat{e}_{i} $ on $\\widehat{\\pi}^{-1}(\\pi_{1} (z))$.\nMoreover, \\eqref{estimateonepiece} and \\eqref{equation7} imply that\n$$\\int_{\\widehat{\\pi}^{-1}(U_{i'})} \\|\\widehat{e}_{i'}\\|_{h_{\\epsilon_k}}^{2}=\\int_{\\widehat{\\pi}^{-1}(U_{i'})}\\|\\widehat{\\pi}^{*}(\\theta)\\cdot\\widehat{e}_{i}- g_{i'}\\|_{h_{\\epsilon_k}}^{2}\\leq C$$\nfor a constant $C$ independent of $k$.\nBy the extremal property of the Bergman kernel, \\eqref{equationlemmagluing} is proved.\n\\end{proof}\n\n\\begin{lemma} \\label{lemmanumericallyflat}\nIn the situation of Lemma \\ref{lemmaflat}, the vector bundle $E_{m}$ is numerically flat.\n\\end{lemma}\n\n\\begin{proof}\nBy Lemma \\ref{lemmanef} it is sufficient to prove that $c_{1}(E_{m})=0$.\nArguing by contradiction we suppose that $c_{1}(E_{m})\\neq 0$.\nThen \\cite[Prop.2.2]{Cao12} implies that there exists a smooth fibration \n$\\pi_{1}: T\\rightarrow S$ onto an abelian variety $S$ of dimension $s$ such that \n$$\nc_{1}(E_{m})=c\\cdot\\pi^{*}_{1} c_1(A)\n$$\nfor some very ample line bundle $A$ and some $c> 0$.\n\nLet $S_{1}$ be a complete intersection of $s-1$ hypersurfaces defined by $s-1$ general elements of $H^{0}(S, A)$.\nWe thus have morphisms\n$$\\begin{CD}\n X_{1} @>\\pi|_{X_1}>> T_{1} @>{\\pi_1}|_{T_1}>> S_{1}\n \\end{CD}\n$$\nwhere $X_{1} :=\\pi^{-1}\\pi_{1}^{-1}(S_{1})$ and $T_{1} :=\\pi_{1}^{-1}(S_{1})$\nare smooth by Bertini's theorem.\nFor simplicity of notation we set $E_{m}':=E_{m}|_{T_{1}}$. \nThen $E_{m}'$ is nef and $c_{1}(E_{m}')=c\\cdot (\\pi_{1}|_{T_1})^* c_1(A)$. 
\nApplying Proposition \\ref{propositionfiltration}, \nwe obtain a semipositive projectively flat vector bundle $0\\subset F_{1}\\subset E_{m}'$ on $T_{1}$\nsuch that \n$c_{1}(F_{1})=\\pi_{1}^{*} (\\omega_{S_{1}})$\nfor some K\\\"ahler form $\\omega_{S_{1}}$ on $S_{1}$.\n\nOur goal is to show that \n$$\n(\\sO_{\\ensuremath{\\mathbb{P}} (E_{m}')}(1))^{n-r+1} \\cdot X_{1} > 0.\n$$\nSince $\\sO_{\\ensuremath{\\mathbb{P}} (E_{m}')}(1)|_{X_1} \\equiv (-m K_X)|_{X_1}$ this gives a contradiction \nto Proposition \\ref{propositionnumericaldimension}. \nNote first that $X_1$ is not contained in any projective subbundle of $\\ensuremath{\\mathbb{P}} (E_{m}')$ since\nthe general $\\pi$-fibre $F$ is embedded by the complete linear system $|-m K_F|$, so it is linearly non-degenerate.\nThus if $\\pi_1$ is an isomorphism (which is equivalent to $\\det E_m$ being ample) \nthe statement follows from Lemma \\ref{lemmaelementary}.\nIn the general case we follow a similar construction: let \n$\\mu: Y\\rightarrow \\mathbb{P}(E_{m}')$ be the blow-up along the subvariety \n$\\mathbb{P}(E_{m}'\/F_{1})$.\nSince $X_{1}$ is not contained in $\\mathbb{P}(E_{m}'\/F_{1})$, we have a diagram\n$$\n\\xymatrix{\nX_{1}'\\ar[d]\\ar[r]^{i}& Y \\ar[d]^{\\mu}\\ar[r]^{h} & \\ensuremath{\\mathbb{P}}(F_{1})\\ar[ldd]^{g}\\\\\nX_{1}\\ar[r] \\ar[rd]_{\\pi|_{X_1}} & \\ensuremath{\\mathbb{P}}(E_{m}')\\ar[d]^{f} & \\\\\n & T_{1}\\ar[d]^{\\pi_{1}}\\\\\n & S_{1}\n}\n$$\nwhere $X_{1}'$ is the strict transform of $X_{1}$ and $f,g$ and $h$ are the natural maps as in the proof \nof Lemma \\ref{lemmaelementary}.\nBy the same argument as in \\eqref{decompoone} of Lemma \\ref{lemmaelementary}, we have\n$$\n\\mu^* \\sO_{\\ensuremath{\\mathbb{P}} (E_{m}')}(1) \\simeq h^* \\sO_{\\ensuremath{\\mathbb{P}}(F_{1})}(1) + E,\n$$\nwhere $E$ is the $\\mu$-exceptional divisor.\nSince $\\sO_{\\ensuremath{\\mathbb{P}} (E_{m}')}(1)$ is nef,\nwe have\n\\begin{equation}\\label{equationaddd}\n(\\mu^* \\sO_{\\ensuremath{\\mathbb{P}} 
(E_{m}')}(1))^{n-r+1} \\cdot X_{1}'\\geq\n(\\mu^* \\sO_{\\ensuremath{\\mathbb{P}} (E_{m}')}(1))^{n-r}\\cdot h^* \\sO_{\\ensuremath{\\mathbb{P}}(F_{1})}(1)\\cdot X_{1}' .\n\\end{equation}\nBy Proposition \\ref{propositionfiltration}, there is a smooth metric $h_0$ on $F_{1}$\nsuch that \n$$i \\Theta_{h_0} (F_{1}) =\\pi_{1}^*\\omega_{S_1}\\cdot \\Id_{F_{1}} .$$\nOn $\\ensuremath{\\mathbb{P}}(F_{1})$ we have\n$$\ni \\Theta_{h_0} (g^*F_{1}) = g^*\\pi_{1}^*\\omega_{S_1}\\cdot \\Id_{g^*F_{1}}.\n$$\nThe metric $h_0$ induces a natural metric $h_0 ^{'}$ on $\\sO_{\\ensuremath{\\mathbb{P}}(F_{1})}(1)$,\nand by \\cite[Prop. 1.11]{DPS94},\nwe obtain\n$$\ni \\Theta_{h_0 ^{'}}(\\sO_{\\ensuremath{\\mathbb{P}}(F_{1})}(1))\\geq g^*\\pi_{1}^*\\omega_{S_1}.\n$$ \nSince $g\\circ h=f\\circ\\mu$, we get\n$h^*i \\Theta_{h_0 ^{'}} (\\sO_{\\ensuremath{\\mathbb{P}}(F_{1})}(1))\\geq \\mu^{*} f^{*}\\pi_{1}^{*}\\omega_{S_{1}}$.\nCombining this with the fact that $f\\circ\\mu\\circ i (X_{1}')=T_1$ by construction, \nwe obtain \n$$h^* \\sO_{\\ensuremath{\\mathbb{P}}(F_{1})}(1)\\cdot X_{1}'\\geq C\\cdot X_{1, s}',$$\nwhere $X_{1,s}'$ is the general fiber of $\\pi_{1}\\circ f\\circ\\mu\\circ i$, and $C> 0$.\nCombining this with the fact that $\\sO_{\\ensuremath{\\mathbb{P}} (E_{m}')}(1)$ is $f$-ample,\nwe get\n$$(\\mu^* \\sO_{\\ensuremath{\\mathbb{P}} (E_{m}')}(1))^{n-r}\\cdot h^* \\sO_{\\ensuremath{\\mathbb{P}}(F_{1})}(1)\\cdot X_{1}' \\neq 0 .$$\nWe conclude by \\eqref{equationaddd}.\n\\end{proof}\n\n\\subsection{The projective case}\n\nIn Section \\ref{sectiondimensiontwo} we will need the following\nversions of Lemma \\ref{lemmanef} and Lemma \\ref{lemmanumericallyflat}.\n\n\\begin{lemma} \\label{lemmakltdirectimage}\nLet $X$ be a normal projective variety and $\\Delta$ a boundary divisor on $X$ such that\nthe pair $(X, \\Delta)$ is klt. Let \\holom{\\varphi}{X}{T} be a flat fibration onto an abelian\nvariety. Let $L$ be a Cartier divisor on $X$ such that $L-(K_X+\\Delta)$ \nis nef and $\\varphi$-big. 
Then\nthe following holds:\n\\begin{enumerate}[(i)]\n\\item If $H$ is an ample line bundle on $T$, then $\\varphi_* (\\sO_X(L)) \\otimes H$ is ample.\n\\item The direct image sheaf $\\varphi_* (\\sO_X(L))$ is nef.\n\\end{enumerate} \n\\end{lemma}\n\n\\begin{proof}\nNote first that since $L-(K_X+\\Delta)$ is relatively nef and big, \nthe higher direct images $R^j \\varphi_* (\\sO_X(L))$\nvanish for all $j>0$ by the relative Kawamata-Viehweg theorem. By cohomology and base change the sheaf\n$\\varphi_* (\\sO_X(L))$ is locally free.\n\n{\\em Proof of (i).} Since $H$ is ample and $L-(K_X+\\Delta)$ is nef and $\\varphi$-big, \nthe divisor $L+\\varphi^*H-(K_X+\\Delta)$ is nef and big.\nSo if $P \\in \\mbox{Pic}^0(T)$ is a numerically trivial line bundle, then\nby Kawamata-Viehweg vanishing one has\n\\[\nH^j(T, \\varphi_* (\\sO_X(L)) \\otimes H \\otimes P) \n=\nH^j(X, L \\otimes \\varphi^*(H \\otimes P)) = 0 \n\\qquad\n\\forall \\ j > 0. \n\\]\nThus $ \\varphi_* (\\sO_X(L)) \\otimes H$ satisfies Mukai's property $IT_0$ \\cite{Muk81}. \nIn particular it is ample by \\cite[Cor.3.2]{Deb06}.\n\n{\\em Proof of (ii).} Let \\holom{\\mu}{T}{T} be the multiplication map $x \\mapsto 2x$.\nLet $H$ be a symmetric ample divisor; then by the theorem of the square\n\\cite[Cor.2.3.6]{BL04} we have $\\mu^* H \\simeq H^{\\otimes 4}$. \nIt is sufficient to show that for $d \\in \\ensuremath{\\mathbb{N}}$ arbitrary, the $\\ensuremath{\\mathbb{Q}}$-twisted \\cite[Sect.6.2]{Laz04b} vector bundle\n$\\varphi_* (\\sO_X(L))\\hspace{-0.8ex}<\\hspace{-0.8ex}\\frac{1}{4^d}H\\hspace{-0.8ex}>$\nis nef. 
Denote by $\\mu_d$ the $d$-th iteration of $\\mu$ and consider the base change diagram\n\\[\n\\xymatrix{\nX_d\n\\ar[d]_{\\tilde \\varphi} \\ar[r]^{\\tilde \\mu_d} & X \\ar[d]_\\varphi\n\\\\\nT \\ar[r]^{\\mu_d} & T\n}\n\\]\nwhere $X_d := X \\times_T T$ is the fibre product taken with respect to $\\mu_d$.\nBy flat base change \\cite[Prop.9.3]{Har77} we have\n\\[\n\\mu_d^*(\\varphi_* (\\sO_X(L))\\hspace{-0.8ex}<\\hspace{-0.8ex}\\frac{1}{4^d}H\\hspace{-0.8ex}>)\n\\sim_\\ensuremath{\\mathbb{Q}}\n\\tilde \\varphi_* (\\sO_{X_d}(\\tilde \\mu_d^*L))\\hspace{-0.8ex}<\\hspace{-0.8ex}\\frac{1}{4^d}\\mu_d^* H\\hspace{-0.8ex}> .\n\\]\nSince $\\frac{1}{4^d} \\mu_d^* H \\sim_\\ensuremath{\\mathbb{Q}} H$, we see that\n$\\mu_d^*(\\varphi_* (\\sO_X(L))\\hspace{-0.8ex}<\\hspace{-0.8ex}\\frac{1}{4^d}H\\hspace{-0.8ex}>)\n\\sim_\\ensuremath{\\mathbb{Q}}\n\\tilde \\varphi_* (\\sO_{X_d}(\\tilde \\mu_d^*L)) \\otimes H\n$\nwhich by the first statement is ample. \n\\end{proof}\n\n\\begin{proposition} \\label{propositionkltdirectimage}\nLet $X$ be a normal projective variety and $\\Delta$ a boundary divisor on $X$ such that\nthe pair $(X, \\Delta)$ is klt. Let \\holom{\\varphi}{X}{T} be a flat fibration onto a\nsmooth curve or an abelian\nvariety. Suppose that $-(K_{X\/T}+\\Delta)$ is nef and $\\varphi$-ample.\nThen \n$$\nE_{m}:=\\varphi_{*}(\\sO_X(-m (K_{X\/T}+\\Delta)))\n$$ \nis a numerically flat vector bundle\nfor all sufficiently large and divisible $m \\gg 0$.\n\\end{proposition}\n\n\\begin{proof} \nFor sufficiently divisible $m \\in \\ensuremath{\\mathbb{N}}$ the $\\ensuremath{\\mathbb{Q}}$-divisor $m (K_{X\/T}+\\Delta)$ is integral\nand Cartier.\n\n{\\em 1st case. $T$ is a curve.} Then $E_m$ is nef \\cite{Kol86},\nand for $m \\gg 0$ we have an inclusion\n$X \\hookrightarrow \\ensuremath{\\mathbb{P}}(E_m)$.\nWe can now argue as in Lemma \\ref{lemmanumericallyflat}: if $E_m$ is not numerically flat, there\nexists an ample subbundle $A \\subset E_m$. 
By \nLemma \\ref{lemmanumericaldimensionprojective} we have\n$(K_{X\/T}+\\Delta)^{\\dim X}=0$, so Lemma \\ref{lemmaelementary} implies that $X \\subset \\ensuremath{\\mathbb{P}}(E_m\/A)$.\nHowever this is not possible since the embedding of the general fibre $X_t$ in $\\ensuremath{\\mathbb{P}}((E_m)_t)$ is linearly nondegenerate.\n\n{\\em 2nd case. $T$ is an abelian variety.} By Lemma \\ref{lemmakltdirectimage}\nthe sheaf $E_m$\nis a nef vector bundle.\nIf $C$ is a general complete intersection curve in $T$,\ndenote by $X_C:=\\fibre{\\varphi}{C}$ its preimage. Then the pair\n$(X_C, \\Delta_C:=\\Delta \\cap X_C)$ is klt and the relative anticanonical divisor\n$-(K_{X_C\/C}+\\Delta_C)$\nis nef and relatively ample. By the first case $E_m \\otimes \\sO_C$\nis numerically flat, so $\\det E_m$ is numerically \ntrivial \\cite[3.8]{Deb01}.\n\\end{proof}\n\n\n\n\\section{Proof of Theorem \\ref{theoremmain}}\n\nLet $X'$ be a normal compact K\\\"ahler space that admits a flat fibration $\\pi': X' \\rightarrow T$ \nonto a compact K\\\"ahler manifold $T$. Let $L$ be a line bundle on $X'$ that is $\\pi'$-ample;\nfor all $m \\in \\ensuremath{\\mathbb{N}}$ we set $E_m := (\\pi')_* (\\sO_{X'}(mL))$. We fix $m \\gg 0$ such that we have\nan embedding $X' \\hookrightarrow \\mathbb{P}(E_{m})$; for simplicity's sake we denote by\n$\\holom{\\pi'}{\\ensuremath{\\mathbb{P}}(E_m)}{T}$ the natural map. Let $\\sI_{X'} \\subset \\sO_{\\ensuremath{\\mathbb{P}}(E_m)}$ be the ideal\nsheaf of $X'$. Then we define for every $d \\in \\ensuremath{\\mathbb{N}}$\n$$\nS_{m, d} := (\\pi')_{*}(\\sI_{X'}\\otimes \\mathcal{O}_{\\mathbb{P}(E_{m})}(d)).\n$$\n\n\\begin{proposition} \\label{propositionloctriv}\nIn the situation above suppose that\n$E_{m}$ and $S_{m, d}$ are numerically flat vector bundles\nfor all $m \\gg 1$ and $d\\gg 1$. 
Then the fibration $\\pi'$ is locally trivial.\n\\end{proposition}\n\n\\begin{proof}\nWe have a natural inclusion $i: S_{m, d}\\hookrightarrow S^d E_m$.\nSince numerically flat vector bundles are local systems \n(cf. \\cite[Sect.3]{Sim92} for the general case or \\cite[Lemma 6.5]{Ver04} for numerically flat vector bundles),\n$S_{m, d}$ and $S^d E_m$ are local systems on $T$.\nLet $U$ be any small Stein open set in $T$, and let $e_{1},\\cdots, e_{k}$ be a locally constant frame of $S_{m, d}$\nover $U$.\nNote that $\\Hom (S_{m, d}, S^d E_m)$ is also a local system on $T$, and \n$i\\in H^{0}(T, \\Hom (S_{m, d}, S^d E_m))$.\nBy \\cite[Lemma 4.1]{Cao12}, $i$ is \nparallel\\footnote{Global sections of an arbitrary local system need not be parallel;\nhowever, this holds for numerically flat bundles.}\nwith respect to the local system \n$\\Hom (S_{m, d}, S^d E_m)$.\nTherefore the images of $e_{1},\\cdots, e_{k}$ in $S^d E_m$ are also locally constant,\ni.e. the polynomials defining the fibers $X'_{t}$ for $t\\in U$ are locally constant.\nIn particular the fibration $\\pi'$ is locally trivial.\n\\end{proof}\n\n\n\\begin{proof}[Proof of Theorem \\ref{theoremmain}]\n{\\em Step 1. The relative anticanonical fibration is locally trivial.}\nWe follow the argument of \\cite[Thm.3.20]{DPS94}. Denote by $X'$ the relative anticanonical model of $X$ \n(cf. Section \\ref{subsectionanticanonical}).\nFix $m \\gg 0$ such that we have an inclusion $X'\\hookrightarrow \\mathbb{P}(E_{m})$\nwhere $E_m$ is defined as in Lemma \\ref{lemmaflat} and denote by $\\pi'$ both the anticanonical fibration\n$X' \\rightarrow T$ and $\\ensuremath{\\mathbb{P}}(E_m) \\rightarrow T$. 
\nFor every $d \\in \n\\ensuremath{\\mathbb{N}}$ we have an exact sequence\n$$\n0 \\rightarrow \\sI_{X'}\\otimes \\mathcal{O}_{\\mathbb{P}(E_{m})}(d) \\rightarrow\n\\mathcal{O}_{\\mathbb{P}(E_{m})}(d) \\rightarrow \\omega_{X'}^{\\otimes -md} \\rightarrow 0.\n$$\nSince $\\sO_{\\ensuremath{\\mathbb{P}}(E_{m})}(1)$ is $\\pi'$-ample we get for $d \\gg 0$ an exact sequence\n$$\n0 \\rightarrow (\\pi')_{*}(\\sI_{X'}\\otimes \\mathcal{O}_{\\mathbb{P}(E_{m})}(d)) \n\\rightarrow S^d E_m \\rightarrow E_{md} \\rightarrow 0.\n$$\nThe vector bundles $S^d E_m$ and $E_{md}$ are numerically flat, so $(\\pi')_{*}(\\sI_{X'}\\otimes \\mathcal{O}_{\\mathbb{P}(E_{m})}(d))$\nis numerically flat. Now apply Proposition \\ref{propositionloctriv}.\n\n{\\em Step 2. The Albanese map $\\pi$ is locally trivial.} Let $F'$ be the general fibre of the relative anticanonical fibration, and\nlet $F_t$ be any smooth $\\pi$-fibre.\nThen $F_t$ is a weak Fano manifold, and $\\holom{\\mu|_{F_t}}{F_t}{F'}$ is a crepant birational morphism,\nso $F_t$ is a terminal model of the Fano variety $F'$.\nBy \\cite[Cor.1.1.5]{BCHM10} there are only finitely many terminal models of $F'$. \nThus there are only finitely many possible isomorphism classes for $F_t$,\nhence there exists a non-empty Zariski open subset $T_0 \\subset T$ such that $\\fibre{\\pi}{T_0} \\rightarrow T_0$ \nis locally trivial with fibre $F$. Fix now an ideal sheaf $\\sI$ on $F'$ such that $F$ is isomorphic to the blow-up of $F'$\nalong $\\sI$.\n\nLet $t \\in T$ be an arbitrary point, and let $t \\in U \\subset T$ be an analytic neighbourhood such that\n$X'_U:=\\fibre{(\\pi')}{U} \\simeq U \\times F'$. 
Let \\holom{\\tilde \\mu}{\\tilde X_U}{X'_U} be the blow-up\nof $X'_U$ along the ideal sheaf $\\sI \\otimes \\sO_U$; then $\\tilde X_U \\simeq U \\times F$.\nIn particular $\\tilde X_U$ is smooth and the birational morphism $\\tilde \\mu$ is crepant.\nSet $X_U :=\\fibre{\\pi}{U}$; then $X_U$ is also smooth and $\\holom{\\mu|_{X_U}}{X_U}{X'_U}$\nis crepant. Thus $X_U$ and $\\tilde X_U$ are both minimal models over the base $X'_U$ (cf. \\cite[Defn.3.48]{KM98}),\nhence the induced birational map $\\tau: X_U \\dashrightarrow \\tilde X_U$ is an isomorphism in codimension one \\cite[Thm.3.52]{KM98}.\nMoreover by the universal property of the blow-up the restriction of $\\tau$\nto \n$$\n(X_U \\cap \\fibre{\\pi}{T_0}) \\dashrightarrow (\\tilde X_U \\cap \\fibre{(\\pi' \\circ \\tilde \\mu)}{T_0})\n$$ \nis an isomorphism.\nLet $H$ be an effective $\\pi$-ample divisor on $X_U$, and let $H':=\\tau_* H$ be its strict transform.\nThen $H'$ is $\\pi' \\circ \\tilde \\mu$-nef: indeed if $C$ is any (compact) curve in $\\tilde X_U \\simeq U \\times F$, it\ndeforms to a curve $C'$ that is contained in $e \\times F$ with $e \\in (U \\cap T_0)$. Yet the restriction \nof $H'$ to $e \\times F$ is ample, since $\\tau$ is an isomorphism in a neighbourhood of $e \\times F$.\nThus we see that $H' \\cdot C = H' \\cdot C'>0$. Since $X_U$ is smooth, the conditions\nof \\cite[Lemma 6.39]{KM98} are satisfied, so $\\tau$ is an isomorphism.\nThis shows that $X_U \\rightarrow U$ is locally trivial with fibre $F$; since $t \\in T$ was arbitrary, this concludes the proof.\n\\end{proof}\n\n\\begin{proof}[Proof of Corollary \\ref{corollarymain}]\nSuppose first that $q(X)=\\dim X-1$. If $-K_X$ is $\\pi$-big we see by Theorem \\ref{theoremmain} that the Albanese map is a $\\ensuremath{\\mathbb{P}}^1$-bundle. If $-K_X$ is not $\\pi$-big, the general fibre is an elliptic curve. Note that the $C_{n, n-1}$-conjecture is also known\nin the K\\\"ahler case \\cite[Thm.2.2]{Uen87}, so we see that $\\kappa(X) \\geq 0$. 
Yet $-K_X$ is nef, so we see that $-K_X \\equiv 0$.\nWe conclude by the Beauville-Bogomolov decomposition. \n\\end{proof}\n\nThe proof of Theorem \\ref{theoremmain} works also in the following relative situation which we will use\nin Section \\ref{sectiondimensiontwo}.\n\n\\begin{corollary} \\label{corollaryMFS}\nLet $X$ be a normal $\\ensuremath{\\mathbb{Q}}$-factorial projective variety with at most terminal singularities,\nand let $\\holom{\\varphi}{X}{C}$ be a fibration onto a smooth curve \nsuch that $-K_{X\/C}$ is nef and $\\varphi$-big. If the general fibre is smooth,\nthen $\\varphi$ is locally trivial in the analytic topology.\n\\end{corollary}\n\n\n\n\\section{Two-dimensional fibres} \\label{sectiondimensiontwo}\n\nIn this section we will prove Theorem \\ref{theoremmaintwo}. While the positivity of direct image sheaves\nwill still play an important role, we need some additional geometric information to deduce the smoothness of the Albanese map.\nThis information will be obtained by describing very precisely the MMP.\n\n\\subsection{Reduction to the curve case}\n\nThe following lemma shows that the nefness condition imposes strong restrictions on the singularities\nof a fibre space.\n\n\\begin{lemma} \\label{lemmaimportant}\nLet $M$ be a normal projective variety, and let $\\holom{\\varphi}{M}{C}$ be a fibration onto a curve such\nthat $-K_{M\/C}$ is $\\ensuremath{\\mathbb{Q}}$-Cartier and nef.\nLet $\\Delta$ be a boundary divisor on $M$ such that $\\Delta \\equiv -\\alpha K_{M\/C}$ for some $\\alpha \\in [0, 1]$.\n\nIf the pair $(M, \\Delta)$ is lc (resp. klt) over the generic point of $C$, the pair \n$(M, \\Delta)$ is lc (resp. klt).\nIf the pair $(M, \\Delta)$ is lc, every lc center of $(M, \\Delta)$ surjects onto $C$.\n\\end{lemma}\n\n\\begin{proof}\nLet $\\holom{\\mu}{M'}{M}$ be a dlt blow-up \\cite[Thm.10.4]{Fuj11}, i.e. 
$\\mu$ is a birational morphism from a normal $\\ensuremath{\\mathbb{Q}}$-factorial\nvariety $M'$ such that if we set\n$$\n\\Delta' := \\mu_*^{-1} \\Delta + \\sum_{E_i \\ \\mbox{\\tiny $\\mu$-exc.}} E_i,\n$$\nthen $(M', \\Delta')$ is dlt. Moreover one has\n$$\nK_{M'} +\\Delta' = \\mu^* (K_M+\\Delta) + \\sum_{E_i \\ \\mbox{\\tiny $\\mu$-exc.}} (a_i+1) E_i,\n$$\nand $a_i \\leq -1$ for all $i$. Let $\\Delta'=\\Delta'_{hor}+\\Delta'_{vert}$ be the decomposition\ninto the horizontal and vertical part.\nSince $M'$ is $\\ensuremath{\\mathbb{Q}}$-factorial, the pair $(M', \\Delta'_{hor})$ is dlt \\cite[Cor.2.39]{KM98} and\n$$\nK_{M'} +\\Delta'_{hor}= \\mu^* (K_M+\\Delta) + \\sum_{E_i, a_i<-1} (a_i+1) E_i - \\Delta'_{vert}.\n$$\nSuppose now that $(M, \\Delta)$ is lc over the generic point of $C$.\nThen the restriction of $\\sum_{E_i \\ \\mbox{\\tiny $\\mu$-exc.}} (a_i+1) E_i$ \nto a general $(\\varphi \\circ \\mu)$-fibre $F$ vanishes. Thus the anti-effective divisor\n$\n- E := \\sum_{E_i, a_i<-1} (a_i+1) E_i - \\Delta'_{vert}\n$\nis vertical. We claim that $E=0$. Assuming this for the time being, let us see how to deduce all the statements:\nsince $\\sum_{E_i, a_i<-1} (a_i+1) E_i=0$ we know that $(M, \\Delta)$ is lc. Moreover any lc centre $W$ surjects onto $C$:\notherwise there exists a vertical divisor $E_i$ with discrepancy $-1$. Thus we have $\\Delta'_{vert} \\neq 0$, a contradiction.\n\nIf $(M, \\Delta)$ is klt over the generic point of $C$, it is lc by what precedes. \nMoreover any lc centre would be contained in a $\\varphi$-fibre, so it would\ngive a non-zero component of $\\Delta'_{vert}$. However we just proved that $\\Delta'_{vert}=0$.\n\n\n{\\em Proof of the claim.}\nSince $- \\mu^* (K_{M\/C}+\\Delta) \\equiv (\\alpha-1) \\mu^* K_{M\/C}$ is nef, \nwe know that for $H$ an ample Cartier divisor on $M'$ and all $\\delta>0$, the divisor\n$-\\mu^* (K_{M\/C}+\\Delta)+\\delta H$ is ample. 
\nThus there exists an effective $\\ensuremath{\\mathbb{Q}}$-divisor $B \\sim_\\ensuremath{\\mathbb{Q}} -\\mu^* (K_{M\/C}+\\Delta) + \\delta H$ \nsuch that the pair $(M', \\Delta'_{hor}+B)$ is dlt. Thus we have\n$$\nK_{M'\/C} + \\Delta'_{hor} + B \\sim_\\ensuremath{\\mathbb{Q}} \\mu^* (K_{M\/C}+\\Delta) - E - \\mu^* (K_{M\/C}+\\Delta) +\\delta H \\sim_\\ensuremath{\\mathbb{Q}} - E +\\delta H. \n$$\nSince $E$ does not dominate $C$, the restriction of $K_{M'\/C} + \\Delta'_{hor} + B$ to the general \n$(\\varphi \\circ \\mu)$-fibre $F$ \nis numerically equivalent to $\\delta H|_F$. In particular for $m \\in \\ensuremath{\\mathbb{N}}$ sufficiently large and divisible the divisor\n$m (K_{M'\/C} + \\Delta'_{hor} + B)$ is Cartier and the restriction to $F$ has global sections.\nThe pair $(M', \\Delta'_{hor}+B)$ being lc, we know by \\cite[Thm.4.13]{Cam04} that \n$(\\varphi \\circ \\mu)_* (m (K_{M'\/C} + \\Delta'_{hor} + B))$ is a nef vector bundle.\nThe natural morphism\n$$ \n(\\varphi \\circ \\mu)^* (\\varphi \\circ \\mu)_* (m (K_{M'\/C} + \\Delta'_{hor} + B)) \\rightarrow m (K_{M'\/C} + \\Delta'_{hor} + B)\n$$\nis not zero, so $m (K_{M'\/C} + \\Delta'_{hor} + B)$ is pseudoeffective.\nThus we see that $- E +\\delta H$ is pseudoeffective. \nTaking the limit $\\delta \\rightarrow 0$ we deduce that the anti-effective divisor $-E$ is pseudoeffective. Since $E$ is effective, this forces $E=0$ and proves the claim.\n\\end{proof}\n\n{\\bf Attribution.}\nThe proof above is a refinement of\nthe argument in \\cite{LTZZ10}. Note that our argument can be used\nto give a simplified proof of \\cite[Thm.]{LTZZ10}.\n\n\n\n\\begin{corollary} \\label{corollaryreductioncurve}\nLet $X$ be a projective manifold such that $-K_X$ is nef. Let \\holom{\\pi}{X}{T} be the Albanese map.\nFix an arbitrary point $t \\in T$ and let $C \\subset T$ be a smooth curve such that $t \\in C$\nand for $c \\in C$ general, the fibre $\\fibre{\\pi}{c}$ is smooth.\n\nThen $\\fibre{\\pi}{C}$ is a normal variety with at most canonical singularities. 
\n\\end{corollary}\n\n\\begin{proof}\nBy \\cite[Thm.]{LTZZ10} the fibration $\\pi$ is flat, so $\\fibre{\\pi}{C}$ is Gorenstein and has pure dimension $\\dim X-\\dim T+1$.\nThe general fibre of $\\fibre{\\pi}{C} \\rightarrow C$ is irreducible, so $\\fibre{\\pi}{C}$ is irreducible. \nSince all the $\\pi$-fibres are reduced \\cite[Thm.]{LTZZ10} we see that \n$\\fibre{\\pi}{C}$ is smooth in codimension one, hence normal. \nThe relative anticanonical divisor $-K_{\\fibre{\\pi}{C}\/C}= -K_X|_{\\fibre{\\pi}{C}}$ is Cartier and nef.\nThe general fibre of $\\fibre{\\pi}{C} \\rightarrow C$ is smooth, so $\\fibre{\\pi}{C}$ is smooth over the generic point of $C$.\nThus the pair $(\\fibre{\\pi}{C}, 0)$ is klt by Lemma \\ref{lemmaimportant}. Since $\\fibre{\\pi}{C}$ is Gorenstein, it has at most canonical singularities.\n\\end{proof}\n\n\n \nCorollary \\ref{corollaryreductioncurve} reduces Conjecture\n\\ref{conjecturealbanese} to the following problem:\n\n\\begin{conjecture} \\label{conjecturerelativealbanese}\nLet $M$ be a normal projective variety with at most canonical Gorenstein singularities.\nLet \\holom{\\varphi}{M}{C} be a fibration onto a smooth curve $C$ such that\n$-K_{M\/C}$ is nef. If the general fibre $F$ is smooth, then $\\varphi$ is smooth.\nIf $F$ is smooth and simply connected, then $\\varphi$ is locally trivial in the analytic topology.\n\\end{conjecture} \n\n\\subsection{Running the MMP for fibres of dimension two} \\label{subsectionMMP}\n\nIn the previous section we reduced Conjecture \\ref{conjecturealbanese} to the case \nof a fibration onto a curve such that the total space has canonical singularities. 
In this section we will\nmake the stronger assumption that the total space is $\\ensuremath{\\mathbb{Q}}$-factorial with terminal singularities.\nMoreover we will assume at some point the existence of an effective relative anticanonical divisor.\nIn Subsection \\ref{subsectionmainresult} we will see how to verify these additional conditions.\n\n\\begin{setup} \\label{setup}\n{\\rm\nLet $X_C$ be a normal $\\ensuremath{\\mathbb{Q}}$-factorial projective variety with at most terminal singularities, and \nlet \\holom{\\varphi}{X_C}{C} be a fibration onto a smooth curve $C$. Suppose that the general $\\varphi$-fibre is uniruled.\nBy \\cite{BCHM10} we know that $X_C\/C$ is birational to a Mori fibre space, and we denote by\n\\begin{equation} \\label{MMP}\nX_C=:X_0 \\stackrel{\\mu_0}{\\dashrightarrow} X_1 \\stackrel{\\mu_1}{\\dashrightarrow}\n\\ldots \\stackrel{\\mu_k}{\\dashrightarrow} X_k \n\\end{equation}\nan MMP over $C$; that is, for every $i \\in \\{0, \\ldots, k-1\\}$ the map\n$\\merom{\\mu_i}{X_i}{X_{i+1}}$ is either a divisorial Mori contraction of a\n$K_{X_i\/C}$-negative extremal ray in $\\NE{X_i\/C}$ or the flip of a small contraction \nof such an extremal ray. Note that all the varieties $X_i$ \nare normal $\\ensuremath{\\mathbb{Q}}$-factorial with at most terminal singularities and endowed with a\nfibration $X_i \\rightarrow C$.\nMoreover $X_k \\rightarrow C$ admits a Mori fibre space structure \\holom{\\psi}{X_k}{Y} onto a normal variety\n$\\holom{\\tau}{Y}{C}$. We know \nthat $Y$ is $\\ensuremath{\\mathbb{Q}}$-factorial \\cite[Prop.7.44]{Deb01}; moreover $Y$ has at most klt singularities \\cite[Thm.0.2]{Amb05}.\n}\n\\end{setup}\n\n\\begin{remark} \\label{remarkflip}\nIf $\\mu_i$ is a flip, denote by $\\Gamma_i$ the normalisation of its graph and by $\\holom{p_i}{\\Gamma_i}{X_i}$\nand $\\holom{q_i}{\\Gamma_i}{X_{i+1}}$ the natural maps. 
\nBy a well-known discrepancy computation \\cite[Lemma 9-1-3]{Mat02} we have\n\\begin{equation} \\label{flipdiscrepancy}\np_i^* (-K_{X_i\/C}) = q_i^*(-K_{X_{i+1}\/C}) - \\sum a_{i, j} D_{i,j}\n\\end{equation}\nwith $a_{i,j}>0$ where the sum runs over all the $q_i$-exceptional divisors. \n\\end{remark}\n\n\\begin{remark} \\label{remarkdivisorial}\nIf $\\mu_i$ is a divisorial contraction, let $D_i \\subset X_i$ be the exceptional divisor. We have \n\\begin{equation} \\label{divisorialdiscrepancy}\n-K_{X_i\/C} = \\mu_i^*(-K_{X_{i+1}\/C}) - \\lambda_i D_i\n\\end{equation}\nwith $\\lambda_i>0$.\n\\end{remark}\n\nWe explained in the introduction that the nefness of $-K_{X_C\/C}$ is usually not preserved under the MMP. However\nwe have the following:\n\n\\begin{lemma} \\label{lemmaMMPbasic}\nIn the situation of Setup \\ref{setup}, the following holds:\n\\begin{enumerate}[(i)]\n\\item If $-K_{X_C\/C}$ is pseudoeffective, then $-K_{X_i\/C}$ is pseudoeffective. \n\\item If $-K_{X_C\/C}$ is nef in codimension one, then $-K_{X_i\/C}$ is nef in codimension one. \n\\item Suppose that $-K_{X_C\/C}$ is nef. If $B \\subset X_i$ is a curve such that $-K_{X_i\/C} \\cdot B<0$, then $B$ is \n(a strict transform of) a curve contained in the flipping locus or the image of a divisorial contraction. \n\\end{enumerate}\n\\end{lemma}\n\n\\begin{remark} \\label{remarkMMPdimtwo}\nA $\\ensuremath{\\mathbb{Q}}$-Cartier divisor on a surface is nef in codimension one if and only if it is nef.\nThus the lemma shows that for {\\em surfaces} the MMP preserves the property that $-K_{X_C\/C}$ is nef.\n\\end{remark}\n\n\\begin{proof} Our proof follows the arguments in \\cite[Prop.2.1, Prop.2.2]{PS98}.\nLet $\\Gamma$ be the normalisation of the graph of the birational map $X_C \\dashrightarrow X_i$, and\ndenote by $\\holom{p}{\\Gamma}{X_C}$ and $\\holom{q}{\\Gamma}{X_i}$ the natural maps. 
By\n\\eqref{flipdiscrepancy} and \\eqref{divisorialdiscrepancy} we have\n$$\np^* (-K_{X_C\/C}) = q^* (-K_{X_i\/C}) - D,\n$$\nwith $D$ an effective $q$-exceptional $\\ensuremath{\\mathbb{Q}}$-divisor. In particular\nif $B \\subset X_i$ is a curve that is not contained in $q(D)$ and $B_\\Gamma \\subset \\Gamma$ its strict transform, \nthen we have\n\\begin{equation} \\label{increase}\np^*(-K_{X_C\/C}) \\cdot B_\\Gamma \\leq -K_{X_i\/C} \\cdot B.\n\\end{equation}\nThis proves the statements $(i)$ and $(iii)$. \n\nSuppose now that \n$-K_{X_C\/C}$ is nef in codimension one. Let $S \\subset X_i$ be a prime divisor, and\ndenote by $S_\\Gamma \\subset \\Gamma$ its strict transform. Since $X_i \\dashrightarrow X_C$ does not contract\na divisor, we see that $S_\\Gamma$ is not $p$-exceptional. Hence $p^*(-K_{X_C\/C})|_{S_\\Gamma}$\nis pseudoeffective. Thus we see that $q^*(-K_{X_i\/C})|_{S_\\Gamma}$ is pseudoeffective,\nhence $-K_{X_i\/C}|_S$ is pseudoeffective.\n\\end{proof}\n\nWe will now restrict ourselves to the case where $\\dim X_C-\\dim C=2$.\nThis allows us to study\nthe positivity of $-K_\\bullet$ on horizontal curves. \n\n\\begin{definition} \\label{definitionnefoverC}\nLet $M$ be a projective variety admitting a fibration $\\holom{f}{M}{C}$ onto a curve $C$.\nA $\\ensuremath{\\mathbb{Q}}$-Cartier divisor $L$ on $M$ is nef over $C$ if \nfor any curve $B \\subset M$ such that $L \\cdot B<0$, the image $f(B)$ is a point.\n\\end{definition}\n\n\\begin{remark} \\label{remarkflippingloci}\nNote that if $\\dim X_C-\\dim C=2$, the exceptional locus\nof a small contraction cannot surject onto $C$, so if $\\mu_i$ is the corresponding step of the MMP,\nthe flipping loci are contained in\nfibres of the maps $X_i \\rightarrow C$ and $X_{i+1} \\rightarrow C$.\nIn particular if $-K_{X_i\/C}$ is nef over $C$, then $-K_{X_{i+1}\/C}$ is nef over $C$\nby \\eqref{increase}. 
The same holds if the exceptional divisor $D_i$ of a divisorial\ncontraction does not surject onto $C$.\n\\end{remark}\n\n\\begin{lemma} \\label{lemmanefoverC}\nIn the situation of Setup \\ref{setup}, suppose that $\\dim X_C-\\dim C=2$ and $-K_{X_i\/C}$\nis nef over $C$. Then $-K_{X_{i+1}\/C}$ is nef over $C$.\n\\end{lemma}\n\n\\begin{proof}\nBy what precedes it is sufficient to consider the case where \n$\\holom{\\mu_i}{X_i}{X_{i+1}}$ contracts a divisor $D_i$\nonto a curve $C_0$ such that $C_0 \\rightarrow C$ is surjective.\n\nWe will argue by contradiction, so suppose that $-K_{X_{i+1}\/C} \\cdot C_0 < 0$. \nSince $-K_{X_i\/C}$ is nef over $C$, the restriction of $-K_{X_i\/C}$ to \n$D_i$ is nef over $C_0$. \nSince $\\mu_i$ is a Mori contraction, $-K_{X_i\/C}|_{D_i}$ is $\\mu_i|_{D_i}$-ample,\nso $-K_{X_i\/C}|_{D_i}$ is nef. By \\eqref{divisorialdiscrepancy} one has\n$$\n-K_{X_i\/C} = \\mu_i^*(-K_{X_{i+1}\/C}) - \\lambda_i D_i.\n$$\nIn particular if $B \\subset D_i$ is any curve that is not contracted by $\\mu_i$, then\n$$\n- \\lambda_i D_i \\cdot B = -K_{X_i\/C} \\cdot B + K_{X_{i+1}\/C} \\cdot (\\mu_i)_*(B) > 0.\n$$\nSince $-D_i$ is $\\mu_i|_{D_i}$-ample, this shows that $-D_i|_{D_i}$ is positive on all curves. \nMoreover we have\n$$\n(- \\lambda_i D_i|_{D_i})^2 = (-K_{X_i\/C}|_{D_i})^2 + 2 (-K_{X_i\/C}) \\cdot (\\mu_i^* K_{X_{i+1}\/C}) \\cdot D_i > 0,\n$$\nso by the Nakai-Moishezon criterion the divisor $-D_i|_{D_i}$ is ample. \nThus we see that $-(K_{X_i\/C}+D_i)|_{D_i}$ is ample. Let now \n$\\holom{\\nu}{\\hat D_i}{D_i}$ be the composition of normalisation and minimal resolution; then by the adjunction formula \\cite{Rei94}\n$$\n- K_{\\hat D_i\/C} = -\\nu^* (K_{X_i\/C}+D_i)|_{D_i} + F,\n$$\nwith $F$ an effective divisor. Since $D_i \\rightarrow C_0$ is generically a $\\ensuremath{\\mathbb{P}}^1$-bundle, the divisor $F$\ndoes not surject onto $C_0$. 
Thus we see that $-K_{\\hat D_i\/C}$ is nef over $C$; moreover it is big.\nLet $\\hat D_i \\rightarrow \\tilde D_i$ be an MMP over $C$ with $\\tilde D_i \\rightarrow C$ a Mori fibre space.\nThen $-K_{\\tilde D_i\/C}$ is nef and big, a contradiction to Lemma \\ref{lemmanumericaldimensionprojective}.\n\\end{proof}\n\nCombining Lemma \\ref{lemmaMMPbasic} and Lemma \\ref{lemmanefoverC} we obtain:\n\n\\begin{corollary} \\label{corollarynefoverC}\nIn the situation of Setup \\ref{setup}, suppose that $\\dim X_C-\\dim C=2$ and $-K_{X_C\/C}$ is nef.\nThen for every $i \\in \\{1, \\ldots, k\\}$ the divisor $-K_{X_i\/C}$ is nef in codimension one and nef over $C$.\n\\end{corollary}\n \nOur goal will be to prove that $-K_{X_i\/C}$ is actually nef, but this needs some serious preparation.\n\n\\begin{lemma} \\label{lemmanu1surfaces}\nLet $S$ be an integral projective surface admitting a morphism $\\holom{f}{S}{C}$ onto a curve $C$.\nLet $L$ be a $\\ensuremath{\\mathbb{Q}}$-Cartier divisor on $S$ that is pseudoeffective and nef over $C$.\n \nLet $S'$ be an integral projective surface admitting a morphism $\\holom{f'}{S'}{C}$ onto $C$. \nLet $L'$ be a nef and $f'$-ample $\\ensuremath{\\mathbb{Q}}$-Cartier divisor on $S'$ such that $(L')^2=0$.\n\nSuppose that there exists a birational morphism $\\holom{\\mu}{S}{S'}$ such that $f' \\circ \\mu = f$.\nSuppose that we have\n$$\nL \\equiv \\mu^* L' - E\n$$\nwith $E$ an effective $\\ensuremath{\\mathbb{Q}}$-Cartier divisor. Then the following holds:\n\n\\begin{enumerate}[(i)]\n\\item If $L$ is $\\mu$-nef, then $L$ is nef and not big. 
Moreover we have $L \\cdot \\mu^* L' = 0$.\n\\item If $L$ is $\\mu$-nef and $E$ is $f$-vertical, then $E=0$.\n\\end{enumerate}\n\\end{lemma}\n\nRecall that if $L$ is a pseudoeffective $\\ensuremath{\\mathbb{Q}}$-divisor on a smooth projective surface,\nwe can consider its Zariski decomposition\n$L = P + N$,\nwhere $P$ is a nef $\\ensuremath{\\mathbb{Q}}$-divisor and $N$ an effective $\\ensuremath{\\mathbb{Q}}$-divisor such that $P \\cdot N=0$.\n\n\\begin{proof}\nThe statement being invariant under normalisation, we can suppose without loss of generality that $S$ and $S'$\nare normal. For the same reason we can suppose that $S$ is smooth.\n\nLet $L = P + N$ be the Zariski decomposition of $L$, and\nlet $N = N_{hor} + N_{vert}$ be the decomposition of the \neffective divisor $N$ into its vertical and horizontal components. \n\nSince $P \\equiv \\mu^* L' - (E+N)$ and $(L')^2=0$, we have\n$$\nP \\cdot \\mu^* L' = -(E+N) \\cdot \\mu^* L'.\n$$\nSince $\\mu^* L'$ and $P$ are nef, the left hand side is non-negative. \nHowever $-(E+N) \\cdot \\mu^* L'$ is non-positive, so we get\n\\begin{equation} \\label{orthogonal}\nP \\cdot \\mu^* L' = (E+N) \\cdot \\mu^* L' = 0.\n\\end{equation}\nIn particular for every irreducible component $D \\subset N$ we have $D \\cdot \\mu^* L'=0$.\nSince $L'$ is $f'$-ample and $N_{vert} \\cdot \\mu^* L'=0$ we see that $N_{vert}$ is $\\mu$-exceptional.\n\nThe divisor $P$ being nef, we have $P^2 \\geq 0$\nand $(E+N) \\cdot P \\geq 0$.\nHowever by \\eqref{orthogonal} one has\n$$\nP^2 = [\\mu^* L' - (E+N)] \\cdot P = - (E+N) \\cdot P,\n$$\nso we get $P^2 = (E+N) \\cdot P = 0$.\nYet $(E+N) \\cdot P = 0$ and $(E+N) \\cdot \\mu^* L' = 0$ imply that \n\\begin{equation} \\label{square}\n(E+N)^2 = 0.\n\\end{equation} \n\n{\\em Proof of $(i)$.}\nSince $N_{vert}$ is $\\mu$-exceptional, we know that $L$ is nef on every irreducible\ncomponent of $N_{vert}$. Moreover $L$ is nef over $C$, so it is nef \non every irreducible\ncomponent of $N_{hor}$. 
Thus $L$ is nef on its negative part $N$, hence $N=0$ and $L$ is nef.\nIn particular we have $L^2=P^2=0$, so $L$ is not big.\n\n\n{\\em Proof of $(ii)$.} Since $E$ is $f$-vertical and $L'$ is $f'$-ample, \nthe equality $E \\cdot \\mu^* L'=0$ implies that $E$ is $\\mu$-exceptional.\nSince $N=0$ we know by \\eqref{square} that $E^2=0$.\nHowever the intersection matrix of an exceptional divisor is \nnegative definite, so we have $E=0$.\n\\end{proof}\n\nFor the next corollary recall that any normal surface is numerically $\\ensuremath{\\mathbb{Q}}$-factorial \\cite{Sak84}, so\nwe can define intersection numbers for any Weil divisor. \n\n\\begin{corollary} \\label{corollarysurfaces}\nLet $Y$ be a normal projective surface \nadmitting a fibration $\\holom{\\tau}{Y}{C}$ onto a smooth curve such that the general fibre is $\\ensuremath{\\mathbb{P}}^1$. \nSuppose that $-K_{Y\/C}$ is pseudoeffective and nef over $C$.\nThen $-K_{Y\/C}$ is nef and $Y \\rightarrow C$ is a $\\ensuremath{\\mathbb{P}}^1$-bundle.\n\\end{corollary}\n\n\\begin{proof} \n{\\em Step 1. Suppose that $Y$ is smooth.}\nWe argue by induction on the relative Picard number. If $\\rho(Y\/C)=1$, then $-K_{Y\/C}$ is nef\nand $\\tau$-ample. Thus $Y \\rightarrow C$ is a $\\ensuremath{\\mathbb{P}}^1$-bundle.\nIf $\\rho(Y\/C)>1$ there exists a Mori contraction $\\holom{\\mu}{Y}{Y'}$ over $C$\nand by Lemma \\ref{lemmaMMPbasic} the divisor $-K_{Y'\/C}$ is pseudoeffective and nef over $C$.\nBy induction $-K_{Y'\/C}$ \nis nef and $Y' \\rightarrow C$ is a $\\ensuremath{\\mathbb{P}}^1$-bundle. We have $(-K_{Y'\/C})^2=0$ and $-K_{Y'\/C}$ \nis ample over $C$. The $\\mu$-exceptional divisor $E$\nis $\\tau$-vertical, so $E=0$ by Lemma \\ref{lemmanu1surfaces}, a contradiction. \n\n\n{\\em Step 2. General case.} \nLet $\\holom{\\nu}{\\hat Y}{Y}$ be the minimal resolution, then we have $K_{\\hat Y} = \\nu^* K_Y - E$ with $E$ an effective divisor. 
\nThus the relative anticanonical divisor $-K_{\\hat Y\/C} = - \\nu^* K_{Y\/C} + E$\nis pseudoeffective and nef over $C$. By Step 1 we know that $\\hat Y \\rightarrow C$\nis a $\\ensuremath{\\mathbb{P}}^1$-bundle. Thus $\\nu$ is an isomorphism.\n\\end{proof}\n\n\\begin{remark} \\label{remarksurfaces}\nIn the situation of Corollary \\ref{corollarysurfaces} we can write $Y \\simeq \\ensuremath{\\mathbb{P}}(V)$ with $V$ a rank two vector bundle on $C$.\nSince $-K_{Y\/C}$ is nef, the vector bundle $V$ is semistable \\cite[Prop.2.9]{MP97}.\nMoreover the nef cone $\\mbox{Nef}(Y) \\subset N^1(Y)$ \nand the pseudoeffective cone $\\mbox{Pseff}(Y) \\subset N^1(Y)$ coincide \\cite[Sect.1.5.A]{Laz04a};\nthey are generated over $\\ensuremath{\\mathbb{Z}}$ by $\\frac{-1}{2} K_{Y\/C}$ and a fibre $F$ of the ruling $Y \\rightarrow C$.\nRecall that\non any smooth projective surface a Cartier divisor $L$ is generically nef with respect to all polarisations if and only if it is \npseudoeffective. Thus we see that a Cartier divisor $L$ on $Y$ is generically nef with respect to all polarisations if and only if $L$ \nis nef if and only if\n$\nL \\equiv \\frac{-m}{2} K_{Y\/C} + n F\n$\nwith $m, n \\in \\ensuremath{\\mathbb{N}}_0$.\n\\end{remark}\n\nWe will now use Lemma \\ref{lemmanu1surfaces} to obtain strong restrictions on the MMP.\n\n\\begin{lemma} \\label{lemmanu2}\nIn the situation of Setup \\ref{setup}, suppose that $\\dim X_C-\\dim C=2$.\nLet $\\merom{\\mu_i}{X_i}{X_{i+1}}$ be a step of the MMP.\n\nLet $S' \\subset X_{i+1}$ be an irreducible surface such that\nthe map $S' \\rightarrow C$ is surjective.\nSuppose that $-K_{X_{i+1}\/C}|_{S'}$ is nef but not big. Suppose moreover that either\n\\begin{itemize}\n\\item we have $-K_{X_{i+1}\/C}|_{S'} \\equiv 0$; or\n\\item the divisor $-K_{X_{i+1}\/C}|_{S'}$ is ample on the fibres of $S' \\rightarrow C$. 
\n\\end{itemize}\nThen the following holds:\n\\begin{enumerate}[(i)]\n\\item Suppose that $\\mu_i$ is a flip: then $S'$ is disjoint from the flipping locus. \n\\item Suppose that $\\mu_i$ is divisorial with exceptional divisor $D_i$. \nThen either $\\mu_i(D_i)$ is disjoint from $S'$ or $\\mu_i(D_i) \\subset S'$. \nIf $\\mu_i(D_i) \\subset S'$, the map $\\mu_i(D_i)\\rightarrow C$ is surjective and \n$-K_{X_{i+1}\/C}|_{S'} \\not\\equiv 0$.\n\\item Let $S \\subset X_i$ be the strict transform of $S'$. Then $-K_{X_i\/C}|_{S}$ is nef but not big.\n\\end{enumerate}\n\\end{lemma}\n\n\\begin{proof}{\\em Proof of (i).}\nArguing by contradiction we suppose that $S'$ is not disjoint from the flipping locus $Z$.\nSince $-K_{X_{i+1}\/C}|_{S'}$ is nef, the intersection $S' \\cap Z$ is non empty and finite.\nUsing the notation of Remark \\ref{remarkflip}, let \n$\\Gamma_S \\subset \\Gamma_i$ be the strict transform of $S'$. \nRestricting \\eqref{flipdiscrepancy} to $\\Gamma_S$ we obtain\n$$\np_i^* (-K_{X_i\/C})|_{\\Gamma_S} = q_i^*(-K_{X_{i+1}\/C})|_{\\Gamma_S} \n- (\\sum a_{i, j} D_{i, j}) \\cap \\Gamma_{S}.\n$$\nThe divisor $(\\sum a_{i, j} D_{i, j})$ being $\\ensuremath{\\mathbb{Q}}$-Cartier, the non-empty intersection\n$E:= (\\sum a_{i, j} D_{i, j}) \\cap \\Gamma_S$ is a non-zero $q_i|_{\\Gamma_S}$-exceptional effective $\\ensuremath{\\mathbb{Q}}$-divisor on $\\Gamma_S$.\nSince $\\Gamma_S$ is not $p_i$-exceptional and surjects onto $C$, it follows\nfrom Corollary \\ref{corollarynefoverC} that\n$p_i^* (-K_{X_i\/C})|_{\\Gamma_S}$ is pseudoeffective and nef over $C$,\nmoreover it is $q_i|_{\\Gamma_S}$-nef.\nIf $-K_{X_{i+1}\/C}|_{S'} \\equiv 0$ this already gives a contradiction, so suppose now\nthat $-K_{X_{i+1}\/C}|_{S'}$ is ample on the fibres of $S' \\rightarrow C$.\n\nRecall that the flipping locus $Z$ is contained in the fibres of $X_{i+1} \\rightarrow C$ \n(cf. 
Remark \\ref{remarkflippingloci}), so $E$ is vertical\nwith respect to the fibration $\\Gamma_S \\rightarrow C$.\nYet by Lemma \\ref{lemmanu1surfaces} applied to the birational map\n$\\holom{q_i|_{\\Gamma_S}}{\\Gamma_S}{S'}$ we see that $E=0$, a contradiction.\n\n{\\em Proof of (ii).}\nLet $S \\subset X_i$ be the strict transform of $S'$; then we have an induced fibration $S \\rightarrow C$. \nUsing the notation of Remark \\ref{remarkdivisorial} and restricting \\eqref{divisorialdiscrepancy} to $S$ we have\n\\begin{equation} \\label{relation}\n-K_{X_i\/C}|_S = \\mu_i^*(-K_{X_{i+1}\/C})|_{S'} - \\lambda_i (D_i \\cap S).\n\\end{equation}\nIf $\\mu_i(D_i)$ is not disjoint from $S'$,\nthen $D_i \\cap S$ is a non-zero effective divisor. \nNote that the restriction $-K_{X_i\/C}|_S$ is pseudoeffective and nef over $C$; moreover \nit is $\\mu_i$-ample.\nIf $-K_{X_{i+1}\/C}|_{S'} \\equiv 0$ then \\eqref{relation} shows that $-K_{X_i\/C}|_S$ is anti-effective and not zero,\na contradiction.\n\nSuppose now that $-K_{X_{i+1}\/C}|_{S'}$ is ample \non the fibres of $S' \\rightarrow C$. Then we know by Lemma \\ref{lemmanu1surfaces} that $D_i \\cap S$\nis empty unless it has a horizontal component. \nThus $D_i \\cap S$, being non-zero, has a horizontal component, so the irreducible curve $\\mu_i(D_i)$ is contained in $S'$\nand surjects onto $C$.\n\n{\\em Proof of (iii).}\nIf the image of the exceptional divisor (resp. the flipping locus) is disjoint\nfrom $S'$ the statement is trivial. 
If this is not the case, then by $(i)$ and $(ii)$ \nthe contraction $\\mu_i$ is divisorial and \n$-K_{X_{i+1}\/C}|_{S'}$ is ample on the fibres of $S' \\rightarrow C$.\nThus we can apply Lemma \\ref{lemmanu1surfaces} to see that \n$-K_{X_i\/C}|_{S}$ is nef but not big.\n\\end{proof}\n\nThe next lemma describes the Mori fibre space at the end of the MMP:\n\n\\begin{lemma} \\label{lemmapreparation}\nIn the situation of Setup \\ref{setup}, suppose that $\\dim X_C-\\dim C=2$ and that the general $\\varphi$-fibre\nis rationally connected. Suppose that $Y$ is a surface.\n\nThen $Y$ is smooth, the fibration $\\holom{\\tau}{Y}{C}$ is a $\\ensuremath{\\mathbb{P}}^1$-bundle and $-K_{Y\/C}$ is nef.\nLet $\\Delta \\subset Y$ be the $1$-dimensional part of the $\\psi$-singular locus. \n\\begin{itemize}\n\\item If $\\Delta \\neq 0$, it is a smooth irreducible curve and the map $\\Delta \\rightarrow C$ is \\'etale. We have $\\Delta \\equiv - \\lambda K_{Y\/C}$ with $\\lambda \\in \\ensuremath{\\mathbb{Q}}^+$.\nMoreover $S':=\\fibre{\\psi}{\\Delta}$ is an irreducible surface such that $-K_{X_k\/C}|_{S'}$ is nef but not big. The\nrestriction $-K_{X_k\/C}|_{S'}$ is ample\non the fibres of $S' \\rightarrow \\Delta$ and if $l \\subset S'$ is an irreducible component of a general\nfibre of $S' \\rightarrow \\Delta$, we have $-K_{X_k\/C} \\cdot l=1$.\n\\item If $\\Delta=0$, then $X_k \\rightarrow Y$ is a $\\ensuremath{\\mathbb{P}}^1$-bundle; in particular $X_k$ is smooth.\n\\end{itemize}\n\\end{lemma}\n\n\\begin{remark*}\nIf $\\Delta \\neq 0$ it is {\\em a priori} not clear that $X_k$ is Gorenstein and $X_k \\rightarrow Y$ is a conic bundle, \ncp. 
\\cite[\\S 12]{MP08} and \\cite[p.483]{PS98}.\n\\end{remark*}\n\n\\begin{proof}\nBy Corollary \\ref{corollarynefoverC} we know that $-K_{X_k\/C}$ is nef in codimension one and nef over $C$.\nBy Remark \\ref{remarknefcodimone} this implies that $K_{X_k\/C}^2$ is a pseudoeffective $1$-cycle.\nWe claim that $-K_{Y\/C}$ is pseudoeffective and nef over $C$. \n\n{\\em Proof of the claim.\\footnote{For experts it is not difficult to deduce the claim from general results on the positivity\nof direct image sheaves, cf. \\cite[Cor.0.2]{BP08}, \\cite[Lemma 3.24]{a8}.}} \nThe fibration $\\psi$ does not contract a divisor, so it is equidimensional. \nSince a terminal threefold has at most isolated singularities, there is a finite set of points \n$Z \\subset Y$ such that $(X_k \\setminus \\fibre{\\psi}{Z}) \\rightarrow (Y \\setminus Z)$\nis a conic bundle. Thus we have \\cite[4.11]{Miy83}\n\\begin{equation} \\label{miyanishi}\n\\psi_* (K_{X_k\/C}^2) = - (4 K_{Y\/C} + \\Delta).\n\\end{equation}\nThe cycle $K_{X_k\/C}^2$ is pseudoeffective, \nso its image $-(4 K_{Y\/C} + \\Delta)$ is pseudoeffective. This already proves that $-K_{Y\/C}$ is pseudoeffective.\nWe will now follow an argument from \\cite[p.482]{PS98}:\nlet $B \\subset Y$ be any irreducible curve that surjects onto $C$. Since $-K_{X_k\/C}$\nis $\\psi$-ample and nef over $C$, the restriction $-K_{X_k\/C}|_{\\fibre{\\psi}{B}}$ is nef.\nThus we see by the projection formula and \\eqref{miyanishi} that\n\\begin{equation} \\label{discrim}\n0 \\leq (-K_{X_k\/C})^2 \\cdot \\fibre{\\psi}{B} = - (4 K_{Y\/C} + \\Delta) \\cdot B.\n\\end{equation}\nIn particular we have $-K_{Y\/C} \\cdot B \\geq 0$ unless $B \\subset \\Delta$. 
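To spell out the last assertion: if $B \\not\\subset \\Delta$, then $\\Delta \\cdot B \\geq 0$ since $\\Delta$ is effective, so \\eqref{discrim} yields\n$$\n0 \\leq - (4 K_{Y\/C} + \\Delta) \\cdot B = 4 (-K_{Y\/C} \\cdot B) - \\Delta \\cdot B,\n$$\nhence $-K_{Y\/C} \\cdot B \\geq \\frac{1}{4} \\Delta \\cdot B \\geq 0$.\n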
Arguing by contradiction we suppose\nthat there exists an irreducible curve $B \\subset \\Delta$ such that $-K_{Y\/C} \\cdot B < 0$.\nSince $B \\subset \\Delta$ the inequality \\eqref{discrim} then implies\n$$\n(K_{Y\/C}+B) \\cdot B \\leq (K_{Y\/C}+\\Delta) \\cdot B = (4 K_{Y\/C} + \\Delta) \\cdot B + 3 (-K_{Y\/C} \\cdot B) < 0.\n$$\nThus if $\\tilde B$ is the normalisation of $B$, the subadjunction formula \\cite{Rei94} shows that $\\deg K_{\\tilde B\/C}<0$,\na contradiction to the ramification formula. This proves the claim.\n\nWe can now describe the surface $Y$: the general fibre of $X_k \\rightarrow C$ is rationally connected, so\nthe general fibre of $Y \\rightarrow C$ is $\\ensuremath{\\mathbb{P}}^1$. Thus we know \nby Corollary \\ref{corollarysurfaces} that $Y \\rightarrow C$ is a ruled surface and \n$-K_{Y\/C}$ is nef.\n\n{\\em 1st case. $\\Delta \\neq \\emptyset$.}\nSince $\\psi$ is a conic bundle in the complement of finitely many points, the general\nfibre over a point in $\\Delta$ is a reducible conic. It is well-known that if $l \\subset S'$ is an irreducible\ncomponent of such a reducible conic, then $S' \\cdot l=-1$ and $-K_{X_k\/C} \\cdot l=1$. Using $\\rho(X_k\/Y)=1$, standard arguments prove that $\\Delta$ and $S'$ are irreducible, cf. \\cite[Rem.2.3.3]{MP08}.\n\nIn the proof of the claim we saw that $-(4 K_{Y\/C} + \\Delta)$ is pseudoeffective.\nSince $-K_{Y\/C}$ is nef and $(-K_{Y\/C})^2=0$ we obtain\n$$\n0 \\leq -K_{Y\/C} \\cdot (-4 K_{Y\/C} - \\Delta) = K_{Y\/C} \\cdot \\Delta \\leq 0.\n$$ \nSince $Y$ is a ruled surface, the equality $-K_{Y\/C} \\cdot \\Delta=0$ implies\nthat $\\Delta \\equiv - \\lambda K_{Y\/C}$ with $\\lambda \\in \\ensuremath{\\mathbb{Q}}^+$. 
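Indeed, by Remark \\ref{remarksurfaces} we can write $\\Delta \\equiv \\frac{-a}{2} K_{Y\/C} + b F$ with $a, b \\in \\ensuremath{\\mathbb{N}}_0$, and since $-K_{Y\/C} \\cdot F = 2$ we get\n$$\n0 = -K_{Y\/C} \\cdot \\Delta = b \\, (-K_{Y\/C} \\cdot F) = 2b,\n$$\nso $b=0$; as $\\Delta$ is a non-zero effective divisor this forces $a>0$, i.e. $\\lambda = \\frac{a}{2} > 0$.\n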
In particular $\\Delta$ surjects onto $C$ and we have $\\Delta^2=0$.\nBy adjunction we see that $K_{\\Delta\/C}$ has degree $0$, so $\\Delta \\rightarrow C$ is \\'etale and $\\Delta$ is smooth.\n\nSince $\\Delta$ surjects onto $C$, the divisor\n $-K_{X_k\/C}|_{S'}$ is nef and ample on the fibres\nof $S' \\rightarrow \\Delta$. Using the projection formula and \\eqref{miyanishi} we have\n$$\n(-K_{X_k\/C})^2 \\cdot S' = - (4 K_{Y\/C} + \\Delta) \\cdot \\Delta = 0,\n$$\nso $-K_{X_k\/C}|_{S'}$ is not big.\n\n{\\em 2nd case. $\\Delta= \\emptyset$.}\nThe terminal threefold $X_k$ is Cohen-Macaulay and the fibration on the smooth base $Y$ is equidimensional,\nso $\\psi$ is flat. Moreover $\\psi$ has at most finitely many singular fibres. Thus $\\psi$ is a $\\ensuremath{\\mathbb{P}}^1$-bundle by\n\\cite[Thm.2.]{AR12}.\n\\end{proof}\n\n\\begin{proposition} \\label{propositionMMPnef}\nIn the situation of Setup \\ref{setup}, suppose that $\\dim X_C-\\dim C=2$ and that the general $\\varphi$-fibre\nis rationally connected.\nThen $-K_{X_k\/C}$ is nef.\n\\end{proposition}\n\n\\begin{proof}\nBy Corollary \\ref{corollarynefoverC} we know that $-K_{X_k\/C}$ is nef in codimension one and nef over $C$.\nBy Lemma \\ref{lemmapreparation} we know that\n$Y \\rightarrow C$ is a ruled surface such that $-K_{Y\/C}$ is nef.\nLet $\\Delta \\subset Y$ be the $1$-dimensional part of the $\\psi$-singular locus. \n\nDenote by $\\{ B_1, \\ldots, B_m\\} \\subset X_k$\nthe finite (maybe empty) set of curves such that $-K_{X_k\/C} \\cdot B_j<0$. 
Since $-K_{X_k\/C}$ is nef over $C$ and $\\psi$-ample,\nwe see that for all $j \\in \\{1, \\ldots, m\\}$, the curve $\\psi(B_j)$ is a fibre of the ruling $Y \\rightarrow C$.\n\n{\\em 1st case: $\\Delta \\neq \\emptyset$.} Let $S' \\subset X_k$ be the surface\nconstructed in Lemma \\ref{lemmapreparation}.\nWe will describe the MMP $X \\dashrightarrow X_k$\nin a neighbourhood of $S'$.\n\nWe set $S_k:=S'$ and for \nevery $i \\in \\{ 0, \\ldots, k-1 \\}$ we define inductively\n$S_{i} \\subset X_{i}$ as the strict transform of $S_{i+1} \\subset X_{i+1}$.\nConsider now the largest $m \\in \\{ 1, \\ldots, k\\}$ such that\nthe surface $S_{m+1}$ is not disjoint from the flipping locus of $\\mu_m$ or, if $\\mu_m$ is divisorial,\nthe image $\\mu_m(D_m)$ of the exceptional divisor. Since $\\mu_k \\circ \\ldots \\circ \\mu_{m+1}$\nis an isomorphism near $S_{m+1}$ we see that $S_{m+1} \\simeq S'$ and\nby Lemma \\ref{lemmapreparation} the divisor $-K_{X_{m+1}\/C}|_{S_{m+1}}$ is nef but not big. \nMoreover $-K_{X_{m+1}\/C}|_{S_{m+1}}$ is ample\non the fibres of $S_{m+1} \\rightarrow \\Delta$. Thus we can apply Lemma \\ref{lemmanu2} and see that\n$\\mu_m$ is divisorial and the curve $\\mu_m(D_m)$ is contained in $S_{m+1}$ and surjects onto $C$.\nSince $X_{m+1}$ has only finitely many singular points and $\\mu_m(D_m)$ is an lci curve in its general point, we see\nby \\cite[Prop.0.6]{PS98} that $\\mu_m$ is {\\em generically} the blow-up of the curve $\\mu_m(D_m)$.\nIn particular we have\n$$\n-K_{X_{m}\/C} = - \\mu_m^* K_{X_{m+1}\/C} - D_m.\n$$\nLet $l \\subset S_{m+1}$ be an irreducible component of a general fibre of $S_{m+1} \\rightarrow \\Delta$,\nand let $l' \\subset S_m$ be its strict transform. Since $\\mu_m(D_m)$ intersects $l$, we have $D_m \\cdot l' \\geq 1$.\nBy Lemma \\ref{lemmapreparation} we have \n$$\n1 = - K_{X_{m+1}\/C} \\cdot l = - \\mu_m^* K_{X_{m+1}\/C} \\cdot l', \n$$\nso $-K_{X_{m}\/C} \\cdot l'=0$. 
By Lemma \\ref{lemmanu2} the divisor $-K_{X_{m}\/C}|_{S_m}$ is nef.\nSince it is numerically trivial on the general fibre of $f: S_m \\rightarrow \\Delta$, we see that \n$-K_{X_{m}\/C}|_{S_m}=f^* H$ with $H$ a nef $\\ensuremath{\\mathbb{Q}}$-Cartier divisor on $\\Delta$. However by Lemma \\ref{lemmanu1surfaces}\nwe have\n$$\n(-K_{X_{m}\/C}|_{S_m}) \\cdot \\mu_m^* (-K_{X_{m+1}\/C}|_{S_{m+1}})= 0.\n$$\nSince $-K_{X_{m+1}\/C}|_{S_{m+1}}$ is ample on the fibres of $S_{m+1} \\rightarrow \\Delta$, we see that $H \\equiv 0$,\nso $-K_{X_{m}\/C}|_{S_m} \\equiv 0$. Thus we are in the first case of Lemma \\ref{lemmanu2}: \nfor every $i \\in \\{ 0, \\ldots, m-1 \\}$, the MMP\nis disjoint from $S_i$.\n\nWe will now argue by contradiction and suppose that $-K_{X_k\/C}$ is not nef. \nThen the surface $S_k$ meets the curves $B_j$ in finitely many points.\nOn the one hand we have just seen that the surfaces $S_i$ are disjoint from any flipping locus\nof the MMP and if the contraction is divisorial and $S_i$ is not disjoint from $\\mu_i(D_i)$, \nthen $\\mu_i(D_i)$ surjects onto $C$. \nOn the other hand we know by Lemma \\ref{lemmaMMPbasic}, (iii) that\n$B_j$ is contained in a flipping locus or the image of an exceptional \ndivisor. Thus we have $B_j \\cap S_k = \\emptyset$, a contradiction.\n\n{\\em 2nd case: $\\Delta = \\emptyset$.} \nBy Lemma \\ref{lemmapreparation} the variety $X_k$ is smooth and\n$X_k \\rightarrow Y$ is a $\\ensuremath{\\mathbb{P}}^1$-bundle. 
Using the Grothendieck-Riemann-Roch formula \\cite[App.A, Thm.5.3]{Har77}\nwe see that\n$$\nc_1(\\psi_* (\\omega^*_{X_k\/Y}))=0, \\ c_2(\\psi_* (\\omega^*_{X_k\/Y}))=\\frac{1}{2} K_{X_k\/Y}^3.\n$$\nSince $\\psi_* (\\omega^*_{X_k\/C}) \\simeq \\psi_* (\\omega^*_{X_k\/Y}) \\otimes \\omega^*_{Y\/C}$ and\n$K_{Y\/C}^2=0$ we deduce that\n$$\nc_1(\\psi_* (\\omega^*_{X_k\/C}))= - 3 K_{Y\/C}, \\ c_2(\\psi_* (\\omega^*_{X_k\/C}))=\\frac{1}{2} K_{X_k\/C}^3.\n$$\nLet $A \\subset Y$ be a general hyperplane section, then $\\fibre{\\psi}{A} \\rightarrow A$ is a $\\ensuremath{\\mathbb{P}}^1$-bundle\nand $-K_{X_k\/C}|_{\\fibre{\\psi}{A}}$ is nef, since $\\fibre{\\psi}{A} \\cap B_j$ is a finite set for every $j$.\nIn particular the direct image $\\psi_* (\\omega^*_{X_k\/C})|_A$ is nef. Thus $\\psi_* (\\omega^*_{X_k\/C})$\nis generically nef for any polarisation $A$ and by \\cite[Thm.6.1]{Miy87} one has\n$$\nc_2(\\psi_*( \\omega^*_{X_k\/C})) \\geq 0.\n$$\nWe claim that $K_{X_k\/C}^3 \\leq 0$, by what precedes this implies $c_2(\\psi_*( \\omega^*_{X_k\/C})) = 0$.\n\n{\\em Proof of the claim.} Recall that $K_{X_k\/C}^2$ is a pseudoeffective cycle and\n$\\psi_* (K_{X_k\/C}^2) = - 4 K_{Y\/C}$. Let $(K_n)_{n \\in \\ensuremath{\\mathbb{N}}}$ be a sequence of effective $1$-cycles with rational coefficients \nconverging in $N_1(X_k)$ to $K_{X_k\/C}^2$. Then we can write\n$$\nK_n = \\sum_{j=1}^m \\eta_{j, n} B_j + R_n,\n$$\nwhere $R_n$ is an effective $1$-cycle with rational coefficients such that $-K_{X_k\/C} \\cdot R_n \\geq 0$. \nIf $H$ is an ample divisor on $X_k$, the degrees $H \\cdot (\\eta_{j, n} B_j)$ and $H \\cdot R_n$ are bounded\nfor large $n \\in \\ensuremath{\\mathbb{N}}$ by $(H \\cdot K_{X_k\/C}^2)+1$. Thus, up to replacing $K_n$ by some subsequence, we can suppose\nthat the sequences $\\eta_{j, n}$ and $R_n$ converge. 
Thus we have\n$$\nK_{X_k\/C}^2 = \\sum_{j=1}^m \\eta_{j, \\infty} B_j + R_\\infty,\n$$\nwhere $R_\\infty$ is a pseudoeffective cycle such that \n$-K_{X_k\/C} \\cdot R_\\infty \\geq 0$. Pushing down to $Y$ we have\n$$\n- 4 K_{Y\/C} = \\sum_{j=1}^m \\eta_{j, \\infty} \\psi_*(B_j) + \\psi_*(R_\\infty).\n$$\nRecall now that $K_{Y\/C}^2=0$ and $-K_{Y\/C} \\cdot \\psi_*(B_j)>0$ for all $j$. Then the preceding equation\nshows that $\\eta_{j, \\infty}=0$ for all $j$, hence we get $K_{X_k\/C}^2 = R_\\infty$ and\n$$\n- K_{X_k\/C}^3 = -K_{X_k\/C} \\cdot R_\\infty \\geq 0.\n$$\nThis proves the claim.\n\n{\\em Conclusion.}\nWe will now prove that $\\psi_* (\\omega^*_{X_k\/C})$ is a nef vector bundle. Since the natural morphism\n$$\n\\psi^* \\psi_* (\\omega^*_{X_k\/C}) \\rightarrow \\omega^*_{X_k\/C}\n$$\nis surjective, this proves that $-K_{X_k\/C}$ is nef. If $\\psi_* (\\omega^*_{X_k\/C})$ is stable for some\npolarisation $A$, the property\n$$\nc_1^2(\\psi_* (\\omega^*_{X_k\/C}))= 9 K_{Y\/C}^2=0, \\ c_2(\\psi_* (\\omega^*_{X_k\/C}))=0\n$$\nimplies that $\\psi_* (\\omega^*_{X_k\/C})$ is projectively flat with nef determinant \\cite[Cor.3]{BS94}, hence nef.\nWe already know that $V:=\\psi_* (\\omega^*_{X_k\/C})$ is generically nef for any polarisation $A$ on $Y$. Thus\nif $V \\rightarrow Q$ is any torsion-free quotient sheaf, then $Q$ is generically nef for any polarisation $A$ on $Y$.\nBy Remark \\ref{remarksurfaces} we then have\n$$\n\\det Q = \\frac{-m}{2} K_{Y\/C} + n F\n$$\nwith $m, n \\in \\ensuremath{\\mathbb{N}}_0$. \nSuppose now that $V$ is not stable with respect to the polarisation $\\frac{-1}{2} K_{Y\/C} + \\frac{1}{8} F$.\nThen there exists a stable reflexive subsheaf $\\sF \\subset V$ such that the quotient $Q:=V\/\\sF$ is torsion-free and\nthe slope $\\mu(Q)$ is less than or equal to the slope $\\mu(V)$.\nAn elementary computation shows that\n$\\det Q=\\frac{-m}{2} K_{Y\/C}$ with $m \\leq 2 \\ensuremath{rk} \\ Q$. 
In particular we have $\\det \\sF= \\frac{-(6-m)}{2} K_{Y\/C}$.\nSince $c_2(Q) \\geq 0$ by \\cite[Thm.6.1]{Miy87} and $c_2(\\sF) \\geq 0$ by the Bogomolov-Miyaoka-Yau inequality\nwe see that $c_2(\\sF) = 0$ and $c_2(Q) = 0$. In particular $\\sF$ is projectively flat with nef determinant \\cite[Cor.3]{BS94},\nhence nef. The same holds for $Q$ if it is stable. If $Q$ is not stable we easily prove that it is an extension\nof two line bundles $L_1$ and $L_2$ which are non-negative multiples of $\\frac{-m}{2} K_{Y\/C}$, in particular $Q$ is nef.\nThus $V$ is an extension of nef vector bundles, hence nef.\n\\end{proof}\n\n\\begin{remark*}\nThe proof of the second case $\\Delta = 0$ is tedious and rather ad-hoc. If we could suppose the existence of\na curve $C_0 \\subset Y$ such that $-K_{Y\/C} \\cdot C_0=0$ we could argue as in the first case.\nUnfortunately the curve $C_0$ does not always exist, cf. Remark \\ref{remarkmumford}.\n\\end{remark*}\n\n\\begin{proposition} \\label{propositionMMPtotal}\nIn the situation of Setup \\ref{setup}, suppose that $\\dim X_C-\\dim C=2$ and that the general $\\varphi$-fibre\nis rationally connected. \nSuppose also that there exists an effective divisor $A_0 \\subset X_C$ such that\n$A_0 \\equiv -mK_{X_C\/C}$ for some $m \\in \\ensuremath{\\mathbb{N}}$.\nThen every $\\mu_i$ is a divisorial contraction onto some \\'etale multisection,\nand $-K_{X_i\/C}$ is nef for all $i \\in \\{ 0, \\ldots, k\\}$. \nIf $X_C$ is Gorenstein, the fibration $X_C \\rightarrow C$ is locally trivial in the analytic topology.\n\\end{proposition}\n\n\\begin{proof}\nNote first that $-K_{X_k\/C}$ is nef: if $\\dim Y=2$ this was shown in Proposition \\ref{propositionMMPnef},\nif $\\dim Y=1$ the Mori fibre space $\\psi$ and the fibration $X_k \\rightarrow C$ identify, \nso $-K_{X_k\/C}$ is relatively ample and nef over $C$, hence nef.\nLet $F$ be a general fibre of $X_k \\rightarrow C$. \n\n{\\em Step 1. 
Description of the MMP.}\nWe denote by $M \\in \\mbox{Pic}^0 C$ a divisor such that\n$$\nA_0 \\in H^0(X_C, -mK_{X_C\/C} + \\varphi^* M).\n$$\nFor every $i \\in \\{0, \\ldots, k\\}$ we denote by $M_i$ the pull-back of $M$ to $X_i$ via the natural map $X_i \\rightarrow C$.\nSetting inductively $A_{i+1}=(\\mu_i)_* A_i$ for every $i \\in \\{0, \\ldots, k-1\\}$\nwe have\n$$\nA_i \\in H^0(X_i, -mK_{X_i\/C} + M_i) \\qquad \\forall \\ i \\in \\{0, \\ldots, k\\}.\n$$\nThe divisor $A_i$ being effective we see that $-K_{X_i\/C}$ is nef if and only if \nits restriction to every irreducible component of $A_i$ is nef.\nNote that if the contraction $\\mu_i$ is divisorial with exceptional divisor $D_i$, we have\n$$\nH^0(X_{i+1}, (\\mu_i)_*(\\sO_{X_i}(-mK_{X_i\/C} + M_i))) = H^0(X_{i+1}, (-mK_{X_{i+1}\/C} + M_{i+1}) \\otimes {\\mathcal J}),\n$$\nwhere ${\\mathcal J}$ is an ideal sheaf whose cosupport is $\\mu_i(D_i)$. In particular $A_{i+1}$ contains $\\mu_i(D_i)$.\nSince $-K_{X_k\/C}$ is nef, the contraction $\\mu_k$ is divisorial.\n\n{\\em 1st case. Suppose that $K_{F}^2=0$.} By Lemma \\ref{lemmapreparation}\nand \\eqref{miyanishi} we have\n$\\psi_* (K_{X_k\/C}^2) \\equiv - m K_{Y\/C}$\nwith $m \\geq 0$. Restricting to the general fibre $F$ the condition $K_F^2=0$ implies that $m=0$.\nThus the pseudoeffective cycle $K_{X_k\/C}^2$\nis numerically equivalent to $\\mu l$ where $l$ is a general $\\psi$-fibre. \nThus we see that $0=(-K_{X_k\/C})^3 = 2 \\mu$.\nHence we have $K_{X_k\/C}^2 \\equiv 0$, in particular the restriction of $-K_{X_k\/C}$ to any component\nof $A_k$ is numerically trivial. If the MMP $X \\dashrightarrow X_k$ is not an isomorphism, then\n$\\mu_k(D_k)$ is not disjoint from $A_k$, a contradiction to \nLemma \\ref{lemmanu2}(ii).\n\n{\\em 2nd case. Suppose that $K_{F}^2>0$.}\nIn this case we can apply Corollary \\ref{corollaryMFS} to see \nthat $X_k \\rightarrow C$ is locally trivial in the analytic topology, in particular $X_k$ is smooth. 
\n\nSuppose for the moment that $-K_{X_i\/C}$ is nef and relatively big for some $i \\in \\{1, \\ldots, k\\}$. \nWe claim that the divisor $A_i$ is a union of irreducible components\n$A_i = \\sum b_{i,l} A_{i,l}$ such that for every $l$ the natural map $A_{i,l} \\rightarrow C$ is surjective\nand $-K_{X_i\/C}|_{A_{i,l}}$ is either numerically trivial or nef and relatively ample, but not big.\n\n{\\em Proof of the claim.}\nSince $-K_{X_i\/C}$ is nef but not big, the restriction $-K_{X_i\/C}|_{A_{i,l}}$ is not big.\nSince $(\\varphi_i)_*(\\omega^{\\otimes -m}_{X_i\/C}) \\otimes M$ is numerically flat we can see as in the proof\nof Proposition \\ref{propositionloctriv} that $A_i \\rightarrow C$ is locally trivial. \nIn particular all the irreducible components surject onto $C$ and\nif $-K_{X_i\/C}|_{A_{i,l}}$ is relatively big for the fibration $A_{i,l} \\rightarrow C$, it is relatively ample.\nThus we are left to show that if $-K_{X_i\/C}$ is numerically trivial on the fibres of $A_{i,l} \\rightarrow C$,\nthen its restriction to $A_{i,l}$ is numerically trivial. Note that in this case $A_{i,l}$\nis contracted by the morphism to the relative anticanonical model \n$$\n\\nu_i: X_i \\rightarrow X_i' \\subset \\ensuremath{\\mathbb{P}}((\\varphi_i)_*(\\omega^{\\otimes -d}_{X_i\/C}))=:\\ensuremath{\\mathbb{P}}(V_i)\n$$ \nwith $d \\gg 0$, and the image\nof $\\nu_i(A_{i,l})$ is an irreducible component of $(X_i')_{sing}$. Since $X_i' \\rightarrow C$ is \nlocally trivial, the curve $\\nu_i(A_{i,l})$ is an \\'etale multisection and a connected component of $(X_i')_{sing}$.\nAfter \\'etale base change we can suppose that $\\nu_i(A_{i,l})$ is a section.\nIf $\\sI_{X_i'}$ is the ideal of $X_i'$ in $\\ensuremath{\\mathbb{P}}(V_i)$ the direct image\n$(\\varphi_i)_* (\\sI_{X_i'} \\otimes \\sO_{\\ensuremath{\\mathbb{P}}(V_i)}(e))$ is numerically flat for $e \\gg 0$ (cf. 
the proof of Theorem \\ref{theoremmain}),\nso $(\\varphi_i)_* (\\sI_{(X_i')_{sing}} \\otimes \\sO_{\\ensuremath{\\mathbb{P}}(V_i)}(e))$ is also numerically flat. \nIn particular the section $\\nu_i(A_{i,l})$ corresponds to a numerically trivial quotient of $V_i$, hence\n$$\n0 = \\sO_{\\ensuremath{\\mathbb{P}}(V_i)}(e) \\cdot (X_i')_{sing} = - e d K_{X_i'\/C} \\cdot (X_i')_{sing}.\n$$\nSince $\\nu_i$ is crepant this proves the claim.\n\nWe will now prove by descending induction that $-K_{X_i\/C}$ is nef and relatively big for all $i \\in \\{1, \\ldots, k\\}$. \nThis is clear for $i=k$, so suppose that it holds for $i+1$.\nSince $-K_{X_{i+1}\/C}$ is nef, the contraction $\\mu_i$ is divisorial\nwith exceptional divisor $D_i$ and $\\mu_i(D_i)$ is contained in $A_{i+1}$.\nBy the claim the irreducible components $A_{i+1,l}$ satisfy the conditions of Lemma\n\\ref{lemmanu2}. Thus $\\mu_i(D_i)$ is a curve surjecting onto $C$\nand the divisor $-K_{X_i\/C}$ is nef on the strict transforms\nof all the irreducible components $A_{i+1,l}$.\nThis already proves that $-K_{X_{i}\/C}|_{A_{i}}$ is nef, unless\n$A_i$ has one irreducible component more than $A_{i+1}$, the exceptional divisor $D_i$. \nClearly $-K_{X_{i}\/C}|_{D_i}$ is relatively ample with respect to $D_i \\rightarrow \\mu_i(D_i)$.\nYet $\\mu_i(D_i)$ surjects onto $C$, so the restriction $-K_{X_{i}\/C}|_{D_i}$ is\nnef over $\\mu_i(D_i)$. This proves that $-K_{X_{i}\/C}|_{D_i}$ is nef, hence $-K_{X_{i}\/C}|_{A_{i}}$ is nef.\nSince $-K_{X_{i}\/C}$ is nef we know by \\cite[Prop.4.11]{PS98} that $\\mu_i$ \nis the blow-up along an \\'etale (multi-)section.\n\n{\\em Step 2. 
$\\varphi$ is locally trivial.} By what precedes, the first step of the MMP is a divisorial\ncontraction $\\holom{\\mu_0}{X_0}{X_1}$ contracting a divisor $D_0$ onto an \\'etale \nmultisection\\footnote{The case of a trivial MMP can be excluded as follows: \nconsider the Mori fibre space $\\psi: X_C = X_k \\rightarrow Y$.\nSince $X_C$ is Gorenstein, the fibration $\\psi$ is a conic bundle \\cite[Thm.7]{Cut88}. \nThe discriminant locus $\\Delta$ is smooth\nby Lemma \\ref{lemmapreparation}, so all the fibres over $\\Delta$ are reducible conics \\cite[Prop.1.8.3)]{Sar82}.\nThus the associated two-to-one cover $\\tilde \\Delta \\rightarrow \\Delta$ is \\'etale, hence $\\tilde \\Delta \\rightarrow C$\nis \\'etale by Lemma \\ref{lemmapreparation}. Arguing as in \\cite[Prop.0.4, Rem.0.5]{PS98} we see that \n$X_C \\times_C \\tilde \\Delta \\rightarrow \\tilde \\Delta$ admits a Mori contraction that blows down exactly one $(-1)$-curve\nin every fibre.}.\nIn particular $-K_{X_1\/C}$ is nef and $\\varphi_1$-big, where $\\holom{\\varphi_1}{X_1}{C}$ is the natural fibration.\nBy Corollary \\ref{corollaryMFS} the variety $X_1$ is smooth, so $X_0$ is smooth.\nMoreover $-K_{X_0\/C} - \\mu_0^* K_{X_1\/C}$ is nef and $\\varphi$-big,\nhence for $m \\in \\ensuremath{\\mathbb{N}}$ the direct image $\\varphi_*(\\omega_{X_0\/C}^{\\otimes -m} \\otimes \\mu_0^* \\omega_{X_1\/C}^{\\otimes -m})$\nis nef \\cite{Kol86}. 
The inclusion $(\\mu_0)_* (\\omega_{X_0\/C}^{\\otimes -m}) \\hookrightarrow \\omega_{X_1\/C}^{\\otimes -m}$ \nyields an inclusion \n$$\n\\varphi_*(\\omega_{X_0\/C}^{\\otimes -m} \\otimes \\mu_0^* \\omega_{X_1\/C}^{\\otimes -m})\n\\hookrightarrow\n(\\varphi_1)_* (\\omega_{X_1\/C}^{\\otimes -2m}).\n$$\nThe sheaf $(\\varphi_1)_* (\\omega_{X_1\/C}^{\\otimes -2m})$ is numerically flat for all $m \\gg 0$ by\nProposition \\ref{propositionkltdirectimage}, so \n$\\varphi_*(\\omega_{X_0\/C}^{\\otimes -m} \\otimes \\mu_0^* \\omega_{X_1\/C}^{\\otimes -m})$ is also numerically flat\nfor all $m \\gg 0$.\nBy the relative base-point free theorem the natural map \n$$\n\\varphi^* \\varphi_*(\\omega_{X_0\/C}^{\\otimes -m} \\otimes \\mu_0^* \\omega_{X_1\/C}^{\\otimes -m})\n\\rightarrow \n\\omega_{X_0\/C}^{\\otimes -m} \\otimes \\mu_0^* \\omega_{X_1\/C}^{\\otimes -m}\n$$\nis surjective for all $m \\gg 0$, so we obtain a birational morphism\n$\\holom{\\mu}{X_0}{X_0'}$ onto a normal projective variety $\\varphi': X_0' \\rightarrow C$\nembedded in $\\varphi': \\ensuremath{\\mathbb{P}}(E_m) \\rightarrow C$\nwhere $E_m := \\varphi_*(\\omega_{X_0\/C}^{\\otimes -m} \\otimes \\mu_0^* \\omega_{X_1\/C}^{\\otimes -m})$\nfor some fixed $m \\gg 0$. 
We can now argue as in the proof of Theorem \\ref{theoremmain}:\ndenoting by $\\sI_{X_0'} \\subset \\sO_{\\ensuremath{\\mathbb{P}}(E_m)}$ the ideal sheaf of $X_0' \\subset \\ensuremath{\\mathbb{P}}(E_m)$,\nwe have for\nevery $d \\gg 0$ an exact sequence\n$$\n0 \\rightarrow (\\varphi')_{*}(\\sI_{X_0'}\\otimes \\mathcal{O}_{\\mathbb{P}(E_{m})}(d)) \n\\rightarrow S^d E_m \\rightarrow \\varphi_*(\\omega_{X_0\/C}^{\\otimes -dm} \\otimes \\mu_0^* \\omega_{X_1\/C}^{\\otimes -dm}) \\rightarrow 0.\n$$\nThus $(\\varphi')_{*}(\\sI_{X_0'}\\otimes \\mathcal{O}_{\\mathbb{P}(E_{m})}(d)) $ \nis numerically flat, so $\\varphi': X_0' \\rightarrow C$ is locally trivial with fibre $F'$ by Proposition \\ref{propositionloctriv}.\n\nThe Cartier divisor $-K_{X_0\/C}$ is $\\mu$-trivial, \nso we have $K_{X_0\/C} = \\mu^* K_{X_0'\/C}$ \\cite[Thm.3.24]{KM98}. Thus $X_0 \\rightarrow X_0'$ is a crepant resolution of\nthe normal projective variety $X_0'$, \nand a smooth $\\varphi$-fibre $F$ is the minimal resolution of the general $\\varphi'$-fibre $F'$.\nIn particular $F$ is unique up to isomorphism, arguing exactly as in Step 2 of the proof of Theorem \\ref{theoremmain}\nwe see that $X_C \\rightarrow C$ is locally trivial with fibre $F$.\n\\end{proof}\n\n\n\\subsection{Main result} \\label{subsectionmainresult}\n\nIn this section we will prove Theorem \\ref{theoremmaintwo}. Proposition \\ref{propositionMMPtotal}\nobviously settles the main part, however we proved the statement\nunder a nonvanishing condition which is not satisfied in general (cf. Remark \\ref{remarkmumford}).\nWe will now show that these properties hold if we start with a fibration onto a torus.\n\n\\begin{lemma} \\label{lemmanonvanishing}\nLet $X$ be a projective manifold such that $-K_X$ is nef.\nLet $\\pi: X \\rightarrow A$ be the Albanese fibration, \nand suppose that $-K_F$ is nef and abundant\\footnote{Cf. 
\\cite{Fuj11} for the relevant definitions.} for the general $\\pi$-fibre $F$.\nThen there exists an effective divisor $A \\subset X$ such that\n$A \\equiv -mK_{X}$ for some $m \\in \\ensuremath{\\mathbb{N}}$.\n\\end{lemma}\n\n\\begin{proof}\nSince $-K_F$ is nef and abundant we know by the relative version of Kawamata's \ntheorem \\cite[Thm.1.1]{Fuj11}\nthat $-K_X$ is $\\pi$-semiample, so\nfor every sufficiently divisible $m \\gg 0$ the natural map\n$$\n\\pi^{*}\\pi_{*}(\\omega_X^{\\otimes -m})\\rightarrow \\omega_X^{\\otimes -m}\n$$\nis surjective. Thus $-mK_X$ induces a morphism\n$\\holom{\\psi}{X}{Y}$\nonto a normal projective variety $\\holom{\\tau}{Y}{A}$ \nsuch that $-K_X \\sim_\\ensuremath{\\mathbb{Q}} \\psi^* H$\nwith $H$ a nef and $\\tau$-ample Cartier divisor. \nSince $\\pi$ is equidimensional \\cite{LTZZ10}, the fibration $\\tau$ is equidimensional.\nBy \\cite[Thm.0.2]{Amb05} there exists a boundary divisor $\\Delta_Y$ on $Y$ such that the pair $(Y, \\Delta_Y)$ is klt\nand $H \\sim_\\ensuremath{\\mathbb{Q}} -(K_Y+\\Delta_Y)$. In particular the variety $Y$ is Cohen-Macaulay, so the equidimensional fibration\n$\\tau$ is flat. Thus we can apply Proposition \\ref{propositionkltdirectimage} to see that for sufficiently divisible $m \\gg 0$, \nthe direct image sheaf\n$$\n\\pi_* (\\omega_X^{\\otimes -m}) \\simeq \\tau_* (\\sO_Y(-m(K_Y+\\Delta_Y)))\n$$\nis a numerically flat vector bundle. By \\cite[Thm.1.18]{DPS94} there exists a subbundle $F \\subset \\pi_* (\\omega_X^{\\otimes -m})$\nsuch that $F$ is given by a unitary representation $\\pi_1(A) \\rightarrow U(\\ensuremath{rk} \\ F)$. The group $\\pi_1(A)$\nis abelian, so the representation splits, i.e. $F$ is a direct sum of numerically trivial line bundles. 
In particular\nthere exists an $M \\in \\mbox{Pic}^0 A$ such that\n$H^0(A, M \\otimes F) \\neq 0$.\n\\end{proof}\n\n\\begin{lemma} \\label{lemmaGRR}\nLet $\\holom{f}{M}{C}$ be a fibration from a normal $\\ensuremath{\\mathbb{Q}}$-factorial threefold with\nat most Gorenstein terminal singularities onto a curve $C$ such that $-K_{M\/C}$\nis nef. Suppose that the general fibre $F$ is rationally connected and $K_F^2=0$.\n\\begin{enumerate}[(i)]\n\\item Then we have\n$c_1(f_!(\\omega^*_{M\/C}))=0$.\n\\item If $h^0(F, -K_F)=1$, there exists an effective divisor $A \\subset M$ \nsuch that $A \\equiv -K_{M\/C}$.\n\\end{enumerate}\n\\end{lemma}\n\n\\begin{remark} \\label{remarkmumford}\nLet $C$ be a curve of genus at least two, and let $U$ be a rank two bundle of degree $0$ on $C$\nsuch that all the symmetric powers $S^m U$ are stable (such vector bundles have been constructed by Mumford).\nSet $M:=\\ensuremath{\\mathbb{P}}(U)$; then $-K_{M\/C}$ is nef, but not numerically equivalent to any effective $\\ensuremath{\\mathbb{Q}}$-divisor.\n\\end{remark}\n\n\\begin{proof}\n{\\em Proof of (i).} By the Grothendieck-Riemann-Roch formula \n\\cite[Thm.15.2]{Ful98}\\footnote{The statement in \\cite{Ful98} is only for a smooth total space,\nbut if $\\holom{\\mu}{M'}{M}$ is a resolution of singularities one checks easily that\n$ch(-K_{M\/C}) td(T_M)=ch(-\\mu^* K_{M\/C}) td(T_{M'})$. Thus the formula holds since\nwe can apply \\cite[Thm.15.2]{Ful98} to $f \\circ \\mu$.}\nwe have\n\\[\ntd(T_C) ch(f_! 
(\\omega_{M\/C}^*)) = f_* (ch(-K_{M\/C}) td(T_M)).\n\\]\nWe will prove that the degree $3$ component of $ch(-K_{M\/C}) td(T_M)$ is equal to $1-g$,\nwhich by the formula above implies the statement.\n\nSince $K_{M\/C}^3=0$ and $K_F^2=0$ we have\n\\begin{equation} \\label{equationeasy}\nK_{M\/C}^2 \\cdot K_M = 0, \\qquad K_{M\/C} \\cdot K_M^2 = 0.\n\\end{equation}\nThe Chern character of $-K_{M\/C}$ is \n$ch(-K_{M\/C}) = 1 - K_{M\/C} + \\frac{1}{2} K_{M\/C}^2$, and the Todd class is\n\\[\ntd(T_M) = 1- \\frac{K_M}{2} + \\frac{K_M^2+c_2(M)}{12} + \\chi(M, \\sO_M).\n\\]\nThus the degree $3$ component of $ch(-K_{M\/C}) td(T_M)$ is given by\n\\begin{equation} \\label{equationeasy2}\n\\chi(M, \\sO_M) - K_{M\/C} \\cdot \\frac{K_M^2+c_2(M)}{12} - \\frac{1}{4} K_{M\/C}^2 \\cdot K_M.\n\\end{equation}\nSince we have $\\chi(M, \\sO_M)=-\\frac{K_M \\cdot c_2(M)}{24}$ and $f^* K_C \\cdot c_2(M) =\n(2g-2) c_2(F)$ and $c_2(F)=12$ we obtain\n$\n- K_{M\/C} \\cdot \\frac{c_2(M)}{12} = 2 \\chi(M, \\sO_M) + 2g-2$. \nUsing \\eqref{equationeasy} the formula \\eqref{equationeasy2} simplifies to\n$3 \\chi(M, \\sO_M) + 2g-2$.\nThe general $f$-fibre is rationally connected, so we have \n$\\chi(M, \\sO_M)=\\chi(C, \\sO_C)=1-g$, hence the degree $3$ component equals $3(1-g)+2g-2=1-g$, as claimed.\n\n\n{\\em Proof of (ii).} Note that for a general fibre $F$ we have \n$h^1(F, -K_F)=h^2(F, -K_F)=0$, so \n$R^j f_* (\\omega_{M\/C}^*)$ is a torsion sheaf for $j \\geq 1$. If $F_0$ is an arbitrary fibre, then by\nSerre duality $h^2(F_0, -K_{F_0})=h^0(F_0, 2 K_{F_0})=0$ since $-K_{F_0}$ is by hypothesis nef\nand not trivial.\nThus we have $R^2 f_* (\\omega_{M\/C}^*)=0$ and\n$$\nf_! 
(\\omega_{M\/C}^*) = f_* (\\omega_{M\/C}^*) - R^1 f_* (\\omega_{M\/C}^*).\n$$\nSince $R^1 f_* (\\omega_{M\/C}^*)$ is a torsion sheaf, statement $(i)$ implies that\n$c_1 (f_* (\\omega_{M\/C}^*)) \\geq 0$.\nThus $f_* (\\omega^*_{M\/C})$ is a line bundle of non-negative degree, so there exists a\nnumerically trivial line bundle $L$ on $C$ such that\n$H^0(C, f_* (\\omega^*_{M\/C}) \\otimes L) \\neq 0$.\n\\end{proof}\n\n\\begin{proposition} \\label{propositioneasy}\nLet $X$ be a projective manifold such that $-K_X$ is nef, and \nlet \\holom{\\pi}{X}{T} be the Albanese map. Suppose that $\\dim T=\\dim X-2$ and the general\n$\\pi$-fibre $F$ is uniruled but not rationally connected. Then there exists a finite \\'etale cover $X' \\rightarrow X$\nsuch that $q(X')=\\dim X-1$. Moreover the fibration $\\pi$ is smooth.\n\\end{proposition}\n\n\\begin{proof}\nLet $X \\dashrightarrow Y$ be a model of the MRC-fibration \\cite{Deb01} such that $Y$ is smooth. Then $Y$ is not uniruled,\nand we denote by $K_Y=P+N$ the divisorial Zariski decomposition. By \\cite[Main Thm.]{Zha05} the positive part $P$ is \nzero\\footnote{The statement in \\cite[Main Thm.]{Zha05} is only $\\kappa(Y)=0$, but the proof consists in showing that $P=0$.}.\nBy \\cite[Cor.3.4]{Dru11} the variety $Y\/T$ has a good minimal model $Y'\/T$.\nBy \\cite[Prop.8.3]{Kaw85} there exists a finite \\'etale cover $\\tilde T \\rightarrow T$ such that $\\tilde T \\times_T Y'$ is a torus.\nSince the irregularity is invariant under the MMP, we see that $q(\\tilde T \\times_T Y)=\\dim X-1$,\nthus $q(\\tilde T \\times_T X)=\\dim X-1$. \nThis proves the first statement. \n\nLet now $X' \\rightarrow X$ be an \\'etale cover such that $q(X')=\\dim X-1$, and let $X' \\rightarrow T'$ be the Albanese map.\nBy Corollary \\ref{corollarymain} we know that $X' \\rightarrow T'$ is a $\\ensuremath{\\mathbb{P}}^1$-bundle. 
In particular the reduction of every $\pi$-fibre\nis a $\ensuremath{\mathbb{P}}^1$-bundle $F_0$ over an elliptic curve $E$. Let $\holom{\psi}{X}{Y}$ be a Mori contraction over $T$;\nthen $\psi$ is a $\ensuremath{\mathbb{P}}^1$-bundle, and the (reductions of) fibres of $\tau: Y \rightarrow T$ are elliptic curves.\nThus we have $K_Y \equiv 0$; by the Beauville-Bogomolov decomposition the fibration $\tau$ is smooth.\nHence $\pi= \tau \circ \psi$ is smooth. \n\end{proof}\n\n\begin{remark} \label{remarkeasy}\nProposition \ref{propositioneasy} also holds if $X$ is a compact K\"ahler threefold: using \cite{Pau12} we see that\nthe base $Y$ of the MRC fibration has $\kappa(Y)=0$. Since $Y$ is a surface we can run the MMP; in the surface case Kawamata's result \cite{Kaw85} follows from the Kodaira-Enriques classification.\n\end{remark}\n\n\n\begin{proof}[Proof of Theorem \ref{theoremmaintwo}]\nLet $F$ be a general $\pi$-fibre. Using Beauville-Bogomolov we easily exclude the case where $F$\nis not uniruled. If $F$ is uniruled but not rationally connected we conclude by Proposition \ref{propositioneasy}. Suppose now that $F$ is rationally connected. \nIf $K_F^2>0$ we conclude by Theorem \ref{theoremmain}, so suppose $K_F^2=0$.\n\n\nWe fix an arbitrary $t \in T$ \nand a general smooth curve $C \subset T$ such that $t \in C$.\nBy Corollary \ref{corollaryreductioncurve} the preimage\n$\fibre{\pi}{C}$ is a normal variety with at most canonical singularities. The divisor \n$-K_{\fibre{\pi}{C}\/C}$ is Cartier and nef, so if $X_C \rightarrow \fibre{\pi}{C}$ is a terminal $\ensuremath{\mathbb{Q}}$-factorial model \n\cite[Thm.6.23, Thm.6.25]{KM98} the divisor \n$-K_{X_C\/C}$ is Cartier and nef. 
By Proposition \ref{propositionMMPtotal} the fibration $X_C \rightarrow C$ is locally trivial if we prove\nthat there exists an effective $\ensuremath{\mathbb{Q}}$-divisor $A_0$ such that $A_0 \equiv -mK_{X_C\/C}$ with $m \in \ensuremath{\mathbb{N}}$.\nIf $-K_F$ is not abundant this holds by Lemma \ref{lemmaGRR} (ii).\nIf $-K_F$ is nef and abundant we know by Lemma \ref{lemmanonvanishing} that there exists an effective divisor $A$ on $X$\nsuch that $A \equiv -mK_X$. Since $C \subset T$ is general, the restriction $A|_{\fibre{\pi}{C}}$ is an effective divisor \nthat is numerically equivalent to $-mK_{\fibre{\pi}{C}\/C}$. The pull-back of this divisor to $X_C$ then gives $A_0$.\n\nThus we know that $X_C \rightarrow C$ is locally trivial. In particular any curve in a fibre of $X_C \rightarrow C$ deforms into a general fibre,\nhence $X_C \rightarrow \fibre{\pi}{C}$ is an isomorphism.\nThus $X_C \simeq \fibre{\pi}{C} \rightarrow C$ is locally trivial.\n\end{proof}\n\nThe proof of Theorem \ref{theoremmaintwo} would be much simpler if we could classify\nmanifolds such that $-K_X$ is nef but not semiample. The following example shows that this is non-trivial\neven for threefolds, thereby correcting \cite[p.498]{PS98} and \cite[p.600]{Pet12}.\n\n\begin{example}\nLet $C$ be an elliptic curve, and let $L_0 \in \mbox{Pic}^0 C$ be a line bundle of degree $0$ that is not torsion.\nSet $L_1 := L_2 := \sO_C$ and $L_3 := L_0^{\otimes -2}$. Then $V:= \oplus_{i=0}^3 L_i$ is a numerically flat\nvector bundle of rank four, and we denote by $\holom{\psi}{\ensuremath{\mathbb{P}}(V)}{C}$ the projectivisation. 
The vector bundle $S^3 V$ contains a subvector bundle \n$$\n(L_0^{\\otimes 2} \\otimes L_3) \\oplus L_1^{\\otimes 3} \\oplus L_2^{\\otimes 3} \\simeq \\sO_C^{\\oplus 3},\n$$ \nso $\\sO_{\\ensuremath{\\mathbb{P}}(V)}(3)$ has global sections corresponding fibrewise to the degree three monomials\n$\nX_0^2 X_3, \\ X_1^3, \\ X_2^3$.\nThe polynomial $X_0^2 X_3 + X_1^3 + X_2^3$ defines a cubic surface in $\\ensuremath{\\mathbb{P}}^3$ that is normal and has \na unique singular point in $(0:0:0:1)$, this point is of type $D_4$ \\cite[Case C]{BW79}.\nThus if we denote by $X \\subset \\ensuremath{\\mathbb{P}}(V)$ the hypersurface defined by the global section\nof $\\sO_{\\ensuremath{\\mathbb{P}}(V)}(3)$ corresponding to this polynomial, we see that $X$ is normal with at most canonical singularities.\nMoreover we have\n$$\n\\omega_X^* \\simeq (\\psi^* L_0 \\otimes \\sO_{\\ensuremath{\\mathbb{P}}(V)}(1))|_X,\n$$\nso $-K_X$ is nef. The singular locus of $X$ is the curve $C_0$ defined fibrewise by $X_0=X_1=X_2=0$, so it corresponds to the quotient $V \\rightarrow L_3.$\nSince $L_3 = L_0^{\\otimes -2}$ we see that\n$\\omega^*_X|_{C_0} \\simeq L_0^*.$ \nThe line bundle $L_0$ is not torsion, so we obtain that $C_0 \\subset {\\rm Bs} |-mK_X|$ for all $m \\in \\ensuremath{\\mathbb{N}}$. In particular $-K_X$ is not semiample.\n\nLet now $X' \\rightarrow X$ be a terminal model obtained by taking fibrewise the minimal resolution, then\n$X'$ is smooth and $-K_{X'}$ is nef and not semiample. One checks easily that $X'$ is not a product, even after finite \\'etale cover.\n\\end{example}\n\n\\begin{proof}[Proof of Corollary \\ref{corollarykaehler}]\nIf $X$ is projective we conclude by Corollary \\ref{corollarymain} and Theorem \\ref{theoremmaintwo}.\nThus we are left to deal with the case where $X$ is not projective and $q(X)=1$. Then the general fibre $F$ of\n$\\holom{\\pi}{X}{T}$ is not rationally connected, since otherwise $H^2(X, \\sO_X)=0$. 
If $F$ is uniruled we apply \nRemark \ref{remarkeasy}. If $F$ is not uniruled, the canonical bundle $K_X$ is pseudoeffective \cite{Bru06}.\nThus $K_X \equiv 0$ and we conclude by Beauville-Bogomolov.\n\end{proof}\n\n\begin{appendix}\n\n\section{A Hovanskii-Teissier inequality} \label{appendixinequality}\n\nFor the convenience of the reader, we give the proof of the Hovanskii-Teissier concavity inequality for arbitrary compact K\"ahler manifolds,\nwhich was first proved in \cite{Gro}. The proof here is a direct consequence of \cite[Thm A, C]{DN06}.\n\n\begin{proposition} \label{propositionHT}\nLet $(X,\omega_{X})$ be a compact K\"ahler manifold of dimension $n$, \nand let $\alpha$, $\beta$ be two nef classes. For every $i, j, k, s \in \ensuremath{\mathbb{N}}$ we\nhave\n\begin{equation} \label{equationseven}\n\int_{X}(\alpha^{i}\wedge\beta^{j}\wedge\omega_{X}^{n-i-j})\n\geq \n(\int_{X}\alpha^{i-k}\wedge\beta^{j+k}\wedge\omega_{X}^{n-i-j})^{\frac{s}{k+s}}\n\cdot(\int_{X}\alpha^{i+s}\wedge\beta^{j-s}\wedge\omega_{X}^{n-i-j})^{\frac{k}{k+s}}.\n\end{equation}\n\end{proposition}\n\n\begin{proof}\nLet $\omega_{1},\cdots, \omega_{n-2}$ be $n-2$ arbitrary K\"ahler classes. 
\nThanks to \\cite[Thm.A]{DN06}, \nthe bilinear form on $H^{1,1}(X)$\n$$Q([\\lambda], [\\mu])=\\int_{X}\\lambda\\wedge\\mu\\wedge\\omega_{1}\\wedge\\cdots\\wedge\\omega_{n-2}\n\\qquad\\lambda, \\mu\\in H^{1,1}(X)$$\nis of signature $(1, h^{1,1}-1)$.\nSince $\\alpha$, $\\beta$ are nef classes,\nthe function $f(t)=Q (\\alpha+t\\beta, \\alpha+t\\beta)$ is indefinite on $\\mathbb{R}$ \nif and only if $\\alpha$ and $\\beta$ are linearly independent.\nTherefore \n\\begin{equation}\\label{equationaddapen}\n\\int_{X}(\\alpha\\wedge\\beta\\wedge\\omega_{1}\\wedge\\cdots\\wedge\\omega_{n-2})\\geq \n(\\int_{X}\\alpha^{2}\\wedge\\omega_{1}\\wedge\\cdots\\wedge\\omega_{n-2})^{\\frac{1}{2}}\n\\cdot(\\int_{X}\\beta^{2}\\wedge\\omega_{1}\\wedge\\cdots\\wedge\\omega_{n-2})^{\\frac{1}{2}}.\n\\end{equation}\n\nIf we let $\\omega_{1}, \\cdots ,\\omega_{i-1}$ tend to $\\alpha$,\nlet $\\omega_{i},\\cdots,\\omega_{i+j-2}$ tend to $\\beta$\nand take $\\omega_{i+j-1}=\\cdots=\\omega_{n-2}=\\omega_{X}$ in \\eqref{equationaddapen},\nwe have\n$$\\int_{X}(\\alpha^{i}\\wedge\\beta^{j}\\wedge\\omega_{X}^{n-i-j})\\geq \n(\\int_{X}\\alpha^{i-1}\\wedge\\beta^{j+1}\\wedge\\omega_{X}^{n-i-j})^{\\frac{1}{2}}\n\\cdot(\\int_{X}\\alpha^{i+1}\\wedge\\beta^{j-1}\\wedge\\omega_{X}^{n-i-j})^{\\frac{1}{2}} .$$\nThen \\eqref{equationseven} is an easy consequence of the above inequality.\n\\end{proof}\n\n\\begin{remark}\\label{corHT}\nIt is easy to see that the equality holds in \\eqref{equationaddapen} if and only if $\\alpha$ and $\\beta$ are colinear. \n\\end{remark}\n\n\\end{appendix}\n\n\n\n\\def$'${$'$}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\n{}Usually, solutions of classical string equations of motion are studied in the case of\nstrings propagating in some dynamically-inactive background fields.\nOn the other hand, one can also consider strings coupled to dynamic local fields with\ntheir kinetic terms included in the total action. 
In this paper\nwe consider the following system of a classical bosonic string coupled to a massless\nscalar field:\n\begin{equation}\label{action.string}\n S = -{1\over 2} \int d^2\sigma \left[T + \lambda\phi(X(\sigma))\right]h^{1\/2}h^{\alpha\beta}\partial_\alpha X^\mu\partial_\beta X_\mu -{1\over 8\pi}\int d^Dx~\n \partial^\mu\phi(x)\partial_\mu\phi(x)\n\end{equation}\nHere $T$ is the string tension; the string is described by its $D$-dimensional spacetime coordinates $X^\mu(\sigma)$, $\mu = 1, \dots, D$,\nwhich are defined on the worldsheet parametrized by the worldsheet coordinates\n$\sigma^\alpha = (\tau, \sigma)$; $h^{\alpha\beta}(\sigma)$ is the worldsheet metric, and $h = \det(-h^{\alpha\beta})$;\nthe scalar field $\phi$ lives in the $D$-dimensional spacetime; $\lambda$ measures the strength of the\ncoupling between the scalar field and the bosonic string.\n\n{}A similar system was considered by Dabholkar and Harvey \cite{DH}. We will study the renormalizability\nof this system using the methods developed in \cite{EM} for the classical point-particle electrodynamics.\nThese methods can also be applied to the classical equations of motion of the string, which turn out to be non-renormalizable for $D>4$.\n\n{}The analysis in the subsequent sections allows one to see in a rather\nsimple way that the string's being an extended object, with a natural\nultraviolet cut-off at the fundamental scale $(\alpha^\prime)^{1\/2}$, is alone not sufficient for its finiteness.\nIt is the unique combination of its excitations that provides the non-renormalization properties\nof the string. This combination of the local fields appearing in the effective action,\ncontaining both massless and massive modes, is a consequence of the conformal\ninvariance, which is crucial for the consistency of the theory.\n\n{}The paper is organized as follows. In Section 2 we review the renormalization of\nclassical electrodynamics based on \cite{EM}. 
Section 3 uses these methods to renormalize\nthe classical string in four dimensions. In Section 4 we argue that the\ntheory is not renormalizable in $D > 4$. We briefly conclude in Section 5.\n\n\\section{Renormalization of Classical Electrodynamics}\n\nConsider a relativistic point particle in $D$ dimensions coupled to the electromagnetic\nfield:\n\\begin{eqnarray}\\label{S.EM}\n {\\cal S} =&& -m \\int d\\tau \\left(U^\\mu(\\tau) U_\\mu(\\tau)\\right)^{1\/2} - e\\int d\\tau \\left[{\\cal A}^\\mu(X(\\tau)) + A^\\mu(X(\\tau))\\right] U_\\mu(\\tau) \\nonumber\\\\\n &&-{1\\over 16\\pi}\\int d^D x~ F^{\\mu\\nu}(x)F_{\\mu\\nu}(x)\n\\end{eqnarray}\nHere $m$ is the mass of the particle; $X^\\mu(\\tau)$ are its coordinates on the worldline\nparametrized by the proper time $\\tau$; $U^\\mu(\\tau) = dX^\\mu(\\tau)\/d\\tau$; $e$ is the electric charge of the\nparticle, which is coupled to the electromagnetic field; $A^\\mu(x)$ is the potential of the\nfield created by the particle itself; ${\\cal A}^\\mu(x)$ is the potential of a dynamically-inactive external electromagnetic\nfield. The electromagnetic field tensor for the self-field $A^\\mu(x)$ is given by\n\\begin{equation}\n F^{\\mu\\nu}(x) = \\partial^\\mu A^\\nu(x) - \\partial^\\nu A^\\mu(x)\n\\end{equation}\nIt enters the kinetic term in the action (\\ref{S.EM}). 
Note that the analogous quantity for the external field\n\\begin{equation}\n {\\cal F}^{\\mu\\nu}(x) = \\partial^\\mu {\\cal A}^\\nu(x) - \\partial^\\nu {\\cal A}^\\mu(x)\n\\end{equation}\ndoes not enter the action (\\ref{S.EM}) as the external field is dynamically inactive.\n\n{}Due to the reparametrization invariance of the action (\\ref{S.EM}), we have a constraint, which can be chosen as (here $U^2(\\tau) = U^\\mu(\\tau) U_\\mu(\\tau)$):\n\\begin{equation}\n U^2(\\tau) = 1\n\\end{equation}\nThe equations of motion then read:\n\\begin{eqnarray}\\label{self.EM}\n &&m~U^{\\prime\\mu}(\\tau) = e\\left[F^{\\mu\\nu}(X(\\tau)) + {\\cal F}^{\\mu\\nu}(X(\\tau))\\right]U_\\nu(\\tau)\\\\\n &&\\partial_\\nu F^{\\mu\\nu}(x) = -4\\pi e \\int_{-\\infty}^{+\\infty} d\\tau~\\delta\\left(x - X(\\tau)\\right) U^\\mu(\\tau)\\label{F}\n\\end{eqnarray}\nEq. (\\ref{self.EM}) describes the self-interaction of the particle and its interaction with the external field.\n\n{}In the Lorentz gauge $\\partial^\\mu A_\\mu(x) = 0$, Eq. 
(\\ref{F}) has the following solution\n\\begin{equation}\n A^\\mu(x) = 4\\pi e \\int_{-\\infty}^{+\\infty} d\\tau~G^{-}\\left(x - X(\\tau)\\right)U^\\mu(\\tau)\n\\end{equation}\nHere $G^-(x-y)$ is the retarded Green's function satisfying the following equation and boundary condition:\n\\begin{eqnarray}\n &&\\partial^2 G^-(x-y) = \\delta(x-y)\\\\\n &&G^-(x-y) = 0,~~~x^0 < y^0\n\\end{eqnarray}\nWe will also need another Green's function defined as:\n\\begin{equation}\\label{combo}\n {\\overline G}(x - y) = {1\\over 2}\\left[G^-(x-y) + G^-(y-x)\\right] = {\\cal G}_D\\left((x-y)^2\\right)\n\\end{equation}\nwhere we use the fact that this Green's function depends only on the quantity $(x-y)^2$ (and\nwe also explicitly indicate the $D$ dependence).\n\n{}Using (\\ref{combo}), we have\n\\begin{eqnarray}\\label{FU}\n &&F^{\\mu\\nu}(X(\\tau))U_\\nu(\\tau) = 16\\pi e\\int_{-\\infty}^\\tau d\\tau^\\prime~{\\cal G}^\\prime_D\\left((X(\\tau)-X(\\tau^\\prime))^2\\right) K^\\mu(\\tau, \\tau^\\prime)\\\\\n &&K^\\mu(\\tau, \\tau^\\prime) = \\left[\\left(X^\\mu(\\tau) - X^\\mu(\\tau^\\prime)\\right)U^\\nu(\\tau^\\prime) - (\\mu\\leftrightarrow\\nu)\\right]U_\\nu(\\tau)\n\\end{eqnarray}\n\n{}We can now see that the quantity $F^{\\mu\\nu}(X(\\tau))$, i.e., the electromagnetic field tensor on the worldline, diverges.\nThis is due to the divergent nature of the Green's\nfunction. Therefore, it should be regularized and the divergences, if possible, should be\nremoved via renormalization. From this viewpoint, renormalization\nof the classical electrodynamics is analogous to renormalization in quantum\nfield theory: it is needed because the self-interaction of the particle is divergent. But\nthe analogy stops here. 
Thus, the renormalized equations\nof motion of the classical point particle cannot be derived from the action principle.\n\n{}The explicit form of the function ${\\cal G}_D(z)$ reads:\n\\begin{eqnarray}\n &&{\\cal G}_D(z) = \\theta^{(k)}(z) \/ 4\\pi^k,~~~D = 2k+2,~~~k=0,1,2,\\dots\\\\\n &&{\\cal G}_D(z) = (-1)^k Q^{(k)}(z) \/ \\pi^k,~~~D = 2k+3,~~~k=0,1,2,\\dots\n\\end{eqnarray}\nwhere the superscript $(k)$ means the $k$-th derivative w.r.t. $z$, and\n\\begin{eqnarray}\n &&\\theta(z) = \\int_{-\\infty}^z d\\alpha~\\delta(\\alpha)\\\\\n &&Q(z) = (4\\pi z^{1\/2})^{-1}\n\\end{eqnarray}\n\n{}It is convenient to use different regularizations when $D$ is even and when $D$ is\nodd. When $D$ is even, we can replace the quantity $(X(\\tau)-X(\\tau^\\prime))^2$ in (\\ref{FU}) by $(X(\\tau)-X(\\tau^\\prime))^2 - \\varepsilon^2$, where $\\varepsilon$ is a positive infinitesimal parameter. Then $F^{\\mu\\nu}(X(\\tau))U_\\nu(\\tau)$ is equal to:\n\\begin{eqnarray}\n &&{\\cal O}(\\varepsilon),~~~D = 2\\\\\n &&-eU^{\\prime\\mu}(\\tau)\/2\\varepsilon + {2\\over 3}e\\left[U^{\\prime\\prime\\mu}(\\tau) + U^\\mu(\\tau)~U^{\\prime 2}(\\tau)\\right] + {\\cal O}(\\varepsilon),~~~D=4\\label{4D}\\\\\n &&-eU^{\\prime\\mu}(\\tau)\/4\\pi\\varepsilon^3 + 3e\\left[U^{\\prime\\prime\\mu}(\\tau) + 3U^\\mu(\\tau)~U^{\\prime 2}(\\tau)\/2\\right]^\\prime\/16\\pi\\varepsilon + \\nonumber\\\\\n &&~~~~~~~+\\mbox{finite terms},~~~D=6\n\\end{eqnarray}\nand so forth. 
When $D$ is odd, we can replace the integration limit $\tau$ in (\ref{FU}) by\n$\tau-\varepsilon$ and obtain for the same quantity:\n\begin{eqnarray}\n &&-eU^{\prime\mu}(\tau)\ln(\rho\/\varepsilon) + \mbox{finite terms},~~~D=3\\\n &&-3eU^{\prime\mu}(\tau)\/4\pi\varepsilon^2 + e\left[U^{\prime\prime\mu}(\tau) + U^\mu(\tau)~U^{\prime 2}(\tau)\right]\/\pi\varepsilon - 3e\ln(\rho\/\varepsilon)\times \nonumber\\\n &&~~~~~~~\times\left[U^{\prime\prime\mu}(\tau) + 3U^\mu(\tau)~U^{\prime 2}(\tau)\/2\right]^\prime\/8\pi + \mbox{finite terms},~~~D=5\n\end{eqnarray}\nand so forth. Here $\rho$ is an arbitrary positive infrared cut-off parameter which appears\nbecause of the logarithmic divergences.\n\n{}So, we see that in two dimensions there is no divergence and, moreover, the point\nparticle does not radiate any electromagnetic waves. The reason for this is that in 2D a\nfree electromagnetic field is a pure gauge, although the Coulomb interaction is nontrivial.\nThe electromagnetic field follows the point particle preserving its constant energy and\nspatial shape.\n\n{}In three and four dimensions the self-interaction term is divergent but nonetheless\nin both cases the divergences that appear have the form of the kinetic term in the initial\nLorentz equations of motion (\ref{self.EM}). 
Therefore, we can eliminate these divergences via\nmass renormalization:\n\\begin{eqnarray}\n &&{\\widetilde m} = m + e^2\\ln(\\rho\/\\varepsilon),~~~D=3\\\\\n &&{\\widetilde m} = m + e^2\/2\\varepsilon,~~~D=4\n\\end{eqnarray}\nSince no other divergences are present in these two cases, the theory is renormalizable.\nIn particular, in four dimensions using (\\ref{4D}) we obtain the well-known Lorentz equation\nwith the radiation term \\cite{LL}:\n\\begin{equation}\n {\\widetilde m}~U^{\\prime\\mu}(\\tau) = e {\\cal F}^{\\mu\\nu}(X(\\tau)) U_\\nu(\\tau) + {2\\over 3}e^2\\left[U^{\\prime\\prime\\mu}(\\tau) + U^\\mu(\\tau)~U^{\\prime 2}(\\tau)\\right]\n\\end{equation}\nIn $D = 5,6,\\dots$ we can also eliminate one divergence by renormalizing\nthe mass of the particle, but there are other divergences which cannot be\nremoved via renormalization as the required terms are absent in the\noriginal equations of motion.\n\n\\section{Renormalization of String in Four Dimensions}\n\n{}In this section we discuss a renormalization procedure analogous to that\nin the previous section, but for the case of the four-dimensional string. 
The equations\nof motion for the string coordinates and the scalar field following from the action (\\ref{action.string}) read:\n\\begin{eqnarray}\\label{EOM.string}\n &&\\left[T + \\lambda\\phi(X(\\sigma))\\right]\\partial^2X^\\mu(\\sigma) = \\nonumber\\\\\n &&~~~~~~~=\\lambda\\left\\{{1\\over 2}~\\partial^\\mu\\phi(X(\\sigma))(\\partial X(\\sigma))^2 - \\partial^\\nu\\phi(X(\\sigma))\\partial_\\alpha X^\\mu(\\sigma)\\partial^\\alpha X_\\nu(\\sigma)\\right\\}\\\\\n &&\\partial^2\\phi(x) = 2\\pi\\lambda\\int d^2\\sigma~\\delta(x - X(\\sigma))(\\partial X(\\sigma))^2\n\\end{eqnarray}\nHere the gauge freedom has been used to fix the constraint as follows:\n\\begin{equation}\n \\partial_\\alpha X^\\mu(\\sigma)\\partial_\\beta X_\\mu(\\sigma) = {1\\over 2}~\\eta_{\\alpha\\beta}~(\\partial X(\\sigma))^2\n\\end{equation}\nSo, the worldsheet is flat: $h^{\\alpha\\beta} = \\eta^{\\alpha\\beta}$, where $\\eta^{\\alpha\\beta}$ is the Minkowski metric. Note that we could also include a non-dynamical external scalar field in the action (\\ref{action.string}), but this is not crucial here.\n\n{}Using the techniques discussed in the previous section, we have (note that here we are working in four dimensions):\n\\begin{equation}\\label{phi1}\n \\phi(X(\\sigma)) = \\lambda\\int d\\sigma^\\prime\/|\\sigma - \\sigma^\\prime| + \\mbox{finite terms}\n\\end{equation}\nThe one-dimensional integral over the spatial coordinate $\\sigma^\\prime$ in (\\ref{phi1}) is taken along the string. 
Also, the quantity\n\\begin{eqnarray}\n &&{1\\over 2}~\\partial^\\mu\\phi(X(\\sigma))(\\partial X(\\sigma))^2 - \\partial^\\nu\\phi(X(\\sigma))\\partial_\\alpha X^\\mu(\\sigma)\\partial^\\alpha X_\\nu(\\sigma)=\\nonumber\\\\\n &&~~~~~~~={\\lambda\\over 2}~\\partial^2 X^\\mu(\\sigma)\\int d\\sigma^\\prime\/|\\sigma - \\sigma^\\prime| + \\mbox{finite terms}\n\\end{eqnarray}\ncan be computed using the following formula:\n\\begin{eqnarray}\n &&\\int d\\sigma^\\prime\\int_{-\\infty}^\\tau d\\tau^\\prime~(\\sigma- \\sigma^\\prime)^\\alpha (\\sigma- \\sigma^\\prime)^\\beta ~\\delta^\\prime((X(\\sigma) - X(\\sigma^\\prime))^2) = \\nonumber\\\\\n &&~~~~~~~=-[(\\partial X(\\sigma))^2]^{-2}\\eta^{\\alpha\\beta} \\int d\\sigma^\\prime\/|\\sigma - \\sigma^\\prime| + \\mbox{finite terms}\n\\end{eqnarray}\nThus, the equation of motion (\\ref{EOM.string}) reads\n\\begin{equation}\n \\left\\{T + {\\lambda^2\\over 2}\\int d\\sigma^\\prime\/|\\sigma - \\sigma^\\prime| \\right\\}\\partial^2 X^\\mu(\\sigma) = \\mbox{finite nonlocal terms}\n\\end{equation}\n\n{}So, the situation for the four-dimensional string is analogous to that for the relativistic point particle in $D = 3$ or $D = 4$ discussed earlier. There is\nonly one divergence of the same form as the kinetic term in the original equations\nof motion. The logarithmically divergent integral can be regularized as follows:\n\\begin{equation}\n \\int d\\sigma^\\prime\/|\\sigma - \\sigma^\\prime| = \\int_{\\sigma - \\rho}^{\\sigma-\\varepsilon} d\\sigma^\\prime\/(\\sigma - \\sigma^\\prime) +\n \\int_{\\sigma+\\varepsilon}^{\\sigma + \\rho} d\\sigma^\\prime\/(\\sigma^\\prime - \\sigma) = 2\\ln(\\rho\/\\varepsilon)\n\\end{equation}\nThe infrared cut-off parameter $\\rho$ is analogous to that introduced in the previous section. 
The\nrenormalized string tension thus becomes:\n\\begin{equation}\n {\\widetilde T} = T + \\lambda^2\\ln(\\rho\/\\varepsilon)\n\\end{equation}\nThere is no other divergence in this case and, therefore, the string equations of motion\nare renormalizable in 4D.\n\n\\section{Strings in Higher Dimensions}\n\n{}Now we turn to the renormalizability of the string\nequations of motion in higher dimensions. First note that in the string case we\nneed not regularize the integral over the $\\tau^\\prime$ variable appearing in such quantities as\n$\\phi(X(\\sigma))$ as the extended spatial dimension of the string plays the role of the regulator for this integral. But then we\nhave to regularize the remaining integral over the $\\sigma^\\prime$ variable. So, based on the\nresults obtained in Section 2, we can see that in five dimensions there are two\ndivergences of the form $1\/\\varepsilon$ and $\\ln(\\varepsilon)$. In six dimensions we have the $1\/\\varepsilon^2$, $1\/\\varepsilon$ and $\\ln(\\varepsilon)$\ndivergences. And so forth for the higher dimensions. In 3D the string is finite.\n\n{}The preceding analysis shows that the string is not renormalizable if $D > 4$.\nIn the case of the point particle\nwe can always eliminate one of the appearing divergences via mass\nrenormalization. However, in the higher-than-four-dimensional string case no divergence can be\nremoved via renormalization.\n\n{}Thus, consider the leading divergences in $\\phi(X(\\sigma))$ and the r.h.s. of (\\ref{EOM.string}) but in $D > 4$\ndimensions. 
They enter the equations of motion in the following form:\n\begin{equation}\label{HigherD}\n \propto \lambda^2 \varepsilon^{4-D} [(\partial X(\sigma))^2]^{(4-D)\/2} \partial^2 X^\mu(\sigma)\n\end{equation}\nTherefore, this divergence cannot be removed via string tension renormalization owing to the extra factor $[(\partial X(\sigma))^2]^{(4-D)\/2}$, which appears due to the following rescaling property of the Green's function in $D$ dimensions:\n\begin{equation}\n {\cal G}_D(\gamma z) = \gamma^{(2-D)\/2}{\cal G}_D(z)\n\end{equation}\nNote that for the point particle the quantity analogous to $(\partial X(\sigma))^2$ is $U^2(\tau) = 1$, which is why for the point particle the leading divergence can always be removed via mass renormalization.\n\n\section{Concluding Remarks}\n\n{}Let us briefly summarize the discussion in the previous sections. We have seen that\nclassical electrodynamics is a renormalizable theory only in $D < 5$ spacetime dimensions.\nThe same holds for the bosonic string coupled to the massless scalar\nfield. It is worthwhile to compare our results with the work \cite{DH}, in which it was argued\nthat in four dimensions the superstring tension is not renormalized at all. Indeed, there\nis a combination of the dilaton $\phi$, metric $g_{\mu\nu}$ and antisymmetric tensor $B_{\mu\nu}$ fields,\ncoupled to the bosonic string via a non-linear sigma-model action, for which the\ntension of the string is not renormalized at all at the lowest order of the perturbation\ntheory. For the point particle, too, there is a certain combination of the electric charge and the mass of the\nparticle, for which in four dimensions the mass of the particle is not renormalized at\nall due to the cancellation of the contributions from the electromagnetic and gravitational self-interactions.\nHowever, in the case of the string we have shown that in $D > 4$ there are some other\ndivergences unrelated to the string tension renormalization. 
So, to achieve finiteness of the string, we\nmust include the massless modes as well as an infinite tower of massive modes. For instance, the extra factor\n$[(\partial X(\sigma))^2]^{(4-D)\/2}$ in (\ref{HigherD}) in $D > 5$ indicates that there are additional modes\nto be considered. This fits the ideology of \cite{Lepage}: non-renormalizability\nof a physical theory indicates that there is some new underlying physics to be included.\nOur analysis of the renormalizability of classical string theory shows that the extended\nnature of the string alone is insufficient for finiteness. The latter requires inclusion of\nall the excitation modes of the string. On the\nother hand, the way the string excitations combine together in the local field theory\naction functional is determined by the conformal invariance. In particular,\nthe special combination of the dilaton $\phi$, metric $g_{\mu\nu}$ and antisymmetric tensor $B_{\mu\nu}$ fields\ndiscussed above is a consequence\nof the conformal invariance requirement.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}\n{"text":"\section{INTRODUCTION}\n\nObject manipulation tasks performed by autonomous robots require the robot to recognize and locate the object in the 3D space, typically through visual perception. Precise grasping tasks necessitate an estimation of the full 6-DoF pose of task-relevant objects from the input visual data. In the past few years, powerful CNNs have been successfully employed for this purpose, enabling object pose estimation under difficult circumstances \cite{pavlakos17object3d, xiang2018posecnn}. However, the performance of most CNN-based state-of-the-art methods is dependent on the availability of large sets of real-world data --- labeled with the ground-truth pose for each object instance --- to supervise the training of the involved neural networks. 
This training dataset may easily consist of tens of thousands or more labeled data points per object in varying backgrounds and environments \\cite{xiang2018posecnn, zeng2017multi:zengapc, rad2017bb8:bb8}. In such cases, manual labeling of raw samples becomes unreasonable and impractical. Other pose estimation methods that claim to use artificially generated synthetic data are either still partially dependent on real data for fine-tuning or require very high quality textured 3D object models \\cite{tremblay2018corl:dope}. Nevertheless, real-world annotated data is indispensable for a meaningful and comprehensive evaluation of any pose estimator.\n\n\nTechniques for (semi-) automating the process of training data generation has gained research interest in recent years \\cite{rennie2016dataset, wong2017segicp, hodan2017t:tless, hinterstoisser2012model:linemod, suzui2019toward}. However, their reliance on additional hardware such as multi-sensor rigs, turntables with markers, or motion capture systems limits their application in arbitrary environments. Besides, hardware setup and sensor calibration may in itself be time-consuming. LabelFusion \\cite{marion2018label} is a recent but popular method that allows raw data capture through a hand-held RGB-D sensor. Yet, its dependence on the availability of pre-built object meshes restricts operation on a wider object set. EasyLabel \\cite{suchi2019easylabel} attempts to overcome the dependence on pre-built models. However, it cannot produce data on a scale that is necessary for training deep networks, as the method works on individual snapshots of the scene and there is no propagation of labels. \n\nWe address these problems by proposing a technique to produce large amounts of pose-annotated, real-world data for rigid objects that uses minimal human input and does not require any previously built 3D shape or texture model of the object. 
Our method also simplifies the capture of raw (unlabeled, unprocessed RGB-D) data due to independence from turntables, marker fields, motion capture setups, and manipulator arms. The key idea is to use sparse annotations provided manually and systematically by a human user over multiple scenes, and combine them under geometrical constraints to produce the labels. We define ``pose labels'' as the 2D bounding-box labels, 2D keypoint labels and the pixel-wise mask labels (although other types of labels can be deduced too). Hence, our method can effectively be used for generating large training datasets -- for 2D\/3D object detection, 2D\/3D keypoint estimation, 6-DoF object pose estimation, and semantic segmentation -- for novel objects in real scenarios outside of popular datasets such as LineMOD \cite{hinterstoisser2012model:linemod} and T-LESS \cite{hodan2017t:tless}.\n\nWe demonstrate the effectiveness of our method by generating keypoint labels, pixel-wise mask labels, and bounding-box labels for more than 150,000 RGB images of 11 unique objects (7 in-house + 4 YCB-Video \cite{xiang2018posecnn} objects) in a total of 80 (single and multi-object) scenes --- in only a few minutes of manual annotation for each object. We evaluate the accuracy of the generated labels and the sparse model using ground-truth CAD models. Subsequently, we train and evaluate (1) a keypoint-based 6-DoF pose estimation pipeline and (2) an object segmentation CNN using the resulting labeled dataset. Our proposed tool with the graphical user interface is published as a public GitHub repository \footnote[2]{\url{https:\/\/github.com\/rohanpsingh\/RapidPoseLabels}}.\n\n\section{RELATED WORK}\n\nVarious works on developing open-source 6-DoF pose labeled datasets have adopted different approaches for automating the process of labeling RGB-D data \cite{hinterstoisser2012model:linemod, hodan2017t:tless, rennie2016dataset, hua2016scenenn, dai2017scannet}. 
The popular Rutgers dataset \\cite{rennie2016dataset} was generated by mounting a Kinect sensor to the end joint of a Motoman robot arm. Annotation was done by a human aligning the 3D model of each object to the corresponding RGB-D point cloud scene. This approach does produce high-quality ground-truth data, albeit at the cost of laborious manual involvement. The T-LESS dataset \\cite{hodan2017t:tless}, containing about 50K RGB-D images of 30 textureless objects, was developed with an arguably complicated setup involving a turntable with marker-field and a triplet of sensors attached to a jig. The method for dataset generation described in \\cite{zeng2017multi:zengapc} works through background subtraction, first recording data in a scene without the object and then with the object. This can produce pixel-wise segmentation labels even for model-less objects, but cannot be used for 6-DoF pose labels. SegICP \\cite{wong2017segicp} used a labeled set of 7500 images in indoor scenes, produced with the help of a motion capture (MoCap) based automated annotation system, which inevitably limits the types of environments in which data is acquired. An interesting method of collecting data through the robot in a life-long self-supervised learning fashion was presented in \\cite{deng2019self}. Here the robot learns pose estimation and simultaneously generates new data for improving pose estimation by itself, although in a very limited environment.\n\nThe complications involved in the acquisition of real-world training data have inspired methods such as \\cite{tremblay2018corl:dope, tobin2017domain} which are trained only on synthetic or photo-realistic simulated data. Although demonstrating very promising results, they additionally require a very high-quality textured 3D model, which may itself be hard to obtain without dedicated special hardware.\n\nMarion et al. 
presented LabelFusion \\cite{marion2018label} -- a method for generating ground-truth labels for RGB-D data while minimizing the human effort involved. They perform a dense 3D reconstruction of the scene using ElasticFusion, and then ask the user to manually annotate 3 points in the 3D scene space to initiate an ICP-based registration to align a previously built 3D model to the scene point cloud. To sidestep the need for a prior object model, Suchi et al. proposed EasyLabel \\cite{suchi2019easylabel} wherein scenes are incrementally built and depth changes are exploited to obtain object masks. This, however, limits the scale at which data can be generated, and so their method is presented primarily for evaluation purposes -- as opposed to training of deep CNNs.\n\nOur method improves on the current state of the art with regard to the above-mentioned problems. It is capable of labeling datasets at the large scale needed for supervised training of deep networks and, as stated earlier, it does not depend on the availability of a prior 3D object model.\n\n\\begin{figure*}\n \\includegraphics[width=\\textwidth]{images\/main_approach3.png}\n \\caption{\\textbf{Proposed system for generating pose labels.} The input to the system is a set of RGB-D image sequences and a limited amount of manual labels. The output consists of (1) labels for each frame in the raw dataset, (2) a sparse, keypoint-based representation and (3) a dense model of the object. Existing techniques are used for dense reconstruction while a user-friendly GUI is implemented for providing the manual annotations. An optimization problem is solved on the manual annotations to recover a sparse model and finally, a dense model is built up from the sparse model using a region-growing segmentation algorithm.}\n \\label{figure:main_approach}\n\\end{figure*}\n\n\\section{APPROACH}\n\nWe consider the case of generating training data for a single object-of-interest. 
The human user collects a set of $N_s$ RGB-D videos (alternatively referred to as \\textit{scenes}) by moving a hand-held RGB-D sensor around the relevant object --- placed in varying orientations, positions, backgrounds, and lighting conditions. This set of unlabeled videos or scenes is denoted by $\\mathcal{U} = \\{\\mathcal{U}_1, \\mathcal{U}_2, \\dots, \\mathcal{U}_{N_s}\\}$. The duration and frame rate of each video $\\mathcal{U}_s$ can be variable (2--5 minutes and 30 fps in our experiments). Mathematically, a scene consisting of ${t_{s}}$ frames is defined as $\\mathcal{U}_s = \\{ (\\mathbf{I}_s^{(t)}, \\mathbf{D}_s^{(t)})_{t=1}^{t_{s}} \\}$, where $\\mathbf{I}_s^{(t)}$ and $\\mathbf{D}_s^{(t)}$ are the RGB and depth images respectively at time instance $t \\in \\{1 ,\\dots, {t_{s}}\\}$ in the scene $s \\in \\{1 ,\\dots, {N_s}\\}$. Our final objective is to generate the labeled dataset $\\mathcal{L} = \\{\\mathcal{L}_1, \\mathcal{L}_2, \\dots, \\mathcal{L}_{N_s}\\}$, i.e. associate each frame in each scene with a pose label.\n\n\\textbf{System Overview.} Figure \\ref{figure:main_approach} illustrates the overview of the process. Given $\\mathcal{U}$, our system begins by performing dense scene reconstructions using third-party software to obtain scene meshes and recover camera trajectories (Section \\ref{subsec:dense_recon}). Next, we obtain the manual labels with the help of our GUI tool (Section \\ref{subsec:manual_annot}). The key idea in this step is to ask the human annotator to label an ordered set of $N_k$ arbitrarily chosen landmark points on the object (called the \\textit{keypoints set}) in each scene. The user needs to sequentially mark only a subset of the keypoints on any randomly selected RGB image in each collected scene. This produces fragments of the sparse keypoint-based object representation, each defined in the respective scene's frame of reference. 
If there is enough overlap between the fragments, we can recover simultaneously (1) the full, 3D keypoint-based sparse model $\\mathbf{Q} \\in \\mathbb{R}^{3\\times N_k}$ and (2) the relative transformations between the scenes, explaining the annotations provided by the user. We achieve this by formulating and solving an optimization problem on the sparse, user-annotated keypoint configurations, constrained by rigidity in 3D space (Section \\ref{subsec:optim_step}). The resulting output of a successful optimization is (optionally, if mask labels are required) used to build a dense model of the object by segmenting out the points corresponding to the object of interest from all scene meshes and combining them (Section \\ref{subsec:dense_model}). Finally, the produced sparse and dense models can be projected back to all the 2D image planes in $\\mathcal{U}$ to obtain the desired type of labels (Section \\ref{subsec:label_generation}). \n\n\\subsection{Dense Scene Reconstruction} \\label{subsec:dense_recon}\nEach scene $\\mathcal{U}_s$ is a sequence of image pairs where the camera trajectory through time is unknown. It is valuable to obtain this trajectory as it can be used to automatically propagate labels through all instances in the scene. We rely on an existing technique which provides camera pose tracking along with a dense 3D reconstruction, to avoid dependence on fiducial markers or robotic manipulators. With this, the dataset $\\mathcal{U}$ is now altered to become $\\mathcal{U} = \\{\\{(\\mathbf{I}_s^{(t)}, \\mathbf{D}_s^{(t)}, \\mathbf{C}_s^{(t)})_{t=1}^{t_{s}}\\}_{s=1}^{N_s}\\}$, where $\\mathbf{C}_s^{(t)} \\in \\mathbb{R}^{4\\times4}$ is the homogeneous transformation matrix giving the camera pose $\\mathcal{F}_{s}^{(t)}$ at instance $t$ in the scene $s$ relative to the initial camera pose $\\mathcal{F}_{s}^{(1)}$ of the same scene. 
In our experiments, we use the open-source implementation of ElasticFusion \\cite{whelan2016elasticfusion} for this step (as in \\cite{marion2018label}).\nIt is worth mentioning that while recording the videos, an individual scene $\\mathcal{U}_s$ does not necessarily need to consist of a full scan of the object from all views, which may be difficult to obtain. Our method combines different object views from all scenes in $\\mathcal{U}$ to produce the final output.\n\n\\subsection{Manual Annotation} \\label{subsec:manual_annot}\nIn line with our aim of keeping the human effort and time involved to a minimum, our system sources minimal annotations from the human user. The user first chooses a set of arbitrary but well-distributed, ordered, and uniquely identifiable landmark points (called \\textit{keypoints}) on the physical object. Then, with the help of our user interface, the user sequentially labels the location of these keypoints in each scene. The labeling is done on the RGB image. As all faces of the object may not be visible in a scene, labeling is done only for the visible subset of the keypoints on the RGB image. The non-visible keypoints are skipped. Moreover, the annotations can be distributed over multiple images of the same scene.\n\nFor each user-annotated keypoint $k' \\in \\{1,\\dots,N_k\\}$ on the RGB image $\\mathbf{I}_{s'}^{(t')}$ in scene $s' \\in \\{1 ,\\dots, {N_s}\\}$ and at time instance $t' \\in \\{1,\\dots, {t_{s'}}\\}$, using camera intrinsics of the RGB sensor and the depth image $\\mathbf{D}_{s'}^{(t')}$, we obtain the 3D point position $\\prescript{}{k'}{\\mathbf{d}_{s'}^{(t')}} = [t_x, t_y, t_z]^T$ of the labeled pixel. Then, we transform this point $\\prescript{}{k'}{\\mathbf{d}_{s'}^{(t')}}$ to the coordinate frame $\\mathcal{F}_{s'}^{(1)}$ (i.e. 
in the camera frame at time $t=1$ in scene $s=s'$):\n\n\\begin{equation} \\label{eq:tf}\n\\begin{bmatrix}\n\\prescript{}{k'}{\\mathbf{d}_{s'}^{(1)}} \\\\[6pt]\n1\n\\end{bmatrix} = \\mathbf{C}_{s'}^{(t')} \\cdot\n\\begin{bmatrix}\n\\prescript{}{k'}{\\mathbf{d}_{s'}^{(t')}} \\\\[6pt]\n1\n\\end{bmatrix}.\n\\end{equation}\n\nFor each scene $s$, this gives us the matrix $\\mathbf{W}_{s} \\in \\mathbb{R}^{3 \\times N_k}$, where the columns hold the 3D position $\\prescript{}{k}{\\mathbf{d}_{s}^{(1)}}$ of the keypoint $k$ if it was manually annotated and $\\mathbf{0}^{3}$ otherwise. Doing so for all scenes, we obtain $\\mathcal{W} = \\{\\mathbf{W}_1, \\mathbf{W}_2, \\dots, \\mathbf{W}_{N_s}\\}$.\n \n\\textbf{Selection of Keypoints.} Although the selection of keypoints on the object is arbitrary, there are some constraints on the manual annotation which must be kept in mind by the user while choosing them. To ensure the existence of a unique solution to the optimization problem, the marked keypoints should rigidly connect all the scenes together, namely by ensuring that the constraints produced for the relative scene poses are not underdetermined. For example, for two scenes, mutual sharing of 3 or more non-collinear keypoints rigidly ``ties\" them. Hence, we recommend choosing points that are more likely to be shared in multiple scenes, such as on the edge shared by two faces of an object.\n\n\\subsection{Optimization} \\label{subsec:optim_step}\nThus far, we have localized subsets or fragments of the object's keypoint model $\\mathbf{Q}$ in each scene of $\\mathcal{U}$. The subsets do not exist in one common frame of reference. Instead, the fragment $\\mathbf{W}_{s}$ is defined in the coordinate frame $\\mathcal{F}_{s}^{(1)}$ and the relative transformations between scenes are yet unknown. In this step we find the relative transformations and combine the fragments in a common space to recover $\\mathbf{Q}$. 
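The lifting of a clicked pixel to a 3D point in the scene-origin frame (the step that fills the columns of $\mathbf{W}_s$) can be sketched as follows. This is an illustrative sketch, not the released tool; the pinhole intrinsics matrix `K` and the metric depth value are assumptions for the example.

```python
import numpy as np

def annotation_to_scene_origin(u, v, depth, K, C_t):
    """Lift an annotated pixel (u, v) with metric depth to a 3D point in the
    camera frame at time t, then map it to the scene-origin frame (the camera
    frame at t = 1) with the homogeneous camera pose C_t.

    K   : 3x3 pinhole intrinsics of the RGB-D sensor (assumed known).
    C_t : 4x4 pose of the camera at time t relative to t = 1.
    """
    fx, fy, cx, cy = K[0, 0], K[1, 1], K[0, 2], K[1, 2]
    d_cam = np.array([(u - cx) * depth / fx,   # back-project the pixel
                      (v - cy) * depth / fy,
                      depth,
                      1.0])                    # homogeneous coordinates
    return (C_t @ d_cam)[:3]
```

Repeating this for every annotated keypoint of a scene, with zero columns for the skipped keypoints, yields the matrix $\mathbf{W}_s$.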
Let us represent the set of relative transformations by $\\mathcal{T} = \\{{\\mathbf{T}_{1}}, {\\mathbf{T}_2}, \\dots, {\\mathbf{T}_{N_s}}\\}$ where $\\mathbf{T}_{s} = \\begin{bmatrix} \\mathbf{q}_s \\quad \\mathbf{t}_s \\end{bmatrix}$ is the rigid transformation of the local frame $\\mathcal{F}_{s}^{(1)}$ with respect to the world frame $\\mathcal{F}_{w}$. $\\mathbf{q}_s \\in \\mathbb{R}^4$ represents the rotation quaternion and $\\mathbf{t}_s \\in \\mathbb{R}^3$ is the 3D position of the camera center. We set $\\mathcal{F}_{w} = \\mathcal{F}_{1}^{(1)}$ (i.e. origin of scene $1$). We formulate the following nonlinear optimization problem:\n\n\\begin{align}\n\\mathbf{Q}\\text{*},\\mathcal{T}\\text{*} & = \\underset{\\mathbf{Q},\\mathcal{T}}{\\text{argmin}} \n\\| \\mathbf{S} \\cdot f(\\mathcal{T}, \\mathbf{Q}) - \\mathbf{W} \\| ^2, \\nonumber \\\\\n& \\textrm{s.t.} \\quad \\mathbf{q}_s^T\\cdot\\mathbf{q}_s=1, \\nonumber \\\\\n& \\forall s \\in \\{1 ,\\dots, N_s\\}, \\label{eq:opt_eq}\n\\end{align}\n\\noindent\nwhere $\\mathbf{W}$ is the concatenation of all non-zero points in $\\mathcal{W}$ and the function $f(\\cdot)$ successively applies each $\\mathbf{T}_{s} \\in \\mathcal{T}$ to $\\mathbf{Q}$ and returns the concatenated vector. The selection matrix $\\mathbf{S}$ selects from the vector $f(\\mathcal{T}, \\mathbf{Q})$ only those keypoints whose reference is available in $\\mathcal{W}$, that is, the keypoints which were manually annotated. The solution is given by $\\mathbf{Q}\\text{*}$, the optimized keypoint representation of the object model defined in the frame $\\mathcal{F}_{w}$, and the set $\\mathcal{T}\\text{*}$, the optimized relative scene transformations.\n\nIntuitively, solving Eq. \\ref{eq:opt_eq} is equivalent to bringing together the subsets in $\\mathcal{W}$, defined in their local frames, into one common world frame of reference (as illustrated in Figure \\ref{figure:optimization}). 
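A toy two-scene instance of this optimization can be sketched as follows, with scene 1 fixed as the world frame. The SLSQP solver with a unit-quaternion constraint and the identity-rotation initialization mirror the setup described in the text, but the synthetic keypoints and noise-free observations are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

# Toy data: 5 keypoints; scene 1 observes keypoints 0-3 in the world frame,
# scene 2 observes keypoints 1-4 in its own frame (3 shared, non-collinear,
# which rigidly "ties" the two scenes as discussed above).
rng = np.random.default_rng(0)
Q_true = rng.uniform(-0.5, 0.5, (5, 3))
R_true = Rotation.from_euler("xyz", [0.3, -0.2, 0.5])
t_true = np.array([0.4, -0.1, 0.2])
W1 = Q_true[:4]                               # scene 1 fragment (world frame)
W2 = R_true.inv().apply(Q_true[1:] - t_true)  # scene 2 fragment (local frame)

def unpack(z):
    # z = [quaternion (x, y, z, w) | translation | 5 keypoints]
    return Rotation.from_quat(z[:4]), z[4:7], z[7:].reshape(5, 3)

def objective(z):
    R, t, Q = unpack(z)
    r1 = Q[:4] - W1                           # residuals against scene 1
    r2 = R.inv().apply(Q[1:] - t) - W2        # residuals against scene 2
    return np.sum(r1 ** 2) + np.sum(r2 ** 2)

z0 = np.zeros(22)
z0[3] = 1.0  # real part of the quaternion set to 1 (identity rotation)
res = minimize(objective, z0, method="SLSQP",
               constraints={"type": "eq",
                            "fun": lambda z: z[:4] @ z[:4] - 1.0},
               options={"maxiter": 500, "ftol": 1e-12})
R_hat, t_hat, Q_hat = unpack(res.x)
```

With exact annotations the minimizer recovers both the sparse model and the relative scene transform up to numerical tolerance; in practice the overlapping manual annotations across scenes play the role of the shared keypoints here.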
This is possible because the user has provided mutually overlapping annotations, and the solution to Eq. \\ref{eq:opt_eq} provides a consistent explanation of all the observations.\n\nWe used the scipy.optimize library in Python with the Sequential Least Squares Programming solver \\cite{kraft1988software} to minimize Eq. \\ref{eq:opt_eq}. We initialized the decision variables $(\\mathcal{T}, \\mathbf{Q})$ by setting all elements to 0, except for the real parts of the rotation quaternions, which were set to 1 (identity rotations).\n\n\\begin{figure}[b]\n \\includegraphics[width=\\linewidth]{images\/optimization.png}\n \\caption{\\textbf{Problem description.} An example dataset with 3 scenes `1', `2' and `3' with trajectory lengths $t_1$, $t_2$ and $t_3$ respectively is shown. The left side shows 4 keypoints annotated on an image of Scene `2' at time $t=t'$, projected to get the 3D positions. This is done for each scene (with some amount of mutually shared keypoints). The 3D point fragments are then transformed to the origin of their respective scene trajectories, giving us $\\mathbf{W}_1$, $\\mathbf{W}_2$ and $\\mathbf{W}_3$, on the right. The intention of the optimization, essentially, is to find $\\mathbf{T}_1$, $\\mathbf{T}_2$, $\\mathbf{T}_3$ such that all fragments can be represented in world frame $\\mathcal{F}_w$. Keypoint overlapping ensures the existence of a unique solution (e.g. keypoints 2, 3, 4 are annotated in both scenes `1' and `2').}\n \\label{figure:optimization}\n\\end{figure}\n\n\\subsection{Segmentation of Dense Object Model} \\label{subsec:dense_model}\nIn this step, we use the sparse object model and the scene transformations produced above to segment out the 3D points corresponding to the object from each of the dense scenes, thereby producing a dense model of the object. A dense object model is necessary to obtain pixel-wise mask labels and is also useful for planning robotic grasp poses. 
We modify Point Cloud Library's (PCL) region-growing-segmentation algorithm such that the individual points of the sparse model act as seeds and the regions are grown and spread outwards from the seeds. All segmented regions from each scene are then combined using the relative transformations to create the dense model. The output may occasionally have holes in the geometry or contain points from the background scenes which require manual cropping, yet, as our experiments indicate, it remains a practical approximation for the object shape. \n\n\n\\subsection{Generation of Pose Labels} \\label{subsec:label_generation}\nHaving obtained the object sparse model, the dense model (both defined in $\\mathcal{F}_{w}$) and the set $\\mathcal{T}$ of scene transformations w.r.t. $\\mathcal{F}_{w}$, the labeled dataset $\\mathcal{L} = \\{\\mathcal{L}_1, \\mathcal{L}_2,\\dots, \\mathcal{L}_{N_s}\\}$ can be generated through back projection to the RGB image planes. For the purpose of training the 6-DoF pose estimation pipeline adopted by us \\cite{pavlakos17object3d}, we define $\\mathcal{L} = \\{\\{(\\mathbf{I}_{s}^{(t)}, \\textbf{L}_{s}^{(t)}, \\textbf{b}_{s}^{(t)})_{t=1}^{t_{s}}\\}_{s=1}^{N_s}\\}$ where $\\textbf{L}_{s}^{(t)} \\in \\mathbb{R}^{2\\times{N_k}}$ is the 2D keypoint annotation in the image $\\mathbf{I}_{s}^{(t)}$ for the $N_k$ keypoints chosen on the object and $\\textbf{b}_{s}^{(t)} \\in \\mathbb{R}^{3}$ represents the $x,y$ pixel coordinates of the center and side length $h$ of an upright bounding-box square around the object in the image.\n\nComputation of the label for the $k^{th}$ keypoint at time instance $t$ of the scene $s$, $\\prescript{}{k}{\\textbf{l}}_{s}^{(t)}$, can be done simply by transforming the point $\\mathbf{Q}_{*,k}$ to the camera frame $\\mathcal{F}_{s}^{(t)}$ to obtain the 3D point $\\prescript{}{k}{\\mathbf{d}_{s}^{(t)}} = [t_x, t_y, t_z]^T$, following Eq. \\ref{eq:tf_inverse}. 
Subsequently, this point is projected onto the 2D image plane to get $\\prescript{}{k}{\\textbf{l}}_{s}^{(t)} = [u_x, u_y]^T$. \\par\n\n\\begin{align}\n\\begin{bmatrix}\n\\prescript{}{k}{\\mathbf{d}_{s}^{(t)}} \\\\[6pt] \n1\n\\end{bmatrix}\n & = (\\mathbf{C}_{s}^{(t)})^{-1} \\cdot\n \\begin{bmatrix}\n{\\mathbf{T}^{-1}_{s}} \\circ {\\mathbf{Q}_{*,k}} \\\\[6pt]\n1\n\\end{bmatrix}. \\label{eq:tf_inverse}\n\\end{align}\n\nThe per-pixel mask label can also be obtained similarly once the dense model has been generated, by transformation and back projection of each point of the 3D dense model onto the 2D RGB images. \n\nThe bounding-box label $\\textbf{b}_{s}^{(t)}$ can be obtained by finding the upright bounding rectangle of the mask points or taking the bounding-box of all points in $\\textbf{L}_{s}^{(t)}$ and scaling up by a factor of $1.5$ to avoid cropping out parts of the object.\n\n\\subsection{Using the Sparse Model for New Scenes}\nApplication of deep-learning approaches for practical purposes often requires fine-tuning of the trained model with labeled data collected in the real environment of operation \\cite{zeng2017multi:zengapc}. So, the user must be capable of quickly labeling a new RGB-D scene that is outside of the initial dataset $\\mathcal{U}$. This is achieved as follows:\n\nOnce a model $\\mathbf{Q}$ of an object has been generated from $\\mathcal{U}$, for each new RGB-D video, the user must uniquely mark any subset of $\\mathbf{Q}$ (at least 3 points) on a randomly selected RGB image. Then, we obtain the 3D positions of the points using depth data, and apply Procrustes Analysis (using Horn's solution \\cite{horn1987closed}) to compute the rigid transformation between the set of marked points and their correspondences in $\\mathbf{Q}$. We can generate the labels again through back projection of the dense model according to the computed transformation. 
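This rigid alignment can be written in closed form. The sketch below uses the SVD-based Kabsch construction, which attains the same least-squares optimum as Horn's quaternion solution cited above; the point arrays are illustrative:

```python
import numpy as np

def rigid_align(P, Q):
    """Closed-form least-squares rigid transform with R @ P[i] + t ~= Q[i],
    for >= 3 non-degenerate point correspondences (rows of P and Q)."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                 # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cQ - R @ cP
```

With at least 3 non-collinear marked points, the recovered `R` and `t` map the new scene's points onto their correspondences in $\mathbf{Q}$, after which the dense model can be back-projected as before.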
As this process involves only a few seconds of manual annotation, scalability to a large number of scenes is convenient.\n\n\\begin{figure}[t]\n \\includegraphics[width=\\linewidth]{images\/objects.png}\n \\caption{\\textbf{Selected objects for experiments.} (a) In-house objects on the left and (b) objects selected from YCB-Video dataset on the right.}\n \\label{figure:objects}\n\\end{figure}\n\n\\section{EXPERIMENTS}\nWe employ our proposed system to generate the sparse models, dense models and labeled datasets for two separate sets of objects for quantitative analysis (refer to Figure \\ref{figure:objects}). The first set consists of 4 objects from the YCB-Video dataset \\cite{xiang2018posecnn} which already provides us with multi-object RGB-D video scenes and the object CAD models. The second is a set of 7 objects for which we manually collected the RGB-D scenes with an Intel RealSense D435 at $640\\times480$ resolution. The CAD models of the 7 in-house objects were either sourced from the manufacturer or drawn manually prior to the experiments. We refer to these object models as GT-CAD. Nevertheless, the CAD models for all objects and experiments were used only for evaluation purposes, as a key advantage of our system is its independence from prior object models.\n\nFrom the YCB-Video dataset, we selected 25 (20 single-object + 5 multi-object) scenes, with each of the 4 chosen objects appearing in 9 unique scenes. For the in-house objects, we recorded 55 (43 single-object + 12 multi-object) RGB-D scenes using the Intel RealSense D435 sensor. Each object's dataset $\\mathcal{U}$ was carved out of these 55 + 25 scenes.\n\n\\subsection{3D Sparse Model and Label Generation} \nTo evaluate the accuracy of the predicted sparse model, we align it to the corresponding GT-CAD model using ICP (with manual initialization) and find the closest points on the CAD model from each of the points in the sparse model. 
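This closest-point lookup can be sketched with a k-d tree; the point arrays are illustrative assumptions:

```python
import numpy as np
from scipy.spatial import cKDTree

def closest_points(sparse_model, cad_points):
    """For each sparse-model point, return the nearest point among the
    (sampled) CAD-model points, together with the mean Euclidean distance
    that is reported as the sparse model's accuracy."""
    dist, idx = cKDTree(cad_points).query(sparse_model)
    return cad_points[idx], dist.mean()
```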
The set of closest points acts as the ground-truth for the sparse model (denoted by GT-SPARSE). We report the mean Euclidean distance between the corresponding points as an estimate of the sparse-model's accuracy. Next, as the primary motivation of our system is to enable automated generation of training data, we also evaluate the accuracy of the generated labels (keypoint + pixel-wise labels). To get the ground-truth for the keypoint and mask labels, we align (manual initialization followed by ICP fine-tuning) the GT-CAD to the scene reconstructions, and project back GT-SPARSE and GT-CAD respectively to the RGB images using the camera trajectories (estimated by ElasticFusion or provided in the YCB-Video dataset). Note that this approach is similar to the method of label generation in \\cite{pavlakos17object3d, marion2018label}. \n\nErrors in keypoint labels are computed as the mean 2D distance between the predicted pixel coordinates and the ground-truth. For the pixel-wise mask labels, we use the popular Intersection-over-Union (IoU) metric. Table \\ref{tab:table_model_errors} lists the results of this quantitative analysis for each object along with the number of scenes and number of keypoints chosen for this set of experiments.\n\n\\begin{figure}[b]\n \\includegraphics[width=\\linewidth]{images\/pose_estimation.png}\n \\caption{\\textbf{Block diagram of adopted pose estimation pipeline.} The pose is computed by solving a P\\textit{n}P problem on predicted 2D keypoints and their 3D correspondences.}\n \\label{figure:pose_estimation}\n\\end{figure}\n\nThe errors in the mask labels also give an indication of the accuracy of the dense model. As explained earlier, the dense model requires manual cropping ($\\sim$2--3 min), but our system reduces the overall time in creating labeled datasets by several orders of magnitude as compared to the completely manual approach. 
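The IoU metric used above for the mask labels is, in a minimal sketch:

```python
import numpy as np

def mask_iou(pred, gt):
    """Intersection-over-Union between two boolean segmentation masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0  # both masks empty: define IoU as perfect agreement
    return np.logical_and(pred, gt).sum() / union
```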
We note that any ground-truth (produced either by this approach or through hand-labeling of each instance) will itself contain impurities on a large dataset; hence, the reported quantities capture the accuracy of our method only to a certain extent. However, the low errors (6.56 pixels and 81.2\\% IoU on average) approximately quantify the practicality of our method. Our mean IoU metric remains on par with the 80\\% IoU reported in LabelFusion while completely eliminating the need for a prior 3D model.\n\n\\begin{figure*}\n \\includegraphics[width=\\textwidth]{images\/output_example.png}\n \\caption{\\textbf{Keypoint and bounding-box labels.} Raw real RGB-D data was captured conveniently using a handheld RealSense sensor in 55 scenes for 7 in-house objects with no other hardware requirements. We generated keypoint and bounding-box labels using our approach for all chosen objects (6 are shown here) in different scenes from minimal manual annotation, without using a prior 3D model. Our approach is easily scalable to a large number of scenes.}\n \\label{figure:label_examples}\n\\end{figure*}\n\n\\begin{figure}[t]\n \\includegraphics[width=\\linewidth]{images\/eval_curves1.png}\n \\caption{Evaluation of the generated sparse keypoint-based model. We measure the average errors in estimated keypoint position with respect to the total no. of keypoints in (a) and total no. of scenes in (b).}\n \\label{figure:error_curves}\n\\end{figure}\n\nNext, we perform a set of experiments on the following 3 objects: Dewalt Cut-out Tool (OBJ1), Facom Wrench (OBJ2), Hitachi Screw Gun (OBJ3) -- to measure the effect of the number of scenes in $\\mathcal{U}$ ($N_s$) and the number of chosen keypoints ($N_k$) on the optimization process. 
The errors in keypoint 3D positions for the 3 objects as a function of the total number of defined keypoints $N_k$ are shown in Figure \\ref{figure:error_curves}(a) when $N_s=8$, while Figure \\ref{figure:error_curves}(b) shows the mean error as a function of the number of scenes $N_s$ when $N_k=9$. As the curves show, the mean positional error remained fairly independent of $N_k$ while the accuracy seemed to improve with $N_s$. This is expected as manual input for the same keypoint in multiple scenes would help remove bias. For $N_k$, a lower number of keypoints makes it harder to cover all faces of the object and a higher number increases the chances of human error in the manual annotation step. In our experience, choosing 6--12 adequately spaced keypoints proved to be sufficient for all objects, while $N_s$ is entirely up to the use-case scenario.\n\n\\subsection{Application to 6-DoF Object Pose Estimation}\nAs we expect the proposed approach to be primarily used for deployment of DL-based 6-DoF pose estimation, we evaluate the performance of such a pipeline trained on the labeled dataset generated through our method for OBJ1, OBJ2 and OBJ3. We adopt a keypoint-based pose estimation approach \\cite{pavlakos17object3d, rpsingh2020instance}, where a stacked-hourglass network is used for keypoint predictions on RGB images cropped by an object bounding-box detector network (such as SSD \\cite{liu2016ssd}, YOLO \\cite{redmon2016you:yolo}). Predictions from the hourglass module along with the 3D model correspondences (from $\\mathbf{Q}$) are fed through a P\\textit{n}P module to obtain the final pose. An overview is depicted in Figure \\ref{figure:pose_estimation}.\n\n\\begin{figure}[t]\n \\centering\n \\includegraphics[width=\\columnwidth]{images\/mask_labels.png}\n \\caption{\\textbf{Pixel-wise mask labels.} Examples of single and multi-object scenes from the experiments. 
The generated mask label is shown in the middle while the intersection-over-union computation is on the right.}\n \\label{fig:my_label}\n\\end{figure}\n\n\\begin{figure*}[t]\n \\includegraphics[width=\\textwidth]{images\/sparse_and_dense.png}\n \\caption{Generated dense models of the experiment objects on the left and the sparse representations overlayed on the CAD models on the right. The dense models are produced by \\textit{growing regions} on the scene meshes, seeded by the points of the sparse-representation.}\n \\label{figure:sparse_models}\n\\end{figure*}\n\n\\begin{table*}[t]\n\\caption{Accuracy analysis of generated sparse model and labels.}\n\\label{tab:table_model_errors}\n\\begin{center}\n\\begin{tabular}{c | c | c | c | c | c | c | c}\n\\hline\n& Object & \\# KPs & \\# scenes & Mean KP Error (3D) & Mean KP Error (2D) & Mean IoU & \\# labels\\\\\n& & $(N_{k})$ & $(N_{s}=$len$(\\mathcal{U}))$ & (in \\textit{mm}) & (in \\textit{pixels}) & & (sampled@3Hz)\\\\\n\\hline\n01 & \\textit{Dewalt Cut-out Tool} & 8 & 11 & 1.00 & 4.23 & \\textbf{81.09} & 3043\\\\\n\\hline\n02 & \\textit{Facom Electronic Wrench} & 6 & 10 & 2.69 & 6.66 & 71.54 & 2188\\\\\n\\hline\n03 & \\textit{Hitachi Screw Gun} & 9 & 9 & 5.29 & 6.08 & 75.73 & 2172\\\\\n\\hline\n04 & \\textit{Cup Noodle} & 10 & 6 & 0.5 & 3.97 & \\textbf{88.95} & 860\\\\\n\\hline\n05 & \\textit{Lipton} & 11 & 6 & 2.1 & 8.02 & \\textbf{83.19} & 1029\\\\\n\\hline\n06 & \\textit{Mt. 
Rainier Coffee} & 11 & 8 & 1.6 & 7.40 & \\textbf{82.49} & 1036\\\\\n\\hline\n07 & \\textit{Oolong Tea} & 11 & 5 & 1.1 & 3.65 & \\textbf{83.6} & 850\\\\\n\\hline\n08 & \\textit{Mustard Bottle} & 7 & 5 & 1.17 & 7.15 & \\textbf{83.29} & 947\\\\\n\\hline\n09 & \\textit{Potted Meat Can} & 10 & 8 & 1.02 & 8.83 & \\textbf{83.49} & 1391\\\\\n\\hline\n10 & \\textit{Bleach Cleanser} & 8 & 9 & 3.37 & 9.73 & \\textbf{81.5} & 1525\\\\\n\\hline\n11 & \\textit{Power Drill} & 12 & 9 & 1.52 & 6.43 & 78.25 & 1346\\\\\n\\hline\n & \\textbf{Mean} & - & - & \\textbf{1.94} & \\textbf{6.56} & \\textbf{81.2} & -\\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{table*}\n\nWe trained the YOLO and the stacked-hourglass networks for each of the 3 objects from scratch on images sampled from the RGB-D videos of all scenes at 3 Hz with a 90-10 split for training and testing respectively. The YOLO network was trained on the default hyperparameters while the stacked-hourglass network was trained on mostly the same hyperparameters as \\cite{pavlakos17object3d} (though we implemented the keypoint detector in PyTorch instead of the original Lua). After the keypoint detection stage, we computed the 6-DoF object pose using OpenCV's E-P\\textit{n}P method, using the generated sparse model $\\mathbf{Q}$ in the test pipeline and the hand-modelled 3D CAD for the benchmark pipeline. For benchmarking, we trained the networks again from scratch on the same set of raw images but using the ground-truth labels and compared the errors in estimated 6-DoF poses. Rotation error was computed using the following geodesic distance formula: \\par\n\n\\begin{equation}\n\\begin{gathered}\n\\Delta(R_1, R_2)\n= \\frac{{\\lVert \\log(R_1^T R_2) \\rVert}_{F}}{\\sqrt{2}}.\n\\end{gathered}\n\\end{equation}\n\nThe median errors in rotation and translation for the 3 objects for the pipeline, trained on the generated dataset (OURS) and ground-truth dataset (MANUAL), are consolidated in Table \\ref{tab:table_pose_errors}. 
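The geodesic distance above can be sketched directly from its definition; it equals the angle (in radians) of the relative rotation between the two matrices:

```python
import numpy as np
from scipy.linalg import logm

def rotation_error(R1, R2):
    """Geodesic distance ||log(R1^T R2)||_F / sqrt(2) between two rotation
    matrices; this equals the relative rotation angle in radians."""
    return np.linalg.norm(np.real(logm(R1.T @ R2)), "fro") / np.sqrt(2)
```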
The results indicate that our proposed approach, while reducing the human time required for the entire procedure by several orders of magnitude, can be used to generate labeled datasets for training pose estimation pipelines to remarkable accuracy (comparable to pipelines trained on a manually generated dataset).\n\n\n\\subsection{Application to Object Pixel-wise Segmentation}\nWe investigate the suitability of the generated mask labels for the training of existing object segmentation methods. We selected an open-source implementation of Mask R-CNN \\cite{matterport_maskrcnn_2017} -- a popular approach for pixel-wise segmentation of objects in RGB images.\n\nFrom the labeled dataset generated in our experiments, we selected a subset of images (sampled at 3 Hz) and again created a 90\/10 train-test split. Although our tool is capable of generating labels for multi-object scenes, here we limit our analysis to single-object scenes for the sake of simplicity. We trained the model for 11 classes of objects (refer to Table \\ref{tab:table_model_errors} for names), with default initialization of the weights. The ResNet101 backbone was chosen for the architecture and training images were square cropped to $512\\times512$ size. The training was performed on ``head\" layers for the first 20 epochs followed by ``all\" layers for the next 20. Other hyperparameters were set to default.\n\n\\begin{figure}[t]\n \\includegraphics[width=\\columnwidth]{images\/chart.png}\n \\caption{\\textbf{Performance of Mask R-CNN trained on our dataset.} Mean IoU scores for each of the 11 objects in the dataset.}\n \\label{figure:segmentation}\n\\end{figure}\n\n\\addtolength{\\textheight}{-2cm} \n \n \n \n \n \n \nThe purpose here was to measure the performance of the segmentation network trained on the automatically generated dataset (OURS) and compare it to the segmentation network trained on the ground-truth dataset (MANUAL). 
In both cases, the trained network was evaluated against the test subset of the ground-truth dataset. The Intersection-over-Union metric was measured and reported in Figure \\ref{figure:segmentation}. As the object-wise IoU metric remains comparable in both cases, we argue that training on our dataset (which takes far less manual effort to generate) provides similar performance for real applications.\n\\begin{table}[t]\n\\caption{Errors (Median) in Object Pose Estimation}\n\\label{tab:table_pose_errors}\n\\begin{center}\n\\begin{tabular}{c | c | c | c | c}\n\\hline\n& \\multicolumn{2}{c|}{Position} & \\multicolumn{2}{c}{Orientation}\\\\\n& \\multicolumn{2}{c|}{(in \\textit{mm})} & \\multicolumn{2}{c}{(in \\textit{degrees})}\\\\\n\\hline\n& OURS & MANUAL & OURS & MANUAL\\\\\n\\hline\n\\textit{OBJ1} & 9.93 & \\textbf{9.23} & 0.78 & \\textbf{0.69}\\\\\n\\hline\n\\textit{OBJ2} & \\textbf{3.66} & 4.75 & 1.35 & \\textbf{1.23}\\\\\n\\hline\n\\textit{OBJ3} & \\textbf{6.77} & 7.52 & \\textbf{4.46} & 4.80\\\\\n\\hline\n\\end{tabular}\n\\end{center}\n\\end{table}\n\n\\section{CONCLUSION \\& FUTURE WORK}\n\nThrough this paper, we have presented a technique for rapidly generating large datasets of labeled RGB images that can be used for training of deep CNNs for various applications. Our pipeline is highly automated --- we gather input from the human user for a few keypoints in only one RGB image per scene. We do not require any complicated hardware setups like robot manipulators and turntables or sophisticated calibration procedures. In fact, the only hardware requirements are a calibrated RGB-D sensor and the object itself, which makes it easier for the user to acquire the dataset in different environments.\n\nWe validated the effectiveness of the proposed method by using it to rapidly produce more than 150,000 labeled RGB images for 11 objects and subsequently, using the dataset to train a pose estimation pipeline and a segmentation network. 
We evaluated the accuracy of the trained networks to establish the practicality and applicability of the labeled datasets as a solution to 6-DoF pose estimation for real-world robotic grasping and manipulation tasks.\n\nFuture work in this direction will focus on using the sparse model of an object as a canonical representation of the object's class, to support class-specific pose estimation. Improving the quality of the segmented dense models by means of a better algorithm would also improve the quality of the labels.\n\n\n\n\n\\bibliographystyle{IEEEtran}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
{"text":"\\section{Introduction and Motivation}\n\\label{sec: intro}\n\nConsider a unit --- for example, a human subject in a medical or a social science study, an experimental animal in a biological experiment, a machine in an engineering or reliability setting, or a company in an economic or business situation --- in a longitudinal study monitored over a period $[0,\tau]$, where $\tau$ is possibly random. Associated with the unit is a covariate row vector $\mathbf{X} = (X_1, X_2, \ldots, X_p)$. Over time, the unit will experience occurrences of $Q$ competing types of recurrent events, its recurrent competing risks (RCR) component; transitions of a longitudinal marker (LM) process $W(t)$ over a discrete state space $\mathfrak{W}$; and transitions of a `health' status (HS) process $V(t)$ over a discrete state space $\mathfrak{V} = \mathfrak{V}_0 \bigcup \mathfrak{V}_1$, with $\mathfrak{V}_0 \ne \emptyset$ being absorbing states. If the health status process transitions into an absorbing state prior to $\tau$, then monitoring of the unit ceases, so the time-to-absorption serves as the lifetime of the unit. To demonstrate pictorially, the two panels in Figure \ref{fig: observables} depict the time-evolution for two distinct units, where $Q = 3$, $\mathfrak{W} = \{w_1, w_2, w_3\}$, $\mathfrak{V} =\{v_0, v_1, v_2\}$ with $v_0$ an absorbing state.
In panel 1, the unit did not transition to an absorbing state prior to reaching $\tau$; whereas in panel 2, the unit reached an absorbing state prior to $\tau$. Two major questions arise: (a) how do we specify a dynamic stochastic model that could be a generative model for such data, and (b) how do we make statistical inferences for the model parameters and predictions of a unit's lifetime from a sample of such data?\n\n\\begin{figure}\n\\begin{center}\n\\begin{tabular}{cc}\n\\includegraphics*[width=2.5in]{fi\/censor} &\n\\includegraphics*[width=2.5in]{fi\/absorb}\n\\end{tabular}\n\\end{center}\n\\caption{Realized data observables from two distinct study units. The first plot panel is for a unit which was right-censored prior to reaching an absorbing state, while the second plot panel is for a unit which reached an absorbing state prior to getting right-censored.}\n\\label{fig: observables}\n\\end{figure}\n\nTo address these questions, the major goals of this paper are (i) to propose a joint stochastic model for the random observables consisting of the RCR, LM, and HS components for such units, and (ii) to develop appropriate statistical inference methods for the proposed joint model when a sample of units is observed. Achieving these two goals will enable statistical prediction of the (remaining) lifetime of a, possibly new, unit; allow for the examination of the synergistic association among the RCR, LM, and HS components; and provide a vehicle to compare different groups of units and\/or study the effects of concomitant variables or factors. More importantly, a joint stochastic model endowed with proper statistical inference methods could potentially enable unit-based interventions performed after a recurrent event occurrence or a transition in either the LM or HS processes.
As such, it could enhance the implementation of precision or personalized decision-making; for instance, precision medicine in a medical setting.\n\nA specific situation where such data accrual occurs is in a medical study. For example, a subject may experience different types of recurring cancer, with the longitudinal marker being the prostate-specific antigen (PSA) level categorized into a finite ordinal set, while the health status is categorized into either a healthy, diseased, or dead state, with the last state absorbing. A variety of situations in biomedical, engineering, public health, sociology, and economics settings, where such a data structure arises, are further described in Section \ref{sec: scenarios}. Several previous works dealing with modeling have focused either on the marginal modeling of each of the three data components or on the joint modeling of two of the three data components. In this paper we tackle the problem of {\em simultaneously} modeling all three data components: RCR, LM, and HS, in order to account for the associations among these components, which would not be possible using either the marginal modeling approach or the joint modeling of pairwise combinations of these three components. A joint full model could also subsume these previous marginal or joint models -- in fact, our proposed class of models subsumes as special cases models that have been considered in the literature. In contrast, only by imposing restrictive assumptions, such as the independence of the three model components, could one obtain a joint full model from marginal or pairwise joint models. As such, a joint full model is less likely to be mis-specified, thereby reducing the potential biases in estimators of model parameters, or in predictions of residual lifetime, that could accrue from mis-specified models.\n\nA joint modeling approach has been extensively employed in previous works.
For instance, joint models for an LM process and a survival or time-to-event (TE) process have been proposed in \cite{tsiatis1995modeling}, \cite{wulfsohn1997joint}, \cite{song2002semiparametric}, \cite{henderson2000joint}, \cite{tsiatis2004joint}, and \cite{mclain2015}. The joint modeling of an LM process and a recurrent event process has been discussed in \cite{han2007} and \cite{efen2013}, while the joint modeling of a recurrent event and a lifetime has been considered in \cite{liu2004}. An important and critical theoretical aspect that cannot be ignored in these settings is that, when an event occurrence is terminal (e.g., death) or when there is a finite monitoring period, informative censoring naturally occurs in the RCR, LM, and HS components: once a terminal event occurs or the end of monitoring is reached, the next recurrent event occurrence, the next LM transition, or the next HS transition will not be observed. \n\nAnother aspect that needs to be taken into account in a dynamic modeling approach is that of interventions, usually performed upon the occurrence of a recurrent event. For instance, in engineering or reliability systems, when a component in the system fails, this component will either be minimally or partially repaired, or altogether replaced with a new component (called a perfect repair); while, with human subjects, when a cancer relapses, a hospital infection transpires, gout flares up, or alcoholism recurs, some form of intervention will be performed or instituted. Such interventions will impact the time to the next occurrence of the event, hence it is critical that such intervention effects be reflected in the model; see, for instance, \cite{gonzalez2005modelling} and \cite{han2007}. In addition, models should take into consideration the impacts of the covariates and the effects of accumulating event occurrences on the unit.
Models that take into account these considerations have been studied in \cite{pena2006dynamic} and \cite{pena2007semiparametric}. Appropriate statistical inference procedures for these dynamic models of recurrent events and competing risks have been developed in \cite{pena2006dynamic} and \cite{taylor2014nonparametric}. Extensions of these joint dynamic models for both RCR and TE can be found in \cite{liu2015dynamic}. Other recent works in joint modeling include the simultaneous modeling of the three processes LM, RCR (mostly, a single recurrent event), and TE in \cite{kim2012joint}, \cite{cai2017joint}, \cite{krol2017tutorial}, \cite{mauguen2013dynamic} and \cite{blanche2015quantifying}. The joint model proposed in this paper will take these important aspects into consideration.\n\nWe now outline the remainder of this paper. Prior to describing formally the joint model in Section \ref{sec: mathform}, we first present in Section \ref{sec: scenarios} some concrete situations in science, medicine, engineering, social, and economic disciplines where the data accrual described above could arise and where the joint model will be relevant. Section \ref{sec: mathform} formally describes the joint model using counting processes and continuous-time Markov chains (CTMCs), and provides interpretations of model parameters. In subsection \ref{subsec: special case} we discuss in some detail a special case of this joint model obtained using independent Poisson processes and homogeneous CTMCs. Section \ref{sec: estimation} deals with the estimation of the parameters. In subsection \ref{subsec: estimation - parametric} we demonstrate the estimation of the model parameters in the aforementioned special case to pinpoint some intricacies of joint modeling and its inferential aspects.
The general joint model contains nonparametric (infinite-dimensional) parameters, so in subsection \ref{subsec: estimation - semiparametric} we will describe a semi-parametric estimation procedure for this general model. Section \ref{sec: Properties} will present asymptotic properties of the estimators, though we will not present in this paper the rigorous proofs of these analytical results but defer them to a more theoretically-focused paper. Section \ref{sec-Illustration} will then demonstrate the semi-parametric estimation approach through the use of a synthetic (simulated) data set and an {\tt R} \cite{RCitation} program we developed. In Section \ref{sec: simulation}, we perform simulation studies to investigate the finite-sample properties of the estimators arising from the semi-parametric estimation procedure and compare these finite-sample results to the theoretical asymptotic results. An illustration of the semi-parametric inference procedure using a real medical data set is presented in Section \ref{sec: realdata}. Section \ref{sec: conclusion} contains concluding remarks describing some open research problems. \n\n\\section{Concrete Situations of Relevance}\n\\label{sec: scenarios}\n \nTo demonstrate potential applicability of the proposed joint model, we describe in this section some concrete situations arising in biomedical, reliability, engineering, and socio-economic settings where the data accrual described in Section \ref{sec: intro} could arise.\n\n\\begin{itemize}\n\\item {\\bf A Medical Example:} Gout is a form of arthritis characterized by sudden and severe attacks of pain, swelling, redness and tenderness in one or more joints in the toes, ankles, knees, elbows, wrists, and fingers (see, for instance, Mayo Clinic publications about gout). When a gout flare occurs, it renders the person incapacitated (personally attested by the senior author) and the debilitating condition may last for several days.
Since the location of the gout flare could vary, we may consider gout as competing recurrent events --- competing with respect to the location of the flare, and recurrent since it could keep coming back. Gout occurs when urate crystals accumulate in the joints, which in turn is associated with high levels of uric acid in the blood. The level of uric acid is measured by the {Serum Urate Level (SUR)}, which can be categorized as {Hyperuricemia} (if SUR $>$ 7.2 mg\/dL for males; if SUR $>$ 6.0 mg\/dL for females), or {Normal} (if 3.5 mg\/dL $\le$ SUR $\le$ 7.2 mg\/dL for males; if 2.6 mg\/dL $\le$ SUR $\le$ 6.0 mg\/dL for females). The SUR level could be considered a longitudinal marker. Kidneys are associated with the excretion of uric acid in the body. Thus, chronic kidney disease (CKD) impacts the level of uric acid in the body, hence the occurrence of gout. The ordinal stages of CKD, based on the value of the {Glomerular Filtration Rate (GFR)}, are as follows: {Stage 1 (Normal)} if GFR $\ge$ 90 mL\/min; {Stage 2 (Mild CKD)} if 60 mL\/min $\le$ GFR $\le$ 89 mL\/min; {Stage 3A (Moderate CKD)} if 45 mL\/min $\le$ GFR $\le$ 59 mL\/min; {Stage 3B (Moderate CKD)} if 30 mL\/min $\le$ GFR $\le$ 44 mL\/min; {Stage 4 (Severe CKD)} if 15 mL\/min $\le$ GFR $\le$ 29 mL\/min; and {Stage 5 (End Stage CKD)} if GFR $\le$ 14 mL\/min. Stage 5 (End Stage CKD) can be viewed as an absorbing state. The CKD status could be viewed as the ``health status'' of the person. Other covariates, such as gender, blood pressure, weight, etc., could also impact the occurrence of gout flares, uric acid level, and CKD. When a gout flare occurs, lifestyle interventions could be performed, such as (i) consuming skim milk powder enriched with the two dairy products glycomacropeptide (GMP) and G600 milk fat extract; or (ii) consuming standard milk or lactose powder. The purpose of such interventions is to lessen gout flare recurrences.
Of major interest is to jointly model the competing gout recurrences, the categorized SUR process, and the CKD process. A study consisting of $n$ subjects could be performed with each subject monitored over some period, with the times of gout flare recurrences of each type, SUR levels, and CKD states recorded over time, aside from relevant covariates. Based on such data, it will be of interest to estimate the model parameters and to develop a prediction procedure for time-to-absorption to End Stage CKD for a person with gout recurrences.\n\n\n\\item {\\bf A Reliability Example}: Observe $n$ independent cars, each over its own monitoring period, until the car is declared inoperable or the monitoring period ends. Cars are complex systems in that they are composed of different components, which could be subsystems or modules, configured according to some coherent structure function \cite{barlow1975}. For each car, the states of $Q$ components (such as its engine subsystem; transmission subsystem; brake subsystem; electrical subsystem; etc.) are monitored. Furthermore, its covariates such as weight; initial mileage; current mileage; years of operation; and other characteristics (for example, the climate in which the car is mostly driven) are observed. Also, its `health status', which is either functioning, functioning with some problems, or totally inoperable (an absorbing state), is tracked over the monitoring period. Meanwhile, a longitudinal marker such as its oil quality indicator (which is either excellent; good; or poor) and the occurrences of failures of any of the $Q$ components are also recorded over the monitoring period. When a component failure occurs, a repair or replacement of the component is undertaken. Given the data for these $n$ cars, an important goal is to predict the time to inoperability of another car.
Note that this type of application could occur in more complex systems such as space satellites, nuclear power plants, medical equipment, etc.\n\n\\item {\\bf A Social Science Example}: Observe $n$ independent newly-married couples over a period of years (say, 20 years). Over this follow-up period, the marriage could end in separation or divorce, remain intact, or end due to the death of at least one of them. Each couple will have certain characteristics: their ages when they got married; working status of each; income level of each; education level of each; number of children (this is a time-dependent covariate); net worth of the couple; etc. Possibly through a regularly administered questionnaire, the couple provides information from which their ``marriage status'' could be ascertained (either very satisfied; satisfied; poor; separated or divorced). Competing recurrent events for the couple could be changes in job status for either spouse; an addition to the family; educational changes of the children; and major disagreements. A longitudinal marker could be the financial health status of the couple reflected by their categorized FICO scores. A goal is to make inferences about the parameters of the joint model based on the observations from these $n$ couples, to predict whether separation or divorce will occur for a married couple not in the original sample, and, if so, to obtain a prediction interval for the time of such an occurrence. \n\n\\item {\\bf A Financial Example}: Track $n$ independent publicly-listed companies over their monitoring periods. At the end of its monitoring period, a company could be bankrupt, still in business, or could have been acquired by another company. Each company has its own characteristics, such as total assets, number of employees, number of branches, etc. Note that these are all time-dependent characteristics.
The ``health status'' of a company is rated according to four categories (A: Exceptional; B: Less than Exceptional; C: Distressed; D: Bankrupt). The bankrupt status is the absorbing state. The company's liability relative to its assets, categorized into High, Medium, Low, or Non-Existent, could be an important longitudinal marker. Recurrent competing risks could be the occurrences of an increase (decrease) of at least 5\% in its stock share price during a trading day. Based on data from a sample of these companies, it could be of interest to predict the time to bankruptcy of another company that is not in the sample. \n\n\\item {\\bf COVID-19 Example}: Consider a vaccine trial where $n$ subjects are randomized into different vaccine groups, including a no vaccine group. Group membership could be coded using dummy covariates. Each subject is monitored over an observation period, until loss to follow-up, or until death. Aside from the group membership, other covariates (e.g., age or age-group, gender, BMI, race, pre-existing conditions such as whether immuno-compromised or not, political affiliation, religious affiliation, etc.) for each subject are also observed. A longitudinal marker for each subject could be the viral load, categorized into none, low, medium, or high. See \cite{VelavanEtAl2021} for other examples of longitudinal medical markers observed in COVID-19 studies. The health status for each subject could be classified into free of COVID-19, moderately infected, severely infected, or dead. Competing recurrent events could be the occurrence of abdominal problems, severe coughing, or body temperature reaching 103 degrees Fahrenheit.
Possible goals of the study are to compare the different vaccine groups in terms of preventing COVID-19 infection; with respect to the mean or median time to absorption; or with respect to the mean or median time to transition out of the infected state given COVID-19 infection.\n\\end{itemize}\n\n\n\\section{Joint Model of RCR, LM, and HS Processes}\n\\label{sec: mathform}\n\n\\subsection{Data Observables for One Unit}\n\nDenote by $(\Omega, \mathfrak{F}, \Pr)$ the basic filtered probability space with filtration $\mathcal{F} = \{\mathcal{F}_s: s \ge 0\}$ where all random entities under consideration are defined. We begin by describing the joint model for the data observable components for {\bf one unit}. \n\nLet $\tau$, the end of the monitoring period, have a distribution function $G(\cdot)$, which may be degenerate. The covariate vector will be ${X} = (X_1, \ldots, X_p)$, assumed to be time-independent, though the extension to time-dependent covariates is possible with additional assumptions.\nFor the RCR component, let $N^R = \{N_q^R(s) \equiv (N^R(s;q), q \in \mathfrak{I}_Q):\ s \ge 0\}$, with index set $\mathfrak{I}_Q = \{1,\ldots,Q\}$, be a $Q$-dimensional multivariate counting (column) vector process such that, for $q \in \mathfrak{I}_Q$, $N_q^R(s)$ is the number of {\em observed} occurrences of the recurrent event of type $q$ over $[0,s]$, with $N_q^R(0) = 0$. Thus, the sample path $s \mapsto N_q^R(s)$ takes values in $\mathbb{Z}_{0,+} = \{0,1,2,\ldots\}$, is a non-decreasing step-function, and is right-continuous with left-hand limits. We denote by $dN_q^R(s) = N_q^R(s) - N_q^R(s-)$ the jump at time $s$ of $N_q^R$. \n\nFor the LM process, let $W = \{W(s): s \geq 0\}$, where $W(s)$ takes values in a finite state space $\mathfrak{W}$ with cardinality $|\mathfrak{W}|$. $W(s)$ represents the state of the longitudinal marker at time $s$.
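To fix ideas, the RCR counting vector $N^R(s)$ can be computed directly from a unit's raw event record. A small sketch (the function name, event times, and types below are hypothetical, for illustration only):

```python
import numpy as np

def rcr_counts(event_times, event_types, Q, s):
    """N^R(s): number of observed type-q recurrent events in [0, s], q = 1..Q."""
    t = np.asarray(event_times, dtype=float)
    q = np.asarray(event_types, dtype=int)  # event types coded 1..Q
    return np.array([np.sum((q == k) & (t <= s)) for k in range(1, Q + 1)])

# Hypothetical unit with Q = 3 competing recurrent event types
times = [0.4, 1.1, 1.7, 2.5, 3.2]
types = [1,   3,   1,   2,   1]
print(rcr_counts(times, types, Q=3, s=2.0))  # [2 0 1]
```

Each coordinate $s \mapsto N^R(s;q)$ is then a right-continuous, non-decreasing step function starting at zero, as required above.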
The sample path $s \mapsto W(s)$ is a step-function which is right-continuous with left-hand limits. By introducing the index set $\mathfrak{I}_{\mathfrak{W}} = \{(w_1,w_2): w_1, w_2 \in \mathfrak{W}, w_1 \ne w_2\}$, we can convert $W$ into a (column) $2{\binom{|\mathfrak{W}|} {2}}$-dimensional multivariate counting process $\{N^W \equiv (N^W(s;w_1,w_2), (w_1,w_2) \in \mathfrak{I}_{\mathfrak{W}}): s \ge 0\}$, where $N^W(s;w_1,w_2)$ is the number of observed transitions of $W$ from state $w_1$ into $w_2$ over the period $[0,s]$, that is, for $(w_1,w_2) \in \mathfrak{I}_{\mathfrak{W}}$,\n$$N^W(s;w_1,w_2) = \sum_{t \le s} I\{W(t-) = w_1,W(t) = w_2\},$$\nwith $I(\cdot)$ denoting the indicator function.\nThus, for each $(w_1,w_2) \in \mathfrak{I}_{\mathfrak{W}}$, the sample path $s \mapsto N^W(s;w_1,w_2)$ takes values in $\mathbb{Z}_{0,+}$, is a nondecreasing step-function, right-continuous with left-hand limits, and with $N^W(0;w_1,w_2) = 0$. Furthermore, $\sum_{(w_1,w_2) \in \mathfrak{I}_{\mathfrak{W}}} dN^W(s;w_1,w_2) \in \{0,1\}$ for every $s \ge 0$.\n\nFor the HS process, let $V = \{V(s): s \ge 0\}$, where $V(s)$ takes values in the finite state space $\mathfrak{V} = \mathfrak{V}_0 \bigcup \mathfrak{V}_1$, where states in $\mathfrak{V}_0$ are absorbing states, and with $|\mathfrak{V}_0| > 0$. $V(s)$ is the state occupied by the HS process at time $s$, so that if $V(s) \in \mathfrak{V}_0$ then $V(s^\prime) = V(s)$ for all $s^\prime > s$. Similar to the LM process, let $\mathfrak{I}_{\mathfrak{V}} = \{(v_1,v): v_1 \in \mathfrak{V}_1, v \in \mathfrak{V}; v_1 \ne v\}$, whose cardinality is $|\mathfrak{I}_{\mathfrak{V}}| = |\mathfrak{V}_1| |\mathfrak{V}| - |\mathfrak{V}_1|$.
We convert $V$ into a (column) $|\mathfrak{I}_{\mathfrak{V}}|$-dimensional multivariate counting process $\{N^V \equiv (N^V(s;v_1,v), (v_1,v) \in \mathfrak{I}_{\mathfrak{V}}): s \ge 0\}$, where $N^V(s;v_1,v)$ is the number of observed transitions of $V$ from state $v_1$ into state $v$ over the period $[0,s]$, that is, for $(v_1,v) \in \mathfrak{I}_{\mathfrak{V}}$,\n$$N^V(s;v_1,v) =\sum_{t \le s} I\{V(t-) = v_1, V(t) = v\}.$$\nFor each $(v_1,v) \in \mathfrak{I}_{\mathfrak{V}}$, the sample path $s \mapsto N^V(s;v_1,v)$ takes values in $\mathbb{Z}_{0,+}$, and is a nondecreasing step-function, right-continuous with left-hand limits, and with $N^V(0;v_1,v) = 0$. In addition, $\sum_{(v_1,v) \in \mathfrak{I}_{\mathfrak{V}}} dN^V(s;v_1,v) \in \{0, 1\}$ for every $s \ge 0$.\nNext, we combine the multivariate counting processes $N^R$, $N^W$, and $N^V$ into one (column) multivariate counting process $N = \{N(s): s \ge 0\}$ of dimension $Q + |\mathfrak{I}_{\mathfrak{W}}| + |\mathfrak{I}_{\mathfrak{V}}|$, where, with $^{\tt T}$ denoting vector\/matrix transpose,\n$$N(s) = \left[(N^R(s))^{\tt T}, (N^W(s))^{\tt T}, (N^V(s))^{\tt T}\right]^{\tt T}.$$\n\nAn important point needs to be stated regarding the observables in the study, which will have an impact on the interpretation of the parameters of the joint model. This pertains to the ``competing risks'' nature of all the possible events at each time point $s$. The possible $Q$ recurrent event types, as well as the potential transitions in the LM and HS processes, are all competing with each other. Thus, suppose that at time $s_0$, the event that occurred is a recurrent event of type $q_0$, that is, $dN^R(s_0;q_0) = 1$. This means that this event has occurred {\em in the presence} of the potential recurrent events from the other $Q-1$ risks, and the potential transitions from either the LM or HS processes.
This will entail the use of {\\em crude} hazards, instead of {\\em net} hazards, in the joint modeling, and this observation will play a critical role in the dynamic joint model since each of the competing event occurrences at a given time point $s$ from all the possible event sources (RCR, LM, and HS) will be affected by the history of all these processes just before time $s$. This is the aspect that exemplifies the synergistic association among the three components.\n\nAnother observable process for our joint model is a vector of effective (or virtual) age processes $\\mathcal{E} = \\{(\\mathcal{E}_1(s),\\ldots,\\mathcal{E}_Q(s)): s \\ge 0\\}$, whose components are $\\mathcal{F}$-predictable processes with sample paths that are non-negative, left-continuous, piecewise nondecreasing and differentiable. These effective age processes will carry the impact of interventions performed after each recurrent event occurrence or a transition in either the LM process or the HS process. For recent articles dealing with virtual ages, see the philosophically-oriented article \\cite{FinCha21} and the very recent review article \\cite{Beu2021}.\n\nFinally, we define the time-to-absorption of the unit to be\n$\\tau_A = \\inf\\{s \\ge 0: V(s) \\in \\mathfrak{V}_0\\}$\nwith the convention that $\\inf \\emptyset = \\infty$. 
Using this $\tau_A$ and $\tau$, we define the unit's at-risk process to be $Y = \{Y(s): s \ge 0\}$, with\n$Y(s) = I\{\min(\tau,\tau_A) \ge s\}.$\nIn addition, we define LM-specific and HS-specific at-risk processes as follows: For $w \in \mathfrak{W}$, define $Y^W(s;w) = I\{W(s-) = w\}$; and, for $v \in \mathfrak{V}_1$, define $Y^V(s;v) = I\{V(s-) = v\}$.\nFor a unit, we could then concisely summarize the random observables in terms of stochastic processes as:\n\\begin{equation}\n\\label{data: one unit}\nD = (X,N,\mathcal{E},Y,Y^W,Y^V) \equiv \{X,(N(s),\mathcal{E}(s),Y(s),Y^W(s),Y^V(s): s \ge 0)\}.\n\\end{equation}\nNote that the processes are undefined for $s > \min(\tau_A,\tau) \equiv \inf\{s \ge 0: Y(s+) = 0\}$ since monitoring of the unit has by then ceased.\n\n\\subsection{Joint Model Specification for One Unit}\n\\label{subsec: joint model}\n\nThe joint model is specified through the compensator process vector and the predictable quadratic variation (PQV) process matrix of the multivariate counting process $N$. The predictable process vector $A = \{A(s): s \ge 0\}$ is of dimension $Q + |\mathfrak{I}_{\mathfrak{W}}| + |\mathfrak{I}_{\mathfrak{V}}|$ and is such that the vector process $M = N - A = \{M(s) = N(s) - A(s): s \ge 0\}$ is a zero-mean square-integrable martingale process with PQV matrix process $\langle M, M \rangle$.
The vectors $A$ and $M$ are partitioned into three vector components reflecting the RCR, LM, and HS components, according to\n\\begin{displaymath}\nA = \left[ (A^R)^{\tt T}, (A^W)^{\tt T}, (A^V)^{\tt T} \right]^{\tt T} \quad \mbox{and} \quad\nM = \left[(M^R)^{\tt T}, (M^W)^{\tt T}, (M^V)^{\tt T}\right]^{\tt T},\n\\end{displaymath}\nwhere, with $q \in \mathfrak{I}_Q$, $(w_1,w_2) \in \mathfrak{I}_{\mathfrak{W}}$, $(v_1,v) \in \mathfrak{I}_{\mathfrak{V}}$, and $s \ge 0$,\n\\begin{eqnarray*}\n& A^R = \{(A^R(s;q))\} \quad \mbox{and} \quad M^R = \{(M^R(s;q))\}; & \\\\\n& A^W = \{(A^W(s;w_1,w_2))\} \quad \mbox{and} \quad M^W = \{(M^W(s;w_1,w_2))\}; & \\\\\n& A^V = \{(A^V(s;v_1,v))\} \quad \mbox{and} \quad M^V = \{(M^V(s;v_1,v))\}, &\n\\end{eqnarray*}\nwith $A^R$ and $M^R$ of dimension $Q$; $A^W$ and $M^W$ of dimension $|\mathfrak{I}_{\mathfrak{W}}|$; and $A^V$ and $M^V$ of dimension $|\mathfrak{I}_{\mathfrak{V}}|$. The matrix $\langle M, M \rangle$ could then be partitioned similarly to reflect these block components.\n\nWe can now proceed with the specification of the compensator process vector and the PQV process matrix. \nFor conciseness, we introduce the generic mapping $\iota$ defined as follows: For a set $A$ with $m$ elements, $A=\{a_1,\ldots,a_m\}$, let $\iota_{A}(a)=(I(a_2=a),\ldots,I(a_m=a))$, a row vector of $m-1$ elements. Here $\iota_A(a)$ is the indicator vector of $a$ excluding the first element of $A$, so that $\iota_{A}(a_1)=(\underline{0})$. Excluding $a_1$ in the $\iota$ mapping is for purposes of model identifiability. One may think of the mapping $\iota$ as a converter to dummy variables. We will also need the following quantities or functions.\n\\begin{itemize}\n \\item For each $q \in \mathfrak{I}_Q$ there is a baseline (crude) hazard rate function $\lambda_{0q}(\cdot)$ with associated baseline (crude) cumulative hazard function $\Lambda_{0q}(\cdot)$.
We also denote by $\\bar{F}_{0q}(\\cdot) = \\prodi_{v=0}^{\\cdot} [1 - \\Lambda_{0q}(dv)]$ the associated baseline (crude) survivor function, where $\\prodi$ is the product-integral symbol.\n %\n \\item For each $q \\in \\mathfrak{I}_Q$ there is a mapping $\\rho_q(\\cdot;\\cdot): \\mathbb{Z}_{0,+}^Q \\times \\Re^{d_q} \\rightarrow \\Re_{0,+}$, where the $d_q$'s are known positive integers. There is an associated vector $\\alpha_q \\in \\Re^{d_q}$.\n %\n \\item There is a collection of non-negative real numbers $\\eta = \\{\\eta(w_1,w_2): (w_1,w_2) \\in \\mathfrak{I}_{\\mathfrak{W}}\\}$, and we define for every $w_1 \\in \\mathfrak{W}$, $\\eta(w_1,w_1) = -\\sum_{w \\in \\mathfrak{W}; w \\ne w_1} \\eta(w_1,w).$\n %\n \\item There is a collection of non-negative real numbers $\\xi = \\{\\xi(v_1,v): (v_1, v) \\in \\mathfrak{I}_{\\mathfrak{V}}\\}$, and we define for every $v_1 \\in \\mathfrak{V}$, $\\xi(v_1,v_1) = -\\sum_{v \\in \\mathfrak{V}; v \\ne v_1} \\xi(v_1,v),$ and with $\\xi(v_1,v_2) = 0$ for every $v_1 \\in \\mathfrak{V}_0$ and $v_2 \\in \\mathfrak{V}$.\n %\n\\end{itemize}\nWe then define the observables and associated finite-dimensional parameters, respectively, for each of the three model components according to\n\\begin{eqnarray*}\n& B^R(s)=[X,\\iota_{\\mathfrak{V}}(V(s)),\\iota_{\\mathfrak{W}}(W(s))] \\quad \\mbox{and} \\quad \\theta^R=[(\\beta^R)^{\\tt T},(\\gamma^R)^{\\tt T},(\\kappa^R)^{\\tt T}]^{\\tt T}; & \\\\\n& B^W(s)=[X,\\iota_{\\mathfrak{V}}(V(s)),N^R(s)] \\quad \\mbox{and} \\quad \\theta^W=[(\\beta^W)^{\\tt T},(\\gamma^W)^{\\tt T},(\\nu^W)^{\\tt T}]^{\\tt T}; & \\\\\n& B^V(s)=[X,\\iota_{\\mathfrak{W}}(W(s)),N^R(s)] \\quad \\mbox{and} \\quad \\theta^V=[(\\beta^V)^{\\tt T},(\\kappa^V)^{\\tt T},(\\nu^V)^{\\tt T}]^{\\tt T}. 
&\n\\end{eqnarray*}\nIn addition to the $\lambda_{0q}$'s, $\alpha_q$'s, $\eta$, $\xi$, the $\theta^R$, $\theta^W$, and $\theta^V$ will constitute all of the model parameters.\nFor the proposed model, the compensator process components are given by, for $q \in \mathfrak{I}_Q$, $(w_1,w_2) \in \mathfrak{I}_{\mathfrak{W}}$, and $(v_1,v) \in \mathfrak{I}_{\mathfrak{V}}$:\n\\begin{eqnarray*}\n& A^R(s;q) = \int_0^s Y(t) \lambda_{0q}[\mathcal{E}_q(t)] \rho_q[N^R(t-);\alpha_q] \exp\{B^R(t-) \theta^R\} dt; & \\\\\n& A^W(s;w_1,w_2) = \int_0^s Y(t) Y^W(t;w_1) \eta(w_1,w_2) \exp\{B^W(t-) \theta^W\} dt; & \\\\\n& A^V(s;v_1,v) = \int_0^s Y(t) Y^V(t;v_1) \xi(v_1,v) \exp\{B^V(t-) \theta^V\} dt. &\n\\end{eqnarray*}\nOn the left-hand side of the equations above, we have suppressed the dependence on the model parameters. With $\mbox{Dg}(a)$ denoting the diagonal matrix formed from vector $a$, the PQV process is specified to be\n\\begin{displaymath}\n\\langle M, M \rangle (s) = \mbox{Dg}[(A^R(s))^{\tt T}, (A^W(s))^{\tt T}, (A^V(s))^{\tt T}].\n\\end{displaymath}\nObserve the dynamic nature of this model in that an event occurrence in an infinitesimal time interval $[s,s+ds)$ depends on the history of the processes before time $s$.
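As a numerical illustration of these compensator increments, the sketch below evaluates $A^R(ds;q)$, $A^W(ds;w_1,w_2)$, and $A^V(ds;v_1,v)$ over a small interval $[t, t+dt)$. All functional forms and numbers are hypothetical choices of our own, not estimates; for example, $\rho_q(N;\alpha_q) = \exp(\alpha_q N_q)$ is just one admissible specification of the accumulating-events modifier:

```python
import numpy as np

def dA_R(dt, Y, lam0q, eff_age_q, alpha_q, NR_q, linpred_R):
    # A^R(ds;q) = Y(s) lam0q(E_q(s)) rho_q(N^R(s-); alpha_q) exp(B^R theta^R) ds,
    # with the hypothetical choice rho_q(N; a) = exp(a * N_q)
    return Y * lam0q(eff_age_q) * np.exp(alpha_q * NR_q) * np.exp(linpred_R) * dt

def dA_W(dt, Y, YW_w1, eta_w1w2, linpred_W):
    # A^W(ds;w1,w2) = Y(s) Y^W(s;w1) eta(w1,w2) exp(B^W theta^W) ds
    return Y * YW_w1 * eta_w1w2 * np.exp(linpred_W) * dt

def dA_V(dt, Y, YV_v1, xi_v1v, linpred_V):
    # A^V(ds;v1,v) = Y(s) Y^V(s;v1) xi(v1,v) exp(B^V theta^V) ds
    return Y * YV_v1 * xi_v1v * np.exp(linpred_V) * dt

# Unit still at risk, with one prior type-q event and a constant baseline hazard:
print(dA_R(dt=0.01, Y=1, lam0q=lambda e: 0.5, eff_age_q=2.0,
           alpha_q=0.3, NR_q=1, linpred_R=0.2))
print(dA_W(dt=0.01, Y=1, YW_w1=1, eta_w1w2=0.4, linpred_W=-0.1))
print(dA_V(dt=0.01, Y=1, YV_v1=0, xi_v1v=0.6, linpred_V=0.0))  # 0.0: not in state v1
```

Note how the at-risk indicators switch an increment off entirely: the HS increment above is zero because the process is not in state $v_1$ just before $t$, mirroring the $Y^V(t;v_1)$ factor in the integrand.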
According to the theory of counting processes, we have the following probabilistic interpretations (all statements hold almost surely):\n\\begin{eqnarray*}\n& E\\{dN^R(s;q)|\\mathcal{F}_{s-}\\} = dA^R(s;q); & \\\\\n& E\\{dN^W(s;w_1,w_2)|\\mathcal{F}_{s-}\\} = dA^W(s;w_1,w_2); & \\\\\n& E\\{dN^V(s;v_1,v)|\\mathcal{F}_{s-}\\} = dA^V(s;v_1,v), &\n\\end{eqnarray*}\nand\n\\begin{eqnarray*}\n& Var\\{dN^R(s;q)|\\mathcal{F}_{s-}\\} = dA^R(s;q); & \\\\\n& Var\\{dN^W(s;w_1,w_2)|\\mathcal{F}_{s-}\\} = dA^W(s;w_1,w_2); & \\\\\n& Var\\{dN^V(s;v_1,v)|\\mathcal{F}_{s-}\\} = dA^V(s;v_1,v), &\n\\end{eqnarray*}\ntogether with the conditional covariance, given $\\mathcal{F}_{s-}$, between any pair of elements of $dN(s)$ being equal to zero, e.g., $Cov\\{dN^R(s;q),dN^W(s;w_1,w_2)|\\mathcal{F}_{s-}\\} = 0$. However, note that we are not assuming that the components of $N^R$, $N^W$, and $N^V$ are independent, nor even conditionally independent. A way to view this model is that, given $\\mathcal{F}_{s-}$, the history just before time $s$, on the infinitesimal interval $[s,s+ds)$, $dN(s) = (dN^R(s)^{\\tt T}, dN^W(s)^{\\tt T}, dN^V(s)^{\\tt T})^{\\tt T}$ has a multinomial distribution with parameters $1$ and $dA(s) = (dA^R(s)^{\\tt T},dA^W(s)^{\\tt T},dA^V(s)^{\\tt T})^{\\tt T}$. As such, for every $s \\ge 0$, the following constraint holds:\n\\begin{displaymath}\ndN_\\bullet(s) = dN_\\bullet^R(s) + dN_\\bullet^W(s) + dN_\\bullet^V(s) \\in \\{0,1\\},\n\\end{displaymath}\nwith the notation that a subscript of $\\bullet$ denotes summation over the appropriate index set, e.g., $dN_\\bullet^R(s) = \\sum_{q \\in \\mathfrak{I}_Q} dN^R(s;q)$ and $dA_\\bullet(s) = dA_\\bullet^R(s) + dA_\\bullet^W(s) + dA_\\bullet^V(s)$. The multinomial distribution above could actually be approximated by independent Bernoulli distributions. 
To see this, suppose we have real numbers $p_k, k=1,\\ldots,K,$ with $0 < p_k \\approx 0$ for each $k = 1, \\ldots, K,$ and with $\\sum_{k=1}^K p_k \\approx 0$; then we have the approximation\n\\begin{displaymath}\n\\left(1 - \\sum_{k=1}^K p_k\\right) \\approx \\prod_{k=1}^K (1 - p_k).\n\\end{displaymath}\nConsequently, in the display below, the multinomial probability on the left-hand side is approximately the product of independent Bernoulli probabilities on the right-hand side.\n\\begin{eqnarray*}\n\\lefteqn{ \\left\\{\\prod_{q \\in \\mathfrak{I}_Q} [dA^R(s;q)]^{dN^R(s;q)}\\right\\} \\left\\{\\prod_{(w_1,w_2) \\in \\mathfrak{I}_{\\mathfrak{W}}} [dA^W(s;w_1,w_2)]^{dN^W(s;w_1,w_2)} \\right\\} \\times } \\\\ &&\n \\left\\{\\prod_{(v_1,v) \\in \\mathfrak{I}_{\\mathfrak{V}}} [dA^V(s;v_1,v)]^{dN^V(s;v_1,v)} \\right\\} \\left\\{[1 - dA_\\bullet(s)]^{1 - dN_\\bullet(s)}\\right\\} \\\\ & \\approx &\n \\left\\{\\prod_{q \\in \\mathfrak{I}_Q} [dA^R(s;q)]^{dN^R(s;q)} [1 - dA^R(s;q)]^{1- dN^R(s;q)}\\right\\} \\\\ && \\left\\{\\prod_{(w_1,w_2) \\in \\mathfrak{I}_{\\mathfrak{W}}} [dA^W(s;w_1,w_2)]^{dN^W(s;w_1,w_2)} [1 - dA^W(s;w_1,w_2)]^{1 - dN^W(s;w_1,w_2)}\\right\\} \\\\ && \\left\\{\\prod_{(v_1,v) \\in \\mathfrak{I}_{\\mathfrak{V}}} [dA^V(s;v_1,v)]^{dN^V(s;v_1,v)} [1 - dA^V(s;v_1,v)]^{1 - dN^V(s;v_1,v)}\\right\\}.\n\\end{eqnarray*}\nThis approximate equivalence informs the manner in which we generate data from the model in the sections dealing with an illustrative data set (Section \\ref{sec-Illustration}) and the simulation studies (Section \\ref{sec: simulation}), where we use this product-of-Bernoullis approach.\n\nConsider a unit that is still at risk just before time $s$ whose LM process is at state $w_1$ and HS process at state $v_1 \\notin \\mathfrak{V}_0$. Two questions of interest are: \n\\begin{itemize}\n\\item[(a)] What is the distribution of the next event occurrence? 
\n\\item[(b)] Given in addition that an event occurred at time $s+t$, what are the conditional probabilities of each of the possible events? \n\\end{itemize}\nDenote by $T$ the time to the next event occurrence starting from time $s$. Then, \n\\begin{eqnarray*}\n\\Pr\\{T > t|\\mathcal{F}_{s-}\\} & = & \\prodi_{u=s}^{s+t} \\left[1 - \\left(\\sum_{q=1}^Q \\lambda_{0q}[\\mathcal{E}_{q}(u)] \\rho_{q}[N^R(u-);\\alpha_q] \\exp\\{B^R(u-) \\theta^R\\} - \\right.\\right. \\\\ && \\left.\\left. \\eta(w_1,w_1) \\exp\\{B^W(u-)\\theta^W\\} - \\xi(v_1,v_1) \\exp\\{B^V(u-)\\theta^V\\}\\right) du \\right] \\\\\n& = & \\exp\\left\\{-\\int_{s}^{s+t} \\left(\\sum_{q=1}^Q \\lambda_{0q}[\\mathcal{E}_{q}(u)] \\rho_{q}[N^R(u-);\\alpha_q] \\exp\\{B^R(u-) \\theta^R\\} - \\right.\\right. \\\\ && \\left.\\left. \\eta(w_1,w_1) \\exp\\{B^W(u-)\\theta^W\\} - \\xi(v_1,v_1) \\exp\\{B^V(u-)\\theta^V\\}\\right) du \\right\\} \\\\\n& = & \\exp\\left\\{-\\exp\\{B^R(s-) \\theta^R\\}\\sum_{q=1}^Q \\rho_{q}[N^R(s-);\\alpha_q] \\int_{s}^{s+t} \\lambda_{0q}[\\mathcal{E}_{q}(u)] du + \\right. \\\\\n&& \\left. \\eta(w_1,w_1) \\exp\\{B^W(s-)\\theta^W\\} t + \\xi(v_1,v_1) \\exp\\{B^V(s-)\\theta^V\\} t\\right\\},\n\\end{eqnarray*}\nwith the second equality obtained by invoking the product-integral identity \n$$\\prodi_{s \\in I} [1 - dA(s)]^{1-dN(s)} = \\exp\\left\\{-\\int_{s \\in I} dA(s)\\right\\}$$ \ntogether with the fact that no events in $[s,s+t)$ means $dN_\\bullet^R(u) + dN_\\bullet^W(u) + dN_\\bullet^V(u) = 0$ for $u \\in [s,s+t)$; the last equality arises because, prior to the next event, there are no changes in the values of $N^R$, $B^R$, $B^W$, and $B^V$ from their respective values just before time $s$. In particular, if the $\\lambda_{0q}$s are constants, corresponding to the hazard rates of an exponential distribution, then the distribution of the time to the next event occurrence is exponential. 
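The product-integral identity invoked above can be checked numerically by evaluating the product over a fine grid. A minimal sketch (the linear hazard used here is hypothetical):

```python
import math

def product_integral(hazard, a, b, n=20000):
    """Approximate the product-integral of [1 - hazard(u) du] over (a, b]
    by a finite product over n small subintervals."""
    dt = (b - a) / n
    p = 1.0
    for k in range(n):
        p *= 1.0 - hazard(a + (k + 0.5) * dt) * dt
    return p

hazard = lambda u: 0.5 + 0.2 * u   # hypothetical total event intensity
a, b = 1.0, 3.0
pi_val = product_integral(hazard, a, b)
# exp{-∫_a^b hazard(u) du} with ∫(0.5 + 0.2u) du = 0.5(b-a) + 0.1(b^2-a^2):
exact = math.exp(-(0.5 * (b - a) + 0.1 * (b ** 2 - a ** 2)))
```

With constant hazards, the same computation reproduces the exponential survivor function of the holding time, in line with the remark above.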
Given that the event occurred at time $s+t$, the conditional probability that it was an RCR-type $q$ event is \n$\\exp\\{B^R(s-) \\theta^R\\} \\rho_{q}[N^R(s-);\\alpha_q] \\lambda_{0q}[\\mathcal{E}_{q}(s+t)]\/C(s,t),$\nwith\n\\begin{eqnarray*}\n\\lefteqn{C(s,t) = \\exp\\{B^R(s-) \\theta^R \\} \\sum_{q^\\prime = 1}^Q \\rho_{q^\\prime}[N^R(s-);\\alpha_{q^\\prime}] \\lambda_{0q^\\prime}[\\mathcal{E}_{q^\\prime}(s+t)] - } \\\\\n&& \\eta(w_1,w_1) \\exp\\{B^W(s-)\\theta^W\\} - \\xi(v_1,v_1) \\exp\\{B^V(s-)\\theta^V\\}.\n\\end{eqnarray*}\nSimilarly, the conditional probability that it was a transition to state $w_2$ for the LM process is\n$\\eta(w_1,w_2) \\exp\\{B^W(s-)\\theta^W\\}\/C(s,t),$\nand the conditional probability that it was a transition to state $v$, possibly to an absorbing state, for the HS process is\n$\\xi(v_1,v) \\exp\\{B^V(s-)\\theta^V\\}\/C(s,t).$\n %\nThese probabilities demonstrate the competing risks nature of the different possible events. They also provide a computational approach to iteratively generate data from the joint model for use in simulation studies, with the basic idea being to first generate a time to any type of event, then to mark the type of event or update each of the counting processes by using the above conditional probabilities.\n\nDenoting by $\\Theta$ the set of all parameters of the model, the likelihood function arising from observing $D$, with $p_{(W,V)}(\\cdot,\\cdot)$ the initial joint probability mass function of $(W(0),V(0))$, is given by\n\\begin{eqnarray}\n\\label{lik for one unit} \\lefteqn{ \\mathcal{L}(\\Theta|D) = p_{(W,V)}(W(0),V(0)) \\times } \\\\ \n&& \\prodi_{s=0}^\\infty \\left\\{\n\\left[ \\prod_{q \\in \\mathfrak{I}_Q} [dA^R(s;q)]^{dN^R(s;q)} \\right] \\left[ \\prod_{(w_1,w_2) \\in \\mathfrak{I}_{\\mathfrak{W}}} [dA^W(s;w_1,w_2)]^{dN^W(s;w_1,w_2)} \\right] \\right. \\times \\nonumber \\\\ \n&& \\left. 
\\left[ \\prod_{(v_1,v) \\in \\mathfrak{I}_{\\mathfrak{V}}} [dA^V(s;v_1,v)]^{dN^V(s;v_1,v)} \\right] \\left[1 - dA_\\bullet(s)\\right]^{1-dN_\\bullet(s)} \n\\right\\}. \\nonumber\n\\end{eqnarray}\nThe likelihood in (\\ref{lik for one unit}) could be rewritten in the form\n\\begin{eqnarray}\n\\label{lik: one unit alt} \\lefteqn{\\mathcal{L}(\\Theta|D) = p_{(W,V)}(W(0),V(0)) \\times} \\\\\n&& \\prodi_{s=0}^\\infty \\left\\{\n\\left[ \\prod_{q \\in \\mathfrak{I}_Q} [dA^R(s;q)]^{dN^R(s;q)} \\right] \\left[ \\prod_{(w_1,w_2) \\in \\mathfrak{I}_{\\mathfrak{W}}} [dA^W(s;w_1,w_2)]^{dN^W(s;w_1,w_2)} \\right] \\right. \\nonumber \\\\\n&& \\left. \\left[ \\prod_{(v_1,v) \\in \\mathfrak{I}_{\\mathfrak{V}}} [dA^V(s;v_1,v)]^{dN^V(s;v_1,v)} \\right] \n\\right\\} \\exp\\{-A_\\bullet(\\infty)\\}. \\nonumber \n\\end{eqnarray}\nLet us examine the meaning of $A_\\bullet(\\infty) \\equiv A_\\bullet^R(\\infty) + A_\\bullet^W(\\infty) + A_\\bullet^V(\\infty)$. Simplifying, we find\n$A_\\bullet(\\infty) = \\int_0^\\infty Y(s) T(s) ds = \\int_0^{\\tau\\wedge\\tau_A} T(s) ds$,\nwhere\n\\begin{eqnarray*}\nT(s) & = & \\sum_{q \\in \\mathfrak{I}_Q} \\lambda_{0q}[\\mathcal{E}_q(s)] \\rho_q[N^R(s-);\\alpha_q] \\exp\\{B^R(s-) \\theta^R\\} - \\\\\n&& \\sum_{w_1 \\in \\mathfrak{W}} Y^W(s;w_1) \\eta(w_1,w_1) \\exp\\{B^W(s-) \\theta^W\\} - \\\\\n&& \\sum_{v_1 \\in \\mathfrak{V}_1} Y^V(s;v_1) \\xi(v_1,v_1) \\exp\\{B^V(s-) \\theta^V\\}.\n\\end{eqnarray*}\nRecall that $\\eta(w_1,w_1)$ and $\\xi(v_1,v_1)$ are non-positive real numbers. Thus, $T(s) ds$ could be interpreted as the total risk of an event, either a recurrent event in the RCR component or a transition in the LM or HS components, occurring from all possible sources (RCR, LM, HS) that the unit is exposed to in the infinitesimal time interval $[s, s+ds)$, given the history $\\mathcal{F}_{s-}$ just before time $s$.\n\nWe provide further explanations of the elements of the joint model. 
First, there is a tacit assumption that no more than one event of any type could occur at any given time $s$. Second, the event rate at any time point $s$ for any type of event is to be interpreted in the presence of the other possible risk events. Thus, consider a specific $q_0 \\in \\mathfrak{I}_Q$ and assume that the unit is still at-risk at time $s_0$. Then,\n\\begin{eqnarray}\n\\label{RCR probability}\n\\Pr\\{dN_{q_0}^R(s_0) = 1|\\mathcal{F}_{s_0-}\\} & \\approx & \\lambda_{0q_0}[\\mathcal{E}_{q_0}(s_0)] \n\\rho_{q_0}[N^R(s_0-);\\alpha_{q_0}] \\exp\\{B^R(s_0-)\\theta^R\\} ds_0\n\\end{eqnarray}\nis the conditional probability, given $\\mathcal{F}_{s_0-}$, that an event occurred in $[s_0,s_0+ds_0)$ and is of RCR type $q_0$ {\\em and} all other event types did not occur, which is the essence of what is called a crude hazard rate, instead of a net hazard rate. Third, the effective (or virtual) age functions $\\mathcal{E}_{q}(\\cdot)$s, which are assumed to be dynamically determined and not dependent on any unknown parameters, encode the impact of interventions performed after each event occurrence. Several possible choices of these functions are:\n\\begin{itemize}\n\\item $\\mathcal{E}_q(s) = s$ for all $s \\ge 0$ and $q \\in \\mathfrak{I}_Q$. This is usually referred to as a minimal repair intervention, corresponding to the situation where an intervention simply restores the system to its age just before the event occurrence.\n\\item $\\mathcal{E}_q(s) = s - S_{N_\\bullet(s-)}$ where $0 = S_0 < S_1 < S_2 < \\ldots$ are the successive event times. This corresponds to a perfect intervention, which has the effect of re-setting the time to zero for each of the RCRs after each event occurrence. 
In a reliability setting, this means that all $Q$ components (unless having exponential lifetimes) are replaced by corresponding identical, but new, components at each event occurrence.\n\\item $\\mathcal{E}_q(s) = s - S_{N_\\bullet^R(s-)}^R$ where $0 = S_0^R < S_1^R < S_2^R < \\ldots$ are the successive event times of the occurrences of the RCR events.\n\\item $\\mathcal{E}_q(s) = s - S_{qN^R(s-;q)}^R$ where $0 = S_{q0}^R < S_{q1}^R < S_{q2}^R < \\ldots$ are the successive event times of the occurrences of RCR events of type $q$.\n\\item Other general forms are possible, such as those in \\cite{dorado1997} and \\cite{gonzalez2005modelling}, the latter employing ideas of Kijima. See also the discussion on the `reality' of virtual or effective ages in the paper by \\cite{FinCha21}, as well as the recent review paper by \\cite{Beu2021}.\n\\end{itemize}\nFourth, the impact of accumulating RCR event occurrences, which could be adverse but could also be beneficial, as in software engineering applications, is incorporated in the model through the $\\rho_q(\\cdot;\\cdot)$ functions. One possible choice is an exponential function, such as $\\rho_q(N^R(s-);\\alpha_q) = \\exp\\{N^R(s-)^{\\tt T}\\alpha_q\\}$, but other choices could be made as well. Finally, the modulating exponential link function captures the impact of the covariates, as well as of the values of the RCR, LM, and HS processes just before the time of interest, with the vectors of coefficients quantifying these effects. The use of $N(s-)$ in the model could be viewed as treating the counting processes as internal covariates, or as making the dynamic model self-exciting.\n\nSimilar interpretations hold for the parameters $\\{\\eta(w_1,w_2): (w_1,w_2) \\in \\mathfrak{I}_{\\mathfrak{W}}\\}$ and $\\{\\xi(v_1,v): (v_1,v) \\in \\mathfrak{I}_{\\mathfrak{V}}\\}$. 
Thus, if just before time $s_0$, $W(s_0-) = w_1$ and $V(s_0-) = v_1$, indicated by $\\mathcal{F}_{s_0-}(w_1,v_1)$, then\n\\begin{eqnarray}\n\\lefteqn{ \\Pr\\{W(s_0+ds_0) = w_2|\\mathcal{F}_{s_0-}(w_1,v_1)\\} \\approx } \\label{LM probability} \\\\\n&& \\eta(w_1,w_2) \\exp\\{X\\beta^W + \\gamma^W_{j(v_1)} + N^R(s_0-) \\nu^W\\} ds_0; \\nonumber \\\\\n\\lefteqn{ \\Pr\\{V(s_0+ds_0) = v|\\mathcal{F}_{s_0-}(w_1,v_1)\\} \\approx } \\label{HS probability} \\\\\n&& \\xi(v_1,v) \\exp\\{ X\\beta^V + \\kappa^V_{j(w_1)} + N^R(s_0-) \\nu^V\\} ds_0, \\nonumber\n\\end{eqnarray}\nwhere $j(v_1)$ is the index associated with $v_1$ in $\\mathfrak{V}_1$ and $j(w_1)$ is the index associated with $w_1$ in $\\mathfrak{W}$.\n\n\\subsection{Special Case: Independent Poisson Processes and CTMCs for One Unit}\n\\label{subsec: special case}\n\nA special case of this general joint model arises when we set $\\lambda_{0q}(s) = \\lambda_{0q}$, $q \\in \\mathfrak{I}_Q$; $\\rho_q = 1$; $\\theta^R = 0$; $\\theta^W = 0$; and $\\theta^V = 0$. 
In this situation, we have\n\\begin{eqnarray*}\ndA^R(s;q) & = & Y(s) \\lambda_{0q} ds, q \\in \\mathfrak{I}_Q; \n\\\\ dA^W(s;w_1,w_2) & = & \\eta(w_1,w_2) Y(s) Y^W(s;w_1) ds, (w_1,w_2) \\in \\mathfrak{I}_{\\mathfrak{W}}; \n\\\\ dA^V(s;v_1,v) & = & \\xi(v_1,v) Y(s) Y^V(s;v_1) ds, (v_1,v) \\in \\mathfrak{I}_{\\mathfrak{V}}.\n\\end{eqnarray*}\nIt is easy to see that this particular model coincides with the model in which:\n\\begin{itemize}\n\\item[(i)] $N^R(\\cdot;q), q \\in \\mathfrak{I}_Q,$ are independent homogeneous Poisson processes with respective rates $\\lambda_{0q}, q \\in \\mathfrak{I}_Q$;\n\\item[(ii)] $W(\\cdot)$ is a continuous-time Markov chain (CTMC) with infinitesimal generator matrix (IGM) consisting of $\\{\\eta(w_1,w_2)\\}$;\n\\item[(iii)] $V(\\cdot)$ is a CTMC with IGM consisting of $\\{\\xi(v_1,v_2)\\}$; \n\\item[(iv)] $N^R$, $W$, and $V$ are independent; and\n\\item[(v)] Processes are observed over $[0,\\min(\\tau,\\tau_A)]$, where $\\tau$ is the end of the monitoring period, while $\\tau_A$ is the absorption time of $V$ into $\\mathfrak{V}_0$.\n\\end{itemize}\nIn this special situation, the $\\lambda_{0q}$'s are both crude and net hazard rates. Also, due to the memoryless property of the exponential distribution, interventions performed after each event occurrence will have no impact on the succeeding event occurrences. This specific joint model further allows us to provide an operational interpretation of the model parameters. Thus, suppose that at time $s$, the LM process is at state $w_1$ and the HS process is at state $v_1 \\notin \\mathfrak{V}_0$. Then the time to the next event occurrence of any type (the holding or sojourn time at the current state configuration) has an exponential distribution with parameter $C = \\lambda_{0\\bullet} - \\eta(w_1,w_1) - \\xi(v_1,v_1)$. 
When an event occurs, the (conditional) probability that it is (i) an RCR event of type $q$ is $\\lambda_{0q}\/C$; (ii) a transition to LM state $w_2 \\ne w_1$ is $\\eta(w_1,w_2)\/C$; or (iii) a transition to an HS state $v \\ne v_1$ is $\\xi(v_1,v)\/C$. This is the essence of the competing risks nature of all the possible event types: an RCR event, an LM transition, and an HS transition.\nAs such, the more general model could be viewed as an extension of this basic model with independent Poisson processes for the RCR component and CTMCs for the LM and HS components. \nFor this special case, the likelihood function in (\\ref{lik: one unit alt}) simplifies to the expression\n\\begin{eqnarray}\n\\lefteqn{\\mathcal{L}(\\Theta|D) = p_{(W,V)}(W(0),V(0))\n\\left[\\prod_{q \\in \\mathfrak{I}_Q} \\lambda_{0q}^{N^R(\\tau\\wedge\\tau_A;q)}\\right] \\times \\label{lik: special case one unit} } \\\\\n&& \n\\left[\\prod_{(w_1,w_2) \\in \\mathfrak{I}_{\\mathfrak{W}}} \\eta(w_1,w_2)^{N^W(\\tau\\wedge\\tau_A;w_1,w_2)}\\right]\n\\left[\\prod_{(v_1,v) \\in \\mathfrak{I}_{\\mathfrak{V}}} \\xi(v_1,v)^{N^V(\\tau\\wedge\\tau_A;v_1,v)}\\right] \\times \\nonumber \\\\\n&& \\exp\\left\\{-\\int_0^{\\tau\\wedge\\tau_A} T(s) ds\\right\\}, \\nonumber\n\\end{eqnarray}\nwhere $T(s) = \\lambda_{0\\bullet} - \\sum_{w_1 \\in \\mathfrak{W}} \\eta(w_1,w_1) Y^W(s;w_1) - \\sum_{v_1 \\in \\mathfrak{V}_1} \\xi(v_1,v_1) Y^V(s;v_1).$ Note that $T(s) = T(s;\\lambda_0,\\eta,\\xi)$, that is, it depends on the unknown model parameters and is therefore not a statistic.\n\n\n\\section{Estimation of Model Parameters}\n\\label{sec: estimation}\n\n\\subsection{Parametric Estimation}\n\\label{subsec: estimation - parametric}\n\nHaving introduced the joint model, we now address in this section the problem of making inferences about the model parameters. We assume that we are able to observe $n$ independent units, with the $i$th unit having data $D_i = (X_i,N_i,\\mathcal{E}_i,Y_i,Y_i^W, Y_i^V)$ as in (\\ref{data: one unit}). 
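For intuition about the kind of per-unit data that accrues, the special case of the previous subsection can be simulated directly: draw an exponential holding time with rate $C$, then mark the event type with probabilities proportional to the individual rates. A minimal sketch, assuming hypothetical rate values, with HS state 0 taken as absorbing and the off-diagonal transition rates passed in (so $-\eta(w_1,w_1)$ and $-\xi(v_1,v_1)$ are their row sums):

```python
import random

def simulate_unit(lam0, eta, xi, w0, v0, tau, rng):
    """Simulate one unit under the special case: independent Poisson RCR
    events plus CTMC moves for the LM (eta) and HS (xi) processes, observed
    until censoring at tau or absorption of the HS process into state 0."""
    s, w, v = 0.0, w0, v0
    events = []                                   # (time, kind, index) history
    while v != 0:                                 # HS state 0 is absorbing
        rates = ([("R", q, r) for q, r in enumerate(lam0)]
                 + [("W", w2, r) for w2, r in eta[w].items()]
                 + [("V", v2, r) for v2, r in xi[v].items()])
        C = sum(r for _, _, r in rates)
        s += rng.expovariate(C)                   # holding time ~ Exp(C)
        if s >= tau:
            return events, tau, "censored"
        u, cum = rng.random() * C, 0.0            # mark the event type
        for kind, idx, r in rates:
            cum += r
            if u < cum:
                break
        events.append((s, kind, idx))
        if kind == "W":
            w = idx
        elif kind == "V":
            v = idx
    return events, s, "absorbed"

rng = random.Random(2024)
lam0 = [0.6, 0.4]                                   # hypothetical RCR rates
eta = {1: {2: 0.3}, 2: {1: 0.2}}                    # LM off-diagonal rates
xi = {1: {2: 0.25, 0: 0.05}, 2: {1: 0.1, 0: 0.15}}  # HS rates; 0 absorbing
path, end_time, status = simulate_unit(lam0, eta, xi, 1, 1, 5.0, rng)
```

Repeating this over independent units yields sample data of the form displayed in the figure below.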
The full sample data will then be represented by\n\\begin{equation}\n\\label{sample data}\n\\mathbf{D} = (D_1,D_2,\\ldots,D_n),\n\\end{equation}\nwhile the model parameters will be represented by, with the convention that $q \\in \\mathfrak{I}_Q$, $(w_1,w_2) \\in \\mathfrak{I}_{\\mathfrak{W}}$, and $(v_1,v) \\in \\mathfrak{I}_{\\mathfrak{V}}$,\n\\begin{eqnarray*}\n\\Theta & \\equiv & \\left[\\{\\lambda_{0q}(\\cdot), \\alpha_q\\}, \\{\\eta(w_1,w_2)\\},\n\\{\\xi(v_1,v)\\}, \\theta^R, \\theta^W, \\theta^V\\right].\n\\end{eqnarray*}\nThe $\\lambda_{0q}$s could be parametrically-specified, hence will have finite-dimensional parameters, so $\\Theta$ will also then be finite-dimensional. Except for the special case mentioned above, our main focus will be the case where the $\\lambda_{0q}$s are nonparametric. The distributions, $G_i$s, of the end of monitoring periods, $\\tau_i$s, also have model parameters, but they are not of main interest. To visualize the type of sample data set that accrues, Figure \\ref{sample data picture} provides a picture of a simulated sample data set with $n = 50$ units.\n\n\\begin{figure}\n\\centering\n\\begin{tabular}{cc}\n\\includegraphics*[width=2.75in]{fi\/covariate.png} & \\includegraphics*[width=2.5in]{fi\/plotallunitsRCR.png} \\\\\n\\includegraphics*[width=2.5in]{fi\/plotallunitsLM.png} & \\includegraphics*[width=2.5in]{fi\/plotallunitsHS.png} \n\\end{tabular}\n\\caption{Simulated sample data with $n = 50$ units. {\\em Panel 1:} Covariate values of the $n=50$ units. Values of $X_{2}$ are indicated by rectangles if $X_{1}=0$ and by circles if $X_{1}=1$. Here $X_{1} \\sim \\mbox{BER}(0.5)$, $X_{2} \\sim N(0,1)$. $X_{1}$ and $X_{2}$ are generated independently of each other. {\\em Panel 2:} Recurrent competing risks occurrences with three types of competing risks. Each unit is either censored (``+'') or reaches the absorbing status (``$\\times$''). {\\em Panel 3:} Marker processes. 
{\\em Panel 4:} Health status processes, with state ``1'' absorbing.}\n\\label{sample data picture}\n\\end{figure}\n\nThe full likelihood function, given $\\mathbf{D}$, is\n$\\mathcal{L}(\\Theta|\\mathbf{D}) = \\prod_{i=1}^n \\mathcal{L}(\\Theta|D_i)$,\nwhere each $\\mathcal{L}(\\Theta|D_i)$ is of the form in (\\ref{lik: one unit alt}). If the $\\lambda_{0q}(\\cdot)$s are parametrically-specified, then estimators of the finite-dimensional model parameters could be obtained as the maximizers of this full likelihood function, and their finite-sample and asymptotic properties will follow from the general results for maximum likelihood estimators based on counting processes; see, for instance, \\cite{Bor84} and \\cite{ander1993}.\n\nWe illustrate this situation for the special case of the model given in subsection \\ref{subsec: special case}, so that the parameter is simply $\\Theta = [\\{\\lambda_{0q}\\},\\{\\eta(w_1,w_2)\\},\\{\\xi(v_1,v)\\}]$. In this situation, from (\\ref{lik: special case one unit}), the full likelihood reduces to, with $\\tau_i^* = \\tau_i \\wedge \\tau_{iA}$,\n\\begin{eqnarray*}\n\\lefteqn{\\mathcal{L}(\\Theta|\\mathbf{D}) = \\prod_{i=1}^n p_{(W,V)}(W_i(0),V_i(0)) \\times} \\\\\n&& \\left[\\prod_{q\\in\\mathfrak{I}_Q} \\lambda_{0q}^{\\sum_{i=1}^n N_i^R(\\tau_i^*;q)} \\right] \n\\left[\\prod_{(w_1,w_2)\\in\\mathfrak{I}_{\\mathfrak{W}}} \\eta(w_1,w_2)^{\\sum_{i=1}^n N_i^W(\\tau_i^*;w_1,w_2)}\\right] \\times \\\\\n&& \\left[\\prod_{(v_1,v)\\in\\mathfrak{I}_{\\mathfrak{V}}} \\xi(v_1,v)^{\\sum_{i=1}^n N_i^V(\\tau_i^*;v_1,v)}\\right]\n\\exp\\left\\{-\\sum_{i=1}^n \\int_0^{\\tau_i^*} T_i(s;\\lambda_0,\\eta,\\xi) ds\\right\\},\n\\end{eqnarray*}\nwhere\n\\begin{displaymath}\nT_i(s;\\lambda_0,\\eta,\\xi) = \\lambda_{0\\bullet} - \\sum_{w_1\\in\\mathfrak{W}} \\eta(w_1,w_1) Y_i^W(s;w_1) - \\sum_{v_1\\in\\mathfrak{V}_1} \\xi(v_1,v_1) Y_i^V(s;v_1).\n\\end{displaymath}\nThe score function $U(\\Theta|\\mathbf{D}) = \\nabla_\\Theta \\log 
\\mathcal{L}(\\Theta|\\mathbf{D})$ has elements\n\\begin{eqnarray*}\nU^R(\\Theta;q) & = & \\frac{\\sum_{i=1}^n N_i^R(\\tau_i^*;q)}{\\lambda_{0q}} - \\sum_{i=1}^n \\tau_i^*,\\quad q \\in \\mathfrak{I}_Q; \\\\\nU^W(\\Theta;w_1,w_2) & = & \\frac{\\sum_{i=1}^n N_i^W(\\tau_i^*;w_1,w_2)}{\\eta(w_1,w_2)} - \\sum_{i=1}^n \\int_0^{\\tau_i^*} Y_i^W(s;w_1) ds,\\quad (w_1,w_2) \\in \\mathfrak{I}_{\\mathfrak{W}}; \\\\\nU^V(\\Theta;v_1,v) & = & \\frac{\\sum_{i=1}^n N_i^V(\\tau_i^*;v_1,v)}{\\xi(v_1,v)} - \\sum_{i=1}^n \\int_0^{\\tau_i^*} Y_i^V(s;v_1) ds,\\quad (v_1,v) \\in \\mathfrak{I}_{\\mathfrak{V}}.\n\\end{eqnarray*}\nSetting these score equations to zero yields the ML estimators of the parameters, which are given below and have the interpretation of observed ``occurrence-exposure\" rates.\n\\begin{eqnarray*}\n\\hat{\\lambda}_{0q} & = & \\frac{\\sum_{i=1}^n N_i^R(\\tau_i^*;q)}{\\sum_{i=1}^n \\tau_i^*} = \\frac{\\sum_{i=1}^n \\int_0^\\infty dN_i^R(s;q)}{\\sum_{i=1}^n \\int_0^\\infty Y_i(s) ds}, q \\in \\mathfrak{I}_Q; \\\\\n\\hat{\\eta}(w_1,w_2) & = & \\frac{\\sum_{i=1}^n N_i^W(\\tau_i^*;w_1,w_2)}{\\sum_{i=1}^n \\int_0^{\\tau_i^*} Y_i^W(s;w_1) ds} = \\frac{\\sum_{i=1}^n \\int_0^\\infty dN_i^W(s;w_1,w_2)}{\\sum_{i=1}^n \\int_0^\\infty Y_i(s) Y_i^W(s;w_1) ds}, (w_1,w_2) \\in \\mathfrak{I}_{\\mathfrak{W}}; \\\\\n\\hat{\\xi}(v_1,v) & = & \\frac{\\sum_{i=1}^n N_i^V(\\tau_i^*;v_1,v)}{\\sum_{i=1}^n \\int_0^{\\tau_i^*} Y_i^V(s;v_1) ds} = \\frac{\\sum_{i=1}^n \\int_0^\\infty dN_i^V(s;v_1,v)}{\\sum_{i=1}^n \\int_0^\\infty Y_i(s) Y_i^V(s;v_1) ds}, (v_1,v) \\in \\mathfrak{I}_{\\mathfrak{V}}.\n\\end{eqnarray*}\nIn these estimators, the numerators are total event counts, e.g., $\\sum_{i=1}^n N_i^R(\\tau_i^*;q)$ is the total number of observed RCR type $q$ events; $\\sum_{i=1}^n N_i^W(\\tau_i^*;w_1,w_2)$ is the total number of observed transitions in the LM process from state $w_1$ into $w_2$; and $\\sum_{i=1}^n N_i^V(\\tau_i^*;v_1,v)$ is the total number of observed transitions in the HS process from state 
$v_1$ into $v$. On the other hand, the denominators are total observed exposure times, e.g., $\\sum_{i=1}^n \\int_0^\\infty Y_i(s) ds$ is the total at-risk time over all units; $\\sum_{i=1}^n \\int_0^\\infty Y_i(s)Y_i^W(s;w_1) ds$ is the total observed time during which units were at risk for a transition in the LM process from state $w_1$; and $\\sum_{i=1}^n \\int_0^\\infty Y_i(s) Y_i^V(s;v_1) ds$ is the total observed time during which units were at risk for a transition in the HS process from state $v_1$. An important point to emphasize is that these exposure times all take into account the time after the last observed event in each component process until the end of monitoring, whether by censoring (reaching $\\tau_i$) or absorption (reaching $\\tau_{iA}$). If one ignores these right-censored times, the estimators could be severely biased; we mentioned this critical aspect in the introductory section and reiterate here that it should not be glossed over when dealing with recurrent event models.\n\nThe observed information matrix, $I(\\Theta;\\mathbf{D}) = - \\nabla_{\\Theta^{\\tt T}} U(\\Theta|\\mathbf{D})$, is diagonal, with diagonal elements given by:\n\\begin{eqnarray*}\nI^R(\\Theta;q) & = & \\frac{\\sum_{i=1}^n N_i^R(\\tau_i^*;q)}{\\lambda_{0q}^2}, q \\in \\mathfrak{I}_Q; \\\\\nI^W(\\Theta;w_1,w_2) & = & \\frac{\\sum_{i=1}^n N_i^W(\\tau_i^*;w_1,w_2)}{\\eta(w_1,w_2)^2}, (w_1,w_2) \\in \\mathfrak{I}_{\\mathfrak{W}}; \\\\\nI^V(\\Theta;v_1,v) & = & \\frac{\\sum_{i=1}^n N_i^V(\\tau_i^*;v_1,v)}{\\xi(v_1,v)^2}, (v_1,v) \\in \\mathfrak{I}_{\\mathfrak{V}}.\n\\end{eqnarray*}\nAbbreviating the estimators into $\\hat{\\Theta} = (\\hat{\\lambda}_0, \\hat{\\eta}, \\hat{\\xi})$, we obtain the asymptotic result that, as $n \\rightarrow \\infty$,\n\\begin{equation}\n\\label{approx asymptotic special case}\n\\hat{\\Theta} \\sim 
\\mbox{AsyMVN}(\\Theta,I(\\hat{\\Theta};\\mathbf{D})^{-1}),\n\\end{equation}\nwith \\mbox{AsyMVN} meaning asymptotically multivariate normal. At first glance, this result seems to indicate that the estimators of the respective RCR, LM, and HS parameters have nothing to do with one another, which appears intuitive since the RCR, LM, and HS processes were assumed to be independent to begin with. But let us examine this issue further. The result in (\\ref{approx asymptotic special case}) is an approximation to the theoretical result that\n\\begin{equation}\n\\label{asymptotic special case}\n\\hat{\\Theta} \\sim \\mbox{AsyMVN}\\left(\\Theta,\\frac{1}{n}\\mathfrak{I}(\\Theta)^{-1}\\right),\n\\end{equation}\nwhere $\\frac{1}{n} I(\\hat{\\Theta};\\mathbf{D}) \\stackrel{pr}{\\rightarrow} \\mathfrak{I}(\\Theta)$. Evidently, $\\mathfrak{I}$ is a diagonal matrix, so let us examine its diagonal elements. Let $q \\in \\mathfrak{I}_Q$. Then we have, with `$\\mbox{pr-lim}$' denoting in-probability limit,\n\\begin{eqnarray*}\n\\lambda_{0q}^2 \\mathfrak{I}^R(\\Theta;q) & = &\\mbox{pr-lim}_{n\\rightarrow\\infty} \\frac{1}{n} \\sum_{i=1}^n \\int_0^\\infty dN_i^R(s;q) \\\\\n& = & \\mbox{pr-lim}_{n\\rightarrow\\infty} \\frac{1}{n} \\sum_{i=1}^n \\int_0^\\infty dM_i^R(s;q) + \\lambda_{0q} \\left[\\mbox{pr-lim}_{n\\rightarrow\\infty} \\frac{1}{n} \\sum_{i=1}^n \\int_0^\\infty Y_i(s) ds \\right].\n\\end{eqnarray*}\nThe first term on the right-hand side (RHS) converges in probability to zero by the weak law of large numbers and the zero-mean martingale property. 
The second term on the RHS converges in probability to its expectation, hence\n\\begin{displaymath}\n\\mathfrak{I}^R(\\Theta;q) = \\frac{1}{\\lambda_{0q}} \\left[ \\int_0^\\infty \\left\\{ \\lim_{n\\rightarrow\\infty} \\frac{1}{n} \\sum_{i=1}^n E[Y_i(s)]\\right\\} ds \\right].\n\\end{displaymath}\nBut now,\n\\begin{eqnarray*}\n\\lim_{n\\rightarrow\\infty} \\frac{1}{n} \\sum_{i=1}^n E[Y_i(s)] & = & \\lim_{n\\rightarrow\\infty} \\frac{1}{n} \\sum_{i=1}^n \\Pr\\{\\tau_i \\ge s\\} \\Pr\\{\\tau_{iA} \\ge s\\} \\\\\n& = & \\lim_{n\\rightarrow\\infty} \\frac{1}{n} \\sum_{i=1}^n \\bar{G}_i(s-) \\Pr\\{V_i(u) \\notin \\mathfrak{V}_0, u \\le s\\},\n\\end{eqnarray*}\nwith $\\bar{G}_i = 1 - G_i$. The last probability term above will depend on the generators $\\{\\xi(v_1,v): (v_1,v) \\in \\mathfrak{I}_{\\mathfrak{V}}\\}$ of the CTMC $\\{V(s): s \\ge 0\\}$, so that the theoretical Fisher information or the asymptotic variance associated with the estimator $\\hat{\\lambda}_{0q}$ depends after all on the HS process, as well as on the $G_i$s, contrary to the seemingly intuitive expectation that it should not depend on the LM and HS processes. This subtle result arises because of the structure of the observation processes. If $G_i = G, i=1,\\ldots,n$, then we have that\n\\begin{displaymath}\n\\mathfrak{I}^R(\\Theta;q) = \\frac{1}{\\lambda_{0q}} \\int_0^\\infty \\bar{G}(s-) \\Pr\\{V(u) \\notin \\mathfrak{V}_0, u \\le s\\} ds,\n\\end{displaymath}\nsince $\\Pr\\{V_i(u) \\notin \\mathfrak{V}_0, u \\le s\\} = \\Pr\\{V(u) \\notin \\mathfrak{V}_0, u \\le s\\}, i=1,\\ldots,n$. 
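The transient-occupancy probability $\Pr\{V(u) \notin \mathfrak{V}_0, u \le s\}$ entering this Fisher information can be computed numerically from the HS generator. A minimal sketch using a truncated power series for the matrix exponential (the two-transient-state sub-generator and initial distribution below are hypothetical):

```python
def mat_exp(A, t, terms=60):
    """Truncated series e^{tA} = sum_k (tA)^k / k! for a small square matrix."""
    n = len(A)
    result = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    term = [row[:] for row in result]
    for k in range(1, terms):
        # term <- term * (t A) / k, so term holds (tA)^k / k!
        term = [[sum(term[i][m] * t * A[m][j] for m in range(n)) / k
                 for j in range(n)] for i in range(n)]
        result = [[result[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return result

# Hypothetical sub-generator on two transient HS states (row sums < 0:
# the missing mass is the rate of absorption) and initial pmf on V_1:
Gamma11 = [[-0.4, 0.3], [0.2, -0.7]]
p0 = [0.6, 0.4]
P = mat_exp(Gamma11, 2.0)                          # e^{s Gamma_11} at s = 2
surv = sum(p0[i] * sum(P[i]) for i in range(2))    # Pr{V(2) in V_1}
```

Integrating this probability against $\bar{G}(s-)$ over $s$ then gives the information term numerically.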
If we denote by $\\Gamma$ the generator matrix of $\\{V(s): s \\ge 0\\}$ and let $\\Gamma_{11}$ be the sub-matrix associated with the $\\mathfrak{V}_1$ states, then if $p_0^V \\equiv (p_V(v_1), v_1 \\in \\mathfrak{V}_1)^{\\tt T}$ is the initial probability mass function of $V(0)$, we have\n\\begin{displaymath}\n\\Pr\\{V(u) \\notin \\mathfrak{V}_0, u \\le s\\} = \\Pr\\{V(s) \\in \\mathfrak{V}_1\\} = (p_0^V)^{\\tt T} \\left[e^{s\\Gamma_{11}}\\right] 1_{|\\mathfrak{V}_1|}\n\\end{displaymath}\nwhere the matrix exponential is\n$e^{s\\Gamma_{11}} \\equiv \\sum_{k=0}^\\infty \\frac{s^k \\Gamma_{11}^k}{k!}$\nand $1_K$ is a column vector of $1$s of dimension $K$. Thus, we obtain\n\\begin{displaymath}\n\\mathfrak{I}^R(\\Theta;q) = \\frac{1}{\\lambda_{0q}} (p_0^V)^{\\tt T} \\sum_{k=0}^\\infty \\left[ \\int_0^\\infty \\bar{G}(s-) \\frac{s^k}{k!} ds \\right] \\Gamma_{11}^k 1_{|\\mathfrak{V}_1|}. \n\\end{displaymath}\nFor example, if $\\bar{G}(s) = \\exp(-\\nu s)$, that is, the $\\tau_i$'s are exponentially-distributed with mean $1\/\\nu$, then the above expression simplifies to\n$$\\mathfrak{I}^R(\\Theta;q) = \\frac{1}{\\lambda_{0q}} \\frac{1}{\\nu} (p_0^V)^{\\tt T} \\left[ \\sum_{k=0}^\\infty \\left(\\frac{\\Gamma_{11}}{\\nu}\\right)^k\\right] 1_{|\\mathfrak{V}_1|}.$$\nFor computational purposes, one may use an eigenvalue decomposition of $\\Gamma_{11}$: $\\Gamma_{11} = U \\mbox{Dg}(d) U^{-1}$ where $d$ consists of the eigenvalues of $\\Gamma_{11}$ and $U$ is the matrix of associated eigenvectors. The main point of this example, though, is to demonstrate that estimators of the parameters associated with the RCR, LM, or HS process will depend on features of the other processes, {\\em even} when one starts with independent processes.\n\nWe remark that the estimators $\\hat{\\lambda}_{0q}$s, $\\hat{\\eta}(w_1,w_2)$s, and $\\hat{\\xi}(v_1,v)$s could also be derived as method-of-moments estimators using the martingale structure. 
The inverse of the observed Fisher information matrix coincides with an estimator using the optional variation (OV) matrix process, while the inverse of the Fisher information matrix coincides with the limit-in-probability of the predictable quadratic variation matrix. To demonstrate for $\\lambda_{0q}$, we have that\n$$\\left\\{\\sum_{i=1}^n M_i^R(s;q) = \\sum_{i=1}^n \\left[N_i^R(s;q) - \\int_0^s Y_i(t) \\lambda_{0q} dt\\right]: s \\ge 0\\right\\}$$\nis a zero-mean square-integrable martingale. Letting $s \\rightarrow \\infty$, setting $\\sum_{i=1}^n M_i^R(\\infty;q) = 0$, and solving for $\\lambda_{0q}$ yields $\\hat{\\lambda}_{0q}$. Next, we have\n\\begin{eqnarray*}\n\\hat{\\lambda}_{0q} & = & \\frac{\\sum_{i=1}^n \\int_0^\\infty dN_i^R(s;q)}{\\sum_{i=1}^n \\int_0^\\infty Y_i(s) ds} = \\lambda_{0q} + \\frac{\\sum_{i=1}^n \\int_0^\\infty dM_i^R(s;q)}{\\sum_{i=1}^n \\int_0^\\infty Y_i(s) ds}\n\\end{eqnarray*}\nso that\n\\begin{eqnarray*}\n\\sqrt{n}[\\hat{\\lambda}_{0q} - \\lambda_{0q}] = \\left( \\int_0^\\infty \\frac{1}{n} \\sum_{i=1}^n Y_i(s) ds \\right)^{-1} \\frac{1}{\\sqrt{n}} \\sum_{i=1}^n \\int_0^\\infty dM_i^R(s;q).\n\\end{eqnarray*}\nWe have already determined the in-probability limit of $\\int_0^\\infty \\frac{1}{n} \\sum_{i=1}^n Y_i(s) ds$; by the Martingale Central Limit Theorem, we have that\n\\begin{eqnarray*}\n\\frac{1}{\\sqrt{n}} \\sum_{i=1}^n \\int_0^\\infty dM_i^R(s;q) \\stackrel{d}{\\rightarrow} N(0,\\sigma^2_R(q))\n\\end{eqnarray*}\nwith \n\\begin{eqnarray*}\n\\sigma_R^2(q) & = & \\mbox{pr-lim}_{n \\rightarrow \\infty} \\frac{1}{n} \\sum_{i=1}^n \\int_0^\\infty d \\langle M_i^R(\\cdot;q), M_i^R(\\cdot;q) \\rangle (s) \\\\\n & = & \\int_0^\\infty \\left\\{\\mbox{pr-lim}_{n\\rightarrow\\infty} \\frac{1}{n} \\sum_{i=1}^n Y_i(s) \\lambda_{0q} \\right\\} ds.\n\\end{eqnarray*}\nTherefore, we have\n\\begin{eqnarray*}\n\\lefteqn{ \\sqrt{n} [\\hat{\\lambda}_{0q} - \\lambda_{0q}] \\stackrel{d}{\\rightarrow} } \\\\\n&& 
N\\left(0,\\frac{\\lambda_{0q}}{\\int_0^\\infty \\left[ \\mbox{pr-lim}_{n\\rightarrow\\infty} \\frac{1}{n} \\sum_{i=1}^n Y_i(s) \\right] ds} \\right), \n\\end{eqnarray*}\nwhere the denominator also equals $\\int_0^\\infty \\left[ \\lim_{n\\rightarrow\\infty} \\frac{1}{n} \\sum_{i=1}^n E[Y_i(s)] \\right] ds$; this is the same result stated above using ML theory. Analogous asymptotic derivations can be done for $\\hat{\\eta}(w_1,w_2)$ and $\\hat{\\xi}(v_1,v)$, though the resulting limiting variances will involve expected occupation times for the respective states: for the $W_i$-processes, these come from the $Y_i^W(\\cdot;w_1)$ terms, and for the $V_i$-processes, from the $Y_i^V(\\cdot;v_1)$ terms. Note that, asymptotically, these estimators are independent, {\\em but} their limiting variances depend on the parameters from the other processes.\n\n\\subsection{Semi-Parametric Estimation}\n\\label{subsec: estimation - semiparametric}\n\nIn this section we consider the estimation of model parameters when the hazard rate functions $\\lambda_{0q}(\\cdot)$s are specified nonparametrically. We shall denote by $\\Lambda_{0q}(t) = \\int_0^t \\lambda_{0q}(u) du, q \\in \\mathfrak{I}_Q$, the associated cumulative hazard functions. To simplify notation, we let\n\\begin{eqnarray*}\n& \\psi^R(s;\\theta^R) = \\exp\\{B^R(s) \\theta^R\\}; \n \\psi^W(s;\\theta^W) = \\exp\\{B^W(s) \\theta^W\\}; \n \\psi^V(s;\\theta^V) = \\exp\\{B^V(s) \\theta^V\\}. 
&\n\\end{eqnarray*}\nUsing these functions, for $q \\in \\mathfrak{I}_Q$, $(w_1,w_2) \\in \\mathfrak{I}_{\\mathfrak{W}}$, and $(v_1,v) \\in \\mathfrak{I}_{\\mathfrak{V}}$, we then have\n\\begin{eqnarray*}\ndA^R(s;q) & = & Y(s) \\lambda_{0q}[\\mathcal{E}_q(s)] \\rho_q[N^R(s-);\\alpha_q] \\psi^R(s-;\\theta^R) ds; \\\\\ndA^W(s;w_1,w_2) & = & Y(s) Y^W(s;w_1) \\eta(w_1,w_2) \\psi^W(s-;\\theta^W) ds; \\\\\ndA^V(s;v_1,v) & = & Y(s) Y^V(s;v_1) \\xi(v_1,v) \\psi^V(s-;\\theta^V) ds.\n\\end{eqnarray*}\nWe also abbreviate the vector of model parameters as $\\Theta \\equiv (\\Lambda_0,\\alpha,\\eta,\\xi,\\theta^R,\\theta^W,\\theta^V)$. Our goal is to obtain estimators for these parameters based on the sample data $\\mathbf{D} = (D_1, D_2,\\ldots,D_n)$. In a nutshell, the basic approach to obtaining our estimators is to first assume that $(\\alpha,\\theta^R,\\theta^W,\\theta^V)$ are known, then obtain `estimators' of $(\\Lambda_0,\\eta,\\xi)$. Having obtained these `estimators', in quotes since they are not yet estimators when $(\\alpha,\\theta^R,\\theta^W,\\theta^V)$ are unknown, we plug them into the likelihood function to obtain a profile likelihood function. We then maximize the resulting profile likelihood function, which depends on $(\\alpha,\\theta^R,\\theta^W,\\theta^V)$, with respect to these finite-dimensional parameters to obtain their estimators. 
These estimators are then plugged into the `estimators' of $(\\Lambda_0,\\eta,\\xi)$ to obtain the final estimators of these parameters.\n\nThe full likelihood function based on the sample data $\\mathbf{D} = (D_1,\\ldots,D_n)$ could be written as a product of three ``major'' likelihood functions corresponding to the three model components:\n\\begin{eqnarray}\n\\label{full lik factor} \\lefteqn{\\mathcal{L}(\\Theta|\\mathbf{D}) = \n\\left[ \\prod_{q \\in \\mathfrak{I}_Q} \\mathcal{L}^R(\\Lambda_{0q},\\alpha,\\theta^R;q|\\mathbf{D}) \\right] \\times } \\\\\n&& \\left[ \\prod_{(w_1,w_2) \\in \\mathfrak{I}_{\\mathfrak{W}}} \\mathcal{L}^W(\\eta,\\theta^W;w_1,w_2|\\mathbf{D}) \\right] \\times \n\\left[ \\prod_{(v_1,v) \\in \\mathfrak{I}_{\\mathfrak{V}}} \\mathcal{L}^V(\\xi,\\theta^V;v_1,v|\\mathbf{D}) \\right], \\nonumber\n\\end{eqnarray}\nwhere, suppressing writing of the parameters in the functions,\n\\begin{eqnarray*}\n\\mathcal{L}^R(\\Lambda_{0q},\\alpha,\\theta^R;q|\\mathbf{D}) & = & \\left\\{\\prod_{i=1}^n \\prodi_{s=0}^\\infty \\left[dA_i^R(s;q)\\right]^{dN_i^R(s;q)}\\right\\} \\times \\\\ && \\exp\\left\\{-\\sum_{i=1}^n \\int_0^\\infty dA_i^R(s;q)\\right\\}; \\\\\n\\mathcal{L}^W(\\eta,\\theta^W;w_1,w_2|\\mathbf{D}) & = & \\left\\{ \\prod_{i=1}^n \\prodi_{s=0}^\\infty \\left[dA_i^W(s;w_1,w_2)\\right]^{dN_i^W(s;w_1,w_2)}\\right\\} \\times \\\\ && \\exp\\left\\{-\\sum_{i=1}^n\\int_0^\\infty dA_i^W(s;w_1,w_2)\\right\\}; \\\\\n\\mathcal{L}^V(\\xi,\\theta^V;v_1,v|\\mathbf{D}) & = & \\left\\{ \\prod_{i=1}^n \\prodi_{s=0}^\\infty \\left[dA_i^V(s;v_1,v)\\right]^{dN_i^V(s;v_1,v)}\\right\\} \\times \\\\ && \\exp\\left\\{-\\sum_{i=1}^n\\int_0^\\infty dA_i^V(s;v_1,v)\\right\\}.\n\\end{eqnarray*}\nLet $0 = S_0 < S_1 < S_2 < \\ldots < S_K < S_{K+1} =\\infty$ be the ordered distinct times of \\underline{any} type of event occurrence for all the $n$ sample units. 
Also, let $0 = T_0 < T_1 < T_2 < \\ldots < T_L < T_{L+1} = \\infty$ be the ordered distinct values of the set $\\{\\mathcal{E}_{iq}(S_j): i = 1, \\ldots, n; q \\in \\mathfrak{I}_Q; j=0,1,\\ldots,K\\}$. Recall that $\\tau_i^* = \\tau_i \\wedge \\tau_{iA}$. Observe that both $\\{S_k: k=0,1,\\ldots,K,K+1\\}$ and $\\{T_l: l=0,1,\\ldots,L,L+1\\}$ partition $[0,\\infty)$. For each $i=1,\\ldots,n$, and $q \\in \\mathfrak{I}_Q$, $\\mathcal{E}_{iq}(\\cdot)$ is observed, hence defined, only on $[0,\\tau_i^*)$. However, for notational convenience, we define $\\mathcal{E}_{iq}(s) = 0$ for $s > \\tau_i^*$. In addition, on each non-empty interval $(S_{j-1} \\wedge \\tau_i^*,S_j \\wedge \\tau_i^*]$, $\\mathcal{E}_{iq}(\\cdot)$ has an inverse which will be denoted by $\\mathcal{E}_{iqj}^{-1}(\\cdot)$. Henceforth, for brevity of notation, we adopt the mathematically imprecise convention that $0\/0 = 0$.\n\n\\begin{proposition}\n\\label{prop-1}\nFor $q \\in \\mathfrak{I}_Q$, if $(\\alpha_q,\\theta^R)$ is known, then the nonparametric maximum likelihood estimator (NPMLE) of $\\Lambda_{0q}(\\cdot)$ is given by\n\\begin{eqnarray}\n\\label{cumulative hazard}\n\\hat{\\Lambda}_{0q}(t|\\alpha_q,\\theta^R) = \\sum_{l: T_l \\le t} \\left[\\frac{\\sum_{i=1}^n \\sum_{j=1}^K I\\{\\mathcal{E}_{iq}(S_j) = T_l\\} dN_i^R(S_j;q)}{S_q^{0R}(T_l|\\alpha_q,\\theta^R)}\\right] \n\\end{eqnarray}\nwhere\n\\begin{eqnarray}\n \\label{sum of at-risk} S_q^{0R}(u|\\alpha_q,\\theta^R) & = & \\sum_{i=1}^n\\sum_{j=1}^K \\left\\{ \\left[\\frac{ \\rho_q[N_i^R(\\mathcal{E}_{iqj}^{-1}(u)-);\\alpha_q] \\psi_i^R(\\mathcal{E}_{iqj}^{-1}(u)-;\\theta^R)}{\\mathcal{E}_{iq}^\\prime[\\mathcal{E}_{iqj}^{-1}(u)]}\\right] \\times \\right. \\\\ && \\left. \nI\\{\\mathcal{E}_{iq}(S_{j-1} \\wedge \\tau_i^*) < u \\le \\mathcal{E}_{iq}(S_{j} \\wedge \\tau_i^*)\\} \\right\\}. 
\\nonumber\n\\end{eqnarray}\n\\end{proposition}\n\n\\begin{proof}\nThe likelihood $\\mathcal{L}_q^R(\\Lambda_{0q},\\alpha_q,\\theta^R|\\mathbf{D})$ could be written as follows:\n\\begin{eqnarray*}\n\\mathcal{L}_q^R & = & \\left[\\prod_{i=1}^n \\prod_{j=1}^{K} [Y_i(S_j) \\lambda_{0q}[\\mathcal{E}_{iq}(S_j)] \\rho_q[N_i^R(S_j-);\\alpha_q] \\psi_i^R(S_j-;\\theta^R)]^{dN_i^R(S_j;q)}\\right] \\times \\\\ && \\exp\\left\\{-\\sum_{i=1}^n\\sum_{j=1}^{K+1} \\int_{S_{j-1}}^{S_j} Y_i(s) \\lambda_{0q}[\\mathcal{E}_{iq}(s)]\n\\rho_q[N_i^R(s-);\\alpha_q] \\psi_i^R(s-;\\theta^R) ds\\right\\}.\n\\end{eqnarray*}\nFocusing on the nonparametric parameter $\\Lambda_{0q}(\\cdot)$, the first term of $\\mathcal{L}_q^R$ could be written as\n\\begin{eqnarray*}\n\\prod_{i=1}^n \\prod_{j=1}^K [\\lambda_{0q}[\\mathcal{E}_{iq}(S_j)]]^{dN_i^R(S_j;q)} & = &\n\\prod_{l=1}^L [\\lambda_{0q}(T_l)]^{dN_\\bullet^R(\\infty,T_l;q)} = \\prod_{l=1}^L [d\\Lambda_{0q}(T_l)]^{dN_\\bullet^R(\\infty,T_l;q)}\n\\end{eqnarray*}\nwhere \n\\begin{displaymath}\ndN_\\bullet^R(\\infty,T_l;q) = \\sum_{i=1}^n \\sum_{j=1}^K I\\{\\mathcal{E}_{iq}(S_j) = T_l\\} dN_i^R(S_j;q).\n\\end{displaymath}\nThe exponent in the second term of $\\mathcal{L}_q^R$ could be written as follows, the second equality obtained after an obvious change-of-variable and using the definition of $S_q^{0R}(\\cdot|\\cdot,\\cdot)$ in the proposition:\n\\begin{eqnarray*}\n\\lefteqn{\\sum_{i=1}^n\\sum_{j=1}^{K+1} \\int_{S_{j-1}}^{S_j} Y_i(s) \\lambda_{0q}[\\mathcal{E}_{iq}(s)]\n\\rho_q[N_i^R(s-);\\alpha_q] \\psi_i^R(s-;\\theta^R) ds} \\\\\n& = & \\sum_{i=1}^n \\sum_{j=1}^{K+1} \\int_{S_{j-1} \\wedge \\tau_i^*}^{S_j \\wedge \\tau_i^*} \\lambda_{0q}[\\mathcal{E}_{iq}(s)]\n\\rho_q[N_i^R(s-);\\alpha_q] \\psi_i^R(s-;\\theta^R) ds \\\\\n& = & \\int_0^\\infty S_q^{0R}(u|\\alpha_q,\\theta^R) d\\Lambda_{0q}(u) = \\sum_{l=1}^{L+1} \\int_{T_{l-1}}^{T_l} S_q^{0R}(u|\\alpha_q,\\theta^R) d\\Lambda_{0q}(u) \\\\ \n& = & \\sum_{l=1}^L S_q^{0R}(T_l|\\alpha_q,\\theta^R) 
d\\Lambda_{0q}(T_l) + \\sum_{l=1}^{L+1} \\int_{u \\in (T_{l-1},T_{l})} S_q^{0R}(u|\\alpha_q,\\theta^R) d\\Lambda_{0q}(u).\n\\end{eqnarray*}\nTherefore, $\\mathcal{L}_q^R$, when viewed solely in terms of the parameter $\\Lambda_{0q}$, equals\n\\begin{eqnarray*}\n\\mathcal{L}_q^R & = & \\prod_{l=1}^L [d\\Lambda_{0q}(T_l)]^{dN_\\bullet^R(\\infty,T_l;q)} \\times \\\\ && \\exp\\left\\{-\\left[\\sum_{l=1}^L S_q^{0R}(T_l|\\alpha_q,\\theta^R) d\\Lambda_{0q}(T_l) + \\sum_{l=1}^{L+1} \\int_{u \\in (T_{l-1},T_{l})} S_q^{0R}(u|\\alpha_q,\\theta^R) d\\Lambda_{0q}(u)\\right]\\right\\}.\n\\end{eqnarray*}\nSince $\\sum_{l=1}^{L+1} \\int_{u \\in (T_{l-1},T_{l})} S_q^{0R}(u|\\alpha_q,\\theta^R) d\\Lambda_{0q}(u) \\ge 0$, we obtain the upper bound for $\\mathcal{L}_q^R$ by setting this term equal to zero:\n\\begin{displaymath}\n\\mathcal{L}_q^R \\le \\left[\\prod_{l=1}^L [d\\Lambda_{0q}(T_l)]^{dN_\\bullet^R(\\infty,T_l;q)}\\right] \\exp\\left\\{-\\sum_{l=1}^L S_q^{0R}(T_l|\\alpha_q,\\theta^R) d\\Lambda_{0q}(T_l)\\right\\}.\n\\end{displaymath}\nThe upper bound is maximized by setting \n\\begin{displaymath}\nd\\hat{\\Lambda}_{0q}(T_l|\\alpha_q,\\theta^R) = \\frac{dN_\\bullet^R(\\infty,T_l;q)}{S_q^{0R}(T_l|\\alpha_q,\\theta^R)}, l=1,2,\\ldots,L.\n\\end{displaymath}\nFor $u \\in (T_{l-1},T_l)$, we then take $\\hat{\\Lambda}_{0q}(u|\\alpha_q,\\theta^R) = \\hat{\\Lambda}_{0q}(T_{l-1}|\\alpha_q,\\theta^R)$ which will satisfy the condition $\\int_{u \\in (T_{l-1},T_{l})} S_q^{0R}(u|\\alpha_q,\\theta^R) d\\hat{\\Lambda}_{0q}(u) = 0$ for all $l = 1,2, \\ldots, L+1$. 
Thus,\n\\begin{displaymath}\n\\hat{\\Lambda}_{0q}(t|\\alpha_q,\\theta^R) = \\sum_{l: T_l \\le t} d\\hat{\\Lambda}_{0q}(T_l|\\alpha_q,\\theta^R) = \\sum_{l: T_l \\le t} \\frac{dN_\\bullet^R(\\infty,T_l;q)}{S_q^{0R}(T_l|\\alpha_q,\\theta^R)},\n\\end{displaymath}\nwhich is a step-function with jumps only on $T_l$s with $dN_\\bullet^R(\\infty,T_l;q) > 0$, maximizes $\\Lambda_{0q}(\\cdot) \\mapsto \\mathcal{L}_q^R(\\Lambda_{0q}(\\cdot);\\alpha_q,\\theta^R|\\mathbf{D})$ for given $(\\alpha_q,\\theta^R)$, completing the proof of the proposition.\n\\end{proof}\n\nA more elegant representation of the Aalen-Breslow-Nelson type estimator $\\hat{\\Lambda}_{0q}(\\cdot|\\alpha_q,\\theta^R)$ in Proposition \\ref{prop-1}, which shows that the estimator is also moment-based, aside from being useful in obtaining finite and asymptotic properties, is through the use of doubly-indexed processes as in \\cite{pena2007semiparametric} for a setting with only one recurrent event type and without LM and HS processes. Define the doubly-indexed processes $\\{(N_i^R(s,t;q),A_i^R(s,t;q|\\Lambda_{0q},\\alpha_q,\\theta^R)): (s,t) \\in \\Re_+^2\\}$ where\n\\begin{eqnarray*}\n& N_i^R(s,t;q) = \\int_0^s I\\{\\mathcal{E}_{iq}(v) \\le t\\} dN_i^R(v;q); & \\\\\n& A_i^R(s,t;q|\\Lambda_{0q},\\alpha_q,\\theta^R) = \\int_0^s I\\{\\mathcal{E}_{iq}(v) \\le t\\} dA_i^R(v;q|\\Lambda_{0q},\\alpha_q,\\theta^R). &\n\\end{eqnarray*}\nAlso, let \n\\begin{eqnarray*}\nN_\\bullet^R(s,t;q) & = & \\sum_{i=1}^n N_i^R(s,t;q)\\ \\mbox{and}\\ A_\\bullet^R(s,t;q|\\Lambda_{0q},\\alpha_q,\\theta^R) = \\sum_{i=1}^n A_i^R(s,t;q|\\Lambda_{0q},\\alpha_q,\\theta^R).\n\\end{eqnarray*}\nThen, for fixed $t$, $\\{M_\\bullet^R(s,t;q|\\Lambda_{0q},\\alpha_q,\\theta^R) = N_\\bullet^R(s,t;q) - A_\\bullet^R(s,t;q|\\Lambda_{0q},\\alpha_q,\\theta^R): s \\ge 0\\}$ is a zero-mean square-integrable martingale. 
Thus, $E[N_\\bullet^R(s,t;q)] = E[A_\\bullet^R(s,t;q|\\Lambda_{0q},\\alpha_q,\\theta^R)]$.\n\n\\begin{proposition}\n\\label{prop-2}\nFor $q \\in \\mathfrak{I}_Q$, $A_\\bullet^R(s,t;q|\\Lambda_{0q},\\alpha_q,\\theta^R) = \\int_0^t S_q^{0R}(s,u|\\alpha_q,\\theta^R) d\\Lambda_{0q}(u)$, where\n\\begin{eqnarray*}\nS_q^{0R}(s,u|\\alpha_q,\\theta^R) & = & \\sum_{i=1}^n\\sum_{j=1}^K \\left\\{ \\left[\\frac{ \\rho_q[N_i^R(\\mathcal{E}_{iqj}^{-1}(u)-);\\alpha_q] \\psi_i^R(\\mathcal{E}_{iqj}^{-1}(u)-;\\theta^R)}{\\mathcal{E}_{iq}^\\prime[\\mathcal{E}_{iqj}^{-1}(u)]}\\right] \\times \\right. \\\\ && \\left. \nI\\{\\mathcal{E}_{iq}(S_{j-1} \\wedge \\tau_i^* \\wedge s) < u \\le \\mathcal{E}_{iq}(S_{j} \\wedge \\tau_i^* \\wedge s)\\} \\right\\};\n\\end{eqnarray*}\nand $dN_\\bullet^R(s,T_l;q) = \\sum_{i=1}^n \\sum_{j=1}^K I\\{S_j \\le s; \\mathcal{E}_{iq}(S_j) = T_l\\} dN_i^R(S_j;q)$.\n\\end{proposition}\n\n\\begin{proof}\nSimilar to steps in the proof of Proposition \\ref{prop-1}.\n\\end{proof}\n\nAssuming first that $(\\alpha_q,\\theta^R)$ is known, the identities in Proposition \\ref{prop-2} lead to a method-of-moments type estimator of $\\Lambda_{0q}(t)$ given by\n\\begin{equation}\n\\label{NAE-type 1}\n\\hat{\\Lambda}_{0q}(s,t|\\alpha_q,\\theta^R) = \\sum_{l : T_l \\le t} \\frac{dN_\\bullet^R(s,T_l;q)}{S_q^{0R}(s,T_l|\\alpha_q,\\theta^R)} =\\int_0^t \\frac{N_\\bullet^R(s,du;q)}{S_q^{0R}(s,u|\\alpha_q,\\theta^R)}.\n\\end{equation}\nWhen $s \\rightarrow \\infty$, this $\\hat{\\Lambda}_{0q}(s,t|\\alpha_q,\\theta^R)$ converges to the estimator $\\hat{\\Lambda}_{0q}(t|\\alpha_q,\\theta^R)$ in Proposition \\ref{prop-1}.\nNext, we obtain estimators of the $\\eta(w_1,w_2)$s and $\\xi(v_1,v)$s, again by first assuming that $\\theta^W$ and $\\theta^V$ are known.\n\n\\begin{proposition}\n\\label{prop-est of nu and xi}\nIf $(\\theta^W,\\theta^V)$ are known, the ML estimators of the $\\eta(w_1,w_2)$s and $\\xi(v_1,v)$s are the ``occurrence-exposure'' 
rates\n\\begin{eqnarray}\n\\hat{\\eta}(w_1,w_2|\\theta^W) & = & %\n\\frac{\\sum_{i=1}^n\\sum_{j=1}^K dN_i^W(S_j;w_1,w_2)}{S^{0W}(w_1;\\theta^W)}, \\forall (w_1,w_2) \\in \\mathfrak{I}_{\\mathfrak{W}}; \\label{eta estimator} \\\\\n\\hat{\\xi}(v_1,v|\\theta^V) & = & \n\\frac{\\sum_{i=1}^n\\sum_{j=1}^K dN_i^V(S_j;v_1,v)}{S^{0V}(v_1;\\theta^V)}, \\forall (v_1,v) \\in \\mathfrak{I}_{\\mathfrak{V}}, \\label{xi estimator}\n\\end{eqnarray}\nwhere\n$$S^{0W}(w_1;\\theta^W) = \\int_0^\\infty \\sum_{i=1}^n Y_i(s) Y_i^W(s;w_1) \\psi_i^W(s-;\\theta^W) ds;$$\n$$S^{0V}(v_1;\\theta^V) = \\int_0^\\infty \\sum_{i=1}^n Y_i(s) Y_i^V(s;v_1) \\psi_i^V(s-;\\theta^V) ds.$$\n\\end{proposition}\n\n\\begin{proof}\nFollows immediately by maximizing the likelihood functions $\\mathcal{L}^W$ and $\\mathcal{L}^V$ with respect to the $\\eta(w_1,w_2)$s and $\\xi(v_1,v)$s, respectively.\n\\end{proof}\n\nWe can now form the profile likelihoods for the parameters $(\\{\\alpha_q, q \\in \\mathfrak{I}_Q\\}, \\theta^R, \\theta^W, \\theta^V)$. These are the likelihoods that are obtained after plugging-in the `estimators' $\\hat{\\Lambda}_{0q}(\\cdot|\\alpha_q,\\theta^R)$s, $\\hat{\\eta}(w_1,w_2|\\theta^W)$s, and $\\hat{\\xi}(v_1,v|\\theta^V)$s in the full likelihoods. The resulting profile likelihoods are reminiscent of the partial likelihood function in Cox's proportional hazards model \\cite{cox1972,AndGil1982}. 
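The occurrence-exposure estimators $\hat{\eta}(w_1,w_2|\theta^W)$ and $\hat{\xi}(v_1,v|\theta^V)$ above are simply weighted events-over-exposure ratios: observed transition counts divided by covariate-weighted total time at risk in the originating state. A minimal sketch of $\hat{\eta}(w_1,w_2|\theta^W)$ on a discretized time grid, with entirely synthetic at-risk indicators, covariates, and transition counts (none of these inputs come from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n, dt = 4, 0.01
grid = np.arange(0.0, 10.0, dt)

# Y_i(s) * Y_i^W(s; w1): indicator that unit i is still at risk and its
# LM process sits in state w1 at time s (synthetic trajectories).
at_risk_in_w1 = rng.random((n, grid.size)) < 0.5

# psi_i^W(s-; theta_W) = exp{B_i^W(s) theta_W}, here with a scalar covariate.
theta_W = 0.3
B_W = rng.standard_normal((n, grid.size))
psi_W = np.exp(B_W * theta_W)

# Total observed w1 -> w2 transitions, summed over units and event times S_j.
n_transitions = 7

# S^{0W}(w1; theta_W): weighted total exposure, approximated by a Riemann sum.
S0W = float(np.sum(at_risk_in_w1 * psi_W) * dt)

eta_hat = n_transitions / S0W  # occurrence-exposure rate
```

Setting $\theta^W = 0$ reduces `psi_W` to all ones, recovering the classical occurrence-exposure rate (events divided by raw time at risk) for a CTMC transition intensity.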
\n\n\\begin{proposition}\n\\label{profile liks}\nThe three profile likelihood functions $\\mathcal{L}_{pl}^R$, $\\mathcal{L}_{pl}^W$ and $\\mathcal{L}_{pl}^V$ are given by\n\\begin{eqnarray*}\n\\lefteqn{\\mathcal{L}^R_{pl}(\\alpha_q,q\\in\\mathfrak{I}_Q,\\theta^R|\\mathbf{D}) = } \\\\ && \\prod_{q\\in\\mathfrak{I}_Q} \\prod_{i=1}^n \\prod_{j=1}^K \\prod_{l=1}^L\n\\left[\\frac{\\rho_q[N_i^R(S_j-);\\alpha_q] \\psi_i^R(S_j-;\\theta^R)}{S_q^{0R}(T_l|\\alpha_q,\\theta^R)}\\right]^{I\\{\\mathcal{E}_{iq}(S_j)=T_l\\} dN_i^R(S_j;q)};\n\\end{eqnarray*}\n\\begin{displaymath}\n\\mathcal{L}_{pl}^W(\\theta^W|\\mathbf{D}) = \\prod_{w_1 \\in \\mathfrak{W}} \\prod_{i=1}^n \\prod_{j=1}^K \\left[\\frac{\\psi_i^W(S_j-;\\theta^W)}{S^{0W}(w_1;\\theta^W)}\\right]^{dN_i^W(S_j;w_1,\\bullet)};\n\\end{displaymath}\n\\begin{displaymath}\n\\mathcal{L}_{pl}^V(\\theta^V|\\mathbf{D}) = \\prod_{v_1 \\in \\mathfrak{V}_1} \\prod_{i=1}^n \\prod_{j=1}^K \\left[\\frac{\\psi_i^V(S_j-;\\theta^V)}{S^{0V}(v_1;\\theta^V)}\\right]^{dN_i^V(S_j;v_1,\\bullet)}.\n\\end{displaymath}\nwith $dN_i^W(S_j;w_1,\\bullet) = \\sum_{w_2 \\in \\mathfrak{W};\\ w_2 \\ne w_1} dN_i^W(S_j;w_1,w_2)$, the number of transitions from state $w_1$ at time $S_j$ for unit $i$, and $dN_i^V(S_j;v_1,\\bullet) = \\sum_{v \\in \\mathfrak{V};\\ v \\ne v_1} dN_i^V(S_j;v_1,v)$, the number of transitions from state $v_1$ at time $S_j$ for unit $i$.\n\\end{proposition}\n\n\\begin{proof}\nThese follow immediately by plugging-in the `estimators' into the three main likelihoods in (\\ref{full lik factor}) and then simplifying.\n\\end{proof}\n\nFrom these three profile likelihoods, we could obtain estimators of the parameters $\\alpha_q$s, $\\theta^R$, $\\theta^W$, and $\\theta^V$ as follows:\n\\begin{eqnarray*}\n& (\\hat{\\alpha}_q, q\\in\\mathfrak{I}_Q,\\hat{\\theta}^R) = {\\arg\\max}_{(\\alpha_q, \\theta^R)} \\mathcal{L}^R_{pl}(\\alpha_q,q\\in\\mathfrak{I}_Q,\\theta^R|\\mathbf{D}); & \\\\\n& \\hat{\\theta}^W = \\arg\\max_{\\theta^W} 
\\mathcal{L}_{pl}^W(\\theta^W|\\mathbf{D}) \\ \\mbox{and} \\\n\\hat{\\theta}^V = \\arg\\max_{\\theta^V} \\mathcal{L}_{pl}^V(\\theta^V|\\mathbf{D}). &\n\\end{eqnarray*}\nEquivalently, these estimators are maximizers of the logarithm of the profile likelihoods. These log-profile likelihoods are more conveniently expressed in terms of stochastic integrals as follows:\n\\begin{eqnarray*}\nl_{pl}^R & = & \\sum_{q\\in\\mathfrak{I}_Q}\\sum_{i=1}^n \\int_0^\\infty\\int_0^\\infty \\left[\\log\\rho_q[N_i^R(s-);\\alpha_q] + \\log\\psi_i^R(s-;\\theta^R) - \\right. \\\\ && \\left. \\log S_q^{0R}(t;\\alpha_q,\\theta^R)\\right] N_i^R(ds,dt;q); \\\\\nl_{pl}^W & = & \\sum_{w_1 \\in \\mathfrak{W}} \\sum_{i=1}^n \\int_0^\\infty \\left[\\log\\psi_i^W(s-;\\theta^W) - \\log S^{0W}(s;w_1|\\theta^W)\\right] N_i^W(ds;w_1,\\bullet); \\\\\nl_{pl}^V & = & \\sum_{v_1 \\in \\mathfrak{V}_1} \\sum_{i=1}^n \\int_0^\\infty \\left[\\log\\psi_i^V(s-;\\theta^V) - \\log S^{0V}(s;v_1|\\theta^V)\\right] N_i^V(ds;v_1,\\bullet).\n\\end{eqnarray*}\nAssociated with each of these log-profile likelihood functions are their profile score vector (the gradient or vector of partial derivatives) and profile observed information matrix (negative of the matrix of second partial derivatives): $U^R(\\alpha,\\theta^R)$; $U^W(\\theta^W)$ and $U^V(\\theta^V)$ for the score vectors, and $I^R(\\alpha,\\theta^R)$, $I^W(\\theta^W)$ and $I^V(\\theta^V)$ for the observed information matrices. 
The estimators could then be obtained as the solutions of the set of equations\n\\begin{displaymath}\nU_{pl}^R(\\hat{\\alpha},\\hat{\\theta}^R) = 0; U_{pl}^W(\\hat{\\theta}^W) = 0; U_{pl}^V(\\hat{\\theta}^V) = 0.\n\\end{displaymath}\nA possible computational approach to obtaining the estimates is via the Newton-Raphson iteration procedure:\n\\begin{eqnarray*}\n& (\\hat{\\alpha},\\hat{\\theta}^R) \\leftarrow (\\hat{\\alpha},\\hat{\\theta}^R) + I^R(\\hat{\\alpha},\\hat{\\theta}^R)^{-1} U^R(\\hat{\\alpha},\\hat{\\theta}^R); & \\\\\n& \\hat{\\theta}^W \\leftarrow \\hat{\\theta}^W + I^W(\\hat{\\theta}^W)^{-1} U^W(\\hat{\\theta}^W); \\\n\\hat{\\theta}^V \\leftarrow \\hat{\\theta}^V + I^V(\\hat{\\theta}^V)^{-1} U^V(\\hat{\\theta}^V). &\n\\end{eqnarray*}\nHaving obtained the estimates of $\\alpha$, $\\theta^R$, $\\theta^W$, and $\\theta^V$, we plug them into the $\\hat{\\Lambda}_{0q}(t|\\alpha_q,\\theta^R)$s, $\\hat{\\eta}(w_1,w_2|\\theta^W)$s, and $\\hat{\\xi}(v_1,v|\\theta^V)$s to obtain the estimators $\\hat{\\Lambda}_{0q}(t)$s, $\\hat{\\eta}(w_1,w_2)$s and $\\hat{\\xi}(v_1,v)$s:\n\\begin{equation}\n\\label{final estimators}\n\\hat{\\Lambda}_{0q}(t) = \\hat{\\Lambda}_{0q}(t|\\hat{\\alpha}_q,\\hat{\\theta}^R);\\quad \\hat{\\eta}(w_1,w_2) = \\hat{\\eta}(w_1,w_2|\\hat{\\theta}^W);\\quad \\hat{\\xi}(v_1,v) = \\hat{\\xi}(v_1,v|\\hat{\\theta}^V).\n\\end{equation}\n\n\\section{Asymptotic Properties of Estimators}\n\\label{sec: Properties}\n\nIn this section we provide some asymptotic properties of the estimators, though we do not present the rigorous proofs of the results due to space constraints and instead defer them to a separate paper. To make our exposition more concise and compact, we introduce additional notation. Consider a real-valued function $h$ defined on $\\mathfrak{I}_{\\mathfrak{W}}$. 
Then, $h(w_1,w_2)$ will represent the value at $(w_1,w_2)$, but when we simply write $h$ it means the $|\\mathfrak{I}_{\\mathfrak{W}}| \\times 1$ vector consisting of $h(w_1,w_2), (w_1,w_2) \\in \\mathfrak{I}_{\\mathfrak{W}}$. Thus, $\\eta$ is an $|\\mathfrak{I}_{\\mathfrak{W}}| \\times 1$ (column) vector with elements $\\eta(w_1,w_2)$s; $N_i^W(s)$ is an $|\\mathfrak{I}_{\\mathfrak{W}}| \\times 1$ vector consisting of $N_i^W(s;w_1,w_2)$s; $S^{0W}(s|\\theta^W)$ is an $|\\mathfrak{I}_{\\mathfrak{W}}| \\times 1$ vector with elements $S^{0W}(s;w_1|\\theta^W), (w_1,w_2) \\in \\mathfrak{I}_{\\mathfrak{W}}$; and $\\psi_i^W(s|\\theta^W)$ is an $|\\mathfrak{I}_{\\mathfrak{W}}| \\times 1$ vector consisting of the same elements. Similar conventions apply to those associated with the HS process, where functions are defined over $\\mathfrak{I}_{\\mathfrak{V}}$; e.g., $\\xi = (\\xi(v_1,v), (v_1,v) \\in \\mathfrak{I}_{\\mathfrak{V}})$, $N_i^V(s) = (N_i^V(s;v_1,v): (v_1,v) \\in \\mathfrak{I}_{\\mathfrak{V}})$, etc.\n\nLet us first consider the profile likelihood function $\\mathcal{L}_{pl}^W$ and the estimators $\\hat{\\theta}^W$ and $\\hat{\\eta}$. 
Using the above notation, the associated profile log-likelihood function, score function, and observed information matrix could be written as follows:\n\\begin{eqnarray*}\n& l_{pl}^W(\\theta^W) = \\sum_{i=1}^n \\int_0^\\infty \\left[\\log\\psi_i^W(s-|\\theta^W) - \\log S^{0W}(s|\\theta^W) \\right]^{\\tt T} dN_i^W(s); & \\\\\n& U_{pl}^W(\\theta^W) = \\sum_{i=1}^n \\int_0^\\infty H_{1i}^W(s|\\theta^W)^{\\tt T} dN_i^W(s) ; & \\\\ \n&I_{pl}^W(\\theta^W) = \\sum_{i=1}^n \\int_0^\\infty V_1^W(s|\\theta^W)^{\\tt T} dN_i^W(s), &\n\\end{eqnarray*}\nwhere\n\\begin{eqnarray*}\nH_{1i}^W(s|\\theta^W) & = & \\frac{\\stackrel{\\cdot}{\\psi}_i^W(s-|\\theta^W)}{\\psi_i^W(s-|\\theta^W)} - \\frac{\\stackrel{\\cdot}{S}^{0W}(s|\\theta^W)}{S^{0W}(s|\\theta^W)} \\\\\n& \\equiv & \\left(\\frac{\\stackrel{\\cdot}{\\psi}_i^W(s-|\\theta^W)}{\\psi_i^W(s-|\\theta^W)} - \\frac{\\stackrel{\\cdot}{S}^{0W}(s;w_1|\\theta^W)}{S^{0W}(s;w_1|\\theta^W)}: (w_1,w_2) \\in \\mathfrak{I}_{\\mathfrak{W}}\\right); \n\\end{eqnarray*}\nand\n\\begin{eqnarray*}\nV_1^W(s|\\theta^W) & = & \\frac{\\stackrel{\\cdot\\cdot}{S}^{0W}(s|\\theta^W)}{S^{0W}(s|\\theta^W)} - \\left(\\frac{\\stackrel{\\cdot}{S}^{0W}(s|\\theta^W)}{S^{0W}(s|\\theta^W)}\\right)^{\\otimes 2} \\\\\n& \\equiv & \\left(\\frac{\\stackrel{\\cdot\\cdot}{S}^{0W}(s;w_1|\\theta^W)}{S^{0W}(s;w_1|\\theta^W)} - \\left(\\frac{\\stackrel{\\cdot}{S}^{0W}(s;w_1|\\theta^W)}{S^{0W}(s;w_1|\\theta^W)}\\right)^{\\otimes 2}: (w_1,w_2) \\in \\mathfrak{I}_{\\mathfrak{W}} \\right).\n\\end{eqnarray*}\nA `$\\cdot$' over a function denotes the gradient with respect to the parameter vector, while a `$\\cdot\\cdot$' over a function denotes the matrix of second-partial derivatives with respect to the elements of the parameter vector. In obtaining $I_{pl}^W$, we also used the fact that $\\psi_i^W$ is an exponential function. We now present some results about $\\hat{\\theta}^W$. 
We also let \n$$\\mathfrak{I}_{pl}^W = \\mbox{pr-lim} \\left[ \\frac{1}{n} I_{pl}^W(\\theta^W)\\right].$$\nThis will be a function of {\\bf\\em all} the model parameters, since the limiting behavior of the $Y_i$s and $Y_i^W$s will depend on all the model parameters, owing to the interplay among the RCR, LM, and HS processes, as can be seen in the special case of Poisson processes and CTMCs.\n\n\\begin{proposition}\n\\label{prop-asymptotic about thetaW}\nUnder certain regularity conditions, we have, as $n \\rightarrow \\infty$,\n\\begin{itemize}\n\\item[(i)] (Consistency)\n$\\hat{\\theta}^W \\stackrel{p}{\\rightarrow} \\theta^W$;\n\\item[(ii)] (Asymptotic Representation)\n\\begin{eqnarray*} \n\\sqrt{n}[\\hat{\\theta}^W - \\theta^W] = \\left[\\frac{1}{n} I_{pl}^W(\\theta^W)\\right]^{-1} \\frac{1}{\\sqrt{n}} \\sum_{i=1}^n \\int_0^\\infty\nH_{1i}^W(s|\\theta^W)^{\\tt T} dM_i^W(s|\\eta,\\theta^W) + o_p(1);\n\\end{eqnarray*}\n\\item[(iii)] (Asymptotic Normality) \n$\\sqrt{n}[\\hat{\\theta}^W - \\theta^W] \\stackrel{d}{\\rightarrow} N\\left[0,\\left(\\mathfrak{I}_{pl}^W\\right)^{-1}\\right].$\n\\end{itemize}\n\\end{proposition}\n\n\nWe point out an important result needed for the proof of Proposition \\ref{prop-asymptotic about thetaW}(iii), which is that\n\\begin{eqnarray*}\n\\lefteqn{\\frac{1}{n} I_{pl}^W(\\theta^W) = } \\\\\n&& \\sum_{(w_1,w_2) \\in \\mathfrak{I}_{\\mathfrak{W}}} \\left[ \\frac{1}{n} \\sum_{i=1}^n \\int_0^\\infty \\left\\{H_{1i}^W(s;w_1|\\theta^W)\\right\\}^{\\otimes 2} Y_i(s) Y_i^W(s;w_1) \\eta(w_1,w_2) \\psi_i^W(s-|\\theta^W) ds \\right] \\\\ && + o_p(1) \n\\stackrel{p}{\\rightarrow} \\mathfrak{I}_{pl}^W.\n\\end{eqnarray*}\nThis asymptotic equivalence indicates where the involvement of the at-risk processes comes into play in the limiting profile information matrix, hence the dependence on all the model parameters. 
This also shows that a natural consistent estimator of $\\mathfrak{I}_{pl}^W$ is $I_{pl}^W(\\hat{\\theta}^W)\/n$.\n\nWe are now in a position to present asymptotic results about $\\hat{\\eta}$. First, let $s^{0W}$ and $\\stackrel{\\cdot}{s}^{0W}$ be deterministic functions satisfying\n\\begin{eqnarray*}\n& \\left|\\frac{1}{n} \\int_0^\\infty S^{0W}(z|\\theta^W) dz - \\int_0^\\infty s^{0W}(z) dz\\right| \\rightarrow 0; & \\\\\n& \\left|\\frac{1}{n} \\int_0^\\infty \\stackrel{\\cdot}{S}^{0W}(z|\\theta^W) dz - \\int_0^\\infty \\stackrel{\\cdot}{s}^{0W}(z) dz\\right| \\rightarrow 0. &\n\\end{eqnarray*}\n\n\\begin{proposition}\n\\label{prop-asymptotic about eta}\nUnder certain regularity conditions, we have as $n \\rightarrow \\infty$,\n\\begin{itemize}\n\\item[(i)] (Consistency) $\\hat{\\eta} \\stackrel{p}{\\rightarrow} \\eta$;\n\\item[(ii)] (Asymptotic Normality) $\\sqrt{n}(\\hat{\\eta} -\\eta) \\stackrel{d}{\\rightarrow} N(0,\\Sigma)$, where\n\\begin{eqnarray*}\n\\Sigma & = & \\mbox{Dg}\\left(\\int_0^\\infty s^{0W}(z) dz\\right)^{-1} \\mbox{Dg}(\\eta) + \\\\\n&& \\left[\\mbox{Dg}\\left(\\int_0^\\infty s^{0W}(z) dz\\right)^{-1} \\mbox{Dg}(\\eta) \\left(\\int_0^\\infty \\stackrel{\\cdot}{s}^{0W}(z) dz\\right)^{\\tt T} (\\mathfrak{I}_{pl}^W)^{-1\/2}\\right]^{\\otimes 2}.\n\\end{eqnarray*}\n\\item[(iii)] (Joint Asymptotic Normality) $\\sqrt{n}(\\hat{\\eta} - \\eta)$ and $\\sqrt{n}(\\hat{\\theta}^W - \\theta^W)$ are jointly asymptotically normal with mean zero and asymptotic covariance matrix\n\\begin{eqnarray*}\n\\lefteqn{\\mbox{Acov}(\\sqrt{n}(\\hat{\\theta}^W - \\theta^W),\\sqrt{n}(\\hat{\\eta} - \\eta)) = } \\\\\n&& -(\\mathfrak{I}_{pl}^W)^{-1} \\left(\\int_0^\\infty \\stackrel{\\cdot}{s}^{0W}(z) dz\\right) \\mbox{Dg}(\\eta) \\mbox{Dg}\\left(\\int_0^\\infty s^{0W}(z) dz\\right)^{-1}.\n\\end{eqnarray*}\n\\end{itemize}\n\\end{proposition}\n\n\nWe remark that in result (ii) for the asymptotic covariance matrix $\\Sigma$ in Proposition \\ref{prop-asymptotic about 
eta}, the additional variance term on the right-hand side is the effect of plugging-in the estimator $\\hat{\\theta}^W$ for $\\theta^W$ in the `estimators' $\\hat{\\eta}(w_1,w_2|\\theta^W)$s to obtain the $\\hat{\\eta}(w_1,w_2)$s.\nWithout having to write them down explicitly, similar results are obtainable for the estimators $\\hat{\\theta}^V$ and $\\hat{\\xi}$. \n\nNext, we present results concerning the asymptotic properties of the estimators of $\\alpha_q$s, $\\theta^R$, and $\\Lambda_{0q}(\\cdot)$s. Define the restricted profile likelihood for $(\\alpha,\\theta^R)$ based on data $\\mathbf{D}$ observed over $[0,s^*]$ via\n\\begin{eqnarray*}\n\\mathcal{L}_{pl}^R(\\alpha,\\theta^R) & \\equiv & \\mathcal{L}_{pl}^R(\\alpha,\\theta^R|s^*,t^*) \\equiv \\mathcal{L}_{pl}^R(\\alpha,\\theta^R|\\mathbf{D}(s^*,t^*)) \\\\\n& = & \n\\prod_{q=1}^Q \\prod_{i=1}^n \\prodi_{s=0}^{s^*} \n\\left[\n\\frac{\\rho_q[N_i^R(s-);\\alpha_q] \\psi_i^R(s-;\\theta^R)}{S_q^{0R}(s^*,\\mathcal{E}_{iq}(s)|\\alpha_q,\\theta^R)}\n\\right]^{I\\{\\mathcal{E}_{iq}(s) \\le t^*\\} dN_i^R(s;q)}\n\\end{eqnarray*}\nwith $S_q^{0R}(\\cdot,\\cdot|\\cdot,\\cdot)$ as defined in Proposition \\ref{prop-2}. This is restricted in the sense that we only consider events that occurred over $[0,s^*]$ and whose effective age is no more than $t^*$. 
Note that as we let $t^* \\rightarrow \\infty$ and $s^* \\rightarrow \\infty$, we obtain the profile likelihood in Proposition \\ref{profile liks}. The log-profile likelihood function is\n\\begin{eqnarray*}\n\\lefteqn{ {l}_{pl}^R(\\alpha,\\theta^R|s^*,t^*) \\equiv \\log \\mathcal{L}_{pl}^R(\\alpha,\\theta^R|s^*,t^*) } \\\\ & = &\n\\sum_{q=1}^Q \\sum_{i=1}^n \\int_0^{s^*} \\left\\{\n\\log\\rho_q[N_i^R(s-);\\alpha_q] + \\log\\psi_i^R(s-|\\theta^R) - \\log S_q^{0R}(s^*,\\mathcal{E}_{iq}(s)|\\alpha_q,\\theta^R)\n\\right\\} \\times \\\\ && I\\{\\mathcal{E}_{iq}(s) \\le t^*\\} N_i^R(ds;q).\n\\end{eqnarray*}\nThe associated profile score function is\n\\begin{displaymath}\nU_{pl}^R(\\alpha,\\theta^R|s^*,t^*) = U_{pl}^R(\\alpha,\\theta^R) = \\sum_{i=1}^n \\sum_{q=1}^Q \\int_0^{s^*} H_{iq}^R(s|s^*,\\alpha,\\theta^R) N_i^R(ds,t^*;q)\n\\end{displaymath}\nwhere, for $q \\in \\mathfrak{I}_Q$,\n$$H_{iq}^R(s|s^*,\\alpha,\\theta^R)^{\\tt T} = \\left[ 0^{\\tt T},\\ldots, 0^{\\tt T}, H_{i1q}^R(s|s^*,\\alpha,\\theta^R)^{\\tt T}, 0^{\\tt T}, \\ldots, 0^{\\tt T},\nH_{i2q}^R(s|s^*,\\alpha,\\theta^R)^{\\tt T}\\right],$$\na $(Q + 1) \\times 1$ block vector, with\n$$H_{i1q}^R(s|s^*,\\alpha_q,\\theta^R) = \\frac{\\nabla_{\\alpha_q} \\rho_q[N_i^R(s-);\\alpha_q]}{\\rho_q[N_i^R(s-);\\alpha_q]} -\n\\frac{\\nabla_{\\alpha_q} S_q^{0R}(s^*,\\mathcal{E}_{iq}(s)|\\alpha_q,\\theta^R)}{S_q^{0R}(s^*,\\mathcal{E}_{iq}(s)|\\alpha_q,\\theta^R)};$$\n$$H_{i2q}^R(s|s^*,\\alpha_q,\\theta^R) = \\frac{\\nabla_{\\theta^R} \\psi_i^R(s-|\\theta^R)}{ \\psi_i^R(s-|\\theta^R)} -\n\\frac{\\nabla_{\\theta^R} S_q^{0R}(s^*,\\mathcal{E}_{iq}(s)|\\alpha_q,\\theta^R)}{S_q^{0R}(s^*,\\mathcal{E}_{iq}(s)|\\alpha_q,\\theta^R)}.$$\nNote the dimensions of these block vectors: the $q$th block is of dimension $\\dim(\\alpha_q) \\times 1$, while the $(Q+1)$th block is of dimension $\\dim(\\theta^R) \\times 1$. 
An important martingale representation of $U_{pl}^R(\\alpha,\\theta^R|s^*,t^*)$, which is straightforward to establish, is\n\\begin{displaymath}\nU_{pl}^R(\\alpha,\\theta^R|s^*,t^*) = \\sum_{i=1}^n \\sum_{q=1}^Q \\int_0^{s^*} H_{iq}^R(s|s^*,\\alpha,\\theta^R) M_i^R(ds,t^*;q).\n\\end{displaymath}\nThe predictable variation associated with this score function is\n\\begin{displaymath}\n\\langle U_{pl}^R(\\alpha,\\theta^R|\\cdot,t^*) \\rangle (s^*) = \\sum_{i=1}^n \\sum_{q=1}^Q \\int_0^{s^*} [H_{iq}^R(s|s^*,\\alpha,\\theta^R)]^{\\otimes 2} A_i^R(ds,t^*;q)\n\\end{displaymath}\nwith $A_i^R(ds,t^*;q) = I\\{\\mathcal{E}_{iq}(s) \\le t^*\\} dA_i^R(s;q)$. \nThe estimators $\\hat{\\alpha}_q(s^*,t^*), q =1,\\ldots,Q,$ and $\\hat{\\theta}^R(s^*,t^*)$ satisfy the equation\n$U_{pl}^R(\\hat{\\alpha},\\hat{\\theta}^R|s^*,t^*) = 0.$\n\nThe observed profile information matrix $I_{pl}^R(\\alpha,\\theta^R|s^*,t^*)$ is a $(Q+1) \\times (Q+1)$ symmetric block matrix with the following block elements, for $q, q^\\prime = 1, \\ldots,Q$:\n\\begin{displaymath}\nI_{pl,qq^\\prime}^R(\\alpha,\\theta^R|s^*,t^*) = \n\\left\\{\n\\begin{array}{ccc}\n\\sum_{i=1}^n \\int_0^{s^*} V_{i11qq}^R(s|s^*,\\alpha,\\theta^R) N_i^R(ds,t^*;q) & \\mbox{for} &q = q^\\prime \\\\\n0 & \\mbox{for} & q \\ne q^\\prime\n\\end{array}\n\\right.;\n\\end{displaymath}\n\\begin{displaymath}\nI_{pl,(Q+1)q}^R(\\alpha,\\theta^R|s^*,t^*) = \n\\sum_{i=1}^n \\int_0^{s^*} V_{i21q}^R(s|s^*,\\alpha,\\theta^R) N_i^R(ds,t^*;q);\n\\end{displaymath}\n\\begin{displaymath}\nI_{pl,(Q+1)(Q+1)}^R(\\alpha,\\theta^R|s^*,t^*) = \n\\sum_{i=1}^n \\sum_{q=1}^Q \\int_0^{s^*} V_{i22q}^R(s|s^*,\\alpha,\\theta^R) N_i^R(ds,t^*;q),\n\\end{displaymath}\nwhere $V_{i11qq^\\prime}^R$ is of dimension $\\dim(\\alpha_q) \\times \\dim(\\alpha_{q^\\prime})$ for $q, q^\\prime = 1, 2, \\ldots, Q$; $V_{i21q}^R$ and $(V_{i12q}^R)^{\\tt T}$ have dimensions $\\dim(\\theta^R) \\times \\dim(\\alpha_q)$; and $V_{i22q}^R$ has dimension $\\dim(\\theta^R) \\times 
\\dim(\\theta^R)$. These are given by the following expressions, for $q, q^\\prime = 1,\\ldots,Q$:\n\\begin{eqnarray*}\n \\lefteqn{V_{i11qq}^R(s|s^*,\\alpha,\\theta^R) = -\\left\\{\\frac{\\nabla_{\\alpha_q\\alpha_q^{\\tt T}} \\rho_q[N_i^R(s-);\\alpha_q]}{\\rho_q[N_i^R(s-);\\alpha_q]} - \\left(\\frac{\\nabla_{\\alpha_q} \\rho_q[N_i^R(s-);\\alpha_q]}{\\rho_q[N_i^R(s-);\\alpha_q]}\\right)^{\\otimes 2}\\right\\} + } \\\\\n & & \\left\\{\\frac{\\nabla_{\\alpha_q\\alpha_q^{\\tt T}} S_q^{0R}(s^*,\\mathcal{E}_{iq}(s)|\\alpha_q,\\theta^R)}{S_q^{0R}(s^*,\\mathcal{E}_{iq}(s)|\\alpha_q,\\theta^R)} - \\left(\\frac{\\nabla_{\\alpha_q} S_q^{0R}(s^*,\\mathcal{E}_{iq}(s)|\\alpha_q,\\theta^R)}{S_q^{0R}(s^*,\\mathcal{E}_{iq}(s)|\\alpha_q,\\theta^R)}\\right)^{\\otimes 2}\\right\\}; \n \\end{eqnarray*}\n $$V_{i11qq^\\prime}^R(s|s^*,\\alpha,\\theta^R) = 0, q \\ne q^\\prime; $$\n\\begin{eqnarray*}\n\\lefteqn{V_{i21q}^R(s|s^*,\\alpha,\\theta^R) =\n-\\left\\{\\frac{\\nabla_{\\alpha_q(\\theta^R)^{\\tt T}} S_q^{0R}(s^*,\\mathcal{E}_{iq}(s)|\\alpha_q,\\theta^R)}{S_q^{0R}(s^*,\\mathcal{E}_{iq}(s)|\\alpha_q,\\theta^R)} - \\right. } \\\\ && \\left.\n\\left(\\frac{\\nabla_{\\alpha_q} S_q^{0R}(s^*,\\mathcal{E}_{iq}(s)|\\alpha_q,\\theta^R)}{S_q^{0R}(s^*,\\mathcal{E}_{iq}(s)|\\alpha_q,\\theta^R)}\\right)\n\\left(\\frac{\\nabla_{(\\theta^R)^{\\tt T}} S_q^{0R}(s^*,\\mathcal{E}_{iq}(s)|\\alpha_q,\\theta^R)}{S_q^{0R}(s^*,\\mathcal{E}_{iq}(s)|\\alpha_q,\\theta^R)}\\right)\\right\\};\n\\end{eqnarray*}\n\\begin{eqnarray*}\nV_{i22q}^R(s|s^*,\\alpha,\\theta^R) & = & -\\left\\{\\frac{\\nabla_{\\theta^R(\\theta^R)^{\\tt T}} S_q^{0R}(s^*,\\mathcal{E}_{iq}(s)|\\alpha_q,\\theta^R)}{S_q^{0R}(s^*,\\mathcal{E}_{iq}(s)|\\alpha_q,\\theta^R)} - \\right. 
\\\\ && \\left.\n\\left(\\frac{\\nabla_{\\theta^R} S_q^{0R}(s^*,\\mathcal{E}_{iq}(s)|\\alpha_q,\\theta^R)}{S_q^{0R}(s^*,\\mathcal{E}_{iq}(s)|\\alpha_q,\\theta^R)}\\right)^{\\otimes 2}\\right\\},\n\\end{eqnarray*}\nwhere for the last expression we used the fact that $\\psi_i(z) = \\exp(z)$.\n\nA condition needed for the asymptotic results is that there is an invertible matrix function $\\mathfrak{I}_{pl}^R(\\alpha,\\theta^R|s^*,t^*)$ which equals the in-probability limit of $\\frac{1}{n} I_{pl}^R(\\alpha,\\theta^R|s^*,t^*)$ as $n \\rightarrow \\infty$. Under this condition, we have the following consistency and asymptotic normality results for $(\\hat{\\alpha},\\hat{\\theta}^R)$:\n\n\\begin{proposition}\n\\label{asymptotics of alpha and thetaR}\nUnder regularity conditions and as $n \\rightarrow \\infty$,\n\\begin{itemize}\n\\item[(i)] $(\\hat{\\alpha}(s^*,t^*),\\hat{\\theta}^R(s^*,t^*)) \\stackrel{p}{\\rightarrow} (\\alpha,\\theta^R)$; \n\\item[(ii)] $\\sqrt{n}\\left[\\begin{array}{c} \\hat{\\alpha}(s^*,t^*) - \\alpha \\\\ \\hat{\\theta}^R(s^*,t^*) - \\theta^R \\end{array} \\right] \\stackrel{d}{\\rightarrow} N\\left(0,[\\mathfrak{I}_{pl}^R(\\alpha,\\theta^R|s^*,t^*)]^{-1}\\right).$\n\\end{itemize}\n\\end{proposition}\n\n\nImportant equivalences to note are, for $q, q^\\prime =1,\\ldots,Q$:\n\\begin{eqnarray*}\n\\frac{1}{n} I_{pl,qq^\\prime}^R(\\alpha,\\theta^R|s^*,t^*) & = &\n\\frac{1}{n}\\sum_{i=1}^n\\int_0^{s^*} V_{i11qq^\\prime}^R(s|s^*,\\alpha,\\theta^R) I\\{\\mathcal{E}_{iq}(s) \\le t^*\\} \\times \\\\ &&\nY_i(s) \\rho_q[N_i^R(s-);\\alpha_q] \\psi_i^R(s-;\\theta^R) \\lambda_{0q}[\\mathcal{E}_{iq}(s)] ds + o_p(1); \\\\\n\\frac{1}{n} I_{pl,(Q+1)q}^R(\\alpha,\\theta^R|s^*,t^*) & = &\n\\frac{1}{n}\\sum_{i=1}^n\\int_0^{s^*} V_{i21q}^R(s|s^*,\\alpha,\\theta^R) I\\{\\mathcal{E}_{iq}(s) \\le t^*\\} \\times \\\\ &&\nY_i(s) \\rho_q[N_i^R(s-);\\alpha_q] \\psi_i^R(s-;\\theta^R) \\lambda_{0q}[\\mathcal{E}_{iq}(s)] ds + o_p(1); \\\\\n\\frac{1}{n} 
I_{pl,(Q+1)(Q+1)}^R(\\alpha,\\theta^R|s^*,t^*) & = &\n\\frac{1}{n}\\sum_{i=1}^n \\sum_{q=1}^Q \\int_0^{s^*} V_{i22q}^R(s|s^*,\\alpha,\\theta^R) I\\{\\mathcal{E}_{iq}(s) \\le t^*\\} \\times \\\\ &&\nY_i(s) \\rho_q[N_i^R(s-);\\alpha_q] \\psi_i^R(s-;\\theta^R) \\lambda_{0q}[\\mathcal{E}_{iq}(s)] ds + o_p(1).\n\\end{eqnarray*}\nIn addition, the regularity conditions must imply that\n\\begin{eqnarray*}\n& \\left|\\frac{1}{n} I_{pl,qq}^R(\\alpha,\\theta^R|s^*,t^*) - \\frac{1}{n}\\sum_{i=1}^n \\int_0^{s^*} [H_{i1q}^R(s|s^*,\\alpha,\\theta^R)]^{\\otimes 2} A_i^R(ds,t^*;q) \\right| \\stackrel{p}{\\rightarrow} 0; & \\\\\n& \\left|\\frac{1}{n} I_{pl,(Q+1)(Q+1)}^R(\\alpha,\\theta^R|s^*,t^*) - \\frac{1}{n}\\sum_{i=1}^n \\sum_{q=1}^Q \\int_0^{s^*} [H_{i2q}^R(s|s^*,\\alpha,\\theta^R)]^{\\otimes 2} A_i^R(ds,t^*;q) \\right| \\stackrel{p}{\\rightarrow} 0; & \\\\\n& \\left|\\frac{1}{n} I_{pl,(Q+1)q}^R(\\alpha,\\theta^R|s^*,t^*) - \\frac{1}{n}\\sum_{i=1}^n \\int_0^{s^*} H_{i2q}^R(s|s^*,\\alpha,\\theta^R) [H_{i1q}^R(s|s^*,\\alpha,\\theta^R)]^{\\tt T} A_i^R(ds,t^*;q) \\right| & \\\\ & \\stackrel{p}{\\rightarrow} 0. &\n\\end{eqnarray*}\nThese conditions imply that $\\left|\\frac{1}{n} \\langle U_{pl}^R \\rangle - \\frac{1}{n} I_{pl}^R \\right| \\stackrel{p}{\\rightarrow} 0$ as $n \\rightarrow \\infty$.\nThese are the analogues of the results in the usual asymptotic theory of maximum likelihood estimators, where the Fisher information equals both the expected value of the squared score (the partial derivative of the log-likelihood function with respect to the parameter) and the negative of the expected value of the second partial derivative of the log-likelihood function. 
They are usually satisfied by imposing a set of conditions that allows for the interchange of the order of integration with respect to $s$ and the partial differentiation with respect to the parameters; see, for instance, \\cite{Bor84}, \\cite{AndGil1982}, and \\cite{pena2016} in similar but simpler settings.\n\nThe proof of Proposition \\ref{asymptotics of alpha and thetaR} then relies on the asymptotic representation\n$$\\sqrt{n} \\left[\\begin{array}{c} \\hat{\\alpha}(s^*,t^*) - \\alpha \\\\ \\hat{\\theta}^R(s^*,t^*) - \\theta^R \\end{array}\\right] = \\left[\\frac{1}{n}I_{pl}^R(\\alpha,\\theta^R|s^*,t^*)\\right]^{-1} \\left[\\frac{1}{\\sqrt{n}} U_{pl}^R(\\alpha,\\theta^R|s^*,t^*)\\right] + o_p(1).$$\nIn fact, this representation is also crucial for finding the asymptotic properties of the estimators of the $\\Lambda_{0q}(\\cdot)$s, which we will now present. By first-order Taylor expansion and under regularity conditions, we have the representations, for each $q = 1,\\ldots, Q$, given by\n\\begin{eqnarray*}\n\\hat{\\Lambda}_{0q}(s^*,t) & = & \\int_0^t \\frac{I\\{S_q^{0R}(s^*,w|\\hat{\\alpha}(s^*,t^*),\\hat{\\theta}^R(s^*,t^*)) > 0\\}}{S_q^{0R}(s^*,w|\\hat{\\alpha}(s^*,t^*),\\hat{\\theta}^R(s^*,t^*))} \\sum_{i=1}^n N_i^R(s^*,dw;q) \\\\\n& = & \\int_0^t \\frac{I\\{S_q^{0R}(s^*,w|\\alpha,\\theta^R) > 0\\}}{S_q^{0R}(s^*,w|\\alpha,\\theta^R)} \\sum_{i=1}^n N_i^R(s^*,dw;q) + \\\\ && \\left[ B_{1q}(s^*,t|\\alpha,\\theta^R),\\ B_{2q}(s^*,t|\\alpha,\\theta^R)\\right] \\left[\\begin{array}{c} \\hat{\\alpha}(s^*,t^*) - \\alpha \\\\ \\hat{\\theta}^R(s^*,t^*) - \\theta^R \\end{array}\\right] + o_p(1\/\\sqrt{n})\n\\end{eqnarray*}\nwhere\n\\begin{eqnarray*}\n\\lefteqn{B_{1q}(s^*,t|\\alpha,\\theta^R) = - \\int_0^t I\\{S_q^{0R}(s^*,w|\\alpha,\\theta^R) > 0\\} \\times } \\\\ && \\left\\{\\frac{\\nabla_\\alpha S_q^{0R}(s^*,w|\\alpha,\\theta^R)}{[S_q^{0R}(s^*,w|\\alpha,\\theta^R)]^2}\\right\\} \\sum_{i=1}^n 
N_i^R(s^*,dw;q);\n\\end{eqnarray*}\n\\begin{eqnarray*}\n\\lefteqn{B_{2q}(s^*,t|\\alpha,\\theta^R) = - \\int_0^t I\\{S_q^{0R}(s^*,w|\\alpha,\\theta^R) > 0\\} \\times } \\\\ && \\left\\{\\frac{\\nabla_{\\theta^R} S_q^{0R}(s^*,w|\\alpha,\\theta^R)}{[S_q^{0R}(s^*,w|\\alpha,\\theta^R)]^2}\\right\\} \\sum_{i=1}^n N_i^R(s^*,dw;q).\n\\end{eqnarray*}\nFor $q = 1,\\ldots,Q,$ let\n$$\\Lambda_{0q}^*(s^*,t) = \\int_0^t I\\{S_q^{0R}(s^*,w|\\alpha,\\theta^R) > 0\\} \\lambda_{0q}(w) dw.$$\nObserve that\n$$\\int_0^t \\frac{I\\{S_q^{0R}(s^*,w|\\alpha,\\theta^R) > 0\\}}{S_q^{0R}(s^*,w|\\alpha,\\theta^R)} \\sum_{i=1}^n A_i^R(s^*,dw|\\alpha,\\theta^R) = \\Lambda_{0q}^*(s^*,t),$$\nimplying that\n\\begin{eqnarray*}\n\\lefteqn{ \\int_0^t \\frac{I\\{S_q^{0R}(s^*,w|\\alpha,\\theta^R) > 0\\}}{S_q^{0R}(s^*,w|\\alpha,\\theta^R)} \\sum_{i=1}^n N_i^R(s^*,dw;q) = } \\\\\n&& \\int_0^t \\frac{I\\{S_q^{0R}(s^*,w|\\alpha,\\theta^R) > 0\\}}{S_q^{0R}(s^*,w|\\alpha,\\theta^R)} \\sum_{i=1}^n M_i^R(s^*,dw;q) + \\Lambda_{0q}^*(s^*,t).\n\\end{eqnarray*}\nLet us also define\n\\begin{eqnarray*}\n\\lefteqn{ \\hat{Z}_q^R(s^*,t) = [\\hat{\\Lambda}_{0q}(s^*,t) - \\Lambda_{0q}^*(s^*,t)] - } \\\\ && \\left[ B_{1q}(s^*,t|\\alpha,\\theta^R),\\ B_{2q}(s^*,t|\\alpha,\\theta^R)\\right] \\left[\\begin{array}{c} \\hat{\\alpha}(s^*,t^*) - \\alpha \\\\ \\hat{\\theta}^R(s^*,t^*) - \\theta^R \\end{array}\\right];\n\\end{eqnarray*}\n\\begin{displaymath}\n\\tilde{H}_{iq}^R(w|s^*) = \\sum_{j: S_j \\le s^*} H_{iq}^R[\\mathcal{E}_{iqj}^{-1}(w)|s^*] I\\{\\mathcal{E}_{iq}(S_{j-1}) < w \\le \\mathcal{E}_{iq}(S_j)\\};\n\\end{displaymath}\nand form the vectors of functions $\\hat{Z} = (\\hat{Z}_q^R, q=1,\\ldots,Q)$ and $\\tilde{H}_i^R = (\\tilde{H}_{iq}^R, q = 1,\\ldots,Q)$. 
Then, the main asymptotic representation leading to the asymptotic results for the RCR component parameters is given by, for $t \\in [0,t^*]$,\n\\begin{eqnarray}\n\\sqrt{n} \\left[\n\\begin{array}{c}\n\\hat{\\alpha}(s^*,t^*) - \\alpha \\\\\n\\hat{\\theta}^R(s^*,t^*) - \\theta^R \\\\\n\\hat{Z}(s^*,t)\n\\end{array}\n\\right] & = &\n\\left[\n\\begin{array}{cc}\n\\left(\\frac{1}{n} I_{pl}^R(s^*,t^*)\\right)^{-1} & 0 \\\\ 0 & I\n\\end{array}\n\\right] \\times \\nonumber \\\\ &&\n\\frac{1}{\\sqrt{n}} \\sum_{i=1}^n \\int_0^{t}\n\\left[\n\\begin{array}{c}\n\\tilde{H}_i^R(w|s^*) \\\\\n\\mbox{Dg}\\left(\\frac{I\\{S^{0R}(s^*,w)\/n > 0\\}}{S^{0R}(s^*,w)\/n}\\right)\n\\end{array}\n\\right] \nM_i^R(s^*,dw) + o_p(1).\n\\label{major asymptotic representation RCR}\n\\end{eqnarray}\n\nUsing this representation, we obtain the following asymptotic properties, though they are not stated in their most general form.\n\n\\begin{proposition}\n\\label{prop-main result RCR estimators}\nUnder certain regularity conditions, we have\n\\begin{itemize}\n\\item[(i)]\n$\\sqrt{n} \\left[ \\begin{array}{c}\n\\hat{\\alpha}(s^*,t^*) - \\alpha \\\\\n\\hat{\\theta}^R(s^*,t^*) - \\theta^R\n\\end{array}\n\\right]$ and $\\sqrt{n} \\hat{Z}(s^*,t)$ are asymptotically independent;\n\\item[(ii)]\nFor each $q = 1, \\ldots, Q$, $\\hat{\\Lambda}_{0q}(s^*,t)$ converges uniformly in probability to $\\Lambda_{0q}(t)$ for $t \\in [0,t^*]$;\n\\item[(iii)]\nFor each $q = 1, \\ldots, Q$, and for $t \\in [0,t^*]$, $\\sqrt{n}[\\hat{\\Lambda}_{0q}(s^*,t) - \\Lambda_{0q}(t)]$ converges in distribution to a normal distribution with mean zero and variance\n\\begin{displaymath}\n\\Gamma_q(s^*,t) = \\int_0^t \\frac{d\\Lambda_{0q}(w)}{s_q^{0R}(s^*,w)} +\n(b_{1q}(s^*,t),\\ b_{2q}(s^*,t)) [\\mathfrak{I}_{pl}^R(s^*,t^*)]^{-1} (b_{1q}(s^*,t),\\ b_{2q}(s^*,t))^{\\tt T},\n\\end{displaymath}\nwhere $(b_{1q}(s^*,t),\\ b_{2q}(s^*,t)) = \\mbox{pr-lim} \\left\\{ \\frac{1}{n} (B_{1q}(s^*,t),\\ B_{2q}(s^*,t))\\right\\}$.\n\\item[(iv)] More generally, for 
$q=1,\\ldots,Q$, the stochastic process $\\{\\sqrt{n}[\\hat{\\Lambda}_{0q}(s^*,t) - \\Lambda_{0q}(t)]: \\ t \\in [0,t^*]\\}$ converges weakly in Skorokhod's space $(\\mathfrak{D}[0,t^*], \\mathcal{D}[0,t^*])$ to a zero-mean Gaussian process with covariance function\n\\begin{eqnarray*}\n\\Gamma_q(s^*,t_1,t_2) & = & \\int_0^{\\min(t_1,t_2)} \\frac{d\\Lambda_{0q}(w)}{s_q^{0R}(s^*,w)} + \\\\ &&\n(b_{1q}(s^*,t_1),\\ b_{2q}(s^*,t_1)) [\\mathfrak{I}_{pl}^R(s^*,t^*)]^{-1} (b_{1q}(s^*,t_2),\\ b_{2q}(s^*,t_2))^{\\tt T}.\n\\end{eqnarray*}\nThis covariance function is consistently estimated by replacing the unknown functions by their empirical counterparts and the finite-dimensional parameters by their estimators.\n\\end{itemize}\n\\end{proposition}\n\n\nWe point out that the $\\hat{\\Lambda}_{0q}, q=1,\\ldots,Q,$ are asymptotically dependent, and are also not independent of $(\\hat{\\alpha}(s^*,t^*),\\hat{\\theta}^R(s^*,t^*))$. The results stated above generalize those in \\cite{AndGil1982} and \\cite{pena2016} to a more complex and general situation.\n\n\\section{Illustration of Estimation Approach on Simulated Data}\n\\label{sec-Illustration}\n\nIn this section we provide a numerical illustration of the estimation procedure applied to sample data. The illustrative sample data set with $n = 50$ units is depicted in Figure \\ref{sample data picture}. It is generated from the proposed model with the following characteristics.\nFor the $i$th sample unit, the covariate values are generated according to $X_{i1}\\ {\\sim}\\ \\mbox{BER}(0.5)$ and $X_{i2}\\ {\\sim}\\ N(0, 1)$ with $X_{i1}$ and $X_{i2}$ independent, where $\\mbox{BER}(p)$ is the Bernoulli distribution with success probability $p$. The end of monitoring time is $\\tau_i\\ {\\sim}\\ \\mbox{EXP}(5)$, where $\\mbox{EXP}(\\lambda)$ is the exponential distribution with mean $\\lambda$. 
For the RCR component with $Q=3$, the baseline (crude) hazard rate function for risk $q \\in \\{1,2,3\\}$ is a two-parameter Weibull given by \n$$\\lambda_{0q}(t|\\kappa_q^*,\\theta_q^*)=\\frac{\\kappa_q^*}{\\theta_q^*}\\left(\\frac{t}{\\theta_q^*}\\right)^{\\kappa_q^*-1},\\quad t \\ge 0,$$ \nwith $(\\kappa_1^*,\\kappa_2^*,\\kappa_3^*) = (2, 2, 3)$ and $(\\theta_1^*,\\theta_2^*,\\theta_3^*) = (0.9, 1.1, 1)$. The associated (crude) survivor function for risk $q$ is\n$$\\bar{F}_{0q}(t|\\kappa_q^*,\\theta_q^*) = \\exp\\left\\{-\\left(\\frac{t}{\\theta_q^*}\\right)^{\\kappa_q^*}\\right\\},\\quad t \\ge 0.$$\nFor risk $q$, the effective age process function is $\\mathcal{E}_{iq}(s) = s - S_{iqN_{iq}^R(s-)}^R$, the backward recurrence time for this risk. For the effects of the accumulating event occurrences, $\\rho_q(N_i^R(s-);\\alpha_q) =(\\alpha_q)^{\\log(1+N^R_{iq}(s-))}$. \nFor the HS component, $\\mathfrak{V} = \\{1,2,3\\}$ with state `$1$' an absorbing state, so $\\gamma$ is a scalar. For the LM component, $\\mathfrak{W} = \\{1=\\mbox{High}, 2=\\mbox{Normal}, 3=\\mbox{Low}\\}$, so $\\kappa$ is a two-dimensional vector. \nThe infinitesimal generator matrices $\\eta$ for the LM process and $\\xi$ for the HS process are, respectively,\n\\begin{eqnarray*}\n\\eta=\\begin{bmatrix}\n-0.3 & 0.2 & 0.1\\\\\n0.1 & -0.2 & 0.1 \\\\\n0.1 & 0.2 & -0.3\n\\end{bmatrix}\n \\quad \\mbox{and} \\quad\n\\xi=\\begin{bmatrix}\n0 & 0 & 0\\\\\n0.2 & -0.7 & 0.5 \\\\\n0.05 & 0.5 & -0.55\n\\end{bmatrix}.\n\\end{eqnarray*}\nThe values in the first row of the $\\xi$-matrix are all zeros because state $1$ in HS is absorbing. 
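As a quick sanity check of these specifications, the baseline Weibull hazard, the corresponding survivor function, and the event-count multiplier $\rho_q$ can be coded directly. The following is a minimal Python sketch (the paper's implementation is in {\tt R}; the function names here are illustrative, not from the paper's programs):

```python
import math

def lambda0(t, kappa, theta):
    """Two-parameter Weibull baseline (crude) hazard rate lambda_0q(t)."""
    return (kappa / theta) * (t / theta) ** (kappa - 1.0)

def surv0(t, kappa, theta):
    """Baseline (crude) survivor function: exp{-(t/theta)^kappa}."""
    return math.exp(-((t / theta) ** kappa))

def rho(n_events, alpha):
    """Event-count multiplier rho_q(n; alpha_q) = alpha_q^log(1+n)."""
    return alpha ** math.log(1.0 + n_events)
```

For instance, `surv0(theta, kappa, theta)` returns $e^{-1}$ for any shape `kappa`, a convenient check of the Weibull parametrization, and `rho(0, alpha)` is always $1$, so the multiplier only kicks in after the first event.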
The true values of the remaining model parameters are given in the second column of Table \\ref{oneEs}.\n\n\\begin{table}[ht]\n\\centering\n\\begin{tabular}{cccc}\n \\hline\nParameter & True & Estimate & Est.\\ Standard Error \\\\ \n \\hline\n $\\alpha_1$ & 1.50 & 1.58 & 0.13 \\\\\n $\\alpha_2$ & 1.20 & 1.16 & 0.15 \\\\\n $\\alpha_3$ & 2.00 & 2.10 & 0.21 \\\\\n $\\beta^R_1$ & 1.00 & 1.17 & 0.09 \\\\\n $\\beta^R_2$ & -1.00 & -0.94 & 0.10 \\\\\n $\\gamma^R_1$ & 1.00 & 1.17 & 0.09 \\\\ \n $\\kappa^R_1$ & 1.00 & 1.10 & 0.09 \\\\\n $\\kappa^R_2$ & -1.00 & -0.68 & 0.10 \\\\\n\\hline\n$\\beta^W_1$ & 1.00 & 1.02 & 0.28 \\\\ \n$\\beta^W_2$ & -1.00 & -0.78 & 0.28 \\\\ \n$\\gamma^W_1$ & 1.00 & 0.97 & 0.27 \\\\ \n$\\nu^W_1$ & 1.00 & 0.82 & 0.06 \\\\ \n$\\nu^W_2$ & 1.00 & 1.11 & 0.09 \\\\\n$\\nu^W_3$ & -2.00 & -2.01 & 0.09 \\\\ \n\\hline\n $\\beta^V_1$ & 1.00 & 0.93 & 0.17 \\\\\n $\\beta^V_2$ & -1.00 & -1.00 & 0.16 \\\\\n $\\kappa^V_1$ & 1.00 & 1.00 & 0.20 \\\\\n $\\kappa^V_2$ & -1.00 & -1.59 & 0.40 \\\\\n $\\nu^V_1$ & 1.00 & 1.24 & 0.04 \\\\\n $\\nu^V_2$ & 1.00 & 1.04 & 0.06 \\\\\n $\\nu^V_3$ & -2.00 & -2.30 & 0.06 \\\\ \n\\hline\n\\end{tabular}\n\\caption{The second column contains the true values of the parameters in the first column of the model that generated the illustrative sample data in Figure \\ref{sample data picture} and that was also used in the simulation study. The third column contains the estimates of these parameters arising from the illustrative sample data, while the fourth column contains the information-based estimates of the standard errors of the estimators. The RCR component model parameters are the $\\alpha$s and those with superscript $R$. The LM component parameters are those with superscript $W$. Finally, the HS component parameters are those with superscript $V$.}\n\\label{oneEs}\n\\end{table}\n\nFor each replication, the realized data for the $i$th unit among the $n$ units are generated in the following manner. 
At time $s=0$, we first randomly assign an initial LM state (either 1, 2, or 3) and HS state (either 2 or 3) uniformly among the allowable states and independently for the LM and HS processes. We specify a fixed length for the intervals partitioning $[0,\\infty)$, which in the simulation runs was set to $ds=0.001$, so we have intervals $I_k = [s_k,s_{k+1}) = [k(ds),(k+1)(ds)), k = 0, 1, 2, \\ldots$. A smaller value of $ds$ will make the data generation conform more closely to the model, but at the same time will also lead to higher computational costs, especially in a simulation study with many replications. \nThe data generation proceeds sequentially over $k=0,1,2,\\ldots$. Suppose that we have reached interval $I_k = [s_k,s_{k+1})$ with $W(s_k) = w_1$ and $V(s_k) = v_1$. For $q \\in \\mathfrak{I}_Q$, generate a realization $e_q^R$ of $E_q^R$ according to a $\\mbox{BER}(p_q^R)$ with $p_q^R$ given by (\\ref{RCR probability}). Also, for $w_2 \\ne w_1$, generate a realization $e_{w_2}^W$ of $E_{w_2}^W$ according to a $\\mbox{BER}(p_{w_2}^W)$ with $p_{w_2}^W$ given by (\\ref{LM probability}). Finally, for $v \\ne v_1$, generate a realization $e_{v}^V$ of $E_{v}^V$ according to a $\\mbox{BER}(p_{v}^V)$ with $p_v^V$ given by (\\ref{HS probability}). 
\n\\begin{itemize}\n\\item\nIf all the realizations $e_q^R, q \\in \\mathfrak{I}_Q$, $e_{w_2}^W, w_2 \\in \\mathfrak{I}_{\\mathfrak{W}}, w_2 \\ne w_1$, and $e_v^V, v \\in \\mathfrak{I}_{\\mathfrak{V}}, v \\ne v_1$ are zero, meaning no event occurred in the interval $I_k$, then we proceed to the next interval $I_{k+1}$, provided that $\\tau \\notin I_k$; otherwise we stop.\n\\item\nIf exactly one of the realizations $e_q^R, q \\in \\mathfrak{I}_Q$, $e_{w_2}^W, w_2 \\in \\mathfrak{I}_{\\mathfrak{W}}, w_2 \\ne w_1$, and $e_v^V, v \\in \\mathfrak{I}_{\\mathfrak{V}}, v \\ne v_1$ equals one, so that an event occurred, then we update the values of $N^R, N^W, N^V$ and proceed to the next interval $I_{k+1}$, unless $e_v^V = 1$ for a $v \\in \\mathfrak{V}_0$ (i.e., there is a transition to an absorbing state) or $\\tau \\in I_k$, in which case we stop.\n\\end{itemize}\nIn our implementation, since $0 < ds \\approx 0$, the success probabilities in the Bernoulli distributions above are all close to zero, so the probability that more than one of the $e_q^R, q \\in \\mathfrak{I}_Q$, $e_{w_2}^W, w_2 \\in \\mathfrak{I}_{\\mathfrak{W}}, w_2 \\ne w_1$, and $e_v^V, v \\in \\mathfrak{I}_{\\mathfrak{V}}, v \\ne v_1$ takes the value one is very small. If, however, at least two of them equal one, we randomly choose one to retain the value one and set the others to zero. Thus, we always have $$\\sum_{q \\in \\mathfrak{I}_Q} e_q^R + \\sum_{w_2 \\in \\mathfrak{I}_{\\mathfrak{W}};\\ w_2 \\ne w_1} e_{w_2}^W + \\sum_{v \\in \\mathfrak{I}_{\\mathfrak{V}};\\ v \\ne v_1} e_v^V \\in \\{0, 1\\},$$ which means there is at most one event in any interval. 
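The interval-by-interval scheme just described can be sketched in code. Below is a simplified, single-unit Python version that generates only the RCR events, using the backward recurrence time as the effective age and the Weibull baselines of this illustration; the LM and HS transitions, the covariate effects, and the random censoring draw are omitted or fixed for brevity, and all names are illustrative rather than taken from the paper's {\tt R} programs.

```python
import math
import random

def hazard(s, q, last_event, counts, alphas, kappas, thetas):
    """Weibull baseline hazard evaluated at the effective age (backward
    recurrence time), scaled by rho_q(n; a) = a**log(1 + n)."""
    age = s - last_event[q]
    lam = (kappas[q] / thetas[q]) * (age / thetas[q]) ** (kappas[q] - 1.0)
    return lam * alphas[q] ** math.log(1.0 + counts[q])

def simulate_unit(ds=0.001, tau=2.0, alphas=(1.5, 1.2, 2.0),
                  kappas=(2.0, 2.0, 3.0), thetas=(0.9, 1.1, 1.0), seed=42):
    """Discretized generation of RCR events for one unit over [0, tau].

    Each interval [s, s+ds) gets an independent Bernoulli draw per risk with
    success probability approximately hazard * ds; if several risks fire in
    the same interval, one is chosen at random so that at most one event
    occurs per interval, as in the scheme above."""
    rng = random.Random(seed)
    Q = len(alphas)
    last_event = [0.0] * Q   # time of the most recent event of each risk
    counts = [0] * Q         # running event counts N_{iq}^R(s-)
    events = []              # list of (event time, risk index) pairs
    k, s = 0, 0.0
    while s < tau:
        fired = [q for q in range(Q)
                 if rng.random() < min(1.0, hazard(s, q, last_event, counts,
                                                   alphas, kappas, thetas) * ds)]
        if fired:
            q = rng.choice(fired)      # keep at most one event per interval
            counts[q] += 1
            last_event[q] = s + ds     # effective age resets at an event
            events.append((s + ds, q))
        k += 1
        s = k * ds
    return events
```

With a coarser grid such as `ds=0.01`, the same logic runs faster at the cost of a cruder approximation to the continuous-time model, mirroring the trade-off noted above.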
Note that whether we reach an absorbing state or not, there will always be right-censored event times or sojourn times.\n\nWe remark that the event time generation could also have been implemented by first generating a sojourn time and then deciding which event occurred according to a multinomial distribution at the realized sojourn time, as indicated in subsection \\ref{subsec: joint model}. In addition, depending on the form of the baseline hazard rate functions $\\lambda_{0q}(\\cdot)$s (e.g., Weibulls) and the effective age processes $\\mathcal{E}_{iq}(\\cdot)$s (e.g., backward recurrence times), a more direct and efficient manner of generating the event times using a variation of the Probability Integral Transformation is possible, without partitioning the time axis as done above. However, the method above is more general, though approximate, and applicable even if the $\\lambda_{0q}$'s or the $\\mathcal{E}_{iq}$'s are of more complex forms.\n\nGraphical plots associated with the generated illustrative sample data are provided in Figure \\ref{sample data picture} of Section \\ref{subsec: estimation - parametric}. \nFor this sample data set there were $36$ units that reached the absorbing state, with a mean time to absorption of about $t=1$; the other $14$ units did not reach the absorbing state before the end of their respective monitoring periods, with a mean monitoring time of about $t=2$. Recall that the $\\tau_i$'s were distributed as $\\mbox{EXP}(5)$, so the mean of $\\tau_i$ is 5. One may be curious why the mean monitoring time for those that did not get absorbed is about 2 and not close to 5. The reason is an induced selection bias. 
Units with longer monitoring periods are more likely to be absorbed before monitoring ends, so the units that were not absorbed tend to have shorter monitoring times, explaining the reduced mean monitoring time for this subset of units.\n\nWe have developed programs in {\\tt R} \\cite{RCitation} for implementing the semi-parametric estimation procedure for the joint model described above. We used these programs on this illustrative sample data set to obtain estimates of the model parameters and their estimated standard errors.\nThe third and fourth columns of Table \\ref{oneEs} contain the parameter estimates and estimates of their standard errors, respectively, of the finite-dimensional parameters in the first column, whose true values are given in the second column. Figure \\ref{estbase} depicts the true baseline survivor functions, which are Weibull survivor functions, together with their semi-parametric estimates for each of the three risks in the RCR component. The estimates of the baseline survivor functions are obtained from the estimated cumulative hazard functions via the product-integral representation. 
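The conversion from an estimated cumulative hazard to a baseline survivor estimate via the product-integral reduces, for a step-function estimator, to a finite product over the jump points: $\hat{\bar F}_{0q}(t) = \prod_{w \le t} [1 - \hat{\Lambda}_{0q}(dw)]$. A minimal Python sketch (illustrative names, not the paper's {\tt R} code):

```python
def survivor_from_cumhaz(jump_times, jump_sizes, t):
    """Product-integral estimate of a baseline survivor function from the
    jumps of an estimated cumulative hazard:
    S(t) = prod over jump points w <= t of (1 - dLambda(w))."""
    s = 1.0
    for w, d in zip(jump_times, jump_sizes):
        if w <= t:
            s *= max(0.0, 1.0 - d)  # clip at 0 if a jump exceeds 1
    return s
```

For a continuous (jump-free) cumulative hazard the product-integral reduces to $\exp\{-\Lambda(t)\}$; the finite product above is the appropriate analogue for the jump-type estimators used here.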
For this generated sample data, the estimates obtained and the function plots demonstrate reasonable agreement between the true parameter values and true functions and their associated estimates, indicating that the semi-parametric estimation procedure described earlier appears to be viable.\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=\\textwidth,height=7in]{fi\/plotbase} \n\\caption{The true (red, dashed) and estimated (blue, solid) baseline (crude) survivor functions for each of the three risks based on the simulated illustrative sample data set with $n=50$ units depicted in Figure \\ref{sample data picture}.}\n\\label{estbase}\n\\end{figure}\n\n\n\\section{Finite-Sample Properties via Simulation Studies}\n\\label{sec: simulation}\n\n\\subsection{Simulation Design}\n\nWe have provided asymptotic results for the estimators in Section \\ref{sec: Properties}. In this section we present the results of simulation studies to assess the finite-sample properties of the estimators of the model parameters. This will provide some evidence as to whether the semi-parametric estimation procedure, which appears to perform satisfactorily for the single illustrative sample data set in Section \\ref{sec-Illustration}, performs satisfactorily over many sample data sets.\nThese simulation studies were implemented using the {\\tt R} programs we developed, in particular the programs utilized in estimating parameters in the preceding section. In these simulation studies, as in the preceding section, when we analyze each of the sample data sets, the baseline hazard rate functions are estimated semi-parametrically, even though in the generation of each of the sample data sets, two-parameter Weibull models were used in the RCR components. 
\nAside from the set of model parameters described in Section \\ref{sec-Illustration}, the simulation studies have two additional inputs: the sample size $n$ and the number of simulation replications \\mbox{\\tt Mreps}, the latter set to 1000. The sample sizes used in the two simulation studies are $n \\in \\{50, 100\\}$.\nFor fixed $n$, for each of the \\mbox{\\tt Mreps}\\ replications, the sample data generation is as described in Section \\ref{sec-Illustration}.\n\nTable \\ref{genob} presents some summary results pertaining to the three processes based on the \\mbox{\\tt Mreps}\\ replications. The first three rows indicate the means and standard deviations of the number of event occurrences per unit for each risk, and the mean time to the first event occurrence of each risk. For example, risk $1$ occurs about $2.6$ times per unit with a standard deviation of $3.57$. The mean time for risk $1$ to occur for the first time is about $0.48$. We notice that the occurrence frequencies for the three risks are ordered according to $\\mbox{Risk 1} \\succ \\mbox{Risk 2} \\succ \\mbox{Risk 3}$, and consequently risk $1$ tends to have the shortest mean time to the first event. Also note that the mean number of event occurrences per unit for each risk is around $2$, which implies that there are neither too few nor too many RCR events (see the property of ``explosion'' as discussed in \\cite{gjess2010}) per unit. This indicates that the choice of the effective age function $\\mathcal{E}_{iq}(\\cdot)$ and the accumulating event function $\\rho_q(\\cdot)$, together with the parameter values we chose for the data generation, were reasonable. \n\nThe fourth to ninth rows show the mean and standard deviation of the number of transitions to specific states per unit, the mean and standard deviation of occupation times per unit for specific states, and the mean and standard deviation of sojourn times for specific states. 
For example, column 4 tells us that (i) the mean number of transitions to state $2$ of the HS process (HS $2$ for short) per unit is $2.34$; (ii) a unit stays in HS $2$ for about $0.80$ time units on average; and (iii) the mean sojourn time for HS $2$ is about $0.34$. We do not include information for HS $1$ since it is an absorbing state. Comparing the $V=2$ and $V=3$ columns, we find that units tend to transit to HS state $2$ more often than to HS state $3$, and the mean occupation time per unit is longer for state $2$ than for state $3$. For the last three columns, there are more transitions to LM state $2$ than to the other LM states, but the mean occupation time is longest for LM state $1$; thus a unit tends to spend more time in LM state $1$ than in the other two LM states. \n\\begin{table}[ht]\n\\centering\n\\begin{tabular}{c|ccc|cc|ccc}\n \\hline\n & RCR1 & RCR2 & RCR3 & V=2 & V=3 & W=1 & W=2 & W=3 \\\\ \n \\hline\nMean: Count & 2.60 & 2.06 & 1.62 & & & & & \\\\ \n SD: Count & 3.57 & 2.62 & 3.37 & & & & & \\\\ \n Mean: TimePerEvent & 0.48 & 0.61 & 0.78 & & & & & \\\\ \n \\hline\n Mean: NumTransition & & & & 2.34 & 2.10 & 1.16 & 1.36 & 1.10 \\\\ \n SD: NumTransition & & & & 2.17 & 2.02 & 1.31 & 1.24 & 1.33 \\\\ \n Mean: OccupationTime & & & & 0.80 & 0.46 & 0.59 & 0.31 & 0.36 \\\\ \n SD: OccupationTime & & & & 1.83 & 0.78 & 1.51 & 0.58 & 0.71 \\\\ \n Mean: SojournTime & & & & 0.34 & 0.22 & 0.51 & 0.23 & 0.32 \\\\ \n SD: SojournTime & & & & 1.09 & 0.54 & 1.39 & 0.49 & 0.64 \\\\ \n \\hline\n\\end{tabular}\n\\caption{Summary statistics for the three processes over the \\mbox{\\tt Mreps}\\ replications. The first three rows are for RCR events: they give the mean\/standard deviation of the number of recurrent event occurrences per unit for each risk, and the mean time to the first event occurrence for each risk. The fourth to ninth rows are for HS and LM events. 
They indicate the mean\/standard deviation of the number of transitions to the specific state per unit, the mean\/standard deviation of the occupation time for the specific state per unit, and the mean\/standard deviation of the sojourn time for the specific state. State $1$ of the HS process $V$ is absorbing, hence not included in the table.}\n\\label{genob}\n\\end{table}\n \nTo obtain some insight into the model-induced dependencies among the components, we also obtained the correlations among the RCR, LM, and HS processes over time from the simulated data. We first constructed a vector of six variables over a finite number of time points $\\mathcal{S} \\subset [0,\\tau]$ given by\n %\n \\begin{displaymath}\n \\{Z_i(s) \\equiv [I(V_i(s)=2), I(W_i(s)=2), I(W_i(s)=3), N_{i1}^R(s), N_{i2}^R(s), N_{i3}^R(s)]^T : s\\in \\mathcal{S}\\}. \n \\end{displaymath}\n %\nFor each $s \\in \\mathcal{S}$, we then obtained the sample correlation matrix $C(s)$ from $\\{Z_i(s), i=1,2,\\ldots,n\\}$. Each of the \\mbox{\\tt Mreps}\\ replications then yielded a $C(s)$, so we took the element-wise mean of these \\mbox{\\tt Mreps}\\ correlation matrices. The matrix of scatterplots in Figure \\ref{corr} provides the plots of these mean correlation coefficients over time points $s \\in \\mathcal{S}$. The point we are making here is that the joint model does induce non-trivial patterns of dependence over time among the three model components.\n \n \\begin{figure}\n\\begin{center}\n\\includegraphics*[width=\\textwidth]{fi\/correlation2}\n\\caption{Plots of sample (Pearson) correlations over a finite set of time points in $[0,3]$. The random vector, at each time $s$ and for each sample unit, is $Z_i(s) = [I(V_i(s)=2), I(W_i(s)=2), I(W_i(s)=3), N_{i1}^R(s), N_{i2}^R(s), N_{i3}^R(s)]$. The sample correlation matrix is computed based on the $n = 50$ sample units. The element-wise means of the \\mbox{\\tt Mreps}\\ correlation matrices were then computed. 
The plots depict these mean sample correlations for the pairs of variables in this random vector over time $s$.}\n\\label{corr}\n\\end{center}\n\\end{figure}\n\n\\subsection{Finite-Sample Properties of Estimators}\n\nThe set of estimates obtained for one sample data set in the last section is insufficient to assess the performance of the semi-parametric estimation procedure. To get a sense of its performance, we performed simulation studies with $\\mbox{\\tt Mreps} = 1000$ replications and sample sizes $n \\in \\{50, 100\\}$. For each replication, we generated a sample data set according to the same joint model, then obtained the set of estimates, via the semi-parametric procedure, for this data set. \nSummary statistics, such as the means, standard deviations, asymptotic standard errors, percentiles, and boxplots for all \\mbox{\\tt Mreps}\\ estimates were then obtained or constructed. Table \\ref{estab} shows these summary statistics of the \\mbox{\\tt Mreps}\\ estimates for each parameter. The asymptotic standard errors reported in Table \\ref{estab} are the means of the asymptotic standard errors over the \\mbox{\\tt Mreps}\\ replications. Also included are the percentile $95\\%$ confidence intervals for each of the unknown parameters based on the \\mbox{\\tt Mreps}\\ replications, stratified according to $n \\in \\{50, 100\\}$. Since the $\\Lambda_{0q}$s are functional nonparametric parameters, we only provide their estimates at four selected time points. Due to space limitations, we also only provide the results for a small subset of the $\\eta$ and $\\xi$ parameters in Table \\ref{estab}. From these simulation results, we observe that the estimates are close to the true values of the model parameters, and the sample standard deviations are close to the asymptotic standard errors, providing some validation of the semi-parametric estimation procedure for the joint model and empirically lending support to the asymptotic results. 
By comparing the results for $n=50$ and $n=100$ in Table \\ref{estab}, we find that when the sample size $n$ increases, the performance of the estimators of the parameters improves with biases and standard errors decreasing. \n\n\\begin{table}[ht]\n\\begin{center}\n\\begin{tabular}{c|r|ccccc|ccccc}\n\\hline\n \\multicolumn{2}{c|}{Sample Size}\n& \\multicolumn{5}{c|}{$n=50$} \n& \\multicolumn{5}{c}{$n=100$} \\\\\n\\hline\nParameter & True & Mean & SD & ASE & PL & PU & Mean & SD & ASE & PL & PU \\\\\n\\hline\n$\\Lambda_{01}(0.3)$ & 0.11 & 0.09 & 0.03 & 0.04 & 0.05 & 0.16 & 0.1 & 0.02 & 0.03 & 0.07 & 0.15 \\\\\n$\\Lambda_{01}(0.6)$ & 0.44 & 0.43 & 0.09 & 0.07 & 0.22 & 0.66 & 0.42 & 0.06 & 0.05 & 0.3 & 0.6 \\\\\n$\\Lambda_{01}(0.9)$ & 1 & 0.97 & 0.19 & 0.18 & 0.57 & 1.61 & 0.99 & 0.12 & 0.12 & 0.71 & 1.46 \\\\\n$\\Lambda_{01}(1.2)$ & 1.78 & 1.83 & 0.43 & 0.42 & 1.02 & 2.7 & 1.79 & 0.27 & 0.27 & 1.25 & 2.65 \\\\\n$\\Lambda_{02}(0.3)$ & 0.09 & 0.07 & 0.03 & 0.04 & 0.04 & 0.13 & 0.08 & 0.02 & 0.03 & 0.05 & 0.12 \\\\\n$\\Lambda_{02}(0.6)$ & 0.36 & 0.32 & 0.08 & 0.07 & 0.18 & 0.59 & 0.34 & 0.06 & 0.05 & 0.23 & 0.48 \\\\\n$\\Lambda_{02}(0.9)$ & 0.81 & 0.78 & 0.16 & 0.15 & 0.41 & 1.36 & 0.79 & 0.12 & 0.1 & 0.54 & 1.14 \\\\\n$\\Lambda_{02}(1.2)$ & 1.44 & 1.53 & 0.33 & 0.32 & 0.81 & 2.33 & 1.46 & 0.23 & 0.23 & 1.03 & 2.24 \\\\\n$\\alpha_1$ & 1.5 & 1.52 & 0.15 & 0.13 & 1.13 & 2.06 & 1.5 & 0.1 & 0.09 & 1.25 & 1.83 \\\\\n$\\alpha_2$ & 1.2 & 1.21 & 0.15 & 0.14 & 0.8 & 1.76 & 1.2 & 0.11 & 0.09 & 0.95 & 1.58 \\\\\n$\\alpha_3$ & 2 & 2.02 & 0.23 & 0.22 & 1.43 & 2.7 & 2.01 & 0.14 & 0.14 & 1.56 & 2.6 \\\\\n$\\beta^R_1$ & 1 & 1.03 & 0.11 & 0.1 & 0.78 & 1.45 & 1.02 & 0.08 & 0.07 & 0.82 & 1.25 \\\\\n$\\beta^R_2$ & -1 & -1.03 & 0.11 & 0.11 & -1.29 & -0.85 & -1.02 & 0.07 & 0.08 & -1.17 & -0.89 \\\\\n$\\gamma^R_1$ & 1 & 1.03 & 0.1 & 0.08 & 0.8 & 1.47 & 1.02 & 0.06 & 0.06 & 0.86 & 1.25 \\\\\n$\\kappa^R_1$ & 1 & 1.03 & 0.1 & 0.09 & 0.77 & 1.52 & 1.02 & 0.08 & 0.06 & 0.83 & 1.29 
\\\\\n$\\kappa^R_2$ & -1 & -1.02 & 0.15 & 0.14 & -1.51 & -0.52 & -1.01 & 0.1 & 0.1 & -1.33 & -0.74 \\\\\n\\hline\n$\\eta(2,1)$ & 0.1 & 0.1 & 0.04 & 0.03 & 0.04 & 0.18 & 0.1 & 0.02 & 0.02 & 0.05 & 0.16 \\\\\n$\\eta(3,1)$ & 0.1 & 0.1 & 0.04 & 0.03 & 0.04 & 0.19 & 0.1 & 0.02 & 0.02 & 0.06 & 0.17 \\\\\n$\\eta(1,2)$ & 0.2 & 0.2 & 0.05 & 0.06 & 0.11 & 0.29 & 0.2 & 0.04 & 0.04 & 0.12 & 0.29 \\\\\n$\\eta(3,2)$ & 0.2 & 0.21 & 0.05 & 0.05 & 0.11 & 0.29 & 0.2 & 0.03 & 0.03 & 0.13 & 0.29 \\\\\n$\\beta^W_1$ & 1 & 0.98 & 0.23 & 0.21 & 0.51 & 1.45 & 0.99 & 0.16 & 0.15 & 0.68 & 1.32 \\\\\n$\\beta^W_2$ & -1 & -0.99 & 0.18 & 0.2 & -1.24 & -0.75 & -0.99 & 0.14 & 0.14 & -1.16 & -0.81 \\\\\n$\\gamma^W_1$ & 1 & 0.99 & 0.23 & 0.23 & 0.56 & 1.48 & 0.99 & 0.15 & 0.15 & 0.71 & 1.33 \\\\\n$\\nu^W_1$ & 1 & 0.99 & 0.07 & 0.06 & 0.74 & 1.25 & 0.99 & 0.05 & 0.04 & 0.82 & 1.15 \\\\\n$\\nu^W_2$ & 1 & 0.99 & 0.09 & 0.07 & 0.71 & 1.29 & 0.99 & 0.06 & 0.06 & 0.8 & 1.17 \\\\\n$\\nu^W_3$ & -2 & -1.98 & 0.09 & 0.08 & -2.42 & -1.6 & -1.99 & 0.06 & 0.06 & -2.25 & -1.72 \\\\\n\\hline\n$\\xi(2,1)$ & 0.2 & 0.19 & 0.05 & 0.05 & 0.11 & 0.29 & 0.2 & 0.04 & 0.04 & 0.13 & 0.28 \\\\\n$\\xi(3,1)$ & 0.05 & 0.05 & 0.02 & 0.02 & 0.02 & 0.1 & 0.05 & 0.02 & 0.01 & 0.01 & 0.08 \\\\\n$\\xi(3,2)$ & 0.5 & 0.49 & 0.07 & 0.06 & 0.4 & 0.6 & 0.5 & 0.05 & 0.04 & 0.41 & 0.6 \\\\\n$\\xi(2,3)$ & 0.5 & 0.48 & 0.14 & 0.15 & 0.4 & 0.59 & 0.49 & 0.09 & 0.1 & 0.4 & 0.6 \\\\\n$\\beta^V_1$ & 1 & 0.99 & 0.16 & 0.15 & 0.65 & 1.35 & 0.99 & 0.09 & 0.1 & 0.78 & 1.21 \\\\\n$\\beta^V_2$ & -1 & -1 & 0.13 & 0.14 & -1.2 & -0.83 & -0.99 & 0.09 & 0.1 & -1.11 & -0.88 \\\\\n$\\kappa^V_1$ & 1 & 1.01 & 0.18 & 0.17 & 0.65 & 1.4 & 1 & 0.13 & 0.12 & 0.74 & 1.27 \\\\\n$\\kappa^V_2$ & -1 & -1.01 & 0.29 & 0.28 & -1.64 & -0.44 & -1.01 & 0.21 & 0.21 & -1.52 & -0.57 \\\\\n$\\nu^V_1$ & 1 & 1 & 0.05 & 0.04 & 0.83 & 1.17 & 1 & 0.04 & 0.04 & 0.89 & 1.13 \\\\\n$\\nu^V_2$ & 1 & 1.01 & 0.06 & 0.05 & 0.82 & 1.21 & 1 & 0.04 & 0.03 & 0.87 & 1.14 \\\\\n$\\nu^V_3$ & -2 & 
-1.99 & 0.07 & 0.05 & -2.29 & -1.73 & -2 & 0.04 & 0.04 & -2.2 & -1.81 \\\\\n\\hline\n\\end{tabular}%\n\\end{center}\n\\caption{Summary statistics of the parameter estimates for the \\mbox{\\tt Mreps}\\ replications in the simulation runs for sample sizes $n=\\{50,100\\}$. The columns are the true values of the model parameters, the sample mean of the estimates, the sample standard deviations, the asymptotic standard errors, the 2.5\\% percentile, and the 97.5\\% percentile. The sample standard deviations are estimates of the standard errors of the estimators. The asymptotic standard errors are the means of the asymptotic standard errors over the \\mbox{\\tt Mreps}\\ replications. The RCR component includes model parameters $\\Lambda$s, $\\alpha$s and those parameters with superscript $R$. The LM component includes model parameters $\\eta$s and those parameters with superscript $W$. The HS component includes model parameters $\\xi$s and those parameters with superscript $V$. \n}\n\\label{estab}\n\\end{table}\n\n\nThe graphical summary of these centered estimates for \\mbox{\\tt Mreps}\\ replications of $n=50$ units is given in Figure \\ref{estbox}. Centering for each estimator is done by subtracting the {\\em true} value of the parameter being estimated. We observe that the medians of all these {\\em centered} estimates are close to $0$. We also observe some outliers, but most of the centered \\mbox{\\tt Mreps}\\ estimates are close to $0$. In Figure \\ref{allba}, we show three types of plots for the baseline survivor function for each risk in the RCR component. The true Weibull type baseline survivor function is plotted in red color, the overlaid plots of a random selection of ten estimates of the baseline survivor functions are in green color, and the mean baseline survivor function based on the \\mbox{\\tt Mreps}\\ estimates is shown in blue color. We observe that there is close agreement between the true (red) and mean (blue) curves. 
Based on these simulation studies, the semi-parametric estimation procedure for the joint model appears to provide reasonable estimates of the true finite- and infinite-dimensional model parameters, at least for the choices of parameter values for these particular simulations. Further simulation and analytical studies are still needed to substantively assess the performance of the semi-parametric estimation procedure for the proposed joint model. \n\n\n\\begin{figure}\n\\centering\n\\begin{tabular}{c}\n\\includegraphics*[width=\\textwidth,height=2.75in]{fi\/centeredest_1} \\\\\n\\includegraphics*[width=\\textwidth,height=2.75in]{fi\/centeredest_2} \\\\\n\\includegraphics*[width=\\textwidth,height=2.75in]{fi\/centeredest_3}\n\\end{tabular}\n\\caption{Boxplots of the centered parameter estimates from \\mbox{\\tt Mreps}\\ replications for simulated data sets each with $n=50$ units. Centering is done by subtracting the true parameter value from each of the \\mbox{\\tt Mreps}\\ estimates.}\n\\label{estbox}\n\\end{figure}\n\n\n\n\\begin{figure}\n\\centering\n\\begin{tabular}{c}\n\\includegraphics*[width=\\textwidth,height=2.75in]{fi\/allbasesur_1} \\\\\n\\includegraphics*[width=\\textwidth,height=2.75in]{fi\/allbasesur_2} \\\\\n\\includegraphics*[width=\\textwidth,height=2.75in]{fi\/allbasesur_3}\n\\end{tabular}\n\\caption{Overlaid plots of the true baseline survivor function (in red), ten simulated estimates of the baseline survivor function (in green), and the mean baseline survivor function based on \\mbox{\\tt Mreps}\\ simulations (in blue) for each of the three risks in the RCR component.}\n\\label{allba}\n\\end{figure}\n\n\n\\section{Illustration on a Real Data Set}\n\\label{sec: realdata}\n\nTo illustrate our estimation procedure on a real data set, we apply the joint model and the semi-parametric estimation procedure to a medical data set with $n = 150$ patients diagnosed with metastatic colorectal cancer which cannot be controlled by curative surgeries. 
This data set was gathered in France from 2002--2007 and was used in \\cite{duc2011}. It consists of two data sets which are deposited in the \\textit{frailtypack} package in the {\\tt R} Library \\cite{RCitation}: data set \\textbf{colorectal.Longi} and data set \\textbf{colorectal}. The data set \\textbf{colorectal.Longi} includes the follow-up period, in years, of the patient's tumor size measurements. The times of first measurements of tumor size vary from patient to patient, so to have all of them start at time `zero', our artificial time origin, we consider these first measurements as their initial states. Subsequent times of measuring tumor size are then in terms of the lengths of time from their time origin. There were a total of $906$ tumor size measurements for all the patients. In order to conform to our discrete-valued LM model, we classify (arbitrarily) the tumor size into three categories (states): $1$, $2$, and $3$, if the tumor size belongs in the intervals $[3.4, 6.6]$, $[2, 3.4]$, and $(0, 2)$, respectively. Since the tumor size is only measured at discrete times, instead of continuously, we assumed that the tumor size state is constant between tumor size measurement times, and consequently the tumor size process could only transition at the times in which tumor size is measured. This assumption is most likely unrealistic, but we do so for the purpose of illustration. The data set \\textbf{colorectal} contains some information about the patient's ID number, covariates $X_1$ and $X_2$, with $X_1=1 (0)$ if patient received treatment C (S); $X_2$ consists of two dummy variables, with $X_2=(0,0),(1,0),(0,1)$ if the initial WHO performance status is $0,1,2$, respectively; the time (in years) of each occurrence of a new lesion since baseline measurement time; and the final right-censored or death time. There were $289$ occurrences of new lesions and $121$ patients died during the study. 
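The (admittedly arbitrary) discretization of tumor size into the three LM states can be written as a simple threshold function. In this sketch the boundary value $3.4$ is assigned to state $1$; this tie-breaking convention, and the function name, are our assumptions.

```python
def tumor_state(size):
    """Discretize a tumor size measurement into the LM state used in the
    illustration: state 1 for sizes in [3.4, 6.6], state 2 for [2, 3.4),
    state 3 for (0, 2). Assigning the boundary 3.4 to state 1 is assumed."""
    if size >= 3.4:
        return 1
    if size >= 2:
        return 2
    return 3
```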
\n\nClearly, this data set is a special case of the type of data appropriate for our proposed joint model, having only one type of recurrent event, one absorbing health status state (dead), and one transient health status state (alive). We assume the effective age $\\mathcal{E}_i(s) = s - S_{iN_i^R(s-)}^R$ after each occurrence of a new lesion, and use $\\rho(k;\\alpha) =\\alpha^{\\log(1+k)}$. The unknown model parameters in the RCR (here, just a recurrent event) component of the model include $\\alpha$ in the $\\rho(\\cdot)$ function, $\\beta^R=[\\beta^R_1,\\beta^R_2,\\beta^R_3]$ for the covariates, and $[\\kappa^R_1,\\kappa^R_2]$ for the LM state. The unknown model parameters in the LM component of the model are $\\beta^W$ for the covariates and $\\nu^W_1$ for the recurrent event process. The unknown parameters in the HS component of the model include $\\beta^V$ for the covariates, $[\\kappa^V_1,\\kappa^V_2]$ for the LM state, and $\\nu^V_1$ for the recurrent event counting process.\n\nWe fitted the joint model to this data set. The resulting model parameter estimates along with the information-based standard error estimates are given in Table \\ref{truees}. The standard errors are obtained by taking the square roots of the diagonal elements of the inverse of the observed profile likelihood information matrix. Based on these estimates, we could also perform hypothesis tests. Thus, for instance, the $p$-values associated with the two-tailed hypothesis tests are also given in the table. The null hypothesis being tested for $\\alpha$ is $H_0: \\alpha=1$, while the null hypotheses for the other model parameters are that their true parameter values are zero. We test $\\alpha=1$ instead of $\\alpha = 0$ because $\\alpha=1$ means that the accumulating number of recurrent event occurrences does not have an impact on subsequent recurrent event occurrences. 
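Up to rounding, the two-tailed $p$-values in Table \ref{truees} can be reproduced from the reported estimates and standard errors via the usual normal (Wald) approximation; the sketch below (function name ours, standard normal reference distribution assumed) checks this for $\alpha$, where the null value is $1$ rather than $0$.

```python
import math

def wald_p_value(estimate, std_err, null_value=0.0):
    """Two-tailed p-value for H0: parameter = null_value, assuming
    (estimate - null_value) / std_err ~ N(0, 1) under H0."""
    z = (estimate - null_value) / std_err
    # standard normal CDF of |z| via the error function
    phi = 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0)))
    return 2.0 * (1.0 - phi)

# alpha-hat = 0.77, SE = 0.22, testing H0: alpha = 1 (values from the table)
p_alpha = wald_p_value(0.77, 0.22, null_value=1.0)  # ~0.30, as reported
```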
From the values in Table \\ref{truees}, the estimate of $\\alpha$ is less than one, which may indicate that each occurrence of a new lesion decreases the risk of future occurrences of new lesions, though the statistical test does not allow us to conclude that $\\alpha < 1$. Based on the set of $p$-values, we find that the initial WHO performance state of $1$ and the tumor size state of $2$ are associated with decreased risk of new lesion occurrences, while an initial WHO performance state of $2$ is associated with an increased risk of new lesion occurrences. An initial WHO performance state of $2$ and the number of occurrences of new lesions are associated with an increased risk of death in the health status. \n\nFinally, we want to emphasize the importance of the effective age process. Parameter estimates under a mis-specified effective age could be biased and lead to potentially misleading conclusions. This is one aspect that domain specialists and statisticians need to consider when assessing the impact of interventions, since the specification of the effective or virtual age has important and consequential implications. 
For more discussions about effective or virtual ages, see the recent papers \\cite{FinCha21,Beu2021}, the last one also touching on the situation where the virtual age function depends on unknown parameters.\n\n\\begin{table}[ht]\n\\centering\n\\begin{tabular}{cccc}\n \\hline\nParameter & Estimate & Est.\\ Standard Error & $p$-value \\\\ \n \\hline\n$\\alpha$ & 0.77 & 0.22 & 0.30 \\\\ \n $\\beta^R_1$ & -0.16 & 0.20 & 0.43 \\\\ \n $\\beta^R_2$ & -0.42 & 0.21 & 0.05 \\\\ \n $\\beta^R_3$ & 0.88 & 0.41 & 0.03 \\\\ \n $\\kappa^R_1$ & -0.52 & 0.27 & 0.05 \\\\ \n $\\kappa^R_2$ & -0.25 & 0.25 & 0.31 \\\\ \n \\hline\n $\\beta^W_1$ & 0.08 & 0.25 & 0.75 \\\\ \n $\\beta^W_2$ & -0.00 & 0.25 & 0.99 \\\\ \n $\\beta^W_3$ & -0.51 & 0.73 & 0.48 \\\\ \n $\\nu^W_1$ & -0.23 & 0.18 & 0.19 \\\\ \n \\hline\n $\\beta^V_1$ & 0.49 & 0.30 & 0.10 \\\\ \n $\\beta^V_2$ & -0.11 & 0.30 & 0.71 \\\\ \n $\\beta^V_3$ & 1.20 & 0.41 & 0.00 \\\\ \n $\\kappa^V_1$ & -0.43 & 0.31 & 0.16 \\\\ \n $\\kappa^V_2$ & -0.18 & 0.33 & 0.59 \\\\ \n $\\nu^V_1$ & 0.71 & 0.14 & 0.00 \\\\\n \\hline\n\\end{tabular}\n\\caption{Parameter estimates, information-based standard errors, and $p$-values for the RCR, LM, and HS processes based on the real data set. The $p$-value is based on the two-tailed hypothesis test that the model parameter is zero (except for the parameter $\\alpha$ where $H_0: \\alpha=1$). The top block includes the model parameters in the RCR component, the middle block includes the model parameters in the LM component, and the bottom block includes the model parameters in the HS component.}\n\\label{truees}\n\\end{table}\n\n\\section{Concluding Remarks}\n\\label{sec: conclusion}\n\nFor the general class of joint models for recurrent competing risks, longitudinal marker, and health status proposed in this paper, which encompasses many existing models considered previously, there are still numerous aspects that need to be addressed in future studies. 
Foremost among these aspects is a more refined analytical study of the finite-sample and asymptotic properties of the estimators of model parameters, together with other inferential and prediction procedures. The finite-sample and asymptotic results could be exploited to enable tests of hypotheses and the construction of confidence regions for model parameters. There is also the interesting aspect of computationally estimating the standard errors of the estimators. How would a bootstrapping approach be implemented in this situation? Another important problem that needs to be addressed is how to perform goodness-of-fit and model validation for this joint model. Though the class of models is very general, there are still possibilities of model mis-specification, for example, in determining the effective age processes, or in the specification of the $\\rho_q(\\cdot)$-functions. What are the impacts of such model mis-specifications? Do they lead to serious biases that could potentially result in misleading conclusions? These are some of the problems whose solutions await further studies.\n\nA potential promise of this joint class of models is in precision medicine. Because all three components (RCR, LM, HS) are taken into account simultaneously, in contrast to a marginal modeling approach, the synergy that this joint model allows may improve decision-making -- for example, in determining interventions to be performed for individual units. In this context, it is of utmost importance to be able to predict the future trajectories of the HS process given information about all three processes at a given point in time. Thus, an important problem to be dealt with in future work is the problem of forecasting using this joint model. How should such forecasting be implemented? This further leads to other important questions. One is determining the relative importance of each of the components in this prediction problem. 
Could one ignore other components and still do as well relative to a joint model-based prediction approach? If there are many covariates, how should the important covariates be chosen in order to improve prediction of, say, the time-to-absorption?\n\nFinally, though our class of joint models is a natural extension of earlier models dealing with recurrent events, competing recurrent events, longitudinal markers, or terminal events, one may impugn it as not realistic and instead view it as a more futuristic class of models, since existing data sets were not gathered in the manner for which these joint models apply. For instance, in the example pertaining to gout in Section \\ref{sec: scenarios}, the SUR level and CKD status are not continuously monitored. However, with the advent of smart devices, such as smart wrist watches, embedded sensors, black boxes, etc., made possible by miniaturized technology, high-speed computing, almost limitless cloud-based memory capacity, and the availability of rich cloud-based databases, the era is, in our opinion, fast approaching when continuous monitoring of longitudinal markers, health status, and occurrences of different types of recurrent events -- be it on a human being, an experimental animal or plant, a machine such as an airplane or car, an engineering system, a business entity, etc. -- will become the standard rather than the exception. Developing the models and methods of analysis for such complex and advanced data sets, {\\em even} before they become available, will hasten and prepare us for their eventual arrival.\n\n\\section*{Acknowledgments}\n\nSome material in this work is based on L.\\ Tong's PhD dissertation research at the University of South Carolina.\nE.\\ Pe\\~na is currently Program Director in the Division of Mathematical Sciences (DMS) at the National Science Foundation (NSF). 
As a consequence of this position, he receives support for research, which included work in this manuscript, under NSF Grant 2049691 to the University of South Carolina. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of NSF. P.\\ Liu acknowledges the 2017--2019 Summer Research Grants from Bentley University. E.\\ Pe\\~na acknowledges NSF Grant 2049691 and NIH Grant P30 GM103336-01A1.\n
Among them, the following strong restrictions were shown using Heegaard Floer homology theory \\cite[Theorem 1.2]{nw}.\n\\begin{equation}\n\\label{eqn:NW} \n\\begin{split}\n&\n\\mbox{If } S^{3}(K,p\\slash q) \\cong S^{3}(K,p'\\slash q'), \\mbox{ then } p' \\slash q'=\\pm p\\slash q \\mbox{ and } q^{2} \\equiv -1 \\pmod p.\\\\ &\\mbox{In particular, } L(p,q) \\cong L(p',q').\n\\end{split}\n\\end{equation}\n\nThe cosmetic surgery conjecture can be regarded as a statement saying that, when we fix a knot $K$, Dehn surgery gives an injective map\n$S^{3}(K,\\ast):\\{\\mbox{Slopes}\\}=\\Q \\cup \\{\\infty\\} \\rightarrow \\{\\mbox{(oriented) 3-manifolds}\\}$\nexcept when $K$ is the unknot. From this point of view, it is natural to ask about the injectivity of the Dehn surgery map when we fix a slope: is the Dehn surgery map\n$S^{3}(\\ast,r):\\{\\mbox{Knots}\\} \\rightarrow \\{\\mbox{(oriented) 3-manifolds}\\}$ injective?\nA slope $r$ is called a \\emph{characterizing slope} of $K$ if the answer is affirmative, that is, if $S^{3}(K,r) \\cong S^{3}(K',r)$ implies $K = K'$.\n\nCompared with purely cosmetic surgeries, the situation for characterizing slopes is more complicated. There are various examples of non-characterizing slopes. For instance, hyperbolic knots with infinitely many (integral) non-characterizing slopes are given in \\cite{bamo}.\nOn the other hand, if $K$ is the unknot \\cite{kmos,os2}, the trefoil, or the figure-eight knot \\cite{os1}, then all slopes are characterizing. Moreover, if $K$ is a torus knot, a slope $r$ is characterizing provided $r$ is sufficiently large \\cite[Theorem 1.3]{nz}, whereas some small slopes are not characterizing.\n\nIn this paper we use the LMO invariant to study the structure of Dehn surgeries along knots. We obtain various constraints for a knot to admit a purely or reflectively cosmetic surgery, or for a slope $r$ to be characterizing. 
\n\n\nThe LMO invariant is an invariant of closed oriented 3-manifolds which takes values in a certain graded algebra $\\mathcal{A}(\\emptyset)$. The degree one part $\\lambda_1$ of the LMO invariant is equal to the Casson-Walker invariant \\cite{lmmo}, which satisfies the following surgery formula:\n\\begin{equation}\n\\label{eqn:LMO1} \\lambda_{1}(S^{3}(K,p \\slash q))=\\frac{1}{2}a_{2}(K) \\frac{q}{p} + \\lambda_{1}(L(p,q)).\n\\end{equation}\nHere $a_{2}(K)$ denotes the coefficient of $z^{2}$ in the Conway polynomial, and $L(p,q) = S^{3}(\\mathsf{Unknot},p\\slash q)$ denotes the $(p,q)$-lens space.\n\nUsing (\\ref{eqn:NW}) and the surgery formula (\\ref{eqn:LMO1}) we immediately get the following constraints for cosmetic surgeries and characterizing slopes. (In \\cite{boli}, this is proved without using (\\ref{eqn:NW}) -- instead they used the Casson-Gordon invariant to get an additional constraint.)\n\n\\begin{theorem}\\cite[Proposition 5.1]{boli}\n\\label{theorem:LMO1}\nLet $K$ and $K'$ be knots in $S^{3}$ and $r,r' \\in \\Q \\setminus \\{0\\}$ with $r\\neq r'$.\n\\begin{enumerate}\n\\item[(i)] If $S^{3}(K,r)\\cong S^{3}(K',r)$ then $a_{2}(K)=a_{2}(K')$.\n\\item[(ii)] If $S^{3}(K,r)\\cong S^{3}(K,r')$ then $a_{2}(K)=0 \\ (=a_{2}({\\sf Unknot}))$.\n\\end{enumerate}\n\\end{theorem}\n\n\nOur purpose is to get further constraints that generalize Theorem \\ref{theorem:LMO1} by looking at higher order parts of the LMO invariant.\n\nTwo knots $K$ and $K'$ are called \\emph{$C_{n+1}$-equivalent} if $v(K)=v(K')$ for all finite type invariants $v$ of degree less than or equal to $n$. \nA knot which is $C_{n+1}$-equivalent to the unknot is called a \\emph{$C_{n+1}$-trivial} knot.\nIn \\cite{gu,ha} it is shown that two knots are $C_{n+1}$-equivalent if and only if they are related by certain local moves called $C_{n+1}$-moves. 
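As an illustration of how the surgery formula (\ref{eqn:LMO1}) is used in Theorem \ref{theorem:LMO1} (ii): once (\ref{eqn:NW}) forces the lens space summands to agree, equality of $\lambda_1$ reduces to equality of the knot-dependent terms, which for distinct slopes forces $a_2(K)=0$. A small exact-arithmetic sketch (function name ours):

```python
from fractions import Fraction

def lambda1_knot_term(a2, p, q):
    """Knot-dependent part (1/2) a_2(K) q/p of lambda_1(S^3(K, p/q));
    the remaining summand lambda_1(L(p,q)) depends only on the lens space."""
    return Fraction(a2 * q, 2 * p)

# If S^3(K, p/q) and S^3(K, p/q') are homeomorphic with q != q' and the
# lens space terms agree, equal lambda_1 forces a2 * (q - q') = 0, i.e. a2 = 0:
assert lambda1_knot_term(0, 7, 2) == lambda1_knot_term(0, 7, -2)
assert lambda1_knot_term(3, 7, 2) != lambda1_knot_term(3, 7, -2)
```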
\n\nIn this terminology, Theorem \\ref{theorem:LMO1} can be understood as saying that Dehn surgery characterizes a knot or a slope \\emph{up to $C_3$-equivalence}: (i) says that if Dehn surgeries of two knots $K$ and $K'$ along the same slope are homeomorphic then $K$ and $K'$ are $C_{3}$-equivalent, and (ii) says that the cosmetic surgery conjecture is true unless $K$ is $C_3$-trivial. \n\n\nIn \\cite{bl} Bar-Natan and Lawrence gave a rational surgery formula for the LMO invariant. First we write down a rational surgery formula for the degree two and three parts of the (primitive) LMO invariant of $S^{3}(K,r)$.\n\n\\begin{theorem}[Surgery formula for $\\lambda_2$ and $\\lambda_3$]\n\\label{theorem:LMO23}\nLet $K$ be a knot in $S^{3}$.\n\\begin{eqnarray*}\n\\lambda_2(S^{3}(K,p\/q))\\!\\!&\\!\\! = \\!&\\!\\!\\!\n\\left(\\!v_2(K)^{2} + \\frac{1}{24}v_2(K)+\\frac{5}{2}v_{4}(K)\\!\\right)\\frac{q^{2}}{p^{2}}-v_3(K)\\frac{q}{p} + \\frac{v_2(K)}{24}\\left( \\frac{1}{p^{2}}-1\\right)\\\\\n & & \\hspace{0.4cm}+ \\lambda_{2}(L(p,q))\\\\\n& \\!\\!=\\!&\\!\\!\\! \\left(\\!\\frac{7a_2(K)^2-a_2(K)-10a_{4}(K)}{8}\\! \\right)\\frac{q^{2}}{p^{2}} -v_{3}(K)\\frac{q}{p} + \\frac{a_{2}(K)}{48}\n\\left(1-\\frac{1}{p^{2}}\\right)\\\\\n & & \\hspace{0.4cm}+ \\lambda_{2}(L(p,q))\\\\\n\\end{eqnarray*}\n\\begin{eqnarray*}\n\\lambda_{3}(S^{3}(K,p\/q)) \\!\n&\\!\\! = \\!& \\!\n-\\left(\\!\\frac{35}{4}v_6(K)\\!+\\!\\frac{5}{24}v_4(K)\\!+\\!10v_2(K)v_4(K)\\!+\\!\\frac{4}{3}v_2(K)^{3}\\!+\\! \\frac{1}{12}v_2(K)^{2}\\!\\right)\\frac{q^{3}}{p^{3}}\\\\\n& &\\! - \\left( \\frac{5}{24}v_4(K) +\\frac{1}{288}v_2(K)+\\frac{1}{12}v_2(K)^{2}\\right)\\frac{q}{p^{3}} \\\\\n& &\\! + \\left(\\frac{5}{2}v_{5}(K)+ 2v_3(K)v_2(K)+\\frac{1}{24}v_3(K)\\right)\\frac{q^2}{p^2} + \\frac{v_3(K)}{24}\\left(\\frac{1}{p^2}-1\\right)\\\\\n& &\\! 
- \\left(w_4(K)-\\frac{1}{12}v_2(K)^{2}-\\frac{1}{288}v_2(K)-\\frac{5}{24}v_4(K)\\right)\\frac{q}{p} +\\lambda_{3}(L(p,q))\n\\end{eqnarray*}\n\n\\end{theorem}\n\nHere $v_2(K),v_3(K),v_4(K),w_4(K),v_5(K)$ and $v_6(K)$ are certain canonical finite type invariants of the knot $K$ (see Section \\ref{section:LMO} for details -- as we will see in Lemma \\ref{lemma:AJ}, all of them except $v_5$ are determined by the Alexander and Jones polynomials). Also, $a_{2n}(K)$ is the coefficient of $z^{2n}$ in $\\nabla_{K}(z)$, the Conway polynomial of $K$. \n\n\nThe degree two part of the LMO invariant (combined with (\\ref{eqn:NW})) gives rise to the following.\n\n\\begin{corollary}\n\\label{corollary:LMO2}\nLet $K$ and $K'$ be knots in $S^{3}$, and $r,r' \\in \\Q \\setminus \\{0\\}$ with $r \\neq r'$.\n\\begin{enumerate}\n\\item[(i)] If $S^{3}(K,r) \\cong S^{3}(K,r')$ then $v_3(K)=0$. \n\\item[(ii)] If $S^{3}(K,r) \\cong -S^{3}(K,-r)$ then $v_3(K)=0$. \n\\item[(iii)] If $S^{3}(K,r) \\cong -S^{3}(K,r')$ for $r' \\neq \\pm r$ then either\n\\begin{itemize}\n\\item[(iii-a)] $v_{3}(K)=0$, or,\n\\item[(iii-b)] $v_{3}(K) \\neq 0$ and \n$\\displaystyle \\frac{rr'}{r+r'} = \\frac{7a_2(K)^2-a_2(K)-10a_{4}(K)}{8v_{3}(K)}. $\n\\end{itemize}\n\\item[(iv)] If $S^{3}(K,r) \\cong S^{3}(K',r)$ then either\n\\begin{itemize}\n\\item[(iv-a)] $a_{4}(K)=a_{4}(K')$, $v_{3}(K)=v_{3}(K')$, or,\n\\item[(iv-b)] $a_{4}(K) \\neq a_{4}(K')$, $v_{3}(K)\\neq v_{3}(K')$, and \n$\\displaystyle r = \\frac{5(a_{4}(K)-a_{4}(K'))}{4(v_{3}(K)-v_{3}(K'))}. $\n\\end{itemize}\n\\end{enumerate}\n\\end{corollary}\n\n(i) was proven in \\cite{iw} by a similar argument using Lescop's surgery formula of the Kontsevich-Kuperberg-Thurston invariant \\cite{ko2,kt} (see Remark \\ref{remark:KKT}). 
\n \nWe note that the degree two part gives the following constraint for a knot to admit a lens space surgery.\n\n\\begin{corollary}\n\\label{corollary:Lens}\nIf $S^{3}(K,p\\slash q)$ is a lens space, then \n\\[ \\left( \\frac{7a_2(K)^2-a_2(K)-10a_{4}(K)}{8} \\right)\\frac{q^{2}}{p^{2}} -v_{3}(K)\\frac{q}{p} + \\frac{a_{2}(K)}{48}\n\\left(1-\\frac{1}{p^{2}}\\right) =0.\\]\n\\end{corollary}\n\nBy the cyclic surgery theorem \\cite{cgls}, if $K$ is not a torus knot then $q=1$, hence we get\n\\begin{equation}\n\\label{eqn:Lens-obstruction}\na_{2}(K)p^{2} -48v_{3}(K)p+\\left(42a_2(K)^2-7a_2(K)-60a_{4}(K) \\right) =0.\n\\end{equation}\n\nCombined with the fact that $a_{2}(K),4v_{3}(K)$ and $a_{4}(K)$ are integers, (\\ref{eqn:Lens-obstruction}) yields some interesting information. For example, for a non-torus knot $K$ admitting a lens space surgery, $576v_{3}(K)^{2}-a_{2}(K)(42a_2(K)^2-7a_2(K)-60a_{4}(K))$ must be a square number. If a non-torus knot $K$ admits more than one lens space surgery, the surgery slopes are successive integers \\cite[Corollary 1]{cgls}, so such a knot has $a_{2}(K) \\neq \\pm 1$.\n\n\nThe formula for the degree three part is more complicated. 
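Before moving on, we remark that the integrality constraints coming from (\ref{eqn:Lens-obstruction}) are easy to test numerically. The sketch below (function names ours; for simplicity the inputs are taken to be integers, whereas in general only $4v_3(K)$ is an integer) evaluates the quadratic and the quantity $576v_3(K)^2-a_2(K)(42a_2(K)^2-7a_2(K)-60a_4(K))$, which is one quarter of its discriminant in $p$.

```python
import math

def lens_obstruction(a2, a4, v3, p):
    """LHS of the integral lens space surgery obstruction: a non-torus
    knot K with S^3(K, p) a lens space must make this vanish."""
    return a2 * p**2 - 48 * v3 * p + (42 * a2**2 - 7 * a2 - 60 * a4)

def quarter_discriminant(a2, a4, v3):
    """One quarter of the discriminant of the quadratic in p above; for a
    non-torus knot with a lens space surgery it must be a square number."""
    return 576 * v3**2 - a2 * (42 * a2**2 - 7 * a2 - 60 * a4)

def is_square(n):
    return n >= 0 and math.isqrt(n) ** 2 == n
```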
Fortunately, as for cosmetic surgery, using (\\ref{eqn:NW}) we get the following simple constraints.\n\n\\begin{corollary}\n\\label{corollary:LMO3}\nLet $K$ be a knot in $S^{3}$ and $r= p\\slash q \\in \\Q \\setminus\\{0\\}$.\n\\begin{enumerate}\n\\item[(i)] If $S^{3}(K,r) \\cong S^{3}(K,r')$ for $r'\\neq r$, then \n\\[ p^{2}(24w_{4}(K)-5v_{4}(K)) + 5v_{4}(K) + q^{2}(210v_{6}(K)+5v_{4}(K)) = 0.\\]\n\\item[(ii)] If $S^{3}(K,r) \\cong -S^{3}(K,-r)$, then $v_{5}(K)=0$.\n\\end{enumerate}\n\\end{corollary}\nCorollary \\ref{corollary:LMO3} (i) leads to the following.\n\\begin{corollary}\n\\label{cor:c11}\nThe cosmetic surgery conjecture is true for all knots with at most 11 crossings, except possibly $10_{118}$.\n\\end{corollary}\n\n\nUsing higher degree parts of the LMO invariant and adding suitable $C_n$-equivalence assumptions, we prove the following more direct generalizations of Theorem \\ref{theorem:LMO1}.\n\n\\begin{theorem}\n\\label{theorem:LMOhigh1}\nLet $K$ and $K'$ be knots in $S^{3}$ and $r, r' \\in \\Q \\setminus\\{0\\}$ with $r \\neq r'$.\n\\begin{enumerate}\n\\item[(i)] Assume that $K$ and $K'$ are $C_{2m+2}$-equivalent.\nIf $S^{3}(K,r) \\cong S^{3}(K',r)$ then $a_{2m+2}(K) = a_{2m+2}(K')$.\n\\item[(ii)] Assume that $K$ is $C_{4m+2}$-trivial.\nIf $S^{3}(K,r) \\cong S^{3}(K,r')$ then $a_{4m+2}(K) = 0$.\n\\end{enumerate}\n\\end{theorem}\n\nWe say that $K$ and $K'$ are \\emph{odd $C_{n+1}$}-equivalent if $v(K)=v(K')$ for all \\emph{odd degree} canonical finite type invariants $v$ of degree $\\leq n$. We call a knot \\emph{odd $C_{n+1}$-trivial} if it is odd $C_{n+1}$-equivalent to the unknot.\n\n\nLet $(Z^{\\sigma}(K) \\sqcup \\Omega^{-1})_{e,k}$ denote the part of the Kontsevich invariant of $K$ in bigrading $(e,k)$, normalized so that the unknot takes the value $1$. 
Let $K_{e,k}$ denote the kernel of the Aarhus integration (diagram pairing) $\\langle \\ast , \\strutd^{\\frac{k}{2}}\\rangle: \\mathcal{B}_{e,k} \\rightarrow \\mathcal{A}(\\emptyset)_{e + \\frac{k}{2}}$ (see Section \\ref{section:LMO}).\n\n\n\n\n\\begin{theorem}\n\\label{theorem:LMOhigh2}\nLet $K$ and $K'$ be knots in $S^{3}$ and $r \\in \\Q \\setminus\\{0\\}$.\n\\begin{enumerate}\n\\item[(i)] Assume that $K$ and $K'$ are $C_{2m+1}$-equivalent.\nIf $S^{3}(K,r) \\cong S^{3}(K',r)$ and $a_{2m+2}(K) = a_{2m+2}(K')$, then $(Z^{\\sigma}(K) \\sqcup \\Omega^{-1})_{1,2m}- (Z^{\\sigma}(K') \\sqcup \\Omega^{-1})_{1,2m} \\in K_{1,2m}$.\n\\item[(ii)] Assume that $K$ is odd $C_{4m+1}$-trivial. \nIf $S^{3}(K,r) \\cong - S^{3}(K,-r)$ then $(Z^{\\sigma}(K) \\sqcup \\Omega^{-1})_{1,4m} \\in K_{1,4m}$. \n\\item[(iii)] Assume that $K$ is odd $C_{4m+3}$-trivial.\nIf $S^{3}(K,r) \\cong \\pm S^{3}(K,-r)$ then $(Z^{\\sigma}(K) \\sqcup \\Omega^{-1})_{1,4m+2} \\in K_{1,4m+2}$. \n\\end{enumerate}\n\\end{theorem}\n\n\nAs a corollary, we prove the vanishing of certain finite type invariants coming from the colored Jones polynomial (the quantum $\\mathfrak{sl}_2$ invariant). Let $V_{n}(K;t)$ be the $n$-colored Jones polynomial, normalized so that $V_{n}(\\textsf{Unknot};t) = 1$. The colored Jones polynomials have the following expansion, called the \\emph{loop expansion} or \\emph{Melvin-Morton expansion} \\cite{mm}.\n\\[ V_{n}(K;e^{h}) = \\sum_{e\\geq 0} \\Bigl(\\sum_{k\\geq 0} j_{e,k}(K) (nh)^{k}\\Bigr) h^{e}. 
\\] \nHere the coefficient $j_{e,k}(K) \\in \\Q$ is a canonical finite type invariant of degree $e+k$.\n\n\\begin{corollary}\n\\label{corollary:MM}\nLet $K$ and $K'$ be knots in $S^{3}$ and $r \\in \\Q \\setminus\\{0\\}$.\n\\begin{enumerate}\n\\item[(i)] Assume that $K$ and $K'$ are $C_{2m+1}$-equivalent.\nIf $S^{3}(K,r) \\cong S^{3}(K',r)$ and $a_{2m+2}(K) = a_{2m+2}(K')$, then $j_{1,2m}(K)=j_{1,2m}(K')$.\n\\item[(ii)] Assume that $K$ is odd $C_{4m+1}$-trivial.\nIf $S^{3}(K,r) \\cong - S^{3}(K,-r)$ then $j_{1,4m}(K)=0$. \n\\item[(iii)] Assume that $K$ is odd $C_{4m+3}$-trivial.\nIf $S^{3}(K,r) \\cong \\pm S^{3}(K,-r)$ then $j_{1,4m+2}(K)=0$.\n\\end{enumerate}\n\\end{corollary}\n\nFor a canonical finite type invariant $v$ of degree $n$, $v(\\mbox{mirror of }K) = (-1)^{n}v(K)$. Thus for an amphicheiral knot $K$, $v(K)=0$ for all \\emph{odd} degree canonical finite type invariants $v$. It is conjectured that the converse is true (this is related to a more familiar conjecture that finite type invariants do not detect the orientation of knots \\cite[Problem 1.89]{kir}). Corollary \\ref{corollary:LMO2} (ii), Corollary \\ref{corollary:LMO3} (ii), and Corollary \\ref{corollary:MM} (ii),(iii) say that if $S^{3}(K,r) \\cong -S^{3}(K,-r)$ then various canonical odd degree finite type invariants of $K$ vanish. Thus, they provide supporting evidence for an affirmative answer to the following question.\n\\[ \\mbox{\\emph{If} } S^{3}(K,r) \\cong -S^{3}(K,-r) \\mbox{\\emph{ for some }} r \\neq 0,\\infty, \\mbox{\\emph{ then is }} K \\mbox{\\emph{ amphicheiral?}}\\] \n\n\n\\section*{Acknowledgements}\nThe author would like to thank K. Ichihara for a stimulating talk and discussion which inspired the author to work on this subject, and to thank Z. Wu for pointing out an inaccuracy of a reference in the first version of the paper. 
This work was partially supported by JSPS KAKENHI 15K17540,16H02145.\n\n\n\n\n\\section{LMO invariants and rational surgery formula}\n\\label{section:LMO}\n\nIn this section we briefly review the basics of the Kontsevich and LMO invariants. We use the Aarhus integral construction of the LMO invariant developed in \\cite{bgrt1,bgrt2,bgrt3} and a rational surgery formula for the LMO invariant due to Bar-Natan and Lawrence \\cite{bl}. For the basics of the Kontsevich and the LMO invariants we refer to \\cite{oht}.\n\n\\subsection{Open Jacobi diagrams}\n\n\n \nAn \\emph{(open) Jacobi diagram} or \\emph{(vertex-oriented) uni-trivalent graph} is a graph $D$ whose vertices are either univalent or trivalent, such that at each trivalent vertex $v$ a cyclic order on the three edges around $v$ is fixed. \nThe \\emph{degree} of $D$ is half the number of vertices.\nWe will often call a univalent vertex a \\emph{leg}, and denote the number of the legs of a Jacobi diagram $D$ by $k(D)$.\nFor a Jacobi diagram $D$, let $e(D)=-\\chi(D)$ be the negative of the Euler characteristic of $D$. We call $e(D)$ the \\emph{Euler degree} of $D$. \nThen $\\deg(D)=e(D)+k(D)$.\n\nLet $\\mathcal{B}$ (resp. $\\mathcal{A}(\\emptyset)$) be the vector space over $\\C$ spanned by Jacobi diagrams (resp. Jacobi diagrams without univalent vertices), modulo the AS and IHX relations given in Figure \\ref{fig:IHX}. \n\n\\begin{figure}[htbp]\n\\includegraphics*[width=100mm]{IHX.eps}\n\\caption{The AS and IHX relations: we understand that at each trivalent vertex, the cyclic order is defined by the counterclockwise direction.} \n\\label{fig:IHX}\n\\end{figure} \n\nBy taking the disjoint union $\\sqcup$ as the product, both $\\mathcal{B}$ and $\\mathcal{A}(\\emptyset)$ have the structure of graded algebras. Since the IHX and AS relations and the disjoint union product respect both $k(D)$ and $e(D)$, we view $\\mathcal{B}$ as a bi-graded algebra. 
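The relation $\deg(D)=e(D)+k(D)$ can be verified by pure vertex counting: a uni-trivalent graph with $t$ trivalent and $u$ univalent vertices has $(3t+u)/2$ edges by the handshaking lemma. The following sketch (an illustration, not part of the paper; the sample vertex counts are arbitrary) checks the relation for a few diagrams:

```python
# A uni-trivalent graph with t trivalent and u univalent vertices has
# (3t + u)/2 edges. The paper's quantities are:
#   deg = (t + u)/2,  k = u (number of legs),
#   e = -chi = edges - vertices (minus the Euler characteristic).
def bidegree(t, u):
    edges = (3 * t + u) // 2
    deg = (t + u) // 2
    e = edges - (t + u)
    k = u
    return deg, e, k

# Strut (t, u) = (0, 2), 2-wheel (2, 2), theta graph (2, 0), 4-wheel (4, 4).
for t, u in [(0, 2), (2, 2), (2, 0), (4, 4)]:
    deg, e, k = bidegree(t, u)
    assert deg == e + k  # deg(D) = e(D) + k(D)
```

For example, the strut has $(\deg,e,k)=(1,-1,2)$ and a $2n$-wheel has Euler degree $0$, matching the bigradings used below.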
\nFor $X \\in \\mathcal{B}$ we denote by $X_{e,k}$ the part of $X$ whose bigrading is $(e,k)$.\nStrictly speaking, we will use the completions of $\\mathcal{B}$ and $\\mathcal{A}(\\emptyset)$ with respect to the degrees, which we denote by the same symbols $\\mathcal{B}$ and $\\mathcal{A}(\\emptyset)$ by abuse of notation. \n\n\n\n\n\nLet $\\exp_{\\sqcup}:\\mathcal{B} \\rightarrow \\mathcal{B}$ (or, $\\mathcal{A}(\\emptyset) \\rightarrow \\mathcal{A}(\\emptyset)$) be the exponential map with respect to the $\\sqcup$ product, defined by\n\\[ \\exp_{\\sqcup}(D) = 1 + D + \\frac{1}{2}D \\sqcup D + \\cdots = \\sum_{n=0}^{\\infty} \\frac{1}{n!}\\underbrace{(D \\sqcup\\cdots \\sqcup D)}_{n}. \\] \nWe will simply denote $\\underbrace{(D \\sqcup\\cdots \\sqcup D)}_{n}$ by $D^{n}$. \n\nFor a Jacobi diagram $C$, let $\\partial_{C}:\\mathcal{B} \\rightarrow \\mathcal{B}$ be the differential operator defined by\n\\[ \\partial_{C} (D) =\\begin{cases} 0 & k(C)>k(D) \\\\\n\\sum(\\mbox{glue} \\textit{ all } \\mbox{the legs of } C \\mbox{ to some legs of }D) & k(C)\\leq k(D)\n\\end{cases}\n\\]\nIn a similar manner, we define the pairing $\\langle C,D\\rangle \\in\\mathcal{A}(\\emptyset)$ of $C,D \\in \\mathcal{B}$ by \n\\[ \n\\langle C,D \\rangle = \n\\begin{cases}\n0 & k(C) \\neq k(D)\\\\\n\\sum \\mbox{(glue the legs of } C \\mbox{ to the legs of } D) & k(C)=k(D)\n\\end{cases}\n\\]\nThus $\\partial_{C}(D)=\\langle C,D\\rangle$ if $k(C)=k(D)$.\nIn both cases, the summation runs over all possible ways of gluing \\emph{all} the legs of $C$ to \\emph{some} legs of $D$. We denote this summation by using a box, as in Figure \\ref{fig:pairing}.\nIt is known that $\\partial_{C \\sqcup C'} = \\partial_{C'}\\circ\\partial_{C}$. 
Thus if $C \\in \\mathcal{B}$ is invertible (with respect to the $\\sqcup$ product) then $\\partial_{C}$ is invertible with $\\partial_{C}^{-1}=\\partial_{C^{-1}}$ (see \\cite{bgrt2,blt,bl} for details).\n\n\n\\begin{figure}[htbp]\n\\includegraphics*[width=75mm]{zu_pairing_new.eps}\n\\caption{(i) Differential operator $\\partial_{C}(D)$ (ii) Pairing $\\langle C,D \\rangle$.} \n\\label{fig:pairing}\n\\end{figure}\n\nLet $b_{2i}$ be the modified Bernoulli numbers, defined by \n\\begin{equation}\n\\label{eqn:modBer} \\frac{1}{2}\\log \\frac{\\sinh(x\\slash2)}{x \\slash 2} = \\sum_{i=1}^{\\infty} b_{2i} x^{2i} = \\frac{1}{48}x^{2}-\\frac{1}{5760}x^{4}+ \\frac{1}{362880}x^{6}+ \\cdots. \n\\end{equation}\nFor $q \\in \\Z\\setminus\\{0\\}$, let\n\\[ \\Omega_{q} = \\exp_{\\sqcup} \\Bigl(\\sum_{n=1}^{\\infty} \\frac{b_{2n}}{q^{2n}}\n\\overbrace{\\raisebox{-4.5mm}{\\includegraphics*[width=10mm]{D_12m.eps}}}^{2n} \\;\\Bigr) = 1 + \\frac{1}{48q^{2}}\\Donetwod - \\frac{1}{5760q^{4}}\\Donefourd + \\frac{1}{4608q^{4}}\\Donetwod \\Donetwod +\\cdots. \\]\n\nThe element $\\Omega=\\Omega_{1}$ is called the \\emph{wheel element}.\n\n\n\n\n\\subsection{Wheeled Kontsevich invariant}\n\nThe \\emph{Kontsevich invariant} $Z(K)$ is an invariant of a framed knot, which takes values in $\\mathcal{A}(S^{1})$, the space of Jacobi diagrams over $S^{1}$ \\cite{ko1,bar}. Throughout the paper, we will always assume that the knot $K$ is zero-framed.\nThe target space $\\mathcal{A}(S^{1})$ is isomorphic to $\\mathcal{B}$ as a graded vector space, by the Poincar\\'e-Birkhoff-Witt isomorphism $\\chi: \\mathcal{B} \\rightarrow \\mathcal{A}(S^{1})$. Let $\\sigma:\\mathcal{A}(S^{1}) \\rightarrow \\mathcal{B}$ be the inverse of $\\chi$. In the rest of the paper, we will always view the Kontsevich invariant as taking values in $\\mathcal{B}$, by defining\n\\[ Z^{\\sigma}(K) = \\sigma(Z(K)) \\in \\mathcal{B}. \\]\nWe will denote by $Z^{\\sigma}(K)_{e,k}$ the bigrading $(e,k)$ part of the Kontsevich invariant. 
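The values $b_{2}=\frac{1}{48}$, $b_{4}=-\frac{1}{5760}$, $b_{6}=\frac{1}{362880}$ of the modified Bernoulli numbers can be recovered from the defining series (\ref{eqn:modBer}) by truncated power series arithmetic over the rationals; a self-contained sketch (an illustration, not part of the paper):

```python
from fractions import Fraction
from math import factorial

N = 8  # truncate all power series at x^N

def mul(a, b):
    """Product of two coefficient lists, truncated at degree N."""
    c = [Fraction(0)] * N
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if bj and i + j < N:
                    c[i + j] += ai * bj
    return c

# sinh(x/2)/(x/2) = sum_{n>=0} (x/2)^{2n} / (2n+1)!
f = [Fraction(0)] * N
for n in range(N // 2):
    f[2 * n] = Fraction(1, factorial(2 * n + 1) * 4 ** n)

# (1/2) log f = (1/2) sum_{j>=1} (-1)^{j+1} u^j / j,  where u = f - 1
u = f[:]
u[0] -= 1
g = [Fraction(0)] * N
power = [Fraction(1)] + [Fraction(0)] * (N - 1)  # u^0
for j in range(1, N):
    power = mul(power, u)
    for i in range(N):
        g[i] += Fraction((-1) ** (j + 1), 2 * j) * power[i]

b2, b4, b6 = g[2], g[4], g[6]  # 1/48, -1/5760, 1/362880
```

Exact `Fraction` arithmetic avoids any floating point error; the same values also give the coefficient $\frac{1}{2}b_{2}^{2}=\frac{1}{4608}$ appearing in the expansion of $\Omega_{q}$.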
See \\cite{gr} for a topological meaning of this bigrading. \n\nLet $V_n$ be the vector space spanned by finite type invariants of degree $\\leq n$. The Kontsevich invariant gives a map $\\mathcal{Z}:(\\mathcal{B}_{\\deg =n})^{*} \\rightarrow V_{n}$, by $\\mathcal{Z}(w)(K)= w(Z^{\\sigma}(K)_{\\deg=n})$. Here $w: \\mathcal{B}_{\\deg=n} \\rightarrow \\C$ is an element of $(\\mathcal{B}_{\\deg =n})^{*}$, the dual space of degree $n$ part of $\\mathcal{B}$ and $Z^{\\sigma}(K)_{\\deg=n} \\in \\mathcal{B}_{\\deg=n}$ denotes the degree $n$ part of $Z^{\\sigma}(K)$.\nOn the other hand, there is a map called \\emph{symbol} $\\mathsf{Symb}: V_{n} \\rightarrow (\\mathcal{B}_{\\deg = n})^{*}$ (see \\cite{bar,bg} for definition). A finite type invariant $v \\in V_n$ is called\n\\begin{itemize}\n\\item[--] \\emph{canonical}, if $v= \\mathcal{Z}(\\mathsf{Symb}(v))$.\n\\item[--] \\emph{primitive}, if $v(K\\# K')=v(K)+v(K')$, which is equivalent to saying that $v$ lies in the image of the primitive subspace of $\\mathcal{B}$.\n\\end{itemize}\n\n\nThe \\emph{wheeled Kontsevich invariant} $Z^{\\sf Wheel}(K) \\in \\mathcal{B}$ is a version of the Kontsevich invariant defined as follows.\n\n\n\nLet $\\partial_{\\Omega}= 1 + \\frac{1}{48}\\partial_{\\includegraphics*[width=3mm]{D_12.eps}}+\\cdots$ be the differential operator defined by the wheel element $\\Omega$. 
The \\emph{wheeling map} $\\Upsilon = \\chi \\circ \\partial_{\\Omega}:\\mathcal{B} \\rightarrow \\mathcal{A}(S^{1})$ is the composite of $\\partial_{\\Omega}$\nand the Poincar\\'e-Birkhoff-Witt isomorphism $\\chi$.\nThe wheeling map $\\Upsilon$ gives an isomorphism of \\emph{algebras} \\cite[Wheeling theorem]{blt}, whereas the Poincar\\'e-Birkhoff-Witt isomorphism $\\chi$ only gives an isomorphism of vector spaces.\n\nThe wheeled Kontsevich invariant is the image of the Kontsevich invariant under the inverse of the wheeling map $\\Upsilon$: \n\\[ Z^{\\sf Wheel}(K) = \\Upsilon^{-1}(Z(K)) = (\\partial_{\\Omega})^{-1}\\circ\\sigma(Z(K)) = \\partial_{\\Omega^{-1}} Z^{\\sigma}(K). \\]\n\n\nThe wheel element $\\Omega$ is equal to the Kontsevich invariant of the unknot \\cite[Wheel theorem]{blt}: $Z^{\\sigma}({\\sf Unknot})=\\Omega$. Therefore instead of $Z^{\\sigma}$ or $Z^{\\sf Wheel}$, it is often useful to use $Z^{\\sigma}(K) \\sqcup \\Omega^{-1}$, since $Z^{\\sigma}({\\sf Unknot}) \\sqcup \\Omega^{-1} = 1$.\n\nThe Kontsevich invariant is group-like, so $Z^{\\sigma}(K) = \\exp_{\\sqcup}(z^{\\sigma}(K))$, where $z^{\\sigma}(K)$ denotes the primitive part of $Z^{\\sigma}(K)$. 
We express the low degree part of the primitive Kontsevich invariant as\n\\begin{eqnarray*}\nZ^{\\sigma}(K) \\sqcup \\Omega^{-1}&= &\\exp_{\\sqcup}\\Bigl( v_2(K)\\Donetwod + v_3(K)\\Dtwotwod+ v_{4}(K)\\Donefourd + w_4(K) \\Dthreetwod \\\\\n& & \\hspace{1.8cm}+v_5(K)\\Dtwofourd +v_6(K)\\Donesixd+(\\mbox{higher degree parts}) \\Bigr).\n\\end{eqnarray*}\nHere $v_{2}(K),v_3(K),v_4(K),w_4(K),v_5(K),v_6(K)$ are canonical, primitive finite type invariants of degree $2,3,4,4,5,6$, respectively.\n\nThus the bigrading $(e,k)$ parts of $Z^{\\sigma}(K)$ with $e+\\frac{k}{2} \\leq 3$ are explicitly written as\n\\begin{eqnarray*}\nZ^{\\sigma}(K)&\\!\\!=\\!\\!&1 +\\bigl(v_2(K) + b_2\\bigr) \\Donetwod + v_3(K)\\Dtwotwod+ \\frac{1}{2}\\bigl(v_2(K) +b_{2}\\bigr)^{2} \\Donetwod \\Donetwod +(v_{4}(K)+b_{4}) \\Donefourd\\\\ \n& &+ w_4(K) \\Dthreetwod + v_3(K)(v_2(K)+b_2)\\Donetwod \\Dtwotwod + v_5(K) \\Dtwofourd + \\frac{1}{6}\\bigl(v_2(K) +b_{2}\\bigr)^{3}\\Donetwod \\Donetwod\\Donetwod \\\\\n& &+\\bigl( v_2(K)+b_2\\bigr)\\bigl( v_4(K)+b_4 \\bigr)\\Donetwod \\Donefourd + \\bigl(v_6(K) +b_{6}\\bigr)\\Donesixd+(\\mbox{higher degree parts}).\n\\end{eqnarray*}\nHere $b_{2i}$ denotes the modified Bernoulli numbers given by (\\ref{eqn:modBer}).\n\n\nExcept for $v_{5}(K)$, these finite type invariants can be written in terms of the Conway and Jones polynomials.\nLet $a_{2i}(K)$ be the coefficient of $z^{2i}$ in the Conway polynomial $\\nabla_{K}(z)$ of $K$, and let $j_{n}(K)$ be the coefficient of $h^{n}$ in the Jones polynomial $V_{K}(e^{h})$ of $K$, where we substitute $t=e^{h}$. 
Then we have the following (see Section \\ref{section:sl2computation} for the proof).\n\n\\begin{lemma}\n\\label{lemma:AJ}\n\\begin{enumerate}\n\\item $v_{2}(K)=-\\frac{1}{2}a_{2}(K)$.\n\\item $v_{3}(K)=-\\frac{1}{24}j_{3}(K)$.\n\\item $v_{4}(K)=-\\frac{1}{2}a_{4}(K)-\\frac{1}{24}a_{2}(K)+\\frac{1}{4}a_{2}(K)^{2}$.\n\\item $w_{4}(K)=\\frac{1}{96}j_{4}(K) + \\frac{3}{32}a_{4}(K) - \\frac{9}{2}a_{2}(K)^{2}$.\n\\item $v_{6}(K)=-\\frac{1}{2}a_{6}(K)-\\frac{1}{12}a_{4}(K)-\\frac{1}{720}a_{2}(K)+\\frac{1}{24}a_{2}(K)^{2}+\\frac{1}{2}a_{2}(K)a_{4}(K)-\\frac{1}{6}a_{2}(K)^{3}$.\n\\end{enumerate}\n\\end{lemma}\n\nSince the Jones polynomial has integer coefficients, $j_{3}(K) \\in 6\\Z$ so $4v_{3}(K) \\in \\Z$. The degree three finite type invariant $v_3$ takes the value $-\\frac{1}{4}$ on a right-handed trefoil.\n\nIn general, the Euler degree zero part of the Kontsevich invariant is written in terms of the Alexander polynomial, using the following formula (this is a consequence of the Melvin-Morton-Rozansky conjecture \\cite{bg}).\n\n\\begin{proposition}\n\\label{proposition:MMR}\nLet $-\\frac{1}{2} \\log \\Delta_{K}(e^{x})=\\sum_{n=0}^{\\infty} d_{2n}(K) x^{2n}$ where $\\Delta_{K}(t)$ is the Alexander polynomial of $K$, normalized so that $\\Delta_{K}(t)=\\Delta_{K}(t^{-1})$, $\\Delta_{K}(1)=1$. \nThen the Euler degree zero part of the Kontsevich invariant is\n\\[ \\exp_{\\sqcup}\\Bigl(\\sum_{k=0}^{\\infty} d_{2k}(K) \\overbrace{\\raisebox{-4.5mm}{\\includegraphics*[width=10mm]{D_12m.eps}}}^{2k} \\Bigr).\\]\nIn particular, if $a_2(K)=a_{4}(K)=\\cdots =a_{2m}(K)=0$ for some $m\\geq0$ then $d_{2}(K)=d_4(K)=\\cdots = d_{2m}(K)=0$ and $d_{2m+2}(K)=-\\frac{1}{2}a_{2m+2}(K)$.\n\\end{proposition}\n\n\n\\subsection{LMO invariant and rational surgery formula}\n\nThe LMO invariant $\\widehat{Z}^{LMO}(M)$ is an invariant of an oriented closed 3-manifold $M$ that takes values in $\\mathcal{A}(\\emptyset)$.\nHere we restrict our attention to the case $M=S^{3}(K,r)$ for $r \\neq 0,\\infty$. 
In particular, we will always assume that $M$ is a rational homology sphere.\n\nTo make the computation simpler, we will use the following simplification. \nLet $\\Aoned$ be the theta-shaped Jacobi diagram which generates the degree one part of $\\mathcal{A}(\\emptyset)$.\n\n\n\n\nLet $\\mathcal{A}^{red}$ be the quotient of $\\mathcal{A}(\\emptyset)$ by the ideal generated by $\\Aoned$, and let $\\pi: \\mathcal{A}(\\emptyset) \\to \\mathcal{A}^{red}$ be the quotient map. \nWe call $\\pi(\\widehat{Z}^{LMO}(M)) \\in \\mathcal{A}^{red}$ the \\emph{reduced LMO invariant} and denote it by $Z^{LMO}(M)$.\nBy abuse of notation, we will refer to the reduced LMO invariant simply as the LMO invariant. When we change the orientation, the (reduced) LMO invariant changes as\n\\[ Z^{LMO}_{n}(-M) = (-1)^{n}Z^{LMO}_n(M) \\]\nwhere $Z^{LMO}_{n}(M)$ denotes the degree $n$ part of the LMO invariant.\n\nThe low degree part of the (reduced) LMO invariant is written as\n\\[ Z^{LMO}(M)=1 + \\lambda_{2}(M) \\Atwod + \\lambda_{3}(M) \\Athreed +(\\mbox{higher degree parts}), \\]\nwhere $\\lambda_2(M), \\lambda_3(M) \\in \\C$ are finite type invariants of rational homology spheres.\nIn the rest of the argument, unless otherwise specified, we will always work in $\\mathcal{A}^{red}$. For example, we will always view the pairing $\\langle D,D' \\rangle$ as taking values in $\\mathcal{A}^{red}$, by composing with the quotient map $\\pi$.\n\n\n\n\n\nUsing the simplification $\\Aoned=0$, the (reduced) LMO invariant of the 3-manifold obtained by rational Dehn surgery along a knot $K$ is given by the following simpler formula. Let \\strutd be the strut, the Jacobi diagram homeomorphic to the interval. \n\n\\begin{theorem}[Rational surgery formula \\cite{bl}]\n\\label{theorem:rsurgery}\nLet $K$ be a knot in $S^{3}$. 
Then the (reduced) LMO invariant of $p\/q$-surgery along $K$ is given by\n\\[ Z^{LMO}(S^{3}(K,p\/q))= \\Bigl\\langle Z^{\\sf Wheel}(K) \\sqcup \\Omega_{q} \\pairingcommaa \\exp_{\\sqcup}( -\\frac{q}{2p}\\raisebox{-1mm}{\\includegraphics*[width=7mm]{strut.eps}}) \\Bigr\\rangle. \\]\n\\end{theorem}\n\n\n\nAs an application of the rational surgery formula above, in \\cite[Proposition 5.1]{bl} it is shown that the (reduced) LMO invariant of the lens space $L(p,q)$ is given by\n\\begin{equation}\n\\label{eqn:LMO-lens}\nZ^{LMO}(L(p,q)) = \\langle \\Omega, \\Omega^{-1} \\sqcup \\Omega_{p}\\rangle.\n\\end{equation}\n\nThus the (reduced) LMO invariant of a lens space depends only on $p$.\nIn particular,\n\\begin{equation}\n\\label{eqn:LMO23-Lens}\n\\lambda_{2}(L(p,q))= \\frac{1}{24}\\left( \\frac{1}{48p^{2}}-\\frac{1}{48} \\right),\\quad Z^{LMO}_{2m+1}(L(p,q))= 0 \\ (m\\in \\Z).\n\\end{equation}\n\n\n\n\n\n\n\n\n\n\n\n\\section{Proof of Theorems}\n\nFirst of all, we determine which parts of the Kontsevich invariant contribute to the degree $n$ part of the (reduced) LMO invariant.\n\n\\begin{proposition}\n\\label{prop:degree}\nThe degree $n$ part of the LMO invariant of $S^{3}(K,p\\slash q)$ is determined by the slope $p\\slash q$ and the bigrading $(e,k)$ parts $Z^{\\sigma}(K)_{e,k}$ with $e+ \\frac{k}{2} \\leq n$.\n\\end{proposition}\n\\begin{proof}\n\nBy definition of the pairing, for $D \\in \\mathcal{B}_{e,k}$ we have \n$\\langle D, \\exp_{\\sqcup}(-\\frac{q}{2p} \\strutd ) \\rangle \\in \\mathcal{A}(\\emptyset)_{e+\\frac{k}{2}}$. 
\nThus the degree $n$ part of the LMO invariant of $S^{3}(K,p\\slash q)$ is determined by the bigrading $(e,k)$ parts of $Z^{\\sf Wheel}(K) \\sqcup \\Omega_{q}$ with $e+\\frac{k}{2}=n$.\n\n\nThe bigrading $(e,k)$ part of $Z^{\\sf Wheel}(K) \\sqcup \\Omega_{q}$ is determined by the bigrading $(e',k')$ parts of $Z^{\\sf Wheel}(K)$ with $(e',k') \\in \\{(e,k), (e,k-2),(e,k-4),\\ldots \\}$.\nAlso, by definition of $\\partial_{D}$, if $D \\in \\mathcal{B}_{e,k}$ and $D' \\in \\mathcal{B}_{e',k'}$ then $\\partial_{D}(D') \\in \\mathcal{B}_{e'+e+k,k'-k}$. This shows that the bigrading $(e,k)$ part of $Z^{\\sf Wheel}(K)$ is determined by the bigrading $(e',k')$ parts of $Z^{\\sigma}(K)$ with $(e',k') \\in \\{(e,k), (e-2,k+2),(e-4,k+4),\\ldots \\}$.\n\nThese observations show that the degree $n$ part of the LMO invariant of $S^{3}(K,p\\slash q)$ is determined by the bigrading $(e,k)$ parts $Z^{\\sigma}(K)_{e,k}$ with $e+ \\frac{k}{2} \\leq n$ (and the surgery slope $p\\slash q$).\n\\end{proof}\n\n\n\\begin{proof}[Proof of Theorem \\ref{theorem:LMO23}]\nBy Proposition \\ref{prop:degree}, to compute the degree 2 and 3 parts of the LMO invariant of $S^{3}(K,p\\slash q)$, it is sufficient to consider the bigrading $(e,k)$ parts of $Z^{\\sigma}(K)$ with $e + \\frac{k}{2} \\leq 3$. 
As we have already seen, this is given by\n\\begin{eqnarray*}\nZ^{\\sigma}(K)&\\!\\!=\\!\\!&1 +\\bigl(v_2(K) + b_2\\bigr) \\Donetwod + v_3(K)\\Dtwotwod+ \\frac{1}{2}\\bigl(v_2(K) +b_{2}\\bigr)^{2} \\Donetwod \\Donetwod +(v_{4}(K)+b_{4}) \\Donefourd\\\\ \n& &+ w_4(K) \\Dthreetwod + v_3(K)(v_2(K)+b_2)\\Donetwod \\Dtwotwod + v_5(K) \\Dtwofourd + \\frac{1}{6}\\bigl(v_2(K) +b_{2}\\bigr)^{3}\\Donetwod \\Donetwod\\Donetwod \\\\\n& &+\\bigl( v_2(K)+b_2\\bigr)\\bigl( v_4(K)+b_4 \\bigr)\\Donetwod \\Donefourd + \\bigl(v_6(K) +b_{6}\\bigr)\\Donesixd+(\\mbox{higher degree parts}).\n\\end{eqnarray*}\nHere $b_2=\\frac{1}{48}, b_{4}=-\\frac{1}{5760}, b_{6}= \\frac{1}{362880}$ are modified Bernoulli numbers.\nSince \n\\[ \\begin{cases}\n\\partial_{\\Omega^{-1}} \\left( \\Donetwod \\right) = \\Donetwod-2b_2\\Atwod, \\ \n\\partial_{\\Omega^{-1}} \\left(\\Dtwotwod \\right) =\\Dtwotwod -2 b_2\\Athreed, \\\\\n\\partial_{\\Omega^{-1}}\\left(\\Donetwod \\Donetwod \\right)= \\Donetwod \\Donetwod - 8b_2\\Dthreetwod +\\mbox{(other parts),}\\\\\n\\partial_{\\Omega^{-1}}\\left(\\Donefourd\\right)= \\Donefourd - 10b_2\\Dthreetwod + \\mbox{(other parts)},\n\\end{cases}\n\\]\nthe wheeled Kontsevich invariant is given by\n\\begin{align*}\nZ^{\\sf Wheel}(K)&= 1 +\\bigl(v_2(K) + b_2\\bigr)\\Donetwod\n + v_3(K)\\Dtwotwod + \\frac{1}{2}\\bigl(v_2(K)+b_{2}\\bigr)^{2}\\Donetwod \\Donetwod +(v_{4}(K)+b_{4}) \\Donefourd \\\\\n& \\hspace{0.4cm} + w_4(K)\\Dthreetwod + v_3(K)(v_2(K)+b_2) \\Donetwod\\Dtwotwod + v_5(K)\\Dtwofourd + \\frac{1}{6}\\bigl(v_2(K) +b_{2}\\bigr)^{3}\\Donetwod \\Donetwod \\Donetwod \\\\\n& \\hspace{0.4cm} + \\bigl( v_2(K)+b_2\\bigr)\\bigl( v_4(K)+b_4 \\bigr)\\Donetwod \\Donefourd +\\bigl(v_6(K)+b_{6}\\bigr)\\Donesixd +\\bigr(-2b_2 \\bigr)\\bigl(v_2(K)+ b_2\\bigr)\\Atwod \\\\\n& \\hspace{0.4cm}+ (-2b_2v_3(K))\\Athreed + \\bigl(-8b_2\\frac{1}{2}\\bigl(v_2(K) +b_{2}\\bigr)^{2}-10b_2v_4(K)\\bigr)\\Dthreetwod+ (\\mbox{other parts}).\n\\end{align*}\n\nThus $Z^{\\sf Wheel}(K)\\sqcup 
\\Omega_{q}$ is equal to\n\\begin{eqnarray*}\n& & Z^{\\sf Wheel}(K)\\sqcup \\Omega_{q}\\\\\n&=& 1 +\\bigl(v_2(K) + b_2 + \\frac{b_2}{q^{2}}\\bigr)\\Donetwod + v_3(K)\\Dtwotwod \\\\\n& & + \\Bigl\\{ \\frac{1}{2}\\bigl(v_2(K) +b_{2}\\bigr)^{2} + (v_2(K)+b_2)\\frac{b_2}{q^{2}}+ \\frac{1}{2}\\frac{b_2^2}{q^{4}}\\Bigr\\} \\Donetwod \\Donetwod \\\\\n& & + (v_{4}(K)+b_{4}+\\frac{b_{4}}{q^{4}})\\Donefourd + \\bigl(w_4(K)-4b_2\\bigl(v_2(K) +b_{2}\\bigr)^{2}-10b_2v_4(K)\\bigr)\\Dthreetwod\\\\\n& & + \\bigl(v_3(K)(v_2(K)+b_2) + v_3(K)\\frac{b_2}{q^{2}}\\bigr) \\Donetwod \\Dtwotwod+ v_5(K)\\Dtwofourd\\\\\n& &+ \\Bigl\\{\\frac{1}{6}\\bigl(v_2(K)+b_{2}\\bigr)^{3}+ \\frac{1}{2}(v_2(K)+b_2)^{2}\\frac{b_2}{q^{2}}+(v_{2}(K)+b_{2})\\frac{1}{2}\\frac{b_2^2}{q^{4}}+ \\frac{1}{6}b_{2}^{3}q^{-6}\\Bigr\\}\\Donetwod \\Donetwod \\Donetwod \\\\\n& & + \\Bigl\\{ \\bigl( v_2(K)+b_2\\bigr)\\bigl( v_4(K)+b_4 \\bigr)+ (v_{4}(K)+b_{4})\\frac{b_2}{q^{2}} + (v_{2}(K)+b_2)\\frac{b_4}{q^{4}}+\\frac{ b_{2}b_{4}}{q^{6}}\\Bigr\\}\\Donetwod\\Donefourd \\\\\n& & +\\bigl(v_6(K) +b_{6}+\\frac{b_{6}}{q^{6}}\\bigr)\\Donesixd + \\bigl(-2b_2 \\bigr)\\bigl(v_2(K)+ b_2\\bigr)\\Atwod + (-2b_2v_3(K)){\\raisebox{-2mm}{\\includegraphics*[width=4mm]{A_3.eps}}} + (\\mbox{other parts})\n\\end{eqnarray*}\nBy direct computations (or by the $\\mathfrak{sl}_2$ weight system evaluation that we will use in Section \\ref{section:sl2computation}), the pairings with struts (in $\\mathcal{A}^{red}$) are given by\n\\[ \\begin{cases}\n\\langle \\Dtwotwod \\pairingcommaa \\strutd\\rangle = 2 \\Atwod \\,, \\quad\n\\langle \\Donetwod \\Donetwod \\pairingcommaa \\strutd^2 \\rangle = 16 \\Atwod \\,,\\quad\n\\langle \\Donefourd \\pairingcommaa \\strutd^2 \\rangle = 20 \\Atwod \\,,\\\\\n\\langle \\Dthreetwod \\pairingcommaa \\strutd^2 \\rangle = 2 \\Athreed \\,, \\quad\n\\langle \\Dtwotwod \\Donetwod \\pairingcommaa \\strutd^2 \\rangle = 16 \\Athreed \\,, \\quad\n\\langle \\Dtwofourd \\pairingcommaa \\strutd^2 \\rangle = 20 \\Athreed \\,, 
\\\\\n\\langle \\Donetwod\\Donetwod\\Donetwod \\pairingcommaa \\strutd^{3}\\rangle = 384\\Athreed \\,, \\quad \n\\langle \\Donetwod\\Donefourd \\pairingcommaa \\strutd^{3}\\rangle= 480 \\Athreed \\,,\\\\\n\\langle \\Donesixd \\pairingcommaa \\strutd^{3}\\rangle = 420 \\Athreed.\n\\end{cases}\n\\]\nConsequently, we get\n\\begin{eqnarray*}\n\\lambda_2(S^{3}(K,p\/q)) &=& v_{3}(K)\\bigl(-\\frac{q}{2p}\\bigr)\\cdot 2 \\\\\n\\nonumber & &+ \\left( \\frac{(v_2(K)+b_{2})^{2}}{2} + \\frac{ (v_2(K)+b_2)b_2}{q^{2}} + \\frac{b_2^{2}}{2q^{4}} \\right)\\frac{1}{2}\\bigl(-\\frac{q}{2p}\\bigr)^{2}\\cdot 16\\\\\n\\nonumber & &+\\left( v_{4}(K)+b_{4}+\\frac{b_{4}}{q^{4}} \\right) \\frac{1}{2}\\bigl(-\\frac{q}{2p}\\bigr)^{2}\\cdot 20 + \\bigl(-2b_2 \\bigr)\\bigl(v_2(K)+ b_2\\bigr).\n\\end{eqnarray*}\n\\begin{eqnarray*}\n\\lambda_3(S^{3}(K,p\/q)) &\\!\\! = \\!\\!&\n\\bigl(w_4(K)-4b_2\\bigl(v_2(K) +b_{2}\\bigr)^{2}-10b_2v_4(K)\\bigr)\\bigl( -\\frac{q}{2p}\\bigr) \\cdot 2 \\\\\n& & \\hspace{-1.5cm}+ \\left( v_3(K)(v_2(K)+b_2) + \\frac{v_3(K)b_2}{q^{2}}\\right)\\frac{1}{2}\\bigl( -\\frac{q}{2p}\\bigr)^{2} \\cdot 16 + v_{5}(K)\\frac{1}{2}\\bigl( -\\frac{q}{2p}\\bigr)^{2}\\cdot 20\\\\ \n& &\\hspace{-1.5cm} + \\left( \\frac{\\bigl(v_2(K)+b_{2}\\bigr)^{3}}{6}+ \\frac{(v_2(K)+b_2)^{2}b_{2}}{2q^{2}}+\\frac{(v_{2}(K)+b_{2})b_{2}^{2}}{2q^{4}}+ \\frac{b_{2}^{3}}{6q^{6}}\\right) \\frac{1}{6}\\bigl( -\\frac{q}{2p}\\bigr)^{3}\\cdot 384\\\\ \n& &\\hspace{-1.5cm} + \\left( \\bigl( v_2(K)+b_2\\bigr)\\bigl( v_4(K)+b_4 \\bigr)+ \\frac{(v_{4}(K)+b_{4})b_2}{q^{2}} + \\frac{(v_{2}(K)+b_2)b_{4}}{q^{4}}+ \\frac{b_{2}b_{4}}{q^{6}} \\right) \\frac{1}{6}\\bigl( -\\frac{q}{2p}\\bigr)^{3}\\!\\!\\cdot480\\\\\n& &\\hspace{-1.5cm} + \\left(v_6(K)+b_{6}+\\frac{b_{6}}{q^{6}}\\right)\\frac{1}{6}\\bigl( -\\frac{q}{2p}\\bigr)^{3}\\cdot 420 + (-2b_2v_3(K)).\n\\end{eqnarray*}\nThus we conclude\n\\begin{align*}\n\\lambda_2(S^{3}(K,p\/q))-\\lambda_{2}(L(p,q)) & \\\\\n& \\hspace{-3cm} = \n\\left( v_2(K)^{2} + 
\\frac{1}{24}v_2(K)+\\frac{5}{2}v_{4}(K)\\right)\\frac{q^{2}}{p^{2}}-v_3(K)\\frac{q}{p} + \\frac{v_2(K)}{24}\\left( \\frac{1}{p^{2}}-1\\right) \\\\\n& \\hspace{-3cm} =\n\\left( \\frac{7a_2(K)^2-a_2(K)-10a_{4}(K)}{8} \\right)\\frac{q^{2}}{p^{2}} -v_{3}(K)\\frac{q}{p} + \\frac{a_{2}(K)}{48}\n\\left(1-\\frac{1}{p^{2}}\\right),\n\\end{align*}\n\\begin{align*}\n\\lambda_{3}(S^{3}(K,p\/q))-\\lambda_{3}(L(p,q)) & \\\\\n& \\hspace{-4cm} = \n-\\left(\\frac{35}{4}v_6(K)+\\frac{5}{24}v_4(K)+10v_2(K)v_4(K)+ \\frac{4}{3}v_2(K)^{3}+\\frac{1}{12}v_2(K)^{2} \n\\right)\\frac{q^{3}}{p^{3}}\\\\\n& \\hspace{-3.5cm}\n-\\left( \\frac{5}{24}v_4(K) +\\frac{1}{288}v_2(K)+\\frac{1}{12}v_2(K)^{2}\\right)\\frac{q}{p^{3}}\\\\\n& \\hspace{-3.5cm}\n+ \\left(\\frac{5}{2}v_{5}(K)+ 2v_3(K)v_2(K)+\\frac{1}{24}v_3(K)\\right)\\frac{q^2}{p^2} + \\frac{v_3(K)}{24}\\left(\\frac{1}{p^2}-1\\right)\\\\\n& \n\\hspace{-3.5cm}- \\left(w_4(K)-\\frac{1}{12}v_2(K)^{2}-\\frac{1}{288}v_2(K)-\\frac{5}{24}v_4(K)\\right)\\frac{q}{p}.\n\\end{align*}\n\n\n\\end{proof}\n\n\n\\begin{remark}\n\\label{remark:KKT}\nIn \\cite[Theorem 7.1]{le} Lescop proved a similar formula \n\\begin{equation}\n\\label{eqn:Lescopformula}\n\\lambda_2^{KKT}(S^{3}(K,p\\slash q)) = \\lambda^{'' KKT}_{2}(K) \\frac{q^{2}}{p^{2}} + w_3(K)\\frac{q}{p} + c(p\\slash q)a_{2}(K) + \\lambda_{2}^{KKT}(L(p,q))\n\\end{equation}\nfor the degree two part $\\lambda_2^{KKT}$ of the Kontsevich-Kuperberg-Thurston universal finite type invariant $Z^{KKT}$, which is defined by configuration space integrals \\cite{ko2,kt}.\nFor the degree two part we have $\\lambda_{2}^{KKT} = 2 \\lambda_2$ (note that Lescop uses the coefficient of the Jacobi diagram ${\\raisebox{-2mm}{\\includegraphics*[width=5mm]{A_2p.eps}}} = \\frac{1}{2}{\\raisebox{-2mm}{\\includegraphics*[width=4mm]{A2.eps}}}$) so Theorem \\ref{theorem:LMO23} gives formulae of invariants in Lescop's formula (\\ref{eqn:Lescopformula}), namely, \n\\[ \\lambda^{'' KKT}_{2}(K)= \\frac{7a_2(K)^2-a_2(K)-10a_{4}(K)}{4}, 
w_3(K)=-2v_3(K), c(p\\slash q) = \\frac{1}{24}-\\frac{1}{24p^{2}}.\\]\n\\end{remark}\n\n\n\n\\begin{proof}[Proof of Corollary \\ref{corollary:LMO2}]\n{$ $}\\\\\n\n\\noindent\n(i,ii): \nBy (\\ref{eqn:NW}), it is sufficient to consider the case $r'= - r$.\nAssume that $S^{3}(K,p\/q) \\cong \\pm S^{3}(K,-p\/q)$. Since $\\lambda_2(M)=\\lambda_2(-M)$, by Theorem \\ref{theorem:LMO23} \n\\[ \\lambda_{2}(S^{3}(K,p\/q)) - \\lambda_{2}(\\pm S^{3}(K,-p\/q)) = -2 v_{3}(K) q\\slash p =0. \\]\nTherefore $v_{3}(K)=0$.\\\\\n\n\\noindent\n(iii): Assume that $S^{3}(K,p\/q) \\cong \\pm S^{3}(K, -p'\/q')$. \nSince $H_{1}(S^{3}(K,p\/q))\\cong \\Z\\slash p\\Z$, $p=p'$.\nBy Theorem \\ref{theorem:LMO23} \n\\begin{eqnarray*}\n0 &= & \\lambda_{2}(S^{3}(K,p\/q)) - \\lambda_{2}(- S^{3}(K,p\/q')) \\\\\n& =& \\left(\\frac{7a_{2}(K)^{2}-a_{2}(K)-10a_{4}(K)}{8}\\right)\\frac{q^{2}-q'^{2}}{p^{2}} - v_{3}(K)\\frac{q-q'}{p}.\n\\end{eqnarray*}\nSince $q' \\neq \\pm q$, either $v_{3}(K)= 0$ (and $7a_{2}(K)^{2}-a_{2}(K)-10a_{4}(K)=0$), or, \n\\[ \\frac{p}{q+q'} = \\frac{7a_{2}(K)^{2}-a_{2}(K)-10a_{4}(K)}{8v_{3}(K)}.\\]\n\n\\noindent\n(iv): Assume that $S^{3}(K,p\/q) \\cong \\pm S^{3}(K', p\/q)$. By Theorem \\ref{theorem:LMO1} (i), $a_{2}(K)=a_{2}(K')$. By Theorem \\ref{theorem:LMO23} \n\\[\\lambda_{2}(S^{3}(K,p\/q)) - \\lambda_{2}(S^{3}(K',p\/q)) = -\\frac{5}{4}(a_{4}(K)-a_{4}(K'))\\frac{q^{2}}{p^{2}} - (v_{3}(K)-v_{3}(K'))\\frac{q}{p}=0 .\\]\n\\end{proof}\n\n\\begin{proof}[Proof of Corollary \\ref{corollary:Lens}]\nAssume that $S^{3}(K,p\\slash q) \\cong L(p',q')$. Then $|H_{1}(S^{3}(K,p\\slash q);\\Z)|=p$ and $|H_{1}(L(p',q');\\Z)|=p'$, so $p=p'$. 
By (\\ref{eqn:LMO23-Lens}) $\\lambda_{2}(L(p,q))=\\lambda_{2}(L(p,q'))$ so Theorem \\ref{theorem:LMO23} gives the desired equality.\n\\end{proof}\n\n\n\\begin{proof}[Proof of Corollary \\ref{corollary:LMO3}]\n(i): By (\\ref{eqn:NW}), it is sufficient to consider the case $r'= -r$.\nBy Theorem \\ref{theorem:LMO1} (ii) and Corollary \\ref{corollary:LMO2} (i), $a_{2}(K)=v_2(K)=v_{3}(K)=0$. Thus by Theorem \\ref{theorem:LMO23} \n\\begin{eqnarray*}\n 0 &=& \\lambda_{3}(S^{3}(K,p\/q)) - \\lambda_{3}(S^{3}(K,-p\/q))\\\\\n& =& -\\left( \\frac{35}{2}v_{6}(K) + \\frac{5}{12} v_4(K)\\right) \\frac{q^{3}}{p^{3}} - \\frac{5}{12}v_{4}(K) \\frac{q}{p^{3}} - \\left( 2w_{4}(K)-\\frac{5}{12}v_{4}(K)\\right) \\frac{q}{p}.\n\\end{eqnarray*}\n(ii):\nIf $S^{3}(K,p\/q) \\cong -S^{3}(K,-p\/q)$ then by Corollary \\ref{corollary:LMO2} (ii) $v_{3}(K)=0$. Therefore by Theorem \\ref{theorem:LMO23},\n$ \\lambda_{3}(S^{3}(K,p\/q)) - \\lambda_{3}(-S^{3}(K,-p\/q)) = 5v_5(K)\\frac{q^{2}}{p^{2}}= 0$, hence $v_{5}(K)=0$. \n\\end{proof}\n\n\\begin{proof}[Proof of Corollary \\ref{cor:c11}]\n\nBy Lemma \\ref{lemma:AJ} and Corollary \\ref{corollary:LMO3} (i), if $S^{3}(K,p\\slash q) \\cong S^{3}(K,-p\\slash q)$ then we get\n\\begin{equation}\n\\label{eqn:obstruction}\n(19a_{4}(K)+j_4(K))p^{2} -10a_{4}(K)-(420a_{6}(K)+80a_{4}(K))q^{2}=0.\n\\end{equation}\n\nAccording to \\cite{iw}, the cosmetic surgery conjecture was confirmed for knots with at most 11 crossings, with 8 exceptions\n\\[ 10_{33},10_{118},10_{146},11a_{91},11a_{138},11a_{285},11n_{86},11n_{157}\n\\]\nin the table KnotInfo \\cite{cl}. 
\n\nFor these knots, the values of $a_{4},j_{4}$ and $a_{6}$ are given as follows.\n\n\\begin{table}[htbp]\n\\begin{tabular}{|c|c|c|c|c|c|c|c|c|} \\hline\n& $10_{33}$ & $10_{118}$ & $10_{146}$ & $11a_{91}$ & $11a_{138}$ & $11a_{285}$ & $11n_{86}$ & $11n_{157}$ \\\\ \\hline \n$a_{4}$ &\n4 & 2 & 2 & 0 & 2 & 2& -2& 0 \\\\ \\hline\n$j_{4}$ &-12 &-6 & -6 & 0& -6 & -6& 6& 0\\\\ \\hline\n$a_{6}$ &0& 3 & 0 & -2& -2& -2& -1& -1\\\\ \\hline\n\\end{tabular}\n\\end{table}\n\nNote that (\\ref{eqn:obstruction}) gives a Diophantine equation of the form $ap^{2}-bq^{2}=c$, whose solvability can be checked algorithmically \\cite{aa}. For these knots the equation (\\ref{eqn:obstruction}) has no integer solutions, except in the case $K=10_{118}$ (the author used the computer program at \\cite{so}. In the case $K=10_{118}$ we get the equation $32p^2-20-1420q^{2}=0$ which has the solutions $p=20u+1065v$ and $q=3u-160v$, where $(u,v)$ are the solutions of Pell's equation $u^2 - 2840v^2 = 1$.)\n\n\n\\end{proof}\n\n\nNext we proceed to the higher degree parts. 
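The obstruction (\ref{eqn:obstruction}) has the shape $ap^{2}-bq^{2}=c$ with $a=19a_{4}+j_{4}$, $b=420a_{6}+80a_{4}$, $c=10a_{4}$. A bounded brute-force search (an illustration only; deciding solvability in general requires the Pell-equation methods just cited) reproduces the behaviour described above: for $10_{118}$ the small solution $(p,q)=(20,3)$, the case $u=1$, $v=0$ of the Pell family, appears, while for $10_{33}$ the search is empty (indeed $64p^{2}-320q^{2}=40$ is impossible, as dividing by $8$ gives $8p^{2}-40q^{2}=5$ with even left-hand side):

```python
# Obstruction (eqn:obstruction) in the form a*p^2 - b*q^2 = c,
# with a = 19*a4 + j4, b = 420*a6 + 80*a4, c = 10*a4.
def obstruction_coeffs(a4, j4, a6):
    return 19 * a4 + j4, 420 * a6 + 80 * a4, 10 * a4

def small_solution(a, b, c, bound=100):
    """Search p, q in 1..bound for a*p^2 - b*q^2 = c; None if nothing found."""
    for p in range(1, bound + 1):
        for q in range(1, bound + 1):
            if a * p * p - b * q * q == c:
                return p, q
    return None

# Knot 10_118: (a4, j4, a6) = (2, -6, 3) gives 32 p^2 - 1420 q^2 = 20.
assert small_solution(*obstruction_coeffs(2, -6, 3)) == (20, 3)
# Knot 10_33: (4, -12, 0) gives 64 p^2 - 320 q^2 = 40, impossible modulo 8.
assert small_solution(*obstruction_coeffs(4, -12, 0)) is None
```

A bounded search of course only exhibits solutions; the absence of solutions for the remaining knots is established by the algorithmic solvability test of \cite{aa,so}, not by this sketch.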
To use the $C_{n}$-equivalence assumption, we observe the following.\n\n\\begin{lemma}\n\\label{lemma:KontCn}\nIf $K$ and $K'$ are $C_{n+1}$-equivalent, then for $e+k \\leq n+1$, \n\\[ (Z^{\\sf Wheel}(K) \\sqcup \\Omega_{q} - Z^{\\sf Wheel}(K') \\sqcup \\Omega_{q})_{e,k} = (Z^{\\sigma}(K) \\sqcup \\Omega^{-1})_{e,k} - (Z^{\\sigma}(K') \\sqcup \\Omega^{-1})_{e,k}. \\]\nSimilarly, if $K$ and $K'$ are odd $C_{n+1}$-equivalent, then for $e+k \\leq n+1$ with $e+k$ odd, \n\\[ (Z^{\\sf Wheel}(K) \\sqcup \\Omega_{q} - Z^{\\sf Wheel}(K') \\sqcup \\Omega_{q})_{e,k} = (Z^{\\sigma}(K) \\sqcup \\Omega^{-1})_{e,k} - (Z^{\\sigma}(K') \\sqcup \\Omega^{-1})_{e,k}. \\]\n\\end{lemma}\n\\begin{proof}\nSince $K$ and $K'$ are $C_{n+1}$-equivalent, \n$Z^{\\sigma}(K)_{e',k'} = Z^{\\sigma}(K')_{e',k'}$ if $e'+k' \\leq n$.\nSince for a degree $d$ element $D$ we have $\\partial_{\\Omega^{-1}}(D) = D+ (\\mbox{elements of degree} \\geq d+2 )$, for $e+k \\leq n+1$ \n\\[\nZ^{\\sf Wheel}(K)_{e,k}-Z^{\\sf Wheel}(K')_{e,k} = \\bigl(\\partial_{\\Omega^{-1}}(Z^{\\sigma}(K)-Z^{\\sigma}(K'))\\bigr)_{e,k} =\n(Z^{\\sigma}(K)-Z^{\\sigma}(K'))_{e,k}. \\]\nTherefore for $e+k \\leq n+1$\n\\begin{eqnarray*}\n(Z^{\\sf Wheel}(K) \\sqcup \\Omega_{q} - Z^{\\sf Wheel}(K') \\sqcup \\Omega_{q})_{e,k}& = & \\left( (Z^{\\sf Wheel}(K) - Z^{\\sf Wheel}(K')) \\sqcup \\Omega_{q}\\right)_{e,k}\\\\\n& = & \\left( (Z^{\\sigma}(K)-Z^{\\sigma}(K'))\\sqcup \\Omega_q \\right)_{e,k}\\\\\n& = & (Z^{\\sigma}(K)-Z^{\\sigma}(K'))_{e,k}\\\\\n& = & (Z^{\\sigma}(K) \\sqcup \\Omega^{-1})_{e,k} - (Z^{\\sigma}(K') \\sqcup \\Omega^{-1})_{e,k} \n\\end{eqnarray*}\n\nTo see the latter assertion, we note that both $\\partial_{\\Omega^{-1}}$ and $\\sqcup \\Omega_{q}$ preserve the parity of $D$. 
Namely, \n if $D \\in \\mathcal{B}_{odd}$, where we denote by $\\mathcal{B}_{odd}$ the odd degree part of $\\mathcal{B}$, then $\\partial_{\\Omega^{-1}}(D), D\\sqcup \\Omega_{q} \\in \\mathcal{B}_{odd}$.\n\\end{proof}\n\n\n\\begin{proof}[Proof of Theorem \\ref{theorem:LMOhigh1}]\n{$ $}\\\\\n\n\\noindent\n(i): \nBy Proposition \\ref{prop:degree}, the degree $m+1$ part of the LMO invariant of $S^{3}(K,p\\slash q)$ is determined by $(Z^{\\sf Wheel}(K) \\sqcup \\Omega_{q})_{e,k}$ with $e+\\frac{k}{2}=m+1$.\n\nSince $K$ and $K'$ are $C_{2m+2}$-equivalent, for $e+\\frac{k}{2}=m+1$\n\\begin{eqnarray*}\n(Z^{\\sf Wheel}(K) \\sqcup \\Omega_{q} - Z^{\\sf Wheel}(K') \\sqcup \\Omega_{q})_{e,k} &=&(Z^{\\sigma}(K)\\sqcup \\Omega^{-1})_{e,k} - (Z^{\\sigma}(K')\\sqcup \\Omega^{-1})_{e,k}\\\\\n& & \\hspace{-2cm}=\n \\begin{cases}\n -\\frac{1}{2}(a_{2m+2}(K)-a_{2m+2}(K'))\\overbrace{\\raisebox{-4.5mm}{\\includegraphics*[width=9mm]{D_12m.eps}}}^{2m+2} & (e,k)=(0,2m+2)\\\\\n\\quad 0 & \\mbox{Otherwise},\n\\end{cases}\n\\end{eqnarray*}\nby Lemma \\ref{lemma:KontCn} and Proposition \\ref{proposition:MMR}. Therefore\n\\begin{align}\n\\label{eqn:LMO_2m+1}\nZ^{LMO}_{m+1}(S^{3}(K,p\\slash q))-Z^{LMO}_{m+1}(S^{3}(K',p\\slash q)) & \\\\ \n& \\nonumber \\hspace{-6cm} = \\left\\langle -\\frac{1}{2}(a_{2m+2}(K)-a_{2m+2}(K'))\\overbrace{\\raisebox{-4.5mm}{\\includegraphics*[width=9mm]{D_12m.eps}}}^{2m+2} \\pairingcommab \\frac{1}{(m+1)!}\\left(-\\frac{q}{2p}\\right)^{m+1} \\!\\!\\!\\! 
\\strutd^{m+1} \\right\\rangle \\\\\n \\nonumber&\\ \\hspace{-6cm} = - \\frac{a_{2m+2}(K)-a_{2m+2}(K')}{2(m+1)!}\\left(-\\frac{q}{2p}\\right)^{m+1}\n\\left\\langle \\overbrace{\\raisebox{-4.5mm}{\\includegraphics*[width=9mm]{D_12m.eps}}}^{2m+2} \\pairingcommab \\strutd^{m+1} \\right\\rangle \\pairingperiod\n\\end{align}\n\nAs we will see in Lemma \\ref{lemma:nonvanish},\n$\\left\\langle \\overbrace{\\raisebox{-4.5mm}{\\includegraphics*[width=9mm]{D_12m.eps}}}^{2m+2} \\pairingcommab \\strutd^{m+1} \\right\\rangle \\neq 0$.\nThis shows that $S^{3}(K,p\\slash q) \\cong S^{3}(K',p\\slash q)$ implies $a_{2m+2}(K)=a_{2m+2}(K')$.\\\\\n\n\\noindent\n(ii) By (\\ref{eqn:LMO_2m+1}), if $K$ is $C_{4m+2}$-trivial, then\n\\[ \nZ^{LMO}_{2m+1}(S^{3}(K,p\\slash q))-Z^{LMO}_{2m+1}(L(p,q)) = \\left\\langle -\\frac{1}{2}a_{4m+2}(K)\\overbrace{\\raisebox{-4.5mm}{\\includegraphics*[width=9mm]{D_12m.eps}}}^{4m+2} \\pairingcommab \\frac{1}{(2m+1)!}\\left(-\\frac{q}{2p}\\right)^{2m+1} \\!\\!\\!\\!\\!\\! \\strutd^{2m+1} \\right\\rangle \\pairingperiod \n\\]\nBy (\\ref{eqn:NW}), if $S^{3}(K,p\\slash q) \\cong S^{3}(K,p'\\slash q')$ then \n$\\frac{p}{q}= - \\frac{p'}{q'}$ and $Z^{LMO}_{2m+1}(L(p,q))-Z^{LMO}_{2m+1}(L(p',q'))=0$, hence\n\\begin{eqnarray*}\n0 &=& Z^{LMO}_{2m+1}(S^{3}(K,p\\slash q))-Z^{LMO}_{2m+1}(S^{3}(K,-p\\slash q))\\\\\n & =& - \\frac{a_{4m+2}(K)}{(2m+1)!}\\left(-\\frac{q}{2p}\\right)^{2m+1} \\left\\langle\\overbrace{\\raisebox{-4.5mm}{\\includegraphics*[width=9mm]{D_12m.eps}}}^{4m+2} \\pairingcommab \\strutd^{2m+1} \\right\\rangle \\pairingperiod\n\\end{eqnarray*}\nTherefore $a_{4m+2}(K)=0$.\n\\end{proof}\n\n\\begin{proof}[Proof of Theorem \\ref{theorem:LMOhigh2}]\n{$ $}\\\\\n\n\\noindent\n(i): By the same argument as Theorem \\ref{theorem:LMOhigh1} (i), if $K$ and $K'$ are $C_{2m+1}$-equivalent, then for $e+\\frac{k}{2}=m+1$,\n\\begin{eqnarray*}\n(Z^{\\sf Wheel}(K) \\sqcup \\Omega_{q} - Z^{\\sf Wheel}(K') \\sqcup \\Omega_{q})_{e,k} &=& (Z^{\\sigma}(K)\\sqcup \\Omega^{-1})_{e,k} - 
(Z^{\\sigma}(K')\\sqcup \\Omega^{-1})_{e,k}\\\\\n& & \\hspace{-3cm}= \n \\begin{cases}\n -\\frac{1}{2}(a_{2m+2}(K)-a_{2m+2}(K'))\\overbrace{\\raisebox{-4.5mm}{\\includegraphics*[width=9mm]{D_12m.eps}}}^{2m+2} & (e,k)=(0,2m+2)\\\\\n(Z^{\\sigma}(K) \\sqcup \\Omega^{-1} - Z^{\\sigma}(K') \\sqcup \\Omega^{-1})_{1,2m} & (e,k)=(1,2m)\\\\\n0 & \\mbox{Otherwise.}\n\\end{cases}\n\\end{eqnarray*}\nHence\n\\begin{eqnarray*}\nZ^{LMO}_{m+1}(S^{3}(K,p\\slash q))- Z^{LMO}_{m+1}(S^{3}(K',p\\slash q)) & & \\\\\n& &\\hspace{-6cm} = \\left\\langle -\\frac{1}{2}(a_{2m+2}(K)-a_{2m+2}(K'))\\overbrace{\\raisebox{-4.5mm}{\\includegraphics*[width=9mm]{D_12m.eps}}}^{2m+2}\n\\pairingcommab \\frac{1}{(m+1)!}\\left(-\\frac{q}{2p}\\right)^{m+1} \\!\\!\\!\\! \\strutd^{m+1} \\right\\rangle \\\\\n& & \\hspace{-5cm}+ \\left\\langle (Z^{\\sigma}(K)\\sqcup \\Omega^{-1})_{1,2m} - (Z^{\\sigma}(K')\\sqcup \\Omega^{-1})_{1,2m} \\pairingcommab \\frac{1}{m!}\\left(-\\frac{q}{2p}\\right)^{m}\\!\\!\\!\\! \\strutd^{m} \\right\\rangle \\pairingperiod\n\\end{eqnarray*}\nThus, if $S^{3}(K,p\\slash q) \\cong S^{3}(K', p\\slash q)$ and $a_{4m+4}(K)=a_{4m+4}(K')$ then\n\\[ \\left\\langle \\Bigl( (Z^{\\sigma}(K)\\sqcup \\Omega^{-1})_{1,4m+2} - (Z^{\\sigma}(K')\\sqcup \\Omega^{-1})_{1,4m+2} \\Bigr), \\strutd^{2m+1} \\right\\rangle =0.\n\\]\n\n\\noindent\n(ii,iii) Assume that $K$ is odd $C_{4m+1}$-trivial.\nSince\n\\begin{align*}\nZ^{LMO}_{2m+1}(S^{3}(K,p\\slash q)) = \\sum_{e=0}^{m} \\left\\langle (Z^{\\sf Wheel}(K)\\sqcup \\Omega_{q})_{2e,4m+2-4e} \\pairingcommab \\frac{1}{(2m+1-2e)!} \\left( -\\frac{q}{2p}\\strutd \\right)^{2m+1-2e} \\right\\rangle \\\\\n+ \\sum_{e=0}^{m} \\left\\langle (Z^{\\sf Wheel}(K)\\sqcup \\Omega_{q})_{2e+1,4m-4e} \\pairingcommab\\frac{1}{(2m-2e)!} \\left( -\\frac{q}{2p}\\strutd \\right)^{2m-2e} \\right\\rangle\n\\end{align*}\nwe get\n\\begin{align*}\n&0= Z^{LMO}_{2m+1}(S^{3}(K,p\\slash q))-Z^{LMO}_{2m+1}(-S^{3}(K,-p\\slash q)) = Z^{LMO}_{2m+1}(S^{3}(K,p\\slash q)) + 
Z^{LMO}_{2m+1}(S^{3}(K,-p\\slash q))\\\\\n& \\hspace{0.5cm}= 2 \\sum_{e=0}^{m} \\left\\langle (Z^{\\sf Wheel}(K)\\sqcup \\Omega_{q})_{2e+1,4m-4e} \\pairingcommab \\frac{1}{(2m-2e)!} \\left( -\\frac{q}{2p}\\strutd \\right)^{2m-2e} \\right\\rangle \\pairingperiod\n\\end{align*}\nBy Lemma \\ref{lemma:KontCn}, $(Z^{\\sf Wheel}(K)\\sqcup \\Omega_{q})_{2e+1,4m-4e} = 0$ unless $e=0$. Moreover for $e=0$ we have\n\\[\n(Z^{\\sf Wheel}(K)\\sqcup \\Omega_{q})_{1,4m}=(Z^{\\sf Wheel}({\\sf Unknot})\\sqcup \\Omega_q)_{1,4m} + (Z^{\\sigma}(K)\\sqcup \\Omega^{-1})_{1,4m} - (Z^{\\sigma}({\\sf Unknot})\\sqcup \\Omega^{-1})_{1,4m}.\n\\]\nSince for any $X \\in \\mathcal{B}$ and $q \\in \\Z \\setminus\\{0\\}$, $(X\\sqcup \\Omega_q^{\\pm 1})_{1,k}= X_{1,k}$, we have\n\\[ (Z^{\\sf Wheel}({\\sf Unknot})\\sqcup \\Omega_q)_{1,4m} = (Z^{\\sigma}({\\sf Unknot})\\sqcup \\Omega^{-1})_{1,4m}.\\]\nThus we get\n\\[ (Z^{\\sf Wheel}(K)\\sqcup \\Omega_{q})_{1,4m} = (Z^{\\sigma}(K)\\sqcup \\Omega^{-1})_{1,4m}.\\]\nHence\n\\[ 0= \\frac{2}{(2m)!}\\left(-\\frac{q}{2p}\\right)^{2m} \\!\\!\\left\\langle (Z^{\\sigma}(K)\\sqcup \\Omega^{-1})_{1,4m} \\pairingcommaa \\strutd^{2m} \\right\\rangle .\n\\]\n(iii) is proved similarly.\n\n\\end{proof}\n\n\n\\begin{proof}[Proof of Corollary \\ref{corollary:MM}]\nLemma \\ref{lemma:nonvanish2} in the next section shows that if $(Z^{\\sigma}(K)\\sqcup \\Omega^{-1})_{1,2m} \\in K_{1,2m}$ then $j_{1,2m}(K)=0$.\n\\end{proof}\n\n\\section{Some $\\mathfrak{sl}_2$ weight system computations}\n\\label{section:sl2computation}\n\nIn this section we use the $(\\mathfrak{sl}_2,V_n)$ weight system, a linear map $W_{(\\mathfrak{sl_2},V_n)} : \\mathcal{B} \\ (\\mbox{or } \\mathcal{A}(\\emptyset))\\rightarrow \\C[[h]]$ that comes from the Lie algebra $\\mathfrak{sl}_2$ and its $n$-dimensional irreducible representation $V_n$, to confirm some assertions used in previous sections.\n\n\nThe image of $W_{\\mathfrak{sl}_2,V_n}$ can be calculated recursively by the following relations 
\\cite{cv}:\n\\begin{enumerate}\n\\item $W_{\\mathfrak{sl_2}}(\\raisebox{-2.5mm}{\\includegraphics*[width=7mm]{I.eps}}) = 2h\\Bigl(W_{\\mathfrak{sl_2}}(\\raisebox{-2.5mm}{\\includegraphics*[width=7mm]{P.eps}})- W_{\\mathfrak{sl_2}}(\\raisebox{-2.5mm}{\\includegraphics*[width=7mm]{X.eps}})\\Bigr)$ \n\\item $W_{\\mathfrak{sl_2}}(\\raisebox{-2.5mm}{\\includegraphics*[width=7mm]{bubble.eps}}) = 4h W_{\\mathfrak{sl_2}}(\\raisebox{-2.5mm}{\\includegraphics*[width=7mm]{line.eps}})$.\n\\item $W_{\\mathfrak{sl_2}}(\\raisebox{-3mm}{\\includegraphics*[width=7mm]{circ.eps}})=3$.\n\\item $W_{(\\mathfrak{sl_2},V_n)}(D \\sqcup \\raisebox{-1mm}{\\includegraphics*[width=7mm]{strut.eps}}) = h \\frac{n^{2}-1}{2}W_{(\\mathfrak{sl_2},V_n)}(D)$.\n\\end{enumerate}\nNote that $\\deg W_{(\\mathfrak{sl_2},V_n)}(D) = \\deg(D)$ and relations (1)--(3) do not depend on $n$. Thus for $D \\in \\mathcal{A}(\\emptyset)$, $ W_{(\\mathfrak{sl_2},V_n)}(D)$ does not depend on $n$, so we will simply write it as $W_{\\mathfrak{sl_2}}(D)$.\n\nBy the definitions of the weight system and of the colored Jones polynomial as the quantum $(\\mathfrak{sl}_2, V_n)$ invariant, we have\n\\begin{equation}\n\\label{eqn:Jones}\n W_{(\\mathfrak{sl_2},V_n)}(Z^{\\sigma}(K)\\sqcup \\Omega^{-1}) = V_{n}(K;e^{-h})= \\sum_{e\\geq 0} \\left(\\sum_{k\\geq 0} j_{e,k}(K) (nh)^{k}\\right)h^{e}.\n\\end{equation}\nWe remark that we substitute $e^{-h}$, not $e^{h}$, for the variable $t$ in the colored Jones polynomial, due to the difference between the normalizations of the colored Jones polynomial and of quantum $\\mathfrak{sl}_2$ invariants. 
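For later use, it may help to record a routine consequence of relation (4) (our addition; it is implicit in the computation in the proof of Lemma \ref{lemma:nonvanish2} below). Each strut contributes a factor $h\frac{n^{2}-1}{2}$, and $h^{2}\frac{n^{2}-1}{2}=\frac{(nh)^{2}-h^{2}}{2}$, so applying relation (4) $k$ times gives an expansion in powers of $nh$:

```latex
\begin{align*}
W_{(\mathfrak{sl_2},V_n)}\bigl(h^{e+k}\,\strutd^{\,k}\bigr)
 &= h^{e+k}\Bigl(h\,\frac{n^{2}-1}{2}\Bigr)^{k}
  = h^{e}\Bigl(\frac{(nh)^{2}-h^{2}}{2}\Bigr)^{k}\\
 &= \frac{1}{2^{k}}\sum_{j=0}^{k}(-1)^{j}\binom{k}{j}\, h^{e+2j}\,(nh)^{2k-2j}.
\end{align*}
```

The $j=0,1,2$ terms are exactly the terms $\frac{1}{2^{k}}h^{e}(nh)^{2k}$, $-\frac{k}{2^{k}}h^{e+2}(nh)^{2k-2}$, $\frac{1}{2^{k}}\binom{k}{2}h^{e+4}(nh)^{2k-4}$ appearing in the proof of Lemma \ref{lemma:nonvanish2}.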
\n\nWe use this to check that the finite type invariants we have used can be written in terms of the Jones and Conway polynomials.\n\n\\begin{proof}[Proof of Lemma \\ref{lemma:AJ}]\n(1),(3) and (5) follow from Proposition \\ref{proposition:MMR} so we prove (2) and (4).\nThe degree three and degree four parts of $(Z^{\\sigma}(K)\\sqcup \\Omega^{-1})$ are given by $v_{3}(K)\\Dtwotwod$ and $\\frac{1}{8}a_{2}(K)^{2}\\Donetwod\\Donetwod - \\frac{1}{2}a_{4}(K) \\Donefourd+ w_{4}(K)\\Dthreetwod$, respectively. Thus by (\\ref{eqn:Jones}), applying $W_{(\\mathfrak{sl_2},V_2)}$ we get\n\\begin{eqnarray*}\nj_{3}(K)(-h)^{3} & = & v_{3}(K) W_{(\\mathfrak{sl}_2, V_2)}(\\Dtwotwod)= 24v_{3}(K)h^{3} \\\\\n j_{4}(K)(-h)^{4} & = & \\frac{1}{8}a_{2}(K)^{2} W_{(\\mathfrak{sl}_2, V_2)}(\\Donetwod\\Donetwod) - \\frac{1}{2}a_{4}(K) W_{(\\mathfrak{sl}_2, V_2)}( \\Donefourd)+ w_{4}(K) W_{(\\mathfrak{sl}_2, V_2)}(\\Dthreetwod)\\\\\n& = &\\frac{1}{8}a_{2}(K)^{2}36h^{4} - \\frac{1}{2}a_{4}(K) 18h^{4}+ w_{4}(K)96h^{4}\\\\\n& = & \\left(\\frac{9}{2}a_{2}(K)^{2} -9a_{4}(K) + 96w_{4}(K)\\right) h^{4}.\n\\end{eqnarray*}\n\\end{proof}\n\n\n\nFor two Jacobi diagrams $D$ and $D'$ we write $D \\equiv D'$ if $D$ is equal to $D'$ modulo the $\\mathfrak{sl}_2$ weight system relations (1)--(3), which are independent of $V_n$. \nBy the $\\mathfrak{sl}_2$ weight system relations (1)--(3) we can remove all trivalent vertices of a Jacobi diagram when the number of univalent vertices is even. \n\n\\begin{lemma}\n\\label{lemma:nonvanish}\n$W_{\\mathfrak{sl}_2} \\Bigl(\\Bigl\\langle \\overbrace{\\raisebox{-4mm}{\\includegraphics*[width=10mm]{D_12m.eps}}}^{2m} \\pairingcommab \\raisebox{-1mm}{\\includegraphics*[width=7mm]{strut.eps}}^m \\Bigr\\rangle \\Bigr) = 2(2h)^{m}(2m+1)!$. 
In particular,\n $\\Bigl\\langle \\overbrace{\\raisebox{-4mm}{\\includegraphics*[width=10mm]{D_12m.eps}}}^{2m} \\pairingcommab \\raisebox{-1mm}{\\includegraphics*[width=7mm]{strut.eps}}^m \\Bigr\\rangle \\neq 0$.\n\\end{lemma}\n\\begin{proof}\nFirst we observe that\n\\[\n \\overbrace{\\raisebox{-4mm}{\\includegraphics*[width=10mm]{D_12m.eps}}}^{2m} \\equiv (2h)^{m}\\sum_{\\be=(e_1,\\ldots,e_m) \\in \\{0,1\\}^{m}} (-1)^{e_1+\\cdots+e_m} D_{\\be} \\\\ \n\\]\nHere for $\\be=(e_1,\\ldots,e_m) \\in \\{0,1\\}^{m}$, $D_{\\be}$ denotes the Jacobi diagram\\\\\n\\[ \\includegraphics*[width=70mm]{zu_d0.eps}\\]\nThen the pairing $ \\left\\langle D_{\\be}, \\raisebox{-1mm}{\\includegraphics*[width=7mm]{strut.eps}}^m \\right\\rangle = {\\raisebox{-5mm}{\\includegraphics*[width=20mm]{d1.eps}}}$\n is given by\n\\begin{equation}\n\\label{eqn:pairing0}\n {\\raisebox{-5mm}{\\includegraphics*[width=25mm]{d1.eps}}} = \\begin{cases} \\includegraphics*[width=30mm]{d3.eps} & (e_{1}=e_2=\\cdots=e_m=0)\\\\\n\\includegraphics*[width=25mm]{d2.eps} & (\\text{otherwise})\n\\end{cases}\n\\end{equation}\nBy definition, $W_{\\mathfrak{sl}_2}\\left({\\raisebox{-3mm}{\\includegraphics*[width=20mm]{d2.eps}}}\\right)= W_{\\mathfrak{sl}_2}\\left(\\Bigl\\langle\\raisebox{-1mm}{\\includegraphics*[width=7mm]{strut.eps}}^{m}, \\raisebox{-1mm}{\\includegraphics*[width=7mm]{strut.eps}}^{m} \\Bigr\\rangle \\right) = (2m+1)!$ (see \\cite[Lemma 6.1]{blt}).\nTherefore by (\\ref{eqn:pairing0})\n\\[\nW_{\\mathfrak{sl}_2} \\Bigl(\\Bigl\\langle \\overbrace{\\raisebox{-4mm}{\\includegraphics*[width=10mm]{D_12m.eps}}}^{2m}{ \\raisebox{-2mm},} \\raisebox{-1mm}{\\includegraphics*[width=7mm]{strut.eps}}^m \\Bigr\\rangle \\Bigr) = 2(2h)^{m}(2m+1)!\n\\]\n\\end{proof}\n\n\\begin{lemma}\n\\label{lemma:nonvanish2}\n\\[ W_{\\mathfrak{sl_2}}\\left(\\left\\langle (Z^{\\sigma}(K)\\sqcup \\Omega^{-1})_{1,2m} {\\raisebox{-2mm},} \\\n\\raisebox{-1.8mm}{\\includegraphics*[width=7mm]{strut.eps}}^{m}\n\\right\\rangle \\right) = 
2^{m}h^{m+1} (2m+1)! j_{1,2m}(K)\\]\n\\end{lemma}\n\\begin{proof}\nLet us put $(Z^{\\sigma}(K)\\sqcup \\Omega^{-1})_{e,2k} \\equiv c_{e,2k}(K) h^{e+k} \\raisebox{-1mm}{\\includegraphics*[width=7mm]{strut.eps}}^{k}$. Then \n\\begin{align*}\nW_{\\mathfrak{sl}_2,V_{n}}((Z^{\\sigma}(K)\\sqcup \\Omega^{-1})_{e,2k}) &= c_{e,2k}(K) h^{e+2k}\\left(\\frac{n^{2}-1}{2}\\right)^{k} \\\\\n&\\hspace{-3.2cm} = \\frac{c_{e,2k}(K)}{2^{k}} h^{e}(nh)^{2k} -\\frac{c_{e,2k}(K)}{2^{k}}kh^{e+2}(nh)^{2k-2}+ \\frac{c_{e,2k}(K)}{2^{k}}\\binom{k}{2}h^{e+4}(nh)^{2k-4}-\\cdots. \\\\\n\\end{align*}\nBy (\\ref{eqn:Jones}) we conclude $j_{1,2m}(K) = \\frac{c_{1,2m}(K)}{2^{m}}$. Then the $\\mathfrak{sl}_2$ weight system evaluation of the desired pairing is \n\\begin{eqnarray*} W_{\\mathfrak{sl_2}}\\left( \\left\\langle(Z^{\\sigma}(K)\\sqcup \\Omega^{-1})_{1,2m} {\\raisebox{-2mm},} \n\\raisebox{-1.8mm}{\\includegraphics*[width=7mm]{strut.eps}}^{m}\n\\right\\rangle \\right)& = & 2^{m}j_{1,2m}(K)h^{m+1} W_{\\mathfrak{sl_2}}\\left({\\raisebox{-3mm}{\\includegraphics*[width=20mm]{d2.eps}}}\\right)\\\\\n& = & 2^{m}h^{m+1} (2m+1)! j_{1,2m}(K)\n\\end{eqnarray*}\n \\end{proof}\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\n\n\\section{Preliminaries}\\label{sec:prelim}\n\nWe denote by $G = (V,E,w)$ a graph with vertex set $V$, edge set $E$, and weight function $w: E(G)\\rightarrow \\mathbb{R}^+$ on its edges. We denote by $\\texttt{MST}(G)$ the minimum spanning tree of $G$; there could be multiple MSTs for $G$, but we may assume w.l.o.g.\\ that there is only one (e.g., by using lexicographic rules to break ties for edges of the same weight). When the graph is clear from the context, we abbreviate $\\texttt{MST}(G)$ as $\\texttt{MST}$. We denote by $w(G) = \\sum_{e\\in E}w(e)$ the {\\em weight} of $G$, i.e., the sum of all edge weights in $G$. \n\nWe use $d_G(u,v)$ to denote the distance between two vertices $u$ and $v$ in $G$. 
\nThe diameter of $G$ is the maximum pairwise distance in $G$, and is denoted by $\\mathsf{Dm}(G)$. \n\n\nFor a subset of vertices $X\\subseteq V$, we denote by $G[X]$ the subgraph of $G$ induced by $X$. We also define a subgraph of $G$ induced by an \\emph{edge set} $F$ by $G[F] = (V, F)$. \n\nLet $H$ be a spanning subgraph of $G$ (with edge weights inherited from $G$). The {\\em stretch} of $H$ is defined as $\\max_{u\\neq v \\in V}\\frac{d_H(u,v)}{d_G(u,v)}$; $H$ is called a {\\em $t$-spanner} of $G$ if its stretch is at most $t$. \nThe next well-known observation,\nwhich states that the stretch of $H$ is realized by an edge of $G$, \n follows from the triangle inequality.\n\\begin{observation} \\label{stretch:ob}\n\t$\\max_{u \\not= v \\in V(G)} \\frac{d_H(u,v)}{d_G(u,v)} = \\max_{(u,v) \\in E(G)} \\frac{d_H(u,v)}{d_G(u,v)}$.\n\\end{observation}\n We say that $H$ is a \\emph{$t$-spanner for a subset of edges $X\\subseteq E$} if $\\max_{(u,v) \\in X} \\frac{d_H(u,v)}{d_G(u,v)} \\leq t$. \n \nOur constructions use the aforementioned linear-time construction of $(2k-1)$-spanners for unweighted graphs by Halperin-Zwick~\\cite{HZ96}, which we record in the following theorem for further use. \n\n\n\n\\begin{theorem}[\\cite{HZ96}]\\label{thm:HalperinZwick} For any unweighted $n$-vertex $m$-edge graph $G$ and any integer $k \\ge 1$, a $(2k-1)$-spanner of $G$ with $O(n^{1+\\frac{1}{k}})$ edges can be constructed deterministically in $O(m + n)$ time.\n\\end{theorem}\n\n\n\n\\section{Optimally Sparse and Light Spanners in $O(m\\alpha(m,n))$ Time}\n\nLe and Solomon~\\cite{LS21} recently showed that a $(2k-1)(1+\\epsilon)$-spanner with lightness $O_{\\epsilon}(n^{1\/k})$ can be constructed in $O_{\\epsilon}(m\\alpha(m,n))$ time; the notation $O_{\\epsilon}(.)$ hides a polynomial factor of $1\/\\epsilon$. However, their spanner is not sparse, i.e., in the worst case, the number of edges of the spanner is $\\Omega(m)$, which could be $\\Omega(n^{2})$ for dense graphs. 
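Observation \ref{stretch:ob} also gives a simple (if slow) way to measure the stretch of a candidate spanner in practice: it suffices to compare distances across the edges of $G$. The following is an illustrative sketch only (the function names are ours, and it makes no attempt at the near-linear running times discussed in this paper):

```python
import heapq
from collections import defaultdict

def _adj(edges):
    # Build an undirected adjacency list from (u, v, w) triples.
    adj = defaultdict(list)
    for u, v, w in edges:
        adj[u].append((v, w))
        adj[v].append((u, w))
    return adj

def _dijkstra(adj, src):
    # Standard Dijkstra; returns a dict of shortest-path distances from src.
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def stretch(G_edges, H_edges):
    # By the observation, max d_H(u,v)/d_G(u,v) over all pairs is
    # attained on an edge (u, v) of G, so only edges of G are checked.
    adj_G, adj_H = _adj(G_edges), _adj(H_edges)
    best = 1.0
    for u, v, _ in G_edges:
        d_G = _dijkstra(adj_G, u)[v]
        d_H = _dijkstra(adj_H, u).get(v, float("inf"))
        best = max(best, d_H / d_G)
    return best
```

Note that we recompute $d_G(u,v)$ for each edge rather than use $w(u,v)$, since an edge need not be a shortest path between its endpoints.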
Here we use the insights we develop in \\Cref{sec:PointerMachine} and \\Cref{sec:sparseRAM} on top of the construction of \\cite{LS21} to obtain a $(2k-1)(1+\\epsilon)$-spanner that is both sparse and light as claimed in \\Cref{thm:2}.\n\n\nFirst, we briefly recap the algorithm of Le and Solomon~\\cite{LS21}, called \\emph{LS algorithm}. LS algorithm first divides $E$ into two sets of edges: $E_{\\text{light}} = \\{e \\in E : w(e) \\leq \\frac{w(\\texttt{MST})}{m\\epsilon}\\}$ and $E_{\\text{heavy}} = E\\setminus E_{\\text{light}}$. Every edge in $E_{\\text{light}}$ will be added to the final spanner, and this only incurs an additive $+\\frac{1}{\\epsilon}$ in the lightness since:\n\\begin{equation}\\label{eq:weightElight}\n\tw(E_{\\text{light}}) \\leq m \\cdot \\frac{w(\\texttt{MST})}{m\\epsilon} \\leq \\frac{w(\\texttt{MST})}{\\epsilon}.\n\\end{equation}\nFor edges in $E_{\\text{heavy}}$, LS algorithm constructs a $(2k-1)(1+\\epsilon)$-spanner $H_{\\text{heavy}}$ that has two properties: \n\\begin{equation} \\label{eq:Hprop}\n\t\\begin{split}\n\t\t(1)\\quad &w(H_{\\text{heavy}})\\leq O_{\\epsilon}(n^{1\/k})w(\\texttt{MST}) \\\\\n\t\t(2) \\quad &d_G(u,v)\\leq d_{H_{\\text{heavy}}}(u,v)\\leq (2k-1)(1+\\epsilon)d_G(u,v) \\quad \\forall (u,v)\\in E_{\\text{heavy}}\n\t\\end{split}\n\\end{equation}\nThe final spanner of the graph is $H = H_{\\text{heavy}}\\cup E_{\\text{light}}$. By \\Cref{eq:weightElight} and \\Cref{eq:Hprop}, it follows that $w(H) = (O_{\\epsilon}(n^{1\/k}) + \\frac{1}{\\epsilon})w(\\texttt{MST}) = O_{\\epsilon}(n^{1\/k}) w(\\texttt{MST})$, and hence the lightness of $H$ is $O_{\\epsilon}(n^{1\/k})$. \n\nOur first observation is that in LS algorithm, $H_{\\text{heavy}}$ does not contain any edge of $E_{\\text{light}}$ other than $\\texttt{MST}$ edges. 
It follows that if we construct a $(2k-1)(1+\\epsilon)$-spanner $H_{\\text{light}}$ for the subgraph of $G$ induced by $E_{\\text{light}}\\cup \\texttt{MST}$ by applying the construction in \\Cref{thm:1}, and set $H = H_{\\text{light}} \\cup H_{\\text{heavy}} \\cup \\texttt{MST}$, then $H$ is still a $(2k-1)(1+\\epsilon)$-spanner of $G$. Furthermore, $w(H) \\leq w(E_{\\text{light}}) + w(H_{\\text{heavy}}) + w(\\texttt{MST}) = O_{\\epsilon}(n^{1\/k})w(\\texttt{MST})$. Thus, the lightness of $H$ is $O_{\\epsilon}(n^{1\/k})$. Observe that $H_{\\text{light}}\\cup \\texttt{MST}$ has sparsity $O_{\\epsilon}(n^{1\/k})$. It follows that, to guarantee that $H$ has sparsity $O_{\\epsilon}(n^{1\/k})$, we need to construct $H_{\\text{heavy}}$ such that its sparsity and lightness are both $O_{\\epsilon}(n^{1\/k})$. Our construction crucially makes use of the cycle property of $\\texttt{MST}$, in the same spirit as the construction in \\Cref{sec:sparseRAM}. \n\n\n\\subsection{The construction of $H_{\\text{heavy}}$}\n\nWe assume that $E_{\\text{heavy}}$ has no edges of weight at least $w(\\texttt{MST})$ since we could safely remove them from $E_{\\text{heavy}}$ without affecting the stretch of the construction. The spanner $H_{\\text{heavy}}$ constructed by LS algorithm is a subgraph of $G_{\\text{heavy}}$, which is the subgraph of $G$ induced by $E_{\\text{heavy}}\\cup E(\\texttt{MST})$. However, the construction operates on a graph $\\tilde{G}$ obtained from $G_{\\text{heavy}}$ by subdividing edges of $\\texttt{MST}$ using \\emph{virtual vertices}. Specifically, we define $\\bar{w} = w(\\texttt{MST})\/\\epsilon$, and for each edge $e\\in \\texttt{MST}$, if $w(e)\\geq \\bar{w}$, we subdivide $e$ into $\\lceil \\frac{w(e)}{\\bar{w}} \\rceil$ edges, each of weight at most $\\bar{w}$, whose total weight is $w(e)$. Let $\\widetilde{\\mst}$ be the subdivided $\\texttt{MST}$ and $\\tilde{G} = (\\tilde{V}, E(\\widetilde{\\mst})\\cup E_{\\text{heavy}})$. 
That is, $\\tilde{G}$ and $G$ share the same set of edges $E_{\\text{heavy}}$. Vertices in $\\tilde{V}\\setminus V$ are called \\emph{virtual vertices}. Le and Solomon~\\cite{LS21} observed that:\n\n\\begin{observation}[Observation 3.4 in \\cite{LS21}]\\label{obs:virtualNum} $|\\tilde{V}| = O(m)$. \n\\end{observation}\n\n\n\nWe now divide $E_{\\text{heavy}}$ further into subsets $\\{E^{\\sigma}\\}_{1\\leq \\sigma \\leq \\mu_\\epsilon}$ with $\\mu_\\epsilon = O(\\frac{1}{\\epsilon}\\log(1\/\\epsilon))$, as we did in \\Cref{sec:PointerMachine} (\\Cref{eq:EsigmaI}):\n\n\\begin{equation}\\label{eq:Esigmaixdef}\n\tE^{\\sigma}_i = \\left\\{e : \\frac{L_i}{1+\\epsilon} \\leq w(e) \\leq L_i \\right\\} \\mbox{ with } L_i = L_{0}\/\\epsilon^i, L_0 = (1+\\epsilon)^{\\sigma}\\bar{w}~. \n\\end{equation}\n\nWe note that constructing every $E^{\\sigma}_i$ can be done in $O(m)$ by simply sorting all indices $i$ such that $E^{\\sigma}_i \\not=\\emptyset$. This is because the maximum index $i_{\\max}$ is $O(\\log(n))$ (Claim 3.5 in \\cite{LS21}) and hence, the sorting step takes only $O(\\log(n)\\log\\log(n)) = O(n)$ time.\n\nThe construction then focuses on each set $E^{\\sigma}$ separately. That is, we construct a $(2k-1)(1+O(\\epsilon))$-spanner $H^{\\sigma}$ for each set $E^{\\sigma}$ in $O(m\\alpha(m,n))$ time, and set $H_{\\text{heavy}} = \\cup_{1\\leq \\sigma \\leq \\mu_\\epsilon}H^{\\sigma}$. It follows that the running time to construct $H_{\\text{heavy}}$ is $O(m\\alpha(m,n)\/\\epsilon)$. Here we slightly abuse notation as $H^{\\sigma}$ is a subgraph of $\\tilde{G}$ instead of being a subgraph of $G_{\\text{heavy}}$. 
However, the difference between $\\tilde{G}$ and $G_{\\text{heavy}}$ lies only in $\\texttt{MST}$ vs $\\widetilde{\\mst}$, and by assuming that $H^{\\sigma}$ contains $\\widetilde{\\mst}$, we can transform $H^{\\sigma}$ into a subgraph of $G_{\\text{heavy}}$ by replacing each path of subdividing virtual vertices with the corresponding original edge of $\\texttt{MST}$. \n\nFor notational convenience, we set $H^{\\sigma}_{0} = (\\tilde{V}, E(\\widetilde{\\mst}))$. The construction of $H^{\\sigma}$ happens in \\emph{levels}: at level $i$, we construct a subgraph $H^{\\sigma}_i$ such that $H^{\\sigma}_{\\leq i}$ is a $(2k-1)(1+O(\\epsilon))$-spanner for edges in $E^{\\sigma}_{\\leq i}$. Here $H^{\\sigma}_{\\leq i} = \\cup_{0\\leq j \\leq i}H^{\\sigma}_j$ and $E^{\\sigma}_{\\leq i} = \\cup_{0\\leq j \\leq i} E^{\\sigma}_{j}$. Recall that $E^{\\sigma}_{0} = \\emptyset$ since every edge in $E_{\\text{heavy}}$ has a weight of at least $\\bar{w}\/\\epsilon$. By induction, $H^{\\sigma} \\defi \\cup_{i\\geq 0}H^{\\sigma}_i$ is a $(2k-1)(1+O(\\epsilon))$-spanner for $\\tilde{G}$. \n\n Similar to the construction of a sparse spanner in \\Cref{sec:PointerMachine}, we construct a hierarchy of clusters, and each level $i\\geq 1$ of the construction is associated with a set of clusters $\\mathcal{C}_i$ satisfying the following properties:\n \n \\begin{enumerate}[noitemsep]\n \t\\item[(1)] \\hypertarget{P1L}{} Each cluster $C\\in \\mathcal{C}_i$ is a subset of $\\tilde{V}$. Furthermore, clusters in $\\mathcal{C}_i$ induce a partition of $\\tilde{V}$.\n \t\\item[(2)]\\hypertarget{P2L}{} Each cluster $C\\in \\mathcal{C}_i$ is the union of $\\Omega(1\/\\epsilon)$ clusters in $\\mathcal{C}_{i-1}$ for any $i\\geq 2$. 
\n \t\\item[(3)]\\hypertarget{P3L}{} Each cluster $C\\in \\mathcal{C}_i$ induces a subgraph $H_{\\leq i-1}^{\\sigma}[C]$ of diameter at most $gL_{i-1}$ for some constant $g$.\n \\end{enumerate}\n \n By property \\hyperlink{P3L}{(3)}, clusters at level $1$ are subgraphs of $H_0$, which is $\\widetilde{\\mst}$. The construction is described in the following lemma.\n \n \\begin{lemma}[Lemma 3.8 in \\cite{LS21}]\\label{lm:level1Const} In time $O(m)$, we can construct a set of level-$1$ clusters $\\mathcal{C}_1$ such that, for each cluster $C\\in \\mathcal{C}_1$, the subtree $\\widetilde{\\mst}[C]$ \tof $\\widetilde{\\mst}$ induced by $C$ satisfies $L_0 \\leq \\mathsf{Dm}(\\widetilde{\\mst}[C]) \\leq 14L_0$. \n \\end{lemma}\n\nThus, by choosing $g\\geq 14$, property \\hyperlink{P3L}{(3)} is satisfied for clusters in $\\mathcal{C}_1$. Property \\hyperlink{P1L}{(1)} follows directly from \\Cref{lm:level1Const}, and property \\hyperlink{P2L}{(2)} is not applicable. \n\nA crucial component in LS algorithm is a potential function $\\Phi: 2^{\\tilde{V}}\\rightarrow \\mathbb{R}^+$ that associates each cluster $C\\in \\mathcal{C}_i$ with a potential value $\\Phi(C)$. Let $\\Phi_i = \\sum_{C\\in \\mathcal{C}_i} \\Phi(C)$ be the total potential value at level $i$. The potential values of level $1$ clusters are defined as follows:\n\\begin{equation}\\label{eq:level1Potential}\n\t\\Phi(C) = \\mathsf{Dm}(\\widetilde{\\mst}[C]) \\quad\\forall C\\in \\mathcal{C}_1\n\\end{equation}\nBy \\Cref{lm:level1Const}, we have that:\n\\begin{equation}\\label{eq:level1TotalPotential}\n\t\\Phi_1 = \\sum_{C\\in \\mathcal{C}_1}\\mathsf{Dm}(\\widetilde{\\mst}[C]) \\leq w(\\texttt{MST})\n\\end{equation}\n\nNext, Le and Solomon~\\cite{LS21} define a potential change $\\Delta_{i+1} = \\Phi_i - \\Phi_{i+1}$. Let $i_{\\max}$ be the maximum level, and define $\\Phi_{i_{\\max}+1} = 0$. 
The idea is to bound the weight of the to-be-constructed spanner $H^{\\sigma}_i$ by the potential change $O_{\\epsilon}(n^{1\/k})\\Delta_{i+1}$ (modulo a small additive term that we will describe later). It follows that we can bound the weight of $H^{\\sigma}$, again modulo a small additive term, as follows.\n\\begin{equation}\n\tw(H^{\\sigma}) \\leq O_{\\epsilon}(n^{1\/k}) \\sum_{i=0}^{i_{\\max}} \\Delta_{i+1} = O_{\\epsilon}(n^{1\/k})(\\Phi_1 - \\Phi_{i_{\\max}+1}) = O_{\\epsilon}(n^{1\/k}) \\Phi_1 = O_{\\epsilon}(n^{1\/k}) w(\\texttt{MST})\n\\end{equation}\n\nIn~\\cite{LS21}, Le and Solomon showed the following lemma, which is the key to their construction.\n\n\\begin{lemma}[Lemma 2.6 and Theorem 1.9~\\cite{LS21}]\n\t\\label{lm:LSConstruction} For each level $i\\geq 1$, there is an algorithm that can compute a subgraph $H^{\\sigma}_i$ \\emph{induced by a subset of $ E_i^{\\sigma}$}, as well as the set of level-$(i+1)$ clusters $\\mathcal{C}_{i+1}$ satisfying properties \\hyperlink{P1L}{(1)}-\\hyperlink{P3L}{(3)} given a set of clusters $\\mathcal{C}_i$ at level $i$, such that: \n\t\\begin{enumerate}[noitemsep]\n\t\t\\item[(1)] $w(H^{\\sigma}_i) = O_{\\epsilon}(n^{1\/k}) \\Delta_{i+1} + a_i$ for some $a_i\\geq 0$ such that $\\sum_{1\\leq i\\leq i_{\\max}} a_i = O_{\\epsilon}(n^{1\/k})w(\\texttt{MST})$.\n\t\t\\item[(2)] for every $(u,v)\\in E^{\\sigma}_i$, $d_{H^{\\sigma}_{\\leq i}}(u,v)\\leq (2k-1)(1+(10g+1)\\epsilon)w(u,v)$.\n\t\\end{enumerate}\n\tFurthermore, the total running time of the construction of \\emph{all levels} is $O(m\\alpha(m,n))$ in the pointer-machine model.\n\\end{lemma}\n\nIn \\Cref{lm:LSConstruction}, $a_i$ is a corrective term added to handle some edge cases where $\\Delta_{i+1} = 0$ or even negative. The stretch is $(2k-1)(1+(10g+1)\\epsilon)$ instead of $(2k-1)(1+\\epsilon)$, but we can obtain the stretch $(2k-1)(1+\\epsilon)$ by scaling $\\epsilon \\leftarrow \\frac{\\epsilon}{10g+1}$. 
Note that \\Cref{lm:LSConstruction} does not provide any bound on the number of edges of $H^{\\sigma}_i$.\n\n\nTo bound the sparsity of $H^{\\sigma}$ in our construction, we distinguish between \\emph{isolated clusters} and \\emph{non-isolated clusters}. A cluster $C\\in \\mathcal{C}_i$ is non-isolated if it contains at least one endpoint of an edge in $E(H^{\\sigma}_i)$, and is isolated otherwise. By examining the construction of Le and Solomon carefully, we have that:\n\n\\begin{lemma}[Le and Solomon~\\cite{LS21}]\\label{lm:HiSizeYi}Let $\\mathcal{Y}_i\\subseteq \\mathcal{C}_i$ be the set of all non-isolated clusters. Then $|E(H_i^{\\sigma})| = O_{\\epsilon}(n^{1\/k})|\\mathcal{Y}_i|$.\n\\end{lemma}\n\nBy property \\hyperlink{P2L}{(2)}, the number of clusters is geometrically decreasing when $\\epsilon$ is sufficiently smaller than $1$, and hence, the total number of clusters at all levels is $O(|\\mathcal{C}_1|)$. This implies that:\n\\begin{equation}\\label{eq:HsigmaLooseBound}\n\t|E(H^{\\sigma})| = \\sum_{i\\geq 1}|E(H^{\\sigma}_i)| = \\sum_{i\\geq 1} O_{\\epsilon}(n^{1\/k})|\\mathcal{C}_i| = O_{\\epsilon}(n^{1\/k}) |\\mathcal{C}_1| = O_{\\epsilon}(n^{1\/k}) |\\tilde{V}|\n\\end{equation}\n\nHowever, $|\\tilde{V}|$ could be up to $\\Omega(m)$ as it contains virtual vertices (\\Cref{obs:virtualNum}). Thus, \\Cref{eq:HsigmaLooseBound} does not provide any meaningful bound on the number of edges of $H^{\\sigma}$.\n\n\nWe now describe our idea to modify LS algorithm and to bound the number of edges in $H^{\\sigma}_i$. We classify the clusters of $\\mathcal{C}_i$ into two types: \\emph{non-virtual clusters}, whose set is denoted by $\\mathcal{N}_i$, and \\emph{virtual clusters}, whose set is denoted by $\\mathcal{M}_i$. A cluster $C\\in \\mathcal{C}_i$ is virtual if $C$ only contains virtual vertices, i.e., $C\\subseteq \\tilde{V}\\setminus V$; otherwise $C$ is non-virtual. 
Since a non-isolated cluster contains at least one non-virtual vertex, which is an endpoint of an edge in $E(H^\\sigma_i)$, we have:\n\\begin{observation}\\label{obs:} $\\mathcal{Y}_i \\subseteq \\mathcal{N}_i$.\n\\end{observation}\n\nFollowing the same idea as the construction in \\Cref{sec:sparseRAM}, our goal is to construct a set of clusters $\\mathcal{C}_{i+1}$ such that (in addition to the properties in \\Cref{lm:LSConstruction}) $|\\mathcal{N}_{i}| - |\\mathcal{N}_{i+1}| = \\Omega(|\\mathcal{Y}_i|)$. For notational convenience, we define $\\mathcal{N}_{i_{\\max}+1} = \\emptyset$. By the same argument as in \\Cref{sec:sparseRAM} and using \\Cref{lm:HiSizeYi}, we could show that $|E(H^{\\sigma})| = O_{\\epsilon}(n^{1\/k})|\\mathcal{N}_1|$. Recall by the definition of non-virtual clusters that $|\\mathcal{N}_1|\\leq n$. It follows that $|E(H^{\\sigma})| = O_{\\epsilon}(n^{1+1\/k})$, which implies the desired sparsity bound. These ideas are formalized in the following lemma, whose proof is provided in \\Cref{subsec:clustering}. \n \n \\begin{lemma} \\label{lm:NewConstruction} For each level $i\\geq 1$, there is an algorithm that can compute a subgraph $H^{\\sigma}_i$ \\emph{induced by a subset of $ E_i^{\\sigma}$}, as well as the set of level-$(i+1)$ clusters $\\mathcal{C}_{i+1}$ satisfying properties \\hyperlink{P1L}{(1)}-\\hyperlink{P3L}{(3)} given a set of clusters $\\mathcal{C}_i$ at level $i$, such that: \n \t\\begin{enumerate}[noitemsep]\n \t\t\\item[(1)] $w(H^{\\sigma}_i) = O_{\\epsilon}(n^{1\/k}) \\Delta_{i+1} + a_i$ for some $a_i\\geq 0$ such that $\\sum_{1\\leq i\\leq i_{\\max}} a_i = O_{\\epsilon}(n^{1\/k})w(\\texttt{MST})$.\n \t\t\\item[(2)] for every $(u,v)\\in E^{\\sigma}_i$, $d_{H^{\\sigma}_{\\leq i}}(u,v)\\leq (2k-1)(1+(10g+1)\\epsilon)w(u,v)$.\n \t\t\\item[(3)] $|E(H^\\sigma_{i})| = O_{\\epsilon}(n^{1\/k})|\\mathcal{Y}_i|$. \n \t\t\\item[(4)] $|\\mathcal{N}_{i}| - |\\mathcal{N}_{i+1}| \\geq |\\mathcal{Y}_i|\/2$. 
\n \t\\end{enumerate}\n \tFurthermore, the total running time of the construction of all levels is $O_{\\epsilon}(m\\alpha(m,n))$ in the pointer-machine model.\n \\end{lemma}\n\nIn the next section, we prove \\Cref{thm:2}, assuming that \\Cref{lm:NewConstruction} holds.\n\n\\subsection{Proof of \\Cref{thm:2}}\\label{subsec:proofThm2}\n\nRecall that $H = H_{\\text{light}} \\cup H_{\\text{heavy}}\\cup \\texttt{MST}$, where $H_{\\text{light}}$ is a $(2k-1)(1+\\epsilon)$-spanner of $E_{\\text{light}}$. By \\Cref{thm:1}, $H_{\\text{light}}$ can be constructed in $O(m\\alpha(m,n)\\mathsf{poly}(1\/\\epsilon) + \\mathsf{SORT}(m))$ time in the pointer-machine model, and in $O(m\\mathsf{poly}(1\/\\epsilon))$ time in the \\textsf{Transdichotomous~} model. $\\texttt{MST}$ can be constructed in $O(m\\alpha(m,n))$ time in the pointer-machine model by Chazelle's algorithm~\\cite{Chazelle00}. By \\Cref{lm:NewConstruction}, the running time to construct $H^{\\sigma}$ is $O(m\\alpha(m,n)\\mathsf{poly}(1\/\\epsilon))$, which implies that the running time to construct $H_{\\text{heavy}}$ is $O(m\\alpha(m,n)\\mathsf{poly}(1\/\\epsilon))\\mu_{\\epsilon} = O(m\\alpha(m,n)\\mathsf{poly}(1\/\\epsilon))$. Thus, the running time to construct $H$ is $O(m\\alpha(m,n)\\mathsf{poly}(1\/\\epsilon) + \\mathsf{SORT}(m))$ in the pointer-machine model and is $O(m\\alpha(m,n)\\mathsf{poly}(1\/\\epsilon))$ in the \\textsf{Transdichotomous~} model.\n\n\nWe now focus on bounding the sparsity and lightness of $H$. By Item (1) in \\Cref{lm:NewConstruction}, we have that:\n\\begin{equation}\\label{eq:HsigmaPreciseBound}\n\t\\begin{split}\n\t\t\tw(H^{\\sigma}) &= \\sum_{i=0}^{i_{\\max}}w(H^{\\sigma}_i) = \\sum_{i=0}^{i_{\\max}} \\left(O_{\\epsilon}(n^{1\/k})\\Delta_{i+1} + a_i\\right) \\\\\n\t\t\t&= O_{\\epsilon}(n^{1\/k}) \\Phi_1 + \\sum_{i=0}^{i_{\\max}} a_i = O_{\\epsilon}(n^{1\/k})w(\\texttt{MST}),\n\t\\end{split}\n\\end{equation}\nby \\Cref{eq:level1TotalPotential} and Item (1) of \\Cref{lm:NewConstruction}. 
Furthermore, by Item (4) of \\Cref{lm:NewConstruction}, it follows that:\n\\begin{equation}\\label{eq:HsigmaSize}\n\t\\begin{split}\n\t\t\t|E(H^{\\sigma})| &= \\sum_{i=0}^{i_{\\max}} |E(H^{\\sigma}_i)| = \\sum_{i=0}^{i_{\\max}} O_{\\epsilon}(n^{1\/k})|\\mathcal{Y}_i| \\quad \\mbox{(by Item (3) of \\Cref{lm:NewConstruction})} \\\\\n\t\t\t&= \\sum_{i=0}^{i_{\\max}} O_{\\epsilon}(n^{1\/k}) (|\\mathcal{N}_{i}| - |\\mathcal{N}_{i+1}|) \\quad \\mbox{(by Item (4) of \\Cref{lm:NewConstruction})}\\\\\n\t\t\t&= O_{\\epsilon}(n^{1\/k}) |\\mathcal{N}_1| = O_{\\epsilon}(n^{1+1\/k}).\n\t\\end{split}\n\\end{equation}\nIt follows that $w(H_{\\text{heavy}}) = \\sum_{\\sigma \\in [1,\\mu_\\epsilon]}w(H^{\\sigma}) = O_{\\epsilon}(n^{1\/k})w(\\texttt{MST})$ and $|E(H_{\\text{heavy}})| = \\sum_{\\sigma \\in [1,\\mu_\\epsilon]}|E(H^{\\sigma})| = O_{\\epsilon}(n^{1+1\/k})$ since $\\mu_\\epsilon = O(\\frac{1}{\\epsilon}\\log(1\/\\epsilon))$. \n\n\nObserve that $w(H_{\\text{light}})\\leq w(E_{\\text{light}}) \\leq w(\\texttt{MST})\/\\epsilon$ by \\Cref{eq:weightElight}. Furthermore, $|E(H_{\\text{light}})| = O_{\\epsilon}(n^{1+1\/k})$ by \\Cref{thm:1}. We then conclude that:\n\\begin{equation*}\n\t\\begin{split}\n\t\tw(H) &\\leq w(H_{\\text{light}}) + w(H_{\\text{heavy}}) = O_{\\epsilon}(n^{1\/k})w(\\texttt{MST})\\\\\n\t\t|E(H)|&= |E(H_{\\text{light}})| + |E(H_{\\text{heavy}})| = O_{\\epsilon}(n^{1+1\/k}).\n\t\\end{split}\n\\end{equation*}\nThat is, the sparsity and lightness of $H$ are both $O_{\\epsilon}(n^{1\/k})$.\n\n\nWe now bound the stretch of $H$. Let $(u,v)$ be any edge in $G$. If $(u,v)$ is in $E_{\\text{light}}$, then the stretch of $(u,v)$ is at most $(2k-1)(1+\\epsilon)$ in $H_{\\text{light}}$. If $(u,v)\\in \\texttt{MST}$, then the stretch is $1$ since $H$ contains $\\texttt{MST}$ as a subgraph. Otherwise, $(u,v)\\in E_{\\text{heavy}}$, and this means there exist $\\sigma \\in [1,\\mu_\\epsilon]$ and $i\\in [1,i_{\\max}]$ such that $(u,v)\\in E^{\\sigma}_i$. 
By Item (2) in \\Cref{lm:NewConstruction}, the stretch of $(u,v)$ in $H^{\\sigma}_{\\leq i}$, and hence in $H_{\\text{heavy}}$, is at most $(2k-1)(1+(10g+1)\\epsilon)$. In summary, the stretch in $H$ of any edge $(u,v)\\in E(G)$ is at most $(2k-1)(1+(10g+1)\\epsilon)$. By scaling $\\epsilon \\leftarrow \\epsilon\/(10g+1)$, we obtain a spanner of stretch $(2k-1)(1+\\epsilon)$, and with the same lightness and sparsity bounds. \\mbox{}\\hfill $\\Box$\\\\\n\n\\subsection{Construction of $H^{\\sigma}_i$ and $\\mathcal{C}_{i+1}$}\\label{subsec:clustering}\n \nIn this section, we construct $H^{\\sigma}_i$ and $\\mathcal{C}_{i+1}$ with the properties claimed in \\Cref{lm:NewConstruction}. Without loss of generality, we assume that $\\epsilon$ is sufficiently small, and in particular, that $\\epsilon$ is smaller than $1\/(c\\cdot g)$ for any constant $c$. We now introduce new notation used in this section.\n\n\\paragraph{Notation.~} We consider graphs with weights on both \\emph{edges and vertices} in this section. We define the \\emph{augmented weight} of a path to be the total weight of all edges and vertices along the path. The \\emph{augmented distance} between two vertices in $G$ is defined as the minimum augmented weight of a path between them in $G$. The augmented diameter of $G$ is denoted by $\\mathsf{Adm}(G)$, which is the maximum pairwise augmented distance in $G$.\n\n\n\\paragraph{Cluster graphs.~} The construction of $\\mathcal{C}_{i+1}$ is done via a \\emph{cluster graph} $\\mathcal{G}_i(\\mathcal{V}_i, \\mathcal{E}_i',\\omega)$ that has weights on both edges and nodes (we use nodes to refer to vertices of $\\mathcal{G}_i$). Each node $\\varphi_C \\in \\mathcal{V}_i$ corresponds to a level-$i$ cluster $C$ and has weight:\n\\begin{equation}\\label{eq:nodeWeight}\n\t\\omega(\\varphi_C) = \\Phi(C)\n\\end{equation}\nThat is, the weight of each node is the \\emph{potential value} of its corresponding cluster. 
The edge set $\\mathcal{E}'_i$ is the union of two edge sets $\\mathcal{E}_i\\cup \\widetilde{\\mst}_i$:\n\\begin{itemize}[noitemsep]\n\t\\item Each edge $\\mathbf{e} = (\\varphi_{C_u}, \\varphi_{C_v})\\in \\mathcal{E}'_i$ corresponds to an edge $(u,v)\\in E^{\\sigma}_i\\cup E(\\widetilde{\\mst})$ where $C_u$ and $C_v$ are the level-$i$ clusters containing $u$ and $v$, respectively. Furthermore, $\\omega(\\mathbf{e}) = w(u,v)$.\n\t\\item $\\mathcal{E}_i$ corresponds to a subset of edges of $E^{\\sigma}_i$, and $\\widetilde{\\mst}_i$ corresponds to a subset of edges of $\\widetilde{\\mst}$, the subdivided $\\texttt{MST}$. The edges of $\\widetilde{\\mst}_i$ induce a minimum spanning tree of $\\mathcal{G}_i$, and we abuse notation by denoting by $\\widetilde{\\mst}_i$ also this spanning tree of $\\mathcal{G}_i$.\n\\end{itemize} \n We refer readers to Lemma 3.16 in \\cite{LS21} for the construction of $\\mathcal{G}_i$. At a high level, the construction removes self-loops, parallel edges, and edges that have stretch at most $(2k-1)(1+6g\\epsilon)$ in $\\widetilde{\\mst}_i$, as these edges already have a good stretch. \n\n\\begin{lemma}[Lemma 3.16 and Lemma 3.22~\\cite{LS21}]\\label{lm:GiConstr} $\\mathcal{G}_i$ can be constructed in $O((|\\mathcal{V}_i| + |E^{\\sigma}_i|)\\alpha(m,n))$ time. Furthermore, if the subset of edges of $E^{\\sigma}_i$ corresponding to $\\mathcal{E}_i$ has stretch $(2k-1)(1+s\\epsilon)$ in $H^{\\sigma}_{\\leq i}$ for some constant $s$ that only depends on $g$, then every edge in $E^{\\sigma}_i$ has stretch $(2k-1)(1 + \\max\\{s+4g, 10g\\}\\epsilon)$ in $H^{\\sigma}_{\\leq i}$ when $\\epsilon \\leq\\frac{1}{\\max\\{s+4g, 10g\\}}$. \n\\end{lemma}\n\n\\Cref{lm:GiConstr} implies that it suffices for the construction to focus on constructing a spanner for the subset of edges of $E^{\\sigma}_i$ that correspond to edges in $\\mathcal{E}_i$. 
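The cluster graph $\\mathcal{G}_i$ carries weights on both nodes and edges, so its diameter is measured in the augmented sense defined above. As a standalone illustration (our own names, not part of the construction), a Dijkstra variant that charges for vertices as well as edges computes augmented distances and hence the augmented diameter of a small toy graph:

```python
import heapq

def augmented_distances(n, adj, node_w, src):
    """Dijkstra variant: a path's cost is the sum of its edge weights
    plus the weights of every vertex on it (both endpoints included)."""
    dist = [float("inf")] * n
    dist[src] = node_w[src]          # the trivial path already pays for src
    pq = [(dist[src], src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            nd = d + w + node_w[v]   # pay for the edge and the new vertex
            if nd < dist[v]:
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def augmented_diameter(n, adj, node_w):
    """Maximum pairwise augmented distance, i.e. Adm of the graph."""
    return max(augmented_distances(n, adj, node_w, s)[t]
               for s in range(n) for t in range(n))

# Toy example: path 0 -(1)- 1 -(1)- 2 with node weights 2, 3, 4.
adj = [[(1, 1)], [(0, 1), (2, 1)], [(1, 1)]]
print(augmented_diameter(3, adj, [2, 3, 4]))  # 2 + 1 + 3 + 1 + 4 = 11
```

The all-pairs loop is quadratic in the number of Dijkstra runs; it is only meant to make the definition of $\\mathsf{Adm}$ concrete.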
\n\n\n\n\\paragraph{Level-$(i+1)$ clusters.~} Instead of constructing level-$(i+1)$ clusters directly from level-$i$ clusters, we construct a collection $\\mathbb{X}$ of vertex-disjoint subgraphs of $\\mathcal{G}_i$. Each subgraph $\\mathcal{X}\\in \\mathbb{X}$ has its vertex set denoted by $\\mathcal{V}(\\mathcal{X})$ and its edge set denoted by $\\mathcal{E}(\\mathcal{X})$, and is mapped to a level-$(i+1)$ cluster, denoted by $C_{\\mathcal{X}}$, as follows:\n\\begin{equation}\\label{eq:XtoCluster}\n\tC_{\\mathcal{X}} = \\cup_{\\varphi_C\\in \\mathcal{V}(\\mathcal{X})}C\n\\end{equation}\nThat is, $C_{\\mathcal{X}}$ is the union of all level-$i$ clusters corresponding to the nodes of $\\mathcal{X}$. Note that $\\mathcal{X}$ has weights on both edges and nodes. We then define the potential value of $C_{\\mathcal{X}}$ as follows.\n\\begin{equation}\\label{eq:CXPotential}\n\t\\Phi(C_{\\mathcal{X}}) = \\mathsf{Adm}(\\mathcal{X})\n\\end{equation}\nThat is, the potential value of $C_{\\mathcal{X}}$ is the augmented diameter of the corresponding subgraph. Recall that this potential value will be the weight of the node corresponding to $C_{\\mathcal{X}}$ in the cluster graph $\\mathcal{G}_{i+1}$ in the construction of the next level; see \\Cref{eq:nodeWeight}. Furthermore, we can show inductively that, if $\\omega(\\varphi_C)$ is an upper bound on $\\mathsf{Dm}(H^{\\sigma}_{\\leq i-1}[C])$, then $\\mathsf{Adm}(\\mathcal{X})$ is an upper bound on $\\mathsf{Dm}(H^{\\sigma}_{\\leq i}[C_{\\mathcal{X}}])$. 
As a result, guaranteeing properties \\hyperlink{P1L}{(1)}-\\hyperlink{P3L}{(3)} for level-$(i+1)$ clusters can be translated into guaranteeing the following properties for subgraphs in $\\mathbb{X}$:\n\n\n\\begin{itemize}[noitemsep]\n\t\\item \\textbf{(P1').~} \\hypertarget{P1'L}{} $\\{\\mathcal{V}(\\mathcal{X})\\}_{\\mathcal{X} \\in \\mathbb{X}}$ is a partition of $\\mathcal{V}_i$.\n\t\\item \\textbf{(P2').~} \\hypertarget{P2'L}{} $|\\mathcal{V}(\\mathcal{X})| = \\Omega(\\frac{1}{\\epsilon})$.\n\t\\item \\textbf{(P3').~} \\hypertarget{P3'L}{} $L_i \\leq \\mathsf{Adm}(\\mathcal{X}) \\leq gL_{i}$.\n\\end{itemize}\n\n\\begin{lemma}[Lemma 3.14~\\cite{LS21}]\\label{lm:XInvariants} Let $\\mathcal{X}$ be any subgraph in $\\mathbb{X}$ satisfying properties \\hyperlink{P1'L}{(P1')}-\\hyperlink{P3'L}{(P3')}. Suppose that every edge $(\\varphi_{C_u},\\varphi_{C_v})\\in \\mathcal{E}(\\mathcal{X})$ corresponds to an edge $(u,v)$ that is added to $H^{\\sigma}_{i}$. Then, $C_{\\mathcal{X}}$ satisfies all properties \\hyperlink{P1L}{(1)}-\\hyperlink{P3L}{(3)}.\n\\end{lemma}\n\nWe remark that \\Cref{lm:XInvariants} is based on the assumption that $(u,v)$ is added to $H^{\\sigma}_{i}$, which we have not constructed yet. \n\n\\paragraph{Constructing level-$(i+1)$ clusters.~} \\Cref{lm:XInvariants} translates the construction of clusters in $\\mathcal{C}_{i+1}$ to the construction of the set of subgraphs $\\mathbb{X}$ satisfying \\hyperlink{P1'L}{(P1')}-\\hyperlink{P3'L}{(P3')}. The main difficulty is not only to satisfy these properties, but also to guarantee that the weight of $H^{\\sigma}_i$ is bounded by the potential change $\\Delta_{i+1}$ (and a small additive term) as claimed in Item (1) of \\Cref{lm:NewConstruction}. 
Recall by \\Cref{eq:nodeWeight} and \\Cref{eq:CXPotential} that: \n\\begin{equation}\\label{eq:Phii}\n\t\\begin{split}\n\t\t\t\\Phi_i &= \\sum_{C\\in \\mathcal{C}_{i}} \\Phi(C) = \\sum_{\\varphi_C \\in \\mathcal{V}_i}\\omega(\\varphi_C)\\\\\n\t\t\t\\Phi_{i+1} &= \\sum_{C_{\\mathcal{X}}\\in \\mathcal{C}_{i+1}} \\Phi(C_{\\mathcal{X}}) = \\sum_{\\mathcal{X}\\in \\mathbb{X}}\\mathsf{Adm}(\\mathcal{X})\n\t\\end{split}\n\\end{equation}\nThus, if we define the \\emph{local potential change} of $\\mathcal{X}$ as follows:\n\\begin{equation}\\label{eq:localChangeX}\n\t\\Delta_{i+1}(\\mathcal{X}) = \\sum_{\\varphi_C\\in \\mathcal{V}(\\mathcal{X})}\\omega(\\varphi_{C})-\\mathsf{Adm}(\\mathcal{X}), \n\\end{equation} \n\n\\noindent then it follows that:\n\n\\begin{claim}[Claim 3.15~\\cite{LS21}]\\label{clm:PotentialDecompos} $\\Delta_{i+1} = \\sum_{\\mathcal{X}\\in \\mathbb{X}}\\Delta_{i+1}(\\mathcal{X})$.\n\\end{claim}\n\nThat is, the potential change $\\Delta_{i+1}$ can be decomposed into the local potential changes of subgraphs in $\\mathbb{X}$. This means we can bound the weight of $H_i^{\\sigma}$ \\emph{locally}, by bounding the total weight of edges incident to nodes in $\\mathcal{X}$ by the local potential change of $\\mathcal{X}$. \n\n\\paragraph{Partitioning $\\mathcal{V}_i$ and $\\mathbb{X}$.~}\nWe say that a partition $\\mathbb{V} = \\{\\mathcal{V}_i^{\\mathsf{high}}, \\mathcal{V}_i^{\\mathsf{low}^+},\\mathcal{V}_i^{\\mathsf{low}^-}\\}$ of $\\mathcal{V}_i$ is a \\emph{degree-specific} partition if every node $\\varphi_C \\in \\mathcal{V}_i^{\\mathsf{high}}$ is incident to $\\Omega(1\/\\epsilon)$ edges in $\\mathcal{E}_i$ and every node $\\varphi_C \\in \\mathcal{V}_i^{\\mathsf{low}^+} \\cup \\mathcal{V}_i^{\\mathsf{low}^-}$ is incident to $O(1\/\\epsilon)$ edges in $\\mathcal{E}_i$. 
That is, $\\mathcal{V}^{\\mathsf{high}}_i$ is the set of high-degree nodes of $\\mathcal{V}_i$ and $\\mathcal{V}_i^{\\mathsf{low}^+} \\cup \\mathcal{V}_i^{\\mathsf{low}^-}$ is the set of low-degree nodes of $\\mathcal{V}_i$. The difference between $\\mathcal{V}_i^{\\mathsf{low}^+}$ and $\\mathcal{V}_i^{\\mathsf{low}^-}$ will be made clear later.\n\t\nWe say that a partition $\\{\\mathbb{X}^{\\mathsf{high}}, \\mathbb{X}^{\\mathsf{low}^+}, \\mathbb{X}^{\\mathsf{low}^-}\\}$ of a collection $\\mathbb{X}$ of subgraphs of $\\mathcal{G}_i$ \\emph{conforms} with a degree-specific partition $\\mathbb{V}$ if\n\\begin{itemize}[noitemsep]\n\t\\item[(i)] Every subgraph $\\mathcal{X} \\in \\mathbb{X}^{\\mathsf{low}^-}$ has $\\mathcal{V}(\\mathcal{X})\\subseteq \\mathcal{V}_i^{\\mathsf{low}^-}$, and $\\cup_{\\mathcal{X} \\in \\mathbb{X}^{\\mathsf{low}^-}}\\mathcal{V}(\\mathcal{X}) = \\mathcal{V}_i^{\\mathsf{low}^-}$.\n\t\\item[(ii)] For every node $\\varphi_C \\in \\mathcal{V}_i^{\\mathsf{high}}$, there exists a subgraph $\\mathcal{X} \\in \\mathbb{X}^{\\mathsf{high}}$ such that $\\varphi_C \\in \\mathcal{V}(\\mathcal{X})$, and every subgraph $\\mathcal{X} \\in \\mathbb{X}^{\\mathsf{high}}$ contains at least one node in $\\mathcal{V}_i^{\\mathsf{high}}$.\n\\end{itemize}\n\nObserve that property (ii) implies that $\\mathcal{V}(\\mathcal{X})\\subseteq \\mathcal{V}_i^{\\mathsf{low}^+}$ for every $\\mathcal{X} \\in \\mathbb{X}^{\\mathsf{low}^+}$. Also, it is possible that a subgraph $\\mathcal{X} \\in \\mathbb{X}^{\\mathsf{high}}$ contains a node in $\\mathcal{V}_i^{\\mathsf{low}^+}$. \n\nThe construction of $\\mathbb{X}$ in~\\cite{LS21} is described by the following lemma. 
\n\n\\begin{restatable}[Lemma 3.17~\\cite{LS21}]{lemma}{Clustering}\n\t\\label{lm:Clustering} Given $\\mathcal{G}_i$, we can construct in time $O((|\\mathcal{V}_i| + |\\mathcal{E}_i|)\\epsilon^{-1})$ (i) a degree-specific partition $\\mathbb{V} = \\{\\mathcal{V}_i^{\\mathsf{high}}, \\mathcal{V}_i^{\\mathsf{low}^+},\\mathcal{V}_i^{\\mathsf{low}^-}\\}$ of $\\mathcal{V}_i$ and (ii) a collection $\\mathbb{X}$ of subgraphs of $\\mathcal{G}_i$ along with a partition $\\{\\mathbb{X}^{\\mathsf{high}}, \\mathbb{X}^{\\mathsf{low}^+}, \\mathbb{X}^{\\mathsf{low}^-}\\}$ conforming with $\\mathbb{V}$ such that:\n\t\\begin{enumerate}\n\t\t\\item[(1)] Let $\\Delta_{i+1}^+(\\mathcal{X}) = \\Delta_{i+1}(\\mathcal{X}) + \\sum_{\\mathbf{e} \\in \\widetilde{\\mst}_i\\cap \\mathcal{E}(\\mathcal{X})}\\omega(\\mathbf{e})$. Then, $\\Delta_{i+1}^+(\\mathcal{X}) \\geq 0$ for every $\\mathcal{X} \\in \\mathbb{X}$, and\n\t\t\\begin{equation}\\label{eq:averagePotential}\n\t\t\t\\sum_{\\mathcal{X} \\in\\mathbb{X}^{\\mathsf{high}}\\cup \\mathbb{X}^{\\mathsf{low}^+}} \\Delta_{i+1}^+(\\mathcal{X}) = \\sum_{\\mathcal{X} \\in \\mathbb{X}^{\\mathsf{high}}\\cup \\mathbb{X}^{\\mathsf{low}^+}} \\Omega(|\\mathcal{V}(\\mathcal{X})|\\epsilon^2 L_i). \n\t\t\\end{equation}\n\t\t\\item[(2)] There is no edge in $\\mathcal{E}_i$ between a node in $\\mathcal{V}^{\\mathsf{high}}_i$ and a node in $\\mathcal{V}^{\\mathsf{low}^-}_i$. 
Furthermore, if there exists an edge $(\\varphi_{C_u},\\varphi_{C_v}) \\in \\mathcal{E}_i$ such that both $\\varphi_{C_u}$ and $\\varphi_{C_v}$ are in \n\t\t$\\mathcal{V}_i^{\\mathsf{low}^-}$, then $\\mathcal{V}_i^{\\mathsf{low}^-} = \\mathcal{V}_i$ and $|\\mathcal{E}_i| = O(\\frac{1}{\\epsilon^2})$; this case is called the \\emph{degenerate case}.\n\t\t\\item[(3)] For every subgraph $\\mathcal{X} \\in \\mathbb{X}$, $\\mathcal{X}$ satisfies the three properties (\\hyperlink{P1'L}{P1'})-(\\hyperlink{P3'L}{P3'}) with constant $g=31$, and $|\\mathcal{E}(\\mathcal{X})\\cap \\mathcal{E}_i| = O(|\\mathcal{A}_{\\mathcal{X}}|)$ where $\\mathcal{A}_{\\mathcal{X}}$ is the set of nodes in $\\mathcal{X}$ incident to an edge in $\\mathcal{E}(\\mathcal{X})\\cap \\mathcal{E}_i$.\n\t\\end{enumerate}\t\n\\end{restatable}\n\n\nWe call $\\Delta_{i+1}^+(\\mathcal{X})$ the \\emph{corrected potential change} of $\\mathcal{X}$. We remark that $\\Delta_{i+1}(\\mathcal{X})$ could be negative, but $\\Delta_{i+1}^+(\\mathcal{X})$ is always non-negative by Item (1) of \\Cref{lm:Clustering}. Furthermore, Item (1) in \\Cref{lm:Clustering} only tells us about the corrected potential changes of subgraphs in $\\mathbb{X}^{\\mathsf{high}}\\cup \\mathbb{X}^{\\mathsf{low}^+}$; there is no guarantee on the corrected potential changes of subgraphs in $\\mathbb{X}^{\\mathsf{low}^-}$ other than non-negativity, and as a result, we cannot bound the total weight of edges incident to a subgraph $\\mathcal{X} \\in \\mathbb{X}^{\\mathsf{low}^-}$ by the local potential change of $\\mathcal{X}$. However, Item (2) implies that subgraphs in $\\mathbb{X}^{\\mathsf{low}^-}$ do not need to ``pay for'' their incident edges (by their corrected potential changes)---these edges can be paid for by subgraphs in $\\mathbb{X}^{\\mathsf{high}}\\cup \\mathbb{X}^{\\mathsf{low}^+}$---unless the degenerate case happens, which only incurs a small additive weight (of $O(1\/\\epsilon^2)$ edges). 
Furthermore, subgraphs in $\\mathbb{X}^{\\mathsf{low}^-}$ do not contain any edge in $\\mathcal{E}_i$ by Item (2) of \\Cref{lm:Clustering} unless the degenerate case happens.\n\n\\begin{observation}[Observation 3.20 in \\cite{LS21}]\\label{obs:LowmStructure} If the degenerate case does not happen, then for every edge $(\\varphi_{1},\\varphi_{2})$ with one endpoint in $\\mathcal{V}_i^{\\mathsf{low}^-}$, the other endpoint must be in $\\mathcal{V}_i^{\\mathsf{low}^+}$, and hence, $\\mathcal{E}(\\mathcal{X})\\cap \\mathcal{E}_i = \\emptyset$ if $\\mathcal{X} \\in \\mathbb{X}^{\\mathsf{low}^-}$.\n\\end{observation}\n\nWe remark that Item (3) in \\Cref{lm:Clustering} is slightly different from the corresponding item in Lemma 3.17~\\cite{LS21}, which is Item (5), in that there $|\\mathcal{E}(\\mathcal{X})\\cap \\mathcal{E}_i|$ is bounded by $O(|\\mathcal{V}(\\mathcal{X})|)$. Here we need a slightly stronger bound, and Item (3) can be seen directly from the construction of~\\cite{LS21}. For completeness, we will show this item in the construction in \\Cref{subsubsec:clustering}. \n\n\nWhile the construction in \\Cref{lm:Clustering} provides a means to construct $H_i^{\\sigma}$ and to bound its weight by (corrected) potential changes via Item (1), it does not give us a sufficient reduction in the number of non-virtual clusters as claimed by Items (3) and (4) in \\Cref{lm:NewConstruction}. The reduction in the number of non-virtual clusters was used to bound the total number of edges of $H^{\\sigma}$ in \\Cref{subsec:proofThm2}. Our main contribution is a modification of the construction by Le and Solomon~\\cite{LS21} using the cycle property of $\\texttt{MST}$ to achieve the reduction in the number of non-virtual clusters. \n\nWe call a node $\\varphi_C$ \\emph{virtual} if it corresponds to a virtual cluster $C$; otherwise, we call $\\varphi_C$ \\emph{non-virtual}. We say that $\\varphi_C$ is isolated if $C$ is isolated, and non-isolated otherwise. 
By definition, a non-isolated node is a non-virtual node.\n\nWe abuse notation by denoting by $\\mathcal{N}_i$ and $\\mathcal{M}_i$ the sets of non-virtual nodes and virtual nodes of $\\mathcal{V}_i$, respectively. We denote by $\\mathcal{Y}_i$ the set of non-isolated nodes in $\\mathcal{V}_i$. We will show later that $\\mathcal{Y}_i$ is exactly the set of nodes defined in \\Cref{lm:HiSizeYi}. That is, every node in $\\mathcal{Y}_i$ corresponds to a level-$i$ cluster that contains at least one endpoint of an edge in $H^{\\sigma}_i$. \n\nWe say that a subgraph $\\mathcal{X} \\in \\mathbb{X}$ is non-virtual if it contains at least one non-virtual node, and virtual otherwise. A non-virtual subgraph corresponds to a non-virtual level-$(i+1)$ cluster. Our main contribution is the construction of $\\mathbb{X}$ described by the following lemma.\n\n\\begin{lemma}\\label{lm:additional-prop}We can construct in $O((|\\mathcal{V}_i| + |\\mathcal{E}_i|)\\epsilon^{-1})$ time a degree-specific partition $\\mathbb{V}$ of $\\mathcal{V}_i$ and a collection $\\mathbb{X}$ of subgraphs of $\\mathcal{G}_i$ that satisfy all properties in \\Cref{lm:Clustering} with $g = 42$. Furthermore, if we denote by $\\mathcal{N}_{i+1}$ the set of non-virtual subgraphs in $\\mathbb{X}$, then $|\\mathcal{N}_i| - |\\mathcal{N}_{i+1}| \\geq |\\mathcal{Y}_i|\/2$.\n\\end{lemma}\n\nIn the following section, we prove \\Cref{lm:NewConstruction} assuming that \\Cref{lm:additional-prop} holds. The proof of \\Cref{lm:additional-prop} is deferred to \\Cref{subsubsec:clustering}.\n\n\\subsubsection{Proof of \\Cref{lm:NewConstruction}}\n\n\n\nWe use the same algorithm as in \\cite{LS21} to construct $H_i^{\\sigma}$. The algorithm has three steps. Initially, $H_i^{\\sigma}$ has no edges. 
\n\\begin{itemize}[noitemsep]\n\t\\item \\textbf{Step 1.~} For every subgraph $\\mathcal{X} \\in \\mathbb{X}$, we add to $H_i^{\\sigma}$ every edge in $E^{\\sigma}_i$ that corresponds to an edge in $\\mathcal{E}(\\mathcal{X})\\cap \\mathcal{E}_i$. The purpose of this step is to guarantee the assumption of \\Cref{lm:XInvariants}.\n\t\\item \\textbf{Step 2.~} We use the Halperin-Zwick algorithm (\\Cref{thm:HalperinZwick}) to construct a $(2k-1)$-spanner for the edges between nodes in $\\mathcal{V}_i^{\\mathsf{high}}$ only. Specifically, we create an \\emph{unweighted} graph $\\mathcal{K}_i$ that has $\\mathcal{V}_i^{\\mathsf{high}}$ as the vertex set and the subset of edges of $\\mathcal{E}_i$ between nodes in $\\mathcal{V}_i^{\\mathsf{high}}$ as the edge set. Then, we run the Halperin-Zwick algorithm~\\cite{HZ96} on $\\mathcal{K}_i$ to obtain an edge set $\\mathcal{E}^{\\mathsf{pruned}}_i$. We then add every edge in $E^{\\sigma}_i$ corresponding to an edge in $\\mathcal{E}^{\\mathsf{pruned}}_i$ to $H^{\\sigma}_i$.\n\t\\item \\textbf{Step 3.~} We add to $H^{\\sigma}_i$ every edge in $E^{\\sigma}_i$ corresponding to an edge of $\\mathcal{E}_i$ incident to a node in $\\mathcal{V}_i^{\\mathsf{low}^+}\\cup \\mathcal{V}_i^{\\mathsf{low}^-}$.\n\\end{itemize}\n \n Le and Solomon (Lemma 3.22 and Lemma 4.5 in~\\cite{LS21}) showed that $w(H^{\\sigma}_i) = O_{\\epsilon}(n^{1\/k})\\Delta_{i+1} + a_i$ for $a_i$ satisfying \\Cref{lm:NewConstruction}, and that the stretch of every edge $(u,v) \\in E^{\\sigma}_i$ is at most $(2k-1)(1+(10g+1)\\epsilon)$ in $H^{\\sigma}_{\\leq i}$. Since their proof only uses properties stated in \\Cref{lm:Clustering}, and our construction in \\Cref{lm:additional-prop} also satisfies \\Cref{lm:Clustering}, Items (1) and (2) in \\Cref{lm:NewConstruction} hold in our construction as well. We remark that the additive term $a_i$ is used to handle the degenerate case in Item (2) of \\Cref{lm:Clustering}, since in that case, $\\Delta_{i+1} \\leq 0$. 
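To make the three steps concrete, here is a hedged Python sketch of the edge-selection skeleton on an abstract cluster graph. All names are ours, and the classic greedy spanner below is only an illustrative stand-in for the linear-time Halperin-Zwick algorithm: it also outputs a $(2k-1)$-spanner with $O(n^{1+1\/k})$ edges, but without the running-time guarantee.

```python
from collections import deque

def greedy_spanner_unweighted(n, edges, k):
    """Illustrative stand-in for the Halperin-Zwick spanner: keep an edge
    (u, v) only if u and v are not already within 2k-1 hops of each other
    in the spanner built so far (classic greedy; not linear-time)."""
    adj = [[] for _ in range(n)]
    kept = []
    for u, v in edges:
        depth = {u: 0}            # BFS from u, truncated at depth 2k-1
        q = deque([u])
        while q:
            x = q.popleft()
            if depth[x] == 2 * k - 1:
                continue
            for y in adj[x]:
                if y not in depth:
                    depth[y] = depth[x] + 1
                    q.append(y)
        if v not in depth:        # no short detour yet: keep the edge
            kept.append((u, v))
            adj[u].append(v)
            adj[v].append(u)
    return kept

def build_level_edges(cluster_edges, high, in_X_edges, k, n_nodes):
    """Skeleton of the three steps selecting the edges of one level."""
    h = set(in_X_edges)  # Step 1: edges inside the subgraphs X (given).
    # Step 2: a (2k-1)-spanner of the edges between high-degree nodes.
    hh = [(u, v) for (u, v) in cluster_edges if u in high and v in high]
    h.update(greedy_spanner_unweighted(n_nodes, hh, k))
    # Step 3: every edge with at least one low-degree endpoint.
    h.update((u, v) for (u, v) in cluster_edges
             if u not in high or v not in high)
    return h

# Toy example: K4 on "high-degree" nodes {0,1,2,3} plus one low node 4.
cluster_edges = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3), (0, 4)]
h = build_level_edges(cluster_edges, {0, 1, 2, 3}, [], k=2, n_nodes=5)
```

On this toy input the greedy pass keeps a star on the high-degree nodes, and Step 3 adds the single low-degree edge.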
\n \n We now focus on proving Items (3) and (4) of \\Cref{lm:NewConstruction}. First, we observe that \n for every node $\\varphi_C$ that is incident to an edge $\\mathbf{e} \\in \\mathcal{E}_i$, the corresponding edge of $\\mathbf{e}$ in $E^{\\sigma}_i$ is added to $H^{\\sigma}_i$, unless $\\varphi_C\\in \\mathcal{V}_i^{\\mathsf{high}}$ and the Halperin-Zwick algorithm does not add $\\mathbf{e}$ to $\\mathcal{E}^{\\mathsf{pruned}}_i$. In this exceptional case, another edge incident to $\\varphi_C$ must be added to $\\mathcal{E}^{\\mathsf{pruned}}_i$; otherwise, $\\varphi_C$ would not be connected to any node in the graph induced by $\\mathcal{E}^{\\mathsf{pruned}}_i$, contradicting that the output is a spanner. It follows that $\\mathcal{Y}_i$ corresponds to the set of level-$i$ clusters that have at least one incident edge in $H^{\\sigma}_i$. Thus, Item (4) of \\Cref{lm:NewConstruction} follows from \\Cref{lm:additional-prop}. \n \n By Item (3) in \\Cref{lm:Clustering}, the total number of edges added in Step 1 is $O(1)\\sum_{\\mathcal{X} \\in \\mathbb{X}}|\\mathcal{A}_{\\mathcal{X}}| = O(1)|\\mathcal{Y}_i|$. The number of edges added in Step 2 is $|\\mathcal{E}_i^{\\mathsf{pruned}}|= O(|\\mathcal{V}_i^{\\mathsf{high}}|^{1+1\/k}) = O(n^{1\/k})|\\mathcal{V}_i^{\\mathsf{high}}| = O(n^{1\/k})|\\mathcal{Y}_i|$ since $\\mathcal{V}_i^{\\mathsf{high}}\\subseteq \\mathcal{Y}_i$ by the definition of non-isolated nodes. In Step 3, for each node $\\varphi_C \\in \\mathcal{V}_i^{\\mathsf{low}^+}\\cup \\mathcal{V}_i^{\\mathsf{low}^-}$, we add at most $O(1\/\\epsilon)$ incident edges to $H^{\\sigma}_i$, since nodes in $\\mathcal{V}_i^{\\mathsf{low}^+}\\cup \\mathcal{V}_i^{\\mathsf{low}^-}$ have degree $O(1\/\\epsilon)$. Thus, the total number of edges added in Step 3 is $O(1\/\\epsilon)|\\mathcal{Y}_i|$. Item (3) of \\Cref{lm:NewConstruction} now follows. 
\n \n For the running time, we first note that constructing $\\mathcal{G}_i$ takes $O((|\\mathcal{V}_i| + |\\mathcal{E}_i|)\\alpha(m,n))$ time by \\Cref{lm:GiConstr}. The set of subgraphs $\\mathbb{X}$ is constructed in $O_{\\epsilon}(|\\mathcal{V}_i| + |\\mathcal{E}_i|)$ time by \\Cref{lm:additional-prop}. In the construction of $H^{\\sigma}_i$, Steps 1 and 3 take $O(|\\mathcal{V}_i| + |\\mathcal{E}_i|)$ time by a straightforward implementation. Step 2 takes $O(|\\mathcal{V}_i| + |\\mathcal{E}_i|)$ time by \\Cref{thm:HalperinZwick}. Thus, the total running time of the construction at level $i$ is $O((|\\mathcal{V}_i| + |\\mathcal{E}_i|)\\alpha(m,n))$. It follows that the total running time over all levels is:\n \\begin{equation*}\n \t\\begin{split}\n \t\t \t\\sum_{i\\geq 1} O_{\\epsilon}\\left((|\\mathcal{V}_i| + |\\mathcal{E}_i|)\\alpha(m,n)\\right) & =\t O_{\\epsilon}\\left(( \\sum_{i\\geq 1}|\\mathcal{V}_i| + m )\\alpha(m,n)\\right)\\\\\n \t\t \t& = O_{\\epsilon}\\left((|\\tilde{V}| + m )\\alpha(m,n)\\right)\\qquad \\mbox{(by \\hyperlink{P2L}{property (P2)})} \\\\ \n \t\t \t& = O_{\\epsilon}\\left(m\\alpha(m,n)\\right) \\qquad \\mbox{(by \\Cref{obs:virtualNum})}\n \t\\end{split}\n \\end{equation*}\n \\Cref{lm:NewConstruction} now follows. \\mbox{}\\hfill $\\Box$\\\\\n \n \n\n\\subsubsection{The construction of $\\mathbb{X}$}\\label{subsubsec:clustering}\n\nRecall that $\\widetilde{\\mst}$ is the tree obtained by subdividing $\\texttt{MST}$ edges with virtual vertices. For each edge $e \\in \\texttt{MST}$, we denote by $P_e$ the corresponding path of $\\widetilde{\\mst}$ subdivided from $e$. We call $P_e$ the \\emph{subdivided path} of $e$. Since each virtual cluster $C\\in \\mathcal{C}_i$ only contains virtual vertices, $C$ induces a subpath of the subdivided path $P_e$ of some edge $e \\in \\texttt{MST}$. We call $P_e$ the \\emph{parent path} of $C$, and $e$ the \\emph{parent edge} of $C$; see Figure~\\ref{fig:cycleProp}(a). 
We also refer to $P_e$ as the parent path and to $e$ as the parent edge of the virtual node $\\varphi_{C}$ corresponding to $C$. \n\n\n\n\\begin{figure}[!h]\n\t\\begin{center}\n\t\t\\includegraphics[width=1.0\\textwidth]{figs\/cycleProp}\n\t\\end{center}\n\t\\caption{Virtual clusters are yellow shaded and non-virtual clusters are green shaded. (a) A virtual cluster $C$, its parent path $P_e$, and the edge $e$ that corresponds to $P_e$. (b) The fundamental cycle $\\mathcal{Z}$ of $\\widetilde{\\mst}_{i}$ formed by an edge $\\mathbf{e}$. (c) The cycle $\\tilde{Z}$ in $\\tilde{G}_{\\text{heavy}}$ corresponding to $\\mathcal{Z}$. Shaded nodes are $u_0,v_0,u_1,v_1,\\ldots,u_{k-1},v_{k-1}$. (d) The cycle $Z$ of $G_{\\text{heavy}}$ corresponding to $\\tilde{Z}$, obtained by replacing each subdivided path $P_{e'}$ with the corresponding edge $e'$ in $\\texttt{MST}$. Solid (black) edges are $\\texttt{MST}$ edges, and red (dashed) edges are non-$\\texttt{MST}$ edges.}\n\t\\label{fig:cycleProp}\n\\end{figure}\n\nOur goal is to construct $\\mathbb{X}$ satisfying all properties in \\Cref{lm:Clustering}, and such that there is a significant reduction in the number of non-virtual clusters as claimed in \\Cref{lm:NewConstruction}. To guarantee this additional constraint, we rely on a specific structure of $\\mathcal{G}_i$ described in the following lemma, which is an analogue of the cycle property of the minimum spanning tree.\n\n\n\n\\begin{lemma}\\label{lm:cycle-Prop} Let $\\mathbf{e} = (\\varphi_1,\\varphi_2)$ be any edge in $\\mathcal{E}_i$, and let $\\mathcal{Z}$ be the fundamental cycle of $\\widetilde{\\mst}_i$ formed by $\\mathbf{e}$. For any virtual node $\\varphi \\in \\mathcal{Z}$, $w(e_{\\varphi}) \\leq \\omega(\\mathbf{e})$, where $e_{\\varphi}$ is the parent edge of $\\varphi$. 
\n\\end{lemma}\n\\begin{proof}\n\tRecall that $\\tilde{G}_{\\text{heavy}}$ is obtained from $G_{\\text{heavy}}$ by subdividing $\\texttt{MST}$ edges, and that $G_{\\text{heavy}} = (V, E_{\\text{heavy}}\\cup E(\\texttt{MST}))$. Let $e$ be the edge in $G_{\\text{heavy}}$ corresponding to $\\mathbf{e}$. We construct a cycle $\\tilde{Z}$ of $\\tilde{G}_{\\text{heavy}}$ from $\\mathcal{Z}$ as follows. Write $$\\mathcal{Z} = (\\varphi_{C_0},\\mathbf{e}_0,\\varphi_{C_1},\\mathbf{e}_1,\\ldots,\\varphi_{C_{k-1}}, \\mathbf{e}_{k-1}, \\varphi_{C_0})$$ as an alternating sequence of nodes and edges that starts from and ends at the same node $\\varphi_{C_0}$. (See \\Cref{fig:cycleProp}(a) and (b) for an illustration.) For notational convenience, we regard the last node $\\varphi_{C_0}$ as $\\varphi_{C_k}$, with subscripts taken modulo $k$. Let $(u_i,v_i)$ be the edge in $\\tilde{G}_{\\text{heavy}}$ corresponding to $\\mathbf{e}_i$, and let $Q_i$ be the shortest path from $v_{(i-1)\\pmod k}$ to $u_i$ in $H^{\\sigma}_{\\leq i}[C_i]$ for any $0\\leq i \\leq k-1$. Then $\\tilde{Z} = (u_0,v_0)\\circ Q_1\\circ (u_1,v_1) \\circ Q_2 \\circ\\ldots \\circ (u_{k-1},v_{k-1})\\circ Q_{0}$ is a cycle of $\\tilde{G}_{\\text{heavy}}$; here $\\circ$ is the path concatenation operator. Observe that $\\tilde{Z}$ contains the parent path $P_{e_{\\varphi}}$ of $\\varphi$. Let $Z$ be the cycle of $G_{\\text{heavy}}$ obtained from $\\tilde{Z}$ by replacing each subdivided path, say $P_{e'}$, in $\\tilde{Z}$ with the corresponding $\\texttt{MST}$ edge $e'$; see Figure~\\ref{fig:cycleProp}(d). Note that both $e$ and $e_{\\varphi}$ belong to $Z$. \n\t\t\n\tObserve by property \\hyperlink{P3L}{(P3)} that $\\mathsf{Dm}(H^{\\sigma}_{\\leq i}[C_i]) \\leq g\\epsilon L_i < L_i\/(1+\\epsilon) \\leq \\omega(\\mathbf{e}) = w(e)$ when $\\epsilon \\leq 1\/(2g)$. Thus, the weight of any non-$\\texttt{MST}$ edge in $Z$ is at most $w(e)$. That is, any edge of weight larger than $w(e)$ in $Z$ must be an $\\texttt{MST}$ edge. 
If there exists such an edge, then the edge of maximum weight in $Z$ is an $\\texttt{MST}$ edge, contradicting the cycle property of $\\texttt{MST}$. Thus, $e$ is an edge of maximum weight in $Z$, which gives $w(e_{\\varphi}) \\leq w(e) = \\omega(\\mathbf{e})$ as claimed. \\mbox{}\\hfill $\\Box$\\\\\n\\end{proof} \n\n\n\nNote by definition that a non-isolated node is a non-virtual node. We say that a subgraph $\\mathcal{X}$ is \\emph{good} if either it contains no non-isolated node, or it contains a non-isolated node and has at least two non-virtual nodes (one of which is the non-isolated node). If every subgraph in $\\mathbb{X}$ is good, then we can show that the number of non-virtual clusters is reduced by at least $|\\mathcal{Y}_i|\/2$. In the LS construction, which has five steps, only subgraphs formed in Steps 2 and 5 (more precisely, Step 5B) may not be good. For Step 5B, we only need to make a minor modification and argue that the resulting subgraph is good using \\Cref{lm:cycle-Prop}. For Step 2, we need an entirely different construction. As a result, our construction also has five steps. Steps 1, 3, 4, and 5A are the same as in the LS construction, and are taken verbatim from~\\cite{LS21} for completeness. 
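The argument above rests on the classical cycle property of minimum spanning trees: on any cycle, an edge of strictly maximum weight cannot belong to the MST. As a hedged, self-contained Python check on a toy graph (our own names; not the paper's construction), one can build an MST with Kruskal's algorithm and verify that the maximum weight on the fundamental cycle of a non-tree edge is attained by that non-tree edge:

```python
def kruskal_mst(n, edges):
    """Kruskal's algorithm with union-find; edges are (weight, u, v)."""
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    mst = set()
    for w, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            mst.add((w, u, v))
    return mst

def tree_path(n, mst, s, t):
    """Weights of the edges on the unique s-t path in the tree `mst`."""
    adj = [[] for _ in range(n)]
    for w, u, v in mst:
        adj[u].append((v, w))
        adj[v].append((u, w))
    stack = [(s, -1, [])]
    while stack:
        x, par, ws = stack.pop()
        if x == t:
            return ws
        for y, w in adj[x]:
            if y != par:
                stack.append((y, x, ws + [w]))
    return []

# Cycle 0-1-2-3-0: tree edges of weights 1, 2, 3 plus a heavy non-tree edge.
edges = [(1, 0, 1), (2, 1, 2), (3, 2, 3), (10, 0, 3)]
mst = kruskal_mst(4, edges)
# Every tree edge on the fundamental cycle of the non-tree edge (weight 10)
# is lighter than that non-tree edge: the cycle property.
assert max(tree_path(4, mst, 0, 3)) < 10
```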
Notation introduced in this section is summarized in the following table.\n\n\\renewcommand{\\arraystretch}{1.3}\n\\begin{longtable}{| l | l|} \n\t\\hline\n\t\\textbf{Notation} & \\textbf{Meaning} \\\\ \\hline\n\t$E_{\\text{light}}$ &$ \\{e \\in E(G) : w(e)\\le w\/\\epsilon\\}$\\\\ \\hline \n\t$E_{\\text{heavy}}$ & $E(G) \\setminus E_{\\text{light}}$ \\\\\\hline\n\t$E^{\\sigma} $ & $\\bigcup_{i \\in \\mathbb{N}^{+}}E_{i}^{\\sigma}$\\\\\\hline\n\t$E_{i}^{\\sigma} $ & $\\{e \\in E(G) : \\frac{L_i}{1+\\epsilon} < w(e) \\le L_i\\}$\\\\\\hline\n\t$H^\\sigma_i$ & A spanner constructed for edges in $E^{\\sigma}_i$\\\\ \\hline\t\n\t$H^\\sigma_{\\leq i}$ & $H^\\sigma_{\\leq i} =\\cup_{j\\leq i}H^{\\sigma}_j$\\\\ \\hline\n\t$g$ & constant in \\hyperlink{P3L}{property (P3)}, $g = 42$ \\\\\\hline\n\tNon-virtual cluster& A cluster containing at least one non-virtual vertex\\\\\\hline\n\tNon-virtual node& A node in $\\mathcal{V}_i$ corresponding to a non-virtual cluster\\\\\\hline\n\t$\\mathcal{N}_i$ & the set of non-virtual clusters (nodes) at level $i$\\\\\\hline\n\t$\\mathcal{M}_i$ & the set of virtual clusters (nodes) at level $i$\\\\\\hline\n\tNon-isolated cluster& A cluster containing an endpoint of an edge added to $H^\\sigma_i$\\\\ \\hline \n\tNon-isolated node& A node in $\\mathcal{V}_i$ corresponding to a non-isolated cluster\\\\\\hline\n\t$\\mathcal{Y}_i$ & the set of non-isolated clusters (nodes) at level $i$; $\\mathcal{Y}_i\\subseteq \\mathcal{N}_i$\\\\\\hline\n\t$\\mathcal{G}_i = (\\mathcal{V}_i, \\widetilde{\\mst}_{i} \\cup \\mathcal{E}_i, \\omega)$ & cluster graph \\\\\\hline\n\t$\\mathcal{E}_i$ & corresponds to a subset of edges of $E^{\\sigma}_i$\\\\\\hline\n\t$\\mathbb{X}$ & a collection of subgraphs of $\\mathcal{G}_i$\\\\\\hline\n\t$\\mathcal{X}, \\mathcal{V}(\\mathcal{X}), \\mathcal{E}(\\mathcal{X})$ & a subgraph in $\\mathbb{X}$, its vertex set, and its edge set\\\\\\hline\n\tGood subgraph $\\mathcal{X}$& $\\mathcal{X}$ contains no non-isolated node or at least 
two non-virtual nodes \\\\\\hline\n\t$\\Phi_i$ & $\\sum_{C \\in \\mathcal{C}_i}\\Phi(C)$ \\\\\\hline\n\t$\\Delta_{i+1} $&$ \\Phi_i - \\Phi_{i+1}$\\\\\\hline\n\t$\\Delta_{i+1}(\\mathcal{X})$ & $(\\sum_{\\varphi_C\\in \\mathcal{V}(\\mathcal{X})}\\Phi(C) ) - \\Phi(C_{\\mathcal{X}})$\\\\\\hline\n\t$\\Delta_{i+1}^+(\\mathcal{X})$ & $\\Delta_{i+1}(\\mathcal{X}) + \\sum_{\\mathbf{e} \\in \\mathcal{E}(\\mathcal{X})\\cap \\widetilde{\\mst}_i}\\omega(\\mathbf{e})$\\\\\\hline\n\t$C_\\mathcal{X}$ & $\\bigcup_{\\varphi_C \\in \\mathcal{V}(\\mathcal{X})}C$ \\\\\\hline\n\t$\\{\\mathcal{V}^{\\mathsf{high}}_i,\\mathcal{V}^{\\mathsf{low}^+}_i,\\mathcal{V}^{\\mathsf{low}^-}_{i}\\}$ & a degree-specific partition of $\\mathcal{V}_i$ \\\\\\hline\n\t$\\{\\mathbb{X}^{\\mathsf{high}}, \\mathbb{X}^{\\mathsf{low}^+}, \\mathbb{X}^{\\mathsf{low}^-}\\}$ & a partition of $\\mathbb{X}$ conforming with a degree-specific partition \\\\\\hline\n\t\\caption{Notation introduced in this section}\n\t\\label{table:notation}\n\\end{longtable}\n\\renewcommand{\\arraystretch}{1}\n\n\n\n\n\n\\begin{lemma}[Step 1, Lemma 5.1~\\cite{LS21}]\\label{lm:Clustering-Step1} Let $\\mathcal{V}^{\\mathsf{high}}_i$ be the set of nodes incident to at least $2g\/\\epsilon$ edges in $\\mathcal{E}_i$, and let $\\mathcal{V}^{\\mathsf{high}+}_i$ be the set of all nodes in $\\mathcal{V}^{\\mathsf{high}}_i$ and their neighbors that are connected via edges in $\\mathcal{E}_i$. We can construct in $O(|\\mathcal{V}_i| + |\\mathcal{E}_i|)$ time a collection of node-disjoint subgraphs $\\mathbb{X}_1$ of $\\mathcal{G}_i$ such that:\n\t\\begin{enumerate}[noitemsep]\n\t\t\\item[(1)] Each subgraph $\\mathcal{X} \\in \\mathbb{X}_1$ is a tree.\n\t\t\\item[(2)] $\\cup_{\\mathcal{X} \\in \\mathbb{X}_1}\\mathcal{V}(\\mathcal{X}) = \\mathcal{V}^{\\mathsf{high}+}_i$.\n\t\t\\item[(3)] $L_i \\leq \\mathsf{Adm}(\\mathcal{X}) \\leq 13L_i$, assuming that $\\epsilon \\leq 1\/g$. 
\n\t\t\\item[(4)] $|\\mathcal{V}(\\mathcal{X})|\\geq \\frac{2g}{\\epsilon}$.\n\t\\end{enumerate}\n\\end{lemma}\n\n\n\n Let $\\widetilde{F}^{(2)}_i$ be the forest obtained from $\\widetilde{\\mst}_{i}$ by removing every node in $\\mathcal{V}^{\\mathsf{high}+}_i$ (defined in \\Cref{lm:Clustering-Step1}). The LS algorithm deals with branching nodes of $\\widetilde{F}^{(2)}_i$ in Step 2. We say that a node in a tree $\\widetilde{T}$ is \\emph{$\\widetilde{T}$-branching} if it has degree at least $3$ in $\\widetilde{T}$. A node in a forest $\\widetilde{F}$ is $\\widetilde{F}$-branching if it is $\\widetilde{T}$-branching in some tree $\\widetilde{T}$ of $\\widetilde{F}$. We will omit the prefixes $\\widetilde{T}$ and $\\widetilde{F}$ in the branching notation whenever the tree and the forest are clear from the context.\n\n Similar to the LS algorithm, our goal is to group all branching nodes of $\\widetilde{F}^{(2)}_i$ into subgraphs. However, we need to guarantee that the subgraphs formed in this step are good, which, a priori, is not guaranteed in the LS construction. \n\n\n\n\\begin{restatable}{lemma}{ClusteringStepTwo}\n\t\\label{lm:Clustering-Step2} We can construct in $O(|\\mathcal{V}_i|)$ time a collection $\\mathbb{X}_2$ of subtrees of $\\widetilde{F}^{(2)}_i$ and a subset of nodes $\\mathcal{Z}$ of $\\widetilde{F}^{(2)}_i$ such that, for every $\\mathcal{X} \\in \\mathbb{X}_2$:\n\t\\begin{enumerate}[noitemsep]\n\t\t\\item[(1)] $\\mathcal{X}$ is a tree, has an $\\mathcal{X}$-branching node, and is good.\n\t\t\\item[(2)] $L_i \\leq \\mathsf{Adm}(\\mathcal{X})\\leq 20L_i$.\n\t\t\\item[(3)] $|\\mathcal{V}(\\mathcal{X})| = \\Omega(\\frac{1}{\\epsilon})$ when $\\epsilon \\leq 2\/g$. \n\t\t\\item[(4)] Let $\\widetilde{F}^{(3)}_i$ be obtained from $\\widetilde{F}^{(2)}_i$ by removing every node contained in subgraphs of $\\mathbb{X}_2$ and in $\\mathcal{Z}$. 
Then, for every tree $\\widetilde{T} \\subseteq \\widetilde{F}^{(3)}_i$, either (4a) $\\mathsf{Adm}(\\widetilde{T})\\leq 6L_i$ or (4b) $\\widetilde{T}$ is a path.\n\t\t\\item[(5)] Nodes in $\\mathcal{Z}$ are augmented to subgraphs in $\\mathbb{X}_1$ such that for every subgraph $\\mathcal{Y} \\in \\mathbb{X}_1$ that is augmented, $\\mathcal{Y}^{\\mathsf{aug}}$ remains a tree and $\\mathsf{Adm}(\\mathcal{Y}^{\\mathsf{aug}}) \\leq 24L_i$, where $\\mathcal{Y}^{\\mathsf{aug}}$ is $\\mathcal{Y}$ after the augmentation. \n\t\\end{enumerate}\n\\end{restatable}\n\n\nOur construction of Step 2 differs from the construction in the LS algorithm in two ways. First, the graphs constructed are good. Second, for some edge cases where we could not group branching nodes into subgraphs satisfying Item (1), we show that the nodes could be augmented to subgraphs in $\\mathbb{X}_1$. These nodes are in $\\mathcal{Z}$ in Item (5), and our construction guarantees that the augmentation does not change the structure of subgraphs in $\\mathbb{X}_1$. That is, subgraphs in $\\mathbb{X}_1$ remain trees, and their diameters are not increased by much. The increase in the diameter from $13 L_i$ in \\Cref{lm:Clustering-Step1} to $24L_i$ in Item (5) in \\Cref{lm:Clustering-Step2} does not affect the overall argument of Le and Solomon~\\cite{LS21}; it only affects the choice of $g$, which we have the freedom to choose as large as we want. The augmented diameter of $\\mathcal{X}$ in Item (2) in \\Cref{lm:Clustering-Step2} is also slightly larger than the diameter of subgraphs in~\\cite{LS21}, which is at most $2L_i$. This change also only affects the choice of $g$. The proof of \\Cref{lm:Clustering-Step2} is deferred to \\Cref{subsec:proof}.\n\n\n\\paragraph{Step 3: Augmenting $\\mathbb{X}_1\\cup \\mathbb{X}_2$.~} We say that a path of augmented diameter at least $6L_i$ in the forest $\\widetilde{F}^{(3)}_i$ in Item (4) of \\Cref{lm:Clustering-Step2} is a \\emph{long path}. 
In this step, we further augment graphs formed in Steps 1 and 2. The purpose is to guarantee that for any long path after this step, at least one endpoint of the path is connected to a node in a subgraph of $\\mathbb{X}_1\\cup \\mathbb{X}_2$ via an $\\widetilde{\\mst}_{i}$ edge. \n\\begin{quote}\n\\textbf{The construction.~} Let $\\mathcal{A}$ be the set of all nodes in long paths of $\\widetilde{F}^{(3)}_i$ that are $\\widetilde{\\mst}_{i}$-branching. For each node $\\varphi \\in \\mathcal{A}$, let $\\mathcal{X} \\in \\mathbb{X}_1\\cup \\mathbb{X}_2$ be (any) subgraph such that $\\varphi$ is connected to a node in $\\mathcal{X}$ via an $\\widetilde{\\mst}_{i}$ edge $\\mathbf{e}$. We then add $\\varphi$ and $\\mathbf{e}$ to $\\mathcal{X}$. \n\\end{quote}\n\n\n\\begin{lemma}[Lemma 5.3.~\\cite{LS21}]\\label{lm:Clustering-Step3} The augmentation in Step 3 can be implemented in $O(|\\mathcal{V}_i|)$ time, and increases the augmented diameter of each subgraph in $\\mathbb{X}_1\\cup \\mathbb{X}_2$ by at most $4L_i$ when $\\epsilon \\leq 1\/g$. \\\\\n\tFurthermore, let $\\widetilde{F}^{(4)}_i$ be the forest obtained from $\\widetilde{F}^{(3)}_i$ by removing every node in $\\mathcal{A}$. Then, for every tree $\\widetilde{T} \\subseteq \\widetilde{F}^{(4)}_i$, either:\n\t\\begin{enumerate}[noitemsep]\n\t\t\\item[(1)]$\\mathsf{Adm}(\\widetilde{T})\\leq 6L_i$ or\n\t\t\\item[(2)] $\\widetilde{T}$ is a path such that (2a) every node in $\\widetilde{T}$ has \\emph{degree at most $2$} in $\\widetilde{\\mst}_{i}$ and (2b) at least one endpoint $\\varphi$ of $\\widetilde{T}$ is connected via an $\\widetilde{\\mst}_{i}$ edge to a node $\\varphi'$ in a subgraph of $\\mathbb{X}_1\\cup \\mathbb{X}_2$, unless $\\mathbb{X}_1\\cup \\mathbb{X}_2 = \\emptyset$. \n\t\\end{enumerate}\n\\end{lemma}\n\n\nWe emphasize that in Item (2a) of \\Cref{lm:Clustering-Step3}, the degree bound is in $\\widetilde{\\mst}_{i}$. This is important for the construction in Step 5. 
Step 4 deals with long paths of $\\widetilde{F}^{(4)}_i$, the forest in \\Cref{lm:Clustering-Step3}. The construction uses Red\/Blue Coloring. The coloring guarantees that for any long path in $\\widetilde{F}^{(4)}_i$, the nodes in the prefix\/suffix of augmented length at most $L_i$ are colored red, while all other nodes are colored blue. \n\n\\begin{quote}\n\t\\textbf{Red\/Blue Coloring.~}\\hypertarget{RBColoring}{} The coloring applies to each long path $\\widetilde{P} \\in \\widetilde{F}^{(4)}_i$. Specifically, a node is colored red if its augmented distance to at least one of the two endpoints of $\\widetilde{P}$ is at most $L_i$; otherwise, it is colored blue. \n\\end{quote}\n\n\n\\begin{lemma}[Step 4, Lemma 5.4~\\cite{LS21}]\\label{lm:Clustering-Step4} We can construct in $O((|\\mathcal{V}_i| + |\\mathcal{E}_i|)\\epsilon^{-1})$ time a collection $\\mathbb{X}_4$ of subgraphs of $\\mathcal{G}_i$ such that every $\\mathcal{X}\\in \\mathbb{X}_4$ satisfies:\n\t\\begin{enumerate}[noitemsep]\n\t\t\\item[(1)] $\\mathcal{X}$ contains a single edge in $\\mathcal{E}_i$.\n\t\t\\item[(2)] $L_i \\leq \\mathsf{Adm}(\\mathcal{X})\\leq 5L_i$.\n\t\t\\item[(3)] $|\\mathcal{V}(\\mathcal{X})| = \\Theta(\\frac{1}{\\epsilon})$ when $\\epsilon \\ll \\frac{1}{g}$. \n\t\t\\item[(4)] $\\Delta_{i+1}^{+}(\\mathcal{X}) = \\Omega(\\epsilon^2 |\\mathcal{V}(\\mathcal{X})| L_i)$.\n\t\t\\item[(5)] Let $\\widetilde{F}^{(5)}_i$ be obtained from $\\widetilde{F}^{(4)}_i$ by removing every node contained in subgraphs of $\\mathbb{X}_4$. If we apply \\hyperlink{RBColoring}{Red\/Blue Coloring} to each path of augmented diameter at least $6L_i$ in $\\widetilde{F}^{(5)}_i$, then there is no edge in $\\mathcal{E}_i$ that connects two blue nodes in $\\widetilde{F}^{(5)}_i$.\n\t\\end{enumerate}\n\\end{lemma}\n\n\nItem (5) of \\Cref{lm:Clustering-Step4} guarantees that for any edge with one endpoint in a long path of $\\widetilde{F}^{(5)}_i$, at least one of the endpoints must be red. 
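The Red\/Blue Coloring above is simple enough to sketch in code. The following is a minimal illustration under our own assumptions (a long path given as separate node-weight and edge-weight lists; the function name and representation are ours, not the paper's implementation):

```python
def red_blue_coloring(node_weights, edge_weights, L):
    """Color node j 'red' if its augmented distance to either endpoint of the
    path is at most L, and 'blue' otherwise.  The augmented distance between
    two nodes is taken here to be the total weight of all nodes and edges on
    the subpath connecting them (endpoints included)."""
    n = len(node_weights)
    # from_left[j]: augmented distance from the left endpoint to node j.
    from_left = [node_weights[0]]
    for j in range(1, n):
        from_left.append(from_left[-1] + edge_weights[j - 1] + node_weights[j])
    # from_right[j]: augmented distance from node j to the right endpoint.
    from_right = [0] * n
    from_right[n - 1] = node_weights[n - 1]
    for j in range(n - 2, -1, -1):
        from_right[j] = from_right[j + 1] + edge_weights[j] + node_weights[j]
    return ['red' if min(from_left[j], from_right[j]) <= L else 'blue'
            for j in range(n)]
```

On a unit-weight path of five nodes with $L = 3$, only the middle node is blue, matching the prefix\/suffix picture above.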
$\\widetilde{F}^{(5)}_i$ has the following structure.\n\n\\begin{observation}[Observation 5.7~\\cite{LS21}]\\label{obs:Clustering-F5} Every tree $\\widetilde{T} \\subseteq \\widetilde{F}^{(5)}_i$ of augmented diameter at least $6L_i$ is connected via an $\\widetilde{\\mst}_{i}$ edge to a node in some subgraph $\\mathcal{X} \\in \\mathbb{X}_1 \\cup \\mathbb{X}_2\\cup \\mathbb{X}_4$, unless there is no subgraph formed in Steps 1-4, i.e., $ \\mathbb{X}_1 \\cup \\mathbb{X}_2\\cup \\mathbb{X}_4 = \\emptyset$.\n\\end{observation}\n\n\nWe observe that any tree $\\widetilde{T} \\subseteq \\widetilde{F}^{(5)}_i$ of augmented diameter at least $6L_i$ must be a path, and that, by Item (2a) in \\Cref{lm:Clustering-Step3}, only endpoints of $\\widetilde{T}$ could have an edge in $\\widetilde{\\mst}_{i}$ to a node outside $\\widetilde{T}$. We call such an endpoint a \\emph{connecting endpoint} of $\\widetilde{T}$. Note that $\\widetilde{T}$ could have up to two connecting endpoints. \n\nStep 5 consists of two substeps. In Step 5A, we augment trees of $\\widetilde{F}^{(5)}_i$ of low augmented diameter to existing subgraphs. In Step 5B, we form new subgraphs from long paths, and augment the prefix\/suffix to an existing subgraph formed in previous steps. \n\n\n\\paragraph{Step 5.~} Let $\\widetilde{T}$ be a tree in $\\widetilde{F}^{(5)}_i$ obtained by Item (5) of \\Cref{lm:Clustering-Step4}. We construct two sets of subgraphs, denoted by $\\mathbb{X}^{\\mathsf{intrnl}}_5$ and $\\mathbb{X}^{\\mathsf{pref}}_5$.\n\\begin{itemize}\n\t\\item (Step 5A)\\hypertarget{5A}{} If $\\widetilde{T}$ has augmented diameter at most $6L_i$, let $\\mathbf{e}$ be an $\\widetilde{\\texttt{MST}}_i$ edge connecting $\\widetilde{T}$ and a node in some subgraph $\\mathcal{X} \\in \\mathbb{X}_1\\cup \\mathbb{X}_2 \\cup \\mathbb{X}_4$, assuming that $\\mathbb{X}_1\\cup \\mathbb{X}_2 \\cup \\mathbb{X}_4 \\not= \\emptyset$. 
We add both $\\mathbf{e}$ and $\\widetilde{T}$ to $\\mathcal{X}$.\n\t\\item (Step 5B)\\hypertarget{5B}{} \tOtherwise, $\\widetilde{T}$ is a path. We break $\\widetilde{T}$ into subpaths of augmented diameter at least $L_i$ and at most $7L_i$ by applying the construction in \\Cref{lm:Clustering-Step5B} below. For any subpath $\\widetilde{P}$ broken from $\\widetilde{T}$, if $\\widetilde{P}$ is connected to a node in a subgraph $\\mathcal{X}$ via an edge $\\mathbf{e}\\in \\widetilde{\\mst}_{i}$, we add $\\widetilde{P}$ and $\\mathbf{e}$ to $\\mathcal{X}$; \totherwise, $\\widetilde{P}$ becomes a new subgraph. We add $\\widetilde{P}$ to $\\mathbb{X}^{\\mathsf{pref}}_5$ if it is a prefix\/suffix of $\\widetilde{T}$; otherwise, we add $\\widetilde{P}$ to $\\mathbb{X}^{\\mathsf{intrnl}}_5$. \n\\end{itemize}\n\n\n\\begin{figure}[!h]\n\t\\begin{center}\n\t\t\\includegraphics[width=1.0\\textwidth]{figs\/Step5B}\n\t\\end{center}\n\t\\caption{An illustration for the proof of \\Cref{lm:Clustering-Step5B}. Small circles are virtual vertices; black (solid) edges are $\\widetilde{\\mst}$ edges and red (dashed) edges are edges in $\\mathcal{E}_i$. (a) Non-isolated nodes in $\\widetilde{P}$ are those incident to red edges. Nodes grouped in previous steps are in the blue-shaded region. The path $Q_e$ corresponding to an edge $\\mathbf{e}$ in $\\mathcal{P}$ is highlighted. $Q_e$ is a subpath of the parent path $P_e$ of the virtual clusters in the construction of $\\mathbf{e}$. (b) The path $\\mathcal{P}$ obtained from $\\widetilde{P}$ in figure (a) by the construction in the proof of \\Cref{lm:Clustering-Step5B}; the only virtual node in $\\mathcal{P}$ is the (connecting) endpoint of $\\mathcal{P}$. Suppose that every edge in $Q_e$ in figure (a) has weight $1$; then $\\mathbf{e}$ has weight $5$ since $Q_e$ has $5$ edges. In general, $\\omega(\\mathbf{e}) = w(Q_e)$. (c) Forest $\\mathcal{F}$ obtained from $\\mathcal{P}$ by removing every edge of weight at least $2L_i$. 
$\\mathbb{A}$ in this case includes two paths $\\mathcal{Q}_1$ and $\\mathcal{Q}_2$. (d) Two paths $\\tilde{Q}_1$ and $\\tilde{Q}_2$ in $\\mathbb{P}$ constructed from $\\mathcal{Q}_1$ and $\\mathcal{Q}_2$, respectively. Two other paths $\\tilde{R}_1$ and $\\tilde{R}_2$ are broken from the path $\\tilde{R}$ in (a).}\n\t\\label{fig:Step5B}\n\\end{figure}\n\n\\begin{restatable}{lemma}{ClusteringLastStep}\n\t\\label{lm:Clustering-Step5B} Let $\\widetilde{P}$ be a path of augmented diameter at least $6L_i$ in $\\widetilde{F}^{(5)}_i$. We can break $\\widetilde{P}$ into a collection of paths $\\mathbb{P}$ such that each path $\\widetilde{P}' \\in \\mathbb{P}$ has two properties:\n\t\\begin{enumerate}[noitemsep]\n\t\t\\item[(1)] $L_i \\leq \\mathsf{Adm}(\\widetilde{P}')\\leq 7L_i$.\n\t\t\\item[(2)] If $\\widetilde{P}'$ contains a non-isolated node, then it contains at least two non-virtual nodes, or a connecting endpoint of $\\widetilde{P}$.\n\t\\end{enumerate}\n\tThe running time of the construction is $O(|\\mathcal{V}(\\widetilde{P})|)$. \n\\end{restatable}\n\\begin{proof}\n\tRecall that by Item (2a) in \\Cref{lm:Clustering-Step3}, every node in $\\widetilde{P}$ has degree at most $2$ in $\\widetilde{\\mst}_{i}$. This means that if an endpoint of $\\widetilde{P}$ is non-connecting, then it is a non-virtual node. Recall that, by the definition of a virtual node $\\varphi_C$, its corresponding cluster $C$ is virtual, and hence, structurally, $C$ induces a subpath of the parent path $P_e$.\n\t\n\tWe construct a path graph $\\mathcal{P}$ from $\\widetilde{P}$ that contains the non-virtual nodes and the endpoints of $\\widetilde{P}$, as follows.\tEach edge $\\mathbf{e} = (\\varphi,\\varphi') \\in \\mathcal{P}$ corresponds to a path between $\\varphi$ and $\\varphi'$ in $\\widetilde{P}$ whose internal nodes are virtual. Note that all virtual nodes on the path between $\\varphi$ and $\\varphi'$ in $\\widetilde{P}$ share the same parent path $P_e$. 
Let $Q_e$ be the minimal subpath of $P_e$ whose endpoints are in the clusters corresponding to $\\varphi$ and $\\varphi'$. We then assign a weight $\\omega(\\mathbf{e}) = w(Q_e)$. Observe that $\\omega(\\mathbf{e})\\leq w(P_e) = w(e)$ where $e$ is the $\\texttt{MST}$ edge from which $P_e$ is subdivided. See \\Cref{fig:Step5B}(a) and (b) for an illustration.\n\t\n\tNote by Item (2a) of \\Cref{lm:Clustering-Step3}, every node in $\\widetilde{P}$ has degree at most $2$ in $\\widetilde{\\mst}_{i}$. If $\\varphi$ is a non-isolated node in $\\widetilde{P}$, then it is incident to an edge, say $\\mathbf{e}'$, in $\\mathcal{E}_i$ by definition. One of the incident edges of $\\varphi$ is part of the fundamental cycle of $\\widetilde{\\mst}_{i}$ formed by $\\mathbf{e}'$. It follows from \\Cref{lm:cycle-Prop} that at least one edge of $\\varphi$ in $\\mathcal{P}$ must have weight at most $L_i$. \n\t\n\t\n\tLet $\\mathcal{F}$ be the forest induced by edges of weight at most $2L_i$ in $\\mathcal{P}$. We further remove singletons from $\\mathcal{F}$. Observe that a singleton in $\\mathcal{F}$ is either a connecting endpoint of $\\mathcal{P}$, or an isolated node. We then greedily break each path in $\\mathcal{F}$ that contains at least three edges into subpaths of at least two edges and at most three edges each. As a result, we obtain a collection $\\mathbb{A}$ of subpaths of $\\mathcal{P}$ that contain at least two nodes each. See \\Cref{fig:Step5B}(c). \n\t\n\tWe now construct $\\mathbb{P}$ as follows. (Step 1) For each path $\\mathcal{Q}\\in \\mathbb{A}$, we construct the corresponding subpath $\\widetilde{Q}$ of $\\widetilde{P}$ by replacing each edge in $\\mathcal{Q}$ by the corresponding subpath in $\\widetilde{P}$. We then add $\\widetilde{Q}$ to $\\mathbb{P}$. (Step 2) After Step 1, the remaining nodes in $\\widetilde{P}$ that are not grouped to a path in $\\mathbb{P}$ induce a collection of subpaths, say $\\mathbb{Q}$, of $\\widetilde{P}$. 
Observe by the construction of $\\mathcal{F}$ that each subpath in the collection $\\mathbb{Q}$ corresponds to a subpath of $\\mathcal{P}$ that contains only virtual nodes and isolated nodes and has at least one edge of weight at least $2L_i$. Now for each path $\\tilde{R} \\in \\mathbb{Q}$, observe that $\\mathsf{Adm}(\\tilde{R})\\geq 2L_i - 2\\bar{w} - 2g\\epsilon L_i \\geq 2L_i - 4g\\epsilon L_i\\geq L_i$ when $\\epsilon \\leq 1\/4g$. The negative term $ - 2\\bar{w} - 2g\\epsilon L_i$ is due to the fact that the two nodes neighboring the endpoints of $\\tilde{R}$ are grouped to subpaths in $\\mathbb{P}$. We then break $\\tilde{R}$ into subpaths of augmented diameter at least $L_i$ and at most $2L_i$ and add them to $\\mathbb{P}$. This completes the construction of $\\mathbb{P}$. See Figure~\\ref{fig:Step5B}(d) for an illustration.\n\t\n\tThe running time follows directly from the construction. To bound the augmented diameter of paths in $\\mathbb{P}$, we observe that path $\\widetilde{Q}$ in Step 1 has augmented diameter at most $3(2L_i) + 4\\epsilon g L_i \\leq 7L_i$ when $\\epsilon \\leq 1\/4g$. The additive term $4\\epsilon g L_i$ is due to (at most) four endpoints of (at most) three edges in $\\widetilde{Q}$. Thus, every path in $\\mathbb{P}$ has an augmented diameter of at most $\\max\\{7L_i,2L_i\\} = 7L_i$. The lower bound $L_i$ follows directly from the construction; this implies Item (1). Item (2) follows from the construction of $\\mathbb{A}$. \\mbox{}\\hfill $\\Box$\\\\\n\\end{proof}\n\n\nWe note that in Step 5B of the LS algorithm, $\\widetilde{T}$ is broken into subpaths of augmented length at least $L_i$ and at most $2L_i$, instead of at least $L_i$ and at most $7L_i$ as in our construction. The increase in the augmented diameter ultimately affects the choice of $g$. 
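The greedy subdivision used repeatedly above (breaking a weighted path into pieces of augmented diameter at least $L_i$, with the upper bound following from the bound on individual node and edge weights) can be sketched as follows. The representation and the handling of the final remainder are simplifying assumptions of ours, not the paper's implementation:

```python
def break_weighted_path(node_weights, edge_weights, L):
    """Greedily break a weighted path into contiguous pieces, cutting as soon
    as the current piece's augmented diameter reaches L.  If every node and
    edge weight is much smaller than L (as guaranteed in the paper, where
    weights are at most g*eps*L_i), each piece overshoots L by at most one
    edge plus one node, so its diameter stays close to L.  Returns pieces as
    (first_node_index, last_node_index) pairs."""
    pieces, start = [], 0
    diam = node_weights[0]                  # augmented diameter so far
    for j in range(1, len(node_weights)):
        if diam >= L:                       # reached the lower bound: cut here
            pieces.append((start, j - 1))
            start, diam = j, node_weights[j]
        else:
            diam += edge_weights[j - 1] + node_weights[j]
    if pieces and diam < L:                 # short remainder: merge into last piece
        s, _ = pieces.pop()
        pieces.append((s, len(node_weights) - 1))
    else:
        pieces.append((start, len(node_weights) - 1))
    return pieces
```

For a unit-weight path this cuts pieces of a few nodes each and folds a too-short tail into the last piece; the paper's construction maintains the exact $[L_i, 2L_i]$ bounds with a more careful remainder analysis.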
Other properties of subgraphs in $ \\mathbb{X}_5^{\\mathsf{intrnl}}$ and $\\mathbb{X}_5^{\\mathsf{pref}}$ remain the same.\n\n\\begin{lemma}[Lemma 5.8~\\cite{LS21}]\\label{lm:Clustering-Step5} We can implement the construction of $ \\mathbb{X}_5^{\\mathsf{intrnl}}$ and $\\mathbb{X}_5^{\\mathsf{pref}}$ in $O(|\\mathcal{V}_i|)$ time.\tFurthermore, every subgraph $\\mathcal{X} \\in \\mathbb{X}_5^{\\mathsf{intrnl}} \\cup \\mathbb{X}_5^{\\mathsf{pref}}$ satisfies:\n\t\\begin{enumerate}[noitemsep]\n\t\t\\item[(1)] $\\mathcal{X}$ is a subpath of $\\widetilde{\\mst}_{i}$.\n\t\t\\item[(2)] $L_i \\leq \\mathsf{Adm}(\\mathcal{X})\\leq 7 L_i$.\n\t\t\\item[(3)] $|\\mathcal{V}(\\mathcal{X})| = \\Theta(\\frac{1}{\\epsilon})$.\n\t\\end{enumerate}\n\\end{lemma}\n\n\nWe note that the degenerate case in the above construction happens when $\\mathbb{X}_1 \\cup \\mathbb{X}_2\\cup \\mathbb{X}_4 = \\emptyset$. In this case, $\\widetilde{F}^{(5)}_i$ has the following structure. \n\n\\begin{lemma}[Lemma 5.10~\\cite{LS21}]\\label{lm:exception}\n\tIf $\\mathbb{X}_1\\cup \\mathbb{X}_2\\cup \\mathbb{X}_4 = \\emptyset$, then\n\t$\\widetilde{F}^{(5)}_i = \\widetilde{\\mst}_{i}$, and $\\widetilde{\\mst}_{i}$ is a single (long) path. Moreover, every edge $\\mathbf{e} \\in \\mathcal{E}_i$ must be incident to a node in $\\widetilde{P}_1\\cup \\widetilde{P}_2$,\n\twhere $\\widetilde{P}_1$ and $\\widetilde{P}_2$ are the prefix and suffix subpaths of $\\widetilde{\\mst}_{i}$ of augmented diameter at most $L_i$. Furthermore, $|\\mathcal{E}_i| = O(1\/\\epsilon^2)$.\n\\end{lemma}\n\nWe are now ready to prove \\Cref{lm:additional-prop}.\n\n\\paragraph{Proof of \\Cref{lm:additional-prop}.~} The degree-specific partition $\\mathbb{V}$ of $\\mathcal{V}_i$ and the partition of $\\mathbb{X}$ conforming to $\\mathbb{V}$ are constructed as follows. 
If the degenerate case happens, then $\\mathcal{V}^{\\mathsf{low}^-}_i = \\mathcal{V}_i$ (and hence $\\mathcal{V}^{\\mathsf{high}}_i = \\mathcal{V}^{\\mathsf{low}^+}_i = \\emptyset$). In this case, $\\mathbb{X}^{\\mathsf{low}^-} = \\mathbb{X}^{\\mathsf{intrnl}}_5\\cup \\mathbb{X}^{\\mathsf{pref}}_5$, while $\\mathbb{X}^{\\mathsf{high}} = \\mathbb{X}^{\\mathsf{low}^+} = \\emptyset$. Otherwise, \nwe define $\\mathcal{V}^{\\mathsf{high}}_i$ to be the set of all nodes that are incident to at least $2g\/\\epsilon$ edges in $\\mathcal{E}_i$ (as in \\Cref{lm:Clustering-Step1}), $\\mathcal{V}^{\\mathsf{low}^-}_i = \\cup_{\\mathcal{X} \\in \\mathbb{X}^{\\mathsf{intrnl}}_5}\\mathcal{V}(\\mathcal{X})$, and $\\mathcal{V}^{\\mathsf{low}^+}_i = \\mathcal{V}_i\\setminus (\\mathcal{V}^{\\mathsf{high}}_i \\cup \\mathcal{V}^{\\mathsf{low}^-}_i )$. The partition of $\\mathbb{X}$ is $\\{\\mathbb{X}^{\\mathsf{high}} =\\mathbb{X}_1$, $\\mathbb{X}^{\\mathsf{low}^+} = \\mathbb{X}_2\\cup \\mathbb{X}_4 \\cup \\mathbb{X}^{\\mathsf{pref}}_5$, $\\mathbb{X}^{\\mathsf{low}^-} = \\mathbb{X}^{\\mathsf{intrnl}}_5\\}$.\n\n\nWe note that Items (1) and (2) in \\Cref{lm:Clustering} hold by the same proof as in~\\cite{LS21}. For Item (3), subgraphs in $\\mathbb{X}$ satisfy all properties (\\hyperlink{P1'L}{P1'})-(\\hyperlink{P3'L}{P3'}) with the constant $g = 42$ instead of $31$, since the construction of Step 2 in \\Cref{lm:Clustering-Step2} increases the augmented diameter of subgraphs in $\\mathbb{X}_1$ by $11L_i$ (on top of the upper bound $31L_i$). We remark that the augmented diameter of other subgraphs is smaller than the augmented diameters of subgraphs in $\\mathbb{X}_1$, and hence, the increased diameter due to our construction does not affect $g$. 
The fact that $|\\mathcal{E}(\\mathcal{X})\\cap \\mathcal{E}_i| = O(|\\mathcal{A}_{\\mathcal{X}}|)$, where $\\mathcal{A}_{\\mathcal{X}}$ is the set of nodes in $\\mathcal{X}$ incident to an edge in $\\mathcal{E}(\\mathcal{X})\\cap \\mathcal{E}_i$, follows from the fact that $\\mathcal{X}$ is a tree in all cases except Step 4 (\\Cref{lm:Clustering-Step4}). However, in this case, $\\mathcal{X}$ has a single edge in $\\mathcal{E}_i$, and hence $|\\mathcal{E}(\\mathcal{X})\\cap \\mathcal{E}_i| \\leq 1 = O(|\\mathcal{A}_{\\mathcal{X}}|)$. \n\nIt remains to show the reduction in the number of non-virtual clusters as claimed in \\Cref{lm:additional-prop}. All we need to show is that every subgraph $\\mathcal{X}$ that contains a non-isolated node contains at least two non-virtual nodes; that is, $\\mathcal{X}$ is good. This holds for subgraphs in $\\mathbb{X}_1\\cup \\mathbb{X}_4$, since every subgraph in this set contains at least one edge in $\\mathcal{E}_i$, whose endpoints are non-isolated by the definition of a non-isolated node. Every subgraph in $\\mathbb{X}_2$ is good by Item (1) in \\Cref{lm:Clustering-Step2}. Observe that each subgraph $\\mathcal{X} \\in \\mathbb{X}^{\\mathsf{intrnl}}_5\\cup \\mathbb{X}^{\\mathsf{pref}}_5$ corresponds to a subpath of $\\widetilde{T}$ in Step 5B that does not contain the connecting endpoint. By Item (2) in \\Cref{lm:Clustering-Step5B}, if $\\mathcal{X}$ contains at least one non-isolated node, then it contains at least two non-virtual nodes, and hence $\\mathcal{X}$ is good. \\Cref{lm:additional-prop} now follows. \\mbox{}\\hfill $\\Box$\\\\\n\n\n\n\n\\subsubsection{Proof of \\Cref{lm:Clustering-Step2}}\\label{subsec:proof}\nIn this section, we provide the proof of \\Cref{lm:Clustering-Step2}, which we restate below.\n\n\n\\begin{figure}[h]\n\t\\begin{center}\n\t\t\\includegraphics[width=0.8\\textwidth]{figs\/Step2}\n\t\\end{center}\n\t\\caption{Virtual clusters are yellow shaded and non-virtual clusters are green shaded. 
Virtual vertices are small circles. (a) A tree $\\widetilde{T}$ considered in the construction of Step 2 (\\Cref{lm:Clustering-Step2}). (b) The tree $\\mathcal{T}$ constructed from non-virtual nodes and connecting nodes of $\\widetilde{T}$ in the proof of \\Cref{lm:Clustering-Step2}. If every edge in the path $Q_e$ has weight $1$ as in figure (a), then the weight of $\\mathbf{e}$ in figure (b) is $5$. In general, $\\omega(\\mathbf{e}) = w(Q_e)$. Every virtual node in $\\mathcal{T}$ is a connecting node. Subgraphs in the rectangular dashed curves are subgraphs formed in previous steps.}\n\t\\label{fig:Step2}\n\\end{figure}\n\n\n\\ClusteringStepTwo*\n\\begin{proof}\nLet $\\widetilde{T}$ be a tree of augmented diameter at least $6L_i$ in $\\widetilde{F}^{(2)}_i$. We say that a node $\\varphi \\in \\widetilde{T}$ is a connecting node if it has an $\\widetilde{\\mst}_{i}$ edge to a subgraph $\\mathcal{X} \\in \\mathbb{X}_1$.\n\nWe now construct a tree $\\mathcal{T}$ in the same way we construct a path $\\mathcal{P}$ in \\Cref{lm:Clustering-Step5B}. $\\mathcal{T}$ is a tree that contains the non-virtual nodes and the connecting nodes of $\\widetilde{T}$; the latter may or may not be virtual. Note that branching nodes of $\\widetilde{T}$ are non-virtual. Each edge $\\mathbf{e} = (\\varphi,\\varphi') \\in \\mathcal{T}$ corresponds to a path between $\\varphi$ and $\\varphi'$ in $\\widetilde{T}$ whose internal nodes are virtual. Note that all virtual nodes on the path between $\\varphi$ and $\\varphi'$ in $\\widetilde{T}$ share the same parent path $P_e$. Let $Q_e$ be the minimal subpath of $P_e$ whose endpoints are in the clusters corresponding to $\\varphi$ and $\\varphi'$. We then assign a weight $\\omega(\\mathbf{e}) = w(Q_e)$. Observe that $\\omega(\\mathbf{e})\\leq w(P_e) = w(e)$ where $e$ is the $\\texttt{MST}$ edge from which $P_e$ is subdivided. 
See Figure~\\ref{fig:Step2} for an illustration.\n\n\n\\begin{claim}\\label{clm:T-Prop} If a node $\\varphi$ in $\\widetilde{T}$ is non-isolated and non-connecting, then $\\varphi$ is incident to an edge of weight at most $L_i$ in $\\mathcal{T}$.\n\\end{claim}\n\\begin{proof}\n\tBy the definition of a non-isolated node, $\\varphi$ is incident to an edge, say $\\mathbf{e}'$, in $\\mathcal{E}_i$. One of the incident edges of $\\varphi$ belongs to the fundamental cycle of $\\widetilde{\\mst}_{i}$ formed by $\\mathbf{e}'$. It follows from \\Cref{lm:cycle-Prop} that at least one edge of $\\varphi$ in $\\mathcal{T}$ must have weight at most $\\omega(\\mathbf{e}')\\leq L_i$. \\mbox{}\\hfill $\\Box$\\\\\n\\end{proof}\n\n\nWe first apply the following construction to obtain a collection of trees, say $\\mathbb{A}$, and then we will post-process the trees to obtain $\\mathbb{X}_2$ as claimed in \\Cref{lm:Clustering-Step2}. We say that a tree $\\widetilde{T}$ in $\\widetilde{F}^{(2)}_i$ is a long tree if its augmented diameter is at least $6L_i$. The construction of $\\mathbb{A}$ is similar to Step 2 in the LS algorithm, except that the radius of the BFS step in our construction is slightly larger. \n\n\\begin{itemize}\n\t\\item (Step i) Pick a long tree $\\widetilde{T}$ of $\\widetilde{F}^{(2)}_i$ with at least one $\\widetilde{T}$-branching node, say $\\varphi$. If $\\widetilde{T}$ has a $\\widetilde{T}$-branching node that is non-isolated, we choose $\\varphi$ to be such a node. We traverse $\\widetilde{T}$ by BFS starting from $\\varphi$ and {\\em truncate} the traversal at nodes whose augmented distance from $\\varphi$ is at least $2L_i$. The augmented radius (with respect to the center $\\varphi$) of the subtree induced by the visited nodes is at least $L_i$ and at most $2L_i + \\bar{w} + g\\epsilon L_i \\leq 2L_i + 2g\\epsilon L_i$. We then create a new tree $\\widetilde{T}'$ induced by the visited nodes. 
\n\\end{itemize}\n\nAfter the construction in Step i, every remaining tree of $\\widetilde{F}^{(2)}_i$ either has augmented diameter at most $6L_i$ or is a path. \n\nAn important property that we would like to have is that every tree in $\\mathbb{A}$ either contains no non-isolated node or contains at least two non-virtual nodes. To this end, we need to post-process $\\mathbb{A}$. Our postprocessing relies on the following structure of trees in $\\mathbb{A}$.\n\n\n\\begin{figure}[h]\n\t\\begin{center}\n\t\t\\includegraphics[width=0.8\\textwidth]{figs\/Step2-Proof}\n\t\\end{center}\n\t\\caption{Two cases in the proof of \\Cref{clm:adj}. (a) $\\varphi'$ is a virtual node in $\\mathcal{T}$. Then it is a connecting node, and is grouped to $\\widetilde{T}'$. (b) $\\varphi'$ is a non-virtual node. Then it is grouped in $\\widetilde{T}''$ that is adjacent to $\\widetilde{T}'$. $\\widetilde{T}''$ contains at least two non-virtual nodes (3 non-virtual nodes in this figure).}\n\t\\label{fig:Step2-Proof} \n\\end{figure}\n\n\n\\begin{claim}\\label{clm:adj} Let $\\widetilde{T}' \\in \\mathbb{A}$ be a tree that contains exactly one non-isolated node, no connecting node, and no other non-virtual node. Then $\\widetilde{T}'$ is adjacent to a tree $\\widetilde{T}''\\in \\mathbb{A}$ that has at least two non-virtual nodes. \n\\end{claim}\n\\begin{proof}Let $\\varphi$ be the non-isolated node in $\\widetilde{T}'$. Observe that the center of $\\widetilde{T}'$ is a branching node, and hence, is non-virtual. It follows that $\\varphi$ must be the center of $\\widetilde{T}'$ since otherwise, $\\widetilde{T}'$ would contain two non-virtual nodes, contradicting the assumption of the claim. Let $\\varphi'$ be the neighbor in $\\mathcal{T}$ of $\\varphi$ whose edge $(\\varphi',\\varphi)$ has weight at most $L_i$ by \\Cref{clm:T-Prop}. By construction, the radius of the traversal is at least $2L_i > L_i + 2\\epsilon gL_i $ when $\\epsilon \\leq 1\/4g$. 
If $\\varphi'$ is a virtual node (see Figure~\\ref{fig:Step2-Proof}(a)), then it must be connecting, and hence $\\varphi'$ belongs to $\\widetilde{T}'$, contradicting that $\\widetilde{T}'$ has no connecting node. Otherwise, $\\varphi'$ is a non-virtual node and is grouped into another tree, say $\\widetilde{T}''\\in \\mathbb{A}$ (see Figure~\\ref{fig:Step2-Proof}(b)). Observe that $\\widetilde{T}'$ and $\\widetilde{T}''$ are adjacent, i.e., connected by an edge in $\\widetilde{\\mst}_{i}$, since all nodes between $\\varphi$ and $\\varphi'$ have degree 2 as they are virtual nodes. We claim that $\\widetilde{T}''$ must have at least two non-virtual nodes. If $\\varphi'$ is not a center of $\\widetilde{T}''$, then $\\widetilde{T}''$ contains at least two non-virtual nodes since its center is a non-virtual node. Otherwise, $\\varphi'$ is the center of $\\widetilde{T}''$, and hence, $\\varphi$ would have been merged into $\\widetilde{T}''$ during the construction of $\\widetilde{T}''$, a contradiction. \t\\mbox{}\\hfill $\\Box$\\\\\n\\end{proof}\n\n\\noindent Our construction in the next step is as follows.\n\n\\begin{itemize}\n\t\\item (Step ii) Pick a tree $\\widetilde{T}'$ in $\\mathbb{A}$ that has one non-isolated node and no other non-virtual node. If $\\widetilde{T}'$ contains a connecting node, say $\\varphi$, let $\\mathcal{Y}\\in \\mathbb{X}_1$ be a subgraph such that $\\varphi$ has an $\\widetilde{\\mst}_{i}$ edge $\\mathbf{e}$ to a node in $\\mathcal{Y}$. We then add $\\widetilde{T}'$ and $\\mathbf{e}$ to $\\mathcal{Y}$, and add the set of nodes of $\\widetilde{T}'$ to $\\mathcal{Z}$. Otherwise, $\\widetilde{T}'$ is adjacent to another tree $\\widetilde{T}''\\in \\mathbb{A}$ that has at least two non-virtual nodes by \\Cref{clm:adj}. We then add $\\widetilde{T}'$ and the $\\widetilde{\\mst}_{i}$ edge connecting $\\widetilde{T}'$ and $\\widetilde{T}''$ to $\\widetilde{T}''$. We then repeat this step until it no longer applies. 
The set $\\mathbb{X}_2$ is the set of trees in $\\mathbb{A}$ after this step is completed.\n\\end{itemize}\n\nWe now prove all properties in \\Cref{lm:Clustering-Step2}. Step i is the same as Step 2 in the LS algorithm and hence can be implemented in $O(|\\mathcal{V}_i|)$ time following~\\cite{LS21} (Lemma 5.2). Step ii can be implemented in $O(|\\mathcal{V}(\\widetilde{F}^{(2)}_i)|) = O(|\\mathcal{V}_i|)$ time by following each step of the construction. Thus, the total running time is $O(|\\mathcal{V}_i|)$.\n\n\nItem (1) and Item (4) of \\Cref{lm:Clustering-Step2} follow directly from the construction. By the construction in Step i, every tree has an augmented diameter at least $2L_i$ and at most $(2L_i + 2g\\epsilon L_i)$. The augmentation in Step ii is done in a star-like way, and hence, increases the diameter of each tree in $\\mathbb{A}$ by at most $2(2L_i + 2\\epsilon g L_i) + 2\\bar{w} \\leq 2(2L_i + 2\\epsilon g L_i) + 2g\\epsilon L_i = 4L_i + 6g\\epsilon L_i$. (Here we use the fact that $\\bar{w}\\leq L_{i-1} = \\epsilon L_i \\leq g\\epsilon L_i$.) Thus, the final diameter is at most $2L_i + 2\\epsilon g L_i + 4L_i + 6g\\epsilon L_i \\leq 6L_i + 8g\\epsilon L_i \\leq 14L_i < 20L_i $ when $\\epsilon \\leq 1\/g$; this implies Item (2) of \\Cref{lm:Clustering-Step2}.\n\nFor Item (3), note that each tree $\\mathcal{X} \\in \\mathbb{X}_2$ has augmented diameter at least $2L_i$, and that every edge\/node has a weight at most $\\max\\{\\bar{w}, g\\epsilon L_i\\} = g\\epsilon L_i$. It follows that $|\\mathcal{V}(\\mathcal{X})|\\geq \\frac{2L_i}{g\\epsilon L_i} = \\Omega(1\/\\epsilon)$, as claimed.\n\nFor Item (5), we observe that each subgraph $\\mathcal{Y}\\in \\mathbb{X}_1$ is augmented in Step ii via $\\widetilde{\\mst}_i$ edges in a star-like way. 
Thus, $\\mathsf{Adm}(\\mathcal{Y}^{\\mathsf{aug}})\\leq \\mathsf{Adm}(\\mathcal{Y}) + 2(2L_i + 2\\epsilon g L_i) + 2\\bar{w} \\leq \\mathsf{Adm}(\\mathcal{Y})+ 4L_i + 6\\epsilon g L_i \\leq 13 L_i+ 4L_i + 6g \\epsilon L_i \\leq 23L_i < 24L_i$ when $\\epsilon \\leq 1\/g$. This completes the proof of \\Cref{lm:Clustering-Step2}.\\mbox{}\\hfill $\\Box$\\\\\n\\end{proof}\n\n\n\\section{An $O(m\\alpha(m,n) + \\mathsf{SORT}(m))$-time Algorithm}\\label{sec:PointerMachine}\n\nIn this section, we prove the first item of~\\Cref{thm:1}. By scaling, we assume that the minimum edge weight is $1$. We partition the edge set $E$ into $\\mu_\\epsilon = \\log_{1+\\epsilon} \\left(\\frac{1}{\\epsilon}\\right) = \\Theta(\\frac{\\log(1\/\\epsilon)}{\\epsilon})$ sets $\\{E^{\\sigma}\\}_{1\\leq \\sigma \\leq \\mu_\\epsilon}$ such that each $E^{\\sigma}$ can be written as $E^{\\sigma} = \\cup_{i\\in \\mathbb{N}}E^{\\sigma}_i$ with: \n\\begin{equation} \\label{eq:EsigmaI}\n\tE^{\\sigma}_i = \\{e \\in E: \\frac{L_i}{1+\\epsilon} \\leq w(e) \\leq L_i\\} \\mbox{, where } L_i = L_0\/\\epsilon^{i}, \\; L_0 = (1+\\epsilon)^{\\sigma}, \\; i \\in \\mathbb{N}.\n\\end{equation} \nThus, for any edge set $E^{\\sigma}$, any two edge weights are either roughly the same (up to a factor of $1+\\epsilon$) or \nfar apart (separated by at least a factor of $1\/\\epsilon$).\nFor technical convenience, we shall define $L_{-1} = 0$.\n\nWe note that the time needed to compute the partition of $E$ into the sets $\\{E^{\\sigma}\\}_{1\\leq \\sigma \\leq \\mu_\\epsilon}$\nis upper bounded by $O(m + \\mathsf{SORT}(m')) = O(\\mathsf{SORT}(m))$, where $m'$ is the number of non-empty sets.\nIndeed, this computation can be carried out naively in linear time, except for the time needed to sort the {\\em indices} of the non-empty sets in $\\{E^{\\sigma}_i\\}_{1\\leq \\sigma \\leq \\mu_\\epsilon, i \\in \\mathbb{N}}$. 
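Under the simplifying assumption that $\mu_\epsilon$ is an integer (so that $1/\epsilon = (1+\epsilon)^{\mu_\epsilon}$ exactly, and every bound $L_i = (1+\epsilon)^{\sigma}/\epsilon^i$ equals $(1+\epsilon)^{\sigma + i\mu_\epsilon}$), the bucketing of edges into the sets $E^{\sigma}_i$ can be sketched as follows; all names are ours, for illustration only:

```python
import math

def partition_edges(edges, eps):
    """Bucket each edge (u, v, w) with w >= 1 into the pair (sigma, i).
    Every weight bound is a power of (1+eps), so the bucket of w is
    t = ceil(log_{1+eps} w), decomposed as t = sigma + i * mu_eps
    with sigma in [1, mu_eps] and level i >= 0."""
    mu = round(math.log(1.0 / eps, 1.0 + eps))   # mu_eps, assumed integral
    buckets = {}
    for (u, v, w) in edges:
        t = max(1, math.ceil(math.log(w, 1.0 + eps)))
        sigma = (t - 1) % mu + 1                 # which set E^sigma
        i = (t - sigma) // mu                    # which level inside it
        buckets.setdefault((sigma, i), []).append((u, v, w))
    return buckets
```

Within a fixed $\sigma$, consecutive levels differ by the factor $(1+\epsilon)^{\mu_\epsilon} = 1/\epsilon$, giving exactly the "roughly the same or far apart" weight structure described above.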
In the runtime analysis that follows we shall disregard this initial time investment, under the understanding that we include it in the final runtime bound.\n \nWe now construct a $(2k-1)(1+O(\\epsilon))$-spanner $H^{\\sigma}$ for each set $E^{\\sigma}$ with sparsity $O(n^{1\/k})$ in $O(m \\cdot \\alpha(m,n))$ time. A $(2k-1)(1+O(\\epsilon))$-spanner $H$ for $G$ with sparsity $O(n^{1\/k} \\cdot \\frac{\\log(1\/\\epsilon)}{\\epsilon})$ is then obtained as the union of all $H^{\\sigma}$'s: $H = \\cup_{1\\leq \\sigma \\leq \\mu_\\epsilon}H^{\\sigma}$, within time $O(m \\cdot \\alpha(m,n) \\cdot \\frac{\\log(1\/\\epsilon)}{\\epsilon})$.\n\nWe focus on the construction of $H^{\\sigma}$, for a fixed $\\sigma \\in [1,\\mu_{\\epsilon}]$. Initially $H^{\\sigma}_{0} = (V,\\emptyset)$. The construction is carried out in what we call \\emph{levels}: at level $i$, we shall construct a subgraph $H^{\\sigma}_i$ such that $H^{\\sigma}_{\\leq i}$ is a $(2k-1)(1+O(\\epsilon))$-spanner for the edge set $E^{\\sigma}_{\\leq i}$. Here $H^{\\sigma}_{\\leq i} = \\cup_{0\\leq j \\leq i}H^{\\sigma}_j$ and $E^{\\sigma}_{\\leq i} = \\cup_{0\\leq j \\leq i}E^{\\sigma}_j$. By induction, $H^{\\sigma} \\defi \\cup_{i\\geq 0}H^{\\sigma}_i$ provides a $(2k-1)(1+O(\\epsilon))$-spanner for the edge set $E^{\\sigma}$.
\nConsequently, $H = \\cup_{1\\leq \\sigma \\leq \\mu_\\epsilon}H^{\\sigma}$ will provide a\n$(2k-1)(1+O(\\epsilon))$-spanner for $E = \\bigcup_{1\\leq \\sigma \\leq \\mu_\\epsilon} E^{\\sigma}$,\nand, by~\\Cref{stretch:ob}, also for $G$.\nAll graphs $H^{\\sigma}_i$ share the same vertex set $V$ and hence are distinguished by their edge sets.\n\nA {\\em cluster} is a set of vertices.\nOur construction uses a {\\em hierarchical clustering}, where for each $i \\ge 0$, the construction at level $i$ is associated with a set of clusters\n$\\mathcal{C}_i$ such that: \n\\begin{itemize}[noitemsep]\n\\item \\textbf{(P1)~} \t\\hypertarget{P1}{}\n\tEach cluster $C\\in \\mathcal{C}_i$ is a subset of $V$. Furthermore, clusters in $\\mathcal{C}_i$ induce a partition of $V$.\n\\item \\textbf{(P2)~} \t\\hypertarget{P2}{}\nEach cluster $C\\in \\mathcal{C}_i$ induces a subgraph $H_{\\leq i}^{\\sigma}[C]$ of diameter $\\le gL_{i-1}$ for some constant $g$. \n\\end{itemize}\n\n$\\mathcal{C}_0$ is the set of $n$ singletons of $V$ and hence trivially satisfies both Properties (\\hyperlink{P1}{P1}) and (\\hyperlink{P2}{P2}) (recall that $L_{-1} = 0$). The cluster sets $\\{\\mathcal{C}_0, \\mathcal{C}_1, \\ldots\\}$ provide a {\\em hierarchy of clusters} $\\mathcal{H}$. In particular, for any $i \\ge 1$, $\\mathcal{C}_{i-1}$ is a {\\em refinement} of $\\mathcal{C}_i$: any cluster $C\\in \\mathcal{C}_i$ is the union of a subset of clusters in $\\mathcal{C}_{i-1}$.\n\n\n\\paragraph{Representing $\\mathcal{C}_i$ by Disjoint Sets.~} We shall use the classic \\textsc{Union-Find}~data structure~\\cite{Tarjan75} in our clustering procedure, for representing clusters in $\\mathcal{C}_i$, grouping subsets of clusters into larger clusters (via the \\textsc{Union} operation), and checking whether a pair of vertices belongs to the same cluster (via the \\textsc{Find} operation).
In particular, each cluster $C\\in \\mathcal{C}_i$ will have a {\\em representative} vertex, denoted by $r(C)$, that can be accessed from any vertex $v\\in C$ by calling $\\textsc{Find}(v)$; we define $r(v) := \\textsc{Find}(v)$. \nThe {\\em amortized} time per \\textsc{Union} or \\textsc{Find} operation is $O(\\alpha(a,b))$, where $a$ is the total number of \\textsc{Union} and \\textsc{Find} operations and $b$ is the number of vertices in the data structure. \n\n\n\n\\paragraph{Constructing $H^{\\sigma}_i$.~} We assume that $|E^{\\sigma}_i| > 0$; otherwise, we will skip the construction at level $i$ and set $\\mathcal{C}_{i+1} = \\mathcal{C}_i$. We say that a cluster at level $i$ is {\\em isolated} if none of its vertices is incident on any edge of $E^{\\sigma}_i$;\notherwise it is \\emph{non-isolated}. Let $\\mathcal{X}$ be the set of all non-isolated level-$i$ clusters. We say that two edges $(u,v)$ and $(u',v')$ in $E_i^{\\sigma}$ are {\\em parallel} if $r(u) = r(u')$ \n(i.e., $u$ and $u'$ are in the same level-$i$ cluster) and $r(v) = r(v')$ (i.e., $v$ and $v'$ are in the same level-$i$ cluster). We say that $(u,v)$ is a self-loop if $r(u) = r(v)$ (i.e., $u$ and $v$ are in the same level-$i$ cluster). Let $S_i$ be obtained from $E^{\\sigma}_i$ by removing from it all self-loops and keeping only the lightest edge in every maximal set of parallel edges of $E^{\\sigma}_i$;\nwe refer to the edges of $S_i$ as the {\\em source edges}. \n\nWe then construct an unweighted graph $R_i$, called the \\emph{representative graph}, as follows: $V(R_i) = \\{r(C): C\\in \\mathcal{X}\\}$ and \n$E(R_i) = \\{(r(u),r(v)): (u,v) \\in S_i\\}$. The vertices and edges of $R_i$ are referred to as the {\\em representative vertices} and {\\em representative edges}, respectively;\nnote that each representative edge corresponds to a unique source edge.
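The pruning of $E^{\\sigma}_i$ into the source edges $S_i$ described above can be sketched as follows. This is only an illustration (a textbook \\textsc{Union-Find} with path halving stands in for the actual data structure; the function names are ours):

```python
class UnionFind:
    """Textbook Union-Find with path halving (illustrative stand-in)."""
    def __init__(self, n):
        self.parent = list(range(n))

    def find(self, v):
        while self.parent[v] != v:
            self.parent[v] = self.parent[self.parent[v]]  # path halving
            v = self.parent[v]
        return v

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[ra] = rb

def source_edges(level_edges, uf):
    """Drop self-loops (both endpoints in one cluster) and keep only the
    lightest edge in every maximal set of parallel edges."""
    best = {}
    for u, v, w in level_edges:
        ru, rv = uf.find(u), uf.find(v)
        if ru == rv:
            continue                      # self-loop: skip
        key = (min(ru, rv), max(ru, rv))  # unordered pair of clusters
        if key not in best or w < best[key][2]:
            best[key] = (u, v, w)
    return list(best.values())
```

Each surviving edge corresponds to exactly one representative edge of $R_i$, matching the construction above.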
Let $E'_i \\leftarrow \\textsc{HalperinZwick}(R_i,2k-1)$ be the edge set obtained by applying the spanner algorithm of~\\Cref{thm:HalperinZwick} to $R_i$. Let $S_i'$ be the subset of source edges in $S_i$ corresponding to the representative edges in $E_i'$. Our graph $H^{\\sigma}_i$ has $S_i'$ as its edge set.\n\n\n\\begin{lemma}\\label{lm:Stretch} $d_{H^{\\sigma}_{\\leq i}}(u,v) \\leq (2k-1)(1+O(\\epsilon))w(u,v)$ for every edge $(u,v) \\in E^{\\sigma}_i$, assuming $\\epsilon \\leq 1\/(2g)$. Furthermore, $S_i'$ can be constructed in $O(|E^{\\sigma}_i|\\alpha(m,n))$ time.\n\\end{lemma}\n\\begin{proof} Let $(u,v)$ be an arbitrary edge in $E_i^{\\sigma}$. \nWe first consider the case where $(u,v)\\in S_i$. Then, there is an edge $(r(u),r(v)) \\in R_i$. By \\Cref{thm:HalperinZwick}, there is a path $P$ between $r(u)$ and $r(v)$ in $(V(R_i),E'_i)$ that contains at most $2k-1$ edges. We write $P = (r(x_0)=r(u), (r(x_0),r(x_1)), r(x_1), (r(x_1),r(x_2)), \\ldots, r(x_{p}) = r(v))$ as an alternating sequence of representative vertices and edges, where $x_0 = u, x_p = v$ and $p \\le 2k-1$. Let $(y^2_\\ell, y^1_{\\ell+1})$ be the source edge in $S_i'$ that corresponds to the representative edge $ (r(x_\\ell),r(x_{\\ell+1}))$, for each $\\ell\\in [0,p-1]$. Denote by $C_\\ell$ the level-$i$ cluster with $r(C_\\ell) = r(x_\\ell)$. Let $y^1_0 = u$ and $y^2_{p}=v$. Let\n\t\\begin{equation}\n\t\tQ = Q_{0}(y^1_0,y^2_0)\\circ (y^2_0,y^1_1)\\circ Q_1(y^1_1,y^2_1) \\ldots \\circ (y^2_{p-1},y^1_p)\\circ Q_p(y^1_p,y^2_p)\n\t\\end{equation}\nbe a path from $u$ to $v$, where $Q_{\\ell}(y^1_\\ell,y^2_\\ell)$ is a shortest path between $y^1_\\ell$ and $y^2_\\ell$ in $H^{\\sigma}_{\\leq i-1}[C_\\ell]$, for each $0\\leq \\ell\\leq p$, and $\\circ$ is the path concatenation operator. By property \\hyperlink{P2}{(P2)}, $w(Q_{\\ell}(y^1_\\ell,y^2_\\ell)) \\le \ng L_{i-1} = g \\epsilon L_i$. 
It follows that \n\t\\begin{equation}\\label{eq:weightQ}\n\t\t\\begin{split}\n\t\t\tw(Q) &\\leq (2k-1)L_i + (2k)g\\epsilon L_i \\leq (2k-1)(1+ 2g\\epsilon)L_i\\\\\n\t\t\t&\\leq (2k-1)(1+2g\\epsilon)(1+\\epsilon)w(u,v) \\qquad \\mbox{(since $w(u,v)\\geq L_i\/(1+\\epsilon)$)}\\\\\n\t\t\t&\\leq (2k-1)(1+(4g+1)\\epsilon)w(u,v) \\qquad \\mbox{(since $\\epsilon \\leq 1$)}\n\t\t\\end{split}\n\t\\end{equation}\n\t\nThus, the stretch of $(u,v)$ is at most $(2k-1)(1+(4g+1)\\epsilon)$. \n\t\nNext, we consider the complementary case that $(u,v) {~\\not \\in~} S_i$. \nBy definition, the edge $(u,v)$ is not in $S_i$ either because it is a self-loop or because it is parallel to another edge $(u',v')$ that belongs to $S_i$, with $w(u',v') \\le w(u,v)$. In the former case, property \\hyperlink{P2}{(P2)} implies the existence of a path from $u$ to $v$ in $H^{\\sigma}_{\\leq i-1}$ of weight at most $gL_{i-1} ~=~ g\\epsilon L_i ~\\leq \\frac{L_i}{1+\\epsilon}~\\leq~w(u,v)$ when $\\epsilon < \\frac{1}{2g}$. Thus, in this case the stretch of edge $(u,v)$ is $1$.\nFor the latter case, let $C_u$ and $C_v$ be the level-$i$ clusters containing $u$ and $v$, respectively, and without loss of generality assume that $u' \\in C_u$ and $v' \\in C_v$. \nBy property (\\hyperlink{P2}{P2}), $\\mathsf{Dm}(H^{\\sigma}_{\\leq i-1}[C_u]),\\mathsf{Dm}(H^{\\sigma}_{\\leq i-1}[C_v]) \\le \ngL_{i-1} = g\\epsilon L_i$.
\nThe same argument used for deriving \\Cref{eq:weightQ}, when applied to the edge $(u',v')$ rather than $(u,v)$, yields:\n\\begin{equation} \\label{eq:simple}\nd_{H_{\\leq i}}(u',v') ~\\le~ (2k-1)(1+(4g+1)\\epsilon)w(u',v') ~\\le~ (2k-1)(1+(4g+1)\\epsilon)w(u,v).\n\\end{equation}\nBy the triangle inequality, \n\\begin{equation*}\n\t\\begin{split}\n\t\td_{H_{\\leq i}}(u,v) &\\leq~ \td_{H_{\\leq i}}(u',v') + \\mathsf{Dm}(H_{\\leq i-1}[C_u]) + \\mathsf{Dm}(H_{\\leq i-1}[C_v])\\\\\n\t\t&\\leq~ (2k-1)(1+ (4g+1)\\epsilon)w(u,v) + 2g\\epsilon L_i \\qquad \\mbox{(by \\Cref{eq:simple})}\\\\\n\t\t&\\leq~ (2k-1)(1+ (4g+1)\\epsilon)w(u,v) + 4g\\epsilon w(u,v) \\qquad \\mbox{(since $w(u,v) \\geq L_i\/(1+\\epsilon) \\geq L_i\/2$)}\\\\\n\t\t&\\leq~ (2k-1)(1+ (8g+1)\\epsilon)w(u,v) \\qquad \\mbox{(since $k\\geq 1$)}\n\t\\end{split}\n\\end{equation*}\nThus, the stretch of edge $(u,v)$ is at most $(2k-1)(1+ (8g+1)\\epsilon) = (2k-1)(1+O(\\epsilon))$.\nSummarizing, we have shown that in all cases the stretch of edge $(u,v)$ is at most $(2k-1)(1+O(\\epsilon))$, as required.\n\n\n\n\nBy construction of the representative graph $R_i$, all clusters corresponding to vertices of $R_i$ are non-isolated, hence no vertex of $R_i$ is isolated, yielding\n$|V(R_i)| = O(|E(R_i)|) = O(|E^{\\sigma}_i|)$. \nThus, the construction of the edge set $S_i$ and the representative graph $R_i$, via the usage of the $\\textsc{Union-Find}$ data structure, takes total time $O(|E^{\\sigma}_i|\\alpha(m,n))$. By \\Cref{thm:HalperinZwick}, the edge set $S_i'$ can be constructed in $O(|E(R_i)| + |V(R_i)|) = O(|E^{\\sigma}_i|)$ time. Thus, the total running time to construct $S_i'$ is $O(|E^{\\sigma}_i|\\alpha(m,n))$.\n\\mbox{}\\hfill $\\Box$\\\\\n\\end{proof}\n\n\n\\paragraph{Constructing $\\mathcal{C}_{i+1}$.~} Every cluster $C \\in \\mathcal{C}_i\\setminus \\mathcal{X}$ becomes a level-$(i+1)$ cluster. We next focus on the level-$i$ clusters of $\\mathcal{X}$.
Recall that $V(R_i)$ is the set of all representatives of clusters in $\\mathcal{X}$. We construct a collection $\\mathcal{U}$ of vertex-disjoint subgraphs of $R_i$ in the following two steps: \n\\begin{itemize}\n\t\\item[(1)] Initially, we greedily construct a maximal set of vertex-disjoint stars of $R_i$, and initialize $\\mathcal{U}$ as this set of stars; thus, each subgraph $U \\in \\mathcal{U}$ contains a vertex and all of its yet-unclustered neighbors in $R_i$. \n\t\\item[(2)] We scan the remaining vertices in $V(R_i)$ that have not been grouped into any subgraph in $\\mathcal{U}$. \n\tEvery such remaining vertex $v \\in V(R_i)$ must have at least one neighbor that is contained in a subgraph $U\\in \\mathcal{U}$ (by the maximality of $\\mathcal{U}$); we add to $U$ the vertex $v$ and an edge $(v,u)$ leading to such a neighbor $u$ of $v$ \n\t (chosen arbitrarily if there are multiple such neighbors). \n\\end{itemize} \n\nFor each subgraph $U$ in the resulting collection $\\mathcal{U}$, we form a level-$(i+1)$ cluster $C_U\\in \\mathcal{C}_{i+1}$ by taking as $C_U$ the union of all the clusters whose representatives are in $V(U)$. \n\n\n\\begin{lemma}\\label{lm:Ci1}All clusters in $\\mathcal{C}_{i+1}$ satisfy Properties (\\hyperlink{P1}{P1}) and (\\hyperlink{P2}{P2}) when $\\epsilon \\leq 1\/(2g)$ and $g \\geq 9$, and they can be constructed in time $O(|E^{\\sigma}_i| \\cdot \\alpha(m,n))$. Furthermore, every cluster $C_U\\in \\mathcal{C}_{i+1}$ that is formed from a subgraph $U \\in \\mathcal{U}$, as described above, is the union of at least $2$ level-$i$ clusters. \n\\end{lemma}\n\\begin{proof} \nProperty (\\hyperlink{P1}{P1}) holds trivially.
\nTo prove that Property (\\hyperlink{P2}{P2}) holds, we first note that each subgraph $U \\in \\mathcal{U}$ (with vertices in $R_i$)\nhas hop diameter at most $4$,\nwhich follows directly from the above two-step construction of $\\mathcal{U}$.\nAny edge connecting two vertices in $U$ corresponds to a source edge in $S_i$, and thus also in $E^{\\sigma}_i$, and as such has length at most $L_i$, which implies that $C_U$ induces a subgraph of diameter at most $5(g L_{i-1}) + 4 L_i = 5g\\epsilon L_i + 4L_i \\leq 9L_i \\leq gL_i$, since $\\epsilon\\leq 1\/g$ and $g \\ge 9$. Thus, Property (\\hyperlink{P2}{P2}) holds.\n\n\n The construction of the edge set $S_i$ and the representative graph $R_i$ takes total time $O(|E^{\\sigma}_i|\\alpha(m,n))$ using the $\\textsc{Union-Find}$ data structure. As for the construction of\nthe collection $\\mathcal{U}$ of vertex-disjoint subgraphs of $R_i$,\nStep (1) of this construction, which constructs a maximal set of vertex-disjoint stars,\ninvolves a greedy linear-time algorithm, whereas Step (2) naively takes linear time, so together they are implemented within time $O(|E(R_i)|) = O(|E^{\\sigma}_i|)$. \nConstructing the corresponding clusters $\\{C_U: U\\in \\mathcal{U}\\}$ can be implemented within the same amount of time in the obvious way.
The construction of clusters in $\\mathcal{C}_{i+1}$ that are clusters in $\\mathcal{C}_i\\setminus \\mathcal{X}$ requires no extra time.\n\t\n\nFinally, \nwe argue that any cluster $C_U\\in \\mathcal{C}_{i+1}$ that is formed from a subgraph $U \\in \\mathcal{U}$ \n contains at least $2$ level-$i$ clusters.\nIndeed, any cluster formed in Step (1) of the construction of $\\mathcal{U}$ contains at least 2 level-$i$ clusters,\nby the maximality of $\\mathcal{U}$ and since no vertex in $R_i$ is isolated.\nAny remaining level-$i$ cluster must be grouped in Step (2) of the construction of $\\mathcal{U}$ \ninto clusters formed in Step (1), \nand this too holds by the maximality in Step (1) of the construction of $\\mathcal{U}$ \nand since no vertex in $R_i$ is isolated. \\mbox{}\\hfill $\\Box$\\\\\n\\end{proof}\n\n\nWe are now ready to prove the first item of \\Cref{thm:1}.\n\n\\begin{proof}[Proof of the first item of~\\Cref{thm:1}] Recall that $H = \\cup_{1\\leq \\sigma \\leq \\mu_\\epsilon}H^{\\sigma}$. Let $\\Delta_{i+1} = |\\mathcal{C}_i| - |\\mathcal{C}_{i+1}|$. Recall that $\\mathcal{C}_0$ is the set of $n$ singletons, i.e., $|\\mathcal{C}_0| = n$. Thus, $\\sum_{i\\geq 0} \\Delta_{i+1} \\leq |\\mathcal{C}_0| = n$. \n\t\n\tBy \\Cref{lm:Ci1}, $\\Delta_{i+1} \\geq \\frac{|V(R_i)|}{2}$. Furthermore, \\Cref{thm:HalperinZwick} yields $|S_i'| = O(|V(R_i)|^{1+1\/k})$, hence $|S_i'| = O(n^{1\/k}) \\cdot \\Delta_{i+1}$. Thus, we have:\n\t\\begin{equation}\\label{eq:EHsigma}\n\t\t|E(H^{\\sigma})| ~=~ |\\cup_{i\\geq 0} E(H^{\\sigma}_i)| ~=~ \n\t\t\\sum_{i\\geq 0} |S_i'| ~=~ \\sum_{i\\geq 0}O(n^{1\/k}) \\cdot \\Delta_{i+1} ~=~ O(n^{1+1\/k}).\n\t\\end{equation}\n\t\n\t\n\t The sparsity of $H$ is $O(n^{1\/k} \\cdot \\frac{\\log(1\/\\epsilon)}{\\epsilon})$ by \\Cref{eq:EHsigma}.
The stretch of $H$ is at most $(2k-1)(1+O(\\epsilon))$ by~\\Cref{lm:Stretch}; we can reduce the stretch down to $(2k-1)(1+\\epsilon)$ by scaling $\\epsilon \\leftarrow \\epsilon\/c$, for a sufficiently large constant $c$, which will affect the sparsity and runtime bounds by constant factors. \tThe time needed to construct $H^\\sigma$ is $O(\\sum_{i\\geq 0}|E^{\\sigma}_i| \\cdot \\alpha(m,n)) = O(m \\cdot \\alpha(m,n))$ by \\Cref{lm:Stretch} and \\Cref{lm:Ci1}. Thus, the overall time needed to construct $H$, when also considering the \n\truntime $O(\\mathsf{SORT}(m))$ for computing the partition of $E$ into the sets $\\{E^{\\sigma}\\}_{1\\leq \\sigma \\leq \\mu_\\epsilon}$,\n\tis $O(m \\cdot \\alpha(m,n) \\cdot \\frac{\\log(1\/\\epsilon)}{\\epsilon} + \\mathsf{SORT}(m))$. \\mbox{}\\hfill $\\Box$\\\\\n\\end{proof}\n\n\n\n\n\n\n\n\\section{A Linear-Time Algorithm in the \\textsf{Transdichotomous~} Model}\\label{sec:sparseRAM}\n\nIn this section, we prove the second item of \\Cref{thm:1}. We follow the same framework as in \\Cref{sec:PointerMachine}; our focus, as before, is on constructing a $(2k-1)(1+O(\\epsilon))$-spanner $H^{\\sigma}$ for $E^{\\sigma}$, for a fixed $\\sigma \\in [1,\\mu_{\\epsilon}]$. The construction is carried out in levels, where $H_i^{\\sigma}$ is constructed at level $i$, and uses a hierarchy of clusters such that each cluster $C\\in \\mathcal{C}_i$ satisfies two properties that are similar to those used in \\Cref{sec:PointerMachine}, namely Properties (\\hyperlink{P1}{P1}) and (\\hyperlink{P2}{P2}). \n\n\nWe also use a \\textsc{Union-Find}~data structure to represent clusters in $\\mathcal{C}_i$. However, our construction relies on a special case of \\textsc{Union-Find}, where the set of \\textsc{Union}~operations is pre-specified at the outset of the construction.
Gabow and Tarjan~\\cite{GT85} designed a data structure for this special case of \\textsc{Union-Find}~in the \\textsf{Transdichotomous~} model; this result is summarized in the following theorem.\n\n\\begin{theorem}[Gabow and Tarjan~\\cite{GT85}]\\label{thm:GabowTarjan} \nLet $T$ be a rooted tree with $n$ vertices. One can design a \\textsc{Union-Find}~data structure in the \\textsf{Transdichotomous~} model that maintains disjoint sets of $V(T)$ and supports $m$ $\\textsc{Union}$ and $\\textsc{Find}$ operations in $O(m+n)$ total time, in which each \\textsc{Union}~operation is of the form $\\textsc{Union}(v,p_T(v))$ for some non-root vertex $v \\in V(T)$. Here $p_T(v)$ denotes the parent of $v$ in $T$. \n\\end{theorem}\n\nWe emphasize that the \\textsc{Union-Find}~data structure of Gabow and Tarjan in \\Cref{thm:GabowTarjan} only works in the \\textsf{Transdichotomous~} model. The tree $T$ in \\Cref{thm:GabowTarjan} is called a \\emph{union tree} of the \\textsc{Union-Find}~data structure. We use $\\textsc{Link}(v)$ to specifically denote the \\textsc{Union}~operation of the form $\\textsc{Union}(v, p_T(v))$. \n\nThe construction of~\\Cref{sec:PointerMachine} achieves a super-linear running time.\nTo improve this runtime to linear in $m$, we plug in the following new ideas on top of the construction of~\\Cref{sec:PointerMachine}.\n\nThe second term in the super-linear runtime $O(m \\cdot \\alpha(m,n) \\cdot \\frac{\\log(1\/\\epsilon)}{\\epsilon}+\\mathsf{SORT}(m))$, namely $\\mathsf{SORT}(m)$,\nstems from the time needed to compute the partition of $E$ into the sets $\\{E^{\\sigma}\\}_{1\\leq \\sigma \\leq \\mu_\\epsilon}$,\nwhich boils down to sorting the indices of the non-empty sets in $\\{E^{\\sigma}_i\\}_{1\\leq \\sigma \\leq \\mu_\\epsilon, i \\in \\mathbb{N}}$.
\nIn the $\\textsf{Word RAM~}$ model, we employ a rather simple trick to carry out such index sorting in time $O(m \\cdot \\frac{\\log(1\/\\epsilon)}{\\epsilon})$; the details of this optimization appear in~\\Cref{fusion}.\n\nThe main obstacle lies in shaving the factor $\\alpha(m,n)$ from the first term $O(m \\cdot \\alpha(m,n) \\cdot \\frac{\\log(1\/\\epsilon)}{\\epsilon})$. \nFor this optimization, the two key ideas are the following:\n\n\\begin{itemize}\n\t\\item \\textbf{Idea 1.~} We use an MST for $G$ as the union tree for the \\textsc{Union-Find}~data structure.\n\tIn the \\textsf{Transdichotomous~} model, Fredman and Willard~\\cite{FW94} designed an algorithm to construct a minimum spanning tree in $O(m)$ time. \n\tLet\t$\\texttt{MST}$ be an arbitrary minimum spanning tree for $G$; we root $\\texttt{MST}$ at an arbitrary vertex $r$. \n\t\n\t\\item \\textbf{Idea 2.~} We guarantee that every level-$i$ cluster $C \\in \\mathcal{C}_i$ \\emph{induces} a subtree of $\\texttt{MST}$ of diameter at most $gL_{i-1}$, for some constant $g$. As we will show in the sequel, by forcing clusters to induce subtrees of $\\texttt{MST}$, we are able to use \\textsc{Link}~operations to form level-$(i+1)$ clusters from level-$i$ clusters, which is the source of our speed-up. \n\tThe crux of our construction is in realizing Idea 2. \n\\end{itemize}\n\\Cref{thm:GabowTarjan} guarantees that each of the $\\textsc{Union}$ and $\\textsc{Find}$ operations takes $O(1)$ amortized time. As a result, we shave the $\\alpha(m,n)$ factor from the running time of the algorithm from \\Cref{sec:PointerMachine}.\n\nNext we proceed to the details of the linear-time construction.
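For intuition, the semantics of the \\textsc{Link} and \\textsc{Find} operations on a fixed union tree can be sketched as below. This stand-in uses plain path compression, so it does not attain the $O(m+n)$ bound of \\Cref{thm:GabowTarjan}; the class and method names are ours:

```python
class StaticTreeUnionFind:
    """Link/Find on a fixed rooted union tree T, given as a parent map
    (the root r satisfies parent_in_tree[r] == r). Path compression only;
    an illustrative stand-in, not the Gabow-Tarjan O(m+n) structure."""
    def __init__(self, parent_in_tree):
        self.p_T = dict(parent_in_tree)      # parent map of T
        self.rep = {v: v for v in self.p_T}  # initially, singleton sets

    def find(self, v):
        if self.rep[v] != v:
            self.rep[v] = self.find(self.rep[v])  # path compression
        return self.rep[v]

    def link(self, v):
        """Union(v, p_T(v)): merge the set of v into the set of its tree
        parent, keeping the parent-side representative (a subtree root)."""
        self.rep[self.find(v)] = self.find(self.p_T[v])
```

Because every merge is of the form $\\textsc{Union}(v, p_T(v))$, the representative of each set is always the root of the corresponding subtree of the union tree, which is exactly the invariant we want when the union tree is $\\texttt{MST}$.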
\nThe construction will satisfy the following two properties of clusters in $\\mathcal{C}_i$,\nthe first of which is identical to Property (\\hyperlink{P1}{P1}) of~\\Cref{sec:PointerMachine} whereas the second is an adaptation of Property (\\hyperlink{P2}{P2}).\n\n\\begin{itemize}[noitemsep]\n\\item \\textbf{(P1')~} \t\\hypertarget{P1'}{}\n\tEach cluster $C\\in \\mathcal{C}_i$ is a subset of $V$. Furthermore, clusters in $\\mathcal{C}_i$ induce a partition of $V$.\n\\item \\textbf{(P2')~} \t\\hypertarget{P2'}{} Each cluster $C\\in \\mathcal{C}_i$ induces a (connected) subtree $\\texttt{MST}[C]$ of $\\texttt{MST}$ with diameter at most $gL_{i-1}$, for some constant $g$ (the same constant used in Idea 2 above, which is different from the one used\nin (\\hyperlink{P2}{P2})). \n\\end{itemize}\n\nWe will add all edges of $\\texttt{MST}$ to the spanner, by setting $H^{\\sigma}_{0}$ as $\\texttt{MST}$,\nwhich adds one unit to the sparsity and lightness.\nProperty (\\hyperlink{P2'}{P2'}) is inherently more restrictive than\n Property (\\hyperlink{P2}{P2}), as it aims at guaranteeing the same (perhaps up to a constant factor) diameter bound, but when restricted to subtrees of $\\texttt{MST}$. \n\n\\paragraph{Representing $\\mathcal{C}_i$.~} As in~\\Cref{sec:PointerMachine}, we use the \\textsc{Union-Find}~data structure to represent clusters in $\\mathcal{C}_i$, but we use the data structure provided by~\\Cref{thm:GabowTarjan}, which guarantees constant amortized cost.\nMoreover, we will maintain the invariant that the representative $r(C)$ of any cluster $C \\in \\mathcal{C}_i$ is always set to be the \\emph{root of the subtree} $\\texttt{MST}[C]$. By setting the representative of a cluster $C$ to be its root, $C$ can be united with other clusters via $\\textsc{Link}(r(C))$, which is crucial for applying the result of~\\Cref{thm:GabowTarjan}. The children of $C$ can be united with $C$ in the same way.
\n\n\\paragraph{Constructing $H^{\\sigma}_i$.~} The construction is the same as the construction of $H^{\\sigma}_i$ in \\Cref{sec:PointerMachine}. Specifically, we construct a set of level-$i$ clusters $\\mathcal{X}$, the representative graph $R_i$, and the edge set $S'_i$, which is obtained by applying the spanner algorithm of~\\Cref{thm:HalperinZwick} to $R_i$. Since the \\textsc{Union}~and \\textsc{Find}~operations now take $O(1)$ (amortized) time, we derive the following lemma, whose proof follows similar lines to those in the proof of~\\Cref{lm:Stretch}.\n\n\\begin{lemma}\\label{lm:RAM-Stretch} $d_{H_{\\leq i}}(u,v) \\leq (2k-1)(1+O(\\epsilon))w(u,v)$ for every edge $(u,v) \\in E^{\\sigma}_i$, assuming $\\epsilon < 1\/(2g)$. Furthermore, $S'_i$ can be constructed in $O(|E^{\\sigma}_i|)$ time.\n\\end{lemma}\n\n\n\n\\paragraph{Constructing $\\mathcal{C}_{i+1}$.~} Our construction of $\\mathcal{C}_{i+1}$ relies on the notion of {\\em cluster forest} defined below; see \\Cref{fig:ClusterForest} for an illustration.\n\n\\begin{definition}[Cluster Forest]\\label{def:cluster_tree} Let $\\mathcal{Y}\\subseteq \\mathcal{C}_i$ be a set of level-$i$ clusters. A {\\em cluster forest} for $\\mathcal{Y}$, denoted by $\\mathcal{F}_{\\mathcal{Y}}$, is a directed forest with a weight function $\\omega$ on the edges such that:\n\t\\begin{itemize}\n\t\t\\item[(1)] Each node $\\varphi_C \\in \\mathcal{F}_{\\mathcal{Y}}$ corresponds to a cluster $C\\in \\mathcal{Y}$,\n\t\t\\item[(2)] There is a directed edge $(\\varphi_{C_1} \\rightarrow \\varphi_{C_2})$ in the forest $\\mathcal{F}_{\\mathcal{Y}}$ if $C_2$ contains the parent, say $p_{\\texttt{MST}}(v)$, of the representative, say $v$, of $C_1$.
Furthermore, $\\omega(\\varphi_{C_1} \\rightarrow \\varphi_{C_2}) = w(v,p_{\\texttt{MST}}(v))$.\n\t\\end{itemize}\n\\end{definition} \n\n\n\n\\begin{figure}[!h]\n\t\\begin{center}\n\t\t\\includegraphics[width=0.9\\textwidth]{figs\/ClusterF}\n\t\\end{center}\n\t\\caption{(a) Level-$i$ clusters induce subtrees of $\\texttt{MST}$ enclosed by oval curves, and (b) a cluster forest $\\mathcal{F}_{\\mathcal{Y}}$.}\n\t\\label{fig:ClusterForest}\n\\end{figure}\n\n\nBy definition, every edge of a cluster forest $\\mathcal{F}_{\\mathcal{Y}}$ corresponds to an $\\texttt{MST}$ edge. Let $\\mathcal{MST}_i = \\mathcal{F}_{\\mathcal{C}_i}$ be the cluster forest defined for the entire set $\\mathcal{C}_i$ of level-$i$ clusters; by Property (\\hyperlink{P2'}{P2'}), it holds that $\\mathcal{MST}_i$ is a tree. We stress that $\\mathcal{MST}_i$ is only used in the analysis of our algorithm; indeed, computing $\\mathcal{MST}_i$, at least naively, would require $\\Omega(|\\mathcal{C}_i|)$ time, which is too costly. \n\nFor a set $\\mathcal{Y}$ of level-$i$ clusters, \nwe say that the cluster forest $\\mathcal{F}_{\\mathcal{Y}}$ is \\emph{$L_i$-bounded} if every edge in it has weight at most $L_i$. The following lemma is the crux of our construction.\n\nRecall that $\\mathcal{X}$ denotes the set of all non-isolated level-$i$ clusters; the representatives of these clusters form the vertex set of the representative graph $R_i$.\n\n\\begin{lemma}\\label{lm:XTree} Let $\\mathcal{A}_i$ be the set of $\\mathcal{MST}_i$ edges of weight at most $L_i$, and let $\\mathcal{Y}$ be the set of nodes that are incident on at least one edge in $\\mathcal{A}_i$. Let $\\mathcal{F}_{\\mathcal{Y}}$ be the forest with node set $\\mathcal{Y}$ and edge set $\\mathcal{A}_i$.
Then the following two conditions hold: \n\t\\begin{enumerate}\n\t\t\\item[(1)] $\\mathcal{X} \\subseteq \\mathcal{Y}$.\n\t\t\\item[(2)] Every tree in $\\mathcal{F}_{\\mathcal{Y}}$ has at least 2 nodes.\n\t\\end{enumerate}\n\\end{lemma}\n\\begin{proof}\n\tCondition (2) follows directly from the construction. We next prove that Condition (1) holds. \n\n Let $\\varphi_{C_u}$ be the node corresponding to a level-$i$ cluster $C_u$ in $\\mathcal{X}$. By the definition of $\\mathcal{X}$, there is an edge $(u,v) \\in E^{\\sigma}_i$ such that $u \\in C_u$. Let $C_v$ be the level-$i$ cluster containing $v$ and $\\varphi_{C_v}$ be the node corresponding to $C_v$. If $(u,v) \\in \\texttt{MST}$, then $\\varphi_{C_u} \\in \\mathcal{Y}$, and we are done. \n\t\nWe henceforth assume that $(u,v) {~\\not \\in~} \\texttt{MST}$. Consider the fundamental cycle $C_{uv}$ of $\\texttt{MST}$ formed by $\\texttt{MST}[u,v]$ and edge $(u,v)$. By the cycle property of $\\texttt{MST}$, every edge $e \\in \\texttt{MST}[u,v]$ satisfies $w(e)\\leq w(u,v)$. Recall that\n $\\mathcal{MST}_i$ is a tree by Property (\\hyperlink{P2'}{P2'}). Moreover, by the definition of $\\mathcal{MST}_i$, each edge in $\\mathcal{MST}_i[\\varphi_{C_u}, \\varphi_{C_v}]$ corresponds to an edge in $\\texttt{MST}[u,v]$, and so has weight at most $w(u,v)\\leq L_i$. Hence $\\varphi_{C_u}$ is incident to an edge of $\\mathcal{A}_i$ by the definition of $\\mathcal{A}_i$, which yields $\\varphi_{C_u} \\in \\mathcal{Y}$. \\mbox{}\\hfill $\\Box$\\\\\n\\end{proof}\n\nWe now construct the set of level-$(i+1)$ clusters $\\mathcal{C}_{i+1}$ as follows. Let $\\mathcal{F}_{\\mathcal{Y}}$ be the cluster forest for $\\mathcal{Y}$ provided by~\\Cref{lm:XTree}. Every level-$i$ cluster $C \\in \\mathcal{C}_i\\setminus \\mathcal{Y}$ becomes a level-$(i+1)$ cluster.
\nThen, we construct a collection $\\mathbb{U}$ of subtrees of $\\mathcal{F}_{\\mathcal{Y}}$, such that each subtree $\\mathcal{U}\\in \\mathbb{U}$ contains at least two nodes and has hop-diameter at most $4$. \n For each subtree $\\mathcal{U}$, we form a level-$(i+1)$ cluster $C_{\\mathcal{U}} = \\cup_{\\varphi_C \\in \\mathcal{V}(\\mathcal{U})}C$. \nWe note that $\\mathbb{U}$ can be constructed greedily via the same algorithm used in \\Cref{sec:PointerMachine}, within time $O(|\\mathcal{Y}|)$.\n\n\nIn the following lemma we assume that the set of clusters $\\mathcal{Y}$ is given to us. In the proof of \\Cref{thm:1} where we use \\Cref{lm:RamCi1}, we will specify the construction of $\\mathcal{Y}$. \n\n\n\\begin{lemma}\\label{lm:RamCi1}\nAll clusters in $\\mathcal{C}_{i+1}$ satisfy Properties (\\hyperlink{P1'}{P1'}) and (\\hyperlink{P2'}{P2'})\nwhen $\\epsilon \\leq 1\/(2g)$ and $g \\geq 9$, and they can be constructed in time $O(|\\mathcal{Y}|)$. Furthermore, every cluster $C_\\mathcal{U}\\in \\mathcal{C}_{i+1}$ that is formed from a subgraph $\\mathcal{U} \\in \\mathbb{U}$, as described above, is the union of at least $2$ level-$i$ clusters.\n\\end{lemma}\n\\begin{proof}\nThe proof of this lemma follows similar lines to those in the proof of~\\Cref{lm:Ci1} from \\Cref{sec:PointerMachine}, hence we aim for conciseness. As mentioned, $\\mathbb{U}$ can be constructed within time $O(|\\mathcal{Y}|)$.\n\t\n\tRecall that every edge in $\\mathcal{F}_\\mathcal{Y}$ corresponds to an edge of the form $v\\rightarrow p_\\texttt{MST}(v)$ for some vertex $v\\in V$. Thus, for each subgraph $\\mathcal{U}\\in \\mathbb{U}$, the level-$(i+1)$ cluster $C_{\\mathcal{U}}$ can be constructed by calling $|\\mathcal{V}(\\mathcal{U})|-1$ \\textsc{Link}~operations. Therefore, $\\{C_\\mathcal{U}: \\mathcal{U}\\in \\mathbb{U}\\}$ can be constructed in time $O(\\sum_{\\mathcal{U}\\in \\mathbb{U}}|\\mathcal{U}|) = O(|\\mathcal{Y}|)$.
Note that we do not pay any running time for constructing clusters in $\\mathcal{C}_{i+1}$ that are clusters in $\\mathcal{C}_i\\setminus \\mathcal{Y}$. Therefore $\\mathcal{C}_{i+1}$ can be constructed in $O(|\\mathcal{Y}|)$ time.\n\t\nProperty (\\hyperlink{P1'}{P1'}) holds trivially. \tProperty (\\hyperlink{P2'}{P2'}) follows from the fact that each subgraph $\\mathcal{U} \\in \\mathbb{U}$ has hop diameter at most $4$ and that each edge between two nodes in $\\mathcal{U}$ corresponds to an edge in $\\texttt{MST}$ of length at most $L_i$, since every edge of $\\mathcal{F}_{\\mathcal{Y}}$ has a weight at most $L_i$ by construction. \n\n\nNote that $\\mathbb{U}$ is constructed using the same two-step algorithm as in \\Cref{sec:PointerMachine}. Thus, the same argument as in \\Cref{lm:Ci1} applies to this case. Specifically, any cluster formed in Step (1) of the construction of $\\mathbb{U}$ contains at least 2 nodes, since no node in $\\mathcal{F}_\\mathcal{Y}$ is isolated, and any remaining node must be grouped in Step (2) of the construction of $\\mathbb{U}$ into subgraphs formed in Step (1). \\mbox{}\\hfill $\\Box$\\\\\n\\end{proof}\n\nWe are now ready to prove the second item of \\Cref{thm:1}.\n\n\\begin{proof}[Proof of the second item of~\\Cref{thm:1}] Recall that $H = \\cup_{1\\leq \\sigma \\leq \\mu_\\epsilon}H^{\\sigma}$.\t We employ a similar charging argument to the one used in~\\Cref{sec:PointerMachine} to bound $|E(H^{\\sigma})|$.\tLet $\\Delta_{i+1} = |\\mathcal{C}_i| - |\\mathcal{C}_{i+1}|$. Note that $|\\mathcal{C}_0| = n$, hence $\\sum_{i\\geq 0} \\Delta_{i+1} \\leq n$. By \\Cref{lm:RamCi1} and \\Cref{lm:XTree}, we have $\\Delta_{i+1} \\geq \\frac{|\\mathcal{Y}|}{2} = \\Omega(|\\mathcal{X}|) = \\Omega(|V(R_i)|)$. (Note that $|\\mathcal{X}| = |V(R_i)|$.) Thus, \\Cref{eq:EHsigma} of~\\Cref{sec:PointerMachine} holds in this case as well. It follows that the sparsity of $H$ is $O(n^{1\/k} \\cdot \\frac{\\log(1\/\\epsilon)}{\\epsilon})$.
The stretch is $(2k-1)(1+O(\\epsilon))$ by~\\Cref{lm:RAM-Stretch};\nwe can reduce the stretch down to $(2k-1)(1+\\epsilon)$ by scaling $\\epsilon \\leftarrow \\epsilon\/c$, for a sufficiently large constant $c$, which will affect the sparsity and runtime bounds by constant factors.\n\tThe runtime to construct $H^\\sigma$ is $O(\\sum_{i\\geq 0}|E^{\\sigma}_i|) = O(m)$ by \\Cref{lm:RAM-Stretch}.\n\t\n\t\tWe now bound the time to construct the clusters in $\\mathcal{C}_{i+1}$. The main difficulty is that the size of $\\mathcal{Y}$ constructed in \\Cref{lm:XTree} could be much larger than $|E^{\\sigma}_i|$, hence we cannot bound the runtime by $|E^{\\sigma}_i|$\n\t\tas we did in \\Cref{sec:PointerMachine}. Here we employ a more delicate argument. At the outset of the construction, we divide the edges of $\\texttt{MST}$ into levels as we did for $E^{\\sigma}_i$. \n\t\tThe level-$i$ edges of $\\texttt{MST}$, denoted by $B_i$, include every edge of length larger than $L_{i-1}$ and at most $L_i$. The time to construct $B_i$ is $O(n\\log(1\/\\epsilon))$, following the same index-sorting argument used for constructing $E^{\\sigma}_i$ efficiently in \\Cref{fusion}.\n\t\n\n\tAt the outset of the construction of $\\mathcal{C}_{i+1}$, we assume that we are given the set of edges $\\mathcal{D}_{i-1}$ that contains every edge of weight at most $L_{i-1}$ of $\\mathcal{MST}_i$. For level $i = 0$, we set $\\mathcal{D}_{i-1} = \\emptyset$. Let $\\mathcal{B}_i$ be the set of edges of $\\mathcal{MST}_i$ corresponding to edges in $B_i$. The edge set $\\mathcal{B}_i$ can be constructed in $O(|B_i|)$ time as follows. For each edge $(u,v)\\in B_i$, we add an edge $(\\varphi_{C_u}, \\varphi_{C_v})$ to $\\mathcal{B}_i$, where $C_u$ and $C_v$ are the two level-$i$ clusters containing $u$ and $v$, respectively, which can be found via $\\textsc{Find}(u)$ and $\\textsc{Find}(v)$. \n\t\n\tThe set of edges $\\mathcal{A}_i$ defined in \\Cref{lm:XTree} is $\\mathcal{D}_{i-1}\\cup \\mathcal{B}_i$. 
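The lifting of $B_i$ to $\\mathcal{B}_i$ just described amounts to one \\textsc{Find} call per endpoint; a minimal sketch (the helper name is ours, and \\texttt{find} stands for the \\textsc{Find} operation over level-$i$ clusters):

```python
def lift_to_cluster_forest(B_i, find):
    """For each level-i MST edge (u, v, w), emit the corresponding
    cluster-forest edge between the clusters of u and v via Find."""
    return [(find(u), find(v), w) for (u, v, w) in B_i]
```

This runs in $O(|B_i|)$ time, as stated above, given constant-time \\textsc{Find}.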
Note that $|\\mathcal{A}_i| \\leq |\\mathcal{Y}|$ since $\\mathcal{F}_{\\mathcal{Y}}$ is acyclic. Thus, the running time to construct $\\mathcal{A}_{i}$ is $O(|\\mathcal{Y}|)$, as we have both $\\mathcal{D}_{i-1}$ and $\\mathcal{B}_i$ stored in a list data structure. To construct the set $\\mathcal{D}_i$ for the construction at the next level, we simply identify edges in $\\mathcal{F}_{\\mathcal{Y}}$ that are between two different subgraphs in $\\mathbb{U}$ in the construction of $\\mathcal{C}_{i+1}$. Thus, the running time to construct $\\mathcal{D}_i$ is also $O(|\\mathcal{Y}|)$. The running time to construct $\\mathcal{C}_{i+1}$ is $O(|\\mathcal{Y}|)$ by \\Cref{lm:RamCi1}. It follows that the total running time of the construction of clusters at level $i$ is $O(|\\mathcal{Y}|)$. Since $\\Delta_{i+1} \\geq \\frac{|\\mathcal{Y}|}{2}$, the time to construct $\\mathcal{C}_{i+1}$ is bounded by $O(\\Delta_{i+1})$, where $\\Delta_{i+1} = |\\mathcal{C}_i| - |\\mathcal{C}_{i+1}|$. It follows that the total running time to construct clusters over all levels is $\\sum_{i\\geq 0} O(\\Delta_{i+1}) = O(n)$. \n\t\n\tIn summary, the running time to construct $H$ is $O((m+n) \\cdot \\frac{\\log(1\/\\epsilon)}{\\epsilon}) = O(m \\cdot \\frac{\\log(1\/\\epsilon)}{\\epsilon})$.\\mbox{}\\hfill $\\Box$\\\\\n\\end{proof}\n\n\n\\subsection{Index sorting in linear time} \\label{fusion}\n\nFirst, we assume that the word size is $\\bar{w} \\ge \\log(n)$ and all edge weights are bounded above by $2^{\\bar{w}}$, as per the \\textsf{Word RAM~} model. The total number of different indices is given by $\\log_{1+\\epsilon} 2^{\\bar{w}} = \\Theta(\\bar{w}\/\\epsilon)$. It follows that the number of distinct integers to be sorted is $ n' \\leq \\bar{w}\/\\epsilon$. In this range of values, predecessor search can be done in $O(\\log(n')\/\\log \\bar{w}) = O(\\log(1\/\\epsilon))$ time using the fusion tree data structure \\cite{FW90} (see also~\\cite{PT06}). 
\nConsequently, the time needed to compute the partition of $E$ into the sets $\\{E^{\\sigma}\\}_{1\\leq \\sigma \\leq \\mu_\\epsilon}$,\nwhich involves index sorting via predecessor search, is bounded by $ O(m\\log(1\/\\epsilon))$.\nPartitioning the set of edges of $\\texttt{MST}$ into levels can be done in the same way; the running time is $O(n\\log(1\/\\epsilon))$ as there are $n-1$ edges in $\\texttt{MST}$.\nSummarizing, the running time of these partitioning steps is bounded by $O((m+n)\\log(1\/\\epsilon)) = O(m\\log(1\/\\epsilon))$.\n\n\n\n\\section{Introduction}\n\nLet $G = (V,E,w)$ be a weighted undirected graph on $|V| = n$ vertices and $|E| = m$ edges.\nWe say that $H$ is a $t$-spanner for $G$, for a parameter $t \\ge 1$, if $H$ preserves all pairwise distances of $G$ to within a factor of $t$;\nthe parameter $t$ is called the \\emph{stretch} of the spanner.\n(A more detailed definition appears in~\\Cref{sec:prelim}.) \nThe most basic requirement from a low-stretch spanner is to be {\\em sparse}, i.e., of small size;\nthe normalized notion of size, {\\em sparsity}, is the ratio of the spanner size to the size $n-1$ of a spanning tree.\nA generalized requirement is to have a small {\\em weight}; the {\\em weight} of a spanner is the sum of its edge weights, and the normalized notion of weight, {\\em lightness}, \nis the ratio of the spanner weight to the weight $w(\\texttt{MST}(G))$ of a minimum spanning tree $\\texttt{MST}(G)$ for $G$.\n\nSparse and light spanners have been studied extensively over the years, and have found a wide variety of applications across different areas, from distributed computing and motion planning to computational biology and machine learning.\nAs prime examples, they have been used in achieving efficient broadcast protocols~\\cite{ABP90,ABP91}, for synchronizing networks and computing global functions \\cite{Awerbuch85,PU89,Peleg00}, in gathering and disseminating data~\\cite{BKRCV02,VWFME03,KV01}, and in 
routing~\\cite{WCT02,PU89b,DBLP:conf\/stoc\/AwerbuchBLP89,TZ01}.\n\nThe holy grail is to achieve optimal tradeoffs between stretch and sparsity and between stretch and lightness, within a small running time.\nFor unweighted graphs, this goal has been achieved already in the mid 90s, via a simple yet clever clustering approach due to Halperin and Zwick~\\cite{HZ96}: A linear-time construction of $(2k-1)$-spanners with the optimal (under Erd\\H{o}s' girth conjecture \\cite{Erdos64}) {sparsity} of $O(n^{1\/k})$;\nwe note that, for unweighted graphs, the sparsity and lightness parameters coincide.\n\n\nThe fundamental question underlying this work is whether one can achieve this goal in general weighted graphs. Chechik and Wulff-Nilsen \\cite{CW16} gave a poly-time construction of $(2k-1)(1+\\epsilon)$-spanners with \na {\\em near-optimal} bound of $O(n^{1\/k} \\cdot \\mathsf{poly}(1\/\\epsilon))$ on both sparsity and lightness;\nby {\\em near-optimal} we mean optimal under Erd\\H{o}s' girth conjecture and disregarding the $\\epsilon$-dependencies. Although the runtime of the construction of \\cite{CW16} is polynomial, it is far from linear.\nIs it possible to achieve a fast --- ideally linear time --- spanner construction with the same guarantees?\nThis question is open even disregarding the lightness: All known spanner constructions with near-optimal sparsity incur a rather high runtime.\n\nNext, we survey the main results on spanners for general graphs, starting with sparse spanners and proceeding to light spanners.\nSubsequently, we present our contribution. \n\n\\paragraph{Sparse spanners.~}\nGraph spanners were introduced in the late 80s \\cite{PS89,PU89}; initially, the focus was on the stretch-sparsity tradeoff.\nFor unweighted graphs, the aforementioned construction of \\cite{HZ96} gives an optimal result.\nWe shall henceforth consider general $n$-vertex $m$-edge weighted graphs. 
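To make the notions of stretch, sparsity, and lightness concrete, here is a self-contained Python sketch (purely illustrative; it plays no role in the paper's constructions) that computes all three quantities for a candidate spanner $H$ of a small graph $G$:

```python
import heapq

def make_adj(n, edges):
    """Adjacency lists for an undirected weighted graph given as (u, v, w) triples."""
    adj = {u: [] for u in range(n)}
    for u, v, w in edges:
        adj[u].append((v, w))
        adj[v].append((u, w))
    return adj

def dijkstra(n, adj, s):
    """Single-source shortest-path distances from s."""
    dist = [float("inf")] * n
    dist[s] = 0.0
    pq = [(0.0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

def stretch(n, g_edges, h_edges):
    """Smallest t such that H is a t-spanner of G (max distance ratio over all pairs)."""
    ag, ah = make_adj(n, g_edges), make_adj(n, h_edges)
    t = 1.0
    for s in range(n):
        dg, dh = dijkstra(n, ag, s), dijkstra(n, ah, s)
        for v in range(n):
            if 0 < dg[v] < float("inf"):
                t = max(t, dh[v] / dg[v])
    return t

def mst_weight(n, edges):
    """w(MST(G)) via Kruskal's algorithm with a simple union-find."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    total = 0.0
    for u, v, w in sorted(edges, key=lambda e: e[2]):
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            total += w
    return total

def sparsity(n, h_edges):
    return len(h_edges) / (n - 1)

def lightness(n, g_edges, h_edges):
    return sum(w for _, _, w in h_edges) / mst_weight(n, g_edges)
```

For instance, for the unit-weight $K_4$ and the star $H$ centered at one vertex, the sketch reports stretch $2$, sparsity $1$, and lightness $1$.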
\nThe ``greedy spanner'' is perhaps the most basic spanner construction, introduced in the seminal work of\nAlth\\\"{o}fer et al.~\\cite{ADDJS93}. \nFor any integer parameter $k \\ge 1$, it provides a $(2k-1)$-spanner with sparsity $O(n^{1\/k})$. On the negative side, the running time of the greedy spanner is rather high, namely $O(m(n^{1+1\/k} + n \\log n))$. \n\nThe celebrated paper of Baswana and Sen \\cite{DBLP:conf\/icalp\/BaswanaS03} presents a randomized algorithm for constructing $(2k-1)$-spanners with sparsity $O(n^{1\/k} \\cdot k)$, within time $O(m \\cdot k)$.\nRoditty, Thorup and Zwick \\cite{roditty2005deterministic} derandomized the Baswana-Sen \\cite{DBLP:conf\/icalp\/BaswanaS03} algorithm, without any loss in parameters.\nThis result is optimal except for an extra factor of $k$ that appears in both the spanner size and the runtime bound.\n\nBuilding on Miller et al.~\\cite{MPVX15}, Elkin and Neiman \\cite{EN18} gave a randomized algorithm for constructing $(2k-1)(1+\\epsilon)$-spanners with sparsity $O(n^{1\/k} \\cdot \\log k \\cdot \\log(1\/\\epsilon)\/\\epsilon)$, within time $O(m)$, for any $\\epsilon < 1$; \nin fact, their runtime analysis overlooks the time consumed by a certain bucketing procedure, which, at least naively, requires $\\Omega(\\mathsf{SORT}(m))$ time, where $\\mathsf{SORT}(m)$ is the time needed to sort $m$ integers. \nAlstrup et al. \\cite{ADFSW19} achieved a deterministic algorithm with the same guarantees; we note that time $\\Omega(\\mathsf{SORT}(m))$ is also needed by the construction of \\cite{ADFSW19} for the same reason. \nThese results demonstrate that by incurring an arbitrarily small multiplicative error of $1+\\epsilon$ to the stretch bound, one can achieve, within linear time (modulo the overlooked time needed for integer sorting), a near-optimal sparsity bound, except for an extra $\\log k$ factor. 
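The greedy rule just described is short enough to state in code. The following Python sketch is a naive rendition of the rule of Alth\"{o}fer et al. (scan edges by nondecreasing weight; keep an edge only if the current spanner distance between its endpoints exceeds $(2k-1)$ times its weight); it is an illustration, not an optimized implementation:

```python
import heapq

def greedy_spanner(n, edges, k):
    """Naive greedy (2k-1)-spanner: edges are (u, v, w) triples, vertices 0..n-1."""
    adj = {u: [] for u in range(n)}
    spanner = []
    for u, v, w in sorted(edges, key=lambda e: e[2]):
        bound = (2 * k - 1) * w
        # Dijkstra from u on the partial spanner, pruned at distance `bound`.
        dist = {u: 0.0}
        pq = [(0.0, u)]
        while pq:
            d, x = heapq.heappop(pq)
            if d > dist.get(x, float("inf")) or d > bound:
                continue
            for y, wy in adj[x]:
                nd = d + wy
                if nd <= bound and nd < dist.get(y, float("inf")):
                    dist[y] = nd
                    heapq.heappush(pq, (nd, y))
        # Keep the edge iff no path of length <= (2k-1)*w already exists.
        if dist.get(v, float("inf")) > bound:
            spanner.append((u, v, w))
            adj[u].append((v, w))
            adj[v].append((u, w))
    return spanner
```

On the unit-weight $K_4$ with $k=2$, the sketch keeps exactly the $3$ edges of a star, a $3$-spanner; with $k=1$ it keeps all $6$ edges.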
Additional results are summarized in \\Cref{tab:spanners}.\n \\input{tableConstructions}\n\nAs shown in \\Cref{tab:spanners}, the previous state-of-the-art runtime for constructing $(2k-1)(1+\\epsilon)$-spanners with near-optimal sparsity\nis $O(\\min\\{m(n^{1+1\/k} + n \\log n),\\, k \\cdot n^{2+1\/k}\\})$, even regardless of the lightness.\n\n\n\\begin{question} \\label{q1}\nCan one achieve a (nearly) linear time spanner construction with near-optimal sparsity? \n\\end{question} \n\n\n\n\n\nWe answer~\\Cref{q1} in the affirmative by presenting two algorithms for constructing spanners with near-optimal sparsity in near-linear time. Specifically, we prove the following result. (Refer to\n \\Cref{tab:spanners} for a detailed comparison between our and previous results.)\n\\begin{theorem}\\label{thm:1}\nFor any weighted undirected $n$-vertex $m$-edge graph $G$, any integer $k \\ge 1$ and any $\\epsilon < 1$, one can deterministically construct $(2k-1)(1+\\epsilon)$-spanners with a near-optimal sparsity of $O(n^{1\/k} \\cdot \\log(1\/\\epsilon)\/\\epsilon)$. This construction can be implemented: \n\\begin{itemize}\n\\item In the pointer-machine model within time $O(m\\alpha(m,n) \\cdot \\log(1\/\\epsilon)\/\\epsilon + \\mathsf{SORT}(m))$.\\footnote{In the \\emph{pointer machine model}, one can perform binary comparisons between data, arithmetic operations on data, dereferencing of pointers, and equality tests on pointers. The model does not permit pointer arithmetic or tests other than equality on pointers and thus is less powerful than the RAM model \\cite{DBLP:journals\/jcss\/Tarjan79}.}\n\n\\item In the \\textsf{Word RAM~} model \nwithin time $O(m \\log(1\/\\epsilon)\/\\epsilon)$.\\footnote{The \\textsf{Word RAM~} model is similar to the classic unit-cost RAM model, except that (1) For a word length $w \\ge 1$ the contents of all memory cells are integers up to $2^w$. 
(2) Some additional instructions are available; in particular, the available unit-time operations are those from the \\emph{restricted instruction set}: addition and subtraction, (noncyclic) bit shifts by an arbitrary number of positions, and bitwise boolean operations, but not multiplication. (3) It is also assumed that $w \\ge \\log{n}$.\nWe note that if the running time of the algorithm depends on the input size but not on the word size,\nthen the model is further called the \\textsf{Transdichotomous~} model; the running times of our algorithms do not depend on the word size.} \n\\end{itemize}\n\\end{theorem}\n\nWe remark that $\\alpha(m,n) = O(1)$ when $m = \\Omega(n\\log^{*}n)$. In fact, $\\alpha(m,n) = O(1)$ even when $m = \\Omega(n\\log^{*(c)}n)$ for any constant $c$, where $\\log^{*(\\ell)}(.)$ denotes the iterated log-star function with $\\ell$ stars; that is, $O(m\\alpha(m,n))$ is bounded by $O(m + n\\log^{*(c)}{n})$ for any constant $c$.\nThus the running time in the first item of~\\Cref{thm:1} is linear in $m$ in almost the entire regime of graph densities, i.e., except for very sparse graphs.\nMoreover, even when $\\alpha(m,n)$ is super-constant, it can still be viewed as constant for most practical purposes. 
\nHowever, there is a significant qualitative difference between truly linear-time and nearly linear-time algorithms,\nand shaving this factor for the entire regime of graph densities, as provided by the second item of~\\Cref{thm:1}, is of fundamental theoretical importance.\n\nThe previous linear-time algorithms for constructing sparse spanners in general weighted graphs \n\\cite{MPVX15,EN18,ADFSW19} achieve a sub-optimal sparsity bound of $O(n^{1\/k} \\cdot \\log k \\cdot \\log(1\/\\epsilon)\/\\epsilon)$, and, as mentioned, their runtime is actually $\\Omega(\\mathsf{SORT}(m))$.\nMoreover, these constructions, as well as all other spanner constructions with runtime $o(k m)$ (including ours), use a hierarchical clustering approach that involves constructing a so-called {\\em cluster graph} in each level of the hierarchy. Importantly, the cluster graph is a simple graph (without self loops and parallel edges), and all the previous works either overlooked the time needed to guarantee that the cluster graph is simple or they included an extra factor of $\\alpha(m,n)$ in the runtime bound --- due to the usage of the classic \n\\textsc{Union-Find}~data structure~\\cite{Tarjan75}. \nWe demonstrate that this factor can be shaved via a novel clustering approach, \nwhich we name {\\em MST-clustering}; refer to \\Cref{tech} for a discussion on the technical details.\n\n\n\\paragraph{Light spanners.~}\nLike sparsity, the lightness of spanners has been extremely well-studied. Alth\\\"{o}fer et al.~\\cite{ADDJS93} showed that the lightness of the greedy $(2k-1)$-spanner is $O(n\/k)$. \nDespite extensive research, the state-of-the-art lightness bound of any known $(2k-1)$-spanner construction (including the greedy spanner) remains poor, even regardless of the sparsity and runtime. 
It is thus only natural to explore the lightness bound for a slightly increased stretch of $(2k-1)(1+\\epsilon)$, where $\\epsilon < 1$ is an arbitrarily small parameter of our choice.\nChandra et al.~\\cite{CDNS92} showed that the greedy $(2k-1)(1+\\epsilon)$-spanner has lightness $O(k \\cdot n^{1\/{k}} \\cdot (1\/\\epsilon)^{2})$.\nThere was a sequence of works from recent years on light spanners \\cite{ES16,ENS14,CW16,FS20,EN18,ADFSW19,LS21}. \nIn particular, a construction of $(2k-1)(1+\\epsilon)$-spanners with a near-optimal lightness of $O(n^{1\/k} \\cdot \\mathsf{poly}(1\/\\epsilon))$ within a runtime of $O(m \\alpha(m,n))$ was presented recently \\cite{LS21}, where $\\alpha(\\cdot,\\cdot)$ is the inverse-Ackermann function;\non the negative side, the sparsity of the construction of \\cite{LS21} is unbounded.\nAs mentioned, the construction of \\cite{CW16} achieves a {near-optimal} bound of $O(n^{1\/k} \\cdot \\mathsf{poly}(1\/\\epsilon))$ on both sparsity and lightness, but its runtime is far from linear. The result of Filtser and Solomon~\\cite{FS20} implies that the greedy spanner achieves the same bounds\nas the construction of \\cite{CW16}, but the runtime $O(m(n^{1+1\/k} + n \\log n))$ of the greedy spanner is also rather high. \n\n\\begin{question} \\label{q2}\nCan one achieve a (nearly) linear time spanner construction with a near-optimal bound on both the sparsity and lightness?\n\\end{question} \n\n\nWe answer~\\Cref{q2} in the affirmative by presenting an algorithm for constructing $(2k-1)(1+\\epsilon)$-spanners with near-optimal sparsity and lightness in near-linear time, which culminates a long line of work in this area. 
Specifically, we prove the following result.\n\n\\begin{theorem}\\label{thm:2}\nFor any weighted undirected $n$-vertex $m$-edge graph $G$, any integer $k \\ge 1$ and any $\\epsilon < 1$, one can deterministically construct $(2k-1)(1+\\epsilon)$-spanners with a near-optimal bound of $O(n^{1\/k} \\cdot \\mathsf{poly}(1\/\\epsilon))$ on both sparsity and lightness. This construction can be implemented: \n\\begin{itemize}\n\\item In the pointer-machine model within time $O(m\\alpha(m,n) \\cdot \\mathsf{poly}(1\/\\epsilon) + \\mathsf{SORT}(m))$.\n\\item In the \\textsf{Word RAM~} model within time $O(m \\alpha(m,n) \\cdot \\mathsf{poly}(1\/\\epsilon))$.\n\\end{itemize}\n\\end{theorem}\n\nWe obtain the result of~\\Cref{thm:2} by strengthening the framework of~\\cite{LS21} for fast constructions of light spanners\nto achieve a near-optimal bound on the sparsity as well.\nTo this end, we plug the ideas used in the proof of~\\Cref{thm:1}, in conjunction with numerous new insights, on top of the framework of~\\cite{LS21} in a highly nontrivial way. Our MST-clustering approach plays a key role not just in the proof of~\\Cref{thm:1},\nbut also in the proof of~\\Cref{thm:2};\nrefer to \\Cref{tech} for more details. \n\n\\subsection{Technical Highlights} \\label{tech}\n\nOur spanner construction is inspired by the constructions of~\\cite{MPVX15},~\\cite{EN18} and~\\cite{ADFSW19}, which we briefly review next. \nAll these constructions achieve a runtime of $O(m)$, modulo the time needed for sorting the edge weights; we shall elaborate on this point later. \nThe construction of~\\cite{MPVX15} achieves stretch $O(k)$ with sparsity $O(n^{1\/k}\\log(k))$, while the two other constructions achieve the same sparsity but with a stretch of $(2k-1)(1+\\epsilon)$. (For clarity, we shall ignore the dependency on $\\epsilon$ in the sparsity bounds.) 
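The edge-weight bucketing that these constructions rely on can be sketched as follows. This schematic Python version uses floating-point logarithms for readability, and takes the scale base and the number of sets $\mu$ as parameters (e.g., base $2$ and $\mu = \Theta(\log k)$, as described next); the constructions discussed in this paper instead compute the indices via index sorting (predecessor search over fusion trees) to stay within the stated time bounds.

```python
import math
from collections import defaultdict

def partition_edges(edges, base, mu):
    """Bucket edges by weight scale: an edge of weight w gets index
    i = floor(log_base(w)), and the edges of index i go to set sigma = i mod mu.
    Within one sigma-set, two weights are within a factor `base` of each other
    (same index) or their indices differ by at least mu, i.e. the weights are
    separated by at least a factor base**(mu - 1)."""
    sets = defaultdict(lambda: defaultdict(list))
    for u, v, w in edges:
        i = math.floor(math.log(w, base))
        sets[i % mu][i].append((u, v, w))
    return sets
```

For example, with base $2$ and $\mu = 3$, weights $1$, $2$, $5$, $100$ get indices $0$, $1$, $2$, $6$, so the first and last edges land in the same $\sigma$-set but at well-separated scales.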
\n\nThe construction of~\\cite{MPVX15}\\footnote{The algorithm used in~\\cite{MPVX15} is parallel, and our interpretation of it is in the standard sequential model.} starts by dividing the edge set into $\\mu_k \\defi O(\\log k)$ sets $\\{E^1,E^2,\\ldots, E^{\\mu_k}\\}$, such that for each set $E^{\\sigma}$, $\\sigma \\in [1,\\mu_k]$, any two edge weights are either within a factor of $2$ from each other or they are separated by at least a factor of $k^{c}$ for some constant $c$. The algorithm then focuses on constructing a spanner $H^{\\sigma}$ for each edge set $E^{\\sigma}$ separately; the sparsity of $H^{\\sigma}$ is $O(n^{1\/k})$, which ultimately leads to a sparsity bound of $O(\\mu_k \\cdot n^{1\/k}) = O(\\log(k)\\cdot n^{1\/k})$ of the final spanner $H$. In the construction of $H^{\\sigma}$, the edge set $E^{\\sigma}$ is further divided into smaller subsets $\\{E^{\\sigma}_1,E^{\\sigma}_2,\\ldots\\}$, where edges in the same set $E^{\\sigma}_i$ have the same weights up to a factor of 2, and the weights of edges in $E^{\\sigma}_i$ are at least $k^{c}$ times greater than the weights of edges in $E^{\\sigma}_{i-1}$, for each $i$. \nThe construction of~\\cite{MPVX15} uses a \\emph{hierarchy of clusters} and an \\emph{unweighted} {\\em cluster graph} $R_i$ for each level $i$ of the hierarchy. The vertex set of $R_i$ corresponds to a \\emph{subset} of level-$i$ clusters that are incident to at least one edge in $E_i^{\\sigma}$, and the edge set of $R_i$ corresponds to a subset of edges in $E^{\\sigma}_{i}$ \n interconnecting level-$i$ clusters. A preprocessing step is applied to the construction of $R_i$ to remove parallel edges, which are edges in $E^{\\sigma}_{i}$ connecting the same two level-$i$ clusters, and self-loops, which are edges in $E^{\\sigma}_{i}$ both of whose endpoints are in the same level-$i$ cluster. 
The construction of~\\cite{MPVX15} then\nbuilds an $O(k)$-spanner for the (unweighted) graph $R_i$ to obtain a subset of edges $S_i$ of $E^{\\sigma}_{i}$ to add to $H^{\\sigma}$. Next, vertices in $R_i$ are grouped into a set $\\mathcal{U}$ of subgraphs of (unweighted) diameter $\\Theta(k)$; each subgraph in $\\mathcal{U}$ is then transformed into a level-$(i+1)$ cluster. The construction then continues to level $i+1$, then to level $i+2$, etc., until all the edges in the graph have been considered. The construction of the (unweighted) $O(k)$-spanner of $R_i$ and the set of subgraphs $\\mathcal{U}$ is randomized and based on sampling from an exponential distribution. \n\nThe construction of~\\cite{EN18} builds on that of~\\cite{MPVX15}. First, it partitions the edge set into $\\mu_{k,\\epsilon} = O(\\log(k)\/\\epsilon)$ sets of edges instead of $O(\\log k)$ sets as in~\\cite{MPVX15}; the idea is that for each set $E^{\\sigma}$, $\\sigma \\in [1,\\mu_{k,\\epsilon}]$, any two edge weights are either within a factor of $1+\\epsilon$ from each other, or are separated by at least a factor of $k^{c}$ for some constant $c$. Next, the construction of~\\cite{EN18} uses the same idea of~\\cite{MPVX15} to construct the spanner of $R_i$ and the set of subgraphs $\\mathcal{U}$. However, the stretch of the spanner is improved to $(2k-1)$, which readily implies a stretch of $(4k-2)(1+\\epsilon)$ for the final spanner. We note that the stretch is $(4k-2)(1+\\epsilon)$ instead of $(2k-1)(1+\\epsilon)$, due to a subtlety involving randomness in~\\cite{MPVX15}. With a more sophisticated analysis, \\cite{EN18} resolves this subtlety and reduces the stretch to $(2k-1)(1+\\epsilon)$. The sparsity of the final spanner is $O(\\mu_{k,\\epsilon} \\cdot n^{1\/k}) = O(\\log(k) \\cdot n^{1\/k})$, ignoring the dependence on $\\epsilon$. \n\nUnlike the constructions of \\cite{MPVX15,EN18}, the construction of~\\cite{ADFSW19} is \\emph{deterministic}. 
A central idea in the construction of~\\cite{ADFSW19}, inspired by an earlier work~\\cite{ES16}, is to use a modified version of the Halperin-Zwick algorithm \\cite{HZ96} in the construction of the spanner of $R_i$. The spanner of $R_i$ has stretch $(2k-1)$, which implies the final stretch of $(2k-1)(1+\\epsilon)$. The sparsity of the spanner remains $O(\\mu_{k,\\epsilon} \\cdot n^{1\/k}) = O(\\log(k) \\cdot n^{1\/k})$, as in \\cite{MPVX15,EN18}.\n\n\nWe note the following points regarding the aforementioned constructions. \n\\begin{enumerate}\n\\item First, the sparsity incurs an extra factor of $O(\\log k)$,\ni.e., it is $O(\\log(k) \\cdot n^{1\/k})$ rather than $O(n^{1\/k})$. This is unavoidable in these constructions, since subgraphs in $\\mathcal{U}$ of $R_i$ have a diameter of $\\Theta(k)$, hence the weights of edges in $E^{\\sigma}_{i+1}$ and $E^{\\sigma}_{i}$ must be at least a factor of $k^c$ apart from each other, which ultimately leads to a factor $O(\\log k)$ in the number of sets that the edge set $E$ is partitioned into. \n\\item Second, each set $E^{\\sigma}$ is partitioned into $O(\\log U)$ sets $\\{E_1^{\\sigma}, E_2^{\\sigma},\\ldots\\}$, where $U$ is the maximum edge weight. Thus, at least naively, the partition of $E^{\\sigma}$ can be constructed in time $O(m + \\log U)$ rather than $O(m)$, where $U$ could be unbounded. One way to avoid the dependency on $U$ is to sort all edge weights of $E^{\\sigma}$, which requires time $\\mathsf{SORT}(m)$. We note that the computation of the partition of $E^{\\sigma}$ into subsets has been overlooked in \nthe aforementioned constructions~\\cite{MPVX15,EN18,ADFSW19}. In the \\textsf{Word RAM~} model, we use the simple observation that $O(\\log U)$ is roughly the word size to guarantee that such a partition can be computed within $O(m)$ time. \n\\item Third, the aforementioned constructions involve constructing a cluster graph $R_i$ associated with each level $i$ of the hierarchy. 
\nWhile the details of maintaining $R_i$ are not precisely described in these constructions, we observe that $R_i$ can be efficiently maintained using the \\textsc{Union-Find}~data structure. However, the total runtime would be $O(m\\alpha(m,n))$ rather than $O(m)$. \nWe next show that the non-optimal sparsity bound of $O(n^{1\/k}\\log k)$ achieved by the previous works can be used to remove the factor $\\alpha(m,n)$. Observe that $m \\alpha(m,n) = O(m)$ when $m = \\Omega(n \\log\\log(n))$. If $m = O(n^{1+1\/k}\\log k)$, we can simply return the whole graph as the output spanner. Otherwise, $m = \\Omega(n^{1+1\/k}\\log k) = \\Omega(n \\log\\log n)$ for every $k\\geq 2$, in which case the total time to construct a spanner of size $O(n^{1+1\/k}\\log k)$ is $O(m \\alpha(m,n)) = O(m)$. However, the same argument fails when aiming for the near-optimal sparsity bound of $O(n^{1\/k})$ that we achieve (e.g., $O(n^{1+1\/k}) = O(n)$ when $k = \\Omega(\\log n)$). To construct a spanner with a sparsity of $O(n^{1\/k})$ in $O(m)$ time, one must overcome the ``\\textsc{Union-Find}~barrier''. We note that even in the cell-probe model, which is stronger than the \\textsf{Word RAM~} model, one cannot avoid the factor $\\alpha(m,n)$ in the \\textsc{Union-Find}~data structure~\\cite{FS89}. \n\\end{enumerate}\n\n \n Our first construction is in the pointer-machine model; there we overcome the ``(unweighted) diameter barrier'' of $\\Theta(k)$ of subgraphs in $\\mathcal{U}$ constructed from $R_i$: Subgraphs in our construction have (unweighted) diameters of $O(1)$. As a consequence, we demonstrate that it suffices to partition $E$ into $\\mu_{\\epsilon} \\defi O(\\frac{1}{\\epsilon}\\log(1\/\\epsilon))$ sets instead of $O(\\log k\/\\epsilon)$ sets, which ultimately leads to the optimal sparsity of $O(n^{1\/k})$, ignoring the dependence on $\\epsilon$. 
The key idea behind our construction is rather simple --- we prove that it suffices to construct level-$(i+1)$ clusters from level-$i$ clusters such that the total number of clusters is reduced by $\\Omega(|V(R_i)|)$. We then use the Halperin-Zwick algorithm~\\cite{HZ96} to construct a $(2k-1)$-spanner for $R_i$. Next, we construct the set of subgraphs $\\mathcal{U}$ greedily, with each having diameter $O(1)$. By using the \\textsc{Union-Find}~data structure in the construction of $R_i$, the total running time of our algorithm is $O(m\\alpha(m,n))$, plus an additive term of $\\mathsf{SORT}(m)$ needed for computing the partition of $E^{\\sigma}$ as discussed above. Note that we cannot use the trick that we provided earlier to remove the $\\alpha(m,n)$ factor since our spanner construction does not have any slack on the sparsity.\nOur construction is deterministic and improves on the aforementioned constructions~\\cite{MPVX15,EN18,ADFSW19},\nyet it is arguably simpler. \n\nOur linear-time spanner construction in the \\textsf{Word RAM~} model is based on a novel clustering approach, which we name {\\em MST-clustering}. Specifically, we guarantee that the subgraphs induced by clusters are subtrees of a minimum spanning tree (MST) of the graph, denoted by $\\texttt{MST}$, and hence, every \\textsc{Union}~operation is performed along the edges of $\\texttt{MST}$. That is, each \\textsc{Union}~operation is of the form $\\textsc{Union}(u,v)$, where $(u,v)$ is an edge in $\\texttt{MST}$. As a result, we are able to determine all the \\textsc{Union}~operations even before the cluster construction takes place. This allows us to use a refined \\textsc{Union-Find}~data structure, by Gabow-Tarjan \\cite{GT85}, which has $O(1)$ amortized cost per \\textsc{Union}\/\\textsc{Find}~operation. 
To the best of our knowledge, this is the first time that the MST serves as the union tree in the Gabow-Tarjan \\textsc{Union-Find}~data structure, other than in applications that {\\em directly concern MST}.\n\nThe idea of using the MST in the context of clustering in spanner constructions is quite surprising. In many of the known spanner constructions, clusters in the cluster hierarchy need to satisfy a diameter constraint. That is, clusters at level-$i$ should have a diameter of at most $f(L_i)$, for some function $f$, often a linear function, where $L_i$ is an upper bound on the edge weights in $E^\\sigma_i$. \nIn particular, the approaches of~\\cite{MPVX15,EN18,ADFSW19,LS21} utilize the fact that some edges (not in $\\texttt{MST}$) have been added during the construction of clusters at lower levels, and use these edges to construct clusters that satisfy the diameter constraint. By restricting ourselves to only use $\\texttt{MST}$ for clustering, it seems much more challenging (and perhaps impossible at first) to guarantee the diameter constraint for level-$i$ clusters. Our key insight is that it is still possible to do so, and to this end we rely on the cycle property of MST, both for arguing that clusters have small diameters and for constructing clusters efficiently. \n \nFinally, we show how to construct a spanner with near-optimal sparsity {\\em and lightness}. Our construction builds on the fast construction of spanners with near-optimal lightness in~\\cite{LS21}. The construction of \\cite{LS21} has a preprocessing step and a main construction step. In the preprocessing step, every edge of weight at most $\\frac{w(\\texttt{MST})}{m\\epsilon}$ is added to the spanner. Clearly the number of edges added in this step could be as large as $\\Omega(n^2)$ (for dense graphs). 
Our first observation is that, except for $\\texttt{MST}$ edges, edges added in the preprocessing step are not involved in the main construction step, and hence we can apply our sparse spanner construction from \\Cref{thm:1} to reduce the number of edges added in the preprocessing step to $O(n^{1+1\/k})$. The main construction step is based on a cluster hierarchy. However, clusters in~\\cite{LS21} are ``equipped'' with a potential function, and the challenge of the cluster construction is to guarantee a sufficient reduction in the potential values between two consecutive levels of the hierarchy. A cluster graph is also used to select a subset of edges in $E^\\sigma_i$ to add to the spanner. Again, the number of edges added in this step could be as large as $\\Omega(n^2)$. In order to obtain a spanner with near-optimal guarantees on both sparsity and lightness, we employ the insight that we developed in this paper for the construction of sparse spanners, by constructing clusters in such a way that, between two consecutive levels, there is a sufficient reduction not just in the potential values, but also in the {\\em number of clusters}. This, in turn, makes the task of constructing clusters much more challenging; indeed, a priori, it is unclear whether it is possible to achieve both objectives via a single (fast) spanner construction. \n\n\nThe spanner construction of \\cite{LS21} constructs level-$(i+1)$ clusters in 5 steps; each level-$(i+1)$ cluster corresponds to a subgraph of a cluster graph $R_i$. We note that the cluster graph $R_i$ in this construction is different from the cluster graph used in the sparse spanner constructions in that its MST, denoted by $\\widetilde{\\mst}_{i}$, is derived from the MST of $G$. We observe that among the 5 steps used in \\cite{LS21}, there are two steps where the reduction in the number of clusters is not guaranteed. Furthermore, the clusters formed in these two steps are subgraphs of $\\widetilde{\\mst}_{i}$. 
Thus, our idea is to apply the insights we developed in the sparse spanner construction in the \\textsf{Word RAM~} model to this setting. However, there are two subtleties in the construction of \\cite{LS21} that we need to address. First, the cluster graph $R_i$ has weights on both edges and vertices. As a result, $\\widetilde{\\mst}_{i}$ also has weights on both edges and vertices. Second, clusters in the construction of \\cite{LS21} contain \\emph{virtual vertices}; these vertices are not in the input graph and are introduced to support the design of the potential function for clusters. We show an analogous version of the cycle property for $\\widetilde{\\mst}_{i}$. We use this property, in addition to several other technical ideas, to transfer insights that we developed in the construction of sparse spanners in the \\textsf{Word RAM~} model to the cluster construction in this setting. As a result, our spanner construction that achieves near-optimal bounds on both sparsity and lightness is much more involved than our two aforementioned constructions \n(which prove \\Cref{thm:1}) with near-optimal sparsity but possibly huge lightness. 
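For reference, the \textsc{Union-Find}~data structure discussed throughout can be sketched as follows. This is the classic union-by-rank variant with path compression~\cite{Tarjan75}, with $O(\alpha(m,n))$ amortized cost per operation; it is \emph{not} the Gabow-Tarjan refinement~\cite{GT85}, which additionally assumes that the union tree (in our case, $\texttt{MST}$) is known in advance and thereby achieves $O(1)$ amortized cost.

```python
class UnionFind:
    """Disjoint-set forest with union by rank and path compression."""

    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        # Locate the root, then compress the path to it.
        root = x
        while self.parent[root] != root:
            root = self.parent[root]
        while self.parent[x] != root:
            self.parent[x], x = root, self.parent[x]
        return root

    def union(self, x, y):
        # Attach the shallower tree under the deeper one; report whether merged.
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return False
        if self.rank[rx] < self.rank[ry]:
            rx, ry = ry, rx
        self.parent[ry] = rx
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1
        return True
```

In the MST-clustering setting, every call would be of the form `union(u, v)` for an edge $(u,v)$ of $\texttt{MST}$, which is exactly what makes the precomputed-union-tree refinement applicable.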
\n\n\n\n\\section{Statement of the main result}\nThe problem we consider is whether or not one may find \nan embedded Lagrangian $\\mathbb{RP}^2$ in the three-fold blow-up of the symplectic ball.\nLet $(B,\\omega)$ be the symplectic ball with $\\int_{B} \\omega^2 = 1$, and \nlet $B_3(\\mu_1,\\mu_2,\\mu_3)$ be $B$ blown-up three times; here $\\mu_i > 0$ are the areas of the exceptional curves, which satisfy $1 - \\mu_i - \\mu_j > 0$ for $i \\neq j$.\nNote that the positivity condition $1 - \\sum_i \\mu_i^2 > 0$ is \nautomatically satisfied.\n\nWe will show that \n$B_3(\\mu_1,\\mu_2,\\mu_3)$\nadmits an embedded Lagrangian $\\mathbb{RP}^2$ if and only if\nthe $\\mu_i$ obey \n\\[\n\\mu_i < \\mu_j + \\mu_k,\n\\]\nso that the sum of the sizes of any two blow-ups must be greater than the \nsize of the remaining blow-up. The existence of a Lagrangian $\\mathbb{RP}^2$ in $B_3$ \nhas been previously reported in \\cite{BLW}, under the assumption that the $\\mu_i$ \nare equal to each other and sufficiently small.\n\nAlthough it is immediate that there is no embedded Lagrangian $\\mathbb{RP}^2$ in the symplectic ball $B$,\none may ask if there is one in the blow-up of $B$ or the two-fold blow-up of $B$. The answer to this question \nis negative as there is a topological obstruction to such an embedding; a result due to Audin \\cite{Aud} says that if \n$L$ is an embedded Lagrangian $\\mathbb{RP}^2$ then \n\\[\n[L]^2 = 1\\ \\text{mod}\\,4.\n\\]\n(The reader will recall here that \nthe self-intersection number of mod 2 \nclasses has a lift to $\\zz_4$ coefficients,\nthe Pontrjagin square.) It is easy to see that neither the blow-up of $B$ nor the two-fold blow-up has suitable homology classes: writing $[L] = a_1 E_1 + a_2 E_2$ with $a_i \\in \\{0,1\\}$ in terms of the exceptional classes, the Pontrjagin square equals $-(a_1^2 + a_2^2)$, which is $0$, $3$ or $2$ mod $4$, never $1$.\n\nThere is no general method to find obstructions for Lagrangian embeddings into symplectic $4$-manifolds, though there are many results known. 
\nFor instance, Li and Wu show (see \\cite{LW}) that there exists an embedded Lagrangian sphere in the two-fold blow-up of $B$ if and only if the sizes of the blow-ups are equal to each other.\n\nAlthough one can always find an embedded Lagrangian torus in $B$, such an embedding must satisfy interesting symplectic constraints. We let $\\alpha$ denote the action form on $B$, $\\text{\\normalfont{d}}\\,\\alpha = \\omega$. If $T^2$ is a Lagrangian torus in $B$, then the restriction of $\\alpha$ to $T^2$ is closed and, therefore, defines a class in $\\sfh^1(T^2;\\rr) \\cong \\rr^2$.\nA classical result of Gromov says (see \\cite{Gro}) that $[\\alpha]$ never vanishes. \nIn \\cite{HO}, Hind and Opshtein established a certain bound on the size of $B$ in terms of $[\\alpha] \\in \\sfh^1(T^2;\\rr)$. \n\nIt is shown by Nemirovski-Shevchishin (see \\cite{N,Sh}) that there is no Lagrangian embedding of the Klein bottle into $B$.\n\n\\state Acknowledgements. \nWe are indebted to Silvia Anjos and Rosa Sena-Dias for \nuseful discussions. We also wish to thank Jeff Hicks for \nreading the manuscript and for helpful comments.\n\n\n\\section{Preliminaries}\n\n\\subsection{Symplectic rational blow-up}\nFor symplectic $4$-manifolds, the standard blow-down is performed by removing a neighbourhood of a symplectic sphere with \nself-inter\\-section $-1$ and replacing the sphere with the standard symplectic $4$-ball. \nThe \\slsf{symplectic rational blow-down} involves replacing a neighbourhood of a symplectic $(-4)$-sphere \nwith the symplectic rational homology ball, which is the standard symplectic neighbourhood of $\\mathbb{RP}^2$ in $T_{\\mathbb{RP}^2}$. \nFor details, see \\cite{F-S, Sym-1}, where more general blow-downs are considered.\n\nA different viewpoint comes from the symplectic sum surgery introduced in \\cite{MW,Gm}. 
Consider two symplectic $4$-manifolds $(X_i,\\omega_i)$, $i=1,2$, which contain symplectic spheres $S_i$ with \n\\[\n[S_1]^2 = -[S_2]^2\\quad \\text{and}\\quad \\int_{S_1} \\omega_1 = \\int_{S_2} \\omega_2.\n\\]\nLet $\\overline{X_i - S_i}$ be the manifold with boundary such that\n$\\overline{X_i - S_i} - Y_i$ is symplectomorphic to $(X_i - S_i, \\omega_i)$,\nwhere $Y_i = \\del(\\overline{X_i - S_i})$ is diffeomorphic to a circle bundle over $S_i$.\nThe symplectic sum $X_1 \\#_{S_1 = S_2} X_2$ is defined as $\\overline{X_1 - S_1} \\cup_{\\varphi} \\overline{X_2 - S_2}$, where $\\varphi \\colon Y_1 \\to Y_2$ is an orientation-reversing diffeomorphism.\n\nOne may equip $X_1 \\#_{S_1 = S_2} X_2$ with a symplectic structure $\\omega$ which agrees with $\\omega_i$ over $X_i - S_i$ and whose properties can be recovered from those of $\\omega_i$. For instance, \n\\[\n\\int_{X_1 \\#_{S_1 = S_2} X_2} \\omega^2 = \\int_{X_1} \\omega_1^2 + \\int_{X_2} \\omega_2^2.\n\\]\nThere are various descriptions of the symplectic sum available in the literature; the one in \n\\cite{Sym-2} is particularly visual.\n\nLet $(\\wt{X},\\omega)$ be a symplectic $4$-manifold containing a symplectic $(-4)$-sphere $\\Sigma$, and \nlet $\\omega_0$ be the Fubini-Study symplectic form on $\\cp^2$. One may perform \nthe symplectic sum \n\\begin{equation}\\label{eq:split}\nX := \\wt{X} \\#_{\\Sigma = Q} \\cp^2,\n\\end{equation}\nwhere $Q \\subset \\cp^2$ is the quadric \n$Q = \\left\\{ z_0^2 + z_1^2 + z_2^2 = 0 \\right\\}$. \nNote that we need to scale $\\omega_0$ up such that\n$$\n\\int_{\\Sigma} \\omega = \\int_{Q} \\omega_0.\n$$\nNote also that the complement of $Q$ in $\\cp^2$ is a symplectic neighbourhood \nof the Lagrangian projective plane $\\left\\{ z_i = \\bar{z}_i \\right\\}$, and the Lagrangian therefore embeds into $X$. \n\nSince a symplectic neighbourhood of an embedded Lagrangian $\\mathbb{RP}^2$ is entirely standard, the rational blow-down surgery is reversible. 
Namely, whenever $X$ contains an embedded Lagrangian $L \\cong \\mathbb{RP}^2$,\nthere exists a sufficiently small $\\varepsilon > 0$ such that $X$ splits according to \\eqref{eq:split} with $\\int_{Q} \\omega_0 = 4\\,\\varepsilon$. \n\nWe shall say that the manifold $\\wt{X}$ in \\eqref{eq:split} is the \\slsf{symplectic rational blow-up} of $L$\nin $X$. Then the value of $4\\,\\varepsilon$, which may be chosen arbitrarily small, is called the \\slsf{size of the rational blow-up}. See \\cite{Kh-1,Kh-2} for a detailed study of symplectic rational blow-ups.\n\n\nIf $X$ is the rational blow-down of $\\Sigma$ from $\\wt{X}$, then \n\\begin{equation}\\label{eq:b}\nb_1(X) = b_1(\\wt{X}), \\quad\nb_2^{+}(X) = b_2^{+}(\\wt{X}), \\quad \nb_2^{-}(X) = b_2^{-}(\\wt{X}) - 1.\n\\end{equation}\nThese equations follow from \\cite{F-S}. We now discuss the \nrelation between \nthe intersection form of $X$ and that of $\\wt{X}$ in detail.\n\n\\subsection{Lattice calculation.}\\label{lattice} \nIn this note a \\slsf{lattice} is a free Abelian group $\\Lambda\\cong \\zz^n$\nequipped with a non-degenerate symmetric bilinear form $q_\\Lambda: \\Lambda \\times \\Lambda \\to \\zz$. \n\nLet $(X,\\omega)$ be a compact symplectic manifold, $L \\cong \\mathbb{RP}^2$ be a \nLagrangian in $X$, and $(\\wt{X},\\ti\\omega)$ be the rational blow-up \nof $L$ in $X$. Denote by $\\Sigma$ the resulting exceptional $(-4)$-sphere, by $\\Lambda:=\\sfh_2(X,\\zz)\/\\mathrm{Tor}$ the $2$-homology lattice of $X$, \nand by $\\wt\\Lambda:= \\sfh_2( \\wt X,\\zz) \/\\mathrm{Tor}$ the same lattice of $\\wt X$.\n\n\n\\smallskip\nFollowing \\cite{BLW}, we describe \nthe relation of $\\wt\\Lambda$ to $\\Lambda$.\nThe intersection \nwith $L \\cong\\mathbb{RP}^2$ defines a homomorphism \n$w_L \\colon \\Lambda \\to \\zz_2$. \nDenote by $\\Lambda'$ the kernel of this homomorphism. \nThis is a sublattice of $\\Lambda$ of index $2$. 
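For the discriminant bookkeeping carried out below, it is convenient to record the standard fact on how discriminants behave under passing to finite-index sublattices; the short derivation is included for the reader's convenience (the change-of-basis matrix $A$ is ad hoc notation):
\\[
\\Lambda'' \\subset \\Lambda \\ \\text{of finite index}
\\quad\\Longrightarrow\\quad
\\mathrm{disc}(\\Lambda'') = \\big[\\Lambda : \\Lambda''\\big]^2 \\cdot \\mathrm{disc}(\\Lambda).
\\]
Indeed, if the inclusion is expressed in bases by an integer matrix $A$, then $|\\det A| = [\\Lambda : \\Lambda'']$ and the Gram matrices satisfy $G'' = A^{T} G A$, so that $\\det G'' = (\\det A)^2 \\det G$. In particular, since $\\Lambda$ is unimodular, the index $2$ sublattice $\\Lambda'$ has discriminant $2^2 = 4$.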
\n\nThe elements of $\\Lambda'$ are represented by oriented surfaces in $X$ having vanishing $\\zz_2$-intersection index with $L$. \nBy placing such a surface $Y$ in generic position we obtain an even number of transverse intersection points of $Y$ with $L$. The intersection points can easily be made to disappear, \nby cutting from $Y$ a small neighbourhood of each intersection point and \nconnecting the boundaries by tubes. If desired, the surgery can be done \nin such a way that the obtained surface remains orientable, see \\slsf{Lemma 4.10} \nin \\cite{BLW}.\n\nWe therefore conclude that $\\Lambda'$ is the $2$-homology lattice of $X\\bs L$.\nSince there exists a natural diffeomorphism $X\\bs L \\cong \\wt X \\bs \\Sigma$, we obtain a natural embedding $\\Lambda'\\subset \\wt \\Lambda$. The image of the latter will be denoted by $\\wt\\Lambda'$.\n\nOn the other hand, the homology class of $\\Sigma$ generates the sublattice $\\zz\\lan[\\Sigma]\\ran \\subset \\wt \\Lambda$ of rank $1$. \nIn a similar vein, one shows that the orthogonal sublattice $[\\Sigma]^\\perp$ is generated by oriented surfaces disjoint from $\\Sigma$, and that sublattice is canonically identified with $\\wt\\Lambda'$.\nIf $S$ is an oriented embedded surface in $\\wt X$ such that $[S] \\in [\\Sigma]^\\perp$, \nthen one constructs a representative of $[S]$ that is disjoint from $\\Sigma$ as follows.\nArrange $S$ to be transverse to $\\Sigma$ so that they intersect each other in finitely many points \n$Q_1,\\ldots,Q_k$.\nPick a pair of points $Q_1,Q_2$ of opposite signs; we want to get rid of them.\nLet $\\Gamma_1$ and $\\Gamma_2$ be small circles in $S$ going around the points $Q_1$ and $Q_2$, respectively.\nPick a path $\\gamma \\subset \\Sigma$ from $Q_1$ to $Q_2$. \nThen, using a thin tube following the chosen path, we can connect $\\Gamma_1$ to $\\Gamma_2$. \nThe intersections $Q_1$ and $Q_2$ have now been eliminated. 
The number of positive points $Q_i$ must be equal to the number of negative $Q_i$, or $[\\Sigma] \\cdot [S]$ would not have vanished. \nSo pick another pair of points, find a path between them, eliminate, and so on till we run out of intersection points. \n\n\n\nThus the sum $\\wt\\Lambda'\\oplus \\zz\\lan[\\Sigma]\\ran$ is orthogonal, and this is a sublattice in $\\wt\\Lambda$ of finite index. \n\nThe index $\\big[\\wt \\Lambda: \\wt\\Lambda'\\oplus \\zz\\lan[\\Sigma]\\ran \\big]$ is the square root of the discriminant of the lattice $\\wt\\Lambda'\\oplus \\zz\\lan[\\Sigma]\\ran$.\nRecall that the \\slsf{discriminant} of a lattice is the absolute value of the determinant of the Gram matrix of the lattice with respect to any basis. \nSince the sum $\\wt\\Lambda'\\oplus \\zz\\lan[\\Sigma]\\ran$ is orthogonal, this discriminant is the product\nof the discriminants of $\\wt\\Lambda'$ and $\\zz\\lan[\\Sigma]\\ran$. \n\nThe first discriminant is $4=2^2$ since $\\Lambda' \\cong \\wt\\Lambda'$ has index $2$ in the unimodular lattice $\\Lambda$. In the case of $\\zz\\lan[\\Sigma]\\ran$\nthe discriminant is $|\\Sigma^2|=|-4|=4$. It follows that the discriminant of the lattice $\\wt\\Lambda'\\oplus \\zz\\lan[\\Sigma]\\ran$ is $4\\cdot4=16$, and so the index is $4$. In particular, for every $\\lambda\\in\\wt\\Lambda$ the multiple $4\\,\\lambda$ lies in $\\wt\\Lambda'\\oplus \\zz\\lan[\\Sigma]\\ran$.\n \\smallskip%\n\nWe sum up our previous considerations as follows:\n\\begin{lem} Let $(X,\\omega)$ be a closed symplectic $4$-manifold, and $L\\subset X$ be a Lagrangian real projective plane in $X$.\nLet $(\\wt X, \\wt\\omega)$ be the symplectic rational blow-up of $L$ in $X$, and $\\Sigma$ the arising $(-4)$-sphere. Denote by $\\Lambda$ and $\\wt\\Lambda$ the integer lattices of $X$ and resp.\\ $\\wt X$. 
Let $\\Lambda'$ be the sublattice of vectors $\\lambda \\in \\Lambda$ having vanishing $\\zz_2$-intersection with $L$.\n\nThen the lattice $\\wt \\Lambda$ admits a sublattice naturally isomorphic to $\\Lambda' \\oplus \\zz\\lan[\\Sigma]\\ran$, and the quotient group is $\\zz_4$. (This follows from unimodularity of $\\wt\\Lambda$.)\n\nSince the rational blow-up surgery does not affect the symplectic form $\\omega$ away from some tubular neighbourhood of $L$, we see that the Chern class $c_1(\\wt X)$ coincides with the class $c_1(X)$ on the sublattice $\\Lambda'$, and so do the classes $[\\omega]$ and $[\\wt{\\omega}]$.\n\\end{lem}\n\n\\section{The inequalities}\n\nWe define a \\slsf{symplectic ball} $B_0$ as the round ball of radius $r$ in $\\rr^4=\\cc^2$ equipped with the standard symplectic structure\n\\[\n\\omega_{0}:= \\frac{\\cpi}{2} \\big( \\text{\\normalfont{d}} z_1 \\wedge \\text{\\normalfont{d}}\\bar{z}_1 +\n\\text{\\normalfont{d}} z_2 \\wedge \\text{\\normalfont{d}}\\bar{z}_2 \\big)\n= \\text{\\normalfont{d}} x_1 \\wedge \\text{\\normalfont{d}} y_1 + \\text{\\normalfont{d}} x_2 \\wedge \\text{\\normalfont{d}} y_2.\n\\]\nIn this case we say that the quantity $\\pi r^2$ is the \\slsf{size of the ball} $B_0$. This is the $\\omega_0$-area of the disc $\\{ (x_1,y_1;0,0)\\;:\\; x_1^2 +y_1^2 \\le r^2\\}$ in $B_0$. \n\nTake the symplectic ball $(B_0,\\omega_{0})$ of size $1$. Inside $B_0$ take three disjoint symplectic balls $B(x_i,\\mu_i)$, $i = 1,2,3$, of sizes $\\mu_i > 0$ and centers $x_i$. By $B_3(\\mu_1, \\mu_2, \\mu_3)$ we denote the three-fold blow-up of $B_0$ at $x_i$, and by $E_i \\subset B_3(\\mu_1, \\mu_2, \\mu_3)$ we denote the arising exceptional spheres.\n\n\\subsection{Construction of Lagrangian $\\mathbb{RP}^2$'s in a\ntriply blown-up ball.} For this discussion we follow closely \\slsf{\\S~4.3.1} in \\cite{BLW}. \n\nTake the symplectic ball $(B_0, \\omega_{0})$ of size $1$. 
\nInside $B_0$ take a symplectic ball $B(\\wt{x}_0,\\wt\\mu_0)$ of size \n$\\wt\\mu_0 > 0$ and center $\\wt{x}_0$. \nLet $(B_1,\\wt\\omega_1)$ be the symplectic blow-up of the ball $(B_0,\\omega_{0})$ at $\\wt{x}_0$ of size $\\wt\\mu_0$, using the ball $B(\\wt{x}_0,\\wt\\mu_0)$. \nDenote by $\\wt{E}_0$ the arising exceptional sphere. \nThen \n$\\int_{\\wt{E}_0}\\wt\\omega_1=\\wt\\mu_0$.\n\nTake three distinct points $\\wt{x}_1, \\wt{x}_2, \\wt{x}_3$ on $\\wt{E}_0$. \nThen there exist disjoint symplectic balls $B(\\wt{x}_i,\\wt\\mu_i)$ of some sizes $\\wt{\\mu}_i > 0$ such that each intersection \n$\\wt{E}_0 \\cap B(\\wt{x}_i,\\wt{\\mu}_i)$ is a disc $D(\\wt{x}_i,\\wt{\\mu}_i)$ of area $\\wt{\\mu}_i$. \nNotice that we get \n$\\wt{\\mu}_1 + \\wt{\\mu}_2 + \\wt{\\mu}_3 < \\wt{\\mu}_0$. \n\nLet $(B_4,\\wt{\\omega}_4)$ be the three-fold symplectic blow-up of the domain $(B_1, \\wt{\\omega}_1)$ at the points $\\wt{x}_i$ using the balls $B(\\wt{x}_i,\\wt{\\mu}_i)$. \nDenote by $\\wt{E}_1, \\wt{E}_2, \\wt{E}_3$ the arising exceptional spheres. \nThen $\\int_{\\wt{E}_i}\\wt{\\omega}_4 = \\wt{\\mu}_i$. \nThe proper preimage of $\\wt{E}_0$ in $(B_4,\\wt{\\omega}_4)$ is a symplectic sphere $\\Sigma$ of homology class $[\\Sigma] = \n[\\wt{E}_0] - ([\\wt{E}_1] + [\\wt{E}_2] +[\\wt{E}_3])$ and of area \n$\\int_{\\Sigma} \\wt{\\omega}_4 \n= \\wt{\\mu}_0 - (\\wt{\\mu}_1 + \\wt{\\mu}_2 + \\wt{\\mu}_3)$.\n\nRecall that there exists a symplectic embedding $(B_0,\\omega_0) \\subset (\\cp^2,\\omega_{st})$ such that the complement of $B_0$ in $\\cp^2$ is a projective line $H$. \nHere $\\omega_{st}$ is the Fubini-Study form on $\\cp^2$ normalized by $\\int_{\\cp^2} \\omega_{st}^2 = 1$.\nA classical result of Lalonde-McDuff \\cite{LaMc} says that if a symplectic $4$-manifold $X$ contains an embedded symplectic sphere of non-negative self-intersection number, then $X$ is either rational or ruled (not necessarily minimal.) 
If, moreover, there is an embedded sphere of positive self-intersection number, then $X$ is either $S^2 \\times S^2$ or is $\\cp^2$ blown-up a number of times.\nThis implies that every symplectic domain for which a collar neighbourhood \nof its boundary is symplectomorphic to that of $B_0$ is obtained from $B_0$ by a finite sequence of symplectic blow-ups. \nConsequently, the rational blow-up of a Lagrangian projective plane in \n$B_3(\\mu_1,\\mu_2,\\mu_3)$ is $(B_4,\\wt{\\omega}_4)$ (as the rational blowing-up surgery is performed away from $\\del B_3$).\n\n\\subsubsection{Necessity.}\nLet us make the homology lattice comparison of $B_3$ and $B_4$.\nFor this purpose we use an embedding of $B_0$ in $\\cp^2$ for which \n$\\cp^2 = B_0 \\sqcup H$, where $H \\subset \\cp^2$ is a projective line.\nWe use the notation $X_3,\\wt{X}_4$ for the $\\cp^2$ blown-up $3$ or resp.\\ $4$ times. \nWe obtain the lattices\n\\[\n\\Lambda_3 := \\sfh_2(X_3,\\zz) \n= \\zz \\lan\\; [H],\\, [E_1],\\,[E_2],\\,[E_3] \\;\\ran,\n\\]\n\\[\n\\Lambda_4 := \\sfh_2(\\wt{X}_4,\\zz) \n= \\zz \\lan\\; [H],\\, [\\wt{E}_0],\\, [\\wt{E}_1],\\,[\\wt{E}_2],\\,\n[\\wt{E}_3]\\; \\ran,\n\\]\nwhere $[H]$ denotes the class of the line in $\\cp^2$. \nIn this notation we have \n\\[\n[L]_{\\zz_2} \\equiv [E_1] +[E_2]+[E_3] \\mod 2\n\\]\nin $X_3$, and \n\\begin{equation} \\label{Sigma=E}\n[\\Sigma]= [\\wt{E}_0]-(\\, [\\wt{E}_1]+[\\wt{E}_2]+ [\\wt{E}_3]\\,)\n\\end{equation}\nin $\\wt{X}_4$. \nThe latter follows from the equations\n\\[\n[\\Sigma] \\cdot [H] = 0,\\quad [\\Sigma]^2 = -4,\\quad c_1(\\wt{X}_4) \\cdot [\\Sigma] = -2.\n\\]\nIndeed, the orthogonality condition $[\\Sigma] \\cdot [H] = 0$ implies that \n$[\\Sigma]$ can be written in the form\n\\[\n[\\Sigma] = l_0[\\wt{E}_0] + l_1[\\wt{E}_1] + l_2[\\wt{E}_2] + l_3[\\wt{E}_3].\n\\]\nSince $[\\Sigma]^2 = -4$, it follows that $l_i = \\pm 1$. \nBut only one of $l_i$ can be positive, or $c_1(\\wt{X}_4) \\cdot [\\Sigma]$ would not \nbe equal to $(-2)$. 
\nWe conclude that $[\\Sigma]$ is unique up to permutation of $[\\wt{E}_i],\\ i = 0,\\ldots,3$.\n\nFurther, the Chern classes of $X_3$ and $\\wt{X}_4$ are\n\\[\nc_1(X_3) = 3[H] - (\\, [E_1]+[E_2]+ [E_3]\\,),\n\\qquad\nc_1(\\wt{X}_4) = 3[H] - (\\,[\\wt{E}_0]+ [\\wt{E}_1]+[\\wt{E}_2]+ [\\wt{E}_3]\\,).\n\\]\nNext, recall that we have the sublattice $\\Lambda'_3$ consisting of vectors $\\lambda\\in \\Lambda_3$ such that $\\lambda\\cdot [L]\\equiv 0 \\mod 2$. The sublattice $\\Lambda'_3$ is generated by $[H]$ and the classes $[E_i]-[E_j]$, $2[E_i]$. The latter are primitive in $\\Lambda'_3$, orthogonal to $[H]$, and characterised by the properties \n\\[\n([E_i]-[E_j])^2=-2,\\quad c_1\\cdot ([E_i]-[E_j])=0, \\quad\n(2[E_i])^2=-4,\\quad c_1\\cdot (2[E_i])=2.\n\\]\nLet us consider the sublattice $\\wt{\\Lambda}'_4 \\subset \\Lambda_4$ consisting of the vectors $\\lambda\\in \\Lambda_4$ orthogonal to $[\\Sigma]$ and find the classes with the properties above in $\\wt{\\Lambda}'_4$. The orthogonality to $[H]$ means that we seek vectors of the form \n\\begin{equation} \\label{la=sumEi}\n \\lambda= k_0[\\wt{E}_0] + k_1[\\wt{E}_1] +k_2[\\wt{E}_2] + k_3[\\wt{E}_3]. \n\\end{equation}\n\nThe condition $\\lambda^2=-2$ means that two of the coefficients $k_0,\\ldots,k_3$ are $0$ and two of them $\\pm1$. The orthogonality to $[\\Sigma]$ leaves two possibilities:\neither $[\\wt{E}_i]-[\\wt{E}_j]$ with $i\\ne j \\in \\{1,2,3\\}$ or $\\pm(\\,[\\wt{E}_0] + [\\wt{E}_i]\\,)$ with $i=1,2,3$. \nThe orthogonality to $c_1$ excludes the latter possibility. The classes with $\\lambda^2=-4$ are either $2[\\wt{E}_0],2[\\wt{E}_i]$, or with coefficients $k_i=\\pm1$ in \\eqref{la=sumEi}. The orthogonality to $[\\Sigma]$ excludes the double classes $2[\\wt{E}_0],2[\\wt{E}_i]$ and says that two of the coefficients $k_0,\\ldots,k_3$ are the same as for $[\\Sigma]$ and two of the opposite sign. 
Finally, the condition $c_1\\cdot \\lambda=2$ says that one of the coefficients $k_0,\\ldots,k_3$ is $-1$ and the three others are $+1$. So our classes $\\lambda$ with $\\lambda^2=-4$ are\n\\[\n[\\wt{E}_0]+ [\\wt{E}_1]+ [\\wt{E}_2] -[\\wt{E}_3],\\quad\n[\\wt{E}_0]+ [\\wt{E}_1]- [\\wt{E}_2] +[\\wt{E}_3],\\quad\n[\\wt{E}_0] -[\\wt{E}_1]+ [\\wt{E}_2] +[\\wt{E}_3].\n\\]\nNotice that the symmetric group $\\sym_3$ permuting the classes in the sets $\\{ [E_1], [E_2], [E_3] \\}$ and $\\{ [\\wt{E}_1], [\\wt{E}_2], [\\wt{E}_3] \\}$\nacts in a compatible way on the generating classes of the lattices $\\Lambda'_3$ and $\\wt{\\Lambda}'_4$. \n\nThe last property we need is \n\\[\n\\begin{split}\n& [E_i] -[E_j] = \\tfrac{1}{2}\\,(\\,2[E_i] -2[E_j]\\,) \\qquad\n\\text{in } \\Lambda'_3,\n\\\\\n& [\\wt{E}_1] -[\\wt{E}_2] = \\tfrac{1}{2}\\,\\big(\\,\n([\\wt{E}_0]+ [\\wt{E}_1]- [\\wt{E}_2] +[\\wt{E}_3])\n- ([\\wt{E}_0]- [\\wt{E}_1] +[\\wt{E}_2] +[\\wt{E}_3])\\,\\big) \\qquad\n\\text{in } \\wt{\\Lambda}'_4\n\\end{split}\n\\]\nand similarly for $[\\wt{E}_1] -[\\wt{E}_3]$, $[\\wt{E}_2] -[\\wt{E}_3]$. \n\n\nSumming up we conclude:\n\\begin{lem} There is a unique (up to $\\sym_3$) lattice isomorphism $\\Lambda'_3 \\to \\wt{\\Lambda}'_4$ which sends\n\\begin{equation} \\label{E-corresp}\n \\begin{split}\n& 2[E_1] \\mapsto [\\wt{E}_0] -[\\wt{E}_1] +[\\wt{E}_2] +[\\wt{E}_3], \\quad\n 2[E_2] \\mapsto [\\wt{E}_0] +[\\wt{E}_1] -[\\wt{E}_2] +[\\wt{E}_3], \\\\[2pt]\n& 2[E_3] \\mapsto [\\wt{E}_0] +[\\wt{E}_1] +[\\wt{E}_2] -[\\wt{E}_3]. \n\\end{split}\n\\end{equation}\nOn the other hand, any lattice isomorphism which preserves $c_1$ satisfies \\eqref{E-corresp}.\n\\end{lem}\n\nNow we can give a proof of the triangle inequality. Let $(B_3, \\omega_3)$ be a symplectic ball blown-up triply, and $E_1,E_2,E_3$ the corresponding exceptional spheres. 
Denote by $\\mu_i:= \\int_{E_i} \\omega_3$ the periods of the symplectic form, so that $(B_3, \\omega_3)$ is $B_3(\\mu_1,\\mu_2,\\mu_3)$.\n\nAssume that there exists a Lagrangian $L \\cong \\mathbb{RP}^2$ in $(B_3,\\omega_3)$. Let $(B_4,\\wt{\\omega}_4)$ be the symplectic rational blow-up of $L$ of size $\\varepsilon>0$. Introduce the homology classes in $\\sfh_2(B_4,\\zz)$ according to the formulas \\eqref{Sigma=E} and \\eqref{E-corresp}. Set $\\wt{\\mu}_i := \\int_{\\wt{E}_i} \\wt\\omega_4$, $i = 0,\\ldots,3$.\nWe have the relations:\n\\[\n\\begin{split}\n& \\wt{\\mu}_0 - (\\wt{\\mu}_1 + \\wt{\\mu}_2 +\\wt{\\mu}_3) = 4\\varepsilon \\\\\n& \\wt{\\mu}_0 - \\wt{\\mu}_1 + \\wt{\\mu}_2 +\\wt{\\mu}_3 = 2\\mu_1 \\qquad\n\\wt{\\mu}_0 +\\wt{\\mu}_1 - \\wt{\\mu}_2 +\\wt{\\mu}_3 = 2\\mu_2 \\qquad\n\\wt{\\mu}_0 +\\wt{\\mu}_1 + \\wt{\\mu}_2 -\\wt{\\mu}_3 = 2\\mu_3 \n\\end{split}\n\\]\nor, solving for the $\\wt{\\mu}_i$,\n\\begin{equation} \\label{mu2mu'}\n\\begin{split}\n& \\wt{\\mu}_0 = \\frac{\\mu_1 + \\mu_2 +\\mu_3}{2} + \\varepsilon \n\\\\[2pt]\n& \\wt{\\mu}_1 = \\frac{\\mu_2 + \\mu_3 -\\mu_1}{2} - \\varepsilon \n\\qquad\n\\wt{\\mu}_2 = \\frac{\\mu_1 + \\mu_3 -\\mu_2}{2} -\\varepsilon \n\\qquad\n\\wt{\\mu}_3 = \\frac{\\mu_1 + \\mu_2 -\\mu_3}{2} -\\varepsilon.\n\\end{split}\n\\end{equation}\nSince each $\\wt{\\mu}_i$ must be positive, the latter formulas not only demonstrate the symplectic triangle inequality, but also give an upper bound on the maximal possible size of the rational symplectic blow-up.\n\n\n\\subsubsection{Sufficiency.}\nWe let $\\omega_3$ denote the symplectic form on $B_3(\\mu_1,\\mu_2,\\mu_3)$. \nLet us extend $\\omega_3$ to a symplectic form on $X_3$, the three-fold blow-up of $\\cp^2$. 
\nWe use the same notation $\\omega_3$ for the extension; we get \n\\[\\textstyle\n[\\omega_3] = [H] - \\sum_i \\mu_i [E_i].\n\\]\n\nWe assume $\\omega_3$ to satisfy:\n\\begin{enumerate}\n\\item \\label{NM1}\n$[\\omega_3]^2 = 1 - \\sum_i \\mu_i^2 >0$ (``positive volume'');\n\\item \\label{NM2}\n$\\mu_i>0$ and $\\mu_i+\\mu_j<1$ (``effectivity of exceptional curves''); \n\\item \\label{NM.RP2}\n$\\mu_i+\\mu_j> \\mu_k$, the latter being the symplectic triangle inequality. \n\\end{enumerate}\nLet us show that under the additional condition \\eqref{NM.RP2} there exists a Lagrangian $L\\cong \\mathbb{RP}^2$ in $(X_3,\\omega_3)$ disjoint from the line $H$. For this purpose we fix some sufficiently small $\\varepsilon>0$ and define new periods $\\wt{\\mu}_0,\\ldots,\\wt{\\mu}_3$ by \\eqref{mu2mu'} so that they are positive and satisfy\n\\begin{equation}\\label{mu'ineq}\n1 - \\sum_{i = 0}^{3} \\wt\\mu_i^2 > 0,\n\\quad\n\\wt\\mu_0 - (\\wt\\mu_1 + \\wt\\mu_2 + \\wt\\mu_3) > 0,\n\\quad\n1 - \\wt\\mu_0 - \\wt\\mu_i > 0,\n\\ i = 1,2,3.\n\\end{equation}\nNow, consider a line $H$ in $\\cp^2$ and a point $\\wt{x}_0 \\in \\cp^2$ that does not lie on $H$. \nLet $\\wt{X}_1$ be the blow-up of $\\cp^2$ at $\\wt{x}_0$, and let $\\wt{E}_0$ be the arising exceptional curve. \nAfter that, take three distinct points $\\wt{x}_1,\\wt{x}_2,\\wt{x}_3$ on $\\wt{E}_0$ and blow up $\\wt{X}_1$ at them. \nDenote by $\\wt{X}_4$ the resulting complex surface and by $\\wt{E}_i, i =1,2,3$, the corresponding exceptional complex curves. The proper preimage of $\\wt{E}_0$ in $\\wt{X}_4$, which is disjoint from $H$, is a \nrational $(-4)$-curve $\\Sigma$ in the homology class $[\\Sigma]=[\\wt{E}_0] - (\\, [\\wt{E}_1] + [\\wt{E}_2] + [\\wt{E}_3]\\,)$. \n\nAt this point we use the Nakai-Moishezon criterion and conclude that there exists a Kähler form $\\wt{\\omega}_4$ with the periods $\\int_H \\wt{\\omega}_4=1$ and $\\wt{\\mu}_i=\\int_{\\wt{E}_i} \\wt{\\omega}_4$. 
Since $\\Sigma$ is an $\\wt{\\omega}_4$-symplectic sphere of area $4\\epsi$, we may perform the rational blow-down of $\\Sigma$ from $\\wt{X}_4$ and obtain the manifold $X_3$ with the desired symplectic form $\\omega_3$ on $X_3$ (with the prescribed periods and with an $\\omega_3$-Lagrangian $L\\cong \\mathbb{RP}^2$ in $X_3$). \n\nWe will now give more details about applying the \nNakai-Moishezon criterion in this particular situation.\nWe let $\\calk(\\wt{X}_4)$ denote \nthe Kähler cone of $\\wt{X}_4$.\n\\begin{lem} \nThe cone $\\calk(\\wt{X}_4)$ consists \nof those classes \n\\begin{equation} \\label{w.in.X4} \\textstyle\n[\\wt\\omega_4]= \\lambda[H] - \\sum_{i=0}^3 \\wt\\mu_i [\\wt{E}_i] \\in \\sfh^2(\\wt{X}_4;\\rr)\n\\end{equation}\nwhich satisfy\n\\begin{enumerate}[label=$\\wt{(\\arabic*)}$]\n\\item $[\\wt\\omega_4]^2= \\lambda^2 - \\sum_{i=0}^3 \\wt\\mu_i^2>0$;\n\\item \\label{some.E} \n$\\wt\\mu_i>0$ for $i = 0,\\ldots,3$ and $\\wt\\mu_0 + \\wt\\mu_i < \\lambda$ for $i=1,2,3$;\n\\item $\\wt\\mu_0 - ( \\wt\\mu_1 + \\wt\\mu_2 + \\wt\\mu_3)>0$. \n\\end{enumerate}\n\\end{lem}\n\\begin{proof}\nLet us first introduce some more notation. \nThe pencil of lines passing through the point $\\wt{x}_0 \\in \\cp^2$ yields the holomorphic ruling \n$\\pr_1 \\colon \\wt{X}_1 \\to H$ for which \n$\\wt{E}_0$ is a section of self-intersection number $(-1)$.\nThe fibers of $\\pr_1$ are in the class $[F] := [H] - [\\wt{E}_0]$.\n\nWe let $\\pr_4 \\colon \\wt{X}_4 \\to H$ denote \nthe composition of the contractions of $\\wt{E}_i, i = 1,2,3$, from $\\wt{X}_4$ with the ruling $\\pr_1$.\nWhile the generic fiber of $\\pr_4 \\colon \\wt{X}_4 \\to H$ is a smooth holomorphic sphere in the class $[F]$, three fibers of $\\pr_4$ are singular; each of them consists of two holomorphic exceptional curves, $\\wt{E}_i, \\wt{E}^{\\prime}_{i}, i = 1,2,3$. \nThe homology class of \n$\\wt{E}^{\\prime}_{i}$ is \n$[\\wt{E}_{i}'] = [F] - [\\wt{E}_i] = \n[H] - [\\wt{E}_0] - [\\wt{E}_{i}], i = 1,2,3$. 
\n\\smallskip%\n\nGoing back to the proof of the lemma, note that it is sufficient to treat \nthe rational classes in $\\sfh^2(\\wt{X}_4;\\qq)$, as $\\calk(\\wt{X}_4)$ is an open convex cone in which rational points are dense. Recall that a class $\\xi \\in \\sfh^2(\\wt{X}_4;\\qq)$ has a Kähler representative if and \nonly if $\\xi^2 > 0$ and $\\int_{C} \\xi > 0$ for each (irreducible) holomorphic curve $C$. (Note that $\\sfh^{1,1}(\\wt{X}_4) = \\sfh^2(\\wt{X}_4;\\cc)$, so that every integral class is the Chern class for some holomorphic line bundle.) Let us show that the classes $[\\wt\\omega_4]$ provided by the lemma are indeed positive on holomorphic curves. Consider the following cases:\n\\begin{itemize}\n\\item If $C$ is $\\Sigma$, then the positivity \nfollows from ($\\wt{3}$).\n \n\\item If $[C] \\cdot [F] = 0$, then $C$ is either a regular or a singular fiber of $\\pr_4$, in which case the positivity follows from ($\\wt{2}$).\n \n\\item In the last and most general case, we have $[C] \\cdot [F] > 0$ and $C \\neq \\Sigma$.\n\\end{itemize}\nSet $d:= [C] \\cdot [F]$ and $n'_i:= [C] \\cdot [\\wt{E}^{'}_{i}]$. \nThen $0 \\leq n'_i \\leq d$, as $[F] = [\\wt{E}_i] + [\\wt{E}'_{i}]$. \nThus, we have:\n\\begin{equation}\\label{Sigma1} \\textstyle\n[C] = d[\\Sigma] + m[F] - \\sum _{i=1}^3 n'_i [\\wt{E}^{'}_{i}].\n\\end{equation}\nSince $[C] \\cdot [\\Sigma] > 0$, \nit follows that \n$m - 4\\,d > 0$. Therefore, one can rewrite \\eqref{Sigma1} as follows:\n\\begin{equation*} \\textstyle\n[C] = d[\\Sigma] + (m-3d) [F] + \\sum _{i=1}^3 (d-n'_i)[F] \n+\\sum _{i=1}^3 n'_i\\,([F] -[\\wt{E}^{'}_{i}]).\n\\end{equation*}\nClearly, $[\\wt\\omega_4]$ is non-negative on each summand and positive on $d[\\Sigma]$. 
\\qed\n\\end{proof}\n\n\n\n\\input{bib-sympl-triangle.tex}\n\n\n\\end{document}\n\n\n\n\\section{Introduction}\n\\subsection{Holomorphic anomaly equations}\nGromov--Witten invariants of a non-singular compact Calabi--Yau threefold $X$ are defined by the integrals\n\\[ \\mathsf{N}_{g, \\beta}\n= \\int_{[ {\\overline M}_{g}(X, \\beta) ]^{\\text{vir}}} \n1\n\\]\nwhere ${\\overline M}_{g}(X, \\beta)$ is the\nmoduli space of stable maps from connected genus~$g$ curves to $X$ of degree $\\beta \\in H_2(X,{\\mathbb{Z}})$,\nand $[ \\, - \\, ]^{\\text{vir}}$ is its virtual fundamental class.\nMirror symmetry \\cite{BCOV, Hos, ASYZ} makes the following predictions about the genus~$g$ potentials\n\\[ \\mathsf{F}_g(q) = \\sum_{\\beta} \\mathsf{N}_{g, \\beta} q^{\\beta}. \\]\n\n\\begin{enumerate}\n \\item[(i)] There exists a finitely generated subring of quasi-modular objects\n \\[ {\\mathcal R} \\subset {\\mathbb{Q}}[[ q^{\\beta} ]] \\]\n (depending on $X$) which contains all $\\mathsf{F}_g(q)$.\n \\item[(ii)] The series $\\mathsf{F}_g(q)$ satisfy \\emph{holomorphic anomaly equations},\n i.e. 
recursive formulas for the derivative of the modular completion of $\\mathsf{F}_g$ with respect to the non-holomorphic variables.\\footnote{\n In many cases ${\\mathcal R}$ can be described explicitly by generators and relations, and (ii) is equivalent to formulas for the formal derivative of $\\mathsf{F}_g$ with respect to distinguished generators of the ring.}\n\\end{enumerate}\nHere, the precise modular interpretation of $\\mathsf{F}_g(q)$ is part of the problem and not well understood in general.\nMathematically, the predictions (i, ii) are not yet known for any (compact) Calabi--Yau threefold.\\footnote{\nThe (non-compact) local $\\mathbb{P}^2$ case was recently established in \\cite{LP}.}\n\n\\subsection{The Schoen Calabi--Yau threefold} \\label{intro:schoen_calabiyau}\nA \\emph{rational elliptic surface} $R \\to \\mathbb{P}^1$ is the successive blowup of $\\mathbb{P}^2$ along\nthe base points of a pencil of cubics containing a smooth member.\nIts second cohomology group admits the splitting\n\\[ H^2(R,{\\mathbb{Z}}) = \\mathrm{Span}_{{\\mathbb{Z}}}(B,F) \\overset{\\perp}{\\oplus} E_8(-1) \\]\nwhere $B,F$ are the classes of a fixed section and a fiber respectively. Let also\n\\[ W = B + \\frac{1}{2} F . \\]\n\nLet $R_1, R_2$ be rational elliptic surfaces whose singular fibers lie over disjoint sets of points of $\\mathbb{P}^1$.\nThe \\emph{Schoen Calabi--Yau threefold} \\cite{Schoen} is the fiber product\n\\[ X = R_1 \\times_{\\mathbb{P}^1} R_2 \\,. \\]\nWe have the commutative diagram of fibrations\n\\begin{equation} \\label{dfsdfsaaad}\n\\begin{tikzcd}\n& X \\ar[swap]{dl}{\\pi_2} \\ar{dr}{\\pi_1} \\ar{dd}{\\pi} & \\\\\nR_1 \\ar[swap]{dr}{p_1} & & R_2 \\ar{dl}{p_2} \\\\\n& \\mathbb{P}^1 &\n\\end{tikzcd}\n\\end{equation}\nwhere $\\pi_i$ are the elliptic fibrations induced by $p_i : R_i \\to \\mathbb{P}^1$. 
Let\n\\[ W_i, F_i \\in H^2(R_i, {\\mathbb{Q}}), \\quad E^{(i)}_{8}(-1) \\subset H^2(R_i, {\\mathbb{Z}}) \\]\ndenote the classes $W,F$ and the $E_8$-lattice on $R_i$ respectively. We have\n\\[\nH^2(X,{\\mathbb{Q}}) = \\langle D \\rangle \\oplus\n\\Big( \\langle \\pi_1^{\\ast} W_2 \\rangle \\oplus \\pi_1^{\\ast} E_8^{(2)}(-1)_{{\\mathbb{Q}}} \\Big) \\oplus \\Big( \\langle \\pi_2^{\\ast} W_1 \\rangle \\oplus \\pi_2^{\\ast} E_8^{(1)}(-1)_{{\\mathbb{Q}}} \\Big)\n\\]\nwhere we let $\\langle \\cdot \\rangle$ denote the ${\\mathbb{Q}}$-linear span and $D$ is the class of a fiber of $\\pi$.\n\nFor all $(g,k) \\notin \\{ (0,0), (1,0) \\}$ define\\footnote{\nThe cases $(g,k) \\in \\{ (0,0), (1,0) \\}$ are excluded since\n$\\mathsf{N}_{g,0}$ is not defined for $g \\in \\{ 0, 1 \\}$.}\nthe \\emph{$\\pi$-relative Gromov--Witten potential}\n\\begin{equation} \\label{Defn_Fgk}\n\\mathsf{F}_{g,k}(\\mathbf{z}_1, \\mathbf{z}_2, q_1, q_2)\n= \\sum_{\\pi_{\\ast} \\beta = k [\\mathbb{P}^1]} \\mathsf{N}_{g, \\beta}\nq_1^{W_1 \\cdot \\beta} q_2^{W_2 \\cdot \\beta} e( \\mathbf{z}_1 \\cdot \\beta ) e( \\mathbf{z}_2 \\cdot \\beta )\n\\end{equation}\nwhere the sum is over all curve classes $\\beta \\in H_2(X,{\\mathbb{Z}})$ of degree $k$ over $\\mathbb{P}^1$,\nwe have suppressed pullbacks by $\\pi_i$,\nwe write $e(x) = \\exp(2 \\pi i x)$ for all $x \\in {\\mathbb{C}}$, and\n\\[ \\mathbf{z}_i \\in E_8^{(i)}(-1) \\otimes {\\mathbb{C}} \\]\nis the (formal) coordinate on the $E_8$ lattice of $R_i$.\n\nA (weak) $E_8$-Jacobi form is a holomorphic function of variables\n\\[ q = e^{2 \\pi i \\tau}, \\tau \\in {\\mathbb{H}} \\quad \\text{ and }\n\\quad \\mathbf{z} \\in E_8 \\otimes {\\mathbb{C}} \\]\nwhich is semi-invariant under the action of the Jacobi group,\ninvariant under the Weyl group of $E_8$\nand satisfies a growth condition at the cusp;\nwe refer to Section~\\ref{Section_Lattice_Jacobi_forms} for an introduction to Jacobi forms.\nThe ring of weak $E_8$-Jacobi forms $\\mathrm{Jac}_{E_8}$ 
carries a bigrading\nby weight $\\ell \\in {\\mathbb{Z}}$ and index $m \\in {\\mathbb{Z}}_{\\geq 0}$,\n\\[ \\mathrm{Jac}_{E_8} = \\bigoplus_{\\ell,m} \\mathrm{Jac}_{E_8, \\ell,m}. \\]\nRecall the second Eisenstein series\n\\[ C_2(q) = -\\frac{1}{24} + \\sum_{n \\geq 1} \\sum_{d|n} d q^n. \\]\nBy assigning $C_2$ index $0$ and weight $2$ we have the bigraded extension\n\\begin{equation} \\label{fdfsdfsd}\n\\widetilde{\\mathrm{Jac}}_{E_8} = \\mathrm{Jac}_{E_8}[C_2]\n= \\bigoplus_{\\ell, m} \\widetilde{\\mathrm{Jac}}_{E_8, \\ell, m}.\n\\end{equation}\nThe ring \\eqref{fdfsdfsd} in the variables $q = q_i$ and $\\mathbf{z}_i \\in E_8^{(i)}$ is denoted by\n$\\widetilde{\\mathrm{Jac}}^{(q_i, \\mathbf{z}_i)}_{E_8}$.\n\nRecall also the modular discriminant\n\\[ \\Delta(q) = q \\prod_{m \\geq 1} (1-q^m)^{24}. \\]\n\nWe prove the following basic quasi-modularity result.\n\\begin{thm} \\label{Thm1} \nEvery relative potential $\\mathsf{F}_{g,k}$ is a $E_8 \\times E_8$ bi-quasi-Jacobi form:\n\\[ \\mathsf{F}_{g,k}(\\mathbf{z}_1, \\mathbf{z}_2, q_1, q_2)\\ \\in\\ \n\\frac{1}{\\Delta(q_1)^{k\/2}} \\widetilde{\\mathrm{Jac}}_{E_8, \\ell,k}^{(q_1, \\mathbf{z}_1)}\n \\otimes \\frac{1}{\\Delta(q_2)^{k\/2}} \\widetilde{\\mathrm{Jac}}_{E_8, \\ell,k}^{(q_2, \\mathbf{z}_2)}\n\\]\nwhere $\\ell = 2g-2+6k$.\n\\end{thm}\n\\vspace{7pt}\n\nThe appearance of $E_8 \\times E_8$ bi-quasi-Jacobi forms\nis in perfect agreement with\npredictions made using mirror symmetry \\cite{Sakai, HSS, HST}.\n\nThe elements in $\\mathrm{Jac}_{E_8}$ are Jacobi forms and therefore modular objects.\nThe only source of non-modularity in $\\widetilde{\\mathrm{Jac}}_{E_8}$ and hence in $\\mathsf{F}_{g,k}$ arises from the\nstrictly quasi-modular series $C_2(q)$.\nWe state a holomorphic anomaly equation which determines the dependence on $C_2$ explicitly.\n\nIdentify the lattice $E_8^{(i)}$ with the pair $({\\mathbb{Z}}^8, Q_{E_8})$ where $Q_{E_8}$ is\nthe (positive definite) Cartan matrix of $E_8$, see 
Section~\\ref{Section_E8_lattice}.\nFor $j \\in \\{1,2\\}$ consider the differentiation operators with respect to $q_j$ and $\\mathbf{z}_j = (z_{j,1}, \\ldots, z_{j,8})$:\n\\[ D_{q_j} = \\frac{1}{2 \\pi i} \\frac{d}{d \\tau_j} = q_j \\frac{d}{dq_j}, \\quad D_{z_{j,\\ell}} = \\frac{1}{2 \\pi i} \\frac{d}{d z_{j,\\ell}}. \\]\n\n\\begin{thm} \\label{Thm2} Every $\\mathsf{F}_{g,k}$ satisfies the holomorphic anomaly equation\n\\begin{multline*}\n\\frac{d}{dC_2(q_2)} \\mathsf{F}_{g,k}\n=\n\\left( 2 k D_{q_1} - \\sum_{i,j=1}^{8} \\left( Q_{E_8}^{-1} \\right)_{ij} D_{\\mathbf{z}_{1,i}} D_{\\mathbf{z}_{1,j}} + 24k C_2(q_1) \\right) \\mathsf{F}_{g-1,k} \\\\\n+\n\\sum_{\\substack{g=g_1+g_2 \\\\ k = k_1 + k_2}}\n\\left(\n2 k_1 \\mathsf{F}_{g_1,k_1} \\cdot D_{q_1} \\mathsf{F}_{g_2,k_2} - \\sum_{i,j=1}^{8} \\left( Q_{E_8}^{-1} \\right)_{ij} D_{\\mathbf{z}_{1,i}}(\\mathsf{F}_{g_1,k_1}) \\cdot D_{\\mathbf{z}_{1,j}}(\\mathsf{F}_{g_2,k_2})\n\\right).\n\\end{multline*}\n\\end{thm}\n\\vspace{7pt}\n\nSince $X$ is symmetric in $R_1, R_2$ up to a deformation,\nthe potentials $\\mathsf{F}_{g,k}$ are symmetric under interchanging $(\\mathbf{z}_i,q_i)$:\n\\[ \\mathsf{F}_{g,k}(\\mathbf{z}_1, \\mathbf{z}_2, q_1, q_2) = \\mathsf{F}_{g,k}(\\mathbf{z}_2, \\mathbf{z}_1, q_2, q_1). \\]\nHence Theorem~\\ref{Thm2} determines also the dependence of $\\mathsf{F}_{g,k}$ on $C_2(q_1)$.\n\nTheorems~\\ref{Thm1} and~\\ref{Thm2} show quasi-modularity and the holomorphic anomaly equation for the Gromov--Witten potentials of $X$ \\emph{relative} to $\\mathbb{P}^1$. This provides a partial verification of the absolute case (i,ii).\nIt also leads to modular properties when the Gromov--Witten potentials\nare summed over the genus as follows.\nConsider the topological string partition function\n(i.e. 
the generating series of disconnected Gromov--Witten invariants)\nof the Schoen geometry\n\\[ \\mathsf{Z}(t, u, \\mathbf{z}_1, \\mathbf{z}_2, q_1, q_2)\n=\n\\exp\\left( \\sum_{g \\geq 0} \\sum_{\\beta > 0} \\mathsf{N}_{g,\\beta}\nu^{2g-2} t^{D \\cdot \\beta}\nq_1^{W_1 \\cdot \\beta} q_2^{W_2 \\cdot \\beta} e( \\mathbf{z}_1 \\cdot \\beta ) e( \\mathbf{z}_2 \\cdot \\beta ) \\right).\n\\]\nUnder a variable change, $\\mathsf{Z}$ is the generating series of Donaldson--Thomas \/ Pandharipande--Thomas invariants of the threefold $X$ \\cite{PaPix2}.\nFor any curve class $\\alpha \\in H_2(R_1, {\\mathbb{Z}})$\nof some degree $k$ over the base $\\mathbb{P}^1$\nconsider the coefficient\n\\[\n\\mathsf{Z}_{\\alpha}(u, \\mathbf{z}_2, q_2)\n=\n\\Big[ \\mathsf{Z}(t, u, \\mathbf{z}_1, \\mathbf{z}_2, q_1, q_2) \\Big]_{t^k q_1^{W_1 \\cdot \\alpha} e(\\mathbf{z}_1 \\cdot \\alpha)}.\n\\]\nWe write $(\\mathbf{z}, q)$ for $(\\mathbf{z}_2, q_2)$,\nand work under the variable change $u = 2 \\pi z$ and $q = e^{2 \\pi i \\tau}$.\nWe then have the following.\n\n\\begin{cor} \\label{Cor_Schoen}\nUnder the variable change $u = 2 \\pi z$ and $q = e^{2 \\pi i \\tau}$\nthe series $\\mathsf{Z}_{\\alpha}(z, \\mathbf{z}, \\tau)$ satisfies\nthe modular transformation law of Jacobi forms of weight $-6$ and index\n$( \\frac{1}{2} \\langle \\alpha - c_1(R_1), \\alpha \\rangle ) \\oplus \\frac{k}{2} Q_{E_8}$,\nthat is, for all $\\gamma = \\binom{a\\ b}{c\\ d} \\in \\mathrm{SL}_2({\\mathbb{Z}})$\n\\begin{multline*}\n\\mathsf{Z}_{\\alpha}\\left( \\frac{z}{c \\tau + d},\n\\frac{\\mathbf{z}}{c \\tau + d},\n\\frac{a \\tau + b}{c \\tau + d} \\right) \\\\\n=\n\\xi(\\gamma)^{k + 1} (c \\tau + d)^{-6}\ne\\left( \\frac{c}{2 (c \\tau + d)} \\Big[ k \\mathbf{z}^T Q_{E_8} \\mathbf{z} +\nz^2 \\langle \\alpha - c_1(R_1), \\alpha \\rangle \\Big] \\right)\n\\mathsf{Z}_{\\alpha}(z,\\mathbf{z}, \\tau)\n\\end{multline*}\nwhere $\\xi(\\gamma) \\in \\{ \\pm 1 \\}$ \nis determined by\n$\\Delta^{\\frac{1}{2}}(\\gamma \\tau) = 
\\xi(\\gamma) (c \\tau + d)^{6} \\Delta^{\\frac{1}{2}}(\\tau)$.\n\\end{cor}\n\nBy Theorem~\\ref{Thm1} the series $\\mathsf{Z}_{\\alpha}$ also satisfies\nthe elliptic transformation law of Jacobi forms\nin the variable $\\mathbf{z}$.\nThe elliptic transformation law in the genus variable $u$\nis conjectured by Huang--Katz--Klemm \\cite{HKK}\nand corresponds to the expected symmetry of Donaldson--Thomas invariants\nunder the Fourier--Mukai transforms by the Poincar\\'e sheaf of\n$\\pi_2$, see \\cite{OS1}. Hence conjecturally we find that\n$\\mathsf{Z}_{\\alpha}$ is a meromorphic Jacobi form\n(of weight and index as in Corollary~\\ref{Cor_Schoen}).\n\nWe end our discussion with two concrete examples.\nExpand the partition function $\\mathsf{Z}$\nby the degree over the base $\\mathbb{P}^1$:\n\\[\n\\mathsf{Z}(t, u, \\mathbf{z}_1, \\mathbf{z}_2, q_1, q_2)\n=\n\\sum_{k = 0}^{\\infty} \\mathsf{Z}_k(u, \\mathbf{z}_1, \\mathbf{z}_2, q_1, q_2) t^k.\n\\]\nBy a basic degeneration argument in degree $0$ we have \n\\[ \\mathsf{Z}_0 = \\frac{1}{\\Delta(q_1)^{\\frac{1}{2}} \\Delta(q_2)^{\\frac{1}{2}}}. 
\\]\nIn degree~$1$ the Igusa cusp form conjecture \\cite[Thm.1]{HAE}\nand an analysis of the sections of $\\pi : X \\to \\mathbb{P}^1$ yield\n\\[ \\mathsf{Z}_1 = \\frac{\\Theta_{E_8}(\\mathbf{z}_1, q_1) \\Theta_{E_8}(\\mathbf{z}_2, q_2)}{\\chi_{10}(e^{iu},q_1, q_2)} \\]\nwhere $\\chi_{10}$ is the Igusa cusp form, a Siegel modular form, defined by\n\\[ \\chi_{10}(p,q_1, q_2) = p q_1 q_2 \\prod_{(k,d_1, d_2)>0} (1-p^k q_1^{d_1} q_2^{d_2})^{c(4 d_1 d_2-k^2)} \\]\n(with $c(n)$ being coefficients of a certain $\\Gamma_0(4)$-modular form, see\n\\cite[Sec.0.2]{HAE}), and\n\\[ \\Theta_{E_8}(\\mathbf{z}, \\tau) = \n\\sum_{\\gamma \\in {\\mathbb{Z}}^{8}} q^{\\frac{1}{2} \\gamma^T Q_{E_8} \\gamma} e\\left( \\mathbf{z}^T Q_{E_8} \\gamma \\right)\n\\]\nis the Riemann theta function of the $E_8$-lattice.\nThe general relationship of $\\mathsf{Z}_k$\nto Siegel modular forms for $k > 1$ remains to be determined.\n\n\\subsection{Beyond Calabi--Yau threefolds and the proof}\nRecently it became clear\nthat we should expect properties (i, ii) not only for Calabi--Yau threefolds\nbut also for varieties $X$ (of arbitrary dimension) which are Calabi--Yau relative to a base $B$,\ni.e. 
those which admit a fibration\n\\[ \\pi : X \\to B \\]\nwhose generic fiber has trivial canonical class.\nThe potential $\\mathsf{F}_g(q)$ is replaced here by a $\\pi$-relative Gromov--Witten potential\nwhich takes values in cycles on ${\\overline M}_{g,n}(B,\\mathsf{k})$, the moduli space of stable maps to the base.\nIn this paper we conjecture and develop\nsuch a theory for \\emph{elliptic fibrations with section}.\nOur main theoretical result is a conjectural link\nbetween the Gromov--Witten theory of elliptic fibrations\nand the theory of lattice quasi-Jacobi forms.\nThis framework allows us to conjecture a holomorphic anomaly equation.\\footnote{See Section~\\ref{Section_Elliptic_fibrations_and_conjectures} for details\non the conjectures.}\n\n\nThe elliptic curve (or more generally, trivial elliptic fibrations)\nis the simplest case of our conjecture\nand was proven in \\cite{HAE}.\nIn this paper we prove the following new cases (see Section~\\ref{Subsection_RES_Statement_of_results}):\n\\begin{enumerate}\n \\item[(a)] \n The $\\mathbb{P}^1$-relative Gromov--Witten potentials of the rational elliptic surface are $E_8$-quasi-Jacobi forms\n numerically\\footnote{i.e. 
after specialization to ${\\mathbb{Q}}$-valued Gromov--Witten invariants}.\n \\item[(b)] The holomorphic anomaly equation holds for the rational elliptic surface numerically.\n\\end{enumerate}\nIn particular, (a) solves the complete descendent Gromov--Witten theory of\nthe rational elliptic surface in terms of $E_8$-quasi-Jacobi forms.\nWe also show:\n\\begin{enumerate}\n \\item[(c)] The quasi-Jacobi form property and the holomorphic anomaly equation\n are compatible with the degeneration formula (Section~\\ref{Subsection_Compa_with_deg_formula}).\n\\end{enumerate}\n\nThese results directly lead to\na proof of Theorem~\\ref{Thm1} and~\\ref{Thm2} as follows.\nThe Schoen Calabi--Yau $X$ admits a degeneration\n\\[ X \\rightsquigarrow (R_1 \\times E_2) \\cup_{E_1 \\times E_2} (E_1 \\times R_2), \\]\nwhere $E_i \\subset R_i$ are smooth elliptic fibers.\nBy the degeneration formula \\cite{Junli2}\nwe are reduced to studying the case $R_i \\times E_j$.\nBy the product formula \\cite{LQ}\nthe claim then follows from the holomorphic anomaly equation for the\nrational elliptic surface and the elliptic curve \\cite{HAE}.\n\nFor completeness we also prove the following case:\n\\begin{enumerate}\n\\item[(d)] The holomorphic anomaly equation holds for the reduced\n Gromov--Witten theory of the\n abelian surface in primitive classes numerically.\n\\end{enumerate}\nAn overview of the state of the art on holomorphic anomaly equations and the results of the paper is given in Table \\ref{table:1}.\n\n\n\\begin{table}[h!]\n\\centering\n\\begin{tabular}{|p{0.4cm}| p{2.7cm} | p{3.0cm} | c | p{3.0cm} |}\n \\hline\n dim & Geometry & Modularity & HAE & Comments\\\\ [0.5ex] \n \\hline\\hline\n \\multirow{2}{1cm}{$1$} & Elliptic curves & $\\mathrm{SL}_2({\\mathbb{Z}})$-quasimod. & Yes & cycle-valued \\cite{HAE} \\\\\n \\cline{2-5}\n & Elliptic orbifold $\\mathbb{P}^1$s & $\\Gamma(n)$-quasimod. 
& Yes & cycle-valued \\cite{MRS} (except case $(2^4)$)\\\\\n \\hline\n \\multirow{3}{1cm}{$2$} & K3 surfaces & $\\mathrm{SL}_2({\\mathbb{Z}})$-quasimod. & Yes & numerically, primitive only \\cite{MPT, HAE} \\\\ \\cline{2-5}\n & Abelian surfaces & $\\mathrm{SL}_2({\\mathbb{Z}})$-quasimod. & {\\bf Yes} & numerically, primitive only \\cite{BOPY} \\\\ \\cline{2-5}\n & Rational elliptic surface & $\\mathbf{E_8}${\\bf -quasi-Jacobi forms} & {\\bf Yes} & numerically, relative $\\mathbb{P}^1$ \\\\\n \\hline\n \\multirow{3}{1cm}{$3$} & Local $\\mathbb{P}^2$ & Explicit generators & Yes & cycle-valued \\cite{LP} \\\\ \\cline{2-5}\n & Formal Quintic & Explicit generators & Yes & cycle-valued \\cite{LP} \\\\ \\cline{2-5}\n & Schoen CY3 & $\\mathbf{E_8 \\times E_8}$ {\\bf bi-quasi-Jacobi forms} & \\textbf{Yes} & numerically, relative $\\mathbb{P}^1$\\\\\n \\hline\n\\end{tabular}\n\\caption{List of geometries for which modularity and holomorphic anomaly equations (HAE) are known. The {\\bf bold} entries are proven in this paper.\nCycle-valued = as Gromov--Witten classes on ${\\overline M}_{g,n}$; numerically = as numerical Gromov--Witten invariants; primitive = for primitive curve classes only;\nrelative $B$ = relative to the base $B$ of a Calabi--Yau fibration.}\n\\label{table:1}\n\\end{table}\n\n\n\\subsection{Overview of the paper}\nIn Section~\\ref{Section_Lattice_Jacobi_forms} we review the theory of lattice quasi-Jacobi forms. 
We introduce the derivations induced by the non-holomorphic completions, prove some structure results, and discuss examples.\nIn Section~\\ref{Section_Elliptic_fibrations_and_conjectures}\nwe present the main conjectures of the paper.\nWe conjecture that the $\\pi$-relative Gromov--Witten theory of\nan elliptic fibration is expressed by quasi-Jacobi forms\nand satisfies a holomorphic anomaly equation with respect to the modular parameter.\nIn Section~\\ref{Section_Consequences_of_the_Conjecture}\nwe discuss implications of the conjectures of Section~\\ref{Section_Elliptic_fibrations_and_conjectures}.\nIn particular, we deduce the weight of the quasi-Jacobi form,\npresent a holomorphic anomaly equation with respect to the elliptic parameter,\nand prove that under good conditions the Gromov--Witten potentials satisfy the elliptic transformation law of Jacobi forms.\nThe relationship to higher level quasi-modular forms is discussed.\nIn Section~\\ref{Section_relative_geomtries} we extend the conjectures of Section~\\ref{Section_Elliptic_fibrations_and_conjectures}\nto the Gromov--Witten theory of $X$ relative to a divisor $D$, when both admit compatible elliptic fibrations.\nWe show that the conjectural holomorphic anomaly equation is compatible with the degeneration formula.\nIn Section~\\ref{Section_RationalEllipticSurface} we study\nthe rational elliptic surface.\nWe show that the conjecture holds in all degrees and genera after specializing to numerical Gromov--Witten invariants;\nin particular we show that the Gromov--Witten potentials are $E_8$ quasi-Jacobi forms (Section~\\ref{Subsection_RES_Statement_of_results}).\nThe idea of the proof is to adapt a calculation scheme of Maulik--Pandharipande--Thomas \\cite{MPT}\nand show every step preserves the conjectured properties.\nIn Section~\\ref{Section_Schoen_variety} we prove Theorems~\\ref{Thm1} and~\\ref{Thm2}\nand Corollary~\\ref{Cor_Schoen}.\nIn Section~\\ref{Section_Abelian_surfaces} we numerically prove a 
holomorphic anomaly equation\nfor the reduced Gromov--Witten theory of abelian surfaces in primitive classes.\n\nIn Appendix~\\ref{Section_CohFTs} we introduce weak $B$-valued field theories\nand define a matrix action on the space of these theories.\nThis generalizes the Givental $R$-matrix action on cohomological field theories.\nWe express the conjectural holomorphic anomaly equation\nas a matrix action\nand discuss the compatibility with the Jacobi Lie algebra.\nIn Appendix~\\ref{Section_K3fibrations} we discuss\nrelative holomorphic anomaly equations for K3 fibrations\nin an example.\n\n\n\\subsection{Conventions} \\label{Subsection_Conventions}\nWe always work with integral\ncohomology modulo torsion,\nin particular $H^{\\ast}(X,{\\mathbb{Z}})$ will stand for singular cohomology of $X$ modulo torsion.\nOn smooth connected projective varieties\nwe identify cohomology with homology classes via Poincar\\'e duality.\nA curve class is the homology class of a (possibly empty) algebraic curve.\nGiven $x \\in {\\mathbb{C}}$ we write $e(x) = e^{2 \\pi i x}$.\nResults conditional on conjectures are denoted by Lemma*, Proposition*, etc.\n\n\\subsection{Acknowledgements}\nWe would like to thank Hyenho Lho and Rahul Pandharipande for conversations\non holomorphic anomaly equations, Jim Bryan for discussions on the Schoen Calabi--Yau,\nDavesh Maulik for sharing his insights,\nand Martin Raum for comments on Jacobi forms.\nThe results of the paper were first presented during a visit of the first author to\nETH Z\\\"urich in June 2017;\nwe thank the Forschungsinstitut f\\\"ur Mathematik for support.\nThe second author was supported by a fellowship from the Clay Mathematics Institute.\n\n\n\\section{Lattice Jacobi forms} \\label{Section_Lattice_Jacobi_forms}\n\\subsection{Overview}\nIn Section~\\ref{Subsection_Modular_forms} we briefly recall quasi-modular forms following Kaneko--Zagier \\cite{KZ} and\nBloch--Okounkov \\cite{BO}.\nSubsequently we give a modest introduction to 
lattice quasi-Jacobi forms.\nLattice Jacobi forms were defined in \\cite{Zie}\nand an introduction can be found in \\cite{Sko}.\nA definition of quasi-Jacobi forms of rank~$1$ appeared in \\cite{Lib},\nand one for higher rank can be found in \\cite{KM}.\n\n\n\\subsection{Modular forms} \\label{Subsection_Modular_forms}\n\\subsubsection{Definition}\nLet ${\\mathbb{H}} = \\{ \\tau \\in {\\mathbb{C}} | \\mathrm{Im}(\\tau) > 0 \\}$ be the upper half plane and set $q = e^{2 \\pi i \\tau}$.\nA \\emph{modular form} of weight $k$ is a holomorphic function $f(\\tau)$ on ${\\mathbb{H}}$ satisfying\n\\begin{equation} f\\left( \\frac{a \\tau + b}{c \\tau + d} \\right) = (c \\tau + d)^k f(\\tau) \\label{TRPROP} \\end{equation}\nfor all $\\binom{a\\ b}{c\\ d} \\in \\mathrm{SL}_2({\\mathbb{Z}})$ and\nadmitting a Fourier expansion in $|q|<1$ of the form\n\\begin{equation} f(\\tau) = \\sum_{n = 0}^{\\infty} a_n q^n, \\quad \\quad a_n \\in {\\mathbb{C}}. \\label{ererqe33} \\end{equation}\n\nAn \\emph{almost holomorphic function} is a function\n\\[ F(\\tau) = \\sum_{i=0}^{s} f_i(\\tau) \\frac{1}{y^{i}}, \\quad \\quad y = \\mathrm{Im}(\\tau) \\]\non ${\\mathbb{H}}$ such that every $f_i$ has a Fourier expansion in $|q|<1$ of the form \\eqref{ererqe33}.\n\nAn \\emph{almost holomorphic modular form} of weight $k$ is an\nalmost holomorphic function\nwhich satisfies the transformation law \\eqref{TRPROP}.\n\nA \\emph{quasi-modular form} of weight $k$ is a function $f(\\tau)$\nfor which there exists an almost holomorphic modular form $\\sum_i f_i y^{-i}$ of weight $k$ with\n$f_0 = f$.\n\nWe let $\\mathrm{AHM}_{\\ast}$ (resp. $\\mathrm{QMod}_{\\ast}$) be the ring of almost holomorphic\nmodular forms (resp. 
quasi-modular forms) graded by weight.\nThe 'constant term' map\n\\begin{equation} \\mathrm{AHM} \\to \\mathrm{QMod}, \\quad \\sum_i f_i y^{-i} \\mapsto f_0 \\label{constanttermmap} \\end{equation}\nis well-defined and an isomorphism \\cite{KZ, BO}.\n\n\\subsubsection{Differential operators}\nThe non-holomorphic variable\n\\[ \\nu = \\frac{1}{8 \\pi y} \\]\ntransforms under the action of $\\binom{a\\ b}{c\\ d} \\in \\mathrm{SL}_2({\\mathbb{Z}})$ on ${\\mathbb{H}}$ as\n\\begin{equation} \\nu\\left( \\frac{ a \\tau + b}{c \\tau + d} \\right) = (c \\tau + d)^2 \\nu(\\tau) + \\frac{c (c \\tau + d)}{4 \\pi i}. \\label{3432} \\end{equation}\n\nWe consider $\\tau$ and $\\nu$ here as independent variables and define operators\n\\[ D_q = \\frac{1}{2\\pi i} \\frac{d}{d \\tau} = q \\frac{d}{dq}, \\quad D_{\\nu} = \\frac{d}{d \\nu}. \\]\nSince $\\tau$ and $\\nu$ are independent we have\n\\[ D_q \\nu = 0, \\quad D_{\\nu} \\tau = 0. \\]\n\nA direct calculation using \\eqref{3432} shows the ring $\\mathrm{AHM}_{\\ast}$ admits the derivations\n\\begin{align*} \\widehat{D}_q = ( D_q - 2 k \\nu + 2 \\nu^2 D_{\\nu} ) & \\colon \\mathrm{AHM}_k \\to \\mathrm{AHM}_{k+2} \\\\\nD_{\\nu} = \\frac{d}{d\\nu} & \\colon \\mathrm{AHM}_{k} \\to \\mathrm{AHM}_{k-2}.\n\\end{align*}\nSince $\\widehat{D}_q$ acts as $D_q$ on the constant term in $y$ we conclude that\n$D_q$ preserves quasi-modular forms:\n\\[ D_q : \\mathrm{QMod}_k \\to \\mathrm{QMod}_{k+2}. 
\\]\nSimilarly, define the anomaly operator\n\\[ \\mathsf{T}_q : \\mathrm{QMod}_k \\to \\mathrm{QMod}_{k-2} \\]\nto be the map which acts by $D_{\\nu}$ under the constant term isomorphism \\eqref{constanttermmap}.\nThe following diagrams therefore commute:\n\\[\n\\begin{tikzcd}\n\\mathrm{QMod}_k \\ar{d}{D_q} \\ar{r}{\\cong} & \\mathrm{AHM}_k \\ar{d}{\\widehat{D}_q} & & \\mathrm{QMod}_k \\ar{d}{\\mathsf{T}_q} \\ar{r}{\\cong} & \\mathrm{AHM}_k \\ar{d}{D_{\\nu}} \\\\\n\\mathrm{QMod}_{k+2} \\ar{r}{\\cong} & \\mathrm{AHM}_{k+2}, & & \\mathrm{QMod}_{k-2} \\ar{r}{\\cong} & \\mathrm{AHM}_{k-2}.\n\\end{tikzcd}\n\\]\n\nThe commutator relation\n$\\big[ D_{\\nu}, \\widehat{D}_q \\big]|_{\\mathrm{AHM}_k} = - 2 k \\cdot \\mathrm{id}_{\\mathrm{AHM}_k}$\nyields\n\\[ \\left[ \\mathsf{T}_q, D_q \\right]\\big|_{\\mathrm{QMod}_k} = - 2 k \\cdot \\mathrm{id}_{\\mathrm{QMod}_k}. \\]\n\nThe operator $\\mathsf{T}_q$ allows us to describe the modular transformation\nof quasi-modular forms.\n\\begin{lemma} \\label{Sdadgd}\nFor any $f(\\tau) \\in \\mathrm{QMod}_k$ we have\n\\[\nf\\left( \\frac{a \\tau + b}{c \\tau + d} \\right)\n= \\sum_{\\ell = 0}^{m} \\frac{1}{\\ell!} \\left( - \\frac{c}{4 \\pi i} \\right)^{\\ell} (c \\tau + d)^{k-\\ell} \\mathsf{T}_q^{\\ell} f(\\tau).\n\\]\n\\end{lemma}\n\\begin{proof}\nLet $F(\\tau) = \\sum_{i=0}^{m} f_i(\\tau) \\nu^i$ be the almost holomorphic modular form\nwith associated quasi-modular form $f(\\tau) = f_0(\\tau)$. \nLet $A = \\binom{a\\ b}{c\\ d}$, $j = c \\tau + d$ and $\\alpha = \\frac{c}{4 \\pi i}$.\nWe claim\n\\[ f_{r}(A \\tau) = \\sum_{\\ell = r}^{m} (-\\alpha)^{\\ell -r} \\binom{\\ell}{r} j^{k-r-\\ell} f_{\\ell}(\\tau) \\]\nfor all $r$. 
The left-hand side is uniquely determined\nfrom $F( A \\tau ) = j^k F(\\tau)$ by solving recursively from the highest $\\nu$ coefficients on.\nOne checks the given equation is compatible with this constraint.\n\\end{proof}\n\\begin{comment}\n\\begin{lemma}\nLet $F(\\tau) = \\sum_{i=0}^{m} f_i(\\tau) \\nu^i$ be an almost holomorphic modular form\nwith associated quasi-modular form $f(\\tau) = f_0(\\tau)$.\nWe have\n\\begin{align*}\nf\\left( \\frac{a \\tau + b}{c \\tau + d} \\right)\n& = \\sum_{\\ell = 0}^{m} \\left( - \\frac{c}{4 \\pi i} \\right)^{\\ell} (c \\tau + d)^{k-\\ell} f_{\\ell}(\\tau) \\\\\n& = \\sum_{\\ell = 0}^{m} \\frac{1}{\\ell!} \\left( - \\frac{c}{4 \\pi i} \\right)^{\\ell} (c \\tau + d)^{k-\\ell} \\mathsf{T}_q^{\\ell} f(\\tau).\n\\end{align*}\n\\end{lemma}\n\\begin{proof}\nLet $A = \\binom{a\\ b}{c\\ d}$, $j = c \\tau + d$ and $\\alpha = \\frac{c}{4 \\pi i}$.\nWe claim\n\\[ f_{r}(A \\tau) = \\sum_{\\ell = r}^{m} (-\\alpha)^{\\ell -r} \\binom{l}{r} j^{k-r-\\ell} f_{\\ell}(\\tau) \\]\nfor all $r$. The left-hand side is uniquely determined\nfrom $F( A \\tau ) = j^k F(\\tau)$ by solving recursively from the highest $\\nu$ coefficients on.\nOne checks the given equation is compatible with this recursion (use it to compute $F(A \\tau)$).\n\\end{proof}\n\\end{comment}\n\n\\subsubsection{Eisenstein Series}\nLet $B_k$ be the Bernoulli numbers. 
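As a quick numerical illustration (not part of the argument; conventions as in the text, with $\sigma_1(n) = \sum_{d|n} d$), the weight-$2$ transformation of $C_2$ stated below can be verified by truncating the $q$-expansion $C_2(q) = -\frac{1}{24} + \sum_{n \geq 1} \sigma_1(n) q^n$ recalled in the introduction; a minimal Python sketch with illustrative helper names:

```python
import cmath

def sigma1(n):
    # sum of divisors of n
    return sum(d for d in range(1, n + 1) if n % d == 0)

def C2(tau, terms=200):
    # truncated q-expansion C_2(q) = -1/24 + sum_{n >= 1} sigma_1(n) q^n
    q = cmath.exp(2j * cmath.pi * tau)
    return -1.0 / 24 + sum(sigma1(n) * q**n for n in range(1, terms))

# Transformation under S = (0,-1;1,0), i.e. (a,b,c,d) = (0,-1,1,0):
#   C_2(-1/tau) = tau^2 C_2(tau) - tau / (4 pi i)
tau = 0.3 + 1.2j
lhs = C2(-1 / tau)
rhs = tau**2 * C2(tau) - tau / (4j * cmath.pi)
assert abs(lhs - rhs) < 1e-8

# At the fixed point tau = i the law forces C_2(i) = -1/(8 pi)
assert abs(C2(1j) + 1 / (8 * cmath.pi)) < 1e-8
```

For $\mathrm{Im}(\tau)$ of order $1$ the truncation error is far below the tolerance, so both checks pass to essentially double precision.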
The Eisenstein series\n\\[\nC_{k}(\\tau) = -\\frac{B_k}{k \\cdot k!} + \\frac{2}{k!}\n\\sum_{n \\geq 1} \\sum_{d|n} d^{k-1} q^n\n\\]\nare modular forms of weight $k$ for every even $k > 2$.\nIn case $k=2$ we have\n\\[ C_2\\left( \\frac{a \\tau + b}{c \\tau + d} \\right) = (c \\tau + d)^2 C_2(\\tau) - \\frac{ c ( c \\tau + d)}{4 \\pi i} \\]\nfor all $\\binom{a\\ b}{c\\ d} \\in \\mathrm{SL}_2({\\mathbb{Z}})$.\nHence\n\\begin{equation} \\widehat{C}_2(\\tau) = \\widehat{C}_2(\\tau, \\nu) = C_2(\\tau) + \\nu \\label{C2completion} \\end{equation}\nis almost holomorphic and $C_2$ is quasi-modular (of weight $2$).\n\nIt is well-known that\n\\begin{equation} \\mathrm{QMod} = {\\mathbb{Q}}[C_2, C_4, C_6], \\quad \\mathrm{AHM} = {\\mathbb{Q}}[ \\widehat{C}_2, C_4, C_6 ] \\label{fdfsdfdf} \\end{equation}\nand the inverse to the constant term map \\eqref{constanttermmap} is\n\\[ \\mathrm{QMod} \\to \\mathrm{AHM}, \\ \\ f(C_2, C_4, C_6) \\mapsto \\widehat{f} = f( \\widehat{C}_2, C_4, C_6). \\]\nIn particular, \\[ \\mathsf{T}_q = \\frac{d}{dC_2}. \\]\n\n\\begin{rmk}\nOnce the structure result \\eqref{fdfsdfdf} is known we can immediately work with $\\frac{d}{dC_2}$\nand we do not need to talk about transformation laws. 
However, below\nin the context of quasi-Jacobi forms we do not have such strong results at hand and we will use\nan abstract definition of $\\mathsf{T}_q$ instead (though see Section~\\ref{Subsubsection_Rewriting_Tnu} for a version of $\\frac{d}{dC_2}$).\n\\end{rmk}\n\n\n\n\\subsection{Jacobi forms} \\label{Subsection_Jacobi_forms}\n\\subsubsection{Definition}\nConsider variables\n$z = (z_1, \\ldots, z_n) \\in {\\mathbb{C}}^n$, let $k \\in {\\mathbb{Z}}$, and let\n$L$ be a rational $n \\times n$-matrix\nsuch that $2L$ is integral and has even diagonals\\footnote{This is the weakest condition on $L$\nfor which the second equation in \\eqref{TRANSFORMATIONLAWJACOBI} can be nontrivially satisfied.\nIndeed, if the condition is violated then $\\lambda^T L \\lambda$ is not integral in general and\nhence the $q$-expansion of $\\phi$ is fractional, which contradicts \\eqref{Jacobiexpansion}.}.\n\nA \\emph{weak Jacobi form} of weight $k$ and index $L$\nis a holomorphic function $\\phi(z, \\tau)$ on ${\\mathbb{C}}^n \\times {\\mathbb{H}}$\nsatisfying\n\\begin{equation} \\label{TRANSFORMATIONLAWJACOBI}\n\\begin{aligned}\n\\phi\\left( \\frac{z}{c \\tau + d}, \\frac{a \\tau + b}{c \\tau + d} \\right)\n& = (c \\tau + d)^k e\\left( \\frac{c z^t L z}{c \\tau + d} \\right) \\phi(z,\\tau) \\\\\n\\phi\\left( z + \\lambda \\tau + \\mu, \\tau \\right)\n& = e\\left( - \\lambda^t L \\lambda \\tau - 2 \\lambda^t L z \\right) \\phi(z,\\tau)\n\\end{aligned}\n\\end{equation}\nfor all $\\binom{a\\ b}{c\\ d} \\in \\mathrm{SL}_2({\\mathbb{Z}})$ and $\\lambda, \\mu \\in {\\mathbb{Z}}^n$\nand admitting a Fourier expansion of the form\n\\begin{equation} \\label{Jacobiexpansion}\n\\phi(z,\\tau) = \\sum_{n \\geq 0} \\sum_{r \\in {\\mathbb{Z}}^n} c(n,r) q^n \\zeta^r\n\\end{equation}\nin $|q|<1$; here we used the notation\n\\[ \\zeta^r = e(z \\cdot r) = e\\left( \\sum_i z_i r_i \\right) = \\prod_i \\zeta_i^{r_i} \\]\nwith $\\zeta_i = e(z_i)$.\n\nWe will call the first equation in 
\\eqref{TRANSFORMATIONLAWJACOBI} the modular,\nand the second equation in \\eqref{TRANSFORMATIONLAWJACOBI} the elliptic transformation law of Jacobi forms.\n\nBy definition weak Jacobi forms are allowed to have poles at cusps.\nIf the index $L$ is positive definite then a \\emph{(holomorphic) Jacobi form} is a weak Jacobi form\nwhich is holomorphic at cusps, or equivalently, satisfies $c(n,r) = 0$\nunless $r^{t} L^{-1} r \\leq 4n$.\nWe will not use this stronger notion and all the Jacobi forms\nare considered here to be weak.\n\n\n\n\\subsubsection{Quasi-Jacobi forms}\nFor every $i$ consider the real analytic function\n\\[ \\alpha_i(z,\\tau) = \\frac{z_i - \\overline{z_i}}{\\tau - \\overline{\\tau}} = \\frac{\\mathrm{Im}(z_i)}{\\mathrm{Im}(\\tau)} \\]\nand define\n\\[ \\alpha = (\\alpha_1, \\ldots, \\alpha_n). \\]\nWe have the transformations\n\\begin{align*}\n\\alpha\\left( \\frac{z}{c \\tau + d}, \\frac{a \\tau + b}{c \\tau + d} \\right)\n & = (c \\tau + d) \\alpha(z,\\tau) - c z \\\\\n \\alpha\\left( z + \\lambda \\tau + \\mu, \\tau\\right)\n & = \\alpha(z,\\tau) + \\lambda\n\\end{align*}\nfor all $\\binom{a\\ b}{c\\ d} \\in \\mathrm{SL}_2({\\mathbb{Z}})$ and $\\lambda, \\mu \\in {\\mathbb{Z}}^n$.\n\n\nAn \\emph{almost holomorphic function} on ${\\mathbb{C}}^n \\times {\\mathbb{H}}$ is a function\n\\[ \\Phi(z, \\tau) = \\sum_{i \\geq 0} \\sum_{j = (j_1, \\ldots, j_n) \\in ({\\mathbb{Z}}_{\\geq 0})^n}\n\\phi_{i, j}(z,\\tau) \\nu^{i} \\alpha^j, \\quad \\quad \\alpha^j = \\alpha_1^{j_1} \\cdots \\alpha_n^{j_n} \\]\nsuch that each\nof the finitely many non-zero\n$\\phi_{i, j}(z,\\tau)$ is holomorphic and admits a Fourier expansion\nof the form \\eqref{Jacobiexpansion} in the region $|q|<1$.\n\nAn \\emph{almost holomorphic weak Jacobi form} of weight $k$ and index $L$\nis an almost holomorphic function $\\Phi(z,\\tau)$\nwhich satisfies the transformation law \\eqref{TRANSFORMATIONLAWJACOBI} of weak Jacobi forms of weight $k$ and index $L$.\n\nA \\emph{quasi-Jacobi 
form} of weight $k$ and index $L$ is a function $\\phi(z,\\tau)$ on ${\\mathbb{C}}^n \\times {\\mathbb{H}}$\nsuch that there exists an almost holomorphic weak Jacobi form\n$\\sum_{i,j} \\phi_{i,j} \\nu^i \\alpha^j$ of weight $k$ and index $L$\nwith $\\phi_{0,0} = \\phi$.\n\nWe let $\\mathrm{AHJ}_{k,L}$ (resp. $\\mathrm{QJac}_{k,L}$) be the vector space of almost holomorphic weak (resp. quasi-) Jacobi forms\nof weight $k$ and index $L$.\nThe vector space of index~$L$ quasi-Jacobi forms is denoted by\n\\[ \\mathrm{QJac}_{L} = \\bigoplus_{k \\in {\\mathbb{Z}}} \\mathrm{QJac}_{k,L}. \\]\nMultiplication of functions endows the direct sum\n\\[ \\mathrm{QJac} = \\bigoplus_{L} \\mathrm{QJac}_{L}, \\]\nwhere $L$ runs over all \nrational $n \\times n$-matrices\nsuch that $2L$ is integral and has even diagonals,\nwith a commutative ring structure. We call\n$\\mathrm{QJac}$ the \\emph{algebra of quasi-Jacobi forms}\non $n$ variables.\n\n\n\\begin{lemma} The constant term map\n\\[ \\mathrm{AHJ}_{k,L} \\to \\mathrm{QJac}_{k,L}, \\quad \\sum_{i,j} \\phi_{i, j} \\nu^{i} \\alpha^j \\mapsto \\phi_{0,0} \\]\nis well-defined and an isomorphism.\n\\end{lemma}\n\\begin{proof}\nParallel to the rank $1$ case in \\cite{Lib}.\n\\end{proof}\n\n\n\\subsubsection{Differential operators} \\label{Subsubsection_quasiJacobi_Differentialoperators}\nConsider $\\tau, \\nu, z_i, \\alpha_i$ as independent variables and recall\nthe Fourier variables $q = e^{2 \\pi i \\tau}$ and $\\zeta_i = e^{2 \\pi i z_i}$.\nDefine the differential operators\n\\begin{alignat*}{2}\nD_q & = \\frac{1}{2 \\pi i} \\frac{d}{d\\tau} = q \\frac{d}{dq} & \\quad \\quad \\quad D_\\nu & = \\frac{d}{d \\nu} \\\\\nD_{\\zeta_i} & = \\frac{1}{2 \\pi i} \\frac{d}{d z_i} = \\zeta_i \\frac{d}{d \\zeta_i} & D_{\\alpha_i} & = \\frac{d}{d \\alpha_i}.\n\\end{alignat*}\n\nA direct check using the transformation laws \\eqref{TRANSFORMATIONLAWJACOBI} shows\n\\[ D_{\\nu} : \\mathrm{AHJ}_{k,L} \\to \\mathrm{AHJ}_{k-2, L}, \\quad D_{\\alpha_i} : 
\\mathrm{AHJ}_{k,L} \\to \\mathrm{AHJ}_{k-1,L}. \\]\nDefine anomaly operators $\\mathsf{T}_q$ and $\\mathsf{T}_{\\alpha_i}$ by the commutative diagrams\n\\[\n\\begin{tikzcd}\n\\mathrm{QJac}_{k,L} \\ar{d}{\\mathsf{T}_q} & \\ar{l}{\\cong} \\mathrm{AHJ}_k \\ar{d}{D_{\\nu}} & & \\mathrm{QJac}_{k,L} \\ar{d}{\\mathsf{T}_{\\alpha_i}} & \\ar{l}{\\cong} \\mathrm{AHJ}_k \\ar{d}{D_{\\alpha_i}} \\\\\n\\mathrm{QJac}_{k-2,L} & \\ar{l}{\\cong} \\mathrm{AHJ}_{k-2,L} & & \\mathrm{QJac}_{k-1,L} & \\ar{l}{\\cong} \\mathrm{AHJ}_{k-1,L}\n\\end{tikzcd}\n\\]\nwhere the horizontal maps are the 'constant term' maps.\n\nSimilarly, we have operators\\footnote{See \\cite[Sec.2]{CR} for a\nLie algebra presentation of these operators.}\n\\begin{align*}\n\\widehat{D}_q = \\left( D_q - 2 k \\nu + 2 \\nu^2 D_{\\nu} + \\sum_{i=1}^{n} \\alpha_i D_{\\zeta_i} + \\alpha^T L \\alpha \\right) \\colon \\mathrm{AHJ}_{k,L} \\to \\mathrm{AHJ}_{k+2,L} \\\\\n\\widehat{D}_{\\zeta_i} = \\left( D_{\\zeta_i} + 2 \\alpha^T L e_i - 2 \\nu D_{\\alpha_i} \\right) \\colon \\mathrm{AHJ}_{k,L} \\to \\mathrm{AHJ}_{k+1,L}\n\\end{align*}\nwhere $e_i = (\\delta_{ij})_{j}$ is the $i$-th standard basis vector in ${\\mathbb{C}}^n$.\nSince $\\widehat{D}_q, \\widehat{D}_{\\zeta_i}$ act as $D_q$ and $D_{\\zeta_i}$ on the constant term,\nwe find that $D_q, D_{\\zeta_i}$ act on quasi-Jacobi forms:\n\\[ D_q : \\mathrm{QJac}_{k,L} \\to \\mathrm{QJac}_{k+2,L}, \\quad D_{\\zeta_i} : \\mathrm{QJac}_{k,L} \\to \\mathrm{QJac}_{k+1,L}. 
\\]\n\n\nFor $\\lambda = (\\lambda_1, \\ldots, \\lambda_n) \\in {\\mathbb{Z}}^n$ we will write\n\\[ D_{\\lambda} = \\sum_{i=1}^{n} \\lambda_i D_{\\zeta_i}, \\quad \\quad\n\\mathsf{T}_{\\lambda} = \\sum_{i=1}^{n} \\lambda_i \\mathsf{T}_{\\alpha_i}.\n \\]\n\n\nThe commutation relations of the above operators read\\footnote{The operators $\\mathsf{T}_q, \\mathsf{T}_{\\lambda}, D_q, D_{\\lambda}$ as well as the weight and index grading operators define an action\nof the Lie algebra of the semi-direct product of $\\mathrm{SL}_2({\\mathbb{C}})$ with a Heisenberg group on the space $\\mathrm{QJac}_{L}$,\nsee \\cite[Sec.1]{Zie}, \\cite[Sec.2]{CR} and also \\cite[Thm.1.4]{EZ}.}\n\\begin{equation} \\label{COMMUTATIONRELATIONS}\n\\begin{alignedat}{2}\n\\left[ \\mathsf{T}_q, D_q \\right]\\big|_{\\mathrm{QJac}_{k,L}} & = -2 k \\cdot {\\mathrm{id}}_{\\mathrm{QJac}_{k,L}} &\n\\left[ \\mathsf{T}_{\\lambda}, D_q \\right] & = D_{\\lambda} \\\\\n\\left[ \\mathsf{T}_{\\lambda}, D_{\\mu} \\right]\\big|_{\\mathrm{QJac}_{k,L}} & = 2 ( \\lambda^T L \\mu )\\cdot {\\mathrm{id}}_{\\mathrm{QJac}_{k,L}}\n& \\quad \\quad \\left[ \\mathsf{T}_q, D_{\\lambda} \\right] & = -2 \\mathsf{T}_{\\lambda}\n\\end{alignedat}\n\\end{equation}\nand\n\\[ [ D_q, D_{\\lambda} ] = [ D_{\\lambda}, D_{\\mu} ] = [ \\mathsf{T}_q, \\mathsf{T}_{\\lambda} ] = [ \\mathsf{T}_{\\lambda} , \\mathsf{T}_{\\mu} ] = 0 \\]\nfor all $\\lambda, \\mu \\in {\\mathbb{Z}}^n$.\n\n\\begin{lemma}\n\\label{Lemma_ell_trans_law_for_quasi_Jac}\nLet $\\phi \\in \\mathrm{QJac}_{L}$. 
Then\n\\begin{align*} \\phi(z + \\lambda \\tau + \\mu, \\tau)\n& = e\\left( - \\lambda^t L \\lambda \\tau - 2 \\lambda^t L z \\right) \\sum_{i \\geq 0} \\frac{(-1)^i}{i!} \\mathsf{T}_{\\lambda}^i \\phi(z,\\tau) \\\\\n& = e\\left( - \\lambda^t L \\lambda \\tau - 2 \\lambda^t L z \\right) \\exp\\left( - \\mathsf{T}_{\\lambda} \\right) \\phi(z,\\tau) \n\\end{align*}\n\\end{lemma}\n\\begin{proof}\nSince the claimed formula is compatible\nwith addition on ${\\mathbb{Z}}^n$, we may assume $\\lambda = e_i$.\nLet $\\Phi$ be the non-holomorphic completion of $\\phi$. We expand\n\\[ \\Phi = \\sum_{j \\geq 0} \\phi_j \\alpha_i^j \\] \nwhere $\\phi_j$ depends on all variables except $\\alpha_i$\n(these variables are invariant under $z \\mapsto z + e_i \\tau$).\nThen a direct check shows that the claimed formula is determined by, and compatible with, the relation\n\\[ \\Phi(z + e_i \\tau) = e\\left( - e_i^t L e_i \\tau - 2 e_i^t L z \\right) \\Phi(z). \\qedhere \\]\n\\end{proof}\n\n\\begin{lemma} \\label{Lemma_QJAc_ModTrLaw}\nLet $\\phi \\in \\mathrm{QJac}_{k,L}$ such that $\\mathsf{T}_{\\lambda} \\phi = 0$ for all $\\lambda \\in {\\mathbb{Z}}^n$. Then\n\\[\n\\phi\\left( \\frac{z}{c \\tau + d}, \\frac{a \\tau + b}{c \\tau + d} \\right)\n= e\\left( \\frac{c z^T L z}{c \\tau + d} \\right) \\sum_{\\ell \\geq 0} \\frac{1}{\\ell!} \\left( - \\frac{c}{4 \\pi i} \\right)^{\\ell} (c \\tau + d)^{k-\\ell} \\mathsf{T}_q^{\\ell} \\phi(z, \\tau).\n\\]\n\\end{lemma}\n\\begin{proof}\nSince $\\mathsf{T}_{\\lambda} \\phi = 0$ for all $\\lambda$, the non-holomorphic\ncompletion of $\\phi$ is of the form\n$\\Phi(z,\\tau) = \\sum_{i \\geq 0} \\phi_i(z,\\tau) \\nu^i$\nwhere $\\phi_i$ are \\emph{holomorphic} and in\n$\\cap_{\\lambda} \\mathrm{Ker}( \\mathsf{T}_{\\lambda} )$. 
\nThe same proof as Lemma~\\ref{Sdadgd} applies now.\n\\end{proof}\n\\begin{comment}\n\\begin{lemma}\nLet $\\Phi(z,\\tau) = \\sum_{i \\geq 0} \\phi_i(z,\\tau) \\nu^i$\nbe an almost holomorphic Jacobi form of weight $k$ and index $L$\nwith associated quasi-modular form $\\phi(z,\\tau) = \\phi_0(z,\\tau)$.\nNecessarily we have $\\phi_i \\in \\cap_{\\lambda} \\mathrm{Ker}( T_{\\lambda} )$. Then\n\\begin{align*}\n\\phi\\left( \\frac{a \\tau + b}{c \\tau + d} \\right)\n& = e\\left( \\frac{c z^T L z}{c \\tau + d} \\right) \\sum_{\\ell \\geq 0} \\left( - \\frac{c}{4 \\pi i} \\right)^{\\ell} (c \\tau + d)^{k-\\ell} \\phi_{\\ell}(\\tau) \\\\\n& = e\\left( \\frac{c z^T L z}{c \\tau + d} \\right) \\sum_{\\ell \\geq 0} \\frac{1}{\\ell!} \\left( - \\frac{c}{4 \\pi i} \\right)^{\\ell} (c \\tau + d)^{k-\\ell} \\mathsf{T}_q^{\\ell} f(\\tau).\n\\end{align*}\n\\end{lemma}\n\\begin{proof} Same proof as Lemma~\\ref{Sdadgd}. \\end{proof}\n\\end{comment}\n\n\\subsubsection{Rewriting $\\mathsf{T}_q$ as $\\frac{d}{dC_2}$} \\label{Subsubsection_Rewriting_Tnu}\nDefine the vector space of quasi-Jacobi forms which are annihilated by $\\mathsf{T}_q$:\n\\[ \\mathrm{QJac}'_{L} = \\mathrm{Ker}\\left( \\mathsf{T}_q : \\mathrm{QJac}_{L} \\to \\mathrm{QJac}_L \\right). \\]\nWe have the following structure result\nwhose proof is essentially identical to \\cite[Prop.3.5]{BO} and which we therefore omit.\n\n\\begin{lemma} $\\mathrm{QJac}_L = \\mathrm{QJac}'_L \\otimes_{{\\mathbb{C}}} {\\mathbb{C}}[C_2]$. \\end{lemma}\n\nBy the Lemma\nevery quasi-Jacobi form can be uniquely written as a polynomial in $C_2$.\nIn particular,\nthe formal derivative $\\frac{d}{dC_2}$ is well-defined. Comparing with \\eqref{C2completion} we conclude that\n\\[ \\mathsf{T}_q = \\frac{d}{dC_2} : \\mathrm{QJac}_L \\to \\mathrm{QJac}_L. 
\\]\n\n\\subsubsection{Specialization to quasi-modular forms} \\label{Subsubsection_Specialization_to_quasimodular_forms}\nBy setting $z=0$ the quasi-Jacobi forms of weight $k$ and index $L$\nspecialize to weight $k$ quasi-modular forms:\n\\begin{align*}\n\\mathrm{AHJ}_{k,L} \\to \\mathrm{AHM}_{k},& \\quad F(z,\\tau) \\mapsto F(0,\\tau) \\\\\n\\mathrm{QJac}_{k,L} \\to \\mathrm{QMod}_k,& \\quad f(z,\\tau) \\mapsto f(0,\\tau).\n\\end{align*}\nThe specialization maps commute with the operators $\\mathsf{T}_q$.\n\n\n\\subsection{Theta decomposition and periods} \\label{Subsection_theta_decomposition}\nWe discuss theta decompositions of quasi-Jacobi forms\nif the index $L$ is positive definite.\nFor this we will need to work with several more general notions of modular forms\nthan what we have defined above (e.g. for congruence subgroups, of half-integral weight, or vector-valued).\nSince we do not need the results of this section for the main arguments of the paper\nwe will not introduce these notions here and instead refer to \\cite{Sko, Scheit}.\\footnote{\nThe results of Section~\\ref{Subsection_theta_decomposition} are essential only for Section~\\ref{Subsubsection_elliptic_fibrations_quasimodularforms},\nwhich is not used later on.\nProposition~\\ref{QJac_Prop2} also appears in Section~\\ref{Subsection_ab_explain},\nbut in this case the lattice $2L$ is unimodular\nand hence we can use Proposition~\\ref{QJac_Prop3}\nto re-prove Proposition~\\ref{QJac_Prop2} without additional theory.\n}\n\nAssume $L$ is positive definite, and\nfor every $x \\in {\\mathbb{Z}}^n\/ 2 L {\\mathbb{Z}}^n$ define the index $L$ theta function\n\\[ \\vartheta_{L,x}(z,\\tau) = \\sum_{\\substack{r \\in {\\mathbb{Z}}^n \\\\ r \\equiv x \\text{ mod } 2L {\\mathbb{Z}}^n}} e\\left( \\tau \\frac{1}{4} r^{T} L^{-1} r + r^T z \\right). 
\\]\nLet $\\mathrm{Mp}_2({\\mathbb{Z}})$ be the metaplectic double cover of $\\mathrm{SL}_2({\\mathbb{Z}})$ and\nconsider the space\n\\[ \\widetilde{\\mathrm{Jac}}_{k,L} = \\bigcap_{\\lambda \\in {\\mathbb{Z}}^n} \\mathrm{Ker}\\left( \\mathsf{T}_{\\lambda} : \\mathrm{QJac}_{k,L} \\to \\mathrm{QJac}_{k-1, L} \\right). \\]\n\n\\begin{prop} \\label{QJac_Prop1} Assume $L$ is positive definite and let $f \\in \\mathrm{QJac}_{k,L}$.\n\\begin{enumerate}\n\\item[(i)] There exist (finitely many) unique quasi-Jacobi forms $f_d \\in \\widetilde{\\mathrm{Jac}}_{k-\\sum_i d_i, L}$\nwhere $d=(d_1, \\ldots, d_n) \\in {\\mathbb{Z}}_{\\geq 0}^n$\nsuch that\n\\[\nf(z,\\tau) = \\sum_{d} D_{\\zeta_1}^{d_1} \\cdots D_{\\zeta_n}^{d_n} f_d(z,\\tau).\n\\]\n\\item[(ii)] Every $f_d(z,\\tau)$ above can be expanded as\n\\[ f_d(z,\\tau) = \\sum_{x \\in {\\mathbb{Z}}^n\/2 L {\\mathbb{Z}}^n} h_{k,x}(\\tau) \\vartheta_{L,x}(z,\\tau) \\]\nwhere $( h_{k,x} )_{x}$ is a vector-valued\nweakly-holomorphic quasi-modular form for the dual of the Weil representation\nof $\\mathrm{Mp}_2({\\mathbb{Z}})$ on ${\\mathbb{Z}}^n \/ 2 L {\\mathbb{Z}}^n$.\n\\end{enumerate}\n\\end{prop}\n\nThe quasi-modular forms $(h_{k,x})_x$ of (ii) are weakly holomorphic\n(i.e. they may have poles at the cusps) \nsince we define our quasi-Jacobi forms as almost-holomorphic versions of weak Jacobi forms. The quasi-Jacobi forms for which $(h_{k,x})_x$ are\nholomorphic correspond to\n\\emph{holomorphic} Jacobi forms\n(which require a stronger vanishing condition on their Fourier coefficients).\n\n\\begin{proof}[Proof of Proposition~\\ref{QJac_Prop1}]\n(i) Let $F$ be the completion of $f$ and consider the expansion\n\\[ F = \\sum_{j = (j_1, \\ldots, j_n)} f_j(z,\\tau,\\nu) \\alpha^j. \\]\nLet $j$ be a maximal index, i.e. 
$f_{j+e_i} = 0$ for every $i$ where $e_i$ is the standard basis.\nThen $\\mathsf{T}_{\\lambda} f_j = 0$ for every $\\lambda$ and hence $f_j \\in \\widetilde{\\mathrm{Jac}}_{k-|j|,L}$.\nReplacing $f$ by\n\\[ f - \\left( D_{\\frac{1}{2} L^{-1} e_1} \\right)^{j_1} \\cdots \\left( D_{\\frac{1}{2} L^{-1} e_n} \\right)^{j_n} f_j \\]\nthe claim follows by induction.\n\n(ii) The existence of $h_{k,x}(\\tau)$ follows from the elliptic transformation law.\nFor the modularity see \\cite[Sec.4]{Sko}.\n\\end{proof}\n\n\nThe \\emph{level of $L$} is the smallest positive integer $\\ell$ such that $\\frac{1}{2} \\ell L^{-1}$ has integral entries and even diagonal entries.\nLet \n\\[ \\Gamma(\\ell)^{\\ast} \\subset \\mathrm{Mp}_2({\\mathbb{Z}}) \\]\nbe the lift of the congruence subgroup $\\Gamma(\\ell)$ to $\\mathrm{Mp}_2({\\mathbb{Z}})$ defined in \\cite[Sec.2]{Sko}.\n\nGiven a function $f = \\sum_{r \\in {\\mathbb{Z}}^n} c_r(\\tau) \\zeta^r$ with $\\zeta^r = e(z^t r)$, let\n\\[ [ f ]_{\\zeta^{\\lambda}} = c_{\\lambda}(\\tau) \\]\ndenote the coefficient of $\\zeta^\\lambda$.\n\n\\begin{prop} \\label{QJac_Prop2} Assume $L$ is positive definite of level $\\ell$ and let $f \\in \\mathrm{QJac}_{k,L}$.\n\\begin{enumerate}\n \\item[(i)] For every $\\lambda \\in {\\mathbb{Z}}^n$, the coefficient\n \\[ [ \\, f \\, ]_{\\lambda} := q^{-\\frac{1}{4} \\lambda^T L^{-1} \\lambda} [ \\, f \\, ]_{\\zeta^{\\lambda}} \\]\n is a weakly-holomorphic quasi-modular form for $\\Gamma(\\ell)^{\\ast}$ of weight $\\leq k-\\frac{n}{2}$.\n If $\\lambda = 0$ then $[ \\, f \\, ]_{\\lambda}$ is homogeneous of weight $k-\\frac{n}{2}$.\n \\item[(ii)] We have\n \\[\n \\mathsf{T}_q [\\, f \\, ]_{\\lambda} = \\left[\\, \\mathsf{T}_q f\\, \\right]_{\\lambda} + \\frac{1}{2} \\sum_{a,b=1}^{n} \\left( L^{-1} \\right)_{ab}\n \\left[\\, \\mathsf{T}_{\\zeta_a} \\mathsf{T}_{\\zeta_b} f \\, \\right]_{\\lambda}.\n \\]\n\\end{enumerate}\n\\end{prop}\n\n\nIn (ii) of Proposition~\\ref{QJac_Prop2} we used an extension of the 
operator $\\mathsf{T}_q$ to quasi-modular forms for congruence subgroups.\nThe existence of this operator follows parallel to Section~\\ref{Subsection_Modular_forms}\nfrom the arguments of \\cite{KZ}.\n\nThe $\\zeta^{\\lambda}$-coefficients of Jacobi forms are sometimes referred to as periods.\nA quasi-modularity result for the periods of certain multivariable elliptic functions\n(certain meromorphic Jacobi forms of index $L=0$)\nhas been obtained in \\cite[App.A]{HAE}. The formula in \\cite[Thm.7]{HAE} is similar\nto the above but requires a much more delicate argument.\n\n\\begin{proof}[Proof of Proposition~\\ref{QJac_Prop2}]\n(i) The Weil representation restricts to the trivial representation on $\\Gamma(\\ell)$,\nsee \\cite[Prop.4.3]{Scheit}.\nHence the $h_{k,x}$ are $\\Gamma(\\ell)^{\\ast}$-quasi-modular by Proposition~\\ref{QJac_Prop1}(ii).\n\n(ii) For the second part we consider the expansion\n\\begin{equation} f(z,\\tau) = \\sum_{x \\in {\\mathbb{Z}}^n\/2 L {\\mathbb{Z}}^n} \\sum_{d}\nh_{k,x}(\\tau) D_{\\zeta_1}^{d_1} \\cdots D_{\\zeta_n}^{d_n} \\vartheta_{L,x}(z,\\tau)\n\\label{dfsderwe} \\end{equation}\nwhich follows from combining both parts of Proposition~\\ref{QJac_Prop1}.\nLet\n\\[ \\mathsf{T}_{\\Delta} = \\frac{1}{2} \\sum_{a,b=1}^{n} \\left( L^{-1} \\right)_{ab} \\mathsf{T}_{e_a} \\mathsf{T}_{e_b}. \\]\nBy \\eqref{COMMUTATIONRELATIONS} we have\n$\\left[ \\mathsf{T}_q, D_{\\lambda} \\right] = -\\left[ \\mathsf{T}_{\\Delta} , D_{\\lambda} \\right]$\nfor every $\\lambda$.\nSince $\\vartheta_{L,x}$ is a Jacobi form (for a congruence subgroup\\footnote{We extend\nthe operators $\\mathsf{T}_q, \\mathsf{T}_{\\lambda}$ here to quasi-Jacobi forms for congruence subgroups. 
The commutation relations are identical.})\nwe also have $\\mathsf{T}_q \\vartheta_{L,x} = \\mathsf{T}_{\\Delta} \\vartheta_{L,x} = 0$.\nThis implies\n\\[ \\mathsf{T}_q D_{\\zeta_1}^{d_1} \\cdots D_{\\zeta_n}^{d_n} \\vartheta_{L,x}(z,\\tau)\n = -\\mathsf{T}_{\\Delta} D_{\\zeta_1}^{d_1} \\cdots D_{\\zeta_n}^{d_n} \\vartheta_{L,x}(z,\\tau).\n\\]\nHence applying $\\mathsf{T}_q$ to \\eqref{dfsderwe} yields\n\\begin{align*}\n\\mathsf{T}_q f & = \\sum_{x,d} \\left( \\mathsf{T}_q( h_{k,x} ) D_{\\zeta_1}^{d_1} \\cdots D_{\\zeta_n}^{d_n} \\vartheta_{L,x}\n - h_{k,x} \\mathsf{T}_{\\Delta} D_{\\zeta_1}^{d_1} \\cdots D_{\\zeta_n}^{d_n} \\vartheta_{L,x} \\right) \\\\\n & = \\left( \\sum_{x,d} \\mathsf{T}_q( h_{k,x} ) D_{\\zeta_1}^{d_1} \\cdots D_{\\zeta_n}^{d_n} \\vartheta_{L,x} \\right) - \\mathsf{T}_{\\Delta} f.\n\\end{align*}\nThe claim follows by taking the coefficient of $\\zeta^{\\lambda}$.\n\\end{proof}\n\n\\begin{cor} $\\mathrm{QJac}_{k,L}$ is finite-dimensional for every weight $k$\nand positive definite index $L$.\n\\end{cor}\n\\begin{proof}\nBy Proposition~\\ref{QJac_Prop1} the space $\\widetilde{\\mathrm{Jac}}_{k,L}$ is isomorphic to\na space of meromorphic vector-valued quasi-modular forms of some fixed weight $k$\nfor which the order of poles at the cusps is bounded by a constant depending only on $L$.\nIn particular, it is finite dimensional and vanishes for $k \\ll 0$.\nThe claim now follows from the first part of Proposition~\\ref{QJac_Prop1}.\n\\end{proof}\n\n\\subsection{Examples}\n\\subsubsection{Rank $0$}\nIf the lattice $\\Lambda$ has rank zero, a quasi-Jacobi form of weight $k$ is a quasi-modular form of the same weight.\n\n\\subsubsection{Rank $1$ lattice}\nThe ring of quasi-Jacobi forms in the rank $1$ case has been\ndetermined and studied by Libgober in \\cite{Lib}.\n\n\\subsubsection{Half-unimodular index} \\label{Subsubsection_Halfunimodular_index}\nLet $Q$ be positive definite and unimodular of rank $n$.\nWe describe the ring of quasi-Jacobi forms 
\nof index $L = \\frac{1}{2}Q$.\nThe main example is the Riemann theta function\n\\begin{equation} \\label{THETAFUNCTION} \\Theta_{Q}(z, \\tau)\n=\n\\sum_{\\gamma \\in {\\mathbb{Z}}^{n}} q^{\\frac{1}{2} \\gamma^T Q \\gamma} e\\left( z^T Q \\gamma \\right),\n\\end{equation}\nwhich is a Jacobi form\\footnote{ \nSince $Q$ is unimodular the theta function satisfies the transformation laws for the full modular group and \nnot just a subgroup \\cite[Sec.3]{Zie}.}\nof weight $n\/2$ and index $Q\/2$,\n\\[ \\Theta_{Q}(z,\\tau) \\in \\mathrm{Jac}_{\\frac{n}{2}, \\frac{1}{2}Q}. \\]\n\nThe following structure result shows that this is essentially the only Jacobi form that we need to consider in this index.\n\\begin{prop} \\label{QJac_Prop3}\nLet $Q$ be positive definite and unimodular. Then every $f \\in \\mathrm{QJac}_{k, \\frac{Q}{2}}$\ncan be uniquely written as a finite sum\n\\[ f = \\sum_{d = (d_1, \\ldots, d_n)} f_d(\\tau) D_{\\zeta_1}^{d_1} \\cdots D_{\\zeta_n}^{d_n} \\Theta_{Q}(z,\\tau) \\]\nwhere $f_d \\in \\mathrm{QMod}_{k-\\sum_i d_i}$ for every $d$. In particular, for every $\\lambda \\in {\\mathbb{Z}}^n$ we have\n\\[ q^{-\\frac{1}{4} \\lambda^T Q^{-1} \\lambda} [ \\, f \\, ]_{\\zeta^{\\lambda}} \\in \\mathrm{QMod}_{\\leq k}. 
\\]\n\\end{prop}\n\\begin{proof}\nParallel to the proof of Proposition~\\ref{QJac_Prop1}.\n\\end{proof}\n\n\n\\subsubsection{The $E_8$ lattice and $E_8$-Jacobi forms} \\label{Section_E8_lattice}\nConsider the Cartan matrix of the $E_8$ lattice\n\\[\nQ_{E_8} =\n\\begin{pmatrix}\n 2 & -1 & 0 & 0 & 0 & 0 & 0 & 0 \\\\\n-1 & 2 & -1& 0 & 0 & 0 & 0 & 0 \\\\\n 0 & -1 & 2 & -1 & 0 & 0 & 0 & 0 \\\\\n 0 & 0 & -1 & 2 & -1 & 0 & 0 & 0 \\\\\n 0 & 0 & 0 & -1 & 2 & -1 & 0 & -1 \\\\\n 0 & 0 & 0 & 0 & -1 & 2 & -1 & 0 \\\\\n 0 & 0 & 0 & 0 & 0 & -1 & 2 & 0 \\\\\n 0 & 0 & 0 & 0 & -1 & 0 & 0 & 2\n\\end{pmatrix}.\n\\]\nWe define a natural subspace of the space of Jacobi forms of index $\\frac{m}{2} Q_{E_8}$.\n\nA \\emph{weak $E_8$-Jacobi form} of weight $k$ and index $m$ is a\nweak Jacobi form $\\phi$ of weight $k$ and index $L = \\frac{m}{2} Q_{E_8}$\nwhich satisfies\n\\[ \\phi( w(z), \\tau ) = \\phi( z, \\tau ) \\]\nfor all $w \\in W(E_8)$, where $W(E_8)$ is the Weyl group of $E_8$.\nWe let \n\\[ \\mathrm{Jac}_{E_8, k,m} \\subset \\mathrm{Jac}_{k, \\frac{m}{2} Q_{E_8}} \\]\nbe the space of weak $E_8$-Jacobi forms.\n\nIn practice, the subspace of $E_8$-Jacobi forms is much smaller than the full space of\nJacobi forms of index $\\frac{m}{2} Q_{E_8}$.\nThe first example of an $E_8$-Jacobi form is the theta function $\\Theta_{E_8}$ defined in \\eqref{THETAFUNCTION}.\nFurther examples and a conjectural structure result for the ring of weak $E_8$-Jacobi forms\ncan be found in \\cite{Sakai2}.\n\n\n\n\n\\section{Elliptic fibrations and conjectures}\n\\label{Section_Elliptic_fibrations_and_conjectures}\n\\subsection{Elliptic fibrations}\n\\subsubsection{Definition} \\label{Subsubsection_ellip_fibr_definition}\nLet $X$ and $B$ be non-singular projective varieties and let\n\\[ \\pi : X \\to B \\]\nbe an elliptic fibration, i.e. 
a flat morphism\nwith fibers connected curves of arithmetic genus $1$.\nWe always assume $\\pi$ satisfies the following properties\\footnote{After Conjecture~\\ref{Conj_HAE} we discuss how these assumptions can be removed.}:\n\\begin{enumerate}\n\\item[(i)] All fibers of $\\pi$ are integral.\n\\item[(ii)] There exists a section $\\iota : B \\to X$.\n\\item[(iii)] $H^{2,0}(X, {\\mathbb{C}}) = H^0(X, \\Omega_X^2) = 0$.\n\\end{enumerate}\n\n\\subsubsection{Cohomology}\nLet $B_0 \\in H^2(X)$\nbe the class of the section $\\iota$,\nand let $N_{\\iota}$ be the normal bundle of $\\iota$.\nWe define the divisor class\n\\[ W = B_0 - \\frac{1}{2} \\pi^{\\ast} c_1(N_{\\iota}). \\]\nConsider the endomorphisms of $H^{\\ast}(X)$ defined by\n\\[\nT_+(\\alpha) = ( \\pi^{\\ast} \\pi_{\\ast} \\alpha ) \\cup W, \\quad\nT_-(\\alpha) = \\pi^{\\ast} \\pi_{\\ast} ( \\alpha \\cup W ),\n\\]\nfor all $\\alpha \\in H^{\\ast}(X)$. The maps $T_{\\pm}$ satisfy the relations\n\\[ T_+^2 = T_+, \\quad T_-^2 = T_-, \\quad T_+ T_- = T_- T_+ = 0. \\]\nThe cohomology of $X$ therefore splits as\\footnote{\nThe subspaces $H_{+}, H_{-}, H_{\\perp}$ are the $+1, -1, 0$-eigenspaces respectively\nof the endomorphism of $H^{\\ast}(X)$ defined by\n\\[ \\alpha \\mapsto [ W\\cup \\ , \\pi^{\\ast} \\pi_{\\ast}] \\alpha = W \\cup \\pi^{\\ast} \\pi_{\\ast}(\\alpha) -\n\\pi^{\\ast} \\pi_{\\ast}( W \\cup \\alpha). \\]\n}\n\\begin{equation} H^{\\ast}(X) = H^{\\ast}_{+} \\oplus H^{\\ast}_- \\oplus H^{\\ast}_{\\perp} \\label{SPLITTING} \\end{equation}\nwhere $H^{\\ast}_{\\pm} = \\mathrm{Im}(T_\\pm)$\nand $H^{\\ast}_{\\perp} = \\mathrm{Ker}( T_+ ) \\cap \\mathrm{Ker}(T_-)$.\n\nWe have the relation\n\\[\n\\langle T_{+}(\\alpha), \\alpha' \\rangle = \\langle \\alpha, T_{-}(\\alpha') \\rangle,\n\\quad \\alpha, \\alpha' \\in H^{\\ast}(X)\n\\]\nwhere $\\langle \\ , \\ \\rangle$ is the intersection pairing on $H^{\\ast}(X)$. 
Therefore\n\\[ \\big\\langle H^{\\ast}_{+}, H^{\\ast}_{+} \\big\\rangle = \\big\\langle H^{\\ast}_{-}, H^{\\ast}_{-} \\big\\rangle =\n\\big\\langle H^{\\ast}_{\\pm}, H^{\\ast}_{\\perp} \\big\\rangle = 0. \\]\n\nConsider the isomorphisms\n\\begin{align*}\nH^{\\ast}(B) \\to H^{\\ast}_{-},\\ \\ & \\alpha \\mapsto \\pi^{\\ast}(\\alpha) \\\\\nH^{\\ast}(B) \\to H^{\\ast}_{+},\\ \\ & \\alpha \\mapsto \\pi^{\\ast}(\\alpha) \\cup W.\n\\end{align*}\nThe pairing between $H^{\\ast}_{+}$ and $H^{\\ast}_{-}$ is determined by the compatibility\n\\[ \\int_{B} \\alpha \\cdot \\alpha' = \\int_X \\pi^{\\ast}(\\alpha) \\cdot \\left( \\pi^{\\ast}(\\alpha') \\cdot W \\right)\n\\quad \\text{ for all } \\alpha, \\alpha' \\in H^{\\ast}(B).\n\\]\n\n\n\n\n\\subsubsection{The lattice $\\Lambda$} \\label{Subsubsection_LatticeLambda}\nLet $F \\in H_2(X,{\\mathbb{Z}})$ be the class of a fiber of $\\pi$ and consider the ${\\mathbb{Z}}$-lattice\n\\[ {\\mathbb{Z}} F \\oplus \\iota_{\\ast} H_2(B,{\\mathbb{Z}}) \\subset H_2(X,{\\mathbb{Z}}). \\]\nIts orthogonal complement in the dual space $H^2(X,{\\mathbb{Z}})$ is\nthe ${\\mathbb{Z}}$-lattice\n\\begin{equation} \\Lambda = \\Big( {\\mathbb{Q}} F \\oplus \\iota_{\\ast} H_2(B,{\\mathbb{Z}}) \\Big)^{\\perp} \\subset H^2(X,{\\mathbb{Z}}). \\label{fsdfsdgsdf} \\end{equation}\nSince\n${\\mathbb{Q}} F \\oplus \\iota_{\\ast} H_2(B,{\\mathbb{Z}})$ generates $H_{2,+} \\oplus H_{2,-}$ over ${\\mathbb{Q}}$,\nwe have\n\\[ \\Lambda \\subset H^2_{\\perp}, \\quad \\Lambda \\otimes {\\mathbb{Q}} = H^2_{\\perp}. 
\\]\n\nLet $\\mathsf{k}_1, \\ldots, \\mathsf{k}_r$ be an integral basis\\footnote{Recall we always work modulo torsion as per Section~\\ref{Subsection_Conventions}.}\nof $H_2(B,{\\mathbb{Z}})$\nand let $\\mathsf{k}_i^{\\ast} \\in H^2(B,{\\mathbb{Z}})$ be a dual basis.\nThe projection\n\\[ p_{\\perp} : H^2(X,{\\mathbb{Q}}) \\to H^2_{\\perp} \\]\nwith respect to the splitting \\eqref{SPLITTING}\nacts on $\\alpha \\in H^{2}(X)$ by\n\\[\np_{\\perp}(\\alpha) = \\alpha - (\\alpha \\cdot F) B_0 - \\sum_{i=1}^{r} \\Big( \\big( \\alpha - (\\alpha \\cdot F) B_0 \\big) \\cdot \\iota_{\\ast} \\mathsf{k}_i \\Big) \\pi^{\\ast} \\mathsf{k}_i^{\\ast}.\n\\]\nand is therefore defined over ${\\mathbb{Z}}$.\nHence the inclusion \\eqref{fsdfsdgsdf} splits.\n\n\\subsubsection{Variables}\nConsider a fixed integral basis of the free abelian group $\\Lambda$,\n\\[ b_1, \\ldots, b_n \\in \\Lambda. \\]\nWe will identify an element $z = (z_1, \\ldots, z_n) \\in {\\mathbb{C}}^n$ with the\nelement $\\sum_{i=1}^{n} z_i b_i$. Hence we obtain the identification\n\\[ {\\mathbb{C}}^n \\cong \\Lambda \\otimes {\\mathbb{C}} = H^2_{\\perp}(X,{\\mathbb{C}}). \\]\nGiven a class $\\beta \\in H_2(X,{\\mathbb{Z}})$, we write\n\\begin{equation}\\label{eq:zetabeta}\n\\zeta^{\\beta} = \\exp( z \\cdot \\beta ) = \\prod_{i=1}^{n} \\zeta_i^{b_i \\cdot \\beta}\n\\end{equation}\nwhere $\\zeta_i = e(z_i)$ and $\\cdot$ is the intersection pairing.\n\n\\subsubsection{Pairings and intersection matrices} \\label{subsubsection_pairing_and_intersection_matrix}\nEvery element $\\mathsf{k} \\in H_{2}(B,{\\mathbb{Z}})$ defines a\nsymmetric (possibly degenerate) bilinear form on $H^2_{\\perp}$ by\n\\[ (\\alpha, \\alpha')_\\mathsf{k} = \\int_B \\pi_{\\ast}(\\alpha \\cup \\alpha') \\cdot \\mathsf{k}. \\]\nThe restriction of $(\\cdot, \\cdot )_\\mathsf{k}$ to $\\Lambda$ takes integral values. 
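\n\n\\emph{Example.} The following illustration uses only classical facts about the rational elliptic surface, which we assume here and do not prove: $B_0^2 = -1$, $F^2 = 0$, $B_0 \\cdot F = 1$, and the orthogonal complement of $B_0, F$ in $H^2(X,{\\mathbb{Z}})$ is isomorphic to $E_8(-1)$. Let $\\pi : X \\to \\mathbb{P}^1$ be a rational elliptic surface with section class $B_0$ and let $\\mathsf{k} = [\\mathbb{P}^1]$ be the fundamental class of the base. Then the pairing reduces to the intersection form of $X$:\n\\[ ( \\alpha, \\alpha' )_{\\mathsf{k}} = \\int_B \\pi_{\\ast}( \\alpha \\cup \\alpha' ) = \\int_X \\alpha \\cup \\alpha', \\quad \\alpha, \\alpha' \\in H^2_{\\perp}. \\]\nHence $\\Lambda \\cong E_8(-1)$ and, in a suitable basis, $Q_{\\mathsf{k}} = Q_{E_8}$, which is even and positive definite.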
\n\n\\begin{lemma} For every curve class $\\mathsf{k} \\in H_2(B,{\\mathbb{Z}})$\nthe quadratic form $( \\cdot , \\cdot )_\\mathsf{k}$ is even on $\\Lambda$,\nthat is $(\\alpha, \\alpha)_\\mathsf{k} \\in 2 {\\mathbb{Z}}$ for every $\\alpha \\in \\Lambda$.\n\\end{lemma}\n\\begin{proof}\nSince the pairing is linear in $\\mathsf{k}$, it suffices to prove\n$( \\cdot , \\cdot )_{\\mathsf{k} + \\ell}$ and $( \\cdot , \\cdot )_{\\ell}$ are even for a suitable class\n$\\ell \\in H_2(B,{\\mathbb{Z}})$.\nLet $C \\subset B$ be a curve in class $\\mathsf{k}$.\nWe can assume $C$ is reduced and irreducible (otherwise prove the claim for each\nreduced irreducible component).\nBy embedding $B$ into a projective space and choosing suitable hyperplane sections\nwe can find\\footnote{\nAssume $C \\subset B \\subset \\mathbb{P}^n$, and let $K_d$ be the\nkernel of $H^0({\\mathcal O}_{\\mathbb{P}^n}(d)) \\to H^0( {\\mathcal O}_{\\mathbb{P}^n}(d)|_C)$\nfor $d \\gg 0$.\nFor generic sections $f_1, \\ldots, f_{m} \\in K_d$, $m=\\dim B - 1$,\nthe intersection $\\Sigma = B \\cap_i V(f_i)$ is a curve which contains $C$.\nThe key step is to show $\\Sigma = C + D$ for a smooth curve $D$ which does not contain $C$;\nall other conditions follow from a usual Bertini argument.\nTo show that $\\Sigma$ is of multiplicity $1$ at $C$, let $p \\in C$ be a point at which $C$ is smooth\nand consider the projectivized normal bundle $P$ of $C$ inside $B$ at $p$.\nThe set of $f_1, \\ldots, f_m$ which vanish at\nsome $v \\in P$ simultaneously is a closed co-dimension $m$ subset.\nSince $\\dim(P) = m-1$, by choosing $f_i$ generic we can guarantee the\ntangent spaces to $\\Sigma(f)$ and $C$ are the same at $p$;\nhence the multiplicity of $C$ in $\\Sigma$ is $1$.}\na curve $D \\subset B$ not containing $C$ and\na deformation of $C \\cup D$ to a curve $D'$,\nsuch that $D, D'$ are smooth and $X_D$ and $X_{D'}$ are smooth elliptic surfaces over $D, D'$ respectively;\nhere we let $X_{\\Sigma} = \\pi^{-1}(\\Sigma)$ 
for $\\Sigma \\subset B$.\nHence it suffices to show $( \\cdot , \\cdot )_\\mathsf{k}$ is even if $\\mathsf{k}$ is represented\nby a smooth curve $C$ such that $X_C$ is smooth. Let $\\alpha \\in \\Lambda$.\nSince $\\alpha|_{X_C}$ is of type $(1,1)$ and orthogonal to the section and fiber class\nthe claim now follows from the adjunction formula, see e.g. \\cite[Thm.7.4]{Shioda}.\n\\end{proof}\n\nThe matrix of $-( \\cdot , \\cdot )_{\\mathsf{k}}$\nwith respect to the basis $\\{ b_i \\}$ is denoted by\n\\[ Q_{\\mathsf{k}} \\in M_{n \\times n}({\\mathbb{Z}}). \\]\nHence for all $v = (v_1, \\ldots, v_n)$ and $v' = (v'_1, \\ldots, v'_n)$ in ${\\mathbb{Q}}^n$ we have\n\\[ \\Big( \\sum_i v_i b_i\\, , \\sum_{i} v'_i b_i \\Big)_{\\mathsf{k}} = - v^T Q_{\\mathsf{k}} v'. \\]\nIf $\\mathsf{k}$ is a curve class, the matrix $Q_{\\mathsf{k}}$ has even diagonal entries.\n\n\n\n\\subsection{Gromov--Witten classes and conjectures}\n\\subsubsection{Definition}\nLet $\\beta \\in H_2(X,{\\mathbb{Z}})$ be a curve class,\nlet $\\mathsf{k} = \\pi_{\\ast} \\beta \\in H_2(B,{\\mathbb{Z}})$ and let\n\\[ {\\overline M}_{g,n}(X,\\beta) \\]\nbe the moduli space of genus $g$ stable maps to $X$ in class $\\beta$ with $n$ markings.\n\nFor all $g,n,\\mathsf{k}$ such that\\footnote{\nIf $\\mathsf{k}=0$ and $2g-2+n \\leq 0$\nthe moduli space ${\\overline M}_{g,n}(B, \\mathsf{k})$ is empty,\nbut ${\\overline M}_{g,n}(X, \\beta)$ for some $\\beta>0$ with $\\pi_{\\ast} \\beta = \\mathsf{k}$ may be non-empty. In this case no induced morphism exists.}\n\\[ \\mathsf{k} > 0 \\quad \\text{ or } \\quad \n2g-2+n > 0\n\\]\nthe elliptic fibration $\\pi$ induces a morphism\n\\[ \\pi : {\\overline M}_{g,n}(X, \\beta) \\to {\\overline M}_{g,n}(B, \\mathsf{k}). \\]\nConsider cohomology classes \n\\[ \\gamma_1, \\ldots, \\gamma_n \\in H^{\\ast}(X). 
\\]\nWe define the \\emph{$\\pi$-relative Gromov--Witten class}\n\\[ {\\mathcal C}^{\\pi}_{g,\\beta}(\\gamma_1, \\ldots, \\gamma_n)\n = \\pi_{\\ast}\\left( \\left[ {\\overline M}_{g,n}(X,\\beta) \\right]^{\\text{vir}} \\prod_{i=1}^{n} {\\mathrm{ev}}_i^{\\ast}(\\gamma_i) \\right) \\in H_{\\ast}( {\\overline M}_{g,n}(B, \\mathsf{k}) ).\n\\]\n\n\\subsubsection{Quasi-Jacobi forms}\nLet $\\mathsf{k} \\in H_2(B,{\\mathbb{Z}})$ be a fixed class. Consider the generating series\n\\[\n{\\mathcal C}^{\\pi}_{g, \\mathsf{k}}(\\gamma_1, \\ldots, \\gamma_n) = \\sum_{\\pi_{\\ast} \\beta = \\mathsf{k}} {\\mathcal C}^{\\pi}_{g,\\beta}(\\gamma_1, \\ldots, \\gamma_n) q^{W \\cdot \\beta} \\zeta^{\\beta}\n\\]\nwhere the sum is over all curve classes $\\beta \\in H_2(X,{\\mathbb{Z}})$ with $\\pi_{\\ast} \\beta = \\mathsf{k}$.\nBy definition,\n\\[ {\\mathcal C}^{\\pi}_{g, \\mathsf{k}}(\\gamma_1, \\ldots, \\gamma_n) \\in H_{\\ast}( {\\overline M}_{g,n}(B, \\mathsf{k}) ) \\otimes {\\mathbb{Q}}[[q^{\\frac{1}{2}}, \\zeta^{\\pm 1}]]. \\]\n\nRecall the space $\\mathrm{QJac}_{L}$ of quasi-Jacobi forms of index $L$, and let \n\\[ \\Delta(q) = q \\prod_{m \\geq 1} (1-q^m)^{24} \\]\nbe the modular discriminant.\nThe following is a refinement of \\cite[Conj.A]{HAE}.\n\n\\begin{conjecture} \\label{Conj_Quasimodularity}\nThe series ${\\mathcal C}^{\\pi}_{g, \\mathsf{k}}(\\gamma_1, \\ldots, \\gamma_n)$ is a cycle-valued quasi-Jacobi form\nof index $Q_{\\mathsf{k}}\/2$:\n\\[ \n{\\mathcal C}^{\\pi}_{g, \\mathsf{k}}(\\gamma_1, \\ldots, \\gamma_n)\n\\, \\in \\,\nH_{\\ast}({\\overline M}_{g,n}(B, \\mathsf{k})) \\otimes\n\\frac{1}{\\Delta(q)^m}\n\\mathrm{QJac}_{Q_{\\mathsf{k}}\/2}\n\\]\nwhere $m = -\\frac{1}{2} c_1(N_{\\iota}) \\cdot \\mathsf{k}$.\n\\end{conjecture}\n\n\\subsubsection{Holomorphic anomaly equation}\nRecall the differential operator on $\\mathrm{QJac}_L$ induced by the non-holomorphic variable $\\nu$,\n\\[ \\mathsf{T}_q = \\frac{d}{dC_2} : \\mathrm{QJac}_L \\to \\mathrm{QJac}_L. 
\\]\nSince $\\Delta(q)$ is a modular form, we have\n\\[ \\mathsf{T}_q \\Delta(q) = 0. \\]\nWe conjecture a holomorphic anomaly equation for the classes ${\\mathcal C}^{\\pi}_{g,\\mathsf{k}}$.\nThe equation is exactly the same as in \\cite[Conj.B]{HAE}.\n\nConsider the diagram\n\\[\n\\begin{tikzcd}\n{\\overline M}_{g,n}(B,\\mathsf{k}) & M_{\\Delta} \\ar[swap]{l}{\\iota} \\ar{d} \\ar{r} & {\\overline M}_{g-1, n+2}(B, \\mathsf{k}) \\ar{d}{{\\mathrm{ev}}_{n+1} \\times {\\mathrm{ev}}_{n+2}} \\\\\n& B \\ar{r}{\\Delta} & B \\times B\n\\end{tikzcd}\n\\]\nwhere $\\Delta$ is the diagonal, $M_{\\Delta}$ is the fiber product\nand $\\iota$ is the gluing map along the last two points.\nSimilarly, for every\nsplitting $g = g_1 + g_2$, $\\{ 1, \\ldots, n \\} = S_1 \\sqcup S_2$\nand $\\mathsf{k} = \\mathsf{k_1} + \\mathsf{k_2}$\nconsider\n\\[\n\\begin{tikzcd}\n{\\overline M}_{g,n}(B,\\mathsf{k}) & M_{\\Delta, \\mathsf{k}_1, \\mathsf{k}_2} \\ar{d} \\ar[swap]{l}{j} \\ar{r} & \n{\\overline M}_{g_1, S_1 \\sqcup \\{ \\bullet \\}}(B, \\mathsf{k_1})\n\\times {\\overline M}_{g_2, S_2 \\sqcup \\{ \\bullet \\}}(B, \\mathsf{k_2}) \\ar{d}{{\\mathrm{ev}}_{\\bullet} \\times {\\mathrm{ev}}_{\\bullet}} \\\\\n& B \\ar{r}{\\Delta} & B \\times B\n\\end{tikzcd}\n\\]\nwhere $M_{\\Delta, \\mathsf{k}_1, \\mathsf{k}_2}$ is the fiber product and\n$j$ is the gluing map along the marked points labeled by $\\bullet$.\n\n\n\\begin{conjecture} \\label{Conj_HAE} On ${\\overline M}_{g,n}(B, \\mathsf{k})$,\n\\begin{align*}\n\\mathsf{T}_q {\\mathcal C}^{\\pi}_{g,\\mathsf{k}}( \\gamma_1, \\ldots, \\gamma_n )\n\\ =\\ & \n\\iota_{\\ast} \\Delta^{!}\n{\\mathcal C}^{\\pi}_{g-1,\\mathsf{k}}( \\gamma_1, \\ldots, \\gamma_n, \\mathsf{1}, \\mathsf{1} ) \\\\\n&\n+ \\sum_{\\substack{ g= g_1 + g_2 \\\\\n\\{1,\\ldots, n\\} = S_1 \\sqcup S_2 \\\\\n\\mathsf{k} = \\mathsf{k}_1 + \\mathsf{k}_2}}\nj_{\\ast} \\Delta^{!} \\left(\n{\\mathcal C}^{\\pi}_{g_1, \\mathsf{k}_1}( \\gamma_{S_1}, \\mathsf{1}) \\boxtimes\n{\\mathcal 
C}^{\\pi}_{g_2, \\mathsf{k}_2}( \\gamma_{S_2}, \\mathsf{1} ) \\right) \\\\\n&\n- 2 \\sum_{i=1}^{n} \n{\\mathcal C}^{\\pi}_{g,\\mathsf{k}}( \\gamma_1, \\ldots, \\gamma_{i-1},\n \\pi^{\\ast} \\pi_{\\ast} \\gamma_i , \\gamma_{i+1}, \\ldots, \\gamma_n ) \\cdot \\psi_i,\n\\end{align*}\nwhere $\\psi_i \\in H^2({\\overline M}_{g,n}(B,\\mathsf{k}))$\nis the cotangent line class at the $i$-th marking.\n\\end{conjecture}\n\nSince the moduli space of stable maps in negative genus is empty,\nthe corresponding terms in Conjecture~\\ref{Conj_HAE} vanish.\nFurther, the sum in the second term on the right runs over all splittings for which the\nmoduli spaces ${\\overline M}_{g_i, |S_i|+1}(B, \\mathsf{k}_i)$ are stable, or equivalently, for which the classes\n${\\mathcal C}_{g_i,\\mathsf{k}_i}^{\\pi}(\\gamma_{S_i}, \\mathsf{1})$ are defined.\nIn particular, if $g_i = 0$ and $\\mathsf{k}_i = 0$ we require $|S_i| \\geq 2$.\n\nBy Section~\\ref{Subsubsection_Specialization_to_quasimodular_forms} quasi-Jacobi forms specialize to quasi-modular forms\nunder $\\zeta = 1$, and the specialization map commutes with $\\mathsf{T}_q$.\nHence Conjectures~\\ref{Conj_Quasimodularity} and \\ref{Conj_HAE}\ngeneralize and are compatible with \\cite[Conj.A and B]{HAE}. 
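\n\n\\emph{Example.} As a plausibility check of Conjecture~\\ref{Conj_Quasimodularity} we record a classical example; the input (the Bryan--Leung evaluation of the genus $0$ invariants of the rational elliptic surface) is assumed here and not proven in this paper. Let $\\pi : X \\to \\mathbb{P}^1$ be a rational elliptic surface with section class $B_0$ and let $\\mathsf{k} = [\\mathbb{P}^1]$. Then $c_1(N_{\\iota}) \\cdot \\mathsf{k} = B_0^2 = -1$, hence $m = \\frac{1}{2}$, and $Q_{\\mathsf{k}} = Q_{E_8}$ in a suitable basis (see Section~\\ref{Section_E8_lattice}), so Conjecture~\\ref{Conj_Quasimodularity} predicts\n\\[ {\\mathcal C}^{\\pi}_{0, \\mathsf{k}} \\, \\in \\, \\frac{1}{\\Delta(q)^{1\/2}} \\mathrm{QJac}_{Q_{E_8}\/2}. \\]\nSince $W = B_0 + \\frac{1}{2} F$, the section classes $\\beta = B_0 + n F$ (the classes with vanishing $\\Lambda$-component) satisfy $W \\cdot \\beta = n - \\frac{1}{2}$, and by the Bryan--Leung formula their genus $0$ invariants contribute\n\\[ \\sum_{n \\geq 0} N_{B_0 + nF} \\, q^{n - \\frac{1}{2}} = q^{-\\frac{1}{2}} \\prod_{d \\geq 1} (1-q^d)^{-12} = \\Delta(q)^{-\\frac{1}{2}}, \\]\nwhich is consistent with the prediction.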
\n\nWe have always assumed here that the elliptic fibration\nhas integral fibers, a section, and $h^{2,0}(X) = 0$,\nsee \\mbox{(i-iii)} in Section~\\ref{Subsubsection_ellip_fibr_definition}.\nWe expect Conjectures~\\ref{Conj_Quasimodularity} and~\\ref{Conj_HAE}\nto hold without these assumptions if suitable modifications are made:\nIt is plausible that (i) can be removed without any modification.\nIf we remove (ii), we need to work with a multisection of the fibration,\nwhich leads to quasi-Jacobi forms which are modular\nwith respect to $\\Gamma(N)$, where\n$N$ is the degree of the multisection over the base.\nIf (iii) is violated, then the Gromov--Witten theory of $X$ mostly vanishes\nby a Noether--Lefschetz argument.\nUsing instead a nontrivial reduced Gromov--Witten theory\n(such as \\cite{KT} for algebraic surfaces satisfying $h^{2,0} > 0$)\nthen forces some basic modifications to the holomorphic anomaly equation,\nsee e.g. Section~\\ref{Section_Abelian_surfaces} for the case of the abelian surface.\n\n\n\n\n\\section{Consequences of the conjectures} \\label{Section_Consequences_of_the_Conjecture}\n\\subsection{A weight refinement} \\label{Subsection_weight_refinement}\nDefine a modified degree function $\\underline{\\deg}(\\gamma)$\non $H^{\\ast}(X)$ by the assignment\n\\begin{equation*} \\label{modified_degree_function}\n\\underline{\\deg}(\\gamma) =\n\\begin{cases}\n2 & \\text{if } \\gamma \\in \\mathrm{Im}(T_+) \\\\\n1 & \\text{if } \\gamma \\in \\mathrm{Ker}( T_+ ) \\cap \\mathrm{Ker}(T_-) \\\\\n0 & \\text{if } \\gamma \\in \\mathrm{Im}(T_-) \\\\\n\\end{cases}\n\\end{equation*}\n\nThe following is parallel to \\cite[Appendix B]{HAE}.\n\n\\begin{lemmastar} \\label{Lemma_Weight_statement}\nAssume Conjectures~\\ref{Conj_Quasimodularity} and \\ref{Conj_HAE} hold.\nThen for any $\\underline{\\deg}$-homogeneous classes\n$\\gamma_1, \\ldots, \\gamma_n \\in H^{\\ast}(X)$\nand $\\mathsf{k} \\in H_2(B,{\\mathbb{Z}})$ we have\n\\[\n{\\mathcal C}^{\\pi}_{g, \\mathsf{k}}(\\gamma_1, 
\\ldots, \\gamma_n)\n\\, \\in \\,\nH_{\\ast}({\\overline M}_{g,n}(B, \\mathsf{k})) \\otimes \\frac{1}{\\Delta(q)^m} \\mathrm{QJac}_{\\ell, Q_{\\mathsf{k}}\/2}\n\\]\nwhere $m = -\\frac{1}{2} c_1(N_{\\iota}) \\cdot \\mathsf{k}$\nand $\\ell = 2g - 2 + 12m + \\sum_i \\underline{\\deg}(\\gamma_i)$.\n\\end{lemmastar}\n\n\\subsection{Disconnected Gromov--Witten classes} \\label{Subsection_Disconnected_GW_classes}\nWe reformulate the holomorphic anomaly equation of Conjecture~\\ref{Conj_HAE} for disconnected Gromov--Witten classes.\nLet\n\\[ {\\overline M}_{g,n}^{\\bullet}(B,\\mathsf{k}) \\]\nbe the moduli space of stable maps $f : C \\to B$ from possibly \\emph{disconnected}\ncurves of genus $g$ in class $\\mathsf{k}$, with the requirement\nthat for every connected component $C' \\subset C$ \nat least one of the following holds:\n\\begin{enumerate}\n\\item[(i)] $f|_{C'}$ is non-constant,\nor\n\\item[(ii)] $C'$ has genus $g'$ and carries $n'$ markings with $2g'-2+n' > 0$.\n\\end{enumerate}\nLet ${\\overline M}_{g,n}'(X,\\beta)$\nbe the moduli space of stable maps $f : C \\to X$ from possibly disconnected\ncurves of genus $g$ in class $\\beta$, with the requirement\nthat for every connected component $C' \\subset C$ \nat least one of the following holds:\n\\begin{enumerate}\n \\item[(i)] $\\pi \\circ f|_{C'}$ is non-constant, or\n \\item[(ii)] $C'$ has genus $g'$ and carries $n'$ markings with $2g'-2+n' > 0$.\n \n\\end{enumerate}\nFor all\\footnote{Here ${\\overline M}_{g,n}^{\\bullet}(B,\\mathsf{k})$ is empty if and only if ${\\overline M}_{g,n}'(X,\\beta)$ is empty,\nso we do not need to exclude any values of $(g,\\mathsf{k})$.}\n$g \\in {\\mathbb{Z}}$ and curve classes $\\mathsf{k}$ the fibration $\\pi$ induces a map\n\\[ \\pi : {\\overline M}_{g,n}'(X,\\beta) \\to {\\overline M}_{g,n}^{\\bullet}(B,\\mathsf{k}). 
\]\nDefine the disconnected Gromov--Witten classes by\n\[\n{\mathcal C}_{g,\mathsf{k}}^{\pi, \bullet}(\gamma_1, \ldots, \gamma_n)\n=\n\sum_{\pi_{\ast} \beta = \mathsf{k}} \zeta^{\beta} q^{W \cdot \beta}\n\pi_{\ast} \left( \left[ {\overline M}'_{g,n}(X,\beta) \right]^{\text{vir}} \prod_i {\mathrm{ev}}_i^{\ast}(\gamma_i) \right).\n\]\nThe right hand side is a series with coefficients in the homology of ${\overline M}_{g,n}^{\bullet}(B, \mathsf{k})$.\n\nSince the disconnected classes ${\mathcal C}^{\pi, \bullet}_{g,\mathsf{k}}$ can be expressed in terms of the connected classes ${\mathcal C}^{\pi}_{g,\mathsf{k}}$ and vice versa, Conjecture~\ref{Conj_Quasimodularity} is equivalent to\nthe quasi-Jacobi property of the disconnected theory:\n\[\n{\mathcal C}_{g,\mathsf{k}}^{\pi, \bullet}(\gamma_1, \ldots, \gamma_n)\n\in H_{\ast}( {\overline M}_{g,n}^{\bullet}(B, \mathsf{k}) ) \otimes\n\frac{1}{\Delta(q)^m} \mathrm{QJac}_{Q_{\mathsf{k}}\/2}\n\]\nwhere $m = -\frac{1}{2} c_1(N_{\iota}) \cdot \mathsf{k}$.\nSimilarly, Conjecture~\ref{Conj_HAE} is equivalent to the\nfollowing disconnected version of the holomorphic anomaly equation:\n\begin{lemmastar} \label{Lemma_discHAE} Conjecture~\ref{Conj_HAE} is equivalent to\n\begin{align*}\n\mathsf{T}_q {\mathcal C}^{\pi, \bullet}_{g,\mathsf{k}}( \gamma_1, \ldots, \gamma_n )\n\ =\ & \n\iota_{\ast} \Delta^{!}\n{\mathcal C}^{\pi, \bullet}_{g-1,\mathsf{k}}( \gamma_1, \ldots, \gamma_n, \mathsf{1}, \mathsf{1} ) \\\n&\n- 2 \sum_{i=1}^{n} \n\psi_i \cdot {\mathcal C}^{\pi, \bullet}_{g,\mathsf{k}}( \gamma_1, \ldots, \gamma_{i-1},\n \pi^{\ast} \pi_{\ast} \gamma_i , \gamma_{i+1}, \ldots, \gamma_n ).\n\end{align*}\n\end{lemmastar}\n\n\n\subsection{Elliptic holomorphic anomaly equation} \label{Subsection_EHAE}\nRecall the anomaly operator with respect to the elliptic parameter:\n\[ \mathsf{T}_{\lambda} : \mathrm{QJac}_{k,L} \to \mathrm{QJac}_{k-1, L},
\\quad \\lambda \\in \\Lambda \\]\n(recall we identify $\\Lambda$ with ${\\mathbb{Z}}^n$ here).\nThe anomaly equation of ${\\mathcal C}_g(\\ldots)$ with respect to the operator $\\mathsf{T}_{\\lambda}$ reads as follows.\n\n\\begin{lemmastar} \\label{Lemma_EllHAE}Assume Conjectures~\\ref{Conj_Quasimodularity} and \\ref{Conj_HAE} hold.\nThen\n\\[\n\\mathsf{T}_{\\lambda} {\\mathcal C}^{\\pi}_{g,k}(\\gamma_1, \\ldots, \\gamma_n) =\n\\sum_{i=1}^{n} {\\mathcal C}^{\\pi}_{g,k}(\\gamma_1, \\ldots, \\gamma_{i-1}, A(\\lambda) \\gamma_i, \\gamma_{i+1}, \\ldots, \\gamma_n),\n\\]\nfor any $\\lambda \\in \\Lambda$, where $A(\\lambda) : H^{\\ast}(X) \\to H^{\\ast}(X)$ is defined by\n\\[ A(\\lambda) \\gamma = \\lambda \\cup \\pi^{\\ast} \\pi_{\\ast}( \\gamma) - \\pi^{\\ast} \\pi_{\\ast}( \\lambda \\cup \\gamma ), \\quad \\gamma \\in H^{\\ast}(X). \\]\n\\end{lemmastar}\n\\begin{proof}\nLet $\\lambda \\in \\Lambda$ and recall from Section~\\ref{Subsubsection_quasiJacobi_Differentialoperators} the commutation relation\n\\[ \\left[ \\mathsf{T}_q , D_{\\lambda} \\right] = -2 \\mathsf{T}_{\\lambda}. \\]\n\nLet $p : {\\overline M}_{g,n+1}(B, \\mathsf{k}) \\to {\\overline M}_{g,n}(B,\\mathsf{k})$ be the map that forgets the last marked point.\nWe have\n\\[ D_{\\lambda} {\\mathcal C}_{g,\\mathsf{k}}^{\\pi}(\\gamma_1, \\ldots, \\gamma_n) = p_{\\ast} {\\mathcal C}_{g,\\mathsf{k}}^{\\pi}(\\gamma_1, \\ldots, \\gamma_n, \\lambda). \\]\nHence we obtain\n\\[\n-2 \\mathsf{T}_{\\lambda} {\\mathcal C}^{\\pi}_{g,k}(\\gamma_1, \\ldots, \\gamma_n)\n=\np_{\\ast} \\mathsf{T}_q {\\mathcal C}_{g,\\mathsf{k}}(\\gamma_1, \\ldots, \\gamma_n, \\lambda)\n- D_{\\lambda} \\mathsf{T}_q {\\mathcal C}_{g,\\mathsf{k}}(\\gamma_1, \\ldots, \\gamma_n).\n\\]\nOnly two terms contribute in this difference. The first arises from the second term in the holomorphic anomaly equation\non ${\\overline M}_{g,n+1}(B, \\mathsf{k})$. 
The summand with $g_i = 0$ and $n+1 \in S_i$ with $|S_i| = 2$ contributes\n\[ 2 \sum_{i=1}^{n} {\mathcal C}_{g,\mathsf{k}}^{\pi}(\gamma_1, \ldots, \pi^{\ast} \pi_{\ast}( \gamma_i \cup \lambda ), \ldots, \gamma_n ). \]\nThe second contribution arises from the third term of the holomorphic anomaly equation when comparing the classes $\psi_{i}$ under pullback by $p$. It is\n\[ -2 \sum_{i=1}^{n} {\mathcal C}_{g,\mathsf{k}}^{\pi}(\gamma_1, \ldots, \lambda \cup \pi^{\ast} \pi_{\ast}( \gamma_i ), \ldots, \gamma_n ). \]\nAdding up yields the claim.\n\end{proof}\n\nConsider the exponential $\exp(A(\lambda))$, which acts on $\gamma \in H^{\ast}(X)$ by\n\[\n(\exp A(\lambda) )\gamma = \gamma + \lambda \cup \pi^{\ast} \pi_{\ast}( \gamma) - \pi^{\ast} \pi_{\ast}( \lambda \cup \gamma )\n- \frac{1}{2} \pi^{\ast}\left( \pi_{\ast}(\lambda^2) \cdot \pi_{\ast}(\gamma) \right).\n\]\nThe series terminates since $A(\lambda)^3 = 0$, which follows from the projection formula together with $\pi_{\ast} \lambda = 0$ for all $\lambda \in \Lambda$.\nLemma*~\ref{Lemma_EllHAE} then yields\n\begin{align*}\n\exp( \mathsf{T}_{\lambda} ) {\mathcal C}^{\pi}_{g,\mathsf{k}}(\gamma_1, \ldots, \gamma_n)\n& =\n{\mathcal C}^{\pi}_{g,\mathsf{k}}( \exp(A(\lambda)) \gamma_1, \ldots, \exp(A(\lambda)) \gamma_n).\n\end{align*}\nWe will see in Section~\ref{Section_The_elliptic_transformation_law} how in good situations this is related to the automorphism defined\nby adding the section corresponding to the class $\lambda$.\n\n\n\n\n\n\subsection{The elliptic transformation law}\n\label{Section_The_elliptic_transformation_law}\nRecall the projection $p_{\perp}$ to the lattice $\Lambda$ from Section~\ref{Subsubsection_LatticeLambda}.\nThroughout Section~\ref{Section_The_elliptic_transformation_law} we assume that the fibration\n$\pi : X \to B$ satisfies the following condition,\nwhich holds for example for the rational elliptic surface:\n\n\vspace{4pt}\n\noindent\n\textbf{Assumption ($\star$).} For every $\lambda \in \Lambda$ there is a unique section $B_{\lambda} \subset X$\nsuch that
$p_{\\perp}( [ B_{\\lambda} ] ) = \\lambda$.\n\\vspace{4pt}\n\nLet $\\lambda \\in \\Lambda$ and consider the morphism\n\\[ t_{\\lambda} : X \\to X,\\ x \\mapsto (x + B_{\\lambda}(\\pi(x))) \\]\nof fiberwise addition with $B_{\\lambda}$.\nSince $\\pi \\circ t_{\\lambda} = \\pi$ this implies\n\\[\n{\\mathcal C}^{\\pi}_{g, t_{\\lambda \\ast} \\beta}(t_{\\lambda \\ast} \\gamma_1, \\ldots, t_{\\lambda \\ast} \\gamma_n)\n= {\\mathcal C}^{\\pi}_{g, \\beta}(\\gamma_1, \\ldots, \\gamma_n).\n\\]\nLet us write ${\\mathcal C}_{g,\\mathsf{k}}^{\\pi}(\\ldots)(z)$ to denote the dependence\nof ${\\mathcal C}_{g,\\mathsf{k}}^{\\pi}(\\ldots)$ on the variable $z \\in \\Lambda \\otimes {\\mathbb{C}}$. \nFrom the last equation we obtain\n\\begin{align*}\n& {\\mathcal C}^{\\pi}_{g, \\mathsf{k}}(\\gamma_1, \\ldots, \\gamma_n)(z) \\\\\n& = \\sum_{\\pi_{\\ast} \\beta = \\mathsf{k}}\n{\\mathcal C}^{\\pi}_{g,\\beta}(t_{\\lambda \\ast} \\gamma_1, \\ldots,\nt_{\\lambda \\ast} \\gamma_n) q^{(t_{\\lambda \\ast} W) \\cdot \\beta} e\\big( (t_{\\lambda \\ast} z) \\cdot \\beta \\big) \\\\\n& = e\\left( -\\frac{ \\tau}{2} \\pi_{\\ast}(\\lambda^2) \\cdot \\mathsf{k}\n- \\pi_{\\ast}( z \\cdot \\lambda) \\cdot \\mathsf{k} \\right)\n{\\mathcal C}^{\\pi}_{g,\\mathsf{k}}(t_{\\lambda \\ast} \\gamma_1, \\ldots, t_{\\lambda \\ast} \\gamma_n)\n(z + \\lambda \\tau) \\\\\n& = e\\left( \\frac{ \\tau}{2} \\lambda^T Q_{\\mathsf{k}} \\lambda + \\lambda^T Q_{\\mathsf{k}} z \\right)\n{\\mathcal C}^{\\pi}_{g,\\mathsf{k}}(t_{\\lambda \\ast} \\gamma_1, \\ldots, t_{\\lambda \\ast} \\gamma_n)\n(z + \\lambda \\tau).\n\\end{align*}\nRearranging the terms slightly yields\n\\begin{multline} \\label{ffsdghhjjjhhh}\n{\\mathcal C}^{\\pi}_{g,\\mathsf{k}}(\\gamma_1, \\ldots, \\gamma_n)\n(z + \\lambda \\tau)\n \\\\ =\ne\\left( -\\frac{1}{2} \\lambda^T Q_{\\mathsf{k}} \\lambda - \\lambda^T Q_{\\mathsf{k}} z \\right)\n{\\mathcal C}^{\\pi}_{g,\\mathsf{k}}(t_{-\\lambda \\ast} \\gamma_1, \\ldots, t_{-\\lambda \\ast} 
\\gamma_n)(z).\n\\end{multline}\nWe obtain the following.\n\n\\begin{lemma} \\label{Lemma_elliptic_transf_law}\nAssume $\\pi : X \\to B$ satisfies Assumption ($\\star$). If every $\\gamma_i$ is translation invariant,\ni.e. $t_{\\lambda \\ast} \\gamma_i = \\gamma_i$ for all $\\lambda \\in \\Lambda$,\nthen ${\\mathcal C}^{\\pi}_{g,\\mathsf{k}}(\\gamma_1, \\ldots, \\gamma_n)$ satisfies the elliptic transformation law of Jacobi forms:\n\\[\n{\\mathcal C}^{\\pi}_{g,\\mathsf{k}}(\\gamma_1, \\ldots, \\gamma_n)\n(z + \\lambda \\tau)\n=\ne\\left( -\\frac{1}{2} \\lambda^T Q_{\\mathsf{k}} \\lambda - \\lambda^T Q_{\\mathsf{k}} z \\right)\n{\\mathcal C}_{g,\\mathsf{k}}^{\\pi}(\\gamma_1, \\ldots, \\gamma_n)(z)\n\\]\nfor all $\\lambda \\in \\Lambda$.\n\\end{lemma}\n\n\nEven if the $\\gamma_i$ are not translation invariant we\nhave the following relationship to the transformation law of quasi-Jacobi forms.\nRecall the endomorphism $A(\\lambda)$ from Section~\\ref{Subsection_EHAE}.\nFor the rational elliptic surface we have\\footnote{It would be interesting\nto know for which elliptic fibrations\n\\eqref{dfgfdgfgfd} holds.\n}\n\\begin{equation} t_{\\lambda \\ast} = \\exp A(\\lambda) \\label{dfgfdgfgfd} \\end{equation}\nfor all $\\lambda \\in \\Lambda$.\nAssuming\nConjectures~\\ref{Conj_Quasimodularity} and \\ref{Conj_HAE} we can rewrite \\eqref{ffsdghhjjjhhh} as\n\\begin{align*}\n& {\\mathcal C}^{\\pi}_{g,\\mathsf{k}}(\\gamma_1, \\ldots, \\gamma_n)\n(z + \\lambda \\tau) \\\\\n=\\, &\ne\\left( -\\frac{1}{2} \\lambda^T Q_{\\mathsf{k}} \\lambda - \\lambda^T Q_{\\mathsf{k}} z \\right)\n{\\mathcal C}^{\\pi}_{g,\\mathsf{k}}(\\exp(A(-\\lambda)) \\gamma_1, \\ldots, \\exp(A(-\\lambda)) \\gamma_n) \\\\\n=\\, &\ne\\left( -\\frac{1}{2} \\lambda^T Q_{\\mathsf{k}} \\lambda - \\lambda^T Q_{\\mathsf{k}} z \\right)\n\\exp( - \\mathsf{T}_{\\lambda}) {\\mathcal C}_{g,\\mathsf{k}}^{\\pi}(\\gamma_1, \\ldots, \\gamma_n),\n\\end{align*}\nwhich is the elliptic transformation law of quasi-Jacobi forms 
stated in Lemma~\\ref{Lemma_ell_trans_law_for_quasi_Jac}.\n\n\n\n\n\n\\subsection{Quasi-modular forms} \\label{Subsubsection_elliptic_fibrations_quasimodularforms}\nThe elliptic periods (i.e. $\\zeta^{\\alpha}$-coefficients)\nof a quasi-Jacobi form are quasimodular forms,\nsee Proposition~\\ref{QJac_Prop2}.\nTogether with Conjecture~\\ref{Conj_Quasimodularity} this leads to a basic quasi-modularity statement for elliptic fibrations as follows.\nLet $\\mathsf{k} \\in H_2(B,{\\mathbb{Z}})$ be a curve class, and consider the pairing on $H^2(X,{\\mathbb{Z}})$ defined by\n\\begin{equation}\n\\quad ( \\alpha , \\alpha' )_{\\mathsf{k}} = \\int_{\\mathsf{k}} \\pi_{\\ast}(\\alpha \\cdot \\alpha') \\quad \\text{ for all } \n\\alpha, \\alpha' \\in H^2(X,{\\mathbb{Z}}).\n\\label{pairingg}\n\\end{equation}\nThroughout Section~\\ref{Subsubsection_elliptic_fibrations_quasimodularforms} we make the\nfollowing assumption which is equivalent to the positive definiteness of $Q_{\\mathsf{k}}$\nand holds in many cases of interest\\footnote{On\nan elliptic surface satisfying $h^{2,0} = 0$ the assumption holds by the Hodge index theorem whenever $k \\neq 0$.\n}.\n\n\\vspace{4pt}\n\\noindent \\textbf{Assumption ($\\dag$).} The restriction of $( \\cdot, \\cdot )_{\\mathsf{k}}$ to $\\Lambda$ is negative-definite.\n\n\\vspace{7pt}\nConsider the cohomology classes on $B$ orthogonal to $\\mathsf{k}$,\n\\[ \\mathsf{k}^{\\perp} = \\left\\{ \\gamma \\in H^2(B,{\\mathbb{Z}})\\, \\middle|\\, \\langle \\gamma, \\mathsf{k} \\rangle = 0 \\right\\} \\]\nwhere $\\langle \\cdot , \\cdot \\rangle$ is the pairing between cohomology and homology on $B$.\nConsider also the null space of $( \\cdot , \\cdot )_\\mathsf{k}$,\n\\[ N_{\\mathsf{k}} = \\left\\{ v \\in H^2(X,{\\mathbb{Z}})\\, \\middle| \\, (v, H^2(X,{\\mathbb{Z}}))_\\mathsf{k} = 0 \\right\\}. \\]\nWe have $\\pi^{\\ast} \\mathsf{k}^{\\perp} \\subset N_{\\mathsf{k}}$. 
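\nThe inclusion is a direct consequence of the projection formula; as a quick check (a sketch, using that the pushforward $\pi_{\ast} \alpha \in H^0(B)$ of a degree-$2$ class is multiplication of the unit by the fiber degree $\alpha \cdot F$):\n

```latex
% Sketch: for $\gamma \in \mathsf{k}^{\perp}$ and any $\alpha \in H^2(X,{\mathbb{Z}})$
% the projection formula gives
\begin{align*}
( \pi^{\ast} \gamma , \alpha )_{\mathsf{k}}
&= \int_{\mathsf{k}} \pi_{\ast}\big( \pi^{\ast} \gamma \cdot \alpha \big)
 = \int_{\mathsf{k}} \gamma \cdot \pi_{\ast}( \alpha ) \\
&= (\alpha \cdot F) \int_{\mathsf{k}} \gamma
 = (\alpha \cdot F) \, \langle \gamma, \mathsf{k} \rangle
 = 0,
\end{align*}
% where $F$ is the class of a fiber, so $\pi_{\ast} \alpha = (\alpha \cdot F) \cdot 1$.
% Hence $\pi^{\ast} \gamma \in N_{\mathsf{k}}$ for every $\gamma \in \mathsf{k}^{\perp}$.
```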
By assumption ($\\dag$) this inclusion is an equality,\n\\[ N_{\\mathsf{k}} = \\pi^{\\ast} \\mathsf{k}^{\\perp}, \\]\nand the induced pairing on $H^2(X,{\\mathbb{Z}}) \/ N_{\\mathsf{k}}$ is of signature $(1, n+1)$.\\footnote{The combination of both statements is equivalent to Assumption ($\\dag$).}\n\nThe dual of $H^2(X,{\\mathbb{Z}}) \/ N_{\\mathsf{k}}$ is naturally identified with the lattice\n\\[ L_{\\mathsf{k}} = \\left\\{ \\beta \\in H_2(X,{\\mathbb{Z}})\\, \\middle| \\, \\pi_{\\ast} \\beta = c \\cdot \\mathsf{k} \\text{ for some } c \\in {\\mathbb{Q}} \\right\\}. \\]\nThe non-degenerate pairing on $H^2(X,{\\mathbb{Z}}) \/ N_{\\mathsf{k}}$ induces a non-degenerate pairing on $L_{\\mathsf{k}}$\nwhich we denote by $( \\cdot , \\cdot )_{\\mathsf{k}}$ as well.\n\nFor any $\\alpha \\in H_2(X,{\\mathbb{Z}}) \/ {\\mathbb{Q}} F$ with $\\pi_{\\ast} \\alpha = \\mathsf{k}$ consider the theta series\n\\[\n{\\mathcal C}^\\pi_{g,\\alpha}(\\gamma_1, \\ldots, \\gamma_n)\n=\n\\sum_{[ \\beta ] = \\alpha }\n{\\mathcal C}^\\pi_{g,\\beta}(\\gamma_1, \\ldots, \\gamma_n) q^{-\\frac{1}{2} \\langle \\beta, \\beta \\rangle_{\\mathsf{k}}} \n\\]\nwhere the sum is over all curve classes $\\beta$ with residue class $\\alpha$ in $H_2(X,{\\mathbb{Z}})\/ {\\mathbb{Q}} F$.\n\n\\begin{lemmastar} \\label{Lemma_Quasiodularformselli} Assume Conjecture~\\ref{Conj_Quasimodularity} and \\ref{Conj_HAE}, and Assumption \\textup{($\\dag$)}.\nLet $\\ell$ be the smallest positive integer such that $\\ell Q_{\\mathsf{k}}^{-1}$ has integral entries and even diagonal.\nThen every ${\\mathcal C}^\\pi_{g,\\alpha}(\\gamma_1, \\ldots, \\gamma_n)$\nis a cycle-valued weakly-holomorphic quasi-modular form of level $\\ell$.\n\\end{lemmastar}\n\nThe Lemma shows that although the elliptic fibration $\\pi : X \\to B$ has a section,\nwe should expect the generating series of Gromov--Witten invariants in the fiber direction\nto be quasi-modular of higher level (with pole at cusps).\nIt is remarkable that these 
higher-index quasi-modular forms\nwhen arranged together appropriately should form $\mathrm{SL}_2({\mathbb{Z}})$-quasi-Jacobi forms.\n\nIf $Q_{\mathsf{k}}$ is unimodular then we obtain level $1$, hence $\mathrm{SL}_2({\mathbb{Z}})$-quasi-modular forms in Lemma*~\ref{Lemma_Quasiodularformselli}.\nFor the rational elliptic surface the level of the quasi-modular form\nis exactly the degree of the curve class over the base.\nThis compares well with the conjectural quasi-modularity of the Gromov--Witten invariants of K3 surfaces in imprimitive classes, see \cite[Sec.7.5]{MPT}.\n\nUsing Proposition~\ref{QJac_Prop2} (ii)\nthe holomorphic anomaly equation for the quasi-Jacobi classes ${\mathcal C}^\pi_{g,\mathsf{k}}(\ldots )$\nyields a holomorphic anomaly equation for the theta-series ${\mathcal C}^\pi_{g, \alpha}( \ldots )$.\nHowever, in the non-unimodular case the result is rather complicated and difficult to handle.\footnote{The unimodular\ncase is further discussed in Section~\ref{Section_Abelian_surfaces}.}\nThe holomorphic anomaly equation takes its simplest form for quasi-Jacobi forms.\n\n\begin{proof}[Proof of Lemma~\ref{Lemma_Quasiodularformselli}]\nLet $\lambda$ be the image of $\alpha$ in $H_{2, \perp}$.\nA computation yields\n\[\n{\mathcal C}^\pi_{g,\alpha}(\gamma_1, \ldots, \gamma_n)\n=\nq^{ - \frac{1}{4} \lambda^T L^{-1} \lambda }\left[ {\mathcal C}^\pi_{g, \mathsf{k}} (\gamma_1, \ldots, \gamma_n) \right]_{\zeta^{\lambda}}\n\]\nwhich implies the Lemma by Proposition~\ref{QJac_Prop2}.\n\end{proof}\n\n\n\n\n\subsection{Calabi--Yau threefolds}\nLet $\pi : X \to B$ be an elliptically fibered Calabi--Yau threefold with section $\iota : B \to X$\nand $h^{2,0}(X) = 0$.\nThe moduli space of stable maps is of virtual dimension $0$.\nFor all $(g,\mathsf{k}) \notin \{ (0,0), (1,0) \}$ define the Gromov--Witten potential\n\[ \mathsf{F}_{g,\mathsf{k}}(q, \zeta)\n= \int_{{\overline M}_{g}(B,\mathsf{k})} {\mathcal
C}^{\\pi}_{g,\\mathsf{k}}()\n= \\sum_{\\pi_{\\ast} \\beta = \\mathsf{k}} q^{W \\cdot \\beta} \\zeta^{\\beta} \\int_{[ {\\overline M}_{g}(X,\\beta) ]^{\\text{vir}}} 1.\n\\]\nBy the Calabi--Yau condition we have $N_{\\iota} \\cong \\omega_B$. Hence Conjecture~\\ref{Conj_Quasimodularity} implies\n\\[ \\mathsf{F}_{g,\\mathsf{k}}(q) \\in \\frac{1}{\\Delta(q)^{-\\frac{1}{2} K_B \\cdot \\mathsf{k}}} \\mathrm{QJac}. \\]\nWe have the following holomorphic anomaly equation (see also \\cite[0.5]{HAE}).\n\\begin{propstar} \\label{Prop_HAE_for_CY3}\nAssume Conjectures \\ref{Conj_Quasimodularity} and~\\ref{Conj_HAE}. Then we have\n\\[\n\\mathsf{T}_q \\mathsf{F}_{g,\\mathsf{k}}\n=\n\\langle \\mathsf{k} + K_B, \\mathsf{k} \\rangle \\mathsf{F}_{g-1, \\mathsf{k}}\n+ \\sum_{\\substack{g=g_1+g_2 \\\\ \\mathsf{k} = \\mathsf{k}_1 + \\mathsf{k}_2}}\n\\langle \\mathsf{k}_1, \\mathsf{k}_2 \\rangle \\mathsf{F}_{g_1, \\mathsf{k}_1} \\mathsf{F}_{g_2, \\mathsf{k}_2}\n+ \\frac{\\delta_{g2} \\delta_{k0}}{4} \\langle K_B, K_B \\rangle.\n\\]\nwhere we let $\\langle - , - \\rangle$ denote the intersection pairing on $B$,\nthe first term on the right is defined to vanish if $(g,\\mathsf{k}) = (2,0)$,\nand the sum is over all values $(g_i, \\mathsf{k}_i)$ for which $\\mathsf{F}_{g_i, \\mathsf{k}_i}$ is defined.\n\\end{propstar}\n\\begin{proof}\nIf $\\mathsf{k} > 0$ or $g > 2$ Conjecture~\\ref{Conj_HAE} implies\n\\begin{align*}\n\\mathsf{T}_q \\mathsf{F}_{g,\\mathsf{k}}\n& = \\int {\\mathcal C}_{g-1,\\mathsf{k}}(\\pi^{\\ast} \\Delta_B)\n+ \\sum_{\\substack{g=g_1+g_2 \\\\ \\mathsf{k} = \\mathsf{k}_1 + \\mathsf{k}_2}} \\sum_j\n\\int {\\mathcal C}_{g_1, \\mathsf{k}_1}( \\pi^{\\ast} \\Delta_{B,j} )\n\\cdot \\int {\\mathcal C}_{g_2, \\mathsf{k}_2}(\\pi^{\\ast} \\Delta_{B,j}^{\\vee}) \\\\\n& =\n\\langle \\mathsf{k}, \\mathsf{k} \\rangle \\mathsf{F}_{g-1,\\mathsf{k}}\n+\n\\sum_{\\substack{g=g_1+g_2 \\\\ \\mathsf{k} = \\mathsf{k}_1 + \\mathsf{k}_2 \\\\ \\mathsf{k}_1, \\mathsf{k}_2 > 0}}\n\\langle 
\\mathsf{k}_1, \\mathsf{k}_2 \\rangle \\mathsf{F}_{g_1, \\mathsf{k}_1} \\mathsf{F}_{g_2, \\mathsf{k}_2}\\\\\n& \n+ 2 \\sum_j \\int {\\mathcal C}_{g-1, \\mathsf{k}}(\\pi^{\\ast} \\Delta_{B, j})\n\\cdot \\int_{[{\\overline M}_{1,1}(X,0)]^{\\text{vir}}} {\\mathrm{ev}}_1^{\\ast}( \\pi^{\\ast} \\Delta_{B,j}^{\\vee})\n\\end{align*}\nwhere we have written \n\\[ \\Delta_B = \\sum_j \\Delta_{B,j} \\boxtimes \\Delta_{B,j}^{\\vee} \\in H^{\\ast}(B^2) \\]\nfor the K\\\"unneth decomposition of the diagonal of $B$.\nBy \\cite{GP} we have\n\\[ [ {\\overline M}_{1,1}(X,0) ]^{\\text{vir}} = (c_3(X) - c_2(X) \\lambda_1) \\cap [ {\\overline M}_{1,1} \\times X ] \\]\nand by \\cite[Sec.4]{AHR} we have\n\\[ c_2(X) = \\pi^{\\ast} (c_2(B) + c_1(B)^2) + 12 \\iota_{\\ast} c_1(B). \\]\nHence we find\n\\[ \\int_{[{\\overline M}_{1,1}(X,0)]^{\\text{vir}}} {\\mathrm{ev}}_1^{\\ast}( \\pi^{\\ast} \\Delta_{B,j}^{\\vee}) = -\\frac{1}{2} \\langle \\Delta_{B,j} , c_1(B) \\rangle \\]\nfrom which we obtain\n\\[ \\mathsf{T}_q \\mathsf{F}_{g,\\mathsf{k}}\n=\n\\langle \\mathsf{k} + K_B, \\mathsf{k} \\rangle \\mathsf{F}_{g-1, \\mathsf{k}}\n+ \\sum_{\\substack{g=g_1+g_2 \\\\ \\mathsf{k} = \\mathsf{k}_1 + \\mathsf{k}_2}}\n\\langle \\mathsf{k}_1, \\mathsf{k}_2 \\rangle \\mathsf{F}_{g_1, \\mathsf{k}_1} \\mathsf{F}_{g_2, \\mathsf{k}_2}.\n\\]\n\nIf $(g,\\mathsf{k}) = (2,0)$ Conjecture B yields\n\\begin{align*} \\mathsf{T}_q \\mathsf{F}_{2,0}(q)\n& = \\sum_j \\int_{[{\\overline M}_{1,1}(X,0)]^{\\text{vir}}} {\\mathrm{ev}}_1^{\\ast}( \\pi^{\\ast} \\Delta_{B,j}) \\cdot \\int_{[{\\overline M}_{1,1}(X,0)]^{\\text{vir}}} {\\mathrm{ev}}_1^{\\ast}( \\pi^{\\ast} \\Delta_{B,j}^{\\vee}) \\\\\n& = \\frac{1}{4} \\int_B c_1(B)^2. 
\\qedhere\n\\end{align*}\n\\end{proof}\n\n\nIt will be useful later on to consider the disconnected case as well.\nFor any $g \\in {\\mathbb{Z}}$ and $\\mathsf{k} \\in H_2(B,{\\mathbb{Z}})$ let\n\\[ \n\\mathsf{F}_{g,\\mathsf{k}}^{\\bullet} \n= \\int_{{\\overline M}^{\\bullet}_{g}(B, \\mathsf{k})} {\\mathcal C}^{\\bullet}_{g,\\mathsf{k}}()\n= \\sum_{\\pi_{\\ast} \\beta = \\mathsf{k}} q^{W \\cdot \\beta} \\zeta^{\\beta} \\int_{ [ {\\overline M}'_{g}(X,\\beta) ]^{\\text{vir}} } 1.\n\\]\nThe connected and disconnected potentials are related by\n\\begin{equation} \\label{Dis_Con_rel}\n\\sum_{g, \\mathsf{k}} \\mathsf{F}_{g,\\mathsf{k}}^{\\bullet} u^{2g-2} t^{\\mathsf{k}}\n=\n\\exp\\left( \\sum_{(g,\\mathsf{k}) \\notin \\{ (0,0), (1,0) \\} } \\mathsf{F}_{g,\\mathsf{k}} u^{2g-2} t^{\\mathsf{k}} \\right).\n\\end{equation}\nA direct calculation using \\eqref{Dis_Con_rel} and Proposition*~\\ref{Prop_HAE_for_CY3} implies the following disconnected holomorphic anomaly equation\n\\begin{equation} \\label{352rw}\n\\mathsf{T}_q \\mathsf{F}_{g,\\mathsf{k}}^{\\bullet} =\n\\left\\langle \\mathsf{k} + \\frac{1}{2} K_B, \\mathsf{k} + \\frac{1}{2} K_B \\right\\rangle \\mathsf{F}_{g-1,\\mathsf{k}}^{\\bullet}.\n\\end{equation}\n\n\n\n\n\n\\section{Relative geometries}\n\\label{Section_relative_geomtries}\n\\subsection{Relative divisor}\nLet $\\pi : X \\to B$ be an elliptic fibration with section and integral fibers\nsuch that $H^{2,0}(X) = 0$.\nLet\n\\[ D \\subset X. \\]\nbe a non-singular divisor. We assume $\\pi$ restricts to an elliptic fibration\n\\[ \\pi_D : D \\to A \\]\nfor a non-singular divisor $A \\subset B$.\nThe section of $\\pi$ restricts to a section of $\\pi_D$. 
Since $\\pi$ has integral fibers, so does $\\pi_D$.\nWe have the fibered diagram\n\\[\n\\begin{tikzcd}\nD \\ar{d}{\\pi_D} \\ar[hookrightarrow]{r} & X \\ar{d}{\\pi} \\\\\nA \\ar[hookrightarrow]{r} & B.\n\\end{tikzcd}\n\\]\n\n\\subsection{Relative classes}\nLet $\\eta = (\\eta_i)_{i=1, \\ldots, l(\\eta)}$ be an ordered partition. Let\n\\[ {\\overline M}_{g,n}(X\/D, \\beta ; \\eta) \\]\nbe the moduli space parametrizing stable maps from connected genus $g$ curves to $X$ relative to $D$\nwith ordered ramification profile $\\eta$ over the relative divisor $D$, see \\cite{Junli1, Junli2}\nfor definitions and \\cite[Sec.2]{GV} for an introduction to relative stable maps.\nWe have evaluation maps at the $n$ interior and the $l(\\eta)$ relative marked points. The latter are denoted by\n\\[ {\\mathrm{ev}}^{\\mathrm{rel}}_{i} : {\\overline M}_{g,n}(X\/D, \\beta ; \\eta) \\to D, \\ i=1, \\ldots, l(\\eta). \\]\nSince $D$ is non-singular, we have the induced morphism\n\\[\n\\pi : {\\overline M}_{g,n}(X\/D, \\beta; \\eta) \\to {\\overline M}_{g,n}(B\/A, \\mathsf{k} ; \\eta)\n\\]\nwhere $\\mathsf{k} = \\pi_{\\ast} \\beta$.\n\nLet $\\gamma_1, \\ldots, \\gamma_n \\in H^{\\ast}(X)$, let\n$\\mathsf{k} \\in H_2(B,{\\mathbb{Z}})$ be a curve class and let\n\\[\n\\underline{\\eta} = \\big( (\\eta_1, \\delta_1), \\ldots, (\\eta_{l(\\eta)}, \\delta_{l(\\eta)} ) \\big), \\quad \\delta_i \\in H^{\\ast}(D),\n\\]\nbe an ordered cohomology weighted partition. 
Define the relative potential\n\\begin{multline*}\n{\\mathcal C}_{g,\\mathsf{k}}^{\\pi\/D}( \\gamma_1, \\ldots, \\gamma_n ; \\underline{\\eta} ) \\\\\n=\n\\sum_{\\pi_{\\ast} \\beta = \\mathsf{k}} \\zeta^{\\beta} q^{W \\cdot \\beta}\n\\pi_{\\ast} \\left( \\left[ {\\overline M}_{g,n}(X\/D, \\beta; \\eta) \\right]^{\\text{vir}} \\prod_{i=1}^{n} {\\mathrm{ev}}_i^{\\ast}(\\gamma_i) \\prod_{i=1}^{l(\\eta)} {\\mathrm{ev}}_i^{\\mathrm{rel} \\ast}(\\delta_i) \\right)\n\\end{multline*}\nwhere as before $W = [ \\iota(B) ] - \\frac{1}{2} \\pi^{\\ast} c_1(N_{\\iota})$\nand $\\zeta^{\\beta} = e(z \\cdot \\beta)$ with $z \\in \\Lambda \\otimes {\\mathbb{C}}$.\n\nIn line with the rest of the paper we conjecture the following.\n\\begin{conjecture} \\label{Conj_RelQuasimodularity} The series\n${\\mathcal C}_{g,\\mathsf{k}}^{\\pi\/D}( \\gamma_1, \\ldots, \\gamma_n ; \\underline{\\eta} )$\nis a cycle-valued quasi-Jacobi form of index $Q_{\\mathsf{k}}\/2$:\n\\[\n{\\mathcal C}_{g,\\mathsf{k}}^{\\pi\/D}( \\gamma_1, \\ldots, \\gamma_n ; \\underline{\\eta} )\n\\in H_{\\ast}({\\overline M}_{g,n}(B\/A, \\mathsf{k} ; \\eta)) \\otimes \\frac{1}{\\Delta(q)^m} \\mathrm{QJac}_{Q_{\\mathsf{k}}\/2}\n\\]\nwhere $m = -\\frac{1}{2} c_1(N_{\\iota}) \\cdot \\mathsf{k}$.\n\\end{conjecture}\n\n\\subsection{Rubber classes}\nStating the holomorphic anomaly equation for relative classes requires rubber classes.\nLet $N$ be the normal bundle of $D$ in $X$, and consider the projective bundle\n\\[ \\mathbb{P}( N \\oplus {\\mathcal O}_D ) \\to D. 
\\]\nWe let\n\\[ D_0, D_{\\infty} \\subset \\mathbb{P}( N \\oplus {\\mathcal O}_D ) \\]\nbe the sections corresponding to the summands ${\\mathcal O}_D$ and $N$ respectively.\n\nThe group ${\\mathbb{C}}^{\\ast}$ acts naturally on $\\mathbb{P}( N \\oplus {\\mathcal O}_D )$ by scaling in the fiber direction,\nand induces an action on the moduli space of stable maps relative to both divisors denoted by\n\\[\n {\\overline M}_{g,n}( \\mathbb{P}( N \\oplus {\\mathcal O}_D ) \/ \\{ D_0, D_{\\infty} \\}, \\beta ; \\lambda, \\mu)\n\\]\nwhere the ordered partitions $\\lambda, \\mu$ are the ramification profiles at $D_0$ and $D_{\\infty}$ respectively.\nWe let\n\\[ {\\overline M}_{g,n}^{\\sim}( \\mathbb{P}( N \\oplus {\\mathcal O}_D ) \/ \\{ D_0, D_{\\infty} \\}, \\beta ; \\lambda, \\mu) \\]\ndenote the corresponding space of stable maps to the rubber target \\cite{MP}.\n\nLet $N'$ be the normal bundle to $A$ in $B$ and consider the relative geometry\n\\[ \\mathbb{P}(N' \\oplus {\\mathcal O}_{A}) \/ \\{ A_0, A_{\\infty} \\}. \\]\nSince $D$ is non-singular the fibration $\\pi$ induces a well-defined map\n\\[\n\\rho : \\mathbb{P}(N \\oplus {\\mathcal O}_{D}) \\to \\mathbb{P}(N' \\oplus {\\mathcal O}_{A})\n\\]\nwhich is an elliptic fibration with section and integral fibers. Let\n\\begin{multline*} \\rho : {\\overline M}_{g,n}^{\\sim}( \\mathbb{P}( N \\oplus {\\mathcal O}_D ) \/ \\{ D_0, D_{\\infty} \\}, \\beta; \\lambda, \\mu) \\\\\n\\to \n{\\overline M}_{g,n}^{\\sim}( \\mathbb{P}( N' \\oplus {\\mathcal O}_A ) \/ \\{ A_0, A_{\\infty} \\}, \\mathsf{k} ; \\lambda, \\mu)\n\\end{multline*}\nbe the induced map. 
We also let ${\\mathrm{ev}}_i^{\\text{rel }0}$ and ${\\mathrm{ev}}_i^{\\text{rel }\\infty}$ denote the evaluation maps\nat the relative marked points mapping to $D_0$ and $D_{\\infty}$ respectively.\nBecause of the rubber target, the evaluation maps of the moduli space at the interior marked points take value in $D$.\n\nFor any $\\gamma_1, \\ldots, \\gamma_n \\in H^{\\ast}(D)$ and any\nordered weighted partitions\n\\[ \\underline{\\lambda} = \\big( (\\lambda_i, \\delta_i) \\big)_{i=1, \\ldots, l(\\lambda)}, \\quad\n \\underline{\\mu} = \\big( (\\mu_i, \\epsilon_i) \\big)_{i=1, \\ldots, l(\\mu)},\n\\quad \\delta_i, \\epsilon_i \\in H^{\\ast}(D) \n\\]\nwe define\n\\begin{multline*}\n{\\mathcal C}^{\\rho, \\mathrm{rubber}}_{g,\\mathsf{k}}(\\gamma_1, \\ldots, \\gamma_n ; \\underline{\\lambda}, \\underline{\\mu} ) \\\\\n=\n \\sum_{\\rho_{\\ast} \\beta = \\mathsf{k}} \\zeta^{\\beta} q^{W \\cdot \\beta}\n\\rho_{\\ast} \\bigg(\n\\left[ {\\overline M}_{g,n}^{\\sim}( \\mathbb{P}( N \\oplus {\\mathcal O}_D ) \/ \\{ D_0, D_{\\infty} \\}, \\beta ; \\lambda, \\mu) \\right]^{\\text{vir}} \\\\\n\\cdot \\prod_{i=1}^{n} {\\mathrm{ev}}_i^{\\ast}(\\gamma_i) \\prod_{i=1}^{l(\\lambda)} {\\mathrm{ev}}_{i}^{\\text{rel }0 \\ast}(\\delta_i)\n\\prod_{i=1}^{l(\\mu)} {\\mathrm{ev}}_i^{\\text{rel }\\infty \\ast}(\\epsilon_i) \\bigg).\n\\end{multline*}\n\n\\subsection{Disconnected classes}\nTo simplify the notation we will work with disconnected classes.\nThe disconnected versions\nof moduli spaces and the classes ${\\mathcal C}$ will be denoted by a '$\\bullet$' resp. a dash,\nfollowing the conventions of Section~\\ref{Subsection_Disconnected_GW_classes}. 
Since connected and disconnected invariants\nmay be expressed in terms of each other, Conjecture \ref{Conj_RelQuasimodularity}\nis equivalent to the quasi-Jacobi form property for the disconnected theory:\n\[\n{\mathcal C}_{g,\mathsf{k}}^{\pi\/D, \bullet}( \gamma_1, \ldots, \gamma_n ; \underline{\eta} ) \in\nH_{\ast}( {\overline M}^{\bullet}_{g,n}(B\/A, \mathsf{k} ; \eta) ) \otimes \frac{1}{\Delta(q)^m} \mathrm{QJac}_{Q_{\mathsf{k}}\/2}\n\]\nwhere $m = -\frac{1}{2} c_1(N_{\iota}) \cdot \mathsf{k}$.\nThe holomorphic anomaly equation conjectured below for disconnected relative classes (Conjecture~\ref{Conj_RELHAE})\nis equivalent to a corresponding version for connected classes.\n\n\n\subsection{Holomorphic anomaly equation for relative classes}\nConsider the diagram\n\[\n\begin{tikzcd}\n{\overline M}_{g,n}^{\bullet}(B\/A, \mathsf{k}, \eta) & M_{\Delta} \ar[swap]{l}{\xi} \ar{d} \ar{r} & {\overline M}_{g-1,n+2}^{\bullet}(B\/A, \mathsf{k}, \eta) \ar{d}{{\mathrm{ev}}_{n+1} \times {\mathrm{ev}}_{n+2}} \\\n& {\mathcal B} \ar{r}{\Delta_{{\mathcal B}}} & {\mathcal B} \times {\mathcal B}\n\end{tikzcd}\n\]\nwhere ${\mathcal B}$ is the stack of target degenerations of $B$ relative to $A$, the map $\Delta_{{\mathcal B}}$ is the diagonal,\n$M_{\Delta}$ is the fiber product and $\xi$ is the gluing map along the final two marked points.\nFor simplicity, we will write\n\[\n{\mathcal C}_{g-1,\mathsf{k}}^{\pi\/D, \bullet}( \gamma_1, \ldots, \gamma_n, \Delta_{B\/A} ; \underline{\eta} )\n=\n\Delta_{{\mathcal B}}^{!} {\mathcal C}_{g-1,\mathsf{k}}^{\pi\/D, \bullet}( \gamma_1, \ldots, \gamma_n, \mathsf{1}, \mathsf{1} ; \underline{\eta} ).\n\]\n\nWe state the relative holomorphic anomaly equation.\n\n\begin{conjecture} \label{Conj_RELHAE} On ${\overline M}_{g,n}^{\bullet}(B\/A, \mathsf{k} ; \eta)$ we have\n\begin{align*} & \mathsf{T}_q {\mathcal C}_{g,\mathsf{k}}^{\pi\/D, \bullet}(
\\gamma_1, \\ldots, \\gamma_n ; \\underline{\\eta} ) \\\\\n& \n= \\iota_{\\ast}\n {\\mathcal C}_{g-1,\\mathsf{k}}^{\\pi\/D, \\bullet}( \\gamma_1, \\ldots, \\gamma_n, \\Delta_{B\/A} ; \\underline{\\eta} ) \\\\\n& + 2 \\sum_{\\substack{ \\{ 1, \\ldots, n \\} = S_1 \\sqcup S_2 \\\\ m \\geq 0 \\\\ g = g_1 + g_2 + m \\\\ \\mathsf{k}_1, \\mathsf{k}_2 }}\n\\sum_{\\substack{ b ; b_1, \\ldots, b_m \\\\ \\ell ; \\ell_1, \\ldots, \\ell_m}}\n\\frac{\\prod_{i=1}^{m} b_i}{m!}\n\\xi_{\\ast} \n\\Bigg[\n{\\mathcal C}_{g_1,\\mathsf{k}_1}^{\\pi\/D, \\bullet}\\Big( \\gamma_{S_1} ; \\big( (b, \\Delta_{A, \\ell}), (b_i, \\Delta_{D, \\ell_i})_{i=1}^{m}\\big) \\Big)\\\\\n& \\hspace{11.5em} \\boxtimes \n{\\mathcal C}_{g_2, \\mathsf{k}_2}^{\\rho, \\bullet, \\mathrm{rubber}}\\Big( \\gamma_{S_2} ; \\big( (b, \\Delta_{A, \\ell}^{\\vee}), (b_i, \\Delta^{\\vee}_{D, \\ell_i})_{i=1}^{m} \\big), \\underline{\\eta} \\Big) \\Bigg] \\\\\n& - 2 \\sum_{i=1}^{n}\n \\psi_i \\cdot {\\mathcal C}_{g,\\mathsf{k}}^{\\pi\/D, \\bullet}( \\gamma_1, \\ldots,\\gamma_{i-1},\n \\pi^{\\ast} \\pi_{\\ast} \\gamma_i , \\gamma_{i+1}, \\ldots \\gamma_n ; \\underline{\\eta} ) \\\\\n & - 2 \\sum_{i=1}^{l(\\eta)} \n \\psi_{i}^{\\text{rel}} \\cdot {\\mathcal C}_{g,\\mathsf{k}}^{\\pi\/D, \\bullet}\\Big( \\gamma_1, \\ldots, \\gamma_n ;\n \\big( (\\eta_1, \\delta_1), \\ldots, \\underbrace{(\\eta_i, \\pi_D^{\\ast} \\pi_{D\\ast} \\delta_i )}_{i\\text{-th}}, \\ldots, (\\eta_n, \\delta_n)\\big) \\Big)\n\\end{align*}\nwith the following notation.\nWe let\n$\\psi_i, \\psi_i^{\\text{rel}} \\in H^2( {\\overline M}_{g,n}(B\/A, \\mathsf{k} ; \\eta) )$\ndenote the cotangent line classes at the $i$-th interior and relative marked points respectively.\nThe first sum is over\nall $\\mathsf{k}_1 \\in H_2(B,{\\mathbb{Z}})$ and $\\mathsf{k}_2 \\in H_2( \\mathbb{P}(N' \\oplus {\\mathcal O}_A), {\\mathbb{Z}})$\nsatisfying\n\\[ \\mathsf{k}_1 \\cdot A = \\mathsf{k}_2 \\cdot A \\quad \\text{ and } \\quad\n\\mathsf{k}_1 + r_{\\ast} \\mathsf{k}_2 = 
\\mathsf{k} \\]\nwhere $r : \\mathbb{P}(N' \\oplus {\\mathcal O}_A) \\to B$ is the composition of the projection to $A$ followed by the natural inclusion into $B$.\nThe $b, b_1, \\ldots, b_m$ run over all positive integers such that\n$b+ \\sum_i b_i = \\mathsf{k}_1 \\cdot A = \\mathsf{k}_2 \\cdot A$,\nand the $\\ell, \\ell_i$ run over the splitting of the diagonals of $A$ and $D$ respectively:\n\\[ \\Delta_A = \\sum_{\\ell} \\Delta_{A,\\ell} \\otimes \\Delta_{A, \\ell}^{\\vee}, \\quad \\quad\n \\forall i \\colon\\ \\Delta_{D} = \\sum_{\\ell_i} \\Delta_{D, \\ell_i} \\otimes \\Delta_{D, \\ell_i}^{\\vee}.\n\\]\nThe map $\\xi$ is the gluing map to ${\\overline M}_{g,n}^{\\bullet}(B\/A, \\mathsf{k} ; \\eta)$\nalong the common relative marking with ramification profile $(b, b_1, \\ldots, b_m)$.\nSince we cup with the diagonal classes of $A$ and $D$, the gluing map $\\xi$ is well-defined.\n\\end{conjecture}\n\nThe relative product formula of \\cite{LQ} together with \\cite[Thms. 2 and 3]{HAE} yields the following.\n\\begin{prop}\nConjectures \\ref{Conj_RelQuasimodularity} and \\ref{Conj_RELHAE}\nhold if $X = B \\times E$ and $D = A \\times E$, and $\\pi : X \\to B$ is the projection onto the first factor.\n\\end{prop}\n\n\n\n\\subsection{Compatibility with the degeneration formula}\n\\label{Subsection_Compa_with_deg_formula}\nA degeneration of $X$ compatible with the elliptic fibration $\\pi : X \\to B$ is a flat family\n\\[ \\epsilon : {\\mathcal X} \\to \\Delta \\]\nover a disk $\\Delta \\subset {\\mathbb{C}}$ satisfying:\n\\begin{enumerate}\n\n \\item[(i)] $\\epsilon$ is a flat projective morphism, smooth away from $0$.\n \\item[(ii)] $\\epsilon^{-1}(1) = X$.\n \\item[(iii)] $\\epsilon^{-1}(0) = X_1 \\cup_D X_2$ is a normal crossing divisor.\n \\item[(iv)] There exists a flat morphism $\\tilde{\\epsilon} : {\\mathcal B} \\to \\Delta$ satisfying (i-iii) with\n $\\tilde{\\epsilon}^{-1}(1) = B$ and $\\tilde{\\epsilon}^{-1}(0) = B_1 \\cup_A B_2$.\n \\item[(v)] There is an 
elliptic fibration ${\\mathcal X} \\to {\\mathcal B}$ with section and integral fiber\nthat restricts to elliptic fibrations with integral fibers:\n\\[ \\pi : X \\to B, \\quad \\pi_i : X_i \\to B_i,\\, i=1,2 \\quad \\rho : D \\to A. \\]\n\\end{enumerate}\nWe further assume that the canonical map\n\\begin{equation} H^{\\ast}(X_1 \\cup_D X_2) \\to H^{\\ast}(X) \\label{canonicalmap} \\end{equation}\ndetermined by $\\epsilon$ yields an inclusion \n$\\Lambda_1 \\oplus \\Lambda_2 \\subset \\Lambda$\nwhere $\\Lambda_i = H^{2}_{\\perp}(X_i, {\\mathbb{Z}})$. Let\n\\[ \\mathbf{z}_i \\in \\Lambda_i \\otimes {\\mathbb{C}} \\]\ndenote the coordinate on the $i$-th summand.\n\nConsider cohomology classes\n\\[ \\gamma_1, \\ldots, \\gamma_n \\in H^{\\ast}(X) \\]\nwhich lift to the total space of the degeneration or equivalently\\footnote{We assume the disk is sufficiently small.} which lie in the image of \\eqref{canonicalmap}.\nBelow let $p$ always denote the forgetful morphism from various moduli spaces of stable maps to the moduli space of stable curves, for example\n\\[ p : {\\overline M}_{g,n}^{\\bullet}(B\/A, \\mathsf{k}, \\eta) \\to {\\overline M}_{g,n}^{\\bullet}. 
\\]\nThe application of the degeneration formula \\cite{Junli1, Junli2} to $\\epsilon$ yields\n\\begin{multline} \\label{323fewdfsdF}\np_{\\ast} {\\mathcal C}^{\\pi, \\bullet}_{g,\\mathsf{k}}(\\gamma_1, \\ldots, \\gamma_n) \\Big|_{\\mathbf{z} = (\\mathbf{z}_1, \\mathbf{z}_2)} \\\\\n=\n\\sum_{\\substack{ \\{ 1, \\ldots, n \\} = S_1 \\sqcup S_2 \\\\ \\mathsf{k}_1, \\mathsf{k}_2 \\\\ m \\geq 0 \\\\ g = g_1 + g_2 + m - 1 }}\n\\sum_{\\substack{ \\eta_1, \\ldots, \\eta_m \\\\ \\ell_1, \\ldots, \\ell_m}}\n\\frac{\\prod_{i} \\eta_i}{m!}\np_{\\ast} \\xi_{\\ast} \\bigg[ \n{\\mathcal C}^{\\pi_1\/D, \\bullet}_{g_1, \\mathsf{k}_1}\\left(\\gamma_{S_1}; \\underline{\\eta} \\right) \\boxtimes \n{\\mathcal C}^{\\pi_2\/D, \\bullet}_{g_2, \\mathsf{k}_2}\\left(\\gamma_{S_2}; \\underline{\\eta}^{\\vee} \\right) \\bigg]\n\\end{multline}\nwhere $\\mathsf{k}_1, \\mathsf{k}_2$ run over all possible splittings of the curve class $\\mathsf{k}$,\nthe $\\eta_1, \\ldots, \\eta_m$ run over all positive integers such that \n\\[ \\sum_i \\eta_i = \\mathsf{k}_1 \\cdot A = \\mathsf{k}_2 \\cdot A, \\]\nthe $\\ell_i$ run over the splitting of the diagonals of $D$, and we have written\n\\[ \\underline{\\eta} = (\\eta_i, \\Delta_{D, \\ell_i})_{i=1}^{m},\n\\quad\n\\underline{\\eta}^{\\vee} = (\\eta_i, \\Delta^{\\vee}_{D, \\ell_i})_{i=1}^{m}.\n\\]\nMoreover, the map $\\xi$ is the gluing map along the relative point (well-defined since we inserted the diagonal).\n\nAssume Conjectures~\\ref{Conj_Quasimodularity} and~\\ref{Conj_RelQuasimodularity} hold, so that \\eqref{323fewdfsdF}\nis an equality of quasi-Jacobi forms.\nThen Conjectures~\\ref{Conj_HAE} and~\\ref{Conj_RELHAE} each give a way to compute the class\\footnote{We will omit the restriction of $\\mathbf{z}$ to the pair $(\\mathbf{z}_1, \\mathbf{z}_2)$ in the notation from now on.}\n\n\\begin{equation*} \\frac{d}{dC_2} p_{\\ast} {\\mathcal C}^{\\pi, \\bullet}_{g,\\mathsf{k}}(\\gamma_1, \\ldots, \\gamma_n) \\label{ERWER} \\end{equation*}\nas 
follows:\n\\begin{enumerate}\n \\item[(a)] Apply $\\mathsf{T}_q$ to the left-hand side of \\eqref{323fewdfsdF}, use Conjecture~\\ref{Conj_HAE}, and apply the degeneration formula to each term of the result.\n \\item[(b)] Apply $\\mathsf{T}_q$ to the right-hand side of \\eqref{323fewdfsdF} and use Conjecture~\\ref{Conj_RELHAE}.\n\\end{enumerate}\n\nWe say Conjectures~\\ref{Conj_HAE} and~\\ref{Conj_RELHAE} are \\emph{compatible with the degeneration formula}\nif methods (a) and (b) yield the same result.\n\n\\begin{prop} \\label{Dsdfsrgrsgdf} \\label{Proposition_Compatibility_with_degeneration_formula}\nAssume Conjectures~\\ref{Conj_Quasimodularity} and~\\ref{Conj_RelQuasimodularity}.\nConjectures~\\ref{Conj_HAE} and~\\ref{Conj_RELHAE} are compatible with the degeneration formula.\n\\end{prop}\n\\begin{proof}\nAfter pushforward to the moduli space of stable curves,\nwe apply the degeneration formula to the right-hand side of Lemma~\\ref{Lemma_discHAE}.\nThe result is\n\\begin{align*}\n& \\mathsf{T}_q p_{\\ast} {\\mathcal C}^{\\pi, \\bullet}_{g,\\mathsf{k}}(\\gamma_1, \\ldots, \\gamma_n) \\\\\n& =\n\\sum_{\\substack{ \\{ 1, \\ldots, n \\} = S_1 \\sqcup S_2 \\\\ \\mathsf{k}_1, \\mathsf{k}_2;\\, m \\geq 0 \\\\\n\\eta_1, \\ldots, \\eta_m, \\ell_1, \\ldots, \\ell_m \\\\\ng-1 = g_1 + g_2 + m - 1 }} \\frac{\\prod_i \\eta_i}{m!}\n\\bigg[ p_{\\ast} \\xi_{\\ast} \\left( {\\mathcal C}^{\\pi_1\/D, \\bullet}_{g_1, \\mathsf{k}_1}(\\gamma_{S_1}, \\Delta_{B_1\/A} ; \\underline{\\eta} ) \\boxtimes {\\mathcal C}^{\\pi_2\/D, \\bullet}_{g_2, \\mathsf{k}_2}( \\gamma_{S_2} ; \\underline{\\eta}^{\\vee}) \\right) \\\\\n& \\hspace{9.1em} + \\ p_{\\ast} \\xi_{\\ast} \\left( {\\mathcal C}^{\\pi_1\/D, \\bullet}_{g_1, \\mathsf{k}_1}(\\gamma_{S_1} ; \\underline{\\eta} ) \\boxtimes {\\mathcal C}^{\\pi_2\/D, \\bullet}_{g_2, \\mathsf{k}_2}( \\gamma_{S_2}, \\Delta_{B_2\/A} ; \\underline{\\eta}^{\\vee}) \\right) \\bigg] \\\\\n& \n-2\n\\sum_{\\substack{ \\{ 1, \\ldots, n \\} = S_1 \\sqcup S_2 \\\\ 
\\mathsf{k}_1, \\mathsf{k}_2;\\, m \\geq 0 \\\\\n\\eta_1, \\ldots, \\eta_m, \\ell_1, \\ldots, \\ell_m \\\\\ng-1 = g_1 + g_2 + m - 1 }} \\frac{\\prod_i \\eta_i}{m!} \\cdot \\\\\n& \\cdot \\Bigg[ \\sum_{i \\in S_1}\np_{\\ast} \\xi_{\\ast} \\Big( \\psi_i {\\mathcal C}^{\\pi_1\/D, \\bullet}_{g_1, \\mathsf{k}_1}(\\gamma_{S_1 \\setminus \\{ i \\}}, \\pi^{\\ast}\\pi_{\\ast}(\\gamma_i); \\underline{\\eta} )\n\\boxtimes\n{\\mathcal C}^{\\pi_2\/D, \\bullet}_{g_2, \\mathsf{k}_2}( \\gamma_{S_2} ; \\underline{\\eta}^{\\vee}) \\Big) \\\\\n& + \\hspace{0.2em}\n\\sum_{i \\in S_2}\np_{\\ast} \\xi_{\\ast} \\left( {\\mathcal C}^{\\pi_1\/D, \\bullet}_{g_1, \\mathsf{k}_1}(\\gamma_{S_1}; \\underline{\\eta} ) \\boxtimes\n\\psi_i {\\mathcal C}^{\\pi_2\/D, \\bullet}_{g_2, \\mathsf{k}_2}( \\gamma_{S_2 \\setminus \\{ i \\}}, \\pi^{\\ast} \\pi_{\\ast}(\\gamma_i) ; \\underline{\\eta}^{\\vee}) \\right) \\Bigg]\n\\end{align*}\nwhere the sums are over the same data as in \\eqref{323fewdfsdF}.\n\nWe need to compare this expression with the relative holomorphic anomaly equation applied to the right-hand side of \\eqref{323fewdfsdF}.\nIn Conjecture~\\ref{Conj_RELHAE} we have four terms on the right-hand side.\nThe first and third term of Conjecture~\\ref{Conj_RELHAE} applied to \\eqref{323fewdfsdF} yield exactly the four terms above.\nHence we are left to show that the second and fourth terms of Conjecture~\\ref{Conj_RELHAE} applied to \\eqref{323fewdfsdF} vanish.\n\nWe consider first the second term applied to the first factor in \\eqref{323fewdfsdF} plus\nthe fourth term applied to the second factor in \\eqref{323fewdfsdF}. 
The result is\n\\begin{equation} \\begin{aligned}\n\\label{MEG}\n2 \\sum\n\\frac{\\prod_{i} c_i}{r!} \\frac{\\prod_i \\eta_i}{m!}\n& p_{\\ast} \\xi_{\\ast} \n\\left[\n{\\mathcal C}^{\\pi_1\/D, \\bullet}_{g_1', \\mathsf{k}_1'}\\big(\\gamma_{S_1'}; \\underline{\\lambda} \\big) \\boxtimes\n{\\mathcal C}_{g_1'', \\mathsf{k}_1''}^{\\rho, \\bullet, \\text{rub}}\\big( \\gamma_{S_1^{\\prime \\prime}} ; \\underline{\\lambda}^{\\vee}, \\underline{\\eta} \\big) \\boxtimes\n{\\mathcal C}^{\\pi_2\/D, \\bullet}_{g_2, \\mathsf{k}_2}\\big(\\gamma_{S_2}; \\underline{\\eta}^{\\vee} \\big) \\right] \\\\\n- 2 \\sum \\sum_{i=1}^{m}\n\\frac{\\prod_j \\eta_j}{m!}\n& p_{\\ast} \\xi_{\\ast} \\left[ {\\mathcal C}^{\\pi_1\/D, \\bullet}_{g_1, \\mathsf{k}_1}\\big(\\gamma_{S_1}; \\underline{\\eta} \\big) \\boxtimes\n\\psi_i^{\\text{rel}} {\\mathcal C}^{\\pi_2\/D, \\bullet}_{g_2, \\mathsf{k}_2}\\big(\\gamma_{S_2}; \\underline{\\eta}^{\\vee}\\big|_{\\delta_i \\mapsto \\pi^{\\ast} \\pi_{\\ast} \\delta_i} \\big) \\right],\n\\end{aligned} \\end{equation}\nwhere the sum in the second line is over the same data as in \\eqref{323fewdfsdF},\nand the sums in the first line run additionally also over the following data: splittings\nof $\\mathsf{k}_1$ into $\\mathsf{k}_1', \\mathsf{k}_1''$, decompositions $S_1 = S_1' \\sqcup S_1^{\\prime \\prime}$,\npositive integers $c ; c_1, \\ldots, c_r$, $r \\geq 0$\nsumming up to $\\mathsf{k}_1' \\cdot A$,\nsplittings $g_1 = g_1' + g_1'' + r$, and diagonal splittings $\\tilde{\\ell} ; \\tilde{\\ell}_1, \\ldots, \\tilde{\\ell}_r$\nin the weighted partitions\n\\[\n\\underline{\\lambda} = \\left( (c, \\Delta_{A, \\tilde{\\ell}}), (c_i, \\Delta_{D, \\tilde{\\ell_i}})_{i=1}^{r}\\right),\n\\quad \n\\underline{\\lambda}^{\\vee} = \\left( (c, \\Delta_{A, \\tilde{\\ell}}^{\\vee}), (c_i, \\Delta^{\\vee}_{D, \\tilde{\\ell_i}})_{i=1}^{r} \\right).\n\\]\nAlso, we write $\\underline{\\eta}\\big|_{\\delta_i \\mapsto \\alpha}$ if the $i$-th cohomology class in $\\underline{\\eta}$ is 
replaced by\nsome $\\alpha$.\n\nWe use Lemma~\\ref{lemma_psi_splitting} below to remove the relative $\\psi$-class in the second line of \\eqref{MEG}.\nWhen doing that, the second term on the right in Lemma~\\ref{lemma_psi_splitting} (the bubble term)\nprecisely cancels with the expression in the first line (switch $\\eta \\mapsto \\lambda, \\mu \\mapsto \\eta$ and trade the sum $\\sum_{i=1}^m$ for a factor of $m$).\nHence we find that \\eqref{MEG} is equal to\n\\begin{equation}\n\\label{dfsdg}\n2 \\sum \\sum_{i=1}^{l(\\eta)} \\frac{\\prod_{j \\neq i} \\eta_j}{m!}\np_{\\ast} \\xi_{\\ast} \\Bigg[ {\\mathcal C}^{\\pi_1\/D, \\bullet}_{g_1, \\mathsf{k}_1}\\left(\\gamma_{S_1}; \\underline{\\eta} \\right) \\boxtimes\n{\\mathcal C}^{\\pi_2\/D, \\bullet}_{g_2, \\mathsf{k}_2}\\left(\\gamma_{S_2}; \\underline{\\eta}^{\\vee}\\big|_{\\delta_i \\mapsto\n\\pi^{\\ast} \\pi_{\\ast} (\\delta_i) c_1(N_{A\/B_2}) } \\right) \\Bigg],\n\\end{equation}\nwhere the first sum is over the same data as in \\eqref{323fewdfsdF}.\n\nBy a parallel discussion, the second term of Conjecture~\\ref{Conj_RELHAE} applied to the second factor in \\eqref{323fewdfsdF}\nplus the fourth term applied to the first factor is\n\\begin{equation} \\label{term22}\n2 \\sum \\sum_{i=1}^{l(\\eta)} \\frac{\\prod_{j \\neq i} \\eta_j}{m!}\np_{\\ast} \\xi_{\\ast}\n\\Bigg[ {\\mathcal C}^{\\pi_1\/D, \\bullet}_{g_1, \\mathsf{k}_1}\\left(\\gamma_{S_1}; \\underline{\\eta}\\big|_{\\delta_i \\mapsto\n\\pi^{\\ast} \\pi_{\\ast} (\\delta_i) c_1(N_{A\/B_1}) } \\right) \\boxtimes\n{\\mathcal C}^{\\pi_2\/D, \\bullet}_{g_2, \\mathsf{k}_2}\\left(\\gamma_{S_2}; \\underline{\\eta}^{\\vee} \\right) \\Bigg].\n\\end{equation}\n\nThe term \\eqref{term22} agrees exactly with \\eqref{dfsdg} except for the $i$-th relative insertion.\nWe consider the $i$-th relative insertion more closely. 
\nUsing\n\\[ (\\mathrm{id} \\boxtimes \\pi^{\\ast} \\pi_{\\ast})\\Delta_D \n= \\Delta_A \n\\]\nand the balancing condition\n\\[ N_{A\/B_1} \\otimes N_{A\/B_2} = {\\mathcal O}_A \\]\nthe $i$-th relative insertion in \\eqref{dfsdg} is\n\\begin{align*}\n\\big( 1 \\boxtimes c_1(N_{A\/B_2})\\big) \\cdot (\\mathrm{id} \\boxtimes \\pi^{\\ast} \\pi_{\\ast})\\Delta_D\n& = \\big( 1 \\boxtimes c_1(N_{A\/B_2})\\big) \\cdot \\Delta_A \\\\\n& = \\big( c_1(N_{A\/B_2}) \\boxtimes 1 \\big) \\cdot \\Delta_A \\\\\n& = - \\big( c_1(N_{A\/B_1}) \\boxtimes 1 \\big) \\cdot \\Delta_A.\n\\end{align*}\nSince this is precisely the negative of the $i$-th relative insertion in \\eqref{term22}, the sum of \\eqref{dfsdg} and \\eqref{term22} vanishes.\n\\end{proof}\n\n\n\n\n\\begin{lemma} Let $\\underline{\\eta} = \\{ (\\eta_i, \\delta_i) \\}$ be a cohomology weighted partition\nand let $\\gamma = (\\gamma_1, \\ldots, \\gamma_n)$ with $\\gamma_i \\in H^{\\ast}(X)$ be a list of cohomology classes.\nWe have\n\\label{lemma_psi_splitting}\n\\begin{multline*}\n\\eta_i \\cdot p_{\\ast} \\left( \\psi_i^{\\text{rel}} \n{\\mathcal C}_{g,\\mathsf{k}}^{\\pi\/D, \\bullet}( \\gamma ; \\underline{\\eta} ) \\right) \n=\n- p_{\\ast} \\left(\n{\\mathcal C}_{g,\\mathsf{k}}^{\\pi\/D, \\bullet}( \\gamma ; \\underline{\\eta}\\big|_{\\delta_i \\mapsto \\delta_i c_1(N_{A\/B}) } ) \n\\right) \\\\\n+ \\sum_{\\substack{ \\{ 1, \\ldots, n \\} = S_1 \\sqcup S_2 \\\\ \\mathsf{k}_1, \\mathsf{k}_2 \\\\ s \\geq 0 \\\\ g = g_1 + g_2 + s - 1 }}\n\\sum_{\\substack{ \\mu_1, \\ldots, \\mu_s \\\\ \\ell_1, \\ldots, \\ell_s}}\n\\frac{\\prod_{i} \\mu_i}{s!}\np_{\\ast} \\xi_{\\ast} \\bigg[ \n{\\mathcal C}^{\\rho\/D, \\bullet, \\textup{rub}}_{g_1, \\mathsf{k}_1} \\left(\\gamma_{S_1}; \\underline{\\eta}, \\underline{\\mu} \\right) \\boxtimes \n{\\mathcal C}^{\\pi_2\/D, \\bullet}_{g_2, \\mathsf{k}_2} \\left(\\gamma_{S_2}; \\underline{\\mu}^{\\vee} \\right) \n\\bigg]\n\\end{multline*}\nwhere the sum is over the splittings of $\\mathsf{k}$ 
into $\\mathsf{k}_1 \\in H_2( \\mathbb{P}(N' \\oplus {\\mathcal O}_A), {\\mathbb{Z}})$\nand $\\mathsf{k}_2 \\in H_2(B,{\\mathbb{Z}})$,\nall positive integers $\\mu_1, \\ldots, \\mu_s$ summing up to $\\mathsf{k}_1 \\cdot A$,\nand over indices of diagonal splittings $\\ell_1, \\ldots, \\ell_s$ for the cohomology weighted partitions\n\\[\n\\underline{\\mu} = \\{ (\\mu_i, \\Delta_{D, \\ell_i})_{i=1}^{s} \\},\n\\quad\n\\underline{\\mu}^{\\vee} = \\{ (\\mu_i, \\Delta^{\\vee}_{D, \\ell_i})_{i=1}^{s} \\}.\n\\]\nAs before we write $\\underline{\\eta}\\big|_{\\delta_i \\mapsto \\alpha}$ if the class $\\delta_i$ is replaced by\nsome class $\\alpha$.\n\\end{lemma}\n\\begin{proof}\nWe will remove the class $\\psi_i^{\\text{rel}}$ by an argument parallel to \\cite[Sec.4.5, End of Case (ii-a)]{BOPY}.\nLet ${\\mathcal X}$ be the stack of target degenerations of the pair $(X,D)$ and let\n\\[ f : C \\to {\\mathcal X} \\]\nbe a stable map parametrized by the moduli space \n$M = M^{\\bullet}_{g,n}(X \/ D, \\beta ; \\underline{\\eta} )$.\n\nLet $c : {\\mathcal X} \\to X$ be the canonical map contracting the bubbles.\nLet $p_i^{\\text{rel}} \\in C$ be the $i$-th relative point and let\n\\[ q_i = c(f(p_i^{\\text{rel}})) \\in D \\]\nbe its image in $X$.\nIf the irreducible component of $C$ containing $p_i^{\\text{rel}}$ maps into a bubble of ${\\mathcal X}$, then\nthe composition $c \\circ f$ vanishes to infinite order at $p_i^{\\text{rel}}$ in the direction normal to $D$.\nIf the component containing $p_i^{\\text{rel}}$ maps into $X$, then\nby the tangency condition the composition $c \\circ f$ vanishes to order exactly $\\eta_i$ in the normal direction.\nIn either case, the differential in the normal direction induces a map\n\\[ N_{D\/X, q_i}^{\\vee} \\to {\\mathfrak{m}}^{\\eta_i} \/ {\\mathfrak{m}}^{\\eta_i+1}, \\]\nwhere ${\\mathfrak{m}}$ is the maximal ideal of the point $p_i^{\\text{rel}} \\in C$.\nSee also \\cite[Proof of Prop. 
1.1]{OP_GWH} for a similar argument.\nConsidering this map in family yields a map of line bundles on $M$:\n\\[ {\\mathrm{ev}}_i^{\\text{rel} \\ast} N_{D\/X}^{\\vee} \\to \\left( L_i^{\\text{rel}} \\right)^{\\otimes \\eta_i}, \\]\nwhere $L_i^{\\text{rel}}$ is the cotangent line bundle on $M$.\nDualizing we obtain a section\n\\[ {\\mathcal O}_M \\to \\left( L_i^{\\text{rel}} \\right)^{\\eta_i} \\otimes {\\mathrm{ev}}_i^{\\text{rel} \\ast} N_{D\/X}. \\]\nThe vanishing locus of this section is the boundary divisor of the moduli space $M$\ncorresponding to the first bubble of $D$ (compare \\cite{BOPY}).\nExpressing the class\n\\[ c_1\\left( (L_i^{\\text{rel}} )^{\\eta_i} \\otimes {\\mathrm{ev}}_i^{\\text{rel} \\ast} N_{D\/X} \\right) = \\eta_i \\psi_i^{\\text{rel}} + {\\mathrm{ev}}_i^{\\text{rel} \\ast} c_1(N_{D\/X}) \\]\nthrough the vanishing locus of the section and using the splitting formula, as well as the relation\n\\[ N_{D\/X} = \\pi_D^{\\ast} N_{A\/B}, \\]\nthen yields the claimed formula.\n\\end{proof}\n\n\n\n\n\n\n\n\n\\section{The rational elliptic surface} \\label{Section_RationalEllipticSurface}\n\\subsection{Definition and cohomology}\nLet $R$ be a rational elliptic surface defined by a pencil of cubics.\nWe assume the pencil is generic, so the induced elliptic fibration\n\\[ R \\to \\mathbb{P}^1 \\]\nhas $12$ rational nodal fibers.\nLet $H, E_1, \\ldots, E_9$\nbe the class of a line in $\\mathbb{P}^2$ and the exceptional classes\nof the blowup $R \\to \\mathbb{P}^2$, respectively. We let\n$B = E_9$ be the zero-section of the elliptic fibration, and let $F$ be the class of a fiber:\n\\[ B = E_9, \\quad F = 3 H - \\sum_{i=1}^{9} E_i . \\]\nWe measure the degree in the fiber direction against the class\n\\[ W = B + \\frac{1}{2} F . 
\\]\n\nThe orthogonal complement of $B,F$ in $H^2(R, {\\mathbb{Z}})$ is a negative-definite unimodular lattice of rank $8$ and hence is isomorphic to $E_8(-1)$,\n\\[ H^2(R, {\\mathbb{Z}}) = {\\mathbb{Z}} B \\oplus {\\mathbb{Z}} F \\oplus E_8(-1) . \\]\nAs in Section~\\ref{Section_Elliptic_fibrations_and_conjectures},\nwe identify the lattice $E_8(-1)$ with ${\\mathbb{Z}}^8$ by picking a basis $b_1, \\ldots, b_8$.\nWe may assume the basis is chosen such that\n\\[ Q_{E_8} = \\left( - \\int_R b_i \\cup b_j \\right)_{i,j=1,\\ldots, 8} \\]\nis the (positive definite) Cartan matrix of $E_8$.\nIn the notation of Section~\\ref{subsubsection_pairing_and_intersection_matrix}\nthe matrix $Q_k$ for $k \\in H_2(\\mathbb{P}^1, {\\mathbb{Z}}) \\cong {\\mathbb{Z}}$ is then\n\\[ Q_{k} = k Q_{E_8}. \\]\n\n\n\\subsection{The tautological ring and a convention} \\label{Subsection_CONVENTION}\nIf $2g-2+n>0$, let $p : {\\overline M}_{g,n}(\\mathbb{P}^1,k) \\to {\\overline M}_{g,n}$\nbe the forgetful map to the moduli space of stable curves, and let\n\\[ R^{\\ast}({\\overline M}_{g,n}) \\subset H^{\\ast}({\\overline M}_{g,n}) \\]\nbe the tautological subring spanned by push-forwards of\nproducts of $\\psi$ and $\\kappa$ classes on boundary strata \\cite{FP13}.\n\nWe extend both definitions to the unstable case as follows.\nIf $g,n \\geq 0$ but $2g-2+n \\leq 0$, we define\n${\\overline M}_{g,n}$ to be a point, $p$ to be the canonical projection, and\n$R^{\\ast}({\\overline M}_{g,n}) = H^{\\ast}({\\overline M}_{g,n}) = {\\mathbb{Q}}$.\n\n\\subsection{Statement of results} \\label{Subsection_RES_Statement_of_results}\nThe following result shows that Conjecture~\\ref{Conj_Quasimodularity} holds\nfor rational elliptic surfaces \\emph{numerically}, i.e. 
after integration against any tautological class\npulled back from ${\\overline M}_{g,n}$\n(with the convention of Section~\\ref{Subsection_CONVENTION} in the unstable cases).\n\n\\begin{thm} \\label{theorem_RES1}\nLet $\\pi : R \\to \\mathbb{P}^1$ be a rational elliptic surface.\nFor all $g,k \\geq 0$ and $\\gamma_1, \\ldots, \\gamma_n \\in H^{\\ast}(R)$\nand for every tautological class $\\alpha \\in R^{\\ast}({\\overline M}_{g,n})$,\n\\[\n\\int_{{\\overline M}_{g,n}(\\mathbb{P}^1, k)} p^{\\ast}(\\alpha) \\cap {\\mathcal C}_{g,k}^{\\pi}(\\gamma_1, \\ldots, \\gamma_n) \n\\in \\frac{1}{\\Delta(q)^{k\/2}} \\mathrm{QJac}_{\\frac{k}{2} Q_{E_8}}.\n\\]\n\\end{thm}\n\\vspace{7pt}\n\nBy trading descendent insertions for tautological classes\nTheorem~\\ref{theorem_RES1} implies that the generating series of descendent invariants\nof a rational elliptic surface (for base degree $k$ and genus $g$) are quasi-Jacobi forms of index $\\frac{k}{2} Q_{E_8}$.\n\nAn inspection of the proof actually yields a slightly sharper result:\nthe ring of quasi-Jacobi forms $\\oplus_k \\mathrm{QJac}_{\\frac{k}{2} Q_{E_8}}$\nin Theorem~\\ref{theorem_RES1}\nmay be replaced by the $\\mathrm{QMod}$-algebra generated by the theta function\n$\\Theta_{E_8}$ and all its derivatives.\n\nWe show that the holomorphic anomaly equation \nholds for the rational elliptic surface numerically.\nConsider the right-hand side of Conjecture~\\ref{Conj_HAE}:\n\\begin{align*}\n\\mathsf{H}_{g,\\mathsf{k}}( \\gamma_1, \\ldots, \\gamma_n )\n\\ =\\ & \n\\iota_{\\ast} \\Delta^{!}\n{\\mathcal C}^{\\pi}_{g-1,\\mathsf{k}}( \\gamma_1, \\ldots, \\gamma_n, \\mathsf{1}, \\mathsf{1} ) \\\\\n&\n+ \\sum_{\\substack{ g= g_1 + g_2 \\\\\n\\{1,\\ldots, n\\} = S_1 \\sqcup S_2 \\\\\n\\mathsf{k} = \\mathsf{k}_1 + \\mathsf{k}_2}}\nj_{\\ast} \\Delta^{!} \\left(\n{\\mathcal C}^{\\pi}_{g_1, \\mathsf{k}_1}( \\gamma_{S_1}, \\mathsf{1}) \\boxtimes\n{\\mathcal C}^{\\pi}_{g_2, \\mathsf{k}_2}( \\gamma_{S_2}, \\mathsf{1} ) \\right) \\\\\n&\n- 2 
\\sum_{i=1}^{n} \n{\\mathcal C}^{\\pi}_{g,\\mathsf{k}}( \\gamma_1, \\ldots, \\gamma_{i-1},\n \\pi^{\\ast} \\pi_{\\ast} \\gamma_i , \\gamma_{i+1}, \\ldots, \\gamma_n ) \\cdot \\psi_i.\n\\end{align*}\n\n\\begin{thm} \\label{theorem_RES2}\nFor every tautological class $\\alpha \\in R^{\\ast}({\\overline M}_{g,n})$,\n\\[\n\\frac{d}{dC_2} \\int p^{\\ast}(\\alpha) \\cap {\\mathcal C}_{g,k}^{\\pi}(\\gamma_1, \\ldots, \\gamma_n) \n\\ = \\ \n\\int p^{\\ast}(\\alpha) \\cap \\mathsf{H}_{g,\\mathsf{k}}( \\gamma_1, \\ldots, \\gamma_n ).\n\\]\n\\end{thm}\n\\vspace{7pt}\n\nIn the remainder of Section~\\ref{Section_RationalEllipticSurface}\nwe present the proof of Theorems~\\ref{theorem_RES1} and~\\ref{theorem_RES2}.\nIn Section~\\ref{Subsection_RES_Sections} we recall a few basic\nresults on the group of sections of a rational elliptic surface.\nThis leads to the genus $0$ case of Theorem~\\ref{theorem_RES1} in Section~\\ref{Subsection_Genus_Zero}.\nIn Section~\\ref{Subsection_RES_relative_in_absolute} we discuss\nthe invariants of $R$ relative to a non-singular elliptic fiber of $\\pi$.\nIn the last two sections we present the proofs of the general cases of Theorems~\\ref{theorem_RES1} and~\\ref{theorem_RES2}.\n\n\n\\subsection{Sections} \\label{Subsection_RES_Sections}\nRecall from \\cite{Shioda}\nthe 1-to-1 correspondence between sections of $R \\to \\mathbb{P}^1$ and elements in the lattice $E_8(-1)$.\nA section $s$ yields an element in $E_8(-1)$ by projecting its class $[s]$ onto the $E_8(-1)$ lattice.\nConversely, an element $\\lambda \\in E_8(-1) \\subset H^2(R, {\\mathbb{Z}})$\nhas a unique lift\n$\\hat{\\lambda} \\in H^2(R, {\\mathbb{Z}})$ such that $\\hat{\\lambda}^2 = -1$, $\\hat{\\lambda} \\cdot F = 1$ and $\\hat{\\lambda}$ pairs\npositively with any ample class. By Grothendieck-Riemann-Roch $\\hat{\\lambda}$\nis the cohomology class of a unique section $B_{\\lambda}$. 
Explicitly,\n\\[ [B_{\\lambda}] = W - \\left( \\frac{\\langle \\lambda, \\lambda \\rangle + 1}{2} \\right) F + \\lambda \\]\nwhere $\\langle a, b \\rangle = \\int_R a \\cup b$ for all $a,b \\in H^{\\ast}(R)$ is the intersection pairing.\n\nBy fiberwise addition and multiplication by $-1$\nthe set of sections of $R \\to \\mathbb{P}^1$ forms a group, the Mordell-Weil group.\nThe correspondence between sections and classes in $E_8(-1)$ is a group homomorphism,\n\\[ B_{\\lambda + \\mu} = B_{\\lambda} \\oplus B_{\\mu}, \\quad B_{-\\lambda} = \\ominus B_{\\lambda} \\]\nwhere we have written $\\oplus, \\ominus$ for the addition resp. subtraction on the elliptic fibers.\nThe translation by the section $B_{\\lambda}$ associated to $\\lambda \\in E_8(-1)$,\n\\[ t_{\\lambda} : R \\to R, \\ x \\mapsto x + B_{\\lambda}(\\pi(x)), \\]\nacts on a cohomology class $\\gamma \\in H^{\\ast}(R)$ by\n\\[ t_{\\lambda \\ast} \\gamma = \\gamma + \\lambda \\cup \\pi^{\\ast} \\pi_{\\ast}( \\gamma) - \\pi^{\\ast} \\pi_{\\ast}( \\lambda \\cup \\gamma )\n- \\frac{1}{2} \\pi^{\\ast}\\left( \\pi_{\\ast}(\\lambda^2) \\cdot \\pi_{\\ast}(\\gamma) \\right). \\]\n\n\n\\subsection{Genus zero} \\label{Subsection_Genus_Zero}\n\\subsubsection{Overview}\nConsider the genus $0$ stationary invariants\n\\begin{align*}\nM_k(\\zeta,q) & = \\int {\\mathcal C}^\\pi_{0,k}({\\mathsf{p}}^{\\times k-1}) \\\\\n& = \\sum_{\\pi_{\\ast} \\beta = k} q^{W \\cdot \\beta} \\zeta^{\\beta}\n\\int_{ [ {\\overline M}_{0,k-1}(R, \\beta) ]^{\\text{vir}}} \\prod_{i=1}^{k-1} {\\mathrm{ev}}_i^{\\ast}( {\\mathsf{p}} )\n\\end{align*}\nfor all $k \\geq 1$, where ${\\mathsf{p}} \\in H^4(R, {\\mathbb{Z}})$ is the class Poincar\\'e dual to a point.\n\n\\begin{prop} \\label{Prop_Mkquasi} $M_k \\in \\frac{1}{\\Delta(q)^{k\/2}} \\mathrm{QJac}_{8k-4, \\frac{k}{2} Q_{E_8}}$ for all $k \\geq 1$. 
\\end{prop}\n\nIn the remainder of Section~\\ref{Subsection_Genus_Zero} we prove Proposition~\\ref{Prop_Mkquasi}.\n\n\n\\subsubsection{The $E_8$ theta function} \\label{Subsubsection_case_k=1}\nAll curve classes on $R$ of degree $1$ over $\\mathbb{P}^1$ are of the form $B_{\\lambda} + d F$ for some $\\lambda \\in E_{8}(-1)$ and $d \\geq 0$.\nUsing Section~\\ref{Subsection_RES_Sections} and \\cite[Sec.6]{BL} we find\n\\begin{align*}\nM_1(\\zeta, q)\n& = \\sum_{\\lambda \\in E_{8}(-1)} \\sum_{d \\geq 0} q^{W \\cdot (B_{\\lambda} + dF)} \\zeta^{\\lambda} \\int_{[ {\\overline M}_{0,0}(R, B_{\\lambda} + dF) ]^{\\text{vir}}} 1 \\\\\n& = \\sum_{\\lambda \\in E_{8}(-1)} \\sum_{d \\geq 0} q^{d - \\frac{1}{2} - \\frac{1}{2} \\langle \\lambda, \\lambda \\rangle} \\zeta^{\\lambda} \\left[ \\frac{1}{\\Delta(q)^{1\/2}} \\right]_{q^{d-\\frac{1}{2}}} \\\\\n& = \\frac{1}{\\Delta(q)^{\\frac{1}{2}}} \\sum_{\\lambda \\in E_8(-1)} q^{-\\frac{1}{2} \\langle \\lambda, \\lambda \\rangle} \\zeta^{\\lambda} \\\\\n& = \\frac{1}{\\Delta(q)^{\\frac{1}{2}}} \\Theta_{E_8}(z, \\tau).\n\\end{align*}\nBy Section~\\ref{Section_E8_lattice}, $\\Theta_{E_8}$ is a Jacobi form\nof index $\\frac{1}{2} Q_{E_8}$ and weight $4$.\n\n\\subsubsection{WDVV equation}\nFor any $\\gamma_1, \\ldots, \\gamma_n \\in H^{\\ast}(R)$ define the\nquantum bracket\n\\[\n\\big\\langle \\gamma_1, \\ldots, \\gamma_n \\big\\rangle_{0,k} =\n\\sum_{\\pi_{\\ast} \\beta = k} q^{W \\cdot \\beta} \\zeta^{\\beta} \\int_{[ {\\overline M}_{0,n}(R, \\beta) ]^{\\text{vir}}} \\prod_i {\\mathrm{ev}}_i^{\\ast}(\\gamma_i).\n\\]\n\nRecall the WDVV equation from \\cite{FulPand}: For all $\\gamma_1, \\ldots, \\gamma_n \\in H^{\\ast}(R)$ with\n\\[ \\sum_{i=1}^{n} \\deg(\\gamma_i) = n+k-2 \\]\nwe have\n\\begin{align*}\n& \\sum_{\\substack{ k=k_1 + k_2 \\\\ \\{ 1, \\ldots, n-4 \\} = S_1 \\sqcup S_2 }}\n\\sum_{\\ell}\n\\big\\langle \\gamma_{S_1}, \\gamma_{a}, \\gamma_{b}, \\Delta_\\ell \\big\\rangle_{0,k_1} \\big\\langle \\gamma_{S_2}, \\gamma_c, 
\\gamma_d, \\Delta_\\ell^{\\vee} \\big\\rangle_{0,k_2},\n\\end{align*}\nwhere $\\sum_{\\ell} \\Delta_{\\ell} \\otimes \\Delta_{\\ell}^{\\vee}$ is the K\\\"unneth decomposition\nof the diagonal class $\\Delta \\in H^{\\ast}(R \\times R)$.\nLet also\n\\[ D = D_q, \\quad D_i = D_{b_i} = D_{\\zeta_i} = \\frac{1}{2\\pi i} \\frac{d}{dz_i}. \\]\nWe solve for the remaining series $M_k$ by applying the WDVV equation.\n\n\\subsubsection{Proof of Proposition~\\ref{Prop_Mkquasi}}\nThe case $k=1$ holds by Section~\\ref{Subsubsection_case_k=1}.\nFor $k = 2$ recall the basis $\\{ b_i \\}$ of $\\Lambda$\nand apply the WDVV equation for $(\\gamma_\\ell)_{\\ell=1}^{4} = (F,F, b_i, b_j)$. The result is\n\\[\n 4 \\langle b_i, b_j \\rangle M_2 = D_i \\langle \\Delta_1 \\rangle_{0,1} \\cdot D_j \\langle \\Delta_2 \\rangle_{0,1}\n - \\langle \\Delta_1 \\rangle_{0,1} \\cdot D_i D_j \\langle \\Delta_2 \\rangle_{0,1}\n\\]\nwhere $\\Delta_1, \\Delta_2$ indicate that we sum over the diagonal splitting.\nChoosing $i,j$ such that $\\langle b_i, b_j \\rangle \\neq 0$\nand applying the divisor equation on the right-hand side we find\n$M_2$ expressed as a sum of products of derivatives of $M_1$. Checking the weight and index yields the claim for $M_2$.\n\nSimilarly, the WDVV equation for $(\\gamma_i)_{i=1}^{4} = ({\\mathsf{p}}, F, F, W)$ yields\n\\[\n3 M_3 = M_1 \\cdot D^2 M_2 - 4 D^2 M_1 \\cdot M_2 +\n\\sum_{i=1}^{8} \\left( D_i D M_1 \\cdot 2 D_i M_2 - D_i M_1 \\cdot D_i D M_2 \\right)\n\\]\nwhich completes the case $k=3$.\n\nIf $k \\geq 4$ we apply the WDVV equation for $(\\gamma_1, \\ldots, \\gamma_k) = ({\\mathsf{p}}^{k-2}, \\ell_1, \\ell_2)$\nfor some $\\ell_1, \\ell_2 \\in H^2(R)$. 
The result is\n\\begin{multline*}\n( \\ell_1 \\cdot \\ell_2 )\n\\big\\langle {\\mathsf{p}}^{k-1} \\big\\rangle_{0,k}\n=\n\\sum_{a+b=k-4} \\binom{k-4}{a} \\Big( \\big\\langle {\\mathsf{p}}^{a+1}, \\ell_1, \\Delta_1 \\big\\rangle_{0,a+2} \\big\\langle {\\mathsf{p}}^{b+1}, \\ell_2, \\Delta_2 \\big\\rangle_{0,b+2} \\\\\n- \\big\\langle {\\mathsf{p}}^{a+2}, \\Delta_1 \\big\\rangle_{0,a+3} \\big\\langle {\\mathsf{p}}^b, \\ell_1, \\ell_2, \\Delta_2 \\big\\rangle_{0,b+1} \\Big).\n\\end{multline*}\nTaking $\\ell_1 \\cdot \\ell_2 = 1$ and arguing by induction on $k$ completes the proof. \\qed\n\n\n\\subsection{Relative in terms of absolute} \\label{Subsection_RES_relative_in_absolute}\nLet $\\leq$ be the lexicographic order on the set of pairs $(k,g)$, i.e.\n\\begin{equation} \\label{lexigraph}\n(k, g) \\leq (k', g') \\quad \\Longleftrightarrow \\quad k < k' \\text{ or } \\big( k=k' \\text{ and } g \\leq g' \\big).\n\\end{equation}\n\nLet $E \\subset R$ be a non-singular fiber of $\\pi : R \\to \\mathbb{P}^1$\nover the point $0 \\in \\mathbb{P}^1$,\nand recall from Section~\\ref{Section_relative_geomtries} the $E$-relative\nGromov--Witten classes\n\\[ {\\mathcal C}_{g,k}^{\\pi\/E}(\\gamma_1, \\ldots, \\gamma_n; \\underline{\\eta})\n \\in H_{\\ast}( {\\overline M}_{g,n}(\\mathbb{P}^1\/0,k ; \\eta) ) \\otimes {\\mathbb{Q}}[[ q^{\\pm \\frac{1}{2}}, \\zeta^{\\pm 1} ]]\n\\]\nwhere $\\underline{\\eta}$ is the ordered cohomology weighted partition \n\\begin{equation} \\label{Eetetaerapart}\n\\underline{\\eta} = \\big( (\\eta_1, \\delta_1), \\ldots, (\\eta_{l(\\eta)}, \\delta_{l(\\eta)} ) \\big), \\quad \\delta_i \\in H^{\\ast}(E).\n\\end{equation}\n\nWe show that the (numerical) quasi-Jacobi form property and the holomorphic anomaly equation\nin the absolute case imply the corresponding relative statements.\nFor the statement and the proof we use the convention of Section~\\ref{Subsection_CONVENTION}.\n\n\\begin{prop} \\label{DEGENERATIONPROP} Let $K,G \\geq 0$ be fixed. 
Assume\n\\[\n\\int_{{\\overline M}_{g,n}(\\mathbb{P}^1, k)} p^{\\ast}(\\alpha) \\cap {\\mathcal C}_{g,k}^{\\pi}(\\gamma_1, \\ldots, \\gamma_n) \n\\in \\frac{1}{\\Delta(q)^{k\/2}} \\mathrm{QJac}_{\\frac{k}{2} Q_{E_8}}\n\\]\nfor all $(k,g) \\leq (K,G)$, $n \\geq 0$, $\\alpha \\in R^{\\ast}({\\overline M}_{g,n})$\nand $\\gamma_1, \\ldots, \\gamma_n \\in H^{\\ast}(R)$.\nThen\n\\[\n\\int_{{\\overline M}_{g,n}(\\mathbb{P}^1\/0, k ; \\underline{\\eta})} p^{\\ast}(\\alpha) \\cap {\\mathcal C}_{g,k}^{\\pi\/E}(\\gamma_1, \\ldots, \\gamma_n; \\underline{\\eta}) \n\\in \\frac{1}{\\Delta(q)^{k\/2}} \\mathrm{QJac}_{\\frac{k}{2} Q_{E_8}}\n\\]\nfor all $(k,g) \\leq (K,G)$, $n \\geq 0$, $\\alpha \\in R^{\\ast}({\\overline M}_{g,n})$,\n$\\gamma_1, \\ldots, \\gamma_n \\in H^{\\ast}(R)$ and cohomology weighted partitions $\\underline{\\eta}$.\n\nSimilarly, if the holomorphic anomaly equation holds numerically for all ${\\mathcal C}_{g,k}^{\\pi}(\\gamma_1, \\ldots, \\gamma_n)$\nwith $(k,g) \\leq (K,G)$, then the relative holomorphic anomaly equation of Conjecture~\\ref{Conj_RELHAE}\nholds numerically for all ${\\mathcal C}_{g,k}^{\\pi\/E}(\\gamma_1, \\ldots, \\gamma_n ; \\underline{\\eta})$ with $(k,g) \\leq (K,G)$.\n\\end{prop}\n\\begin{proof}\nThe degeneration formula applied to the normal cone degeneration\n\\begin{equation} R \\rightsquigarrow R \\cup_E ( \\mathbb{P}^1 \\times E ) \\label{3sdfsdfd} \\end{equation}\nexpresses the absolute invariants of $R$ in terms of the relative invariants of $R\/E$ and $(\\mathbb{P}^1 \\times E)\/E_0$.\nThe quasi-modularity of the invariants of $(\\mathbb{P}^1 \\times E)\/E_0$ relative to $\\mathbb{P}^1$\nfollows from the product formula \\cite{LQ} and \\cite[Thm.2]{HAE}.\nWe may hence view the degeneration formula as a matrix between the absolute and relative (numerical) invariants of $R$\nwith coefficients that are quasi-modular forms.\nBy \\cite[Thm.2]{MP} it is known that the matrix is non-singular: The absolute invariants determine the 
relative invariants of $R$.\nWe only need to check that the absolute terms with $(k,g) \\leq (K,G)$ determine\nthe relative ones of the same constraint,\nand that the quasi-Jacobi form property is preserved by this operation.\nSince $\\mathrm{QJac}_{\\frac{k}{2} Q_{E_8}}$ is a module over $\\mathrm{QMod}$,\nthe second statement is immediate from the induction argument used to prove the first.\nThe first follows from scrutinizing the algorithm in \\cite[Sec.2]{MP} and we only sketch the argument here.\n\nGiven $(k,g) \\leq (K,G)$, a cohomological weighted partition $\\underline{\\eta}$ as in \\eqref{Eetetaerapart},\ninsertions $\\gamma_1, \\ldots, \\gamma_n \\in H^{\\ast}(R)$, and a tautological class $\\alpha \\in R^{\\ast}({\\overline M}_{g,n})$,\nconsider the absolute invariant\n\\begin{multline} \\label{DFSDFas}\n\\Big\\langle \\alpha\\, ;\\, \\prod_{i=1}^{n} \\tau_0(\\gamma_i) \\prod_{i=1}^{l(\\eta)} \\tau_{\\eta_i-1}(j_{\\ast}\\delta_i) \\Big\\rangle^{R}_{g,k} \\\\\n= \\sum_{\\pi_{\\ast} \\beta = \\mathsf{k}} \\zeta^{\\beta} q^{W \\cdot \\beta}\n\\int_{[ {\\overline M}_{g,n+l(\\eta)}(X, \\beta) ]^{\\text{vir}}} p^{\\ast}(\\alpha) \\prod_{i=1}^{n} {\\mathrm{ev}}_i^{\\ast}(\\gamma_i)\n\\prod_{i=1}^{l(\\eta)} \\psi_i^{\\eta_i-1} {\\mathrm{ev}}_i^{\\ast}(j_{\\ast} \\delta_i)\n\\end{multline}\nwhere we used the Gromov--Witten bracket notation of \\cite{MP}, $j : E \\to R$ is the inclusion,\nand $\\psi_i$ are the cotangent line classes on the moduli space of stable maps to $R$.\nBy trading the $\\psi_i$ classes for tautological classes (modulo lower order terms)\nand using the assumption on absolute invariants, we see that\nthe series \\eqref{DFSDFas} is a quasi-Jacobi form of index $\\frac{k}{2} Q_{E_8}$.\nWe apply the degeneration formula with respect to \\eqref{3sdfsdfd} to the invariant \\eqref{DFSDFas}.\nThe cohomology classes are lifted to the total space of the degeneration\nas in \\cite[Sec.2]{MP}, i.e. 
the $\\gamma_i$ are lifted by pullback and the\n$j_{\\ast} \\delta_i$ are lifted by inclusion of the proper transform of $E \\times {\\mathbb{C}}$.\nUsing a bracket notation for relative invariants parallel to the above\\footnote{The bracket notation is\nexplained in more detail in \\cite{MP} with the difference that the ramification profiles\n$\\underline{\\nu}$ are \\emph{ordered} here. This yields slightly different factors\nin the degeneration formula than in \\cite{MP} but is otherwise not important.},\nthe degeneration formula yields\n\\begin{multline} \\label{sdfsdf}\n\\Big\\langle \\alpha ; \\prod_i \\tau_0(\\gamma_i) \\cdot \\prod_{i=1}^{l(\\eta)} \\tau_{\\eta_i-1}(j_{\\ast}\\delta_i) \\Big\\rangle^{R}_{g,k} = \\\\\n\\sum_{\\substack{ m \\geq 0 \\\\ \\nu_1, \\ldots, \\nu_m, \\ell_1, \\ldots, \\ell_m \\\\\ng = g_1 + g_2 + m-1 \\\\\n\\{ 1, \\ldots, n \\} = S_1 \\sqcup S_2 \\\\\n\\alpha_1, \\alpha_2}}\n\\frac{\\prod_{i} \\nu_i}{m!}\n\\Big\\langle \\alpha_1 ; \\tau_0(\\gamma_{S_1}) \\Big| \\underline{\\nu} \\Big\\rangle^{R\/E, \\bullet}_{g_1,k}\n\\Big\\langle \\alpha_2 ; \\tau_0(\\gamma_{S_2}) \\prod_{i=1}^{l(\\eta)} \\tau_{\\eta_i-1}(j_{\\ast}\\delta_i) \\Big| \\underline{\\nu}^{\\vee}\n\\Big\\rangle^{(\\mathbb{P}^1 \\times E)\/E, \\bullet}_{g_2,k}\n\\end{multline}\nwhere $\\nu_1, \\ldots, \\nu_m$ run over all positive integers with sum $k$,\n$\\ell_1, \\ldots, \\ell_m$ run over all diagonal splittings in the cohomology weighted partitions\n\\[\n\\underline{\\nu} =\n( \\nu_i, \\Delta_{E,\\ell_i} )_{i=1}^{m}, \\quad \n\\underline{\\nu}^{\\vee}\n= ( \\nu_i, \\Delta^{\\vee}_{E,\\ell_i} )_{i=1}^{m},\n\\]\nand $\\alpha_1, \\alpha_2$ run over all splittings of the tautological class $\\alpha$.\nThe sum is taken only over those configurations of disconnected curves which yield a connected domain after gluing.\n\nWe argue now by an induction over the relative invariants of $R\/E$ with respect to the lexicographic ordering on $(k,g,n)$.\nIf the invariants of $R\/E$ in 
\\eqref{sdfsdf} (the first factor on the right)\nare disconnected, each connected component is of lower degree over $\\mathbb{P}^1$,\nand therefore these contributions are determined by lower order terms.\nHence we may assume that the invariants of $R\/E$ are connected.\nBy induction over the genus we may further assume $g_1 = g$ in \\eqref{sdfsdf},\nor equivalently $g_2 = 1 - m$.\nConsider a stable relative map in the corresponding moduli space and let \n\\[ f : C_2 \\to (\\mathbb{P}^1 \\times E)[a] \\]\nbe the component which maps to an expanded pair of $(\\mathbb{P}^1 \\times E, E_0)$.\nSince $g_2 = 1-m$ the curve $C_2$\nhas at least\n$m$ connected components of genus $0$. Since each of these meets the relative divisor\nand $l(\\nu) = m$, the curve $C_2$ is a disjoint union of genus $0$ curves.\nThe rational curves in $\\mathbb{P}^1 \\times E$ are fibers of the projection to $E$.\nHence we find the right-hand side in \\eqref{sdfsdf} is a fiber class integral (in the language of \\cite{MP}).\nFinally, by induction over $n$ we may assume $S_2 = \\varnothing$.\nAs in \\cite[Sec.2.3]{MP} we make a further induction over $\\deg(\\underline{\\eta}) = \\sum_i \\deg(\\delta_i)$\nand a lexicographic ordering of the partition parts $\\underline{\\eta}$.\nArguing as in \\cite[Sec.1, Relation 1]{MP}\\footnote{\nUsing the dimension constraint the class $\\alpha_2$ only increases the parts $\\nu_k$, and hence by induction\nwe may assume $\\alpha_2=1$.} we finally arrive at\n\\begin{multline*}\n\\Big\\langle \\alpha ; \\prod_i \\tau_0(\\gamma_i) \\cdot \\prod_{i=1}^{l(\\eta)} \\tau_{\\eta_i-1}(j_{\\ast}\\delta_i) \\Big\\rangle^{R}_{g,k} \\\\\n=\nc \\cdot \\Big\\langle \\alpha ; \\prod_i \\tau_0(\\gamma_i) | \\underline{\\nu} \\Big\\rangle^{R\/E}_{g,k}\n+ \\ldots\n\\end{multline*}\nwhere $c \\in {\\mathbb{Q}}$ is non-zero and '$\\ldots$' is a sum\nof products of quasi-modular forms and relative invariants of $R\/E$ of lower order.\nBy induction the lower order terms are quasi-Jacobi forms of index 
$\\frac{1}{2} k Q_{E_8}$\nwhich completes the proof of the quasi-Jacobi property of the invariants of $R\/E$.\n\nThe relative holomorphic anomaly equation follows immediately from this algorithm and\nthe compatibility with the degeneration formula (Proposition~\\ref{Proposition_Compatibility_with_degeneration_formula}).\n\\end{proof}\n\n\n\\subsection{Proof of Theorem~\\ref{theorem_RES1}}\nAssume that the classes $\\gamma_1, \\ldots, \\gamma_n \\in H^{\\ast}(S)$ and $\\alpha \\in R^{\\ast}({\\overline M}_{g,n})$ are homogeneous.\nWe consider the dimension constraint\n\\begin{equation} \\label{dimensionconstraint}\nk + g - 1 + n = \\deg(\\alpha) + \\sum_{i=1}^{n} \\deg(\\gamma_i)\n\\end{equation}\nwhere $\\deg( )$ denotes half the real cohomological degree.\nThe left-hand side in \\eqref{dimensionconstraint} is the virtual dimension of ${\\overline M}_{g,n}(S, \\beta)$ where $\\pi_{\\ast} \\beta = k$.\nIf the dimension constraint is violated, the\nleft-hand side in Theorem~\\ref{theorem_RES1} vanishes and the claim holds.\nHence we may assume \\eqref{dimensionconstraint}.\n\nWe argue by induction on $(k,g,n)$ with respect to the lexicographic ordering\n\\begin{align*}\n(k_1, g_1, n_1) < (k_2, g_2, n_2) \\quad \\Longleftrightarrow \\quad\n& k_1 < k_2 \\\\\n\\text{ or } & \\big( k_1=k_2 \\text{ and } g_1 < g_2 \\big) \\\\\n\\text{ or } & \\big( k_1=k_2 \\text{ and } g_1 = g_2 \\text{ and } n_1 < n_2 \\big).\n\\end{align*}\n\n\\vspace{5pt}\n\\noindent \\textbf{Case (i):} $g=0$.\n\n\\begin{enumerate}\n\\item[(i-a)] If $k=0$ all invariants vanish, so we may assume $k>0$.\n\n\\item[(i-b)] If $\\deg(\\alpha) > 0$ then $\\alpha$ is the pushforward of a cohomology class from the boundary $\\iota : \\partial {\\overline M}_{0,n} \\to {\\overline M}_{0,n}$:\n\\[ \\alpha = \\iota_{\\ast} \\alpha'. 
\\]\nUsing $\\alpha'$ and the compatibility of the virtual class with boundary restrictions\nwe can replace the left-hand side of Theorem~\\ref{theorem_RES1} by terms of lower order (see \\cite[Sec.3]{HAE} for a parallel argument).\n\n\\item[(i-c)] If $\\deg(\\alpha) = 0$ but $\\deg(\\gamma_i) \\leq 1$ for some $i$, then either\nthe series is zero (if $\\deg(\\gamma_i) = 0$) and the claim holds,\nor we can apply the divisor equation to reduce to lower order terms.\nSince derivatives of quasi-Jacobi forms are quasi-Jacobi forms of the same index the claim follows from the induction hypothesis.\n\n\\item[(i-d)] If $\\deg(\\alpha) = 0$ and $\\gamma_i = {\\mathsf{p}}$ for all $i$ the claim follows by Proposition~\\ref{Prop_Mkquasi}.\n\\end{enumerate}\n\n\\vspace{5pt}\n\\noindent \\textbf{Case (ii):} $g>0$ and $\\deg(\\alpha) \\geq g$.\n\\vspace{5pt}\n\nBy \\cite[Prop.2]{FPrel} we have\n\\[ \\alpha = \\iota_{\\ast} \\alpha' \\]\nfor some $\\alpha'$ where $\\iota : \\partial {\\overline M}_{g,n} \\to {\\overline M}_{g,n}$ is the inclusion of the boundary.\nBy restriction to the boundary we are reduced to lower order terms.\n\n\\vspace{5pt}\n\\noindent \\textbf{Case (iii):} $g>0$ and $\\deg(\\alpha) < g$.\n\\vspace{5pt}\n\nBy the dimension constraint we have\n\\[ \\sum_{i=1}^{n} \\deg(\\gamma_i) - n \\geq k. \\]\nHence after reordering we may assume $\\gamma_1 = \\ldots = \\gamma_k = {\\mathsf{p}}$.\nConsider the degeneration of $R$ to the normal cone of a non-singular fiber $E$,\n\\[ R \\rightsquigarrow R \\cup_E (\\mathbb{P}^1 \\times E). 
\\]\nWe let $\\rho : \\mathbb{P}^1 \\times E \\to \\mathbb{P}^1$\nbe the projection to the first factor and let $E_0$ denote the\nfiber of $\\rho$ over $0 \\in \\mathbb{P}^1$.\nWe apply the degeneration formula \\cite{Junli1, Junli2} where\nwe specialize the insertions $\\gamma_1, \\ldots, \\gamma_k$ to the component $\\mathbb{P}^1 \\times E$\nand lift the other insertions by pullback.\nIn the notation of Section~\\ref{Section_relative_geomtries} the result is\n\\begin{multline} \\label{DEGGGG}\np_{\\ast} {\\mathcal C}^{\\pi}_{g,\\mathsf{k}}(\\gamma_1, \\ldots, \\gamma_n) \\\\\n=\n\\sum_{ \\substack{ \nm \\geq 0 \\\\\n\\eta_1, \\ldots, \\eta_m, \\ell_1, \\ldots, \\ell_m \\\\\n\\{ k+1, \\ldots, n \\} = S_1 \\sqcup S_2 \\\\ g = g_1 + g_2 + m - 1 }}\n\\frac{\\prod_i \\eta_i}{m!}\np_{\\ast} \\xi^{\\mathrm{conn}}_{\\ast} \\left( {\\mathcal C}^{\\pi\/E, \\bullet}_{g_1, \\mathsf{k}}(\\gamma_{S_1}; \\underline{\\eta} ) \\boxtimes {\\mathcal C}^{\\rho\/E_0, \\bullet}_{g_2, \\mathsf{k}}( {\\mathsf{p}}^k, \\gamma_{S_2} ; \\underline{\\eta}^{\\vee}) \\right)\n\\end{multline}\nwhere $\\eta_1, \\ldots, \\eta_m$ run over all positive integers summing up to $k$,\n$\\ell_1, \\ldots, \\ell_m$ run over all diagonal splittings in the partitions\n\\[\n\\underline{\\eta} =\n( \\eta_i, \\Delta_{E,\\ell_i} )_{i=1}^{m}, \\quad \n\\underline{\\eta}^{\\vee}\n= ( \\eta_i, \\Delta^{\\vee}_{E,\\ell_i} )_{i=1}^{m},\n\\]\nthe map $\\xi$ is the gluing map along the relative markings,\nand $\\xi^{\\mathrm{conn}}_{\\ast}$ is pushforward by $\\xi$ followed by\ntaking the summands with connected domain curve.\n\nWe will show that the right-hand side of \\eqref{DEGGGG}, when integrated against any tautological class,\nis a quasi-Jacobi form of index $\\frac{k}{2} Q_{E_8}$.\n\nBy the product formula \\cite{LQ} and \\cite[Thm.2]{HAE}, each term \n\\[ {\\mathcal C}^{\\rho\/E_0, \\bullet}_{g_2, \\mathsf{k}}( {\\mathsf{p}}^k, \\gamma_{S_2} ; \\underline{\\eta}^{\\vee}) \\]\nis a cycle-valued quasi-modular 
form.\nWe consider the first factor\n\\begin{equation} {\\mathcal C}^{\\pi\/E, \\bullet}_{g_1, \\mathsf{k}}(\\gamma_{S_1}; \\underline{\\eta} ) \\label{resdfsd} \\end{equation}\nafter integration against any tautological class. We make two reduction steps:\n\n\\vspace{5pt} \\noindent (1) We may assume \\eqref{resdfsd} are connected Gromov--Witten invariants.\n\n(Proof: The difference between connected and disconnected invariants is a sum of products of connected invariants of $R\/E$\nof degree lower than $k$ over the base. Hence\nby Proposition~\\ref{DEGENERATIONPROP} and the induction hypothesis they are quasi-Jacobi forms\nafter integration against tautological classes.)\n\n\\vspace{5pt} \\noindent (2) We may assume $g_1 = g$.\n\n(Proof: If $g_1 < g$ the claim follows from the induction hypothesis.)\n\nAssume first $k > 0$. Let $\\beta \\in H_2(R,{\\mathbb{Z}})$ be a curve class with $\\pi_{\\ast} \\beta = k$.\nLet $L \\in \\mathop{\\rm Pic}\\nolimits(R)$ be the line bundle with $c_1(L) = \\beta$.\nConsider a relative stable map to an extended relative pair of $(R,E)$ in class $\\beta$,\n\\[ f : C \\to R[a]. \\]\nSince $R$ is rational, the universal family of curves on $R$ in a given class is a linear system.\nHence the intersection of $f(C)$ with the distinguished relative divisor $E \\subset R[a]$ satisfies\n\\[ {\\mathcal O}_E( f(C) \\cap E ) = L|_{E}. \\]\nLet $x_1, \\ldots, x_k \\in E$ be fixed points with ${\\mathcal O}_E(x_1 + \\ldots + x_k) \\neq L|_{E}$.\nIt follows that no stable relative map in class $\\beta$ is incident to $(x_1, \\ldots, x_k)$ at the relative divisor.\nWe conclude \n\\[ \\left[ {\\overline M}_{g,n}(R\/E, \\beta ; (1)^k) \\right]^{\\text{vir}} \\prod_{i=1}^{k} {\\mathrm{ev}}_i^{\\text{rel}\\ast}( [x_i] )\\, = \\, 0 \\]\nwhich implies the claim.\n\nIt remains to consider the case $k=0$. We have the equality of moduli spaces\n\\[\n{\\overline M}_{g,n}(R\/E, dF ; ()) = {\\overline M}_{g,n}(R, dF). 
\n\\]\nUnder this identification the obstruction sheaf of stable maps to $R$ relative to $E$\nfor a fixed source curve $C$ is\n\\[ \\mathrm{Ob}_{C,f} = H^1( C, f^{\\ast} T_{R\/E} ) \\]\nwhere $T_{R\/E} = \\Omega_R(\\log E)^{\\vee}$\nis the log tangent bundle relative to $E$.\nSince $K_R + E = 0$ there exists a meromorphic $2$-form \n\\[ \\sigma \\in H^0(R, \\Omega_R^2(E)) \\]\nwith a simple pole along $E$ and nowhere vanishing outside $E$.\nBy the construction \\cite[Sec.4.1.1]{STV}\nthe form $\\sigma$ yields a surjection\n\\[ \\mathrm{Ob}_{C,f} \\to {\\mathbb{C}} \\]\nwhich in turn induces a\nnowhere-vanishing cosection of the perfect obstruction theory on the moduli space.\nBy \\cite{KL} we conclude\n\\[ [ {\\overline M}_{g,n}(R\/E, dF ; ()) ]^{\\text{vir}} = 0, \\]\nwhich implies the claim.\n\\end{proof}\n\n\\subsection{Proof of Theorem~\\ref{theorem_RES2}}\nThe holomorphic anomaly equation is implied by the following compatibilities\nwhich cover all steps in the algorithm used in the proof of Theorem~\\ref{theorem_RES1}:\n\\begin{itemize}\n\\item The compatibility with boundary restrictions (parallel to \\cite[Sec.2.5]{HAE}).\n\\item The compatibility with the degeneration formula (Proposition~\\ref{DEGENERATIONPROP}).\n\\item The compatibility with the WDVV equation (a special case of the first item).\n\\item The compatibility with the divisor equation (follows by proving a refined weight statement parallel to \\cite[Sec.3]{HAE}).\n\\item The holomorphic anomaly equation holds for $\\int {\\mathcal C}_{0,1}() = \\Theta_{E_8} \\Delta^{-1\/2}$. 
\\qed\n\\end{itemize}\n\n\n\n\n\\section{The Schoen Calabi--Yau threefold}\n\\label{Section_Schoen_variety}\n\n\\subsection{Preliminaries}\nLet $X = R_1 \\times_{\\mathbb{P}^1} R_2$ be a Schoen Calabi--Yau\nand recall the notation from Section~\\ref{intro:schoen_calabiyau}.\nIn particular we have the commutative diagram of fibrations\n\\begin{equation}\n\\label{Diag2}\n\\begin{tikzcd}\n& X \\ar[swap]{dl}{\\pi_2} \\ar{dr}{\\pi_1} \\ar{dd}{\\pi} & \\\\\nR_1 \\ar[swap]{dr}{p_1} & & R_2 \\ar{dl}{p_2} \\\\\n& \\mathbb{P}^1 &\n\\end{tikzcd}\n\\end{equation}\n\nLet $\\alpha \\in H_2(R_1, {\\mathbb{Z}})$ be a curve class. For all $(g, \\alpha) \\notin \\{ (0,0), (1,0) \\}$ define\n\\[\n\\mathsf{F}_{g,\\alpha}(\\mathbf{z}_2, q_2) = \\int {\\mathcal C}^{\\pi_2}_{g,\\alpha}() \n= \n\\sum_{\\pi_{2 \\ast} \\beta = \\alpha}\nq_2^{W_2 \\cdot \\beta} \\zeta_2^{\\beta} \\int_{[ {\\overline M}_{g}(X,\\beta) ]^{\\text{vir}}} 1.\n\\]\nFor all $(g, k) \\notin \\{ (0,0), (1,0) \\}$ we have\n\\begin{equation} \\label{4tsfgsdf}\n\\mathsf{F}_{g,k}(\\mathbf{z}_1, \\mathbf{z}_2, q_1, q_2)\n=\n\\sum_{\\substack{ \\alpha \\in H_2(R_1, {\\mathbb{Z}}) \\\\ p_{1 \\ast} \\alpha = k}} \\mathsf{F}_{g,\\alpha}(\\mathbf{z}_2, q_2) q_1^{W_1 \\cdot \\alpha} e( \\mathbf{z}_1 \\cdot \\alpha).\n\\end{equation}\n\nWe first prove a weaker version of Theorem~\\ref{Thm1}.\n\\begin{prop} We have\n\\[\n\\mathsf{F}_{g,k} \\in\n\\frac{1}{\\Delta(q_1)^{k\/2}} \\mathrm{QJac}_{\\frac{k}{2} Q_{E_8}}^{(q_1, \\mathbf{z}_1)}\n\\otimes\n\\frac{1}{\\Delta(q_2)^{k\/2}} \\mathrm{QJac}_{\\frac{k}{2} Q_{E_8}}^{(q_2, \\mathbf{z}_2)}.\n\\]\n\\end{prop}\n\n\\begin{proof}\nThe Schoen Calabi--Yau can be written as a complete intersection\n\\[ X \\subset \\mathbb{P}^1 \\times \\mathbb{P}^2 \\times \\mathbb{P}^2 \\]\ncut out by sections of tri-degree $(1,3,0)$ and $(1,0,3)$. 
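For orientation, the Calabi--Yau condition can be verified directly from this complete intersection description by adjunction (a standard check, not needed for the argument below):

```latex
K_X = \bigl( K_{\mathbb{P}^1 \times \mathbb{P}^2 \times \mathbb{P}^2}
      \otimes \mathcal{O}(1,3,0) \otimes \mathcal{O}(1,0,3) \bigr)\big|_X
    = \bigl( \mathcal{O}(-2,-3,-3) \otimes \mathcal{O}(2,3,3) \bigr)\big|_X
    = \mathcal{O}_X .
```

Here $\mathcal{O}(a,b,c)$ denotes the line bundle of tri-degree $(a,b,c)$ on the ambient product.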
Hence\nthere exist smooth elliptic fibers $E_i \\subset R_i$ of $\\pi_i$ for $i=1,2$ and a degeneration\n\\begin{equation} X \\rightsquigarrow (R_1 \\times E_2) \\cup_{E_1 \\times E_2} (E_1 \\times R_2) \\label{DEGGGG} \\end{equation}\nwhich is compatible with the fibration structure of diagram~\\eqref{Diag2}.\n\nThe degeneration formula applied with respect to this degeneration yields\n\\begin{equation} \\label{dfsdfds}\n\\mathsf{F}_{g,k} = \n\\sum_{\\substack{m \\geq 0 \\\\ \\eta_1, \\ldots, \\eta_m, \\ell_1, \\ldots, \\ell_m \\\\ g = g_1 + g_2 + l(\\underline{\\eta}) - 1}}\n\\frac{\\prod_i \\eta_i}{m!}\n\\big\\langle \\varnothing \\big| \\underline{\\eta} \\big\\rangle^{(R_1 \\times E_2) \/ (E_1 \\times E_2), \\bullet}_{g_1, k}\n\\big\\langle \\varnothing \\big| \\underline{\\eta}^{\\vee} \\big\\rangle^{(E_1 \\times R_2) \/ (E_1 \\times E_2), \\bullet}_{g_2, k}\n\\end{equation}\nwhere $\\eta_1, \\ldots, \\eta_m$ run over all positive integers summing up to $k$,\nthe $\\ell_1, \\ldots, \\ell_m$ run over all diagonal splittings in the weighted partitions\n\\[ \\underline{\\eta} = (\\eta_i, \\Delta_{E_1 \\times E_2, \\ell_i})_{i=1}^m, \\quad \n \\underline{\\eta}^{\\vee} = (\\eta_i, \\Delta^{\\vee}_{E_1 \\times E_2, \\ell_i})_{i=1}^m,\n\\]\nand the sum is over those disconnected stable maps on each side which yield a connected\ndomain after gluing (the bullet $\\bullet$ reminds us of the disconnected invariants);\nmoreover we have used\n\\begin{multline*}\n\\big\\langle \\varnothing \\big| \\underline{\\eta} \\big\\rangle^{(R_1 \\times E_2) \/ (E_1 \\times E_2), \\bullet}_{g_1, k} \\\\\n=\n\\sum_{\\pi_{\\ast} \\beta = k}\n\\big\\langle \\varnothing \\big| \\underline{\\eta} \\big\\rangle^{(R_1 \\times E_2) \/ (E_1 \\times E_2), \\bullet}_{g_1, \\beta}\nq_1^{W_1^{(R_1)} \\cdot \\beta} q_2^{W_2^{(E_2)} \\cdot \\beta} \\exp( \\mathbf{z}_1 \\cdot \\beta )\n\\end{multline*}\nwhere we use the Gromov--Witten bracket notation on the right side and\n\\[ W_1^{(R_1)}, 
W_2^{(E_2)} \\in H^2(R_1 \\times E_2) \\]\nare the pullbacks of $W_1 \\in H^2(R_1)$ and the point class $[0] \\in H^2(E)$\nrespectively.\nThe definition of the second factor in \\eqref{dfsdfds} is parallel.\n\nWe will show\n\\begin{equation} \\label{dfsdfdsrr4444}\n\\big\\langle \\varnothing \\big| \\underline{\\eta} \\big\\rangle^{(R_1 \\times E_2) \/ (E_1 \\times E_2), \\bullet}_{g_1, k}\n\\in \\frac{1}{\\Delta(q_1)^{k\/2}} \\mathrm{QJac}_{\\frac{k}{2} Q_{E_8}}^{(q_1, \\mathbf{z}_1)} \\otimes \\mathrm{QMod}^{(q_2)}.\n\\end{equation}\nBy an induction argument it is enough to prove the statement for connected Gromov--Witten invariants.\nLet us write\n\\[ \\underline{\\eta} = ( \\eta_i, c_i \\otimes d_i )_{i=1}^{m}, \\quad c_i \\in H^{\\ast}(E_1), d_i \\in H^{\\ast}(E_2). \\]\nThen the relative product formula \\cite{LQ} yields\n\\[\n\\big\\langle \\varnothing \\, \\big| \\, \\underline{\\eta} \\big\\rangle^{(R_1 \\times E_2) \/ (E_1 \\times E_2)}_{g_1, k}\n=\n\\int_{{\\overline M}_{g_1,m}}\np_{\\ast} {\\mathcal C}^{R_1\/E_1}_{g_1,k}\\big( \\varnothing \\, ; (\\eta_i, c_i)_{i} \\big) \\cdot {\\mathcal C}^{E_2}_{g_1}(d_1, \\ldots, d_m)\n\\]\nwhere $p$ is the forgetful map to ${\\overline M}_{g_1,m}$.\nBy \\cite{Jtaut,HAE} the class ${\\mathcal C}^{E_2}_{g_1}(d_1, \\ldots, d_m)$\nis a linear combination of tautological classes with coefficients that are quasi-modular forms.\nUsing Theorem~\\ref{theorem_RES1} and Proposition~\\ref{DEGENERATIONPROP} we obtain \\eqref{dfsdfdsrr4444}.\n\nBy an identical argument for $E_1 \\times R_2$ we conclude that\n\\[\n\\mathsf{F}_{g,k} \\in\n\\frac{1}{\\Delta(q_1)^{k\/2}} \\mathrm{QJac}_{\\frac{k}{2} Q_{E_8}}^{(q_1, \\mathbf{z}_1)}\n\\otimes\n\\frac{1}{\\Delta(q_2)^{k\/2}} \\mathrm{QJac}_{\\frac{k}{2} Q_{E_8}}^{(q_2, \\mathbf{z}_2)}. \\qedhere\n\\]\n\\end{proof}\n\n\\subsection{Proof of Theorem~\\ref{Thm1}}\nWe first show that the classes ${\\mathcal C}^{\\pi_2}_{g,\\alpha}()$ satisfy the holomorphic anomaly equation numerically, i.e. 
after taking degrees.\nUsing the degeneration \\eqref{DEGGGG}\nand the compatibility of the holomorphic anomaly equation with the degeneration formula (Proposition~\\ref{Proposition_Compatibility_with_degeneration_formula}),\nthe holomorphic anomaly equation for $\\int {\\mathcal C}^{\\pi_2}_{g,\\alpha}$ follows from the\nholomorphic anomaly equations for the elliptic fibrations\n\\[ \\mathrm{pr}_1 : R_1 \\times E_2 \\to R_1, \\quad \\quad \\mathrm{id}_{E_1} \\times p_2 : E_1 \\times R_2 \\to E_1 \\times \\mathbb{P}^1 \\]\nrelative to $E_1 \\times E_2$.\nTo show the holomorphic anomaly equation for $R_1 \\times E_2$ (relative to $E_1 \\times E_2$)\nwe again apply the product formula \\cite{LQ} and use the holomorphic anomaly equation for the elliptic curve \\cite{HAE}.\nFor $E_1 \\times R_2$ we apply the product formula and Theorem~\\ref{theorem_RES2}.\nHence ${\\mathcal C}^{\\pi_2}_{g,\\alpha}()$ satisfies the holomorphic anomaly equation numerically.\n\nFrom Lemma*~\\ref{Lemma_EllHAE} after numerical specialization it follows that \n\\[ \\mathsf{F}_{g,\\alpha} \\in \\bigcap_{\\lambda \\in E_8^{(2)}} \\mathrm{Ker}( \\mathsf{T}_{\\lambda} ) \\]\nor equivalently, that $\\mathsf{F}_{g,\\alpha}$ satisfies the elliptic transformation law.\\footnote{\nSince $\\mathsf{F}_{g,\\alpha}$ is invariant under translation by sections of $\\pi_2$\nthis also follows from Section~\\ref{Section_The_elliptic_transformation_law}.\n}\nBy \\eqref{4tsfgsdf} and since $\\mathsf{F}_{g,k}$ is symmetric under exchanging $(\\mathbf{z}_1, q_1)$ and $(\\mathbf{z}_2, q_2)$ we obtain\n\\[ \\mathsf{F}_{g,k} \\in \\bigcap_{\\lambda_1 \\in E_8^{(1)}, \\lambda_2 \\in E_8^{(2)} }\n\\mathrm{Ker}\\left( \\mathsf{T}_{\\lambda_1} \\otimes \\mathsf{T}_{\\lambda_2} \\right).\n\\]\nSimilarly, the series $\\mathsf{F}_{g,k}$ is invariant under reflection along the elliptic fibers of $\\pi_1$ and $\\pi_2$.\nSince every reflection along a root can be written as a composition of translation and reflection at the 
origin,\nwe conclude that\n\\[ \\mathsf{F}_{g,k} \\in \n \\frac{1}{\\Delta(q_1)^{k\/2}} \\widetilde{\\mathrm{Jac}}_{E_8,k}^{(q_1, \\mathbf{z}_1)}\n \\otimes \\frac{1}{\\Delta(q_2)^{k\/2}} \\widetilde{\\mathrm{Jac}}_{E_8, k}^{(q_2, \\mathbf{z}_2)}.\n\\]\nFinally, the weight of the bi-quasi-Jacobi form follows from the holomorphic anomaly equation, see Section~\\ref{Subsection_weight_refinement} and \\cite[Sec.2.6]{HAE}. \\qed\n\n\n\n\n\\subsection{Proof of Theorem~\\ref{Thm2}}\nAssume first $g > 2$ or $k > 0$.\nUsing \\eqref{4tsfgsdf} and Proposition*~\\ref{Prop_HAE_for_CY3}\\footnote{\nIn the proof of Theorem~\\ref{Thm1} \nwe have shown that Conjectures~\\ref{Conj_Quasimodularity} and~\\ref{Conj_HAE}\nhold for the Schoen Calabi--Yau numerically. Hence\nwe may apply Proposition*~\\ref{Prop_HAE_for_CY3} unconditionally.}\nwe find\n\\begin{multline} \\label{3dfsdfa}\n\\frac{d}{d C_2(q_2)} \\mathsf{F}_{g,\\mathsf{k}} \\\\\n=\n\\sum_{p_{1 \\ast} \\alpha = k} q_1^{W_1 \\cdot \\alpha} \\zeta_1^{\\alpha}\n\\Bigg[\n\\langle K_{R_1} + \\alpha, \\alpha \\rangle \\mathsf{F}_{g-1, \\alpha} +\n\\sum_{\\substack{g=g_1 + g_2 \\\\ \\alpha = \\alpha_1 + \\alpha_2}} \\langle \\alpha_1, \\alpha_2 \\rangle \\mathsf{F}_{g_1, \\alpha_1} \\mathsf{F}_{g_2, \\alpha_2}\n\\Bigg].\n\\end{multline}\nWe analyze the terms on the right side.\nIf we write $\\alpha = k W + d F + \\alpha_0$ for some $d \\geq 0$ and $\\alpha_0 \\in E_8^{(1)}$ then we have\n\\[ \\langle \\alpha, \\alpha \\rangle = 2 k d + \\langle \\alpha_0, \\alpha_0 \\rangle, \\quad \\langle K_{R_1}, \\alpha \\rangle = -k. 
\\]\nHence the first term in the bracket on the right of \\eqref{3dfsdfa} can be written as\n\\begin{multline*}\n\\sum_{p_{1 \\ast} \\alpha = k} q_1^{W_1 \\cdot \\alpha} \\zeta_1^{\\alpha}\n\\langle K_{R_1} + \\alpha, \\alpha \\rangle \\mathsf{F}_{g-1, \\alpha} \\\\\n=\n\\left( -k + 2 k D_{q_1} - \\sum_{i,j=1}^{8} \\left( Q_{E_8}^{-1} \\right)_{ij} D_{\\mathbf{z}_{1,i}} D_{\\mathbf{z}_{1,j}} \\right) \\mathsf{F}_{g-1, k}.\n\\end{multline*}\nWith a similar argument the sum\n\\[\n\\sum_{p_{1 \\ast} \\alpha = k} q_1^{W_1 \\cdot \\alpha} \\zeta_1^{\\alpha}\n\\sum_{\\substack{g=g_1 + g_2, \\alpha = \\alpha_1 + \\alpha_2 \\\\\n\\forall i \\in \\{ 1, 2 \\} \\colon g_i \\geq 2 \\text{ or } p_{1 \\ast} \\alpha_i > 0}}\n\\langle \\alpha_1, \\alpha_2 \\rangle \\mathsf{F}_{g_1, \\alpha_1} \\mathsf{F}_{g_2, \\alpha_2}\n\\]\nyields exactly the second term on the right in Theorem~\\ref{Thm2}.\nUsing Lemma~\\ref{Lemma_Calculation_of_fiber} below, the remaining terms are\n\\begin{align*}\n& 2 \\sum_{p_{1 \\ast} \\alpha = k} q_1^{W_1 \\cdot \\alpha} \\zeta_1^{\\alpha}\n\\sum_{\\substack{ g' \\in \\{ 0, 1\\} \\\\ \\ell \\geq 1 }} \\langle \\alpha - \\ell F_1, \\ell F_1 \\rangle \\mathsf{F}_{g-g', \\alpha - \\ell F_1} \\mathsf{F}_{g', \\ell F_1} \\\\\n=\\ & 2 \\sum_{p_{1 \\ast} \\alpha = k} q_1^{W_1 \\cdot \\alpha} \\zeta_1^{\\alpha} \\sum_{\\ell \\geq 1} k \\ell \\cdot \\mathsf{F}_{g-1, \\alpha - \\ell F_1} \\cdot 12 \\frac{\\sigma(\\ell)}{\\ell} \\\\\n=\\ & \\left( 24 k \\sum_{\\ell \\geq 1} \\sigma(\\ell) q_1^{\\ell} \\right) \\mathsf{F}_{g-1, k}.\n\\end{align*}\nPutting all three expressions together yields the desired expression. \n\nFinally, if $g=2$ and $k=0$, a similar analysis shows\n\\[ \n\\pushQED{\\qed} \n\\frac{d}{d C_2(q_2)} \\mathsf{F}_{2,0} = 0. 
\\qedhere \n\\popQED\n\\]\n\n\n\\begin{lemma} \\label{Lemma_Calculation_of_fiber} For all $(\\ell_1, \\ell_2) \\neq 0$ we have\n\\[\n\\mathsf{N}^X_{g, \\ell_1 F_1 + \\ell_2 F_2}\n=\n\\begin{cases}\n12 \\delta_{\\ell_1 0} \\frac{\\sigma(\\ell_2)}{\\ell_2} + 12 \\delta_{\\ell_2 0} \\frac{\\sigma(\\ell_1)}{\\ell_1} & \\text{ if } g = 1 \\\\\n0 & \\text{ if } g \\neq 1.\n\\end{cases}\n\\]\n\\end{lemma}\n\\begin{proof}\nUsing the degeneration \\eqref{DEGGGG} we have\n\\begin{align*}\n\\mathsf{N}^X_{g, \\ell_1 F_1 + \\ell_2 F_2}\n& = \n\\big\\langle \\varnothing \\big| \\varnothing \\big\\rangle^{(R_1 \\times E_2) \/ (E_1 \\times E_2)}_{g, \\ell_1 F_1 + \\ell_2 F_2}\n+ \\big\\langle \\varnothing \\big| \\varnothing \\big\\rangle^{(E_1 \\times R_2) \/ (E_1 \\times E_2)}_{g, \\ell_1 F_1 + \\ell_2 F_2}.\n\\end{align*}\nBecause the surface $E_1 \\times E_2$ carries a holomorphic symplectic form,\nall Gromov--Witten invariants of $\\mathbb{P}^1 \\times E_1 \\times E_2$\nwith non-trivial curve degree over $E_1 \\times E_2$ vanish. 
Hence by a degeneration argument we have\n\\[ \n\\big\\langle \\varnothing \\big| \\varnothing \\big\\rangle^{(R_1 \\times E_2) \/ (E_1 \\times E_2)}_{g, \\ell_1 F_1 + \\ell_2 F_2}\n= \\big\\langle \\varnothing \\big\\rangle^{R_1 \\times E_2}_{g, \\ell_1 F_1 + \\ell_2 F_2}.\n\\]\nThe expression for the second term is parallel.\nNow the result follows by adding in markings, using the divisor equation and applying the product formula.\n\\end{proof}\n\n\\subsection{Proof of Corollary~\\ref{Cor_Schoen}}\nSince the series $\\mathsf{F}_{g,\\alpha}$ satisfies the holomorphic anomaly equation,\nthe disconnected series $F^{\\bullet}_{g,\\alpha}$ satisfies \\eqref{352rw}.\nThe claim now follows from Lemma~\\ref{Lemma_QJAc_ModTrLaw}.\n\n\\begin{comment}\nConsider now the Schoen Calabi--Yau threefold $\\pi_2 : X \\to R_1$.\nFor any $\\alpha \\in H_2(R_1)$ consider the all genus series\n\\[ F^{\\bullet}_{\\alpha}(z,\\mathbf{z}, \\tau) =\n\\sum_{g} \\mathsf{F}_{g,\\alpha}^{\\bullet}(\\mathbf{z},\\tau) u^{2g-2} \\]\nwhere we let $u = 2 \\pi z$ and $q = e^{2 \\pi i \\tau}$\nEquation \\eqref{352rw} yields\n\\begin{multline} \\label{mooo}\nF^{\\bullet}_{\\alpha}\\left( \\frac{z}{c \\tau + d},\n\\frac{\\mathbf{z}_2}{c \\tau + d},\n\\frac{a \\tau + b}{c \\tau + d} \\right) \\\\\n=\ne\\left( \\frac{c}{2 (c \\tau + d)} \\Big[ k \\mathbf{z}_2^T Q_{E_8} \\mathbf{z}_2 +\nz^2 \\langle \\alpha_k - c_1(R_1), \\alpha_k \\rangle \\Big] \\right)\nF^{\\bullet}_{\\alpha}(z,\\mathbf{z}, \\tau)\n\\end{multline}\nThis is the modular transformation law for Jacobi forms\nof weight $0$ and index \n\\[ \\Big( \\frac{1}{2} \\langle \\alpha_k - c_1(R_1), \\alpha_k \\rangle \\Big) \\bigoplus \\frac{k}{2} Q_{E_8}. 
\\]\n\nThe partition function $\\mathsf{Z}_k$ defined in the introduction\nis related to $\\mathsf{F}_{\\alpha}^{\\bullet}$ via\n\\[\n\\mathsf{Z}_k(z, \\mathbf{z}_1, \\mathbf{z}_2, \\tau_1, \\tau_2)\n=\n\\frac{1}{\\Delta(q_2)^{\\frac{1}{2}}} \\sum_{\\alpha_k} \\mathsf{F}_{\\alpha_k}^{\\bullet}(\nz, \\mathbf{z}_2, \\tau_2) q_1^{W_1 \\cdot \\alpha_k} e( \\mathbf{z}_1 \\cdot \\alpha_k).\n\\]\nwhere the sum is over all $\\alpha_k \\in H_2(R_1)$ of degree $k$ over $\\mathbb{P}^1$.\nDoes \\eqref{mooo} imply a transformation law for $\\mathsf{Z}_k$?\n\\end{comment}\n\n\n\\begin{comment}\n\\subsection{Examples and Siegel modular forms}\n\\label{Subsection_examples_and_siegel_modular_forms}\nLet $X$ be the Schoen Calabi--Yau threefold and recall from \\eqref{Defn_Fgk}\nits relative Gromov--Witten potentials $\\mathsf{F}_{g,k}$.\nWe describe $\\mathsf{F}_{g,k}$\nin degrees $k \\in \\{ 0,1 \\}$ and discuss a prediction for $k \\geq 2$.\n\n\\subsubsection{Degree $k=0$}\nEvery curve class $\\beta \\in H_2(X,{\\mathbb{Z}})$ with $\\pi_{\\ast} \\beta = 0$ is of the form \n\\[ \\beta = d_1 F_1 + d_2 F_2, \\quad d_1, d_2 \\geq 0, \\]\nwhere $F_i$ is the fiber of $\\pi_i$.\nBy an application of the degeneration formula we find\n$\\mathsf{N}_{g,\\beta} = 0$ for all $g \\neq 1$. In genus $1$ we obtain\n\\begin{equation} \\label{Deg00ev}\n\\sum_{(d_1,d_2) > 0} \\mathsf{N}_{1, d_1 F_1 + d_2 F_2} q_1^{d_1} q_2^{d_2}\n=\n12 \\sum_{m,n \\geq 1} \\frac{q_1^{mn}}{n} + 12 \\sum_{m,n \\geq 1} \\frac{q_2^{mn}}{n}.\n\\end{equation}\nThe right hand side is not a quasi-modular form. 
However, this does not contradict Theorem~\\ref{Thm1} since the potential $\\mathsf{F}_{g=1,k=0}$ is not defined.\nQuasi-modularity holds only after adding insertions,\nsee \\cite[App.B]{HAE} for a discussion.\n\n\\subsubsection{Degree $k=1$}\nRecall from \\cite[Sec.0.2]{HAE} a particular genus $2$ Siegel modular form, the Igusa cusp form\n\\[ \\chi_{10}(p,q_1, q_2) = \\prod_{(k,d_1, d_2)>0} (1-p^k q_1^{d_1} \\tilde{q_2}^{d_2})^{c(4 d_1 d_2-k^2)} \\]\nwhere $c(n)$ are coefficients of a certain $\\Gamma_0(4)$-modular form.\n\nBy the Igusa cusp form conjecture \\cite[Thm.1]{HAE}, a degeneration argument\nand an analysis of the sections of $\\pi$ we have\n\\begin{equation} \\label{Deger3sdf}\n\\frac{1}{\\Delta(q_1)^{\\frac{1}{2}} \\Delta(q_2)^{\\frac{1}{2}}}\n\\sum_{g \\geq 0} \\mathsf{F}_{g,1}(q_1, q_2, \\mathbf{z}_1, \\mathbf{z}_2) u^{2g-2}\n=\n\\frac{\\Theta_{E_8}(\\mathbf{z}_1, q_1) \\Theta_{E_8}(\\mathbf{z}_2, q_2)}{\\chi_{10}(e^{iu},q_1, q_2)}.\n\\end{equation}\n\nThe evaluation \\eqref{Deger3sdf} exhibits the following properties and symmetries.\n\\begin{enumerate}\n \\item[(i)] Each $u^{2g-2}$-coefficient is a $E_8 \\times E_8$ bi-quasi-Jacobi form (of specified weight and index).\n \\item[(ii)] Each $u^{2g-2}$-coefficient satisfies a holomorphic anomaly equation with respect to $\\pi_1$ and $\\pi_2$.\n \\item[(iii)] The elliptic transformation law in the $(u,q_i)$-variables holds.\\footnote{\n This property is the expected symmetry of Pandharipande--Thomas invariants under\n Fourier--Mukai transforms with respect to the Poincar\\'e sheaves of $\\pi_1$ and $\\pi_2$, see \\cite{OS1}.}\n \\item[(iv)] Symmetry under interchanging $q_1$ and $q_2$ (corresponding to the symmetry of $X$ in the $R_i$).\n\\end{enumerate}\n\n\\subsubsection{Degree $k \\geq 2$}\nConsider the all genus degree $k$ potential\n\\[ \\mathsf{F}_k(q_1, q_2, \\mathbf{z}_1, \\mathbf{z}_2, u)\n = \\sum_{g \\geq 0} \\sum_{\\substack{\\beta > 0 \\\\ \\pi_{\\ast} \\beta = k}} \\mathsf{N}_{g,\\beta}\n 
u^{2g-2} q_1^{W_1 \\cdot \\beta} q_2^{W_2 \\cdot \\beta} \\exp( ( \\mathbf{z}_1 + \\mathbf{z}_2) \\cdot \\beta ).\n\\]\nThe partition function of disconnected invariants is defined by\n\\begin{equation} \\label{dfdfsdfs}\n\\sum_{k \\geq 0} \\mathsf{Z}_k(q_1, q_2, \\mathbf{z}_1, \\mathbf{z}_2, u) t^k\n =\n \\exp\\left( \\sum_{k \\geq 0} \\mathsf{F}_k(q_1, q_2, \\mathbf{z}_1, \\mathbf{z}_2, u) t^k \\right)\n\\end{equation}\nand equals the generating series of Pandharipande--Thomas invariants of $X$ under a variable change. \nIn degree $0$ the evaluation \\eqref{Deg00ev} yields\n\\[ \\mathsf{Z}_0 = \\frac{1}{\\Delta(q_1)^{\\frac{1}{2}} \\Delta(q_2)^{\\frac{1}{2}}}. \\]\nIn degree $1$ the evaluation \\eqref{Deger3sdf} yields\n\\[ \\mathsf{Z}_1 = \\frac{\\Theta_{E_8}(\\mathbf{z}_1, q_1) \\Theta_{E_8}(\\mathbf{z}_2, q_2)}{\\chi_{10}(e^{iu},q_1, q_2)}. \\]\nBoth evaluations satisfy properties (i-iv).\n\nBy Theorem~\\ref{Thm1} and~\\ref{Thm2} and the symmetry of $X$\nall partition functions $\\mathsf{Z}_k$ satisfy properties (i, ii, iv).\nBy the Huang--Katz--Klemm conjecture \\cite{HKK} the series $\\mathsf{Z}_k$ also satisfies (iii),\nand a strategy for proving it using derived equivalences was proposed in \\cite{OS1}.\nWe hence expect $\\mathsf{Z}_k$ to be a (meromorphic) $E_8 \\times E_8$ quasi-Siegel form.\n\nIt would be interesting to understand the modular properties of the full partition function~\\eqref{dfdfsdfs}.\n\\end{comment}\n\n\n\n\\section{Abelian surfaces} \\label{Section_Abelian_surfaces}\n\\subsection{Overview}\nWe present (Section~\\ref{Subsection_ab_statement}) and prove numerically\n(Section~\\ref{Subsection_ab_proof}) the holomorphic anomaly equation\nfor the reduced Gromov--Witten theory of abelian surfaces in primitive classes.\nThe quasi-modularity of the theory was proven previously in \\cite{BOPY}.\nThe result and strategy of proof are almost identical to the\ncase of K3 surfaces, which appeared in detail in \\cite[Sec.0.6]{HAE}, so we will 
be brief.\nSince we work with reduced Gromov--Witten theory,\nan additional term appears in the holomorphic anomaly equation\nfor both abelian and K3 surfaces.\nThis term appeared somewhat mysteriously in \\cite{HAE} in the form of a certain operator $\\sigma$.\nIn Section~\\ref{Subsection_ab_explain} we explain how it arises\nnaturally from the theory of quasi-Jacobi forms.\n\n\\subsection{Results}\\label{Subsection_ab_statement}\nLet $E_1, E_2$ be non-singular elliptic curves and consider the abelian surface\n\\[ \\mathsf{A} = E_1 \\times E_2 \\]\nelliptically fibered over $E_1$ via the projection $\\pi$ to the first factor,\n\\[ \\pi : \\mathsf{A}\\to E_1. \\]\nLet $0_{E_2} \\in E_2$ be the zero and fix the section\n\\[ \\iota : E_1 = E_1 \\times 0_{E_2} \\hookrightarrow \\mathsf{A}. \\]\nA pair of integers $(d_1,d_2)$ determines a class in $H_2(\\mathsf{A},{\\mathbb{Z}})$ by\n\\[ (d_1, d_2) = d_1 \\iota_{\\ast} [E_1] + d_2 j_{\\ast} [E_2] \\]\nwhere $j : 0_{E_1} \\times E_2 \\to \\mathsf{A}$ is the inclusion.\n\nSince $\\mathsf{A}$ carries a holomorphic symplectic form, the virtual fundamental class of\n${\\overline M}_{g,n}(\\mathsf{A}, \\beta)$ vanishes if $\\beta \\neq 0$. 
A nontrivial\nGromov--Witten theory of $\\mathsf{A}$ is defined by the \\emph{reduced} virtual class\n$[ {\\overline M}_{g,n}(\\mathsf{A}, \\beta) ]^{\\text{red}}$, see \\cite{BOPY} for details.\nFor any $\\gamma_1, \\ldots, \\gamma_n \\in H^{\\ast}(\\mathsf{A})$ define the reduced primitive potential\n\\begin{multline*} {\\mathcal A}_{g}(\\gamma_1, \\ldots, \\gamma_n)\n = \\sum_{d=0}^{\\infty} q^d \\pi_{\\ast}\\left( \\left[ {\\overline M}_{g,n}\\left(\\mathsf{A}, (1,d) \\right) \\right]^{\\text{red}} \\prod_{i=1}^{n} {\\mathrm{ev}}_i^{\\ast}(\\gamma_i) \\right) \\\\\n \\in H_{\\ast}({\\overline M}_{g,n}(E_1, 1))[[q]].\n\\end{multline*}\nBy deformation invariance the classes ${\\mathcal A}_g$ determine the Gromov--Witten classes\nof any abelian surface in primitive classes.\n\n\\begin{conjecture} \\label{Conj_Quasimodularity_A}\n${\\mathcal A}_{g,n}(\\gamma_1, \\ldots, \\gamma_n) \\in \\mathrm{QMod} \\otimes H_{\\ast}( {\\overline M}_{g,n}(E_1, 1) )$\n\\end{conjecture}\n\nWe state the reduced holomorphic anomaly equation.\nFor any $\\lambda \\in H^{\\ast}(\\mathsf{A})$ define the endomorphism $A(\\lambda) : H^{\\ast}(\\mathsf{A}) \\to H^{\\ast}(\\mathsf{A})$ by\n\\[ A(\\lambda) \\gamma = \\lambda \\cup \\pi^{\\ast} \\pi_{\\ast}( \\gamma) - \\pi^{\\ast} \\pi_{\\ast}( \\lambda \\cup \\gamma ) \\quad\n\\text{ for all } \\gamma \\in H^{\\ast}(\\mathsf{A}). 
\\]\nDefine the operator $T_{\\lambda}$ by\\footnote{\nThe notation $T_{\\lambda}$ (serif) matches the expected value of the action\nof the anomaly operator $\\mathsf{T}_{\\lambda}$ (sans-serif) given in Lemma~\\ref{Lemma_EllHAE}.\nThe operator $T_{\\lambda}$ is defined independently of the modular properties of ${\\mathcal A}$.}\n\\[ T_{\\lambda} {\\mathcal A}_{g,n}(\\gamma_1, \\ldots, \\gamma_n)\n = \\sum_{i=1}^{n} {\\mathcal A}_{g,n}( \\gamma_1,\\ldots, A(\\lambda) \\gamma_i, \\ldots, \\gamma_n).\n\\]\nLet $V \\subset H^{2}(\\mathsf{A}, {\\mathbb{Q}})$ be the orthogonal complement to $[E_1], [E_2]$\nand define\n\\begin{equation} T_{\\Delta} = - \\sum_{i,j=1}^{4} \\left( G^{-1} \\right)_{ij} T_{b_i} T_{b_j} \\label{TDelta} \\end{equation}\nwhere $\\{ b_i \\}$ is a basis of $V$ and $G = \\big( \\langle b_i, b_j \\rangle \\big)_{i,j}$.\n\nRecall also the virtual class on the moduli space of degree $0$,\n\\[\n[ {\\overline M}_{g,n}(\\mathsf{A},0) ]^{\\text{vir}}\n=\n\\begin{cases}\n[{\\overline M}_{0,n} \\times \\mathsf{A}] & \\text{if } g= 0 \\\\\n0 & \\text{if } g \\geq 1,\n\\end{cases}\n\\]\nwhere we used the identification ${\\overline M}_{g,n}(\\mathsf{A},0) = {\\overline M}_{g,n} \\times \\mathsf{A}$.\nWe define\n\\[ {\\mathcal A}_g^{\\text{vir}}(\\gamma_1, \\ldots, \\gamma_n)\n=\n\\pi_{\\ast}\\left( [ {\\overline M}_{g,n}(\\mathsf{A}, 0) ]^\\text{vir} \\prod_i {\\mathrm{ev}}_i^{\\ast}(\\gamma_i) \\right). 
\\]\n\nConsider the class in $H_{\\ast}({\\overline M}_{g,n}(E_1,1))$ defined by\n\\begin{equation} \\label{HAE_for_A}\n\\begin{aligned}\n\\mathsf{H}^{{\\mathcal A}}_{g}(\\gamma_1, \\ldots, \\gamma_n)\n& =\n\\iota_{\\ast} \\Delta^{!} {\\mathcal A}_{g-1}( \\gamma_1, \\ldots, \\gamma_n, \\mathsf{1}, \\mathsf{1} ) \\\\\n& + 2 \\sum_{\\substack{g= g_1 + g_2 \\\\ \\{1,\\ldots, n\\} = S_1 \\sqcup S_2}}\nj_{\\ast} \\Delta^{!} \\left( {\\mathcal A}_{g_1}( \\gamma_{S_1}, \\mathsf{1} ) \\boxtimes {\\mathcal A}^{\\text{vir}}_{g_2}( \\gamma_{S_2}, \\mathsf{1} ) \\right) \\\\\n& - 2 \\sum_{i=1}^{n} {\\mathcal A}_g( \\gamma_1, \\ldots, \\gamma_{i-1}, \\pi^{\\ast} \\pi_{\\ast} \\gamma_i, \\gamma_{i+1}, \\ldots, \\gamma_n ) \\cup \\psi_i \\\\\n& + T_{\\Delta} {\\mathcal A}_{g}(\\gamma_1, \\ldots, \\gamma_n)\n\\end{aligned}\n\\end{equation}\n\n\n\\begin{conjecture} \\label{Conj_HAE_A}\n$\\frac{d}{dC_2} {\\mathcal A}_{g}(\\gamma_1, \\ldots, \\gamma_n) = \\mathsf{H}^{{\\mathcal A}}_{g}(\\gamma_1, \\ldots, \\gamma_n)$.\n\\end{conjecture}\n\nLet $p : {\\overline M}_{g,n}(E_1,1) \\to {\\overline M}_{g,n}$\nbe the forgetful map, and recall the tautological subring\n$R^{\\ast}({\\overline M}_{g,n}) \\subset H^{\\ast}({\\overline M}_{g,n})$.\nIn the unstable cases we will use the convention of Section~\\ref{Subsection_CONVENTION}.\nBy \\cite{BOPY} Conjecture~\\ref{Conj_Quasimodularity_A} holds numerically:\n\\begin{equation} \\label{454353}\n\\int_{{\\overline M}_{g,n}(E_1,1)} p^{\\ast}(\\alpha) \\cap {\\mathcal A}_g(\\gamma_1, \\ldots, \\gamma_n) \\, \\in \\mathrm{QMod}\n\\end{equation}\nfor all tautological classes $\\alpha \\in R^{\\ast}({\\overline M}_{g,n})$.\nWe show the holomorphic anomaly equation holds numerically as well.\n\\begin{thm} \\label{thm_AHAE}\nFor any tautological class $\\alpha \\in R^{\\ast}({\\overline M}_{g,n})$,\n\\[\n\\frac{d}{dC_2} \\int p^{\\ast}(\\alpha) \\cap {\\mathcal A}_g(\\gamma_1, \\ldots, \\gamma_n)\n=\n\\int p^{\\ast}(\\alpha) \\cap 
\\mathsf{H}^{{\\mathcal A}}_{g}(\\gamma_1, \\ldots, \\gamma_n).\n\\]\n\\end{thm}\n\n\n\\subsection{Discussion of the anomaly equation} \\label{Subsection_ab_explain}\nThe holomorphic anomaly equations for abelian and K3 surfaces (see \\cite{HAE}) require two modifications\nto Conjecture~\\ref{Conj_HAE}.\nThe first is the modified splitting term (the second term on the right-hand side of \\eqref{HAE_for_A}).\nIt arises naturally from the formula for the restriction of the reduced virtual class $[ \\, \\cdot \\, ]^{\\text{red}}$\nto boundary components, see e.g. \\cite[Sec.7.3]{MPT}.\n\nThe second modification in \\eqref{HAE_for_A} is the term $T_{\\Delta} {\\mathcal A}_{g}(\\gamma_1, \\ldots, \\gamma_n)$ which appears for K3 surfaces\nin \\cite[Sec.0.6]{HAE} in its explicit form.\nTo explain its origin we consider the difference in the definitions of the Gromov--Witten potentials ${\\mathcal C}^{\\pi}_{g,\\mathsf{k}}$ and ${\\mathcal A}$.\nThe class ${\\mathcal C}_{g,\\mathsf{k}}^{\\pi}$ is defined by summing over all classes $\\beta$ on $X$ which are of degree $\\mathsf{k}$ over the base,\nwhile for ${\\mathcal A}$ we fix the base class $[E_1]$ and sum over the fiber direction $[E_1] + d [E_2]$.\nThe latter\ncorresponds to taking the $\\zeta^{0}$-coefficient of the quasi-Jacobi form ${\\mathcal C}_{g,\\mathsf{k}}^{\\pi}$.\nBy Proposition~\\ref{QJac_Prop2} the $C_2$-derivative of this $\\zeta^{0}$-coefficient then naturally acquires an extra term\nwhich exactly matches $T_{\\Delta} {\\mathcal A}_g$.\n\nTo make the discussion more concrete, consider a rational elliptic surface $\\pi : R \\to \\mathbb{P}^1$ and\nconsider the $\\zeta^{0}$-coefficient of the class ${\\mathcal C}_{g,k=1}$,\n\\[ {\\mathcal R}_{g}(\\gamma_1, \\ldots, \\gamma_n) = \\left[ {\\mathcal C}^{\\pi}_{g,1}(\\gamma_1, \\ldots, \\gamma_n) \\right]_{\\zeta^0}. 
\\]\nThe class ${\\mathcal R}_g$ should roughly correspond to the classes ${\\mathcal A}_g$ for abelian\nand ${\\mathcal K}_g$ for K3 surfaces\\footnote{The classes ${\\mathcal K}_g$ are the analogues of ${\\mathcal A}_g$ for K3 surfaces, see \\cite[Sec.1.6]{HAE} for a definition.}.\nAssuming Conjecture~\\ref{Conj_Quasimodularity} and using Section~\\ref{Subsubsection_Halfunimodular_index}\nwe find ${\\mathcal R}_g$ is a cycle-valued $\\mathrm{SL}_2({\\mathbb{Z}})$-quasi-modular form.\nAssuming Conjecture~\\ref{Conj_HAE} and using Proposition~\\ref{QJac_Prop2} then yields the holomorphic anomaly equation\n\\begin{align*}\n\\frac{d}{dC_2} {\\mathcal R}_{g}(\\gamma_1, \\ldots, \\gamma_n)\n= \\,\n& \\iota_{\\ast} \\Delta^{!} {\\mathcal R}_{g-1}( \\gamma_1, \\ldots, \\gamma_n, \\mathsf{1}, \\mathsf{1} ) \\\\\n& + 2 \\sum_{\\substack{g= g_1 + g_2 \\\\ \\{1,\\ldots, n\\} = S_1 \\sqcup S_2}}\nj_{\\ast} \\Delta^{!} \\left( {\\mathcal R}_{g_1}( \\gamma_{S_1}, \\mathsf{1} ) \\boxtimes {\\mathcal C}^{\\pi}_{g_2,0}( \\gamma_{S_2}, \\mathsf{1} ) \\right) \\\\\n& - 2 \\sum_{i=1}^{n} {\\mathcal R}_g( \\gamma_1, \\ldots, \\gamma_{i-1}, \\pi^{\\ast} \\pi_{\\ast} \\gamma_i, \\gamma_{i+1}, \\ldots, \\gamma_n ) \\cup \\psi_i \\\\\n& + T_{\\Delta} {\\mathcal R}_{g}(\\gamma_1, \\ldots, \\gamma_n)\n\\end{align*}\nwhere the operator $T_{\\Delta}$ is defined as in \\eqref{TDelta}\nbut with $V$ replaced by $H^2_{\\perp}$.\nHence we recover the same term as for abelian and K3 surfaces.\n\n\n\n\\subsection{Proof of Theorem~\\ref{thm_AHAE}} \\label{Subsection_ab_proof}\nThe quasi-modularity \\eqref{454353} was proven in \\cite{BOPY}\nby an effective calculation scheme using the following ingredients:\n(i) an abelian vanishing equation, (ii) tautological relations \/ restriction to boundary,\n(iii) divisor equation, (iv) degeneration to the normal cone of an elliptic fiber.\nOne checks that each such step is compatible with the holomorphic anomaly equation.\nFor the K3 surface this was done in 
detail in \\cite{HAE} and the abelian surface case is parallel. \\qed\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nThe possibility that the Baryon Asymmetry of the Universe (BAU) could\noriginate from a lepton number asymmetry generated in the $CP$ violating\ndecays of the heavy seesaw Majorana neutrinos was put forth about twenty years\nago by Fukugita and Yanagida.~\\cite{Fukugita:1986hr} Their proposal came\nshortly after Kuzmin, Rubakov and Shaposhnikov pointed out that above the\nelectroweak phase transition $B+L$ is violated by fast electroweak anomalous\ninteractions.~\\cite{Kuzmin:1985mm} This implies that any lepton asymmetry\ngenerated in the unbroken phase would be unavoidably converted in part into a\nbaryon asymmetry. However, the discovery that at $T\\gsim 100\\,$GeV electroweak\ninteractions do not conserve baryon number also suggested the exciting\npossibility that baryogenesis could be a purely standard model (SM)\nphenomenon, and opened the way to electroweak\nbaryogenesis.~\\cite{Farrar:1993sp} Even though it soon became clear that\nwithin the SM electroweak baryogenesis fails to reproduce the correct BAU by\nmany orders of magnitude,~\\cite{Gavela:1993ts} within the minimal\nsupersymmetric standard model (MSSM) the chances of success were much better,\nand this triggered intense research activity in that direction. Indeed, in\nthe early 90's electroweak baryogenesis attracted more interest than\nleptogenesis, but still a few remarkable papers appeared that laid the\ngroundwork for {\\it quantitative} studies of leptogenesis. 
Here I will just mention\ntwo important contributions that established the structure of the two main\ningredients of leptogenesis: the rates for several washout processes relevant\nfor the leptogenesis Boltzmann equations, which were presented by Luty in his\n1992 paper,~\\cite{Luty:1992un} and the correct expression for the $CP$\nviolating asymmetry in the decays of the lightest Majorana neutrino, first\ngiven in the 1996 paper of Covi, Roulet and Vissani.~\\cite{Covi:1996wh}\n\nAround the year 2000 a flourishing of detailed studies of leptogenesis began,\nwith a corresponding burst in the number of papers dealing with this\nsubject.~\\cite{Nardi:2007fs} This rise of interest in leptogenesis can be\ntraced back to two main reasons: firstly, the experimental confirmation (from\noscillation experiments) that neutrinos have nonvanishing masses strengthened\nthe case for the seesaw mechanism, which in turn implies the existence, at some\nlarge energy scale, of lepton number violating ($\\not \\! L$) interactions.\nSecondly, the fact that the various analyses of supersymmetric electroweak\nbaryogenesis cornered this possibility in a quite restricted region of\nparameter space, leaving for example for the Higgs mass just a 5 GeV window \n(115 - 120 GeV).~\\cite{Balazs:2005tu}\n\nThe number of important papers and the list of people that contributed to the\ndevelopment of leptogenesis studies and to understanding the various implications\nfor the low energy neutrino parameters is too large to be recalled here.\nHowever, let me mention the remarkable paper of Giudice {\\it et\n al.}~\\cite{Giudice:2003jh} that appeared at the end of 2003: in this paper a\nwhole set of thermal corrections for the relevant leptogenesis processes was\ncarefully computed, a couple of mistakes common to previous studies were\npointed out and corrected, and a detailed numerical analysis was presented\nboth for the SM and the MSSM cases. 
Eventually, it was claimed that the\nresidual numerical uncertainties would probably not exceed the 10\\%-20\\%\nlevel. A couple of years later, Nir, Roulet, Racker and\nI~\\cite{Nardi:2005hs} carried out a detailed study of additional effects\nthat were not accounted for in the analysis of ref.~\\cite{Giudice:2003jh}\nThis included electroweak and QCD sphaleron effects, the effects of the\nasymmetry in the Higgs number density, as well as the constraints on the\nparticle asymmetry densities implied by the spectator reactions that are in\nthermal equilibrium in different temperature ranges relevant for\nleptogenesis.~\\cite{Nardi:2005hs} Indeed, we found that the largest of these\nnew effects would barely reach the level of a few tens of percent.\n\n\nHowever, two important ingredients had been overlooked in practically all\nprevious studies, and still had to be accounted for. These were the role of\nthe light lepton flavors, and the role of the heavier seesaw Majorana\nneutrinos. One remarkable exception was the 1999 paper by Barbieri {\\it et\n al.}~\\cite{Barbieri:1999ma} that, besides addressing as the main topic the\nissue of flavor effects in leptogenesis, also pointed out that the lepton\nnumber asymmetries generated in the decays of the heavier seesaw neutrinos can\ncontribute to the final value of the BAU.\\footnote{Lepton flavor effects were\n also considered by Endoh, Morozumi and Xiong in their 2003\n paper,~\\cite{Endoh:2003mz} in the context of the minimal seesaw model with\n just two right-handed neutrinos.} However, these important results did not\nhave much impact on subsequent analyses. The reason might be that these were\nthought to be just order one effects on the final value of the lepton\nasymmetry, with no other major consequences for leptogenesis. 
As I will\ndiscuss in the following, the size of the effects could easily reach one\norder of magnitude and, most importantly, they can spoil the\nleptogenesis constraints on the neutrino low energy parameters, and in\nparticular the limit on the absolute scale of neutrino\nmasses.~\\cite{buch02-03} This is important, since it was thought that this\nlimit was a firm prediction of leptogenesis with hierarchical seesaw\nneutrinos, and that the discovery of a neutrino mass $m_\\nu\\gsim 0.2\\,$eV\nwould have strongly disfavored leptogenesis, or hinted at different scenarios\n(as e.g. resonant leptogenesis~\\cite{pila0405}).\n\n\n\\section{The standard scenario}\n\nLet us start by writing the first few terms of the leptogenesis Lagrangian,\nneglecting for the moment the heavier neutrinos $N_{2,3}$ (except for their\nvirtual effects in the $CP$ violating asymmetries):\n\\begin{equation}\n\\label{lagrangian1}\n{\\cal L} =\\frac{1}{2}\\left[\\bar N_1 (i\\! \\not\\! \\partial) N_1 - \nM_1 N_1 N_1\\right] -(\\lambda_{1}\\,\n\\bar N_1\\, \\ell_{1}{H} +{\\rm h.c.}).\n\\end{equation}\nHere $N_1$ is the lightest right-handed Majorana neutrino with mass\n$M_1$, $H$ is the Higgs field, and $\\ell_1$ is the lepton doublet to\nwhich $N_1$ couples, which, when expressed on a complete orthogonal basis \n$\\{\\ell_i \\}$, reads \n\\begin{equation}\n|\\ell_1\\rangle = (\\lambda \\lambda^\\dagger )_{11}^{-1\/2} \n\\sum_i \\lambda_{1i}\\,|\\ell_i\\rangle.\n\\end{equation}\nIn practice it is always convenient to use the basis that diagonalizes the\ncharged lepton Yukawa couplings (the flavor basis), which also has well-defined\n$CP$ conjugation properties $CP(\\{\\ell_i \\})=\\{\\bar \\ell_i \\}$ with\n$i=e,\\,\\mu,\\,\\tau $. Note that in the first and third terms in\n(\\ref{lagrangian1}) a lepton number can be assigned to $N_1$, that is however\nviolated by two units by the mass term. 
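As a small numerical aside, the unit-normalized flavor composition of $\\ell_1$ can be sketched for a sample Yukawa row; the complex couplings below are invented for illustration, not fitted values:

```python
import math

# Invented Yukawa row (lambda_{1e}, lambda_{1mu}, lambda_{1tau});
# any complex values could be used here.
lam1 = [2e-3 + 1e-3j, 1e-3 - 2e-3j, 3e-3 + 0j]

norm2 = sum(abs(c) ** 2 for c in lam1)        # (lambda lambda^dagger)_{11}
ell1 = [c / math.sqrt(norm2) for c in lam1]   # |ell_1> components, unit norm

# Projections onto the individual flavors, |<ell_i | ell_1>|^2
K = [abs(c) ** 2 for c in ell1]
```

Only the direction of $\\ell_1$ in flavor space is fixed by this normalization; the overall size of the couplings enters leptogenesis separately, through the washout rates, so that a lepton number can still be assigned to $N_1$ in these interaction terms, with the $L$ violation residing entirely in the Majorana mass term.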
Then eq.~(\\ref{lagrangian1}) implies\nprocesses that violate $L$, like inverse decays followed by $N_1$ decays\n$\\ell_1 \\leftrightarrow N_1 \\leftrightarrow\\bar \\ell_1$, off-shell $\\Delta\nL=2$ scatterings $\\ell_1 H\\leftrightarrow \\bar\\ell_1 \\bar H$, $\\Delta L=1$\nscatterings involving the top quark, like $N_1 \\ell_1 \\leftrightarrow Q_3 \\bar\nt$, or involving the gauge bosons, like $N_1\\ell_1 \\to A \\bar H$ (with\n$A=W_i,\\,B$). The temperature range in which $\\not\\!\\! L$ processes can be\nimportant for leptogenesis is around $T\\sim M_1$. This is because if the\n$\\lambda_1$ couplings were large enough that these processes were already\nrelevant at $T\\gg M_1$ (when the Universe expansion is fast) then they would\ncome into complete thermal equilibrium at lower temperatures (when the\nexpansion slows down), thus forbidding the survival of any macroscopic $L$\nasymmetry. On the other hand, at $T\\ll M_1$, decays, inverse decays and $\\Delta\nL=1$ scatterings are Boltzmann suppressed, $\\Delta L=2$ scatterings are power\nsuppressed, and therefore $L$ violating processes become quite inefficient as\nthe temperature drops well below $M_1$.\n\nThe possibility of generating an asymmetry between the number of leptons\n$n_{\\ell_1}$ and antileptons $n_{\\bar \\ell_1}$ is due to a non-vanishing $CP$\nasymmetry in $N_1$ decays:\n\n\\begin{equation}\n\\epsilon_1\\equiv \\frac{\n\\Gamma(N_1\\to \\ell_1 H)-\n\\Gamma(N_1\\to \\bar \\ell_1 \\bar H)}\n{\\Gamma(N_1\\to \\ell_1 H)+\n\\Gamma(N_1\\to \\bar \\ell_1 \\bar H)} \\neq 0.\n\\end{equation}\nIn order that a macroscopic $L$ asymmetry can build up, the condition\nthat $\\not\\!\\! L$ reactions are (at least slightly) out of equilibrium at\n$T\\sim M_1$ must also be satisfied. 
This condition can be expressed in terms of two\ndimensionful parameters, defined in terms of the Higgs vev $v\\equiv \\langle\nH\\rangle$ and of the Planck mass $M_P$ as:\n\\begin{equation}\n\\label{mstar}\n\\tilde m_1 =\\frac{(\\lambda\\lambda^\\dagger)_{11}\\,v^2 }{M_1}, \\qquad\\qquad \\quad m_* \\approx\n10^3\\,\\frac{v^2}{M_P}\\approx 10^{-3}\\,{\\rm eV}.\n\\end{equation} \nThe first parameter ($\\tilde m_1$) is related to the rates of $N_1$ processes\n(like decays and inverse decays) while the second one ($m_*$) is related to\nthe expansion rate of the Universe at $T\\sim M_1$. When $\\tilde m_1\\lsim\nm_*$, $\\not\\!\\! L \\>$ processes are slower than the Universe expansion rate\nand leptogenesis can occur. As $\\tilde m_1$ increases to values larger than\n$m_*$, $\\not\\!\\! L$ reactions approach thermal equilibrium, thus rendering\nleptogenesis inefficient because of the back-reactions that tend to erase any\nmacroscopic asymmetry. However, even for $\\tilde m_1$ as large as $\\sim\n100\\,m_*$ a lepton asymmetry sufficient to explain the BAU can be generated.\nIt is customary to refer to the condition $\\tilde m_1> m_*$ as the {\\it\n strong washout regime} since washout reactions are rather fast. This regime\nis considered more likely than the {\\it weak washout regime} $\\tilde m_1< m_*$\nin view of the experimental values of the light neutrino mass-squared\ndifferences (that are both $> m_*^2$) and of the theoretical lower bound\n$\\tilde m_1 \\geq m_{\\nu_1}$, where $m_{\\nu_1}$ is the mass of the lightest\nneutrino. The strong washout regime is also theoretically more appealing\nsince the final value of the lepton asymmetry is independent of the particular\nvalue of the $N_1$ initial abundance, and also of a possible asymmetry\n$Y_{\\ell_1}\\! =(n_{\\ell_1}-n_{\\bar \\ell_1})\/s\\neq 0$ (where $s$ is the entropy\ndensity) preexisting the $N_1$ decay era. 
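The competition between generation and washout can be mimicked by a deliberately crude toy model; the rate shapes, the equilibrium profile and all parameters below are invented stand-ins (they are not the actual Boltzmann rates), chosen only to reproduce the qualitative behaviour discussed here:

```python
import math

def n_eq(z):
    # Toy equilibrium N1 abundance: -> 1 for z -> 0, ~ (z^3/6) e^-z tail.
    return math.exp(-z) * (1.0 + z + z * z / 2.0 + z ** 3 / 6.0)

def evolve(k, w, n0, eps=1e-6, z_end=30.0, dz=0.01):
    """Integrate the toy system in z = M1/T from z = 0.1 to z_end.

    k  plays the role of tilde_m1 / m_star (decay/inverse-decay strength),
    w  rescales only the washout term (w = 0 switches washouts off),
    n0 is the initial N1 abundance.  Returns the final asymmetry Y.
    """
    z, n, y = 0.1, n0, 0.0
    while z < z_end:
        d_rate = k * z                       # decay + inverse-decay rate
        w_rate = w * 0.5 * k * z * n_eq(z)   # washout by inverse decays
        # exact step for dn/dz = -d_rate (n - n_eq), with n_eq frozen:
        dn = (n - n_eq(z)) * (math.exp(-d_rate * dz) - 1.0)
        # each change dn of the N1 abundance sources an asymmetry -eps*dn,
        # while the asymmetry already present is damped by the washout:
        y = y * math.exp(-w_rate * dz) - eps * dn
        n += dn
        z += dz
    return y

y_strong = evolve(100.0, 1.0, 0.0)   # washouts on, zero initial abundance
y_nowash = evolve(100.0, 0.0, 0.0)   # washouts switched off
```

With washouts switched off and vanishing initial abundance, the (negative) asymmetry generated while the $N_1$ population builds up cancels the (positive) one generated while it decays, and essentially nothing survives; a moderate washout erases the early contribution more strongly and leaves a finite result. For strong coupling the outcome is moreover essentially insensitive to the initial $N_1$ abundance.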
This last fact has often been used\nto argue that for $\\tilde m_1> m_*$ only the dynamics of the lightest Majorana\nneutrino $N_1$ is important, since asymmetries generated in the decays of the\nheavier $N_{2,3}$ would be efficiently erased by the strong $N_1$-related\nwashouts. As we will see below, the effects of $N_1$ interactions on the\n$Y_{\\ell_{2,3}}$ asymmetries are subtle, and the previous argument is\nincorrect. The result of numerical integration of the Boltzmann equations for\n$Y_{\\ell_1}$ can be conveniently expressed in terms of an efficiency factor\n$\\eta_1$, which ranges between 0 and 1:\n\\begin{equation}\nY_{\\ell_1} = 3.9\\times 10^{-3}\\ \\eta_1 \\epsilon_1, \n\\qquad\\qquad \\qquad \\eta_1 \\approx \\frac{m_*}{\\tilde m_1}.\n\\end{equation}\nThe second relation gives a rough approximation for $\\eta_1$ in the strong\nwashout regime, which will become useful in analyzing the impact of flavor\neffects. Clearly, too strong washouts ($\\tilde m_1 \\gg m_*$) can jeopardize\nthe success of leptogenesis by suppressing the efficiency too much.\nHowever, it should also be stressed that washouts constitute a fundamental\ningredient to generate a lepton asymmetry. This is particularly true in\nthermal leptogenesis, with zero initial $N_1$ abundance, and is illustrated in\nfig.~\\ref{fig:Ylevolution} where the evolution of the lepton asymmetry for a\nrepresentative model is plotted against decreasing values of the temperature.\nThe different curves correspond to different levels of (artificial) reduction\nin the strength of the washout rates (but not in the $N_1$ production rates)\nfrom the model value (solid red line), to 10\\% (dashed blue line), 1\\%\n(dot-dashed pink line) and 0.1\\% (dotted green line). The solid black line\ncorresponds to switching off all back-reactions. (Of course the last four\ncurves correspond to unphysical conditions.) 
It is apparent that while a\npartial reduction in the washout rates is beneficial to leptogenesis, an\nexcessive reduction suppresses the final asymmetry and eventually, when\nwashouts are switched off completely, no asymmetry survives.\n\\begin{figure}\n \\psfig{figure=WashoutNew.eps,width=15truecm}\n\\caption{Evolution of the lepton asymmetry plotted against $z=M_1\/T$. \n The different curves depict the effects of progressively reducing the rates\n of the washout processes (as detailed in the legend). Complete switch-off\n of the washouts (thin solid black line) yields a vanishing lepton asymmetry.\n}\n\\label{fig:Ylevolution}\n\\end{figure}\nThis behavior can be understood as follows: all leptogenesis processes can be\nseen as scatterings between standard model particle states $X,\\,Y$ involving\nintermediate on-shell and off-shell unstable $N_1$'s: $X \\leftrightarrow\nN^{(*)}_1 \\leftrightarrow Y$. Since the CP asymmetry of any $X \\leftrightarrow\nY$ process is at most of ${\\cal O}(\\lambda_1^6)$,~\\cite{Kolb:1979qa} if the\nlepton asymmetries generated in the different processes were exactly\nconserved, the overall amount that could survive would not exceed this order.\nMoreover, since ${\\cal O}(\\lambda_1^6)$ asymmetries are systematically\nneglected in the Boltzmann equations, the numerical result would be exactly\nzero. However, the on-shell and off-shell components of each process have\nmuch larger CP asymmetries of ${\\cal O}(\\lambda_1^4)$, and the cancellation to\n${\\cal O}(\\lambda_1^6)$ occurs because they are of opposite sign and (at\nleading order in the couplings) of the same magnitude. Moreover, since the\nlong-range and short-range components of each process have different time\nscales, at each instant during leptogenesis a lepton asymmetry up to ${\\cal\n O}(\\lambda_1^4)$ can be present. 
Washout processes by definition do not\nconserve the lepton asymmetries, and most importantly they act unevenly over\nthe different processes as well as over their short- and long-range components,\nerasing more efficiently the asymmetries generated in $N_1$ production\nprocesses and off-shell scatterings that on average occur at earlier times,\nand washing out less efficiently the asymmetries of processes that destroy\n$N_1$'s (on-shell scatterings and decays). It is thanks to the washouts that\nan unbalanced lepton asymmetry up to ${\\cal O}(\\lambda_1^4)$ can eventually\nsurvive. In the next section we will see that when flavor effects are\nimportant, washouts can play an even more dramatic role in leptogenesis.\n\nThe possibility of deriving an upper limit for the light neutrino\nmasses~\\cite{buch02-03} follows from the existence of a theoretical bound on\nthe maximum value of the $CP$ asymmetry $\\epsilon_1$ (that holds when\n$N_{1,2,3}$ are sufficiently hierarchical, and $m_{\\nu_{1,2,3}}$ quasi-degenerate) and relates $M_1$, $m_{\\nu_3}$ and the washout parameter $\\tilde\nm_1$:\n\\begin{equation}\n\\label{limit}\n | \\epsilon_1| \\leq \\left[\\frac{3}{16\\pi}\\frac{M_1}{v^2} (m_{\\nu_3}-m_{\\nu_1})\\right]\n \\,\\sqrt{1-\\frac{m_{\\nu_1}^2}{\\tilde m_1^2}}. 
\n\\end{equation}\nThe term in square brackets is the so-called Davidson-Ibarra limit~\\cite{da02}\nwhile the correction in the square root was first given in ref.~\\cite{hamby03}\nWhen $m_{\\nu_3}\\gsim 0.1\\,$eV, the light neutrinos are quasi-degenerate and\n$m_{\\nu_3}-m_{\\nu_1}\\sim \\Delta m^2_{atm}\/2 m_{\\nu_3} \\to 0$ so that, to keep\n$\\epsilon_1$ finite, $M_1$ is pushed to large values $\\gsim 10^{13}\\,$GeV.\nSince at the same time $\\tilde m_1$ must remain larger than $m_{\\nu_{1}}$,\nthe washouts also increase, until the surviving asymmetry is\ntoo small to explain the BAU.\\footnote{$\\Delta L=2$ washout processes, which\n depend on a different parameter than $\\tilde m_1$, and which can become\n important when $M_1$ is large, also play a role in establishing the limit.}\nThe limit $m_{\\nu_3} \\lsim 0.15\\,$eV results.\n \n\n\n\\section{Lepton flavor effects}\n\nIn the Lagrangian (\\ref{lagrangian1}) the terms involving the charged\nlepton Yukawa couplings have not been included. Since all these\ncouplings are rather small, if leptogenesis occurs at temperatures $T\n\\gsim 10^{12}\\,$GeV, when the Universe is still very young, not many of\nthe related (slow) processes could have occurred during its short\nlifetime, and leptogenesis has essentially no knowledge of lepton\nflavors. At $T\\lsim 10^{12}\\,$GeV the reactions mediated by the tau\nYukawa coupling $h_\\tau$ become important, and at $ T\\lsim 10^{9}\\,$GeV\nalso $h_\\mu$-reactions have to be accounted for. \nIncluding the Yukawa terms for the leptons yields the Lagrangian:\n\\begin{equation}\n\\label{lagrangian2}\n{\\cal L} =\\frac{1}{2}\\left[\\bar N_1 (i\\! \\not\\! \\partial) N_1 - \nM_1 N_1 N_1\\right] -(\\lambda_{1i}\\,\n\\bar N_1\\, \\ell_{i}{H} +h_i\\bar e_i\\ell_i H^\\dagger + {\\rm h.c.}),\n\\end{equation}\nwhere (in the flavor basis) the matrix $h$ of the Yukawa\ncouplings is diagonal. 
The flavor content of the (anti)lepton doublets\n$\\ell_1$ ($\\bar \\ell'_1$) to which $N_1$ decays is now important, since these\nstates do not remain coherent, but are effectively resolved into their flavor\ncomponents by the fast Yukawa interactions\n$h_i$.~\\cite{Barbieri:1999ma,Abada:2006fw,Nardi:2006fx} Note that because of\n$CP$ violating loop effects, in general $CP(\\bar \\ell'_1)\\neq \\ell_1$, that is\nthe antileptons produced in $N_1$ decays are not the $CP$ conjugated of the\nleptons, implying that the flavor projections $K_i\\equiv |\\langle\\ell_i\n|\\ell_1 \\rangle |^2$ and $\\bar K_i\\equiv |\\langle\\bar\\ell_i |\\bar\\ell_1'\n\\rangle |^2$ differ: $\\Delta K_i = K_i-\\bar K_i \\neq 0$. \nThe flavor $CP$ asymmetries can be defined as:~\\cite{Nardi:2006fx}\n\\begin{equation}\n\\epsilon_1^i = \\frac{\n\\Gamma(N_1\\to \\ell_i H)-\n\\Gamma(N_1\\to \\bar \\ell_i \\bar H)}\n{\\Gamma_{N_1}}=K_i\\epsilon_1 + {\\Delta K_i}\/{2}.\n\\end{equation}\nThe factor $\\Delta K_i$ in the second equality accounts for the flavor\nmismatch between leptons and antileptons. The factor $K_i$ in front of\n$\\epsilon_1$ accounts for the reduction in the strength of the $N_1$-$\\ell_i$\ncoupling with respect to $N_1$-$\\ell_1$, and thus reduces also the strength of\nthe washouts for the $i$-flavor, yielding an efficiency factor\n$\\eta_{1}^i=\\min (\\eta_1\/K_i,1)$. Assuming for illustration $\\eta_1\/K_i < 1$\nthe resulting asymmetry is\n\\begin{equation}\nY_L \\approx \\sum_i \\epsilon^i_1\\, \\eta_{1}^i \\approx n_f \nY_{\\ell_1} + \\sum_i\\frac{\\Delta K_i}{2K_i}\\>\\frac{m_*}{\\tilde m_1}.\n\\end{equation}\nIn the first term on the r.h.s. $n_f$ represents the number of flavors\neffectively resolved by the charged lepton Yukawa interactions ($n_f=2$ or 3)\nwhile $Y_{\\ell_1}$ is the asymmetry that would have been obtained by\nneglecting the decoherence of $\\ell_1$. 
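The flavored formulae above can be evaluated in a toy two-flavor example; all the numbers below are invented, with $\\sum_i K_i = 1$ and $\\sum_i \\Delta K_i = 0$:

```python
# Toy evaluation of Y_L ~ sum_i eps_1^i * eta_1^i with
#   eps_1^i = K_i * eps_1 + dK_i / 2  and  eta_1^i = min(eta_1 / K_i, 1).
def flavored_asymmetry(eps1, eta1, K, dK):
    total = 0.0
    for K_i, dK_i in zip(K, dK):
        eps_i = K_i * eps1 + dK_i / 2.0   # flavor CP asymmetry
        eta_i = min(eta1 / K_i, 1.0)      # flavor efficiency factor
        total += eps_i * eta_i
    return total

eps1, eta1 = 1.0e-6, 1.0e-2     # total CP asymmetry and efficiency (invented)
K  = [0.05, 0.95]               # projections (K_tau, orthogonal combination)
dK = [4.0e-7, -4.0e-7]          # flavor mismatch Delta K_i, summing to zero

single_flavor = eps1 * eta1                          # unflavored estimate
two_flavor = flavored_asymmetry(eps1, eta1, K, dK)   # flavored estimate
```

In this example the mismatch term $\\Delta K_i/2$ dominates the asymmetry of the weakly coupled flavor ($K_i \\ll 1$), whose washout is also reduced, and the flavored result exceeds the naive single-flavor estimate several times over; the two pieces of $Y_L$ displayed above can be read off directly from the two contributions in the loop.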
The second term, which is controlled by\nthe `flavor mismatch' factor $\\Delta K_i$, can become particularly large in\nthe cases when the flavor $i$ is almost decoupled from $N_1$ ($K_i \\ll 1$).\nThis situation is depicted in fig.~\\ref{fig:kplot} for the two-flavor case and for two\ndifferent temperature regimes. The two flat curves give $|Y_{B-L}|$ as a\nfunction of the flavor projector $K_\\tau$ assuming $\\Delta K_\\tau=0$, and show\nrather clearly the enhancement of a factor $\\approx 2$ with respect to the one-flavor case (the points at $K_\\tau=0,\\,1$). The other two curves are peaked\nat values close to the boundaries, when $\\ell_\\tau$ or a combination\northogonal to $\\ell_\\tau$ is almost decoupled from $N_1$, and show how the\n$\\ell_1$-$\\ell'_1$ flavor mismatch can produce much larger enhancements.\n\nIt was first noted in ref.~\\cite{Nardi:2006fx} that flavored leptogenesis can\nbe viable even when the branching ratios for decays into leptons and\nantileptons are equal, that is in the limit when $L$ is conserved in decays\nand the total asymmetry vanishes, $\\epsilon_1=0$. This is a surprising\npossibility, which can occur when the $CP$ asymmetries for the single flavors\nare non-vanishing, thanks to the fact that lepton number is in any case\nviolated by the washout interactions.\n\nIn conclusion, the relevance of flavor effects is at least twofold:\n\\begin{figure}\n\\hspace{.8cm}\n\\psfig{figure=Kplot.ps,width=.35\\textheight,height=13truecm,angle=270}\n\\caption{\n $|Y_{B-L}|$ (in units of $10^{-5}|\\epsilon_1|$) as a function of $K_\\tau $\n in two two-flavor regimes. The thick solid and dashed lines correspond to\n the special case when $K_\\tau =\\bar K_\\tau$. The two thin lines give an example\n of the results for $K_\\tau \\neq \\bar K_\\tau$. The filled circles and squares\n at $K_\\tau = 0\\,,1$ correspond to the aligned cases where flavor effects\nare irrelevant.\n\\label{fig:kplot}}\n\\end{figure}\n\\begin{itemize} \n\\item[1.] 
\nThe BAU resulting from leptogenesis can be several times larger \nthan what would be obtained neglecting flavor effects. \n\\item[2.] \nIf leptogenesis occurs at temperatures when flavor\neffects are important, the limit on the light neutrino \nmasses does not hold.~\\cite{Abada:2006fw,DeSimone:2006dd} \nThis is because there is no analogue of the Davidson-Ibarra bound \nin eq.~(\\ref{limit}) for the flavor asymmetries $\\epsilon_1^i$. \n\n\\end{itemize}\n\n\\section{The effects of the heavier Majorana neutrinos}\n\nWhat about the possible effects of the heavier Majorana neutrinos $N_{2,3}$\nthat we have so far neglected? A few recent studies analyzed the so-called\n``$N_1$-decoupling'' scenario, in which the Yukawa couplings of $N_1$ are\nsimply too weak to wash out the asymmetry generated in $N_2$ decays (and\n$\\epsilon_1$ is too small to explain the BAU).~\\cite{decoupling}\nThis is a consistent scenario in which $N_2$ leptogenesis could successfully\ngenerate enough lepton asymmetry. However, in the opposite situation when\nthe Yukawa couplings of $N_1$ are very large, it was generally assumed that\nthe asymmetries related to $N_{2,3}$ are irrelevant, since they would be\nwashed out during $N_1$ leptogenesis. In contrast to this, in\nref.~\\cite{Barbieri:1999ma} (and more recently also in\nref.~\\cite{Strumia:2006qk}) it was stated that part of the asymmetry from\n$N_{2,3}$ decays does in general survive, and must be taken into account when\ncomputing the BAU. In ref.~\\cite{Engelhard:2006yg} Engelhard, Grossman, Nir\nand I carried out a detailed study of the fate of a lepton asymmetry\n$Y_P$ preexisting $N_1$ leptogenesis, and we reached conclusions that agree with\nthese statements. I will briefly describe the reasons for this and the\nimportance of the results. Including also $N_{2,3}$ the leptogenesis\nLagrangian reads:\n\\begin{equation}\n\\label{lagrangian3}\n{\\cal L} =\\frac{1}{2}\\left[\\bar N_\\alpha (i\\! \\not\\! 
\\partial) N_\\alpha - \n{N_\\alpha} M_\\alpha N_\\alpha\\right] -(\\lambda_{\\alpha i}\\,\n\\bar N_\\alpha\\, \\ell_{i}{H} +h_i\\bar e_i\\ell_i H^\\dagger + {\\rm h.c.}),\n\\end{equation}\nwhere the heavy neutrinos are written in the mass basis with $\\alpha=1,2,3$.\nIt is convenient to define the three (in general non-orthogonal) combinations\nof lepton doublets $\\ell_\\alpha$ to which the corresponding $N_\\alpha$ decay:\n\\begin{equation}\n|{\\ell_\\alpha}\\rangle=(\\lambda\\lambda^\\dagger)_{\\alpha\\alpha}^{-1\/2}\n\\sum_i\\lambda_{\\alpha i}\\,|{\\ell_i}\\rangle.\n\\end{equation}\nLet us discuss for definiteness the case when $N_2$-related washouts are not\ntoo strong ($\\tilde m_2\\not\\gg m_*$) , so that a sizeable asymmetry\nproportional to $\\epsilon_2$ is generated, while $N_1$-related washouts are so\nstrong that by itself $N_1$ leptogenesis would not be successful ($\\tilde m_1\\gg m_*$).\nTo simplify the arguments, let us also impose two additional conditions:\nthermal leptogenesis, that is a vanishing initial $N_1$ abundance\n$n_{N_1}(T\\gg M_1)\\approx0$, and a strong hierarchy $M_2\/M_1\\gg1$. From this\nit follows that there are no $N_1$ related washout effects during $N_2$\nleptogenesis and, because $n_{N_2}(T\\approx M_1)$ is Boltzmann suppressed,\nthere are no $N_2$ related washouts during $N_1$ leptogenesis. Thus $N_2$ and\n$N_1$ dynamics are decoupled. Now, the second condition in (\\ref{conditions})\nimplies that already at $T\\gsim M_1$ the interactions mediated by the $N_1$\nYukawa couplings are sufficiently fast to quickly destroy the coherence of\n$\\ell_2$ produced in $N_2$ decays. Then a statistical mixture of $\\ell_1$ and\nof states orthogonal to $\\ell_1$ builds up, and it can be described by a\nsuitable {\\it diagonal} density matrix. For simplicity, let us assume that\nboth $N_{2}$ and $N_1$ decay at $T\\gsim10^{12}\\,$GeV when flavor effects are\nirrelevant. 
In this regime a convenient choice for the orthogonal lepton\nbasis is $(\ell_1,\,\ell_{1_\perp})$ where $\ell_{1_\perp}$ denotes generically\nthe flavor components orthogonal to $\ell_1$.\nThen any asymmetry $Y_P$ preexisting the $N_1$ leptogenesis phase \n(as for example $Y_{\ell_2}$) \ndecomposes as:\n\begin{equation}\n\label{eq:c2}\nY_P= Y_{\ell_{1_\perp}} + \nY_{\ell_1}\,. \n\end{equation}\nThe crucial point here is that in general we can expect $Y_{\ell_{1_\perp}}$ to be\nof the same order as $Y_P$, and since $\ell_{1_\perp}$ is orthogonal to\n$\ell_1$, this component of the asymmetry remains protected against $N_1$\nwashouts. Therefore, a finite part of any preexisting asymmetry (and in\nparticular of $Y_{\ell_2}$ generated in $N_2$ decays) survives through $N_1$\nleptogenesis. A more detailed study~\cite{Engelhard:2006yg} also reveals some\nadditional features. For example, in spite of the strong $N_1$-related\nwashouts $Y_{\ell_1}$ is not driven to zero; rather, only the sum of\n$Y_{\ell_1}$ and of the Higgs asymmetry $Y_H$ vanishes, but not the two\nseparately. (This can be traced back to the presence of a conserved charge\nrelated to $Y_{\ell_{1_\perp}}$.)\n\nFor $10^9\lsim M_1 \lsim 10^{12}\,$GeV the lepton flavor structures are only\npartially resolved during $N_1$ leptogenesis, and a similar result is\nobtained. However, when $M_1 \lsim 10^9\,$GeV and the full flavor basis\n$(\ell_e,\ell_\mu,\ell_\tau)$ is resolved, there are no directions in flavor\nspace where an asymmetry can remain protected, and then $Y_P$ can be\ncompletely erased independently of its flavor composition. 
In conclusion, the\ncommon assumption that when $N_1$ leptogenesis occurs in the strong washout\nregime the final BAU is independent of initial conditions, does not hold in\ngeneral, and is justified only in the following cases:~\cite{Engelhard:2006yg}\n{\it i)}~Vanishing decay asymmetries and\/or efficiency factors for $N_{2,3}$\n($\epsilon_{{2}}\eta_{2}\approx 0$ and $\epsilon_{{3}}\eta_{3}\approx 0$);\n{\it ii)}~$N_1$-related washouts are still significant at $T\lsim10^9\,$GeV;\n{\it iii)}~Reheating occurs at a temperature in between $M_2$ and $M_1$. In\nall other cases the initial conditions for $N_1$ leptogenesis, and in\nparticular those related to the possible presence of an initial asymmetry from\n$N_{2,3}$ decays, cannot be ignored when calculating the BAU, and any\nconstraints inferred from analyses based only on $N_1$ leptogenesis are not\nreliable.\n\n\n\section*{Acknowledgments}\nThis contribution is based on work done in collaboration with G. Engelhard, Y.\nGrossman, Y. Nir, J. Racker and E. Roulet. I acknowledge partial support \nfrom INFN in Italy and from Colciencias in Colombia under contract \nNo. 1115-333-18739. 
\n\n\n\n\\section*{References}\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} diff --git a/data_all_eng_slimpj/shuffled/split2/finalzzfwzf b/data_all_eng_slimpj/shuffled/split2/finalzzfwzf new file mode 100644 index 0000000000000000000000000000000000000000..9dd6b170306334cdfde6d1cef89dc77069040782 --- /dev/null +++ b/data_all_eng_slimpj/shuffled/split2/finalzzfwzf @@ -0,0 +1,5 @@ +{"text":"\\section{Methods}\n\n\\subsection{The Controversy \u2013 Massive Black Hole or Supernova Remnant:}\n\nIn recent years, evidence has been mounting for a massive black hole powering a low-luminosity active galactic nucleus (AGN) at the center of Henize 2-10 \\cite{reines2011actively_Henize,reines2012parsec,reines2016deep,riffel2020evidence}, although a supernova remnant has been proposed as an alternative by some authors \\cite{hebbar2019x,cresci2017muse}. As discussed in the main text, the radio and X-ray point source luminosities are consistent with both. A recent study \\cite{hebbar2019x} argues for a supernova remnant based on their findings that the X-ray spectrum is better fit by a hot plasma model (typically used for supernova remnants) than a power-law model (typically used for luminous AGNs). However, the soft X-ray spectrum of the nuclear source in Henize 2-10 does resemble massive black holes accreting at very low Eddington fractions \\cite{constantin2009probing} including Sagittarius A$^*$ in the Milky Way \\cite{baganoff2003chandra}. Another study using ground-based spectroscopy favored a supernova remnant origin for the central source based on a lack of any AGN ionization signatures \\cite{cresci2017muse}. 
However, the ground-based observations used in that work had an angular resolution of $\sim$0.7\", which is not sufficient to cleanly isolate the weakly accreting black hole from nearby young ($<$5 Myr) massive (M$_* \sim 10^5 M_{\odot}$) star clusters that dominate the line ratios at this relatively coarse angular resolution.\nThere are other observational results to consider regarding the origin of the nuclear radio\/X-ray source in Henize 2-10. For example, a recent study using adaptive optics integral field spectroscopy provides evidence for gas excited by an AGN and an enhanced stellar velocity dispersion at the location of the nuclear source consistent with a $\sim10^6 M_{\odot}$ black hole, favoring the low-luminosity AGN interpretation \cite{riffel2020evidence}. There is also evidence for moderately significant variability on hour-long timescales in the X-ray light curve, which is incompatible with a supernova remnant \cite{reines2016deep,hebbar2019x}. Moreover, it is reasonable to expect that Henize 2-10 hosts a massive black hole since its overall structure resembles an early-type galaxy (albeit with a central starburst) and its stellar mass may be as high as M$_* \sim 10^{10} M_{\odot}$ \cite{nguyen2014extended}, a regime where the black hole occupation fraction is near unity \cite{greene2020intermediate}. The central starburst complicates the identification of the weakly accreting black hole, yet a variety of multiwavelength observational results taken collectively strongly support its presence. These results are summarized in Figure \ref{tab:AGNSNR_tab} (Extended Data Table 1). A highly sub-Eddington massive black hole is consistent with all of the observations, including the new work presented here, while a supernova remnant is not. 
The present study not only adds to the evidence for a massive black hole in Henize 2-10, but also demonstrates that a bipolar outflow from the black hole is enhancing\/triggering star formation in its vicinity.\n\n\n\begin{figure}\n\centering\n\includegraphics[width=\textwidth]{figs\/M2_Table_1.jpg}\n\caption{\textbf{(Extended Data Table 1) Summary of observational results regarding the nature of the nucleus in Henize 2-10.} A highly sub-Eddington massive black hole is consistent with all the available observations including new results presented here, while a supernova remnant is not.}\n\label{tab:AGNSNR_tab}\n\end{figure}\n\n\n\subsection{STIS Observations and Data Reduction:}\n\nSpatially resolved spectroscopic observations of the nuclear regions of the dwarf starburst galaxy Henize 2-10 were obtained using the Space Telescope Imaging Spectrograph (STIS) instrument on the Hubble Space Telescope (HST). We obtained observations with two slit orientations. The first is aligned with the quasi-linear ionized gas structure identified by Reines et al.\cite{reines2011actively_Henize} and covers the central radio\/X-ray source and the bright knot of ionized gas to the east. We refer to this as the East-West (EW) orientation. The second slit orientation was placed perpendicular to the EW observation at the location of the central radio\/X-ray source. We refer to this as the North-South (NS) orientation. The candidate AGN itself was too faint to acquire directly; therefore, we used a target acquisition with an offset from a bright point source 7.9\" to the southeast.\nSpectra were taken with the G750M and G430M gratings, providing medium spectral resolution (R $\sim$ 5000-6000) coverage of key optical emission lines. The central wavelengths were set at 6581 \AA \ and 4961 \AA \ for the G750M and G430M gratings, respectively. At each slit orientation, two orbits were spent in G430M and one orbit in G750M. 
The observations were taken with a two-point dither pattern with CR-SPLITS (multiple exposures taken to aid in cosmic ray rejection) at each position to help eliminate cosmic rays. The calibrated dithered images were combined and have a spatial resolution of $\\sim$0.1\", which corresponds to a physical scale of $\\sim$4 pc at the distance of Henize 2-10 ($\\sim$9 Mpc).\n\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.9\\textwidth]{figs\/M2_Figure_4.pdf}\n\\caption{\\textbf{(Extended Data Figure 1) Raw 2D spectra showing the [OI]6300 emission line at the location of the nucleus in the EW slit orientation.} The location of the nucleus is indicated by white circles and the two images correspond to the two dithered sub-exposures.}\n\\label{fig:OIRAW}\n\\end{figure}\n\n\n\\subsection{Emission Line Fitting:}\n\n\nBefore we fit emission lines in the spectra, we first modeled and removed the continuum. The reduced spectra were continuum-subtracted by masking emission line regions and fitting a low order polynomial to the continuum in each row in the spatial dimension. A low order polynomial was used given the lack of absorption features in the spectra. We did, however, consider the potential impact of stellar absorption lines on our measurements and found that the absorption line strengths are negligible compared to the emission line strengths. Scaling a Starburst99 \\cite{starburst99} model for a 4 Myr stellar population (see Stellar Ages section below) to our observed spectra, we find that the flux of H$\\alpha$ absorption is smaller by a factor of 61 than the H$\\alpha$ emission and the flux of the H$\\beta$ absorption is smaller by a factor of 7 than the H$\\beta$ emission at the location of the central source. Accounting for this absorption has a negligible effect on the line ratios of the nuclear source and does not impact the classifications based on the diagnostic diagrams. 
\nOnce the spectra were continuum-subtracted, we fit each emission line of interest with a linear combination of Gaussian profiles to characterize the flux and estimate kinematic properties along the spatial dimension of each slit. The fitting was done using lmfit \cite{newville2016lmfit}, a non-linear least squares curve fitting package in Python. We fit each emission line with up to two Gaussian components when needed. To determine if a second Gaussian component is warranted, we require that the flux of both components be greater than the 3$\sigma$ error of the flux. This process is performed row by row in the spatial direction along each slit for emission lines of interest. During this process we fit the H$\beta$, [OIII]5007, H$\alpha$, [NII]6548\/6583 and [SII]6716\/6731 emission lines. We fit the H$\alpha$ and [NII] lines simultaneously, fixing the spacing between each [NII] Gaussian component and its corresponding H$\alpha$ component to the laboratory value. Additionally, we tie the widths of [NII] components and fix the flux ratio of the [NII] lines to the laboratory value of 1:2.96. Similarly, the two [SII] emission lines are fit simultaneously, with the spacing between the Gaussian components of the two lines held fixed and the widths of the Gaussian components tied together. \nWe also fit [OI]6300 in the spectra of the nuclear source and the eastern star-forming region, but the line is too weak to be detected all along the slits. Since the [OI]6300 line has a complex profile at the location of the black hole, with possible double-peaked narrow lines and a much broader component than the other emission lines, we confirmed that this was not due to an artifact in the data. In Figure \ref{fig:OIRAW}, we show close-up views of the raw 2D spectra along the EW slit position. The two images correspond to the two dithered sub-exposures offset by 7.5 pixels. 
The broad [OI] line at the location of the central source is seen in both images at different positions on the detector (indicated by white circles). Note that the locations of hot pixels do not change between the two images. In Figure \ref{fig:OICOMB} we also show the final reduced 2D image with the sub-exposures combined. The broad, double-peaked nature of the [OI] line is clearly visible. We also note that [OI] is similarly broadened in the nuclear spectrum extracted from the NS slit position, although there is not an obvious double-peaked narrow line component (see Figure 2 in the main paper). In any case, broadened [OI] is clearly detected at the location of the nuclear source in both slit positions indicating an outflow on the order of $\sim$500 km s$^{-1}$. \n\n\n\begin{figure}\n\centering\n\includegraphics[width=0.8\textwidth]{figs\/M2_Figure_5.pdf}\n\caption{\textbf{(Extended Data Figure 2) Combined 2D spectra showing the [OI]6300 emission line at the location of the nucleus in the EW slit orientation.} Same as Figure \ref{fig:OIRAW} but showing the reduced 2D image with the dithered sub-exposures combined. }\n\label{fig:OICOMB}\n\end{figure}\n\n\n\subsection{Gas Density:}\n\nWe estimate the electron density, n$_e$, using the density-sensitive line ratio [SII]6716\/6731 \cite{osterbrock2006astrophysics}. This ratio is sensitive to electron densities in the range of $\sim$10$^2$-10$^4$ cm$^{-3}$. Along the EW slit orientation, we find a range of n$_e \sim 10^{2.5}-10^4 cm^{-3}$, indicating a relatively high-density gas (see Figure \ref{fig:gas_density}). These density estimates are in general agreement with the gas densities predicted by the Allen et al. 
\cite{allen2008mappings} shock\/shock+precursor models in the central regions of Henize 2-10 as described in the next section.\n\n\n\begin{figure}\n\centering\n\includegraphics[width=\textwidth]{figs\/M2_Figure_6.pdf}\n\caption{\textbf{(Extended Data Figure 3) The electron density, n$_e$, along the EW slit orientation.} We measure the electron density along the EW slit from the ratio of [SII]6716\/[SII]6731 and find the electron density ranges from $\sim10^{2.5}-10^4 cm^{-3}$, which is within the density range to which the [SII] ratio is sensitive. The high densities are consistent with those predicted by optical emission line diagnostics derived from the Allen et al. \cite{allen2008mappings} shock models. }\n\label{fig:gas_density}\n\end{figure}\n\n\n\subsection{Emission Line Diagnostics - Photoionization and Shock Models:}\n\nTo understand the ionization mechanisms in the central regions of Henize 2-10 we compare our emission line measurements in various regions to photoionization and shock models. In addition to the central radio\/X-ray source, we identified 7 regions of interest that are shown in Figure \ref{fig:extract_reg} and serve to provide a spatially resolved picture of the kinematics and ionization conditions in the central regions of Henize 2-10. The extraction regions taken along the EW slit orientation were chosen to correspond with emission features seen in the H$\alpha$ and I-band imaging from HST (young star clusters, knots of ionized gas) as well as features seen in the STIS spectroscopy (broad emission, double peaks). \par\n\n\n\begin{figure}\n\centering\n\includegraphics[width=0.9\textwidth]{figs\/M2_Figure_7.pdf}\n\caption{\textbf{(Extended Data Figure 4) The spatial extraction regions taken along the EW slit orientation.} We place these regions on optical emission line diagnostic diagrams (Figures \ref{fig:BPT}-\ref{fig:ShockPre2}). 
Top panel: the extraction regions are shown on the narrow band H$\alpha$ + continuum image from HST to highlight the ionized gas features that several of the spatial extractions probe. Bottom panel: the extraction regions are shown on the archival 0.8 micron HST image, showing young star clusters that the EW slit orientation passes through.}\n\label{fig:extract_reg}\n\end{figure}\n\n\nWe first utilize the standard emission line diagnostic diagrams described by Baldwin et al. \cite{baldwin1981classification} and Veilleux et al. \cite{veilleux1987spectral} that have been expanded upon and summarized in Kewley et al. \cite{kewley2006host}. An accreting BH will produce a much harder continuum than is emitted by hot stars, and these diagrams take advantage of this fact by comparing strong emission line ratios that are close together in frequency to mitigate reddening effects. In this study we employ widely used emission line diagnostic diagrams that plot [OIII]\/H$\beta$ versus [NII]\/H$\alpha$, [SII]\/H$\alpha$, and [OI]\/H$\alpha$ (see Figure \ref{fig:BPT}). In the [NII]\/H$\alpha$ diagram, line-emitting galaxies separate into a V-shape \cite{kewley2006host} with star-forming galaxies occupying the leftmost plume while AGNs occupy the right branch of galaxies. These regions are quantified by an empirical division between HII regions and emission from AGNs developed by Kauffmann et al. \cite{kauffmann2003host}. The \"composite\" region between this empirical division and the theoretical maximum starburst line from Kewley et al. \cite{kewley2001theoretical} indicates there is likely significant emission from both HII regions and AGNs. Like the [NII]\/H$\alpha$ diagram, the [SII]\/H$\alpha$ and [OI]\/H$\alpha$ diagrams provide diagnostics for differentiating between emission from HII regions and AGNs. These two diagrams add a dividing line to distinguish between emission from Seyferts and Low Ionization Nuclear Emission Regions (LINERs). 
LINER emission can be generated both by shocks and very hard AGN spectra and determining the primary ionization mechanism can be complicated \\cite{baldwin1981classification}. It should be noted that while these diagnostic diagrams are useful for identifying regions dominated by luminous AGNs, they have limitations and can yield ambiguous results for (or completely miss) massive black holes accreting at very low Eddington ratios such as the one in Henize 2-10. Indeed, non-stellar ionization is clearly indicated in the [OI]\/H$\\alpha$ diagram at the location of the nuclear source, but not so for the other diagnostic diagrams (see discussion in the main paper). Figure \\ref{fig:BPT} shows the [OIII]\/H$\\beta$ versus [NII]\/H$\\alpha$, [SII]\/H$\\alpha$, and [OI]\/H$\\alpha$ diagnostic diagrams for the various extraction regions along the EW slit. \\par\n\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=\\textwidth]{figs\/M2_Figure_8.pdf}\n\\caption{\\textbf{(Extended Data Figure 5) Narrow emission line diagnostic diagrams showing various extraction regions along the EW slit orientation (see Figure \\ref{fig:extract_reg}).} The nucleus (yellow point) falls in the Seyfert region of the [OI]\/H$\\alpha$ diagram. The young star-forming region $\\sim$70 pc to the east of the low-luminosity AGN is depicted with a blue triangle and star for the primary emission line component and the blue-shifted secondary component, respectively. [OI] is not detected in all of the regions.}\n\\label{fig:BPT}\n\\end{figure}\n\n\nIn addition to the diagnostic diagrams discussed above, we also investigate whether the ionization conditions seen in the central regions of Henize 2-10 can be explained by mechanical excitation from shocks. To investigate this, we employ ionization models of shock and shock+precursor emission developed by Allen et al. \\cite{allen2008mappings}. 
These models provide emission line fluxes for ionization from a pure shock (possibly driven by an outflow from an AGN or by regions of intense star formation), where the gas is collisionally ionized, or a shock+precursor where ionizing photons produced in the shock-heated gas travel upstream and ionize the gas before the shock reaches it. We explore models with a variety of electron densities (0.01-1000 cm$^{-3}$), shock velocities (100-600 km s$^{-1}$) and transverse magnetic field strengths (0.01-32 $\mu$G). These are shown in Figures \ref{fig:ShockPre1} and \ref{fig:ShockPre2}. \par\n\n\n\begin{figure}\n\centering\n\includegraphics[width=0.9\textwidth]{figs\/M2_Figure_9.pdf}\n\caption{\textbf{(Extended Data Figure 6) Optical emission line diagnostics from the shock and shock+precursor models with varying gas density.} We place the spatial extractions from the EW slit orientation shown in Figure \ref{fig:extract_reg} on a grid of shock excitation models (presented in Allen et al. \cite{allen2008mappings}) with varying gas density (n = 0.01-1000 cm$^{-3}$) and shock velocity (v = 100-600 km\/s). We fix the transverse magnetic field to be b = 1$\mu$G and assume solar metallicity.}\n\label{fig:ShockPre1}\n\end{figure}\n\n\nThe emission line ratios from the nuclear source are best described by the shock+precursor models with a low shock velocity (100-250 km s$^{-1}$), a high-density gas (n = 1000 cm$^{-3}$), and a low transverse magnetic field parameter (b = 0.01 \u2013 1 $\mu$G) (Figures \ref{fig:ShockPre1} and \ref{fig:ShockPre2}). Shock+precursor models are thought to be a good description for AGN+outflow emission. Along the filament (extraction regions 2-5), the line ratios are explained well by a low velocity shock or shock+precursor model ($\sim$200 km s$^{-1}$) in a high density (n$_e \sim 1000$ cm$^{-3}$) gas with a transverse magnetic field parameter in the range of 1-10 $\mu$G. 
This is consistent with a scenario where the central black hole is driving a bipolar outflow that shocks the gas and dominates the ionization conditions along the filament. \\par\n\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.85\\textwidth]{figs\/M2_Figure_10.pdf}\n\\caption{\\textbf{(Extended Data Figure 7) Optical emission line diagnostics from the shock and shock+precursor models with varying magnetic field.} The models (presented in Allen et al. \\cite{allen2008mappings}) are shown as a grid with dashed blue lines indicating constant shock velocity and dashed black lines indicating constant transverse magnetic field. For these models, the density is fixed to n = 1000 cm$^{-3}$ and the transverse magnetic field parameter is allowed to vary from b = 0.01-32 $\\mu$G. }\n\\label{fig:ShockPre2}\n\\end{figure}\n\n\nAt the region of intense star formation located $\\sim$70 pc to the east of the black hole, we observe strong emission lines including a secondary, blue-shifted kinematic component. To properly fit the emission lines in this region, an additional Gaussian component is added (see Figure \\ref{fig:H210_AGN_spec2} in the main paper). The separation of this component is determined using the [SII] emission line region where the secondary peak is most clearly resolved. We find that the secondary peak is offset from the primary peak by 154 km s$^{-1}$, and we use this value when fitting the other emission line regions where the secondary peak is not as well resolved. When comparing to shock(+ precursor) models, we find that the [NII]\/H$\\alpha$ diagram indicates a low shock velocity (100-250 km s$^{-1}$) in a high-density gas (n = 1000 cm$^{-3}$) with a low transverse magnetic field parameter (b = 0.01 \u2013 1 $\\mu$G) (see Figures \\ref{fig:ShockPre1} and \\ref{fig:ShockPre2}). The [SII]\/H$\\alpha$ diagram shows the primary emission peak is inconsistent with emission from shocks or shocks+precursor models. 
This indicates that the primary emission peak at this location is primarily due to star formation. The secondary emission peak is consistent with shock+precursor models for the low velocity, high density conditions, indicating that this kinematically distinct emission component is dominated by a shock+precursor from the AGN-driven outflow. \\par\n\nEmission from extraction regions 6 and 7 (i.e., the western star-forming region) is not consistent with any shock or shock+precursor models, which is in agreement with their location in the HII region of the Baldwin\u2013Phillips\u2013Terlevich (BPT) diagram. The line ratios are dominated by star formation in this region.\n\n\\subsection{Star Cluster Ages:}\n\nWe estimate the ages of the young stellar clusters that fall within the EW slit from their H$\\alpha$ and H$\\beta$ equivalent widths. To ascertain ages from these equivalent widths we employ simple stellar population (SSP) models from Starburst99 \\cite{starburst99}. We use models from Version 7.1 with solar metallicity (appropriate for the central regions of Henize 2-10 \\cite{martin2006high}), an instantaneous burst of 10$^4$ M$_\\odot$ with a Kroupa IMF (0.1 \u2013 100 M$_\\odot$), the Geneva evolutionary tracks with high mass loss and the Pauldrach-Hillier atmospheres. \\par\n\nAt the location of the young stellar clusters in the eastern star-forming region (region 1 in Figure \\ref{fig:extract_reg}) we find an equivalent width of 478 \\AA \\ and 70 \\AA \\ for H$\\alpha$ and H$\\beta$ respectively. These both give stellar population age estimates of $\\sim$4.3 Myr, which is in good agreement with previous estimates of the ages of other young star clusters in the region \\cite{chandar2003stellar}. 
The ages of these clusters are larger than the crossing time for the AGN-driven outflow ($\\sim$0.3 Myr), based on the minimum outflow velocity measured from emission line spectra ($\\sim$200 km s$^{-1}$) and the distance between the AGN and the eastern star-forming knot ($\\sim$70 pc). Therefore, the timescales allow for the AGN-driven outflow to have triggered\/enhanced the formation of star clusters in the Eastern star-forming knot. \\par\n\nThe EW slit orientation also passes through a young star cluster in region 3. At this location we find equivalent widths of 212 \\AA \\ and 41 \\AA \\ for H$\\alpha$ and H$\\beta$ respectively, both indicating an age of $\\sim$5.2 Myr for the stellar cluster. Finally, region 6 in the western star-forming region has H$\\alpha$ and H$\\beta$ equivalent widths of 1092 \\AA \\ and 196 \\AA, respectively. These equivalent widths indicate the stellar clusters have ages $\\leq$3 Myr. \n\n\n\\subsection{Bipolar Outflow Model:}\n\nHere we provide a derivation of the model used to describe a precessing bipolar outflow emanating from the central radio\/X-ray source, which can explain the coherent velocity structure seen in the central $\\sim$120 pc of the EW orientation observations. In this model we align the EW slit orientation with the $z$-axis and assume the outflow precesses about this axis with a small angle $\\theta$ and an angular precession frequency $\\omega$. 
If the gas being ejected by the outflow has velocity $v_0$, and we orient the $x$-axis to be in the direction of the observer, then the radial (Doppler shifted) velocity seen at the location of the AGN as a function of time will be given by\n\n\begin{equation}\n v_r(t) = v_x(t) = v_0 \sin{\left(\theta\right)} \sin{\left(\omega t + \gamma\right)},\n\end{equation}\n\n\noindent\nwhere $\gamma$ represents a phase shift that accounts for small asymmetries in the outflow profile (see Figure \ref{fig:outflow_model} for an illustration of our model).\n\nIn order to find the radial velocity as a function of distance ($z$) along the slit axis, we must consider what angle the outflow made with the (line-of-sight) $x$-axis when the gas at distance $z$ was emitted. Since this angle is time dependent as the outflow precesses, the line-of-sight velocity of the gas will depend on the orientation of the outflow at some time $t_0$ in the past. The time that has passed since the gas at distance $z$ was ejected by the outflow is determined by the $z$ velocity of the gas. Due to the symmetry of the model about the $z$-axis, the $z$ component of the gas velocity will be time independent and only depend on the angle of the outflow with the $z$-axis,\n\n\begin{equation}\n v_z = v_0 \cos{(\theta)}.\n\end{equation}\n\n\noindent\nThe time, $t_0$, for gas to reach a distance $z$ along the slit is then given by\n\n\begin{equation}\n t_0 = \frac{z}{v_0\cos(\theta)}.\n\end{equation}\n\n\noindent\nWe are then able to find an expression for $v_r(z)$ by evaluating the expression for $v_r(t)$ at the time $-t_0$:\n\n\begin{equation}\n v_r(z) = v_0 \sin{\left(\theta\right)} \sin{\left( \gamma - \frac{\omega}{v_0\cos{(\theta)}} z \right)}.\n\end{equation}\n\n\nTo fit this model to the data we require a rough estimate of the bulk outflow velocity, $v_0$. 
We estimate this parameter using $W80$, the velocity interval containing 80\% of the line flux, of the broad emission seen at the location of the candidate AGN. We find $W80 \approx 200 - 500$ km s$^{-1}$ based on measurements of the [OIII]5007 and [OI]6300 lines at the location of the candidate AGN. This allows us to determine the best-fit angle of precession, $\theta$, and the frequency of precession, $f = \omega\/2\pi$, to be\n\n\begin{align}\n \theta &= 2.4^{\circ} - 6.1^{\circ} \\\n f &= 3.0 - 7.5 \ {\rm revolutions \ Myr}^{-1}\n\end{align}\n\nwhere the larger angle of precession and smaller frequency of precession correspond to lower outflow velocities. We find consistent results when using the Doppler shift profile of H$\alpha$, H$\beta$ and [OIII] emission lines to fit the model derived above (the results using the [OIII] emission line are shown in Figure \ref{fig:H210_AGN_spec2}). The Doppler shift profile can be coherently traced out to 50-60 pc on either side of the candidate AGN, most definitively out to the bright eastern star-forming region, after which the Doppler shifts of the emission lines are influenced by the bright young stars and then show no coherent pattern further along the slit. The coherent velocity structure seen on the scale of 100 pc is not consistent with a young supernova remnant as the compact radio\/X-ray source in the central regions of Henize 2-10, which provides further motivation that a low-luminosity AGN is driving the outflow. \n\nThese results are roughly consistent with jet parameters derived in other studies where precessing or reorienting jet models have been applied. The long precession period we observe ($\sim$200,000 years) is shorter by a factor of a few than those predicted by Dunn et al. \cite{dunn2006precession}, Nawaz et al. \cite{nawaz2016jet} and Cielo et al. \cite{cielo2018feedback}, but longer by a factor of a few than those predicted by Gower et al. 
\\cite{gower1982precessing} when jet precession is invoked to explain the complex bending and knotting seen in large radio jets.\n\n\n\n\\begin{figure}\n\\centering\n\\includegraphics[width=0.9\\textwidth]{figs\/M2_Figure_11.pdf}\n\\caption{\\textbf{(Extended Data Figure 8) A diagram of the toy model of the bipolar outflow generated by the low-luminosity AGN in Henize 2-10.} Our simple model depends on the outflow velocity of the ionized gas (v$_{outflow}$), the angle the outflow makes with its precession axis ($\\theta$) and the angular frequency with which the outflow precesses ($\\omega$). Similar models have been used to describe the bending seen in large radio jets \\cite{gower1982precessing,dunn2006precession}.}\n\\label{fig:outflow_model}\n\\end{figure}\n\n\n\\clearpage\n\n\\section{Acknowledgments}\n\nWe are grateful to Mallory Molina for useful discussions regarding shocks. We also thank Mark Whittle and Kelsey Johnson for their assistance with the HST\/STIS proposal while AER was a graduate student at the University of Virginia, as well as subsequent discussions. Support for Program number HST-GO-12584.006-A was provided by NASA through a grant from the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS5-26555. AER also acknowledges support for this work provided by NASA through EPSCoR grant number 80NSSC20M0231. ZS acknowledges support for this project from the Montana Space Grant Consortium.\n\n\\section{Author contributions statement}\n\nZS reduced and analyzed the STIS data and compared the results to models. AER led the HST\/STIS proposal and helped with the data reduction. Both authors worked on the interpretation of the results and writing of the paper. 
\n\n\section{Data Availability Statement}\nThe spectroscopic data analyzed in this study are available from the Mikulski Archive for Space Telescopes (MAST), https:\/\/archive.stsci.edu\/\n\n\section{Competing Interest Statement}\nThe authors declare no competing interests.\n\n\n\n\clearpage\n\n\bibliographystyle{plain}\n\n\section{Introduction} \label{sec:intro} Over the last few years\nnonlocal models have increasingly impacted upon a number of important\nfields in science and technology.\nThe evidence of anomalous diffusion processes, for example, has been\nfound in several physical and social environments \cite{Klafter,\n MetzlerKlafter}, and corresponding transport models have been\nproposed in various areas such as electrodiffusion in nerve\ncells~\cite{AnomalousElectrodiffusion} and ground-water solute\ntransport~\cite{BensonWheatcraft}. Non-local models have also been\nproposed in fields such as finance~\cite{CarrHelyette,RamaTankov} and\nimage processing~\cite{GattoHesthaven,GilboaOsher}.\nOne of the fundamental non-local operators is the Fractional Laplacian\n$(-\Delta)^s$ ($0 < s < 1$) which, from a probabilistic point of view\ncorresponds to the infinitesimal generator of a stable L\'evy process\n\cite{Valdinoci}.\n\nThe present contribution addresses theoretical questions and puts\nforth numerical algorithms for the numerical solution of the Dirichlet\nproblem\n\begin{equation}\n\left\lbrace\n \begin{array}{rl}\n (-\Delta)^s u = f & \mbox{ in }\Omega, \\\n u = 0 & \mbox{ in }\Omega^c \\\n \end{array}\n \right.\n\label{eq:fraccionario_dirichlet}\n\end{equation}\non a bounded one-dimensional domain $\Omega$ consisting of a union of\na finite number of intervals (whose closures are assumed mutually\ndisjoint). 
This approach to enforcement of (nonlocal) boundary\nconditions in a bounded domain $\Omega$ arises naturally in connection\nwith the long jump random walk approach to the Fractional\nLaplacian~\cite{Valdinoci}. In such random walk processes, jumps of\narbitrarily long distances are allowed. Thus, the payoff of the\nprocess, which corresponds to the boundary datum of the Dirichlet\nproblem, needs to be prescribed in $\Omega^c$.\n\n\nLetting $s$ denote a real number ($0 < s < 1$), it is known that if the\nright-hand side $f$ belongs to $H^r(\Omega)$ for some $r > 0$,\nthen the solution $u$ may be written as $w^s\phi + \chi$, where $\phi\n\in H^{r+s}(\Omega)$ and $\chi \in H^{r+2s}_0(\Omega)$. Interior regularity\nresults for the Fractional Laplacian and related operators have also\nbeen the object of recent studies~\cite{Albanese2015,Cozzi2016}.\n\n\nThe sharp regularity results put forth in the present contribution, in\nturn, are related to but different from those mentioned above. Indeed\nthe present regularity theorems show that the fractional Laplacian in\nfact induces a {\em bijection} between certain weighted Sobolev\nspaces. Using an appropriate version of the Sobolev lemma put forth in\nSection~\ref{regularity}, these results imply, in particular, that the\nregular factors in the decompositions of fractional Laplacian solutions\nadmit $k$ continuous derivatives for a certain value of $k$ that\ndepends on the regularity of the right-hand side. Additionally, this\npaper establishes the operator regularity in spaces of analytic\nfunctions: denoting by $A_\rho$ the space of analytic functions in the\nBernstein Ellipse $\mathcal{E}_\rho$, the weighted operator $K_s(\phi)\n= (-\Delta)^s(\omega^s \phi)$ maps $A_\rho$ into itself bijectively. 
In\nother words, for a right-hand side which is analytic in a Bernstein\nEllipse, the solution is characterized as the product of an analytic\nfunction in the same Bernstein Ellipse times an explicit singular\nweight.\n\nThe theoretical treatment presented in this paper is essentially\nself-contained. This approach recasts the problem as an integral\nequation in a bounded domain, and it proceeds by computing certain\nsingular exponents $\alpha$ that make $(-\Delta)^s (\omega^\alpha\n\phi(x))$ analytic near the boundary for every polynomial $\phi$. As\nshown in Theorem~\ref{teo1}, an infinite sequence of such values of\n$\alpha$ is given by $\alpha_n = s + n$ for all $n\geq 0$. Moreover,\nSection~\ref{two_edge_sing} shows that the weighted operator $K_s$\nmaps polynomials of degree $n$ into polynomials of degree $n$---and it\nprovides explicit closed-form expressions for the images of each\npolynomial $\phi$.\n \nA certain hypersingular form we present for the operator $K_s$ leads\nto consideration of a weighted $L^2$ space wherein $K_s$ is\nself-adjoint. In view of the aforementioned polynomial-mapping\nproperties of the operator $K_s$ it follows that this operator is\ndiagonal in a basis of orthogonal polynomials with respect to a\ncorresponding inner product. A related diagonal form was obtained in\nthe recent independent contribution~\cite{Dyda2016} by employing\narguments based on Mellin transforms. The diagonal\nform~\cite{Dyda2016} provides, in particular, a family of explicit\nsolutions in the $n$ dimensional ball in ${\mathbb {R}}^n$, which are given by\nproducts of the singular term $(1-|z|^2)^s$ and general Meijer\nG-Functions. 
The diagonalization approach proposed in this paper,\nwhich is restricted to the one-dimensional case, is elementary and is\nsuccinctly expressed: the eigenfunctions are precisely the Gegenbauer\npolynomials.\n\nThis paper is organized as follows: Section~\\ref{integraleq} casts the\nproblem as an integral equation, and Section~\\ref{diagonalform}\nanalyzes the boundary singularity and produces a diagonal form for the\nsingle-interval problem. Relying on the Gegenbauer eigenfunctions and\nassociated expansions found in Section~\\ref{diagonalform},\nSection~\\ref{regularity} presents the aforementioned Sobolev and\nanalytic regularity results for the solution $u$, and it includes a\nweighted-space version of the Sobolev lemma. Similarly, utilizing\nGegenbauer expansions in conjunction with Nystr\\\"om discretizations\nand taking into account the analytic structure of the edge\nsingularity, Section~\\ref{HONM} presents a highly accurate and\nefficient numerical solver for Fractional-Laplacian equations posed on\na union of finitely many one-dimensional intervals. The sharp error\nestimates presented in Section~\\ref{HONM} indicate that the proposed\nalgorithm is spectrally accurate, with convergence rates that only\ndepend on the smoothness of the right-hand side. In particular,\nconvergence is exponentially fast (resp. faster than any power of the\nmesh-size) for analytic (resp. infinitely smooth) right-hand sides. 
A\nvariety of numerical results presented in Section~\\ref{num_res}\ndemonstrate the character of the proposed solver: the new algorithm is\nsignificantly more accurate and efficient than those resulting from\nprevious approaches.\n\n\n\n\n\n\n\\section{Hypersingular Bounded-Domain Formulation\\label{integraleq}}\n\nIn this section the one-dimensional operator \n\\begin{equation}\n(-\\Delta)^s u(x) = C_1(s) \\mbox{ P.V.} \\int_{-\\infty}^\\infty \\left( u(x)-u(x-y) \\right) |y|^{-1-2s} dy\n\\label{frac1d}\n\\end{equation} \ntogether with Dirichlet boundary conditions outside the bounded domain\n$\\Omega$, is expressed as an integral over $\\Omega$. The Dirichlet\nproblem~\\eqref{eq:fraccionario_dirichlet} is then identified with a\nhypersingular version of Symm's integral equation; the precise\nstatement is provided in Lemma~\\ref{lemma_hypersingular} below. In\naccordance with Section~\\ref{sec:intro}, throughout this paper we\nassume the following definition holds.\n\\begin{definition}\\label{union_intervals_def}\n The domain $\\Omega$ equals a finite union\n\\begin{equation}\n\\label{union_intervals}\n\\Omega = \\bigcup_{i=1}^M (a_i, b_i)\n\\end{equation}\nof open intervals $(a_i,b_i)$ with disjoint closures. We denote\n$\\partial\\Omega =\\{a_1,b_1,\\dots, a_M,b_M\\}$.\n\\end{definition}\n\n\\begin{definition}\n $C^2_0(\\Omega)$ will denote, for a given open set $\\Omega \\subset\n \\mathbb{R}$, the space of all functions $u \\in C^2(\\Omega) \\cap\n C(\\mathbb{R})$ that vanish outside of $\\Omega$. For $\\Omega =(a,b)$ we will\n simply write $C^2_0((a,b)) = C^2_0(a,b)$.\n\\end{definition}\nThe following lemma provides a useful expression for the Fractional\nLaplacian operator in terms of a certain integro-differential\noperator. 
For clarity the result is first presented in the following\nlemma for the case $\\Omega=(a,b)$; the generalization to \ndomains $\\Omega$ of the form~\\eqref{union_intervals} then follows easily in\nCorollary~\\ref{coro_lemma_hypersingular}.\n\n\\begin{lemma}\n\\label{lemma_hypersingular}\nLet $s \\in (0,1)$, let $u \\in\nC^2_0(a,b)$ such that $|u'|$ is integrable in $(a,b)$, let $x \\in \\mathbb{R}, x\\not \\in \\partial \\Omega=\\{a,b\\}$,\nand define\n\\begin{equation}\\label{eq:c_s}\nC_s = \\frac{C_1(s)}{2s(1-2s)} = -\\Gamma(2s-1)\\sin(\\pi s) \/ \\pi \\quad (s\\ne 1\/2);\n\\end{equation}\nWe then have\n\\begin{itemize}\n\\item [---] Case $s \\ne \\frac{1}{2}$:\n\\begin{equation}\n\\label{eq:hypersingular}\n(-\\Delta)^s u (x) = C_s \\frac{d}{dx} \\int_{a}^b |x-y|^{1-2s} \\frac{d}{dy} u(y) dy.\n\\end{equation}\n\\item [---] Case $s=\\frac{1}{2}$:\n\\begin{equation}\n\\label{eq:hypersingular_s12}\n(-\\Delta)^{1\/2} u (x) = \\frac{1}{\\pi} \\frac{d}{dx} \\int_{a}^b \\ln |x-y| \\frac{d}{dy} u(y) dy.\n\\end{equation}\n\\end{itemize}\n\\end{lemma}\n\\begin{proof}\n We note that, since the support of $u=u(x)$ is contained in $[a,b]$,\n for each $x \\in \\mathbb{R}$ the support of the translated function\n $u=u(x-y)$ as a function of $y$ is contained in the set\n $[x-b,x-a]$. Thus, using the decomposition ${\\mathbb {R}} = [x-b,x-a] \\cup\n (-\\infty,x-b) \\cup (x-a,\\infty)$ in~\\eqref{frac1d}, we obtain the\n following expression for $(-\\Delta)^s u (x)$:\n\\begin{equation}\\label{spliting}\n C_1(s) \\Bigg( \\mbox{P.V.} \\int_{x-b}^{x-a} ( u(x)-u(x-y) ) |y|^{-1-2s} dy + \n \\left[\\int_{-\\infty}^{x-b} dy + \\int_{x-a}^\\infty dy\\right] u(x) |y|^{-1-2s} \\Bigg). \n\\end{equation}\n\n\nWe consider first the case $x\\not\\in [a,b]$, for\nwhich~\\eqref{spliting} becomes\n\\begin{equation}\\label{spliting_2}\n -C_1(s)\\Bigg( \\mbox{P.V.} \\int_{x-b}^{x-a} u(x-y) |y|^{-1-2s} dy\\Bigg). 
\n\\end{equation}\nNoting that the integrand~\\eqref{spliting_2} is smooth, integration by\nparts yields\n\\begin{equation}\\label{parts_firstterm}\n \\frac{C_1(s)}{2s} \\int_{x-b}^{x-a} u'(x-y) \\operatorname{sgn}(y)|y|^{-2s} dy\n\\end{equation}\n(since $u(a)=u(b)=0$), and, thus, letting $z=x-y$ we obtain\n\\begin{equation}\\label{singleinterval_smooth}\n (-\\Delta)^s u(x) = \\frac{C_1(s)}{2s} \\int_{a}^{b} \\operatorname{sgn}(x-z)|x-z|^{-2s} u'(z) dz\\quad , \\quad x\\not\\in [a,b].\n\\end{equation}\nThen, letting\n\\begin{equation*}\\label{K_def}\n\\Phi_s(y) = \\left\\lbrace\n \\begin{array}{ll}\n |y|^{1-2s}\/(1-2s) & \\mbox{ for } s\\in (0,1), \\, s\\ne 1\/2 \\\\\n \\log|y| & \\mbox{ for } s = 1\/2 \n \\end{array}\n \\right. ,\n\\end{equation*}\nnoting that\n\\begin{equation}\n\\label{sgn_x_der}\n\\operatorname{sgn}(x-z)|x-z|^{-2s} = \n\\frac{\\partial}{\\partial x} \\Phi_s(x-z),\n\\end{equation}\nreplacing~\\eqref{sgn_x_der} in ~\\eqref{singleinterval_smooth} and\nexchanging the $x$-differentiation and $z$-integration yields the\ndesired expressions~\\eqref{eq:hypersingular}\nand~\\eqref{eq:hypersingular_s12}. This completes the proof in the case\n$x\\not\\in [a,b]$.\n\nLet us now consider the case $x\\in(a,b)$. The second term\nin~\\eqref{spliting} can be computed exactly; we clearly have\n\\begin{equation}\\label{eq:bnd_term}\n\\left[\\int_{-\\infty}^{x-b} dy + \\int_{x-a}^\\infty dy\\right] u(x) |y|^{-1-2s} = \n\\left[ \\frac{u(x)}{2s} \\operatorname{sgn}(y) |y|^{-2s} \\bigg|_{y=x-b}^{y=x-a} \\right] .\n\\end{equation}\nIn order to integrate by parts in the P.V. 
integral in~\eqref{spliting} consider the set $$ D_\varepsilon =\n[x-b,x-a] \setminus (-\varepsilon,\varepsilon).$$\nThen, defining\n\begin{equation*}\n\label{parts_secondterm_0}\nQ_\epsilon (x) = \int_{D_\epsilon } \left( u(x)-u(x-y) \right) |y|^{-1-2s} dy \n\end{equation*}\nintegration by parts yields\n\begin{equation*}\n\label{parts_secondterm}\nQ_\epsilon (x) = \n-\frac{1}{2s}\left ( g_{a}^b(x) - h_{a}^b(x) - \frac{\delta^2_\varepsilon}{\varepsilon^{2s}} - \int_{D_\epsilon} u'(x-y) \operatorname{sgn}(y)|y|^{-2s} dy \right)\n\end{equation*}\nwhere $\delta^2_\varepsilon = u(x+\varepsilon) + u(x-\varepsilon) - 2 u(x)$, $g_{a}^b(x) = u(x)(|x-a|^{-2s} + |x-b|^{-2s})$ and $h_{a}^b(x) = u(a)|x-a|^{-2s} + u(b)|x-b|^{-2s}$. \n\nThe term $h_{a}^b(x)$ vanishes since $u(a)=u(b)=0$. The contribution\n$g_{a}^b(x)$, on the other hand, exactly cancels the boundary terms in\nequation~\eqref{eq:bnd_term}.\nFor the values $x\in (a,b)$ under\nconsideration, a Taylor expansion in $\varepsilon$ around $\varepsilon=0$\nadditionally tells us that the quotient\n$\frac{\delta^2_\varepsilon}{\varepsilon^{2s}}$ tends to $0$ as $\varepsilon\to\n0$. Therefore, using the change of variables $z=x-y$ and letting\n$\varepsilon\to 0$ we obtain a principal-value expression valid for $x \ne a, x\ne b$:\n\begin{equation}\n\label{eq:pv_singleinterval}\n(-\Delta)^s u (x) = \frac{C_1(s)}{2s} \mbox{ P.V.} \int_{a}^b \operatorname{sgn}(x-z)|x-z|^{-2s} u'(z) dz.\n\end{equation}\nReplacing~\eqref{sgn_x_der} in~\eqref{eq:pv_singleinterval} then\nyields~\eqref{eq:hypersingular} and~\eqref{eq:hypersingular_s12},\nprovided that the derivative in $x$ can be interchanged with the\nP.V. integral. This interchange is indeed correct, as it follows from\nan application of the following Lemma to the function $v=u'$. 
The\nproof is thus complete.\n\end{proof}\n\begin{lemma}\n\label{lemma_exchangePV}\nLet $\Omega \subset \mathbb{R}$ be as indicated in\nDefinition~\ref{union_intervals_def} and let $v\in C^1(\Omega)$ such\nthat $v$ is absolutely integrable over $\Omega$, and let\n$x\in\Omega$. Then the following relation holds:\n\begin{equation} \label{der_pv}\n P.V. \int_\Omega \frac{\partial}{\partial x} \Phi_s(x-y) v(y) dy = \frac{\partial}{\partial x} \int_\Omega \Phi_s(x-y) v(y) dy\n\end{equation}\n\end{lemma}\n\begin{proof}\nSee Appendix~\ref{app_exchangePV}.\n\end{proof}\n\n\n\begin{corollary}\n\label{coro_lemma_hypersingular}\nGiven a domain $\Omega$ as in Definition~\ref{union_intervals_def}, and\nwith reference to equation~\eqref{eq:c_s}, for $u\in C_0^2(\Omega)$ and\n$x\not\in \partial \Omega$ we have\n\begin{itemize}\n\item [---] Case $s \ne \frac{1}{2}$:\n\begin{equation}\n\label{eq:mutli_int}\n(-\Delta)^s u (x) = C_s \frac{d}{dx} \sum_{i=1}^M \int_{a_i}^{b_i} |x-y|^{1-2s} \frac{d}{dy} u(y) dy\n\end{equation}\n\item [---] Case $s=\frac{1}{2}$:\n\begin{equation}\n\label{eq:mutli_int_s12}\n (-\Delta)^{1\/2} u (x) = \frac{1}{\pi} \frac{d}{dx} \sum_{i=1}^M \int_{a_i}^{b_i} \ln |x-y| \frac{d}{dy} u(y) dy\n\end{equation}\n\end{itemize}\nfor all $x\in\mathbb{R}\setminus \partial \Omega$, where $\partial \Omega = \cup_{i=1}^M \{a_i, b_i\}$.\n\end{corollary}\n\begin{proof}\n Given $u\in C_0^2(\Omega)$ we may write $u=\sum_i^M u_i$ where, for\n $i=1,\dots,M$ the function $u_i=u_i(x)$ equals $u(x)$ for $x \in\n (a_i,b_i)$ and it equals zero elsewhere. In view of\n Lemma~\ref{lemma_hypersingular} the result is valid for each\n function $u_i$ and, by linearity, it is thus valid for the function\n $u$. 
The proof is complete.\n\end{proof}\n\begin{remark}\label{bound_rem}\n A point of particular interest arises as we examine the character of\n $(-\Delta)^s u$ with $u\in C_0^2(\Omega)$ for $x$ at or near $\partial\n \Omega$. Both Lemma~\ref{lemma_hypersingular} and its\n corollary~\ref{coro_lemma_hypersingular} are silent in this\n regard. For $\Omega = (a,b)$, for example, inspection of\n equation~\eqref{eq:pv_singleinterval} leads one to generally expect\n that $(-\Delta)^s u(x)$ has an infinite limit as $x$ tends to each\n one of the endpoints $a$ or $b$. But this is not so for all\n functions $u\in C_0^2(\Omega)$. Indeed, as established in\n Section~\ref{diag}, the subclass of functions in $C_0^2(\Omega)$ for\n which there is a finite limit forms a dense subspace of a relevant\n weighted $L^2$ space. In fact, a dense subset of functions exists\n for which the image of the fractional Laplacian extends to an\n analytic function on the entire complex $x$-plane.\n But, even for such functions, definition~\eqref{frac1d} still\n generically gives $(-\Delta)^s u (x) =\pm \infty$ for $x=a$ and\n $x=b$. Results concerning functions whose Fractional Laplacian blows\n up at the boundary can be found in~\cite{Abatangelo2015}.\n\end{remark}\nThe next section concerns the single-interval case ($M=1$\nin~\eqref{eq:mutli_int},~\eqref{eq:mutli_int_s12}). Using\ntranslations and dilations the single interval problem in any given\ninterval $(a_1, b_1)$ can be recast as a corresponding problem in any\ndesired open interval $(a,b)$. For notational convenience two\ndifferent selections are made at various points in\nSection~\ref{diagonalform}, namely $(a,b)=(0,1)$ in\nSections~\ref{edge_sing_left} and~\ref{two_edge_sing}, and $(a,\nb)=(-1,1)$ in Section~\ref{diag}. 
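For reference, the translation-and-dilation reduction just mentioned rests on an elementary scaling identity (a brief sketch, not stated in this form elsewhere in the text; it follows directly from the substitution $y \to (b-a)y$ in~\eqref{frac1d}): if $u(x) = \tilde u\left(\frac{x-a}{b-a}\right)$, then

```latex
(-\Delta)^s u\,(x) \;=\; (b-a)^{-2s}\,
  \left[ (-\Delta)^s \tilde u \right]\!\left( \frac{x-a}{b-a} \right) ,
```

so that a solution of $(-\Delta)^s u = f$ on $(a,b)$ corresponds to a solution on the reference interval whose right-hand side is the mapped function $f$ multiplied by the factor $(b-a)^{2s}$.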
The conclusions and results can\nthen be easily translated into corresponding results for general\nintervals; see for example Corollary~\ref{diag_ab}.\n\n\section{Boundary Singularity and Diagonal Form of the Single-Interval\n Operator \label{diagonalform}}\n\n\nLemma~\ref{lemma_hypersingular} expresses the action of the operator\n$(-\Delta)^s$ on elements $u$ of the space $C_0^2(\Omega)$ in terms of the\nintegro-differential operators on the right-hand side of\nequations~\eqref{eq:hypersingular} and~\eqref{eq:hypersingular_s12}. A\nbrief consideration of the proof of that lemma shows that for such\nrepresentations to be valid it is essential for the function $u$ to\nvanish on the boundary---as all functions in $C_0^2(a,b)$ do, by\ndefinition. Section~\ref{edge_sing_left} considers, however, the\naction of the integral operators on the right-hand side of\nequations~\eqref{eq:hypersingular} and~\eqref{eq:hypersingular_s12} on\ncertain functions $u$ defined on $\Omega = (a,b)$ {\em which do not\n necessarily vanish at $a$ or $b$}. To do this\nwe study the closely related integral operators\n \begin{align}\n S_s[u](x) &:= C_s \int_a^b \left( |x-y|^{1-2s} - (b-a)^{1-2s} \right) u(y) dy \;\; ( s \ne \frac{1}{2} ), \label{Tsdef_eq1}\\\n S_{\frac{1}{2}}[u](x) &:= \frac{1}{\pi} \int_a^b \log\left(\frac{|x-y|}{b-a}\right) u(y) dy, \label{Tsdef_eq2}\\\n T_s[u](x) &:= \frac{\partial}{\partial x} S_s\left[\n \frac{\partial}{\partial y} u(y) \right](x) .\label{Tsdef_eq3}\n\end{align}\n\n\n\begin{remark}\label{const_term}\n The addition of the constant term $-(b-a)^{1-2s}$ in the\n integrand~\eqref{Tsdef_eq1} does not have any effect on the\n definition of $T_s$: the constant $-(b-a)^{1-2s}$ only results in the addition\n of a constant term on the right-hand side of~\eqref{Tsdef_eq1},\n which then yields zero upon the outer differentiation in\n equation~\eqref{Tsdef_eq3}. 
The integrand~\eqref{Tsdef_eq1} is\n selected, however, in order to ensure that the kernel of $S_s$\n (namely, the function $C_s \left( |x-y|^{1-2s} -(b-a)^{1-2s}\right)$) tends to\n the kernel of $S_{\frac{1}{2}}$ in~\eqref{Tsdef_eq2} (the function\n $\frac{1}{\pi} \log(|x-y|\/(b-a))$) in the limit as $s\to \frac{1}{2}$.\n\end{remark}\n\begin{remark}\label{Ts_PV}\n In view of Remark~\ref{const_term} and Lemma~\ref{lemma_exchangePV},\n for $u\in C^2(a,b)$ we additionally have\n\begin{equation}\label{Tsdef2}\n T_s[u](x) = \frac{C_1(s)}{2s} \mbox{ P.V.} \int_{a}^b\n \operatorname{sgn}(x-z)|x-z|^{-2s} u'(z) dz.\n\end{equation}\n\end{remark}\n\begin{remark}\n\label{remark_Ts}\n The operator $T_s$ coincides with $(-\Delta)^s$ for functions $u$\n that satisfy the hypothesis of Lemma~\ref{lemma_hypersingular}, but\n $T_s$ does not coincide with $(-\Delta)^s$ for functions $u$ which,\n such as those we consider in Section~\ref{edge_sing_left} below, do\n not vanish on $\partial \Omega = \{a,b\}$.\n\end{remark}\n\n\begin{remark}\label{remark_openarcs}\n The operator $S_{\frac{1}{2}}$ coincides with Symm's integral\n operator~\cite{Symm}, which is important in the context of\n electrostatics and acoustics in cases where Dirichlet boundary\n conditions are posed on infinitely-thin open\n plates~\cite{OpenArcsRadioScience,OpenArcsTheoretical,Symm,YanSloan}. The\n operator $T_{\frac{1}{2}}$, on the other hand, which may be viewed\n as a {\em hypersingular version} of Symm's operator\n $S_{\frac{1}{2}}$, similarly relates to electrostatics and\n acoustics, in cases leading to Neumann boundary conditions posed on\n open-plate geometries. The operators $S_{s}$ and $T_{s}$ in the\n cases $s \ne \frac{1}{2}$ can thus be interpreted as generalizations\n to fractional powers of classical operators in potential theory,\n cf. 
also Remark~\ref{remark_Ts}.\n\end{remark}\nRestricting attention to $\Omega = (a,b) = (0,1)$ for notational\nconvenience and without loss of generality,\nSection~\ref{edge_sing_left} studies the image $T_s[u_\alpha]$ of the\nfunction \n\begin{equation}\label{u_alpha}\n u_\alpha(y) =y^\alpha\n\end{equation}\nwith $\Re\alpha > 0$---which is smooth in $(0,1)$, but which has an\nalgebraic singularity at the boundary point $y=0$. That section shows\nin particular that, whenever $\alpha = s +n$ for some $n\in\n\mathbb{N}\cup \{ 0 \}$, the function $T_s[u_\alpha](x)$ can be\nextended analytically to a region containing the boundary point $x=0$.\nBuilding upon this result (and assuming once again $\Omega = (a, b) =\n(0,1)$), Section~\ref{two_edge_sing} explicitly evaluates the images\nof functions of the form $v(y) = y^{s+n}(1-y)^s$ ($n\in \mathbb{N}\cup\n\{ 0 \}$), which are singular (not smooth) at the two boundary points\n$y=0$ and $y=1$, under the integral operators $T_s$ and $S_s$. The\nresults in Section~\ref{two_edge_sing} imply, in particular, that the\nimage $T_s[v]$ for such functions $v$ can be extended analytically to\na region containing the interval $[0,1]$. 
Reformulating all of these\nresults in the general interval $\\Omega = (a,b)$, Section~\\ref{diag} then\nderives the corresponding single-interval diagonal form for weighted\noperators naturally induced by $T_s$ and $S_s$.\n\\subsection{Single-edge singularity\\label{edge_sing_left}}\nWith reference to equations~\\eqref{Tsdef2} and \\eqref{eq:c_s}, and\nconsidering the aforementioned function $u_\\alpha(y) = y^\\alpha$ we\nclearly have\n\\begin{equation*}\\label{eq:TsNs}\n T_s[u_\\alpha](x) =\\alpha (1-2s) C_s N_\\alpha^s(x) \\;\\; \\mbox{, where}\n\\end{equation*}\n\\begin{equation}\n N_\\alpha^s(x) := P.V.\\int_{0}^{1} \\operatorname{sgn}(x-y)|x-y|^{-2s} y^{\\alpha-1} dy .\n\\label{eq:def_Ns}\n\\end{equation} \nAs shown in Theorem~\\ref{teo1} below (equation~\\eqref{eq:Ns_Betas}),\nthe functions $N_\\alpha^s$ and (thus) $ T_s[u_\\alpha]$ can be\nexpressed in terms of classical special functions whose singular\nstructure is well known. Leading to the proof of that theorem, in what\nfollows we present a sequence of two auxiliary lemmas.\n\\begin{lemma}\n\\label{lemma0_analytic}\nLet $ E = (a, b)\\subset\\mathbb{R}$, and let $C\\subseteq \\mathbb{C}$\ndenote an open subset of the complex plane. Further, let $f=f(t,c)$ be\na function defined in $E\\times C$, and assume 1)~$f$ is continuous in\n$E\\times C$, 2)~$f$ is analytic with respect to $c=c_1+ic_2\\in C$ for\neach fixed $t\\in E$, and 3)~$f$ is ``uniformly integrable over\ncompact subsets of $C$''---in the sense that for every compact set $K\n\\subset C$ the functions\n\\begin{equation}\n\\label{hab_eta}\nh_a(\\eta,c) = \\left| \\int_a^{a+\\eta} f(t,c) dt \\,\\right|\\quad \\mbox{and}\\quad\nh_b(\\eta,c) = \\left| \\int_{b-\\eta}^b f(t,c) dt \\,\\right|\n\\end{equation}\ntend to zero uniformly for $c\\in K$ as $\\eta\\to 0^+$. Then the\nfunction\n$$F(c) := \\int_E f(t,c) dt$$\nis analytic throughout $C$.\n\\end{lemma}\n\\begin{proof} \n Let $K$ denote a compact subset of $C$. 
For each $c\in K$ and each\n $n\in\mathbb{N}$ we consider Riemann sums $R_n^h(c)$ for the\n integral of $f$ in the interval $[a+\eta_n,b-\eta_n]$, where\n $\eta_n$ is selected in such a way that $h_a(\eta_n,c) \leq 1\/n$ and\n $h_b(\eta_n,c) \leq 1\/n$ for all $c\in K$ (which is clearly possible\n in view of the hypothesis~\eqref{hab_eta}). The Riemann sums are\n defined by $R_n^h(c)=h \sum_{j=1}^M f(t_j,c)$, with $h = (b-a\n -2\eta_n)\/M$ and $t_{j+1} - t_j =h $ for all $j$.\n\n Let $n\in \mathbb{N}$ be given. In view of the uniform continuity of\n $f(t,c)$ in the compact set $[a+\eta_n,b-\eta_n]\times K$, the\n difference between the maximum and minimum of $f(t,c)$ in each\n integration subinterval $(t_j,t_{j+1})\subset [a+\eta_n,b-\eta_n]$\n tends uniformly to zero for all $c\in K$ as the integration meshsize\n tends to zero. It follows that a meshsize $h = h_n$ can be found for\n which the approximation error in the corresponding Riemann sum\n $R_n^h(c)$ is {\em uniformly small for all $c\in K$}:\n\[\n\left| \int_{a+\eta_n}^{b-\eta_n} f(t,c) dt - R_n^h(c)\right| < \frac 1n\n\quad \mbox{for all $c\in K$ and for all $n\in \mathbb{N}$}.\n\]\nThus $F(c)$ equals a uniform limit of analytic functions over every\ncompact subset of $C$, and therefore $F(c)$ is itself analytic\nthroughout $C$, as desired. 
We then have:\n\begin{itemize}\n\item[(\it{i})] For each fixed $\alpha$ such that $\Re \alpha > 0$,\n $g(s,\alpha)$ is an analytic function of $s$ for $\Re s < 1$; and\n\item[(\it{ii})] For each fixed $s$ such that $\Re s < 1$,\n $g(s,\alpha)$ is an analytic function of $\alpha$ for $\Re \alpha > 0$.\n\end{itemize}\nIn other words, for each fixed $x\in (0,1)$ the function $\nN_\alpha^s(x)$ is jointly analytic in the $(s,\alpha)$ domain $D = \{\n\Re s < 1\} \times \{\Re \alpha > 0\} \subset \mathbb{C}^2$.\n\end{lemma}\n\begin{proof} \n We express the integral that defines $N_\alpha^s$ as the sum $g_1(s,\alpha) +\n g_2(s,\alpha)$ of two integrals, each one of which contains only one of the\n two singular points of the integrand ($y=0$ and $y=x$):\n\begin{equation*}\label{ns_two}\n g_1 = \int_{0}^{x\/2}\n \operatorname{sgn}(x-y)|x-y|^{-2s} y^{\alpha-1} dy \, \mbox{ and } g_2 = P.V. \int_{x\/2}^1\n \operatorname{sgn}(x-y)|x-y|^{-2s} y^{\alpha-1} dy. \n\end{equation*}\nLemma~\ref{lemma0_analytic} tells us that $g_1$ is an analytic\nfunction of $s$ and $\alpha$ for $(s,\alpha)\in D_1 = \mathbb{C}\times\n\{\Re \alpha > 0\}$. \n\nIntegration by parts in the $g_2$ term, in turn, yields\n\begin{equation}\n\label{g2_parts}\n(1-2s) g_2(s,\alpha) = (1-x)^{1-2s} - \left( \frac{x}{2}\right) ^{\alpha-2s}-\n(\alpha-1) \int_{x\/2}^{1} |x-y|^{1-2s} y^{\alpha-2} dy .\n\end{equation}\nBut, writing the integral on the right-hand side\nof~\eqref{g2_parts} in the form $\int_{x\/2}^1 = \int_{x\/2}^x +\n\int_{x}^1$ and applying Lemma~\ref{lemma0_analytic} to each one of\nthe resulting integrals shows that the quantity $(1-2s) g_2(s,\alpha)$\nis an analytic function of $s$ and $\alpha$ for $(s,\alpha)\in D_2 =\n\mathbb{C}\times \{ \Re \alpha > 0\}$. 
In view of the $(1-2s)$ factor,\nhowever, it still remains to be shown that $g_2(s,\alpha)$ is analytic\nat $s=1\/2$ as well.\n\nTo check that both $g_2(s,\alpha)$ and $g(s,\alpha)$ are analytic\naround $s=1\/2$ for any fixed $\alpha \in \{ \Re \alpha>0 \}$, we first\nnote that since $\int_0^1 1\cdot y^{\alpha-1} dy$ is a constant\nfunction of $x$ we may write\n$$ g(s,\alpha) = \frac{1}{1-2s} \frac{\partial}{\partial x} \int_0^1 \left(|x-y|^{1-2s} - 1 \right) y^{\alpha-1} dy. $$\nBut since we have the uniform limit\n\[\n\lim_{s\to 1\/2}\frac{|x-y|^{1-2s} - 1 }{1-2s} =\n\left .\frac{\partial}{\partial r} |x-y|^r\right|_{r=0} = \log|x-y|\n\]\nas complex values of $s$ approach $s=1\/2$, we see that $g$ is in fact\ncontinuous and therefore, by Riemann's theorem on removable\nsingularities, analytic at $s=1\/2$ as well. The proof is now complete.\n\end{proof}\n\begin{theorem}\label{teo1} \n Let $s\in (0,1)$ and $\alpha >0$. Then $N_\alpha^s(x)$ can be\n analytically continued to the unit disc $\{x:|x|<1\}\subset\n \mathbb{C}$ if and only if either $\alpha = s + n$ or $\alpha = 2s +\n n$ for some $n\in \mathbb{N}\cup \{ 0 \}$. In the case $\alpha = s +\n n$, further, we have\n \begin{equation}\n \label{eq:teo1}\n N_{s+n}^s(x) = \sum_{k=0}^{\infty} \frac{(2s)_k}{s-n+k} \frac{x^k}{k!} \n\end{equation}\nwhere, for a given complex number $z$ and a given non-negative integer $k$\n\begin{equation}\n\label{def_Pochhamer}\n(z)_k:=\frac{\Gamma(z+k)}{\Gamma(z)}\n \end{equation}\ndenotes the Pochhammer symbol.\n\end{theorem}\n\begin{proof}\n We first assume $s<\frac{1}{2}$ (for which the integrand\n in~\eqref{eq:def_Ns} is an element of $L^1(0,1)$) and $\alpha < 2s$\n (to enable some of the following manipulations); the result for the\n full range of $s$ and $\alpha$ will subsequently be established by\n analytic continuation in these variables. 
Writing\n$$ N_\alpha^s(x) = x^{-2s} \int_{0}^{1} \operatorname{sgn}(x-y) \left|1-\frac{y}{x}\right|^{-2s} y^{\alpha-1} dy ,$$\nafter a change of variables and some simple calculations for $x\in\n(0,1)$ we obtain\n\begin{equation}\n\label{eq:Ns_secondterm}\nN_\alpha^s(x) = x^{-2s+\alpha} \left[ \int_{0}^{1} (1-r)^{-2s} r^{\alpha-1} dr - \int_{1}^{\frac{1}{x}} (r-1)^{-2s} r^{\alpha-1} dr \right].\n\end{equation}\nIt then follows that\n\begin{equation}\n\label{eq:Ns_Betas}\nN_\alpha^s(x) = x^{-2s+\alpha} \left[\mbox{B}(\alpha, 1 - 2 s)\n -\mbox{B}(1 - 2 s,2s-\alpha) + \mbox{B}_x(-\alpha + 2 s, 1 - 2 s)\n\right],\n\end{equation}\nwhere \n\begin{equation}\label{Betas}\n \begin{split}\n\mbox{B}(a,b) & := \int_0^1 t^{a-1}(1-t)^{b-1} dt = \frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)} \;\;\; \mbox{ and} \\\n\mbox{B}_x(a,b) & := \int_0^x t^{a-1}(1-t)^{b-1} dt = x^{a} \sum_{k=0}^{\infty} \frac{(1-b)_k}{a+k} \frac{x^k}{k!}\n \end{split}\n\end{equation}\ndenote the Beta Function~\cite[eqns. 6.2.2]{AbramowitzStegun} and the\nIncomplete Beta function~\cite[eqns. 6.6.8 and\n15.1.1]{AbramowitzStegun}, respectively. Indeed, the first integral\nin~\eqref{eq:Ns_secondterm} equals the first Beta function on the\nright-hand side of~\eqref{eq:Ns_Betas}, and, after the change of\nvariables $w= 1\/r$, the second integral is easily seen to equal the\ndifference $\mbox{B}(1-2s,2s-\alpha) - \mbox{B}_x(-\alpha + 2 s, 1 - 2\ns)$.\n\nIn view of~\eqref{eq:Ns_Betas} and the right-hand expressions in\nequation~\eqref{Betas} we can now write\n\begin{equation}\n\label{eq:caso_s_unmedio}\n N_\alpha^s(x) = x^{-2s+\alpha} \left[ \frac{ \Gamma(\alpha) \Gamma(1 - 2 s) }{ \Gamma(1+\alpha-2s) } - \frac{ \Gamma(1 - 2 s) \Gamma(-\alpha + 2 s) }{\Gamma(1-\alpha)} \right] + \sum_{k=0}^{\infty} \frac{(2s)_k}{2s-\alpha+k} \frac{x^k}{k!}\n\end{equation}\nfor all $x\in(0,1)$, $0 < s < \frac{1}{2}$ and $0 < \alpha < 2s$. We now show that the right-hand side\nof~\eqref{eq:caso_s_unmedio} can be continued analytically to the domain $D = \{ \Re s < 1 \}\n\times \{ \Re \alpha > 0 \} \subset \mathbb{C}^2$. 
To do\nthis we consider the following facts:\n\\begin{enumerate}\n\\item \\label{dos} Since $\\Gamma(z)$ is a never-vanishing function of\n $z$ whose only singularities are simple poles at the nonpositive\n integers $z=-n$ ($n\\in \\mathbb{N}\\cup \\{0\\}$), and since, as a\n consequence, $1\/\\Gamma(z)$ is an entire function of $z$ which only\n vanishes at non-positive integer values of $z$, the quotient\n $\\Gamma(\\alpha) \/ \\Gamma(1+\\alpha-2s)$ is analytic and non-zero for\n $(s,\\alpha)\\in D$.\n\\item \\label{tres} The function $G(s)$ that appears on the right hand \n side of~\\eqref{eq:Ns_euler2} ($s\\ne 1\/2$) can be continued analytically to the domain $\\Re s < 1$ with\n the value $G(1\/2)=\\pi$. Further, this function does not vanish for\n any $s$ with $0 < \\Re s < 1$.\n\\item \\label{cuatro} For fixed $s\\in \\mathbb{C}$ the quotient\n $\\sin(\\pi (\\alpha-s)) \/ \\sin(\\pi (\\alpha-2s)) = \\sin(\\pi (q+s)) \/\n \\sin(\\pi q)$ is a meromorphic function of $q$---whose singularities\n are simple poles at the integer values $q = n\\in \\mathbb{Z}$ with\n corresponding residues given by $(-1)^n \\sin(\\pi\n (q+s))\/\\pi$. Further, for $s\\not\\in\\mathbb{Z}$ the quotient vanishes\n if and only if $q=n-s$ (or equivalently, $\\alpha = s + n$) for some $n\\in\n \\mathbb{Z}$.\n\\item \\label{cinco} For each $x$ in the unit disc $\\{x\\in\\mathbb{C}:\n |x|<1\\}$ the infinite series on the right-hand side\n of~\\eqref{eq:Ns_euler} converges uniformly over compact subsets of $D\n \\setminus \\{ \\alpha=2s+n, n\\in \\mathbb{N}\\cup \\{0\\} \\}$. This is\n easily checked by using the asymptotic\n relation~\\cite[6.1.46]{AbramowitzStegun} $\\lim_{k\\to \\infty}\n k^{1-2s}(2s)_k \/ k! 
= 1\/\\Gamma(2s)$, and taking into account that\n the functions $s\\to (2s)_k$ and $s\\to 1\/\\Gamma(2s)$ are entire and,\n thus, finite-valued for each $s\\in \\mathbb{C}$ and each $k\\in\n \\mathbb{N}\\cup \\{0\\}$.\n\\item \\label{seis} For each fixed $s\\in \\mathbb{C}$ and each\n $x\\in\\mathbb{C}$ with $|x|<1$ the series on the right hand side\n of~\\eqref{eq:Ns_euler} is a meromorphic function of $q$ containing\n only simple polar singularities at $q = n\\in \\mathbb{N}\\cup \\{0\\}$,\n with corresponding residues given by $(2s)_n x^n\/n!$. Indeed,\n point~\\eqref{cinco} above tells us that the series is an analytic\n function of $q$ for $q \\not\\in \\mathbb{N}\\cup \\{0\\}$; the residue at\n the non-negative integer values of $q$ can be computed immediately\n by considering a single term of the series.\n\\item \\label{siete} The residues of the two terms under brackets on the\n right-hand side of~\\eqref{eq:Ns_euler2} are negatives of each\n other. This can be established easily by considering points\n \\eqref{cuatro} and~\\eqref{seis} as well as the identity $\\lim_{q\\to\n n} (-1)^n G(s)\\sin(\\pi(q+s))\/\\pi = 1\/\\Gamma(2s)$---which itself\n results from Euler's reflection formula and standard trigonometric\n identities.\n\\item \\label{ocho} The sum of the bracketed terms\n in~\\eqref{eq:Ns_euler2} is an analytic function of $q$ up to and\n including non-negative integer values of this variable, as\n follows from point~\\eqref{siete}. Its limit as $q\\to n$, further, is\n easily seen to equal the product of an analytic function of $q$ and\n $s$ times the monomial $x^n$.\n\\end{enumerate}\n\nExpressions establishing the $x$-analyticity properties of\n$N_\\alpha^s(x)$ can now be obtained. On one hand, by\nLemma~\\ref{lemma_analytic} the function $N_\\alpha^s(x)$ is a jointly\nanalytic function of $(s,\\alpha)$ in the domain $D$. 
In view of\npoints~\\eqref{cuatro} through ~\\eqref{ocho}, on the other hand, we see\nthat the right-hand side expression in equation~\\eqref{eq:Ns_euler} is\nalso an analytic function throughout $D$. Since, as shown above in\nthis proof, these two functions coincide in the open set $U :=\n(0,\\frac{1}{2}) \\times ( 0, 2s )\\subset D$, it follows that they must\ncoincide throughout $D$. In other words, interpreting the right-hand\nsides in equations~\\eqref{eq:Ns_euler} and~\\eqref{eq:Ns_euler2} as\ntheir analytic continuation at all removable-singularity points\n(cf. points~\\eqref{tres} and~\\eqref{siete}) these two equations hold\nthroughout $D$.\n\nWe may now establish the $x$-analyticity of the function\n$N_\\alpha^s(x)$ for given $\\alpha$ and $s$ in $D$. We first do this\nin the case $\\alpha = s+n$ with $n\\in \\mathbb{N}\\cup \\{0\\}$ and\n$s\\in(0,1)$. Under these conditions the complete first term\nin~\\eqref{eq:Ns_euler} vanishes---even at $s=1\/2$---as it follows from\npoints~\\eqref{dos} through~\\eqref{cuatro}. The function\n$N_\\alpha^s(x)$ then equals the series on the right-hand side\nof~\\eqref{eq:Ns_euler}. In view of point~\\eqref{cinco} we thus see\nthat, at least in the case $\\alpha = s+n$, $N_\\alpha^s(x)$ is analytic\nwith respect to $x$ for $|x|<1$ and, further, that the desired\nrelation~\\eqref{eq:teo1} holds.\n\nIn order to establish the $x$-analyticity of $N_\\alpha^s(x)$ in the\ncase $\\alpha = 2s+n$ (or, equivalently, $q=n$) with $n\\in\n\\mathbb{N}\\cup \\{0\\}$ and $s\\in (0,1)$, in turn, we consider the limit\n$q\\to n$ of the right-hand side in\nequation~\\eqref{eq:Ns_euler2}. 
Evaluating this limit by means of\npoints~\\eqref{cinco} and~\\eqref{ocho} results in an expression which,\nin view of point~\\eqref{cinco}, exhibits the $x$-analyticity of the\nfunction $N_\\alpha^s$ for $|x|<1$ in the case under consideration.\n\nTo complete our description of the analytic character of\n$N_\\alpha^s(x)$ for $(\\alpha,s)\\in D$ it remains to show that this\nfunction is not $x$-analytic near zero whenever\n$(\\alpha - s)$ and $(\\alpha - 2s)$ are not elements of $\\mathbb{N}\\cup\n\\{0\\}$. But this follows directly by consideration\nof~\\eqref{eq:Ns_euler}---since, per points~\\eqref{dos}, \\eqref{tres}\nand~\\eqref{cuatro}, for such values of $\\alpha$ and $s$ the\ncoefficient multiplying the non-analytic term $x^{-2s+\\alpha}$\nin~\\eqref{eq:Ns_euler} does not vanish. The proof is now complete.\n\\end{proof}\n\n\\subsection{Singularities on both edges\\label{two_edge_sing}}\nUtilizing Theorem~\\ref{teo1}, which in particular establishes that the\nimage of the function $u_\\alpha(y) = y^\\alpha$\n(equation~\\eqref{u_alpha}) under the operator $T_s$ is analytic for\n$\\alpha = s+n$, here we consider the image of the function\n\\begin{equation}\\label{udef}\n u(y) := y^s(1-y)^s y^n \n\\end{equation}\nunder the operator $T_s$ and we show that, in fact, $T_s[u]$ is a\npolynomial of degree $n$. This is a desirable result which, as we\nshall see, leads in particular to (i)~the diagonalization of a weighted\nversion of the fractional Laplacian operator, as well as (ii)~smoothness\nand even analyticity (up to a singular multiplicative weight) of\nsolutions of equation~\\eqref{eq:fraccionario_dirichlet} under suitable\nhypotheses on the right-hand side $f$.\n\n\\begin{remark}\\label{remark_idea_2s}\n Theorem~\\ref{teo1} states that the image of the aforementioned\n function $u_\\alpha$ under the operator $T_s$ is analytic not only\n for $\\alpha = s+n$ but also for $\\alpha = 2s+n$. 
But, as shown in\n Remark~\\ref{remark_2s}, the smoothness and analyticity theory\n mentioned in point~(ii) above, which applies in the case $\\alpha =\n s+n$, cannot be duplicated in the case $\\alpha = 2s+n$. Thus, except\n in Remark~\\ref{remark_2s}, the case $\\alpha = 2s+n$ will not be further\n considered in this paper.\n\\end{remark}\n\n\nIn view of Remark~\\ref{Ts_PV} and in order to obtain an explicit\nexpression for $T_s[u]$ we first express the derivative of $u$ in the\nform\n$$u'(y) = \\frac{d}{dy} \\left(y^{s} (1-y)^s y^n \\right) = y^{s-1}\n(1-y)^{s-1} \\left[ y^n (s+n-(2s+n)y) \\right] $$\nand (using~\\eqref{eq:c_s}) we thus obtain\n\\begin{equation}\n\\label{eq:Ts_useful}\n T_s[ u ] = (1-2s)C_s \\left( (s+n) L^s_n - (2s+n) L^s_{n+1} \\right),\n\\end{equation}\nwhere\n\\begin{equation}\n\\label{eq:Ks_n}\nL^s_n := P.V. \\int_{0}^{1} \\operatorname{sgn}(x-y)|x-y|^{-2s} y^{s-1} (1-y)^{s-1} y^n dy.\n\\end{equation}\nOn the other hand, in view of definitions~\\eqref{Tsdef_eq1}\nand~\\eqref{Tsdef_eq2} and Lemma~\\ref{lemma_exchangePV} it is easy to\ncheck that\n\\begin{equation}\n\\label{eq:Ss_useful}\n\\frac{\\partial}{\\partial x} S_{s}( y^{s-1} (1-y)^{s-1} y^n ) = (1-2s)C_s L^s_n .\n\\end{equation}\nIn order to characterize the image $T_s[u]$ of the function $u$\nin~\\eqref{udef} under the operator $T_s$, Lemma~\\ref{lemma_Lns} below\npresents an explicit expression for the closely related function\n$L^s_n$. In particular, the lemma shows that $L^s_n$ is a polynomial of\ndegree $n-1$, which implies that $T_s[u]$ is a polynomial of degree\n$n$.\n\\begin{lemma}\n\\label{lemma_Lns}\n $L^s_n(x)$ is a polynomial of degree $n-1$. 
More precisely,\n \\begin{equation}\n\\label{eq:lemma_Lns}\nL^s_n(x) = \\Gamma(s) \\sum_{k=0}^{n-1} \\frac{(2s)_k}{k!} \\frac{\\Gamma( n - k - s+1)} { (s+ k - n ) \\Gamma(n - k)} x^k.\n\\end{equation}\n\\end{lemma}\n\\begin{proof}\n We proceed by substituting $(1-y)^{s-1}$ in the\n integrand~\\eqref{eq:Ks_n} by its Taylor expansion around $y=0$,\n\\begin{equation}\\label{taylor_one_bnd}\n (1-y)^{s-1} = \\sum_{j=0}^{\\infty} q_j y^j, \\mbox{ with } q_j = \\frac{(1-s)_j}{j!}, \n\\end{equation}\nand subsequently exchanging the principal value integration with the\ninfinite sum (a step that is justified in\nAppendix~\\ref{appendix_pvseries}). The result is\n\\begin{equation}\n\\label{eq:series_Ks}\nL^s_n(x) = \\sum_{j=0}^{\\infty} \\left( \\mbox{P.V.} \\int_0^1 \\operatorname{sgn}(x-y)|x-y|^{-2s} q_j y^{s-1+n+j} dy \\right)\n\\end{equation}\nor, in terms of the functions $N_\\alpha^s$ defined in\nequation~\\eqref{eq:def_Ns},\n\\begin{equation}\\label{lns2}\nL^s_n(x) = \\sum_{j=0}^{\\infty} q_j N^s_{s+n+j}(x) .\n\\end{equation}\n\nIn view of~\\eqref{eq:teo1}, equation~\\eqref{lns2} can also be made to\nread\n\\begin{equation}\n \\label{eq:series_ajk}\n L^s_n(x) = \\sum_{j=0}^{\\infty} \\sum_{k=0}^{\\infty} \\frac{(1-s)_j}{j!} \\frac{(2s)_k}{k!} \\frac{1}{s-n-j+k} \\, x^k,\n\\end{equation}\nor, interchanging the order of summation in this expression (which\nis justified in Appendix~\\ref{appendix_sumorder}),\n\\begin{equation}\n\\label{eq:Kz_series}\n L^s_n(x) = \\sum_{k=0}^{\\infty} \\frac{(2s)_k}{k!} a_{k}^n x^k, \\mbox{ where } a_{k}^n = \\sum_{j=0}^{\\infty} \\frac{(1-s)_j}{j!} \\frac{1}{s-n-j+k}.\n\\end{equation}\nThe proof will be completed by evaluating explicitly the coefficients\n$a_k^n$ for all pairs of integers $k$ and $n$.\n\nIn order to evaluate $a_k^n$ we consider the Hypergeometric function\n\\begin{equation}\\label{hyper_geom}\n _2F_1(a,b;c;z)=\\sum_{j=0}^\\infty \\frac{(a)_j\n (b)_j}{(c)_j} \\frac{z^j}{j!}.\n\\end{equation}\nComparing the $a_k^n$ 
expression in~\\eqref{eq:Kz_series}\nto~\\eqref{hyper_geom} and taking into account the relation\n$$\\frac{1}{s-n-j+k} = \\frac{(n-k-s)_j}{(n-k-s+1)_j} \\, \\frac{1}{s+k-n}$$\n(which follows easily from the recursion $(z+1)_j= (z)_j (z+j)\/z$ for\nthe Pochhammer symbol defined in equation~\\eqref{def_Pochhamer}),\nwe see that $a_k^n$ can be expressed in terms of the Hypergeometric\nfunction $_2F_1$ evaluated at $z=1$:\n$$a_k^n = {}_2F_1(1-s,n-k-s;n-k-s+1;1)\/(s+k-n). $$ \nThis expression can be simplified further: in view of Gauss' formula\n$_2F_1(a,b;c;1) =\n\\frac{\\Gamma(c)\\Gamma(c-a-b)}{\\Gamma(c-a)\\Gamma(c-b)}$ (see\ne.g.~\\cite[p. 2]{Bailey}) we obtain the concise expression\n\\begin{equation}\n\\label{eq:ak_finitos}\n a_{k}^n = \\frac{\\Gamma(n-k-s+1)\\Gamma(s)} { (s+k-n) \\Gamma(n - k) } .\n\\end{equation}\nIt then clearly follows that $a_{k}^n = 0$ for $k \\ge n$---since the\nterm $\\Gamma(n - k)$ in the denominator of this expression is infinite\nfor all integers $k\\geq n$. The series in~\\eqref{eq:Kz_series} is\ntherefore a finite sum up to $k=n-1$ which, in view\nof~\\eqref{eq:ak_finitos}, coincides with the desired\nexpression~\\eqref{eq:lemma_Lns}. The proof is now complete.\n\\end{proof}\n\\begin{corollary}\\label{cor_poly_w}\n Let $w(y) = u(y)\\chi_{(0,1)}(y)$ where $u= y^s(1-y)^s y^n$\n (equation~\\eqref{udef}) and where $\\chi_{(0,1)}$ denotes the\n characteristic function of the interval $(0,1)$. Then, defining the\n $n$-th degree polynomial $p(x) = (1-2s)C_s \\left( (s+n) L^s_n -\n (2s+n) L^s_{n+1} \\right)$ with $L^s_n$ given\n by~\\eqref{eq:lemma_Lns}, for all $x\\in \\mathbb{R}$ such that $x\\ne\n 0$ and $x\\ne 1$ (cf. 
Remark~\\ref{bound_rem}) we have\n \\begin{equation}\\label{Ts_pol}\nT_s[u] (x) = p(x)\n\\end{equation}\nand, consequently,\n\\begin{equation}\\label{frac_pol}\n(-\\Delta)^s w(x) = p(x).\n\\end{equation}\n\\end{corollary}\n\\begin{proof}\n In view of equation~\\eqref{eq:Ts_useful} and Lemma~\\ref{lemma_Lns}\n we obtain~\\eqref{Ts_pol}. The relation~\\eqref{frac_pol} then follows\n from Remark~\\ref{remark_Ts}.\n\\end{proof}\n\n\n\n\nIn view of equation~\\eqref{eq:Ss_useful} and Lemma~\\ref{lemma_Lns}, the\nresults obtained for the image of $u(y) = y^{s}(1-y)^{s} y^n$ under\nthe operator $T_s$ can be easily adapted to obtain analogous\npolynomial expressions of degree exactly $n$ for the image of the\nfunction $\\tilde{u}(y) = y^{s-1}(1-y)^{s-1} y^n$ under the operator\n$S_{s}$. And, indeed, both of these results can be expressed in terms\nof isomorphisms in the space $\\mathbb{P}_n$ of polynomials of degree\nless than or equal to $n$, as indicated in the following corollary.\n\\begin{corollary}\n\\label{coro_diagonal}\nLet $s\\in (0,1)$, $m\\in \\mathbb{N}$, and consider the linear mappings\n$P:\\mathbb{P}_m \\to \\mathbb{P}_m$ and $Q:\\mathbb{P}_m \\to\n\\mathbb{P}_m$ defined by\n\\begin{equation}\\label{eq_PQ}\n \\begin{split}\n P : p & \\to T_s[y^s(1-y)^s p(y)] \\;\\;\\; \\mbox{and} \\\\\n Q : p & \\to S_{s}[y^{s-1}(1-y)^{s-1} p(y)].\n \\end{split}\n\\end{equation}\nThen the matrices $[P]$ and $[Q]$ of the linear mappings $P$ and $Q$\nin the basis $\\{ y^n:n=0,\\dots,m\\}$ are upper-triangular and their\ndiagonal entries are given by\n\\begin{equation*}\n \\begin{split}\n P_{nn} = & \\frac{\\Gamma(2s+n+1) }{n!}\\;\\;\\; \\mbox{and} \\\\\n Q_{nn} = & -\\frac{\\Gamma(2s+n-1) }{n!},\n \\end{split}\n\\end{equation*}\nrespectively. 
In particular, for $s = \\frac 12$ we have\n\\begin{equation}\\label{diagonal_entries}\n \\begin{split}\n P_{nn} = & \\;\\; 2 n \\\\\n Q_{nn} = & - \\frac{2}{n} \\;\\;\\; \\mbox{for } \\;\\;\\; n\\ne 0 \\;\\;\\; \\mbox{and }\\;\\;\\; Q_{00} = -2\\log(2). \\\\\n \\end{split} \n\\end{equation}\n\\end{corollary}\n\\begin{proof}\n The expressions for $n\\ne 0$ and for $P_{00}$ follow directly from\n equations \\eqref{eq:Ts_useful}, \\eqref{eq:Ss_useful}\n and~\\eqref{eq:lemma_Lns}. In order to obtain $Q_{00}$, in turn, we\n note from~\\eqref{eq:Ss_useful} that for $n=0$ we have\n $\\frac{\\partial}{\\partial x} S_{s}( y^{s-1} (1-y)^{s-1} y^n )=0$.\n In particular, $S_{s}( y^{s-1} (1-y)^{s-1} )$ does not depend on $x$\n and we therefore obtain\n\\begin{equation*}\n \\begin{split}\n Q_{00} = S_{s}( y^{s-1} (1-y)^{s-1} ) &= C_s \\int_0^1 ( y^{2s-1} - 1 ) y^{s-1} (1-y)^{s-1} dy \\\\\n &= C_s \\left( \\mbox{B}(3s-1,s) - \\mbox{B}(s,s) \\right).\n \\end{split}\n\\end{equation*} \nIn the limit as $s \\to 1\/2$, employing l'H\\^opital's rule together with\nwell-known values~\\cite[6.1.8, 6.3.2, 6.3.3]{AbramowitzStegun} for the\nGamma function and its derivative at $z=1\/2$ and $z=1$, we obtain\n$S_{\\frac 12}( y^{-1\/2} (1-y)^{-1\/2} ) = -2\\log(2)$.\n\\end{proof}\n\n\\subsection{Diagonal Form of the Weighted Fractional Laplacian\\label{diag}}\nIn view of the form of the mapping $P$ in equation~\\eqref{eq_PQ} and\nusing the ``weight function''\n$$\\omega^s(y) = (y-a)^s(b-y)^s,$$\nfor $\\phi \\in C^2(a,b)\\cap C^1[a,b]$ (that is, $\\phi$ is smooth up to the\nboundary but does not necessarily vanish there) we\nintroduce the weighted version\n\\begin{equation}\\label{eq:weighted_hypersingular}\nK_s(\\phi) = C_s \\frac{d}{dx} \\int_{a}^{b} |x-y|^{1-2s} \\frac{d}{dy} \\left( \\omega^s \\phi(y) \\right) dy \\quad (s\\ne 1\/2),\n\\end{equation}\nof the operator $T_s$ in equation~\\eqref{Tsdef_eq3}. 
In view of Lemma~\\ref{lemma_hypersingular}, $K_s$ can also be viewed as a weighted version of the Fractional Laplacian operator, and we therefore define\n\\begin{equation}\\label{eq:weighted_fractional}\n(-\\Delta)_\\omega^s[ \\phi] = K_s(\\phi) \\;\\; \\mbox{for} \\;\\; \\phi \\in C^2(a,b)\\cap C^1[a,b].\n\\end{equation}\n\n\\begin{remark}\\label{rem_connection_u}\n Clearly, given a solution $\\phi$ of the equation \n \\begin{equation}\\label{eqn_weighted}\n (-\\Delta)_\\omega^s[\\phi] = f\n\\end{equation}\nin the domain $\\Omega = (a,b)$, the function $u= \\omega^s \\phi$ extended\nby zero outside $(a,b)$ solves the Dirichlet problem for the\nFractional Laplacian~\\eqref{eq:fraccionario_dirichlet}\n(cf. Lemma~\\ref{lemma_hypersingular}).\n\\end{remark}\n\n\nIn order to study the spectral properties of the operator\n$(-\\Delta)^s_\\omega$, consider the weighted $L^2$ space\n\\begin{equation}\\label{weighted_L2}\n L^2_s(a,b) = \\left\\{ \\phi:(a,b) \\to {\\mathbb {R}} \\ \\colon \\int_a^b |\\phi|^2 \\omega^s < \\infty \\right\\},\n\\end{equation}\nwhich, together with the inner product\n\\begin{equation}\\label{scalarproduct_L2}\n (\\phi,\\psi)^s_{a,b} = \\int_a^b \\phi \\, \\psi \\, \\omega^s \n\\end{equation}\nand associated norm, is a Hilbert space. We can now establish the\nfollowing lemma.\n\\begin{lemma}\n\\label{teo:autoadj}\nThe operator $(-\\Delta)_\\omega^s$ maps $\\mathbb{P}_n$ into itself.\nThe restriction of $(-\\Delta)_\\omega^s$ to $\\mathbb{P}_n$ is a self-adjoint\noperator with respect to the inner product\n$(\\cdot,\\cdot)^s_{a,b}$.\n\\end{lemma}\n\\begin{proof}\n Using the notation $K_s=(-\\Delta)_\\omega^s$, we first establish the\n relation $(K_s[p],q) = (p,K_s[q])$ for $p,q \\in \\mathbb{P}_n$. 
But\n this follows directly from application of integration by parts and\n Fubini's theorem followed by an additional instance of integration\n by parts in \\eqref{eq:weighted_hypersingular}, and noting that\n the boundary terms vanish by virtue of the weight $\\omega^s$.\n\\end{proof}\nThe orthogonal polynomials with respect to the inner product under\nconsideration are the well-known Gegenbauer\npolynomials~\\cite{AbramowitzStegun}. These are defined on the interval $(-1,1)$ by\nthe recurrence\n\\begin{equation}\\label{eq:recurrencia}\n\\begin{split}\n C_{0}^{(\\alpha)}(x) & = 1, \\\\\n C_{1}^{(\\alpha)}(x) & = 2 \\alpha x, \\\\\n C_{n}^{(\\alpha)}(x) & = \\frac{1}{n}\n \\left[2x(n+\\alpha-1)C_{n-1}^{(\\alpha)}(x) -\n (n+2\\alpha-2)C_{n-2}^{(\\alpha)}(x) \\right];\n\\end{split}\\end{equation}\nfor an arbitrary interval $(a,b)$, the corresponding orthogonal\npolynomials can be easily obtained by means of a suitable affine\nchange of variables. Using this orthogonal basis we can now produce an\nexplicit diagonalization of the operator $(-\\Delta)^s_\\omega$. We first consider the\ninterval $(0,1)$; the corresponding result for a general interval\n$(a,b)$ is presented in Corollary~\\ref{diag_ab}.\n\\begin{theorem} \\label{teo:diagonalform} Given $s\\in(0,1)$ and $n \\in\n \\mathbb{N}\\cup \\{ 0 \\} $, consider the Gegenbauer polynomial\n $C^{(s+1\/2)}_n$, and let $p_n(x) = C^{(s+1\/2)}_n(2x-1)$. Then the\n weighted operator $(-\\Delta)^s_\\omega$ in the interval $(0,1)$ satisfies\n the identity \n\\begin{equation} \\label{eq_diagform}\n(-\\Delta)^s_\\omega( p_n ) =\n\\frac{\\Gamma(2s+n+1)}{n!} \\, p_n .\n\\end{equation}\n\\end{theorem}\n\\begin{proof}\n By Lemma~\\ref{teo:autoadj} the restriction of the operator\n $(-\\Delta)^s_\\omega$ to the subspace $\\mathbb{P}_m$ is self-adjoint and\n thus diagonalizable. 
We may therefore select polynomials $q_0,\n q_1,\\dots,q_m\\in\\mathbb{P}_m$ (where, for $0\\leq n\\leq m$, $q_n$ is\n a polynomial eigenfunction of $(-\\Delta)^s_\\omega$ of degree exactly\n $n$) which form an orthogonal basis of the space\n $\\mathbb{P}_m$. Since the eigenfunctions $q_n$ are orthogonal with\n respect to the inner product~\\eqref{scalarproduct_L2}, up to constant\n factors the polynomials $q_n$ must\n coincide with $p_n$ for all $n$, $0\\leq n\\leq m$. The corresponding\n eigenvalues can be extracted from the diagonal elements of the\n upper-triangular matrix $[P]$\n considered in Corollary~\\ref{coro_diagonal}. These entries coincide\n with the eigenvalue factor in~\\eqref{eq_diagform}, and the proof is\n thus complete.\n\\end{proof}\n\n\\begin{corollary}\\label{diag_ab}\n The weighted operator $(-\\Delta)^s_\\omega$ in the interval $(-1,1)$\n satisfies the identity\n\\[\n(-\\Delta)^s_\\omega(C_n^{(s+1\/2)}) = \\lambda_n^s \\, C_n^{(s+1\/2)} ,\n\\]\nwhere\n\\begin{equation}\n\\label{Eigenvalues}\n\\lambda_n^s = \n\\frac{\\Gamma(2s+n+1)}{n!}.\n\\end{equation}\nMoreover, in the interval $(a,b)$ we have\n\\begin{equation}\\label{eigenfuncts} (-\\Delta)^s_\\omega(p_n) = \n\\lambda_n^s \\, p_n ,\n\\end{equation}\nwhere $p_n(x) = C_n^{(s+1\/2)}\\left(\\frac{2(x-a)}{b-a} - 1 \\right)$. 
\n\\end{corollary}\n\\begin{proof}\n The formula is obtained by employing the change of variables\n $\\tilde x=(x-a)\/(b-a) $ and $\\tilde y =(y-a)\/(b-a)$ in equation~\\eqref{eq:weighted_hypersingular}\n to map the weighted operator in $(a,b)$ to the corresponding operator in $(0,1)$, \n and observing that $\\omega^s(y) = (b-a)^{2s} \\tilde \\omega^s (\\tilde y)$, where \n $\\tilde \\omega^s (\\tilde y) = \\tilde{y}^s (1 - \\tilde y)^s.$\n\\end{proof}\n\n\\begin{remark}\\label{eigenvalue_asymptotics}\n It is useful to note that, in view of the formula $\\lim_{n\\to\\infty}\n n^{\\beta-\\alpha}\\Gamma(n+\\alpha)\/\\Gamma(n+\\beta) = 1$ (see\n e.g.~\\cite[6.1.46]{AbramowitzStegun}) we have the asymptotic\n relation $\\lambda_n^s = O\\left( n^{2s} \\right)$ for the\n eigenvalues~\\eqref{Eigenvalues}. This fact will be exploited in the\n following sections in order to obtain sharp Sobolev regularity\n results as well as regularity results in spaces of analytic\n functions.\n\\end{remark}\n\nAs indicated in the following corollary, the background developed in\nthe present section can additionally be used to obtain the\ndiagonal form of the operator $S_s$ for all $s\\in (0,1)$. 
This\ncorollary generalizes a corresponding existing result\nfor the case $s=1\/2$---for which, as indicated in\nRemark~\\ref{remark_openarcs}, the operator $S_{s}$ coincides with the\nsingle-layer potential for the solution of the two-dimensional Laplace\nequation outside a straight arc or ``crack''.\n\\begin{corollary} The weighted operator $\\phi \\to\n S_s[\\omega^{s-1} \\phi]$ can be diagonalized in terms of the Gegenbauer\n polynomials $C_n^{(s-1\/2)}$:\n\\begin{equation*}\n S_{s} \\left[\\omega^{s-1} C_n^{(s-1\/2)} \\right] = \\mu_n^s C^{(s-1\/2)}_n ,\n\\end{equation*}\nwhere in this case the eigenvalues are given by\n$$ \\mu_n^s =\n- \\frac{\\Gamma(2s+n-1) } {n!} .$$\n\\end{corollary}\n\\begin{proof}\nThe proof for the interval $[0,1]$ is analogous to that of Theorem \\ref{teo:diagonalform}. In this case, the eigenvalues are extracted from the diagonal entries of the upper-triangular matrix $[Q]$ considered in Corollary~\\ref{coro_diagonal}. A linear change of variables then allows us to obtain the desired formula for an arbitrary interval.\n\\end{proof}\n\n\\begin{corollary}\n In the particular case $s=1\/2$ on the interval $(-1,1)$, the\n previous results amount, on one hand, to the known\n result~\\cite[eq. 9.27]{Handscomb} (cf. also~\\cite{YanSloan}),\n \n \n\\begin{equation*}\n\\int_{-1}^1 \\log|x-y| T_n(y) (1-y^2)^{-1\/2} dy = \n\\left\\lbrace\n \\begin{array}{rl}\n -\\frac{\\pi}{n} T_n & \\mbox{ for } n \\ne 0 \\\\\n -\\pi\\log(2) & \\mbox{ for } n = 0 \\\\\n \\end{array}\n \\right. 
\n\\end{equation*}\n(where $T_n$ denotes the Chebyshev polynomial of the first kind),\nand, on the other hand, to the relation\n\\begin{equation*}\n\\frac{\\partial}{\\partial x} \\int_{-1}^1 \\log|x-y| \\frac{\\partial}{\\partial y} \\left( U_n(y) (1-y^2)^{1\/2} \\right) dy = (n+1) \\pi U_n \n\\end{equation*}\n(where $U_n$ denotes the Chebyshev polynomial of the second kind).\n\\end{corollary}\n\n\n\n\n\n\n\\section{Regularity Theory}\n\\label{regularity}\nThis section studies the regularity of solutions of the fractional\nLaplacian equation~\\eqref{eq:fraccionario_dirichlet} under various\nsmoothness assumptions on the right-hand side $f$---including\ntreatments in both Sobolev and analytic function spaces, and for\nmulti-interval domains $\\Omega$ as in\nDefinition~\\ref{union_intervals_def}. In particular,\nSection~\\ref{Sobolev} introduces certain weighted Sobolev spaces\n$H^r_{s}(\\Omega)$ (which are defined by means of expansions in Gegenbauer\npolynomials together with an associated norm). The space $A_\\rho$ of\nanalytic functions in a certain ``Bernstein Ellipse''\n$\\mathcal{B}_\\rho$ is then considered in\nSection~\\ref{single_interval_analytic}. The main result in\nSection~\\ref{Sobolev} (resp. Section~\\ref{single_interval_analytic})\nestablishes that for right-hand sides $f$ in the space $H^r_{s}(\\Omega)$\nwith $r\\geq 0$ (resp. the space $A_\\rho(\\Omega)$ with $\\rho >0$) the\nsolution $u$ of equation~\\eqref{eq:fraccionario_dirichlet} can be\nexpressed in the form $u(x) = \\omega^s(x) \\phi(x)$, where $\\phi$ belongs\nto $H^{r+2s}_s(\\Omega)$ (resp. to $A_\\rho(\\Omega)$). Sections~\\ref{Sobolev}\nand~\\ref{single_interval_analytic} consider the single-interval case;\ngeneralizations of all results to the multi-interval context are\npresented in Section~\\ref{regularity_multi_int}. 
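As a parenthetical numerical aside (illustrative only, and not part of the paper's development; the evaluation point and tolerances are arbitrary choices), the first of the $s=\frac{1}{2}$ identities in the corollary closing the previous section can be spot-checked with SciPy. Under the substitution $y=\cos\theta$ the integral becomes $\int_0^\pi \log|x-\cos\theta|\cos(n\theta)\, d\theta$, which for $n\ge 1$ should equal $-\frac{\pi}{n}T_n(x)$:

```python
import numpy as np
from scipy.integrate import quad

def log_kernel_moment(n, x):
    # I_n(x) = integral over (0, pi) of log|x - cos(theta)| cos(n*theta) dtheta,
    # i.e. the substitution y = cos(theta) applied to the Chebyshev-weighted
    # log-kernel integral; the integrable log singularity at theta = arccos(x)
    # is flagged to the adaptive quadrature via `points`.
    theta0 = float(np.arccos(x))
    val, _ = quad(lambda t: np.log(abs(x - np.cos(t))) * np.cos(n * t),
                  0.0, np.pi, points=[theta0], limit=200)
    return val

x = 0.3
# n >= 1: the integral equals -(pi/n) T_n(x); T_1(x) = x, T_2(x) = 2x^2 - 1.
assert abs(log_kernel_moment(1, x) - (-np.pi * x)) < 1e-6
assert abs(log_kernel_moment(2, x) - (-(np.pi / 2) * (2 * x**2 - 1))) < 1e-6
```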
The theoretical\nbackground developed in the present Section~\\ref{regularity} is\nexploited in Section~\\ref{HONM} to develop and analyze a class of\neffective algorithms for the numerical solution of\nequation~\\eqref{eq:fraccionario_dirichlet} in multi-interval domains\n$\\Omega$.\n\n\\subsection{Sobolev Regularity, single interval case}\\label{Sobolev}\n\nIn this section we define certain weighted Sobolev spaces, which\nprovide a sharp regularity result for the weighted Fractional\nLaplacian $(-\\Delta)^s_\\omega$ (Theorem \\ref{teo_extended}) as well as a\nnatural framework for the analysis of the high order numerical methods\nproposed in Section~\\ref{HONM}. It is noted that these spaces\ncoincide with the non-uniformly weighted Sobolev spaces introduced\nin~\\cite{BabuskaGuo}; Theorem~\\ref{gegen_embedding_Hr} below provides\nan embedding of these spaces into spaces of continuously\ndifferentiable functions. For notational convenience, in the present\ndiscussion leading to the definition~\\ref{def:sobolev} of the Sobolev\nspace $H^r_s(\\Omega)$, we restrict our attention to the domain $\\Omega =\n(-1,1)$; the corresponding definition for general multi-interval\ndomains then follows easily.\n\n\nIn order to introduce the weighted Sobolev spaces we note that the set\nof Gegenbauer polynomials $C^{(s + 1\/2)}_n$ constitutes an orthogonal\nbasis of $L^2_s(-1,1)$ (cf. ~\\eqref{weighted_L2}). 
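As a brief numerical aside (illustrative only, not part of the development; the choices of $s$ and of the polynomial degrees are arbitrary), this orthogonality, together with the normalization constants $h_j^{(s+1\/2)}$ quoted next, can be spot-checked with SciPy:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gegenbauer, gamma

s = 0.4            # illustrative choice of s > -1/2
alpha = s + 0.5    # Gegenbauer parameter matching the weight (1-x^2)^s

def weighted_ip(m, n):
    # s-weighted inner product: integral over (-1,1) of
    # C_m^{(alpha)}(x) C_n^{(alpha)}(x) (1-x^2)^s dx
    Cm, Cn = gegenbauer(m, alpha), gegenbauer(n, alpha)
    val, _ = quad(lambda x: Cm(x) * Cn(x) * (1.0 - x * x)**s, -1.0, 1.0)
    return val

# distinct degrees (same parity, so the check is not a mere parity artifact)
assert abs(weighted_ip(2, 4)) < 1e-7
assert abs(weighted_ip(1, 3)) < 1e-7
# equal degrees: the value matches the squared normalization constant
j = 4
h_sq = (2.0**(-2 * s) * np.pi / gamma(s + 0.5)**2
        * gamma(j + 2 * s + 1) / (gamma(j + 1.0) * (j + s + 0.5)))
assert abs(weighted_ip(j, j) - h_sq) < 1e-6
```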
The $L^2_s$ norm of \na Gegenbauer polynomial (see~\\cite[eq. 22.2.3]{AbramowitzStegun}) is given by \n\\begin{equation} \n\\label{eq:norm_gegen}\nh_j^{(s+1\/2)} = \\left\\| C^{(s + 1\/2)}_j \\right\\|_{L^2_s(-1,1)} = \\sqrt{\\frac{2^{-2s}\\pi}{\\Gamma^2(s + 1\/2)} \\frac{\\Gamma(j+2s+1)}{\\Gamma(j+1)(j+s+1\/2)}}.\n\\end{equation}\n\\begin{definition}\\label{normal}\n Throughout this paper $\\tilde{C}^{(s + 1\/2)}_j$ denotes the\n normalized polynomial $C^{(s + 1\/2)}_j \/ h_j^{(s+1\/2)}$.\n\\end{definition}\nGiven a function $v\\in L^2_s(-1,1)$, we have the following expansion\n\\begin{equation}\\label{exp_gegen}\nv(x) = \\sum_{j=0}^\\infty v_{j,s} \\tilde{C}^{(s + 1\/2)}_j (x), \n\\end{equation}\nwhich converges in $L^2_s(-1,1)$, and where\n\\begin{equation}\\label{gegen_coef}\n v_{j,s} = \\int_{-1}^1 v(x) \\tilde{C}^{(s + 1\/2)}_j (x) (1-x^2)^s dx.\n\\end{equation}\n\n\nIn view of the expression\n \\begin{equation}\\label{gegen_poly_der}\n \\frac{d}{dx} C^{(\\alpha)}_j (x) = 2\\alpha C_{j-1}^{(\\alpha + 1)}(x), \\ j \\geq 1,\n \\end{equation}\n for the derivative of a Gegenbauer polynomial (see e.g.~\\cite[eq. 
4.7.14]{szego}), we have\n\\begin{equation}\n \\frac{d}{dx} \\tilde{C}^{(s+1\/2)}_j (x) = (2s+1) \\frac{h^{(s+3\/2)}_{j-1}}{h^{(s+1\/2)}_j} \\tilde{C}^{(s+3\/2)}_{j-1}.\n\\end{equation}\nThus, using term-wise differentiation in~\\eqref{exp_gegen} we may\nconjecture that, for sufficiently smooth functions $v$, we have\n\\begin{equation}\\label{k-der}\n v^{(k)}(x) =\\sum_{j=k}^\\infty v_{j-k,s+k}^{(k)} \\tilde {C}_{j-k}^{(s+k+1\/2)}(x)\n\\end{equation}\nwhere $ v^{(k)}(x)$ denotes the $k$-th derivative of the function\n$v(x)$ and where, calling\n\\begin{equation}\\label{A_nk_def}\n A_j^{k} = \\prod_{r=0}^{k-1} \\frac{h^{(s+3\/2+r)}_{j-1-r}}{h^{(s+1\/2+r)}_{j-r}} (2(s+r)+1) = 2^k \\frac{h^{(s+1\/2+k)}_{j-k}}{h^{(s+1\/2)}_{j}} \\frac{\\Gamma(s+1\/2+k)}{\\Gamma(s+1\/2)},\n\\end{equation}\nthe coefficients in~\\eqref{k-der} are given by\n\\begin{equation}\\label{k-der-coeffs}\n v_{j-k,s+k}^{(k)} = A_j^{k}\\, v_{j,s}.\n\\end{equation}\n\n\n\nLemma~\\ref{lemma_gegen_parts} below provides, in particular, a\nrigorous proof of~\\eqref{k-der} under minimal hypotheses. Further,\nthe integration by parts formula established in that lemma together\nwith the asymptotic estimates on the factors $B_j^{k}$ provided in\nLemma~\\ref{A_jk_lemma}, allow us to relate the smoothness of a\nfunction $v$ and the decay of its Gegenbauer coefficients; see\nCorollary~\\ref{gegen_decay_coro}.\n\\begin{lemma}[Integration by parts]\\label{lemma_gegen_parts}\n Let $k\\in \\mathbb{N}$ and let $v \\in C^{k-2}[-1,1]$ be such that for a\n certain decomposition $[-1,1] = \\bigcup_{i=1}^n\n [\\alpha_i,\\alpha_{i+1}]$ ($-1=\\alpha_1<\\alpha_2<\\dots<\\alpha_{n+1}=1$) and for certain functions $\\tilde v_i \\in\n C^{k}[\\alpha_i,\\alpha_{i+1}]$ we have $v(x)=\\tilde v_i(x)$ for all\n $x\\in (\\alpha_i,\\alpha_{i+1})$ and $1\\leq i\\leq n$. 
Then for $j\\geq\n k$ the $s$-weighted Gegenbauer coefficients $v_{j,s}$ defined in\n equation~\\eqref{gegen_coef} satisfy\n\\begin{equation}\\label{gegen_coef_parts_k}\n\\begin{split}\nv_{j,s} = B_j^{k} \\int_{-1}^1 & \\tilde v^{(k)}(x) \\tilde{C}_{j-k}^{(s+k+1\/2)}(x) (1-x^2)^{s+k} dx \\\\\n &-B_j^{k} \\sum_{i=1}^n \\left[ \\tilde v^{(k-1)}(x) \\tilde{C}_{j-k}^{(s+k+1\/2)}(x) (1-x^2)^{s+k} \\right]_{\\alpha_i}^{\\alpha_{i+1}},\n\\end{split}\n\\end{equation}\nwhere\n\\begin{equation}\\label{B_jk_def}\n B_j^k = \\frac{h_{j-k}^{(s+k+1\/2)}}{h_j^{(s+1\/2)}} \\prod_{r=0}^{k-1} \\frac{(2(s+r)+1)}{(j-r)(2s+r+j+1)}.\n\\end{equation}\nWith reference to equation~\\eqref{A_nk_def}, further, we have\n$A_j^k=\\frac{1}{B_j^k}$. In particular, under the additional\nassumption that $v \\in C^{k-1}[-1,1]$ the\nrelation~\\eqref{k-der-coeffs} holds.\n\\end{lemma}\n\\begin{proof}\n Equation~\\eqref{gegen_coef_parts_k} results from iterated\n applications of integration by parts together with the\n relation~\\cite[eq. 22.13.2]{AbramowitzStegun}\n$$ \\frac{\\ell(2t+\\ell+1)}{2t+1} \\int_{-1}^{x} (1-y^2)^{t} C_\\ell^{(t+1\/2)}(y) dy = \n- (1-x^2)^{t+1} C_{\\ell-1}^{(t+3\/2)}(x) $$\nand subsequent\nnormalization according to Definition~\\ref{normal}. 
The validity of\nthe relation $A_j^k=\\frac{1}{B_j^k}$ can be checked easily.\n\\end{proof}\n\n\\begin{lemma}\\label{A_jk_lemma} There exist positive constants $C_1$ and $C_2$ such that the\n factors $B_j^k$ in equation~\\eqref{B_jk_def} satisfy\n\\begin{equation*}\\label{A_jk_inequality}\n C_1 j^{-k} < |B_j^k| < C_2 j^{-k}.\n\\end{equation*}\n\\end{lemma} \n\\begin{proof}\n In view of the relation $\\lim_{j\\to\\infty}\n j^{b-a}\\Gamma(j+a)\/\\Gamma(j+b) = 1$\n (see~\\cite[6.1.46]{AbramowitzStegun}) it follows that\n $h_j^{(s+1\/2)}$ in equation~\\eqref{eq:norm_gegen} satisfies\n \\begin{equation}\\label{gegen_norm_est}\n \\lim_{j\\to\\infty} j^{1\/2-s} h_j^{(s+1\/2)} \\ne 0\n \\end{equation}\n and, thus, letting\n\\begin{equation}\\label{gegen_ratio}\n q_j^k=\\frac{h_{j-k}^{(s+k+1\/2)}}{h_j^{(s+1\/2)}},\n\\end{equation}\nwe obtain \n\\begin{equation}\\label{q_bound}\n \\lim_{j\\to\\infty} q_j^k\/j^{k} \\ne 0.\n\\end{equation}\nThe lemma now follows by estimating the asymptotics of the product\nterm on the right-hand side of~\\eqref{B_jk_def} as $j\\to\\infty$.\n\\end{proof}\n\\begin{corollary}\\label{gegen_decay_coro}\n Let $k\\in \\mathbb{N}$ and let $v$ satisfy the hypotheses of\n Lemma~\\ref{lemma_gegen_parts}. Then the Gegenbauer coefficients\n $v_{j,s}$ in equation~\\eqref{gegen_coef} are of order\n $O(j^{-k})$ as $j\\to\\infty$:\n$$|v_{j,s}| < C j^{-k}$$\nfor a constant $C$ that depends on $v$ and $k$.\n\\end{corollary}\n\\begin{proof}\n The proof of the corollary proceeds by noting that the factor\n $B_j^{k}$ in equation~\\eqref{gegen_coef_parts_k} is a quantity of\n order $j^{-k}$ (Lemma~\\ref{A_jk_lemma}), and obtaining bounds for\n the remaining factors in that equation. These bounds can be\n produced by (i)~applying the Cauchy-Schwarz inequality in the space\n $L^2_{s+k}(-1,1)$ to the $(s+k)$-weighted scalar\n product~\\eqref{scalarproduct_L2} that occurs in\n equation~\\eqref{gegen_coef_parts_k}; and\n (ii)~using~\\cite[eq. 
7.33.6]{szego} to estimate the boundary terms\n in equation~\\eqref{gegen_coef_parts_k}. The derivation of the bound\n per point~(i) is straightforward. From~\\cite[eq. 7.33.6]{szego}, on\n the other hand, it follows directly that for each $\\lambda >0$ there\n is a constant $C$ such that\n$$ |\\sin(\\theta)^{2\\lambda-1} C^{\\lambda}_j(\\cos(\\theta))| \\le C j^{\\lambda-1}. $$\nLetting $x=\\cos(\\theta)$, $\\lambda =s+k+1\/2$ and dividing by the\nnormalization constant $h_{j}^{(s+k+1\/2)}$ we then obtain\n$$ \\left |\\tilde{C}^{s+k+1\/2}_j(x)(1-x^2)^{s+k}\\right | < Cj^{s+k-1\/2} \/ h_{j}^{(s+k+1\/2)}.$$ \nIn view of~\\eqref{gegen_norm_est}, the right hand side in this equation\nis bounded for all $j\\geq 0$. The proof now follows from\nLemma~\\ref{A_jk_lemma}.\n\\end{proof}\n\n\n\n\nWe now define a class of Sobolev spaces $H^r_s$ \nthat, as shown in Theorem~\\ref{teo_extended}, completely characterizes\nthe Sobolev regularity of the weighted fractional Laplacian operator\n$(-\\Delta)^s_\\omega$. \n\\begin{remark}\\label{no-s}\n In what follows, and when clear from the context, we drop the\n subindex $s$ in the notation for Gegenbauer coefficients such as\n $v_{j,s}$ in~\\eqref{gegen_coef}, and we write e.g. $v_j=v_{j,s}$,\n $w_j=w_{j,s}$, $f_j=f_{j,s}$, etc.\n\\end{remark}\n\n\\begin{definition}\\label{def:sobolev} Let $r,s\\in\\mathbb{R}$, \n $r\\geq 0$, $s> -1\/2$ and, for $v \\in L^2_s(-1,1)$ call $v_j$ the\n corresponding Gegenbauer coefficient~\\eqref{gegen_coef} (see\n Remark~\\ref{no-s}). Then the complex vector space $H^r_s(-1,1) =\n \\left\\{ v \\in L^2_s(-1,1) \\colon \\sum_{j=0}^\\infty (1+j^{2})^r\n |v_j|^2 < \\infty \\right\\}$ will be called the $s$-weighted Sobolev\n space of order $r$.\n\\end{definition}\n\\begin{lemma}\\label{lemma_hilbert_space}\n Let $r,s\\in\\mathbb{R}$, $r\\geq 0$, $s> -1\/2$. 
Then the\n space $H^r_s(-1,1)$ endowed with the inner product $\\langle v, w\n \\rangle_s^r = \\sum_{j=0}^\\infty v_j \\overline{w_j} (1 + j^{2})^r$ and\n associated norm \n\\begin{equation}\\label{H_r_norm}\n \\| v \\|_{H_s^r} = \\left( \\sum_{j=0}^\\infty |v_j|^2 (1 + j^{2}\n )^r \\right)^{1\/2}\n\\end{equation}\n is a Hilbert space.\n\\end{lemma}\n\\begin{proof}\n The proof is completely analogous to that of \\cite[Theorem\n 8.2]{Kress}.\n\\end{proof}\n\\begin{remark}\\label{gegen_aprox_Hs} \n By definition it is immediately checked that for every function\n $v\\in H^r_s(-1,1)$ the Gegenbauer expansion~\\eqref{exp_gegen} with\n expansion coefficients~\\eqref{gegen_coef} is convergent in\n $H_s^r(-1,1)$.\n\\end{remark}\n\\begin{remark}\\label{sobolev_remark}\n In view of the Parseval identity $\\|v\\|_{L^2_s(-1,1)}^2 =\n \\sum_{n=0}^\\infty |v_n|^2$ it follows that the Hilbert spaces\n $H^0_s(-1,1)$ and $L^2_s(-1,1)$ coincide. Further, we have the dense\n compact embedding $H^t_s(-1,1)\\subset H^r_s(-1,1)$ whenever $r<t$.\n Moreover, for non-integer $r>0$,\n $H^r_s(-1,1)$ constitutes an interpolation space between $H^{\\lfloor\n r \\rfloor}_s(-1,1)$ and $H^{\\lceil r \\rceil}_s(-1,1)$ in the sense\n defined by \\cite[Chapter 2]{BerghLofstrom}.\n\\end{remark}\n\nClosely related ``Jacobi-weighted Sobolev spaces'' $\\mathcal{H}^k_s$\n(Definition~\\ref{Guo_def}) were introduced\npreviously~\\cite{BabuskaGuo} in connection with Jacobi approximation\nproblems in the $p$-version of the finite element method. As shown in\nLemma~\\ref{sobolev_equivalence} below, in fact, the spaces\n$\\mathcal{H}^k_s$ coincide with the spaces $H^k_s$ defined above, and\nthe respective norms are equivalent.\n\\begin{definition}[Babu\\v{s}ka and Guo~\\cite{BabuskaGuo}]\\label{Guo_def}\n Let $k\\in\\mathbb{N}\\cup \\{0\\}$ and $r>0$. 
The $k$-th order\n non-uniformly weighted Sobolev space $\\mathcal{H}^k_s(a,b)$ is\n defined as the completion of the set $C^\\infty(a,b)$ under the norm\n\\begin{equation*}\\label{gou_norm}\n \\|v \\|_{\\mathcal{H}^k_s} = \\left( \\sum_{j=0}^k \\int_{a}^b |v^{(j)}(x)|^2 \\omega^{s+j} dx \\right)^{1\/2} = \\left( \\sum_{j=0}^k \\| v^{(j)} \\|_{L^2_{s+j}}^2 \\right)^{1\/2}.\n\\end{equation*} \nThe $r$-th order space $\\mathcal{H}^r_s(a,b)$, in turn, is defined by\ninterpolation of the spaces $\\mathcal{H}^k_s(a,b)$\n($k\\in\\mathbb{N}\\cup \\{0\\}$) by the $K$-method (see~\\cite[Section\n3.1]{BerghLofstrom}).\n\\end{definition}\n\n\\begin{lemma}\\label{sobolev_equivalence}\n Let $r>0$. The spaces $H^r_s(a,b)$ and $\\mathcal{H}^r_s(a,b)$\n coincide, and their corresponding norms $\\| \\cdot \\|_{H^r_s}$ and $\\|\n \\cdot \\|_{\\mathcal{H}^r_s}$ are equivalent.\n\\end{lemma}\n\\begin{proof}\n A proof of this lemma for all $r>0$ can be found in \\cite[Theorem\n 2.1 and Remark 2.3]{BabuskaGuo}. In what follows we present an\n alternative proof for non-negative integer values of $r$:\n $r=k\\in\\mathbb{N}\\cup \\{0\\}$. In this case it suffices to show that\n the norms $\\| \\cdot \\|_{H^k_s}$ and $\\| \\cdot \\|_{\\mathcal{H}^k_s}$\n are equivalent on the dense subset $C^\\infty[a,b]$ of both\n $H^k_s(a,b)$ (Remark~\\ref{gegen_aprox_Hs}) and\n $\\mathcal{H}^k_s(a,b)$. But, for $v\\in C^\\infty[a,b]$,\n using~\\eqref{k-der}, Parseval's identity in $L^2_{s+k}$ and\n Lemma~\\ref{lemma_gegen_parts} we see that for every integer $k\\geq\n 0$ we have $\\| v^{(k)} \\|_{L^2_{s+k}}^2 = \\sum_{j=k}^\\infty\n |v_{j-k,s+k}^{(k)}|^2 = \\sum_{j=k}^\\infty |v_{j,s}|^2 \/\n |B_j^k|^2$, where $v_{j-k,s+k}^{(k)}$ denotes the $(j-k)$-th\n $(s+k)$-weighted Gegenbauer coefficient of the derivative $v^{(k)}$.\n From Lemma~\\ref{A_jk_lemma} we then obtain\n$$ D_1 \\sum_{j=k}^\\infty |v_{j,s}|^2 j^{2k} \\le \\| v^{(k)} \\|_{L^2_{s+k}}^2 \\le D_2 \\sum_{j=k}^\\infty |v_{j,s}|^2 j^{2k} $$\nfor certain constants $D_1$ and $D_2$. 
In\nview of the inequalities\n $$ (1+j^{2k}) \\le (1+j^2)^k \\le (2j^2)^k \\le 2^k(1+j^{2k}) $$ \n the claimed norm equivalence for $r=k\\in\\mathbb{N}\\cup \\{0\\}$ and\n $v\\in C^\\infty[a,b]$ follows. \n\\end{proof}\n\nSharp regularity results for the Fractional Laplacian in the Sobolev space\n$H^r_s(a,b)$ can now be obtained easily.\n\\begin{theorem}\\label{teo_extended}\n Let $r\\geq 0$. Then the weighted fractional Laplacian\n operator~\\eqref{eq:weighted_fractional} can be extended uniquely to\n a continuous linear map $(-\\Delta)^s_\\omega$ from $H_s^{r+2s}(a,b)$ into\n $H_s^{r}(a,b)$. The extended operator is bijective and bicontinuous.\n\\end{theorem}\n\\begin{proof} Without loss of generality, we assume $(a,b)=(-1,1)$.\n Let $\\phi\\in H^{r+2s}_s(-1,1)$, and let $\\phi^n = \\sum_{j=0}^n\n \\phi_j \\tilde{C}_j^{(s+1\/2)} $ where $\\phi_j$ denotes the Gegenbauer\n coefficient of $\\phi$ as given by equation~\\eqref{gegen_coef} with\n $v=\\phi$. According to Corollary~\\ref{diag_ab} we have\n $(-\\Delta)^s_\\omega \\phi^n = \\sum_{j=0}^n \\lambda_j^s \\phi_j\n \\tilde{C}_j^{(s+1\/2)}$. In view of Remarks~\\ref{gegen_aprox_Hs}\n and~\\ref{eigenvalue_asymptotics} it is clear that $(-\\Delta)^s_\\omega\n \\phi^n$ is a Cauchy sequence (and thus a convergent sequence) in\n $H_s^{r}(-1,1)$. We may thus define \n$$(-\\Delta)^s_\\omega \\phi = \\lim_{n\\to\\infty}\n(-\\Delta)^s_\\omega \\phi^n = \\sum_{j=0}^\\infty\\lambda_j^s \\phi_j\n\\tilde{C}_j^{(s+1\/2)}\\in H_s^{r}(-1,1).$$ \nThe bijectivity and bicontinuity of the\nextended mapping follows easily, in view of Remark~\\ref{eigenvalue_asymptotics}, \nas does the uniqueness of continuous extension. 
The proof is complete.\n\\end{proof}\n\\begin{corollary} \\label{coro_u_sobolev} The solution $u$\n of~\\eqref{eq:fraccionario_dirichlet} with right-hand side $f\\in\n H^r_s(a,b)$ ($r\\geq 0$) can be expressed in the form $u=\\omega^s\\phi$ for\n some $\\phi \\in H^{r+2s}_s(a,b)$.\n\\end{corollary}\n\\begin{proof}\n Follows from Theorem~\\ref{teo_extended} and\n Remark~\\ref{rem_connection_u}.\n\\end{proof}\n\nThe classical smoothness of solutions of\nequation~\\eqref{eq:fraccionario_dirichlet} for sufficiently smooth\nright-hand sides results from the following version of the ``Sobolev\nembedding'' theorem.\n\\begin{theorem}[Sobolev's Lemma for weighted\n spaces]\\label{gegen_embedding_Hr}\n Let $s \\ge 0$, $k\\in \\mathbb{N}\\cup \\{0\\}$ and $r > 2k + s +\n 1$. Then we have a continuous embedding $H^r_s(a,b)\\subset C^k[a,b]$\n of $H^r_s(a,b)$ into the Banach space $C^k[a,b]$ of $k$-continuously\n differentiable functions in $[a,b]$ with the usual norm $\\| v \\|_k$\n (given by the sum of the $L^\\infty$ norms of the function and the\n $k$-th derivative): $\\| v \\|_k = \\| v \\|_\\infty + \\| v^{(k)}\n \\|_\\infty$.\n\\end{theorem}\n\\begin{proof} Without loss of generality we restrict attention to\n $(a,b)=(-1,1)$. Let $0\\le \\ell \\le k$ and let $v\\in H^r_s(-1,1)$ be\n given. Using the expansion~\\eqref{exp_gegen} and in view of the\n relation~\\eqref{gegen_poly_der} for the derivative of a Gegenbauer polynomial, we\n consider the partial sums\n \\begin{equation}\\label{exp_gegen_der}\n v^{(\\ell)}_n(x) = 2^\\ell \\prod_{i=1}^\\ell (s + i - 1\/2)\n \\sum_{j=\\ell}^n \\frac{v_j}{h_j^{(s+1\/2)}} C^{(s+\\ell+1\/2)}_{j-\\ell}(x)\n \\end{equation}\n that result as the partial sums corresponding to~\\eqref{exp_gegen} up to $j=n$ are differentiated $\\ell$ times.\n But we have the estimate\n \\begin{equation}\\label{bound_gegen}\n \\|C^{(s+1\/2)}_{n}\\|_\\infty \\sim O(n^{2s}).\n \\end{equation}\n which is an immediate consequence of~\\cite[Theorem 7.33.1]{szego}. 
Thus, taking into account~\\eqref{gegen_norm_est}, we obtain \n $$ |v_n^{(\\ell)}(x)| \\le C(\\ell) \\sum_{j=0}^{n-\\ell} \\frac{| v_{j+\\ell}|} {h_{j+\\ell}^{(s+1\/2)}} |C^{(s+\\ell+1\/2)}_{j}(x)| \n \\le C(\\ell) \\sum_{j=0}^{n-\\ell} (1+j^2)^{(s+2\\ell)\/2 + 1\/4} | v_{j+\\ell}| ,$$\nfor some constant $C(\\ell)$. Multiplying and dividing by $(1+j^2)^{r\/2}$ and\napplying the Cauchy-Schwarz inequality in the space of square\nsummable sequences it follows that\n\\begin{equation}\\label{g_der}\n| v_n^{(\\ell)}(x) | \\le C(\\ell) \\left(\\sum_{j=0}^{n-\\ell} \\frac{1}{(1+j^2)^{r - (s+2\\ell+1\/2) }}\\right)^{1\/2} \\left(\\sum_{j=0}^{n-\\ell} (1+j^2)^r |v_{j+\\ell}|^2\\right)^{1\/2}. \n\\end{equation}\nWe thus see that, provided $r - (s + 2\\ell+1\/2)>1\/2$ (or equivalently,\n$r>2\\ell+s+1$), $v_n^{(\\ell)}$ converges uniformly to\n$\\frac{\\partial^\\ell}{\\partial x^\\ell} v(x)$ (cf.~\\cite[Th. 7.17]{baby_rudin}) for all \n$\\ell$ with\n$0\\leq \\ell \\leq k$. It follows that $v\\in C^k[-1,1]$, and, in view\nof~\\eqref{g_der}, it is easily checked that there exists a constant\n$M(\\ell)$ such that $\\| \\frac{\\partial^{\\ell}}{\\partial x^{\\ell}} v(x)\n\\|_\\infty \\le M(\\ell) \\| v \\|_{H_s^r}$ for all $0\\leq \\ell \\leq k$. The\nproof is complete.\n\\end{proof}\n\n\\begin{remark}\nIn order to check that the previous result is sharp we consider an example in the case $k=0$: \nthe function $v(x)=|\\log(x)|^{\\beta}$ with $0<\\beta<1\/2$ is not bounded, but a straightforward computation \nshows that, for $s \\in \\mathbb{N}$, $v\\in \\mathcal{H}_s^{s+1}(0,1)$, or equivalently (see Lemma \\ref{sobolev_equivalence}),\n$v\\in H_s^{s+1}(0,1)$.\n\\end{remark}\n\n\n\\begin{corollary}\\label{cinfty}\n The weighted fractional Laplacian\n operator~\\eqref{eq:weighted_fractional} maps the\n space $C^\\infty[a,b]$ bijectively onto itself.\n\\end{corollary}\n\\begin{proof}\n Follows directly from Theorem~\\ref{teo_extended} together \n with 
Lemmas~\\ref{lemma_gegen_parts} and~\\ref{A_jk_lemma}, and Theorem~\\ref{gegen_embedding_Hr}.\n\\end{proof}\n\\subsection{Analytic Regularity, single interval case}\n\\label{single_interval_analytic}\nLet $f$ denote an analytic function defined in the closed interval\n$[-1,1]$. Our analytic regularity results for the solution of\nequation~\\eqref{eq:fraccionario_dirichlet} rely on consideration of\nanalytic extensions of the function $f$ to relevant neighborhoods of\nthe interval $[-1,1]$ in the complex plane. We thus consider the {\\em\n Bernstein ellipse} $\\mathcal{E}_\\rho$, that is, the ellipse with\nfoci $\\pm 1$ whose minor and major semiaxial lengths add up to\n$\\rho > 1$. We also consider the closed set $\\mathcal{B}_\\rho$ in\nthe complex plane which is bounded by $\\mathcal{E}_\\rho$ (and which\nincludes $\\mathcal{E}_\\rho$, of course). Clearly, any analytic\nfunction $f$ over the interval $[-1,1]$ can be extended analytically\nto $\\mathcal{B}_\\rho$ for some $\\rho > 1$. We thus consider the\nfollowing set of analytic functions.\n\\begin{definition}\\label{A_rho_def}\n For each $\\rho >1$ let $A_\\rho$ denote the normed space of analytic\n functions $A_\\rho = \\{ f \\colon f \\text{ is analytic on }\n \\mathcal{B}_\\rho\\} $ endowed with the $L^\\infty$ norm\n $\\|\\cdot\\|_{L^\\infty\\left (\\mathcal{B}_{\\rho}\\right)}$.\n\\end{definition}\n\\begin{theorem}\n\\label{teo_analyticity}\nFor each $f \\in A_\\rho$ we have $((-\\Delta)^s_\\omega)^{-1} f \\in\nA_\\rho$. 
Further, the mapping $((-\\Delta)^s_\\omega)^{-1}: A_\\rho \\to\nA_\\rho$ is continuous.\n\\end{theorem}\n\\begin{proof}\n Let $f \\in A_\\rho$ and let us consider the Gegenbauer expansions\n\\begin{equation}\\label{expansions}\n f=\\sum_{j=0}^\\infty f_j \\tilde{C}_j^{(s+1\/2)} \\quad\\mbox{and}\\quad ((-\\Delta)^s_\\omega)^{-1} f=\\sum_{j=0}^\\infty (\\lambda_j^s)^{-1} f_j \\tilde{C}_j^{(s+1\/2)}.\n\\end{equation}\nIn order to show that $((-\\Delta)^s_\\omega)^{-1} f\\in A_\\rho$ it suffices\nto show that the right-hand series in this equation converges\nuniformly within $\\mathcal{B}_{\\rho_1}$ for some $\\rho_1 > \\rho$. To\ndo this we utilize bounds on both the Gegenbauer coefficients and the\nGegenbauer polynomials themselves.\n\nIn order to obtain suitable coefficient bounds, we note that, since $f\n\\in A_\\rho$, there indeed exists $\\rho_2 > \\rho$ such that $f \\in\nA_{\\rho_2}$. It follows~\\cite{ZhaoWangXie} that the Gegenbauer\ncoefficients decay exponentially. More precisely, for a certain\nconstant $C$ we have the estimate\n\\begin{equation} \\label{eq:cota_an} |f_j| \\le C\n \\max_{z\\in\\mathcal{B}_{\\rho_2}} |f(z)| \\rho_2^{-j}\n j^{-s}\\quad\\mbox{for some}\\quad \\rho_2>\\rho,\n\\end{equation} \nwhich follows directly from corresponding bounds~\\cite[eqns 2.28, 2.8,\n1.1, 2.27]{ZhaoWangXie} on Jacobi coefficients. 
(Here we have used\nthe relation\n$$ C_j^{(s+1\/2)} = r^s_j P_j^{(s,s)} \\quad \\mbox{with} \\quad r^s_j = \\frac{(2s+1)_j}{(s+1)_j} = O(j^{s}) $$\nthat expresses Gegenbauer polynomials $C_j^{(s+1\/2)}$ in terms of\nJacobi polynomials $P_j^{(s,s)}$.)\n\n\nIn order to adequately account for the growth of the Gegenbauer\npolynomials, on the other hand, we consider the estimate\n\\begin{equation}\n\\label{eq:crecimiento_gegenbauer}\n\\frac{\\| C_j^{(s+1\/2)} \\|_{L^\\infty(\\mathcal{B}_{\\rho_1})}}{h_j^{(s+1\/2)}} \\le \nD \\rho_1^j\\quad\\mbox{for all}\\quad \\rho_1>1,\n\\end{equation}\nwhich follows directly from~\\cite[Theorem 3.2]{XieWangZhao} and\nequation~\\eqref{gegen_norm_est}, where $D=D(\\rho_1)$ is a constant\nwhich depends on $\\rho_1$.\n\nLet now $\\rho_1\\in [\\rho,\\rho_2)$. In view of~\\eqref{eq:cota_an}\nand~\\eqref{eq:crecimiento_gegenbauer} we see that the $j$-th term of\nthe right-hand series in equation~\\eqref{expansions} satisfies\n\\begin{equation}\\label{series_est}\n\\left | \\frac{(\\lambda_j^s)^{-1} f_j C_j^{(s+1\/2)}(x)}{h_j^{(s+1\/2)}}\\right |\\leq C D(\\rho_1)\n\\left( \\frac{\\rho_1}{\\rho_2} \\right)^j j^{-s} (\\lambda_j^s)^{-1} \\max_{z\\in\\mathcal{B}_{\\rho_1}} |f(z)|\n\\end{equation} \nthroughout $\\mathcal{B}_{\\rho_1}$. Taking $\\rho_1\\in (\\rho,\\rho_2)$\nwe conclude that the series converges uniformly in\n$\\mathcal{B}_{\\rho_1}$, and that the limit is therefore analytic\nthroughout $\\mathcal{B}_{\\rho}$, as desired. Finally, taking\n$\\rho_1=\\rho$ in~\\eqref{series_est} we obtain the estimates\n\\begin{equation*}\n \\| ((-\\Delta)^s_\\omega)^{-1} f \\|_{L^\\infty(\\mathcal{B}_{\\rho})} \\le C D(\\rho) \\sum_{j=0}^\\infty \\left( \\frac{\\rho}{\\rho_2} \\right)^j j^{-s} (\\lambda_j^s)^{-1} \\max_{z\\in\\mathcal{E}_{\\rho}} |f(z)| \n \\le E \\|f \\|_{L^\\infty(\\mathcal{B}_{\\rho})}\n\\end{equation*} \nwhich establish the stated continuity condition. 
The proof is thus\ncomplete.\n\\end{proof}\n\n\\begin{corollary}\n Let $f \\in A_\\rho$. Then the solution $u$ of\n \\eqref{eq:fraccionario_dirichlet} can be expressed in the form\n $u=\\omega^s\\phi$ for a certain $\\phi \\in A_\\rho$.\n\\end{corollary}\n\\begin{proof}\n Follows from Theorem~\\ref{teo_analyticity} and\n Remark~\\ref{rem_connection_u}.\n\\end{proof}\n\n\\begin{remark}\\label{remark_2s}\n We can now see that, as indicated in Remark~\\ref{remark_idea_2s},\n the smoothness and analyticity theory presented throughout\n Section~\\ref{regularity} cannot be duplicated with weights of\n exponent $2s$, in spite of the ``local'' regularity result of\n Theorem~\\ref{teo1}---that establishes analyticity of\n $T[y^{\\alpha}](x)$ around $x=0$ for both cases, $\\alpha = s+n$ and\n $\\alpha = 2s+n$. Indeed, we can easily verify that\n $T(y^{2s}(1-y)^{2s} y^n)$ cannot be extended analytically to an open\n set containing $[0,1]$. If it could, Theorem~\\ref{teo_analyticity}\n would imply that $y^{s}(1-y)^{s}$ is an analytic function around\n $y=0$ and $y=1$.\n \n \n \n\\end{remark}\n\n\n\n\\subsection{Sobolev and Analytic Regularity on Multi-interval\n Domains}\\label{regularity_multi_int}\nThis section concerns multi-interval domains $\\Omega$ of the\nform~\\eqref{union_intervals}. 
Using the characteristic functions\n$\\chi_{(a_i,b_i)}$ of the individual component intervals, letting\n$\\omega^s(x)=\\sum_{i=1}^M(x-a_i)^s(b_i-x)^s \\chi_{(a_i,b_i)}(x)$ and\nrelying on Corollary~\\ref{coro_lemma_hypersingular}, we define the\nmulti-interval weighted fractional Laplacian operator on $\\Omega$ by\n$(-\\Delta)^s_\\omega \\phi = (-\\Delta)^s[\\omega^s \\phi]$, where $\\phi: \\mathbb{R} \\to\n\\mathbb{R}$.\nIn view of the various\nresults in previous sections it is natural to use the decomposition\n$(-\\Delta)^s_\\omega = \\mathcal{K}_s + \\mathcal{R}_s$, where $\\mathcal{K}_s[\\phi] =\n\\sum_{i=1}^M\\chi_{(a_i,b_i)} K_s\\chi_{(a_i,b_i)}\\phi$ is a\nblock-diagonal operator and where $\\mathcal{R}_s$ is the associated off-diagonal\nremainder. Using integration by parts, it is easy to check that\n\n\\begin{equation}\\label{other_intervals}\n \\mathcal{R}_s\\phi(x) = C_1(s) \\int_{\\Omega\\setminus (a_j,b_j)} |x-y|^{-1-2s} \\omega^s(y) \\phi(y) dy\\quad\\mbox{for}\\quad x\\in (a_j,b_j).\n\\end{equation}\n\n\\begin{theorem}\n Let $\\Omega$ be given as in\n Definition~\\ref{union_intervals_def}. Then, given $f \\in L^2_s(\\Omega)$,\n there exists a unique $\\phi \\in L^2_s(\\Omega)$ such that\n $ (-\\Delta)^s_\\omega \\phi = f$. Moreover, for $f \\in H^r_s(\\Omega)$\n (resp. $f \\in A_\\rho(\\Omega)$) we have $\\phi \\in H^{r+2s}_s(\\Omega)$\n (resp. $\\phi \\in A_\\nu(\\Omega)$ for some $\\nu >1$).\n\\end{theorem} \n\\begin{proof}\n Since $(-\\Delta)^s_\\omega=(\\mathcal{K}_s+\\mathcal{R}_s)$,\n left-multiplying the equation $ (-\\Delta)^s_\\omega \\phi = f$ by\n $\\mathcal{K}_s^{-1}$ yields\n\\begin{equation}\n\\label{eq:Preconditioner}\n\\left( I + \\mathcal{K}_s^{-1}\\mathcal{R}_s \\right) \\phi = \\mathcal{K}_s^{-1} f .\n\\end{equation}\nThe operator $\\mathcal{K}_s^{-1}$ is clearly compact in $L^2_s(\\Omega)$\nsince the eigenvalues $\\lambda_j^s$ tend to infinity as $j\\to\\infty$\n(cf. \\eqref{Eigenvalues}). 
On the other hand, the kernel of the\noperator $\\mathcal{R}_s$ is analytic, and therefore $\\mathcal{R}_s$ is\ncontinuous (and, indeed, also compact) in $L^2_s(\\Omega)$. It follows\nthat the operator $\\mathcal{K}_s^{-1}\\mathcal{R}_s$ is compact in\n$L^2_s(\\Omega)$, and, thus, the Fredholm alternative tells us that\nequation~\\eqref{eq:Preconditioner} is uniquely solvable in $L^2_s(\\Omega)$\nprovided the left-hand side operator is injective.\n\nTo establish the injectivity of this operator, assume $\\phi \\in L^2_s$\nsolves the homogeneous problem. Then\n$\\mathcal{K}_s(\\phi) = - \\mathcal{R}_s(\\phi)$, and since\n$\\mathcal{R}_s(\\phi)$ is an analytic function of $x$, in view of the\nmapping properties established in Theorem~\\ref{teo_analyticity} for\nthe operator $K_s$ (which coincides with the\nsingle-interval version of the operator $(-\\Delta)^s_\\omega$), we\nconclude that the solution $\\phi$ to this problem is again analytic. Thus,\na solution to~\\eqref{eq:fraccionario_dirichlet} for a null right-hand\nside $f$ can be expressed in the form $u = \\omega^s\\phi$ for a certain\nfunction $\\phi$ which is analytic throughout $\\Omega$. But this\nimplies that the function $u = \\omega^s\\phi$ belongs to the classical\nSobolev space $H^s(\\Omega)$. (To check this fact we consider that\n(a)~$\\omega^s\\in H^s(\\Omega)$, since, by definition, the Fourier transform of\n$\\omega^s$ coincides (up to a constant factor) with the confluent\nhypergeometric function $M(s+1,2s+2,\\xi)$ whose\nasymptotics~\\cite[eq. 
13.5.1]{AbramowitzStegun} show that $\\omega^s$ in\nfact belongs to the classical Sobolev space $H^{s+1\/2-\\varepsilon}(\\Omega)$ for\nall $\\varepsilon>0$; and (b)~the product $fg$ of functions $f$, $g$ in\n$H^s(\\Omega)\\cap L^\\infty(\\Omega)$ is necessarily an element of $H^s(\\Omega)$---as\nthe Aronszajn-Gagliardo-Slobodeckij semi-norm~\\cite{Hitchhikers} of\n$fg$ can easily be shown to be finite for such functions $f$ and $g$,\nwhich implies $fg \\in H^s(\\Omega)$~\\cite[Prop 3.4]{Hitchhikers}). Having\nestablished that $u = \\omega^s\\phi\\in H^s(\\Omega)$, the injectivity of the\noperator in~\\eqref{eq:Preconditioner} in $L^2_s(\\Omega)$ follows from the\nuniqueness of $H^s$ solutions, which is established for example\nin~\\cite{AcostaBorthagaray}. As indicated above, this injectivity\nresult suffices to establish the claimed existence of an $L^2_s(\\Omega)$\nsolution for each $L^2_s(\\Omega)$ right-hand side.\n\nAssuming $f$ is analytic (resp. belongs to $H^r_s(\\Omega)$), finally, the\nregularity claims now follow directly from the single-interval results\nof Sections~\\ref{Sobolev} and~\\ref{single_interval_analytic}, since a\nsolution $\\phi$ of $(-\\Delta)^s_\\omega \\phi = f$ satisfies\n\\begin{equation}\\label{multi_single}\n\\mathcal{K}_s(\\phi) = f - \\mathcal{R}_s(\\phi).\n\\end{equation}\nThe proof is now complete.\n\\end{proof}\n\n\n\n\\section{High Order Numerical Methods\\label{HONM}}\nThis section presents rapidly-convergent numerical methods for single-\nand multi-interval fractional Laplacian problems. 
In particular, this\nsection establishes that the proposed methods, which are based on the\ntheoretical framework introduced above in this paper, converge\n(i)~exponentially fast for analytic right-hand sides $f$,\n(ii)~superalgebraically fast for smooth $f$, and (iii)~with\nconvergence order $r$ for $f \\in H_s^r(\\Omega)$.\n\n\\subsection{Single-Interval Method: Gegenbauer Expansions\\label{num_single}}\nIn view of Corollary~\\ref{diag_ab}, a spectrally accurate algorithm\nfor solution of the single-interval equation~\\eqref{eqn_weighted} (and\nthus equation~\\eqref{eq:fraccionario_dirichlet} for $\\Omega=(a,b)$)\ncan be obtained from use of Gauss-Jacobi quadratures. Assuming\n$(a,b)=(-1,1)$ for notational simplicity, the method proceeds as\nfollows: 1)~The continuous scalar product~\\eqref{gegen_coef} with\n$v=f$ is approximated with spectral accuracy (and, in fact, exactly\nwhenever $f$ is a polynomial of degree less or equal to $n+1$) by\nmeans of the discrete inner product\n\\begin{equation}\n\\label{eq:disc_inner}\nf_j^{(n)}:= \\frac{1}{h_j^{(s+1\/2)}} \\sum_{i=0}^n f(x_i) C^{(s+1\/2)}_j(x_i) w_i, \n\\end{equation}\nwhere $(x_i)_{i=0}^n$ and $(w_i)_{i=0}^n$ denote the nodes and weights\nof the Gauss-Jacobi quadrature rule of order $2n+1$. (As is well\nknown~\\cite{HaleTowsend}, these quadrature nodes and weights can be\ncomputed with full accuracy at a cost of $O(n)$ operations.) 2)~For\neach $i$, the necessary values $C^{(s+1\/2)}_j(x_i)$ can be obtained\nfor all $j$ via the three-term recurrence\nrelation~\\eqref{eq:recurrencia}, at an overall cost of $O(n^2)$\noperations. 
The algorithm is then completed by 3)~Explicit evaluation\nof the spectrally accurate approximation\n\\begin{equation}\n\\label{eq:geg_rec}\n\\phi_n := K_{s,n}^{-1} f = \\sum_{j=0}^n \\frac{f_j^{(n)}}{\\lambda_j^s h_j^{(s+1\/2)}} C^{(s+1\/2)}_j\n\\end{equation}\nthat results by using the expansion~\\eqref{exp_gegen} with $v=f$\nfollowed by an application of equation~\\eqref{eigenfuncts} and\nsubsequent truncation of the resulting series up to $j=n$. The\nalgorithm requires accurate evaluation of certain ratios of Gamma\nfunctions of large arguments; see equations~\\eqref{Eigenvalues}\nand~\\eqref{eq:norm_gegen}, for which we use the Stirling's series as\nin~\\cite[Sec 3.3.1]{HaleTowsend}. The overall cost of the algorithm is\n$O(n^2)$ operations. The accuracy of this algorithm, in turn, is\nstudied in section~\\ref{error_estimates}.\n\n\\subsection{Multiple Intervals: An iterative Nystr\\\"om Method}\\label{num_multi}\nThis section pre\\-sents a spectrally accurate iterative Nystr\\\"om method\nfor the numerical solution of\nequation~\\eqref{eq:fraccionario_dirichlet} with $\\Omega$ as\nin~\\eqref{union_intervals}. This solver, which is based on use of the\nequivalent second-kind Fredholm equation~\\eqref{eq:Preconditioner},\nrequires (a)~Numerical approximation of $\\mathcal{K}_s^{-1}f$,\n(b)~Numerical evaluation of the ``forward-map''\n$(I+\\mathcal{K}_s^{-1}\\mathcal{R}_s)[\\tilde \\phi]$ for each given\nfunction $\\tilde \\phi$, and (c)~Use of the iterative linear-algebra\nsolver GMRES~\\cite{GMRES}. Clearly, the algorithm in\nSection~\\ref{num_single} provides a numerical method for the\nevaluation of each block in the block-diagonal inverse operator\n$\\mathcal{K}_s^{-1}$. 
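A minimal Python sketch of this per-block (single-interval) solve may help fix ideas. It is a structural sketch only: the eigenvalues $\lambda_j^s$ of equation~\eqref{Eigenvalues} are not reproduced in this excerpt, so they are supplied by the caller as an array `lam`; the Gegenbauer values are obtained from `scipy` rather than from the three-term recurrence, and the squared norms $h_j$ are computed with the same Gauss-Jacobi rule (exactly, since $2j \le 2n+1$).

```python
import numpy as np
from scipy.special import roots_jacobi, eval_gegenbauer

def single_interval_solve(f, n, s, lam):
    """Sketch of the single-interval spectral solver on (-1, 1).

    f   : right-hand side, callable
    n   : truncation order
    s   : fractional exponent, s > 0
    lam : array with the n+1 eigenvalues lambda_j^s (assumed supplied;
          their formula, eq. (Eigenvalues), is not reproduced here)

    Returns the values of the approximate solution at the quadrature nodes.
    """
    # (n+1)-point Gauss-Jacobi rule for the weight (1-x)^s (1+x)^s
    x, w = roots_jacobi(n + 1, s, s)
    # values C_j^{(s+1/2)}(x_i) for j = 0, ..., n
    C = np.array([eval_gegenbauer(j, s + 0.5, x) for j in range(n + 1)])
    # squared norms h_j, computed exactly by the same quadrature rule
    h = (C**2) @ w
    # discrete coefficients (cf. the discrete inner product above),
    # here taken relative to the unnormalized polynomials C_j,
    # followed by the diagonal solve
    fj = (C * f(x)) @ w / h
    return (fj / lam) @ C

# exactness check: with lam = 1 the diagonal solve drops out and
# polynomial data is reproduced at the nodes
s, n = 0.4, 8
f = lambda x: x**3 + x
phi = single_interval_solve(f, n, s, np.ones(n + 1))
x, _ = roots_jacobi(n + 1, s, s)
assert np.allclose(phi, f(x))
```

Note that the sketch expands in the unnormalized polynomials $C_j^{(s+1\/2)}$, so the placement of the factors $h_j$ differs from the normalized-polynomial formulas of the text; the two conventions are equivalent up to these factors.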
Thus, in order to evaluate the aforementioned\nforward map it now suffices to evaluate numerically the off-diagonal\noperator $\\mathcal{R}_s$ in equation~\\eqref{other_intervals}.\n\nAn algorithm for evaluation of $\\mathcal{R}_s[\\tilde \\phi](x)$ for $x\\in\n(a_j,b_j)$ can be constructed on the basis of the Gauss-Jacobi\nquadrature rule for integration over the interval $(a_\\ell,b_\\ell)$\nwith $\\ell\\ne j$, in a manner entirely analogous to that described in\nSection~\\ref{num_single}. Thus, using Gauss-Jacobi nodes and weights\n$y_i^{(\\ell)}$ and $w_i^{(\\ell)}$ ($i = 1,\\dots, n_\\ell$) for each\ninterval $(a_\\ell,b_\\ell)$ with $\\ell\\ne j$ we may construct a\ndiscrete operator $R_n$ that can be used to approximate $\\mathcal{R}_s[\\tilde\n\\phi](x)$ for each given function $\\tilde\\phi$ and for all observation\npoints $x$ in the set of Gauss-Jacobi nodes used for integration in\nthe interval $(a_j,b_j)$ (or, in other words, for $x = y_k^{(j)}$ with\n$k=1,\\dots,n_j$). Indeed, consideration of the numerical\napproximation\n$$ R[\\tilde\\phi](y_k^{(j)}) \\approx \\sum_{\\ell \\ne j} \\sum_{i=0}^{n_\\ell} |y_k^{(j)}-y_i^{(\\ell)}|^{-2s-1} \\tilde\\phi(y_i^{(\\ell)}) w_i^{(\\ell)} $$\nsuggests the following definition. 
Using a suitable ordering, we define\na vector $Y$ that contains all unknowns corresponding to\n$\\tilde\\phi(y_i^{(\\ell)})$, and, similarly, a vector $F$ that contains all\nof the values $f(y_i^{(\\ell)})$; the discrete equation to be solved then\ntakes the form\n\\begin{equation*}\n\\label{eq:disc_precond}\n(I + K_{s,n}^{-1} R_{s,n}) Y = K_{s,n}^{-1} [F]\n\\end{equation*}\nwhere $R_{s,n}$ and $K_{s,n}^{-1}$ are the discrete operators that\nincorporate the aforementioned ordering and quadrature rules.\n\nWith the forward map $(I + K_{s,n}^{-1} R_{s,n})$ in hand, the multi-interval\nalgorithm is completed by means of an application of a suitable\niterative linear algebra solver; our implementations are based on the\nKrylov-subspace iterative solver GMRES~\\cite{GMRES}. Thus, the overall\ncost of the algorithm is $O(m\\cdot n^2)$ operations, where $m$ is the\nnumber of required iterations. (Note that the use of an iterative\nsolver allows us to avoid the actual construction and inversion of the\nmatrices associated with the discrete operators in\nequation~\\eqref{eq:disc_precond}, which would lead to an overall cost of\nthe order of $O(n^3)$ operations.) As the equation to be solved\noriginates from an equation of the second kind, it is not unreasonable to\nanticipate that, as we have observed without exception (and as\nillustrated in Section~\\ref{num_res}), a small number of GMRES\niterations suffices to meet a given error tolerance.\n\n\n\\subsection{Error estimates}\\label{error_estimates}\nThe convergence rates of the algorithms proposed in\nSections~\\ref{num_single} and~\\ref{num_multi} are studied in what\nfollows. 
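Before turning to the error analysis, the complete multi-interval iteration just described can be summarized in a short structural sketch. The per-block inverses $K_{s,n}^{-1}$ are hypothetical callables standing in for the single-interval solver, and the constant $C_1(s)$ together with the weight $\omega^s$ of equation~\eqref{other_intervals} is assumed absorbed into the quadrature weights; none of these simplifications affect the structure of the forward map.

```python
import numpy as np
from scipy.special import roots_jacobi
from scipy.sparse.linalg import LinearOperator, gmres

def multi_interval_solve(intervals, f, s, n, K_inv_blocks):
    """Sketch of the iterative Nystrom method: solve (I + K^{-1} R) Y = K^{-1} F.

    K_inv_blocks: one callable per interval applying the (assumed available)
    discrete single-interval inverse to that interval's nodal values.
    """
    m = n + 1
    nodes, wts = [], []
    for a, b in intervals:
        x, w = roots_jacobi(m, s, s)             # weight (1-x)^s (1+x)^s
        nodes.append(a + (b - a) * (x + 1) / 2)  # map nodes to (a, b)
        wts.append(w * ((b - a) / 2) ** (2 * s + 1))
    N = m * len(intervals)

    def K_inv(Y):  # block-diagonal inverse, one block per interval
        return np.concatenate(
            [K_inv_blocks[j](Y[j*m:(j+1)*m]) for j in range(len(intervals))])

    def R(Y):  # off-diagonal smooth kernel |x - y|^{-1-2s}
        out = np.zeros(N)
        for j in range(len(intervals)):
            for l in range(len(intervals)):
                if l == j:
                    continue
                ker = np.abs(nodes[j][:, None] - nodes[l][None, :]) ** (-1 - 2 * s)
                out[j*m:(j+1)*m] += ker @ (wts[l] * Y[l*m:(l+1)*m])
        return out

    A = LinearOperator((N, N), matvec=lambda Y: Y + K_inv(R(Y)))
    F = np.concatenate([f(x) for x in nodes])
    Y, info = gmres(A, K_inv(F))
    return Y, info

# structural demo on two intervals, with identity stand-ins for the
# block inverses (a well-conditioned second-kind system)
intervals = [(-1.0, -0.3), (0.3, 1.0)]
Y, info = multi_interval_solve(intervals, lambda x: np.ones_like(x),
                               s=0.5, n=6, K_inv_blocks=[lambda v: v] * 2)
```

As in the text, only matrix-vector products are formed, so each GMRES iteration costs $O(n^2)$ operations and no matrix is ever inverted explicitly.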
In particular, as shown in Theorems~\\ref{teo_sobolev_error}\nand~\\ref{teo_analytic_error}, the algorithm's errors are exponentially\nsmall for analytic $f$, they decay superalgebraically fast (faster\nthan any power of meshsize) for infinitely smooth right-hand sides,\nand with a fixed algebraic order of accuracy $O(n^{-r})$ whenever $f$\nbelongs to the Sobolev space $H^r_s(\\Omega)$ (cf. Section~\\ref{Sobolev}).\nFor conciseness, fully-detailed proofs are presented in the\nsingle-interval case only. A sketch of the proofs for the\nmulti-interval cases is presented in\nCorollary~\\ref{multi_interval_error}.\n\n\\begin{theorem}\\label{teo_sobolev_error}\n Let $r > 0$, $00$). Indeed, it suffices to show that, for a\n given bounded sequence $\\{\\phi_{n}\\}\\subset H^r_s(\\Omega)$, the\n sequence $R_{s,n}[\\phi_{n}]$ admits a convergent subsequence in\n $H^r_s(\\Omega)$. But, selecting $00$ and any $0\\le s\n\\le 1$) are displayed in Fig.~\\ref{fig_nonsmooth}. The errors decay\nwith the order predicted by Theorem~\\ref{teo_sobolev_error} in the\n$H^{2s}_s(-1,1)$ norm, and with a slightly better order than predicted\nby that theorem for the $L_s^2(-1,1)$ error norm, although the\nobserved orders tend to the predicted order as $s\\to 0$ (cf.\nRemark~\\ref{remark_negative_norm}).\n\n\n \n\\begin{center}\n\\begin{figure}[htbp]\n \\begin{minipage}[t]{0.59\\linewidth}\n \\vspace{0pt}\n \\centering\n \\includegraphics[scale=0.27]{.\/sol_multi.png}\n \\end{minipage}\n \\begin{minipage}[t]{0.4\\linewidth}\n \\vspace{0pt}\n \\centering\n \\begin{tabular}{ c | c }\n $N$ \t& rel. err. \\\\\n \\hline\n \\hline\n\t8 & 9.3134e-05 \\\\ \n\t12 & 1.6865e-06 \\\\ \n\t16 & 3.1795e-08 \\\\ \n\t20 & 6.1375e-10 \\\\ \n\t24 & 1.1857e-11 \\\\ \n\t28 & 1.4699e-13 \\\\\n \\hline\n \\end{tabular} \n\\end{minipage}\n\\caption{Multiple (upper curves) vs. independent single-intervals\n solutions (lower curves) with $f=1$. 
A total of five GMRES\n iterations sufficed to achieve the errors shown on the right table\n for each one of the discretizations considered.}\n\\label{fig:multi}\n\\end{figure}\n\\end{center}\n \nA solution for a multi-interval (two-interval) test problem with right-hand\nside $f=1$ is displayed in Figure~\\ref{fig:multi}. A total of\nfive GMRES iterations sufficed to reach the errors displayed for each\none of the discretizations considered on the right-hand table in\nFigure~\\ref{fig:multi}. The computational times required for each one\nof the discretizations listed on the right-hand table are of the order\nof a few hundredths of a second.\n\n\\section{Acknowledgments}\n\nThe authors thankfully acknowledge support from various agencies. GA's\nwork was partially supported by CONICET, Argentina, under grant PIP\n2014--2016 11220130100184CO. JPB's and MM's efforts were made possible\nby graduate fellowships from CONICET, Argentina. OB's efforts were\nsupported by the US NSF and AFOSR through contracts DMS-1411876 and\nFA9550-15-1-0043, and by the NSSEFF Vannevar Bush Fellowship under\ncontract number N00014-16-1-2808.\n\n\n\\section{Introduction}\n\n2M1207b\\ is a planet-mass companion orbiting the young brown dwarf 2M1207A at a\nprojected separation of $\\sim 46$ AU \\cite[]{Chauvin2004, Chauvin2005}. As a\nmember of the $\\sim 8$ Myr old TW Hydra association \\cite[]{Gizis2002}, 2M1207b\\\nis one of the youngest companions of its mass ($\\sim$ 2 -- 5 $M_{\\rm Jup}$) ever\nimaged. Regardless of how the system formed, 2M1207b\\ provides a unique look\ninto the atmospheric properties of giant planets and brown dwarfs in the very\nearly stages of their evolution.\n\nSoon after 2M1207b\\ was discovered, it was recognized that its low luminosity and\nred near-IR colors were seemingly inconsistent. 
With m$_K$ = 16.93 $\\pm 0.11$\n\\cite[]{Chauvin2004}, a $K$-band bolometric correction of 3.25 $\\pm 0.14$\n\\cite[]{Mamajek2005}, and a distance of 52.4 $\\pm 1.1$ pc\n\\cite[]{Ducourant2008}, 2M1207b\\ has a luminosity of $\\log L\/L_\\odot = -4.73 \\pm\n0.12$. With this luminosity and an age of 8$^{+4}_{-3}$ Myr\n\\cite[]{Chauvin2004}, brown dwarf cooling tracks \\cite[]{Baraffe2003}\npredict\\footnote{The ranges in predicted values come from the luminosity and\nage uncertainties.} $T_{\\rm eff}$\\ = 936 -- 1090 K, $\\log(g)$\\ = 3.5 -- 3.8 (cgs units),\nradius = 1.34 -- 1.43 $R_{\\rm Jup}$, and a mass of 2.3 -- 4.8 $M_{\\rm Jup}$. The red near-IR\ncolors (e.g., $H-K = 1.16 \\pm 0.24$; Chauvin et al. 2004\\nocite{Chauvin2004})\nand near-IR spectra of 2M1207b, on the other hand, are indicative of a mid to\nlate L spectral type and effective temperature of $\\sim 1600$K\n\\cite[]{Mohanty2007, Patience2010}. Such a combination of high effective\ntemperature and low luminosity would require a radius of $\\sim 0.7 $$R_{\\rm Jup}$,\nabout a factor of two below evolution model predictions. Under the assumption\nthat the higher $T_{\\rm eff}$\\ is correct, two possible explanations for the apparent\nunderluminosity of 2M1207b\\ have been put forward. An edge-on disk, albeit with\nunusual characteristics, could provide the necessary 2.5 mags of gray\nextinction to accommodate a larger and more reasonable radius\n\\cite[]{Mohanty2007}. A more provocative suggestion requires 2M1207b\\ to be the\nhot afterglow of a recent protoplanetary collision allowing it to be\nsimultaneously hot and small \\cite[]{Mamajek2007}. \n\nBoth of these early explanations for the observed properties of 2M1207b\\ hinge\nupon the effective temperature being as high as 1600 K. 
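The luminosity and radius arithmetic above can be checked in a few lines (a sketch; the solar bolometric magnitude and the physical constants are standard assumed reference values, not taken from the text):

```python
import math

# Values quoted in the text
m_K = 16.93       # apparent K magnitude (Chauvin et al. 2004)
BC_K = 3.25       # K-band bolometric correction (Mamajek 2005)
d_pc = 52.4       # distance in parsec (Ducourant et al. 2008)

# Assumed standard constants (not from the text)
M_BOL_SUN = 4.74  # solar bolometric magnitude
SIGMA = 5.670e-8  # Stefan-Boltzmann constant (W m^-2 K^-4)
L_SUN = 3.828e26  # solar luminosity (W)
R_JUP = 6.9911e7  # Jupiter radius (m)

# Bolometric luminosity from the distance modulus and bolometric correction
M_K = m_K - 5.0 * math.log10(d_pc / 10.0)
logL = (M_BOL_SUN - (M_K + BC_K)) / 2.5   # close to the quoted -4.73

# Radius implied by L = 4 pi R^2 sigma T_eff^4 at the two candidate temperatures
def radius_rjup(logL, Teff):
    L = 10.0**logL * L_SUN
    return math.sqrt(L / (4.0 * math.pi * SIGMA * Teff**4)) / R_JUP

r_1000 = radius_rjup(logL, 1000.0)  # ~1.4 R_Jup, consistent with cooling tracks
r_1600 = radius_rjup(logL, 1600.0)  # well under 1 R_Jup, the "too small" case
```

The factor-of-two radius tension at 1600 K falls out directly of the $T^{-2}$ scaling at fixed luminosity.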
Earlier estimates of\n$T_{\rm eff}$\\ were likely led astray by the lack of color-$T_{\rm eff}$\\ relationships\nproperly calibrated for young, planet-mass objects and the lack of model\natmospheres spanning a broad enough range of cloud and chemical parameters to\nencompass objects like 2M1207b. The discovery of HR8799b \cite[]{Marois2008}, a\nsecond planet-mass object with similar near-IR colors and luminosity to 2M1207b,\nmotivates this new model atmosphere study of 2M1207b, as it is highly unlikely\nthat either the edge-on disk or recent-collision explanation applies to both\nobjects.\n\nIn this Letter, an atmosphere-only explanation for the observed properties of\n2M1207b\\ is presented. A combination of clouds of modest thickness and\nnon-equilibrium CO\/CH$_4$ ratio is shown to simultaneously reproduce both the\nobserved photometric and spectroscopic properties of 2M1207b, with bulk\nproperties consistent with evolution model predictions. Such an explanation\nwas touched upon in several recent papers \cite[]{Currie2011, Skemer2011,\nBarman2011}, but here the atmosphere of 2M1207b\\ is explored in more detail. \n\n\section{Clouds}\n\nThe properties of brown dwarfs and giant planets are known to be influenced by\natmospheric cloud opacity and it is well established that clouds play a\ncentral role in the transition from spectral types L to T \cite[]{Allard2001}.\nEarly work on brown dwarf atmospheres approached cloud modeling\nphenomenologically, parameterizing the problem with an emphasis on vertical\nmixing \cite[]{Ackerman2001}. Cloudy and cloud-free limits provide useful\ninsight into the expected spectroscopic and photometric trends but often fail,\nunsurprisingly, to match individual brown dwarfs, especially in the L-to-T\ntransition region \cite[]{Burrows2006}.\n\nFigure \ref{fig1} compares field brown dwarfs to 2M1207b\\ and the HR8799 planets\nin a color-magnitude diagram (CMD). 
2M1207b\\ is located between the cloudy and\ncloud-free limits and, consequently, one should not expect either of these\nlimiting cases to be appropriate when modeling its photometric and\nspectroscopic properties. Figure \\ref{fig1} shows the path brown dwarfs or\ngiant planets of various effective temperatures would follow in a near-IR CMD\nif the vertical thickness of clouds is allowed to continuously increase from\nzero (cloud-free) to well above the photosphere (pure equilibrium clouds). The\nradius for these tracks, 1.4 $R_{\\rm Jup}$, was specifically chosen to match the radius\npredicted for 2M1207b\\ as discussed above; however, the paths traced by these\ntracks are otherwise independent of evolutionary models. 2M1207b\\ is intersected\nby the $T_{\\rm eff}$ = 1000 K cloud-track, which is the expected value from evolution\nmodels, demonstrating that low-temperature cloudy atmospheres can achieve very\nred near-IR colors, even with clouds significantly thinner than the extreme\nlimits.\n\n\\begin{figure}[t]\n\\plotone{figure1.eps}\n\\caption{Absolute $H$-band magnitude vs. $H$-$K$ near-IR color-magnitude\ndiagram for field brown dwarfs \\cite[]{Leggett2002, Knapp2004}. The planets\n2M1207b\\ and HR8799bcd are indicated by symbols with 1-$\\sigma$ error bars (see\nlegend). Dotted lines show color-magnitude tracks for chemical equilibrium\nmodels with radius = 1.4 $R_{\\rm Jup}$, $\\log(g) = 3.5$, mean particle size equal to 5\n$\\mu$m and T$_{\\rm eff}$ equal to 700 -- 1200 K in steps of 100 K, from bottom to\ntop. Cloud thickness increases from left to right. The arrows indicate the\napproximate locations of cloud free (left) and extremely cloudy (right) models,\nwith arrow-direction pointing toward decreasing $T_{\\rm eff}$. 
The ``transition\"\nregion, between cloudy and cloud-free atmospheres covers a broad range in the\ncolor-magnitude diagram, beyond what is currently occupied by field brown dwarfs.\n\\label{fig1}}\n\\end{figure}\n\n\\section{Non-local Equilibrium Chemistry}\n\nIf the methane-rich atmospheres of mid to late T dwarfs were in a pure chemical\nequilibrium state, CO mole fractions would be too small to have a major\nimpact on their spectra. However, CO has been detected in many T dwarfs,\nsuggesting that their atmospheres are out of equilibrium \\cite[]{Noll1997,\nSaumon2000, Saumon2006, Geballe2009}. The most likely mechanism for CO\nenhancement is vertical mixing from deep layers, where the temperatures and\npressures are higher and CO is in ready supply. At photospheric depths and\nabove, the chemical timescale ($\\tau_{\\rm chem}$) to reestablish an equilibrium\nCO\/CH$_4$ ratio becomes far greater than the mixing timescale ($\\tau_{\\rm mix}$),\nthereby allowing larger CO mole fractions to exist at otherwise\nmethane-dominated pressures. Through the same mixing process, N$_2$\/NH$_3$ can\nalso depart from local chemical equilibrium (LCE; Saumon et al. 2006;\nHubeny \\& Burrows 2007) \\nocite{Saumon2006, Hubeny2007}. The standard non-LCE\nmodel quenches the CO and CH$_4$ mole fractions at the atmospheric pressure\n($P_{q}$) where $\\tau_{\\rm mix}$ = $\\tau_{\\rm chem}$, with $\\tau_{\\rm mix}$\\ computed following\n\\cite{Smith1998}. Below the quenching depth ($P >$ $P_{q}$) the mole fractions for\nCO and CH$_4$ are in chemical equilibrium while above ($P \\le P_{q}$) they are\nset to the values at $P_{q}$.\n\nThe mole fractions for CO and CH$_4$, N(CO) and N(CH$_4$), at and above $P_{q}$\\\nare very sensitive to the underlying temperature-pressure (T-P)\nprofile and, thus, are sensitive to gravity, cloud opacity, and metallicity\n\\cite[]{Hubeny2007, Fortney2008, Barman2011}. 
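The quenching prescription just described (equilibrium mole fractions below $P_{q}$, values frozen at their $P_{q}$ levels above it) can be sketched with toy profiles; the pressure grid, timescales, and equilibrium abundances below are illustrative assumptions, not output of an atmosphere model:

```python
import numpy as np

P = np.logspace(-2, 1, 200)          # pressure grid, 0.01 -- 10 bar (toy)
tau_mix = np.full_like(P, 1.0e4)     # assumed constant mixing timescale (s)
tau_chem = 1.0e2 * (P / P[-1])**-4   # toy chemical timescale, slow at low P
N_CO_eq = 1.0e-8 * (P / P[-1])**2    # toy equilibrium CO mole fraction

# Quench level P_q: shallowest pressure at which chemistry still keeps up,
# i.e. where tau_chem first drops below tau_mix going inward
iq = int(np.where(tau_chem <= tau_mix)[0].min())
P_q = P[iq]

# Below P_q (P > P_q): equilibrium; above it (P <= P_q): frozen at the P_q value
N_CO = N_CO_eq.copy()
N_CO[:iq] = N_CO_eq[iq]
```

The quenched profile carries the larger, deep-atmosphere CO abundance up to low pressures, which is the effect invoked for the photosphere of 2M1207b.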
Certain combinations of low\ngravity and clouds can result in $P_{q}$\\ {\\em below} the CO\/CH$_4$\nequilibrium chemistry crossing point ($P_{eq}$). When $P_{q}$\\ is sufficiently deep\nand $P_{q}$\\ $>$ $P_{eq}$, N(CO) can be quenched at its maximum value while N(CH$_4$)\nis quenched near its minimum. When this situation occurs, the non-LCE\nCO\/CH$_4$ ratio becomes fairly insensitive to the mixing timescale in the\nradiative zone (determined by the adopted coefficient of eddy diffusion, $K_{\\rm zz}$).\nThis situation is similar to the N$_2$\/NH$_3$ chemistry where the NH$_3$ mole\nfractions can also be nearly independent of $K_{\\rm zz}$, even in high-gravity\ncloud-free atmospheres \\cite[]{Saumon2006, Hubeny2007}. \n\nThis atmospheric behavior is highly significant to 2M1207b\\ as it allows for \na photosphere with much higher N(CO) and much lower N(CH$_4$) mole fractions at the\nlow $T_{\\rm eff}$\\ predicted by evolution models. Previous studies, focused primarily\non field brown dwarfs, present far less severe situations with non-LCE only\naltering spectra at $\\lambda > 4$ $\\mu$m where the strongest absorption\nbands of CO and NH$_3$ occur \\cite[]{Hubeny2007}. At lower surface gravities,\nhowever, strong non-LCE effects have been shown to extend well into the near-IR\n\\cite[]{Fortney2008, Barman2011}. \n\n\\section{Model Comparisons}\n\n2M1207b\\ has been observed extensively from the ground and space, with\nphotometric coverage between 0.9 and $\\sim$ 9 \\micron\\ \\cite[]{Chauvin2004,\nSong2006, Mohanty2007, Skemer2011}. High signal-to-noise $J$, $H$, and $K$\nspectra are also available \\cite[]{Patience2010}. With the goal of finding a\nmodel atmosphere that agrees with these observations and that has $T_{\\rm eff}$\\ and $\\log(g)$\\ in the\nrange predicted by evolution models, a sequence of atmosphere models was\ncomputed covering $T_{\\rm eff}$\\ = 900 -- 1200 K and $\\log(g)$ from 3.0 to 4.5 (cgs\nunits). 
The same intermediate cloud (ICM) and non-LCE prescriptions from\n\\cite{Barman2011} were used. Synthetic photometry was generated by convolving\nsynthetic spectra with filter response curves. $J$, $H$, and $K$ spectra were\nproduced by convolving the model spectra with a Gaussian filter matching the\nobserved spectral resolution, then interpolated onto the same wavelength\npoints. The best-fit was determined by least-squares minimization.\n\nFigure \\ref{fig2} illustrates the basic structure of the model that best fits the\ndata ($T_{\\rm eff}$ = 1000 K and $\\log(g)$ = 4.0). The atmospheric cloud, composed mostly\nof Fe and Mg-Si grains, has a base at $\\sim 3$ bar and extends upward before\ndropping off in number-density at $\\sim 1$ bar. Despite the rapid drop in\nnumber-density, the cloud extends across the photospheric depths. Also, \n$P_{q}$\\ ($\\sim 3$ bar) is well below the N(CO)$_{eq}$ = N(CH$_4$)$_{eq}$\npoint ($P \\sim 0.3$ bar), with the CO mole fractions set to the maximum value and\nCH$_4$ is close to its minimum value.\n\nWhile the model cloud and non-LCE properties are determined by free-parameters, they\nare likely supported by low gravity and efficient vertical mixing. In the\nconvection zone $\\tau_{\\rm mix}$ $\\propto$ $H_{\\rm P}$\/$V_{\\rm c}$, where $V_{\\rm c}$\\ is the convective velocity\nand $H_{\\rm P}$\\ is the local pressure scale-height. With $K_{\\rm zz}$\\ $\\propto$\n$H_p^2\/$$\\tau_{\\rm mix}$\\ $\\propto$ $V_{\\rm c}$$H_{\\rm P}$, $K_{\\rm zz}$\\ increases with decreasing\ngravity in the convection zone as velocity and scale-height increase. The\nradiative-convective boundary also shifts toward the photosphere as surface\ngravity decreases, further suggesting that vertical mixing in the radiative\nzone near this boundary will also be enhanced in low gravity atmospheres. 
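The synthetic-photometry and resolution-matching steps described above can be sketched as follows; the spectrum, filter response, and kernel width are toy assumptions, not the actual model grid:

```python
import numpy as np

wl = np.linspace(1.0, 2.5, 1501)               # wavelength grid (micron), uniform
flux = np.exp(-((wl - 1.6) / 0.3)**2)          # toy model spectrum
T = ((wl > 1.5) & (wl < 1.8)).astype(float)    # idealized H-band filter response

# Synthetic photometry: band-averaged flux through the filter response curve
# (the uniform grid spacing cancels in the ratio)
F_band = np.sum(flux * T) / np.sum(T)

# Degrade the model to the observed resolution with a normalized Gaussian kernel
sigma_pix = 5.0                                # kernel width in pixels (assumed)
x = np.arange(-25, 26)
kernel = np.exp(-0.5 * (x / sigma_pix)**2)
kernel /= kernel.sum()
flux_lowres = np.convolve(flux, kernel, mode="same")
```

A least-squares comparison of such band fluxes and degraded spectra against the observations then selects the best-fitting model, as in the text.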
In\nthe radiative zone it is unclear whether the vertical mixing is predominantly\ndriven by convective overshoot or gravity waves, but the picture emerging from\nmulti-dimensional hydrodynamical simulations in M dwarfs and brown dwarfs \nindicates that $K_{\rm zz}$\\ is both depth dependent and easily achieves values $>\n10^8$ cm$^2$ s$^{-1}$ \cite[]{Ludwig2006, Freytag2010}. \n \nFigure \ref{fig3} compares the best-fitting model photometry and spectrum to\nthe observations. The model photometry agrees very well with the observations\nin nearly every bandpass, with most bands agreeing at 1$\sigma$. The model\n$J$-band spectrum has about the right slope and nicely reproduces the water\nabsorption starting at 1.33 $\mu$m. At $H$ band, the model also has roughly\nthe correct shape but is slightly too linear across the central wavelengths.\nThe CO band in the $K$ band is very well reproduced by the model but the\ncentral wavelength region is again too flat. \cite{Skemer2011} also compare\nmodels \cite[]{Burrows2006, Madu2011} with similar $T_{\rm eff}$\\ and $\log(g)$\\ to the\nsame observations; however, these models likely underestimate the non-LCE, as\nthey do not adequately reproduce the near-IR spectrum, especially the CO band\nat 2.3 \micron. An LCE version of the best-fit non-LCE model is shown in Figure\n\ref{fig3} and demonstrates the impact non-LCE has on the near-IR spectrum.\n\n\begin{figure}[t]\n\plotone{figure2.eps}\n\caption{\nAtmospheric properties for the best fitting model for 2M1207b. {\em Top:}\ntemperature-pressure structure compared to condensation curves for two abundant\ncloud species. {\em Middle:} CO and CH$_4$ mole fractions for equilibrium\n(dashed, red) and non-equilibrium (solid, with $K_{\rm zz}$\\ = 10$^8$ cm$^2$ s$^{-1}$)\nchemistry. 
Chemical and mixing timescales are also plotted (dotted lines).\n{\\em Bottom:} dust-to-gas ratio for the intermediate cloud model (ICM) and the\npure equilibrium cloud model.\n\\label{fig2}}\n\\end{figure}\n \nAlso shown in Figure \\ref{fig3} is the 1600 K ``Dusty\" \\cite[]{Allard2001}\nequilibrium cloud model often selected as the best match to the near-IR spectra\n\\cite[]{Mohanty2007, Patience2010}. The 1600 K model photometry does not agree\nwell with the observations across the full wavelength range. The 1600 K model\nspectrum, however, is very similar to the best-fit 1000 K model across the $K$\nband, but only when the near-IR bands are scaled to match individually (to\naccount for the photometric disagreement for the 1600 K model). In both $J$ and\n$H$ bands, the 1600 K model overpredicts the peak flux and is noticeably more\ntriangular at $H$ than the observations. At $K$ band, the 1600 K model provides\nonly slightly better agreement with the observations than the 1000 K model. \n \nThe remaining differences between the 1000 K model and near-IR spectra can be\nattributed to an incorrect proportion of dust opacity relative to molecular\nopacity. Given the simplicity of the cloud model used here such discrepancies\nare not surprising. Without a doubt, a more parameter-rich cloud model could\nbe used to fine tune the comparison, but it is unlikely that such an exercise\nwill lead to significantly greater insights into the physical properties of the\ncloud. The primary lesson from this comparison is that atmospheric clouds and\nchemistry can dramatically alter the spectral shape and potentially lead to\nerrors in effective temperature as great as 50\\%.\n\nThe model comparisons to the photometry provide a new estimate for the\nbolometric luminosity. The mean luminosity, found by comparing to pure\nequilibrium models, ICM\/non-LCE models and black bodies, is $\\log L\/L_\\odot =\n-4.68$ with rms of 0.05. 
This luminosity is consistent with the earlier value\nmentioned above (based on $K$-band bolometric corrections). \n\n\begin{figure*}[t]\n\epsscale{1.04}\n\plotone{figure3.eps}\n\caption{\nPhotometric (top) and spectroscopic (bottom) comparison between 2M1207b\\\nobservations (shown in black, see the text for details) and best-fitting model\n(blue). For comparison, synthetic photometry and spectra from a 1600 K model\n(red) with an equilibrium cloud model (aka ``Dusty\") along with an LCE model (green) with the\nsame parameters as the non-LCE model. Surface gravities, radii, and effective\ntemperatures are indicated in the legend. All fluxes have been scaled to 10\npc.\n\label{fig3}}\n\end{figure*}\n\n\section{Discussion and Conclusions}\n\nThe best fitting $T_{\rm eff}$, $\log(g)$, and radius (1000 K, 10$^4$ cm s$^{-2}$, and 1.5 $R_{\rm Jup}$)\nfor 2M1207b\\ are consistent with the cooling track predictions discussed above. This\nmodel demonstrates that including typical cloud thickness and non-LCE is all\nthat is required to reproduce the current observations of 2M1207b. Such a model\nreminds us that the spectra of brown dwarfs are not strictly a function of\ntemperature and, at young ages, can deviate significantly from expectations\nderived from older field brown dwarfs. The primary evidence supporting the\nedge-on-disk and protoplanetary collision hypotheses was the previously\ndeduced 1600 K effective temperature. A model with this temperature is shown to\ninadequately reproduce the available photometry and compares no better to\nnear-IR spectra than a cooler cloudy model. Also, the disk-model comparison by\n\cite{Skemer2011} further weakens the case for an edge-on disk. Consequently,\nno strong evidence remains for the previous disk or collision hypotheses. 
This\nconclusion is independent of the existence of HR8799b, but is certainly\nsupported by it.\n\nThe primary atmospheric contributors to the L-type appearance of 2M1207b, despite\nits low $T_{\\rm eff}$, are clouds extending across the photosphere, thereby reddening\nthe near-IR colors, and non-equilibrium chemistry, establishing a CO\/CH$_4$\nratio that is nearly the reciprocal of what is present in the photospheres of\nolder field T dwarfs. The cloud properties, in particular the thickness, are\nprobably not substantially different from those found in late L dwarfs and the\nrequired $K_{\\rm zz}$\\ ($\\sim 10^8$ cm$^2$ s$^{-1}$) is well within the range of\nfield dwarfs. \\cite{Skemer2011} stress clouds as the primary explanation for\nthe photometric and spectroscopic properties of 2M1207b. However, non-LCE plays\nan equally important role. Probably the most important underlying distinction\nbetween 2M1207b\\ and field brown dwarfs is low surface gravity, provided by its\nyouth and low mass. Without low surface gravity, it is unlikely that clouds or\nnon-LCE would be sufficient to give 2M1207b\\ its current appearance. Such\nobjects, therefore, should not be considered members of a new class, but rather\nrepresent the natural extension of substellar atmospheres to low gravity.\n\nGiven their similar masses ($\\sim 5$$M_{\\rm Jup}$), it is possible that 2M1207b\\ and\nHR8799b represent two distinct states in the evolution of substellar\natmospheric properties (despite potentially different formation scenarios).\nAfter $\\sim 20$ Myr of cooling, perhaps 2M1207b\\ will spectroscopically evolve\ninto something resembling HR8799b and, eventually, into a traditional looking\nmethane-rich T-dwarf. The very distinct spectra of the two objects (see Figure\n15, \\nocite{Barman2011}Barman et al. 2011) would suggest rapid spectral\nevolution in the first 50 Myr, post formation. 
It is worth noting that the\nbest-fit models for 2M1207b\\ and HR8799b, though similar in atmospheric\nparameters, differ in one significant respect -- the radius derived from the\neffective temperature and luminosity. Our model here gives a radius of 1.5 $R_{\\rm Jup}$\\\nfor 2M1207b, while even the coldest fit to HR8799b in \\cite{Barman2011} gives a\nradius of only 1 $R_{\\rm Jup}$. There are several possible explanations for this.\nFirst, the recovered radius is of course very sensitive to the effective\ntemperature -- a 100 K increase in 2M1207b\\ and a 100 K decrease in HR8799b would\nremove the discrepancy, though neither would be a good fit to the spectra.\nSecond, this may represent a fundamental difference in their internal state due\nto different formation scenarios. 2M1207b\\ almost certainly did not form through\na core-accretion process as it is highly unlikely that a disk surrounding a\nlow-mass brown dwarf would have had enough material to accrete a giant planet,\nespecially at large separations and in a short time. It is more likely that\n2M1207b\\ represents the tail end of binary star formation and, thus, might be\nexpected to follow a similar cooling evolution as brown dwarfs. The formation\nof the HR8799 planets is less clear but could potentially have involved an\naccretion period. The ``cold start\" accretion models of \\cite{Marley2007} predict\nsignificantly smaller radii for a given age and mass. Although the temperature\nand luminosity of HR8799b are too high for the extreme cold start models, the\nsmaller radius may be pointing toward a formation process that involved at\nleast some loss of entropy.\n\nFinally, one can speculate on the implications that 2M1207b\\ and HR8799b might\nhave on the spectral properties for the broader young brown dwarf and planet\npopulation. 
If one adopts $\sim$ 1500 K as the upper $T_{\rm eff}$\\ limit for\nT-dwarfs, then all T dwarfs younger than $\sim 100$ Myr should be in the planet\nmass regime ($\lesssim 13$ $M_{\rm Jup}$) and should have very low gravity ($\log(g)\n\lesssim 4.5$). However, if 2M1207b\\ and HR8799b are representative of cool, low\ngravity, substellar atmospheres, then non-LCE (and possibly clouds) will\ndiminish the strength of CH$_4$ absorption across the $H$ and $K$ bands, making\nvery young methane dwarfs rare. This prediction, however, is at odds with the\ntentative discoveries of $\sim 1$ Myr-old T dwarfs \cite[]{Zap2002,\nBurgess2009, Marsh2010}. While at least one of these discoveries has been\ncalled into question \cite[]{Burgasser2004}, if others with strong near-IR\nCH$_4$ absorption are confirmed, then it must be explained why substellar\nobjects of similar age and gravity have atmospheres with wildly different cloud\nand non-LCE properties.\n\n\vspace{-0.5cm}\n\acknowledgements\nWe thank the anonymous referee for their review. This Letter benefited from\nmany useful discussions with Brad Hansen, Mark Marley, and Didier Saumon. This\nresearch was supported by NASA through Origins grants to Lowell Observatory and\nLLNL along with support from the HST GO program. Support was also provided by\nthe NASA High-End Computing (HEC) Program through the NASA Advanced\nSupercomputing (NAS) Division at Ames Research Center. Portions of this work\nwere performed under the auspices of the U.S. Department of Energy by Lawrence\nLivermore National Laboratory under Contract DE-AC52-07NA27344\n(LLNL-JRNL-485291).\n\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}}
{"text":"\section{Introduction}\n\nAtomistic simulations are indispensable in materials science and require a robust description of the interatomic interaction. 
\nThe application of highly accurate density functional theory (DFT) calculations is often limited by the computationally involved description of the electronic structure. \nA systematic coarse-graining of the electronic structure~\\cite{Drautz2006,Drautz2011} leads to the tight-binding (TB) bond model~\\cite{Sutton-88} and to analytic bond-order potentials (BOP)~\\cite{Hammerschmidt-09-IJMR,Drautz2015}.\nThese TB\/BOP models can be evaluated at significantly reduced numerical effort and provide a transparent and intuitive model of the interatomic bond for the prediction of materials properties, see e.g. Refs.~\\cite{Mrovec2004,Mrovec2007,Mrovec2011,Seiser-11-2,Cak2014,Ford-14,Ford2015,Wang-19}. \n\nThe TB\/BOP models employ adjustable parameters that need to be optimized for a particular material.\nThis optimization is in principle comparable to the parameterization of classical potentials which can be performed with various existing software packages~\\cite{Brommer2007,Duff2015,Barrett2016,Stukowski2017}.\nThe parameterization of TB\/BOP models, however, requires sophisticated successive optimization steps, computationally-efficient handling of large data sets and an interface to a TB\/BOP calculator~\\cite{Horsfield-96,Hammerschmidt2019}.\n\nHere, we present BOPcat (\\textbf{B}ond-\\textbf{O}rder \\textbf{P}otential \\textbf{c}onstruction \\textbf{a}nd \\textbf{t}esting), a software to parameterize TB\/BOP models as implemented in the BOPfox software~\\cite{Hammerschmidt2019}. \nThe parameters of the models are optimized to reproduce target properties like energies, forces, stresses, eigenvalues, defect formation energies, elastic constants, etc. \nWith the interface of BOPfox to LAMMPS~\\cite{Plimpton-95} and ASE~\\cite{Larsen2017}, the list of target properties can in principle be extended to include dynamical properties. \nWe illustrate the capability of the BOPcat software by constructing and testing an analytic BOP with collinear magnetism for Fe. 
\nExtensive tests show the good transferability of the BOP to properties which were not included in the parameterization, particularly to elastic constants, point defects, $\\gamma$ surfaces, phonon spectra and deformation paths of the ferromagnetic bcc groundstate and to other crystal structures.\nThe structures and target properties used here are taken from DFT calculations but could also include experimental data or other data sources. \n\nWe first provide a brief introduction of bond-order potentials in Sec.~\\ref{sec:BOPformalism}. \nIn Sec.~\\ref{sec:programflow}, the BOPcat program is outlined and the implemented modules are described.\nIn Sec.~\\ref{sec:application}, we discuss the construction of an analytic BOP for Fe and its testing. In the appendix we provide\nfurther examples of parameterization protocols for BOPcat.\n\n\\section{Bond-order potential formalism}\\label{sec:BOPformalism}\n\nAnalytic BOPs provide a robust and transparent description of the interatomic interaction that is based on a coarse-graining of the electronic structure~\\cite{Drautz2006,Drautz2011,Drautz2015} from DFT to TB to BOP.\nIn the following we compile the central equations and functions of the TB\/BOP formalism that are parameterized in BOPcat. \n\nIn the TB\/BOP models, the total binding energy is given by\n\\begin{equation}\n \\label{eq:U_B_general}\n U_B = U_{\\rm{bond}} + U_{\\rm{prom}} + U_{\\rm{ion}} + U_{\\rm{es}} + U_{\\rm{rep}} + U_{\\rm{X}}\n\\end{equation}\nwith the bond energy $U_{\\rm{bond}}$ due to the formation of chemical bonds, the promotion energy $U_{\\rm{prom}}$ from redistribution of electrons across orbitals upon bond formation, the onsite ionic energy $ U_{\\rm{ion}}$ to charge an atom, the intersite electrostatic energy $U_{\\rm{es}}$, \nthe exchange energy $U_{\\rm{X}}$ due to magnetism and the repulsive energy $U_{\\rm{rep}}$ that includes all further terms of the second-order expansion of DFT. 
\nThe individual energy and force contributions are described in detail in Ref.~\\cite{Hammerschmidt2019}, their functional forms and according parameters for the exemplary construction of a magnetic BOP for Fe are given in Sec.~\\ref{sec:application}.\n\nThe bond energy is obtained by integration of the electronic density of states (DOS) $n_{i\\alpha}$,\n\\begin{equation}\n U_\\mathrm{bond} = 2\\sum_{i\\alpha}\\int^{E_F} (E-E_{i\\alpha})n_{i\\alpha}(E)dE\n\\end{equation}\nwith atomic onsite levels $E_{i\\alpha}$ for orbital $\\alpha$ of atom $i$.\nIn TB calculations, the DOS is obtained by diagonalization of the Hamiltonian $\\hat{H}$ for a given structure.\nIn analytic BOPs~\\cite{Drautz2006,Drautz2011,Drautz2015}, the DOS is determined analytically from the moments \n\\begin{eqnarray}\n \\label{eq:mu}\n \\mu_{i\\alpha}^{(n)} &=& \\braopket{i\\alpha}{\\hat{H}^n}{i\\alpha} \\nonumber \\\\ \n &=&H_{i\\alpha j\\beta}H_{j\\beta k\\gamma}H_{k\\gamma\\ldots}\\dotso H_{\\ldots i\\alpha} \\nonumber \\\\\n &=&\\int E^{n} n_{i\\alpha}(E)dE\n\\end{eqnarray}\nthat are computed from the matrix elements $H_{i\\alpha j\\beta}$ of the Hamiltonian between pairs of atoms.\nThe BOP DOS provides an approximation to the TB DOS at a computational effort for energy and force calculations that scales linearly with the number of atoms~\\cite{Teijeiro-16-2}.\nThe quality of the BOP approximation can be improved systematically by including higher moments with a power-law scaling for the increase in computational effort~\\cite{Teijeiro-16-1}.\nA detailed introduction to TB\/BOP calculations in BOPfox is given in Ref.~\\cite{Hammerschmidt2019}.\n\n\\section{Program Flow}\\label{sec:programflow}\n\nBOPcat is a collection of Python modules and tools for the construction and testing of TB\/BOP models.\nThe individual steps are specified by the user as a protocol in terms of a Python script. 
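Before turning to the workflow, the moment computation above can be made concrete: for a toy single-orbital Hamiltonian, the moments follow either from matrix powers of $H$ or, equivalently, from its eigenspectrum (a small NumPy sketch, not the BOPfox implementation):

```python
import numpy as np

# Toy Hamiltonian: 4-site single-orbital chain, hopping t = -1 (arbitrary units)
t = -1.0
H = np.diag([t, t, t], k=1) + np.diag([t, t, t], k=-1)

# Moments of the local DOS on site 0: mu_n = <0|H^n|0>,
# i.e. sums over closed hopping paths of length n starting and ending on site 0
mu = [np.linalg.matrix_power(H, n)[0, 0] for n in range(5)]

# Same moments from the spectrum: mu_n = sum_k |<0|k>|^2 E_k^n = int E^n n_0(E) dE
E, V = np.linalg.eigh(H)
mu_spec = [float(np.sum(V[0, :]**2 * E**n)) for n in range(5)]
```

For this chain mu_2 counts the single nearest-neighbour path and mu_4 the two distinct four-step closed paths, illustrating why a finite set of moments already encodes the local environment.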
\nThe overall construction and testing proceeds as follows:\n\\begin{itemize}\n \\item define and initialize input controls (CATControls)\n \\item generate initial model (CATParam)\n \\item read reference data (CATData) \n \\item initialize calculator interface (CATCalc)\n \\item build and run optimization kernel (CATKernel)\n\\end{itemize}\nThe individual tasks are modularized in the modules CATControls, CATParam, CATData, CATCalc and CATKernel that interact as illustrated in Fig.~\\ref{fig:workflow}.\nDue to its modular structure, one can also use only a subset of the modules, e.g., for successive optimizations.\n\n\\begin{figure*}[t]\n \\centering\n \\includegraphics[width=1.9\\columnwidth]{workflow}\n \\caption{Workflow of the core optimization process in BOPcat. The input controls are specified and initialized.\n The reference data is read from a database and the reference and\n testing sets are generated. The initial model is read from a model database.\n The structures and initial model are assigned to the calculator interface.\n The calculator, reference properties and weights are then used to build the \n optimization kernel. The kernel can be run as standalone process or be sent to \n the process management of a queuing system.}\n \\label{fig:workflow} \n\\end{figure*}\nIn the following, the individual modules are described in detail with snippets of the python protocol for a basic construction process of a magnetic BOP for Fe.\nFurther example protocols are given in the appendix.\n\n\\subsection{Input controls: CATControls}\n\\label{sec:CATControls}\n\nThe variables required for running BOPcat are defined in the module CATControls. \nThese include the list of chemical symbols for which the model is constructed, calculator settings, filters for the reference data, optimization variables, specifications of the model, etc.. \nThe consistency of the input variables is checked within the module to catch inconsistent settings at an early stage. 
\nThe CATControls object is then passed on to succeeding modules from which the relevant variables are assigned. \nAs an example initialization of the CATControls object we define the functions of the TB\/BOP model and the initial guess for the corresponding parameters by providing an existing model from a file (\verb+models.bx+).\n\n{\footnotesize\n\begin{verbatim}\n from bopcat.catcontrols import CATControls\n ctr = CATControls()\n ctr.elements = ['Fe']\n ctr.model_pathtomodels = 'models.bx'\n ctr.calculator_nproc = 4\n ctr.calculator_parallel = 'multiprocessing'\n ctr.calculator_settings = {'scfsteps':500}\n ctr.data_filename = 'Fe.fit'\n ctr.data_free_atom_energies = {'Fe':-0.689}\n ctr.data_system_parameters = {'spin':[1,2],\n 'system_type':[0]}\n ctr.opt_optimizer = 'leastsq'\n ctr.opt_variables = [{'bond':['Fe','Fe'],\n 'rep1':[True,True]}]\n \end{verbatim}\n}\n\nAlternatively, BOPcat can generate an initial guess from raw bond integrals~\cite{Jenke2019} with additional user-specified information on the functions to be used for the individual energy contributions, the valence-type ($s$\/$p$\/$d$) and number of valence electrons of the elements, the cut-off radii for determining pairs of interacting atoms, etc. (see~\ref{sec:app2}). \n\n\subsection{Reference data: CATData}\n\label{sec:CATData}\n\nThe set of reference data of structures and properties is read from a text file by the CATData module.\nThe entries for reference data have the following format:\n\n{\footnotesize\n\begin{verbatim}\ndata_type = 0\ncode = 7\nbasis_set = 0\npseudopotential = 30\nstrucname = bcc\na1 = -1.3587 1.3587 1.3587\na2 = 1.3587 -1.3587 1.3587\na3 = 1.3587 1.3587 -1.3587\ncoord = cartesian\nFe 0.0000 0.0000 0.0000\nenergy = -7.9561\nforces 0.0000 0.0000 0.0000\nstress = -0.08423 -0.0843 -0.0843 -0. -0. 
-0.\nsystem_type = 0\nspace_group = 229\ncalculation_type = 1\nstoichiometry = Fe\ncalculation_order = 10.033\nweight = 1.0\nspin = 1\n\\end{verbatim}\n}\n\nIn contrast to optimization procedures for classical potentials, the optimization of TB\/BOP models with BOPcat can be carried out not only for atomic properties but also at the electronic-structure level by providing eigenvalues and corresponding $k$-points as reference data. \nThe optimization can be further extended to other properties of TB\/BOP calculations (e.g. band width, magnetic moment) by supplying the according keyword and the data type of the BOPfox software in the reference data module of BOPcat.\n\nCATData stores the reference data in a set of \\verb+Atoms+ objects of the ASE python framework for atomistic simulations~\\cite{Larsen2017}. \nEach of the ASE \\verb+Atoms+ objects can have calculator-specific identifiers such as the settings of the DFT calculation (\\verb+code+, \\verb+basis_set+, \\verb+pseudopotential+) as well as structure-related identifiers such as calculation type (\\verb+calculation_type+) and calculation order (\\verb+calculation_order+), which are used as filters to use only a subset of the available reference data.\nA complete description of the implemented identifiers and filters is given in the manual of BOPcat. \nThe corresponding reference data can include basic quantities like energies, forces, stresses as well as derived quantities such as defect energies. \nExtracting the reference structures and their corresponding properties is illustrated in the following: \n\n{\\footnotesize\n\\begin{verbatim}\n from bopcat.catdata import CATData\n data = CATData(controls=controls)\n ref_atoms = data.get_ref_atoms(\n structures=['bcc*','fcc*','hcp*'],\n quantities=['energy'],\n sort_by='energy')\n ref_data = data.get_ref_data()\n\\end{verbatim}\n}\n\nThe user-specified reference data, i.e. 
structures and properties, are passed to the calculator and optimization kernel, respectively.\n\n\\subsection{Initial model: CATParam}\n\\label{sec:CATParam}\n\nThe construction of TB\/BOP models in BOPcat relies on iterative optimization steps that require an initial guess, which can be provided by the user or generated by the CATParam module. \nIn the latter case the functions and parameters are initialized from a recently developed database of TB bond integrals from downfolded DFT eigenspectra~\\cite{Jenke2019}. \nIn the following snippet, the initial model is read from the filename specified in the input controls.\n\n{\\footnotesize\n\\begin{verbatim}\n from bopcat.catparam import CATParam\n param = CATParam(controls=ctr)\n ini_model = param.models[0]\n\\end{verbatim}\n}\n\nThe CATParam module also provides meta-operations on sets of models such as identifying the optimum model for a given set of reference data.\n\n\\subsection{Calculator: CATCalc}\n\\label{sec:CATCalc}\n\nAfter the reference structures are read and the initial model is generated, the CATCalc module sets up the calculator for the computation of the specified properties for the reference structures by the initial and subsequently the optimized TB\/BOP model. \nTo this end, a list of BOPfox-ASE calculators is constructed, which can be run in serial or in multiprocessing mode. \nThe latter is optimized by a load balancing scheme that is based on the size of the structures.
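The load-balancing idea can be sketched with a standard greedy heuristic: assign each structure to the process with the smallest accumulated load, largest structures first. This is an illustrative stand-in rather than BOPcat's actual scheme, and using the atom count of a structure as its cost measure is an assumption of the sketch.

```python
import heapq

def balance_structures(sizes, n_proc):
    """Greedy load balancing: distribute structures (given as
    {name: number of atoms}) over n_proc processes so that the
    accumulated atom counts stay as even as the sizes allow."""
    heap = [(0, p, []) for p in range(n_proc)]  # (load, process id, names)
    heapq.heapify(heap)
    for name, size in sorted(sizes.items(), key=lambda kv: -kv[1]):
        load, p, assigned = heapq.heappop(heap)  # least-loaded process
        assigned.append(name)
        heapq.heappush(heap, (load + size, p, assigned))
    return {p: assigned for load, p, assigned in heap}

# The 30-atom sigma cell occupies one process; the smaller cells share the other.
print(balance_structures({'bcc': 2, 'fcc': 4, 'sigma': 30, 'mu': 13}, 2))
```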
\nAdditional properties for the model optimization can easily be included by extending this module accordingly.\nIn the following snippet, an initial model is provided and an ASE calculator is initialized for each of the reference structures.\n\n{\\footnotesize\n\\begin{verbatim}\n from bopcat.catcalc import CATCalc\n calc = CATCalc(controls=ctr,\n model=ini_model,\n atoms=ref_atoms)\n\\end{verbatim}\n}\n\nHere, the input controls (\\verb+ctr+), the initial guess for the TB\/BOP model (\\verb+ini_model+) and the reference structures (\\verb+ref_atoms+) are specified in the code snippets given in Sec.~\\ref{sec:CATControls},~\\ref{sec:CATParam} and~\\ref{sec:CATData}, respectively.\n\n\\subsection{Construction and testing kernel: CATKernel}\n\\label{sec:CATKernel}\n\nThe reference data and associated calculators are then used to set up the CATKernel module.\nThis module provides an interface to the objective function for the construction and testing of TB\/BOP models. \nThe default objective function for $N_p$ properties of $N_s$ structures is given by\n\\begin{equation}\n\\chi^2 = \\sum_p \\frac{w_p}{N_p} \\sum_s \\frac{\\tilde{w}_{ps}}{N_s^{(p)}} \\left(\\frac{X_{ps}^\\mathrm{model}-X_{ps}^\\mathrm{ref}}{\\bar{X}^\\mathrm{ref}_p}\\right)^2 .\n\\end{equation}\nThe user can choose other definitions of the objective function that are implemented in BOPcat or provide external implementations.\nThe weights for the individual structures \n\\begin{equation}\n \\tilde{w}_{ps} = \\frac{1}{\\gamma_p N_\\mathrm{atoms}^{(s)}}\n\\end{equation}\nare specified by the user or determined by the module. \nThe dimensionality factor $\\gamma_{p}$ balances the relative weights of energies, forces and stresses by taking values of $1,3,6$, respectively.
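The default objective function and its weighting scheme can also be written down compactly in Python. The function below is an illustrative reimplementation, not BOPcat's internal code; the nested-dictionary layout of the data and the squaring of the normalized residuals, as the chi-squared notation suggests, are assumptions of this sketch.

```python
def objective(model_vals, ref_vals, prop_weights, gamma, n_atoms):
    """Illustrative chi^2: a weighted sum over properties p and over the
    structures s carrying each property, with per-structure weights
    1/(gamma_p * N_atoms) and residuals scaled by the mean absolute
    reference value of the property."""
    n_props = len(prop_weights)
    chi2 = 0.0
    for p, w_p in prop_weights.items():
        structs = ref_vals[p]  # {structure name: reference value}
        mean_ref = sum(abs(v) for v in structs.values()) / len(structs)
        for s, x_ref in structs.items():
            w_ps = 1.0 / (gamma[p] * n_atoms[s])  # per-structure weight
            chi2 += (w_p / n_props) * (w_ps / len(structs)) * (
                (model_vals[p][s] - x_ref) / mean_ref) ** 2
    return chi2
```

An external objective of this shape could then be handed to a general-purpose optimizer such as those in Scipy or NLopt.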
\nThe objective function can also be normalized by the variance of the properties as in Ref.~\\cite{Krishnapriyan2017}.\nFor construction purposes, this module provides an interface between the objective function and optimizer algorithms that are either readily available in the Python modules Scipy~\\cite{Scipy} and NLopt~\\cite{Nlopt} or provided by the user as an external module. \n\nIn the following example we illustrate the initialization and running of the CATKernel object for the input controls (\\verb+ctr+), calculator (\\verb+calc+) and reference properties (\\verb+ref_data+) defined in Sec.~\\ref{sec:CATControls},~\\ref{sec:CATCalc} and~\\ref{sec:CATData}, respectively.\n\n{\\footnotesize\n\\begin{verbatim}\n from bopcat.catkernel import CATKernel\n kernel = CATKernel(controls=ctr,\n calc=calc,\n ref_data=ref_data)\n kernel.run() \n\\end{verbatim}\n}\n\n\\subsection{Parallel execution of BOPcat}\n\nThe optimization kernel can be executed efficiently in parallel by distributing the required property calculations with Python subprocesses and the message passing interface.
\nFor this purpose, BOPcat also features a process management module to interact with the queuing system of compute clusters.\nThis allows high-throughput optimizations of TB\/BOP models with, e.g., different initial models or different sets of reference data.\nWe illustrate this in the following snippet:\n\n{\\footnotesize\n\\begin{verbatim}\n from bopcat.process_management\\\n import Process_catkernels\n from bopcat.process_management\\\n import Process_catkernel\n from bopcat.process_management import queue\n models = [ini_model.rattle(kernel.variables) \n for i in range(5)]\n subprocs = []\n for i in range(len(models)):\n ckern = kernel.copy()\n ckern.calc.set_model(models[i])\n proc = Process_catkernel(catkernel=ckern,\n queue=queue)\n subprocs.append(proc)\n proc = Process_catkernels(procs=subprocs)\n proc.run()\n\\end{verbatim}\n}\n\nA list of random models (\\verb+models+) is first generated by applying noise to the parameters of the initial model (\\verb+ini_model+). \nA copy of the CATKernel object is then generated (\\verb+ckern+) and a model from the list is assigned. \nThe kernel is then serialized by the \\verb+Process_catkernel+ object, which contains the details on how to execute the kernel. \nFinally, the individual subprocesses are wrapped in the main object \\verb+Process_catkernels+, which submits them to a specified queue (\\verb+queue+) of the compute cluster.\n\n\\subsection{Graphical user interface}\n\nThe setup of a construction or testing process with BOPcat discussed so far is based on writing and editing Python scripts. \nAlternatively, BOPcat can be driven with a graphical user interface (GUI), see snapshots in Fig.~\\ref{fig:gui}.
\n\\begin{figure*}[t]\n \\centering\n \\includegraphics[width=1.75\\columnwidth]{gui}\n \\caption{Snapshots of the graphical user interface for (a) constructing a TB\/BOP model, (b) generating reference structures,\n (c) setting optimization variables and (d) defining a parameterization protocol.}\n \\label{fig:gui} \n\\end{figure*}\nBasic information about a construction, such as the current values of the parameters and of the objective function, can be visualized. \nThe reference data can be inspected visually and corresponding attributes such as the weights can be set directly.\nFor testing purposes, a given TB\/BOP model can be evaluated and visualized for a given set of reference data. \nThe GUI also comes with a parameterization builder, which allows one to design and execute a parameterization protocol. \nIn particular, one can add a number of optimization layers that consist of several optimization kernels with communication of intermediate TB\/BOP models between subsequent layers. \nThis provides the technical prerequisite for automating the construction of TB\/BOP models by sophisticated protocols.\n\n\\section{Construction and testing of simple Fe BOP}\\label{sec:application}\n\n\\subsection{Functional form}\n\nAs an example application of BOPcat we construct a simple BOP model for Fe including magnetism.\nWe note that this Fe BOP is constructed to demonstrate BOPcat and that capturing the reference data more precisely would require a more complex functional form.\nThe atoms are assumed to be charge-neutral so that the binding energy given in Eq.~\\ref{eq:U_B_general} reduces to\n\\begin{equation}\n\\label{eq:U_B}\n U_\\mathrm{binding} = U_\\mathrm{bond} + U_\\mathrm{rep} + U_\\mathrm{X} + U_\\mathrm{emb}.\n\\end{equation}\nWe choose a $d$-valent model including only two-center contributions, similar to previous TB\/BOP models for Fe~\\cite{Mrovec2011,Ford2015,Madsen2011}, with an initial number of $d$-electrons of $N_d=6.8$.
\nWe use a BOP expansion up to the 9-th moment (Eq.~\\ref{eq:mu}) as a good compromise between performance and accuracy of the DOS.\n\nThe Hamiltonian matrix elements $H_{i\\alpha j\\beta}$, which enter the calculation of the moments, are constructed from Slater-Koster bond integrals $\\beta_{ij}^\\sigma$ with the angular character $\\sigma=dd\\sigma, dd\\pi, dd\\delta$. \nThe initial guess of the bond integral parameters is taken from projections of the DFT eigenspectrum on a TB minimal basis for the Fe-Fe dimer~\\cite{Jenke2019}. \nThe distance dependence of the bond integrals is represented by a simple exponential function\n\\begin{equation}\n\\beta_{ij}^\\sigma = a\\exp\\left(-bR_{ij}^c\\right) .\n\\end{equation}\nFor this simple BOP model, the repulsive energy is given as a simple pairwise contribution\n\\begin{equation}\n U_\\mathrm{rep} = \\frac{1}{2}\\sum_{i,j\\neq i}a_\\mathrm{rep}\\exp\\left(-b_\\mathrm{rep}R_{ij}^{c_\\mathrm{rep}}\\right).\n\\end{equation}\nThe magnetic contribution to the energy $U_\\mathrm{X}$ is evaluated using\n\\begin{equation}\\label{eq:Stoner}\n U_\\mathrm{X} = -\\frac{1}{4}\\sum_i I m_i^2\n\\end{equation}\nwith $m_i=N_i^\\uparrow-N_i^\\downarrow$ the magnetic moment of atom $i$ and $I$ the Stoner exchange integral that is initially set to 0.80~eV, similar to Ref.~\\cite{Mrovec2011}.\nAs an extension of Eq.~\\ref{eq:U_B_general}, we use an additional empirical embedding contribution in Eq.~\\ref{eq:U_B}\n\\begin{equation}\n U_\\mathrm{emb} = -\\sum_i\\sqrt{\\sum_{j,j\\neq i} a_\\mathrm{emb}\\exp\\left(-b_\\mathrm{emb}(R_{ij}-c_\\mathrm{emb})^2\\right)}\n\\end{equation}\nwhich may be understood as providing contributions due to the missing $s$ electrons and non-linear exchange correlation.\n\nThe bond integrals, repulsive and embedding functions are multiplied with a cut-off function\n\\begin{equation}\n f_\\mathrm{cut} = \\frac{1}{2}\\left[ \\cos\\left(\\pi \\frac{R_{ij}-(r_\\mathrm{cut}-d_\\mathrm{cut})}{d_\\mathrm{cut}}\\right) + 1
\\right]\n\\end{equation}\nin the range of $[r_\\mathrm{cut} - d_\\mathrm{cut}, r_\\mathrm{cut}]$ in order to restrict the range of the interatomic interaction.\nFor the Fe BOP constructed here we used $r_\\mathrm{cut}=3.8~\\AA$, $d_\\mathrm{cut}=0.5~\\AA$ for the bond integrals and $r_\\mathrm{cut}=6.0~\\AA$, $d_\\mathrm{cut}=1.0~\\AA$ for the repulsive and embedding energy terms.\n\n\\subsection{Reference data}\n\nThe reference data for constructing and testing the Fe BOP are obtained by DFT calculations using the \\texttt{VASP} software~\\cite{Kresse-96-1,Kresse-96-2} with a high-throughput environment~\\cite{Hammerschmidt-13}.\nWe used the PBE exchange-correlation functional~\\cite{Perdew-96} and PAW pseudopotentials~\\cite{Bloechl-94} with $p$ semicore states. \nA planewave cut-off energy of 450~eV and Monkhorst-Pack $k$-point meshes~\\cite{Monkhorst-76} with a density of 6~\/\\AA$^{-1}$ were sufficient to converge the total energies to 1~meV\/atom. \nThe reference data for constructing the Fe BOP comprised the energy-volume curves of the bulk structures bcc, fcc, hcp, and the topologically close-packed (TCP) phases A15, $\\sigma$, $\\chi$, $\\mu$, C14, C15 and C36 that are relevant for Fe compounds~\\cite{Ladines2015}. \n\nFor testing the Fe BOP, several additional properties were determined that are related to the ferromagnetic bcc groundstate structure. \nIn particular, the elastic constants at the equilibrium volume were determined by fitting the energies as a function of the relevant strains~\\cite{Golesorkhtabar2013}. \nThe formation energies of point defects were calculated with the supercell approach with fixed volume corresponding to the bulk equilibrium volume.
\nA $6\\times 6\\times 6$ supercell was found to be sufficient to converge the formation energies to an uncertainty of 0.1~eV.\nThe energy barriers for vacancy migration were calculated with the nudged elastic band method.\nThe phonon spectra were computed with the Phonopy software~\\cite{Togo2015}. \nThe transformation paths from bcc to other crystal structures were determined according to Ref.~\\cite{Paidar1999}. \n\n\\subsection{Construction}\n\nThe parameters of the Fe BOP with the functions and reference data given above were optimized with the following BOPcat protocol: \n\\begin{enumerate}\n \\item The magnitudes of the repulsive and embedding terms $a_\\mathrm{rep}$ and $a_\\mathrm{emb}$ are adjusted to reproduce the energies of hydrostatically deformed bcc, fcc and hcp with fixed values for the exponents taken from Ref.~\\cite{Madsen2011}.\n \\item The prefactors of the bond integrals are adjusted, now also including the energies of the TCP phases.\n \\item The exponents of the repulsive functions are optimized, including the randomly deformed structures in the reference set.\n \\item The exponents of the bond integrals are optimized by increasing the size of the reference data. \n \\item The other parameters are optimized further while increasing the number of reference structures.\n\\end{enumerate}\n\nThe resulting parameters of the model are compiled in Tab.~\\ref{tab:parameters}.\n\\begin{table}[t]\n \\centering\n \\caption{Parameters of the BOP model for Fe.
The unit of $b_\\mathrm{emb}$ is $\\AA^{-2}$ and $c_\\mathrm{emb}$ is $\\AA$.}\n \\scalebox{1.00}{\n \\begin{tabular}{l c c c}\n \\hline\n \\hline\n & $a$ (eV) & $b$ ($\\AA^{-1}$) & $c$ \\\\\n \\hline \n $dd\\sigma$ & -24.9657 & 1.4762 & 0.9253 \\\\\n $dd\\pi$ & 21.7965 & 1.4101 & 1.0621 \\\\\n $dd\\delta$ & -2.3536 & 0.7706 & 1.3217 \\\\\n \\hline\n $U_\\mathrm{rep}$ & 1797.4946 & 3.2809 & 1.0067 \\\\\n \\hline\n $U_\\mathrm{emb}$ & -1.3225 & 1.3374 & 2.1572 \\\\\n \\hline\n $N_d$ & 6.8876 & & \\\\\n $I$ (eV) & 0.9994 & & \\\\\n \\hline\n \\hline\n \\end{tabular}}\n \\label{tab:parameters}\n\\end{table}\nThe initial and optimized bond integrals, and the empirical potentials are plotted in Fig.~\\ref{fig:bondint_fe}. \nThere is no substantial change in the bond integrals except that they become longer-ranged after optimization. \nAt shorter distances the bond integrals become weaker, which can be rationalized by the screening influence of the neighboring atoms in the bulk structure. \nThe effective number of $d$ electrons and the Stoner integral increased during optimization.\n\\begin{figure}[htb]\n \\centering\n \\includegraphics[width=0.99\\columnwidth]{bondint_fe}\n \\caption{Distance dependence of the $d$ bond integrals as obtained by downfolding from DFT eigenspectra for the Fe-Fe dimer (dashed)~\\cite{Jenke2019} and after optimization to Fe bulk structures (solid).}\n \\label{fig:bondint_fe} \n\\end{figure}\nThe optimized pairwise repulsive term $U_{\\mathrm{rep}}$ shown in Fig.~\\ref{fig:U_rep} is repulsive for all distances.
\nThe embedding term $U_{\\mathrm{emb}}$, in contrast, is negative for all distances.\n\\begin{figure}[htb]\n \\centering\n \\includegraphics[width=0.99\\columnwidth]{uemp_fe}\n \\caption{Distance dependence of the pairwise repulsive potential $U_{\\mathrm {rep}}$ (black) and the embedding term $U_{\\mathrm {emb}}$ (violet) after optimization to Fe bulk structures.}\n \\label{fig:U_rep} \n\\end{figure}\n\n\\subsection{Testing}\n\n\\subsubsection{Properties related to bcc-Fe}\n\nAn important basic test of the Fe BOP is the performance for properties that are closely related to the ferromagnetic bcc groundstate structure at 0~K. \nIn Tab.~\\ref{tab:bulk_properties} we compare elastic constants as well as point defect and surface properties as predicted by the Fe BOP to available DFT and experimental values.\nThe overall good agreement demonstrates the robust transferability of the Fe BOP to properties not included in the training set. \n\\begin{table}[htb]\n \\centering\n \\caption{Bulk properties of ferromagnetic bcc-Fe. The experimental values for the elastic constants, vacancy formation energy and vacancy migration energy are taken from Refs.~\\cite{Rayne1961}, ~\\cite{DeSchepper1983} and ~\\cite{Vehanen1982}, respectively. The DFT values for the vacancy migration energy and surface energies are taken from Refs. 
~\\cite{Domain2001} and~\\cite{Blonski2007}, respectively.}\n \\scalebox{1.00}{\n \\begin{tabular}{l c c c}\n \\hline\n \\hline\n & BOP & DFT & experiment \\\\\n \\hline \n V\/atom (\\AA$^3$) & 11.48 & 11.46 & 11.70 \\\\\n $B$ (GPa) & 171 & 176 & 168 \\\\\n $C_{11}$ (GPa) & 265 & 257 & 243 \\\\\n $C_{12}$ (GPa) & 125 & 154 & 138 \\\\\n $C_{44}$ (GPa) & 87 & 85 & 122 \\\\\n $E_f^\\mathrm{vac}$ (eV) & 2.03 & 2.20 & 2.00 \\\\\n $E_\\mathrm{mig}^\\mathrm{vac}$ (eV) & 1.33 & 0.65 & 0.55 \\\\\n $E_f^\\mathrm{100}$ (eV) & 3.65 & 4.64 & \\\\\n $E_f^\\mathrm{110}$ (eV) & 3.13 & 3.64 & \\\\\n $E_f^\\mathrm{111}$ (eV) & 3.59 & 4.34 & \\\\ \n $\\gamma_{(100)}$ (J\/m$^2$) & 1.44 & 2.47 & \\\\\n $\\gamma_{(110)}$ (J\/m$^2$) & 1.27 & 2.37 & \\\\\n $\\gamma_{(111)}$ (J\/m$^2$) & 2.04 & 2.58 & \\\\\n $\\gamma_{(211)}$ (J\/m$^2$) & 1.50 & 2.50 & \\\\\n \\hline\n \\hline\n \\end{tabular}}\n \\label{tab:bulk_properties}\n\\end{table}\n\nThe predicted elastic constants $C_{11}$, $C_{12}$ and $C_{44}$ are in good agreement with DFT although the specific deformed structures were not included in the reference set for constructing the BOP. \nThe prediction for the vacancy formation energy $E_f^\\mathrm{vac}$ is also in good agreement with DFT while the vacancy migration energy $E_\\mathrm{mig}^\\mathrm{vac}$ is overestimated as in previous TB\/BOP models for Fe~\\cite{Madsen2011, Mrovec2011}.\nThe formation energies of the vacancy and self-interstitial atoms calculated by the Fe BOP exhibit the correct energetic ordering although they were not included in the parameterization. \nThe absolute values are slightly underestimated as compared to DFT, similar to previous models~\\cite{Madsen2011, Mrovec2011}.\nThe relative stability of the low-index surfaces is also satisfactorily reproduced by the present Fe BOP.
However, similar to the previous BOP model of Mrovec~\\cite{Mrovec2011}, the energy difference of the other surfaces relative to $(110)$ is overestimated.\nThe deviations for point defects and surfaces are attributed to the missing $s$ electrons in the model and the lack of screening in the orthogonal TB model.\n\nAs a test towards finite-temperature applications, we determine the phonon bandstructure for ferromagnetic bcc Fe shown in Fig.~\\ref{fig:phonon_fe}.\n\\begin{figure}[htb!]\n \\centering\n \\includegraphics[width=0.99\\columnwidth]{phonon_fe}\n \\caption{Phonon bandstructure of ferromagnetic bcc-Fe. The solid (dashed) lines are the BOP (DFT) results\n while the symbols correspond to experimental data taken from Ref.~\\cite{Klotz2000}.}\n \\label{fig:phonon_fe} \n\\end{figure}\nThe prediction of the Fe BOP is in good overall agreement with DFT and experimental results. \nThe close match near the $\\Gamma$ point is expected from the good agreement of the elastic constants. \nThe experimental data were obtained at 300~K, which can explain the deviation of the experimental from the\nDFT\/BOP data for the $T2$ branch along $T-N$. \n\nThe transferability of the model in the case of large deformations of the bcc groundstate structure is tested using deformation paths that connect the high-symmetry structures bcc, fcc, hcp, sc and bct.\nThe energy profile versus the deformation parameter for the tetragonal, hexagonal, orthogonal and trigonal deformation paths is shown in Fig.~\\ref{fig:trans_fe}. \n\\begin{figure}[h]\n \\centering\n \\includegraphics[width=0.99\\columnwidth]{trans_fe}\n \\caption{Energy profile along the transformation path connecting the ferromagnetic bcc Fe with other high-symmetry structures as obtained by the Fe BOP (red) and DFT (black).}\n \\label{fig:trans_fe} \n\\end{figure}\nThe energy profiles for all transformation paths are predicted very well by the Fe BOP model.
\nThe energies at the high symmetry points are in good agreement with DFT except for the sc structure, which is slightly underestimated by BOP. \n\n\\subsubsection{Transferability to other crystal structures}\n\nIn order to verify the transferability of the Fe BOP model to other crystal structures, we determined the equilibrium binding energy, volume and bulk modulus for a broader set of crystal structures as shown in Fig.~\\ref{fig:eos_fe}. \n\\begin{figure}[htb!]\n \\centering\n \\includegraphics[width=0.99\\columnwidth]{eos_fe}\n \\caption{Relative binding energy, bulk modulus and equilibrium volume of bulk structures including TCP phases. The difference in background shading indicates structures that belong to the reference set used for construction (green) and for testing (blue).}\n \\label{fig:eos_fe} \n\\end{figure}\nThe properties of the bcc, fcc, hcp, A15, $\\sigma$, $\\chi$, $\\mu$, C14, C15 and C36 structures that were included in the parameterization are reproduced in excellent agreement.\nThis shows that the chosen functional form of the Fe BOP is sufficiently flexible to adapt to this set of reference data.\nThe Fe BOP model also shows robust predictions of equilibrium binding energy, volume and bulk modulus across the set of tested structures.\nThe open crystal structures (e.g. A4) show comparably larger errors, which can be expected as the reference set used in the parameterization covers mostly close-packed local atomic environments~\\cite{Jenke2018} while local atomic environments of open structures were not part of the training set.\n\n\\section{Conclusions}\n\nWe present the BOPcat software for the construction and testing of TB\/BOP models as implemented in the BOPfox code.
\nTB\/BOP models are parameterized to reproduce reference data from DFT calculations including energies, forces, and stresses as well as derived properties like defect formation energies.\nThe modular framework of BOPcat allows one to implement complex parameterization protocols by flexible python scripts. \nBOPcat provides a graphical user interface and a highly-parallelized optimization kernel for fast and efficient handling of large data sets. \n\nWe illustrate the key features of the BOPcat software by constructing and testing a simple $d$-valent BOP model for Fe including magnetism. \nThe resulting BOP predicts a variety of properties of the groundstate structure with good accuracy and shows good quantitative transferability to other crystal structures.\n\n\\section*{Acknowledgements}\n\nThe authors are grateful to Aparna Subramanyam, Malte Schr{\\\"o}der, Alberto Ferrari, Jan Jenke, Yury Lysogorskiy, Miroslav Cak, and Matous Mrovec for discussions and feedback using BOPcat.\nWe acknowledge financial support by the German Research Foundation (DFG) through research grant HA 6047\/4-1 and project C1 of the collaborative research centre SFB\/TR 103.\nPart of the work was carried out in the framework of the International Max-Planck Research School SurMat.\n\n","meta":{"redpajama_set_name":"RedPajamaArXiv"}} +{"text":"\\section{Introduction}\n\nBelief change\\footnote{\\, In some literature, the term ``belief revision\" is used as a synonym for belief change. In what follows, we use belief revision to refer to a particular kind of belief change.} theory studies how a rational agent changes her belief state when she is exposed to new information. Studies in this field have traditionally had a strong focus on two types of change: \\emph{contraction} in which a specified sentence has to be removed from the original belief state, and \\emph{revision} in which a specified sentence has instead to be consistently added. 
This paper is mainly concerned with the latter.\n\nAlchourr\\'on, G\\\"ardenfors and Makinson (AGM) performed the pioneering formal study on these two types of change in their seminal paper \\cite{alchourron_logic_1985}. In the AGM theory of belief change, the agent's belief state is represented by a set of sentences from some formal language $\\mathcal{L}$, usually denoted by $K$. The new information is represented by a single sentence in $\\mathcal{L}$. Belief revision and contraction on $K$ are formally represented by two operations $\\ast$ and $\\div$, mapping from a sentence $\\varphi$ to a new set of sentences $K \\ast \\varphi$ and $K \\div \\varphi$ respectively. \\cite{alchourron_logic_1985} postulated some conditions that a rational revision or contraction operation should satisfy, which are called AGM postulates on revision and contraction. \n\nFurthermore, \\cite{alchourron_logic_1985} showed that contraction and revision satisfying AGM postulates could be precisely constructed from a model based on partial meet functions on remainder sets. After that, many alternative models \\cite[etc.]{alchourron_logic_1985_safe,grove_two_1988,gardenfors_revisions_1988,hansson_kernel_1994} have been proposed to construct the contraction and revision operations satisfying AGM postulates. Although these models look entirely different on the surface, most of them employ the same select-and-intersect strategy \\cite[p. 19]{hansson_descriptor_2017}. For example, in partial meet construction for contraction \\cite{alchourron_logic_1985}, a selection is made among remainders and in sphere modelling for revision \\cite{grove_two_1988}, a selection is made among possible worlds. 
The intersection of the selected objects is taken as the outcome of the operation in both cases.\n\nAlthough the AGM theory has virtually become a standard model of theory change, many researchers are unsatisfied with its settings in several aspects and have proposed several modifications and generalizations to that framework (see \\cite{ferme_agm_2011} for a survey). Here we only point out two inadequatenesses of the AGM theory.\n\nOn the one hand, in the original AGM model, the input is represented by a single sentence. This is unrealistic since agents often receive more than one piece of information at the same time. In order to cover these cases, we can generalize sentential revision to multiple revision, where the input is a finite or infinite set of sentences. On the other hand, in AGM revision, new information has priority over original beliefs. This is represented by the success postulate: $\\varphi \\in K \\ast \\varphi$ for all $\\varphi$. The priority means that the new information will always be entirely incorporated, whereas previous beliefs will be discarded whenever the agent need do so in order to incorporate the new information consistently. This is a limitation of AGM theory since in real life it is a common phenomenon that agents do not accept the new information that they receive or only accept it partially. As a modification, we can drop the success postulate and generalize prioritized revision to non-prioritized belief revision. \n\nIn this contribution, we will put these two generalizations together and consider the so called multiple non-prioritized belief revision. In \\cite{falappa_prioritized_2012}, two different kinds of such generalized revision are distinguished:\n\n\\begin{enumerate}\n\\item Merge: $K$ and $A$ are symmetrically treated, i.e., sentences of $K$ and $A$ could be accepted or rejected. 
\n\\item Choice revision\\footnote{\\, Here we use the term ``choice revision'', introduced by \\cite{Fuhrmann_phd}, to replace the term ``selective change'' used in \\cite{falappa_prioritized_2012}, for it is easier for us to distinguish it from the ``selective revision'' introduced in \\cite{ferme_selective_1999}, which is a sort of non-prioritized revision with a single sentence as input. It should be noted that generally choice revision by a finite set $A$ cannot be reduced to selective revision by the conjunction $\\& A$ of all elements in $A$. To see this, let $\\ast_{s}$ be some selective revision. It is assumed that $\\ast_{s}$ satisfies extensionality, i.e. if $\\varphi$ is logically equivalent to $\\psi$, then $K \\ast_s \\varphi = K \\ast_s \\psi$. So, $K \\ast_s \\& \\{\\varphi, \\neg \\varphi\\} = K \\ast_s \\& \\{\\psi, \\neg \\psi\\}$ for all $\\varphi$ and $\\psi$. However, it is clearly unreasonable for choice revision $\\ast_c$ that $K \\ast_c \\{\\varphi, \\neg \\varphi\\} = K \\ast_c \\{\\psi, \\neg \\psi\\}$ should hold for all $\\varphi$ and $\\psi$.}: some sentences of $A$ could be accepted, some others could be rejected.\n\\end{enumerate}\n\nWe use $\\ast_c$ to denote a choice revision operation. \\cite{falappa_prioritized_2012} investigated the formal properties of merge but left the study of choice revision as future work. As far as we know, little work has been done on this kind of revision in the literature. This fact can be partly explained by the fact that the operation $\\ast_c$ has the unusual characteristic that the standard select-and-intersect approach is not in general applicable. To see why, let the set $K$ of original beliefs not contain any element of $A =\\{ \\varphi, \\neg \\varphi \\}$. We are going to construct a set $K \\ast_c A$ which incorporates $\\varphi$ or its negation.
Suppose that we do that by first selecting a collection $\\mb{X} =\\{X_1, X_2, X_3, \\cdots \\}$ of sets of beliefs, each of which satisfies the success condition for choice revision with $A$, i.e. $X_i \\cap A \\neq \\emptyset$ for each $X_i$. Then it may be the case that $\\varphi \\in X_1$ and $\\neg \\varphi \\in X_2$. Given that $X_1$ and $X_2$ are consistent, it follows that the intersection $\\cap \\mb{X}$ cannot satisfy the success condition, i.e. it contains neither $\\varphi$ nor $\\neg \\varphi$.\n\nTherefore, to develop a modelling for choice revision, we need to choose a strategy other than the select-and-intersect method. \\cite{hansson_descriptor_2013} introduced a new approach to belief change named ``descriptor revision'', which employs a ``select-direct'' methodology: It assumes that there is a set of belief sets as potential outcomes of belief change, and the belief change is performed by a direct choice among these potential outcomes. Furthermore, this is a very powerful framework for constructing belief change operations since success conditions for various types of belief changes are described in a general way with the help of a metalinguistic belief operator $\\mathfrak{B}$. For instance, the success condition of contraction by $\\varphi$ is $\\neg \\mathfrak{B} \\varphi$ and that of revision by $\\varphi$ is $\\mathfrak{B} \\varphi$. Descriptor revision on a belief set $K$ is performed with a unified operator $\\circ$ which applies to any success condition that is expressible with $\\mathfrak{B}$. Hence, choice revision $\\ast_c$ with a finite input set can be constructed from descriptor revision as $K \\ast_c \\{\\varphi_1, \\varphi_2, \\cdots, \\varphi_n\\} = K \\circ \\{\\mathfrak{B} \\varphi_1 \\vee \\mathfrak{B} \\varphi_2 \\vee \\cdots \\vee \\mathfrak{B} \\varphi_n \\}$ \\cite[p. 130]{hansson_descriptor_2017}.
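The select-direct reading of this construction can be illustrated with a toy model in which the set of potential outcomes is an explicit list of belief sets ordered by plausibility. The linear ordering and all names below are simplifying assumptions of this sketch, not part of the formal framework.

```python
def choice_revise(outcomes, A):
    """Toy select-direct choice revision: `outcomes` lists the candidate
    belief sets from most to least plausible, with the original belief
    set first. Return the first candidate meeting the success condition
    X ∩ A ≠ ∅ (the descriptor B phi_1 ∨ ... ∨ B phi_n); if no candidate
    does, keep the original beliefs unchanged."""
    A = set(A)
    for X in outcomes:
        if X & A:
            return X
    return outcomes[0]

# Choice revision by A = {p, ¬p}: the selected outcome contains ¬p, so
# the success condition holds although no consistent outcome could
# contain every element of A.
outcomes = [frozenset({"q"}), frozenset({"q", "¬p"}), frozenset({"p"})]
assert choice_revise(outcomes, {"p", "¬p"}) == frozenset({"q", "¬p"})
```

Because a whole belief set is chosen directly, no intersection is taken, which is exactly where the select-and-intersect strategy failed above.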
\n\nAlthough the construction of choice revision in the framework of descriptor revision has been introduced in \\cite{hansson_descriptor_2017}, the formal properties of this type of belief change are still in need of investigation. The main purpose of this contribution is to conduct such an investigation. After providing some formal preliminaries in Section \\ref{Preliminaries}, we will review how to construct choice revision in the framework of descriptor revision in Section \\ref{section choice revison based on descriptor revision}. More importantly, in this section, we will investigate the postulates on choice revision which could axiomatically characterize these constructions. In Section \\ref{section on an alternative modelling for choice revision} we will propose an alternative modelling for choice revision, which is based on \\textit{multiple believability relations}, a generalized version of \\textit{believability relation} introduced in \\cite{hansson_relations_2014} and further studied in \\cite{zhang_believability_2017}. We will investigate the formal properties of this modelling and prove the associated representation theorems. Section \\ref{section conclusion} concludes and indicates some directions for future work. All proofs of the formal results are placed in the appendix.\n\n\n\\section{Preliminaries}\\label{Preliminaries}\n\nThe object language $\\mathcal{L}$ is defined inductively by a set $v$ of propositional variables $\\{p_0 ,\\, p_1, \\, \\cdots, \\, p_n, \\, \\cdots \\}$ and the truth-functional operations $\\neg, \\wedge, \\vee$ and $\\rightarrow$ in the usual way. $\\taut$ is a tautology and ${\\scriptstyle \\perp}$ a contradiction. $\\mathcal{L}$ is called finite if $v$ is finite. Sentences in $\\mathcal{L}$ will be denoted by lower-case Greek letters and sets of such sentences by upper-case Roman letters. 
\n\n$\\cn$ is a consequence operation for $\\mathcal{L}$ satisfying supraclassicality (if $\\varphi$ can be derived from $A$ by classical truth-functional logic, then $\\varphi \\in \\cons{A}$), compactness (if $\\varphi \\in \\cons{A}$, then there exists some finite $B \\subseteq A$ such that $\\varphi \\in \\cons{B}$) and the deduction property ($\\varphi \\in \\cn(A \\cup \\{\\psi\\})$ if and only if $\\psi \\rightarrow \\varphi \\in \\cn(A)$). $X\\vdash \\varphi$ and $X \\nvdash \\varphi$ are alternative notations for $\\varphi \\in \\cn(X)$ and $\\varphi \\notin \\cn(X)$ respectively. $\\{ \\varphi \\} \\vdash \\psi$ is rewritten as $\\varphi \\vdash \\psi$ for simplicity. $\\varphi \\dashv \\Vdash \\psi$ means that $\\varphi \\vdash \\psi$ and $\\psi \\vdash \\varphi$. $A \\equiv B$ holds iff for every $\\varphi \\in A$, there exists some $\\psi \\in B$ such that $\\varphi \\dashv \\Vdash \\psi$ and vice versa. \n\nFor all finite $A$, let $\\& A$ denote the conjunction of all elements in $A$. For any $A$ and $B$, $A \\owedge B = \\{\\varphi \\wedge \\psi \\mid \\varphi \\in A \\mbox{ and } \\psi \\in B\\}$. We write $\\varphi \\owedge \\psi$ and $A_0 \\owedge A_1 \\owedge \\cdots \\owedge A_n$ instead of $\\{ \\varphi\\} \\owedge \\{\\psi\\}$ and $( \\cdots ( A_0 \\owedge A_1) \\owedge \\cdots) \\owedge A_n$ for simplicity.\n\nThe set of beliefs an agent holds is represented by a \\emph{belief set}, which is a set $X \\subseteq \\mathcal{L}$ such that $X = \\cn(X)$. $K$ is fixed to denote the original beliefs of the agent. We assume that $K$ is consistent unless stated otherwise.\n\n\n\n\\section{Choice revision based on descriptor revision}\\label{section choice revison based on descriptor revision}\n\nBefore investigating the properties of choice revision constructed in the framework of descriptor revision, we first present some formal basics of this framework, which is mainly based on \\cite{hansson_descriptor_2013}. 
\n\n\\subsection{Basics of descriptor revision}\n\nAn \\emph{atomic belief descriptor} is a sentence $\\mathfrak{B}\\varphi$ with $\\varphi\\in \\mathcal{L}$. Note that the symbol $\\mathfrak{B}$ is not part of the object language $\\mathcal{L}$. A \\emph{molecular belief descriptor} is a truth-functional combination of atomic descriptors. A \\emph{composite belief descriptor} (henceforth: descriptor; denoted by upper-case Greek letters) is a set of molecular descriptors. \n\n$\\mathfrak{B}\\varphi$ is \\emph{satisfied} by a belief set $X$ if and only if $\\varphi \\in X$. Conditions of satisfaction for molecular descriptors are defined inductively: provided that $\\varphi$ and $\\psi$ stand for molecular descriptors, $X$ satisfies $\\neg \\varphi$ if and only if it does not satisfy $\\varphi$; it satisfies $\\varphi \\wedge \\psi$ if and only if it satisfies both $\\varphi$ and $\\psi$; etc. $X$ satisfies a composite descriptor $\\Phi$ if and only if it satisfies all its elements. $X \\Vdash \\Phi$ denotes that the belief set $X$ satisfies the descriptor $\\Phi$.\n\n\nDescriptor revision on a belief set $K$ is performed with a unified operator $\\circ$ such that $K \\circ \\Phi$ is an operation with $\\Phi$ as its success condition. 
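As a quick illustration of these satisfaction conditions (our example, with distinct propositional variables $p$, $q$ and $r$), let $X = \\cn(\\{p \\wedge q\\})$. Then:

```latex
% Illustration (our example) of descriptor satisfaction for X = Cn({p ∧ q}):
\\[
\\begin{aligned}
  X &\\Vdash \\mathfrak{B} p && \\text{since } p \\in X,\\\\
  X &\\Vdash \\mathfrak{B} p \\wedge \\mathfrak{B} q && \\text{since } p \\in X \\text{ and } q \\in X,\\\\
  X &\\Vdash \\neg \\mathfrak{B} r && \\text{since } r \\notin X,\\\\
  X &\\nVdash \\{\\mathfrak{B} p, \\mathfrak{B} r\\} && \\text{since the composite descriptor also requires } r \\in X.
\\end{aligned}
\\]
```

In particular, $X$ satisfies the descriptor $\\{\\mathfrak{B} p \\vee \\mathfrak{B} r\\}$, the success condition of choice revision by $\\{p, r\\}$.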
\\cite{hansson_descriptor_2013} introduces several constructions for descriptor revision operations, of which the relational model defined as follows has a canonical status.\n\n\\begin{DEF}[\\cite{hansson_descriptor_2013}]\\label{definition of relational model}\n\n\n$(\\mb{X}, \\leqq)$ is a relational select-direct model (in short: relational model) with respect to $K$ if and only if it satisfies:\\footnote{\\, We will drop the phrase ``with respect to $K$'' if this does not affect the understanding, and write $\\set{\\varphi}$ and $\\mini{\\varphi}$ instead of $\\set{\\{\\mathfrak{B} \\varphi\\}}$ and $\\mini{\\{\\mathfrak{B} \\varphi\\}}$ for simplicity.}\n\\begin{enumerate}\n\\item[]$(\\mb{X} 1)$ $\\mb{X}$ is a set of belief sets.\n\\item[]$(\\mb{X} 2)$ $K \\in \\mb{X}$.\n\\item[]$(\\leqq1)$ $K \\leqq X$ for every $X \\in \\mb{X}$.\n\\item[]$(\\leqq2)$ For any $\\Phi$, if $\\{X \\in \\mb{X} \\mid X \\Vdash \\Phi \\}$ (we denote it as $\\set{\\Phi}$) is not empty, then it has a unique $\\leqq$-minimal element denoted by $\\mini{\\Phi}$.\n\\end{enumerate}\n\nA descriptor revision $\\circ$ on $K$ is \\emph{based on} (or \\emph{determined by}) some relational model $(\\mathbb{X},\\leqq)$ with respect to $K$ if and only if for any $\\Phi$,\n\n\\begin{eqnarray}\\nonumber\n\\mbox{$\\langle \\leqq$ $\\mr{to}$ $\\circ \\rangle$}\\footnotemark \\,\\,\\,\\,\\,\\,\\, K \\circ \\Phi=\n \\begin{cases}\n \\mini{\\Phi} &\\mbox{if $\\set{\\Phi}$ is not empty,}\\\\\n K &\\mbox{otherwise.}\n \\end{cases}\n\\end{eqnarray}\n\n\\footnotetext{\\, Provided that $(\\mathbb{X},\\leqq)$ is a relational model, $\\mb{X}$ coincides with the domain of $\\leqq$ since $K \\in \\mb{X}$ and $K \\leqq X$ for all $X \\in \\mb{X}$. So $\\leqq$ by itself can represent $(\\mathbb{X},\\leqq)$ faithfully.}\n\n\\end{DEF}\n\n\n$\\mb{X}$ could be seen as an \\textit{outcome set} which includes all the potential outcomes under various belief change patterns. 
The ordering $\\leqq$ (with the strict part $<$) provides a direct-selection mechanism, which selects the final outcome among the candidates satisfying a specific success condition. Given condition $(\\leqq2)$, this sort of selection is achievable for any success condition satisfiable in $\\mb{X}$. We call descriptor revision constructed in this way \\emph{relational} descriptor revision.\n\nAs \\cite{zhang_believability_2017} pointed out, in so far as the selection mechanism is concerned, descriptor revision operates at a more abstract level compared with AGM revision. In the construction of descriptor revision $\\circ$, ``it assumes that there exists an outcome set which contains all the potential outcomes of the operation $\\circ$, but it says little about what these outcomes should be like'' \\cite[p. 41]{zhang_believability_2017}. In contrast, in the AGM framework, belief change is supposed to satisfy the principle of consistency preservation and the principle of informational economy \\cite{gardenfors_knowledge_1988}. Therefore, the intersection step in the construction of belief change in the AGM framework becomes dispensable in the context of descriptor revision. This explains how the descriptor revision model can take a select-direct approach.\n\n\n\\subsection{Choice revision constructed from descriptor revision}\n\nThe success condition for choice revision $\\ast_c$ with a finite input can easily be expressed by the descriptor $\\{ \\mathfrak{B} \\varphi_0 \\vee \\cdots \\vee \\mathfrak{B} \\varphi_n \\}$. So, it is straightforward to construct choice revision through descriptor revision as follows.\n\n\\begin{DEF}[\\cite{hansson_descriptor_2017}]\nLet $\\circ$ be some descriptor revision. 
A choice revision $\\ast_c$ on $K$ is \\emph{based on} (or \\emph{determined by}) $\\circ$ if and only if for any finite set $A$, \n\n\\begin{eqnarray}\\nonumber\n\\mbox{$\\langle \\circ$ $\\mr{to}$ $\\cro \\rangle$} \\,\\,\\,\\,\\,\\,\\, K \\ast_c A= \n\\begin{cases}\nK \\circ \\{ \\mathfrak{B} \\varphi_0 \\vee \\cdots \\vee \\mathfrak{B} \\varphi_n \\} & \\mbox{if $A = \\{\\varphi_0 , \\cdots , \\varphi_n \\} \\neq \\emptyset$},\\\\\nK & \\mbox{otherwise}.\n\\end{cases}\n\\end{eqnarray}\n\n\\end{DEF}\n\nHenceforth, we say that $\\ast_c$ is \\emph{based on} (or \\emph{determined by}) some relational model if it is based on the descriptor revision determined by the same model. The main purpose of this section is to investigate the formal properties of choice revision based on such models.\n\n\\subsection{Postulates and representation theorem}\n\nIt can be observed that choice revision determined by relational models satisfies a set of arguably plausible postulates on choice revision. \n\n\\begin{OBS}\\label{Observation on the postulates satisfied by the choice revision constructed from relational model}\nLet $\\ast_c$ be a choice revision determined by some relational model $(\\mathbb{X},\\leqq)$. Then it satisfies the following postulates: \n\\begin{enumerate}\n\\item[] $\\mathrm{(\\ast_c 1)}$ $\\cons{K \\ast_c A} = K \\ast_c A$. (\\textbf{$\\ast_c$-closure})\n\\item[] $\\mathrm{(\\ast_c 2)}$ $K \\ast_c A = K$ or $A \\cap (K \\ast_c A) \\neq \\emptyset$. (\\textbf{$\\ast_c$-relative success})\n\\item[] $\\mathrm{(\\ast_c 3)}$ If $A \\cap (K \\ast_c B) \\neq \\emptyset$, then $A \\cap (K \\ast_c A) \\neq \\emptyset$. (\\textbf{$\\ast_c$-regularity}) \n\\item[] $\\mathrm{(\\ast_c 4)}$ If $A \\cap K \\neq \\emptyset$, then $K \\ast_c A = K$. (\\textbf{$\\ast_c$-confirmation})\n\\item[] $\\mathrm{(\\ast_c 5)}$ If $(K \\ast_c A) \\cap B \\neq \\emptyset$ and $(K \\ast_c B) \\cap A \\neq \\emptyset$, then $K \\ast_c A = K \\ast_c B$. 
(\\textbf{$\\ast_c$-reciprocity})\n\\end{enumerate}\n\\end{OBS}\n\nMoreover, another plausible condition on choice revision follows from this set of postulates. \n\n\\begin{OBS}\\label{Observation on cro-syntax irrelevance}\nIf $\\ast_c$ satisfies $\\ast_c$-closure, relative success, regularity and reciprocity, then $\\ast_c$ satisfies:\n\\begin{enumerate}\n\\item[] If $A \\equiv B$, then $K \\ast_c A = K \\ast_c B$. (\\textbf{$\\ast_c$-syntax irrelevance})\n\\end{enumerate}\n\\end{OBS}\n\nIt is easy to see that the postulates above are natural generalizations of the following postulates on sentential revision:\n\n\\begin{enumerate}\n\\item[] $\\mathrm{(\\ast 1)}$ $\\cn(K \\ast \\varphi)=K \\ast \\varphi$ \\textit{\\textbf{($\\ast$-closure)}}\n\\item[] $\\mathrm{(\\ast 2)}$ If $K \\ast \\varphi \\neq K $, then $\\varphi \\in K \\ast \\varphi$ \\textit{\\textbf{($\\ast$-relative success)}}\n\\item[] $\\mathrm{(\\ast 3)}$ If $\\varphi \\in K $, then $K \\ast \\varphi = K$ \\textit{\\textbf{($\\ast$-confirmation)}}\n\\item[] $\\mathrm{(\\ast 4)}$ If $\\psi \\in K \\ast \\varphi$, then $\\psi \\in K \\ast \\psi$ \\textit{\\textbf{($\\ast$-regularity)}}\n\\item[] $\\mathrm{(\\ast 5)}$ If $\\psi \\in K \\ast \\varphi$ and $\\varphi \\in K \\ast \\psi$, then $K\\ast \\varphi = K \\ast \\psi$ \\textit{\\textbf{($\\ast$-reciprocity)\\footnote{\\, This postulate was first discussed in \\cite{alchourron1982logic} in the context of maxichoice revision.}}}\n\\end{enumerate}\nand \n\\begin{enumerate}\n\\item[] If $\\varphi \\dashv \\Vdash \\psi$, then $K\\ast \\varphi = K \\ast \\psi$. 
\\footnote{\\, It is easy to check that $\\ast$-extensionality is derivable from $(\\ast 1)$, $(\\ast 2)$, $(\\ast 3)$ and ($\\ast 5$).} \\textit{\\textbf{($\\ast$-extensionality)}}\n\\end{enumerate}\n\n\nThe above postulates on choice revision are as intuitively plausible as their counterparts on sentential revision, except that the meaning of $\\ast_c$-reciprocity is not as transparent as that of $\\ast$-reciprocity. However, given some weak conditions, we can show that the $\\ast_c$-reciprocity postulate is equivalent to a more understandable condition, as follows.\n\n\\begin{OBS}\\label{Observation that reciprocity is equivalent to cautiousness}\nLet the choice revision $\\ast_c$ satisfy $\\ast_c$-relative success and \\linebreak $\\ast_c$-regularity. Then it satisfies $\\ast_c$-reciprocity iff it satisfies:\n\\begin{enumerate}\n\\item[] If $A \\subseteq B$ and $(K \\ast_c B) \\cap A \\neq \\emptyset$, then $K \\ast_c A =K \\ast_c B$. (\\textbf{$\\ast_c$-cautiousness})\n\\end{enumerate} \n\\end{OBS}\n\n\\noindent The postulate $\\ast_c$-cautiousness reflects a distinctive characteristic of choice revision modelled by relational models: the agent who performs this sort of belief change is cautious in the sense of adding only the smallest possible part of the new information to her original beliefs. It follows immediately from $\\ast_c$-relative success and $\\ast_c$-cautiousness that if $A \\cap (K \\ast_c A) \\neq \\emptyset$, then $K \\ast_c A = K \\ast_c \\{\\varphi\\}$ for some $\\varphi \\in A$. Thus, it is not surprising that the following postulate is derivable.\n\n\\begin{OBS}\\label{Observation that dichotomy can be derived from reciprocity}\nIf $\\ast_c$ satisfies $\\ast_c$-relative success, regularity and reciprocity, then $\\ast_c$ satisfies:\n\\begin{enumerate}\n\\item[] $K \\ast_c (A \\cup B) = K \\ast_c A$ or $K \\ast_c (A \\cup B) = K \\ast_c B$. 
(\\textbf{$\\ast_c$-dichotomy})\n\\end{enumerate}\n\\end{OBS}\n\n\n\n\\noindent In contrast to $(\\ast_c 1)$ through $(\\ast_c 5)$, the postulates $\\ast_c$-cautiousness and $\\ast_c$-dichotomy do not have directly corresponding postulates in the context of sentential revision. This suggests that though $(\\ast_c 1)$ through $(\\ast_c 5)$ naturally generalize $(\\ast 1)$ through $(\\ast 5)$, this sort of generalization is not as trivial as it may seem. As further evidence of this, the following observation shows that the properties of $(\\ast 1)$ through $(\\ast 5)$ and those of their generalizations do not always run in parallel.\n\n\\begin{OBS}\\label{Observation that reciprocity is equivalent to strong reciprocity}\nLet $\\ast_c$ satisfy $\\ast_c$-regularity. Then it satisfies $\\ast_c$-reciprocity iff it satisfies \n\\begin{enumerate}\n\\item[] For any $n \\geq 1 $, if $(K \\ast_c A_1 ) \\cap A_0 \\neq \\emptyset$, $\\cdots$, $(K \\ast_c A_{n} ) \\cap A_{n-1} \\neq \\emptyset$, $(K \\ast_c A_{0} ) \\cap A_{n} \\neq \\emptyset$, then $K \\ast_c A_0 = K \\ast_c A_1 = \\cdots = K \\ast_c A_n$. (\\textbf{$\\ast_c$-strong reciprocity})\n\\end{enumerate}\n\\end{OBS}\n\n\\noindent $\\ast_c$-strong reciprocity is a generalization of the following postulate on sentential revision:\n\n\\begin{enumerate}\n\\item[] For any $n \\geq 1 $, if $\\varphi_0 \\in K \\ast \\varphi_{1}$, $\\cdots$, $\\varphi_{n-1} \\in K \\ast \\varphi_n$ and $\\varphi_n \\in K \\ast \\varphi_0 $, then $K \\ast \\varphi_0 = K \\ast \\varphi_1= \\cdots = K \\ast \\varphi_n$. (\\textbf{$\\ast$-strong reciprocity})\\footnote{\\, $\\ast$-strong reciprocity \nis closely related to a non-monotonic reasoning rule named ``loop'', first introduced in \\cite{kraus_nonmonotonic_1990}. 
For more discussion on this, see \\cite{makinson_relations_1991}.}\n\\end{enumerate}\n\n\\noindent However, in contrast to the result in Observation \\ref{Observation that reciprocity is equivalent to strong reciprocity}, $\\ast$-strong reciprocity is not derivable from $(\\ast 1)$ through $(\\ast 5)$.\\footnote{\\, To see this, let $K = \\conp{\\taut}$ and let the revision operation $\\ast$ on $K$ be defined as: (i) if $p_0 \\wedge p_1 \\vdash \\varphi$ and $\\varphi \\vdash p_0$, then $K \\ast \\varphi = \\conp{p_0 \\wedge p_1}$; (ii) if $p_1 \\wedge p_2 \\vdash \\varphi$ and $\\varphi \\vdash p_1$, then $K \\ast \\varphi = \\conp{p_1 \\wedge p_2}$; (iii) if $p_0 \\wedge p_2 \\vdash \\varphi$ and $\\varphi \\vdash p_2$, then $K \\ast \\varphi = \\conp{p_0 \\wedge p_2}$; (iv) otherwise, $K \\ast \\varphi = \\conp{\\varphi}$. It is easy to check that $\\ast$ satisfies $(\\ast 1)$ through $(\\ast 5)$ but not $\\ast$-strong reciprocity.}\n\nAfter this investigation of the postulates $(\\ast_c 1)$ through $(\\ast_c 5)$ satisfied by choice revision based on relational models, the question naturally arises whether choice revision can be axiomatically characterized by this set of postulates. We give a partial answer to this question: a representation theorem is obtainable when $\\mathcal{L}$ is finite.\n\n\\begin{THE}\\label{Representation theorem for choice revision derived from descriptor revision of finite language}\nLet $\\mathcal{L}$ be a finite language. Then, $\\ast_c$ satisfies $(\\ast_c 1)$ through $(\\ast_c 5)$ iff it is a choice revision based on some relational model.\n\\end{THE}\n\n\\subsection{More properties of choice revision}\n\nIn this subsection, we will study additional properties of choice revision from the point of view of postulates. The postulates introduced in the previous subsection do not necessarily cover all the reasonable properties of this operation. 
In what follows we are going to investigate some additional ones, in particular the following:\n\n\\begin{enumerate}\n\\item[] If $A \\neq \\emptyset$, then $A \\cap (K \\ast_c A) \\neq \\emptyset$. (\\emph{\\textbf{$\\ast_c$-success}})\n\\item[] If $A \\not \\equiv \\{{\\scriptstyle \\perp}\\}$, then $K \\ast_c A \\nvdash {\\scriptstyle \\perp}$. (\\emph{\\textbf{$\\ast_c$-consistency}})\n\\end{enumerate}\n\nTo some extent, $\\ast_c$-success is a strengthening of $\\ast_c$-relative success and $\\ast_c$-regularity, but it does not say anything about the limiting case in which the input is empty. To cover this limiting case, we need the following postulate:\n\n\\begin{enumerate}\n\\item[] If $A = \\emptyset$, then $K \\ast_c A = K$. (\\textbf{$\\ast_c$-vacuity})\n\\end{enumerate}\n\nThe interrelations among $\\ast_c$-success, $\\ast_c$-relative success, $\\ast_c$-vacuity and \\linebreak $\\ast_c$-regularity are summarized as follows.\n\n\\begin{OBS}\\label{Observation on the inter-derivability among success, relative , vacuity and regularity }\nLet $\\ast_c$ be some choice revision on $K$.\n\\begin{enumerate}\n\\item If $\\ast_c$ satisfies relative success, then it satisfies vacuity.\n\\item If $\\ast_c$ satisfies success and vacuity, then it satisfies relative success.\n\\item If $\\ast_c$ satisfies success, then it satisfies regularity.\n\\end{enumerate}\n\\end{OBS}\n\n$\\ast_c$-consistency is a plausible constraint on a rational agent. If we accept $\\ast_c$-success and $\\ast_c$-consistency as ``supplementary'' postulates for choice revision $\\ast_c$, the corresponding relational model on which $\\ast_c$ is based will also need to satisfy some additional properties. We use the following representation theorem to conclude this subsection.\n\n\\begin{THE}\\label{Representation thoerem for choice revision additionally satisfying success and consistency }\nLet $\\mathcal{L}$ be a finite language and $\\ast_c$ some revision operation on $K \\subseteq \\mathcal{L}$. 
Then, $\\ast_c$ satisfies $\\ast_c$-closure, $\\ast_c$-success, $\\ast_c$-vacuity, $\\ast_c$-confirmation, $\\ast_c$-reciprocity and $\\ast_c$-consistency iff \n it is a choice revision determined by some relational model which satisfies the following two conditions:\n\\begin{enumerate}\n\\item[] $(\\mb{X} 3)$ $\\conp{{\\scriptstyle \\perp}} \\in \\mb{X}$; \n\\item[] $(\\leqq 3)$ $\\set{\\mathfrak{B} \\varphi} \\neq \\emptyset$ and $\\mini{\\mathfrak{B} \\varphi } < \\conp{{\\scriptstyle \\perp}}$ for every $\\varphi \\not \\vdash {\\scriptstyle \\perp}$.\n\\end{enumerate}\n\\end{THE}\n\n\\section{An alternative modelling for choice revision}\\label{section on an alternative modelling for choice revision}\n\nIn this section, we propose an alternative modelling for choice revision, which is based on so-called \\textit{multiple believability relations}. A believability relation $\\preceq$ is a binary relation on sentences of $\\mathcal{L}$. Intuitively, $\\varphi \\preceq \\psi$ means that the subject is at least as prone to believing $\\varphi$ as to believing $\\psi$.\\footnote{\\, For a more detailed investigation of believability relations, including their relationship with the epistemic entrenchment relation introduced in \\cite{gardenfors_revisions_1988}, see \\cite{hansson_relations_2014} and \\cite{zhang_believability_2017}.} We can generalize $\\preceq$ to a multiple believability relation $\\preceq_{\\ast}$ which is a binary relation on the set of all finite subsets of $\\mathcal{L}$ satisfying:\n\n\\begin{enumerate}\n\\item[] \\mbox{$\\langle \\preceq_{\\ast}$ $\\mr{to}$ $\\preceq \\rangle$} \\,\\,\\,\\,\\,\\,$\\varphi \\preceq \\psi$ iff $\\{\\varphi\\} \\preceq_{\\ast} \\{\\psi\\}$.\n\\end{enumerate}\n\n\\noindent This kind of generalization can be done in different ways, and at least two distinct relations can be obtained, namely \\textit{package multiple believability relations}, denoted by $\\preceq_{p}$, and \\textit{choice multiple believability 
relations}, denoted by $\\preceq_{c}$ (with symmetric part $\\simeq_{c}$ and strict part $\\prec_{c}$). Intuitively, $A \\preceq_{p} B$ means that it is easier for the subject to believe all propositions in $A$ than to believe all propositions in $B$, and $A \\preceq_{c} B$ means that it is easier for the subject to believe some proposition in $A$ than to believe some proposition in $B$. \n\n\\noindent $\\preceq_{p}$ is of little interest since $A \\preceq_{p} B$ can be immediately reduced to $\\& A \\preceq \\& B$, given that $A$ and $B$ are finite. In what follows, multiple believability relations (or multi-believability relations for short) only refer to choice multiple believability relations $\\preceq_{c}$. ($\\{\\varphi\\} \\preceq_{c} A$ will be written as $\\varphi \\preceq_{c} A$ for simplicity.) \n\n\\subsection{Postulates on multi-believability relations}\n\nRecall the following postulates on believability relations $\\preceq$ introduced in \\cite{zhang_believability_2017}:\n\\begin{enumerate}\n\\item[] \\emph{\\textbf{$\\preceq$-transitivity}}: If $\\varphi \\preceq \\psi$ and $\\psi \\preceq \\lambda$, then $\\varphi \\preceq \\lambda$. 
\n\\item[] \\emph{\\textbf{$\\preceq$-weak coupling}}: If $\\varphi \\simeq \\varphi \\wedge \\psi $ and $\\varphi \\simeq \\varphi \\wedge \\lambda$, then $\\varphi \\simeq \\varphi \\wedge (\\psi \\wedge \\lambda)$.\n\\item[] \\emph{\\textbf{$\\preceq$-coupling}}: If $\\varphi \\simeq \\psi$, then $\\varphi \\simeq \\varphi \\wedge \\psi$.\n\\item[] \\emph{\\textbf{$\\preceq$-counter dominance}}: If $\\varphi \\vdash \\psi$, then $\\psi \\preceq \\varphi$.\n\\item[] \\emph{\\textbf{$\\preceq$-minimality}}: $\\varphi \\in K$ if and only if $\\varphi \\preceq \\psi$ for all $\\psi$.\n\\item[] \\emph{\\textbf{$\\preceq$-maximality}}: If $\\psi \\preceq \\varphi$ for all $\\psi$, then $\\varphi \\equiv {\\scriptstyle \\perp}$.\n\\item[] \\emph{\\textbf{$\\preceq$-completeness}}: $\\varphi \\preceq \\psi$ or $\\psi \\preceq \\varphi$.\n\\end{enumerate} \n\nTransitivity is assumed for almost all orderings. In virtue of the intuitive meaning of the believability relation, $\\varphi \\simeq \\varphi \\wedge \\psi$ represents that the agent will accept $\\psi$ on condition of accepting $\\varphi$. Thus, the rationale for $\\preceq$-weak coupling is that if the agent will consequently add $\\psi$ and $\\lambda$ to her beliefs when accepting $\\varphi$, then she will also add their conjunction to her beliefs in this case. This is reasonable if we assume that the beliefs of the agent are closed under the consequence operation. The justification of $\\preceq$-counter dominance is that if $\\varphi$ logically entails $\\psi$, then it will be a smaller change, and hence easier, for the agent to accept $\\psi$ rather than $\\varphi$, since accepting $\\varphi$ requires adding $\\psi$ as well, given that the beliefs of the agent are represented by a belief set. 
$\\preceq$-coupling is a strengthening of $\\preceq$-weak coupling.\\footnote{\\, It is easy to see that $\\preceq$-coupling implies $\\preceq$-weak coupling, provided that $\\preceq$-transitivity and $\\preceq$-counter dominance hold. } It says that if $\\varphi$ is equivalent to $\\psi$ in believability, then the agent will consequently add $\\psi$ to her beliefs in case of accepting $\\varphi$, and vice versa. $\\preceq$-minimality is justifiable since nothing needs to be done to add $\\varphi$ to $K$ if it is already in $K$. $\\preceq$-maximality is justifiable since it is reasonable to assume that it is strictly more difficult for a rational agent to accept ${\\scriptstyle \\perp}$ than to accept any non-falsum. $\\preceq$-completeness seems a little bit strong. It says that all pairs of sentences are comparable in believability. In accordance with \\cite{zhang_believability_2017}, we call relations satisfying all these postulates \\textit{quasi-linear believability relations}.\n\nWe can generalize these postulates on believability relations in a natural way to postulates on multi-believability relations as follows:\\footnote{\\, In what follows, it is always assumed that all sets $A$, $B$ and $C$ mentioned in postulates on multi-believability relations are finite sets.}\n\n\\begin{enumerate}\n\\item[] \\emph{\\textbf{$\\preceq_{c}$-transitivity}}: If $A \\preceq_{c} B$ and $B \\preceq_{c} C$, then $A \\preceq_{c} C$.\n\\item[] \\emph{\\textbf{$\\preceq_{c}$-weak coupling}}: If \\( {A \\simeq_{c} A \\owedge B} \\) and \\( {A \\simeq_{c} A \\owedge C} \\), then $A \\simeq_{c} A \\owedge B \\owedge C$.\n\\item[] \\emph{\\textbf{$\\preceq_{c}$-coupling}}: If $A \\simeq_{c} B$, then $A \\simeq_{c} A \\owedge B$. \n\\item[] \\emph{\\textbf{$\\preceq_{c}$-counter dominance}}: If for every $\\varphi \\in B$ there exists $\\psi \\in A$ such that $\\varphi \\vdash \\psi$, then $A \\preceq_{c} B$. 
\n\\item[] \\emph{\\textbf{$\\preceq_{c}$-minimality}}: $A \\preceq_{c} B$ for all $B$ if and only if $A \\cap K \\neq \\emptyset$. \n\\item[] \\emph{\\textbf{$\\preceq_{c}$-maximality}}: If $B$ is not empty and $A \\preceq_{c} B$ for all non-empty $A$, then $B \\equiv \\{{\\scriptstyle \\perp}\\}$. \n\\item[] \\emph{\\textbf{$\\preceq_{c}$-completeness}}: $A \\preceq_{c} B$ or $B \\preceq_{c} A$.\n\\end{enumerate}\n\n\\noindent These postulates on multi-believability relations can be understood in a similar way to their correspondents on believability relations. \n\nFurthermore, we propose the following two additional postulates on multi-believability relations:\n\n\\begin{enumerate}\n\\item[] \\emph{\\textbf{$\\preceq_{c}$-determination}}: $A \\prec_{c} \\emptyset$ for every non-empty $A$.\n\\item[] \\emph{\\textbf{$\\preceq_{c}$-union}}: $A \\preceq_{c} A \\cup B$ or $B \\preceq_{c} A \\cup B$.\n\\end{enumerate}\n\n\\noindent At least on the surface, these two are not generalizations of any postulate on believability relations. In some sense, the meaning of \\linebreak $\\preceq_{c}$-determination corresponds to that of $\\ast_c$-success: it is a strictly smaller change for the agent to accept some sentence from a non-empty $A$ than to accept some sentence from the empty set (which is obviously impossible), so the agent will successfully add some sentences of $A$ to her original beliefs when exposed to the new information represented by $A$, and vice versa. Similarly, there is an obvious correspondence between the forms and meanings of $\\preceq_{c}$-union and $\\ast_c$-dichotomy. They both suggest that partially accepting a non-empty $A$ is equivalent to accepting some single sentence in $A$. 
This is plausible if we assume that the agent is extremely cautious about the new information.\n\n\n\\begin{OBS}\\label{Observation on the interderivability among postulates on multiple believability relations}\nLet $\\preceq_{c}$ be some multi-believability relation satisfying $\\preceq_{c}$-transitivity and $\\preceq_{c}$-counter dominance. If it satisfies $\\preceq_{c}$-union in addition, then\n\\begin{enumerate}\n\\item It satisfies $\\preceq_{c}$-completeness.\n\\item It satisfies $\\preceq_{c}$-weak coupling iff it satisfies $\\preceq_{c}$-coupling.\n\\end{enumerate} \n\\end{OBS}\n\n\\noindent Observation \\ref{Observation on the interderivability among postulates on multiple believability relations} indicates that $\\preceq_{c}$-union is a strong postulate. It should be noted that for a believability relation, neither $\\preceq$-completeness nor $\\preceq$-coupling can be derived from $\\preceq$-transitivity, $\\preceq$-counter dominance and $\\preceq$-weak coupling.\n\nIn what follows, we call multi-believability relations satisfying all the above postulates \\textit{standard multi-believability relations}. \n\n\\subsection{Translations between believability relations and multiple believability relations}\nIn this subsection, we will show that although it is impossible to find a postulate on believability relations that corresponds to $\\preceq_{c}$-determination or $\\preceq_{c}$-union, there exists a translation between quasi-linear believability relations and standard multi-believability relations.\n\n\\begin{OBS}\\label{observation for reduction to single sentence}\nLet $\\preceq_{c}$ satisfy $\\preceq_{c}$-determination, $\\preceq_{c}$-transitivity and $\\preceq_{c}$-counter dominance. 
Then, for any non-empty finite sets $A$ and $B$,\n\\begin{enumerate}\n\\item $A \\preceq_{c} B$ if and only if there exists $\\varphi \\in A$ such that $\\varphi \\preceq_{c} B$.\n\\item $A \\preceq_{c} B$ if and only if $A \\preceq_{c} \\varphi$ for all $\\varphi \\in B$.\n\\end{enumerate}\n\\end{OBS}\n\n\\noindent This observation suggests that $\\preceq$ and $\\preceq_{c}$ can be linked through the following two transitions:\n\\begin{enumerate}\n\\item[] \\mbox{$\\langle \\preceqc$ $\\mr{to}$ $\\preceq \\rangle$} \\,\\,\\,\\,\\,\\,$\\varphi \\preceq \\psi$ iff $\\{\\varphi\\} \\preceq_{c} \\{\\psi\\}$.\n\\item[] \\mbox{$\\langle \\preceq$ $\\mr{to}$ $\\preceqc \\rangle$} \\,\\,\\,\\,\\,\\,$A \\preceq_{c} B$ iff $B = \\emptyset$ or there exists $\\varphi \\in A$ such that $\\varphi \\preceq \\psi$ for every $\\psi \\in B$.\n\\end{enumerate}\n\n\\noindent This is confirmed by the following theorem.\n\n\\begin{THE}\\label{Theorem on the translation between single and multi believability relations}\n\\begin{enumerate}\n\\item If $\\preceq$ is a quasi-linear believability relation and $\\preceq_{c}$ is constructed from $\\preceq$ through \\mbox{$\\langle \\preceq$ $\\mr{to}$ $\\preceqc \\rangle$}, then $\\preceq_{c}$ is a standard multi-believability relation and $\\preceq$ can be retrieved from $\\preceq_{c}$ through \\mbox{$\\langle \\preceqc$ $\\mr{to}$ $\\preceq \\rangle$}.\n\\item If $\\preceq_{c}$ is a standard multi-believability relation and $\\preceq$ is constructed from $\\preceq_{c}$ through \\mbox{$\\langle \\preceqc$ $\\mr{to}$ $\\preceq \\rangle$}, then $\\preceq$ is a quasi-linear believability relation and $\\preceq_{c}$ can be retrieved from $\\preceq$ through \\mbox{$\\langle \\preceq$ $\\mr{to}$ $\\preceqc \\rangle$}.\n\\end{enumerate}\n\\end{THE}\n\n\n\\subsection{Choice revision constructed from multi-believability relations}\n\nNow we turn to the construction of choice revision through \\linebreak multi-believability relations. 
Recall that a sentential revision $\\ast$ can be constructed from a believability relation $\\preceq$ in this way \\cite{zhang_believability_2017}:\n\\begin{eqnarray}\\nonumber\n\\mbox{$\\langle \\preceq$ $\\mr{to}$ $\\ast \\rangle$} \\,\\,\\,\\,\\,\\,\\,K \\ast \\varphi= \\{ \\psi \\mid \\varphi \\simeq \\varphi \\wedge \\psi \\} \n\\end{eqnarray} \n\n\\noindent As we have explained, $\\varphi \\simeq \\varphi \\wedge \\psi$ can be understood as saying that the agent will consequently accept $\\psi$ in case of accepting $\\varphi$. So, the set $\\{ \\psi \\mid \\varphi \\simeq \\varphi \\wedge \\psi \\} $ is just the agent's new set of beliefs after she has performed belief revision with input $\\varphi$. Thus, we can similarly construct choice revision from multi-believability relations in the following way:\n\n\\begin{DEF}\\label{definition of choice revision based on multi-believability relations}\nLet $\\preceq_{c}$ be some multi-believability relation. A choice revision $\\ast_c$ on $K$ is based on (or determined by) $\\preceq_{c}$ if and only if: for any finite $A$, \n\\begin{eqnarray}\\nonumber\n\\mbox{$\\langle \\preceqc$ $\\mr{to}$ $\\cro \\rangle$} \\,\\,\\,\\,\\,\\,\\, K \\ast_c A= \n\\begin{cases}\n\\{\\varphi \\mid A \\simeq_{c} A \\owedge \\varphi \\} & \\mbox{if $A \\prec_{c} \\emptyset$},\\\\\nK & \\mbox{otherwise}.\n\\end{cases}\n\\end{eqnarray}\n\\end{DEF}\n\nThe primary results of this section are the following two representation theorems. Compared with Theorems \\ref{Representation theorem for choice revision derived from descriptor revision of finite language} and \\ref{Representation thoerem for choice revision additionally satisfying success and consistency }, these two theorems are applicable to more general cases since they do not assume that the language $\\mathcal{L}$ is finite. 
These two theorems demonstrate that multi-believability relations provide a suitable modelling for choice revision characterized by the set of postulates mentioned in Section \\ref{section choice revison based on descriptor revision}.\n\n\\begin{THE}\\label{Representation theorem for choice revision based on weak multiple believability relation}\nLet $\\ast_c$ be some choice revision on $K$. Then, $\\ast_c$ satisfies $(\\ast_c 1)$ through $(\\ast_c 5)$ iff it is determined by some multi-believability relation $\\preceq_{c}$ satisfying $\\preceq_{c}$-transitivity, $\\preceq_{c}$-weak coupling, $\\preceq_{c}$-counter-dominance, $\\preceq_{c}$-minimality \\linebreak and $\\preceq_{c}$-union.\n\\end{THE}\n\n\\begin{THE}\\label{Representation theorem for choice revision based on standard multiple believability relation}\nLet $\\ast_c$ be some choice revision on $K$. Then, $\\ast_c$ satisfies $\\ast_c$-closure, $\\ast_c$-success, $\\ast_c$-vacuity, $\\ast_c$-confirmation, $\\ast_c$-reciprocity and $\\ast_c$-consistency iff it is determined by some standard multi-believability relation.\n\\end{THE}\n\nConsidering the translation between multi-believability relations and believability relations (Theorem \\ref{Theorem on the translation between single and multi believability relations}), it seems that these results can be easily transferred to the context of believability relations. However, if we drop some postulates on multi-believability relations such as $\\preceq_{c}$-determination, the translation between multi-believability relations and believability relations will not be so transparent; at least, it will not be as straightforward as \\mbox{$\\langle \\preceq$ $\\mr{to}$ $\\preceqc \\rangle$}{ }and \\mbox{$\\langle \\preceqc$ $\\mr{to}$ $\\preceq \\rangle$}. 
As a consequence, the result in Theorem \\ref{Representation theorem for choice revision based on weak multiple believability relation} may not transfer to believability relations in a straightforward way. Moreover, compared with postulates on believability relations, postulates on multi-believability relations such as $\\preceq_{c}$-determination and $\\preceq_{c}$-union can present our intuitions on choice revision in a more direct way. Thus, multi-believability relations are still worth studying in their own right.\n\n\n\\section{Conclusion and future work}\\label{section conclusion}\n\nAs a generalization of traditional belief revision, choice revision has more realistic characteristics. The new information is represented by a set of sentences, and the agent may accept only some of these sentences while rejecting the others. From a technical point of view, choice revision is interesting since the standard ``select-and-intersect'' methodology in modellings for belief change is not suitable for it. Instead, it can be modelled in the newly developed framework of descriptor revision, which employs a ``select-direct'' approach. After reviewing the construction of choice revision in the framework of descriptor revision, under the assumption that the language is finite, we provided two sets of postulates as the axiomatic characterizations for two variants of choice revision based on such constructions (Theorems \\ref{Representation theorem for choice revision derived from descriptor revision of finite language} and \\ref{Representation thoerem for choice revision additionally satisfying success and consistency }). 
These postulates, in particular, \\linebreak $\\ast_c$-cautiousness and $\\ast_c$-dichotomy, point out that choice revision modelled by descriptor revision has the special characteristic that the agent who performs this sort of belief change is cautious in the sense that she only accepts the new information to the smallest possible extent.\n\nFor AGM revision and contraction, there are various independently motivated modellings which are equivalent in terms of expressive power. In this contribution, we also propose an alternative modelling for choice revision. We showed that multi-believability relations can also be used to construct the choice revisions axiomatically characterized by the sets of postulates proposed for choice revision based on descriptor revision (Theorems \\ref{Representation theorem for choice revision based on weak multiple believability relation} and \\ref{Representation theorem for choice revision based on standard multiple believability relation}). Moreover, these results are obtainable without assuming that the language is finite. This may indicate that multi-believability relations are an even more suitable modelling for choice revision.\n\nThe study in this contribution can be developed in at least three directions. First, the cautiousness constraint on choice revision, reflected by $\\ast_c$-cautiousness, could certainly be loosened. We think it is an interesting topic for future work to investigate the modelling and axiomatic characterization of more ``reckless'' variants of choice revision. Second, as it was shown in \\cite{zhang_believability_2017} that AGM revision could be reconstructed from believability relations satisfying certain conditions, it is interesting to ask which conditions a multi-believability relation should satisfy so that its generated choice revision coincides with an AGM revision when the inputs are limited to singletons. 
Finally, it is technically interesting to investigate choice revisions with an infinite input set, though they are epistemologically unrealistic.\n\n\\section*{Appendix: Proofs}\\label{appendix}\n\n\\begin{LEM}\\label{Lemma on the representation element}\nLet $\\preceq_{c}$ be some multi-believability relation which satisfies \\linebreak $\\preceq_{c}$-counter-dominance and $\\preceq_{c}$-transitivity. Then, \n\\begin{enumerate}\n\\item If $\\preceq_{c}$ satisfies $\\preceq_{c}$-union, then for every non-empty finite $A$, there exists some $\\varphi \\in A$ such that $\\varphi \\simeq_{c} A$.\n\\item For every $\\varphi \\in A$, $A \\simeq_{c} A \\owedge \\varphi$ if and only if $\\varphi \\simeq_{c} A$.\n\\end{enumerate}\n\\end{LEM}\n\n\\begin{proof}[Proof for Lemma \\ref{Lemma on the representation element}:]\n\\textit{1.} We prove this by mathematical induction on the size $n$ ($n \\geq 1$) of $A$. For $n = 1$, it follows immediately. Suppose as induction hypothesis that it holds for $n = k$ ($k \\geq 1$), and let $n = k+1$. Since $k \\geq 1$, there exists a non-empty set $B$ containing $k$ elements and a sentence $\\varphi$ such that $A = B \\cup \\{\\varphi\\}$. By $\\preceq_{c}$-counter-dominance and $\\preceq_{c}$-union, (i) $A \\simeq_{c} \\{\\varphi\\}$ or (ii) $A \\simeq_{c} B$. The case of (i) is trivial. In the case of (ii), by the induction hypothesis, there exists some $\\psi \\in B \\subseteq A$ such that $A \\simeq_{c} B \\simeq_{c} \\psi$. So, by $\\preceq_{c}$-transitivity, $A \\simeq_{c} \\psi$. To sum up (i) and (ii), there always exists some $\\varphi \\in A$ such that $\\varphi \\simeq_{c} A$.\\\\\n\\textit{2. From left to right:} Let $\\varphi \\in A $ and $A \\owedge \\varphi \\simeq_{c} A$. By $\\preceq_{c}$-counter-dominance, $A \\preceq_{c} \\varphi$ and $ \\varphi \\preceq_{c} A \\owedge \\varphi$. 
And it follows from $\\varphi \\preceq_{c} A \\owedge \\varphi$ and $A \\owedge \\varphi \\simeq_{c} A$ that $\\varphi \\preceq_{c} A$ by $\\preceq_{c}$-transitivity. Thus, $\\varphi \\simeq_{c} A$. \\textit{From right to left:} Let $\\varphi \\in A$ and $\\varphi \\simeq_{c} A$. By $\\preceq_{c}$-counter-dominance, $A \\owedge \\varphi \\preceq_{c} \\varphi$. So $A \\owedge \\varphi \\preceq_{c} A$ by $\\preceq_{c}$-transitivity. Moreover, $A \\preceq_{c} A \\owedge \\varphi$ by $\\preceq_{c}$-counter-dominance. Thus, $A \\owedge \\varphi \\simeq_{c} A$.\n\\end{proof}\n\n\\begin{proof}[Proof for Observation \\ref{Observation on the postulates satisfied by the choice revision constructed from relational model}:]\nIt is easy to see that $\\ast_c$ satisfies $\\ast_c$-closure and $\\ast_c$-relative success. We only check the remaining three postulates. We let $\\Belsome{A}$ denote the descriptor $ \\{ \\mathfrak{B} \\varphi_0 \\vee \\cdots \\vee \\mathfrak{B} \\varphi_n\\}$ when $A = \\{ \\varphi_0 , \\cdots , \\varphi_n\\} \\neq \\emptyset$.\\\\\n\\textit{$\\ast_c$-regularity:} Let $(K \\ast_c B) \\cap A \\neq \\emptyset$. It follows that $A \\neq \\emptyset$ and $\\set{\\Belsome{A}} \\neq \\emptyset$. So $K \\ast_c A = \\mini{\\Belsome{A}}$ by the definition of $\\ast_c$. Thus, $(K \\ast_c A) \\cap A \\neq \\emptyset$.\\\\\n\\textit{$\\ast_c$-confirmation:} Let $A \\cap K \\neq \\emptyset$. Then $A \\neq \\emptyset $ and $K \\in \\set{\\Belsome{A}}$. It follows from $(\\leqq 1)$ and $(\\leqq 2)$ that $K$ is the unique $\\leqq$-minimal element in $\\set{\\Belsome{A}}$. Thus, $K \\ast_c A = \\mini{\\Belsome{A}} = K$.\\\\ \n\\textit{$\\ast_c$-reciprocity:} Let $(K \\ast_c A_0 ) \\cap A_1 \\neq \\emptyset$ and $(K \\ast_c A_{1} ) \\cap A_{0} \\neq \\emptyset$. Let $i \\in \\{0,1\\}$. It follows that $A_i \\neq \\emptyset $ and $\\set{\\Belsome{A_i}} \\neq \\emptyset$ and hence $K \\ast_c A_i = \\mini{\\Belsome{A_i}}$ by the definition of $\\ast_c$. 
So it follows from $\\mini{\\Belsome{A_0}} \\cap A_1 \\neq \\emptyset$ and $\\mini{\\Belsome{A_1}} \\cap A_{0} \\neq \\emptyset$ that $\\mini{\\Belsome{A_0}} \\in \\set{\\Belsome{A_1}}$ and $\\mini{\\Belsome{A_1}} \\in \\set{\\Belsome{A_0}}$ and hence $\\mini{\\Belsome{A_0}} \\leqq \\mini{\\Belsome{A_1}} \\leqq \\mini{\\Belsome{A_0}}$ by $(\\leqq 2)$. Since the minimal element in $\\set{\\Belsome{A_0}}$ is unique by $(\\leqq 2)$, it follows that $\\mini{\\Belsome{A_0}} = \\mini{\\Belsome{A_1}}$, i.e. $ K \\ast_c A_{0} = K \\ast_c A_{1}$.\n\\end{proof}\n\n\\begin{proof}[Proof for Observation \\ref{Observation on cro-syntax irrelevance}:]\nLet $A \\equiv B$. Suppose $A \\cap (K \\ast_c A) = \\emptyset$, then $A \\cap (K \\ast_c B) = \\emptyset$ due to $\\ast_c$-regularity. Hence, $B \\cap (K \\ast_c B) = \\emptyset$ by $\\ast_c$-closure. It follows that $K \\ast_c A = K \\ast_c B = K$ by $\\ast_c$-relative success. Suppose $A \\cap (K \\ast_c A) \\neq \\emptyset$, then $B \\cap (K \\ast_c A) \\neq \\emptyset$ by $\\ast_c$-closure, so $B \\cap (K \\ast_c B) \\neq \\emptyset$ by $\\ast_c$-regularity, and hence $A \\cap (K \\ast_c B) \\neq \\emptyset$ by $\\ast_c$-closure. It follows that $K \\ast_c A = K \\ast_c B$ by $\\ast_c$-reciprocity. Thus, $\\ast_c$ satisfies syntax irrelevance in any case.\n\\end{proof}\n\n\n\\begin{proof}[Proof for Observation \\ref{Observation that reciprocity is equivalent to cautiousness}:]\n\\textit{From left to right:} Let $A \\subseteq B$ and $(K \\ast_c B) \\cap A \\neq \\emptyset$. Then, $A \\neq \\emptyset $ and hence $ (K \\ast_c A) \\cap A \\neq \\emptyset$ by $\\ast_c$-regularity. Since $A \\subseteq B$, it follows that $(K \\ast_c A) \\cap B \\neq \\emptyset$. Thus, $K \\ast_c A =K \\ast_c B$ by $\\ast_c$-reciprocity.\\\\ \n\\textit{From right to left:} Let $A \\cap (K \\ast_c B) \\neq \\emptyset$ and $B \\cap (K \\ast_c A) \\neq \\emptyset$. It follows that $(A \\cup B) \\cap (K \\ast_c B) \\neq \\emptyset$. 
By $\\ast_c$-regularity, it follows that $(A \\cup B) \\cap (K \\ast_c (A \\cup B)) \\neq \\emptyset$. So $A \\cap (K \\ast_c (A \\cup B)) \\neq \\emptyset$ or $B \\cap (K \\ast_c (A \\cup B)) \\neq \\emptyset$. Without loss of generality, let $A \\cap (K \\ast_c (A \\cup B)) \\neq \\emptyset$, then $K \\ast_c A = K \\ast_c (A \\cup B)$ by $\\ast_c$-cautiousness. It follows that $B \\cap (K \\ast_c (A \\cup B)) = B \\cap (K \\ast_c A) \\neq \\emptyset$ and hence $K \\ast_c B = K \\ast_c (A \\cup B)$ by $\\ast_c$-cautiousness. So $K \\ast_c A = K \\ast_c (A \\cup B) = K \\ast_c B$.\n\\end{proof}\n\n\\begin{proof}[Proof for Observation \\ref{Observation that dichotomy can be derived from reciprocity}:]\nSuppose $(A \\cup B) \\cap (K \\ast_c (A\\cup B)) = \\emptyset$, then $(A \\cup B) \\cap (K \\ast_c A) = (A \\cup B) \\cap (K \\ast_c B)= \\emptyset$ by $\\ast_c$-regularity. So $A \\cap (K \\ast_c A) = B \\cap (K \\ast_c B) = \\emptyset$ and hence $K \\ast_c (A \\cup B) = K \\ast_c A = K \\ast_c B = K$ by $\\ast_c$-relative success. Suppose $(A \\cup B) \\cap (K \\ast_c (A\\cup B)) \\neq \\emptyset$, then $A \\cap (K \\ast_c (A\\cup B)) \\neq \\emptyset $ or $B \\cap (K \\ast_c (A\\cup B)) \\neq \\emptyset$. Let $A \\cap (K \\ast_c (A\\cup B)) \\neq \\emptyset $, then $A \\cap (K \\ast_c A) \\neq \\emptyset$ by $\\ast_c$-regularity and hence $(A \\cup B) \\cap (K \\ast_c A) \\neq \\emptyset$. It follows that $K \\ast_c (A \\cup B) = K \\ast_c A$ by $\\ast_c$-reciprocity. Similarly, we can show that $K \\ast_c (A \\cup B) = K \\ast_c B$ holds in the case of $B \\cap (K \\ast_c (A\\cup B)) \\neq \\emptyset$. 
Thus, $\\ast_c$ satisfies $\\ast_c$-dichotomy in any case.\n\\end{proof}\n\n\\begin{proof}[Proof for Observation \\ref{Observation that reciprocity is equivalent to strong reciprocity}:]\n\\textit{From right to left:} It follows immediately.\\\\\n\\textit{From left to right:} Assume $(\\star)$ that $(K \\ast_c A_1 ) \\cap A_0 \\neq \\emptyset$, $\\cdots$, $(K \\ast_c A_n ) \\cap A_{n-1} \\neq \\emptyset$ and $(K \\ast_c A_{0} ) \\cap A_{n} \\neq \\emptyset$ for some $n \\geq 1$. We prove that $K \\ast_c A_0 = K \\ast_c A_1 = \\cdots = K \\ast_c A_n$ by mathematical induction on $n$. For $n =1$, this follows immediately from $\\ast_c$-reciprocity. Let us hypothetically suppose that it holds for $n = k$ ($k \\geq 1$), then we should show that it also holds for $n = k+1$.\\\\\nLet $A = \\bigcup_{0 \\leq i \\leq k+1} A_{i}$. It follows from $(\\star)$ that $A \\cap (K \\ast_c A_i) \\neq \\emptyset$ for every $0 \\leq i \\leq k+1$. So $A \\cap (K \\ast_c A) \\neq \\emptyset$ by $\\ast_c$-regularity. It follows that there exists some $j$ with $0 \\leq j \\leq k+1$ such that $A_j \\cap (K \\ast_c A) \\neq \\emptyset$. Moreover, according to $(\\star)$, if $j = 0$ then $A_{k+1} \\cap (K \\ast_c A_j) \\neq \\emptyset$ else $A_{j-1} \\cap (K \\ast_c A_j) \\neq \\emptyset$. It follows that $A \\cap (K \\ast_c A_j) \\neq \\emptyset$ in any case. So $K \\ast_c A_j = K \\ast_c A$ by $\\ast_c$-reciprocity. 
Hence, as $A \\cap (K \\ast_c A_i) \\neq \\emptyset$ for every $0 \\leq i \\leq k+1$, it follows from $(\\star)$ and $\\ast_c$-reciprocity that if $0$\\\\\n ${\\sf setpoint}_{\\rm oc}$};\n \\node (if3) [decision, right of=if2, aspect=2.0, node distance=5.75cm]\n\t {\\hspace{0.4em} $T_{\\rm sim}[t_{\\rm pd\\_oc}$--1$]\\leq$\\\\\n\t ${\\sf setpoint}_{\\rm oc}$};\n \\node (late) [process, below of=if2, text width=4.9cm, node distance=2.30cm]\n {$t_{\\rm max} \\gets t^*-1$\\\\\n $t^* \\gets t_{\\rm min}+(t^*-t_{\\rm min})\/2$};\n \\node (soon) [process, below of=if3, text width=4.9cm, node distance=2.30cm]\n {$t_{\\rm min} \\gets t^*+1$\\\\\n $t^* \\gets t^*+(t_{\\rm max}-t^*)\/2$}; \n \\node (dot) [dot, below left = 0.20cm and 0.30cm of soon] {};\n \n \n \\node (return) [init, below of=dot, node distance=1.7cm] {Output $t^*$};\n \\node (end) [startstop, below of=return, node distance=1.25cm] {End};\n \\node (input) [input, right of=run, node distance=4.5cm] {};\n\n \n \\draw [arrow] (start) -- (init);\n \\draw [arrow] (init) -- (if1);\n \\draw [arrow] (if1.south) -- node[near start] {yes} (run.north);\n \n \\draw [arrow] (run.south) -- +(0,-11pt) node[align=center, near end, below] {simulated temperature ($T_{\\rm sim}$)} -- +(-91pt,-11pt) -- (if2.north);\n \\draw [arrow] (input) -- node {\\ \\ \\ \\ \\ \\ \\ $T[t_{\\rm now}]$} (run.east);\n \\draw [arrow] (if2) -- node[align=left] {yes (too late)} (late);\n \\draw [arrow] (late.east) -- (dot);\n \\draw [arrow] (if2) -- node[align=left] {no} (if3);\n \\draw [arrow] (if3) -- node[align=left] {yes (too soon)} (soon);\n \\draw [arrow] (soon.west) -- (dot);\n \\draw [arrow] (if3.east) -- +(10pt,0) node[near start] {no} |- +(0,-110pt) -- +(-32pt, -110pt) -- +(-150pt, -110pt) node[near start, below] {$t^*$ is the best possible pre-cooling time} -- (return.north);\n \\draw [arrow] (if1.east) -- +(20pt,0) node[near start] {no} -- +(112pt,0) |- +(0,-267pt) -- (return.east);\n \\draw [arrow] (dot.west) -- +(-150pt,0) |- 
+(-60pt,219pt) -- (if1.west);\n \n \n \\draw [arrow] (return) -- (end);\n \\end{tikzpicture}}\n \\caption{Flowchart of our \\emph{Pre-cooling} procedure, which determines the pre-cooling time based on the predicted occupancy.}\n \\label{fig:eval_precool_flowchart}\n\\end{figure}\n\n\n\n\nHaving explained the pseudocode of {\\tt HVAC-MPC}, we now explain the workings of the \\emph{Pre-cooling} procedure used therein. Figure~\\ref{fig:eval_precool_flowchart} illustrates the flowchart of this procedure. Basically, the goal here is to determine the time $t^*$ at which the pre-cooling should start; this time falls within a certain range of feasible times, denoted by $[t_{\\rm min}, t_{\\rm max}]$. Initially, the algorithm sets $t_{\\rm min}$ to be equal to the current time (i.e., $t_{\\rm now}$), sets $t_{\\rm max}$ to be equal to the predicted time of occupancy (i.e., $t_{\\rm pd\\_oc}$), and sets $t^*$ to be right in the middle between the two (i.e., $t^* = t_{\\rm min}+(t_{\\rm max}-t_{\\rm min})\/2$). After that, $t^*$ is adjusted iteratively as follows. In each iteration, the algorithm runs an EnergyPlus simulation in which the HVAC is switched off from $t_{\\rm min}$ to $t^*-1$, and then the pre-cooling starts at $t^*$, leaving the HVAC on from $t^*$ to $t_{\\rm max}$. Thus, we obtain the simulated temperature, $T_{\\rm sim}[t]$, for every $t\\in [t_{\\rm min}, t_{\\rm max}]$. 
These simulated temperatures should ideally satisfy the following conditions:\n\n\n\n\\begin{figure*}[t]\n \\centering\n \\begin{subfigure}[b]{0.475\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figs\/eval_res_a.pdf}\n \\caption{{\\bf Without {\\tt HVAC-MPC}:} full-powered HVAC at\n all times (indicated by the strip at the bottom) despite\n intermittent occupancy.}\n \\label{fig:eval_res_a}\n \\end{subfigure}\n \\hfill\n \\begin{subfigure}[b]{0.475\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figs\/eval_res_b.pdf}\n \\caption{{\\bf Without {\\tt HVAC-MPC}:} incurred thermal\n comfort sensation. PMV index is always outside the recommended\n [-0.5,+0.5] range.}\n \\label{fig:eval_res_b}\n \\end{subfigure}\n \\begin{subfigure}[b]{0.475\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figs\/eval_res_c.pdf}\n \\caption{{\\bf With {\\tt HVAC-MPC}:} occupancy and\n temperature forecasts are used to find the best possible pre-cooling schedule\n as indicated by the sequence of HVAC state transitions from\n {\\em off} to {\\em on} in the strip at the bottom.}\n \\label{fig:eval_res_c}\n \\end{subfigure}\n \\quad\n \\begin{subfigure}[b]{0.475\\textwidth}\n \\centering\n \\includegraphics[width=\\textwidth]{figs\/eval_res_d.pdf}\n \\caption{{\\bf With {\\tt HVAC-MPC}:} The PMV index is almost\n always within the recommended [-0.5,+0.5] range during the periods in which the building is occupied.}\n \\label{fig:eval_res_d}\n \\end{subfigure}\n \\caption[]{Results of HVAC model predictive control ({\\tt HVAC-MPC}) in our testbed.}\n \\label{fig:eval_results}\n\\end{figure*}\n\n\n\n\n\n\\begin{equation}\\label{condition1}\nT_{\\rm sim}[t_{\\rm pd\\_oc}]={\\sf setpoint}_{\\rm oc}\n\\end{equation}\n\\begin{equation}\\label{condition2}\nT_{\\rm sim}[t]>{\\sf setpoint}_{\\rm oc},\\ \\forall t : t^* \\leq t < t_{\\rm pd\\_oc}\n\\end{equation}\n\n\\noindent which basically means that the temperature becomes satisfactory exactly when 
needed, and not before. The algorithm checks whether the above two conditions hold. Now:\n\\begin{itemize}\\itemsep-0.5em\n\\item if Condition~\\eqref{condition1} does not hold, it implies that the pre-cooling process started too late, and so $t^*$ should be set to an earlier time. To this end, the algorithm sets $t_{\\rm max}$ to be equal to $t^*-1$, and then updates $t^*$ as follows:\n\\begin{equation}\nt^* \\gets t_{\\rm min}+(t^*-t_{\\rm min})\/2\n\\end{equation}\n\\item if Condition~\\eqref{condition2} does not hold, it implies that the pre-cooling process started too soon, and so $t^*$ should be delayed. To this end, the algorithm sets $t_{\\rm min}$ to be equal to $t^*+1$, and then updates $t^*$ as follows:\n\\begin{equation}\nt^* \\gets t^*+(t_{\\rm max}-t^*)\/2\n\\end{equation}\n\\end{itemize}\nAfter that, the algorithm proceeds to the next iteration to check whether the new $t^*$ needs any further adjustments. This process is repeated until either $t_{\\rm min}\\geq t_{\\rm max}$, or conditions \\eqref{condition1} and \\eqref{condition2} are both met. Either way, the best possible $t^*$ is found.\n\n\n\n\n\n\\subsection{Evaluation Results} \\label{subsec:eval_res}\n\n\n\\noindent In this subsection, we evaluate the performance of our {\\tt HVAC-MPC} algorithm. To this end, we consider two standard measures of thermal comfort, namely \\emph{Predicted Mean Vote} (PMV) and \\emph{Predicted Percentage Dissatisfied} (PPD) \\cite{pmv}. Specifically, PMV quantifies the human perception of thermal sensation on a scale that runs from -3 to +3, where -3 is very cold, 0 is neutral, and +3 is very hot. On the other hand, PPD is built upon PMV to quantify the percentage of occupants that are dissatisfied given the current thermal conditions. 
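The iterative halving of $[t_{\rm min}, t_{\rm max}]$ described in the previous subsection is essentially a binary search, and can be sketched as follows. This is a minimal illustration with our own function and variable names, not the authors' implementation; in particular, `toy_simulate` (a constant-cooling-rate model) merely stands in for the EnergyPlus run.

```python
def find_precool_start(t_now, t_pd_oc, setpoint_oc, simulate):
    """Binary search for the pre-cooling start time t*.

    `simulate(t_star)` must return the simulated temperatures T_sim[t]
    for t in [t_now, t_pd_oc], with the HVAC off before t_star and on
    from t_star onward (a stand-in for the EnergyPlus simulation).
    """
    t_min, t_max = t_now, t_pd_oc
    t_star = t_min + (t_max - t_min) // 2
    while t_min < t_max:
        T_sim = simulate(t_star)
        if T_sim[t_pd_oc] > setpoint_oc:
            # Condition (1) fails: pre-cooling started too late.
            t_max = t_star - 1
            t_star = t_min + (t_star - t_min) // 2
        elif any(T_sim[t] <= setpoint_oc for t in range(t_star, t_pd_oc)):
            # Condition (2) fails: pre-cooling started too soon.
            t_min = t_star + 1
            t_star = t_star + (t_max - t_star) // 2
        else:
            break  # both conditions hold: t* is the best possible start
    return t_star


def toy_simulate(t_star, t_now=0, t_pd_oc=12, start=30.0, rate=1.0, floor=24.0):
    """Toy thermal model (our assumption, not EnergyPlus): the temperature
    drops by `rate` per step while the HVAC is on, clamped at `floor`."""
    T, temp = {}, start
    for t in range(t_now, t_pd_oc + 1):
        if t >= t_star:
            temp = max(temp - rate, floor)
        T[t] = temp
    return T
```

With this toy model, `find_precool_start(0, 12, 24.0, toy_simulate)` settles on `t* = 7`: cooling then reaches the occupied setpoint exactly at the predicted occupancy time $t_{\rm pd\_oc} = 12$, and not before.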
{\\color{black} The recommended PMV range for thermal comfort is between -0.5 and +0.5 for indoor spaces, while the acceptable PPD range is between 5\\% and 10\\% \\cite{ashrae55}.}\n\n\nWe compared our {\\tt HVAC-MPC} algorithm against a baseline alternative, where the HVAC is turned on throughout the day, regardless of the varying occupancy. The evaluation results are shown in Figure~\\ref{fig:eval_results}. Specifically:\n\\begin{itemize}\n\\item Figure~\\ref{fig:eval_res_a} depicts the observed occupancy, the indoor and outdoor temperatures, and the HVAC status \\emph{given the baseline HVAC control};\n\\item Figure~\\ref{fig:eval_res_b} depicts the thermal comfort according to PMV and PPD \\emph{given the baseline HVAC control};\n\\item Figure~\\ref{fig:eval_res_c} depicts the predicted occupancy, the indoor temperature, and the HVAC status \\emph{given {\\tt HVAC-MPC} algorithm};\n\\item Figure~\\ref{fig:eval_res_d} depicts the thermal comfort according to PMV and PPD \\emph{given {\\tt HVAC-MPC} algorithm}.\n\\end{itemize}\n\n\\begin{figure*}[htbp!]\n \\begin{subfigure}[b]{0.47\\textwidth}\n \\centering\n \\includegraphics[height=1.6in]{figs\/eval_energy_a.pdf}\n \\caption{When compared to the immediately preceding day.}\n \\label{fig:eval_energy_a}\n \\end{subfigure}\n \\qquad\n \\begin{subfigure}[b]{0.47\\textwidth}\n \\centering\n \\includegraphics[height=1.6in]{figs\/eval_energy_b.pdf}\n \\caption{When compared to the same day of the preceding week.}\n \\label{fig:eval_energy_b}\n \\end{subfigure}\n \\caption{Reduction in HVAC energy consumption achieved by {\\tt HVAC-MPC} (Algorithm~\\ref{alg:control}).}\n \\label{fig:eval_energy}\n\\end{figure*}\n\n\nLet us first comment on the results of the baseline HVAC control. As can be seen in Figure~\\ref{fig:eval_res_a}, the HVAC is turned on throughout the day, despite the significant changes in both the occupancy and the external temperature. 
In terms of thermal comfort, the baseline HVAC control performs rather poorly, as shown in Figure~\\ref{fig:eval_res_b}. Here, the recommended range for PMV is highlighted by the dotted region. As can be seen, the PMV index is outside the recommended range throughout the day. Furthermore, according to the PPD index, a considerable percentage of occupants are dissatisfied most of the day.\n\n\nMoving on to the results of our {\\tt HVAC-MPC} algorithm, Figure~\\ref{fig:eval_res_c} shows how the HVAC is switched off for a considerable number of hours. This is because, whenever the building becomes unoccupied, {\\tt HVAC-MPC} increases the temperature setpoint and predicts the future temperature and occupancy to find the best possible time for pre-cooling. Figure~\\ref{fig:eval_res_d}, on the other hand, depicts the PMV and PPD indices throughout the day. As can be seen, when the building is occupied, the PMV index is almost always within the recommended range. Likewise, during occupied periods, the PPD is close to 5\\% (which is the best possible PPD score that can be achieved).\n\n\n{\\color{black} Finally, we present the energy savings that are attained by our {\\tt HVAC-MPC} algorithm. The algorithm was activated in the testbed building for a total duration of one week and the results are provided in Figure~\\ref{fig:eval_energy}. In particular, Figure~\\ref{fig:eval_energy_a} shows the results for three different days, where each day is in a different month. For each day shown, the algorithm performance is compared with the immediately preceding day, on which {\\tt HVAC-MPC} was not activated. Figure~\\ref{fig:eval_energy_b} shows the results of another experiment where we activated {\\tt HVAC-MPC} for four consecutive days in the same week. For each day, the results are compared with the corresponding day of the previous week in which {\\tt HVAC-MPC} was not activated. 
The purpose of this experiment was to find out how much energy is saved by the MPC algorithm relative to the same day of the previous week. The energy savings attained by {\\tt HVAC-MPC} range from 23\\% to 39\\%. Table~\\ref{tab:eval_energy} lists the average daily savings, the standard deviation, and the total energy savings over the experiment period.}\n\n\\begin{table}[hbtp!]\n \\small\n \\caption{Summary of energy savings attained by {\\tt HVAC-MPC}.}\n \\label{tab:eval_energy}\n \\centering\n \\begin{tabular}{ p{6.5cm} | c}\n \n\t\\hline \\hline\n Average daily savings over experiment period & 456 kWh \\\\ \\hline\n Standard deviation of daily savings & 118 kWh \\\\ \\hline\n Total savings over the seven-day experiment & 3197 kWh \\\\\n\t\\hline \\hline\n \\end{tabular}\n\\end{table}\n\n\n\\section{Occupancy Recognition} \\label{sec:recognition}\n\n\\noindent Occupancy information is crucial for many applications, such as building management and human behavior studies. We develop an occupancy recognition system based on real-time video processing of a video stream to infer the occupancy patterns dynamically. This raises a number of challenges. First, structures differ from one building to another, leading to the possibility of occupants being obscured by various obstacles, such as pillars. Hence, we need to track the movements of occupants to determine the occupancy more accurately. Second, our algorithm is executed on an embedded system (e.g., Raspberry Pi), which has limited processing power and memory space compared to a typical desktop computer; this is particularly challenging since the typical video-processing algorithms are computationally demanding.\n\nThe basic idea of our occupancy-recognition algorithm is to count the number of people crossing a virtual reference line in the video, captured by a fisheye camera. 
Objects are identified as moving blobs (i.e., a set of connected points whose position is changing during the video stream); every such blob is interpreted as a person. Whenever a moving blob is detected, the algorithm keeps track of its movement to determine whether it passes the virtual reference line, and then updates the number of occupants accordingly. In particular, we position the line near the entrance of the space. Whenever the moving blob passes inward, the total number of occupants is increased by 1, and whenever the moving blob passes outward, the total number of occupants is decreased by 1.\n\nOur algorithm consists of the following five steps: (1) background isolation, (2) silhouette detection, (3) object tracking, (4) inward\/outward logging, and (5) inconsistency resolution. The flowchart of our algorithm is presented in Figure~\\ref{fig:flowchart}. In our implementation, two open-source projects were used: \\emph{OpenCV} \\cite{opencv_library} and \\emph{openFramework} \\cite{ox_library}. 
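The line-crossing counting rule just described can be sketched as follows. This is a minimal illustration with our own names (the real system operates on tracked blob positions extracted from the video stream), assuming a horizontal reference line where "inward" means crossing in the direction of increasing $y$:

```python
def update_occupancy(count, prev_y, curr_y, line_y, persons=1):
    """Update the occupant count when a tracked blob moves from prev_y to
    curr_y. `persons` is the number of occupants attributed to the blob
    (e.g., inferred from bounding-box width when silhouettes overlap)."""
    if prev_y < line_y <= curr_y:   # crossed the reference line inward
        return count + persons
    if curr_y < line_y <= prev_y:   # crossed the reference line outward
        return count - persons
    return count                    # no crossing: count unchanged
```

In the full system, the inconsistency-resolution step would additionally clamp a negative count and reset it after prolonged inactivity.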
Next, we will explain each step in detail.\n\n\n\\begin{figure}[ht!]\n \\centering\n \\scalebox{0.75}{\n \\begin{tikzpicture}[node distance = 1.5cm, auto]\n\n \n \\node (init) [init] {Initialize Algorithm};\n \\node (isol) [process, right of=init, node distance=4.7cm, text width=4.0cm] {Isolate background by comparing current and previous frames};\n \\node (extr) [process, below of=isol, node distance=1.65cm, text width=3.6cm] {Extract silhouettes in current frame};\n \\node (add) [process, below of=extr, node distance=1.45cm, text width=3.7cm] {Add each silhouette to a tracking list};\n \\node (if) [decision, below of=add, text width=5.2cm, aspect=4, node distance=1.7cm] {If a silhouette passes a reference line};\n \\node (upda) [process, left of=if, text width=2cm, node distance=4.7cm] {Update occupancy};\n \\node (rese) [process, below of=if, text width=4.0cm, node distance=1.75cm] {Reset occupancy if \\\\inconsistency detected};\n\n \n \\draw [arrow] (init) -- (isol);\n \\draw [arrow] (isol) -- (extr);\n \\draw [arrow] (extr) -- (add);\n \\draw [arrow] (add) -- (if);\n \\draw [arrow] (if) -- node[auto] {yes} (upda);\n \\draw [arrow] (if) -- node[auto] {no} (rese);\n \\draw [arrow] (rese.east) -- +(35pt,0) |- (isol.east);\n \\draw [arrow] (upda) |- (rese);\n \\end{tikzpicture}}\n \\caption{Flowchart of our occupancy-recognition algorithm.}\n \\label{fig:flowchart}\n\\end{figure}\n\n\n\n\\subsection{Background Isolation} \\label{sub:background_updating}\n\n\\noindent The purpose of background isolation is to identify a background image in the current video frame. Note that the appearance of the background may vary over the course of the video, depending on the time of day (e.g., turning on the lights at night may significantly alter the appearance of the background compared to natural light). The shadows of occupants must also be taken into consideration during the background isolation. 
To overcome these challenges, we employ the Gaussian mixture-based background\/foreground segmentation algorithm proposed in \\cite{zivkovic2006efficient,zivkovic2004improved}, and implemented on OpenCV.\n\n\n\n\\subsection{Silhouette Detection} \\label{sub:silhouette_searching}\n\n\\noindent In our setting, the term ``silhouette'' is used to refer to the border of a set of continuous points. Silhouette detection is based on the algorithm proposed in \\cite{Suzuki85Topological}. The silhouettes of moving objects are extracted by comparing the current frame with the previous one; see Figure~\\ref{fig:videoprocessing_result} for some examples. Here, only the silhouettes larger than a certain threshold area, $A$, are considered. The threshold is adjusted depending on the viewpoint and orientation of the camera. Furthermore, we use different values of $A$ for different parts of the space, to reflect the fact that individuals appear smaller as they move farther away from the camera. Importantly, the silhouettes of two or more people may overlap, and may, therefore, be interpreted as only one individual. To overcome this challenge, we use a machine-learning technique which will be explained later on in Section~\\ref{sub:machine_learning}.\n\n\\begin{figure}[ht!]\n \\centering\n \\includegraphics[width=.99\\linewidth]{figs\/extract}\n \\caption{Examples of silhouette detection, taken from different buildings in which our system is deployed.}\n \\label{fig:videoprocessing_result}\n\\end{figure}\n\n\n\n\\subsection{Object Tracking} \\label{sub:object_tracking}\n\n\\noindent After obtaining the silhouettes of occupants, their bounding boxes are logged for tracking. To this end, the list ${T_a}$ of silhouettes from the last frame is maintained. The algorithm then obtains the list ${T_b}$ of silhouettes from the current frame. 
For each bounding box $B_b$ in ${T_b}$, the algorithm determines whether there exists a box $B_a$ in $T_a$ that is within a certain distance, $d$, from $B_b$. If so, then $B_b$ is assumed to be the new location of $B_a$. Consequently, $B_a$ is updated in $T_a$, and its location is set to be that of $B_b$, in preparation for the next iteration of the algorithm. Note that silhouettes are detected only when they are moving. To handle cases where people stop for a few seconds and then continue to walk, objects are removed from $T_a$ only after they have been missed for a certain number of seconds.\n\n\n\n\\subsection{Inward\/Outward Logging} \\label{sub:in_out_checking}\n\n\\noindent Given the collection of moving objects and their locations, a virtual reference line is used to count occupants. Specifically, whenever the locations are updated, the algorithm checks every object to determine whether that object has crossed from one side of the line to the other. If so, then the number of occupants is updated according to the direction of the movement. For inward movement, the number of occupants increases by 1; for outward movement, it decreases by 1. Furthermore, since the silhouettes of any two moving objects may overlap, the width of the bounding box can be used to infer the number of occupants contained in each object.\n\n\n\n\\subsection{Inconsistency Resolution} \\label{sub:error_detection}\n\n\\noindent Even with all the techniques described thus far, the performance of the algorithm may not be satisfactory, due to one major challenge: \\emph{the silhouettes of different occupants may overlap}. In this case, some of the overlapping occupants may go undetected by the algorithm. This problem becomes even more evident because movement patterns differ between entering and leaving the building: occupants tend to arrive one by one, but leave all at once. 
For instance, in our application domain, the pace at which people leave is significantly faster than the pace at which they enter. Consequently, when all occupants leave at once, the number of overlapping silhouettes increases, leading to a larger number of people going undetected by the algorithm. As a result, the algorithm on average misses more occupants leaving than entering the building, leading to the erroneous conclusion that there are still occupants in the building when in fact there are none.\n\nTo resolve such inconsistencies, two simple techniques are used. First, if the number of occupants becomes negative, the occupant counter is frozen until another object crosses the reference line inward. Second, if no moving object is detected for a certain period of time, the occupant counter is reset to zero. While these simple techniques reduce the error, they are clearly insufficient. In Section~\\ref{sub:machine_learning}, we propose a dedicated technique to address this issue.\n\n\n\n\\subsection{Improvement by Machine Learning} \\label{sub:machine_learning}\n\n\\noindent As mentioned earlier, one of the major challenges that we have encountered while deploying our occupancy-detection algorithm is the problem of overlapping silhouettes. One solution is based on the width of the bounding box; the wider the box, the more occupants it contains. However, we observe that such a solution is insufficient, especially when an occupant happens to be directly in front of another. To resolve this issue, we employ machine-learning and image-classification techniques, coupled with randomized principal component analysis (PCA) \\cite{halko2011finding}, as implemented in \\cite{scikit-learn}.\n\nIn more detail, we collected 13,000 blobs from video footage spanning a period of one week and segmented each such blob as a separate image to create our training dataset.
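The blob-classification pipeline of this subsection (randomized PCA on flattened grayscale blobs, with the width, height, and black-pixel ratio appended, classified by Gaussian Naive Bayes) might be sketched with scikit-learn as follows. The dimensions match those reported below, but the function names and data are hypothetical.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.naive_bayes import GaussianNB

def fit_blob_classifier(images, widths, heights, black_ratios, labels):
    """images: (n, 450) flattened 30x15 grayscale blobs.
    Projects the pixels onto 25 randomized-PCA components, appends the
    three geometric features, and fits Gaussian Naive Bayes."""
    pca = PCA(n_components=25, svd_solver="randomized").fit(images)
    feats = np.column_stack([pca.transform(images),
                             widths, heights, black_ratios])
    return pca, GaussianNB().fit(feats, labels)

def predict_occupants(pca, clf, images, widths, heights, black_ratios):
    """Predict the number of occupants for each blob."""
    feats = np.column_stack([pca.transform(images),
                             widths, heights, black_ratios])
    return clf.predict(feats)
```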
The number of occupants in each image was manually identified; see Figure~\\ref{fig:trainingdata} for some examples. The images were then transformed to grayscale in order to reduce the computational load, and rescaled to a common size of 30$\\times$15 pixels, so that each image consists of 450 pixels. Next, randomized PCA is used to project the pixels of each image onto a smaller array that preserves the characteristics of the images, in our case a set of 25 features per image. The projection further reduces the computational complexity. Moreover, we add the original width, the original height, and the ratio of black pixels to the feature set. After obtaining the features, Gaussian Naive Bayes is used to classify the blobs with respect to the number of occupants.\n\n\\begin{figure}[ht!]\n\\centering\n\\includegraphics[width=0.75\\linewidth]{figs\/silhouettes}\n\\caption{Sample images from the training dataset used for machine learning and image classification.}\n\\label{fig:trainingdata}\n\\end{figure}\n\n\n\n\\subsection{Privacy Enhancement} \\label{sub:privacy_concerns}\n\n\\noindent In our system, the videos and images are discarded immediately after processing. Still, privacy invasion is always a concern for video-based detection methods. With this in mind, we introduced a number of techniques to preserve the privacy of the occupants. First of all, to prevent any potential hacker from accessing the video feed, we ensured that the camera stream can only be accessed by a single process at a time. Consequently, when our system is running, all other programs are blocked from accessing the camera.\nIn addition, a hardware-based solution has been tested. Specifically, we created a frosted lens by overlaying a semi-transparent layer in front of the camera, so as to blur the camera feed.
In so doing, the faces of occupants are no longer recognizable; see Figure~\\ref{fig:withoutlayer} for an illustration.\n\n\\begin{figure}[ht!]\n\\centering\n\\includegraphics[width=0.8\\linewidth]{figs\/frost}\n\\caption{Enhanced privacy by using a frosted lens on the camera.}\n\\label{fig:withoutlayer}\n\\end{figure}\n\n\n\n\\subsection{Evaluation Results} \\label{sec:occupancy_detection_results}\n\n\\noindent In this section, we evaluate the accuracy of our occupancy recognition. To this end, we collected a set of videos from our testbed, each with a frame size of 800$\\times$600 pixels and a sample rate of 30 frames per second. The true occupancy was obtained by manually counting the number of occupants in each frame. Due to this laborious process, we were only able to obtain the true occupancy for a small number of videos, which are used in the evaluations.\n\nWe start by defining the accuracy rate as follows:\n\\begin{equation}\n {\\sf AccuracyRate} \\triangleq 1-\\frac{{\\sf Missing}_{\\rm in}+{\\sf Missing}_{\\rm\n out}}{{\\sf Total}_{\\rm in}+{\\sf Total}_{\\rm out}}\n\\end{equation}\nwhere ${\\sf Total}_{\\rm in}$ is the number of individuals who entered the building and ${\\sf Missing}_{\\rm in}$ is the number of such individuals who were undetected by the algorithm. Similarly, ${\\sf Total}_{\\rm out}$ is the number of individuals who exited the building and ${\\sf Missing}_{\\rm out}$ is the number of those who exited without being detected by our algorithm.\n\nTable~\\ref{tab:occupancyresults_1} presents the system accuracy given different numbers of occupants over different durations. As can be seen, the algorithm is able to recognize the number of occupants with high accuracy, even when over 400 individuals enter the building in just 20 minutes.
Naturally, as the rate at which occupants enter the building increases, the accuracy of the system decreases, because there are more cases in which the occupants overlap (in many such cases, it was hard even for a human annotator to determine the exact number of occupants, and the video had to be replayed several times before the true occupancy could be established with certainty).\n\n\\begin{table}[ht!]\n\\centering\n\\caption{Results of occupancy recognition given different durations and different numbers of occupants.}\n\\label{tab:occupancyresults_1}\n\\scalebox{0.85}{\n\\begin{tabular}{@{}c|c|c@{}}\n\\hline\\hline\nVideo Length & Total No. & ${\\sf AccuracyRate}$ \\\\\n & of Occupants & \\\\ \\hline\n 40 mins & 127 & 90\\% \\\\\n 40 mins & 154 & 90\\% \\\\\n 20 mins & 407 & 84\\% \\\\ \\hline\\hline\n\\end{tabular}}\n\\end{table}\n\nNext, we evaluate our system over one-day periods during the weekend, a weekday, and Friday. Note that in our testbed Friday is the busiest day of the week due to the Friday sermon, which takes place only once a week and typically attracts a large audience. The results are presented in Table~\\ref{tab:occupancyresults_2}. As expected, the system accuracy correlates with the occupancy rates. In particular, the system is least accurate on Friday (the busiest day in our experiment), and most accurate during the weekend (the least busy period in our experiment).
Importantly, the accuracy seems sufficiently high throughout the week to provide a reasonable approximation of the actual occupancy state in the building, which is arguably sufficient for our purpose of HVAC control.\n\n\\begin{table}[ht!]\n\\centering\n\\caption{Results of occupancy recognition comparing weekend, weekday, and Friday.}\n\\label{tab:occupancyresults_2}\n\\scalebox{0.85}{\n\\begin{tabular}{@{}c|c|c@{}}\n\\hline\\hline\nType & Video Length & ${\\sf AccuracyRate}$ \\\\ \\hline\nWeekend & 1 day & 88\\% \\\\\nWeekday & 1 day & 86\\% \\\\\nFriday & 1 day & 81\\% \\\\ \\hline\\hline\n\\end{tabular}}\n\\end{table}\n\nWe now turn our attention to quantifying the loss in accuracy that occurs when using our privacy-preserving frosted lens from Section~\\ref{sub:privacy_concerns}. The results of this evaluation are presented in Table~\\ref{tab:occupancyresults_3}. As can be seen, the frosted lens reduces the accuracy only modestly, despite the fact that the video footage is considerably blurred, as we have shown earlier in Figure~\\ref{fig:withoutlayer}.\n\n\\begin{table}[htbp!]\n\\centering\n\\caption{Results of occupancy recognition comparing normal lens and frosted lens.}\n\\label{tab:occupancyresults_3}\n\\scalebox{0.85}{\n\\begin{tabular}{@{}c|c|c|c@{}}\n\\hline\\hline\nType & Video Length & Total No. & ${\\sf AccuracyRate}$ \\\\\n & & of Occupants & \\\\ \\hline\nNormal Lens & 160 mins & 322 & 87.8\\% \\\\\nFrosted Lens & 160 mins & 339 & 80.2\\% \\\\ \\hline\\hline\n\\end{tabular}}\n\\end{table}\n\nNext, we evaluate the effectiveness of our machine-learning technique from Section~\\ref{sub:machine_learning}. To this end, we use three performance measures that are widely used in the machine-learning literature. The first is ${\\sf Precision}$, which is defined as the fraction of occupants that were correctly classified, out of all those who were classified as either moving inward or moving outward.
The second measure is ${\\sf Recall}$, defined as the fraction of occupants that were correctly classified, out of all those who actually entered or exited the building. Since a naive classifier can achieve a high ${\\sf Precision}$ at the expense of a low ${\\sf Recall}$, or vice versa, we also use the ${\\sf F1\\mbox{-}Score}$ \\cite{f1score1,f1score2}, which is the harmonic mean of ${\\sf Precision}$ and ${\\sf Recall}$. Based on these three measures, Table~\\ref{tab:occupancyresults_4} shows the results before and after applying our machine-learning technique. The evaluation is carried out using 10-fold cross-validation on our dataset of 13,000 blobs. As can be seen, according to the most important measure, namely the ${\\sf F1\\mbox{-}Score}$, our machine-learning technique from Section~\\ref{sub:machine_learning} significantly improves the performance of our system.\n\nFinally, to better understand how occupancy changes during the daytime in our application, we plot in Figure~\\ref{fig:occupancy_plot} the actual occupancy, as well as the occupancy detected by our algorithm, for a typical Friday and a typical Saturday. As can be seen, the general occupancy trend is clearly captured by the algorithm.\n\n\n\\begin{table}[ht!]\n\\centering\n\\caption{Results of occupancy recognition comparing with and without machine-learning technique.}\n\\label{tab:occupancyresults_4}\n\\scalebox{0.85}{\n\\begin{tabular}{@{}c|c|c|c@{}}\n\\hline\\hline\nType & ${\\sf Precision}$ & ${\\sf Recall}$ & ${\\sf F1\\mbox{-}Score}$ \\\\ \\hline\nWithout Machine Learning & 0.97 & 0.27 & 0.42 \\\\\nWith Machine Learning & 0.69 & 0.78 & 0.73 \\\\ \\hline\\hline\n\\end{tabular}}\n\\end{table}\n\n\\begin{figure*}[ht!]\n \\centering\n \\includegraphics[width=1\\linewidth]{figs\/occup_rec.pdf}\n \\caption{How occupancy changes during a typical day. Each peak represents one of the five daily prayers in Islam.
We distinguish between Friday and other days of the week due to the Friday sermon, which precedes the midday prayer and usually attracts many more worshippers than any other prayer throughout the week (note that the scale of the y-axis is greater in the left plot than in the right plot).}\\label{fig:occupancy_plot}\n\\end{figure*}\n\n\\section{Introduction} \\label{sec:intro}\n\n\\noindent Heating, ventilation, and air-conditioning (HVAC) units, which are a primary target of building automation, account for almost 50\\% of the energy consumed in both residential and commercial buildings \\cite{HVACConsumption}. In general, building automation systems aim to intelligently control building facilities in response to dynamic environmental factors, while maintaining satisfactory performance in terms of energy consumption and comfort. The primary functions of a building automation system include: (1) sensing the environmental factors by measurements, and (2) optimizing control strategies based on the current and predicted states of the building and its occupancy. These tasks require an integrated process of sensing, computation, and control.\n\nTraditional building automation systems rely on fairly inaccurate occupancy sensors, which hinder the responsiveness of automation systems. For example, passive infrared and ultrasound occupancy sensors offer poor accuracy, because they are unable to determine the occupancy state adequately when occupants remain stationary for a prolonged period of time. They also have a limited range, which hinders their performance, especially in large areas.
More accurate sensing technology, such as cameras that use visible or infrared light, can significantly improve the accuracy of occupancy recognition.\n\nOn the other hand, model predictive control, whereby the future thermal response and external environmental factors are anticipated in order to make control decisions accordingly, has been considered in a number of studies \\cite{mpc1,mpc2,mpc3,mpc5,mpc6,mpc7}, and has been shown to be more effective than classical PID and hysteresis controllers, which do not consider anticipated events. However, these studies are often based on time-invariant, first-principle linear models (also known as lumped-element resistance-capacitance (RC) models \\cite{lti_rc}), considering only simple building geometries and a single zone over a near-future time horizon. Although these linear models are easier to calibrate (e.g., using frequency-domain decomposition or subspace system-identification methods \\cite{lti_rc, mpc_rc_calib1, mpc_rc_calib2}), the error accumulates considerably when a longer time horizon is considered in model predictive control. While non-linear models are rather complicated and impractical, other alternatives based on physical models of building thermal response can provide a feasible solution.\n\nRecently, there have been remarkable advances in embedded system technologies, which provide low-cost platforms with powerful processors and sizeable memory storage in a small footprint. In particular, the emergence of \\emph{system-on-a-chip} technology \\cite{soc}, which integrates all major components of a computer into a single chip, can provide versatile computing platforms with low power consumption and mobile network connectivity in a cost-effect\\-ive manner for mass production. As a result, smartphones have been able to evolve rapidly from single-core to multi-core processors at a low incremental production cost.
Notably, the \\emph{Raspberry Pi} project \\cite{rpi}, which originally aimed to provide affordable solutions for the teaching of computer science, has rapidly been adopted for a wide range of advanced scientific projects. Therefore, there are plenty of opportunities to harness recent embedded system technologies in intelligent building automation systems. In particular, sophisticated computational tasks, such as real-time video processing and accurate building thermal-response simulation, can be conducted efficiently on these embedded systems (e.g., \\cite{embedded_ref2}).\n\n\\medskip\n\nWith this in mind, we designed and implemented an occupancy-predictive HVAC control system on a low-cost yet powerful embedded system (using a Raspberry Pi 3) for building automation. The rest of the paper is organized as follows. In Section~\\ref{sec:related}, we present the background information and literature review. In the remaining sections, we highlight three key features of our system.\n\n\n\\paragraph{{\\em(Section~\\ref{sec:recognition})} Real-time Video-based Occupancy Recognition}\nWe apply advanced video-processing techniques to analyze the features of occupants captured by video cameras, and automatically classify and infer the states of occupancy. Moreover, we consider privacy enhancement using a frosted lens. Our system achieves 80-90\\% accuracy for occupancy recognition by real-time video processing. Furthermore, we improve the performance of our occupancy recognition in considerably crowded settings by using machine learning.\n\n\\paragraph{{\\em(Section~\\ref{sec:prediction})} Dynamic Occupancy Prediction}\nWe employ various linear and non-linear regression models to capture and predict occupancy trends according to day-of-week, seasonal, and other patterns. We present general as well as domain-specific approaches for occupancy prediction.
Our models are able to identify future occupancy trends under a variety of dynamic usage patterns.\n\n\\paragraph{{\\em(Sections~\\ref{sec:simulation}-\\ref{sec:eval})} Simulation-guided Model Predictive Control}\nWe employ the \\emph{EnergyPlus} simulator \\cite{eplus} for real-time HVAC control, and ported it to the Raspberry Pi embedded system platform for simulation-guided model predictive control. A co-simulation framework is utilized to provide accurate building thermal-response simulation under proper calibration. Notably, we also release our Raspberry Pi version of EnergyPlus publicly \\cite{eplus_rpi} to enable other researchers to take advantage of our work in future building automation projects.\n\n\n\\medskip\n\nOur automatic HVAC control system is intended for public indoor spaces, such as corridors, libraries, or communal areas. Unlike private spaces such as homes, these public indoor spaces are not controlled by a particular occupant and can be affected by a diverse set of occupancy patterns. Such patterns tend to vary more dynamically in public spaces than in private ones, posing challenges for effective occupancy sensing and prediction systems.\n\nIn particular, our system is deployed and evaluated for providing automatic HVAC control in the large public indoor space of a mosque (see Figure~\\ref{fig:mosque}), which is the place of worship for followers of Islam. Typically, mosques have large public spaces, and are open 24 hours a day, 7 days a week. There are nearly 5,000 mosques in the UAE \\cite{NumberOfMusquesInUAE}, and over 55,000 mosques in Saudi Arabia \\cite{NumberOfMusquesInSaudi}. Due to the hot climate in this region, HVAC is required on a regular basis.
The results obtained from our testbed implementation in Section~\\ref{sec:testbed} demonstrate the significant energy savings that can be achieved by using automatic HVAC control systems in public indoor spaces.\n\n\\begin{figure}[htp!]\n \\centering\n \\includegraphics[width=1.0\\linewidth]{figs\/mosque}\n \\caption{A large public indoor space of a mosque is used as a testbed for our automatic HVAC control system. A fisheye video camera, temperature and humidity sensors, as well as a real-time controller for HVAC have been deployed in our testbed.}\n \\label{fig:mosque}\n\\end{figure}\n\n\\section{Model}\n\nLet ${O}(t)$ denote the number of occupants at time $t$. Consider a single-zone model of a building, where the indoor room temperature is denoted by $T_{\\rm i}(t)$ and the outdoor temperature by $T_{\\rm o}(t)$.\nLet $Q_{\\rm AC}(t)$ be the heat transfer from the HVAC system, and $Q(O(t))$ the heat emitted by the occupants.\nThe temperature change of $T_{\\rm i}(t)$ is modeled by\n\\begin{equation}\nC_{\\rm r} \\frac{{\\rm d} T_{\\rm i}(t)}{{\\rm d} t} =\n\\frac{T_{\\rm o}(t) - T_{\\rm i}(t)}{R_{\\rm r}} + Q_{\\rm AC}(t) + Q(O(t))\n\\end{equation}\nwhere $C_{\\rm r}$ is the heat capacity of the room, and $R_{\\rm r}$ is the thermal resistance of the walls of the room.\n\nUsing the Euler method, we discretize time into slots $(t_1, t_2, ..., t_T)$:\n\\begin{equation}\nC_{\\rm r} \\frac{T_{\\rm i}(t_i) - T_{\\rm i}(t_{i-1})}{t_i - t_{i-1}} =\n\\frac{T_{\\rm o}(t_i) - T_{\\rm i}(t_i)}{R_{\\rm r}} + Q_{\\rm AC}(t_i) + Q(O(t_i))\n\\end{equation}\n\nLet ${\\tt E}(Q_{\\rm AC}(t))$ be the energy consumption of the HVAC system. Let $H(t)$ be the ambient humidity, and let $\\underline{T}(H(t))$ and $\\overline{T}(H(t))$ be the lower and upper bounds of the comfortable room-temperature range with respect to the humidity.\n\nSuppose that the occupancy, outdoor temperature and ambient humidity are predicted over an interval $T$, denoted by $\\big(\\hat{O}(t), \\hat{T}_{\\rm o}(t), \\hat{H}(t)\\big)_{t=1}^T$.
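A single forward-Euler step of this discretized single-zone model can be sketched as follows; the parameter values used below are illustrative only.

```python
def euler_step(T_i, T_o, q_ac, q_occ, C_r, R_r, dt):
    """One forward-Euler step of the single-zone thermal model:
    C_r * dT_i/dt = (T_o - T_i)/R_r + Q_AC + Q(O).
    Returns the indoor temperature after a time step dt."""
    return T_i + ((T_o - T_i) / R_r + q_ac + q_occ) * dt / C_r
```

For instance, with no HVAC input and no occupants, repeated steps relax the room temperature toward the outdoor temperature, while a negative `q_ac` (cooling) drives it down.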
We then define an optimization problem of temperature control given the predicted data $\\big(\\hat{O}(t), \\hat{T}_{\\rm o}(t), \\hat{H}(t)\\big)_{t=1}^T$ as follows.\n\n\\begin{align}\n& \\min_{\\big(Q_{\\rm AC}(t)\\big)_{t=1}^T} \\quad \\sum_{t = 1}^T {\\tt E}(Q_{\\rm AC}(t)) \\\\\n& \\mbox{subject to} \\notag \\\\\n& C_{\\rm r} \\frac{{\\rm d} T_{\\rm i}(t)}{{\\rm d} t} =\n\\frac{\\hat{T}_{\\rm o}(t) - T_{\\rm i}(t)}{R_{\\rm r}} + Q_{\\rm AC}(t) + Q(\\hat{O}(t)) \\\\\n& \\underline{T}(\\hat{H}(t)) \\le T_{\\rm i}(t) \\le \\overline{T}(\\hat{H}(t)) \\mbox{\\ if\\ } \\hat{O}(t) > 0\n\\end{align}\n\n\nWe define the following control strategies:\n\\begin{enumerate}\n\n\\item ({\\tt MPC}) The model predictive control strategy optimizes $Q_{\\rm AC}(t)$ over an interval $T$, based on the predicted data $\\big(\\hat{O}(\\tau), \\hat{T}_{\\rm o}(\\tau), \\hat{H}(\\tau)\\big)_{\\tau=t}^{T+t}$.\n\n\\item ({\\tt AHC}) The ad hoc control strategy optimizes each $Q_{\\rm AC}(t)$ at time $t$ based on previous decisions $\\big(Q_{\\rm AC}(\\tau)\\big)_{\\tau